[libcamera-devel,RFC] Documentation: IPU3 IPA Design guide
diff mbox series

Message ID 20210912213017.702492-1-kieran.bingham@ideasonboard.com
State Superseded
Headers show
Series
  • [libcamera-devel,RFC] Documentation: IPU3 IPA Design guide
Related show

Commit Message

Kieran Bingham Sept. 12, 2021, 9:30 p.m. UTC
The IPU3 IPA implements the basic 3a using the ImgU ISP.

Provide an overview document to describe it's operations, and provide a
block diagram to help visualise how the components are put together to
assist any new developers exploring the code.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

---
This is really just an RFC: Particularly:

 - Should this content actually live in Documentation?
   or is it more suited to a page under src/ipa/ipu3/...

 - This is not complete to document all functionality
   but gives an overview of the current implementation

 - Is there anything anyone would find helpful for me to
   extend/write about on this?

 - It could likely get merged in with some of the work
   that Jean-Michel is doing, and might duplicate some of
   the parts he is documenting, but this was an aim to
   write a single page overview as a getting started with
   the IPU3 IPA ....


 .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
 1 file changed, 144 insertions(+)
 create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst

Comments

Umang Jain Sept. 13, 2021, 2:29 p.m. UTC | #1
Hi Kieran,

On 9/13/21 3:00 AM, Kieran Bingham wrote:
> The IPU3 IPA implements the basic 3a using the ImgU ISP.
3a or 3A?
>
> Provide an overview document to describe it's operations, and provide a
> block diagram to help visualise how the components are put together to
> assist any new developers exploring the code.
>
> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>
> ---
> This is really just an RFC: Particularly:
>
>   - Should this content actually live in Documentation?
>     or is it more suited to a page under src/ipa/ipu3/...


I would probably keep it under Documentation/ but there is already a IPA 
writing guide. So maybe we need a new dir, to document overviews of 
existing IPAs?

>
>   - This is not complete to document all functionality
>     but gives an overview of the current implementation
>
>   - Is there anything anyone would find helpful for me to
>     extend/write about on this?
>
>   - It could likely get merged in with some of the work
>     that Jean-Michel is doing, and might duplicate some of
>     the parts he is documenting, but this was an aim to
>     write a single page overview as a getting started with
>     the IPU3 IPA ....
>
>
>   .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>   1 file changed, 144 insertions(+)
>   create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
>
> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
> new file mode 100644
> index 000000000000..a1d0f13fbb00
> --- /dev/null
> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
> @@ -0,0 +1,144 @@
> +IPU3 IPA Architecture Design and Overview
> +=========================================
> +
> +The IPU3 IPA is built as a modular and extensible framework with an
> +upper layer to manage the interactions with the pipeline handler, and
> +the image processing algorithms split to compartmentalise the processing
> +required for each accellerator cluster provided by the ImgU ISP.
> +
> +The core IPU3 class is responsible for initialisation and construction
s/IPU3/IPAIPU3 maybe?
> +of the algorithm components, processing controls set by the requests
> +from applications, and managing events from the Pipeline handler.
> +
> +::
> +
> +        ┌───────────────────────────────────────────┐
> +        │      IPU3 Pipeline Handler                │
> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
> +        │   │        │    │        │    │        │  │
> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
> +        │   │        │    │        │    │        │  │
> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
> +        │                                 │P   │    │    S: Statistics Buffer
> +        │                                 │    │S   │
> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
> +        │ IPU3 IPA                                  │
> +        │                 ┌───────────────────────┐ │
> +        │ ┌───────────┐   │ Algorithms            │ │
> +        │ │IPAContext │   │          ┌─────────┐  │ │
> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
> +        │ │ │       │ │   │    │         │        │ │
> +        │ │ └───────┘ │   │    └─────────┘        │ │
> +        │ └───────────┘   └───────────────────────┘ │
> +        └───────────────────────────────────────────┘
> +         SC: IPASessionConfiguration
> +         FC: IPAFrameContext(s)
> +
> +The IPA instance is constructed and initialised at the point a Camera is
> +created by the IPU3 pipeline handler. The initialisation call provides
> +details about which Camera Sensor is being used, and the controls that
> +it has available, along with their defaults and ranges.


Brief line about how sensor controls are used in IPA will be useful? 
This will explain "why" sensor controls are passed to IPA and what IPA 
does with it (for e.g gather necessary info to run algorithms).

> +
> +Buffers
> +~~~~~~~
> +
> +The IPA will have Parameter and Statistics buffers shared with it from
> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
> +before the ``start()`` operation occurs.
> +
> +The IPA will map the buffers into CPU accessible memory, associated with
> +a buffer ID, and further events for sending or receiving parameter and
> +statistics buffers will reference the ID to avoid expensive memory
> +mapping operations, or the passing of file handles during streaming.
> +
> +After the ``stop()`` operation occurs, these buffers will be unmapped,
> +and no further access to the buffers is permitted.
> +
> +Context
> +~~~~~~~
> +
> +Algorithm calls will always have the ``IPAContext`` available to them.
> +This context comprises of two parts:
> +
> +-  IPA Session Configuration
> +-  IPA Frame Context
> +
> +The session configuration structure ``IPASessionConfiguration``
> +represents constant parameters determined from either before streaming
> +commenced during ``configure()`` or updated parameters when processing
> +controls.
> +
> +The IPA Frame Context provides the storage for algorithms for a single
> +frame operation.
> +
> +The ``IPAFrameContext`` structure may be extended to an array, list, or
> +queue to store historical state for each frame, allowing algorithms to
> +obtain and reference results of calculations which are deeply pipelined.
> +This may only be done if an algorithm needs to know the context that was
> +applied at the frame the statistics were produced for, rather than the
> +previous or current frame.
> +
> +Presently there is a single ``IPAFrameContext`` without historical data,
> +and the context is maintained and updated through successive processing
> +operations.
> +
> +Operating
> +~~~~~~~~~
> +
> +There are three main parts of interactions with the algorithms for the
> +IPU3 IPA to operate when running:
> +
> +-  configure()
> +-  processEvent(``EventFillParams``)


One liner explanation for EventFillParams is missing. It should be 
present in Pre-frame preparation section, no? Can you please include a 
brief line analogous to "EventStatReady" event in the Post-frame 
completion section?

Other than that, looks fine to me:

Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>

> +-  processEvent(``EventStatReady``)
> +
> +The configuration phase allows the pipeline-handler to inform the IPA of
> +the current stream configurations, which is then passed into each
> +algorithm to provide an opportunity to identify and track state of the
> +hardware, such as image size or ImgU pipeline configurations.
> +
> +Pre-frame preparation
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +When configured, the IPA is notified by the pipeline handler of the
> +Camera ``start()`` event, after which incoming requests will be queued
> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
> +be populated for the ImgU. This is passed directly to each algorithm
> +through the ``prepare()`` call allowing the ISP configuration to be
> +updated for the needs of each component that the algorithm is
> +responsible for.
> +
> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
> +structure that it modifies, and it should take care to ensure that any
> +structure set by a use flag is fully initialised to suitable values.
> +
> +The parameter buffer is returned to the pipeline handler through the
> +``ActionParamFilled`` event, and from there queued to the ImgU along
> +with a raw frame captured with the CIO2.
> +
> +Post-frame completion
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +When the capture of an image is completed, and successfully processed
> +through the ImgU, the generated statistics buffer
> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
> +``EventStatReady`` event. This provides the IPA with an opportunity to
> +examine the results of the ISP and run the calculations required by each
> +algorithm on the new data. The algorithms may require context from the
> +operations of other algorithms, for example, the AWB might choose to use
> +a scene brightness determined by the AGC. It is important that the
> +algorithms are ordered to ensure that required results are determined
> +before they are needed.
> +
> +The ordering of the algorithm processing is determined by their
> +placement in the ``IPU3::algorithms_`` ordered list.
Laurent Pinchart Sept. 14, 2021, 5:37 a.m. UTC | #2
Hi Kieran,

Thank you for the patch.

On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
> The IPU3 IPA implements the basic 3a using the ImgU ISP.

s/3a/3A/

Technically speaking we're only implementing 2 out of 3 at this point
:-)

> Provide an overview document to describe it's operations, and provide a

s/it's/its/

> block diagram to help visualise how the components are put together to
> assist any new developers exploring the code.
> 
> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> 
> ---
> This is really just an RFC: Particularly:
> 
>  - Should this content actually live in Documentation?
>    or is it more suited to a page under src/ipa/ipu3/...

Good question. We tend to document things in the code where possible.
Would there be any drawback from doing so in this case ?

>  - This is not complete to document all functionality
>    but gives an overview of the current implementation

Sounds good to me.

>  - Is there anything anyone would find helpful for me to
>    extend/write about on this?
> 
>  - It could likely get merged in with some of the work
>    that Jean-Michel is doing, and might duplicate some of
>    the parts he is documenting, but this was an aim to
>    write a single page overview as a getting started with
>    the IPU3 IPA ....

I think it's *very* useful to have an overview.

>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>  1 file changed, 144 insertions(+)
>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
> 
> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
> new file mode 100644
> index 000000000000..a1d0f13fbb00
> --- /dev/null
> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
> @@ -0,0 +1,144 @@
> +IPU3 IPA Architecture Design and Overview
> +=========================================
> +
> +The IPU3 IPA is built as a modular and extensible framework with an
> +upper layer to manage the interactions with the pipeline handler, and
> +the image processing algorithms split to compartmentalise the processing
> +required for each accellerator cluster provided by the ImgU ISP.

I've commented on the expression "accelerator cluster" when reviewing
Jean-Michel's series. I've then seen that it's mentioned in the IPU3
kernel documentation in one place, I suppose that's where it comes from.
From what I understand, the expression is used to refer to a processing
block (or sub-block) that is fully implemented in hardware, as opposed
as being implemented partly or fully in the ImgU firmware. I don't think
that's relevant from an IPA point of view, it's an internal
implementation detail of the ImgU. I'd thus use "processing block" or a
similar term instead.

> +
> +The core IPU3 class is responsible for initialisation and construction
> +of the algorithm components, processing controls set by the requests
> +from applications, and managing events from the Pipeline handler.

s/Pipeline/pipeline/

> +
> +::
> +
> +        ┌───────────────────────────────────────────┐
> +        │      IPU3 Pipeline Handler                │
> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
> +        │   │        │    │        │    │        │  │
> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
> +        │   │        │    │        │    │        │  │
> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
> +        │                                 │P   │    │    S: Statistics Buffer
> +        │                                 │    │S   │
> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
> +        │ IPU3 IPA                                  │
> +        │                 ┌───────────────────────┐ │
> +        │ ┌───────────┐   │ Algorithms            │ │
> +        │ │IPAContext │   │          ┌─────────┐  │ │
> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
> +        │ │ │       │ │   │    │         │        │ │
> +        │ │ └───────┘ │   │    └─────────┘        │ │
> +        │ └───────────┘   └───────────────────────┘ │
> +        └───────────────────────────────────────────┘
> +         SC: IPASessionConfiguration
> +         FC: IPAFrameContext(s)
> +
> +The IPA instance is constructed and initialised at the point a Camera is
> +created by the IPU3 pipeline handler. The initialisation call provides
> +details about which Camera Sensor is being used, and the controls that

s/Camera Sensor/camera sensor/ ?

> +it has available, along with their defaults and ranges.

s/defaults/default values/

> +
> +Buffers
> +~~~~~~~
> +
> +The IPA will have Parameter and Statistics buffers shared with it from
> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
> +before the ``start()`` operation occurs.
> +
> +The IPA will map the buffers into CPU accessible memory, associated with

s/CPU accessible/CPU-accessible/

> +a buffer ID, and further events for sending or receiving parameter and
> +statistics buffers will reference the ID to avoid expensive memory
> +mapping operations, or the passing of file handles during streaming.
> +
> +After the ``stop()`` operation occurs, these buffers will be unmapped,
> +and no further access to the buffers is permitted.
> +
> +Context
> +~~~~~~~
> +
> +Algorithm calls will always have the ``IPAContext`` available to them.
> +This context comprises of two parts:
> +
> +-  IPA Session Configuration
> +-  IPA Frame Context
> +
> +The session configuration structure ``IPASessionConfiguration``
> +represents constant parameters determined from either before streaming
> +commenced during ``configure()`` or updated parameters when processing
> +controls.

I'm not sure about the last part, I thought the session configuration
was meant to be constant after configure() ?

> +The IPA Frame Context provides the storage for algorithms for a single
> +frame operation.
> +
> +The ``IPAFrameContext`` structure may be extended to an array, list, or
> +queue to store historical state for each frame, allowing algorithms to
> +obtain and reference results of calculations which are deeply pipelined.
> +This may only be done if an algorithm needs to know the context that was
> +applied at the frame the statistics were produced for, rather than the
> +previous or current frame.
> +
> +Presently there is a single ``IPAFrameContext`` without historical data,
> +and the context is maintained and updated through successive processing
> +operations.
> +
> +Operating
> +~~~~~~~~~
> +
> +There are three main parts of interactions with the algorithms for the
> +IPU3 IPA to operate when running:
> +
> +-  configure()
> +-  processEvent(``EventFillParams``)
> +-  processEvent(``EventStatReady``)
> +
> +The configuration phase allows the pipeline-handler to inform the IPA of
> +the current stream configurations, which is then passed into each
> +algorithm to provide an opportunity to identify and track state of the
> +hardware, such as image size or ImgU pipeline configurations.
> +
> +Pre-frame preparation
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +When configured, the IPA is notified by the pipeline handler of the
> +Camera ``start()`` event, after which incoming requests will be queued
> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
> +be populated for the ImgU. This is passed directly to each algorithm
> +through the ``prepare()`` call allowing the ISP configuration to be
> +updated for the needs of each component that the algorithm is
> +responsible for.
> +
> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
> +structure that it modifies, and it should take care to ensure that any
> +structure set by a use flag is fully initialised to suitable values.
> +
> +The parameter buffer is returned to the pipeline handler through the
> +``ActionParamFilled`` event, and from there queued to the ImgU along
> +with a raw frame captured with the CIO2.

This reminds me that we should turn the operations (actions and events)
into separate functions in the IPAIPU3Interface. Out of scope for this
patch of course.

> +
> +Post-frame completion
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +When the capture of an image is completed, and successfully processed
> +through the ImgU, the generated statistics buffer
> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
> +``EventStatReady`` event. This provides the IPA with an opportunity to
> +examine the results of the ISP and run the calculations required by each
> +algorithm on the new data. The algorithms may require context from the
> +operations of other algorithms, for example, the AWB might choose to use
> +a scene brightness determined by the AGC. It is important that the
> +algorithms are ordered to ensure that required results are determined
> +before they are needed.
> +
> +The ordering of the algorithm processing is determined by their
> +placement in the ``IPU3::algorithms_`` ordered list.

I expect interesting research to happen in this area :-)
Kieran Bingham Sept. 14, 2021, 10:13 a.m. UTC | #3
Hi Laurent,

On 14/09/2021 06:37, Laurent Pinchart wrote:
> Hi Kieran,
> 
> Thank you for the patch.
> 
> On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
> 
> s/3a/3A/
> 
> Technically speaking we're only implementing 2 out of 3 at this point
> :-)
> 
>> Provide an overview document to describe it's operations, and provide a
> 
> s/it's/its/
> 
>> block diagram to help visualise how the components are put together to
>> assist any new developers exploring the code.
>>
>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>>
>> ---
>> This is really just an RFC: Particularly:
>>
>>  - Should this content actually live in Documentation?
>>    or is it more suited to a page under src/ipa/ipu3/...
> 
> Good question. We tend to document things in the code where possible.
> Would there be any drawback from doing so in this case ?

I don't think so - I think the question mostly comes from
'discoverability' of the documentation.


Which is also a bigger question that we need to make it easier for
readers to find the documentation that they desire.

I've come across a great talk on this recently:
  - What nobody tells you about documentation - Daniele Procida
  - PyConAu
  - https://2017.pycon-au.org/schedule/presentation/15/
  - https://youtu.be/t4vKPhjcMZg

And that's been spun into a website to document how to document ;-)
  - https://diataxis.fr/


So, I think this could indeed live under src/ipa/ipu3 but it needs to be
clear that this should be the first thing to read when looking into the IPA.



> 
>>  - This is not complete to document all functionality
>>    but gives an overview of the current implementation
> 
> Sounds good to me.
> 
>>  - Is there anything anyone would find helpful for me to
>>    extend/write about on this?
>>
>>  - It could likely get merged in with some of the work
>>    that Jean-Michel is doing, and might duplicate some of
>>    the parts he is documenting, but this was an aim to
>>    write a single page overview as a getting started with
>>    the IPU3 IPA ....
> 
> I think it's *very* useful to have an overview.
> 
>>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>>  1 file changed, 144 insertions(+)
>>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
>>
>> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
>> new file mode 100644
>> index 000000000000..a1d0f13fbb00
>> --- /dev/null
>> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
>> @@ -0,0 +1,144 @@
>> +IPU3 IPA Architecture Design and Overview
>> +=========================================
>> +
>> +The IPU3 IPA is built as a modular and extensible framework with an
>> +upper layer to manage the interactions with the pipeline handler, and
>> +the image processing algorithms split to compartmentalise the processing
>> +required for each accellerator cluster provided by the ImgU ISP.
> 
> I've commented on the expression "accelerator cluster" when reviewing
> Jean-Michel's series. I've then seen that it's mentioned in the IPU3
> kernel documentation in one place, I suppose that's where it comes from.
> From what I understand, the expression is used to refer to a processing
> block (or sub-block) that is fully implemented in hardware, as opposed
> as being implemented partly or fully in the ImgU firmware. I don't think
> that's relevant from an IPA point of view, it's an internal
> implementation detail of the ImgU. I'd thus use "processing block" or a
> similar term instead.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/media/ipu3/include/uapi/intel-ipu3.h#n2428

Describes it as:

"""
ACC refers to the HW cluster containing all Fixed Functions (FFs). Each
FF implements a specific algorithm.
"""

So in fact, yes - the term Accelerator Cluster is a 'group' of all the
hardware accelerator functions, rather than each individual one.

Each individual function (fixed function) can indeed be better named as
a processing block.


Lets make this more like

"""
The IPU3 IPA is built as a modular and extensible framework with an
upper layer to manage the interactions with the pipeline handler, and
the image processing algorithms split to compartmentalise the processing
required for each processing block, making use of the fixed-function
accelerators provided by the ImgU ISP.
"""



>> +
>> +The core IPU3 class is responsible for initialisation and construction
>> +of the algorithm components, processing controls set by the requests
>> +from applications, and managing events from the Pipeline handler.
> 
> s/Pipeline/pipeline/
> 
>> +
>> +::
>> +
>> +        ┌───────────────────────────────────────────┐
>> +        │      IPU3 Pipeline Handler                │
>> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
>> +        │   │        │    │        │    │        │  │
>> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
>> +        │   │        │    │        │    │        │  │
>> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
>> +        │                                 │P   │    │    S: Statistics Buffer
>> +        │                                 │    │S   │
>> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
>> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
>> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
>> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
>> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
>> +        │ IPU3 IPA                                  │
>> +        │                 ┌───────────────────────┐ │
>> +        │ ┌───────────┐   │ Algorithms            │ │
>> +        │ │IPAContext │   │          ┌─────────┐  │ │
>> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
>> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
>> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
>> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
>> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
>> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
>> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
>> +        │ │ │       │ │   │    │         │        │ │
>> +        │ │ └───────┘ │   │    └─────────┘        │ │
>> +        │ └───────────┘   └───────────────────────┘ │
>> +        └───────────────────────────────────────────┘
>> +         SC: IPASessionConfiguration
>> +         FC: IPAFrameContext(s)
>> +
>> +The IPA instance is constructed and initialised at the point a Camera is
>> +created by the IPU3 pipeline handler. The initialisation call provides


"Camera" is capitalised as a libcamera noun. Shouldn't Pipeline Handler
also be capitalised as it's a libcamera named thing?


(From your comment above about s/Pipeline/pipeline/. I wonder if instead
it should be capitalised there and here too.


>> +details about which Camera Sensor is being used, and the controls that
> 
> s/Camera Sensor/camera sensor/ ?

Now that one is awkward, as it's both a libcamera noun, and a real world
object.

In this context, I believe camera sensor is correct, (as you've
suggested) as it's the external world camera sensor details.

> 
>> +it has available, along with their defaults and ranges.
> 
> s/defaults/default values/
> 
>> +
>> +Buffers
>> +~~~~~~~
>> +
>> +The IPA will have Parameter and Statistics buffers shared with it from
>> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
>> +before the ``start()`` operation occurs.
>> +
>> +The IPA will map the buffers into CPU accessible memory, associated with
> 
> s/CPU accessible/CPU-accessible/
> 
>> +a buffer ID, and further events for sending or receiving parameter and
>> +statistics buffers will reference the ID to avoid expensive memory
>> +mapping operations, or the passing of file handles during streaming.
>> +
>> +After the ``stop()`` operation occurs, these buffers will be unmapped,
>> +and no further access to the buffers is permitted.
>> +
>> +Context
>> +~~~~~~~
>> +
>> +Algorithm calls will always have the ``IPAContext`` available to them.
>> +This context comprises of two parts:
>> +
>> +-  IPA Session Configuration
>> +-  IPA Frame Context
>> +
>> +The session configuration structure ``IPASessionConfiguration``
>> +represents constant parameters determined from either before streaming
>> +commenced during ``configure()`` or updated parameters when processing
>> +controls.
> 
> I'm not sure about the last part, I thought the session configuration
> was meant to be constant after configure() ?

I would expect updates from controls to update the 'active session
configuration'.

Of course some of that session configuration will be constant, and not
change (I don't think we'll change the resolution for instance) - but I
expect that some parameters will be set such as whether AE is enabled or
locked.

That to me is a session configuration parameter. But it can change at
runtime - and while it remains the same, every new frame will have that
configuration applied.


Whereas the FrameContexts are about the context that is handled for each
frame, and any context shared between algorithms , and is much more dynamic.


>> +The IPA Frame Context provides the storage for algorithms for a single
>> +frame operation.
>> +
>> +The ``IPAFrameContext`` structure may be extended to an array, list, or
>> +queue to store historical state for each frame, allowing algorithms to
>> +obtain and reference results of calculations which are deeply pipelined.
>> +This may only be done if an algorithm needs to know the context that was
>> +applied at the frame the statistics were produced for, rather than the
>> +previous or current frame.
>> +
>> +Presently there is a single ``IPAFrameContext`` without historical data,
>> +and the context is maintained and updated through successive processing
>> +operations.
>> +
>> +Operating
>> +~~~~~~~~~
>> +
>> +There are three main parts of interactions with the algorithms for the
>> +IPU3 IPA to operate when running:
>> +
>> +-  configure()
>> +-  processEvent(``EventFillParams``)
>> +-  processEvent(``EventStatReady``)
>> +
>> +The configuration phase allows the pipeline-handler to inform the IPA of
>> +the current stream configurations, which is then passed into each
>> +algorithm to provide an opportunity to identify and track state of the
>> +hardware, such as image size or ImgU pipeline configurations.
>> +
>> +Pre-frame preparation
>> +~~~~~~~~~~~~~~~~~~~~~
>> +
>> +When configured, the IPA is notified by the pipeline handler of the
>> +Camera ``start()`` event, after which incoming requests will be queued
>> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
>> +be populated for the ImgU. This is passed directly to each algorithm
>> +through the ``prepare()`` call allowing the ISP configuration to be
>> +updated for the needs of each component that the algorithm is
>> +responsible for.
>> +
>> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
>> +structure that it modifies, and it should take care to ensure that any
>> +structure set by a use flag is fully initialised to suitable values.
>> +
>> +The parameter buffer is returned to the pipeline handler through the
>> +``ActionParamFilled`` event, and from there queued to the ImgU along
>> +with a raw frame captured with the CIO2.
> 
> This reminds me that we should turn the operations (actions and events)
> into separate functions in the IPAIPU3Interface. Out of scope for this
> patch of course.

You mean directly defining them in Mojo instead of passing them through
processEvent with an event flag?

I think that would probably be better indeed, I don't see much added
value to having processEvent() ... (with the small exception that it
partially shows which functions are called while streaming, but we could
also document the async/streaming expectations anyway).


> 
>> +
>> +Post-frame completion
>> +~~~~~~~~~~~~~~~~~~~~~
>> +
>> +When the capture of an image is completed, and successfully processed
>> +through the ImgU, the generated statistics buffer
>> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
>> +``EventStatReady`` event. This provides the IPA with an opportunity to
>> +examine the results of the ISP and run the calculations required by each
>> +algorithm on the new data. The algorithms may require context from the
>> +operations of other algorithms, for example, the AWB might choose to use
>> +a scene brightness determined by the AGC. It is important that the
>> +algorithms are ordered to ensure that required results are determined
>> +before they are needed.
>> +
>> +The ordering of the algorithm processing is determined by their
>> +placement in the ``IPU3::algorithms_`` ordered list.
> 
> I expect interesting research to happen in this area :-)

Yes, well it's too early to state any more than that currently I think.

--
Kieran
Laurent Pinchart Sept. 14, 2021, 10:34 a.m. UTC | #4
Hi Kieran,

On Tue, Sep 14, 2021 at 11:13:39AM +0100, Kieran Bingham wrote:
> On 14/09/2021 06:37, Laurent Pinchart wrote:
> > On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
> >> The IPU3 IPA implements the basic 3a using the ImgU ISP.
> > 
> > s/3a/3A/
> > 
> > Technically speaking we're only implementing 2 out of 3 at this point
> > :-)
> > 
> >> Provide an overview document to describe it's operations, and provide a
> > 
> > s/it's/its/
> > 
> >> block diagram to help visualise how the components are put together to
> >> assist any new developers exploring the code.
> >>
> >> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> >>
> >> ---
> >> This is really just an RFC: Particularly:
> >>
> >>  - Should this content actually live in Documentation?
> >>    or is it more suited to a page under src/ipa/ipu3/...
> > 
> > Good question. We tend to document things in the code where possible.
> > Would there be any drawback from doing so in this case ?
> 
> I don't think so - I think the question mostly comes from
> 'discoverability' of the documentation.
> 
> 
> Which is also a bigger question that we need to make it easier for
> readers to find the documentation that they desire.
> 
> I've come across a great talk on this recently:
>   - What nobody tells you about documentation - Daniele Procida
>   - PyConAu
>   - https://2017.pycon-au.org/schedule/presentation/15/
>   - https://youtu.be/t4vKPhjcMZg
> 
> And that's been spun into a website to document how to document ;-)
>   - https://diataxis.fr/

Thanks, I'll have a look.

> So, I think this could indeed live under src/ipa/ipu3 but it needs to be
> clear that this should be the first thing to read when looking into the IPA.

Agreed. I could envision moving it to Documentation/ later, as part of a
longer document that uses the IPU3 IPA to teach how to write algorithms,
but at that point we'll start writing a book about algorithm design :-)

> >>  - This is not complete to document all functionality
> >>    but gives an overview of the current implementation
> > 
> > Sounds good to me.
> > 
> >>  - Is there anything anyone would find helpful for me to
> >>    extend/write about on this?
> >>
> >>  - It could likely get merged in with some of the work
> >>    that Jean-Michel is doing, and might duplicate some of
> >>    the parts he is documenting, but this was an aim to
> >>    write a single page overview as a getting started with
> >>    the IPU3 IPA ....
> > 
> > I think it's *very* useful to have an overview.
> > 
> >>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
> >>  1 file changed, 144 insertions(+)
> >>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
> >>
> >> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
> >> new file mode 100644
> >> index 000000000000..a1d0f13fbb00
> >> --- /dev/null
> >> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
> >> @@ -0,0 +1,144 @@
> >> +IPU3 IPA Architecture Design and Overview
> >> +=========================================
> >> +
> >> +The IPU3 IPA is built as a modular and extensible framework with an
> >> +upper layer to manage the interactions with the pipeline handler, and
> >> +the image processing algorithms split to compartmentalise the processing
> >> +required for each accellerator cluster provided by the ImgU ISP.
> > 
> > I've commented on the expression "accelerator cluster" when reviewing
> > Jean-Michel's series. I've then seen that it's mentioned in the IPU3
> > kernel documentation in one place, I suppose that's where it comes from.
> > From what I understand, the expression is used to refer to a processing
> > block (or sub-block) that is fully implemented in hardware, as opposed
> > as being implemented partly or fully in the ImgU firmware. I don't think
> > that's relevant from an IPA point of view, it's an internal
> > implementation detail of the ImgU. I'd thus use "processing block" or a
> > similar term instead.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/media/ipu3/include/uapi/intel-ipu3.h#n2428
> 
> Describes it as:
> 
> """
> ACC refers to the HW cluster containing all Fixed Functions (FFs). Each
> FF implements a specific algorithm.
> """
> 
> So in fact, yes - the term Accelerator Cluster is a 'group' of all the
> hardware accelerator functions, rather than each individual one.
> 
> Each individual function (fixed function) can indeed be better named as
> a processing block.
> 
> 
> Lets make this more like
> 
> """
> The IPU3 IPA is built as a modular and extensible framework with an
> upper layer to manage the interactions with the pipeline handler, and
> the image processing algorithms split to compartmentalise the processing
> required for each processing block, making use of the fixed-function
> accelerators provided by the ImgU ISP.
> """

Sounds good to me. Let's standardize on the same "processing block"
terminology in the IPU3 IPA documentation series too.

> >> +
> >> +The core IPU3 class is responsible for initialisation and construction
> >> +of the algorithm components, processing controls set by the requests
> >> +from applications, and managing events from the Pipeline handler.
> > 
> > s/Pipeline/pipeline/
> > 
> >> +
> >> +::
> >> +
> >> +        ┌───────────────────────────────────────────┐
> >> +        │      IPU3 Pipeline Handler                │
> >> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
> >> +        │   │        │    │        │    │        │  │
> >> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
> >> +        │   │        │    │        │    │        │  │
> >> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
> >> +        │                                 │P   │    │    S: Statistics Buffer
> >> +        │                                 │    │S   │
> >> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
> >> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
> >> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
> >> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
> >> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
> >> +        │ IPU3 IPA                                  │
> >> +        │                 ┌───────────────────────┐ │
> >> +        │ ┌───────────┐   │ Algorithms            │ │
> >> +        │ │IPAContext │   │          ┌─────────┐  │ │
> >> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
> >> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
> >> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
> >> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
> >> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
> >> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
> >> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
> >> +        │ │ │       │ │   │    │         │        │ │
> >> +        │ │ └───────┘ │   │    └─────────┘        │ │
> >> +        │ └───────────┘   └───────────────────────┘ │
> >> +        └───────────────────────────────────────────┘
> >> +         SC: IPASessionConfiguration
> >> +         FC: IPAFrameContext(s)
> >> +
> >> +The IPA instance is constructed and initialised at the point a Camera is
> >> +created by the IPU3 pipeline handler. The initialisation call provides
> 
> "Camera" is capitalised as a libcamera noun. Shouldn't Pipeline Handler
> also be capitalised as it's a libcamera named thing?
> 
> (From your comment above about s/Pipeline/pipeline/. I wonder if instead
> it should be capitalised there and here too.

In Doxygen capitalization generates links to classes, so we have to be
careful about writing "Camera" for instance. When it comes to pipeline
handlers, "Pipeline Handler" won't generate a link, and I wouldn't write
"PipelineHandler" here anyway.

I'm sure there are conventions for all this in documentation. I usually
capitalize words that have special meanings the first time they're
introduced only, but I don't know if that's a good thing to do.

> >> +details about which Camera Sensor is being used, and the controls that
> > 
> > s/Camera Sensor/camera sensor/ ?
> 
> Now that one is awkward, as it's both a libcamera noun, and a real world
> object.
> 
> In this context, I believe camera sensor is correct, (as you've
> suggested) as it's the external world camera sensor details.
> 
> >> +it has available, along with their defaults and ranges.
> > 
> > s/defaults/default values/
> > 
> >> +
> >> +Buffers
> >> +~~~~~~~
> >> +
> >> +The IPA will have Parameter and Statistics buffers shared with it from
> >> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
> >> +before the ``start()`` operation occurs.
> >> +
> >> +The IPA will map the buffers into CPU accessible memory, associated with
> > 
> > s/CPU accessible/CPU-accessible/
> > 
> >> +a buffer ID, and further events for sending or receiving parameter and
> >> +statistics buffers will reference the ID to avoid expensive memory
> >> +mapping operations, or the passing of file handles during streaming.
> >> +
> >> +After the ``stop()`` operation occurs, these buffers will be unmapped,
> >> +and no further access to the buffers is permitted.
> >> +
> >> +Context
> >> +~~~~~~~
> >> +
> >> +Algorithm calls will always have the ``IPAContext`` available to them.
> >> +This context comprises of two parts:
> >> +
> >> +-  IPA Session Configuration
> >> +-  IPA Frame Context
> >> +
> >> +The session configuration structure ``IPASessionConfiguration``
> >> +represents constant parameters determined from either before streaming
> >> +commenced during ``configure()`` or updated parameters when processing
> >> +controls.
> > 
> > I'm not sure about the last part, I thought the session configuration
> > was meant to be constant after configure() ?
> 
> I would expect updates from controls to update the 'active session
> configuration'.
> 
> Of course some of that session configuration will be constant, and not
> change (I don't think we'll change the resolution for instance) - but I
> expect that some parameters will be set such as whether AE is enabled or
> locked.
> 
> That to me is a session configuration parameter. But it can change at
> runtime - and while it remains the same, every new frame will have that
> configuration applied.
> 
> 
> Whereas the FrameContexts are about the context that is handled for each
> frame, and any context shared between algorithms , and is much more dynamic.

This makes me think that we may want to introduce a third structure to
group state that isn't per-frame. I recall designing
IPASessionConfiguration as something that will not change during the
session. Maybe I recall it wrong, but I like the idea of keeping it
static. Separating configuration that is set at the beginning and
doesn't change from controls that are updated dynamically seems a good
design principle to me.

> >> +The IPA Frame Context provides the storage for algorithms for a single
> >> +frame operation.
> >> +
> >> +The ``IPAFrameContext`` structure may be extended to an array, list, or
> >> +queue to store historical state for each frame, allowing algorithms to
> >> +obtain and reference results of calculations which are deeply pipelined.
> >> +This may only be done if an algorithm needs to know the context that was
> >> +applied at the frame the statistics were produced for, rather than the
> >> +previous or current frame.
> >> +
> >> +Presently there is a single ``IPAFrameContext`` without historical data,
> >> +and the context is maintained and updated through successive processing
> >> +operations.
> >> +
> >> +Operating
> >> +~~~~~~~~~
> >> +
> >> +There are three main parts of interactions with the algorithms for the
> >> +IPU3 IPA to operate when running:
> >> +
> >> +-  configure()
> >> +-  processEvent(``EventFillParams``)
> >> +-  processEvent(``EventStatReady``)
> >> +
> >> +The configuration phase allows the pipeline-handler to inform the IPA of
> >> +the current stream configurations, which is then passed into each
> >> +algorithm to provide an opportunity to identify and track state of the
> >> +hardware, such as image size or ImgU pipeline configurations.
> >> +
> >> +Pre-frame preparation
> >> +~~~~~~~~~~~~~~~~~~~~~
> >> +
> >> +When configured, the IPA is notified by the pipeline handler of the
> >> +Camera ``start()`` event, after which incoming requests will be queued
> >> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
> >> +be populated for the ImgU. This is passed directly to each algorithm
> >> +through the ``prepare()`` call allowing the ISP configuration to be
> >> +updated for the needs of each component that the algorithm is
> >> +responsible for.
> >> +
> >> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
> >> +structure that it modifies, and it should take care to ensure that any
> >> +structure set by a use flag is fully initialised to suitable values.
> >> +
> >> +The parameter buffer is returned to the pipeline handler through the
> >> +``ActionParamFilled`` event, and from there queued to the ImgU along
> >> +with a raw frame captured with the CIO2.
> > 
> > This reminds me that we should turn the operations (actions and events)
> > into separate functions in the IPAIPU3Interface. Out of scope for this
> > patch of course.
> 
> You mean directly defining them in Mojo instead of passing them through
> processEvent with an event flag?

Yes.

> I think that would probably be better indeed, I don't see much added
> value to having processEvent() ... (with the small exception that it
> partially shows which functions are called while streaming, but we could
> also document the async/streaming expectations anyway).

processEvent() dates back from when we tried to have a single generic
API for IPAs.

> >> +
> >> +Post-frame completion
> >> +~~~~~~~~~~~~~~~~~~~~~
> >> +
> >> +When the capture of an image is completed, and successfully processed
> >> +through the ImgU, the generated statistics buffer
> >> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
> >> +``EventStatReady`` event. This provides the IPA with an opportunity to
> >> +examine the results of the ISP and run the calculations required by each
> >> +algorithm on the new data. The algorithms may require context from the
> >> +operations of other algorithms, for example, the AWB might choose to use
> >> +a scene brightness determined by the AGC. It is important that the
> >> +algorithms are ordered to ensure that required results are determined
> >> +before they are needed.
> >> +
> >> +The ordering of the algorithm processing is determined by their
> >> +placement in the ``IPU3::algorithms_`` ordered list.
> > 
> > I expect interesting research to happen in this area :-)
> 
> Yes, well it's too early to state any more than that currently I think.
Kieran Bingham Sept. 16, 2021, 12:36 p.m. UTC | #5
On 13/09/2021 15:29, Umang Jain wrote:
> Hi Kieran,
> 
> On 9/13/21 3:00 AM, Kieran Bingham wrote:
>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
> 3a or 3A?
>>
>> Provide an overview document to describe it's operations, and provide a
>> block diagram to help visualise how the components are put together to
>> assist any new developers exploring the code.
>>
>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>>
>> ---
>> This is really just an RFC: Particularly:
>>
>>   - Should this content actually live in Documentation?
>>     or is it more suited to a page under src/ipa/ipu3/...
> 
> 
> I would probably keep it under Documentation/ but there is already a IPA
> writing guide. So maybe we need a new dir, to document overviews of
> existing IPAs?

Indeed, the existing document really describes the IPA IPC interfaces, I
wonder if it should be renamed ...

The aim of this is to provide some overview on how the IPU3 IPA operates
from a higher level.

Along with the discussions with Laurent, I'm going to move this to
src/ipa/ipu3/ipu3-ipa-design-guide.rst for now.

That will keep the overview document in the actual component for now.

It may be that we write further 'generic' IPA design guides, which may
take inspiration from this, and could live in Documentation, but I think
I/we need to spend some dedicated cycles on how we clean up our
documentation anyway, so I'd rather keep this file close to it's
implementation for now.




>>
>>   - This is not complete to document all functionality
>>     but gives an overview of the current implementation
>>
>>   - Is there anything anyone would find helpful for me to
>>     extend/write about on this?
>>
>>   - It could likely get merged in with some of the work
>>     that Jean-Michel is doing, and might duplicate some of
>>     the parts he is documenting, but this was an aim to
>>     write a single page overview as a getting started with
>>     the IPU3 IPA ....
>>
>>
>>   .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>>   1 file changed, 144 insertions(+)
>>   create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
>>
>> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst
>> b/Documentation/guides/ipu3-ipa-design-guide.rst
>> new file mode 100644
>> index 000000000000..a1d0f13fbb00
>> --- /dev/null
>> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
>> @@ -0,0 +1,144 @@
>> +IPU3 IPA Architecture Design and Overview
>> +=========================================
>> +
>> +The IPU3 IPA is built as a modular and extensible framework with an
>> +upper layer to manage the interactions with the pipeline handler, and
>> +the image processing algorithms split to compartmentalise the processing
>> +required for each accellerator cluster provided by the ImgU ISP.
>> +
>> +The core IPU3 class is responsible for initialisation and construction
> s/IPU3/IPAIPU3 maybe?
>> +of the algorithm components, processing controls set by the requests
>> +from applications, and managing events from the Pipeline handler.
>> +
>> +::
>> +
>> +        ┌───────────────────────────────────────────┐
>> +        │      IPU3 Pipeline Handler                │
>> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
>> +        │   │        │    │        │    │        │  │
>> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
>> +        │   │        │    │        │    │        │  │
>> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter
>> Buffer
>> +        │                                 │P   │    │    S:
>> Statistics Buffer
>> +        │                                 │    │S   │
>> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
>> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
>> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3:
>> mapBuffers(), start()
>> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4:
>> processEvent()
>> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(),
>> unmapBuffers()
>> +        │ IPU3 IPA                                  │
>> +        │                 ┌───────────────────────┐ │
>> +        │ ┌───────────┐   │ Algorithms            │ │
>> +        │ │IPAContext │   │          ┌─────────┐  │ │
>> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
>> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
>> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
>> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
>> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
>> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
>> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
>> +        │ │ │       │ │   │    │         │        │ │
>> +        │ │ └───────┘ │   │    └─────────┘        │ │
>> +        │ └───────────┘   └───────────────────────┘ │
>> +        └───────────────────────────────────────────┘
>> +         SC: IPASessionConfiguration
>> +         FC: IPAFrameContext(s)
>> +
>> +The IPA instance is constructed and initialised at the point a Camera is
>> +created by the IPU3 pipeline handler. The initialisation call provides
>> +details about which Camera Sensor is being used, and the controls that
>> +it has available, along with their defaults and ranges.
> 
> 
> Brief line about how sensor controls are used in IPA will be useful?
> This will explain "why" sensor controls are passed to IPA and what IPA
> does with it (for e.g gather necessary info to run algorithms).

I have wondered about putting a CameraSensorHelper box in the IPA too,
but I thought that would be too cluttered, and I don't think it adds
much to convey the context of the running IPA (at least for the moment) ...


But it might certainly be worth explaining about how controls are also
sent to the pipeline handler. I'll see if I can add a section:


Controls
~~~~~~~~

The AutoExposure and AutoGain (AGC) algorithm differs slightly from the
others as it requires operating directly on the sensor, as opposed to
through the ImgU ISP. To support this, there is a dedicated action
`ActionSetSensorControls` to allow the IPA to request controls to be set
on the camera sensor through the pipeline handler.



>> +
>> +Buffers
>> +~~~~~~~
>> +
>> +The IPA will have Parameter and Statistics buffers shared with it from
>> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
>> +before the ``start()`` operation occurs.
>> +
>> +The IPA will map the buffers into CPU accessible memory, associated with
>> +a buffer ID, and further events for sending or receiving parameter and
>> +statistics buffers will reference the ID to avoid expensive memory
>> +mapping operations, or the passing of file handles during streaming.
>> +
>> +After the ``stop()`` operation occurs, these buffers will be unmapped,
>> +and no further access to the buffers is permitted.
>> +
>> +Context
>> +~~~~~~~
>> +
>> +Algorithm calls will always have the ``IPAContext`` available to them.
>> +This context comprises of two parts:
>> +
>> +-  IPA Session Configuration
>> +-  IPA Frame Context
>> +
>> +The session configuration structure ``IPASessionConfiguration``
>> +represents constant parameters determined from either before streaming
>> +commenced during ``configure()`` or updated parameters when processing
>> +controls.
>> +
>> +The IPA Frame Context provides the storage for algorithms for a single
>> +frame operation.
>> +
>> +The ``IPAFrameContext`` structure may be extended to an array, list, or
>> +queue to store historical state for each frame, allowing algorithms to
>> +obtain and reference results of calculations which are deeply pipelined.
>> +This may only be done if an algorithm needs to know the context that was
>> +applied at the frame the statistics were produced for, rather than the
>> +previous or current frame.
>> +
>> +Presently there is a single ``IPAFrameContext`` without historical data,
>> +and the context is maintained and updated through successive processing
>> +operations.
>> +
>> +Operating
>> +~~~~~~~~~
>> +
>> +There are three main parts of interactions with the algorithms for the
>> +IPU3 IPA to operate when running:
>> +
>> +-  configure()
>> +-  processEvent(``EventFillParams``)
> 
> 
> One liner explanation for EventFillParams is missing. It should be
> present in Pre-frame preparation section, no? Can you please include a
> brief line analogous to "EventStatReady" event in the Post-frame
> completion section?

Ah yes, indeed they should both be covered in the same way, so that's
missing thanks.

I've updated it.



> 
> Other than that, looks fine to me:
> 
> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
> 
>> +-  processEvent(``EventStatReady``)
>> +
>> +The configuration phase allows the pipeline-handler to inform the IPA of
>> +the current stream configurations, which is then passed into each
>> +algorithm to provide an opportunity to identify and track state of the
>> +hardware, such as image size or ImgU pipeline configurations.
>> +
>> +Pre-frame preparation
>> +~~~~~~~~~~~~~~~~~~~~~
>> +
>> +When configured, the IPA is notified by the pipeline handler of the
>> +Camera ``start()`` event, after which incoming requests will be queued
>> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
>> +be populated for the ImgU. This is passed directly to each algorithm>> +through the ``prepare()`` call allowing the ISP configuration to be
>> +updated for the needs of each component that the algorithm is
>> +responsible for.
>> +
>> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
>> +structure that it modifies, and it should take care to ensure that any
>> +structure set by a use flag is fully initialised to suitable values.
>> +
>> +The parameter buffer is returned to the pipeline handler through the
>> +``ActionParamFilled`` event, and from there queued to the ImgU along
>> +with a raw frame captured with the CIO2.
>> +
>> +Post-frame completion
>> +~~~~~~~~~~~~~~~~~~~~~
>> +
>> +When the capture of an image is completed, and successfully processed
>> +through the ImgU, the generated statistics buffer
>> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
>> +``EventStatReady`` event. This provides the IPA with an opportunity to
>> +examine the results of the ISP and run the calculations required by each
>> +algorithm on the new data. The algorithms may require context from the
>> +operations of other algorithms, for example, the AWB might choose to use
>> +a scene brightness determined by the AGC. It is important that the
>> +algorithms are ordered to ensure that required results are determined
>> +before they are needed.
>> +
>> +The ordering of the algorithm processing is determined by their
>> +placement in the ``IPU3::algorithms_`` ordered list.
Kieran Bingham Sept. 16, 2021, 12:37 p.m. UTC | #6
Hi Laurent,

+cc JM for discussion point below...

On 14/09/2021 11:34, Laurent Pinchart wrote:
> Hi Kieran,
> 
> On Tue, Sep 14, 2021 at 11:13:39AM +0100, Kieran Bingham wrote:
>> On 14/09/2021 06:37, Laurent Pinchart wrote:
>>> On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
>>>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
>>>
>>> s/3a/3A/
>>>
>>> Technically speaking we're only implementing 2 out of 3 at this point
>>> :-)
>>>
>>>> Provide an overview document to describe it's operations, and provide a
>>>
>>> s/it's/its/
>>>
>>>> block diagram to help visualise how the components are put together to
>>>> assist any new developers exploring the code.
>>>>
>>>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>>>>
>>>> ---
>>>> This is really just an RFC: Particularly:
>>>>
>>>>  - Should this content actually live in Documentation?
>>>>    or is it more suited to a page under src/ipa/ipu3/...
>>>
>>> Good question. We tend to document things in the code where possible.
>>> Would there be any drawback from doing so in this case ?
>>
>> I don't think so - I think the question mostly comes from
>> 'discoverability' of the documentation.
>>
>>
>> Which is also a bigger question that we need to make it easier for
>> readers to find the documentation that they desire.
>>
>> I've come across a great talk on this recently:
>>   - What nobody tells you about documentation - Daniele Procida
>>   - PyConAu
>>   - https://2017.pycon-au.org/schedule/presentation/15/
>>   - https://youtu.be/t4vKPhjcMZg
>>
>> And that's been spun into a website to document how to document ;-)
>>   - https://diataxis.fr/
> 
> Thanks, I'll have a look.
> 
>> So, I think this could indeed live under src/ipa/ipu3 but it needs to be
>> clear that this should be the first thing to read when looking into the IPA.
> 
> Agreed. I could envision moving it to Documentation/ later, as part of a
> longer document that uses the IPU3 IPA to teach how to write algorithms,
> but at that point we'll start writing a book about algorithm design :-)

For now, for the v2 I will keep this as an RST document, but place it in
src/ipa/ipu3/.


>>>>  - This is not complete to document all functionality
>>>>    but gives an overview of the current implementation
>>>
>>> Sounds good to me.
>>>
>>>>  - Is there anything anyone would find helpful for me to
>>>>    extend/write about on this?
>>>>
>>>>  - It could likely get merged in with some of the work
>>>>    that Jean-Michel is doing, and might duplicate some of
>>>>    the parts he is documenting, but this was an aim to
>>>>    write a single page overview as a getting started with
>>>>    the IPU3 IPA ....
>>>
>>> I think it's *very* useful to have an overview.

I'll keep this separate for now then.

>>>
>>>>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>>>>  1 file changed, 144 insertions(+)
>>>>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
>>>>
>>>> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
>>>> new file mode 100644
>>>> index 000000000000..a1d0f13fbb00
>>>> --- /dev/null
>>>> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
>>>> @@ -0,0 +1,144 @@
>>>> +IPU3 IPA Architecture Design and Overview
>>>> +=========================================
>>>> +
>>>> +The IPU3 IPA is built as a modular and extensible framework with an
>>>> +upper layer to manage the interactions with the pipeline handler, and
>>>> +the image processing algorithms split to compartmentalise the processing
>>>> +required for each accellerator cluster provided by the ImgU ISP.
>>>
>>> I've commented on the expression "accelerator cluster" when reviewing
>>> Jean-Michel's series. I've then seen that it's mentioned in the IPU3
>>> kernel documentation in one place, I suppose that's where it comes from.
>>> From what I understand, the expression is used to refer to a processing
>>> block (or sub-block) that is fully implemented in hardware, as opposed
>>> as being implemented partly or fully in the ImgU firmware. I don't think
>>> that's relevant from an IPA point of view, it's an internal
>>> implementation detail of the ImgU. I'd thus use "processing block" or a
>>> similar term instead.
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/media/ipu3/include/uapi/intel-ipu3.h#n2428
>>
>> Describes it as:
>>
>> """
>> ACC refers to the HW cluster containing all Fixed Functions (FFs). Each
>> FF implements a specific algorithm.
>> """
>>
>> So in fact, yes - the term Accelerator Cluster is a 'group' of all the
>> hardware accelerator functions, rather than each individual one.
>>
>> Each individual function (fixed function) can indeed be better named as
>> a processing block.
>>
>>
>> Lets make this more like
>>
>> """
>> The IPU3 IPA is built as a modular and extensible framework with an
>> upper layer to manage the interactions with the pipeline handler, and
>> the image processing algorithms split to compartmentalise the processing
>> required for each processing block, making use of the fixed-function
>> accelerators provided by the ImgU ISP.
>> """
> 
> Sounds good to me. Let's standardize on the same "processing block"
> terminology in the IPU3 IPA documentation series too.

Ok,

+Cc: Jean Michel to highlight this discussion point.

> 
>>>> +
>>>> +The core IPU3 class is responsible for initialisation and construction
>>>> +of the algorithm components, processing controls set by the requests
>>>> +from applications, and managing events from the Pipeline handler.
>>>
>>> s/Pipeline/pipeline/
>>>
>>>> +
>>>> +::
>>>> +
>>>> +        ┌───────────────────────────────────────────┐
>>>> +        │      IPU3 Pipeline Handler                │
>>>> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
>>>> +        │   │        │    │        │    │        │  │
>>>> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
>>>> +        │   │        │    │        │    │        │  │
>>>> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
>>>> +        │                                 │P   │    │    S: Statistics Buffer
>>>> +        │                                 │    │S   │
>>>> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
>>>> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
>>>> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
>>>> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
>>>> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
>>>> +        │ IPU3 IPA                                  │
>>>> +        │                 ┌───────────────────────┐ │
>>>> +        │ ┌───────────┐   │ Algorithms            │ │
>>>> +        │ │IPAContext │   │          ┌─────────┐  │ │
>>>> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
>>>> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
>>>> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
>>>> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
>>>> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
>>>> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
>>>> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
>>>> +        │ │ │       │ │   │    │         │        │ │
>>>> +        │ │ └───────┘ │   │    └─────────┘        │ │
>>>> +        │ └───────────┘   └───────────────────────┘ │
>>>> +        └───────────────────────────────────────────┘
>>>> +         SC: IPASessionConfiguration
>>>> +         FC: IPAFrameContext(s)
>>>> +
>>>> +The IPA instance is constructed and initialised at the point a Camera is
>>>> +created by the IPU3 pipeline handler. The initialisation call provides
>>
>> "Camera" is capitalised as a libcamera noun. Shouldn't Pipeline Handler
>> also be capitalised as it's a libcamera named thing?
>>
>> (From your comment above about s/Pipeline/pipeline/. I wonder if instead
>> it should be capitalised there and here too.
> 
> In Doxygen capitalization generates links to classes, so we have to be
> careful about writing "Camera" for instance. When it comes to pipeline
> handlers, "Pipeline Handler" won't generate a link, and I wouldn't write
> "PipelineHandler" here anyway.
> 
> I'm sure there are conventions for all this in documentation. I usually
> capitalize words that have special meanings the first time they're
> introduced only, but I don't know if that's a good thing to do.
> 
>>>> +details about which Camera Sensor is being used, and the controls that
>>>
>>> s/Camera Sensor/camera sensor/ ?
>>
>> Now that one is awkward, as it's both a libcamera noun, and a real world
>> object.
>>
>> In this context, I believe camera sensor is correct, (as you've
>> suggested) as it's the external world camera sensor details.
>>
>>>> +it has available, along with their defaults and ranges.
>>>
>>> s/defaults/default values/
>>>
>>>> +
>>>> +Buffers
>>>> +~~~~~~~
>>>> +
>>>> +The IPA will have Parameter and Statistics buffers shared with it from
>>>> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
>>>> +before the ``start()`` operation occurs.
>>>> +
>>>> +The IPA will map the buffers into CPU accessible memory, associated with
>>>
>>> s/CPU accessible/CPU-accessible/
>>>
>>>> +a buffer ID, and further events for sending or receiving parameter and
>>>> +statistics buffers will reference the ID to avoid expensive memory
>>>> +mapping operations, or the passing of file handles during streaming.
>>>> +
>>>> +After the ``stop()`` operation occurs, these buffers will be unmapped,
>>>> +and no further access to the buffers is permitted.
>>>> +
>>>> +Context
>>>> +~~~~~~~
>>>> +
>>>> +Algorithm calls will always have the ``IPAContext`` available to them.
>>>> +This context comprises of two parts:
>>>> +
>>>> +-  IPA Session Configuration
>>>> +-  IPA Frame Context
>>>> +
>>>> +The session configuration structure ``IPASessionConfiguration``
>>>> +represents constant parameters determined from either before streaming
>>>> +commenced during ``configure()`` or updated parameters when processing
>>>> +controls.
>>>
>>> I'm not sure about the last part, I thought the session configuration
>>> was meant to be constant after configure() ?
>>
>> I would expect updates from controls to update the 'active session
>> configuration'.
>>
>> Of course some of that session configuration will be constant, and not
>> change (I don't think we'll change the resolution for instance) - but I
>> expect that some parameters will be set such as whether AE is enabled or
>> locked.
>>
>> That to me is a session configuration parameter. But it can change at
>> runtime - and while it remains the same, every new frame will have that
>> configuration applied.
>>
>>
>> Whereas the FrameContexts are about the context that is handled for each
>> frame, and any context shared between algorithms , and is much more dynamic.
> 
> This makes me think that we may want to introduce a third structure to
> group state that isn't per-frame. I recall designing
> IPASessionConfiguration as something that will not change during the
> session. Maybe I recall it wrong, but I like the idea of keeping it

But ... state that isn't per frame is the ... session ? and thus that
state is stored as the SessionConfiguration ...

At the moment I anticipate we'll need to store things like control
state. I.e. is AE enabled or manual for instance. And if manual - what
is the current fixed manual value.

I expected that would be the session configuration, as in it's the
current active settings ... but maybe a third group would come up, I
think we'll find out when we actually add the controls.


> static. Separating configuration that is set at the beginning and
> doesn't change from controls that are updated dynamically seems a good
> design principle to me.


I'll delete the last part:
"or updated parameters when processing controls."


We can revisit this when we add controls.



>>>> +The IPA Frame Context provides the storage for algorithms for a single
>>>> +frame operation.
>>>> +
>>>> +The ``IPAFrameContext`` structure may be extended to an array, list, or
>>>> +queue to store historical state for each frame, allowing algorithms to
>>>> +obtain and reference results of calculations which are deeply pipelined.
>>>> +This may only be done if an algorithm needs to know the context that was
>>>> +applied at the frame the statistics were produced for, rather than the
>>>> +previous or current frame.
>>>> +
>>>> +Presently there is a single ``IPAFrameContext`` without historical data,
>>>> +and the context is maintained and updated through successive processing
>>>> +operations.
>>>> +
>>>> +Operating
>>>> +~~~~~~~~~
>>>> +
>>>> +There are three main parts of interactions with the algorithms for the
>>>> +IPU3 IPA to operate when running:
>>>> +
>>>> +-  configure()
>>>> +-  processEvent(``EventFillParams``)
>>>> +-  processEvent(``EventStatReady``)
>>>> +
>>>> +The configuration phase allows the pipeline-handler to inform the IPA of
>>>> +the current stream configurations, which is then passed into each
>>>> +algorithm to provide an opportunity to identify and track state of the
>>>> +hardware, such as image size or ImgU pipeline configurations.
>>>> +
>>>> +Pre-frame preparation
>>>> +~~~~~~~~~~~~~~~~~~~~~
>>>> +
>>>> +When configured, the IPA is notified by the pipeline handler of the
>>>> +Camera ``start()`` event, after which incoming requests will be queued
>>>> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
>>>> +be populated for the ImgU. This is passed directly to each algorithm
>>>> +through the ``prepare()`` call allowing the ISP configuration to be
>>>> +updated for the needs of each component that the algorithm is
>>>> +responsible for.
>>>> +
>>>> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
>>>> +structure that it modifies, and it should take care to ensure that any
>>>> +structure set by a use flag is fully initialised to suitable values.
>>>> +
>>>> +The parameter buffer is returned to the pipeline handler through the
>>>> +``ActionParamFilled`` event, and from there queued to the ImgU along
>>>> +with a raw frame captured with the CIO2.
>>>
>>> This reminds me that we should turn the operations (actions and events)
>>> into separate functions in the IPAIPU3Interface. Out of scope for this
>>> patch of course.
>>
>> You mean directly defining them in Mojo instead of passing them through
>> processEvent with an event flag?
> 
> Yes.

That sounds like a good thing to do, and may indeed make the interfaces
clearer I think.

We may have to further document the expectations of those calls for both
directions. I.e. EventFillParams (or it's new direct call equivalent) is
expected to call or initiate the calling of a return of the buffer
through ActionParamFilled (or it's renamed direct call equivalent).



>> I think that would probably be better indeed, I don't see much added
>> value to having processEvent() ... (with the small exception that it
>> partially shows which functions are called while streaming, but we could
>> also document the async/streaming expectations anyway).
> 
> processEvent() dates back from when we tried to have a single generic
> API for IPAs.

And we clearly have per - pipeline handler IPA's now.


>>>> +
>>>> +Post-frame completion
>>>> +~~~~~~~~~~~~~~~~~~~~~
>>>> +
>>>> +When the capture of an image is completed, and successfully processed
>>>> +through the ImgU, the generated statistics buffer
>>>> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
>>>> +``EventStatReady`` event. This provides the IPA with an opportunity to
>>>> +examine the results of the ISP and run the calculations required by each
>>>> +algorithm on the new data. The algorithms may require context from the
>>>> +operations of other algorithms, for example, the AWB might choose to use
>>>> +a scene brightness determined by the AGC. It is important that the
>>>> +algorithms are ordered to ensure that required results are determined
>>>> +before they are needed.
>>>> +
>>>> +The ordering of the algorithm processing is determined by their
>>>> +placement in the ``IPU3::algorithms_`` ordered list.
>>>
>>> I expect interesting research to happen in this area :-)
>>
>> Yes, well it's too early to state any more than that currently I think.
>
Jean-Michel Hautbois Sept. 16, 2021, 12:46 p.m. UTC | #7
Hi Kieran,

On 16/09/2021 14:37, Kieran Bingham wrote:
> Hi Laurent,
> 
> +cc JM for discussion point below...
> 
> On 14/09/2021 11:34, Laurent Pinchart wrote:
>> Hi Kieran,
>>
>> On Tue, Sep 14, 2021 at 11:13:39AM +0100, Kieran Bingham wrote:
>>> On 14/09/2021 06:37, Laurent Pinchart wrote:
>>>> On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
>>>>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
>>>>
>>>> s/3a/3A/
>>>>
>>>> Technically speaking we're only implementing 2 out of 3 at this point
>>>> :-)
>>>>
>>>>> Provide an overview document to describe it's operations, and provide a
>>>>
>>>> s/it's/its/
>>>>
>>>>> block diagram to help visualise how the components are put together to
>>>>> assist any new developers exploring the code.
>>>>>
>>>>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>>>>>
>>>>> ---
>>>>> This is really just an RFC: Particularly:
>>>>>
>>>>>  - Should this content actually live in Documentation?
>>>>>    or is it more suited to a page under src/ipa/ipu3/...
>>>>
>>>> Good question. We tend to document things in the code where possible.
>>>> Would there be any drawback from doing so in this case ?
>>>
>>> I don't think so - I think the question mostly comes from
>>> 'discoverability' of the documentation.
>>>
>>>
>>> Which is also a bigger question that we need to make it easier for
>>> readers to find the documentation that they desire.
>>>
>>> I've come across a great talk on this recently:
>>>   - What nobody tells you about documentation - Daniele Procida
>>>   - PyConAu
>>>   - https://2017.pycon-au.org/schedule/presentation/15/
>>>   - https://youtu.be/t4vKPhjcMZg
>>>
>>> And that's been spun into a website to document how to document ;-)
>>>   - https://diataxis.fr/
>>
>> Thanks, I'll have a look.
>>
>>> So, I think this could indeed live under src/ipa/ipu3 but it needs to be
>>> clear that this should be the first thing to read when looking into the IPA.
>>
>> Agreed. I could envision moving it to Documentation/ later, as part of a
>> longer document that uses the IPU3 IPA to teach how to write algorithms,
>> but at that point we'll start writing a book about algorithm design :-)
> 
> For now, for the v2 I will keep this as an RST document, but place it in
> src/ipa/ipu3/.
> 
> 
>>>>>  - This is not complete to document all functionality
>>>>>    but gives an overview of the current implementation
>>>>
>>>> Sounds good to me.
>>>>
>>>>>  - Is there anything anyone would find helpful for me to
>>>>>    extend/write about on this?
>>>>>
>>>>>  - It could likely get merged in with some of the work
>>>>>    that Jean-Michel is doing, and might duplicate some of
>>>>>    the parts he is documenting, but this was an aim to
>>>>>    write a single page overview as a getting started with
>>>>>    the IPU3 IPA ....
>>>>
>>>> I think it's *very* useful to have an overview.
> 
> I'll keep this separate for now then.
> 
>>>>
>>>>>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
>>>>>  1 file changed, 144 insertions(+)
>>>>>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
>>>>>
>>>>> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
>>>>> new file mode 100644
>>>>> index 000000000000..a1d0f13fbb00
>>>>> --- /dev/null
>>>>> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
>>>>> @@ -0,0 +1,144 @@
>>>>> +IPU3 IPA Architecture Design and Overview
>>>>> +=========================================
>>>>> +
>>>>> +The IPU3 IPA is built as a modular and extensible framework with an
>>>>> +upper layer to manage the interactions with the pipeline handler, and
>>>>> +the image processing algorithms split to compartmentalise the processing
>>>>> +required for each accellerator cluster provided by the ImgU ISP.
>>>>
>>>> I've commented on the expression "accelerator cluster" when reviewing
>>>> Jean-Michel's series. I've then seen that it's mentioned in the IPU3
>>>> kernel documentation in one place, I suppose that's where it comes from.
>>>> From what I understand, the expression is used to refer to a processing
>>>> block (or sub-block) that is fully implemented in hardware, as opposed
>>>> as being implemented partly or fully in the ImgU firmware. I don't think
>>>> that's relevant from an IPA point of view, it's an internal
>>>> implementation detail of the ImgU. I'd thus use "processing block" or a
>>>> similar term instead.
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/media/ipu3/include/uapi/intel-ipu3.h#n2428
>>>
>>> Describes it as:
>>>
>>> """
>>> ACC refers to the HW cluster containing all Fixed Functions (FFs). Each
>>> FF implements a specific algorithm.
>>> """
>>>
>>> So in fact, yes - the term Accelerator Cluster is a 'group' of all the
>>> hardware accelerator functions, rather than each individual one.
>>>
>>> Each individual function (fixed function) can indeed be better named as
>>> a processing block.
>>>
>>>
>>> Lets make this more like
>>>
>>> """
>>> The IPU3 IPA is built as a modular and extensible framework with an
>>> upper layer to manage the interactions with the pipeline handler, and
>>> the image processing algorithms split to compartmentalise the processing
>>> required for each processing block, making use of the fixed-function
>>> accelerators provided by the ImgU ISP.
>>> """
>>
>> Sounds good to me. Let's standardize on the same "processing block"
>> terminology in the IPU3 IPA documentation series too.
> 
> Ok,
> 
> +Cc: Jean Michel to highlight this discussion point.
> 

This terminology is indeed coming from the kernel. I agree with the
"processing block" term ;-).

>>
>>>>> +
>>>>> +The core IPU3 class is responsible for initialisation and construction
>>>>> +of the algorithm components, processing controls set by the requests
>>>>> +from applications, and managing events from the Pipeline handler.
>>>>
>>>> s/Pipeline/pipeline/
>>>>
>>>>> +
>>>>> +::
>>>>> +
>>>>> +        ┌───────────────────────────────────────────┐
>>>>> +        │      IPU3 Pipeline Handler                │
>>>>> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
>>>>> +        │   │        │    │        │    │        │  │
>>>>> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
>>>>> +        │   │        │    │        │    │        │  │
>>>>> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
>>>>> +        │                                 │P   │    │    S: Statistics Buffer
>>>>> +        │                                 │    │S   │
>>>>> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
>>>>> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
>>>>> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
>>>>> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
>>>>> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
>>>>> +        │ IPU3 IPA                                  │
>>>>> +        │                 ┌───────────────────────┐ │
>>>>> +        │ ┌───────────┐   │ Algorithms            │ │
>>>>> +        │ │IPAContext │   │          ┌─────────┐  │ │
>>>>> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
>>>>> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
>>>>> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
>>>>> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
>>>>> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
>>>>> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
>>>>> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
>>>>> +        │ │ │       │ │   │    │         │        │ │
>>>>> +        │ │ └───────┘ │   │    └─────────┘        │ │
>>>>> +        │ └───────────┘   └───────────────────────┘ │
>>>>> +        └───────────────────────────────────────────┘
>>>>> +         SC: IPASessionConfiguration
>>>>> +         FC: IPAFrameContext(s)
>>>>> +
>>>>> +The IPA instance is constructed and initialised at the point a Camera is
>>>>> +created by the IPU3 pipeline handler. The initialisation call provides
>>>
>>> "Camera" is capitalised as a libcamera noun. Shouldn't Pipeline Handler
>>> also be capitalised as it's a libcamera named thing?
>>>
>>> (From your comment above about s/Pipeline/pipeline/. I wonder if instead
>>> it should be capitalised there and here too.
>>
>> In Doxygen capitalization generates links to classes, so we have to be
>> careful about writing "Camera" for instance. When it comes to pipeline
>> handlers, "Pipeline Handler" won't generate a link, and I wouldn't write
>> "PipelineHandler" here anyway.
>>
>> I'm sure there are conventions for all this in documentation. I usually
>> capitalize words that have special meanings the first time they're
>> introduced only, but I don't know if that's a good thing to do.
>>
>>>>> +details about which Camera Sensor is being used, and the controls that
>>>>
>>>> s/Camera Sensor/camera sensor/ ?
>>>
>>> Now that one is awkward, as it's both a libcamera noun, and a real world
>>> object.
>>>
>>> In this context, I believe camera sensor is correct, (as you've
>>> suggested) as it's the external world camera sensor details.
>>>
>>>>> +it has available, along with their defaults and ranges.
>>>>
>>>> s/defaults/default values/
>>>>
>>>>> +
>>>>> +Buffers
>>>>> +~~~~~~~
>>>>> +
>>>>> +The IPA will have Parameter and Statistics buffers shared with it from
>>>>> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
>>>>> +before the ``start()`` operation occurs.
>>>>> +
>>>>> +The IPA will map the buffers into CPU accessible memory, associated with
>>>>
>>>> s/CPU accessible/CPU-accessible/
>>>>
>>>>> +a buffer ID, and further events for sending or receiving parameter and
>>>>> +statistics buffers will reference the ID to avoid expensive memory
>>>>> +mapping operations, or the passing of file handles during streaming.
>>>>> +
>>>>> +After the ``stop()`` operation occurs, these buffers will be unmapped,
>>>>> +and no further access to the buffers is permitted.
>>>>> +
>>>>> +Context
>>>>> +~~~~~~~
>>>>> +
>>>>> +Algorithm calls will always have the ``IPAContext`` available to them.
>>>>> +This context comprises of two parts:
>>>>> +
>>>>> +-  IPA Session Configuration
>>>>> +-  IPA Frame Context
>>>>> +
>>>>> +The session configuration structure ``IPASessionConfiguration``
>>>>> +represents constant parameters determined from either before streaming
>>>>> +commenced during ``configure()`` or updated parameters when processing
>>>>> +controls.
>>>>
>>>> I'm not sure about the last part, I thought the session configuration
>>>> was meant to be constant after configure() ?
>>>
>>> I would expect updates from controls to update the 'active session
>>> configuration'.
>>>
>>> Of course some of that session configuration will be constant, and not
>>> change (I don't think we'll change the resolution for instance) - but I
>>> expect that some parameters will be set such as whether AE is enabled or
>>> locked.
>>>
>>> That to me is a session configuration parameter. But it can change at
>>> runtime - and while it remains the same, every new frame will have that
>>> configuration applied.
>>>
>>>
>>> Whereas the FrameContexts are about the context that is handled for each
>>> frame, and any context shared between algorithms , and is much more dynamic.
>>
>> This makes me think that we may want to introduce a third structure to
>> group state that isn't per-frame. I recall designing
>> IPASessionConfiguration as something that will not change during the
>> session. Maybe I recall it wrong, but I like the idea of keeping it
> 
> But ... state that isn't per frame is the ... session ? and thus that
> state is stored as the SessionConfiguration ...
> 
> At the moment I anticipate we'll need to store things like control
> state. I.e. is AE enabled or manual for instance. And if manual - what
> is the current fixed manual value.
> 
> I expected that would be the session configuration, as in it's the
> current active settings ... but maybe a third group would come up, I
> think we'll find out when we actually add the controls.
> 
> 
>> static. Separating configuration that is set at the beginning and
>> doesn't change from controls that are updated dynamically seems a good
>> design principle to me.
> 
> 
> I'll delete the last part:
> "or updated parameters when processing controls."
> 
> 
> We can revisit this when we add controls.
> 
> 
> 
>>>>> +The IPA Frame Context provides the storage for algorithms for a single
>>>>> +frame operation.
>>>>> +
>>>>> +The ``IPAFrameContext`` structure may be extended to an array, list, or
>>>>> +queue to store historical state for each frame, allowing algorithms to
>>>>> +obtain and reference results of calculations which are deeply pipelined.
>>>>> +This may only be done if an algorithm needs to know the context that was
>>>>> +applied at the frame the statistics were produced for, rather than the
>>>>> +previous or current frame.
>>>>> +
>>>>> +Presently there is a single ``IPAFrameContext`` without historical data,
>>>>> +and the context is maintained and updated through successive processing
>>>>> +operations.
>>>>> +
>>>>> +Operating
>>>>> +~~~~~~~~~
>>>>> +
>>>>> +There are three main parts of interactions with the algorithms for the
>>>>> +IPU3 IPA to operate when running:
>>>>> +
>>>>> +-  configure()
>>>>> +-  processEvent(``EventFillParams``)
>>>>> +-  processEvent(``EventStatReady``)
>>>>> +
>>>>> +The configuration phase allows the pipeline-handler to inform the IPA of
>>>>> +the current stream configurations, which is then passed into each
>>>>> +algorithm to provide an opportunity to identify and track state of the
>>>>> +hardware, such as image size or ImgU pipeline configurations.
>>>>> +
>>>>> +Pre-frame preparation
>>>>> +~~~~~~~~~~~~~~~~~~~~~
>>>>> +
>>>>> +When configured, the IPA is notified by the pipeline handler of the
>>>>> +Camera ``start()`` event, after which incoming requests will be queued
>>>>> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
>>>>> +be populated for the ImgU. This is passed directly to each algorithm
>>>>> +through the ``prepare()`` call allowing the ISP configuration to be
>>>>> +updated for the needs of each component that the algorithm is
>>>>> +responsible for.
>>>>> +
>>>>> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
>>>>> +structure that it modifies, and it should take care to ensure that any
>>>>> +structure set by a use flag is fully initialised to suitable values.
>>>>> +
>>>>> +The parameter buffer is returned to the pipeline handler through the
>>>>> +``ActionParamFilled`` event, and from there queued to the ImgU along
>>>>> +with a raw frame captured with the CIO2.
>>>>
>>>> This reminds me that we should turn the operations (actions and events)
>>>> into separate functions in the IPAIPU3Interface. Out of scope for this
>>>> patch of course.
>>>
>>> You mean directly defining them in Mojo instead of passing them through
>>> processEvent with an event flag?
>>
>> Yes.
> 
> That sounds like a good thing to do, and may indeed make the interfaces
> clearer I think.
> 
> We may have to further document the expectations of those calls for both
> directions. I.e. EventFillParams (or it's new direct call equivalent) is
> expected to call or initiate the calling of a return of the buffer
> through ActionParamFilled (or it's renamed direct call equivalent).
> 
> 
> 
>>> I think that would probably be better indeed, I don't see much added
>>> value to having processEvent() ... (with the small exception that it
>>> partially shows which functions are called while streaming, but we could
>>> also document the async/streaming expectations anyway).
>>
>> processEvent() dates back from when we tried to have a single generic
>> API for IPAs.
> 
> And we clearly have per - pipeline handler IPA's now.
> 
> 
>>>>> +
>>>>> +Post-frame completion
>>>>> +~~~~~~~~~~~~~~~~~~~~~
>>>>> +
>>>>> +When the capture of an image is completed, and successfully processed
>>>>> +through the ImgU, the generated statistics buffer
>>>>> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
>>>>> +``EventStatReady`` event. This provides the IPA with an opportunity to
>>>>> +examine the results of the ISP and run the calculations required by each
>>>>> +algorithm on the new data. The algorithms may require context from the
>>>>> +operations of other algorithms, for example, the AWB might choose to use
>>>>> +a scene brightness determined by the AGC. It is important that the
>>>>> +algorithms are ordered to ensure that required results are determined
>>>>> +before they are needed.
>>>>> +
>>>>> +The ordering of the algorithm processing is determined by their
>>>>> +placement in the ``IPU3::algorithms_`` ordered list.
>>>>
>>>> I expect interesting research to happen in this area :-)
>>>
>>> Yes, well it's too early to state any more than that currently I think.
>>
Laurent Pinchart Sept. 16, 2021, 1:41 p.m. UTC | #8
Hi Kieran,

On Thu, Sep 16, 2021 at 01:36:43PM +0100, Kieran Bingham wrote:
> On 13/09/2021 15:29, Umang Jain wrote:
> > On 9/13/21 3:00 AM, Kieran Bingham wrote:
> >> The IPU3 IPA implements the basic 3a using the ImgU ISP.
> > 3a or 3A?
> >>
> >> Provide an overview document to describe it's operations, and provide a
> >> block diagram to help visualise how the components are put together to
> >> assist any new developers exploring the code.
> >>
> >> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> >>
> >> ---
> >> This is really just an RFC: Particularly:
> >>
> >>   - Should this content actually live in Documentation?
> >>     or is it more suited to a page under src/ipa/ipu3/...
> > 
> > 
> > I would probably keep it under Documentation/ but there is already a IPA
> > writing guide. So maybe we need a new dir, to document overviews of
> > existing IPAs?
> 
> Indeed, the existing document really describes the IPA IPC interfaces, I
> wonder if it should be renamed ...
> 
> The aim of this is to provide some overview on how the IPU3 IPA operates
> from a higher level.
> 
> Along with the discussions with Laurent, I'm going to move this to
> src/ipa/ipu3/ipu3-ipa-design-guide.rst for now.

I had assumed it would be moved to ipu3.cpp. Seems like explicit
communication would work better than assumptions.

I'm not sure I want to set a precedent of storing .rst documentation
somewhere else than in the Documentation/ directory. If you want to keep
it as .rst, I think it should stay there, and get compiled to catch
errors. My preference would be ipu3.cpp though.

> That will keep the overview document in the actual component for now.
> 
> It may be that we write further 'generic' IPA design guides, which may
> take inspiration from this, and could live in Documentation, but I think
> I/we need to spend some dedicated cycles on how we clean up our
> documentation anyway, so I'd rather keep this file close to it's
> implementation for now.
> 
> >>
> >>   - This is not complete to document all functionality
> >>     but gives an overview of the current implementation
> >>
> >>   - Is there anything anyone would find helpful for me to
> >>     extend/write about on this?
> >>
> >>   - It could likely get merged in with some of the work
> >>     that Jean-Michel is doing, and might duplicate some of
> >>     the parts he is documenting, but this was an aim to
> >>     write a single page overview as a getting started with
> >>     the IPU3 IPA ....
> >>
> >>
> >>   .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
> >>   1 file changed, 144 insertions(+)
> >>   create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
> >>
> >> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst
> >> b/Documentation/guides/ipu3-ipa-design-guide.rst
> >> new file mode 100644
> >> index 000000000000..a1d0f13fbb00
> >> --- /dev/null
> >> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
> >> @@ -0,0 +1,144 @@
> >> +IPU3 IPA Architecture Design and Overview
> >> +=========================================
> >> +
> >> +The IPU3 IPA is built as a modular and extensible framework with an
> >> +upper layer to manage the interactions with the pipeline handler, and
> >> +the image processing algorithms split to compartmentalise the processing
> >> +required for each accellerator cluster provided by the ImgU ISP.
> >> +
> >> +The core IPU3 class is responsible for initialisation and construction
> >
> > s/IPU3/IPAIPU3 maybe?
> >
> >> +of the algorithm components, processing controls set by the requests
> >> +from applications, and managing events from the Pipeline handler.
> >> +
> >> +::
> >> +
> >> +        ┌───────────────────────────────────────────┐
> >> +        │      IPU3 Pipeline Handler                │
> >> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
> >> +        │   │        │    │        │    │        │  │
> >> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
> >> +        │   │        │    │        │    │        │  │
> >> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
> >> +        │                                 │P   │    │    S: Statistics Buffer
> >> +        │                                 │    │S   │
> >> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
> >> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
> >> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
> >> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
> >> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
> >> +        │ IPU3 IPA                                  │
> >> +        │                 ┌───────────────────────┐ │
> >> +        │ ┌───────────┐   │ Algorithms            │ │
> >> +        │ │IPAContext │   │          ┌─────────┐  │ │
> >> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
> >> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
> >> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
> >> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
> >> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
> >> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
> >> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
> >> +        │ │ │       │ │   │    │         │        │ │
> >> +        │ │ └───────┘ │   │    └─────────┘        │ │
> >> +        │ └───────────┘   └───────────────────────┘ │
> >> +        └───────────────────────────────────────────┘
> >> +         SC: IPASessionConfiguration
> >> +         FC: IPAFrameContext(s)
> >> +
> >> +The IPA instance is constructed and initialised at the point a Camera is
> >> +created by the IPU3 pipeline handler. The initialisation call provides
> >> +details about which Camera Sensor is being used, and the controls that
> >> +it has available, along with their defaults and ranges.
> > 
> > Brief line about how sensor controls are used in IPA will be useful?
> > This will explain "why" sensor controls are passed to IPA and what IPA
> > does with it (for e.g gather necessary info to run algorithms).
> 
> I have wondered about putting a CameraSensorHelper box in the IPA too,
> but I thought that would be too cluttered, and I don't think it adds
> much to convey the context of the running IPA (at least for the moment) ...
> 
> But it might certainly be worth explaining about how controls are also
> sent to the pipeline handler. I'll see if I can add a section:
> 
> Controls
> ~~~~~~~~
> 
> The AutoExposure and AutoGain (AGC) algorithm differs slightly from the
> others as it requires operating directly on the sensor, as opposed to
> through the ImgU ISP. To support this, there is a dedicated action
> `ActionSetSensorControls` to allow the IPA to request controls to be set
> on the camera sensor through the pipeline handler.
> 
> >> +
> >> +Buffers
> >> +~~~~~~~
> >> +
> >> +The IPA will have Parameter and Statistics buffers shared with it from
> >> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
> >> +before the ``start()`` operation occurs.
> >> +
> >> +The IPA will map the buffers into CPU accessible memory, associated with
> >> +a buffer ID, and further events for sending or receiving parameter and
> >> +statistics buffers will reference the ID to avoid expensive memory
> >> +mapping operations, or the passing of file handles during streaming.
> >> +
> >> +After the ``stop()`` operation occurs, these buffers will be unmapped,
> >> +and no further access to the buffers is permitted.
> >> +
> >> +Context
> >> +~~~~~~~
> >> +
> >> +Algorithm calls will always have the ``IPAContext`` available to them.
> >> +This context comprises of two parts:
> >> +
> >> +-  IPA Session Configuration
> >> +-  IPA Frame Context
> >> +
> >> +The session configuration structure ``IPASessionConfiguration``
> >> +represents constant parameters determined from either before streaming
> >> +commenced during ``configure()`` or updated parameters when processing
> >> +controls.
> >> +
> >> +The IPA Frame Context provides the storage for algorithms for a single
> >> +frame operation.
> >> +
> >> +The ``IPAFrameContext`` structure may be extended to an array, list, or
> >> +queue to store historical state for each frame, allowing algorithms to
> >> +obtain and reference results of calculations which are deeply pipelined.
> >> +This may only be done if an algorithm needs to know the context that was
> >> +applied at the frame the statistics were produced for, rather than the
> >> +previous or current frame.
> >> +
> >> +Presently there is a single ``IPAFrameContext`` without historical data,
> >> +and the context is maintained and updated through successive processing
> >> +operations.
> >> +
> >> +Operating
> >> +~~~~~~~~~
> >> +
> >> +There are three main parts of interactions with the algorithms for the
> >> +IPU3 IPA to operate when running:
> >> +
> >> +-  configure()
> >> +-  processEvent(``EventFillParams``)
> > 
> > 
> > One liner explanation for EventFillParams is missing. It should be
> > present in Pre-frame preparation section, no? Can you please include a
> > brief line analogous to "EventStatReady" event in the Post-frame
> > completion section?
> 
> Ah yes, indeed they should both be covered in the same way, so that's
> missing thanks.
> 
> I've updated it.
> 
> > Other than that, looks fine to me:
> > 
> > Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
> > 
> >> +-  processEvent(``EventStatReady``)
> >> +
> >> +The configuration phase allows the pipeline-handler to inform the IPA of
> >> +the current stream configurations, which is then passed into each
> >> +algorithm to provide an opportunity to identify and track state of the
> >> +hardware, such as image size or ImgU pipeline configurations.
> >> +
> >> +Pre-frame preparation
> >> +~~~~~~~~~~~~~~~~~~~~~
> >> +
> >> +When configured, the IPA is notified by the pipeline handler of the
> >> +Camera ``start()`` event, after which incoming requests will be queued
> >> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
> >> +be populated for the ImgU. This is passed directly to each algorithm
> >> +through the ``prepare()`` call allowing the ISP configuration to be
> >> +updated for the needs of each component that the algorithm is
> >> +responsible for.
> >> +
> >> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
> >> +structure that it modifies, and it should take care to ensure that any
> >> +structure set by a use flag is fully initialised to suitable values.
> >> +
> >> +The parameter buffer is returned to the pipeline handler through the
> >> +``ActionParamFilled`` event, and from there queued to the ImgU along
> >> +with a raw frame captured with the CIO2.
> >> +
> >> +Post-frame completion
> >> +~~~~~~~~~~~~~~~~~~~~~
> >> +
> >> +When the capture of an image is completed, and successfully processed
> >> +through the ImgU, the generated statistics buffer
> >> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
> >> +``EventStatReady`` event. This provides the IPA with an opportunity to
> >> +examine the results of the ISP and run the calculations required by each
> >> +algorithm on the new data. The algorithms may require context from the
> >> +operations of other algorithms, for example, the AWB might choose to use
> >> +a scene brightness determined by the AGC. It is important that the
> >> +algorithms are ordered to ensure that required results are determined
> >> +before they are needed.
> >> +
> >> +The ordering of the algorithm processing is determined by their
> >> +placement in the ``IPU3::algorithms_`` ordered list.
Laurent Pinchart Sept. 16, 2021, 1:45 p.m. UTC | #9
Hi Kieran,

On Thu, Sep 16, 2021 at 01:37:28PM +0100, Kieran Bingham wrote:
> On 14/09/2021 11:34, Laurent Pinchart wrote:
> > On Tue, Sep 14, 2021 at 11:13:39AM +0100, Kieran Bingham wrote:
> >> On 14/09/2021 06:37, Laurent Pinchart wrote:
> >>> On Sun, Sep 12, 2021 at 10:30:17PM +0100, Kieran Bingham wrote:
> >>>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
> >>>
> >>> s/3a/3A/
> >>>
> >>> Technically speaking we're only implementing 2 out of 3 at this point
> >>> :-)
> >>>
> >>>> Provide an overview document to describe it's operations, and provide a
> >>>
> >>> s/it's/its/
> >>>
> >>>> block diagram to help visualise how the components are put together to
> >>>> assist any new developers exploring the code.
> >>>>
> >>>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> >>>>
> >>>> ---
> >>>> This is really just an RFC: Particularly:
> >>>>
> >>>>  - Should this content actually live in Documentation?
> >>>>    or is it more suited to a page under src/ipa/ipu3/...
> >>>
> >>> Good question. We tend to document things in the code where possible.
> >>> Would there be any drawback from doing so in this case ?
> >>
> >> I don't think so - I think the question mostly comes from
> >> 'discoverability' of the documentation.
> >>
> >>
> >> Which is also a bigger question that we need to make it easier for
> >> readers to find the documentation that they desire.
> >>
> >> I've come across a great talk on this recently:
> >>   - What nobody tells you about documentation - Daniele Procida
> >>   - PyConAu
> >>   - https://2017.pycon-au.org/schedule/presentation/15/
> >>   - https://youtu.be/t4vKPhjcMZg
> >>
> >> And that's been spun into a website to document how to document ;-)
> >>   - https://diataxis.fr/
> > 
> > Thanks, I'll have a look.
> > 
> >> So, I think this could indeed live under src/ipa/ipu3 but it needs to be
> >> clear that this should be the first thing to read when looking into the IPA.
> > 
> > Agreed. I could envision moving it to Documentation/ later, as part of a
> > longer document that uses the IPU3 IPA to teach how to write algorithms,
> > but at that point we'll start writing a book about algorithm design :-)
> 
> For now, for the v2 I will keep this as an RST document, but place it in
> src/ipa/ipu3/.
> 
> 
> >>>>  - This is not complete to document all functionality
> >>>>    but gives an overview of the current implementation
> >>>
> >>> Sounds good to me.
> >>>
> >>>>  - Is there anything anyone would find helpful for me to
> >>>>    extend/write about on this?
> >>>>
> >>>>  - It could likely get merged in with some of the work
> >>>>    that Jean-Michel is doing, and might duplicate some of
> >>>>    the parts he is documenting, but this was an aim to
> >>>>    write a single page overview as a getting started with
> >>>>    the IPU3 IPA ....
> >>>
> >>> I think it's *very* useful to have an overview.
> 
> I'll keep this separate for now then.
> 
> >>>
> >>>>  .../guides/ipu3-ipa-design-guide.rst          | 144 ++++++++++++++++++
> >>>>  1 file changed, 144 insertions(+)
> >>>>  create mode 100644 Documentation/guides/ipu3-ipa-design-guide.rst
> >>>>
> >>>> diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
> >>>> new file mode 100644
> >>>> index 000000000000..a1d0f13fbb00
> >>>> --- /dev/null
> >>>> +++ b/Documentation/guides/ipu3-ipa-design-guide.rst
> >>>> @@ -0,0 +1,144 @@
> >>>> +IPU3 IPA Architecture Design and Overview
> >>>> +=========================================
> >>>> +
> >>>> +The IPU3 IPA is built as a modular and extensible framework with an
> >>>> +upper layer to manage the interactions with the pipeline handler, and
> >>>> +the image processing algorithms split to compartmentalise the processing
> >>>> +required for each accellerator cluster provided by the ImgU ISP.
> >>>
> >>> I've commented on the expression "accelerator cluster" when reviewing
> >>> Jean-Michel's series. I've then seen that it's mentioned in the IPU3
> >>> kernel documentation in one place, I suppose that's where it comes from.
> >>> From what I understand, the expression is used to refer to a processing
> >>> block (or sub-block) that is fully implemented in hardware, as opposed
> >>> as being implemented partly or fully in the ImgU firmware. I don't think
> >>> that's relevant from an IPA point of view, it's an internal
> >>> implementation detail of the ImgU. I'd thus use "processing block" or a
> >>> similar term instead.
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/media/ipu3/include/uapi/intel-ipu3.h#n2428
> >>
> >> Describes it as:
> >>
> >> """
> >> ACC refers to the HW cluster containing all Fixed Functions (FFs). Each
> >> FF implements a specific algorithm.
> >> """
> >>
> >> So in fact, yes - the term Accelerator Cluster is a 'group' of all the
> >> hardware accelerator functions, rather than each individual one.
> >>
> >> Each individual function (fixed function) can indeed be better named as
> >> a processing block.
> >>
> >>
> >> Lets make this more like
> >>
> >> """
> >> The IPU3 IPA is built as a modular and extensible framework with an
> >> upper layer to manage the interactions with the pipeline handler, and
> >> the image processing algorithms split to compartmentalise the processing
> >> required for each processing block, making use of the fixed-function
> >> accelerators provided by the ImgU ISP.
> >> """
> > 
> > Sounds good to me. Let's standardize on the same "processing block"
> > terminology in the IPU3 IPA documentation series too.
> 
> Ok,
> 
> +Cc: Jean Michel to highlight this discussion point.
> 
> > 
> >>>> +
> >>>> +The core IPU3 class is responsible for initialisation and construction
> >>>> +of the algorithm components, processing controls set by the requests
> >>>> +from applications, and managing events from the Pipeline handler.
> >>>
> >>> s/Pipeline/pipeline/
> >>>
> >>>> +
> >>>> +::
> >>>> +
> >>>> +        ┌───────────────────────────────────────────┐
> >>>> +        │      IPU3 Pipeline Handler                │
> >>>> +        │   ┌────────┐    ┌────────┐    ┌────────┐  │
> >>>> +        │   │        │    │        │    │        │  │
> >>>> +        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
> >>>> +        │   │        │    │        │    │        │  │
> >>>> +        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
> >>>> +        │                                 │P   │    │    S: Statistics Buffer
> >>>> +        │                                 │    │S   │
> >>>> +        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
> >>>> +          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
> >>>> +          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
> >>>> +          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
> >>>> +        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
> >>>> +        │ IPU3 IPA                                  │
> >>>> +        │                 ┌───────────────────────┐ │
> >>>> +        │ ┌───────────┐   │ Algorithms            │ │
> >>>> +        │ │IPAContext │   │          ┌─────────┐  │ │
> >>>> +        │ │ ┌───────┐ │   │          │ ...     │  │ │
> >>>> +        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
> >>>> +        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
> >>>> +        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
> >>>> +        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
> >>>> +        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
> >>>> +        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
> >>>> +        │ │ │       │ │   │    │         │        │ │
> >>>> +        │ │ └───────┘ │   │    └─────────┘        │ │
> >>>> +        │ └───────────┘   └───────────────────────┘ │
> >>>> +        └───────────────────────────────────────────┘
> >>>> +         SC: IPASessionConfiguration
> >>>> +         FC: IPAFrameContext(s)
> >>>> +
> >>>> +The IPA instance is constructed and initialised at the point a Camera is
> >>>> +created by the IPU3 pipeline handler. The initialisation call provides
> >>
> >> "Camera" is capitalised as a libcamera noun. Shouldn't Pipeline Handler
> >> also be capitalised as it's a libcamera named thing?
> >>
> >> (From your comment above about s/Pipeline/pipeline/. I wonder if instead
> >> it should be capitalised there and here too.
> > 
> > In Doxygen capitalization generates links to classes, so we have to be
> > careful about writing "Camera" for instance. When it comes to pipeline
> > handlers, "Pipeline Handler" won't generate a link, and I wouldn't write
> > "PipelineHandler" here anyway.
> > 
> > I'm sure there are conventions for all this in documentation. I usually
> > capitalize words that have special meanings the first time they're
> > introduced only, but I don't know if that's a good thing to do.
> > 
> >>>> +details about which Camera Sensor is being used, and the controls that
> >>>
> >>> s/Camera Sensor/camera sensor/ ?
> >>
> >> Now that one is awkward, as it's both a libcamera noun, and a real world
> >> object.
> >>
> >> In this context, I believe camera sensor is correct, (as you've
> >> suggested) as it's the external world camera sensor details.
> >>
> >>>> +it has available, along with their defaults and ranges.
> >>>
> >>> s/defaults/default values/
> >>>
> >>>> +
> >>>> +Buffers
> >>>> +~~~~~~~
> >>>> +
> >>>> +The IPA will have Parameter and Statistics buffers shared with it from
> >>>> +the IPU3 Pipeline handler. These buffers will be passed to the IPA
> >>>> +before the ``start()`` operation occurs.
> >>>> +
> >>>> +The IPA will map the buffers into CPU accessible memory, associated with
> >>>
> >>> s/CPU accessible/CPU-accessible/
> >>>
> >>>> +a buffer ID, and further events for sending or receiving parameter and
> >>>> +statistics buffers will reference the ID to avoid expensive memory
> >>>> +mapping operations, or the passing of file handles during streaming.
> >>>> +
> >>>> +After the ``stop()`` operation occurs, these buffers will be unmapped,
> >>>> +and no further access to the buffers is permitted.
> >>>> +
> >>>> +Context
> >>>> +~~~~~~~
> >>>> +
> >>>> +Algorithm calls will always have the ``IPAContext`` available to them.
> >>>> +This context comprises of two parts:
> >>>> +
> >>>> +-  IPA Session Configuration
> >>>> +-  IPA Frame Context
> >>>> +
> >>>> +The session configuration structure ``IPASessionConfiguration``
> >>>> +represents constant parameters determined from either before streaming
> >>>> +commenced during ``configure()`` or updated parameters when processing
> >>>> +controls.
> >>>
> >>> I'm not sure about the last part, I thought the session configuration
> >>> was meant to be constant after configure() ?
> >>
> >> I would expect updates from controls to update the 'active session
> >> configuration'.
> >>
> >> Of course some of that session configuration will be constant, and not
> >> change (I don't think we'll change the resolution for instance) - but I
> >> expect that some parameters will be set such as whether AE is enabled or
> >> locked.
> >>
> >> That to me is a session configuration parameter. But it can change at
> >> runtime - and while it remains the same, every new frame will have that
> >> configuration applied.
> >>
> >>
> >> Whereas the FrameContexts are about the context that is handled for each
> >> frame, and any context shared between algorithms , and is much more dynamic.
> > 
> > This makes me think that we may want to introduce a third structure to
> > group state that isn't per-frame. I recall designing
> > IPASessionConfiguration as something that will not change during the
> > session. Maybe I recall it wrong, but I like the idea of keeping it
> 
> But ... state that isn't per frame is the ... session ? and thus that
> state is stored as the SessionConfiguration ...

I agree that it's the session, but in my mind we'd have a session
configuration that groups the parameters set at configure() time and
that never vary during the session, and a session state for the state.

> At the moment I anticipate we'll need to store things like control
> state. I.e. is AE enabled or manual for instance. And if manual - what
> is the current fixed manual value.
> 
> I expected that would be the session configuration, as in it's the
> current active settings ... but maybe a third group would come up, I
> think we'll find out when we actually add the controls.
> 
> > static. Separating configuration that is set at the beginning and
> > doesn't change from controls that are updated dynamically seems a good
> > design principle to me.
> 
> I'll delete the last part:
> "or updated parameters when processing controls."
> 
> We can revisit this when we add controls.
> 
> >>>> +The IPA Frame Context provides the storage for algorithms for a single
> >>>> +frame operation.
> >>>> +
> >>>> +The ``IPAFrameContext`` structure may be extended to an array, list, or
> >>>> +queue to store historical state for each frame, allowing algorithms to
> >>>> +obtain and reference results of calculations which are deeply pipelined.
> >>>> +This may only be done if an algorithm needs to know the context that was
> >>>> +applied at the frame the statistics were produced for, rather than the
> >>>> +previous or current frame.
> >>>> +
> >>>> +Presently there is a single ``IPAFrameContext`` without historical data,
> >>>> +and the context is maintained and updated through successive processing
> >>>> +operations.
> >>>> +
> >>>> +Operating
> >>>> +~~~~~~~~~
> >>>> +
> >>>> +There are three main parts of interactions with the algorithms for the
> >>>> +IPU3 IPA to operate when running:
> >>>> +
> >>>> +-  configure()
> >>>> +-  processEvent(``EventFillParams``)
> >>>> +-  processEvent(``EventStatReady``)
> >>>> +
> >>>> +The configuration phase allows the pipeline-handler to inform the IPA of
> >>>> +the current stream configurations, which is then passed into each
> >>>> +algorithm to provide an opportunity to identify and track state of the
> >>>> +hardware, such as image size or ImgU pipeline configurations.
> >>>> +
> >>>> +Pre-frame preparation
> >>>> +~~~~~~~~~~~~~~~~~~~~~
> >>>> +
> >>>> +When configured, the IPA is notified by the pipeline handler of the
> >>>> +Camera ``start()`` event, after which incoming requests will be queued
> >>>> +for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
> >>>> +be populated for the ImgU. This is passed directly to each algorithm
> >>>> +through the ``prepare()`` call allowing the ISP configuration to be
> >>>> +updated for the needs of each component that the algorithm is
> >>>> +responsible for.
> >>>> +
> >>>> +The algorithm should set the use flag (``ipu3_uapi_flags``) for any
> >>>> +structure that it modifies, and it should take care to ensure that any
> >>>> +structure set by a use flag is fully initialised to suitable values.
> >>>> +
> >>>> +The parameter buffer is returned to the pipeline handler through the
> >>>> +``ActionParamFilled`` event, and from there queued to the ImgU along
> >>>> +with a raw frame captured with the CIO2.
> >>>
> >>> This reminds me that we should turn the operations (actions and events)
> >>> into separate functions in the IPAIPU3Interface. Out of scope for this
> >>> patch of course.
> >>
> >> You mean directly defining them in Mojo instead of passing them through
> >> processEvent with an event flag?
> > 
> > Yes.
> 
> That sounds like a good thing to do, and may indeed make the interfaces
> clearer I think.
> 
> We may have to further document the expectations of those calls for both
> directions. I.e. EventFillParams (or it's new direct call equivalent) is
> expected to call or initiate the calling of a return of the buffer
> through ActionParamFilled (or it's renamed direct call equivalent).
> 
> >> I think that would probably be better indeed, I don't see much added
> >> value to having processEvent() ... (with the small exception that it
> >> partially shows which functions are called while streaming, but we could
> >> also document the async/streaming expectations anyway).
> > 
> > processEvent() dates back from when we tried to have a single generic
> > API for IPAs.
> 
> And we clearly have per - pipeline handler IPA's now.
> 
> >>>> +
> >>>> +Post-frame completion
> >>>> +~~~~~~~~~~~~~~~~~~~~~
> >>>> +
> >>>> +When the capture of an image is completed, and successfully processed
> >>>> +through the ImgU, the generated statistics buffer
> >>>> +(``ipu3_uapi_stats_3a``) is given to the IPA through the
> >>>> +``EventStatReady`` event. This provides the IPA with an opportunity to
> >>>> +examine the results of the ISP and run the calculations required by each
> >>>> +algorithm on the new data. The algorithms may require context from the
> >>>> +operations of other algorithms, for example, the AWB might choose to use
> >>>> +a scene brightness determined by the AGC. It is important that the
> >>>> +algorithms are ordered to ensure that required results are determined
> >>>> +before they are needed.
> >>>> +
> >>>> +The ordering of the algorithm processing is determined by their
> >>>> +placement in the ``IPU3::algorithms_`` ordered list.
> >>>
> >>> I expect interesting research to happen in this area :-)
> >>
> >> Yes, well it's too early to state any more than that currently I think.
Kieran Bingham Sept. 16, 2021, 3:22 p.m. UTC | #10
On 16/09/2021 14:41, Laurent Pinchart wrote:
> Hi Kieran,
> 
> On Thu, Sep 16, 2021 at 01:36:43PM +0100, Kieran Bingham wrote:
>> On 13/09/2021 15:29, Umang Jain wrote:
>>> On 9/13/21 3:00 AM, Kieran Bingham wrote:
>>>> The IPU3 IPA implements the basic 3a using the ImgU ISP.
>>> 3a or 3A?
>>>>
>>>> Provide an overview document to describe it's operations, and provide a
>>>> block diagram to help visualise how the components are put together to
>>>> assist any new developers exploring the code.
>>>>
>>>> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
>>>>
>>>> ---
>>>> This is really just an RFC: Particularly:
>>>>
>>>>   - Should this content actually live in Documentation?
>>>>     or is it more suited to a page under src/ipa/ipu3/...
>>>
>>>
>>> I would probably keep it under Documentation/ but there is already a IPA
>>> writing guide. So maybe we need a new dir, to document overviews of
>>> existing IPAs?
>>
>> Indeed, the existing document really describes the IPA IPC interfaces, I
>> wonder if it should be renamed ...
>>
>> The aim of this is to provide some overview on how the IPU3 IPA operates
>> from a higher level.
>>
>> Along with the discussions with Laurent, I'm going to move this to
>> src/ipa/ipu3/ipu3-ipa-design-guide.rst for now.
> 
> I had assumed it would be moved to ipu3.cpp. Seems like explicit
> communication would work better than assumptions.

Isn't it always ;-)

> I'm not sure I want to set a precedent of storing .rst documentation
> somewhere else than in the Documentation/ directory. If you want to keep
> it as .rst, I think it should stay there, and get compiled to catch
> errors. My preference would be ipu3.cpp though.

I don't mind it being in the 'doxygen' in ipu3.cpp - but then I /really/
conflict with what Jean-Michel has been doing, while this was aiming to
be separate.

I'll try to tackle that for v3, and incorporate src/ipa/ipu3 into the
Doxygen build ... and try to figure out if I can generate a distinct
'page' for this overview.


>> That will keep the overview document in the actual component for now.

--
Kieran

Patch
diff mbox series

diff --git a/Documentation/guides/ipu3-ipa-design-guide.rst b/Documentation/guides/ipu3-ipa-design-guide.rst
new file mode 100644
index 000000000000..a1d0f13fbb00
--- /dev/null
+++ b/Documentation/guides/ipu3-ipa-design-guide.rst
@@ -0,0 +1,144 @@ 
+IPU3 IPA Architecture Design and Overview
+=========================================
+
+The IPU3 IPA is built as a modular and extensible framework with an
+upper layer to manage the interactions with the pipeline handler, and
+the image processing algorithms split to compartmentalise the processing
+required for each accellerator cluster provided by the ImgU ISP.
+
+The core IPU3 class is responsible for initialisation and construction
+of the algorithm components, processing controls set by the requests
+from applications, and managing events from the Pipeline handler.
+
+::
+
+        ┌───────────────────────────────────────────┐
+        │      IPU3 Pipeline Handler                │
+        │   ┌────────┐    ┌────────┐    ┌────────┐  │
+        │   │        │    │        │    │        │  │
+        │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
+        │   │        │    │        │    │        │  │
+        │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
+        │                                 │P   │    │    S: Statistics Buffer
+        │                                 │    │S   │
+        └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
+          │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
+          │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
+          ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
+        ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
+        │ IPU3 IPA                                  │
+        │                 ┌───────────────────────┐ │
+        │ ┌───────────┐   │ Algorithms            │ │
+        │ │IPAContext │   │          ┌─────────┐  │ │
+        │ │ ┌───────┐ │   │          │ ...     │  │ │
+        │ │ │       │ │   │        ┌─┴───────┐ │  │ │
+        │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
+        │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
+        │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
+        │ │ │       │ │   │    ┌─┴───────┐ │      │ │
+        │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
+        │ │ │       │ │   │    │         │        │ │
+        │ │ └───────┘ │   │    └─────────┘        │ │
+        │ └───────────┘   └───────────────────────┘ │
+        └───────────────────────────────────────────┘
+         SC: IPASessionConfiguration
+         FC: IPAFrameContext(s)
+
+The IPA instance is constructed and initialised at the point a Camera is
+created by the IPU3 pipeline handler. The initialisation call provides
+details about which Camera Sensor is being used, and the controls that
+it has available, along with their defaults and ranges.
+
+Buffers
+~~~~~~~
+
+The IPA will have Parameter and Statistics buffers shared with it from
+the IPU3 Pipeline handler. These buffers will be passed to the IPA
+before the ``start()`` operation occurs.
+
+The IPA will map the buffers into CPU accessible memory, associated with
+a buffer ID, and further events for sending or receiving parameter and
+statistics buffers will reference the ID to avoid expensive memory
+mapping operations, or the passing of file handles during streaming.
+
+After the ``stop()`` operation occurs, these buffers will be unmapped,
+and no further access to the buffers is permitted.
+
+Context
+~~~~~~~
+
+Algorithm calls will always have the ``IPAContext`` available to them.
+This context comprises of two parts:
+
+-  IPA Session Configuration
+-  IPA Frame Context
+
+The session configuration structure ``IPASessionConfiguration``
+represents constant parameters determined from either before streaming
+commenced during ``configure()`` or updated parameters when processing
+controls.
+
+The IPA Frame Context provides the storage for algorithms for a single
+frame operation.
+
+The ``IPAFrameContext`` structure may be extended to an array, list, or
+queue to store historical state for each frame, allowing algorithms to
+obtain and reference results of calculations which are deeply pipelined.
+This may only be done if an algorithm needs to know the context that was
+applied at the frame the statistics were produced for, rather than the
+previous or current frame.
+
+Presently there is a single ``IPAFrameContext`` without historical data,
+and the context is maintained and updated through successive processing
+operations.
+
+Operating
+~~~~~~~~~
+
+There are three main parts of interactions with the algorithms for the
+IPU3 IPA to operate when running:
+
+-  configure()
+-  processEvent(``EventFillParams``)
+-  processEvent(``EventStatReady``)
+
+The configuration phase allows the pipeline-handler to inform the IPA of
+the current stream configurations, which is then passed into each
+algorithm to provide an opportunity to identify and track state of the
+hardware, such as image size or ImgU pipeline configurations.
+
+Pre-frame preparation
+~~~~~~~~~~~~~~~~~~~~~
+
+When configured, the IPA is notified by the pipeline handler of the
+Camera ``start()`` event, after which incoming requests will be queued
+for processing, requiring a parameter buffer (``ipu3_uapi_params``) to
+be populated for the ImgU. This is passed directly to each algorithm
+through the ``prepare()`` call allowing the ISP configuration to be
+updated for the needs of each component that the algorithm is
+responsible for.
+
+The algorithm should set the use flag (``ipu3_uapi_flags``) for any
+structure that it modifies, and it should take care to ensure that any
+structure set by a use flag is fully initialised to suitable values.
+
+The parameter buffer is returned to the pipeline handler through the
+``ActionParamFilled`` event, and from there queued to the ImgU along
+with a raw frame captured with the CIO2.
+
+Post-frame completion
+~~~~~~~~~~~~~~~~~~~~~
+
+When the capture of an image is completed, and successfully processed
+through the ImgU, the generated statistics buffer
+(``ipu3_uapi_stats_3a``) is given to the IPA through the
+``EventStatReady`` event. This provides the IPA with an opportunity to
+examine the results of the ISP and run the calculations required by each
+algorithm on the new data. The algorithms may require context from the
+operations of other algorithms, for example, the AWB might choose to use
+a scene brightness determined by the AGC. It is important that the
+algorithms are ordered to ensure that required results are determined
+before they are needed.
+
+The ordering of the algorithm processing is determined by their
+placement in the ``IPU3::algorithms_`` ordered list.