[libcamera-devel,v6] libcamera: properties: Define pixel array properties

Message ID 20200604153122.18074-1-jacopo@jmondi.org
State Superseded
Series
  • [libcamera-devel,v6] libcamera: properties: Define pixel array properties

Commit Message

Jacopo Mondi June 4, 2020, 3:31 p.m. UTC
Add definition of pixel array related properties.

Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
---
 src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
 1 file changed, 263 insertions(+)

--
2.26.2

Comments

Niklas Söderlund June 5, 2020, 7:53 p.m. UTC | #1
Hi Jacopo,

Nice work!

On 2020-06-04 17:31:22 +0200, Jacopo Mondi wrote:
> Add definition of pixel array related properties.
> 
> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>

Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>

> ---
>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
>  1 file changed, 263 insertions(+)
> 
> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> index ce627fa042ba..762d60881568 100644
> --- a/src/libcamera/property_ids.yaml
> +++ b/src/libcamera/property_ids.yaml
> @@ -386,4 +386,267 @@ controls:
>                                |                    |
>                                |                    |
>                                +--------------------+
> +
> +  - UnitCellSize:
> +      type: Size
> +      description: |
> +        The pixel unit cell physical size, in nanometers.
> +
> +        The UnitCellSize property defines the horizontal and vertical sizes
> +        of a single pixel unit, including its active and non-active parts.
> +
> +        The property can be used to calculate the physical size of the sensor's
> +        pixel array area and for calibration purposes.
> +
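The arithmetic this property enables can be sketched in plain C++. The `Size` struct and function below are hypothetical stand-ins for illustration, not the libcamera API:

```cpp
#include <cstdint>

/* Hypothetical stand-in for a Size property value (not the libcamera API). */
struct Size {
	uint32_t width;
	uint32_t height;
};

/*
 * Estimate the physical size, in nanometers, of the sensor's readable
 * pixel array area by multiplying the unit cell size (nanometers) by
 * the pixel array size (pixels). The full PixelArrayPhysicalSize may
 * be larger, as it also covers non-readable pixels.
 */
constexpr Size readableAreaPhysicalSize(const Size &unitCellSize,
					const Size &pixelArraySize)
{
	return { unitCellSize.width * pixelArraySize.width,
		 unitCellSize.height * pixelArraySize.height };
}
```

For example, a 1120 nm unit cell with a 4056x3040 readable array yields a readable area of roughly 4.54 mm x 3.40 mm.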
> +  - PixelArrayPhysicalSize:
> +      type: Size
> +      description: |
> +        The camera sensor full pixel array size, in nanometers.
> +
> +        The PixelArrayPhysicalSize property reports the physical dimensions
> +        (width x height) of the full pixel array matrix, including readable
> +        and non-readable pixels.
> +
> +        \todo Rename this property to PhysicalSize once we have property
> +              categories (i.e. Properties::PixelArray::PhysicalSize)
> +
> +  - PixelArraySize:
> +      type: Size
> +      description: |
> +        The camera sensor pixel array readable area vertical and horizontal
> +        sizes, in pixels.
> +
> +        The PixelArraySize property defines the size in pixel units of the
> +        readable part of the full pixel array matrix, including optically black
> +        pixels used for calibration, pixels which are not considered valid for
> +        capture and active pixels valid for image capture.
> +
> +        The property describes a rectangle whose top-left corner is placed
> +        in position (0, 0) and whose vertical and horizontal sizes are defined
> +        by the Size element transported by the property.
> +
> +        The property describes the maximum size of the raw data produced by
> +        the sensor, which might not correspond to the physical size of the
> +        sensor pixel array matrix, as some portions of the physical pixel
> +        array matrix are not accessible and cannot be transmitted out.
> +
> +        For example, a pixel array matrix assembled as follows
> +
> +             +--------------------------------------------------+
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             ...          ...           ...      ...          ...
> +
> +             ...          ...           ...      ...          ...
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             +--------------------------------------------------+
> +
> +        composed of two lines of non-readable pixels (x) followed by N lines of
> +        readable data (D) surrounded by two columns of non-readable pixels on
> +        each side, only the readable portion is transmitted to the receiving
> +        side, defining the sizes of the largest possible buffer of raw data
> +        that can be presented to applications.
> +
> +                               PixelArraySize[0]
> +               /----------------------------------------------/
> +               +----------------------------------------------+ /
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> +               ...        ...           ...      ...        ...
> +               ...        ...           ...      ...        ...
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               +----------------------------------------------+ /
> +
> +        All other rectangles that describe portions of the pixel array, such
> +        as the optical black pixel rectangles and active pixel areas, are
> +        defined relative to the rectangle described by this property.
> +
> +        \todo Rename this property to Size once we have property
> +              categories (i.e. Properties::PixelArray::Size)
> +
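Since every other rectangle (optical black regions, active areas) is defined relative to this (0, 0)-anchored rectangle, a consumer may want to check that a reported rectangle actually fits inside it. A minimal sketch, using hypothetical stand-ins for the geometry types rather than the libcamera API:

```cpp
#include <cstdint>

/* Hypothetical stand-ins for the geometry types (not the libcamera API). */
struct Rectangle {
	int32_t x;
	int32_t y;
	uint32_t width;
	uint32_t height;
};

struct Size {
	uint32_t width;
	uint32_t height;
};

/*
 * Return true if rectangle `r`, expressed relative to the (0, 0)
 * top-left corner of the readable pixel array, lies entirely within
 * the PixelArraySize bounds.
 */
bool containedInPixelArray(const Rectangle &r, const Size &pixelArraySize)
{
	return r.x >= 0 && r.y >= 0 &&
	       r.x + r.width <= pixelArraySize.width &&
	       r.y + r.height <= pixelArraySize.height;
}
```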
> +  - PixelArrayOpticalBlackRectangles:
> +      type: Rectangle
> +      size: [1 x n]
> +      description: |
> +        The raw data buffer regions which contain optical black pixels
> +        considered valid for calibration purposes.
> +
> +        The PixelArrayOpticalBlackRectangles property describes (possibly
> +        multiple) rectangular areas of the raw data buffer where optical
> +        black pixels are located and could be accessed for calibration and
> +        black level correction.
> +
> +        This property describes the position and size of optically black pixel
> +        rectangles relative to their position in the raw data buffer as stored
> +        in memory, which might differ from their actual physical location in the
> +        pixel array matrix.
> +
> +        It is important to note, in fact, that camera sensors might
> +        automatically re-order, shuffle or skip portions of their pixel array
> +        matrix when transmitting data to the receiver.
> +
> +        The pixel array contains several areas with different purposes,
> +        interleaved with lines and columns which are not valid for capture
> +        purposes. Lines and columns are defined as invalid when, for example,
> +        they are positioned too close to the chip margins or to the optical
> +        black shielding placed on top of optical black pixels.
> +
> +                                PixelArraySize[0]
> +               /----------------------------------------------/
> +                  x1                                       x2
> +               +--o---------------------------------------o---+ /
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               ...          ...           ...     ...       ...
> +               ...          ...           ...     ...       ...
> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               +----------------------------------------------+ /
> +
> +        The readable pixel array matrix is composed of
> +        2 invalid lines (I)
> +        4 lines of valid optically black pixels (O)
> +        2 invalid lines (I)
> +        n lines of valid pixel data (P)
> +        2 invalid lines (I)
> +
> +        And the position of the optical black pixel rectangles is defined by
> +
> +            PixelArrayOpticalBlackRectangles = {
> +               { x1, y1, x2 - x1 + 1, y2 - y1 + 1 },
> +               { x1, y3, 2, y4 - y3 + 1 },
> +               { x2, y3, 2, y4 - y3 + 1 },
> +            };
> +
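As a hedged example of consuming these rectangles, the black level of a raw frame could be estimated by averaging the samples they cover. The single-plane, 16-bit sample layout and the `Rectangle` type are assumptions for illustration, not the libcamera API:

```cpp
#include <cstdint>
#include <vector>

/* Hypothetical stand-in for a Rectangle property value (not the libcamera API). */
struct Rectangle {
	int32_t x;
	int32_t y;
	uint32_t width;
	uint32_t height;
};

/*
 * Average the optical black samples of a raw frame, assumed to be a
 * single plane of 16-bit samples with `stride` samples per line, over
 * the rectangles reported by PixelArrayOpticalBlackRectangles.
 */
double estimateBlackLevel(const std::vector<uint16_t> &raw, uint32_t stride,
			  const std::vector<Rectangle> &obRects)
{
	uint64_t sum = 0;
	uint64_t count = 0;

	for (const Rectangle &r : obRects) {
		for (uint32_t y = 0; y < r.height; ++y) {
			const uint16_t *line = &raw[(r.y + y) * stride + r.x];
			for (uint32_t x = 0; x < r.width; ++x) {
				sum += line[x];
				++count;
			}
		}
	}

	return count ? static_cast<double>(sum) / count : 0.0;
}
```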
> +        If the sensor, when required to output the full pixel array matrix,
> +        automatically skips the invalid lines and columns, it produces the
> +        following data buffer when captured to memory
> +
> +                                    PixelArraySize[0]
> +               /----------------------------------------------/
> +                                                           x1
> +               +--------------------------------------------o-+ /
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> +               ...       ...          ...       ...         ... |
> +               ...       ...          ...       ...         ... |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               +----------------------------------------------+ /
> +
> +        The invalid lines and columns should not be reported as part of the
> +        PixelArraySize property in the first place.
> +
> +        In this case, the position of the black pixel rectangles will be
> +
> +            PixelArrayOpticalBlackRectangles = {
> +               { 0, 0, PixelArraySize[0], y1 },
> +               { 0, y1, 2, PixelArraySize[1] - y1 },
> +               { x1, y1, 2, PixelArraySize[1] - y1 },
> +            };
> +
> +        \todo Rename this property to OpticalBlackRectangles once we have
> +              property categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> +
> +  - PixelArrayActiveAreas:
> +      type: Rectangle
> +      size: [1 x n]
> +      description: |
> +        The PixelArrayActiveAreas property defines the (possibly multiple and
> +        overlapping) portions of the camera sensor readable pixel matrix
> +        which are considered valid for image acquisition purposes.
> +
> +        Each rectangle is defined relative to the PixelArraySize rectangle,
> +        with its top-left corner defined by its horizontal and vertical
> +        distances from the PixelArraySize rectangle top-left corner, placed in
> +        position (0, 0).
> +
> +        This property describes an arbitrary number of overlapping rectangles,
> +        with each rectangle representing the maximum image size that the camera
> +        sensor can produce for a particular aspect ratio.
> +
> +        When multiple rectangles are reported, they shall be ordered from the
> +        tallest to the shortest.
> +
> +        Example 1
> +        A camera sensor which only produces images in the 4:3 aspect ratio
> +        will report a single PixelArrayActiveAreas rectangle, from which all
> +        other image formats are obtained by cropping the field-of-view and/or
> +        applying pixel sub-sampling techniques such as pixel skipping or
> +        binning.
> +
> +                     PixelArraySize[0]
> +                    /----------------/
> +                      x1          x2
> +            (0,0)-> +-o------------o-+  /
> +                 y1 o +------------+ |  |
> +                    | |////////////| |  |
> +                    | |////////////| |  | PixelArraySize[1]
> +                    | |////////////| |  |
> +                 y2 o +------------+ |  |
> +                    +----------------+  /
> +
> +        The property reports a single rectangle
> +
> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> +
> +        Example 2
> +        A camera sensor which can produce images in different native
> +        resolutions will report several overlapping rectangles, one for each
> +        natively supported resolution.
> +
> +                     PixelArraySize[0]
> +                    /------------------/
> +                      x1  x2    x3  x4
> +            (0,0)-> +o---o------o---o+  /
> +                 y1 o    +------+    |  |
> +                    |    |//////|    |  |
> +                 y2 o+---+------+---+|  |
> +                    ||///|//////|///||  | PixelArraySize[1]
> +                 y3 o+---+------+---+|  |
> +                    |    |//////|    |  |
> +                 y4 o    +------+    |  |
> +                    +----+------+----+  /
> +
> +        The property reports two rectangles
> +
> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> +
> +        The first rectangle describes the maximum field-of-view of all image
> +        formats in the 4:3 aspect ratio, while the second one describes the
> +        maximum field-of-view for all image formats in the 16:9 aspect ratio.
> +
> +        \todo Rename this property to ActiveAreas once we have property
> +              categories (i.e. Properties::PixelArray::ActiveAreas)
> +
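A sketch of how an application might pick among these rectangles, choosing the active area whose aspect ratio is closest to a requested one; `Rectangle` is a hypothetical stand-in, not the libcamera API. Because the areas are ordered from tallest to shortest, the strict comparison keeps the earlier (taller) entry on ties:

```cpp
#include <cstdint>
#include <vector>

/* Hypothetical stand-in for a Rectangle property value (not the libcamera API). */
struct Rectangle {
	int32_t x;
	int32_t y;
	uint32_t width;
	uint32_t height;
};

/*
 * Return the PixelArrayActiveAreas rectangle whose width/height ratio
 * is closest to `targetRatio`. `areas` must be non-empty and ordered
 * from tallest to shortest, as the property mandates.
 */
const Rectangle &bestActiveArea(const std::vector<Rectangle> &areas,
				double targetRatio)
{
	const Rectangle *best = &areas.front();
	double bestDiff = 1e9;

	for (const Rectangle &r : areas) {
		double ratio = static_cast<double>(r.width) / r.height;
		double diff = ratio > targetRatio ? ratio - targetRatio
						  : targetRatio - ratio;
		if (diff < bestDiff) {
			bestDiff = diff;
			best = &r;
		}
	}

	return *best;
}
```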
>  ...
> --
> 2.26.2
> 
> _______________________________________________
> libcamera-devel mailing list
> libcamera-devel@lists.libcamera.org
> https://lists.libcamera.org/listinfo/libcamera-devel
Ricardo Ribalda Delgado June 8, 2020, 8:25 a.m. UTC | #2
Hi Jacopo

On Fri, Jun 5, 2020 at 9:53 PM Niklas Söderlund
<niklas.soderlund@ragnatech.se> wrote:
>
> Hi Jacopo,
>
> Nice work!

Good work indeed :)
>
> On 2020-06-04 17:31:22 +0200, Jacopo Mondi wrote:
> > Add definition of pixel array related properties.
> >
> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
>
> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Ricardo Ribalda <ricardo@ribalda.com>
>
> > [snip]
>
> --
> Regards,
> Niklas Söderlund
Sakari Ailus June 8, 2020, 10:26 a.m. UTC | #3
Hi Jacopo,

Thank you for the patch.

On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> Add definition of pixel array related properties.
> 
> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> ---
>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
>  1 file changed, 263 insertions(+)
> 
> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> index ce627fa042ba..762d60881568 100644
> --- a/src/libcamera/property_ids.yaml
> +++ b/src/libcamera/property_ids.yaml
> @@ -386,4 +386,267 @@ controls:
>                                |                    |
>                                |                    |
>                                +--------------------+
> +
> +  - UnitCellSize:
> +      type: Size
> +      description: |
> +        The pixel unit cell physical size, in nanometers.
> +
> > +        The UnitCellSize property defines the horizontal and vertical sizes
> +        of a single pixel unit, including its active and non-active parts.
> +
> +        The property can be used to calculate the physical size of the sensor's
> +        pixel array area and for calibration purposes.

Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
PixelArraySize?

> +
> +  - PixelArrayPhysicalSize:
> +      type: Size
> +      description: |
> +        The camera sensor full pixel array size, in nanometers.
> +
> +        The PixelArrayPhysicalSize property reports the physical dimensions
> +        (width x height) of the full pixel array matrix, including readable
> +        and non-readable pixels.
> +
> +        \todo Rename this property to PhysicalSize once we have property
> +              categories (i.e. Properties::PixelArray::PhysicalSize)
> +
> +  - PixelArraySize:
> +      type: Size
> +      description: |
> +        The camera sensor pixel array readable area vertical and horizontal
> +        sizes, in pixels.
> +
> +        The PixelArraySize property defines the size in pixel units of the
> +        readable part of the full pixel array matrix, including optically black
> +        pixels used for calibration, pixels which are not considered valid for
> +        capture and active pixels valid for image capture.
> +
> +        The property describes a rectangle whose top-left corner is placed
> +        in position (0, 0) and whose vertical and horizontal sizes are defined
> +        by the Size element transported by the property.
> +
> +        The property describes the maximum size of the raw data produced by
> +        the sensor, which might not correspond to the physical size of the
> +        sensor pixel array matrix, as some portions of the physical pixel
> +        array matrix are not accessible and cannot be transmitted out.
> +
> +        For example, a pixel array matrix assembled as follows
> +
> +             +--------------------------------------------------+
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             ...          ...           ...      ...          ...
> +
> +             ...          ...           ...      ...          ...
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> +             +--------------------------------------------------+
> +
> +        composed of two lines of non-readable pixels (x) followed by N lines of
> +        readable data (D) surrounded by two columns of non-readable pixels on
> +        each side, only the readable portion is transmitted to the receiving
> +        side, defining the sizes of the largest possible buffer of raw data
> +        that can be presented to applications.
> +
> +                               PixelArraySize[0]
> +               /----------------------------------------------/
> +               +----------------------------------------------+ /
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> +               ...        ...           ...      ...        ...
> +               ...        ...           ...      ...        ...
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> +               +----------------------------------------------+ /
> +
> +        All other rectangles that describe portions of the pixel array, such
> +        as the optical black pixel rectangles and active pixel areas, are
> +        defined relative to the rectangle described by this property.
> +
> +        \todo Rename this property to Size once we have property
> +              categories (i.e. Properties::PixelArray::Size)
> +
> +  - PixelArrayOpticalBlackRectangles:
> +      type: Rectangle
> +      size: [1 x n]
> +      description: |
> +        The raw data buffer regions which contain optical black pixels
> +        considered valid for calibration purposes.
> +

How does this interact with the rotation property?

If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
invert the flipping controls. I presume the same would apply to e.g. dark
pixels in case they are read out, but that should be something for a driver
to handle.

But if the frame layout isn't conveyed to the user space by the driver,
then we need another way to convey the sensor is actually rotated. Oh well.

Not every sensor that is mounted upside down has flipping controls so the
user space will still somehow need to manage the rotation in that case.

> +        The PixelArrayOpticalBlackRectangles property describes (possibly multiple)
> +        rectangular areas of the raw data buffer, where optical black pixels are
> +        located and could be accessed for calibration and black level
> +        correction.
> +
> +        This property describes the position and size of optically black pixel
> +        rectangles relative to their position in the raw data buffer as stored
> +        in memory, which might differ from their actual physical location in the
> +        pixel array matrix.
> +
> +        It is important to note, in fact, that camera sensors might
> +        automatically re-order, shuffle or skip portions of their pixel array
> +        matrix when transmitting data to the receiver.
> +
> +        The pixel array contains several areas with different purposes,
> +        interleaved with lines and columns which are not valid for capture
> +        purposes. Lines and columns are considered invalid as they could be
> +        positioned too close to the chip margins or to the optical black
> +        shielding placed on top of optical black pixels.
> +
> +                                PixelArraySize[0]
> +               /----------------------------------------------/
> +                  x1                                       x2
> +               +--o---------------------------------------o---+ /
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               ...          ...           ...     ...       ...
> +               ...          ...           ...     ...       ...
> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> +               +----------------------------------------------+ /
> +
> +        The readable pixel array matrix is composed of
> +        2 invalid lines (I)
> +        4 lines of valid optically black pixels (O)
> +        2 invalid lines (I)
> +        n lines of valid pixel data (P)
> +        2 invalid lines (I)
> +
> +        And the position of the optical black pixel rectangles is defined by
> +
> +            PixelArrayOpticalBlackRectangles = {
> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },

s/\+ \K}/1 }/

> +               { x1, y3, 2, y4 - y3 + 1 },
> +               { x2, y3, 2, y4 - y3 + 1 },
> +            };
> +
> +        If the sensor, when required to output the full pixel array matrix,
> +        automatically skips the invalid lines and columns, it produces the
> +        following data buffer when captured to memory
> +
> +                                    PixelArraySize[0]
> +               /----------------------------------------------/
> +                                                           x1
> +               +--------------------------------------------o-+ /
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> +               ...       ...          ...       ...         ... |
> +               ...       ...          ...       ...         ... |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> +               +----------------------------------------------+ /
> +
> +        The invalid lines and columns should not be reported as part of the
> +        PixelArraySize property in the first place.
> +
> +        In this case, the position of the black pixel rectangles will be
> +
> +            PixelArrayOpticalBlackRectangles = {
> +               { 0, 0, y1 + 1, PixelArraySize[0] },
> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> +            };
> +
> +        \todo Rename this property to OpticalBlackRectangles once we have
> +              property categories
> +              (i.e. Properties::PixelArray::OpticalBlackRectangles)
> +
> +  - PixelArrayActiveAreas:
> +      type: Rectangle
> +      size: [1 x n]
> +      description: |
> +        The PixelArrayActiveAreas property defines the (possibly multiple and
> +        overlapping) portions of the camera sensor readable pixel matrix
> +        which are considered valid for image acquisition purposes.
> +
> +        Each rectangle is defined relative to the PixelArraySize rectangle,
> +        with its top-left corner defined by its horizontal and vertical
> +        distances from the PixelArraySize rectangle top-left corner, placed in
> +        position (0, 0).
> +
> +        This property describes an arbitrary number of overlapping rectangles,
> +        with each rectangle representing the maximum image size that the camera
> +        sensor can produce for a particular aspect ratio.
> +
> +        When multiple rectangles are reported, they shall be ordered from the
> +        tallest to the shortest.
> +
> +        Example 1
> +        A camera sensor which only produces images in the 4:3 image resolution
> +        will report a single PixelArrayActiveAreas rectangle, from which all
> +        other image formats are obtained by either cropping the field-of-view
> +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> +        binning.
> +
> +                     PixelArraySize[0]
> +                    /----------------/
> +                      x1          x2
> +            (0,0)-> +-o------------o-+  /
> +                 y1 o +------------+ |  |
> +                    | |////////////| |  |
> +                    | |////////////| |  | PixelArraySize[1]
> +                    | |////////////| |  |
> +                 y2 o +------------+ |  |
> +                    +----------------+  /
> +
> +        The property reports a single rectangle
> +
> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> +
> +        Example 2
> +        A camera sensor which can produce images in different native
> +        resolutions will report several overlapping rectangles, one for each
> +        natively supported resolution.
> +
> +                     PixelArraySize[0]
> +                    /------------------/
> +                      x1  x2    x3  x4
> +            (0,0)-> +o---o------o---o+  /
> +                 y1 o    +------+    |  |
> +                    |    |//////|    |  |
> +                 y2 o+---+------+---+|  |
> +                    ||///|//////|///||  | PixelArraySize[1]
> +                 y3 o+---+------+---+|  |
> +                    |    |//////|    |  |
> +                 y4 o    +------+    |  |
> +                    +----+------+----+  /
> +
> +        The property reports two rectangles
> +
> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> +
> +        The first rectangle describes the maximum field-of-view of all image
> +        formats in the 4:3 resolutions, while the second one describes the
> +        maximum field of view for all image formats in the 16:9 resolutions.
> +
> +        \todo Rename this property to ActiveAreas once we have property
> +              categories (i.e. Properties::PixelArray::ActiveAreas)
> +
>  ...
Jacopo Mondi June 8, 2020, 10:55 a.m. UTC | #4
Hi Sakari,

On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> Hi Jacopo,
>
> Thank you for the patch.

Thanks a lot for taking time to comment on this

>
> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > Add definition of pixel array related properties.
> >
> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > ---
> >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> >  1 file changed, 263 insertions(+)
> >
> > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > index ce627fa042ba..762d60881568 100644
> > --- a/src/libcamera/property_ids.yaml
> > +++ b/src/libcamera/property_ids.yaml
> > @@ -386,4 +386,267 @@ controls:
> >                                |                    |
> >                                |                    |
> >                                +--------------------+
> > +
> > +  - UnitCellSize:
> > +      type: Size
> > +      description: |
> > +        The pixel unit cell physical size, in nanometers.
> > +
> > +        The UnitCellSize property defines the horizontal and vertical sizes
> > +        of a single pixel unit, including its active and non-active parts.
> > +
> > +        The property can be used to calculate the physical size of the sensor's
> > +        pixel array area and for calibration purposes.
>
> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> PixelArraySize?
>

Not really, as PixelArrayPhysicalSize reports the physical dimensions
of the full pixel array (readable and non-readable pixels) while
PixelArraySize reports the size in pixels of the largest readable
image. To sum it up: PixelArrayPhysicalSize reports the chip area,
which covers more space than the readable PixelArraySize.

> > +
> > +  - PixelArrayPhysicalSize:
> > +      type: Size
> > +      description: |
> > +        The camera sensor full pixel array size, in nanometers.
> > +
> > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > +        (width x height) of the full pixel array matrix, including readable
> > +        and non-readable pixels.
> > +
> > +        \todo Rename this property to PhysicalSize once we have property
> > +              categories (i.e. Properties::PixelArray::PhysicalSize)
> > +
> > +  - PixelArraySize:
> > +      type: Size
> > +      description: |
> > +        The camera sensor pixel array readable area vertical and horizontal
> > +        sizes, in pixels.
> > +
> > +        The PixelArraySize property defines the size in pixel units of the
> > +        readable part of the full pixel array matrix, including optically black
> > +        pixels used for calibration, pixels which are not considered valid for
> > +        capture and active pixels valid for image capture.
> > +
> > +        The property describes a rectangle whose top-left corner is placed
> > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > +        by the Size element transported by the property.
> > +
> > +        The property describes the maximum size of the raw data produced by
> > +        the sensor, which might not correspond to the physical size of the
> > +        sensor pixel array matrix, as some portions of the physical pixel
> > +        array matrix are not accessible and cannot be transmitted out.
> > +
> > +        For example, a pixel array matrix assembled as follows
> > +
> > +             +--------------------------------------------------+
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             ...          ...           ...      ...          ...
> > +
> > +             ...          ...           ...      ...          ...
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             +--------------------------------------------------+
> > +
> > +        composed of two lines of non-readable pixels (x) followed by N lines of
> > +        readable data (D) surrounded by two columns of non-readable pixels on
> > +        each side, only the readable portion is transmitted to the receiving
> > +        side, defining the sizes of the largest possible buffer of raw data
> > +        that can be presented to applications.
> > +
> > +                               PixelArraySize[0]
> > +               /----------------------------------------------/
> > +               +----------------------------------------------+ /
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > +               ...        ...           ...      ...        ...
> > +               ...        ...           ...      ...        ...
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               +----------------------------------------------+ /
> > +
> > +        All other rectangles that describe portions of the pixel array,
> > +        such as the optical black pixel rectangles and active pixel areas,
> > +        are defined relative to the rectangle described by this property.
> > +
> > +        \todo Rename this property to Size once we have property
> > +              categories (i.e. Properties::PixelArray::Size)
> > +
> > +  - PixelArrayOpticalBlackRectangles:
> > +      type: Rectangle
> > +      size: [1 x n]
> > +      description: |
> > +        The raw data buffer regions which contain optical black pixels
> > +        considered valid for calibration purposes.
> > +
>
> How does this interact with the rotation property?
>
> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> invert the flipping controls. I presume the same would apply to e.g. dark
> pixels in case they are read out, but that should be something for a driver
> to handle.

Right. I think this also depends on how black pixels are read out. I
assumed here that the sensor is capable of reading out the whole
PixelArraySize area, and that the receiver is able to capture its whole
content in a single buffer, where applications can then pick the areas
of interest using the information conveyed by this property. If this
model holds, then when sensor flipping is enabled the information
reported here is indeed misleading, unless the chip layout is perfectly
symmetric in the vertical and horizontal dimensions (which seems
unlikely).

>
> But if the frame layout isn't conveyed to the user space by the driver,
> then we need another way to convey the sensor is actually rotated. Oh well.

Not sure how to interpret this part. Do you mean "convey that a
rotation is applied by, i.e., setting the canonical V/HFLIP controls,
which cause the sensor pixel array to be transmitted out with its
vertical/horizontal read-out directions inverted?"

We currently have a read-only property that reports the mounting
rotation (like the dt-property you have just reviewed :) I assume we
will also have a rotation control that instead reports whether a
V/HFLIP is applied, so that applications know how to compensate for it.

>
> Not every sensor that is mounted upside down has flipping controls so the
> user space will still somehow need to manage the rotation in that case.
>

I think it should, and I think we'll provide all the information needed
to do so.

Thanks
  j

> > +        The PixelArrayOpticalBlackRectangles property describes (possibly multiple)
> > +        rectangular areas of the raw data buffer, where optical black pixels are
> > +        located and could be accessed for calibration and black level
> > +        correction.
> > +
> > +        This property describes the position and size of optically black pixel
> > +        rectangles relative to their position in the raw data buffer as stored
> > +        in memory, which might differ from their actual physical location in the
> > +        pixel array matrix.
> > +
> > +        It is important to note, in fact, that camera sensors might
> > +        automatically re-order, shuffle or skip portions of their pixel
> > +        array matrix when transmitting data to the receiver.
> > +
> > +        The pixel array contains several areas with different purposes,
> > +        interleaved with lines and columns which are not valid for capture
> > +        purposes. Lines and columns are considered invalid as they could be
> > +        positioned too close to the chip margins or to the optical black
> > +        shielding placed on top of optical black pixels.
> > +
> > +                                PixelArraySize[0]
> > +               /----------------------------------------------/
> > +                  x1                                       x2
> > +               +--o---------------------------------------o---+ /
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               ...          ...           ...     ...       ...
> > +               ...          ...           ...     ...       ...
> > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               +----------------------------------------------+ /
> > +
> > +        The readable pixel array matrix is composed of
> > +        2 invalid lines (I)
> > +        4 lines of valid optically black pixels (O)
> > +        2 invalid lines (I)
> > +        n lines of valid pixel data (P)
> > +        2 invalid lines (I)
> > +
> > +        And the position of the optical black pixel rectangles is defined by
> > +
> > +            PixelArrayOpticalBlackRectangles = {
> > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
>
> s/\+ \K}/1 }/
>
> > +               { x1, y3, 2, y4 - y3 + 1 },
> > +               { x2, y3, 2, y4 - y3 + 1 },
> > +            };
> > +
> > +        If the sensor, when required to output the full pixel array matrix,
> > +        automatically skips the invalid lines and columns, it produces the
> > +        following data buffer when captured to memory
> > +
> > +                                    PixelArraySize[0]
> > +               /----------------------------------------------/
> > +                                                           x1
> > +               +--------------------------------------------o-+ /
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > +               ...       ...          ...       ...         ... |
> > +               ...       ...          ...       ...         ... |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               +----------------------------------------------+ /
> > +
> > +        The invalid lines and columns should not be reported as part of the
> > +        PixelArraySize property in the first place.
> > +
> > +        In this case, the position of the black pixel rectangles will be
> > +
> > +            PixelArrayOpticalBlackRectangles = {
> > +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > +            };
> > +
> > +        \todo Rename this property to OpticalBlackRectangles once we have
> > +              property categories
> > +              (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > +
> > +  - PixelArrayActiveAreas:
> > +      type: Rectangle
> > +      size: [1 x n]
> > +      description: |
> > +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > +        overlapping) portions of the camera sensor readable pixel matrix
> > +        which are considered valid for image acquisition purposes.
> > +
> > +        Each rectangle is defined relative to the PixelArraySize rectangle,
> > +        with its top-left corner defined by its horizontal and vertical
> > +        distances from the PixelArraySize rectangle top-left corner, placed in
> > +        position (0, 0).
> > +
> > +        This property describes an arbitrary number of overlapping rectangles,
> > +        with each rectangle representing the maximum image size that the camera
> > +        sensor can produce for a particular aspect ratio.
> > +
> > +        When multiple rectangles are reported, they shall be ordered from the
> > +        tallest to the shortest.
> > +
> > +        Example 1
> > +        A camera sensor which only produces images in the 4:3 image resolution
> > +        will report a single PixelArrayActiveAreas rectangle, from which all
> > +        other image formats are obtained by either cropping the field-of-view
> > +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> > +        binning.
> > +
> > +                     PixelArraySize[0]
> > +                    /----------------/
> > +                      x1          x2
> > +            (0,0)-> +-o------------o-+  /
> > +                 y1 o +------------+ |  |
> > +                    | |////////////| |  |
> > +                    | |////////////| |  | PixelArraySize[1]
> > +                    | |////////////| |  |
> > +                 y2 o +------------+ |  |
> > +                    +----------------+  /
> > +
> > +        The property reports a single rectangle
> > +
> > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > +
> > +        Example 2
> > +        A camera sensor which can produce images in different native
> > +        resolutions will report several overlapping rectangles, one for each
> > +        natively supported resolution.
> > +
> > +                     PixelArraySize[0]
> > +                    /------------------/
> > +                      x1  x2    x3  x4
> > +            (0,0)-> +o---o------o---o+  /
> > +                 y1 o    +------+    |  |
> > +                    |    |//////|    |  |
> > +                 y2 o+---+------+---+|  |
> > +                    ||///|//////|///||  | PixelArraySize[1]
> > +                 y3 o+---+------+---+|  |
> > +                    |    |//////|    |  |
> > +                 y4 o    +------+    |  |
> > +                    +----+------+----+  /
> > +
> > +        The property reports two rectangles
> > +
> > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > +
> > +        The first rectangle describes the maximum field-of-view of all image
> > +        formats in the 4:3 resolutions, while the second one describes the
> > +        maximum field of view for all image formats in the 16:9 resolutions.
> > +
> > +        \todo Rename this property to ActiveAreas once we have property
> > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > +
> >  ...
>
> --
> Kind regards,
>
> Sakari Ailus
Laurent Pinchart July 31, 2020, 2:21 a.m. UTC | #5
Hi Jacopo,

Thank you for the patch (and sorry for such a late reply)

First of all, I really like this, I think it's really good work.
Describing the pixel array size may seem like an easy task, but I know
how much variation we can see in camera sensors, and how much research
you've but into this. Coming up with the properties below that are both
flexible and clearly defined was not a simple task.

I have quite a few comments (you know me...), but nothing that requires
significant changes. Since you've posted this series David has submitted
a proposal for transformation (including rotation) support, which needs
to be taken into account as Sakari pointed out. I've proposed in the
review below a way to do so (don't worry, it's not a complete rewrite
:-)), please feel free to tell me if you disagree with some (or all) of
the proposals.

On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:
> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> > On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > > Add definition of pixel array related properties.
> > >
> > > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > > ---
> > >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> > >  1 file changed, 263 insertions(+)
> > >
> > > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > > index ce627fa042ba..762d60881568 100644
> > > --- a/src/libcamera/property_ids.yaml
> > > +++ b/src/libcamera/property_ids.yaml
> > > @@ -386,4 +386,267 @@ controls:
> > >                                |                    |
> > >                                |                    |
> > >                                +--------------------+
> > > +
> > > +  - UnitCellSize:
> > > +      type: Size
> > > +      description: |
> > > +        The pixel unit cell physical size, in nanometers.
> > > +
> > > +        The UnitCellSize properties defines the horizontal and vertical sizes
> > > +        of a single pixel unit, including its active and non-active parts.

Would it be useful to add "In other words, it expresses the horizontal
and vertical distance between the top-left corners of adjacent pixels."
?
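
If it helps, here's roughly the calculation I'd expect applications to do
with the two properties together (a Python sketch; the numbers are purely
illustrative, not taken from any real sensor datasheet):

```python
# Hypothetical example values: a 1120 nm square unit cell and a
# 4056x3040 readable pixel array.
unit_cell_nm = (1120, 1120)   # UnitCellSize, in nanometres
pixel_array = (4056, 3040)    # PixelArraySize, in pixels

# Physical size of the readable pixel array area, in millimetres:
# unit cell pitch (top-left corner to top-left corner) times the
# number of pixels in each direction.
width_mm = unit_cell_nm[0] * pixel_array[0] / 1e6
height_mm = unit_cell_nm[1] * pixel_array[1] / 1e6
```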

> > > +
> > > +        The property can be used to calculate the physical size of the sensor's
> > > +        pixel array area and for calibration purposes.
> >
> > Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> > PixelArraySize?
> 
> Not really, as PixelArrayPhysicalSize reports the physical dimension
> of the full pixel array (readable and non readable pixels) while
> PixelArraySize reports the size in pixels of the largest readable
> image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> which covers more space than the readable PixelArraySize.
> 
> > > +
> > > +  - PixelArrayPhysicalSize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor full pixel array size, in nanometers.
> > > +
> > > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > > +        (width x height) of the full pixel array matrix, including readable
> > > +        and non-readable pixels.
> > > +
> > > +        \todo Rename this property to PhysicalSize once we will have property
> > > +              categories (i.e. Properties::PixelArray::PhysicalSize)

I understand Sakari's confusion regarding the UnitCellSize calculation
from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
answer, but I think we could actually drop the PixelArrayPhysicalSize
property. I don't see what it could be used for, it describes the
physical size of something that can't be accessed at all from the host,
and should thus not matter at all.

If we later find a need for this, we can always add it. At that point, I
also think it would be best to express the property in pixel units
(although depending on the use case that drives addition of this
property, I could very well be wrong).

> > > +
> > > +  - PixelArraySize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor pixel array readable area vertical and horizontal
> > > +        sizes, in pixels.
> > > +
> > > +        The PixelArraySize property defines the size in pixel units of the
> > > +        readable part of full pixel array matrix, including optically black

s/optically/optical/

I've also come across the term "optical dark pixels" which I think is a
bit more accurate, as the pixels are covered and thus dark, but not
fully black due to leakage currents (otherwise there would be no need
for black level correction). "optical black" is used much more often
though, so I think we should keep that term.

> > > +        pixels used for calibration, pixels which are not considered valid for
> > > +        capture and active pixels valid for image capture.

Maybe "and active pixels containing valid image data" ?

> > > +
> > > +        The property describes a rectangle whose top-left corner is placed
> > > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > > +        by the Size element transported by the property.

I would move this paragraph below to [1].

> > > +
> > > +        The property describes the maximum size of the raw data produced by
> > > +        the sensor, which might not correspond to the physical size of the

Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
receiver could drop data too ? For instance, the invalid lines between
the optical black and valid regions could be sent by the sensor, with a
specific data type, and dropped on the receiving side by a decision of
the pipeline handler that chooses to configure the CSI-2 receiver to
drop the corresponding data type.

> > > +        sensor pixel array matrix, as some portions of the physical pixel
> > > +        array matrix are not accessible and cannot be transmitted out.
> > > +
> > > +        For example, a pixel array matrix assembled as follow

s/a pixel array/let's consider a pixel array/
s/follow/follows/

> > > +
> > > +             +--------------------------------------------------+
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             ...          ...           ...      ...          ...
> > > +
> > > +             ...          ...           ...      ...          ...
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             +--------------------------------------------------+
> > > +
> > > +        composed by two lines of non-readable pixels (x) followed by N lines of

s/composed by/starting with/
s/(x)/(x),/

> > > +        readable data (D) surrounded by two columns of non-readable pixels on
> > > +        each side, only the readable portion is transmitted to the receiving

s/each side,/each side, and ending with two more lines of non-readable pixels./
s/only/Only/

> > > +        side, defining the sizes of the largest possible buffer of raw data

s/sizes/size/

> > > +        that can be presented to applications.
> > > +
> > > +                               PixelArraySize[0]

should this be PixelArraySize.width ? Same for height instead of [1],
and in other locations below.

> > > +               /----------------------------------------------/
> > > +               +----------------------------------------------+ /
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > > +               ...        ...           ...      ...        ...
> > > +               ...        ...           ...      ...        ...
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               +----------------------------------------------+ /
> > > +

[1], with a small modification:

"This defines a rectangle ... sizes are defined by this property."

and merge it with the following paragraph in a single paragraph.

> > > +        All other rectangles that describes portions of the pixel array, such as

s/describes/describe/

> > > +        the optical black pixels rectangles and active pixel areas are defined

s/areas/areas,/

> > > +        relatively to the rectangle described by this property.
> > > +
> > > +        \todo Rename this property to Size once we will have property
> > > +              categories (i.e. Properties::PixelArray::Size)
> > > +
> > > +  - PixelArrayOpticalBlackRectangles:
> > > +      type: Rectangle
> > > +      size: [1 x n]

This can just be [n], not all sizes need to be bi-dimensional.

> > > +      description: |
> > > +        The raw data buffer regions which contains optical black pixels

s/raw data buffer/pixel array/ for the reason explained below in [2]
s/regions/region(s)/ ?
s/contains/contain/

> > > +        considered valid for calibration purposes.
> > > +
> >
> > How does this interact with the rotation property?
> >
> > If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> > invert the flipping controls. I presume the same would apply to e.g. dark
> > pixels in case they are read out, but that should be something for a driver
> > to handle.
> 
> Right. I think this also depends how black pixels are read out. I here
> assumed the sensor is capable of reading out the whole PixelArraySize area,
> and the receiver is able to capture its whole content in a single
> buffer, where application can then go and pick the areas of interest
> using the information conveyed by this property. If this model holds,
> then if sensor flipping is enabled, the information reported here is
> indeed misleading, unless the chip architecture is
> perfectly symmetric in vertical and horizontal dimensions (which seems
> unlikely).
> 
> > But if the frame layout isn't conveyed to the user space by the driver,
> > then we need another way to convey the sensor is actually rotated. Oh well.
> 
> Not sure how to interpret this part. Do you mean "convey that a
> rotation is applied by, i.e., setting the canonical V/HFLIP controls,
> which cause the sensor pixel array to be transmitted out with its
> vertical/horizontal read-out directions inverted ?"
> 
> We currently have a read-only property that reports the mounting
> rotation (like the dt-property you have just reviewed :) I assume we
> will have a rotation control that reports instead if a V/HFLIP is
> applied, so that applications know how to compensate for it.
> 
> > Not every sensor that is mounted upside down has flipping controls so the
> > user space will still somehow need to manage the rotation in that case.
> 
> I think it should, and I think we'll provide all information to be
> able to do so.

I think all the properties here should be expressed relative to the
default sensor readout direction. Any transformation (such as h/v flip)
will need to be taken into account by the application if the use cases
require it. That would be the easiest course of action, otherwise these
properties couldn't be static, as they would depend on the selected
transformation.

David has submitted a patch series with a Transform control (actually
specified in CameraConfiguration, not as a control). We will need to
document how the properties in this patch, the Rotation property, and
the Transform control, interact.

How about adding in the PixelArraySize documentation a paragraph that
states

"All the coordinates are expressed relative to the default sensor
readout direction, without any transformation (such as horizontal and
vertical flipping) applied. When mapping them to the raw pixel buffer,
applications shall take any configured transformation into account."

?
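
To make the expectation on applications concrete, the remapping would be
something along these lines (a Python sketch; transform_rect is a
hypothetical helper, not part of any proposed API):

```python
def transform_rect(rect, array_size, hflip=False, vflip=False):
    """Map a (x, y, width, height) rectangle expressed in the default
    sensor readout direction to the coordinates it occupies in a raw
    buffer captured with the given flips applied."""
    x, y, w, h = rect
    array_w, array_h = array_size
    if hflip:
        # Mirror horizontally within the pixel array bounds.
        x = array_w - x - w
    if vflip:
        # Mirror vertically within the pixel array bounds.
        y = array_h - y - h
    return (x, y, w, h)
```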

> > > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)

s/describes/property describes/ (it feels weird to have a plural name
with a singular verb).

> > > +        rectangular areas of the raw data buffer, where optical black pixels are

s/buffer,/buffer/

> > > +        located and could be accessed for calibration and black level
> > > +        correction.

Maybe just "... of the raw data buffers that contain optical black
pixels." ? That duplicates the previous paragraph a bit, maybe we could
just drop it.

> > > +
> > > +        This property describes the position and size of optically black pixel

s/optically/optical/

> > > +        rectangles relatively to their position in the raw data buffer as stored

s/rectangles/regions/
s/relatively to their position in the raw data buffer/in the raw data buffer/

> > > +        in memory, which might differ from their actual physical location in the
> > > +        pixel array matrix.
> > > +
> > > +        It is important to note, in facts, that camera sensor might

s/facts/fact/
s/sensor/sensors/

> > > +        automatically re-order, shuffle or skip portions of their pixels array

s/re-order/reorder/

I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
sensors won't shuffle pixels randomly :-)

> > > +        matrix when transmitting data to the receiver.

Should we add an example ?

"For instance, a sensor may merge the top and bottom optical black
rectangles into a single rectangle, transmitted at the beginning of the
frame."

> > > +
> > > +        The pixel array contains several areas with different purposes,
> > > +        interleaved by lines and columns which are said not to be valid for
> > > +        capturing purposes. Invalid lines and columns are defined as invalid as
> > > +        they could be positioned too close to the chip margins or to the optical
> > > +        blank shielding placed on top of optical black pixels.

s/blank/black/

> > > +
> > > +                                PixelArraySize[0]
> > > +               /----------------------------------------------/
> > > +                  x1                                       x2
> > > +               +--o---------------------------------------o---+ /
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               ...          ...           ...     ...       ...
> > > +               ...          ...           ...     ...       ...
> > > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               +----------------------------------------------+ /
> > > +
> > > +        The readable pixel array matrix is composed by
> > > +        2 invalid lines (I)
> > > +        4 lines of valid optically black pixels (O)

s/optically/optical/

> > > +        2 invalid lines (I)
> > > +        n lines of valid pixel data (P)
> > > +        2 invalid lines (I)
> > > +
> > > +        And the position of the optical black pixel rectangles is defined by
> > > +
> > > +            PixelArrayOpticalBlackRectangles = {
> > > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
> >
> > s/\+ \K}/1 }/

:perldo s/\+ \K}/1 }/

;-)

> > > +               { x1, y3, 2, y4 - y3 + 1 },
> > > +               { x2, y3, 2, y4 - y3 + 1 },
> > > +            };
> > > +
> > > +        If the sensor, when required to output the full pixel array matrix,
> > > +        automatically skip the invalid lines and columns, producing the

s/skip/skips/

> > > +        following data buffer, when captured to memory

For the same reason as above, this could become

        If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory

> > > +
> > > +                                    PixelArraySize[0]
> > > +               /----------------------------------------------/
> > > +                                                           x1
> > > +               +--------------------------------------------o-+ /
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > > +               ...       ...          ...       ...         ... |
> > > +               ...       ...          ...       ...         ... |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               +----------------------------------------------+ /
> > > +
> > > +        The invalid lines and columns should not be reported as part of the

s/The/then the/

> > > +        PixelArraySize property in first place.
> > > +
> > > +        In this case, the position of the black pixel rectangles will be
> > > +
> > > +            PixelArrayOpticalBlackRectangles = {
> > > +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > +            };

I think part of this should be moved to the PixelArraySize property, as
it defines what the PixelArraySize property should report. Let's merge
this patch first though, and improvements can go on top (unless you
would prefer addressing this first :-)).
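
As a side note, the kind of consumer code I have in mind for these
rectangles is along these lines (a Python sketch, assuming the raw frame
is addressable as rows of samples; none of the names here are proposed
API):

```python
def mean_optical_black(raw, black_rects):
    """Estimate the black level by averaging all the samples covered by
    the PixelArrayOpticalBlackRectangles regions of a raw frame, stored
    as a list of rows."""
    total = 0
    count = 0
    for x, y, w, h in black_rects:
        # Each rectangle is (x, y, width, height) in buffer coordinates.
        for row in raw[y:y + h]:
            total += sum(row[x:x + w])
            count += w
    return total / count
```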

> > > +
> > > +        \todo Rename this property to Size once we will have property
> > > +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > > +
> > > +  - PixelArrayActiveAreas:
> > > +      type: Rectangle
> > > +      size: [1 x n]

[n] here too.

> > > +      description: |
> > > +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > > +        overlapping) portions of the camera sensor readable pixel matrix
> > > +        which are considered valid for image acquisition purposes.
> > > +
> > > +        Each rectangle is defined relatively to the PixelArraySize rectangle,
> > > +        with its top-left corner defined by its horizontal and vertical
> > > +        distances from the PixelArraySize rectangle top-left corner, placed in
> > > +        position (0, 0).

I think you can drop the last three lines, they're a bit confusing.

> > > +
> > > +        This property describes an arbitrary number of overlapping rectangles,
> > > +        with each rectangle representing the maximum image size that the camera
> > > +        sensor can produce for a particular aspect ratio.

Maybe even moving the previous paragraph here as "They are defined
relatively to the PixelArraySize rectangle." ?

> > > +
> > > +        When multiple rectangles are reported, they shall be ordered from the
> > > +        tallest to the shortest.
> > > +
> > > +        Example 1
> > > +        A camera sensor which only produces images in the 4:3 image resolution
> > > +        will report a single PixelArrayActiveAreas rectangle, from which all
> > > +        other image formats are obtained by either cropping the field-of-view
> > > +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> > > +        binning.
> > > +
> > > +                     PixelArraySize[0]
> > > +                    /----------------/
> > > +                      x1          x2
> > > +            (0,0)-> +-o------------o-+  /
> > > +                 y1 o +------------+ |  |
> > > +                    | |////////////| |  |
> > > +                    | |////////////| |  | PixelArraySize[1]
> > > +                    | |////////////| |  |
> > > +                 y2 o +------------+ |  |
> > > +                    +----------------+  /
> > > +
> > > +        The property reports a single rectangle
> > > +
> > > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > > +
> > > +        Example 2
> > > +        A camera sensor which can produce images in different native
> > > +        resolutions will report several overlapping rectangles, one for each
> > > +        natively supported resolution.
> > > +
> > > +                     PixelArraySize[0]
> > > +                    /------------------/
> > > +                      x1  x2    x3  x4
> > > +            (0,0)-> +o---o------o---o+  /
> > > +                 y1 o    +------+    |  |
> > > +                    |    |//////|    |  |
> > > +                 y2 o+---+------+---+|  |
> > > +                    ||///|//////|///||  | PixelArraySize[1]
> > > +                 y3 o+---+------+---+|  |
> > > +                    |    |//////|    |  |
> > > +                 y4 o    +------+    |  |
> > > +                    +----+------+----+  /
> > > +
> > > +        The property reports two rectangles
> > > +
> > > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > > +
> > > +        The first rectangle describes the maximum field-of-view of all image
> > > +        formats in the 4:3 resolutions, while the second one describes the
> > > +        maximum field of view for all image formats in the 16:9 resolutions.

Should we add one paragraph here as a clarification:

Multiple rectangles shall only be reported when the sensor can't capture
the pixels in the corner regions. If all the pixels in the (x1,y1) -
(x4,y4) area can be captured, the PixelArrayActiveAreas property shall
contain the single rectangle (x1,y1) - (x4,y4).

Or do you think it's overkill ?
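
To show why I think the ordering and the merging rule matter to
consumers, a hypothetical lookup could then be as simple as (Python
sketch, not proposed API):

```python
def active_area_for_ratio(active_areas, num, den):
    """Return the first PixelArrayActiveAreas rectangle whose aspect
    ratio matches the requested num:den ratio, or None. Relies on the
    rectangles being ordered from the tallest to the shortest, as the
    property documentation requires."""
    for x, y, w, h in active_areas:
        # Integer cross-multiplication avoids floating point comparison.
        if w * den == h * num:
            return (x, y, w, h)
    return None
```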

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> > > +
> > > +        \todo Rename this property to ActiveAreas once we will have property
> > > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > > +
> > >  ...
Jacopo Mondi July 31, 2020, 7:46 a.m. UTC | #6
Hi Laurent,
   thanks for the review

On Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> Thank you for the patch (and sorry for such a late reply)
>
> First of all, I really like this, I think it's really good work.
> Describing the pixel array size may seem like an easy task, but I know
> how much variation we can see in camera sensors, and how much research
> you've put into this. Coming up with the properties below that are both
> flexible and clearly defined was not a simple task.
>
> I have quite a few comments (you know me...), but nothing that requires
> significant changes. Since you've posted this series David has submitted
> a proposal for transformation (including rotation) support, which needs
> to be taken into account as Sakari pointed out. I've proposed in the
> review below a way to do so (don't worry, it's not a complete rewrite
> :-)), please feel free to tell me if you disagree with some (or all) of
> the proposals.
>
> On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:
> > On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> > > On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > > > Add definition of pixel array related properties.
> > > >
> > > > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > > > ---
> > > >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> > > >  1 file changed, 263 insertions(+)
> > > >
> > > > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > > > index ce627fa042ba..762d60881568 100644
> > > > --- a/src/libcamera/property_ids.yaml
> > > > +++ b/src/libcamera/property_ids.yaml
> > > > @@ -386,4 +386,267 @@ controls:
> > > >                                |                    |
> > > >                                |                    |
> > > >                                +--------------------+
> > > > +
> > > > +  - UnitCellSize:
> > > > +      type: Size
> > > > +      description: |
> > > > +        The pixel unit cell physical size, in nanometers.
> > > > +
> > > > +        The UnitCellSize properties defines the horizontal and vertical sizes
> > > > +        of a single pixel unit, including its active and non-active parts.
>
> Would it be useful to add "In other words, it expresses the horizontal
> and vertical distance between the top-left corners of adjacent pixels."
> ?
>

It won't hurt, I can add it...

> > > > +
> > > > +        The property can be used to calculate the physical size of the sensor's
> > > > +        pixel array area and for calibration purposes.
> > >
> > > Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> > > PixelArraySize?
> >
> > Not really, as PixelArrayPhysicalSize reports the physical dimension
> > of the full pixel array (readable and non readable pixels) while
> > PixelArraySize reports the size in pixels of the largest readable
> > image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> > which covers more space than the readable PixelArraySize.
> >
> > > > +
> > > > +  - PixelArrayPhysicalSize:
> > > > +      type: Size
> > > > +      description: |
> > > > +        The camera sensor full pixel array size, in nanometers.
> > > > +
> > > > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > > > +        (width x height) of the full pixel array matrix, including readable
> > > > +        and non-readable pixels.
> > > > +
> > > > +        \todo Rename this property to PhysicalSize once we will have property
> > > > +              categories (i.e. Properties::PixelArray::PhysicalSize)
>
> I understand Sakari's confusion regarding the UnitCellSize calculation
> from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
> answer, but I think we could actually drop the PixelArrayPhysicalSize
> property. I don't see what it could be used for, it describes the
> physical size of something that can't be accessed at all from the host,
> and should thus not matter at all.
>
> If we later find a need for this, we can always at it. At that point, I
> also think it would be best to express the property in pixel units
> (although depending on the use case that drives addition of this
> property, I could very well be wrong).
>

Ack, I'll drop

> > > > +
> > > > +  - PixelArraySize:
> > > > +      type: Size
> > > > +      description: |
> > > > +        The camera sensor pixel array readable area vertical and horizontal
> > > > +        sizes, in pixels.
> > > > +
> > > > +        The PixelArraySize property defines the size in pixel units of the
> > > > +        readable part of full pixel array matrix, including optically black
>
> s/optically/optical/
>
> I've also come across the term "optical dark pixels" which I think is a
> bit more accurate, as the pixels are covered and thus dark, but not
> fully black due to leakages currents (otherwise there would be no need
> to black level correction). "optical black" is used much more often
> though, so I think we should keep that term.
>
> > > > +        pixels used for calibration, pixels which are not considered valid for
> > > > +        capture and active pixels valid for image capture.
>
> Maybe "and active pixels containing valid image data" ?
>

I can change this

> > > > +
> > > > +        The property describes a rectangle whose top-left corner is placed
> > > > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > > > +        by the Size element transported by the property.
>
> I would move this paragraph below to [1].
>
> > > > +
> > > > +        The property describes the maximum size of the raw data produced by
> > > > +        the sensor, which might not correspond to the physical size of the
>
> Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
> receiver could drop data too ? For instance, the invalid lines between
> the optical black and valid regions could be sent by the sensor, with a
> specific data type, and dropped on the receiving side by a decision of
> the pipeline handler that chooses to configure the CSI-2 receiver to
> drop the corresponding data type.
>

Ok, we're really getting into the details here, but yes, I can change
this. However I wonder if that should not actually describe what the
sensor puts on the bus. How the receiver filters it is a separate issue.

> > > > +        sensor pixel array matrix, as some portions of the physical pixel
> > > > +        array matrix are not accessible and cannot be transmitted out.
> > > > +
> > > > +        For example, a pixel array matrix assembled as follow
>
> s/a pixel array/let's consider a pixel array/
> s/follow/follows/
>
> > > > +
> > > > +             +--------------------------------------------------+
> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             ...          ...           ...      ...          ...
> > > > +
> > > > +             ...          ...           ...      ...          ...
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > > +             +--------------------------------------------------+
> > > > +
> > > > +        composed by two lines of non-readable pixels (x) followed by N lines of
>
> s/composed by/starting with/
> s/(x)/(x),/
>
> > > > +        readable data (D) surrounded by two columns of non-readable pixels on
> > > > +        each side, only the readable portion is transmitted to the receiving
>
> s/each side,/each side, and ending with two more lines of non-readable pixels./
> s/only/Only/
>
> > > > +        side, defining the sizes of the largest possible buffer of raw data
>
> s/sizes/size/
>
> > > > +        that can be presented to applications.
> > > > +
> > > > +                               PixelArraySize[0]
>
> should this be PixelArraySize.width ? Same for height instead of [1],
> and in other locations below.
>

ack

> > > > +               /----------------------------------------------/
> > > > +               +----------------------------------------------+ /
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > > > +               ...        ...           ...      ...        ...
> > > > +               ...        ...           ...      ...        ...
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > > +               +----------------------------------------------+ /
> > > > +
>
> [1], with a small modification:
>
> "This defines a rectangle ... sizes are defined by this property."
>
> and merge it with the following paragraph in a single paragraph.
>
> > > > +        All other rectangles that describes portions of the pixel array, such as
>
> s/describes/describe/
>
> > > > +        the optical black pixels rectangles and active pixel areas are defined
>
> s/areas/areas,/
>
> > > > +        relatively to the rectangle described by this property.
> > > > +
> > > > +        \todo Rename this property to Size once we will have property
> > > > +              categories (i.e. Properties::PixelArray::Size)
> > > > +
> > > > +  - PixelArrayOpticalBlackRectangles:
> > > > +      type: Rectangle
> > > > +      size: [1 x n]
>
> This can just be [n], not all sizes need to be bi-dimensional.
>
> > > > +      description: |
> > > > +        The raw data buffer regions which contains optical black pixels
>
> s/raw data buffer/pixel array/ for the reason explained below in [2]
> s/regions/region(s)/ ?
> s/contains/contain/
>
> > > > +        considered valid for calibration purposes.
> > > > +
> > >
> > > How does this interact with the rotation property?
> > >
> > > If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> > > invert the flipping controls. I presume the same would apply to e.g. dark
> > > pixels in case they are read out, but that should be something for a driver
> > > to handle.
> >
> > Right. I think this also depends on how black pixels are read out. Here I
> > assumed the sensor is capable of reading out the whole PixelArraySize area,
> > and the receiver is able to capture its whole content in a single
> > buffer, where applications can then go and pick the areas of interest
> > using the information conveyed by this property. If this model holds,
> > then if sensor flipping is enabled, the information reported here is
> > indeed misleading, unless the chip architecture is
> > perfectly symmetric in vertical and horizontal dimensions (which seems
> > unlikely).
> >
> > > But if the frame layout isn't conveyed to the user space by the driver,
> > > then we need another way to convey the sensor is actually rotated. Oh well.
> >
> > Not sure how to interpret this part. Do you mean "convey that a
> > rotation is applied by, i.e., setting the canonical V/HFLIP controls,
> > which cause the sensor pixel array to be transmitted out with its
> > vertical/horizontal read-out directions inverted ?"
> >
> > We currently have a read-only property that reports the mounting
> > rotation (like the dt-property you have just reviewed :) I assume we
> > will have a rotation control that reports instead if a V/HFLIP is
> > applied, so that applications know how to compensate for that.
> >
> > > Not every sensor that is mounted upside down has flipping controls so the
> > > user space will still somehow need to manage the rotation in that case.
> >
> > I think it should, and I think we'll provide all information to be
> > able to do so.
>
> I think all the properties here should be expressed relative to the
> default sensor readout direction. Any transformation (such as h/v flip)
> will need to be taken into account by the application if the use cases
> require it. That would be the easiest course of action, otherwise these
> properties couldn't be static, as they would depend on the selected
> transformation.
>
> David has submitted a patch series with a Transform control (actually
> specified in CameraConfiguration, not as a control). We will need to
> document how the properties in this patch, the Rotation property, and
> the Transform control, interact.
>
> How about adding in the PixelArraySize documentation a paragraph that
> states
>
> "All the coordinates are expressed relative to the default sensor
> readout direction, without any transformation (such as horizontal and
> vertical flipping) applied. When mapping them to the raw pixel buffer,
> applications shall take any configured transformation into account."
>
> ?

That's the easiest way forward, sounds good to me

>
> > > > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)
>
> s/describes/property describes/ (it feels weird to have a plural name
> with a singular verb).
>
> > > > +        rectangular areas of the raw data buffer, where optical black pixels are
>
> s/buffer,/buffer/
>
> > > > +        located and could be accessed for calibration and black level
> > > > +        correction.
>
> Maybe just "... of the raw data buffers that contain optical black
> pixels." ? That duplicates the previous paragraph a bit, maybe we could
> just drop it.
>
> > > > +
> > > > +        This property describes the position and size of optically black pixel
>
> s/optically/optical/
>
> > > > +        rectangles relatively to their position in the raw data buffer as stored
>
> s/rectangles/regions/
> s/relatively to their position in the raw data buffer/in the raw data buffer/
>
> > > > +        in memory, which might differ from their actual physical location in the
> > > > +        pixel array matrix.
> > > > +
> > > > +        It is important to note, in facts, that camera sensor might
>
> s/facts/fact/
> s/sensors/sensor/
>
> > > > +        automatically re-order, shuffle or skip portions of their pixels array
>
> s/re-order/reorder/
>
> I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
> sensors won't shuffle pixels randomly :-)
>
> > > > +        matrix when transmitting data to the receiver.
>
> Should we add an example ?
>
> "For instance, a sensor may merge the top and bottom optical black
> rectangles into a single rectangle, transmitted at the beginning of the
> frame."
>
> > > > +
> > > > +        The pixel array contains several areas with different purposes,
> > > > +        interleaved by lines and columns which are said not to be valid for
> > > > +        capturing purposes. Invalid lines and columns are defined as invalid as
> > > > +        they could be positioned too close to the chip margins or to the optical
> > > > +        blank shielding placed on top of optical black pixels.
>
> s/blank/black/
>
> > > > +
> > > > +                                PixelArraySize[0]
> > > > +               /----------------------------------------------/
> > > > +                  x1                                       x2
> > > > +               +--o---------------------------------------o---+ /
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > > +               ...          ...           ...     ...       ...
> > > > +               ...          ...           ...     ...       ...
> > > > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > > +               +----------------------------------------------+ /
> > > > +
> > > > +        The readable pixel array matrix is composed by
> > > > +        2 invalid lines (I)
> > > > +        4 lines of valid optically black pixels (O)
>
> s/optically/optical/
>
> > > > +        2 invalid lines (I)
> > > > +        n lines of valid pixel data (P)
> > > > +        2 invalid lines (I)
> > > > +
> > > > +        And the position of the optical black pixel rectangles is defined by
> > > > +
> > > > +            PixelArrayOpticalBlackRectangles = {
> > > > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
> > >
> > > s/\+ \K}/1 }/
>
> :perldo s/\+ \K}/1 }/
>
> ;-)
>
> > > > +               { x1, y3, 2, y4 - y3 + 1 },
> > > > +               { x2, y3, 2, y4 - y3 + 1 },
> > > > +            };
> > > > +
> > > > +        If the sensor, when required to output the full pixel array matrix,
> > > > +        automatically skip the invalid lines and columns, producing the
>
> s/skip/skips/
>
> > > > +        following data buffer, when captured to memory
>
> For the same reason as above, this could become
>
>         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory
>
> > > > +
> > > > +                                    PixelArraySize[0]
> > > > +               /----------------------------------------------/
> > > > +                                                           x1
> > > > +               +--------------------------------------------o-+ /
> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > > > +               ...       ...          ...       ...         ... |
> > > > +               ...       ...          ...       ...         ... |
> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > > +               +----------------------------------------------+ /
> > > > +
> > > > +        The invalid lines and columns should not be reported as part of the
>
> s/The/then the/
>
> > > > +        PixelArraySize property in first place.
> > > > +
> > > > +        In this case, the position of the black pixel rectangles will be
> > > > +
> > > > +            PixelArrayOpticalBlackRectangles = {
> > > > +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > > > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > > +            };
>
> I think part of this should be moved to the PixelArraySize property, as
> it defines what the PixelArraySize property should report. Let's merge
> this patch first though, and improvements can go on top (unless you
> would prefer addressing this first :-)).
>
> > > > +
> > > > +        \todo Rename this property to Size once we will have property
> > > > +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > > > +
> > > > +  - PixelArrayActiveAreas:
> > > > +      type: Rectangle
> > > > +      size: [1 x n]
>
> [n] here too.
>
> > > > +      description: |
> > > > +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > > > +        overlapping) portions of the camera sensor readable pixel matrix
> > > > +        which are considered valid for image acquisition purposes.
> > > > +
> > > > +        Each rectangle is defined relatively to the PixelArraySize rectangle,
> > > > +        with its top-left corner defined by its horizontal and vertical
> > > > +        distances from the PixelArraySize rectangle top-left corner, placed in
> > > > +        position (0, 0).
>
> I think you can drop the last three lines, they're a bit confusing.
>
> > > > +
> > > > +        This property describes an arbitrary number of overlapping rectangles,
> > > > +        with each rectangle representing the maximum image size that the camera
> > > > +        sensor can produce for a particular aspect ratio.
>
> Maybe even moving the previous paragraph here as "They are defined
> relatively to the PixelArraySize rectangle." ?
>

Ack, I'll merge the two

> > > > +
> > > > +        When multiple rectangles are reported, they shall be ordered from the
> > > > +        tallest to the shortest.
> > > > +
> > > > +        Example 1
> > > > +        A camera sensor which only produces images in the 4:3 image resolution
> > > > +        will report a single PixelArrayActiveAreas rectangle, from which all
> > > > +        other image formats are obtained by either cropping the field-of-view
> > > > +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> > > > +        binning.
> > > > +
> > > > +                     PixelArraySize[0]
> > > > +                    /----------------/
> > > > +                      x1          x2
> > > > +            (0,0)-> +-o------------o-+  /
> > > > +                 y1 o +------------+ |  |
> > > > +                    | |////////////| |  |
> > > > +                    | |////////////| |  | PixelArraySize[1]
> > > > +                    | |////////////| |  |
> > > > +                 y2 o +------------+ |  |
> > > > +                    +----------------+  /
> > > > +
> > > > +        The property reports a single rectangle
> > > > +
> > > > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > > > +
> > > > +        Example 2
> > > > +        A camera sensor which can produce images in different native
> > > > +        resolutions will report several overlapping rectangles, one for each
> > > > +        natively supported resolution.
> > > > +
> > > > +                     PixelArraySize[0]
> > > > +                    /------------------/
> > > > +                      x1  x2    x3  x4
> > > > +            (0,0)-> +o---o------o---o+  /
> > > > +                 y1 o    +------+    |  |
> > > > +                    |    |//////|    |  |
> > > > +                 y2 o+---+------+---+|  |
> > > > +                    ||///|//////|///||  | PixelArraySize[1]
> > > > +                 y3 o+---+------+---+|  |
> > > > +                    |    |//////|    |  |
> > > > +                 y4 o    +------+    |  |
> > > > +                    +----+------+----+  /
> > > > +
> > > > +        The property reports two rectangles
> > > > +
> > > > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > > > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > > > +
> > > > +        The first rectangle describes the maximum field-of-view of all image
> > > > +        formats in the 4:3 resolutions, while the second one describes the
> > > > +        maximum field of view for all image formats in the 16:9 resolutions.
>
> Should we add one paragraph here as a clarification:
>
> Multiple rectangles shall only be reported when the sensor can't capture
> the pixels in the corner regions. If all the pixels in the (x1,y1) -
> (x4,y4) area can be captured, the PixelArrayActiveAreas property shall
> contain the single rectangle (x1,y1) - (x4,y4).

Isn't this the example above ? Re-stating the same here seems
confusing.

>
> Or do you think it's overkill ?
>
> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>

Thanks, I'll take comments in and resend a v7

> > > > +
> > > > +        \todo Rename this property to ActiveAreas once we will have property
> > > > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > > > +
> > > >  ...
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart July 31, 2020, 10:54 a.m. UTC | #7
Hi Jacopo,

On Fri, Jul 31, 2020 at 09:46:48AM +0200, Jacopo Mondi wrote:
> On Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:
> > Hi Jacopo,
> >
> > Thank you for the patch (and sorry for such a late reply)
> >
> > First of all, I really like this, I think it's really good work.
> > Describing the pixel array size may seem like an easy task, but I know
> > how much variation we can see in camera sensors, and how much research
> > you've put into this. Coming up with the properties below that are both
> > flexible and clearly defined was not a simple task.
> >
> > I have quite a few comments (you know me...), but nothing that requires
> > significant changes. Since you've posted this series David has submitted
> > a proposal for transformation (including rotation) support, which needs
> > to be taken into account as Sakari pointed out. I've proposed in the
> > review below a way to do so (don't worry, it's not a complete rewrite
> > :-)), please feel free to tell me if you disagree with some (or all) of
> > the proposals.
> >
> > On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:
> >> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> >>> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> >>>> Add definition of pixel array related properties.
> >>>>
> >>>> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> >>>> ---
> >>>>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> >>>>  1 file changed, 263 insertions(+)
> >>>>
> >>>> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> >>>> index ce627fa042ba..762d60881568 100644
> >>>> --- a/src/libcamera/property_ids.yaml
> >>>> +++ b/src/libcamera/property_ids.yaml
> >>>> @@ -386,4 +386,267 @@ controls:
> >>>>                                |                    |
> >>>>                                |                    |
> >>>>                                +--------------------+
> >>>> +
> >>>> +  - UnitCellSize:
> >>>> +      type: Size
> >>>> +      description: |
> >>>> +        The pixel unit cell physical size, in nanometers.
> >>>> +
> >>>> +        The UnitCellSize properties defines the horizontal and vertical sizes
> >>>> +        of a single pixel unit, including its active and non-active parts.
> >
> > Would it be useful to add "In other words, it expresses the horizontal
> > and vertical distance between the top-left corners of adjacent pixels."
> > ?
> 
> It won't hurt, I can add it...
> 
> >>>> +
> >>>> +        The property can be used to calculate the physical size of the sensor's
> >>>> +        pixel array area and for calibration purposes.
> >>>
> >>> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> >>> PixelArraySize?
> >>
> >> Not really, as PixelArrayPhysicalSize reports the physical dimension
> >> of the full pixel array (readable and non readable pixels) while
> >> PixelArraySize reports the size in pixel of the largest readable
> >> image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> >> which covers more space that the readable PixelArraySize.
> >>
> >>>> +
> >>>> +  - PixelArrayPhysicalSize:
> >>>> +      type: Size
> >>>> +      description: |
> >>>> +        The camera sensor full pixel array size, in nanometers.
> >>>> +
> >>>> +        The PixelArrayPhysicalSize property reports the physical dimensions
> >>>> +        (width x height) of the full pixel array matrix, including readable
> >>>> +        and non-readable pixels.
> >>>> +
> >>>> +        \todo Rename this property to PhysicalSize once we will have property
> >>>> +              categories (i.e. Properties::PixelArray::PhysicalSize)
> >
> > I understand Sakari's confusion regarding the UnitCellSize calculation
> > from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
> > answer, but I think we could actually drop the PixelArrayPhysicalSize
> > property. I don't see what it could be used for, it describes the
> > physical size of something that can't be accessed at all from the host,
> > and should thus not matter at all.
> >
> > If we later find a need for this, we can always at it. At that point, I
> > also think it would be best to express the property in pixel units
> > (although depending on the use case that drives addition of this
> > property, I could very well be wrong).
> 
> Ack, I'll drop
> 
> >>>> +
> >>>> +  - PixelArraySize:
> >>>> +      type: Size
> >>>> +      description: |
> >>>> +        The camera sensor pixel array readable area vertical and horizontal
> >>>> +        sizes, in pixels.
> >>>> +
> >>>> +        The PixelArraySize property defines the size in pixel units of the
> >>>> +        readable part of full pixel array matrix, including optically black
> >
> > s/optically/optical/
> >
> > I've also come across the term "optical dark pixels" which I think is a
> > bit more accurate, as the pixels are covered and thus dark, but not
> > fully black due to leakages currents (otherwise there would be no need
> > to black level correction). "optical black" is used much more often
> > though, so I think we should keep that term.
> >
> >>>> +        pixels used for calibration, pixels which are not considered valid for
> >>>> +        capture and active pixels valid for image capture.
> >
> > Maybe "and active pixels containing valid image data" ?
> 
> I can change this
> 
> >>>> +
> >>>> +        The property describes a rectangle whose top-left corner is placed
> >>>> +        in position (0, 0) and whose vertical and horizontal sizes are defined
> >>>> +        by the Size element transported by the property.
> >
> > I would move this paragraph below to [1].
> >
> >>>> +
> >>>> +        The property describes the maximum size of the raw data produced by
> >>>> +        the sensor, which might not correspond to the physical size of the
> >
> > Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
> > receiver could drop data too ? For instance, the invalid lines between
> > the optical black and valid regions could be sent by the sensor, with a
> > specific data type, and dropped on the receiving side by a decision of
> > the pipeline handler that chooses to configure the CSI-2 receiver to
> > drop the corresponding data type.
> 
> Ok, we're really getting into the details here, but yes, I can change this.
> However I wonder if that should not actually describe what the sensor puts
> on the bus. How the receiver filters it is a separate issue.

I've been pondering about that, and concluded that, from an application
point of view (as properties are designed to be consumed by
applications), what matters is what is being captured. Even if we
disregard the CSI-2 receiver configuration, it's conceivable that the
sensor would output invalid lines with a data type that the CSI-2
receiver isn't able to capture due to the hardware being hardcoded to
skip some data types.

I agree it won't make much difference in practice; application
developers will likely not think about all this.

> >>>> +        sensor pixel array matrix, as some portions of the physical pixel
> >>>> +        array matrix are not accessible and cannot be transmitted out.
> >>>> +
> >>>> +        For example, a pixel array matrix assembled as follow
> >
> > s/a pixel array/let's consider a pixel array/
> > s/follow/follows/
> >
> >>>> +
> >>>> +             +--------------------------------------------------+
> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             ...          ...           ...      ...          ...
> >>>> +
> >>>> +             ...          ...           ...      ...          ...
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> >>>> +             +--------------------------------------------------+
> >>>> +
> >>>> +        composed by two lines of non-readable pixels (x) followed by N lines of
> >
> > s/composed by/starting with/
> > s/(x)/(x),/
> >
> >>>> +        readable data (D) surrounded by two columns of non-readable pixels on
> >>>> +        each side, only the readable portion is transmitted to the receiving
> >
> > s/each side,/each side, and ending with two more lines of non-readable pixels./
> > s/only/Only/
> >
> >>>> +        side, defining the sizes of the largest possible buffer of raw data
> >
> > s/sizes/size/
> >
> >>>> +        that can be presented to applications.
> >>>> +
> >>>> +                               PixelArraySize[0]
> >
> > should this be PixelArraySize.width ? Same for height instead of [1],
> > and in other locations below.
> 
> ack
> 
> >>>> +               /----------------------------------------------/
> >>>> +               +----------------------------------------------+ /
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> >>>> +               ...        ...           ...      ...        ...
> >>>> +               ...        ...           ...      ...        ...
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> >>>> +               +----------------------------------------------+ /
> >>>> +
> >
> > [1], with a small modification:
> >
> > "This defines a rectangle ... sizes are defined by this property."
> >
> > and merge it with the following paragraph in a single paragraph.
> >
> >>>> +        All other rectangles that describes portions of the pixel array, such as
> >
> > s/describes/describe/
> >
> >>>> +        the optical black pixels rectangles and active pixel areas are defined
> >
> > s/areas/areas,/
> >
> >>>> +        relatively to the rectangle described by this property.
> >>>> +
> >>>> +        \todo Rename this property to Size once we will have property
> >>>> +              categories (i.e. Properties::PixelArray::Size)
> >>>> +
> >>>> +  - PixelArrayOpticalBlackRectangles:
> >>>> +      type: Rectangle
> >>>> +      size: [1 x n]
> >
> > This can just be [n], not all sizes need to be bi-dimensional.
> >
> >>>> +      description: |
> >>>> +        The raw data buffer regions which contains optical black pixels
> >
> > s/raw data buffer/pixel array/ for the reason explained below in [2]
> > s/regions/region(s)/ ?
> > s/contains/contain/
> >
> >>>> +        considered valid for calibration purposes.
> >>>> +
> >>>
> >>> How does this interact with the rotation property?
> >>>
> >>> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> >>> invert the flipping controls. I presume the same would apply to e.g. dark
> >>> pixels in case they are read out, but that should be something for a driver
> >>> to handle.
> >>
> >> Right. I think this also depends on how black pixels are read out. Here I
> >> assumed the sensor is capable of reading out the whole PixelArraySize area,
> >> and the receiver is able to capture its whole content in a single
> >> buffer, where applications can then go and pick the areas of interest
> >> using the information conveyed by this property. If this model holds,
> >> then if sensor flipping is enabled, the information reported here is
> >> indeed misleading, unless the chip architecture is
> >> perfectly symmetric in vertical and horizontal dimensions (which seems
> >> unlikely).
> >>
> >>> But if the frame layout isn't conveyed to the user space by the driver,
> >>> then we need another way to convey the sensor is actually rotated. Oh well.
> >>
> >> Not sure how to interpret this part. Do you mean "convey that a
> >> rotation is applied by, i.e., setting the canonical V/HFLIP controls,
> >> which cause the sensor pixel array to be transmitted out with its
> >> vertical/horizontal read-out directions inverted ?"
> >>
> >> We currently have a read-only property that reports the mounting
> >> rotation (like the dt-property you have just reviewed :) I assume we
> >> will have a rotation control that reports instead if a V/HFLIP is
> >> applied, so that applications know how to compensate for that.
> >>
> >>> Not every sensor that is mounted upside down has flipping controls so the
> >>> user space will still somehow need to manage the rotation in that case.
> >>
> >> I think it should, and I think we'll provide all information to be
> >> able to do so.
> >
> > I think all the properties here should be expressed relative to the
> > default sensor readout direction. Any transformation (such as h/v flip)
> > will need to be taken into account by the application if the use cases
> > require it. That would be the easiest course of action, otherwise these
> > properties couldn't be static, as they would depend on the selected
> > transformation.
> >
> > David has submitted a patch series with a Transform control (actually
> > specified in CameraConfiguration, not as a control). We will need to
> > document how the properties in this patch, the Rotation property, and
> > the Transform control, interact.
> >
> > How about adding in the PixelArraySize documentation a paragraph that
> > states
> >
> > "All the coordinates are expressed relative to the default sensor
> > readout direction, without any transformation (such as horizontal and
> > vertical flipping) applied. When mapping them to the raw pixel buffer,
> > applications shall take any configured transformation into account."
> >
> > ?
> 
> That's the easiest way forward, sounds good to me
> 
> >
> >>>> +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)
> >
> > s/describes/property describes/ (it feels weird to have a plural name
> > with a singular verb).
> >
> >>>> +        rectangular areas of the raw data buffer, where optical black pixels are
> >
> > s/buffer,/buffer/
> >
> >>>> +        located and could be accessed for calibration and black level
> >>>> +        correction.
> >
> > Maybe just "... of the raw data buffers that contain optical black
> > pixels." ? That duplicates the previous paragraph a bit, maybe we could
> > just drop it.
> >
> >>>> +
> >>>> +        This property describes the position and size of optically black pixel
> >
> > s/optically/optical/
> >
> >>>> +        rectangles relatively to their position in the raw data buffer as stored
> >
> > s/rectangles/regions/
> > s/relatively to their position in the raw data buffer/in the raw data buffer/
> >
> >>>> +        in memory, which might differ from their actual physical location in the
> >>>> +        pixel array matrix.
> >>>> +
> >>>> +        It is important to note, in facts, that camera sensor might
> >
> > s/facts/fact/
> > s/sensors/sensor/
> >
> >>>> +        automatically re-order, shuffle or skip portions of their pixels array
> >
> > s/re-order/reorder/
> >
> > I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
> > sensors won't shuffle pixels randomly :-)
> >
> >>>> +        matrix when transmitting data to the receiver.
> >
> > Should we add an example ?
> >
> > "For instance, a sensor may merge the top and bottom optical black
> > rectangles into a single rectangle, transmitted at the beginning of the
> > frame."
> >
> >>>> +
> >>>> +        The pixel array contains several areas with different purposes,
> >>>> +        interleaved by lines and columns which are said not to be valid for
> >>>> +        capturing purposes. Invalid lines and columns are defined as invalid as
> >>>> +        they could be positioned too close to the chip margins or to the optical
> >>>> +        blank shielding placed on top of optical black pixels.
> >
> > s/blank/black/
> >
> >>>> +
> >>>> +                                PixelArraySize[0]
> >>>> +               /----------------------------------------------/
> >>>> +                  x1                                       x2
> >>>> +               +--o---------------------------------------o---+ /
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> >>>> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> >>>> +               ...          ...           ...     ...       ...
> >>>> +               ...          ...           ...     ...       ...
> >>>> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> >>>> +               +----------------------------------------------+ /
> >>>> +
> >>>> +        The readable pixel array matrix is composed by
> >>>> +        2 invalid lines (I)
> >>>> +        4 lines of valid optically black pixels (O)
> >
> > s/optically/optical/
> >
> >>>> +        2 invalid lines (I)
> >>>> +        n lines of valid pixel data (P)
> >>>> +        2 invalid lines (I)
> >>>> +
> >>>> +        And the position of the optical black pixel rectangles is defined by
> >>>> +
> >>>> +            PixelArrayOpticalBlackRectangles = {
> >>>> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
> >>>
> >>> s/\+ \K}/1 }/
> >
> > :perldo s/\+ \K}/1 }/
> >
> > ;-)
> >
> >>>> +               { x1, y3, 2, y4 - y3 + 1 },
> >>>> +               { x2, y3, 2, y4 - y3 + 1 },
> >>>> +            };
> >>>> +
> >>>> +        If the sensor, when required to output the full pixel array matrix,
> >>>> +        automatically skip the invalid lines and columns, producing the
> >
> > s/skip/skips/
> >
> >>>> +        following data buffer, when captured to memory
> >
> > For the same reason as above, this could become
> >
> >         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory
> >
> >>>> +
> >>>> +                                    PixelArraySize[0]
> >>>> +               /----------------------------------------------/
> >>>> +                                                           x1
> >>>> +               +--------------------------------------------o-+ /
> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> >>>> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> >>>> +               ...       ...          ...       ...         ... |
> >>>> +               ...       ...          ...       ...         ... |
> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> >>>> +               +----------------------------------------------+ /
> >>>> +
> >>>> +        The invalid lines and columns should not be reported as part of the
> >
> > s/The/then the/
> >
> >>>> +        PixelArraySize property in first place.
> >>>> +
> >>>> +        In this case, the position of the black pixel rectangles will be
> >>>> +
> >>>> +            PixelArrayOpticalBlackRectangles = {
> >>>> +               { 0, 0, y1 + 1, PixelArraySize[0] },
> >>>> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> >>>> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> >>>> +            };
> >
> > I think part of this should be moved to the PixelArraySize property, as
> > it defines what the PixelArraySize property should report. Let's merge
> > this patch first though, and improvements can go on top (unless you
> > would prefer addressing this first :-)).
> >
> >>>> +
> >>>> +        \todo Rename this property to Size once we will have property
> >>>> +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> >>>> +
> >>>> +  - PixelArrayActiveAreas:
> >>>> +      type: Rectangle
> >>>> +      size: [1 x n]
> >
> > [n] here too.
> >
> >>>> +      description: |
> >>>> +        The PixelArrayActiveAreas property defines the (possibly multiple and
> >>>> +        overlapping) portions of the camera sensor readable pixel matrix
> >>>> +        which are considered valid for image acquisition purposes.
> >>>> +
> >>>> +        Each rectangle is defined relatively to the PixelArraySize rectangle,
> >>>> +        with its top-left corner defined by its horizontal and vertical
> >>>> +        distances from the PixelArraySize rectangle top-left corner, placed in
> >>>> +        position (0, 0).
> >
> > I think you can drop the last three lines, they're a bit confusing.
> >
> >>>> +
> >>>> +        This property describes an arbitrary number of overlapping rectangles,
> >>>> +        with each rectangle representing the maximum image size that the camera
> >>>> +        sensor can produce for a particular aspect ratio.
> >
> > Maybe even moving the previous paragraph here as "They are defined
> > relatively to the PixelArraySize rectangle." ?
> >
> 
> Ack, I'll merge the two
> 
> >>>> +
> >>>> +        When multiple rectangles are reported, they shall be ordered from the
> >>>> +        tallest to the shortest.
> >>>> +
> >>>> +        Example 1
> >>>> +        A camera sensor which only produces images in the 4:3 image resolution
> >>>> +        will report a single PixelArrayActiveAreas rectangle, from which all
> >>>> +        other image formats are obtained by either cropping the field-of-view
> >>>> +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> >>>> +        binning.
> >>>> +
> >>>> +                     PixelArraySize[0]
> >>>> +                    /----------------/
> >>>> +                      x1          x2
> >>>> +            (0,0)-> +-o------------o-+  /
> >>>> +                 y1 o +------------+ |  |
> >>>> +                    | |////////////| |  |
> >>>> +                    | |////////////| |  | PixelArraySize[1]
> >>>> +                    | |////////////| |  |
> >>>> +                 y2 o +------------+ |  |
> >>>> +                    +----------------+  /
> >>>> +
> >>>> +        The property reports a single rectangle
> >>>> +
> >>>> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> >>>> +
> >>>> +        Example 2
> >>>> +        A camera sensor which can produce images in different native
> >>>> +        resolutions will report several overlapping rectangles, one for each
> >>>> +        natively supported resolution.
> >>>> +
> >>>> +                     PixelArraySize[0]
> >>>> +                    /------------------/
> >>>> +                      x1  x2    x3  x4
> >>>> +            (0,0)-> +o---o------o---o+  /
> >>>> +                 y1 o    +------+    |  |
> >>>> +                    |    |//////|    |  |
> >>>> +                 y2 o+---+------+---+|  |
> >>>> +                    ||///|//////|///||  | PixelArraySize[1]
> >>>> +                 y3 o+---+------+---+|  |
> >>>> +                    |    |//////|    |  |
> >>>> +                 y4 o    +------+    |  |
> >>>> +                    +----+------+----+  /
> >>>> +
> >>>> +        The property reports two rectangles
> >>>> +
> >>>> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> >>>> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> >>>> +
> >>>> +        The first rectangle describes the maximum field-of-view of all image
> >>>> +        formats in the 4:3 resolutions, while the second one describes the
> >>>> +        maximum field of view for all image formats in the 16:9 resolutions.
> >
> > Should we add one paragraph here as a clarification:
> >
> > Multiple rectangles shall only be reported when the sensor can't capture
> > the pixels in the corner regions. If all the pixels in the (x1,y1) -
> > (x4,y4) area can be captured, the PixelArrayActiveAreas property shall
> > contains the single rectangle (x1,y1) - (x4,y4).
> 
> Isn't this the example above ? Re-stating the same here seems
> confusing.

What I meant to say is that a camera should not implement example 2 just
because the pipeline handler developer wants to expose different aspect
ratios, when the hardware actually implements example 1. For instance,
consider a sensor with a native resolution of 3840x2880 (4:3) that can
be captured fully; a 16:9 picture can then be captured by cropping to
3840x2160. In that case PixelArrayActiveAreas should only report
(0,0)/3840x2880, not both (0,0)/3840x2880 and (0,360)/3840x2160.

But maybe I'm just worrying too much and nobody will think about doing
that :-)
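The rule above boils down to a simple containment check: report a single active-area rectangle whenever the alternative aspect ratios are plain crops of it. A small sketch (Python, with hypothetical rectangle tuples — this is not libcamera API code) illustrates it with the 3840x2880 sensor discussed here:

```python
# Rectangles are (x, y, width, height) tuples, matching the notation used
# in the property examples above.
def contains(outer, inner):
    """Return True if 'inner' lies entirely within 'outer'."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ix >= ox and iy >= oy and
            ix + iw <= ox + ow and iy + ih <= oy + oh)

full_4_3 = (0, 0, 3840, 2880)     # native 4:3 readout
crop_16_9 = (0, 360, 3840, 2160)  # 16:9 view obtained by cropping

# The 16:9 crop is fully contained in the 4:3 readout, so a single
# PixelArrayActiveAreas rectangle is enough.
print(contains(full_4_3, crop_16_9))  # True
```

Example 2 only applies when this check fails, i.e. when the sensor genuinely cannot capture the corner regions.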

> > Or do you think it's overkill ?
> >
> > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> 
> Thanks, I'll take comments in and resend a v7
> 
> >>>> +
> >>>> +        \todo Rename this property to ActiveAreas once we will have property
> >>>> +              categories (i.e. Properties::PixelArray::ActiveAreas)
> >>>> +
> >>>>  ...
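For what it's worth, the black level calibration use case these rectangles enable could look roughly like this on the application side (a sketch only, assuming a raw frame already captured into a 2D array; the NumPy-based access and the tiny 6x8 frame are illustrative assumptions, not libcamera API):

```python
import numpy as np

def black_level(raw, rectangles):
    """Average the optical black pixels listed in 'rectangles'.

    raw: 2D array holding the raw frame, laid out as described by
    PixelArraySize.
    rectangles: iterable of (x, y, width, height) tuples, as reported
    by the PixelArrayOpticalBlackRectangles property.
    """
    samples = [raw[y:y + h, x:x + w].ravel() for x, y, w, h in rectangles]
    return float(np.concatenate(samples).mean())

# A 6x8 frame whose first two lines are optical black pixels at level 64,
# while the active pixels sit at 1000.
frame = np.full((6, 8), 1000, dtype=np.uint16)
frame[0:2, :] = 64
print(black_level(frame, [(0, 0, 8, 2)]))  # 64.0
```

The point being that the property gives applications enough information to locate the optical black samples without knowing anything about the sensor's physical layout.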
Jacopo Mondi July 31, 2020, 11:45 a.m. UTC | #8
On Fri, Jul 31, 2020 at 01:54:46PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> On Fri, Jul 31, 2020 at 09:46:48AM +0200, Jacopo Mondi wrote:
> > On Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:
> > > Hi Jacopo,
> > >
> > > Thank you for the patch (and sorry for such a late reply)
> > >
> > > First of all, I really like this, I think it's really good work.
> > > Describing the pixel array size may seem like an easy task, but I know
> > > how much variation we can see in camera sensors, and how much research
> > > you've put into this. Coming up with the properties below that are both
> > > flexible and clearly defined was not a simple task.
> > >
> > > I have quite a few comments (you know me...), but nothing that requires
> > > significant changes. Since you've posted this series David has submitted
> > > a proposal for transformation (including rotation) support, which needs
> > > to be taken into account as Sakari pointed out. I've proposed in the
> > > review below a way to do so (don't worry, it's not a complete rewrite
> > > :-)), please feel free to tell me if you disagree with some (or all) of
> > > the proposals.
> > >
> > > On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:
> > >> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> > >>> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > >>>> Add definition of pixel array related properties.
> > >>>>
> > >>>> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > >>>> ---
> > >>>>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> > >>>>  1 file changed, 263 insertions(+)
> > >>>>
> > >>>> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > >>>> index ce627fa042ba..762d60881568 100644
> > >>>> --- a/src/libcamera/property_ids.yaml
> > >>>> +++ b/src/libcamera/property_ids.yaml
> > >>>> @@ -386,4 +386,267 @@ controls:
> > >>>>                                |                    |
> > >>>>                                |                    |
> > >>>>                                +--------------------+
> > >>>> +
> > >>>> +  - UnitCellSize:
> > >>>> +      type: Size
> > >>>> +      description: |
> > >>>> +        The pixel unit cell physical size, in nanometers.
> > >>>> +
> > >>>> +        The UnitCellSize properties defines the horizontal and vertical sizes
> > >>>> +        of a single pixel unit, including its active and non-active parts.
> > >
> > > Would it be useful to add "In other words, it expresses the horizontal
> > > and vertical distance between the top-left corners of adjacent pixels."
> > > ?
> >
> > It won't hurt, I can add it...
> >
> > >>>> +
> > >>>> +        The property can be used to calculate the physical size of the sensor's
> > >>>> +        pixel array area and for calibration purposes.
> > >>>
> > >>> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> > >>> PixelArraySize?
> > >>
> > >> Not really, as PixelArrayPhysicalSize reports the physical dimension
> > >> of the full pixel array (readable and non readable pixels) while
> > >> PixelArraySize reports the size in pixel of the largest readable
> > >> image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> > >> which covers more space than the readable PixelArraySize.
> > >>
> > >>>> +
> > >>>> +  - PixelArrayPhysicalSize:
> > >>>> +      type: Size
> > >>>> +      description: |
> > >>>> +        The camera sensor full pixel array size, in nanometers.
> > >>>> +
> > >>>> +        The PixelArrayPhysicalSize property reports the physical dimensions
> > >>>> +        (width x height) of the full pixel array matrix, including readable
> > >>>> +        and non-readable pixels.
> > >>>> +
> > >>>> +        \todo Rename this property to PhysicalSize once we will have property
> > >>>> +              categories (i.e. Properties::PixelArray::PhysicalSize)
> > >
> > > I understand Sakari's confusion regarding the UnitCellSize calculation
> > > from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
> > > answer, but I think we could actually drop the PixelArrayPhysicalSize
> > > property. I don't see what it could be used for, it describes the
> > > physical size of something that can't be accessed at all from the host,
> > > and should thus not matter at all.
> > >
> > > If we later find a need for this, we can always add it. At that point, I
> > > also think it would be best to express the property in pixel units
> > > (although depending on the use case that drives addition of this
> > > property, I could very well be wrong).
> >
> > Ack, I'll drop
> >
> > >>>> +
> > >>>> +  - PixelArraySize:
> > >>>> +      type: Size
> > >>>> +      description: |
> > >>>> +        The camera sensor pixel array readable area vertical and horizontal
> > >>>> +        sizes, in pixels.
> > >>>> +
> > >>>> +        The PixelArraySize property defines the size in pixel units of the
> > >>>> +        readable part of full pixel array matrix, including optically black
> > >
> > > s/optically/optical/
> > >
> > > I've also come across the term "optical dark pixels" which I think is a
> > > bit more accurate, as the pixels are covered and thus dark, but not
> > > fully black due to leakages currents (otherwise there would be no need
> > > to black level correction). "optical black" is used much more often
> > > though, so I think we should keep that term.
> > >
> > >>>> +        pixels used for calibration, pixels which are not considered valid for
> > >>>> +        capture and active pixels valid for image capture.
> > >
> > > Maybe "and active pixels containing valid image data" ?
> >
> > I can change this
> >
> > >>>> +
> > >>>> +        The property describes a rectangle whose top-left corner is placed
> > >>>> +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > >>>> +        by the Size element transported by the property.
> > >
> > > I would move this paragraph below to [1].
> > >
> > >>>> +
> > >>>> +        The property describes the maximum size of the raw data produced by
> > >>>> +        the sensor, which might not correspond to the physical size of the
> > >
> > > Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
> > > receiver could drop data too ? For instance, the invalid lines between
> > > the optical black and valid regions could be sent by the sensor, with a
> > > specific data type, and dropped on the receiving side by a decision of
> > > the pipeline handler that chooses to configure the CSI-2 receiver to
> > > drop the corresponding data type.
> >
> > Ok, we're really nitpicking here, but yes, I can change this. However I
> > wonder if that should not actually describe what the sensor puts on
> > the bus. How the receiver filters it is a separate issue.
>
> I've been pondering about that, and concluded that, from an application
> point of view (as properties are designed to be consumed by
> applications), what matters is what is being captured. Even if we
> disregard the CSI-2 receiver configuration, it's conceivable that the
> sensor would output invalid lines with a data type that the CSI-2
> receiver isn't able to capture due to the hardware being hardcoded to
> skip some data types.
>
> I agree it won't make much difference in practice, application
> developers will likely not think about all this.
>
> > >>>> +        sensor pixel array matrix, as some portions of the physical pixel
> > >>>> +        array matrix are not accessible and cannot be transmitted out.
> > >>>> +
> > >>>> +        For example, a pixel array matrix assembled as follow
> > >
> > > s/a pixel array/let's consider a pixel array/
> > > s/follow/follows/
> > >
> > >>>> +
> > >>>> +             +--------------------------------------------------+
> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             ...          ...           ...      ...          ...
> > >>>> +
> > >>>> +             ...          ...           ...      ...          ...
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > >>>> +             +--------------------------------------------------+
> > >>>> +
> > >>>> +        composed by two lines of non-readable pixels (x) followed by N lines of
> > >
> > > s/composed by/starting with/
> > > s/(x)/(x),/
> > >
> > >>>> +        readable data (D) surrounded by two columns of non-readable pixels on
> > >>>> +        each side, only the readable portion is transmitted to the receiving
> > >
> > > s/each side,/each side, and ending with two more lines of non-readable pixels./
> > > s/only/Only/
> > >
> > >>>> +        side, defining the sizes of the largest possible buffer of raw data
> > >
> > > s/sizes/size/
> > >
> > >>>> +        that can be presented to applications.
> > >>>> +
> > >>>> +                               PixelArraySize[0]
> > >
> > > should this be PixelArraySize.width ? Same for height instead of [1],
> > > and in other locations below.
> >
> > ack
> >
> > >>>> +               /----------------------------------------------/
> > >>>> +               +----------------------------------------------+ /
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > >>>> +               ...        ...           ...      ...        ...
> > >>>> +               ...        ...           ...      ...        ...
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > >>>> +               +----------------------------------------------+ /
> > >>>> +
> > >
> > > [1], with a small modification:
> > >
> > > "This defines a rectangle ... sizes are defined by this property."
> > >
> > > and merge it with the following paragraph in a single paragraph.
> > >
> > >>>> +        All other rectangles that describes portions of the pixel array, such as
> > >
> > > s/describes/describe/
> > >
> > >>>> +        the optical black pixels rectangles and active pixel areas are defined
> > >
> > > s/areas/areas,/
> > >
> > >>>> +        relatively to the rectangle described by this property.
> > >>>> +
> > >>>> +        \todo Rename this property to Size once we will have property
> > >>>> +              categories (i.e. Properties::PixelArray::Size)
> > >>>> +
> > >>>> +  - PixelArrayOpticalBlackRectangles:
> > >>>> +      type: Rectangle
> > >>>> +      size: [1 x n]
> > >
> > > This can just be [n], not all sizes need to be bi-dimensional.
> > >
> > >>>> +      description: |
> > >>>> +        The raw data buffer regions which contains optical black pixels
> > >
> > > s/raw data buffer/pixel array/ for the reason explained below in [2]
> > > s/regions/region(s)/ ?
> > > s/contains/contain/
> > >
> > >>>> +        considered valid for calibration purposes.
> > >>>> +
> > >>>
> > >>> How does this interact with the rotation property?
> > >>>
> > >>> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> > >>> invert the flipping controls. I presume the same would apply to e.g. dark
> > >>> pixels in case they are read out, but that should be something for a driver
> > >>> to handle.
> > >>
> > >> Right. I think this also depends how black pixels are read out. I here
> > >> assumed the sensor is capable of reading out the whole PixelArraySize area,
> > >> and the receiver is able to capture its whole content in a single
> > >> buffer, where application can then go and pick the areas of interest
> > >> using the information conveyed by this property. If this model holds,
> > >> then if sensor flipping is enabled, the information reported here is
> > >> indeed misleading, unless the chip architecture is
> > >> perfectly symmetric in vertical and horizontal dimensions (which seems
> > >> unlikely).
> > >>
> > >>> But if the frame layout isn't conveyed to the user space by the driver,
> > >>> then we need another way to convey the sensor is actually rotated. Oh well.
> > >>
> > >> Not sure how to interpret this part. Do you mean "convey that a
> > >> rotation is applied by, i.e. setting the canonical V/HFLIP controls,
> > >> which cause the sensor pixel array to be transmitted out with its
> > >> vertical/horizontal read-out directions inverted ?"
> > >>
> > >> We currently have a read-only property that reports the mounting
> > >> rotation (like the dt-property you have just reviewed :) I assume we
> > >> will have a rotation control that reports instead if a V/HFLIP is
> > >> applied, so that applications know how to compensate for it."
> > >>
> > >>> Not every sensor that is mounted upside down has flipping controls so the
> > >>> user space will still somehow need to manage the rotation in that case.
> > >>
> > >> I think it should, and I think we'll provide all information to be
> > >> able to do so.
> > >
> > > I think all the properties here should be expressed relative to the
> > > default sensor readout direction. Any transformation (such as h/v flip)
> > > will need to be taken into account by the application if the use cases
> > > require it. That would be the easiest course of action, otherwise these
> > > properties couldn't be static, as they would depend on the selected
> > > transformation.
> > >
> > > David has submitted a patch series with a Transform control (actually
> > > specified in CameraConfiguration, not as a control). We will need to
> > > document how the properties in this patch, the Rotation property, and
> > > the Transform control, interact.
> > >
> > > How about adding in the PixelArraySize documentation a paragraph that
> > > states
> > >
> > > "All the coordinates are expressed relative to the default sensor
> > > readout direction, without any transformation (such as horizontal and
> > > vertical flipping) applied. When mapping them to the raw pixel buffer,
> > > applications shall take any configured transformation into account."
> > >
> > > ?
> >
> > That's the easiest way forward, sounds good to me
> >
> > >
> > >>>> +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)
> > >
> > > s/describes/property describes/ (it feels weird to have a plural name
> > > with a singular verb).
> > >
> > >>>> +        rectangular areas of the raw data buffer, where optical black pixels are
> > >
> > > s/buffer,/buffer/
> > >
> > >>>> +        located and could be accessed for calibration and black level
> > >>>> +        correction.
> > >
> > > Maybe just "... of the raw data buffers that contain optical black
> > > pixels." ? That duplicates the previous paragraph a bit, maybe we could
> > > just drop it.
> > >
> > >>>> +
> > >>>> +        This property describes the position and size of optically black pixel
> > >
> > > s/optically/optical/
> > >
> > >>>> +        rectangles relatively to their position in the raw data buffer as stored
> > >
> > > s/rectangles/regions/
> > > s/relatively to their position in the raw data buffer/in the raw data buffer/
> > >
> > >>>> +        in memory, which might differ from their actual physical location in the
> > >>>> +        pixel array matrix.
> > >>>> +
> > >>>> +        It is important to note, in facts, that camera sensor might
> > >
> > > s/facts/fact/
> > > s/sensors/sensor/
> > >
> > >>>> +        automatically re-order, shuffle or skip portions of their pixels array
> > >
> > > s/re-order/reorder/
> > >
> > > I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
> > > sensors won't shuffle pixels randomly :-)
> > >
> > >>>> +        matrix when transmitting data to the receiver.
> > >
> > > Should we add an example ?
> > >
> > > "For instance, a sensor may merge the top and bottom optical black
> > > rectangles into a single rectangle, transmitted at the beginning of the
> > > frame."
> > >
> > >>>> +
> > >>>> +        The pixel array contains several areas with different purposes,
> > >>>> +        interleaved by lines and columns which are said not to be valid for
> > >>>> +        capturing purposes. Invalid lines and columns are defined as invalid as
> > >>>> +        they could be positioned too close to the chip margins or to the optical
> > >>>> +        blank shielding placed on top of optical black pixels.
> > >
> > > s/blank/black/
> > >
> > >>>> +
> > >>>> +                                PixelArraySize[0]
> > >>>> +               /----------------------------------------------/
> > >>>> +                  x1                                       x2
> > >>>> +               +--o---------------------------------------o---+ /
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > >>>> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > >>>> +               ...          ...           ...     ...       ...
> > >>>> +               ...          ...           ...     ...       ...
> > >>>> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > >>>> +               +----------------------------------------------+ /
> > >>>> +
> > >>>> +        The readable pixel array matrix is composed by
> > >>>> +        2 invalid lines (I)
> > >>>> +        4 lines of valid optically black pixels (O)
> > >
> > > s/optically/optical/
> > >
> > >>>> +        2 invalid lines (I)
> > >>>> +        n lines of valid pixel data (P)
> > >>>> +        2 invalid lines (I)
> > >>>> +
> > >>>> +        And the position of the optical black pixel rectangles is defined by
> > >>>> +
> > >>>> +            PixelArrayOpticalBlackRectangles = {
> > >>>> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
> > >>>
> > >>> s/\+ \K}/1 }/
> > >
> > > :perldo s/\+ \K}/1 }/
> > >
> > > ;-)
> > >
> > >>>> +               { x1, y3, 2, y4 - y3 + 1 },
> > >>>> +               { x2, y3, 2, y4 - y3 + 1 },
> > >>>> +            };
> > >>>> +
> > >>>> +        If the sensor, when required to output the full pixel array matrix,
> > >>>> +        automatically skip the invalid lines and columns, producing the
> > >
> > > s/skip/skips/
> > >
> > >>>> +        following data buffer, when captured to memory
> > >
> > > For the same reason as above, this could become
> > >
> > >         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory
> > >
> > >>>> +
> > >>>> +                                    PixelArraySize[0]
> > >>>> +               /----------------------------------------------/
> > >>>> +                                                           x1
> > >>>> +               +--------------------------------------------o-+ /
> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > >>>> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > >>>> +               ...       ...          ...       ...         ... |
> > >>>> +               ...       ...          ...       ...         ... |
> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > >>>> +               +----------------------------------------------+ /
> > >>>> +
> > >>>> +        The invalid lines and columns should not be reported as part of the
> > >
> > > s/The/then the/
> > >
> > >>>> +        PixelArraySize property in first place.
> > >>>> +
> > >>>> +        In this case, the position of the black pixel rectangles will be
> > >>>> +
> > >>>> +            PixelArrayOpticalBlackRectangles = {
> > >>>> +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > >>>> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > >>>> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > >>>> +            };
> > >
> > > I think part of this should be moved to the PixelArraySize property, as
> > > it defines what the PixelArraySize property should report. Let's merge
> > > this patch first though, and improvements can go on top (unless you
> > > would prefer addressing this first :-)).
> > >
> > >>>> +
> > >>>> +        \todo Rename this property to Size once we will have property
> > >>>> +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > >>>> +
> > >>>> +  - PixelArrayActiveAreas:
> > >>>> +      type: Rectangle
> > >>>> +      size: [1 x n]
> > >
> > > [n] here too.
> > >
> > >>>> +      description: |
> > >>>> +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > >>>> +        overlapping) portions of the camera sensor readable pixel matrix
> > >>>> +        which are considered valid for image acquisition purposes.
> > >>>> +
> > >>>> +        Each rectangle is defined relatively to the PixelArraySize rectangle,
> > >>>> +        with its top-left corner defined by its horizontal and vertical
> > >>>> +        distances from the PixelArraySize rectangle top-left corner, placed in
> > >>>> +        position (0, 0).
> > >
> > > I think you can drop the last three lines, they're a bit confusing.
> > >
> > >>>> +
> > >>>> +        This property describes an arbitrary number of overlapping rectangles,
> > >>>> +        with each rectangle representing the maximum image size that the camera
> > >>>> +        sensor can produce for a particular aspect ratio.
> > >
> > > Maybe even moving the previous paragraph here as "They are defined
> > > relatively to the PixelArraySize rectangle." ?
> > >
> >
> > Ack, I'll merge the two
> >
> > >>>> +
> > >>>> +        When multiple rectangles are reported, they shall be ordered from the
> > >>>> +        tallest to the shortest.
> > >>>> +
> > >>>> +        Example 1
> > >>>> +        A camera sensor which only produces images in the 4:3 image resolution
> > >>>> +        will report a single PixelArrayActiveAreas rectangle, from which all
> > >>>> +        other image formats are obtained by either cropping the field-of-view
> > >>>> +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> > >>>> +        binning.
> > >>>> +
> > >>>> +                     PixelArraySize[0]
> > >>>> +                    /----------------/
> > >>>> +                      x1          x2
> > >>>> +            (0,0)-> +-o------------o-+  /
> > >>>> +                 y1 o +------------+ |  |
> > >>>> +                    | |////////////| |  |
> > >>>> +                    | |////////////| |  | PixelArraySize[1]
> > >>>> +                    | |////////////| |  |
> > >>>> +                 y2 o +------------+ |  |
> > >>>> +                    +----------------+  /
> > >>>> +
> > >>>> +        The property reports a single rectangle
> > >>>> +
> > >>>> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > >>>> +
> > >>>> +        Example 2
> > >>>> +        A camera sensor which can produce images in different native
> > >>>> +        resolutions will report several overlapping rectangles, one for each
> > >>>> +        natively supported resolution.
> > >>>> +
> > >>>> +                     PixelArraySize[0]
> > >>>> +                    /------------------/
> > >>>> +                      x1  x2    x3  x4
> > >>>> +            (0,0)-> +o---o------o---o+  /
> > >>>> +                 y1 o    +------+    |  |
> > >>>> +                    |    |//////|    |  |
> > >>>> +                 y2 o+---+------+---+|  |
> > >>>> +                    ||///|//////|///||  | PixelArraySize[1]
> > >>>> +                 y3 o+---+------+---+|  |
> > >>>> +                    |    |//////|    |  |
> > >>>> +                 y4 o    +------+    |  |
> > >>>> +                    +----+------+----+  /
> > >>>> +
> > >>>> +        The property reports two rectangles
> > >>>> +
> > >>>> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > >>>> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > >>>> +
> > >>>> +        The first rectangle describes the maximum field-of-view of all image
> > >>>> +        formats in the 4:3 resolutions, while the second one describes the
> > >>>> +        maximum field of view for all image formats in the 16:9 resolutions.
> > >
> > > Should we add one paragraph here as a clarification:
> > >
> > > Multiple rectangles shall only be reported when the sensor can't capture
> > > the pixels in the corner regions. If all the pixels in the (x1,y1) -
> > > (x4,y4) area can be captured, the PixelArrayActiveAreas property shall
> > > contains the single rectangle (x1,y1) - (x4,y4).
> >
> > Isn't this the example above ? Re-stating the same here seems
> > confusing.
>
> What I meant to say is that a camera should not implement example 2 just
> because the pipeline handler developer wants to expose different aspect
> ratios, when the hardware actually implements example 1. For instance,
> considering a sensor with a native resolution of 3840x2880 (4/3) that
> can be captured fully, a 16/9 picture can be captured by cropping to
> 3840*2160. In that case PixelArrayActiveAreas should only report
> (0,0)/3840x2880, not both (0,0)/3840x2880 and (0,360)/3840x2880.
>
> But maybe I'm just worrying too much and nobody will think about doing
> that :-)

I ended up taking this part into v7, as after reading it again I
shared the same concern.


Thanks
  j

>
> > > Or do you think it's overkill ?
> > >
> > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> >
> > Thanks, I'll take comments in and resend a v7
> >
> > >>>> +
> > >>>> +        \todo Rename this property to ActiveAreas once we will have property
> > >>>> +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > >>>> +
> > >>>>  ...
>
> --
> Regards,
>
> Laurent Pinchart
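
The crop-instead-of-second-rectangle behaviour discussed above (a 3840x2880
4:3 sensor producing a 3840x2160 16:9 frame by cropping) can be sketched as
follows; `crop_for_aspect_ratio` is a hypothetical illustration, not a
libcamera API:

```python
def crop_for_aspect_ratio(active, num, den):
    """Compute a centered crop with aspect ratio num:den inside one
    PixelArrayActiveAreas rectangle (x, y, width, height).

    Illustrates why a sensor that can capture its full 4:3 array does not
    need to advertise an extra 16:9 rectangle: the 16:9 frame is simply a
    crop of the single active area."""
    x, y, w, h = active
    # Keep the full width if the target ratio is wider than the active
    # area, otherwise keep the full height.
    crop_h = w * den // num
    if crop_h <= h:
        crop_w = w
    else:
        crop_h = h
        crop_w = h * num // den
    # Center the crop inside the active area.
    return (x + (w - crop_w) // 2, y + (h - crop_h) // 2, crop_w, crop_h)

# The example from the review: a 16:9 crop of a 3840x2880 active area.
print(crop_for_aspect_ratio((0, 0, 3840, 2880), 16, 9))  # (0, 360, 3840, 2160)
```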

Patch

diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
index ce627fa042ba..762d60881568 100644
--- a/src/libcamera/property_ids.yaml
+++ b/src/libcamera/property_ids.yaml
@@ -386,4 +386,267 @@  controls:
                               |                    |
                               |                    |
                               +--------------------+
+
+  - UnitCellSize:
+      type: Size
+      description: |
+        The pixel unit cell physical size, in nanometers.
+
+        The UnitCellSize property defines the horizontal and vertical sizes
+        of a single pixel unit, including its active and non-active parts.
+
+        The property can be used to calculate the physical size of the sensor's
+        pixel array area and for calibration purposes.
+
+  - PixelArrayPhysicalSize:
+      type: Size
+      description: |
+        The camera sensor full pixel array size, in nanometers.
+
+        The PixelArrayPhysicalSize property reports the physical dimensions
+        (width x height) of the full pixel array matrix, including readable
+        and non-readable pixels.
+
+        \todo Rename this property to PhysicalSize once we have property
+              categories (i.e. Properties::PixelArray::PhysicalSize)
+
+  - PixelArraySize:
+      type: Size
+      description: |
+        The camera sensor pixel array readable area vertical and horizontal
+        sizes, in pixels.
+
+        The PixelArraySize property defines the size in pixel units of the
+        readable part of the full pixel array matrix, including optically
+        black pixels used for calibration, pixels which are not considered
+        valid for capture and active pixels valid for image capture.
+
+        The property describes a rectangle whose top-left corner is placed
+        in position (0, 0) and whose vertical and horizontal sizes are defined
+        by the Size element transported by the property.
+
+        The property describes the maximum size of the raw data produced by
+        the sensor, which might not correspond to the physical size of the
+        sensor pixel array matrix, as some portions of the physical pixel
+        array matrix are not accessible and cannot be transmitted out.
+
+        For example, a pixel array matrix assembled as follows
+
+             +--------------------------------------------------+
+             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
+             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             ...          ...           ...      ...          ...
+
+             ...          ...           ...      ...          ...
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
+             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
+             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
+             +--------------------------------------------------+
+
+        composed of two lines of non-readable pixels (x) followed by N lines of
+        readable data (D) surrounded by two columns of non-readable pixels on
+        each side, only the readable portion is transmitted to the receiving
+        side, defining the sizes of the largest possible buffer of raw data
+        that can be presented to applications.
+
+                               PixelArraySize[0]
+               /----------------------------------------------/
+               +----------------------------------------------+ /
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
+               ...        ...           ...      ...        ...
+               ...        ...           ...      ...        ...
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
+               +----------------------------------------------+ /
+
+        All other rectangles that describe portions of the pixel array, such
+        as the optical black pixel rectangles and active pixel areas, are
+        defined relative to the rectangle described by this property.
+
+        \todo Rename this property to Size once we have property
+              categories (i.e. Properties::PixelArray::Size)
+
+  - PixelArrayOpticalBlackRectangles:
+      type: Rectangle
+      size: [n]
+      description: |
+        The raw data buffer regions which contain optical black pixels
+        considered valid for calibration purposes.
+
+        The PixelArrayOpticalBlackRectangles property describes (possibly
+        multiple) rectangular areas of the raw data buffer where optical
+        black pixels are located and can be accessed for calibration and
+        black level correction.
+
+        This property describes the position and size of the optical black
+        pixel rectangles relative to their position in the raw data buffer as
+        stored in memory, which might differ from their actual physical
+        location in the pixel array matrix.
+
+        It is important to note, in fact, that camera sensors might
+        automatically re-order, shuffle or skip portions of their pixel array
+        matrix when transmitting data to the receiver.
+
+        The pixel array contains several areas with different purposes,
+        interleaved by lines and columns which are not valid for capturing
+        purposes. Lines and columns are defined as invalid when they are
+        positioned too close to the chip margins or to the optical black
+        shielding placed on top of optical black pixels.
+
+                                PixelArraySize[0]
+               /----------------------------------------------/
+                  x1                                       x2
+               +--o---------------------------------------o---+ /
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
+               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
+               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
+            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
+               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
+               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
+               ...          ...           ...     ...       ...
+               ...          ...           ...     ...       ...
+            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
+               +----------------------------------------------+ /
+
+        The readable pixel array matrix is composed of
+        2 invalid lines (I)
+        4 lines of valid optically black pixels (O)
+        2 invalid lines (I)
+        n lines of valid pixel data (P)
+        2 invalid lines (I)
+
+        And the position of the optical black pixel rectangles is defined by
+
+            PixelArrayOpticalBlackRectangles = {
+               { x1, y1, x2 - x1 + 1, y2 - y1 + 1 },
+               { x1, y3, 2, y4 - y3 + 1 },
+               { x2, y3, 2, y4 - y3 + 1 },
+            };
+
+        If the sensor, when required to output the full pixel array matrix,
+        automatically skips the invalid lines and columns, producing the
+        following data buffer, when captured to memory
+
+                                    PixelArraySize[0]
+               /----------------------------------------------/
+                                                           x1
+               +--------------------------------------------o-+ /
+               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
+               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
+               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
+               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
+            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
+               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
+               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
+               ...       ...          ...       ...         ... |
+               ...       ...          ...       ...         ... |
+               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
+               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
+               +----------------------------------------------+ /
+
+        then the invalid lines and columns should not be reported as part of
+        the PixelArraySize property in the first place.
+
+        In this case, the position of the black pixel rectangles will be
+
+            PixelArrayOpticalBlackRectangles = {
+               { 0, 0, PixelArraySize[0], y1 },
+               { 0, y1, 2, PixelArraySize[1] - y1 },
+               { x1, y1, 2, PixelArraySize[1] - y1 },
+            };
+
+        \todo Rename this property to OpticalBlackRectangles once we have
+              property categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
+
+  - PixelArrayActiveAreas:
+      type: Rectangle
+      size: [n]
+      description: |
+        The PixelArrayActiveAreas property defines the (possibly multiple and
+        overlapping) portions of the camera sensor readable pixel matrix
+        which are considered valid for image acquisition purposes.
+
+        Each rectangle is defined relative to the PixelArraySize rectangle,
+        with its top-left corner defined by its horizontal and vertical
+        distances from the PixelArraySize rectangle top-left corner, placed
+        in position (0, 0).
+
+        This property describes an arbitrary number of overlapping rectangles,
+        with each rectangle representing the maximum image size that the camera
+        sensor can produce for a particular aspect ratio.
+
+        When multiple rectangles are reported, they shall be ordered from the
+        tallest to the shortest.
+
+        Example 1
+        A camera sensor which only produces images in the 4:3 image
+        resolution will report a single PixelArrayActiveAreas rectangle, from
+        which all other image formats are obtained by cropping the
+        field-of-view and/or applying pixel sub-sampling techniques such as
+        pixel skipping or binning.
+
+                     PixelArraySize[0]
+                    /----------------/
+                      x1          x2
+            (0,0)-> +-o------------o-+  /
+                 y1 o +------------+ |  |
+                    | |////////////| |  |
+                    | |////////////| |  | PixelArraySize[1]
+                    | |////////////| |  |
+                 y2 o +------------+ |  |
+                    +----------------+  /
+
+        The property reports a single rectangle
+
+                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
+
+        Example 2
+        A camera sensor which can produce images in different native
+        resolutions will report several overlapping rectangles, one for each
+        natively supported resolution.
+
+                     PixelArraySize[0]
+                    /------------------/
+                      x1  x2    x3  x4
+            (0,0)-> +o---o------o---o+  /
+                 y1 o    +------+    |  |
+                    |    |//////|    |  |
+                 y2 o+---+------+---+|  |
+                    ||///|//////|///||  | PixelArraySize[1]
+                 y3 o+---+------+---+|  |
+                    |    |//////|    |  |
+                 y4 o    +------+    |  |
+                    +----+------+----+  /
+
+        The property reports two rectangles
+
+                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
+                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
+
+        The first rectangle describes the maximum field-of-view of all image
+        formats in the 4:3 resolutions, while the second one describes the
+        maximum field-of-view for all image formats in the 16:9 resolutions.
+
+        \todo Rename this property to ActiveAreas once we have property
+              categories (i.e. Properties::PixelArray::ActiveAreas)
+
 ...
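
As a usage sketch for the PixelArrayOpticalBlackRectangles property defined
above: a consumer could estimate the sensor black level by averaging the
pixels inside the reported rectangles. The `mean_optical_black` helper and
the toy frame below are illustrative only, not part of libcamera:

```python
import numpy as np

def mean_optical_black(raw, rectangles):
    """Average the pixels covered by the PixelArrayOpticalBlackRectangles
    entries of a raw frame, a common way to estimate the black level.
    Each rectangle is (x, y, width, height) in raw-buffer coordinates,
    as defined by the property above."""
    total, count = 0, 0
    for x, y, w, h in rectangles:
        block = raw[y:y + h, x:x + w]
        total += int(block.sum())
        count += block.size
    return total / count

# Toy 8x8 "raw frame": two optical black rows at the top read ~64,
# active pixels elsewhere read higher values.
raw = np.full((8, 8), 1000, dtype=np.uint16)
raw[0:2, :] = 64
print(mean_optical_black(raw, [(0, 0, 8, 2)]))  # 64.0
```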