[libcamera-devel,v3,02/13] libcamera: properties: Define pixel array properties

Message ID 20200424215304.558317-3-jacopo@jmondi.org
State Superseded
Series
  • libcamera: Add CameraSensorInfo

Commit Message

Jacopo Mondi April 24, 2020, 9:52 p.m. UTC
Add definition of pixel array related properties.

Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
---
 src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
 1 file changed, 155 insertions(+)

Comments

Laurent Pinchart April 25, 2020, 2:05 a.m. UTC | #1
Hi Jacopo,

Thank you for the patch.

On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> Add definition of pixel array related properties.
> 
> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> ---
>  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
>  1 file changed, 155 insertions(+)
> 
> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> index ce627fa042ba..8f6797723a9d 100644
> --- a/src/libcamera/property_ids.yaml
> +++ b/src/libcamera/property_ids.yaml
> @@ -386,4 +386,159 @@ controls:
>                                |                    |
>                                |                    |
>                                +--------------------+
> +
> +  - PixelArraySize:
> +      type: float
> +      size: [2]
> +      description: |
> +        The physical sizes of the pixel array (width and height), in
> +        millimeters.

Once we'll have categories for properties I think this should become
PhysicalSize in the sensor category. We could already rename it now to
PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
idea. I don't mind much either way, but if we keep the current name, how
about adding a comment to remember this ?

      # \todo Rename this property to PhysicalSize once we will have property
      # categories

> +
> +  - PixelArray:

I was going to say we should rename this to PixelArraySize as it's the
size of the pixel array, and then realized that the previous property
has the same name :-) Maybe we should rename both ?

> +      type: int32_t
> +      size: [2]
> +      description: |
> +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> +
> +        The property describes a rectangle with its top-left corner in position
> +        (0, 0) and width and height described by the first and second values
> +        of this property.
> +
> +        The PixelArray property defines the rectangle that includes all possible
> +        rectangles defined by the ActiveAreas property, and describes the full
> +        pixel array, including non-active pixels, black level calibration
> +        pixels etc.

It's not entirely clear to me what pixels should be included in this
rectangle. I'm not blaming your documentation here, but rather my lack
of knowledge of what else than non-active pixels and black level
calibration pixels there are :-) Just an idea, instead of linking this
property to ActiveAreas, would it make sense to link it to
PixelArraySize ? Maybe something along the lines of

	The PixelArraySize property defines the size of the full pixel
	array, covered by the PixelArrayPhysicalSize area. This may
	include inactive or optical black pixels.

> +
> +  - ActiveAreas:
> +      type: int32_t
> +      size: [4 x n]
> +      description: |
> +        The camera sensor active pixel area rectangles, represented as
> +        rectangles contained in the one described by the PixelArray property.
> +
> +        This property describes an arbitrary number of overlapping rectangles,
> +        representing the active pixel portions of the camera sensor pixel array.
> +
> +        Each rectangle is defined by its displacement from pixel (0, 0) of
> +        the rectangle described by the PixelArray property, a width and an
> +        height.

s/an height/a height/

> +
> +        Each rectangle described by this property represents the maximum image
> +        size that the camera module can produce for a given image resolution.

How about s/for a given image resolution/for a particular aspect ratio/
?

I would also add, taken from below,

        When multiple rectangles are reported, they shall be ordered
	from the tallest to the shortest.

> +
> +        Example 1.
> +        A sensor which only produces images in the 4:3 image resolution will
> +        report a single ActiveArea rectangle, from which all other image formats
> +        are obtained by either cropping the field-of-view and/or applying pixel
> +        sub-sampling techniques such as pixel skipping or binning.
> +
> +                        PixelArray(0)
> +                    /-----------------/
> +                      x1          x2
> +            (0,0)-> +-o------------o-+  /
> +                 y1 o +------------+ |  |
> +                    | |////////////| |  |
> +                    | |////////////| |  | PixelArray(1)
> +                    | |////////////| |  |
> +                 y2 o +------------+ |  |
> +                    +----------------+  /
> +
> +        The property reports a single rectangle
> +
> +                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))

If the rectangle is defined through width and height, this should be

                 ActiveArea = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)

Alternatively you could use coordinates:

                 ActiveArea = (x1, y1, x2, y2)
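As an aside, the two representations can be sketched as follows (illustrative
Python, not libcamera code; the helper names are made up). The "+ 1" comes from
the corner coordinates being inclusive pixel positions:

```python
def corners_to_size(x1, y1, x2, y2):
    # (x1, y1)-(x2, y2) inclusive corners -> (x, y, width, height)
    return (x1, y1, x2 - x1 + 1, y2 - y1 + 1)

def size_to_corners(x, y, width, height):
    # (x, y, width, height) -> (x1, y1)-(x2, y2) inclusive corners
    return (x, y, x + width - 1, y + height - 1)

# A 3280x2464 active area whose top-left pixel is at (8, 8):
assert corners_to_size(8, 8, 3287, 2471) == (8, 8, 3280, 2464)
assert size_to_corners(8, 8, 3280, 2464) == (8, 8, 3287, 2471)
```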

> +
> +        Example 2
> +        A camera sensor which can produce images in different native

s/camera sensor/sensor/ here or s/sensor/camera sensor/ in example 1.

> +        resolutions, will report several overlapping rectangle, one for each

s/resolutions,/resolutions/
s/rectangle/rectangles/

> +        natively supported resolution, ordered from the tallest to the shortest
> +        one.
> +
> +                        PixelArray(0)
> +                    /-----------------/
> +                     x1  x2    x3  x4
> +            (0,0)-> +o---o------o---o+  /
> +                 y1 |    +------+    |  |
> +                    |    |//////|    |  |
> +                 y2 o+---+------+---+|  |
> +                    ||///|//////|///||  | PixelArray(1)
> +                 y3 o+---+------+---+|  |
> +                    |    |//////|    |  |
> +                 y4 |    +------+    |  |

I think you need a o next to y1 and y4.

> +                    +----+------+----+  /
> +
> +        The property reports two rectangles
> +
> +                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),

s/y4 - 1/y4 - y1/

> +                               (x1, y2, (x4 - x1), (y3 - y2))

And comment as above regarding the width.

> +
> +        The first rectangle describes the maximum field-of-view of all image
> +        formats in the 4:3 resolutions, while the second one describes the
> +        maximum field of view for all image formats in the 16:9 resolutions.
> +
> +  - BayerFilterArrangement:
> +      type: int32_t
> +      description: |
> +        The pixel array color filter displacement.

I could be wrong, but is "displacement" the right term ? Maybe
arrangement ?

> +
> +        This property describes the arrangement and readout sequence of the
> +        three RGB color components of the sensor's Bayer Color Filter Array
> +        (CFA).

As we could support sensors with non-Bayer CFAs, how about already
renaming the control to ColorFilterArrangement, and remove Bayer from
here ?

> +
> +        Color filters are usually displaced in line-alternating fashion on the

s/displaced/arranged/ ?

> +        sensor pixel array. In example, one line might be composed of Red-Green

s/In example/For example/

> +        while the successive is composed of Blue-Green color information.
> +
> +        The value of this property represents the arrangement of color filters
> +        in the top-left 2x2 pixel square.

As this is only valid for Bayer, maybe

        For Bayer filters, the value of this property represents the arrangement
        of color filters in the top-left 2x2 pixel square.
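The mapping from the top-left 2x2 square to the enum values proposed in this
patch could be sketched like this (illustrative Python, not libcamera API;
the helper name is made up, the values follow the enum below):

```python
# Enum values as defined in this patch: RGGB=0, GRBG=1, GBRG=2, BGGR=3.
ARRANGEMENTS = {"RGGB": 0, "GRBG": 1, "GBRG": 2, "BGGR": 3}

def cfa_arrangement(rows):
    # rows: the pixel array as rows of one-letter color codes;
    # only the top-left 2x2 square determines the arrangement.
    key = rows[0][0] + rows[0][1] + rows[1][0] + rows[1][1]
    return ARRANGEMENTS[key]

# The BGGR case: first line starts B,G,..., second line G,R,...
assert cfa_arrangement([["B", "G"], ["G", "R"]]) == 3
```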

> +
> +        For example, for a sensor with the following color filter displacement

s/displacement/arrangement/

(you could also use pattern instead of arrangement here or in other
places)

> +
> +                 (0, 0)               (max-col)
> +           +---+    +--------------...---+
> +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> +           +---+    |B|G|B|G|B|G|B|...B|G|
> +                    ...                  ..
> +                    ...                  ..
> +                    |G|R|G|R|G|R|G|...G|R|
> +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> +                    +--------------...---+
> +
> +        The filter arrangement is represented by the BGGR value, which
> +        correspond to the pixel readout sequence in line interleaved mode.
> +
> +      enum:
> +        - name: BayerFilterRGGB
> +          value: 0
> +          description: |
> +            Color filter array displacement is Red-Green/Green-Blue
> +
> +        - name: BayerFilterGRBG
> +          value: 1
> +          description: |
> +            Color filter array displacement is Green-Red/Blue-Green
> +
> +        - name: BayerFilterGBRG
> +          value: 2
> +          description: |
> +            Color filter array displacement is Green-Blue/Red-Green
> +
> +        - name: BayerFilterBGGR
> +          value: 3
> +          description: |
> +            Color filter array displacement is Blue-Green/Green-Red

Should these be sorted alphabetically ?

> +
> +        - name: BayerFilterNonStandard
> +          value: 4
> +          description: |
> +            The pixel array color filter does not use the standard Bayer RGB
> +            color model

I would drop this value for now, as we should instead report which
filter arrangement is used. We could already add known patterns, such as
the ones from https://en.wikipedia.org/wiki/Bayer_filter for instance,
but I think that's overkill, we can extend it later when/if needed.

> +
> +  - ISOSensitivityRange:
> +      type: int32_t
> +      size: [2]
> +      description: |
> +        The range of supported ISO sensitivities, as documented by the
> +        ISO 12232:2006 (or later) standard.

Is this the ISO speed or the Standard Output Sensitivity (SOS) ? I think
this property needs a bit more research, should we leave it out for now
to avoid blocking the rest ?

Overall, good work ! No major issue, so the next version should be the
last one.

> +
>  ...
Jacopo Mondi April 25, 2020, 1:58 p.m. UTC | #2
Hi Laurent,
   thanks for the review

On Sat, Apr 25, 2020 at 05:05:11AM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> Thank you for the patch.
>
> On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> > Add definition of pixel array related properties.
> >
> > Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > ---
> >  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
> >  1 file changed, 155 insertions(+)
> >
> > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > index ce627fa042ba..8f6797723a9d 100644
> > --- a/src/libcamera/property_ids.yaml
> > +++ b/src/libcamera/property_ids.yaml
> > @@ -386,4 +386,159 @@ controls:
> >                                |                    |
> >                                |                    |
> >                                +--------------------+
> > +
> > +  - PixelArraySize:
> > +      type: float
> > +      size: [2]
> > +      description: |
> > +        The physical sizes of the pixel array (width and height), in
> > +        millimeters.
>
> Once we'll have categories for properties I think this should become
> PhysicalSize in the sensor category. We could already rename it now to
> PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
> idea. I don't mind much either way, but if we keep the current name, how
> about adding a comment to remember this ?

I like PhysicalSize.

>
>       # \todo Rename this property to PhysicalSize once we will have property
>       # categories
>
> > +
> > +  - PixelArray:
>
> I was going to say we should rename this to PixelArraySize as it's the
> size of the pixel array, and then realized that the previous property
> has the same name :-) Maybe we should rename both ?

Let's rename both

>
> > +      type: int32_t
> > +      size: [2]
> > +      description: |
> > +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> > +
> > +        The property describes a rectangle with its top-left corner in position
> > +        (0, 0) and width and height described by the first and second values
> > +        of this property.
> > +
> > +        The PixelArray property defines the rectangle that includes all possible
> > +        rectangles defined by the ActiveAreas property, and describes the full
> > +        pixel array, including non-active pixels, black level calibration
> > +        pixels etc.
>
> It's not entirely clear to me what pixels should be included in this
> rectangle. I'm not blaming your documentation here, but rather my lack
> of knowledge of what else than non-active pixels and black level
> calibration pixels there are :-) Just an idea, instead of linking this
> property to ActiveAreas, would it make sense to link it to
> PixelArraySize ? Maybe something along the lines of

I would just drop rectangle maybe, as that property transports a width
and a height.

>
> 	The PixelArraySize property defines the size of the full pixel
> 	array, covered by the PixelArrayPhysicalSize area. This may
> 	include inactive or optical black pixels.
>

I'll take this in, but I'm not sure if it's a good idea to link this
one to the PhysicalArraySize, as they are expressed with two different
units. So I would keep at least the first line of the existing
description, and s/may include/includes/ in your proposed change, as
this is not optional, the property should report inactive pixels too.

> > +
> > +  - ActiveAreas:
> > +      type: int32_t
> > +      size: [4 x n]
> > +      description: |
> > +        The camera sensor active pixel area rectangles, represented as
> > +        rectangles contained in the one described by the PixelArray property.
> > +
> > +        This property describes an arbitrary number of overlapping rectangles,
> > +        representing the active pixel portions of the camera sensor pixel array.
> > +
> > +        Each rectangle is defined by its displacement from pixel (0, 0) of
> > +        the rectangle described by the PixelArray property, a width and an
> > +        height.
>
> s/an height/a height/
>
> > +
> > +        Each rectangle described by this property represents the maximum image
> > +        size that the camera module can produce for a given image resolution.
>
> How about s/for a given image resolution/for a particular aspect ratio/
> ?
>
> I would also add, taken from below,
>
>         When multiple rectangles are reported, they shall be ordered
> 	from the tallest to the shortest.

ack, I thought I had it in previous versions..

>
> > +
> > +        Example 1.
> > +        A sensor which only produces images in the 4:3 image resolution will
> > +        report a single ActiveArea rectangle, from which all other image formats
> > +        are obtained by either cropping the field-of-view and/or applying pixel
> > +        sub-sampling techniques such as pixel skipping or binning.
> > +
> > +                        PixelArray(0)
> > +                    /-----------------/
> > +                      x1          x2
> > +            (0,0)-> +-o------------o-+  /
> > +                 y1 o +------------+ |  |
> > +                    | |////////////| |  |
> > +                    | |////////////| |  | PixelArray(1)
> > +                    | |////////////| |  |
> > +                 y2 o +------------+ |  |
> > +                    +----------------+  /
> > +
> > +        The property reports a single rectangle
> > +
> > +                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))
>
> If the rectangle is defined through width and height, this should be
>
>                  ActiveArea = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
>
> Alternatively you could use coordinates:
>
>                  ActiveArea = (x1, y1, x2, y2)

this is not correct, as y2 is the vertical distance from pixel (0,0)
while the active area vertical size is (y2 - y1 +1). Same for the
horizontal size.

>
> > +
> > +        Example 2
> > +        A camera sensor which can produce images in different native
>
> s/camera sensor/sensor/ here or s/sensor/camera sensor/ in example 1.

I would keep using 'camera sensor' whenever possible.

>
> > +        resolutions, will report several overlapping rectangle, one for each
>
> s/resolutions,/resolutions/
> s/rectangle/rectangles/
>
> > +        natively supported resolution, ordered from the tallest to the shortest
> > +        one.
> > +
> > +                        PixelArray(0)
> > +                    /-----------------/
> > +                     x1  x2    x3  x4
> > +            (0,0)-> +o---o------o---o+  /
> > +                 y1 |    +------+    |  |
> > +                    |    |//////|    |  |
> > +                 y2 o+---+------+---+|  |
> > +                    ||///|//////|///||  | PixelArray(1)
> > +                 y3 o+---+------+---+|  |
> > +                    |    |//////|    |  |
> > +                 y4 |    +------+    |  |
>
> I think you need a o next to y1 and y4.
>

Yes!

> > +                    +----+------+----+  /
> > +
> > +        The property reports two rectangles
> > +
> > +                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),
>
> s/y4 - 1/y4 - y1/

ups, yes

>
> > +                               (x1, y2, (x4 - x1), (y3 - y2))
>
> And comment as above regarding the width.
>

I'll add '- 1'

> > +
> > +        The first rectangle describes the maximum field-of-view of all image
> > +        formats in the 4:3 resolutions, while the second one describes the
> > +        maximum field of view for all image formats in the 16:9 resolutions.
> > +
> > +  - BayerFilterArrangement:
> > +      type: int32_t
> > +      description: |
> > +        The pixel array color filter displacement.
>
> I could be wrong, but is "displacement" the right term ? Maybe
> arrangement ?
>

Probably yes.

> > +
> > +        This property describes the arrangement and readout sequence of the
> > +        three RGB color components of the sensor's Bayer Color Filter Array
> > +        (CFA).
>
> As we could support sensors with non-Bayer CFAs, how about already
> renaming the control to ColorFilterArrangement, and remove Bayer from
> here ?
>

Ack

> > +
> > +        Color filters are usually displaced in line-alternating fashion on the
>
> s/displaced/arranged/ ?
>
> > +        sensor pixel array. In example, one line might be composed of Red-Green
>
> s/In example/For example/
>
> > +        while the successive is composed of Blue-Green color information.
> > +
> > +        The value of this property represents the arrangement of color filters
> > +        in the top-left 2x2 pixel square.
>
> As this is only valid for Bayer, maybe
>
>         For Bayer filters, the value of this property represents the arrangement
>         of color filters in the top-left 2x2 pixel square.

Ack

>
> > +
> > +        For example, for a sensor with the following color filter displacement
>
> s/displacement/arrangement/
>
> (you could also use pattern instead of arrangement here or in other
> places)
>
> > +
> > +                 (0, 0)               (max-col)
> > +           +---+    +--------------...---+
> > +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> > +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> > +           +---+    |B|G|B|G|B|G|B|...B|G|
> > +                    ...                  ..
> > +                    ...                  ..
> > +                    |G|R|G|R|G|R|G|...G|R|
> > +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> > +                    +--------------...---+
> > +
> > +        The filter arrangement is represented by the BGGR value, which
> > +        correspond to the pixel readout sequence in line interleaved mode.
> > +
> > +      enum:
> > +        - name: BayerFilterRGGB
> > +          value: 0
> > +          description: |
> > +            Color filter array displacement is Red-Green/Green-Blue
> > +
> > +        - name: BayerFilterGRBG
> > +          value: 1
> > +          description: |
> > +            Color filter array displacement is Green-Red/Blue-Green
> > +
> > +        - name: BayerFilterGBRG
> > +          value: 2
> > +          description: |
> > +            Color filter array displacement is Green-Blue/Red-Green
> > +
> > +        - name: BayerFilterBGGR
> > +          value: 3
> > +          description: |
> > +            Color filter array displacement is Blue-Green/Green-Red
>
> Should these be sorted alphabetically ?
>

We could, yes

> > +
> > +        - name: BayerFilterNonStandard
> > +          value: 4
> > +          description: |
> > +            The pixel array color filter does not use the standard Bayer RGB
> > +            color model
>
> I would drop this value for now, as we should instead report which
> filter arrangement is used. We could already add known patterns, such as
> the ones from https://en.wikipedia.org/wiki/Bayer_filter for instance,
> but I think that's overkill, we can extend it later when/if needed.
>

Ack

> > +
> > +  - ISOSensitivityRange:
> > +      type: int32_t
> > +      size: [2]
> > +      description: |
> > +        The range of supported ISO sensitivities, as documented by the
> > +        ISO 12232:2006 (or later) standard.
>
> Is this the ISO speed or the Standard Output Sensitivity (SOS) ? I think
> this property needs a bit more research, should we leave it out for now
> to avoid blocking the rest ?

Yes, this was just copied over without going into much detail. Let's leave
it out for now.

>
> Overall, good work ! No major issue, so the next version should be the
> last one.

Thanks and thanks for your comments.

>
> > +
> >  ...
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart April 25, 2020, 4:50 p.m. UTC | #3
Hi Jacopo,

On Sat, Apr 25, 2020 at 03:58:45PM +0200, Jacopo Mondi wrote:
> On Sat, Apr 25, 2020 at 05:05:11AM +0300, Laurent Pinchart wrote:
> > On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> >> Add definition of pixel array related properties.
> >>
> >> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> >> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> >> ---
> >>  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
> >>  1 file changed, 155 insertions(+)
> >>
> >> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> >> index ce627fa042ba..8f6797723a9d 100644
> >> --- a/src/libcamera/property_ids.yaml
> >> +++ b/src/libcamera/property_ids.yaml
> >> @@ -386,4 +386,159 @@ controls:
> >>                                |                    |
> >>                                |                    |
> >>                                +--------------------+
> >> +
> >> +  - PixelArraySize:
> >> +      type: float
> >> +      size: [2]
> >> +      description: |
> >> +        The physical sizes of the pixel array (width and height), in
> >> +        millimeters.
> >
> > Once we'll have categories for properties I think this should become
> > PhysicalSize in the sensor category. We could already rename it now to
> > PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
> > idea. I don't mind much either way, but if we keep the current name, how
> > about adding a comment to remember this ?
> 
> I like PhysicalSize.

BTW, is mm precise enough, or should we use µm ?

> >
> >       # \todo Rename this property to PhysicalSize once we will have property
> >       # categories
> >
> >> +
> >> +  - PixelArray:
> >
> > I was going to say we should rename this to PixelArraySize as it's the
> > size of the pixel array, and then realized that the previous property
> > has the same name :-) Maybe we should rename both ?
> 
> Let's rename both
> 
> >> +      type: int32_t
> >> +      size: [2]
> >> +      description: |
> >> +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> >> +
> >> +        The property describes a rectangle with its top-left corner in position
> >> +        (0, 0) and width and height described by the first and second values
> >> +        of this property.
> >> +
> >> +        The PixelArray property defines the rectangle that includes all possible
> >> +        rectangles defined by the ActiveAreas property, and describes the full
> >> +        pixel array, including non-active pixels, black level calibration
> >> +        pixels etc.
> >
> > It's not entirely clear to me what pixels should be included in this
> > rectangle. I'm not blaming your documentation here, but rather my lack
> > of knowledge of what else than non-active pixels and black level
> > calibration pixels there are :-) Just an idea, instead of linking this
> > property to ActiveAreas, would it make sense to link it to
> > PixelArraySize ? Maybe something along the lines of
> 
> I would just drop rectangle maybe, as that property transports a width
> and a height.
> 
> > 	The PixelArraySize property defines the size of the full pixel
> > 	array, covered by the PixelArrayPhysicalSize area. This may
> > 	include inactive or optical black pixels.
> 
> I'll take this in, but I'm not sure if it's a good idea to link this
> one to the PhysicalArraySize, as they are expressed with two different
> units.

I think linking the two is important, but we need to figure out what
the link is. Otherwise, does the physical size represent the size of the
chip (quite unlikely), the size of the full pixel array, the size of the
optical window, ... ?

I'm looking at the IMX219 datasheet, and it reports

- Image size: diagonal 4.60mm
- Total number of pixels: 3296x2512
- Number of effective pixels: 3296x2480
- Number of active pixels: 3280x2464
- Unit cell size: 1.12µm x 1.12µm

Calculating the diagonal leads to 4641µm, 4620µm and 4595µm for the
three rectangles respectively. The last one matches 4.60mm.

With the above definition of the physical size, the diagonal would be
4641µm (for a size of 3692µm x 2813µm). I wonder if we should report the
active image size instead (3674µm x 2760µm for a diagonal of 4595µm).
It's not difficult to convert between the two (assuming all the pixels
have the same size), but I'd like to pick the one that makes the most
sense. We can of course reconsider that decision later if we find out we
made a mistake.
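For reference, the numbers above can be reproduced from the unit cell size
(a quick Python check, not libcamera code, assuming all pixels have the same
1.12µm x 1.12µm size):

```python
import math

CELL = 1.12  # µm, IMX219 unit cell size

def physical(width_px, height_px):
    # Physical width, height and diagonal in µm, for a uniform pixel size.
    w, h = width_px * CELL, height_px * CELL
    return w, h, math.hypot(w, h)

total = physical(3296, 2512)      # total number of pixels
effective = physical(3296, 2480)  # effective pixels
active = physical(3280, 2464)     # active pixels

assert round(total[2]) == 4641      # µm
assert round(effective[2]) == 4620  # µm
assert round(active[2]) == 4595     # µm, matches the 4.60mm "image size"
```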

My reasoning for picking the size of the active area is that the
physical size will most likely be used to make optics calculation, and
the lens should be selected and positioned according to the active area.
However, there could be systems with a different lens positioning, and
I'm not sure yet what that would imply.

Android seems to report the physical size of the full pixel array,
including inactive pixels. I'm not sure what the rationale is, and maybe
the difference between the two is lost in the measurement error noise in
the end, so I wouldn't rule out that they may not have picked the
best option (but I also don't imply I know better :-)).

As a conclusion, let's pick one option (physical size covering the
active area or the full array), and document it.

> So I would keep at least the first line of the existing
> description, and s/may include/includes/ in your proposed change, as
> this is not optional, the property should report inactive pixels too.

I think it would be better to refer to the pixel array size property in
the definition of ActiveAreas below, not the other way around, otherwise
you'll have a cycle :-)

The sensors I've seen typically have the following structure, from top
to bottom:

- A few dummy lines, for unknown purpose
- Optical black lines, split in three areas (invalid, valid, invalid)
- Exposed lines, split in three areas (invalid, valid, invalid)

Exposed lines are sometimes called effective, and "active" pixels can
refer to either all the exposed lines or to the valid exposed lines,
depending on datasheets.

The invalid lines are considered not to be usable, usually because
they're too close to the edges and thus can contain artifacts. Some of
them may be used for internal purpose. The dummy lines, optical black
lines (both invalid and valid) and invalid exposed lines may or may not
be readable, depending on the sensor.

The question now is what, other than the valid exposed pixels, needs to
be reported in this property. I think we need to look at it from the point
of view of what the sensor can readout (assuming we map the physical
size to the valid active pixels), otherwise I don't really see the point
in giving information about something that would have absolutely no
influence on the receiver side.

> >> +
> >> +  - ActiveAreas:
> >> +      type: int32_t
> >> +      size: [4 x n]
> >> +      description: |
> >> +        The camera sensor active pixel area rectangles, represented as
> >> +        rectangles contained in the one described by the PixelArray property.
> >> +
> >> +        This property describes an arbitrary number of overlapping rectangles,
> >> +        representing the active pixel portions of the camera sensor pixel array.
> >> +
> >> +        Each rectangle is defined by its displacement from pixel (0, 0) of
> >> +        the rectangle described by the PixelArray property, a width and an
> >> +        height.
> >
> > s/an height/a height/
> >
> >> +
> >> +        Each rectangle described by this property represents the maximum image
> >> +        size that the camera module can produce for a given image resolution.
> >
> > How about s/for a given image resolution/for a particular aspect ratio/
> > ?
> >
> > I would also add, taken from below,
> >
> >         When multiple rectangles are reported, they shall be ordered
> > 	from the tallest to the shortest.
> 
> ack, I thought I had it in previous versions..
> 
> >> +
> >> +        Example 1.
> >> +        A sensor which only produces images in the 4:3 image resolution will
> >> +        report a single ActiveArea rectangle, from which all other image formats
> >> +        are obtained by either cropping the field-of-view and/or applying pixel
> >> +        sub-sampling techniques such as pixel skipping or binning.
> >> +
> >> +                        PixelArray(0)
> >> +                    /-----------------/
> >> +                      x1          x2
> >> +            (0,0)-> +-o------------o-+  /
> >> +                 y1 o +------------+ |  |
> >> +                    | |////////////| |  |
> >> +                    | |////////////| |  | PixelArray(1)
> >> +                    | |////////////| |  |
> >> +                 y2 o +------------+ |  |
> >> +                    +----------------+  /
> >> +
> >> +        The property reports a single rectangle
> >> +
> >> +                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))
> >
> > If the rectangle is defined through width and height, this should be
> >
> >                  ActiveArea = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> >
> > Alternatively you could use coordinates:
> >
> >                  ActiveArea = (x1, y1, x2, y2)
> 
> this is not correct, as y2 is the vertical distance from pixel (0,0)
> while the active area vertical size is (y2 - y1 +1). Same for the
> horizontal size.

My point was that we could define the rectangle through the coordinates
of the top-left and bottom-right pixels (x1, y1, x2, y2), or through the
coordinates of the top-left pixel and its size (x1, y1, x2 - x1 + 1, y2
- y1 + 1). I don't have a preference.
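For what it's worth, the off-by-one relation between the two candidate representations can be sketched like this (hypothetical helpers, not libcamera API):

```python
def corners_to_size(x1, y1, x2, y2):
    # (x2, y2) is the bottom-right *pixel*, so both endpoints are inclusive:
    # a rectangle spanning columns x1..x2 contains x2 - x1 + 1 columns.
    return (x1, y1, x2 - x1 + 1, y2 - y1 + 1)

def size_to_corners(x, y, width, height):
    # Inverse conversion: width/height back to an inclusive bottom-right pixel.
    return (x, y, x + width - 1, y + height - 1)

# A rectangle covering columns 10..11 and rows 20..21 is 2x2 pixels:
assert corners_to_size(10, 20, 11, 21) == (10, 20, 2, 2)
assert size_to_corners(10, 20, 2, 2) == (10, 20, 11, 21)
```

Either representation works as long as the documentation states which one is used; the +1 is only needed when converting inclusive corner coordinates to a size.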

> >> +
> >> +        Example 2
> >> +        A camera sensor which can produce images in different native
> >
> > s/camera sensor/sensor/ here or s/sensor/camera sensor/ in example 1.
> 
> I would keep using 'camera sensor' whenever possible.
> 
> >> +        resolutions, will report several overlapping rectangle, one for each
> >
> > s/resolutions,/resolutions/
> > s/rectangle/rectangles/
> >
> >> +        natively supported resolution, ordered from the tallest to the shortest
> >> +        one.
> >> +
> >> +                        PixelArray(0)
> >> +                    /-----------------/
> >> +                     x1  x2    x3  x4
> >> +            (0,0)-> +o---o------o---o+  /
> >> +                 y1 |    +------+    |  |
> >> +                    |    |//////|    |  |
> >> +                 y2 o+---+------+---+|  |
> >> +                    ||///|//////|///||  | PixelArray(1)
> >> +                 y3 o+---+------+---+|  |
> >> +                    |    |//////|    |  |
> >> +                 y4 |    +------+    |  |
> >
> > I think you need a o next to y1 and y4.
> 
> Yes!
> 
> >> +                    +----+------+----+  /
> >> +
> >> +        The property reports two rectangles
> >> +
> >> +                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),
> >
> > s/y4 - 1/y4 - y1/
> 
> ups, yes
> 
> >> +                               (x1, y2, (x4 - x1), (y3 - y2))
> >
> > And comment as above regarding the width.
> 
> I'll add '- 1'
> 
> >> +
> >> +        The first rectangle describes the maximum field-of-view of all image
> >> +        formats in the 4:3 resolutions, while the second one describes the
> >> +        maximum field of view for all image formats in the 16:9 resolutions.
> >> +
> >> +  - BayerFilterArrangement:
> >> +      type: int32_t
> >> +      description: |
> >> +        The pixel array color filter displacement.
> >
> > I could be wrong, but is "displacement" the right term ? Maybe
> > arrangement ?
> 
> Probably yes.
> 
> >> +
> >> +        This property describes the arrangement and readout sequence of the
> >> +        three RGB color components of the sensor's Bayer Color Filter Array
> >> +        (CFA).
> >
> > As we could support sensors with non-Bayer CFAs, how about already
> > renaming the control to ColorFilterArrangement, and remove Bayer from
> >
> 
> Ack
> 
> >> +
> >> +        Color filters are usually displaced in line-alternating fashion on the
> >
> > s/displaced/arranged/ ?
> >
> >> +        sensor pixel array. In example, one line might be composed of Red-Green
> >
> > s/In example/For example/
> >
> >> +        while the successive is composed of Blue-Green color information.
> >> +
> >> +        The value of this property represents the arrangement of color filters
> >> +        in the top-left 2x2 pixel square.
> >
> > As this is only valid for Bayer, maybe
> >
> >         For Bayer filters, the value of this property represents the arrangement
> >         of color filters in the top-left 2x2 pixel square.
> 
> Ack
> 
> >> +
> >> +        For example, for a sensor with the following color filter displacement
> >
> > s/displacement/arrangement/
> >
> > (you could also use pattern instead of arrangement here or in other
> > places)
> >
> >> +
> >> +                 (0, 0)               (max-col)
> >> +           +---+    +--------------...---+
> >> +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> >> +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> >> +           +---+    |B|G|B|G|B|G|B|...B|G|
> >> +                    ...                  ..
> >> +                    ...                  ..
> >> +                    |G|R|G|R|G|R|G|...G|R|
> >> +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> >> +                    +--------------...---+
> >> +
> >> +        The filter arrangement is represented by the BGGR value, which
> >> +        correspond to the pixel readout sequence in line interleaved mode.
> >> +
> >> +      enum:
> >> +        - name: BayerFilterRGGB
> >> +          value: 0
> >> +          description: |
> >> +            Color filter array displacement is Red-Green/Green-Blue
> >> +
> >> +        - name: BayerFilterGRBG
> >> +          value: 1
> >> +          description: |
> >> +            Color filter array displacement is Green-Red/Blue-Green
> >> +
> >> +        - name: BayerFilterGBRG
> >> +          value: 2
> >> +          description: |
> >> +            Color filter array displacement is Green-Blue/Red-Green
> >> +
> >> +        - name: BayerFilterBGGR
> >> +          value: 3
> >> +          description: |
> >> +            Color filter array displacement is Blue-Green/Green-Red
> >
> > Should these be sorted alphabetically ?
> 
> We could, yes
> 
> >> +
> >> +        - name: BayerFilterNonStandard
> >> +          value: 4
> >> +          description: |
> >> +            The pixel array color filter does not use the standard Bayer RGB
> >> +            color model
> >
> > I would drop this value for now, as we should instead report which
> > filter arrangement is used. We could already add known patterns, such as
> > the ones from https://en.wikipedia.org/wiki/Bayer_filter for instance,
> > but I think that's overkill, we can extend it later when/if needed.
> 
> Ack
> 
> >> +
> >> +  - ISOSensitivityRange:
> >> +      type: int32_t
> >> +      size: [2]
> >> +      description: |
> >> +        The range of supported ISO sensitivities, as documented by the
> >> +        ISO 12232:2006 (or later) standard.
> >
> > Is this the ISO speed or the Standard Output Sensitivity (SOS) ? I think
> > this property needs a bit more research, should we leave it out for now
> > to avoid blocking the rest ?
> 
> Yes, this is barely copied without going into much detail. Let's leave
> it out for now.
> 
> > Overall, good work ! No major issue, so the next version should be the
> > last one.
> 
> Thanks and thanks for your comments.
Laurent Pinchart April 25, 2020, 9:19 p.m. UTC | #4
(Adding Sakari to CC)

On Sat, Apr 25, 2020 at 07:50:12PM +0300, Laurent Pinchart wrote:
> On Sat, Apr 25, 2020 at 03:58:45PM +0200, Jacopo Mondi wrote:
> > On Sat, Apr 25, 2020 at 05:05:11AM +0300, Laurent Pinchart wrote:
> > > On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> > >> Add definition of pixel array related properties.
> > >>
> > >> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > >> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > >> ---
> > >>  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
> > >>  1 file changed, 155 insertions(+)
> > >>
> > >> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > >> index ce627fa042ba..8f6797723a9d 100644
> > >> --- a/src/libcamera/property_ids.yaml
> > >> +++ b/src/libcamera/property_ids.yaml
> > >> @@ -386,4 +386,159 @@ controls:
> > >>                                |                    |
> > >>                                |                    |
> > >>                                +--------------------+
> > >> +
> > >> +  - PixelArraySize:
> > >> +      type: float
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The physical sizes of the pixel array (width and height), in
> > >> +        millimeters.
> > >
> > > Once we'll have categories for properties I think this should become
> > > PhysicalSize in the sensor category. We could already rename it now to
> > > PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
> > > idea. I don't mind much either way, but if we keep the current name, how
> > > about adding a comment to remember this ?
> > 
> > I like PhysicalSize.
> 
> BTW, is mm precise enough, or should we use µm ?
> 
> > >
> > >       # \todo Rename this property to PhysicalSize once we will have property
> > >       # categories
> > >
> > >> +
> > >> +  - PixelArray:
> > >
> > > I was going to say we should rename this to PixelArraySize as it's the
> > > size of the pixel array, and then realized that the previous property
> > > has the same name :-) Maybe we should rename both ?
> > 
> > Let's rename both
> > 
> > >> +      type: int32_t
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> > >> +
> > >> +        The property describes a rectangle with its top-left corner in position
> > >> +        (0, 0) and width and height described by the first and second values
> > >> +        of this property.
> > >> +
> > >> +        The PixelArray property defines the rectangle that includes all possible
> > >> +        rectangles defined by the ActiveAreas property, and describes the full
> > >> +        pixel array, including non-active pixels, black level calibration
> > >> +        pixels etc.
> > >
> > > It's not entirely clear to me what pixels should be included in this
> > > rectangle. I'm not blaming your documentation here, but rather my lack
> > > of knowledge of what else than non-active pixels and black level
> > > calibration pixels there are :-) Just an idea, instead of linking this
> > > property to ActiveAreas, would it make sense to link it to
> > > PixelArraySize ? Maybe something along the lines of
> > 
> > I would just drop rectangle maybe, as that property transports a widht
> > and an height.
> > 
> > > 	The PixelArraySize property defines the size of the full pixel
> > > 	array, covered by the PixelArrayPhysicalSize area. This may
> > > 	include inactive or optical black pixels.
> > 
> > I'll take this in, but I'm not sure if it's a good idea to link this
> > one to the PhysicalArraySize, as they are expressed with two different
> > measure units.
> 
> I think linking the two are important, but we need to figure out what
> the link is. Otherwise, does the physical size represent the size of the
> chip (quite unlikelu), the size of the full pixel array, the size of the
> optical window, ... ?
> 
> I'm looking at the IMX219 datasheet, and it reports
> 
> - Image size: diagonal 4.60mm
> - Total number of pixels: 3296x2512
> - Number of effective pixels: 3296x2480
> - Number of active pixels: 3280x2464
> - Unit cell size: 1.12µm x 1.12µm
> 
> Calculating the diagonal leads to 4641µm, 4620µm and 4595µm for the
> three rectangles respectively. The last one match 4.60mm.
> 
> With the above definition of the physical size, the diagonal would be
> 4641µm (for a size of 3692µm x 2813µm). I wonder if we should report the
> active image size instead (3674µm x 2760µm for a diagonal of 4595µm).
> It's not difficult to convert between the two (assuming all the pixels
> have the same size), but I'd like to pick the one that makes the most
> sense. We can of course reconsider that decision later if we find out we
> made a mistake.
> 
> My reasoning for picking the size of the active area is that the
> physical size will most likely be used to make optics calculation, and
> the lens should be selected and positioned according to the active area.
> However, there could be systems with a different lens positioning, and
> I'm not sure yet what that would imply.
> 
> Android seems to report the physical size of the full pixel array,
> including inactive pixels. I'm not sure what the rationale is, and maybe
> the difference between the two is lost in the measurement error noise in
> the end, so I wouldn't rule out that they may not have been picked the
> best option (but I also don't imply I know better :-)).
> 
> As a conclusion, let's pick one option (physical size covering the
> active area or the full array), and document it.
> 
> > So I would keep at least the first line of the existing
> > description, and s/may include/includes/ in your proposed change, as
> > this is not optional, the property should report inactive pixels too.
> 
> I think it would be better to refer to the pixel array size property in
> the definition of ActiveAreas below, not the other way around, otherwise
> you'll have a cycle :-)
> 
> The sensors I've seen typically have the following structure, from top
> to bottom:
> 
> - A few dummy lines, for unknown purpose
> - Optical black lines, split in three areas (invalid, valid, invalid)
> - Exposed lines, split in three areas (invalid, valid, invalid)
> 
> Exposed lines are sometimes called effective, and "active" pixels can
> refer to either all the exposed lines or to the valid exposed lines,
> depending on datasheets.
> 
> The invalid lines are considered not to be usable, usually because
> they're too close to the edges and thus can contain artifacts. Some of
> them may be used for internal purpose. The dummy lines, optical black
> lines (both invalid and valid) and invalid exposed lines may or may not
> be readable, depending on the sensor.
> 
> The question now is what else than the valid exposed pixels need to be
> reported in this property. I think we need to look at it from the point
> of view of what the sensor can readout (assuming we map the physical
> size to the valid active pixels), otherwise I don't really see the point
> in giving information about something that would have absolutely no
> influence on the receiver side.
> 
> > >> +
> > >> +  - ActiveAreas:
> > >> +      type: int32_t
> > >> +      size: [4 x n]
> > >> +      description: |
> > >> +        The camera sensor active pixel area rectangles, represented as
> > >> +        rectangles contained in the one described by the PixelArray property.
> > >> +
> > >> +        This property describes an arbitrary number of overlapping rectangles,
> > >> +        representing the active pixel portions of the camera sensor pixel array.
> > >> +
> > >> +        Each rectangle is defined by its displacement from pixel (0, 0) of
> > >> +        the rectangle described by the PixelArray property, a width and an
> > >> +        height.
> > >
> > > s/an height/a height/
> > >
> > >> +
> > >> +        Each rectangle described by this property represents the maximum image
> > >> +        size that the camera module can produce for a given image resolution.
> > >
> > > How about s/for a given image resolution/for a particular aspect ratio/
> > > ?
> > >
> > > I would also add, taken from below,
> > >
> > >         When multiple rectangles are reported, they shall be ordered
> > > 	from the tallest to the shortest.
> > 
> > ack, I though I had it in previous versions..
> > 
> > >> +
> > >> +        Example 1.
> > >> +        A sensor which only produces images in the 4:3 image resolution will
> > >> +        report a single ActiveArea rectangle, from which all other image formats
> > >> +        are obtained by either cropping the field-of-view and/or applying pixel
> > >> +        sub-sampling techniques such as pixel skipping or binning.
> > >> +
> > >> +                        PixelArray(0)
> > >> +                    /-----------------/
> > >> +                      x1          x2
> > >> +            (0,0)-> +-o------------o-+  /
> > >> +                 y1 o +------------+ |  |
> > >> +                    | |////////////| |  |
> > >> +                    | |////////////| |  | PixelArray(1)
> > >> +                    | |////////////| |  |
> > >> +                 y2 o +------------+ |  |
> > >> +                    +----------------+  /
> > >> +
> > >> +        The property reports a single rectangle
> > >> +
> > >> +                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))
> > >
> > > If the rectangle is defined through width and height, this should be
> > >
> > >                  ActiveArea = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > >
> > > Alternatively you could use coordinates:
> > >
> > >                  ActiveArea = (x1, y1, x2, y2)
> > 
> > this is not correct, as y2 is the vertical distance from pixel (0,0)
> > while the active area vertical size is (y2 - y1 +1). Same for the
> > horizontal size.
> 
> My point was that we could define the rectangle through the coordinates
> of the top-left and bottom-right pixels (x1, y1, x2, y2), or through the
> coordinates of the top-left pixel and its size (x1, y1, x2 - x1 + 1, y2
> - y1 + 1). I don't have a preference.
> 
> > >> +
> > >> +        Example 2
> > >> +        A camera sensor which can produce images in different native
> > >
> > > s/camera sensor/sensor/ here or s/sensor/camera sensor/ in example 1.
> > 
> > I would keep using 'camera sensor' whenever possible.
> > 
> > >> +        resolutions, will report several overlapping rectangle, one for each
> > >
> > > s/resolutions,/resolutions/
> > > s/rectangle/rectangles/
> > >
> > >> +        natively supported resolution, ordered from the tallest to the shortest
> > >> +        one.
> > >> +
> > >> +                        PixelArray(0)
> > >> +                    /-----------------/
> > >> +                     x1  x2    x3  x4
> > >> +            (0,0)-> +o---o------o---o+  /
> > >> +                 y1 |    +------+    |  |
> > >> +                    |    |//////|    |  |
> > >> +                 y2 o+---+------+---+|  |
> > >> +                    ||///|//////|///||  | PixelArray(1)
> > >> +                 y3 o+---+------+---+|  |
> > >> +                    |    |//////|    |  |
> > >> +                 y4 |    +------+    |  |
> > >
> > > I think you need a o next to y1 and y4.
> > 
> > Yes!
> > 
> > >> +                    +----+------+----+  /
> > >> +
> > >> +        The property reports two rectangles
> > >> +
> > >> +                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),
> > >
> > > s/y4 - 1/y4 - y1/
> > 
> > ups, yes
> > 
> > >> +                               (x1, y2, (x4 - x1), (y3 - y2))
> > >
> > > And comment as above regarding the width.
> > 
> > I'll add '- 1'
> > 
> > >> +
> > >> +        The first rectangle describes the maximum field-of-view of all image
> > >> +        formats in the 4:3 resolutions, while the second one describes the
> > >> +        maximum field of view for all image formats in the 16:9 resolutions.
> > >> +
> > >> +  - BayerFilterArrangement:
> > >> +      type: int32_t
> > >> +      description: |
> > >> +        The pixel array color filter displacement.
> > >
> > > I could be wrong, but is "displacement" the right term ? Maybe
> > > arrangement ?
> > 
> > Probably yes.
> > 
> > >> +
> > >> +        This property describes the arrangement and readout sequence of the
> > >> +        three RGB color components of the sensor's Bayer Color Filter Array
> > >> +        (CFA).
> > >
> > > As we could support sensors with non-Bayer CFAs, how about already
> > > renaming the control to ColorFilterArrangement, and remove Bayer from
> > >
> > 
> > Ack
> > 
> > >> +
> > >> +        Color filters are usually displaced in line-alternating fashion on the
> > >
> > > s/displaced/arranged/ ?
> > >
> > >> +        sensor pixel array. In example, one line might be composed of Red-Green
> > >
> > > s/In example/For example/
> > >
> > >> +        while the successive is composed of Blue-Green color information.
> > >> +
> > >> +        The value of this property represents the arrangement of color filters
> > >> +        in the top-left 2x2 pixel square.
> > >
> > > As this is only valid for Bayer, maybe
> > >
> > >         For Bayer filters, the value of this property represents the arrangement
> > >         of color filters in the top-left 2x2 pixel square.
> > 
> > Ack
> > 
> > >> +
> > >> +        For example, for a sensor with the following color filter displacement
> > >
> > > s/displacement/arrangement/
> > >
> > > (you could also use pattern instead of arrangement here or in other
> > > places)
> > >
> > >> +
> > >> +                 (0, 0)               (max-col)
> > >> +           +---+    +--------------...---+
> > >> +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> > >> +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> > >> +           +---+    |B|G|B|G|B|G|B|...B|G|
> > >> +                    ...                  ..
> > >> +                    ...                  ..
> > >> +                    |G|R|G|R|G|R|G|...G|R|
> > >> +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> > >> +                    +--------------...---+
> > >> +
> > >> +        The filter arrangement is represented by the BGGR value, which
> > >> +        correspond to the pixel readout sequence in line interleaved mode.
> > >> +
> > >> +      enum:
> > >> +        - name: BayerFilterRGGB
> > >> +          value: 0
> > >> +          description: |
> > >> +            Color filter array displacement is Red-Green/Green-Blue
> > >> +
> > >> +        - name: BayerFilterGRBG
> > >> +          value: 1
> > >> +          description: |
> > >> +            Color filter array displacement is Green-Red/Blue-Green
> > >> +
> > >> +        - name: BayerFilterGBRG
> > >> +          value: 2
> > >> +          description: |
> > >> +            Color filter array displacement is Green-Blue/Red-Green
> > >> +
> > >> +        - name: BayerFilterBGGR
> > >> +          value: 3
> > >> +          description: |
> > >> +            Color filter array displacement is Blue-Green/Green-Red
> > >
> > > Should these be sorted alphabetically ?
> > 
> > We could, yes
> > 
> > >> +
> > >> +        - name: BayerFilterNonStandard
> > >> +          value: 4
> > >> +          description: |
> > >> +            The pixel array color filter does not use the standard Bayer RGB
> > >> +            color model
> > >
> > > I would drop this value for now, as we should instead report which
> > > filter arrangement is used. We could already add known patterns, such as
> > > the ones from https://en.wikipedia.org/wiki/Bayer_filter for instance,
> > > but I think that's overkill, we can extend it later when/if needed.
> > 
> > Ack
> > 
> > >> +
> > >> +  - ISOSensitivityRange:
> > >> +      type: int32_t
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The range of supported ISO sensitivities, as documented by the
> > >> +        ISO 12232:2006 (or later) standard.
> > >
> > > Is this the ISO speed or the Standard Output Sensitivity (SOS) ? I think
> > > this property needs a bit more research, should we leave it out for now
> > > to avoid blocking the rest ?
> > 
> > Yes, this is barely copied without going into much detail. Let's leave
> > it out for now.
> > 
> > > Overall, good work ! No major issue, so the next version should be the
> > > last one.
> > 
> > Thanks and thanks for your comments.
Jacopo Mondi May 19, 2020, 3:50 p.m. UTC | #5
Hi Laurent,
   I just re-started looking into this

On Sat, Apr 25, 2020 at 07:50:11PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> On Sat, Apr 25, 2020 at 03:58:45PM +0200, Jacopo Mondi wrote:
> > On Sat, Apr 25, 2020 at 05:05:11AM +0300, Laurent Pinchart wrote:
> > > On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> > >> Add definition of pixel array related properties.
> > >>
> > >> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > >> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > >> ---
> > >>  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
> > >>  1 file changed, 155 insertions(+)
> > >>
> > >> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > >> index ce627fa042ba..8f6797723a9d 100644
> > >> --- a/src/libcamera/property_ids.yaml
> > >> +++ b/src/libcamera/property_ids.yaml
> > >> @@ -386,4 +386,159 @@ controls:
> > >>                                |                    |
> > >>                                |                    |
> > >>                                +--------------------+
> > >> +
> > >> +  - PixelArraySize:
> > >> +      type: float
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The physical sizes of the pixel array (width and height), in
> > >> +        millimeters.
> > >
> > > Once we'll have categories for properties I think this should become
> > > PhysicalSize in the sensor category. We could already rename it now to
> > > PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
> > > idea. I don't mind much either way, but if we keep the current name, how
> > > about adding a comment to remember this ?
> >
> > I like PhysicalSize.
>
> BTW, is mm precise enough, or should we use µm ?
>

I had a look at a few manuals of recent-ish sensors, and their sizes
(active or full pixel array) range from ~2 to ~6 mm, usually with a
precision of three more digits.

Considering that the single pixel size will shrink, even if resolution
will likely go up, we might go below 1 mm, and anyway it's easier to
express 1234 µm than 1.234 mm.
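For instance, redoing your IMX219 arithmetic in µm (a throwaway check, using the datasheet values quoted above):

```python
import math

# IMX219 datasheet values quoted earlier in this thread
cell_um = 1.12            # unit cell size, in micrometers
active = (3280, 2464)     # number of active pixels
total = (3296, 2512)      # total number of pixels

active_um = (active[0] * cell_um, active[1] * cell_um)  # ~3674 x 2760 um
total_um = (total[0] * cell_um, total[1] * cell_um)     # ~3692 x 2813 um

diag_active = math.hypot(*active_um)  # ~4595 um, matches the "4.60 mm" image size
diag_total = math.hypot(*total_um)    # ~4641 um
```

All values stay comfortably in whole micrometers, which supports expressing the property in µm rather than fractional mm.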

> > >
> > >       # \todo Rename this property to PhysicalSize once we will have property
> > >       # categories
> > >
> > >> +
> > >> +  - PixelArray:
> > >
> > > I was going to say we should rename this to PixelArraySize as it's the
> > > size of the pixel array, and then realized that the previous property
> > > has the same name :-) Maybe we should rename both ?
> >
> > Let's rename both
> >
> > >> +      type: int32_t
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> > >> +
> > >> +        The property describes a rectangle with its top-left corner in position
> > >> +        (0, 0) and width and height described by the first and second values
> > >> +        of this property.
> > >> +
> > >> +        The PixelArray property defines the rectangle that includes all possible
> > >> +        rectangles defined by the ActiveAreas property, and describes the full
> > >> +        pixel array, including non-active pixels, black level calibration
> > >> +        pixels etc.
> > >
> > > It's not entirely clear to me what pixels should be included in this
> > > rectangle. I'm not blaming your documentation here, but rather my lack
> > > of knowledge of what else than non-active pixels and black level
> > > calibration pixels there are :-) Just an idea, instead of linking this
> > > property to ActiveAreas, would it make sense to link it to
> > > PixelArraySize ? Maybe something along the lines of
> >
> > I would just drop rectangle maybe, as that property transports a widht
> > and an height.
> >
> > > 	The PixelArraySize property defines the size of the full pixel
> > > 	array, covered by the PixelArrayPhysicalSize area. This may
> > > 	include inactive or optical black pixels.
> >
> > I'll take this in, but I'm not sure if it's a good idea to link this
> > one to the PhysicalArraySize, as they are expressed with two different
> > measure units.
>
> I think linking the two are important, but we need to figure out what
> the link is. Otherwise, does the physical size represent the size of the
> chip (quite unlikelu), the size of the full pixel array, the size of the
> optical window, ... ?

A few manuals and product briefs from OV report the chip size (I
assume, at least, as it varies depending on the package). But as you
noted below, I presume the size is mostly used for optical
calculations, so the chip area size is indeed less relevant.

>
> I'm looking at the IMX219 datasheet, and it reports
>
> - Image size: diagonal 4.60mm
> - Total number of pixels: 3296x2512
> - Number of effective pixels: 3296x2480
> - Number of active pixels: 3280x2464
> - Unit cell size: 1.12µm x 1.12µm
>
> Calculating the diagonal leads to 4641µm, 4620µm and 4595µm for the
> three rectangles respectively. The last one match 4.60mm.
>
> With the above definition of the physical size, the diagonal would be
> 4641µm (for a size of 3692µm x 2813µm). I wonder if we should report the
> active image size instead (3674µm x 2760µm for a diagonal of 4595µm).
> It's not difficult to convert between the two (assuming all the pixels
> have the same size), but I'd like to pick the one that makes the most
> sense. We can of course reconsider that decision later if we find out we
> made a mistake.
>
> My reasoning for picking the size of the active area is that the
> physical size will most likely be used to make optics calculation, and
> the lens should be selected and positioned according to the active area.
> However, there could be systems with a different lens positioning, and
> I'm not sure yet what that would imply.
>

I have checked the same for the ov13858, and it reports the physical
size of the 'active area'.

The ov5670 instead reports as physical size an area larger than the
active one.

I fear there is no standard, but I agree with linking the physical
area size to the active pixel area size. Unfortunately, defining the
'active area' itself might not be trivial.

> Android seems to report the physical size of the full pixel array,
> including inactive pixels. I'm not sure what the rationale is, and maybe
> the difference between the two is lost in the measurement error noise in
> the end, so I wouldn't rule out that they may not have been picked the
> best option (but I also don't imply I know better :-)).
>
> As a conclusion, let's pick one option (physical size covering the
> active area or the full array), and document it.
>
> > So I would keep at least the first line of the existing
> > description, and s/may include/includes/ in your proposed change, as
> > this is not optional, the property should report inactive pixels too.
>
> I think it would be better to refer to the pixel array size property in
> the definition of ActiveAreas below, not the other way around, otherwise
> you'll have a cycle :-)
>
> The sensors I've seen typically have the following structure, from top
> to bottom:
>
> - A few dummy lines, for unknown purpose
> - Optical black lines, split in three areas (invalid, valid, invalid)

What is a 'valid' black line compared to an 'invalid' one ? I think
the same applies to columns.

> - Exposed lines, split in three areas (invalid, valid, invalid)
>
> Exposed lines are sometimes called effective, and "active" pixels can
> refer to either all the exposed lines or to the valid exposed lines,
> depending on datasheets.
>
> The invalid lines are considered not to be usable, usually because
> they're too close to the edges and thus can contain artifacts. Some of
> them may be used for internal purpose. The dummy lines, optical black
> lines (both invalid and valid) and invalid exposed lines may or may not
> be readable, depending on the sensor.
>
> The question now is what else than the valid exposed pixels need to be
> reported in this property. I think we need to look at it from the point
> of view of what the sensor can readout (assuming we map the physical
> size to the valid active pixels), otherwise I don't really see the point
> in giving information about something that would have absolutely no
> influence on the receiver side.

I would consider invalid but exposed lines as part of the
PixelArraySize; usually pixel (0,0) is part of an invalid but exposed
line/column, isn't it ?

I would then just drop the mention of 'optical black' from the
property definition, but keep it mostly like it is.
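To make sure we mean the same thing, here is your row layout as a toy model (all line counts invented purely for illustration, they don't come from any datasheet):

```python
# Hypothetical per-row budget for a sensor, top to bottom, following the
# structure described above.
dummy = 8
ob_invalid_top, ob_valid, ob_invalid_bottom = 2, 16, 2              # optical black
exp_invalid_top, exp_valid, exp_invalid_bottom = 8, 2464, 8         # exposed

# Under this proposal, PixelArraySize counts every exposed line, valid
# or not, while the ActiveAreas rectangles cover only the valid ones.
pixel_array_height = exp_invalid_top + exp_valid + exp_invalid_bottom
active_height = exp_valid

# Dummy and optical black lines may or may not be readable, depending
# on the sensor; if they are, the full readout would be taller still.
full_readout_height = (dummy + ob_invalid_top + ob_valid
                       + ob_invalid_bottom + pixel_array_height)

assert pixel_array_height == 2480
assert active_height == 2464
```

So with these numbers the property would report 2480 lines even though only 2464 of them end up in any ActiveAreas rectangle.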

>
> > >> +
> > >> +  - ActiveAreas:
> > >> +      type: int32_t
> > >> +      size: [4 x n]
> > >> +      description: |
> > >> +        The camera sensor active pixel area rectangles, represented as
> > >> +        rectangles contained in the one described by the PixelArray property.
> > >> +
> > >> +        This property describes an arbitrary number of overlapping rectangles,
> > >> +        representing the active pixel portions of the camera sensor pixel array.
> > >> +
> > >> +        Each rectangle is defined by its displacement from pixel (0, 0) of
> > >> +        the rectangle described by the PixelArray property, a width and an
> > >> +        height.
> > >
> > > s/an height/a height/
> > >
> > >> +
> > >> +        Each rectangle described by this property represents the maximum image
> > >> +        size that the camera module can produce for a given image resolution.
> > >
> > > How about s/for a given image resolution/for a particular aspect ratio/
> > > ?
> > >
> > > I would also add, taken from below,
> > >
> > >         When multiple rectangles are reported, they shall be ordered
> > > 	from the tallest to the shortest.
> >
> > ack, I thought I had it in previous versions..
> >
> > >> +
> > >> +        Example 1.
> > >> +        A sensor which only produces images in the 4:3 image resolution will
> > >> +        report a single ActiveArea rectangle, from which all other image formats
> > >> +        are obtained by either cropping the field-of-view and/or applying pixel
> > >> +        sub-sampling techniques such as pixel skipping or binning.
> > >> +
> > >> +                        PixelArray(0)
> > >> +                    /-----------------/
> > >> +                      x1          x2
> > >> +            (0,0)-> +-o------------o-+  /
> > >> +                 y1 o +------------+ |  |
> > >> +                    | |////////////| |  |
> > >> +                    | |////////////| |  | PixelArray(1)
> > >> +                    | |////////////| |  |
> > >> +                 y2 o +------------+ |  |
> > >> +                    +----------------+  /
> > >> +
> > >> +        The property reports a single rectangle
> > >> +
> > >> +                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))
> > >
> > > If the rectangle is defined through width and height, this should be
> > >
> > >                  ActiveArea = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > >
> > > Alternatively you could use coordinates:
> > >
> > >                  ActiveArea = (x1, y1, x2, y2)
> >
> > this is not correct, as y2 is the vertical distance from pixel (0,0)
> > while the active area vertical size is (y2 - y1 +1). Same for the
> > horizontal size.
>
> My point was that we could define the rectangle through the coordinates
> of the top-left and bottom-right pixels (x1, y1, x2, y2), or through the
> coordinates of the top-left pixel and its size (x1, y1, x2 - x1 + 1, y2
> - y1 + 1). I don't have a preference.

x2 != (x2 - x1)
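
To make the off-by-one in the exchange above explicit: with inclusive pixel
coordinates (x1, y1) for the top-left corner and (x2, y2) for the
bottom-right corner, the width and height are (x2 - x1 + 1) and
(y2 - y1 + 1), since both corner pixels belong to the rectangle. A minimal
sketch (the function name is illustrative, not part of the libcamera API):

```python
def rect_from_corners(x1, y1, x2, y2):
    """Build an (x, y, width, height) tuple from inclusive corner coordinates."""
    # Both corner pixels belong to the rectangle, hence the '+ 1'.
    return (x1, y1, x2 - x1 + 1, y2 - y1 + 1)

# A 2x2 rectangle: columns 10 and 11, rows 20 and 21.
assert rect_from_corners(10, 20, 11, 21) == (10, 20, 2, 2)

# Using (x2 - x1) alone under-counts by one pixel per axis.
x1, x2 = 10, 11
assert x2 - x1 == 1       # not the width
assert x2 - x1 + 1 == 2   # the actual width
```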

>
> > >> +
> > >> +        Example 2
> > >> +        A camera sensor which can produce images in different native
> > >
> > > s/camera sensor/sensor/ here or s/sensor/camera sensor/ in example 1.
> >
> > I would keep using 'camera sensor' whenever possible.
> >
> > >> +        resolutions, will report several overlapping rectangle, one for each
> > >
> > > s/resolutions,/resolutions/
> > > s/rectangle/rectangles/
> > >
> > >> +        natively supported resolution, ordered from the tallest to the shortest
> > >> +        one.
> > >> +
> > >> +                        PixelArray(0)
> > >> +                    /-----------------/
> > >> +                     x1  x2    x3  x4
> > >> +            (0,0)-> +o---o------o---o+  /
> > >> +                 y1 |    +------+    |  |
> > >> +                    |    |//////|    |  |
> > >> +                 y2 o+---+------+---+|  |
> > >> +                    ||///|//////|///||  | PixelArray(1)
> > >> +                 y3 o+---+------+---+|  |
> > >> +                    |    |//////|    |  |
> > >> +                 y4 |    +------+    |  |
> > >
> > > I think you need a o next to y1 and y4.
> >
> > Yes!
> >
> > >> +                    +----+------+----+  /
> > >> +
> > >> +        The property reports two rectangles
> > >> +
> > >> +                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),
> > >
> > > s/y4 - 1/y4 - y1/
> >
> > ups, yes
> >
> > >> +                               (x1, y2, (x4 - x1), (y3 - y2))
> > >
> > > And comment as above regarding the width.
> >
> > I'll add '- 1'
> >
> > >> +
> > >> +        The first rectangle describes the maximum field-of-view of all image
> > >> +        formats in the 4:3 resolutions, while the second one describes the
> > >> +        maximum field of view for all image formats in the 16:9 resolutions.
> > >> +
> > >> +  - BayerFilterArrangement:
> > >> +      type: int32_t
> > >> +      description: |
> > >> +        The pixel array color filter displacement.
> > >
> > > I could be wrong, but is "displacement" the right term ? Maybe
> > > arrangement ?
> >
> > Probably yes.
> >
> > >> +
> > >> +        This property describes the arrangement and readout sequence of the
> > >> +        three RGB color components of the sensor's Bayer Color Filter Array
> > >> +        (CFA).
> > >
> > > As we could support sensors with non-Bayer CFAs, how about already
> > > renaming the control to ColorFilterArrangement, and remove Bayer from
> > >
> >
> > Ack
> >
> > >> +
> > >> +        Color filters are usually displaced in line-alternating fashion on the
> > >
> > > s/displaced/arranged/ ?
> > >
> > >> +        sensor pixel array. In example, one line might be composed of Red-Green
> > >
> > > s/In example/For example/
> > >
> > >> +        while the successive is composed of Blue-Green color information.
> > >> +
> > >> +        The value of this property represents the arrangement of color filters
> > >> +        in the top-left 2x2 pixel square.
> > >
> > > As this is only valid for Bayer, maybe
> > >
> > >         For Bayer filters, the value of this property represents the arrangement
> > >         of color filters in the top-left 2x2 pixel square.
> >
> > Ack
> >
> > >> +
> > >> +        For example, for a sensor with the following color filter displacement
> > >
> > > s/displacement/arrangement/
> > >
> > > (you could also use pattern instead of arrangement here or in other
> > > places)
> > >
> > >> +
> > >> +                 (0, 0)               (max-col)
> > >> +           +---+    +--------------...---+
> > >> +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> > >> +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> > >> +           +---+    |B|G|B|G|B|G|B|...B|G|
> > >> +                    ...                  ..
> > >> +                    ...                  ..
> > >> +                    |G|R|G|R|G|R|G|...G|R|
> > >> +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> > >> +                    +--------------...---+
> > >> +
> > >> +        The filter arrangement is represented by the BGGR value, which
> > >> +        correspond to the pixel readout sequence in line interleaved mode.
> > >> +
> > >> +      enum:
> > >> +        - name: BayerFilterRGGB
> > >> +          value: 0
> > >> +          description: |
> > >> +            Color filter array displacement is Red-Green/Green-Blue
> > >> +
> > >> +        - name: BayerFilterGRBG
> > >> +          value: 1
> > >> +          description: |
> > >> +            Color filter array displacement is Green-Red/Blue-Green
> > >> +
> > >> +        - name: BayerFilterGBRG
> > >> +          value: 2
> > >> +          description: |
> > >> +            Color filter array displacement is Green-Blue/Red-Green
> > >> +
> > >> +        - name: BayerFilterBGGR
> > >> +          value: 3
> > >> +          description: |
> > >> +            Color filter array displacement is Blue-Green/Green-Red
> > >
> > > Should these be sorted alphabetically ?
> >
> > We could, yes
> >
> > >> +
> > >> +        - name: BayerFilterNonStandard
> > >> +          value: 4
> > >> +          description: |
> > >> +            The pixel array color filter does not use the standard Bayer RGB
> > >> +            color model
> > >
> > > I would drop this value for now, as we should instead report which
> > > filter arrangement is used. We could already add known patterns, such as
> > > the ones from https://en.wikipedia.org/wiki/Bayer_filter for instance,
> > > but I think that's overkill, we can extend it later when/if needed.
> >
> > Ack
> >
> > >> +
> > >> +  - ISOSensitivityRange:
> > >> +      type: int32_t
> > >> +      size: [2]
> > >> +      description: |
> > >> +        The range of supported ISO sensitivities, as documented by the
> > >> +        ISO 12232:2006 (or later) standard.
> > >
> > > Is this the ISO speed or the Standard Output Sensitivity (SOS) ? I think
> > > this property needs a bit more research, should we leave it out for now
> > > to avoid blocking the rest ?
> >
> > Yes, this was merely copied over without going into much detail.
> > Let's leave it out for now.
> >
> > > Overall, good work ! No major issue, so the next version should be the
> > > last one.
> >
> > Thanks and thanks for your comments.

There is a v4 out, but it was probably sent before these comments were
received. I'll send a v5, maybe of this patch only, to fast-track it.

>
> --
> Regards,
>
> Laurent Pinchart
Jacopo Mondi May 20, 2020, 10:42 a.m. UTC | #6
Hi again,

On Tue, May 19, 2020 at 05:50:07PM +0200, Jacopo Mondi wrote:
> Hi Laurent,
>    I just re-started looking into this
>
> On Sat, Apr 25, 2020 at 07:50:11PM +0300, Laurent Pinchart wrote:
> > Hi Jacopo,
> >
> > On Sat, Apr 25, 2020 at 03:58:45PM +0200, Jacopo Mondi wrote:
> > > On Sat, Apr 25, 2020 at 05:05:11AM +0300, Laurent Pinchart wrote:
> > > > On Fri, Apr 24, 2020 at 11:52:53PM +0200, Jacopo Mondi wrote:
> > > >> Add definition of pixel array related properties.
> > > >>
> > > >> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > > >> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > > >> ---
> > > >>  src/libcamera/property_ids.yaml | 155 ++++++++++++++++++++++++++++++++
> > > >>  1 file changed, 155 insertions(+)
> > > >>
> > > >> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > > >> index ce627fa042ba..8f6797723a9d 100644
> > > >> --- a/src/libcamera/property_ids.yaml
> > > >> +++ b/src/libcamera/property_ids.yaml
> > > >> @@ -386,4 +386,159 @@ controls:
> > > >>                                |                    |
> > > >>                                |                    |
> > > >>                                +--------------------+
> > > >> +
> > > >> +  - PixelArraySize:
> > > >> +      type: float
> > > >> +      size: [2]
> > > >> +      description: |
> > > >> +        The physical sizes of the pixel array (width and height), in
> > > >> +        millimeters.
> > > >
> > > > Once we'll have categories for properties I think this should become
> > > > PhysicalSize in the sensor category. We could already rename it now to
> > > > PhysicalSize or PixelArrayPhysicalSize if you think it would be a good
> > > > idea. I don't mind much either way, but if we keep the current name, how
> > > > about adding a comment to remember this ?
> > >
> > > I like PhysicalSize.
> >
> > BTW, is mm precise enough, or should we use µm ?
> >
>
> I had a look at a few manuals of recent-ish sensors, and their sizes
> (active or full pixel array) range from ~2 to ~6 mm, usually with
> three more digits of precision.
>
> Considering that single pixel sizes will shrink, even if resolutions
> will likely go up, we might go below 1 mm; and anyway, it's easier to
> express 1234 µm than 1.234 mm
>
> > > >
> > > >       # \todo Rename this property to PhysicalSize once we will have property
> > > >       # categories
> > > >
> > > >> +
> > > >> +  - PixelArray:
> > > >
> > > > I was going to say we should rename this to PixelArraySize as it's the
> > > > size of the pixel array, and then realized that the previous property
> > > > has the same name :-) Maybe we should rename both ?
> > >
> > > Let's rename both
> > >
> > > >> +      type: int32_t
> > > >> +      size: [2]
> > > >> +      description: |
> > > >> +        The camera sensor pixel array vertical and horizontal sizes, in pixels.
> > > >> +
> > > >> +        The property describes a rectangle with its top-left corner in position
> > > >> +        (0, 0) and width and height described by the first and second values
> > > >> +        of this property.
> > > >> +
> > > >> +        The PixelArray property defines the rectangle that includes all possible
> > > >> +        rectangles defined by the ActiveAreas property, and describes the full
> > > >> +        pixel array, including non-active pixels, black level calibration
> > > >> +        pixels etc.
> > > >
> > > > It's not entirely clear to me what pixels should be included in this
> > > > rectangle. I'm not blaming your documentation here, but rather my lack
> > > > of knowledge of what else than non-active pixels and black level
> > > > calibration pixels there are :-) Just an idea, instead of linking this
> > > > property to ActiveAreas, would it make sense to link it to
> > > > PixelArraySize ? Maybe something along the lines of
> > >
> > > I would just drop rectangle maybe, as that property transports a width
> > > and a height.
> > >
> > > > 	The PixelArraySize property defines the size of the full pixel
> > > > 	array, covered by the PixelArrayPhysicalSize area. This may
> > > > 	include inactive or optical black pixels.
> > >
> > > I'll take this in, but I'm not sure if it's a good idea to link this
> > > one to the PhysicalArraySize, as they are expressed in two different
> > > units.
> >
> > I think linking the two is important, but we need to figure out what
> > the link is. Otherwise, does the physical size represent the size of the
> > chip (quite unlikely), the size of the full pixel array, the size of the
> > optical window, ... ?
>
> A few manuals and product briefs from OV report the chip size (I
> assume, at least, as it varies depending on the package). But as you
> noticed below, I presume the size is mostly used for optical
> calculations, so the chip area size is less relevant indeed.
>
> >
> > I'm looking at the IMX219 datasheet, and it reports
> >
> > - Image size: diagonal 4.60mm
> > - Total number of pixels: 3296x2512
> > - Number of effective pixels: 3296x2480
> > - Number of active pixels: 3280x2464
> > - Unit cell size: 1.12µm x 1.12µm
> >
> > Calculating the diagonal leads to 4641µm, 4620µm and 4595µm for the
> > three rectangles respectively. The last one match 4.60mm.
> >
> > With the above definition of the physical size, the diagonal would be
> > 4641µm (for a size of 3692µm x 2813µm). I wonder if we should report the
> > active image size instead (3674µm x 2760µm for a diagonal of 4595µm).
> > It's not difficult to convert between the two (assuming all the pixels
> > have the same size), but I'd like to pick the one that makes the most
> > sense. We can of course reconsider that decision later if we find out we
> > made a mistake.
> >
> > My reasoning for picking the size of the active area is that the
> > physical size will most likely be used to make optics calculations, and
> > the lens should be selected and positioned according to the active area.
> > However, there could be systems with a different lens positioning, and
> > I'm not sure yet what that would imply.
> >
>
> I have checked the same for the ov13858, and it reports
> the physical size of the 'active area'.
>
> ov5670 instead reports as physical size an area larger than the
> active one.
>
> I fear there is no standard, but I agree with linking the physical area
> size to the active pixel area size; unfortunately, defining the 'active
> area' itself might not be trivial
>

Now that I re-read that, I wonder: do we need to expose the already
calculated physical size? V4L2 already offers a control to retrieve
the unit cell size; if we expose the rectangle(s) in unit cells for
each area, applications can do the calculation themselves, if they
want to, right?

In this way, my proposal now would be:
1) Expose the unit cell size (there's a V4L2 control already)
2) Expose three rectangles (in pixel units)
  - Total number of pixels
    Includes optically black pixels, exposed-but-not-valid pixels and
    valid pixels
  - Effective pixels
    The total number of exposed pixels, valid or not; includes all the
    possible 'valid' rectangles
  - Valid (/active) pixels
    The rectangle(s) of exposed and valid pixels, contained in the
    effective pixels rectangle, which represent the maximum image
    size for each image resolution

Applications can calculate the physical areas of interest from these
properties.

The Android HAL will do the calculations to report what Android needs.

The only drawback I see is that we require one more piece of
information from the sensor driver: the UNIT_CELL_SIZE control.

Ideally, if we could match the above three rectangles with the V4L2
selection targets it would be lovely. I fear that part is
under-specified, and for the moment we would like to keep an array of
per-sensor static information to report those values.

Ack to explore this direction ?

Thanks
  j

Patch

diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
index ce627fa042ba..8f6797723a9d 100644
--- a/src/libcamera/property_ids.yaml
+++ b/src/libcamera/property_ids.yaml
@@ -386,4 +386,159 @@  controls:
                               |                    |
                               |                    |
                               +--------------------+
+
+  - PixelArraySize:
+      type: float
+      size: [2]
+      description: |
+        The physical sizes of the pixel array (width and height), in
+        millimeters.
+
+  - PixelArray:
+      type: int32_t
+      size: [2]
+      description: |
+        The camera sensor pixel array vertical and horizontal sizes, in pixels.
+
+        The property describes a rectangle with its top-left corner in position
+        (0, 0) and width and height described by the first and second values
+        of this property.
+
+        The PixelArray property defines the rectangle that includes all possible
+        rectangles defined by the ActiveAreas property, and describes the full
+        pixel array, including non-active pixels, black level calibration
+        pixels etc.
+
+  - ActiveAreas:
+      type: int32_t
+      size: [4 x n]
+      description: |
+        The camera sensor active pixel area rectangles, represented as
+        rectangles contained in the one described by the PixelArray property.
+
+        This property describes an arbitrary number of overlapping rectangles,
+        representing the active pixel portions of the camera sensor pixel array.
+
+        Each rectangle is defined by its displacement from pixel (0, 0) of
+        the rectangle described by the PixelArray property, a width and an
+        height.
+
+        Each rectangle described by this property represents the maximum image
+        size that the camera module can produce for a given image resolution.
+
+        Example 1.
+        A sensor which only produces images in the 4:3 image resolution will
+        report a single ActiveArea rectangle, from which all other image formats
+        are obtained by either cropping the field-of-view and/or applying pixel
+        sub-sampling techniques such as pixel skipping or binning.
+
+                        PixelArray(0)
+                    /-----------------/
+                      x1          x2
+            (0,0)-> +-o------------o-+  /
+                 y1 o +------------+ |  |
+                    | |////////////| |  |
+                    | |////////////| |  | PixelArray(1)
+                    | |////////////| |  |
+                 y2 o +------------+ |  |
+                    +----------------+  /
+
+        The property reports a single rectangle
+
+                 ActiveArea = (x1, y1, (x2 - x1), (y2 - y1))
+
+        Example 2
+        A camera sensor which can produce images in different native
+        resolutions, will report several overlapping rectangle, one for each
+        natively supported resolution, ordered from the tallest to the shortest
+        one.
+
+                        PixelArray(0)
+                    /-----------------/
+                     x1  x2    x3  x4
+            (0,0)-> +o---o------o---o+  /
+                 y1 |    +------+    |  |
+                    |    |//////|    |  |
+                 y2 o+---+------+---+|  |
+                    ||///|//////|///||  | PixelArray(1)
+                 y3 o+---+------+---+|  |
+                    |    |//////|    |  |
+                 y4 |    +------+    |  |
+                    +----+------+----+  /
+
+        The property reports two rectangles
+
+                PixelArray = ( (x2, y1, (x3 - x2), (y4 - 1),
+                               (x1, y2, (x4 - x1), (y3 - y2))
+
+        The first rectangle describes the maximum field-of-view of all image
+        formats in the 4:3 resolutions, while the second one describes the
+        maximum field of view for all image formats in the 16:9 resolutions.
+
+  - BayerFilterArrangement:
+      type: int32_t
+      description: |
+        The pixel array color filter displacement.
+
+        This property describes the arrangement and readout sequence of the
+        three RGB color components of the sensor's Bayer Color Filter Array
+        (CFA).
+
+        Color filters are usually displaced in line-alternating fashion on the
+        sensor pixel array. In example, one line might be composed of Red-Green
+        while the successive is composed of Blue-Green color information.
+
+        The value of this property represents the arrangement of color filters
+        in the top-left 2x2 pixel square.
+
+        For example, for a sensor with the following color filter displacement
+
+                 (0, 0)               (max-col)
+           +---+    +--------------...---+
+           |B|G|<---|B|G|B|G|B|G|B|...B|G|
+           |G|R|<---|G|R|G|R|G|R|G|...G|R|
+           +---+    |B|G|B|G|B|G|B|...B|G|
+                    ...                  ..
+                    ...                  ..
+                    |G|R|G|R|G|R|G|...G|R|
+                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
+                    +--------------...---+
+
+        The filter arrangement is represented by the BGGR value, which
+        correspond to the pixel readout sequence in line interleaved mode.
+
+      enum:
+        - name: BayerFilterRGGB
+          value: 0
+          description: |
+            Color filter array displacement is Red-Green/Green-Blue
+
+        - name: BayerFilterGRBG
+          value: 1
+          description: |
+            Color filter array displacement is Green-Red/Blue-Green
+
+        - name: BayerFilterGBRG
+          value: 2
+          description: |
+            Color filter array displacement is Green-Blue/Red-Green
+
+        - name: BayerFilterBGGR
+          value: 3
+          description: |
+            Color filter array displacement is Blue-Green/Green-Red
+
+        - name: BayerFilterNonStandard
+          value: 4
+          description: |
+            The pixel array color filter does not use the standard Bayer RGB
+            color model
+
+  - ISOSensitivityRange:
+      type: int32_t
+      size: [2]
+      description: |
+        The range of supported ISO sensitivities, as documented by the
+        ISO 12232:2006 (or later) standard.
+
 ...