[{"id":5065,"web_url":"https://patchwork.libcamera.org/comment/5065/","msgid":"<20200605195344.GN5864@oden.dyn.berto.se>","date":"2020-06-05T19:53:44","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":5,"url":"https://patchwork.libcamera.org/api/people/5/","name":"Niklas Söderlund","email":"niklas.soderlund@ragnatech.se"},"content":"Hi Jacopo,\n\nNice work!\n\nOn 2020-06-04 17:31:22 +0200, Jacopo Mondi wrote:\n> Add definition of pixel array related properties.\n> \n> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n\nReviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>\n\n> ---\n>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n>  1 file changed, 263 insertions(+)\n> \n> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> index ce627fa042ba..762d60881568 100644\n> --- a/src/libcamera/property_ids.yaml\n> +++ b/src/libcamera/property_ids.yaml\n> @@ -386,4 +386,267 @@ controls:\n>                                |                    |\n>                                |                    |\n>                                +--------------------+\n> +\n> +  - UnitCellSize:\n> +      type: Size\n> +      description: |\n> +        The pixel unit cell physical size, in nanometers.\n> +\n> +        The UnitCellSize properties defines the horizontal and vertical sizes\n> +        of a single pixel unit, including its active and non-active parts.\n> +\n> +        The property can be used to calculate the physical size of the sensor's\n> +        pixel array area and for calibration purposes.\n> +\n> +  - PixelArrayPhysicalSize:\n> +      type: Size\n> +      description: |\n> +        The camera sensor full pixel array size, in nanometers.\n> +\n> +        The PixelArrayPhysicalSize property reports the physical dimensions\n> +        (width x height) of the full pixel array matrix, including readable\n> +        and non-readable pixels.\n> 
+\n> +        \\todo Rename this property to PhysicalSize once we will have property\n> +              categories (i.e. Properties::PixelArray::PhysicalSize)\n> +\n> +  - PixelArraySize:\n> +      type: Size\n> +      description: |\n> +        The camera sensor pixel array readable area vertical and horizontal\n> +        sizes, in pixels.\n> +\n> +        The PixelArraySize property defines the size in pixel units of the\n> +        readable part of full pixel array matrix, including optically black\n> +        pixels used for calibration, pixels which are not considered valid for\n> +        capture and active pixels valid for image capture.\n> +\n> +        The property describes a rectangle whose top-left corner is placed\n> +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> +        by the Size element transported by the property.\n> +\n> +        The property describes the maximum size of the raw data produced by\n> +        the sensor, which might not correspond to the physical size of the\n> +        sensor pixel array matrix, as some portions of the physical pixel\n> +        array matrix are not accessible and cannot be transmitted out.\n> +\n> +        For example, a pixel array matrix assembled as follow\n> +\n> +             +--------------------------------------------------+\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             ...          ...           ...      ...          ...\n> +\n> +             ... 
         ...           ...      ...          ...\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             +--------------------------------------------------+\n> +\n> +        composed of two lines of non-readable pixels (x) followed by N lines of\n> +        readable data (D) surrounded by two columns of non-readable pixels on\n> +        each side, only the readable portion is transmitted to the receiving\n> +        side, defining the sizes of the largest possible buffer of raw data\n> +        that can be presented to applications.\n> +\n> +                               PixelArraySize[0]\n> +               /----------------------------------------------/\n> +               +----------------------------------------------+ /\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> +               ...        ...           ...      ...        ...\n> +               ...        ...           ...      ...        
...\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               +----------------------------------------------+ /\n> +\n> +        All other rectangles that describe portions of the pixel array, such as\n> +        the optical black pixel rectangles and active pixel areas, are defined\n> +        relative to the rectangle described by this property.\n> +\n> +        \todo Rename this property to Size once we have property\n> +              categories (i.e. Properties::PixelArray::Size)\n> +\n> +  - PixelArrayOpticalBlackRectangles:\n> +      type: Rectangle\n> +      size: [1 x n]\n> +      description: |\n> +        The raw data buffer regions which contain optical black pixels\n> +        considered valid for calibration purposes.\n> +\n> +        The PixelArrayOpticalBlackRectangles property describes (possibly\n> +        multiple) rectangular areas of the raw data buffer where optical black\n> +        pixels are located and could be accessed for calibration and black\n> +        level correction.\n> +\n> +        This property describes the position and size of optically black pixel\n> +        rectangles relative to their position in the raw data buffer as stored\n> +        in memory, which might differ from their actual physical location in the\n> +        pixel array matrix.\n> +\n> +        It is important to note, in fact, that camera sensors might\n> +        automatically re-order, shuffle or skip portions of their pixel array\n> +        matrix when transmitting data to the receiver.\n> +\n> +        The pixel array contains several areas with different purposes,\n> +        interleaved by lines and columns which are not considered valid for\n> +        capturing purposes. 
Lines and columns are defined as invalid as\n> +        they could be positioned too close to the chip margins or to the optical\n> +        black shielding placed on top of optical black pixels.\n> +\n> +                                PixelArraySize[0]\n> +               /----------------------------------------------/\n> +                  x1                                       x2\n> +               +--o---------------------------------------o---+ /\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               ...          ...           ...     ...       ...\n> +               ...          ...           ...     ...       
...\n> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               +----------------------------------------------+ /\n> +\n> +        The readable pixel array matrix is composed of\n> +        2 invalid lines (I)\n> +        4 lines of valid optically black pixels (O)\n> +        2 invalid lines (I)\n> +        n lines of valid pixel data (P)\n> +        2 invalid lines (I)\n> +\n> +        And the position of the optical black pixel rectangles is defined by\n> +\n> +            PixelArrayOpticalBlackRectangles = {\n> +               { x1, y1, x2 - x1 + 1, y2 - y1 + 1 },\n> +               { x1, y3, 2, y4 - y3 + 1 },\n> +               { x2, y3, 2, y4 - y3 + 1 },\n> +            };\n> +\n> +        When required to output the full pixel array matrix, the sensor might\n> +        automatically skip the invalid lines and columns, producing the\n> +        following data buffer when captured to memory\n> +\n> +                                    PixelArraySize[0]\n> +               /----------------------------------------------/\n> +                                                           x1\n> +               +--------------------------------------------o-+ /\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> +               ...       ...          ...       ...         ... |\n> +               ...       ...          ...       ...         ... 
|\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               +----------------------------------------------+ /\n> +\n> +        The invalid lines and columns should not be reported as part of the\n> +        PixelArraySize property in the first place.\n> +\n> +        In this case, the position of the optical black pixel rectangles will be\n> +\n> +            PixelArrayOpticalBlackRectangles = {\n> +               { 0, 0, PixelArraySize[0], y1 },\n> +               { 0, y1, 2, PixelArraySize[1] - y1 },\n> +               { x1, y1, 2, PixelArraySize[1] - y1 },\n> +            };\n> +\n> +        \todo Rename this property to OpticalBlackRectangles once we have\n> +              property categories (i.e. Properties::PixelArray::OpticalBlackRectangles)\n> +\n> +  - PixelArrayActiveAreas:\n> +      type: Rectangle\n> +      size: [1 x n]\n> +      description: |\n> +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> +        overlapping) portions of the camera sensor readable pixel matrix\n> +        which are considered valid for image acquisition purposes.\n> +\n> +        Each rectangle is defined relative to the PixelArraySize rectangle,\n> +        with its top-left corner defined by its horizontal and vertical\n> +        distances from the PixelArraySize rectangle top-left corner, placed in\n> +        position (0, 0).\n> +\n> +        This property describes an arbitrary number of overlapping rectangles,\n> +        with each rectangle representing the maximum image size that the camera\n> +        sensor can produce for a particular aspect ratio.\n> +\n> +        When multiple rectangles are reported, they shall be ordered from the\n> +        tallest to the shortest.\n> +\n> +        Example 1\n> +        A camera sensor which only produces images in the 4:3 image resolution\n> +        will report a single PixelArrayActiveAreas rectangle, 
from which all\n> +        other image formats are obtained by either cropping the field-of-view\n> +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> +        binning.\n> +\n> +                     PixelArraySize[0]\n> +                    /----------------/\n> +                      x1          x2\n> +            (0,0)-> +-o------------o-+  /\n> +                 y1 o +------------+ |  |\n> +                    | |////////////| |  |\n> +                    | |////////////| |  | PixelArraySize[1]\n> +                    | |////////////| |  |\n> +                 y2 o +------------+ |  |\n> +                    +----------------+  /\n> +\n> +        The property reports a single rectangle\n> +\n> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> +\n> +        Example 2\n> +        A camera sensor which can produce images in different native\n> +        resolutions will report several overlapping rectangles, one for each\n> +        natively supported resolution.\n> +\n> +                     PixelArraySize[0]\n> +                    /------------------/\n> +                      x1  x2    x3  x4\n> +            (0,0)-> +o---o------o---o+  /\n> +                 y1 o    +------+    |  |\n> +                    |    |//////|    |  |\n> +                 y2 o+---+------+---+|  |\n> +                    ||///|//////|///||  | PixelArraySize[1]\n> +                 y3 o+---+------+---+|  |\n> +                    |    |//////|    |  |\n> +                 y4 o    +------+    |  |\n> +                    +----+------+----+  /\n> +\n> +        The property reports two rectangles\n> +\n> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> +\n> +        The first rectangle describes the maximum field-of-view of all image\n> +        formats in the 4:3 resolutions, while the second one describes the\n> +   
     maximum field of view for all image formats in the 16:9 resolutions.\n> +\n> +        \\todo Rename this property to ActiveAreas once we will have property\n> +              categories (i.e. Properties::PixelArray::ActiveAreas)\n> +\n>  ...\n> --\n> 2.26.2\n> \n> _______________________________________________\n> libcamera-devel mailing list\n> libcamera-devel@lists.libcamera.org\n> https://lists.libcamera.org/listinfo/libcamera-devel","headers":{"Return-Path":"<niklas.soderlund@ragnatech.se>","Received":["from mail-lj1-x241.google.com (mail-lj1-x241.google.com\n\t[IPv6:2a00:1450:4864:20::241])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id F213C603CA\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri,  5 Jun 2020 21:53:45 +0200 (CEST)","by mail-lj1-x241.google.com with SMTP id 9so3562705ljv.5\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 05 Jun 2020 12:53:45 -0700 (PDT)","from localhost (h-209-203.A463.priv.bahnhof.se. [155.4.209.203])\n\tby smtp.gmail.com with ESMTPSA id\n\ty28sm1149441ljn.4.2020.06.05.12.53.44\n\t(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n\tFri, 05 Jun 2020 12:53:44 -0700 (PDT)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected)\n\theader.d=ragnatech-se.20150623.gappssmtp.com\n\theader.i=@ragnatech-se.20150623.gappssmtp.com header.b=\"1WoNgsl9\"; \n\tdkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=ragnatech-se.20150623.gappssmtp.com; 
s=20150623;\n\th=date:from:to:cc:subject:message-id:references:mime-version\n\t:content-disposition:content-transfer-encoding:in-reply-to;\n\tbh=mXijWZP2JCTUv0fGwkBvJ9uQMqx7nrkV+dvFmLiDNUA=;\n\tb=1WoNgsl97VnW2hrANBx0QwBIyAAvT7Wz/PF8ljdGsXyPYqxxS05xOjPXMnqlusfqil\n\taYnV1Uc6r2Wy2IVetIU6HBr1G0xTI/HXayE5VDt7tW/cUTqC/y8I5m2mdPg/oDN2Leo7\n\tkDKIV6HkYhHLDgY0aot6VR5lL7kbrNwFqKIabRb6Y9Ggc/rz2ky28KLHilEWI52r3OGT\n\tRA1mFk5D/F+SwQN1+Ya8zt402gVQ5BMgEuHjXg5AuIY47wDQnkMQY7dBx+X3Ku/0x6zl\n\td4QCodCzw/dCEWMbbh0QtDigO7O9SV9hi3/iCaTgBqlHeuADppHSz0CMrtf3MbPQ/yi8\n\tHrMA==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:date:from:to:cc:subject:message-id:references\n\t:mime-version:content-disposition:content-transfer-encoding\n\t:in-reply-to;\n\tbh=mXijWZP2JCTUv0fGwkBvJ9uQMqx7nrkV+dvFmLiDNUA=;\n\tb=Z7lnO529YUrYSX7KYw50mlvKvRr/WoUxoqbjMH2Gp0WxYKW81XNRDy99ir6JuSNQwu\n\tpwsZxCs3wDCNJpOqahrs6mMDxh1BlKTBDTPCQLZx3d+SJCQxY6SsvBbjvQpKB6KO3YCI\n\tt7OHmVXB0WxPOnUozCBfye8wFB94mz+R9HIbc0mGZko2OENrfWOvMqKjjNM197FSaHUJ\n\ttPNPb0RkRPw1BXcQ6ovr+WZi72QCicXErMcUDOKBATgjwb2DprRQhC4oxemqlxdQU/eC\n\tyErzSz+bNDhXvdr601CostaFN1mZnb4ngU8g8Wzm86aMw0+hTd4f8He/FutfId0Minng\n\tDaug==","X-Gm-Message-State":"AOAM532h9x1hzTvBDP6GX59NKilpHB1uklZHRbio+P9TRjXJBcpL1p7h\n\tRugITxruxrzMiYKH6y/vTtac+Q==","X-Google-Smtp-Source":"ABdhPJwRvw+wvn0vh5fubmWRwLjYXZEIrCRwp1qWvhLVqYjw0WGZhb8lVK/yHDs6OEG1esu1yF7Zeg==","X-Received":"by 2002:a2e:3010:: with SMTP id w16mr5568441ljw.8.1591386825184; \n\tFri, 05 Jun 2020 12:53:45 -0700 (PDT)","Date":"Fri, 5 Jun 2020 21:53:44 +0200","From":"Niklas =?iso-8859-1?q?S=F6derlund?= <niklas.soderlund@ragnatech.se>","To":"Jacopo Mondi <jacopo@jmondi.org>","Cc":"libcamera-devel@lists.libcamera.org, figa@chromium.org,\n\tricardo.ribalda@gmail.com, 
sakari.ailus@iki.fi","Message-ID":"<20200605195344.GN5864@oden.dyn.berto.se>","References":"<20200604153122.18074-1-jacopo@jmondi.org>","MIME-Version":"1.0","Content-Type":"text/plain; charset=iso-8859-1","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<20200604153122.18074-1-jacopo@jmondi.org>","Subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Fri, 05 Jun 2020 19:53:46 -0000"}},{"id":5123,"web_url":"https://patchwork.libcamera.org/comment/5123/","msgid":"<CAPybu_0t8Ay6t4HV-hTz-oX0XKVTGE7Wufo4TSPFgR7dQevGhw@mail.gmail.com>","date":"2020-06-08T08:25:08","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":53,"url":"https://patchwork.libcamera.org/api/people/53/","name":"Ricardo Ribalda Delgado","email":"ricardo.ribalda@gmail.com"},"content":"Hi Jacopo\n\nOn Fri, Jun 5, 2020 at 9:53 PM Niklas Söderlund\n<niklas.soderlund@ragnatech.se> wrote:\n>\n> Hi Jacopo,\n>\n> Nice work!\n\nGood work indeed :)\n>\n> On 2020-06-04 17:31:22 +0200, Jacopo Mondi wrote:\n> > Add definition of pixel array related properties.\n> >\n> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n>\n> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>\nReviewed-by: 
Ricardo Ribalda <ricardo@ribalda.com>\n>\n> > ---\n> >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> >  1 file changed, 263 insertions(+)\n> >\n> > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> > index ce627fa042ba..762d60881568 100644\n> > --- a/src/libcamera/property_ids.yaml\n> > +++ b/src/libcamera/property_ids.yaml\n> > @@ -386,4 +386,267 @@ controls:\n> >                                |                    |\n> >                                |                    |\n> >                                +--------------------+\n> > +\n> > +  - UnitCellSize:\n> > +      type: Size\n> > +      description: |\n> > +        The pixel unit cell physical size, in nanometers.\n> > +\n> > +        The UnitCellSize properties defines the horizontal and vertical sizes\n> > +        of a single pixel unit, including its active and non-active parts.\n> > +\n> > +        The property can be used to calculate the physical size of the sensor's\n> > +        pixel array area and for calibration purposes.\n> > +\n> > +  - PixelArrayPhysicalSize:\n> > +      type: Size\n> > +      description: |\n> > +        The camera sensor full pixel array size, in nanometers.\n> > +\n> > +        The PixelArrayPhysicalSize property reports the physical dimensions\n> > +        (width x height) of the full pixel array matrix, including readable\n> > +        and non-readable pixels.\n> > +\n> > +        \\todo Rename this property to PhysicalSize once we will have property\n> > +              categories (i.e. 
Properties::PixelArray::PhysicalSize)\n> > +\n> > +  - PixelArraySize:\n> > +      type: Size\n> > +      description: |\n> > +        The camera sensor pixel array readable area vertical and horizontal\n> > +        sizes, in pixels.\n> > +\n> > +        The PixelArraySize property defines the size in pixel units of the\n> > +        readable part of full pixel array matrix, including optically black\n> > +        pixels used for calibration, pixels which are not considered valid for\n> > +        capture and active pixels valid for image capture.\n> > +\n> > +        The property describes a rectangle whose top-left corner is placed\n> > +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> > +        by the Size element transported by the property.\n> > +\n> > +        The property describes the maximum size of the raw data produced by\n> > +        the sensor, which might not correspond to the physical size of the\n> > +        sensor pixel array matrix, as some portions of the physical pixel\n> > +        array matrix are not accessible and cannot be transmitted out.\n> > +\n> > +        For example, a pixel array matrix assembled as follow\n> > +\n> > +             +--------------------------------------------------+\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             ...          ...           ...      ...          ...\n> > +\n> > +             ...          ...           ...      ...          
...\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             +--------------------------------------------------+\n> > +\n> > +        composed by two lines of non-readable pixels (x) followed by N lines of\n> > +        readable data (D) surrounded by two columns of non-readable pixels on\n> > +        each side, only the readable portion is transmitted to the receiving\n> > +        side, defining the sizes of the largest possible buffer of raw data\n> > +        that can be presented to applications.\n> > +\n> > +                               PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +               +----------------------------------------------+ /\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> > +               ...        ...           ...      ...        ...\n> > +               ...        ...           ...      ...        
...\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        All other rectangles that describes portions of the pixel array, such as\n> > +        the optical black pixels rectangles and active pixel areas are defined\n> > +        relatively to the rectangle described by this property.\n> > +\n> > +        \\todo Rename this property to Size once we will have property\n> > +              categories (i.e. Properties::PixelArray::Size)\n> > +\n> > +  - PixelArrayOpticalBlackRectangles:\n> > +      type: Rectangle\n> > +      size: [1 x n]\n> > +      description: |\n> > +        The raw data buffer regions which contains optical black pixels\n> > +        considered valid for calibration purposes.\n> > +\n> > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)\n> > +        rectangular areas of the raw data buffer, where optical black pixels are\n> > +        located and could be accessed for calibration and black level\n> > +        correction.\n> > +\n> > +        This property describes the position and size of optically black pixel\n> > +        rectangles relatively to their position in the raw data buffer as stored\n> > +        in memory, which might differ from their actual physical location in the\n> > +        pixel array matrix.\n> > +\n> > +        It is important to note, in facts, that camera sensor might\n> > +        automatically re-order, shuffle or skip portions of their pixels array\n> > +        matrix when transmitting data to the receiver.\n> > +\n> > +        The pixel array contains several areas with different purposes,\n> > +        interleaved by lines and columns which are said not to be valid for\n> > +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> > +        they could be positioned too close to the chip margins or to the optical\n> > +        blank shielding placed on top of optical black pixels.\n> > +\n> > +                                PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +                  x1                                       x2\n> > +               +--o---------------------------------------o---+ /\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               ...          ...           ...     ...       ...\n> > +               ...          ...           ...     ...       
...\n> > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        The readable pixel array matrix is composed by\n> > +        2 invalid lines (I)\n> > +        4 lines of valid optically black pixels (O)\n> > +        2 invalid lines (I)\n> > +        n lines of valid pixel data (P)\n> > +        2 invalid lines (I)\n> > +\n> > +        And the position of the optical black pixel rectangles is defined by\n> > +\n> > +            PixelArrayOpticalBlackRectangles = {\n> > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n> > +               { x1, y3, 2, y4 - y3 + 1 },\n> > +               { x2, y3, 2, y4 - y3 + 1 },\n> > +            };\n> > +\n> > +        If the sensor, when required to output the full pixel array matrix,\n> > +        automatically skip the invalid lines and columns, producing the\n> > +        following data buffer, when captured to memory\n> > +\n> > +                                    PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +                                                           x1\n> > +               +--------------------------------------------o-+ /\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> > +               ...       ...          ...       ...         ... 
|\n> > +               ...       ...          ...       ...         ... |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        The invalid lines and columns should not be reported as part of the\n> > +        PixelArraySize property in first place.\n> > +\n> > +        In this case, the position of the black pixel rectangles will be\n> > +\n> > +            PixelArrayOpticalBlackRectangles = {\n> > +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > +            };\n> > +\n> > +        \\todo Rename this property to Size once we will have property\n> > +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)\n> > +\n> > +  - PixelArrayActiveAreas:\n> > +      type: Rectangle\n> > +      size: [1 x n]\n> > +      description: |\n> > +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> > +        overlapping) portions of the camera sensor readable pixel matrix\n> > +        which are considered valid for image acquisition purposes.\n> > +\n> > +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> > +        with its top-left corner defined by its horizontal and vertical\n> > +        distances from the PixelArraySize rectangle top-left corner, placed in\n> > +        position (0, 0).\n> > +\n> > +        This property describes an arbitrary number of overlapping rectangles,\n> > +        with each rectangle representing the maximum image size that the camera\n> > +        sensor can produce for a particular aspect ratio.\n> > +\n> > +        When multiple rectangles are reported, they shall be ordered from the\n> > +        tallest to the shortest.\n> > +\n> > +        Example 
1\n> > +        A camera sensor which only produces images in the 4:3 image resolution\n> > +        will report a single PixelArrayActiveAreas rectangle, from which all\n> > +        other image formats are obtained by either cropping the field-of-view\n> > +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> > +        binning.\n> > +\n> > +                     PixelArraySize[0]\n> > +                    /----------------/\n> > +                      x1          x2\n> > +            (0,0)-> +-o------------o-+  /\n> > +                 y1 o +------------+ |  |\n> > +                    | |////////////| |  |\n> > +                    | |////////////| |  | PixelArraySize[1]\n> > +                    | |////////////| |  |\n> > +                 y2 o +------------+ |  |\n> > +                    +----------------+  /\n> > +\n> > +        The property reports a single rectangle\n> > +\n> > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> > +\n> > +        Example 2\n> > +        A camera sensor which can produce images in different native\n> > +        resolutions will report several overlapping rectangles, one for each\n> > +        natively supported resolution.\n> > +\n> > +                     PixelArraySize[0]\n> > +                    /------------------/\n> > +                      x1  x2    x3  x4\n> > +            (0,0)-> +o---o------o---o+  /\n> > +                 y1 o    +------+    |  |\n> > +                    |    |//////|    |  |\n> > +                 y2 o+---+------+---+|  |\n> > +                    ||///|//////|///||  | PixelArraySize[1]\n> > +                 y3 o+---+------+---+|  |\n> > +                    |    |//////|    |  |\n> > +                 y4 o    +------+    |  |\n> > +                    +----+------+----+  /\n> > +\n> > +        The property reports two rectangles\n> > +\n> > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> > +            
                             (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> > +\n> > +        The first rectangle describes the maximum field-of-view of all image\n> > +        formats in the 4:3 resolutions, while the second one describes the\n> > +        maximum field of view for all image formats in the 16:9 resolutions.\n> > +\n> > +        \\todo Rename this property to ActiveAreas once we will have property\n> > +              categories (i.e. Properties::PixelArray::ActiveAreas)\n> > +\n> >  ...\n>\n> --\n> Regards,\n> Niklas Söderlund","headers":{"From":"Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>","Date":"Mon, 8 Jun 2020 10:25:08 +0200","Message-ID":"<CAPybu_0t8Ay6t4HV-hTz-oX0XKVTGE7Wufo4TSPFgR7dQevGhw@mail.gmail.com>",
"References":"<20200604153122.18074-1-jacopo@jmondi.org>\n\t<20200605195344.GN5864@oden.dyn.berto.se>","In-Reply-To":"<20200605195344.GN5864@oden.dyn.berto.se>","To":"Niklas Söderlund <niklas.soderlund@ragnatech.se>","Cc":"Jacopo Mondi <jacopo@jmondi.org>, libcamera-devel@lists.libcamera.org,\n\tfiga@chromium.org, Sakari Ailus <sakari.ailus@iki.fi>","Subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties"}},{"id":5127,"web_url":"https://patchwork.libcamera.org/comment/5127/","msgid":"<20200608102604.GO9947@valkosipuli.retiisi.org.uk>","date":"2020-06-08T10:26:04","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":32,"url":"https://patchwork.libcamera.org/api/people/32/","name":"Sakari Ailus","email":"sakari.ailus@iki.fi"},
"content":"Hi Jacopo,\n\nThank you for the patch.\n\nOn Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> Add definition of pixel array related properties.\n> \n> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> ---\n>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n>  1 file changed, 263 insertions(+)\n> \n> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> index ce627fa042ba..762d60881568 100644\n> --- a/src/libcamera/property_ids.yaml\n> +++ 
b/src/libcamera/property_ids.yaml\n> @@ -386,4 +386,267 @@ controls:\n>                                |                    |\n>                                |                    |\n>                                +--------------------+\n> +\n> +  - UnitCellSize:\n> +      type: Size\n> +      description: |\n> +        The pixel unit cell physical size, in nanometers.\n> +\n> +        The UnitCellSize properties defines the horizontal and vertical sizes\n> +        of a single pixel unit, including its active and non-active parts.\n> +\n> +        The property can be used to calculate the physical size of the sensor's\n> +        pixel array area and for calibration purposes.\n\nDo we need this? Could it not be calculated from PixelArrayPhysicalSize and\nPixelArraySize?\n\n> +\n> +  - PixelArrayPhysicalSize:\n> +      type: Size\n> +      description: |\n> +        The camera sensor full pixel array size, in nanometers.\n> +\n> +        The PixelArrayPhysicalSize property reports the physical dimensions\n> +        (width x height) of the full pixel array matrix, including readable\n> +        and non-readable pixels.\n> +\n> +        \\todo Rename this property to PhysicalSize once we will have property\n> +              categories (i.e. 
Properties::PixelArray::PhysicalSize)\n> +\n> +  - PixelArraySize:\n> +      type: Size\n> +      description: |\n> +        The camera sensor pixel array readable area vertical and horizontal\n> +        sizes, in pixels.\n> +\n> +        The PixelArraySize property defines the size in pixel units of the\n> +        readable part of full pixel array matrix, including optically black\n> +        pixels used for calibration, pixels which are not considered valid for\n> +        capture and active pixels valid for image capture.\n> +\n> +        The property describes a rectangle whose top-left corner is placed\n> +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> +        by the Size element transported by the property.\n> +\n> +        The property describes the maximum size of the raw data produced by\n> +        the sensor, which might not correspond to the physical size of the\n> +        sensor pixel array matrix, as some portions of the physical pixel\n> +        array matrix are not accessible and cannot be transmitted out.\n> +\n> +        For example, a pixel array matrix assembled as follow\n> +\n> +             +--------------------------------------------------+\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             ...          ...           ...      ...          ...\n> +\n> +             ...          ...           ...      ...          
...\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> +             +--------------------------------------------------+\n> +\n> +        composed by two lines of non-readable pixels (x) followed by N lines of\n> +        readable data (D) surrounded by two columns of non-readable pixels on\n> +        each side, only the readable portion is transmitted to the receiving\n> +        side, defining the sizes of the largest possible buffer of raw data\n> +        that can be presented to applications.\n> +\n> +                               PixelArraySize[0]\n> +               /----------------------------------------------/\n> +               +----------------------------------------------+ /\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> +               ...        ...           ...      ...        ...\n> +               ...        ...           ...      ...        
...\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> +               +----------------------------------------------+ /\n> +\n> +        All other rectangles that describes portions of the pixel array, such as\n> +        the optical black pixels rectangles and active pixel areas are defined\n> +        relatively to the rectangle described by this property.\n> +\n> +        \\todo Rename this property to Size once we will have property\n> +              categories (i.e. Properties::PixelArray::Size)\n> +\n> +  - PixelArrayOpticalBlackRectangles:\n> +      type: Rectangle\n> +      size: [1 x n]\n> +      description: |\n> +        The raw data buffer regions which contains optical black pixels\n> +        considered valid for calibration purposes.\n> +\n\nHow does this interact with the rotation property?\n\nIf the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\ninvert the flipping controls. I presume the same would apply to e.g. dark\npixels in case they are read out, but that should be something for a driver\nto handle.\n\nBut if the frame layout isn't conveyed to the user space by the driver,\nthen we need another way to convey the sensor is actually rotated. 
Oh well.\n\nNot every sensor that is mounted upside down has flipping controls so the\nuser space will still somehow need to manage the rotation in that case.\n\n> +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)\n> +        rectangular areas of the raw data buffer, where optical black pixels are\n> +        located and could be accessed for calibration and black level\n> +        correction.\n> +\n> +        This property describes the position and size of optically black pixel\n> +        rectangles relatively to their position in the raw data buffer as stored\n> +        in memory, which might differ from their actual physical location in the\n> +        pixel array matrix.\n> +\n> +        It is important to note, in facts, that camera sensor might\n> +        automatically re-order, shuffle or skip portions of their pixels array\n> +        matrix when transmitting data to the receiver.\n> +\n> +        The pixel array contains several areas with different purposes,\n> +        interleaved by lines and columns which are said not to be valid for\n> +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> +        they could be positioned too close to the chip margins or to the optical\n> +        blank shielding placed on top of optical black pixels.\n> +\n> +                                PixelArraySize[0]\n> +               /----------------------------------------------/\n> +                  x1                                       x2\n> +               +--o---------------------------------------o---+ /\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               ...          ...           ...     ...       ...\n> +               ...          ...           ...     ...       
...\n> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> +               +----------------------------------------------+ /\n> +\n> +        The readable pixel array matrix is composed by\n> +        2 invalid lines (I)\n> +        4 lines of valid optically black pixels (O)\n> +        2 invalid lines (I)\n> +        n lines of valid pixel data (P)\n> +        2 invalid lines (I)\n> +\n> +        And the position of the optical black pixel rectangles is defined by\n> +\n> +            PixelArrayOpticalBlackRectangles = {\n> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n\ns/\\+ \\K}/1 }/\n\n> +               { x1, y3, 2, y4 - y3 + 1 },\n> +               { x2, y3, 2, y4 - y3 + 1 },\n> +            };\n> +\n> +        If the sensor, when required to output the full pixel array matrix,\n> +        automatically skip the invalid lines and columns, producing the\n> +        following data buffer, when captured to memory\n> +\n> +                                    PixelArraySize[0]\n> +               /----------------------------------------------/\n> +                                                           x1\n> +               +--------------------------------------------o-+ /\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> +               ...       ...          ...       ...         ... |\n> +               ...       ...          ...       ...   
      ... |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> +               +----------------------------------------------+ /\n> +\n> +        The invalid lines and columns should not be reported as part of the\n> +        PixelArraySize property in first place.\n> +\n> +        In this case, the position of the black pixel rectangles will be\n> +\n> +            PixelArrayOpticalBlackRectangles = {\n> +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> +            };\n> +\n> +        \\todo Rename this property to Size once we will have property\n> +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)\n> +\n> +  - PixelArrayActiveAreas:\n> +      type: Rectangle\n> +      size: [1 x n]\n> +      description: |\n> +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> +        overlapping) portions of the camera sensor readable pixel matrix\n> +        which are considered valid for image acquisition purposes.\n> +\n> +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> +        with its top-left corner defined by its horizontal and vertical\n> +        distances from the PixelArraySize rectangle top-left corner, placed in\n> +        position (0, 0).\n> +\n> +        This property describes an arbitrary number of overlapping rectangles,\n> +        with each rectangle representing the maximum image size that the camera\n> +        sensor can produce for a particular aspect ratio.\n> +\n> +        When multiple rectangles are reported, they shall be ordered from the\n> +        tallest to the shortest.\n> +\n> +        Example 1\n> +        A camera sensor which only produces images in the 4:3 image resolution\n> +        will report a single PixelArrayActiveAreas 
rectangle, from which all\n> +        other image formats are obtained by either cropping the field-of-view\n> +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> +        binning.\n> +\n> +                     PixelArraySize[0]\n> +                    /----------------/\n> +                      x1          x2\n> +            (0,0)-> +-o------------o-+  /\n> +                 y1 o +------------+ |  |\n> +                    | |////////////| |  |\n> +                    | |////////////| |  | PixelArraySize[1]\n> +                    | |////////////| |  |\n> +                 y2 o +------------+ |  |\n> +                    +----------------+  /\n> +\n> +        The property reports a single rectangle\n> +\n> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> +\n> +        Example 2\n> +        A camera sensor which can produce images in different native\n> +        resolutions will report several overlapping rectangles, one for each\n> +        natively supported resolution.\n> +\n> +                     PixelArraySize[0]\n> +                    /------------------/\n> +                      x1  x2    x3  x4\n> +            (0,0)-> +o---o------o---o+  /\n> +                 y1 o    +------+    |  |\n> +                    |    |//////|    |  |\n> +                 y2 o+---+------+---+|  |\n> +                    ||///|//////|///||  | PixelArraySize[1]\n> +                 y3 o+---+------+---+|  |\n> +                    |    |//////|    |  |\n> +                 y4 o    +------+    |  |\n> +                    +----+------+----+  /\n> +\n> +        The property reports two rectangles\n> +\n> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> +\n> +        The first rectangle describes the maximum field-of-view of all image\n> +        formats in the 4:3 resolutions, while the second one describes 
the\n> +        maximum field of view for all image formats in the 16:9 resolutions.\n> +\n> +        \\todo Rename this property to ActiveAreas once we will have property\n> +              categories (i.e. Properties::PixelArray::ActiveAreas)\n> +\n>  ...","headers":{"Date":"Mon, 8 Jun 2020 13:26:04 +0300","From":"Sakari Ailus <sakari.ailus@iki.fi>","To":"Jacopo Mondi <jacopo@jmondi.org>","Cc":"libcamera-devel@lists.libcamera.org, ricardo.ribalda@gmail.com,\n\tfiga@chromium.org","Message-ID":"<20200608102604.GO9947@valkosipuli.retiisi.org.uk>","References":"<20200604153122.18074-1-jacopo@jmondi.org>","In-Reply-To":"<20200604153122.18074-1-jacopo@jmondi.org>","Subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties"}},
{"id":5128,"web_url":"https://patchwork.libcamera.org/comment/5128/","msgid":"<20200608105509.jhh73dvm6sm4lxpj@uno.localdomain>","date":"2020-06-08T10:55:09","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"Hi Sakari,\n\nOn Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:\n> Hi Jacopo,\n>\n> Thank you for the patch.\n\nThanks a lot for taking time to comment on this\n\n>\n> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> > Add definition of pixel array related properties.\n> >\n> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> > ---\n> >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> >  1 file changed, 263 insertions(+)\n> >\n> > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> > index ce627fa042ba..762d60881568 100644\n> > --- a/src/libcamera/property_ids.yaml\n> > +++ b/src/libcamera/property_ids.yaml\n> > @@ -386,4 +386,267 @@ controls:\n> >                                |                    |\n> >                                |                    |\n> >         
                       +--------------------+\n> > +\n> > +  - UnitCellSize:\n> > +      type: Size\n> > +      description: |\n> > +        The pixel unit cell physical size, in nanometers.\n> > +\n> > +        The UnitCellSize properties defines the horizontal and vertical sizes\n> > +        of a single pixel unit, including its active and non-active parts.\n> > +\n> > +        The property can be used to calculate the physical size of the sensor's\n> > +        pixel array area and for calibration purposes.\n>\n> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and\n> PixelArraySize?\n>\n\nNot really, as PixelArrayPhysicalSize reports the physical dimensions\nof the full pixel array (readable and non-readable pixels) while\nPixelArraySize reports the size in pixels of the largest readable\nimage. To sum it up: PixelArrayPhysicalSize reports the chip area,\nwhich covers more space than the readable PixelArraySize.\n\n> > +\n> > +  - PixelArrayPhysicalSize:\n> > +      type: Size\n> > +      description: |\n> > +        The camera sensor full pixel array size, in nanometers.\n> > +\n> > +        The PixelArrayPhysicalSize property reports the physical dimensions\n> > +        (width x height) of the full pixel array matrix, including readable\n> > +        and non-readable pixels.\n> > +\n> > +        \\todo Rename this property to PhysicalSize once we will have property\n> > +              categories (i.e. 
Properties::PixelArray::PhysicalSize)\n> > +\n> > +  - PixelArraySize:\n> > +      type: Size\n> > +      description: |\n> > +        The camera sensor pixel array readable area vertical and horizontal\n> > +        sizes, in pixels.\n> > +\n> > +        The PixelArraySize property defines the size in pixel units of the\n> > +        readable part of full pixel array matrix, including optically black\n> > +        pixels used for calibration, pixels which are not considered valid for\n> > +        capture and active pixels valid for image capture.\n> > +\n> > +        The property describes a rectangle whose top-left corner is placed\n> > +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> > +        by the Size element transported by the property.\n> > +\n> > +        The property describes the maximum size of the raw data produced by\n> > +        the sensor, which might not correspond to the physical size of the\n> > +        sensor pixel array matrix, as some portions of the physical pixel\n> > +        array matrix are not accessible and cannot be transmitted out.\n> > +\n> > +        For example, a pixel array matrix assembled as follow\n> > +\n> > +             +--------------------------------------------------+\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             ...          ...           ...      ...          ...\n> > +\n> > +             ...          ...           ...      ...          
...\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > +             +--------------------------------------------------+\n> > +\n> > +        composed by two lines of non-readable pixels (x) followed by N lines of\n> > +        readable data (D) surrounded by two columns of non-readable pixels on\n> > +        each side, only the readable portion is transmitted to the receiving\n> > +        side, defining the sizes of the largest possible buffer of raw data\n> > +        that can be presented to applications.\n> > +\n> > +                               PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +               +----------------------------------------------+ /\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> > +               ...        ...           ...      ...        ...\n> > +               ...        ...           ...      ...        
...\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        All other rectangles that describes portions of the pixel array, such as\n> > +        the optical black pixels rectangles and active pixel areas are defined\n> > +        relatively to the rectangle described by this property.\n> > +\n> > +        \\todo Rename this property to Size once we will have property\n> > +              categories (i.e. Properties::PixelArray::Size)\n> > +\n> > +  - PixelArrayOpticalBlackRectangles:\n> > +      type: Rectangle\n> > +      size: [1 x n]\n> > +      description: |\n> > +        The raw data buffer regions which contains optical black pixels\n> > +        considered valid for calibration purposes.\n> > +\n>\n> How does this interact with the rotation property?\n>\n> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\n> invert the flipping controls. I presume the same would apply to e.g. dark\n> pixels in case they are read out, but that should be something for a driver\n> to handle.\n\nRight. I think this also depends on how black pixels are read out. Here I\nassumed the sensor is capable of reading out the whole PixelArraySize area,\nand the receiver is able to capture its whole content in a single\nbuffer, where applications can then pick the areas of interest\nusing the information conveyed by this property. If this model holds,\nthen if sensor flipping is enabled, the information reported here is\nindeed misleading, unless the chip architecture is perfectly symmetric\nin the vertical and horizontal dimensions (which seems\nunlikely).\n\n>\n> But if the frame layout isn't conveyed to the user space by the driver,\n> then we need another way to convey the sensor is actually rotated. Oh well.\n\nNot sure how to interpret this part. 
Do you mean "convey that a
rotation is applied by, i.e., setting the canonical V/HFLIP controls,
which cause the sensor pixel array to be transmitted out with its
vertical/horizontal read-out directions inverted?"

We currently have a read-only property that reports the mounting
rotation (like the dt-property you have just reviewed :) I assume we
will have a rotation control that reports instead whether a V/HFLIP is
applied, so that applications know how to compensate for it.

>
> Not every sensor that is mounted upside down has flipping controls so the
> user space will still somehow need to manage the rotation in that case.
>

I think it should, and I think we'll provide all the information needed
to do so.

Thanks
  j

> > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)
> > +        rectangular areas of the raw data buffer, where optical black pixels are
> > +        located and could be accessed for calibration and black level
> > +        correction.
> > +
> > +        This property describes the position and size of optically black pixel
> > +        rectangles relatively to their position in the raw data buffer as stored
> > +        in memory, which might differ from their actual physical location in the
> > +        pixel array matrix.
> > +
> > +        It is important to note, in facts, that camera sensor might
> > +        automatically re-order, shuffle or skip portions of their pixels array
> > +        matrix when transmitting data to the receiver.
> > +
> > +        The pixel array contains several areas with different purposes,
> > +        interleaved by lines and columns which are said not to be valid for
> > +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> > +        they could be positioned too close to the chip margins or to the optical\n> > +        blank shielding placed on top of optical black pixels.\n> > +\n> > +                                PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +                  x1                                       x2\n> > +               +--o---------------------------------------o---+ /\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               ...          ...           ...     ...       ...\n> > +               ...          ...           ...     ...       
...\n> > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        The readable pixel array matrix is composed by\n> > +        2 invalid lines (I)\n> > +        4 lines of valid optically black pixels (O)\n> > +        2 invalid lines (I)\n> > +        n lines of valid pixel data (P)\n> > +        2 invalid lines (I)\n> > +\n> > +        And the position of the optical black pixel rectangles is defined by\n> > +\n> > +            PixelArrayOpticalBlackRectangles = {\n> > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n>\n> s/\\+ \\K}/1 }/\n>\n> > +               { x1, y3, 2, y4 - y3 + 1 },\n> > +               { x2, y3, 2, y4 - y3 + 1 },\n> > +            };\n> > +\n> > +        If the sensor, when required to output the full pixel array matrix,\n> > +        automatically skip the invalid lines and columns, producing the\n> > +        following data buffer, when captured to memory\n> > +\n> > +                                    PixelArraySize[0]\n> > +               /----------------------------------------------/\n> > +                                                           x1\n> > +               +--------------------------------------------o-+ /\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> > +               ...       ...          ...       
...         ... |\n> > +               ...       ...          ...       ...         ... |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > +               +----------------------------------------------+ /\n> > +\n> > +        The invalid lines and columns should not be reported as part of the\n> > +        PixelArraySize property in first place.\n> > +\n> > +        In this case, the position of the black pixel rectangles will be\n> > +\n> > +            PixelArrayOpticalBlackRectangles = {\n> > +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > +            };\n> > +\n> > +        \\todo Rename this property to Size once we will have property\n> > +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)\n> > +\n> > +  - PixelArrayActiveAreas:\n> > +      type: Rectangle\n> > +      size: [1 x n]\n> > +      description: |\n> > +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> > +        overlapping) portions of the camera sensor readable pixel matrix\n> > +        which are considered valid for image acquisition purposes.\n> > +\n> > +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> > +        with its top-left corner defined by its horizontal and vertical\n> > +        distances from the PixelArraySize rectangle top-left corner, placed in\n> > +        position (0, 0).\n> > +\n> > +        This property describes an arbitrary number of overlapping rectangles,\n> > +        with each rectangle representing the maximum image size that the camera\n> > +        sensor can produce for a particular aspect ratio.\n> > +\n> > +        When multiple rectangles are reported, they shall be ordered from the\n> > +        tallest to the shortest.\n> > +\n> > + 
       Example 1\n> > +        A camera sensor which only produces images in the 4:3 image resolution\n> > +        will report a single PixelArrayActiveAreas rectangle, from which all\n> > +        other image formats are obtained by either cropping the field-of-view\n> > +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> > +        binning.\n> > +\n> > +                     PixelArraySize[0]\n> > +                    /----------------/\n> > +                      x1          x2\n> > +            (0,0)-> +-o------------o-+  /\n> > +                 y1 o +------------+ |  |\n> > +                    | |////////////| |  |\n> > +                    | |////////////| |  | PixelArraySize[1]\n> > +                    | |////////////| |  |\n> > +                 y2 o +------------+ |  |\n> > +                    +----------------+  /\n> > +\n> > +        The property reports a single rectangle\n> > +\n> > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> > +\n> > +        Example 2\n> > +        A camera sensor which can produce images in different native\n> > +        resolutions will report several overlapping rectangles, one for each\n> > +        natively supported resolution.\n> > +\n> > +                     PixelArraySize[0]\n> > +                    /------------------/\n> > +                      x1  x2    x3  x4\n> > +            (0,0)-> +o---o------o---o+  /\n> > +                 y1 o    +------+    |  |\n> > +                    |    |//////|    |  |\n> > +                 y2 o+---+------+---+|  |\n> > +                    ||///|//////|///||  | PixelArraySize[1]\n> > +                 y3 o+---+------+---+|  |\n> > +                    |    |//////|    |  |\n> > +                 y4 o    +------+    |  |\n> > +                    +----+------+----+  /\n> > +\n> > +        The property reports two rectangles\n> > +\n> > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> 
> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > +
> > +        The first rectangle describes the maximum field-of-view of all image
> > +        formats in the 4:3 resolutions, while the second one describes the
> > +        maximum field of view for all image formats in the 16:9 resolutions.
> > +
> > +        \todo Rename this property to ActiveAreas once we will have property
> > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > +
> >  ...
>
> --
> Kind regards,
>
> Sakari Ailus

(From: Jacopo Mondi <jacopo@jmondi.org>, Mon, 8 Jun 2020 12:55:09 +0200)

From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Date: 2020-07-31
Subject: Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define pixel array properties

Hi Jacopo,

Thank you for the patch (and sorry for such a late reply).

First of all, I really like this, I think it's really good work.
Describing the pixel array size may seem like an easy task, but I know
how much variation we can see in camera sensors, and how much research
you've put into this. Coming up with the properties below that are both
flexible and clearly defined was not a simple task.

I have quite a few comments (you know me...), but nothing that requires
significant changes. Since you've posted this series David has submitted
a proposal for transformation (including rotation) support, which needs
to be taken into account as Sakari pointed out. 
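[Editor's aside: the PixelArrayActiveAreas examples quoted above can be made concrete with a few lines of plain Python. This is an illustration only; the `best_active_area` helper and the 2592x1944 / 2592x1458 rectangles are hypothetical and are not part of the patch or of libcamera's API.]

```python
# Sketch: pick the PixelArrayActiveAreas rectangle whose aspect ratio best
# matches a requested one. Rectangles are (x, y, width, height) tuples,
# ordered from the tallest to the shortest, as the property requires.
# The concrete pixel counts below are hypothetical, not from a real sensor.

def best_active_area(areas, target_ratio):
    """Return the active-area rectangle closest to target_ratio (w/h)."""
    return min(areas, key=lambda r: abs(r[2] / r[3] - target_ratio))

active_areas = [
    (0, 0, 2592, 1944),    # 4:3, full height, listed first (tallest)
    (0, 243, 2592, 1458),  # 16:9, vertically cropped
]

assert best_active_area(active_areas, 4 / 3) == (0, 0, 2592, 1944)
assert best_active_area(active_areas, 16 / 9) == (0, 243, 2592, 1458)
```
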
I've proposed in the\nreview below a way to do so (don't worry, it's not a complete rewrite\n:-)), please feel free to tell me if you disagree with some (or all) of\nthe proposals.\n\nOn Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:\n> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:\n> > On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> > > Add definition of pixel array related properties.\n> > >\n> > > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> > > ---\n> > >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> > >  1 file changed, 263 insertions(+)\n> > >\n> > > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> > > index ce627fa042ba..762d60881568 100644\n> > > --- a/src/libcamera/property_ids.yaml\n> > > +++ b/src/libcamera/property_ids.yaml\n> > > @@ -386,4 +386,267 @@ controls:\n> > >                                |                    |\n> > >                                |                    |\n> > >                                +--------------------+\n> > > +\n> > > +  - UnitCellSize:\n> > > +      type: Size\n> > > +      description: |\n> > > +        The pixel unit cell physical size, in nanometers.\n> > > +\n> > > +        The UnitCellSize properties defines the horizontal and vertical sizes\n> > > +        of a single pixel unit, including its active and non-active parts.\n\nWould it be useful to add \"In other words, it expresses the horizontal\nand vertical distance between the top-left corners of adjacent pixels.\"\n?\n\n> > > +\n> > > +        The property can be used to calculate the physical size of the sensor's\n> > > +        pixel array area and for calibration purposes.\n> >\n> > Do we need this? 
Could it not be calculated from PixelArrayPhysicalSize and
> > PixelArraySize?
> 
> Not really, as PixelArrayPhysicalSize reports the physical dimensions
> of the full pixel array (readable and non-readable pixels) while
> PixelArraySize reports the size in pixels of the largest readable
> image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> which covers more space than the readable PixelArraySize.
> 
> > > +
> > > +  - PixelArrayPhysicalSize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor full pixel array size, in nanometers.
> > > +
> > > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > > +        (width x height) of the full pixel array matrix, including readable
> > > +        and non-readable pixels.
> > > +
> > > +        \todo Rename this property to PhysicalSize once we will have property
> > > +              categories (i.e. Properties::PixelArray::PhysicalSize)

I understand Sakari's confusion regarding the UnitCellSize calculation
from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
answer, but I think we could actually drop the PixelArrayPhysicalSize
property. I don't see what it could be used for: it describes the
physical size of something that can't be accessed at all from the host,
and should thus not matter at all.

If we later find a need for this, we can always add it. 
At that point, I
also think it would be best to express the property in pixel units
(although depending on the use case that drives addition of this
property, I could very well be wrong).

> > > +
> > > +  - PixelArraySize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor pixel array readable area vertical and horizontal
> > > +        sizes, in pixels.
> > > +
> > > +        The PixelArraySize property defines the size in pixel units of the
> > > +        readable part of full pixel array matrix, including optically black

s/optically/optical/

I've also come across the term "optical dark pixels" which I think is a
bit more accurate, as the pixels are covered and thus dark, but not
fully black due to leakage currents (otherwise there would be no need
for black level correction). "optical black" is used much more often
though, so I think we should keep that term.

> > > +        pixels used for calibration, pixels which are not considered valid for
> > > +        capture and active pixels valid for image capture.

Maybe "and active pixels containing valid image data"?

> > > +
> > > +        The property describes a rectangle whose top-left corner is placed
> > > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > > +        by the Size element transported by the property.

I would move this paragraph below to [1].

> > > +
> > > +        The property describes the maximum size of the raw data produced by
> > > +        the sensor, which might not correspond to the physical size of the

Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
receiver could drop data too? 
For instance, the invalid lines between\nthe optical black and valid regions could be sent by the sensor, with a\nspecific data type, and dropped on the receiving side by a decision of\nthe pipeline handler that chooses to configure the CSI-2 receiver to\ndrop the corresponding data type.\n\n> > > +        sensor pixel array matrix, as some portions of the physical pixel\n> > > +        array matrix are not accessible and cannot be transmitted out.\n> > > +\n> > > +        For example, a pixel array matrix assembled as follow\n\ns/a pixel array/let's consider a pixel array/\ns/follow/follows/\n\n> > > +\n> > > +             +--------------------------------------------------+\n> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             ...          ...           ...      ...          ...\n> > > +\n> > > +             ...          ...           ...      ...          
...\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > +             +--------------------------------------------------+\n> > > +\n> > > +        composed by two lines of non-readable pixels (x) followed by N lines of\n\ns/composed by/starting with/\ns/(x)/(x),/\n\n> > > +        readable data (D) surrounded by two columns of non-readable pixels on\n> > > +        each side, only the readable portion is transmitted to the receiving\n\ns/each side,/each side, and ending with two more lines of non-readable pixels./\ns/only/Only/\n\n> > > +        side, defining the sizes of the largest possible buffer of raw data\n\ns/sizes/size/\n\n> > > +        that can be presented to applications.\n> > > +\n> > > +                               PixelArraySize[0]\n\nshould this be PixelArraySize.width ? Same for height instead of [1],\nand in other locations below.\n\n> > > +               /----------------------------------------------/\n> > > +               +----------------------------------------------+ /\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> > > +               ...        ...           ...      ...        ...\n> > > +               ...        ...           ...      ...        
...\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > +               +----------------------------------------------+ /\n> > > +\n\n[1], with a small modification:\n\n\"This defines a rectangle ... sizes are defined by this property.\"\n\nand merge it with the following paragraph in a single paragraph.\n\n> > > +        All other rectangles that describes portions of the pixel array, such as\n\ns/describes/describe/\n\n> > > +        the optical black pixels rectangles and active pixel areas are defined\n\ns/areas/areas,/\n\n> > > +        relatively to the rectangle described by this property.\n> > > +\n> > > +        \\todo Rename this property to Size once we will have property\n> > > +              categories (i.e. Properties::PixelArray::Size)\n> > > +\n> > > +  - PixelArrayOpticalBlackRectangles:\n> > > +      type: Rectangle\n> > > +      size: [1 x n]\n\nThis can just be [n], not all sizes need to be bi-dimensional.\n\n> > > +      description: |\n> > > +        The raw data buffer regions which contains optical black pixels\n\ns/raw data buffer/pixel array/ for the reason explained below in [2]\ns/regions/region(s)/ ?\ns/contains/contain/\n\n> > > +        considered valid for calibration purposes.\n> > > +\n> >\n> > How does this interact with the rotation property?\n> >\n> > If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\n> > invert the flipping controls. I presume the same would apply to e.g. dark\n> > pixels in case they are read out, but that should be something for a driver\n> > to handle.\n> \n> Right. I think this also depends how black pixels are read out. 
I here\n> assumed the sensor is capable of reading out the whole PixelArraySize area,\n> and the receiver is able to capture its whole content in a single\n> buffer, where application can then go and pick the areas of interest\n> using the information conveyed by this property. If this model holds,\n> then if sensor flipping is enabled, indeed the here reported\n> information are mis-leading, unless the chip architecture is\n> perfectly symmetric in vertical and horizontal dimensions (which seems\n> unlikely).\n> \n> > But if the frame layout isn't conveyed to the user space by the driver,\n> > then we need another way to convey the sensor is actually rotated. Oh well.\n> \n> Not sure how to interpret this part. Do you mean \"convey that a\n> rotation is applied by, ie, setting the canonical V\\HFLIP controls,\n> which cause the sensor pixel array to be transmitted out with its\n> vertical/horizontal read-out directions inverted ?\"\n> \n> We currently have a read-only property that reports the mounting\n> rotation (like the dt-property you have just reviewed :) I assume we\n> will have a rotation control that reports instead if a V/HFLIP is\n> applied, so that application know how to compensate that.\n> \n> > Not every sensor that is mounted upside down has flipping controls so the\n> > user space will still somehow need to manage the rotation in that case.\n> \n> I think it should, and I think we'll provide all information to be\n> able to do so.\n\nI think all the properties here should be expressed relative to the\ndefault sensor readout direction. Any transformation (such as h/v flip)\nwill need to be taken into account by the application if the use cases\nrequire it. That would be the easiest course of action, otherwise these\nproperties couldn't be static, as they would depend on the selected\ntransformation.\n\nDavid has submitted a patch series with a Transform control (actually\nspecified in CameraConfiguration, not as a control). 
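[Editor's aside: the point about keeping these properties static in the default readout direction can be sketched in a few lines of plain Python. The `transform_rect` helper and tuple-based rectangles below are hypothetical illustrations, not libcamera API: they show how an application could map a rectangle expressed relative to the default sensor readout into the coordinates it occupies once H/V flips are applied.]

```python
# Sketch (hypothetical helper, not libcamera API): map a rectangle
# expressed relative to the default sensor readout direction into the
# coordinates it occupies in the raw buffer once flips are applied.
# Rectangles are (x, y, width, height); array_size is (width, height),
# e.g. the value reported by PixelArraySize.

def transform_rect(rect, array_size, hflip=False, vflip=False):
    x, y, w, h = rect
    aw, ah = array_size
    if hflip:  # mirror around the vertical axis
        x = aw - x - w
    if vflip:  # mirror around the horizontal axis
        y = ah - y - h
    return (x, y, w, h)

# A top-edge optical black rectangle on a hypothetical 100x80 array moves
# to the bottom edge under a 180 degree rotation (hflip + vflip):
assert transform_rect((0, 0, 100, 4), (100, 80), hflip=True, vflip=True) \
    == (0, 76, 100, 4)
```
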
We will need to
document how the properties in this patch, the Rotation property, and
the Transform control, interact.

How about adding in the PixelArraySize documentation a paragraph that
states

"All the coordinates are expressed relative to the default sensor
readout direction, without any transformation (such as horizontal and
vertical flipping) applied. When mapping them to the raw pixel buffer,
applications shall take any configured transformation into account."

?

> > > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)

s/describes/property describes/ (it feels weird to have a plural name
with a singular verb).

> > > +        rectangular areas of the raw data buffer, where optical black pixels are

s/buffer,/buffer/

> > > +        located and could be accessed for calibration and black level
> > > +        correction.

Maybe just "... of the raw data buffers that contain optical black
pixels."? That duplicates the previous paragraph a bit, maybe we could
just drop it.

> > > +
> > > +        This property describes the position and size of optically black pixel

s/optically/optical/

> > > +        rectangles relatively to their position in the raw data buffer as stored

s/rectangles/regions/
s/relatively to their position in the raw data buffer/in the raw data buffer/

> > > +        in memory, which might differ from their actual physical location in the
> > > +        pixel array matrix.
> > > +
> > > +        It is important to note, in facts, that camera sensor might

s/facts/fact/
s/sensor/sensors/

> > > +        automatically re-order, shuffle or skip portions of their pixels array

s/re-order/reorder/

I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
sensors won't shuffle pixels randomly :-)

> > > +        matrix when transmitting data to the receiver.

Should we add an example?

"For instance, a sensor may merge the top and bottom 
optical black\nrectangles into a single rectangle, transmitted at the beginning of the\nframe.\"\n\n> > > +\n> > > +        The pixel array contains several areas with different purposes,\n> > > +        interleaved by lines and columns which are said not to be valid for\n> > > +        capturing purposes. Invalid lines and columns are defined as invalid as\n> > > +        they could be positioned too close to the chip margins or to the optical\n> > > +        blank shielding placed on top of optical black pixels.\n\ns/blank/black/\n\n> > > +\n> > > +                                PixelArraySize[0]\n> > > +               /----------------------------------------------/\n> > > +                  x1                                       x2\n> > > +               +--o---------------------------------------o---+ /\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > +               ...          ...           ...     ...       ...\n> > > +               ...          ...           ...     ...       
...\n> > > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > +               +----------------------------------------------+ /\n> > > +\n> > > +        The readable pixel array matrix is composed by\n> > > +        2 invalid lines (I)\n> > > +        4 lines of valid optically black pixels (O)\n\ns/optically/optical/\n\n> > > +        2 invalid lines (I)\n> > > +        n lines of valid pixel data (P)\n> > > +        2 invalid lines (I)\n> > > +\n> > > +        And the position of the optical black pixel rectangles is defined by\n> > > +\n> > > +            PixelArrayOpticalBlackRectangles = {\n> > > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n> >\n> > s/\\+ \\K}/1 }/\n\n:perldo s/\\+ \\K}/1 }/\n\n;-)\n\n> > > +               { x1, y3, 2, y4 - y3 + 1 },\n> > > +               { x2, y3, 2, y4 - y3 + 1 },\n> > > +            };\n> > > +\n> > > +        If the sensor, when required to output the full pixel array matrix,\n> > > +        automatically skip the invalid lines and columns, producing the\n\ns/skip/skips/\n\n> > > +        following data buffer, when captured to memory\n\nFor the same reason as above, this could become\n\n        If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory\n\n> > > +\n> > > +                                    PixelArraySize[0]\n> > > +               /----------------------------------------------/\n> > > +                                                           x1\n> > > +               +--------------------------------------------o-+ /\n> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > +               
|OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> > > +               ...       ...          ...       ...         ... |\n> > > +               ...       ...          ...       ...         ... |\n> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > +               +----------------------------------------------+ /\n> > > +\n> > > +        The invalid lines and columns should not be reported as part of the\n\ns/The/then the/\n\n> > > +        PixelArraySize property in first place.\n> > > +\n> > > +        In this case, the position of the black pixel rectangles will be\n> > > +\n> > > +            PixelArrayOpticalBlackRectangles = {\n> > > +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> > > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > > +            };\n\nI think part of this should be moved to the PixelArraySize property, as\nit defines what the PixelArraySize property should report. Let's merge\nthis patch first though, and improvements can go on top (unless you\nwould prefer addressing this first :-)).\n\n> > > +\n> > > +        \\todo Rename this property to Size once we will have property\n> > > +              categories (i.e. 
Properties::PixelArray::OpticalBlackRectangles)\n> > > +\n> > > +  - PixelArrayActiveAreas:\n> > > +      type: Rectangle\n> > > +      size: [1 x n]\n\n[n] here too.\n\n> > > +      description: |\n> > > +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> > > +        overlapping) portions of the camera sensor readable pixel matrix\n> > > +        which are considered valid for image acquisition purposes.\n> > > +\n> > > +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> > > +        with its top-left corner defined by its horizontal and vertical\n> > > +        distances from the PixelArraySize rectangle top-left corner, placed in\n> > > +        position (0, 0).\n\nI think you can drop the last three lines, they're a bit confusing.\n\n> > > +\n> > > +        This property describes an arbitrary number of overlapping rectangles,\n> > > +        with each rectangle representing the maximum image size that the camera\n> > > +        sensor can produce for a particular aspect ratio.\n\nMaybe even moving the previous paragraph here as \"They are defined\nrelatively to the PixelArraySize rectangle.\" ?\n\n> > > +\n> > > +        When multiple rectangles are reported, they shall be ordered from the\n> > > +        tallest to the shortest.\n> > > +\n> > > +        Example 1\n> > > +        A camera sensor which only produces images in the 4:3 image resolution\n> > > +        will report a single PixelArrayActiveAreas rectangle, from which all\n> > > +        other image formats are obtained by either cropping the field-of-view\n> > > +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> > > +        binning.\n> > > +\n> > > +                     PixelArraySize[0]\n> > > +                    /----------------/\n> > > +                      x1          x2\n> > > +            (0,0)-> +-o------------o-+  /\n> > > +                 y1 o +------------+ |  |\n> > > +                    | 
|////////////| |  |\n> > > +                    | |////////////| |  | PixelArraySize[1]\n> > > +                    | |////////////| |  |\n> > > +                 y2 o +------------+ |  |\n> > > +                    +----------------+  /\n> > > +\n> > > +        The property reports a single rectangle\n> > > +\n> > > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> > > +\n> > > +        Example 2\n> > > +        A camera sensor which can produce images in different native\n> > > +        resolutions will report several overlapping rectangles, one for each\n> > > +        natively supported resolution.\n> > > +\n> > > +                     PixelArraySize[0]\n> > > +                    /------------------/\n> > > +                      x1  x2    x3  x4\n> > > +            (0,0)-> +o---o------o---o+  /\n> > > +                 y1 o    +------+    |  |\n> > > +                    |    |//////|    |  |\n> > > +                 y2 o+---+------+---+|  |\n> > > +                    ||///|//////|///||  | PixelArraySize[1]\n> > > +                 y3 o+---+------+---+|  |\n> > > +                    |    |//////|    |  |\n> > > +                 y4 o    +------+    |  |\n> > > +                    +----+------+----+  /\n> > > +\n> > > +        The property reports two rectangles\n> > > +\n> > > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> > > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> > > +\n> > > +        The first rectangle describes the maximum field-of-view of all image\n> > > +        formats in the 4:3 resolutions, while the second one describes the\n> > > +        maximum field of view for all image formats in the 16:9 resolutions.\n\nShould we add one paragraph here as a clarification:\n\nMultiple rectangles shall only be reported when the sensor can't capture\nthe pixels in the corner regions. 
If all the pixels in the (x1,y1) -\n(x4,y4) area can be captured, the PixelArrayActiveAreas property shall\ncontain the single rectangle (x1,y1) - (x4,y4).\n\nOr do you think it's overkill ?\n\nReviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n\n> > > +\n> > > +        \\todo Rename this property to ActiveAreas once we will have property\n> > > +              categories (i.e. Properties::PixelArray::ActiveAreas)\n> > > +\n> > >  ..."},{"id":11741,"web_url":"https://patchwork.libcamera.org/comment/11741/","msgid":"<20200731074648.jzmvmhoupt22dqb6@uno.localdomain>","date":"2020-07-31T07:46:48","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"Hi Laurent,\n   thanks for review\n\nOn Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:\n> Hi Jacopo,\n>\n> Thank you for the patch (and sorry for such a late reply)\n>\n> First of all, I really like this, I think it's really good work.\n> Describing the pixel array size may seem like an easy task, but I know\n> how much variation we can see in camera sensors, and how much research\n> you've put into this. Coming up with the properties below that are both\n> flexible and clearly defined was not a simple task.\n>\n> I have quite a few comments (you know me...), but nothing that requires\n> significant changes. Since you've posted this series David has submitted\n> a proposal for transformation (including rotation) support, which needs\n> to be taken into account as Sakari pointed out. 
I've proposed in the\n> review below a way to do so (don't worry, it's not a complete rewrite\n> :-)), please feel free to tell me if you disagree with some (or all) of\n> the proposals.\n>\n> On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:\n> > On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:\n> > > On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> > > > Add definition of pixel array related properties.\n> > > >\n> > > > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> > > > ---\n> > > >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> > > >  1 file changed, 263 insertions(+)\n> > > >\n> > > > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> > > > index ce627fa042ba..762d60881568 100644\n> > > > --- a/src/libcamera/property_ids.yaml\n> > > > +++ b/src/libcamera/property_ids.yaml\n> > > > @@ -386,4 +386,267 @@ controls:\n> > > >                                |                    |\n> > > >                                |                    |\n> > > >                                +--------------------+\n> > > > +\n> > > > +  - UnitCellSize:\n> > > > +      type: Size\n> > > > +      description: |\n> > > > +        The pixel unit cell physical size, in nanometers.\n> > > > +\n> > > > +        The UnitCellSize properties defines the horizontal and vertical sizes\n> > > > +        of a single pixel unit, including its active and non-active parts.\n>\n> Would it be useful to add \"In other words, it expresses the horizontal\n> and vertical distance between the top-left corners of adjacent pixels.\"\n> ?\n>\n\nIt won't hurt, I can add it...\n\n> > > > +\n> > > > +        The property can be used to calculate the physical size of the sensor's\n> > > > +        pixel array area and for calibration purposes.\n> > >\n> > > Do we need this? 
Could it not be calculated from PixelArrayPhysicalSize and\n> > > PixelArraySize?\n> >\n> > Not really, as PixelArrayPhysicalSize reports the physical dimension\n> > of the full pixel array (readable and non-readable pixels) while\n> > PixelArraySize reports the size in pixels of the largest readable\n> > image. To sum it up: PixelArrayPhysicalSize reports the chip area,\n> > which covers more space than the readable PixelArraySize.\n> >\n> > > > +\n> > > > +  - PixelArrayPhysicalSize:\n> > > > +      type: Size\n> > > > +      description: |\n> > > > +        The camera sensor full pixel array size, in nanometers.\n> > > > +\n> > > > +        The PixelArrayPhysicalSize property reports the physical dimensions\n> > > > +        (width x height) of the full pixel array matrix, including readable\n> > > > +        and non-readable pixels.\n> > > > +\n> > > > +        \\todo Rename this property to PhysicalSize once we will have property\n> > > > +              categories (i.e. Properties::PixelArray::PhysicalSize)\n>\n> I understand Sakari's confusion regarding the UnitCellSize calculation\n> from PixelArrayPhysicalSize and PixelArraySize. I also agree with your\n> answer, but I think we could actually drop the PixelArrayPhysicalSize\n> property. I don't see what it could be used for, it describes the\n> physical size of something that can't be accessed at all from the host,\n> and should thus not matter at all.\n>\n> If we later find a need for this, we can always add it. 
At that point, I\n> also think it would be best to express the property in pixel units\n> (although depending on the use case that drives addition of this\n> property, I could very well be wrong).\n>\n\nAck, I'll drop\n\n> > > > +\n> > > > +  - PixelArraySize:\n> > > > +      type: Size\n> > > > +      description: |\n> > > > +        The camera sensor pixel array readable area vertical and horizontal\n> > > > +        sizes, in pixels.\n> > > > +\n> > > > +        The PixelArraySize property defines the size in pixel units of the\n> > > > +        readable part of full pixel array matrix, including optically black\n>\n> s/optically/optical/\n>\n> I've also come across the term \"optical dark pixels\" which I think is a\n> bit more accurate, as the pixels are covered and thus dark, but not\n> fully black due to leakage currents (otherwise there would be no need\n> for black level correction). \"optical black\" is used much more often\n> though, so I think we should keep that term.\n>\n> > > > +        pixels used for calibration, pixels which are not considered valid for\n> > > > +        capture and active pixels valid for image capture.\n>\n> Maybe \"and active pixels containing valid image data\" ?\n>\n\nI can change this\n\n> > > > +\n> > > > +        The property describes a rectangle whose top-left corner is placed\n> > > > +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> > > > +        by the Size element transported by the property.\n>\n> I would move this paragraph below to [1].\n>\n> > > > +\n> > > > +        The property describes the maximum size of the raw data produced by\n> > > > +        the sensor, which might not correspond to the physical size of the\n>\n> Maybe s/produced by the sensor/captured by the camera/, as the CSI-2\n> receiver could drop data too ? 
For instance, the invalid lines between\n> the optical black and valid regions could be sent by the sensor, with a\n> specific data type, and dropped on the receiving side by a decision of\n> the pipeline handler that chooses to configure the CSI-2 receiver to\n> drop the corresponding data type.\n>\n\nOk, we're really picking details here, but yes, I can change this. However I\nwonder if that should not actually describe what the sensor puts on\nthe bus. How the receiver filters it, it's a separate issue.\n\n> > > > +        sensor pixel array matrix, as some portions of the physical pixel\n> > > > +        array matrix are not accessible and cannot be transmitted out.\n> > > > +\n> > > > +        For example, a pixel array matrix assembled as follow\n>\n> s/a pixel array/let's consider a pixel array/\n> s/follow/follows/\n>\n> > > > +\n> > > > +             +--------------------------------------------------+\n> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             ...          ...           ...      ...          ...\n> > > > +\n> > > > +             ...          ...           ...      ...          
...\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > > > +             +--------------------------------------------------+\n> > > > +\n> > > > +        composed by two lines of non-readable pixels (x) followed by N lines of\n>\n> s/composed by/starting with/\n> s/(x)/(x),/\n>\n> > > > +        readable data (D) surrounded by two columns of non-readable pixels on\n> > > > +        each side, only the readable portion is transmitted to the receiving\n>\n> s/each side,/each side, and ending with two more lines of non-readable pixels./\n> s/only/Only/\n>\n> > > > +        side, defining the sizes of the largest possible buffer of raw data\n>\n> s/sizes/size/\n>\n> > > > +        that can be presented to applications.\n> > > > +\n> > > > +                               PixelArraySize[0]\n>\n> should this be PixelArraySize.width ? Same for height instead of [1],\n> and in other locations below.\n>\n\nack\n\n> > > > +               /----------------------------------------------/\n> > > > +               +----------------------------------------------+ /\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> > > > +               ...        ...           ...      ...        ...\n> > > > +               ...        ...           ...      ...        
...\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > > > +               +----------------------------------------------+ /\n> > > > +\n>\n> [1], with a small modification:\n>\n> \"This defines a rectangle ... sizes are defined by this property.\"\n>\n> and merge it with the following paragraph in a single paragraph.\n>\n> > > > +        All other rectangles that describes portions of the pixel array, such as\n>\n> s/describes/describe/\n>\n> > > > +        the optical black pixels rectangles and active pixel areas are defined\n>\n> s/areas/areas,/\n>\n> > > > +        relatively to the rectangle described by this property.\n> > > > +\n> > > > +        \\todo Rename this property to Size once we will have property\n> > > > +              categories (i.e. Properties::PixelArray::Size)\n> > > > +\n> > > > +  - PixelArrayOpticalBlackRectangles:\n> > > > +      type: Rectangle\n> > > > +      size: [1 x n]\n>\n> This can just be [n], not all sizes need to be bi-dimensional.\n>\n> > > > +      description: |\n> > > > +        The raw data buffer regions which contains optical black pixels\n>\n> s/raw data buffer/pixel array/ for the reason explained below in [2]\n> s/regions/region(s)/ ?\n> s/contains/contain/\n>\n> > > > +        considered valid for calibration purposes.\n> > > > +\n> > >\n> > > How does this interact with the rotation property?\n> > >\n> > > If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\n> > > invert the flipping controls. I presume the same would apply to e.g. dark\n> > > pixels in case they are read out, but that should be something for a driver\n> > > to handle.\n> >\n> > Right. I think this also depends how black pixels are read out. 
I here\n> > assumed the sensor is capable of reading out the whole PixelArraySize area,\n> > and the receiver is able to capture its whole content in a single\n> > buffer, where application can then go and pick the areas of interest\n> > using the information conveyed by this property. If this model holds,\n> > then if sensor flipping is enabled, indeed the here reported\n> > information are mis-leading, unless the chip architecture is\n> > perfectly symmetric in vertical and horizontal dimensions (which seems\n> > unlikely).\n> >\n> > > But if the frame layout isn't conveyed to the user space by the driver,\n> > > then we need another way to convey the sensor is actually rotated. Oh well.\n> >\n> > Not sure how to interpret this part. Do you mean \"convey that a\n> > rotation is applied by, ie, setting the canonical V\\HFLIP controls,\n> > which cause the sensor pixel array to be transmitted out with its\n> > vertical/horizontal read-out directions inverted ?\"\n> >\n> > We currently have a read-only property that reports the mounting\n> > rotation (like the dt-property you have just reviewed :) I assume we\n> > will have a rotation control that reports instead if a V/HFLIP is\n> > applied, so that application know how to compensate that.\n> >\n> > > Not every sensor that is mounted upside down has flipping controls so the\n> > > user space will still somehow need to manage the rotation in that case.\n> >\n> > I think it should, and I think we'll provide all information to be\n> > able to do so.\n>\n> I think all the properties here should be expressed relative to the\n> default sensor readout direction. Any transformation (such as h/v flip)\n> will need to be taken into account by the application if the use cases\n> require it. 
That would be the easiest course of action, otherwise these\n> properties couldn't be static, as they would depend on the selected\n> transformation.\n>\n> David has submitted a patch series with a Transform control (actually\n> specified in CameraConfiguration, not as a control). We will need to\n> document how the properties in this patch, the Rotation property, and\n> the Transform control, interact.\n>\n> How about adding in the PixelArraySize documentation a paragraph that\n> states\n>\n> \"All the coordinates are expressed relative to the default sensor\n> readout direction, without any transformation (such as horizontal and\n> vertical flipping) applied. When mapping them to the raw pixel buffer,\n> applications shall take any configured transformation into account.\"\n>\n> ?\n\nThat's the easiest way forward, sounds good to me\n\n>\n> > > > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)\n>\n> s/describes/property describes/ (it feels weird to have a plural name\n> with a singular verb).\n>\n> > > > +        rectangular areas of the raw data buffer, where optical black pixels are\n>\n> s/buffer,/buffer/\n>\n> > > > +        located and could be accessed for calibration and black level\n> > > > +        correction.\n>\n> Maybe just \"... of the raw data buffers that contain optical black\n> pixels.\" ? 
That duplicates the previous paragraph a bit, maybe we could\n> just drop it.\n>\n> > > > +\n> > > > +        This property describes the position and size of optically black pixel\n>\n> s/optically/optical/\n>\n> > > > +        rectangles relatively to their position in the raw data buffer as stored\n>\n> s/rectangles/regions/\n> s/relatively to their position in the raw data buffer/in the raw data buffer/\n>\n> > > > +        in memory, which might differ from their actual physical location in the\n> > > > +        pixel array matrix.\n> > > > +\n> > > > +        It is important to note, in facts, that camera sensor might\n>\n> s/facts/fact/\n> s/sensors/sensor/\n>\n> > > > +        automatically re-order, shuffle or skip portions of their pixels array\n>\n> s/re-order/reorder/\n>\n> I'd drop shuffle, I think \"reorder or skip\" is enough, hopefully the\n> sensors won't shuffle pixels randomly :-)\n>\n> > > > +        matrix when transmitting data to the receiver.\n>\n> Should we add an example ?\n>\n> \"For instance, a sensor may merge the top and bottom optical black\n> rectangles into a single rectangle, transmitted at the beginning of the\n> frame.\"\n>\n> > > > +\n> > > > +        The pixel array contains several areas with different purposes,\n> > > > +        interleaved by lines and columns which are said not to be valid for\n> > > > +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> > > > +        they could be positioned too close to the chip margins or to the optical\n> > > > +        blank shielding placed on top of optical black pixels.\n>\n> s/blank/black/\n>\n> > > > +\n> > > > +                                PixelArraySize[0]\n> > > > +               /----------------------------------------------/\n> > > > +                  x1                                       x2\n> > > > +               +--o---------------------------------------o---+ /\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> > > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > > +               ...          ...           ...     ...       ...\n> > > > +               ...          ...           ...     ...       
...\n> > > > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > > > +               +----------------------------------------------+ /\n> > > > +\n> > > > +        The readable pixel array matrix is composed by\n> > > > +        2 invalid lines (I)\n> > > > +        4 lines of valid optically black pixels (O)\n>\n> s/optically/optical/\n>\n> > > > +        2 invalid lines (I)\n> > > > +        n lines of valid pixel data (P)\n> > > > +        2 invalid lines (I)\n> > > > +\n> > > > +        And the position of the optical black pixel rectangles is defined by\n> > > > +\n> > > > +            PixelArrayOpticalBlackRectangles = {\n> > > > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n> > >\n> > > s/\\+ \\K}/1 }/\n>\n> :perldo s/\\+ \\K}/1 }/\n>\n> ;-)\n>\n> > > > +               { x1, y3, 2, y4 - y3 + 1 },\n> > > > +               { x2, y3, 2, y4 - y3 + 1 },\n> > > > +            };\n> > > > +\n> > > > +        If the sensor, when required to output the full pixel array matrix,\n> > > > +        automatically skip the invalid lines and columns, producing the\n>\n> s/skip/skips/\n>\n> > > > +        following data buffer, when captured to memory\n>\n> For the same reason as above, this could become\n>\n>         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory\n>\n> > > > +\n> > > > +                                    PixelArraySize[0]\n> > > > +               /----------------------------------------------/\n> > > > +                                                           x1\n> > > > +               +--------------------------------------------o-+ /\n> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > > +               
|OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > > > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> > > > +               ...       ...          ...       ...         ... |\n> > > > +               ...       ...          ...       ...         ... |\n> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > > > +               +----------------------------------------------+ /\n> > > > +\n> > > > +        The invalid lines and columns should not be reported as part of the\n>\n> s/The/then the/\n>\n> > > > +        PixelArraySize property in first place.\n> > > > +\n> > > > +        In this case, the position of the black pixel rectangles will be\n> > > > +\n> > > > +            PixelArrayOpticalBlackRectangles = {\n> > > > +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> > > > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > > > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > > > +            };\n>\n> I think part of this should be moved to the PixelArraySize property, as\n> it defines what the PixelArraySize property should report. Let's merge\n> this patch first though, and improvements can go on top (unless you\n> would prefer addressing this first :-)).\n>\n> > > > +\n> > > > +        \\todo Rename this property to Size once we will have property\n> > > > +              categories (i.e. 
Properties::PixelArray::OpticalBlackRectangles)\n> > > > +\n> > > > +  - PixelArrayActiveAreas:\n> > > > +      type: Rectangle\n> > > > +      size: [1 x n]\n>\n> [n] here too.\n>\n> > > > +      description: |\n> > > > +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> > > > +        overlapping) portions of the camera sensor readable pixel matrix\n> > > > +        which are considered valid for image acquisition purposes.\n> > > > +\n> > > > +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> > > > +        with its top-left corner defined by its horizontal and vertical\n> > > > +        distances from the PixelArraySize rectangle top-left corner, placed in\n> > > > +        position (0, 0).\n>\n> I think you can drop the last three lines, they're a bit confusing.\n>\n> > > > +\n> > > > +        This property describes an arbitrary number of overlapping rectangles,\n> > > > +        with each rectangle representing the maximum image size that the camera\n> > > > +        sensor can produce for a particular aspect ratio.\n>\n> Maybe even moving the previous paragraph here as \"They are defined\n> relatively to the PixelArraySize rectangle.\" ?\n>\n\nAck, I'll merge the two\n\n> > > > +\n> > > > +        When multiple rectangles are reported, they shall be ordered from the\n> > > > +        tallest to the shortest.\n> > > > +\n> > > > +        Example 1\n> > > > +        A camera sensor which only produces images in the 4:3 image resolution\n> > > > +        will report a single PixelArrayActiveAreas rectangle, from which all\n> > > > +        other image formats are obtained by either cropping the field-of-view\n> > > > +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> > > > +        binning.\n> > > > +\n> > > > +                     PixelArraySize[0]\n> > > > +                    /----------------/\n> > > > +                      x1          x2\n> > > > +            
(0,0)-> +-o------------o-+  /\n> > > > +                 y1 o +------------+ |  |\n> > > > +                    | |////////////| |  |\n> > > > +                    | |////////////| |  | PixelArraySize[1]\n> > > > +                    | |////////////| |  |\n> > > > +                 y2 o +------------+ |  |\n> > > > +                    +----------------+  /\n> > > > +\n> > > > +        The property reports a single rectangle\n> > > > +\n> > > > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> > > > +\n> > > > +        Example 2\n> > > > +        A camera sensor which can produce images in different native\n> > > > +        resolutions will report several overlapping rectangles, one for each\n> > > > +        natively supported resolution.\n> > > > +\n> > > > +                     PixelArraySize[0]\n> > > > +                    /------------------/\n> > > > +                      x1  x2    x3  x4\n> > > > +            (0,0)-> +o---o------o---o+  /\n> > > > +                 y1 o    +------+    |  |\n> > > > +                    |    |//////|    |  |\n> > > > +                 y2 o+---+------+---+|  |\n> > > > +                    ||///|//////|///||  | PixelArraySize[1]\n> > > > +                 y3 o+---+------+---+|  |\n> > > > +                    |    |//////|    |  |\n> > > > +                 y4 o    +------+    |  |\n> > > > +                    +----+------+----+  /\n> > > > +\n> > > > +        The property reports two rectangles\n> > > > +\n> > > > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> > > > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> > > > +\n> > > > +        The first rectangle describes the maximum field-of-view of all image\n> > > > +        formats in the 4:3 resolutions, while the second one describes the\n> > > > +        maximum field of view for all image formats in the 16:9 resolutions.\n>\n> Should we add one paragraph here as a 
clarification:\n>\n> Multiple rectangles shall only be reported when the sensor can't capture\nthe pixels in the corner regions. If all the pixels in the (x1,y1) -\n(x4,y4) area can be captured, the PixelArrayActiveAreas property shall\ncontains the single rectangle (x1,y1) - (x4,y4).\n\nIsn't this the example above ? Re-stating the same here seems\nconfusing.\n\n>\n> Or do you think it's overkill ?\n>\n> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n>\n\nThanks, I'll take comments in and resend a v7\n\n> > > > +\n> > > > +        \\todo Rename this property to ActiveAreas once we will have property\n> > > > +              categories (i.e. Properties::PixelArray::ActiveAreas)\n> > > > +\n> > > >  ...\n>\n> --\n> Regards,\n>\n> Laurent Pinchart"},
{"id":11745,"web_url":"https://patchwork.libcamera.org/comment/11745/","msgid":"<20200731105446.GA6218@pendragon.ideasonboard.com>","date":"2020-07-31T10:54:46","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Jacopo,\n\nOn Fri, Jul 
31, 2020 at 09:46:48AM +0200, Jacopo Mondi wrote:\n> On Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:\n> > Hi Jacopo,\n> >\n> > Thank you for the patch (and sorry for such a late reply)\n> >\n> > First of all, I really like this, I think it's really good work.\n> > Describing the pixel array size may seem like an easy task, but I know\n> > how much variation we can see in camera sensors, and how much research\n> > you've but into this. Coming up with the properties below that are both\n> > flexible and clearly defined was not a simple task.\n> >\n> > I have quite a few comments (you know me...), but nothing that requires\n> > significant changes. Since you've posted this series David has submitted\n> > a proposal for transformation (including rotation) support, which needs\n> > to be taken into account as Sakari pointed out. I've proposed in the\n> > review below a way to do so (don't worry, it's not a complete rewrite\n> > :-)), please feel free to tell me if you disagree with some (or all) of\n> > the proposals.\n> >\n> > On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:\n> >> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:\n> >>> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> >>>> Add definition of pixel array related properties.\n> >>>>\n> >>>> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> >>>> ---\n> >>>>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> >>>>  1 file changed, 263 insertions(+)\n> >>>>\n> >>>> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> >>>> index ce627fa042ba..762d60881568 100644\n> >>>> --- a/src/libcamera/property_ids.yaml\n> >>>> +++ b/src/libcamera/property_ids.yaml\n> >>>> @@ -386,4 +386,267 @@ controls:\n> >>>>                                |                    |\n> >>>>                                |                    |\n> >>>>                                +--------------------+\n> >>>> +\n> >>>> +  - 
UnitCellSize:\n> >>>> +      type: Size\n> >>>> +      description: |\n> >>>> +        The pixel unit cell physical size, in nanometers.\n> >>>> +\n> >>>> +        The UnitCellSize properties defines the horizontal and vertical sizes\n> >>>> +        of a single pixel unit, including its active and non-active parts.\n> >\n> > Would it be useful to add \"In other words, it expresses the horizontal\n> > and vertical distance between the top-left corners of adjacent pixels.\"\n> > ?\n> \n> It won't hurt, I can add it...\n> \n> >>>> +\n> >>>> +        The property can be used to calculate the physical size of the sensor's\n> >>>> +        pixel array area and for calibration purposes.\n> >>>\n> >>> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and\n> >>> PixelArraySize?\n> >>\n> >> Not really, as PixelArrayPhysicalSize reports the physical dimension\n> >> of the full pixel array (readable and non readable pixels) while\n> >> PixelArraySize reports the size in pixel of the largest readable\n> >> image. To sum it up: PixelArrayPhysicalSize reports the chip area,\n> >> which covers more space than the readable PixelArraySize.\n> >>\n> >>>> +\n> >>>> +  - PixelArrayPhysicalSize:\n> >>>> +      type: Size\n> >>>> +      description: |\n> >>>> +        The camera sensor full pixel array size, in nanometers.\n> >>>> +\n> >>>> +        The PixelArrayPhysicalSize property reports the physical dimensions\n> >>>> +        (width x height) of the full pixel array matrix, including readable\n> >>>> +        and non-readable pixels.\n> >>>> +\n> >>>> +        \\todo Rename this property to PhysicalSize once we will have property\n> >>>> +              categories (i.e. Properties::PixelArray::PhysicalSize)\n> >\n> > I understand Sakari's confusion regarding the UnitCellSize calculation\n> > from PixelArrayPhysicalSize and PixelArraySize. I also agree with your\n> > answer, but I think we could actually drop the PixelArrayPhysicalSize\n> > property. 
I don't see what it could be used for, it describes the\n> > physical size of something that can't be accessed at all from the host,\n> > and should thus not matter at all.\n> >\n> > If we later find a need for this, we can always add it. At that point, I\n> > also think it would be best to express the property in pixel units\n> > (although depending on the use case that drives addition of this\n> > property, I could very well be wrong).\n> \n> Ack, I'll drop\n> \n> >>>> +\n> >>>> +  - PixelArraySize:\n> >>>> +      type: Size\n> >>>> +      description: |\n> >>>> +        The camera sensor pixel array readable area vertical and horizontal\n> >>>> +        sizes, in pixels.\n> >>>> +\n> >>>> +        The PixelArraySize property defines the size in pixel units of the\n> >>>> +        readable part of full pixel array matrix, including optically black\n> >\n> > s/optically/optical/\n> >\n> > I've also come across the term \"optical dark pixels\" which I think is a\n> > bit more accurate, as the pixels are covered and thus dark, but not\n> > fully black due to leakage currents (otherwise there would be no need\n> > for black level correction). 
\"optical black\" is used much more often\n> > though, so I think we should keep that term.\n> >\n> >>>> +        pixels used for calibration, pixels which are not considered valid for\n> >>>> +        capture and active pixels valid for image capture.\n> >\n> > Maybe \"and active pixels containing valid image data\" ?\n> \n> I can change this\n> \n> >>>> +\n> >>>> +        The property describes a rectangle whose top-left corner is placed\n> >>>> +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> >>>> +        by the Size element transported by the property.\n> >\n> > I would move this paragraph below to [1].\n> >\n> >>>> +\n> >>>> +        The property describes the maximum size of the raw data produced by\n> >>>> +        the sensor, which might not correspond to the physical size of the\n> >\n> > Maybe s/produced by the sensor/captured by the camera/, as the CSI-2\n> > receiver could drop data too ? For instance, the invalid lines between\n> > the optical black and valid regions could be sent by the sensor, with a\n> > specific data type, and dropped on the receiving side by a decision of\n> > the pipeline handler that chooses to configure the CSI-2 receiver to\n> > drop the corresponding data type.\n> \n> Ok, we're really picking details here, but yes, I can change this. However I\n> wonder if that should not actually describe what the sensor puts on\n> the bus. How the receiver filters it, it's a separate issue.\n\nI've been pondering about that, and concluded that, from an application\npoint of view (as properties are designed to be consumed by\napplications), what matters is what is being captured. 
Even if we\ndisregard the CSI-2 receiver configuration, it's conceivable that the\nsensor would output invalid lines with a data type that the CSI-2\nreceiver isn't able to capture due to the hardware being hardcoded to\nskip some data types.\n\nI agree it won't make much difference in practice, application\ndevelopers will likely not think about all this.\n\n> >>>> +        sensor pixel array matrix, as some portions of the physical pixel\n> >>>> +        array matrix are not accessible and cannot be transmitted out.\n> >>>> +\n> >>>> +        For example, a pixel array matrix assembled as follow\n> >\n> > s/a pixel array/let's consider a pixel array/\n> > s/follow/follows/\n> >\n> >>>> +\n> >>>> +             +--------------------------------------------------+\n> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             ...          ...           ...      ...          ...\n> >>>> +\n> >>>> +             ...          ...           ...      ...          
...\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> >>>> +             +--------------------------------------------------+\n> >>>> +\n> >>>> +        composed by two lines of non-readable pixels (x) followed by N lines of\n> >\n> > s/composed by/starting with/\n> > s/(x)/(x),/\n> >\n> >>>> +        readable data (D) surrounded by two columns of non-readable pixels on\n> >>>> +        each side, only the readable portion is transmitted to the receiving\n> >\n> > s/each side,/each side, and ending with two more lines of non-readable pixels./\n> > s/only/Only/\n> >\n> >>>> +        side, defining the sizes of the largest possible buffer of raw data\n> >\n> > s/sizes/size/\n> >\n> >>>> +        that can be presented to applications.\n> >>>> +\n> >>>> +                               PixelArraySize[0]\n> >\n> > should this be PixelArraySize.width ? Same for height instead of [1],\n> > and in other locations below.\n> \n> ack\n> \n> >>>> +               /----------------------------------------------/\n> >>>> +               +----------------------------------------------+ /\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> >>>> +               ...        ...           ...      ...        ...\n> >>>> +               ...        ...           ...      ...        
...\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> >>>> +               +----------------------------------------------+ /\n> >>>> +\n> >\n> > [1], with a small modification:\n> >\n> > \"This defines a rectangle ... sizes are defined by this property.\"\n> >\n> > and merge it with the following paragraph in a single paragraph.\n> >\n> >>>> +        All other rectangles that describes portions of the pixel array, such as\n> >\n> > s/describes/describe/\n> >\n> >>>> +        the optical black pixels rectangles and active pixel areas are defined\n> >\n> > s/areas/areas,/\n> >\n> >>>> +        relatively to the rectangle described by this property.\n> >>>> +\n> >>>> +        \\todo Rename this property to Size once we will have property\n> >>>> +              categories (i.e. Properties::PixelArray::Size)\n> >>>> +\n> >>>> +  - PixelArrayOpticalBlackRectangles:\n> >>>> +      type: Rectangle\n> >>>> +      size: [1 x n]\n> >\n> > This can just be [n], not all sizes need to be bi-dimensional.\n> >\n> >>>> +      description: |\n> >>>> +        The raw data buffer regions which contains optical black pixels\n> >\n> > s/raw data buffer/pixel array/ for the reason explained below in [2]\n> > s/regions/region(s)/ ?\n> > s/contains/contain/\n> >\n> >>>> +        considered valid for calibration purposes.\n> >>>> +\n> >>>\n> >>> How does this interact with the rotation property?\n> >>>\n> >>> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\n> >>> invert the flipping controls. I presume the same would apply to e.g. dark\n> >>> pixels in case they are read out, but that should be something for a driver\n> >>> to handle.\n> >>\n> >> Right. I think this also depends how black pixels are read out. 
I here\n> >> assumed the sensor is capable of reading out the whole PixelArraySize area,\n> >> and the receiver is able to capture its whole content in a single\n> >> buffer, where application can then go and pick the areas of interest\n> >> using the information conveyed by this property. If this model holds,\n> >> then if sensor flipping is enabled, indeed the here reported\n> >> information are mis-leading, unless the chip architecture is\n> >> perfectly symmetric in vertical and horizontal dimensions (which seems\n> >> unlikely).\n> >>\n> >>> But if the frame layout isn't conveyed to the user space by the driver,\n> >>> then we need another way to convey the sensor is actually rotated. Oh well.\n> >>\n> >> Not sure how to interpret this part. Do you mean \"convey that a\n> >> rotation is applied by, ie, setting the canonical V\\HFLIP controls,\n> >> which cause the sensor pixel array to be transmitted out with its\n> >> vertical/horizontal read-out directions inverted ?\"\n> >>\n> >> We currently have a read-only property that reports the mounting\n> >> rotation (like the dt-property you have just reviewed :) I assume we\n> >> will have a rotation control that reports instead if a V/HFLIP is\n> >> applied, so that application know how to compensate that.\n> >>\n> >>> Not every sensor that is mounted upside down has flipping controls so the\n> >>> user space will still somehow need to manage the rotation in that case.\n> >>\n> >> I think it should, and I think we'll provide all information to be\n> >> able to do so.\n> >\n> > I think all the properties here should be expressed relative to the\n> > default sensor readout direction. Any transformation (such as h/v flip)\n> > will need to be taken into account by the application if the use cases\n> > require it. 
That would be the easiest course of action, otherwise these\n> > properties couldn't be static, as they would depend on the selected\n> > transformation.\n> >\n> > David has submitted a patch series with a Transform control (actually\n> > specified in CameraConfiguration, not as a control). We will need to\n> > document how the properties in this patch, the Rotation property, and\n> > the Transform control, interact.\n> >\n> > How about adding in the PixelArraySize documentation a paragraph that\n> > states\n> >\n> > \"All the coordinates are expressed relative to the default sensor\n> > readout direction, without any transformation (such as horizontal and\n> > vertical flipping) applied. When mapping them to the raw pixel buffer,\n> > applications shall take any configured transformation into account.\"\n> >\n> > ?\n> \n> That's the easiest way forward, sounds good to me\n> \n> >\n> >>>> +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)\n> >\n> > s/describes/property describes/ (it feels weird to have a plural name\n> > with a singular verb).\n> >\n> >>>> +        rectangular areas of the raw data buffer, where optical black pixels are\n> >\n> > s/buffer,/buffer/\n> >\n> >>>> +        located and could be accessed for calibration and black level\n> >>>> +        correction.\n> >\n> > Maybe just \"... of the raw data buffers that contain optical black\n> > pixels.\" ? 
That duplicates the previous paragraph a bit, maybe we could\n> > just drop it.\n> >\n> >>>> +\n> >>>> +        This property describes the position and size of optically black pixel\n> >\n> > s/optically/optical/\n> >\n> >>>> +        rectangles relatively to their position in the raw data buffer as stored\n> >\n> > s/rectangles/regions/\n> > s/relatively to their position in the raw data buffer/in the raw data buffer/\n> >\n> >>>> +        in memory, which might differ from their actual physical location in the\n> >>>> +        pixel array matrix.\n> >>>> +\n> >>>> +        It is important to note, in facts, that camera sensor might\n> >\n> > s/facts/fact/\n> > s/sensors/sensor/\n> >\n> >>>> +        automatically re-order, shuffle or skip portions of their pixels array\n> >\n> > s/re-order/reorder/\n> >\n> > I'd drop shuffle, I think \"reorder or skip\" is enough, hopefully the\n> > sensors won't shuffle pixels randomly :-)\n> >\n> >>>> +        matrix when transmitting data to the receiver.\n> >\n> > Should we add an example ?\n> >\n> > \"For instance, a sensor may merge the top and bottom optical black\n> > rectangles into a single rectangle, transmitted at the beginning of the\n> > frame.\"\n> >\n> >>>> +\n> >>>> +        The pixel array contains several areas with different purposes,\n> >>>> +        interleaved by lines and columns which are said not to be valid for\n> >>>> +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> >>>> +        they could be positioned too close to the chip margins or to the optical\n> >>>> +        blank shielding placed on top of optical black pixels.\n> >\n> > s/blank/black/\n> >\n> >>>> +\n> >>>> +                                PixelArraySize[0]\n> >>>> +               /----------------------------------------------/\n> >>>> +                  x1                                       x2\n> >>>> +               +--o---------------------------------------o---+ /\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> >>>> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> >>>> +               ...          ...           ...     ...       ...\n> >>>> +               ...          ...           ...     ...       
...\n> >>>> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> >>>> +               +----------------------------------------------+ /\n> >>>> +\n> >>>> +        The readable pixel array matrix is composed by\n> >>>> +        2 invalid lines (I)\n> >>>> +        4 lines of valid optically black pixels (O)\n> >\n> > s/optically/optical/\n> >\n> >>>> +        2 invalid lines (I)\n> >>>> +        n lines of valid pixel data (P)\n> >>>> +        2 invalid lines (I)\n> >>>> +\n> >>>> +        And the position of the optical black pixel rectangles is defined by\n> >>>> +\n> >>>> +            PixelArrayOpticalBlackRectangles = {\n> >>>> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n> >>>\n> >>> s/\\+ \\K}/1 }/\n> >\n> > :perldo s/\\+ \\K}/1 }/\n> >\n> > ;-)\n> >\n> >>>> +               { x1, y3, 2, y4 - y3 + 1 },\n> >>>> +               { x2, y3, 2, y4 - y3 + 1 },\n> >>>> +            };\n> >>>> +\n> >>>> +        If the sensor, when required to output the full pixel array matrix,\n> >>>> +        automatically skip the invalid lines and columns, producing the\n> >\n> > s/skip/skips/\n> >\n> >>>> +        following data buffer, when captured to memory\n> >\n> > For the same reason as above, this could become\n> >\n> >         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory\n> >\n> >>>> +\n> >>>> +                                    PixelArraySize[0]\n> >>>> +               /----------------------------------------------/\n> >>>> +                                                           x1\n> >>>> +               +--------------------------------------------o-+ /\n> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> >>>> +               
|OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> >>>> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> >>>> +               ...       ...          ...       ...         ... |\n> >>>> +               ...       ...          ...       ...         ... |\n> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> >>>> +               +----------------------------------------------+ /\n> >>>> +\n> >>>> +        The invalid lines and columns should not be reported as part of the\n> >\n> > s/The/then the/\n> >\n> >>>> +        PixelArraySize property in first place.\n> >>>> +\n> >>>> +        In this case, the position of the black pixel rectangles will be\n> >>>> +\n> >>>> +            PixelArrayOpticalBlackRectangles = {\n> >>>> +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> >>>> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> >>>> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> >>>> +            };\n> >\n> > I think part of this should be moved to the PixelArraySize property, as\n> > it defines what the PixelArraySize property should report. Let's merge\n> > this patch first though, and improvements can go on top (unless you\n> > would prefer addressing this first :-)).\n> >\n> >>>> +\n> >>>> +        \\todo Rename this property to Size once we will have property\n> >>>> +              categories (i.e. 
Properties::PixelArray::OpticalBlackRectangles)\n> >>>> +\n> >>>> +  - PixelArrayActiveAreas:\n> >>>> +      type: Rectangle\n> >>>> +      size: [1 x n]\n> >\n> > [n] here too.\n> >\n> >>>> +      description: |\n> >>>> +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> >>>> +        overlapping) portions of the camera sensor readable pixel matrix\n> >>>> +        which are considered valid for image acquisition purposes.\n> >>>> +\n> >>>> +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> >>>> +        with its top-left corner defined by its horizontal and vertical\n> >>>> +        distances from the PixelArraySize rectangle top-left corner, placed in\n> >>>> +        position (0, 0).\n> >\n> > I think you can drop the last three lines, they're a bit confusing.\n> >\n> >>>> +\n> >>>> +        This property describes an arbitrary number of overlapping rectangles,\n> >>>> +        with each rectangle representing the maximum image size that the camera\n> >>>> +        sensor can produce for a particular aspect ratio.\n> >\n> > Maybe even moving the previous paragraph here as \"They are defined\n> > relatively to the PixelArraySize rectangle.\" ?\n> >\n> \n> Ack, I'll merge the two\n> \n> >>>> +\n> >>>> +        When multiple rectangles are reported, they shall be ordered from the\n> >>>> +        tallest to the shortest.\n> >>>> +\n> >>>> +        Example 1\n> >>>> +        A camera sensor which only produces images in the 4:3 image resolution\n> >>>> +        will report a single PixelArrayActiveAreas rectangle, from which all\n> >>>> +        other image formats are obtained by either cropping the field-of-view\n> >>>> +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> >>>> +        binning.\n> >>>> +\n> >>>> +                     PixelArraySize[0]\n> >>>> +                    /----------------/\n> >>>> +                      x1          x2\n> >>>> +            (0,0)-> 
+-o------------o-+  /\n> >>>> +                 y1 o +------------+ |  |\n> >>>> +                    | |////////////| |  |\n> >>>> +                    | |////////////| |  | PixelArraySize[1]\n> >>>> +                    | |////////////| |  |\n> >>>> +                 y2 o +------------+ |  |\n> >>>> +                    +----------------+  /\n> >>>> +\n> >>>> +        The property reports a single rectangle\n> >>>> +\n> >>>> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> >>>> +\n> >>>> +        Example 2\n> >>>> +        A camera sensor which can produce images in different native\n> >>>> +        resolutions will report several overlapping rectangles, one for each\n> >>>> +        natively supported resolution.\n> >>>> +\n> >>>> +                     PixelArraySize[0]\n> >>>> +                    /------------------/\n> >>>> +                      x1  x2    x3  x4\n> >>>> +            (0,0)-> +o---o------o---o+  /\n> >>>> +                 y1 o    +------+    |  |\n> >>>> +                    |    |//////|    |  |\n> >>>> +                 y2 o+---+------+---+|  |\n> >>>> +                    ||///|//////|///||  | PixelArraySize[1]\n> >>>> +                 y3 o+---+------+---+|  |\n> >>>> +                    |    |//////|    |  |\n> >>>> +                 y4 o    +------+    |  |\n> >>>> +                    +----+------+----+  /\n> >>>> +\n> >>>> +        The property reports two rectangles\n> >>>> +\n> >>>> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> >>>> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> >>>> +\n> >>>> +        The first rectangle describes the maximum field-of-view of all image\n> >>>> +        formats in the 4:3 resolutions, while the second one describes the\n> >>>> +        maximum field of view for all image formats in the 16:9 resolutions.\n> >\n> > Should we add one paragraph here as a clarification:\n> >\n> > Multiple rectangles shall 
only be reported when the sensor can't capture\n> > the pixels in the corner regions. If all the pixels in the (x1,y1) -\n> > (x4,y4) area can be captured, the PixelArrayActiveAreas property shall\n> > contains the single rectangle (x1,y1) - (x4,y4).\n> \n> Isn't this the example above ? Re-stating the same here seems\n> confusing.\n\nWhat I meant to say is that a camera should not implement example 2 just\nbecause the pipeline handler developer wants to expose different aspect\nratios, when the hardware actually implements example 1. For instance,\nconsidering a sensor with a native resolution of 3840x2880 (4/3) that\ncan be captured fully, a 16/9 picture can be captured by cropping to\n3840*2160. In that case PixelArrayActiveAreas should only report\n(0,0)/3840x2880, not both (0,0)/3840x2880 and (0,360)/3840x2160.\n\nBut maybe I'm just worrying too much and nobody will think about doing\nthat :-)\n\n> > Or do you think it's overkill ?\n> >\n> > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n> \n> Thanks, I'll take comments in and resend a v7\n> \n> >>>> +\n> >>>> +        \\todo Rename this property to ActiveAreas once we will have property\n> >>>> +              categories (i.e. 
Properties::PixelArray::ActiveAreas)\n> >>>> +\n> >>>>  ..."},
{"id":11746,"web_url":"https://patchwork.libcamera.org/comment/11746/","msgid":"<20200731114534.cmw5w3wlfmcgmrsr@uno.localdomain>","date":"2020-07-31T11:45:34","subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array properties","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"On Fri, 
Jul 31, 2020 at 01:54:46PM +0300, Laurent Pinchart wrote:\n> Hi Jacopo,\n>\n> On Fri, Jul 31, 2020 at 09:46:48AM +0200, Jacopo Mondi wrote:\n> > On Fri, Jul 31, 2020 at 05:21:25AM +0300, Laurent Pinchart wrote:\n> > > Hi Jacopo,\n> > >\n> > > Thank you for the patch (and sorry for such a late reply)\n> > >\n> > > First of all, I really like this, I think it's really good work.\n> > > Describing the pixel array size may seem like an easy task, but I know\n> > > how much variation we can see in camera sensors, and how much research\n> > > you've but into this. Coming up with the properties below that are both\n> > > flexible and clearly defined was not a simple task.\n> > >\n> > > I have quite a few comments (you know me...), but nothing that requires\n> > > significant changes. Since you've posted this series David has submitted\n> > > a proposal for transformation (including rotation) support, which needs\n> > > to be taken into account as Sakari pointed out. I've proposed in the\n> > > review below a way to do so (don't worry, it's not a complete rewrite\n> > > :-)), please feel free to tell me if you disagree with some (or all) of\n> > > the proposals.\n> > >\n> > > On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:\n> > >> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:\n> > >>> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:\n> > >>>> Add definition of pixel array related properties.\n> > >>>>\n> > >>>> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> > >>>> ---\n> > >>>>  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++\n> > >>>>  1 file changed, 263 insertions(+)\n> > >>>>\n> > >>>> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml\n> > >>>> index ce627fa042ba..762d60881568 100644\n> > >>>> --- a/src/libcamera/property_ids.yaml\n> > >>>> +++ b/src/libcamera/property_ids.yaml\n> > >>>> @@ -386,4 +386,267 @@ controls:\n> > >>>>                                |            
        |\n> > >>>>                                |                    |\n> > >>>>                                +--------------------+\n> > >>>> +\n> > >>>> +  - UnitCellSize:\n> > >>>> +      type: Size\n> > >>>> +      description: |\n> > >>>> +        The pixel unit cell physical size, in nanometers.\n> > >>>> +\n> > >>>> +        The UnitCellSize properties defines the horizontal and vertical sizes\n> > >>>> +        of a single pixel unit, including its active and non-active parts.\n> > >\n> > > Would it be useful to add \"In other words, it expresses the horizontal\n> > > and vertical distance between the top-left corners of adjacent pixels.\"\n> > > ?\n> >\n> > It won't hurt, I can add it...\n> >\n> > >>>> +\n> > >>>> +        The property can be used to calculate the physical size of the sensor's\n> > >>>> +        pixel array area and for calibration purposes.\n> > >>>\n> > >>> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and\n> > >>> PixelArraySize?\n> > >>\n> > >> Not really, as PixelArrayPhysicalSize reports the physical dimension\n> > >> of the full pixel array (readable and non readable pixels) while\n> > >> PixelArraySize reports the size in pixel of the largest readable\n> > >> image. To sum it up: PixelArrayPhysicalSize reports the chip area,\n> > >> which covers more space that the readable PixelArraySize.\n> > >>\n> > >>>> +\n> > >>>> +  - PixelArrayPhysicalSize:\n> > >>>> +      type: Size\n> > >>>> +      description: |\n> > >>>> +        The camera sensor full pixel array size, in nanometers.\n> > >>>> +\n> > >>>> +        The PixelArrayPhysicalSize property reports the physical dimensions\n> > >>>> +        (width x height) of the full pixel array matrix, including readable\n> > >>>> +        and non-readable pixels.\n> > >>>> +\n> > >>>> +        \\todo Rename this property to PhysicalSize once we will have property\n> > >>>> +              categories (i.e. 
Properties::PixelArray::PhysicalSize)\n> > >\n> > > I understand Sakari's confusion regarding the UnitCellSize calculation\n> > > from PixelArrayPhysicalSize and PixelArraySize. I also agree with your\n> > > answer, but I think we could actually drop the PixelArrayPhysicalSize\n> > > property. I don't see what it could be used for, it describes the\n> > > physical size of something that can't be accessed at all from the host,\n> > > and should thus not matter at all.\n> > >\n> > > If we later find a need for this, we can always at it. At that point, I\n> > > also think it would be best to express the property in pixel units\n> > > (although depending on the use case that drives addition of this\n> > > property, I could very well be wrong).\n> >\n> > Ack, I'll drop\n> >\n> > >>>> +\n> > >>>> +  - PixelArraySize:\n> > >>>> +      type: Size\n> > >>>> +      description: |\n> > >>>> +        The camera sensor pixel array readable area vertical and horizontal\n> > >>>> +        sizes, in pixels.\n> > >>>> +\n> > >>>> +        The PixelArraySize property defines the size in pixel units of the\n> > >>>> +        readable part of full pixel array matrix, including optically black\n> > >\n> > > s/optically/optical/\n> > >\n> > > I've also come across the term \"optical dark pixels\" which I think is a\n> > > bit more accurate, as the pixels are covered and thus dark, but not\n> > > fully black due to leakages currents (otherwise there would be no need\n> > > to black level correction). 
\"optical black\" is used much more often\n> > > though, so I think we should keep that term.\n> > >\n> > >>>> +        pixels used for calibration, pixels which are not considered valid for\n> > >>>> +        capture and active pixels valid for image capture.\n> > >\n> > > Maybe \"and active pixels containing valid image data\" ?\n> >\n> > I can change this\n> >\n> > >>>> +\n> > >>>> +        The property describes a rectangle whose top-left corner is placed\n> > >>>> +        in position (0, 0) and whose vertical and horizontal sizes are defined\n> > >>>> +        by the Size element transported by the property.\n> > >\n> > > I would move this paragraph below to [1].\n> > >\n> > >>>> +\n> > >>>> +        The property describes the maximum size of the raw data produced by\n> > >>>> +        the sensor, which might not correspond to the physical size of the\n> > >\n> > > Maybe s/produced by the sensor/captured by the camera/, as the CSI-2\n> > > receiver could drop data too ? For instance, the invalid lines between\n> > > the optical black and valid regions could be sent by the sensor, with a\n> > > specific data type, and dropped on the receiving side by a decision of\n> > > the pipeline handler that chooses to configure the CSI-2 receiver to\n> > > drop the corresponding data type.\n> >\n> > Ok, we're really picking details here, but yes, I can change this. However I\n> > wonder if that should not actually describe what the sensor puts on\n> > the bus. How the receiver filters it, it's a separate issue.\n>\n> I've been pondering about that, and concluded that, from an application\n> point of view (as properties are designed to be consumed by\n> applications), what matters is what is being captured. 
Even if we\n> disregard the CSI-2 receiver configuration, it's conceivable that the\n> sensor would output invalid lines with a data type that the CSI-2\n> receiver isn't able to capture due to the hardware being hardcoded to\n> skip some data types.\n>\n> I agree it won't make much difference in practice, application\n> developers will likely not think about all this.\n>\n> > >>>> +        sensor pixel array matrix, as some portions of the physical pixel\n> > >>>> +        array matrix are not accessible and cannot be transmitted out.\n> > >>>> +\n> > >>>> +        For example, a pixel array matrix assembled as follow\n> > >\n> > > s/a pixel array/let's consider a pixel array/\n> > > s/follow/follows/\n> > >\n> > >>>> +\n> > >>>> +             +--------------------------------------------------+\n> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             ...          ...           ...      ...          ...\n> > >>>> +\n> > >>>> +             ...          ...           ...      ...          
...\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|\n> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > >>>> +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|\n> > >>>> +             +--------------------------------------------------+\n> > >>>> +\n> > >>>> +        composed by two lines of non-readable pixels (x) followed by N lines of\n> > >\n> > > s/composed by/starting with/\n> > > s/(x)/(x),/\n> > >\n> > >>>> +        readable data (D) surrounded by two columns of non-readable pixels on\n> > >>>> +        each side, only the readable portion is transmitted to the receiving\n> > >\n> > > s/each side,/each side, and ending with two more lines of non-readable pixels./\n> > > s/only/Only/\n> > >\n> > >>>> +        side, defining the sizes of the largest possible buffer of raw data\n> > >\n> > > s/sizes/size/\n> > >\n> > >>>> +        that can be presented to applications.\n> > >>>> +\n> > >>>> +                               PixelArraySize[0]\n> > >\n> > > should this be PixelArraySize.width ? Same for height instead of [1],\n> > > and in other locations below.\n> >\n> > ack\n> >\n> > >>>> +               /----------------------------------------------/\n> > >>>> +               +----------------------------------------------+ /\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]\n> > >>>> +               ...        ...           ...      ...        ...\n> > >>>> +               ...        ...  
         ...      ...        ...\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |\n> > >>>> +               +----------------------------------------------+ /\n> > >>>> +\n> > >\n> > > [1], with a small modification:\n> > >\n> > > \"This defines a rectangle ... sizes are defined by this property.\"\n> > >\n> > > and merge it with the following paragraph in a single paragraph.\n> > >\n> > >>>> +        All other rectangles that describes portions of the pixel array, such as\n> > >\n> > > s/describes/describe/\n> > >\n> > >>>> +        the optical black pixels rectangles and active pixel areas are defined\n> > >\n> > > s/areas/areas,/\n> > >\n> > >>>> +        relatively to the rectangle described by this property.\n> > >>>> +\n> > >>>> +        \\todo Rename this property to Size once we will have property\n> > >>>> +              categories (i.e. Properties::PixelArray::Size)\n> > >>>> +\n> > >>>> +  - PixelArrayOpticalBlackRectangles:\n> > >>>> +      type: Rectangle\n> > >>>> +      size: [1 x n]\n> > >\n> > > This can just be [n], not all sizes need to be bi-dimensional.\n> > >\n> > >>>> +      description: |\n> > >>>> +        The raw data buffer regions which contains optical black pixels\n> > >\n> > > s/raw data buffer/pixel array/ for the reason explained below in [2]\n> > > s/regions/region(s)/ ?\n> > > s/contains/contain/\n> > >\n> > >>>> +        considered valid for calibration purposes.\n> > >>>> +\n> > >>>\n> > >>> How does this interact with the rotation property?\n> > >>>\n> > >>> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally\n> > >>> invert the flipping controls. I presume the same would apply to e.g. dark\n> > >>> pixels in case they are read out, but that should be something for a driver\n> > >>> to handle.\n> > >>\n> > >> Right. I think this also depends how black pixels are read out. 
I here\n> > >> assumed the sensor is capable of reading out the whole PixelArraySize area,\n> > >> and the receiver is able to capture its whole content in a single\n> > >> buffer, where application can then go and pick the areas of interest\n> > >> using the information conveyed by this property. If this model holds,\n> > >> then if sensor flipping is enabled, indeed the here reported\n> > >> information are mis-leading, unless the chip architecture is\n> > >> perfectly symmetric in vertical and horizontal dimensions (which seems\n> > >> unlikely).\n> > >>\n> > >>> But if the frame layout isn't conveyed to the user space by the driver,\n> > >>> then we need another way to convey the sensor is actually rotated. Oh well.\n> > >>\n> > >> Not sure how to interpret this part. Do you mean \"convey that a\n> > >> rotation is applied by, ie, setting the canonical V\\HFLIP controls,\n> > >> which cause the sensor pixel array to be transmitted out with its\n> > >> vertical/horizontal read-out directions inverted ?\"\n> > >>\n> > >> We currently have a read-only property that reports the mounting\n> > >> rotation (like the dt-property you have just reviewed :) I assume we\n> > >> will have a rotation control that reports instead if a V/HFLIP is\n> > >> applied, so that application know how to compensate that.\n> > >>\n> > >>> Not every sensor that is mounted upside down has flipping controls so the\n> > >>> user space will still somehow need to manage the rotation in that case.\n> > >>\n> > >> I think it should, and I think we'll provide all information to be\n> > >> able to do so.\n> > >\n> > > I think all the properties here should be expressed relative to the\n> > > default sensor readout direction. Any transformation (such as h/v flip)\n> > > will need to be taken into account by the application if the use cases\n> > > require it. 
That would be the easiest course of action, otherwise these\n> > > properties couldn't be static, as they would depend on the selected\n> > > transformation.\n> > >\n> > > David has submitted a patch series with a Transform control (actually\n> > > specified in CameraConfiguration, not as a control). We will need to\n> > > document how the properties in this patch, the Rotation property, and\n> > > the Transform control, interact.\n> > >\n> > > How about adding in the PixelArraySize documentation a paragraph that\n> > > states\n> > >\n> > > \"All the coordinates are expressed relative to the default sensor\n> > > readout direction, without any transformation (such as horizontal and\n> > > vertical flipping) applied. When mapping them to the raw pixel buffer,\n> > > applications shall take any configured transformation into account.\"\n> > >\n> > > ?\n> >\n> > That's the easieast way forward, sounds good to me\n> >\n> > >\n> > >>>> +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)\n> > >\n> > > s/describes/property describes/ (it feels weird to have a plural name\n> > > with a singular verb).\n> > >\n> > >>>> +        rectangular areas of the raw data buffer, where optical black pixels are\n> > >\n> > > s/buffer,/buffer/\n> > >\n> > >>>> +        located and could be accessed for calibration and black level\n> > >>>> +        correction.\n> > >\n> > > Maybe just \"... of the raw data buffers that contain optical black\n> > > pixels.\" ? 
That duplicates the previous paragraph a bit, maybe we could\n> > > just drop it.\n> > >\n> > >>>> +\n> > >>>> +        This property describes the position and size of optically black pixel\n> > >\n> > > s/optically/optical/\n> > >\n> > >>>> +        rectangles relatively to their position in the raw data buffer as stored\n> > >\n> > > s/rectangles/regions/\n> > > s/relatively to their position in the raw data buffer/in the raw data buffer/\n> > >\n> > >>>> +        in memory, which might differ from their actual physical location in the\n> > >>>> +        pixel array matrix.\n> > >>>> +\n> > >>>> +        It is important to note, in facts, that camera sensor might\n> > >\n> > > s/facts/fact/\n> > > s/sensors/sensor/\n> > >\n> > >>>> +        automatically re-order, shuffle or skip portions of their pixels array\n> > >\n> > > s/re-order/reorder/\n> > >\n> > > I'd drop shuffle, I think \"reorder or skip\" is enough, hopefully the\n> > > sensors won't shuffle pixels randomly :-)\n> > >\n> > >>>> +        matrix when transmitting data to the receiver.\n> > >\n> > > Should we add an example ?\n> > >\n> > > \"For instance, a sensor may merge the top and bottom optical black\n> > > rectangles into a single rectangle, transmitted at the beginning of the\n> > > frame.\"\n> > >\n> > >>>> +\n> > >>>> +        The pixel array contains several areas with different purposes,\n> > >>>> +        interleaved by lines and columns which are said not to be valid for\n> > >>>> +        capturing purposes. 
Invalid lines and columns are defined as invalid as\n> > >>>> +        they could be positioned too close to the chip margins or to the optical\n> > >>>> +        blank shielding placed on top of optical black pixels.\n> > >\n> > > s/blank/black/\n> > >\n> > >>>> +\n> > >>>> +                                PixelArraySize[0]\n> > >>>> +               /----------------------------------------------/\n> > >>>> +                  x1                                       x2\n> > >>>> +               +--o---------------------------------------o---+ /\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > >>>> +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > >>>> +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]\n> > >>>> +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > >>>> +               ...          ...           ...     ...       ...\n> > >>>> +               ...          ...           ...     ...       
...\n> > >>>> +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |\n> > >>>> +               +----------------------------------------------+ /\n> > >>>> +\n> > >>>> +        The readable pixel array matrix is composed by\n> > >>>> +        2 invalid lines (I)\n> > >>>> +        4 lines of valid optically black pixels (O)\n> > >\n> > > s/optically/optical/\n> > >\n> > >>>> +        2 invalid lines (I)\n> > >>>> +        n lines of valid pixel data (P)\n> > >>>> +        2 invalid lines (I)\n> > >>>> +\n> > >>>> +        And the position of the optical black pixel rectangles is defined by\n> > >>>> +\n> > >>>> +            PixelArrayOpticalBlackRectangles = {\n> > >>>> +               { x1, y1, x2 - x1 + 1, y2 - y1 + },\n> > >>>\n> > >>> s/\\+ \\K}/1 }/\n> > >\n> > > :perldo s/\\+ \\K}/1 }/\n> > >\n> > > ;-)\n> > >\n> > >>>> +               { x1, y3, 2, y4 - y3 + 1 },\n> > >>>> +               { x2, y3, 2, y4 - y3 + 1 },\n> > >>>> +            };\n> > >>>> +\n> > >>>> +        If the sensor, when required to output the full pixel array matrix,\n> > >>>> +        automatically skip the invalid lines and columns, producing the\n> > >\n> > > s/skip/skips/\n> > >\n> > >>>> +        following data buffer, when captured to memory\n> > >\n> > > For the same reason as above, this could become\n> > >\n> > >         If the camera, when capturing the full pixel array matrix, automatically skips the invalid lines and columns, producing the following data buffer, when captured to memory\n> > >\n> > >>>> +\n> > >>>> +                                    PixelArraySize[0]\n> > >>>> +               /----------------------------------------------/\n> > >>>> +                                                           x1\n> > >>>> +               +--------------------------------------------o-+ /\n> > >>>> +              
 |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > >>>> +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |\n> > >>>> +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]\n> > >>>> +               ...       ...          ...       ...         ... |\n> > >>>> +               ...       ...          ...       ...         ... |\n> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > >>>> +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |\n> > >>>> +               +----------------------------------------------+ /\n> > >>>> +\n> > >>>> +        The invalid lines and columns should not be reported as part of the\n> > >\n> > > s/The/then the/\n> > >\n> > >>>> +        PixelArraySize property in first place.\n> > >>>> +\n> > >>>> +        In this case, the position of the black pixel rectangles will be\n> > >>>> +\n> > >>>> +            PixelArrayOpticalBlackRectangles = {\n> > >>>> +               { 0, 0, y1 + 1, PixelArraySize[0] },\n> > >>>> +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > >>>> +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },\n> > >>>> +            };\n> > >\n> > > I think part of this should be moved to the PixelArraySize property, as\n> > > it defines what the PixelArraySize property should report. Let's merge\n> > > this patch first though, and improvements can go on top (unless you\n> > > would prefer addressing this first :-)).\n> > >\n> > >>>> +\n> > >>>> +        \\todo Rename this property to Size once we will have property\n> > >>>> +              categories (i.e. 
Properties::PixelArray::OpticalBlackRectangles)\n> > >>>> +\n> > >>>> +  - PixelArrayActiveAreas:\n> > >>>> +      type: Rectangle\n> > >>>> +      size: [1 x n]\n> > >\n> > > [n] here too.\n> > >\n> > >>>> +      description: |\n> > >>>> +        The PixelArrayActiveAreas property defines the (possibly multiple and\n> > >>>> +        overlapping) portions of the camera sensor readable pixel matrix\n> > >>>> +        which are considered valid for image acquisition purposes.\n> > >>>> +\n> > >>>> +        Each rectangle is defined relatively to the PixelArraySize rectangle,\n> > >>>> +        with its top-left corner defined by its horizontal and vertical\n> > >>>> +        distances from the PixelArraySize rectangle top-left corner, placed in\n> > >>>> +        position (0, 0).\n> > >\n> > > I think you can drop the last three lines, they're a bit confusing.\n> > >\n> > >>>> +\n> > >>>> +        This property describes an arbitrary number of overlapping rectangles,\n> > >>>> +        with each rectangle representing the maximum image size that the camera\n> > >>>> +        sensor can produce for a particular aspect ratio.\n> > >\n> > > Maybe even moving the previous paragraph here as \"They are defined\n> > > relatively to the PixelArraySize rectangle.\" ?\n> > >\n> >\n> > Ack, I'll merge the two\n> >\n> > >>>> +\n> > >>>> +        When multiple rectangles are reported, they shall be ordered from the\n> > >>>> +        tallest to the shortest.\n> > >>>> +\n> > >>>> +        Example 1\n> > >>>> +        A camera sensor which only produces images in the 4:3 image resolution\n> > >>>> +        will report a single PixelArrayActiveAreas rectangle, from which all\n> > >>>> +        other image formats are obtained by either cropping the field-of-view\n> > >>>> +        and/or applying pixel sub-sampling techniques such as pixel skipping or\n> > >>>> +        binning.\n> > >>>> +\n> > >>>> +                     PixelArraySize[0]\n> > >>>> +                    
/----------------/\n> > >>>> +                      x1          x2\n> > >>>> +            (0,0)-> +-o------------o-+  /\n> > >>>> +                 y1 o +------------+ |  |\n> > >>>> +                    | |////////////| |  |\n> > >>>> +                    | |////////////| |  | PixelArraySize[1]\n> > >>>> +                    | |////////////| |  |\n> > >>>> +                 y2 o +------------+ |  |\n> > >>>> +                    +----------------+  /\n> > >>>> +\n> > >>>> +        The property reports a single rectangle\n> > >>>> +\n> > >>>> +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)\n> > >>>> +\n> > >>>> +        Example 2\n> > >>>> +        A camera sensor which can produce images in different native\n> > >>>> +        resolutions will report several overlapping rectangles, one for each\n> > >>>> +        natively supported resolution.\n> > >>>> +\n> > >>>> +                     PixelArraySize[0]\n> > >>>> +                    /------------------/\n> > >>>> +                      x1  x2    x3  x4\n> > >>>> +            (0,0)-> +o---o------o---o+  /\n> > >>>> +                 y1 o    +------+    |  |\n> > >>>> +                    |    |//////|    |  |\n> > >>>> +                 y2 o+---+------+---+|  |\n> > >>>> +                    ||///|//////|///||  | PixelArraySize[1]\n> > >>>> +                 y3 o+---+------+---+|  |\n> > >>>> +                    |    |//////|    |  |\n> > >>>> +                 y4 o    +------+    |  |\n> > >>>> +                    +----+------+----+  /\n> > >>>> +\n> > >>>> +        The property reports two rectangles\n> > >>>> +\n> > >>>> +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),\n> > >>>> +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))\n> > >>>> +\n> > >>>> +        The first rectangle describes the maximum field-of-view of all image\n> > >>>> +        formats in the 4:3 resolutions, while the second one describes the\n> > >>>> +    
    maximum field of view for all image formats in the 16:9 resolutions.\n> > >\n> > > Should we add one paragraph here as a clarification:\n> > >\n> > > Multiple rectangles shall only be reported when the sensor can't capture\n> > > the pixels in the corner regions. If all the pixels in the (x1,y1) -\n> > > (x4,y4) area can be captured, the PixelArrayActiveAreas property shall\n> > > contain the single rectangle (x1,y1) - (x4,y4).\n> >\n> > Isn't this the example above ? Re-stating the same here seems\n> > confusing.\n>\n> What I meant to say is that a camera should not implement example 2 just\n> because the pipeline handler developer wants to expose different aspect\n> ratios, when the hardware actually implements example 1. For instance,\n> considering a sensor with a native resolution of 3840x2880 (4/3) that\n> can be captured fully, a 16/9 picture can be captured by cropping to\n> 3840x2160. In that case PixelArrayActiveAreas should only report\n> (0,0)/3840x2880, not both (0,0)/3840x2880 and (0,360)/3840x2160.\n>\n> But maybe I'm just worrying too much and nobody will think about doing\n> that :-)\n\nI ended up taking this part in in v7, as after reading it again I got\nthe same concern.\n\n\nThanks\n  j\n\n>\n> > > Or do you think it's overkill ?\n> > >\n> > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n> >\n> > Thanks, I'll take comments in and resend a v7\n> >\n> > >>>> +\n> > >>>> +        \\todo Rename this property to ActiveAreas once we will have property\n> > >>>> +              categories (i.e. 
Properties::PixelArray::ActiveAreas)\n> > >>>> +\n> > >>>>  ...\n>\n> --\n> Regards,\n>\n> Laurent Pinchart","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 99DF2BD878\n\tfor <parsemail@patchwork.libcamera.org>;\n\tFri, 31 Jul 2020 11:41:59 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 14B5061988;\n\tFri, 31 Jul 2020 13:41:59 +0200 (CEST)","from relay10.mail.gandi.net (relay10.mail.gandi.net\n\t[217.70.178.230])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id DB17E60396\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 31 Jul 2020 13:41:56 +0200 (CEST)","from uno.localdomain (93-34-118-233.ip49.fastwebnet.it\n\t[93.34.118.233]) (Authenticated sender: jacopo@jmondi.org)\n\tby relay10.mail.gandi.net (Postfix) with ESMTPSA id 50F13240003;\n\tFri, 31 Jul 2020 11:41:55 +0000 (UTC)"],"Date":"Fri, 31 Jul 2020 13:45:34 +0200","From":"Jacopo Mondi <jacopo@jmondi.org>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Message-ID":"<20200731114534.cmw5w3wlfmcgmrsr@uno.localdomain>","References":"<20200604153122.18074-1-jacopo@jmondi.org>\n\t<20200608102604.GO9947@valkosipuli.retiisi.org.uk>\n\t<20200608105509.jhh73dvm6sm4lxpj@uno.localdomain>\n\t<20200731021732.GN6107@pendragon.ideasonboard.com>\n\t<20200731074648.jzmvmhoupt22dqb6@uno.localdomain>\n\t<20200731105446.GA6218@pendragon.ideasonboard.com>","MIME-Version":"1.0","Content-Disposition":"inline","In-Reply-To":"<20200731105446.GA6218@pendragon.ideasonboard.com>","Subject":"Re: [libcamera-devel] [PATCH v6] libcamera: properties: Define\n\tpixel array 
properties","Cc":"figa@chromium.org, ricardo.ribalda@gmail.com,\n\tlibcamera-devel@lists.libcamera.org, Sakari Ailus <sakari.ailus@iki.fi>","Content-Type":"text/plain; charset=\"utf-8\""}}]