[{"id":24840,"web_url":"https://patchwork.libcamera.org/comment/24840/","msgid":"<c1cfc300-f2ce-2629-5e03-f29e64bdda3f@ideasonboard.com>","date":"2022-08-30T14:13:12","subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV rendering","submitter":{"id":86,"url":"https://patchwork.libcamera.org/api/people/86/","name":"Umang Jain","email":"umang.jain@ideasonboard.com"},"content":"Hi Laurent,\n\nOn 8/29/22 3:34 PM, Laurent Pinchart via libcamera-devel wrote:\n> Update the YUV shaders and the viewfinder_gl to correctly take the\n> Y'CbCr encoding and the quantization range into account when rendering\n> YUV formats to RGB. Support for the primaries and transfer function will\n> be added in a subsequent step.\n>\n> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n\nPatch looks good and straight forward for most parts, however few \nspecifics are still a bit unclear to me\n> ---\n>   src/qcam/assets/shader/YUV_2_planes.frag | 27 ++++----\n>   src/qcam/assets/shader/YUV_3_planes.frag | 23 ++++---\n>   src/qcam/assets/shader/YUV_packed.frag   | 17 ++---\n>   src/qcam/viewfinder_gl.cpp               | 79 +++++++++++++++++++++++-\n>   src/qcam/viewfinder_gl.h                 |  2 +\n>   5 files changed, 115 insertions(+), 33 deletions(-)\n>\n> diff --git a/src/qcam/assets/shader/YUV_2_planes.frag b/src/qcam/assets/shader/YUV_2_planes.frag\n> index 254463c05cac..da8dbcc5f801 100644\n> --- a/src/qcam/assets/shader/YUV_2_planes.frag\n> +++ b/src/qcam/assets/shader/YUV_2_planes.frag\n> @@ -13,27 +13,30 @@ varying vec2 textureOut;\n>   uniform sampler2D tex_y;\n>   uniform sampler2D tex_u;\n>   \n> +const mat3 yuv2rgb_matrix = mat3(\n> +\tYUV2RGB_MATRIX\n> +);\n> +\n> +const vec3 yuv2rgb_offset = vec3(\n> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n\nI understood the YUV2RGB_Y_OFFSET #define but don't understand where \nother values come from (or why they exist :D)\n\nMaybe I should start 
learning shaders programming ;-)\n\nReviewed-by: Umang Jain <umang.jain@ideasonboard.com>\n> +);\n> +\n>   void main(void)\n>   {\n>   \tvec3 yuv;\n> -\tvec3 rgb;\n> -\tmat3 yuv2rgb_bt601_mat = mat3(\n> -\t\tvec3(1.164,  1.164, 1.164),\n> -\t\tvec3(0.000, -0.392, 2.017),\n> -\t\tvec3(1.596, -0.813, 0.000)\n> -\t);\n>   \n> -\tyuv.x = texture2D(tex_y, textureOut).r - 0.063;\n> +\tyuv.x = texture2D(tex_y, textureOut).r;\n>   #if defined(YUV_PATTERN_UV)\n> -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> -\tyuv.z = texture2D(tex_u, textureOut).a - 0.500;\n> +\tyuv.y = texture2D(tex_u, textureOut).r;\n> +\tyuv.z = texture2D(tex_u, textureOut).a;\n>   #elif defined(YUV_PATTERN_VU)\n> -\tyuv.y = texture2D(tex_u, textureOut).a - 0.500;\n> -\tyuv.z = texture2D(tex_u, textureOut).r - 0.500;\n> +\tyuv.y = texture2D(tex_u, textureOut).a;\n> +\tyuv.z = texture2D(tex_u, textureOut).r;\n>   #else\n>   #error Invalid pattern\n>   #endif\n>   \n> -\trgb = yuv2rgb_bt601_mat * yuv;\n> +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n> +\n>   \tgl_FragColor = vec4(rgb, 1.0);\n>   }\n> diff --git a/src/qcam/assets/shader/YUV_3_planes.frag b/src/qcam/assets/shader/YUV_3_planes.frag\n> index 2be74b5d2a9d..e754129d74d1 100644\n> --- a/src/qcam/assets/shader/YUV_3_planes.frag\n> +++ b/src/qcam/assets/shader/YUV_3_planes.frag\n> @@ -14,20 +14,23 @@ uniform sampler2D tex_y;\n>   uniform sampler2D tex_u;\n>   uniform sampler2D tex_v;\n>   \n> +const mat3 yuv2rgb_matrix = mat3(\n> +\tYUV2RGB_MATRIX\n> +);\n> +\n> +const vec3 yuv2rgb_offset = vec3(\n> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> +);\n> +\n>   void main(void)\n>   {\n>   \tvec3 yuv;\n> -\tvec3 rgb;\n> -\tmat3 yuv2rgb_bt601_mat = mat3(\n> -\t\tvec3(1.164,  1.164, 1.164),\n> -\t\tvec3(0.000, -0.392, 2.017),\n> -\t\tvec3(1.596, -0.813, 0.000)\n> -\t);\n>   \n> -\tyuv.x = texture2D(tex_y, textureOut).r - 0.063;\n> -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> -\tyuv.z = texture2D(tex_v, 
textureOut).r - 0.500;\n> +\tyuv.x = texture2D(tex_y, textureOut).r;\n> +\tyuv.y = texture2D(tex_u, textureOut).r;\n> +\tyuv.z = texture2D(tex_v, textureOut).r;\n> +\n> +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n>   \n> -\trgb = yuv2rgb_bt601_mat * yuv;\n>   \tgl_FragColor = vec4(rgb, 1.0);\n>   }\n> diff --git a/src/qcam/assets/shader/YUV_packed.frag b/src/qcam/assets/shader/YUV_packed.frag\n> index d6efd4ce92a9..b9ef9d41beae 100644\n> --- a/src/qcam/assets/shader/YUV_packed.frag\n> +++ b/src/qcam/assets/shader/YUV_packed.frag\n> @@ -14,15 +14,16 @@ varying vec2 textureOut;\n>   uniform sampler2D tex_y;\n>   uniform vec2 tex_step;\n>   \n> +const mat3 yuv2rgb_matrix = mat3(\n> +\tYUV2RGB_MATRIX\n> +);\n> +\n> +const vec3 yuv2rgb_offset = vec3(\n> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> +);\n> +\n>   void main(void)\n>   {\n> -\tmat3 yuv2rgb_bt601_mat = mat3(\n> -\t\tvec3(1.164,  1.164, 1.164),\n> -\t\tvec3(0.000, -0.392, 2.017),\n> -\t\tvec3(1.596, -0.813, 0.000)\n> -\t);\n> -\tvec3 yuv2rgb_bt601_offset = vec3(0.063, 0.500, 0.500);\n> -\n>   \t/*\n>   \t * The sampler won't interpolate the texture correctly along the X axis,\n>   \t * as each RGBA pixel effectively stores two pixels. 
We thus need to\n> @@ -76,7 +77,7 @@ void main(void)\n>   \n>   \tfloat y = mix(y_left, y_right, step(0.5, f_x));\n>   \n> -\tvec3 rgb = yuv2rgb_bt601_mat * (vec3(y, uv) - yuv2rgb_bt601_offset);\n> +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n>   \n>   \tgl_FragColor = vec4(rgb, 1.0);\n>   }\n> diff --git a/src/qcam/viewfinder_gl.cpp b/src/qcam/viewfinder_gl.cpp\n> index ec295b6de0dd..e2aa24703ff0 100644\n> --- a/src/qcam/viewfinder_gl.cpp\n> +++ b/src/qcam/viewfinder_gl.cpp\n> @@ -7,9 +7,12 @@\n>   \n>   #include \"viewfinder_gl.h\"\n>   \n> +#include <array>\n> +\n>   #include <QByteArray>\n>   #include <QFile>\n>   #include <QImage>\n> +#include <QStringList>\n>   \n>   #include <libcamera/formats.h>\n>   \n> @@ -56,7 +59,8 @@ static const QList<libcamera::PixelFormat> supportedFormats{\n>   };\n>   \n>   ViewFinderGL::ViewFinderGL(QWidget *parent)\n> -\t: QOpenGLWidget(parent), buffer_(nullptr), image_(nullptr),\n> +\t: QOpenGLWidget(parent), buffer_(nullptr),\n> +\t  colorSpace_(libcamera::ColorSpace::Raw), image_(nullptr),\n>   \t  vertexBuffer_(QOpenGLBuffer::VertexBuffer)\n>   {\n>   }\n> @@ -72,10 +76,10 @@ const QList<libcamera::PixelFormat> &ViewFinderGL::nativeFormats() const\n>   }\n>   \n>   int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &size,\n> -\t\t\t    [[maybe_unused]] const libcamera::ColorSpace &colorSpace,\n> +\t\t\t    const libcamera::ColorSpace &colorSpace,\n>   \t\t\t    unsigned int stride)\n>   {\n> -\tif (format != format_) {\n> +\tif (format != format_ || colorSpace != colorSpace_) {\n>   \t\t/*\n>   \t\t * If the fragment already exists, remove it and create a new\n>   \t\t * one for the new format.\n> @@ -89,7 +93,10 @@ int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &s\n>   \t\tif (!selectFormat(format))\n>   \t\t\treturn -1;\n>   \n> +\t\tselectColorSpace(colorSpace);\n> +\n>   \t\tformat_ = format;\n> +\t\tcolorSpace_ = colorSpace;\n>   \t}\n>   \n>   
\tsize_ = size;\n> @@ -318,6 +325,72 @@ bool ViewFinderGL::selectFormat(const libcamera::PixelFormat &format)\n>   \treturn ret;\n>   }\n>   \n> +void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace &colorSpace)\n> +{\n> +\tstd::array<double, 9> yuv2rgb;\n> +\n> +\t/* OpenGL stores arrays in column-major order. */\n> +\tswitch (colorSpace.ycbcrEncoding) {\n> +\tcase libcamera::ColorSpace::YcbcrEncoding::None:\n> +\t\tyuv2rgb = {\n> +\t\t\t1.0000,  0.0000,  0.0000,\n> +\t\t\t0.0000,  1.0000,  0.0000,\n> +\t\t\t0.0000,  0.0000,  1.0000,\n> +\t\t};\n> +\t\tbreak;\n> +\n> +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec601:\n> +\t\tyuv2rgb = {\n> +\t\t\t1.0000,  1.0000,  1.0000,\n> +\t\t\t0.0000, -0.3441,  1.7720,\n> +\t\t\t1.4020, -0.7141,  0.0000,\n> +\t\t};\n> +\t\tbreak;\n> +\n> +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec709:\n> +\t\tyuv2rgb = {\n> +\t\t\t1.0000,  1.0000,  1.0000,\n> +\t\t\t0.0000, -0.1873,  1.8856,\n> +\t\t\t1.5748, -0.4681,  0.0000,\n> +\t\t};\n> +\t\tbreak;\n> +\n> +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec2020:\n> +\t\tyuv2rgb = {\n> +\t\t\t1.0000,  1.0000,  1.0000,\n> +\t\t\t0.0000, -0.1646,  1.8814,\n> +\t\t\t1.4746, -0.5714,  0.0000,\n> +\t\t};\n> +\t\tbreak;\n> +\t}\n> +\n> +\tdouble offset;\n> +\n> +\tswitch (colorSpace.range) {\n> +\tcase libcamera::ColorSpace::Range::Full:\n> +\t\toffset = 0.0;\n> +\t\tbreak;\n> +\n> +\tcase libcamera::ColorSpace::Range::Limited:\n> +\t\toffset = 16.0;\n> +\n> +\t\tfor (unsigned int i = 0; i < 3; ++i)\n> +\t\t\tyuv2rgb[i] *= 255.0 / 219.0;\n> +\t\tfor (unsigned int i = 4; i < 9; ++i)\n> +\t\t\tyuv2rgb[i] *= 255.0 / 224.0;\n> +\t\tbreak;\n> +\t}\n> +\n> +\tQStringList matrix;\n> +\n> +\tfor (double coeff : yuv2rgb)\n> +\t\tmatrix.append(QString::number(coeff, 'f'));\n> +\n> +\tfragmentShaderDefines_.append(\"#define YUV2RGB_MATRIX \" + matrix.join(\", \"));\n> +\tfragmentShaderDefines_.append(QString(\"#define YUV2RGB_Y_OFFSET %1\")\n> +\t\t.arg(offset, 0, 'f', 1));\n> +}\n> +\n>  
 bool ViewFinderGL::createVertexShader()\n>   {\n>   \t/* Create Vertex Shader */\n> diff --git a/src/qcam/viewfinder_gl.h b/src/qcam/viewfinder_gl.h\n> index 798830a31cd2..68c2912df12f 100644\n> --- a/src/qcam/viewfinder_gl.h\n> +++ b/src/qcam/viewfinder_gl.h\n> @@ -57,6 +57,7 @@ protected:\n>   \n>   private:\n>   \tbool selectFormat(const libcamera::PixelFormat &format);\n> +\tvoid selectColorSpace(const libcamera::ColorSpace &colorSpace);\n>   \n>   \tvoid configureTexture(QOpenGLTexture &texture);\n>   \tbool createFragmentShader();\n> @@ -67,6 +68,7 @@ private:\n>   \t/* Captured image size, format and buffer */\n>   \tlibcamera::FrameBuffer *buffer_;\n>   \tlibcamera::PixelFormat format_;\n> +\tlibcamera::ColorSpace colorSpace_;\n>   \tQSize size_;\n>   \tunsigned int stride_;\n>   \tImage *image_;","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 89D13C0DA4\n\tfor <parsemail@patchwork.libcamera.org>;\n\tTue, 30 Aug 2022 14:13:21 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id B7D8E61FBD;\n\tTue, 30 Aug 2022 16:13:20 +0200 (CEST)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 3324E61F9C\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue, 30 Aug 2022 16:13:19 +0200 (CEST)","from [IPV6:2401:4900:1f3f:1548:78ac:4a3:edc3:c28a] (unknown\n\t[IPv6:2401:4900:1f3f:1548:78ac:4a3:edc3:c28a])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id E5C43481;\n\tTue, 30 Aug 2022 16:13:17 +0200 (CEST)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\n\ts=mail; 
t=1661868800;\n\tbh=0NqEIQghJvJZlwW+uXqhtKgVVeWwzstXOiGWGVX8n68=;\n\th=Date:To:References:In-Reply-To:Subject:List-Id:List-Unsubscribe:\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:\n\tFrom;\n\tb=tXb6nsakLaRx4HIwarMEUSc8o1OeGy766ArIh2FU4u2vq1w/cWzx54o40F58i4ci3\n\tLncwvmVxr2iFipdZN/hYrwptD1dGobmbamb5Ac2TuDXalqXc+qbg2H874TpZlJtflK\n\tZJY88WvgqKHiO0jv6m/C8F4tYQEvxnvbilVSXJ+9CgSkodYjL3TIWca6SrlfK3F/7d\n\tyAz+Ozp4tjGeqIzvSEwX5zwMtTzyPkAHzd9FpEPiURlqqkqsOli29RF4unRyTzIkJI\n\tlJ/FVSCB165CdR/SWMikZT/nJV/MHsyvrQFjgDz8M/JNTkxZykYsnT54cuLjtrQLS8\n\t6UhfcwWUPtL6A==","v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1661868798;\n\tbh=0NqEIQghJvJZlwW+uXqhtKgVVeWwzstXOiGWGVX8n68=;\n\th=Date:Subject:To:References:From:In-Reply-To:From;\n\tb=AVcFzsBxt1TYfLATTScycA+sN3vCgeBYR/Dr+BaR9zzN16Gs/x+F8MOII6WavTS0U\n\tpTNX9UuWX8txjN4DAWQS06RGbeh552jqWzai9bGijhk+ODzl1AC4SlNqVwon1VXAoG\n\tjshcTNKqXpKO3pL02qJIlGWflSlcagOLRDD05X6U="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"AVcFzsBx\"; dkim-atps=neutral","Message-ID":"<c1cfc300-f2ce-2629-5e03-f29e64bdda3f@ideasonboard.com>","Date":"Tue, 30 Aug 2022 19:43:12 +0530","MIME-Version":"1.0","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101\n\tThunderbird/91.12.0","Content-Language":"en-US","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>,\n\tlibcamera-devel@lists.libcamera.org","References":"<20220829100414.28404-1-laurent.pinchart@ideasonboard.com>\n\t<20220829100414.28404-4-laurent.pinchart@ideasonboard.com>","In-Reply-To":"<20220829100414.28404-4-laurent.pinchart@ideasonboard.com>","Content-Type":"text/plain; charset=UTF-8; format=flowed","Content-Transfer-Encoding":"7bit","Subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV 
rendering","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Umang Jain via libcamera-devel <libcamera-devel@lists.libcamera.org>","Reply-To":"Umang Jain <umang.jain@ideasonboard.com>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":24841,"web_url":"https://patchwork.libcamera.org/comment/24841/","msgid":"<Yw5Ic6XTnQqP28gW@pendragon.ideasonboard.com>","date":"2022-08-30T17:27:15","subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV rendering","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Umang,\n\nOn Tue, Aug 30, 2022 at 07:43:12PM +0530, Umang Jain wrote:\n> On 8/29/22 3:34 PM, Laurent Pinchart via libcamera-devel wrote:\n> > Update the YUV shaders and the viewfinder_gl to correctly take the\n> > Y'CbCr encoding and the quantization range into account when rendering\n> > YUV formats to RGB. 
Support for the primaries and transfer function will\n> > be added in a subsequent step.\n> >\n> > Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n> \n> Patch looks good and straight forward for most parts, however few \n> specifics are still a bit unclear to me\n> \n> > ---\n> >   src/qcam/assets/shader/YUV_2_planes.frag | 27 ++++----\n> >   src/qcam/assets/shader/YUV_3_planes.frag | 23 ++++---\n> >   src/qcam/assets/shader/YUV_packed.frag   | 17 ++---\n> >   src/qcam/viewfinder_gl.cpp               | 79 +++++++++++++++++++++++-\n> >   src/qcam/viewfinder_gl.h                 |  2 +\n> >   5 files changed, 115 insertions(+), 33 deletions(-)\n> >\n> > diff --git a/src/qcam/assets/shader/YUV_2_planes.frag b/src/qcam/assets/shader/YUV_2_planes.frag\n> > index 254463c05cac..da8dbcc5f801 100644\n> > --- a/src/qcam/assets/shader/YUV_2_planes.frag\n> > +++ b/src/qcam/assets/shader/YUV_2_planes.frag\n> > @@ -13,27 +13,30 @@ varying vec2 textureOut;\n> >   uniform sampler2D tex_y;\n> >   uniform sampler2D tex_u;\n> >   \n> > +const mat3 yuv2rgb_matrix = mat3(\n> > +\tYUV2RGB_MATRIX\n> > +);\n> > +\n> > +const vec3 yuv2rgb_offset = vec3(\n> > +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> \n> I understood the YUV2RGB_Y_OFFSET #define but don't understand where \n> other values come from (or why they exist :D)\n\nThe quantization of the Cb and Cr values in all relevant color spaces\n(ITU-R BT.601, BT.709, BT.2020, ...) add an offset of 128 (for 8-bit\nvalues). For instance, in BT.709, we have\n\nD'Cb = INT[(224*E'Cb + 128)*2^(n-8)]\n\nwhere D'Cb is the Cb signal after quantization, E'Cb the Cb signal\nbefore quantization (in the [-0.5, 0.5] range), and n the number of\nbits). INT[] denotes rounding to the closest integer.\n\nThe 224 multiplier creates a limited quantization range, following the\nabove formula, -0.5 will be quantized to INT[224 * -0.5 + 128] = 16, and\n0.5 to INT[224 * 0.5 + 128] = 240. 
The values are then stored as 8-bit\nunsigned integers in memory.\n\nFor full range quantization, the same applies, with a multiplier equal\nto 255 instead of 224. [-0.5, 0.5] is thus mapped to [0, 255].\n\nWe need to apply the reverse quantization on D'Y, D'Cb and D'Cr in order\nto get the original E'Y, E'Cb and E'Cr values (in the [0.0, 1.0] and\n[-0.5, 0.5] ranges respectively for E'Y and E'C[br]). Starting with full\nrange, given\n\nD'Cb = INT[(255*E'Cb + 128)] (for 8-bit data)\n\nthe inverse is given by\n\nE'Cb = (D'Cb - 128) / 255\n\nor\n\nE'Cb = D'Cb / 255 - 128 / 255\n\nOpenGL, when reading texture data through a floating point texture\nsampler (which we do in the shader by calling texture2D on a sampler2D\nvariable), normalizes the values stored in memory ([0, 255]) to the\n[0.0, 1.0] range. This means that the D'Cb value is already divided by\n255 by the GPU. We only need to subtract 128 / 255 to get the original\nE'Cb value.\n\nIn the limited quantization range case, we have\n\nD'Cb = INT[(224*E'Cb + 128)] (for 8-bit data)\n\nthe inverse is given by\n\nE'Cb = (D'Cb - 128) / 224\n\nLet's introduce the 255 factor:\n\nE'Cb = (D'Cb - 128) / 255 * 255 / 224\n\nwhich can also be written as\n\nE'Cb = (D'Cb / 255 - 128 / 255) * 255 / 224\n\nWe thus have\n\nE'Cb(lim) = E'Cb(full) * 255 / 224\n\nThe shader doesn't include the 255 / 224 multiplier directly, it gets\nincluded by the C++ code in the yuv2rgb matrix, and there's no need for\na different offset between the limited and full range quantization.\n\nI hope this helps clarify the implementation.\n\n> Maybe I should start learning shaders programming ;-)\n> \n> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>\n> \n> > +);\n> > +\n> >   void main(void)\n> >   {\n> >   \tvec3 yuv;\n> > -\tvec3 rgb;\n> > -\tmat3 yuv2rgb_bt601_mat = mat3(\n> > -\t\tvec3(1.164,  1.164, 1.164),\n> > -\t\tvec3(0.000, -0.392, 2.017),\n> > -\t\tvec3(1.596, -0.813, 0.000)\n> > -\t);\n> >   \n> > -\tyuv.x = texture2D(tex_y, 
textureOut).r - 0.063;\n> > +\tyuv.x = texture2D(tex_y, textureOut).r;\n> >   #if defined(YUV_PATTERN_UV)\n> > -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> > -\tyuv.z = texture2D(tex_u, textureOut).a - 0.500;\n> > +\tyuv.y = texture2D(tex_u, textureOut).r;\n> > +\tyuv.z = texture2D(tex_u, textureOut).a;\n> >   #elif defined(YUV_PATTERN_VU)\n> > -\tyuv.y = texture2D(tex_u, textureOut).a - 0.500;\n> > -\tyuv.z = texture2D(tex_u, textureOut).r - 0.500;\n> > +\tyuv.y = texture2D(tex_u, textureOut).a;\n> > +\tyuv.z = texture2D(tex_u, textureOut).r;\n> >   #else\n> >   #error Invalid pattern\n> >   #endif\n> >   \n> > -\trgb = yuv2rgb_bt601_mat * yuv;\n> > +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n> > +\n> >   \tgl_FragColor = vec4(rgb, 1.0);\n> >   }\n> > diff --git a/src/qcam/assets/shader/YUV_3_planes.frag b/src/qcam/assets/shader/YUV_3_planes.frag\n> > index 2be74b5d2a9d..e754129d74d1 100644\n> > --- a/src/qcam/assets/shader/YUV_3_planes.frag\n> > +++ b/src/qcam/assets/shader/YUV_3_planes.frag\n> > @@ -14,20 +14,23 @@ uniform sampler2D tex_y;\n> >   uniform sampler2D tex_u;\n> >   uniform sampler2D tex_v;\n> >   \n> > +const mat3 yuv2rgb_matrix = mat3(\n> > +\tYUV2RGB_MATRIX\n> > +);\n> > +\n> > +const vec3 yuv2rgb_offset = vec3(\n> > +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> > +);\n> > +\n> >   void main(void)\n> >   {\n> >   \tvec3 yuv;\n> > -\tvec3 rgb;\n> > -\tmat3 yuv2rgb_bt601_mat = mat3(\n> > -\t\tvec3(1.164,  1.164, 1.164),\n> > -\t\tvec3(0.000, -0.392, 2.017),\n> > -\t\tvec3(1.596, -0.813, 0.000)\n> > -\t);\n> >   \n> > -\tyuv.x = texture2D(tex_y, textureOut).r - 0.063;\n> > -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> > -\tyuv.z = texture2D(tex_v, textureOut).r - 0.500;\n> > +\tyuv.x = texture2D(tex_y, textureOut).r;\n> > +\tyuv.y = texture2D(tex_u, textureOut).r;\n> > +\tyuv.z = texture2D(tex_v, textureOut).r;\n> > +\n> > +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n> >   \n> > 
-\trgb = yuv2rgb_bt601_mat * yuv;\n> >   \tgl_FragColor = vec4(rgb, 1.0);\n> >   }\n> > diff --git a/src/qcam/assets/shader/YUV_packed.frag b/src/qcam/assets/shader/YUV_packed.frag\n> > index d6efd4ce92a9..b9ef9d41beae 100644\n> > --- a/src/qcam/assets/shader/YUV_packed.frag\n> > +++ b/src/qcam/assets/shader/YUV_packed.frag\n> > @@ -14,15 +14,16 @@ varying vec2 textureOut;\n> >   uniform sampler2D tex_y;\n> >   uniform vec2 tex_step;\n> >   \n> > +const mat3 yuv2rgb_matrix = mat3(\n> > +\tYUV2RGB_MATRIX\n> > +);\n> > +\n> > +const vec3 yuv2rgb_offset = vec3(\n> > +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> > +);\n> > +\n> >   void main(void)\n> >   {\n> > -\tmat3 yuv2rgb_bt601_mat = mat3(\n> > -\t\tvec3(1.164,  1.164, 1.164),\n> > -\t\tvec3(0.000, -0.392, 2.017),\n> > -\t\tvec3(1.596, -0.813, 0.000)\n> > -\t);\n> > -\tvec3 yuv2rgb_bt601_offset = vec3(0.063, 0.500, 0.500);\n> > -\n> >   \t/*\n> >   \t * The sampler won't interpolate the texture correctly along the X axis,\n> >   \t * as each RGBA pixel effectively stores two pixels. 
We thus need to\n> > @@ -76,7 +77,7 @@ void main(void)\n> >   \n> >   \tfloat y = mix(y_left, y_right, step(0.5, f_x));\n> >   \n> > -\tvec3 rgb = yuv2rgb_bt601_mat * (vec3(y, uv) - yuv2rgb_bt601_offset);\n> > +\tvec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n> >   \n> >   \tgl_FragColor = vec4(rgb, 1.0);\n> >   }\n> > diff --git a/src/qcam/viewfinder_gl.cpp b/src/qcam/viewfinder_gl.cpp\n> > index ec295b6de0dd..e2aa24703ff0 100644\n> > --- a/src/qcam/viewfinder_gl.cpp\n> > +++ b/src/qcam/viewfinder_gl.cpp\n> > @@ -7,9 +7,12 @@\n> >   \n> >   #include \"viewfinder_gl.h\"\n> >   \n> > +#include <array>\n> > +\n> >   #include <QByteArray>\n> >   #include <QFile>\n> >   #include <QImage>\n> > +#include <QStringList>\n> >   \n> >   #include <libcamera/formats.h>\n> >   \n> > @@ -56,7 +59,8 @@ static const QList<libcamera::PixelFormat> supportedFormats{\n> >   };\n> >   \n> >   ViewFinderGL::ViewFinderGL(QWidget *parent)\n> > -\t: QOpenGLWidget(parent), buffer_(nullptr), image_(nullptr),\n> > +\t: QOpenGLWidget(parent), buffer_(nullptr),\n> > +\t  colorSpace_(libcamera::ColorSpace::Raw), image_(nullptr),\n> >   \t  vertexBuffer_(QOpenGLBuffer::VertexBuffer)\n> >   {\n> >   }\n> > @@ -72,10 +76,10 @@ const QList<libcamera::PixelFormat> &ViewFinderGL::nativeFormats() const\n> >   }\n> >   \n> >   int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &size,\n> > -\t\t\t    [[maybe_unused]] const libcamera::ColorSpace &colorSpace,\n> > +\t\t\t    const libcamera::ColorSpace &colorSpace,\n> >   \t\t\t    unsigned int stride)\n> >   {\n> > -\tif (format != format_) {\n> > +\tif (format != format_ || colorSpace != colorSpace_) {\n> >   \t\t/*\n> >   \t\t * If the fragment already exists, remove it and create a new\n> >   \t\t * one for the new format.\n> > @@ -89,7 +93,10 @@ int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &s\n> >   \t\tif (!selectFormat(format))\n> >   \t\t\treturn -1;\n> >   \n> > 
+\t\tselectColorSpace(colorSpace);\n> > +\n> >   \t\tformat_ = format;\n> > +\t\tcolorSpace_ = colorSpace;\n> >   \t}\n> >   \n> >   \tsize_ = size;\n> > @@ -318,6 +325,72 @@ bool ViewFinderGL::selectFormat(const libcamera::PixelFormat &format)\n> >   \treturn ret;\n> >   }\n> >   \n> > +void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace &colorSpace)\n> > +{\n> > +\tstd::array<double, 9> yuv2rgb;\n> > +\n> > +\t/* OpenGL stores arrays in column-major order. */\n> > +\tswitch (colorSpace.ycbcrEncoding) {\n> > +\tcase libcamera::ColorSpace::YcbcrEncoding::None:\n> > +\t\tyuv2rgb = {\n> > +\t\t\t1.0000,  0.0000,  0.0000,\n> > +\t\t\t0.0000,  1.0000,  0.0000,\n> > +\t\t\t0.0000,  0.0000,  1.0000,\n> > +\t\t};\n> > +\t\tbreak;\n> > +\n> > +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec601:\n> > +\t\tyuv2rgb = {\n> > +\t\t\t1.0000,  1.0000,  1.0000,\n> > +\t\t\t0.0000, -0.3441,  1.7720,\n> > +\t\t\t1.4020, -0.7141,  0.0000,\n> > +\t\t};\n> > +\t\tbreak;\n> > +\n> > +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec709:\n> > +\t\tyuv2rgb = {\n> > +\t\t\t1.0000,  1.0000,  1.0000,\n> > +\t\t\t0.0000, -0.1873,  1.8856,\n> > +\t\t\t1.5748, -0.4681,  0.0000,\n> > +\t\t};\n> > +\t\tbreak;\n> > +\n> > +\tcase libcamera::ColorSpace::YcbcrEncoding::Rec2020:\n> > +\t\tyuv2rgb = {\n> > +\t\t\t1.0000,  1.0000,  1.0000,\n> > +\t\t\t0.0000, -0.1646,  1.8814,\n> > +\t\t\t1.4746, -0.5714,  0.0000,\n> > +\t\t};\n> > +\t\tbreak;\n> > +\t}\n> > +\n> > +\tdouble offset;\n> > +\n> > +\tswitch (colorSpace.range) {\n> > +\tcase libcamera::ColorSpace::Range::Full:\n> > +\t\toffset = 0.0;\n> > +\t\tbreak;\n> > +\n> > +\tcase libcamera::ColorSpace::Range::Limited:\n> > +\t\toffset = 16.0;\n> > +\n> > +\t\tfor (unsigned int i = 0; i < 3; ++i)\n> > +\t\t\tyuv2rgb[i] *= 255.0 / 219.0;\n> > +\t\tfor (unsigned int i = 4; i < 9; ++i)\n> > +\t\t\tyuv2rgb[i] *= 255.0 / 224.0;\n> > +\t\tbreak;\n> > +\t}\n> > +\n> > +\tQStringList matrix;\n> > +\n> > +\tfor (double coeff : yuv2rgb)\n> > 
+\t\tmatrix.append(QString::number(coeff, 'f'));\n> > +\n> > +\tfragmentShaderDefines_.append(\"#define YUV2RGB_MATRIX \" + matrix.join(\", \"));\n> > +\tfragmentShaderDefines_.append(QString(\"#define YUV2RGB_Y_OFFSET %1\")\n> > +\t\t.arg(offset, 0, 'f', 1));\n> > +}\n> > +\n> >   bool ViewFinderGL::createVertexShader()\n> >   {\n> >   \t/* Create Vertex Shader */\n> > diff --git a/src/qcam/viewfinder_gl.h b/src/qcam/viewfinder_gl.h\n> > index 798830a31cd2..68c2912df12f 100644\n> > --- a/src/qcam/viewfinder_gl.h\n> > +++ b/src/qcam/viewfinder_gl.h\n> > @@ -57,6 +57,7 @@ protected:\n> >   \n> >   private:\n> >   \tbool selectFormat(const libcamera::PixelFormat &format);\n> > +\tvoid selectColorSpace(const libcamera::ColorSpace &colorSpace);\n> >   \n> >   \tvoid configureTexture(QOpenGLTexture &texture);\n> >   \tbool createFragmentShader();\n> > @@ -67,6 +68,7 @@ private:\n> >   \t/* Captured image size, format and buffer */\n> >   \tlibcamera::FrameBuffer *buffer_;\n> >   \tlibcamera::PixelFormat format_;\n> > +\tlibcamera::ColorSpace colorSpace_;\n> >   \tQSize size_;\n> >   \tunsigned int stride_;\n> >   \tImage *image_;","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 20090C0DA4\n\tfor <parsemail@patchwork.libcamera.org>;\n\tTue, 30 Aug 2022 17:27:28 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 8AAB961F9E;\n\tTue, 30 Aug 2022 19:27:27 +0200 (CEST)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 1FEC861F9C\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue, 30 Aug 2022 19:27:26 +0200 (CEST)","from 
pendragon.ideasonboard.com (62-78-145-57.bb.dnainternet.fi\n\t[62.78.145.57])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 5F8484A8;\n\tTue, 30 Aug 2022 19:27:25 +0200 (CEST)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\n\ts=mail; t=1661880447;\n\tbh=6Jxe7z8zUDVxpk2y8mdiytHqbTly5PTfi7epMQpNcW4=;\n\th=Date:To:References:In-Reply-To:Subject:List-Id:List-Unsubscribe:\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:Cc:\n\tFrom;\n\tb=h8muUg4r1eah8Np8+VoEbt3bfRwsuAttVW2CAD2ftxWGJr90RB+Fmz+gjjcHiFK7D\n\tJfmSCLrRH6bEzPSc9GzGTHFNOx2C0Fioa3zOvUPVMuSBQWCyj/QQB8ZmAdJu4YGcks\n\t4pnTsyYIPfCM2LwzBmNsAhtvHq7jR6QS3AYlo64ytETBeeN0mzzJPzF1xtDk3l186g\n\tfx/am8vQTRspsFbNnHNJkILls2Y1IM09RjFOg/jO5BIiNgCGI6Ki0YPdc2sloQDiLP\n\tkCmflMy0klF6+LxzFW/sVetrf2jZ1WUsXdSMAqi7LeXY3hxs+lcHOv5nEb+upkNs5H\n\t026EC94L6l+ZA==","v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1661880445;\n\tbh=6Jxe7z8zUDVxpk2y8mdiytHqbTly5PTfi7epMQpNcW4=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=ljxi/SoHJxkgK2FxuEj+Dhj26SoG61H9xwBGkzSWKMhK56r8E1UIvRW9vvAoygnCf\n\tVK9rPezx7OTMjCwapcziDPDdvC9kJGRHweY2IqYM91QOZHu1JbfOeXWP57M2DhwkOt\n\t1EG/3blObO8vd3s+06l0bk/BK2AAXg8AHVRwxuGY="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"ljxi/SoH\"; dkim-atps=neutral","Date":"Tue, 30 Aug 2022 20:27:15 +0300","To":"Umang Jain <umang.jain@ideasonboard.com>","Message-ID":"<Yw5Ic6XTnQqP28gW@pendragon.ideasonboard.com>","References":"<20220829100414.28404-1-laurent.pinchart@ideasonboard.com>\n\t<20220829100414.28404-4-laurent.pinchart@ideasonboard.com>\n\t<c1cfc300-f2ce-2629-5e03-f29e64bdda3f@ideasonboard.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<c1cfc300-f2ce-2629-5e03-f29e64bdda3f@ideasonboard.com>","Subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: 
viewfinder_gl: Take color\n\tspace into account for YUV rendering","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Laurent Pinchart via libcamera-devel\n\t<libcamera-devel@lists.libcamera.org>","Reply-To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":24858,"web_url":"https://patchwork.libcamera.org/comment/24858/","msgid":"<CAJP1LGaBxGoK-Kb4ABOJkEsT+NsTF6oHzzBaT-zAzJUWfFdA8w@mail.gmail.com>","date":"2022-08-31T10:05:25","subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV rendering","submitter":{"id":116,"url":"https://patchwork.libcamera.org/api/people/116/","name":"Kunal Agarwal","email":"kunalagarwal1072002@gmail.com"},"content":"Hi Laurent and Umang,\n\n> Hi Umang,\n> The quantization of the Cb and Cr values in all relevant color spaces\n> (ITU-R BT.601, BT.709, BT.2020, ...) add an offset of 128 (for 8-bit\n> values). For instance, in BT.709, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)*2^(n-8)]\n>\n> where D'Cb is the Cb signal after quantization, E'Cb the Cb signal\n> before quantization (in the [-0.5, 0.5] range), and n the number of\n> bits). 
INT[] denotes rounding to the closest integer.\n>\n> The 224 multiplier creates a limited quantization range, following the\n> above formula, -0.5 will be quantized to INT[224 * -0.5 + 128] = 16, and\n> 0.5 to INT[224 * 0.5 + 128] = 240. The values are then stored as 8-bit\n> unsigned integers in memory.\n>\n> For full range quantization, the same applies, with a multiplier equal\n> to 255 instead of 224. [-0.5, 0.5] is thus mapped to [0, 255].\n>\n> We need to apply the reverse quantization on D'Y, D'Cb and D'Cr in order\n> to get the original E'Y, E'Cb and E'Cr values (in the [0.0, 1.0] and\n> [-0.5, 0.5] ranges respectively for E'Y and E'C[br]. Starting with full\n> range, given\n>\n> D'Cb = INT[(255*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 255\n>\n> or\n>\n> E'Cb = D'Cb / 255 - 128 / 255\n>\n> OpenGL, when reading texture data through a floating point texture\n> sampler (which we do in the shader by calling texture2D on a sampler2D\n> variable), normalizes the values stored in memory ([0, 255]) to the\n> [0.0, 1.0] range. This means that the D'Cb value is already divided by\n> 255 by the GPU. 
We only need to subtract 128 / 255 to get the original\n> E'Cb value.\n>\n> In the limited quantization range case, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 224\n>\n> Let's introduce the 255 factor:\n>\n> E'Cb = (D'Cb - 128) / 255 * 255 / 224\n>\n> which can also be written as\n>\n> E'Cb = (D'Cb / 255 - 128 / 255) * 255 / 224\n>\n> We thus have\n>\n> E'Cb(lim) = E'Cb(full) * 255 / 224\n>\n> The shader doesn't include the 255 / 224 multiplier directly; it gets\n> included by the C++ code in the yuv2rgb matrix, and there's no need for\n> a different offset between the limited and full range quantization.\n>\n> I hope this helps clarify the implementation.\n> --\n> Regards,\n>\n> Laurent Pinchart\n\nI had gone through this conversion in multiple resources.\nThe implementation looks correct.\n\nReviewed-by: Kunal Agarwal <kunalagarwal1072002@gmail.com>\n\nRegards,\n\nKunal Agarwal\n\n\nOn Tue, Aug 30, 2022 at 10:57 PM Laurent Pinchart via libcamera-devel <\nlibcamera-devel@lists.libcamera.org> wrote:\n\n> Hi Umang,\n>\n> On Tue, Aug 30, 2022 at 07:43:12PM +0530, Umang Jain wrote:\n> > On 8/29/22 3:34 PM, Laurent Pinchart via libcamera-devel wrote:\n> > > Update the YUV shaders and the viewfinder_gl to correctly take the\n> > > Y'CbCr encoding and the quantization range into account when rendering\n> > > YUV formats to RGB. 
Support for the primaries and transfer function\n> will\n> > > be added in a subsequent step.\n> > >\n> > > Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n> >\n> > Patch looks good and straight forward for most parts, however few\n> > specifics are still a bit unclear to me\n> >\n> > > ---\n> > >   src/qcam/assets/shader/YUV_2_planes.frag | 27 ++++----\n> > >   src/qcam/assets/shader/YUV_3_planes.frag | 23 ++++---\n> > >   src/qcam/assets/shader/YUV_packed.frag   | 17 ++---\n> > >   src/qcam/viewfinder_gl.cpp               | 79\n> +++++++++++++++++++++++-\n> > >   src/qcam/viewfinder_gl.h                 |  2 +\n> > >   5 files changed, 115 insertions(+), 33 deletions(-)\n> > >\n> > > diff --git a/src/qcam/assets/shader/YUV_2_planes.frag\n> b/src/qcam/assets/shader/YUV_2_planes.frag\n> > > index 254463c05cac..da8dbcc5f801 100644\n> > > --- a/src/qcam/assets/shader/YUV_2_planes.frag\n> > > +++ b/src/qcam/assets/shader/YUV_2_planes.frag\n> > > @@ -13,27 +13,30 @@ varying vec2 textureOut;\n> > >   uniform sampler2D tex_y;\n> > >   uniform sampler2D tex_u;\n> > >\n> > > +const mat3 yuv2rgb_matrix = mat3(\n> > > +   YUV2RGB_MATRIX\n> > > +);\n> > > +\n> > > +const vec3 yuv2rgb_offset = vec3(\n> > > +   YUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> >\n> > I understood the YUV2RGB_Y_OFFSET #define but don't understand where\n> > other values come from (or why they exist :D)\n>\n> The quantization of the Cb and Cr values in all relevant color spaces\n> (ITU-R BT.601, BT.709, BT.2020, ...) add an offset of 128 (for 8-bit\n> values). For instance, in BT.709, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)*2^(n-8)]\n>\n> where D'Cb is the Cb signal after quantization, E'Cb the Cb signal\n> before quantization (in the [-0.5, 0.5] range), and n the number of\n> bits). 
INT[] denotes rounding to the closest integer.\n>\n> The 224 multiplier creates a limited quantization range; following the\n> above formula, -0.5 will be quantized to INT[224 * -0.5 + 128] = 16, and\n> 0.5 to INT[224 * 0.5 + 128] = 240. The values are then stored as 8-bit\n> unsigned integers in memory.\n>\n> For full range quantization, the same applies, with a multiplier equal\n> to 255 instead of 224. [-0.5, 0.5] is thus mapped to [0, 255].\n>\n> We need to apply the reverse quantization on D'Y, D'Cb and D'Cr in order\n> to get the original E'Y, E'Cb and E'Cr values (in the [0.0, 1.0] and\n> [-0.5, 0.5] ranges respectively for E'Y and E'C[br]). Starting with full\n> range, given\n>\n> D'Cb = INT[(255*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 255\n>\n> or\n>\n> E'Cb = D'Cb / 255 - 128 / 255\n>\n> OpenGL, when reading texture data through a floating point texture\n> sampler (which we do in the shader by calling texture2D on a sampler2D\n> variable), normalizes the values stored in memory ([0, 255]) to the\n> [0.0, 1.0] range. This means that the D'Cb value is already divided by\n> 255 by the GPU. 
We only need to subtract 128 / 255 to get the original\n> E'Cb value.\n>\n> In the limited quantization range case, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 224\n>\n> Let's introduce the 255 factor:\n>\n> E'Cb = (D'Cb - 128) / 255 * 255 / 224\n>\n> which can also be written as\n>\n> E'Cb = (D'Cb / 255 - 128 / 255) * 255 / 224\n>\n> We thus have\n>\n> E'Cb(lim) = E'Cb(full) * 255 / 224\n>\n> The shader doesn't include the 255 / 224 multiplier directly; it gets\n> included by the C++ code in the yuv2rgb matrix, and there's no need for\n> a different offset between the limited and full range quantization.\n>\n> I hope this helps clarify the implementation.\n>\n> > Maybe I should start learning shader programming ;-)\n> >\n> > Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>\n> >\n> > > +);\n> > > +\n> > >   void main(void)\n> > >   {\n> > >     vec3 yuv;\n> > > -   vec3 rgb;\n> > > -   mat3 yuv2rgb_bt601_mat = mat3(\n> > > -           vec3(1.164,  1.164, 1.164),\n> > > -           vec3(0.000, -0.392, 2.017),\n> > > -           vec3(1.596, -0.813, 0.000)\n> > > -   );\n> > >\n> > > -   yuv.x = texture2D(tex_y, textureOut).r - 0.063;\n> > > +   yuv.x = texture2D(tex_y, textureOut).r;\n> > >   #if defined(YUV_PATTERN_UV)\n> > > -   yuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> > > -   yuv.z = texture2D(tex_u, textureOut).a - 0.500;\n> > > +   yuv.y = texture2D(tex_u, textureOut).r;\n> > > +   yuv.z = texture2D(tex_u, textureOut).a;\n> > >   #elif defined(YUV_PATTERN_VU)\n> > > -   yuv.y = texture2D(tex_u, textureOut).a - 0.500;\n> > > -   yuv.z = texture2D(tex_u, textureOut).r - 0.500;\n> > > +   yuv.y = texture2D(tex_u, textureOut).a;\n> > > +   yuv.z = texture2D(tex_u, textureOut).r;\n> > >   #else\n> > >   #error Invalid pattern\n> > >   #endif\n> > >\n> > > -   rgb = yuv2rgb_bt601_mat * yuv;\n> > > +   vec3 rgb = yuv2rgb_matrix * (yuv - yuv2rgb_offset);\n> > > +\n> > >  
   gl_FragColor = vec4(rgb, 1.0);\n> > >   }\n> > > diff --git a/src/qcam/assets/shader/YUV_3_planes.frag\n> b/src/qcam/assets/shader/YUV_3_planes.frag\n> > > index 2be74b5d2a9d..e754129d74d1 100644\n> > > --- a/src/qcam/assets/shader/YUV_3_planes.frag\n> > > +++ b/src/qcam/assets/shader/YUV_3_planes.frag\n> > > @@ -14,20 +14,23 @@ uniform sampler2D tex_y;\n> > >   uniform sampler2D tex_u;\n> > >   uniform sampler2D tex_v;\n> > >\n> > > +const mat3 yuv2rgb_matrix = mat3(\n> > > +   YUV2RGB_MATRIX\n> > > +);\n> > > +\n> > > +const vec3 yuv2rgb_offset = vec3(\n> > > +   YUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n> > > +);\n> > > +\n> > >   void main(void)\n> > >   {\n> > >     vec3 yuv;\n> > > -   vec3 rgb;\n> > > -   mat3 yuv2rgb_bt601_mat = mat3(\n> > > -           vec3(1.164,  1.164, 1.164),\n> > > -           vec3(0.000, -0.392, 2.017),\n> > > -           vec3(1.596, -0.813, 0.000)\n> > > -   );\n> > >\n> > > -   yuv.x = texture2D(tex_y, textureOut).r - 0.063;\n> > > -   yuv.y = texture2D(tex_u, textureOut).r - 0.500;\n> > > -   yuv.z = texture2D(tex_v, textureOut).r - 0.500;\n> > > +   yuv.x = texture2D(tex_y, textureOut).r;\n> > > +   yuv.y = texture2D(tex_u, textureOut).r;\n> > > +   yuv.z = texture2D(tex_v, textureOut).r;\n> > > +\n> > > +   vec3 rgb = yuv2rgb_matrix * (yuv - yuv2rgb_offset);\n> > >\n> > > -   rgb = yuv2rgb_bt601_mat * yuv;\n> > >     gl_FragColor = vec4(rgb, 1.0);\n> > >   }\n> > > diff --git a/src/qcam/assets/shader/YUV_packed.frag\n> b/src/qcam/assets/shader/YUV_packed.frag\n> > > index d6efd4ce92a9..b9ef9d41beae 100644\n> > > --- a/src/qcam/assets/shader/YUV_packed.frag\n> > > +++ b/src/qcam/assets/shader/YUV_packed.frag\n> > > @@ -14,15 +14,16 @@ varying vec2 textureOut;\n> > >   uniform sampler2D tex_y;\n> > >   uniform vec2 tex_step;\n> > >\n> > > +const mat3 yuv2rgb_matrix = mat3(\n> > > +   YUV2RGB_MATRIX\n> > > +);\n> > > +\n> > > +const vec3 yuv2rgb_offset = vec3(\n> > > +   YUV2RGB_Y_OFFSET / 255.0, 128.0 / 
255.0, 128.0 / 255.0\n> > > +);\n> > > +\n> > >   void main(void)\n> > >   {\n> > > -   mat3 yuv2rgb_bt601_mat = mat3(\n> > > -           vec3(1.164,  1.164, 1.164),\n> > > -           vec3(0.000, -0.392, 2.017),\n> > > -           vec3(1.596, -0.813, 0.000)\n> > > -   );\n> > > -   vec3 yuv2rgb_bt601_offset = vec3(0.063, 0.500, 0.500);\n> > > -\n> > >     /*\n> > >      * The sampler won't interpolate the texture correctly along the X\n> axis,\n> > >      * as each RGBA pixel effectively stores two pixels. We thus need\n> to\n> > > @@ -76,7 +77,7 @@ void main(void)\n> > >\n> > >     float y = mix(y_left, y_right, step(0.5, f_x));\n> > >\n> > > -   vec3 rgb = yuv2rgb_bt601_mat * (vec3(y, uv) -\n> yuv2rgb_bt601_offset);\n> > > +   vec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);\n> > >\n> > >     gl_FragColor = vec4(rgb, 1.0);\n> > >   }\n> > > diff --git a/src/qcam/viewfinder_gl.cpp b/src/qcam/viewfinder_gl.cpp\n> > > index ec295b6de0dd..e2aa24703ff0 100644\n> > > --- a/src/qcam/viewfinder_gl.cpp\n> > > +++ b/src/qcam/viewfinder_gl.cpp\n> > > @@ -7,9 +7,12 @@\n> > >\n> > >   #include \"viewfinder_gl.h\"\n> > >\n> > > +#include <array>\n> > > +\n> > >   #include <QByteArray>\n> > >   #include <QFile>\n> > >   #include <QImage>\n> > > +#include <QStringList>\n> > >\n> > >   #include <libcamera/formats.h>\n> > >\n> > > @@ -56,7 +59,8 @@ static const QList<libcamera::PixelFormat>\n> supportedFormats{\n> > >   };\n> > >\n> > >   ViewFinderGL::ViewFinderGL(QWidget *parent)\n> > > -   : QOpenGLWidget(parent), buffer_(nullptr), image_(nullptr),\n> > > +   : QOpenGLWidget(parent), buffer_(nullptr),\n> > > +     colorSpace_(libcamera::ColorSpace::Raw), image_(nullptr),\n> > >       vertexBuffer_(QOpenGLBuffer::VertexBuffer)\n> > >   {\n> > >   }\n> > > @@ -72,10 +76,10 @@ const QList<libcamera::PixelFormat>\n> &ViewFinderGL::nativeFormats() const\n> > >   }\n> > >\n> > >   int ViewFinderGL::setFormat(const libcamera::PixelFormat &format,\n> const QSize &size,\n> > 
> -                       [[maybe_unused]] const libcamera::ColorSpace\n> &colorSpace,\n> > > +                       const libcamera::ColorSpace &colorSpace,\n> > >                         unsigned int stride)\n> > >   {\n> > > -   if (format != format_) {\n> > > +   if (format != format_ || colorSpace != colorSpace_) {\n> > >             /*\n> > >              * If the fragment already exists, remove it and create a\n> new\n> > >              * one for the new format.\n> > > @@ -89,7 +93,10 @@ int ViewFinderGL::setFormat(const\n> libcamera::PixelFormat &format, const QSize &s\n> > >             if (!selectFormat(format))\n> > >                     return -1;\n> > >\n> > > +           selectColorSpace(colorSpace);\n> > > +\n> > >             format_ = format;\n> > > +           colorSpace_ = colorSpace;\n> > >     }\n> > >\n> > >     size_ = size;\n> > > @@ -318,6 +325,72 @@ bool ViewFinderGL::selectFormat(const\n> libcamera::PixelFormat &format)\n> > >     return ret;\n> > >   }\n> > >\n> > > +void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace\n> &colorSpace)\n> > > +{\n> > > +   std::array<double, 9> yuv2rgb;\n> > > +\n> > > +   /* OpenGL stores arrays in column-major order. 
*/\n> > > +   switch (colorSpace.ycbcrEncoding) {\n> > > +   case libcamera::ColorSpace::YcbcrEncoding::None:\n> > > +           yuv2rgb = {\n> > > +                   1.0000,  0.0000,  0.0000,\n> > > +                   0.0000,  1.0000,  0.0000,\n> > > +                   0.0000,  0.0000,  1.0000,\n> > > +           };\n> > > +           break;\n> > > +\n> > > +   case libcamera::ColorSpace::YcbcrEncoding::Rec601:\n> > > +           yuv2rgb = {\n> > > +                   1.0000,  1.0000,  1.0000,\n> > > +                   0.0000, -0.3441,  1.7720,\n> > > +                   1.4020, -0.7141,  0.0000,\n> > > +           };\n> > > +           break;\n> > > +\n> > > +   case libcamera::ColorSpace::YcbcrEncoding::Rec709:\n> > > +           yuv2rgb = {\n> > > +                   1.0000,  1.0000,  1.0000,\n> > > +                   0.0000, -0.1873,  1.8856,\n> > > +                   1.5748, -0.4681,  0.0000,\n> > > +           };\n> > > +           break;\n> > > +\n> > > +   case libcamera::ColorSpace::YcbcrEncoding::Rec2020:\n> > > +           yuv2rgb = {\n> > > +                   1.0000,  1.0000,  1.0000,\n> > > +                   0.0000, -0.1646,  1.8814,\n> > > +                   1.4746, -0.5714,  0.0000,\n> > > +           };\n> > > +           break;\n> > > +   }\n> > > +\n> > > +   double offset;\n> > > +\n> > > +   switch (colorSpace.range) {\n> > > +   case libcamera::ColorSpace::Range::Full:\n> > > +           offset = 0.0;\n> > > +           break;\n> > > +\n> > > +   case libcamera::ColorSpace::Range::Limited:\n> > > +           offset = 16.0;\n> > > +\n> > > +           for (unsigned int i = 0; i < 3; ++i)\n> > > +                   yuv2rgb[i] *= 255.0 / 219.0;\n> > > +           for (unsigned int i = 4; i < 9; ++i)\n> > > +                   yuv2rgb[i] *= 255.0 / 224.0;\n> > > +           break;\n> > > +   }\n> > > +\n> > > +   QStringList matrix;\n> > > +\n> > > +   for (double coeff : yuv2rgb)\n> > > +           matrix.append(QString::number(coeff, 
'f'));\n> > > +\n> > > +   fragmentShaderDefines_.append(\"#define YUV2RGB_MATRIX \" +\n> matrix.join(\", \"));\n> > > +   fragmentShaderDefines_.append(QString(\"#define YUV2RGB_Y_OFFSET\n> %1\")\n> > > +           .arg(offset, 0, 'f', 1));\n> > > +}\n> > > +\n> > >   bool ViewFinderGL::createVertexShader()\n> > >   {\n> > >     /* Create Vertex Shader */\n> > > diff --git a/src/qcam/viewfinder_gl.h b/src/qcam/viewfinder_gl.h\n> > > index 798830a31cd2..68c2912df12f 100644\n> > > --- a/src/qcam/viewfinder_gl.h\n> > > +++ b/src/qcam/viewfinder_gl.h\n> > > @@ -57,6 +57,7 @@ protected:\n> > >\n> > >   private:\n> > >     bool selectFormat(const libcamera::PixelFormat &format);\n> > > +   void selectColorSpace(const libcamera::ColorSpace &colorSpace);\n> > >\n> > >     void configureTexture(QOpenGLTexture &texture);\n> > >     bool createFragmentShader();\n> > > @@ -67,6 +68,7 @@ private:\n> > >     /* Captured image size, format and buffer */\n> > >     libcamera::FrameBuffer *buffer_;\n> > >     libcamera::PixelFormat format_;\n> > > +   libcamera::ColorSpace colorSpace_;\n> > >     QSize size_;\n> > >     unsigned int stride_;\n> > >     Image *image_;\n>\n> --\n> Regards,\n>\n> Laurent Pinchart\n>","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 97691C3272\n\tfor <parsemail@patchwork.libcamera.org>;\n\tWed, 31 Aug 2022 10:05:42 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 023F661FC1;\n\tWed, 31 Aug 2022 12:05:42 +0200 (CEST)","from mail-ua1-x931.google.com (mail-ua1-x931.google.com\n\t[IPv6:2607:f8b0:4864:20::931])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 4AE8A61F9F\n\tfor 
<libcamera-devel@lists.libcamera.org>;\n\tWed, 31 Aug 2022 12:05:40 +0200 (CEST)","by mail-ua1-x931.google.com with SMTP id e3so5261605uax.4\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 31 Aug 2022 03:05:40 -0700 (PDT)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\n\ts=mail; t=1661940342;\n\tbh=0dcMzPy4ht+aXqAR5x/MHjvTlcXKhvlMnQgQ4wAknAI=;\n\th=References:In-Reply-To:Date:To:Subject:List-Id:List-Unsubscribe:\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:Cc:\n\tFrom;\n\tb=pVdCGEddyswtiUTQ3D2yHGtoTqJpIYflMaRW36AyzBE0P++kp9fhf2xrzaVS3pgAZ\n\tJSVW14VOkkeLFgOmBjVfS1v+Pd9PAwpostAjz2i1dWUxMvIuT3o26IzsW27q/jY2gv\n\t0YgzNyUIWnwZ3IUon9Gg+RkO0VI21oWBkz5Fd1Z9MMqzQ3Kbvz3WikckrohSpCBMJJ\n\tTizNXcF+rsJS6spZ8XoDIuIjXRqCssXTVZQ1aQ8sjElEis2mOo4YA03SOC8onnwmBw\n\tdTVSGf1LlRCLQaVF84+VvaHRHbVYFMpkhGc14GVqOBaW9REhIjTk3RmjK1zViYz/tu\n\t4AWxHP259YaJw==","v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112;\n\th=cc:to:subject:message-id:date:from:in-reply-to:references\n\t:mime-version:from:to:cc;\n\tbh=PbTEVMH46m2onFQEmQZjMAvIb0uWV5zh6OW4c9Rutz0=;\n\tb=nN1nLb4Q2SmaNLSCaIQzKHztt8bTZuPV7qQgxgEqQv8y3lJQ+4Q8aLzMFCNJZ3Uj+t\n\tkaZtgMXRTUG1DzT6qIwnmXmEjQH8AAv08dpQUELcFfuIfopZPHeKArXjc3HwgQhcLWvo\n\tI40/oBW6M3nlCMhWE4Ht+nr5fVg6tmcZO5pVakv2J85GCJbNuEsgjBlECMFnuyuhL+oi\n\tYcKlEYT6ObrV//20OGKK9cqydhDRg3aByMxbTH0AczElzmbt6XolNuZZN9LtSwXx0xfX\n\taB7x0vStJnQYflkfBeVWMkqcylWqAvQEbVHfe2GkMpmZ8x0yPK1VlXEkTmFuLIDa4K2h\n\tiueA=="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected) header.d=gmail.com header.i=@gmail.com\n\theader.b=\"nN1nLb4Q\"; dkim-atps=neutral","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; 
s=20210112;\n\th=cc:to:subject:message-id:date:from:in-reply-to:references\n\t:mime-version:x-gm-message-state:from:to:cc;\n\tbh=PbTEVMH46m2onFQEmQZjMAvIb0uWV5zh6OW4c9Rutz0=;\n\tb=SJuvzXT+oBu+pXf4+2dxaBqce/zckFp2Ry6dP3jpPGL+mLl0GNnImWhuiyY0ZernkW\n\tN49X2q6ekfoPwSAmokFxWLUb6l+7pWnMvknfKj5y/VBZUUDp5AMrnlB03AO5u+4v+oiI\n\tbj7Bx5oiCq+YOBYXM3OJom01lcYMSS8mUWBCnzqO8DVdnQCbmq4xoeWL63iS1UhxjACX\n\ty9rI5UZEZqDglq8I2p5jazs6nXFjGEBjeXEmhKoZs+cOCWvnKW+ZkWEOEIooJuDa4uBQ\n\tUubnZgVqefYQjjizN5nJ9y8VdoFPxyWca3W2iuaT5w+8d+Uw4zCGPol9ApZVLO3AEAMB\n\tNQZA==","X-Gm-Message-State":"ACgBeo2mi+9Iw3ScG0zVYNL/dMVbt+hiVlIQmFlX/cHHgkeDGLlgCT3n\n\tIrE3O730HsIPHSzIyaaQdpc0O0iOjQqG4nEUbVVKIZqW6UU=","X-Google-Smtp-Source":"AA6agR5NW4WdBF7pGJW7qt0YREWy1l5i3VX9wJ2ZmOZAkK15ABayXuFUDL/4xOAh4nlDCj3/utoMj3OzgbDD56ERXxk=","X-Received":"by 2002:a05:6130:64c:b0:390:f639:5ac4 with SMTP id\n\tbh12-20020a056130064c00b00390f6395ac4mr6525237uab.98.1661940338887;\n\tWed, 31 Aug 2022 03:05:38 -0700 (PDT)","MIME-Version":"1.0","References":"<20220829100414.28404-1-laurent.pinchart@ideasonboard.com>\n\t<20220829100414.28404-4-laurent.pinchart@ideasonboard.com>\n\t<c1cfc300-f2ce-2629-5e03-f29e64bdda3f@ideasonboard.com>\n\t<Yw5Ic6XTnQqP28gW@pendragon.ideasonboard.com>","In-Reply-To":"<Yw5Ic6XTnQqP28gW@pendragon.ideasonboard.com>","Date":"Wed, 31 Aug 2022 15:35:25 +0530","Message-ID":"<CAJP1LGaBxGoK-Kb4ABOJkEsT+NsTF6oHzzBaT-zAzJUWfFdA8w@mail.gmail.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Content-Type":"multipart/alternative; boundary=\"00000000000089925005e786a300\"","Subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV 
rendering","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Kunal Agarwal via libcamera-devel <libcamera-devel@lists.libcamera.org>","Reply-To":"Kunal Agarwal <kunalagarwal1072002@gmail.com>","Cc":"libcamera-devel@lists.libcamera.org","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":24890,"web_url":"https://patchwork.libcamera.org/comment/24890/","msgid":"<4d95997b-9385-e3a0-1196-cd841d9e495b@ideasonboard.com>","date":"2022-09-02T05:42:55","subject":"Re: [libcamera-devel] [PATCH 3/3] qcam: viewfinder_gl: Take color\n\tspace into account for YUV rendering","submitter":{"id":86,"url":"https://patchwork.libcamera.org/api/people/86/","name":"Umang Jain","email":"umang.jain@ideasonboard.com"},"content":"Hi Laurent,\n\nOn 8/30/22 10:57 PM, Laurent Pinchart wrote:\n> Hi Umang,\n>\n> On Tue, Aug 30, 2022 at 07:43:12PM +0530, Umang Jain wrote:\n>> On 8/29/22 3:34 PM, Laurent Pinchart via libcamera-devel wrote:\n>>> Update the YUV shaders and the viewfinder_gl to correctly take the\n>>> Y'CbCr encoding and the quantization range into account when rendering\n>>> YUV formats to RGB. 
Support for the primaries and transfer function will\n>>> be added in a subsequent step.\n>>>\n>>> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n>> Patch looks good and straight forward for most parts, however few\n>> specifics are still a bit unclear to me\n>>\n>>> ---\n>>>    src/qcam/assets/shader/YUV_2_planes.frag | 27 ++++----\n>>>    src/qcam/assets/shader/YUV_3_planes.frag | 23 ++++---\n>>>    src/qcam/assets/shader/YUV_packed.frag   | 17 ++---\n>>>    src/qcam/viewfinder_gl.cpp               | 79 +++++++++++++++++++++++-\n>>>    src/qcam/viewfinder_gl.h                 |  2 +\n>>>    5 files changed, 115 insertions(+), 33 deletions(-)\n>>>\n>>> diff --git a/src/qcam/assets/shader/YUV_2_planes.frag b/src/qcam/assets/shader/YUV_2_planes.frag\n>>> index 254463c05cac..da8dbcc5f801 100644\n>>> --- a/src/qcam/assets/shader/YUV_2_planes.frag\n>>> +++ b/src/qcam/assets/shader/YUV_2_planes.frag\n>>> @@ -13,27 +13,30 @@ varying vec2 textureOut;\n>>>    uniform sampler2D tex_y;\n>>>    uniform sampler2D tex_u;\n>>>    \n>>> +const mat3 yuv2rgb_matrix = mat3(\n>>> +\tYUV2RGB_MATRIX\n>>> +);\n>>> +\n>>> +const vec3 yuv2rgb_offset = vec3(\n>>> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n>> I understood the YUV2RGB_Y_OFFSET #define but don't understand where\n>> other values come from (or why they exist :D)\n> The quantization of the Cb and Cr values in all relevant color spaces\n> (ITU-R BT.601, BT.709, BT.2020, ...) add an offset of 128 (for 8-bit\n> values). For instance, in BT.709, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)*2^(n-8)]\n>\n> where D'Cb is the Cb signal after quantization, E'Cb the Cb signal\n> before quantization (in the [-0.5, 0.5] range), and n the number of\n> bits). INT[] denotes rounding to the closest integer.\n>\n> The 224 multiplier creates a limited quantization range, following the\n> above formula, -0.5 will be quantized to INT[224 * -0.5 + 128] = 16, and\n> 0.5 to INT[224 * 0.5 + 128] = 240. 
The values are then stored as 8-bit\n> unsigned integers in memory.\n>\n> For full range quantization, the same applies, with a multiplier equal\n> to 255 instead of 224. [-0.5, 0.5] is thus mapped to [0, 255].\n>\n> We need to apply the reverse quantization on D'Y, D'Cb and D'Cr in order\n> to get the original E'Y, E'Cb and E'Cr values (in the [0.0, 1.0] and\n> [-0.5, 0.5] ranges respectively for E'Y and E'C[br]). Starting with full\n> range, given\n>\n> D'Cb = INT[(255*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 255\n>\n> or\n>\n> E'Cb = D'Cb / 255 - 128 / 255\n>\n> OpenGL, when reading texture data through a floating point texture\n> sampler (which we do in the shader by calling texture2D on a sampler2D\n> variable), normalizes the values stored in memory ([0, 255]) to the\n> [0.0, 1.0] range. This means that the D'Cb value is already divided by\n> 255 by the GPU. We only need to subtract 128 / 255 to get the original\n> E'Cb value.\n>\n> In the limited quantization range case, we have\n>\n> D'Cb = INT[(224*E'Cb + 128)] (for 8-bit data)\n>\n> the inverse is given by\n>\n> E'Cb = (D'Cb - 128) / 224\n>\n> Let's introduce the 255 factor:\n>\n> E'Cb = (D'Cb - 128) / 255 * 255 / 224\n>\n> which can also be written as\n>\n> E'Cb = (D'Cb / 255 - 128 / 255) * 255 / 224\n>\n> We thus have\n>\n> E'Cb(lim) = E'Cb(full) * 255 / 224\n>\n> The shader doesn't include the 255 / 224 multiplier directly; it gets\n> included by the C++ code in the yuv2rgb matrix, and there's no need for\n> a different offset between the limited and full range quantization.\n\nAh, thanks. I got time to read and understand it. Thanks for the write-up!\n>\n> I hope this helps clarify the implementation.\n\nYes, it does.\n\n>\n>> Maybe I should start learning shader programming ;-)\n>>\n>> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>\n>>\n>>> +);\n>>> +\n>>>    void main(void)\n>>>    {\n>>>    \tvec3 yuv;\n>>> -\tvec3 rgb;\n>>> -\tmat3 
yuv2rgb_bt601_mat = mat3(\n>>> -\t\tvec3(1.164,  1.164, 1.164),\n>>> -\t\tvec3(0.000, -0.392, 2.017),\n>>> -\t\tvec3(1.596, -0.813, 0.000)\n>>> -\t);\n>>>    \n>>> -\tyuv.x = texture2D(tex_y, textureOut).r - 0.063;\n>>> +\tyuv.x = texture2D(tex_y, textureOut).r;\n>>>    #if defined(YUV_PATTERN_UV)\n>>> -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n>>> -\tyuv.z = texture2D(tex_u, textureOut).a - 0.500;\n>>> +\tyuv.y = texture2D(tex_u, textureOut).r;\n>>> +\tyuv.z = texture2D(tex_u, textureOut).a;\n>>>    #elif defined(YUV_PATTERN_VU)\n>>> -\tyuv.y = texture2D(tex_u, textureOut).a - 0.500;\n>>> -\tyuv.z = texture2D(tex_u, textureOut).r - 0.500;\n>>> +\tyuv.y = texture2D(tex_u, textureOut).a;\n>>> +\tyuv.z = texture2D(tex_u, textureOut).r;\n>>>    #else\n>>>    #error Invalid pattern\n>>>    #endif\n>>>    \n>>> -\trgb = yuv2rgb_bt601_mat * yuv;\n>>> +\tvec3 rgb = yuv2rgb_matrix * (yuv - yuv2rgb_offset);\n>>> +\n>>>    \tgl_FragColor = vec4(rgb, 1.0);\n>>>    }\n>>> diff --git a/src/qcam/assets/shader/YUV_3_planes.frag b/src/qcam/assets/shader/YUV_3_planes.frag\n>>> index 2be74b5d2a9d..e754129d74d1 100644\n>>> --- a/src/qcam/assets/shader/YUV_3_planes.frag\n>>> +++ b/src/qcam/assets/shader/YUV_3_planes.frag\n>>> @@ -14,20 +14,23 @@ uniform sampler2D tex_y;\n>>>    uniform sampler2D tex_u;\n>>>    uniform sampler2D tex_v;\n>>>    \n>>> +const mat3 yuv2rgb_matrix = mat3(\n>>> +\tYUV2RGB_MATRIX\n>>> +);\n>>> +\n>>> +const vec3 yuv2rgb_offset = vec3(\n>>> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n>>> +);\n>>> +\n>>>    void main(void)\n>>>    {\n>>>    \tvec3 yuv;\n>>> -\tvec3 rgb;\n>>> -\tmat3 yuv2rgb_bt601_mat = mat3(\n>>> -\t\tvec3(1.164,  1.164, 1.164),\n>>> -\t\tvec3(0.000, -0.392, 2.017),\n>>> -\t\tvec3(1.596, -0.813, 0.000)\n>>> -\t);\n>>>    \n>>> -\tyuv.x = texture2D(tex_y, textureOut).r - 0.063;\n>>> -\tyuv.y = texture2D(tex_u, textureOut).r - 0.500;\n>>> -\tyuv.z = texture2D(tex_v, textureOut).r - 0.500;\n>>> +\tyuv.x = 
texture2D(tex_y, textureOut).r;\n>>> +\tyuv.y = texture2D(tex_u, textureOut).r;\n>>> +\tyuv.z = texture2D(tex_v, textureOut).r;\n>>> +\n>>> +\tvec3 rgb = yuv2rgb_matrix * (yuv - yuv2rgb_offset);\n>>>    \n>>> -\trgb = yuv2rgb_bt601_mat * yuv;\n>>>    \tgl_FragColor = vec4(rgb, 1.0);\n>>>    }\n>>> diff --git a/src/qcam/assets/shader/YUV_packed.frag b/src/qcam/assets/shader/YUV_packed.frag\n>>> index d6efd4ce92a9..b9ef9d41beae 100644\n>>> --- a/src/qcam/assets/shader/YUV_packed.frag\n>>> +++ b/src/qcam/assets/shader/YUV_packed.frag\n>>> @@ -14,15 +14,16 @@ varying vec2 textureOut;\n>>>    uniform sampler2D tex_y;\n>>>    uniform vec2 tex_step;\n>>>    \n>>> +const mat3 yuv2rgb_matrix = mat3(\n>>> +\tYUV2RGB_MATRIX\n>>> +);\n>>> +\n>>> +const vec3 yuv2rgb_offset = vec3(\n>>> +\tYUV2RGB_Y_OFFSET / 255.0, 128.0 / 255.0, 128.0 / 255.0\n>>> +);\n>>> +\n>>>    void main(void)\n>>>    {\n>>> -\tmat3 yuv2rgb_bt601_mat = mat3(\n>>> -\t\tvec3(1.164,  1.164, 1.164),\n>>> -\t\tvec3(0.000, -0.392, 2.017),\n>>> -\t\tvec3(1.596, -0.813, 0.000)\n>>> -\t);\n>>> -\tvec3 yuv2rgb_bt601_offset = vec3(0.063, 0.500, 0.500);\n>>> -\n>>>    \t/*\n>>>    \t * The sampler won't interpolate the texture correctly along the X axis,\n>>>    \t * as each RGBA pixel effectively stores two pixels. 
We thus need to
>>> @@ -76,7 +77,7 @@ void main(void)
>>>    
>>>    	float y = mix(y_left, y_right, step(0.5, f_x));
>>>    
>>> -	vec3 rgb = yuv2rgb_bt601_mat * (vec3(y, uv) - yuv2rgb_bt601_offset);
>>> +	vec3 rgb = yuv2rgb_matrix * (vec3(y, uv) - yuv2rgb_offset);
>>>    
>>>    	gl_FragColor = vec4(rgb, 1.0);
>>>    }
>>> diff --git a/src/qcam/viewfinder_gl.cpp b/src/qcam/viewfinder_gl.cpp
>>> index ec295b6de0dd..e2aa24703ff0 100644
>>> --- a/src/qcam/viewfinder_gl.cpp
>>> +++ b/src/qcam/viewfinder_gl.cpp
>>> @@ -7,9 +7,12 @@
>>>    
>>>    #include "viewfinder_gl.h"
>>>    
>>> +#include <array>
>>> +
>>>    #include <QByteArray>
>>>    #include <QFile>
>>>    #include <QImage>
>>> +#include <QStringList>
>>>    
>>>    #include <libcamera/formats.h>
>>>    
>>> @@ -56,7 +59,8 @@ static const QList<libcamera::PixelFormat> supportedFormats{
>>>    };
>>>    
>>>    ViewFinderGL::ViewFinderGL(QWidget *parent)
>>> -	: QOpenGLWidget(parent), buffer_(nullptr), image_(nullptr),
>>> +	: QOpenGLWidget(parent), buffer_(nullptr),
>>> +	  colorSpace_(libcamera::ColorSpace::Raw), image_(nullptr),
>>>    	  vertexBuffer_(QOpenGLBuffer::VertexBuffer)
>>>    {
>>>    }
>>> @@ -72,10 +76,10 @@ const QList<libcamera::PixelFormat> &ViewFinderGL::nativeFormats() const
>>>    }
>>>    
>>>    int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &size,
>>> -			    [[maybe_unused]] const libcamera::ColorSpace &colorSpace,
>>> +			    const libcamera::ColorSpace &colorSpace,
>>>    			    unsigned int stride)
>>>    {
>>> -	if (format != format_) {
>>> +	if (format != format_ || colorSpace != colorSpace_) {
>>>    		/*
>>>    		 * If the fragment already exists, remove it and create a new
>>>    		 * one for the new format.
>>> @@ -89,7 +93,10 @@ int ViewFinderGL::setFormat(const libcamera::PixelFormat &format, const QSize &s
>>>    		if (!selectFormat(format))
>>>    			return -1;
>>>    
>>> +		selectColorSpace(colorSpace);
>>> +
>>>    		format_ = format;
>>> +		colorSpace_ = colorSpace;
>>>    	}
>>>    
>>>    	size_ = size;
>>> @@ -318,6 +325,72 @@ bool ViewFinderGL::selectFormat(const libcamera::PixelFormat &format)
>>>    	return ret;
>>>    }
>>>    
>>> +void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace &colorSpace)
>>> +{
>>> +	std::array<double, 9> yuv2rgb;
>>> +
>>> +	/* OpenGL stores arrays in column-major order. */
>>> +	switch (colorSpace.ycbcrEncoding) {
>>> +	case libcamera::ColorSpace::YcbcrEncoding::None:
>>> +		yuv2rgb = {
>>> +			1.0000,  0.0000,  0.0000,
>>> +			0.0000,  1.0000,  0.0000,
>>> +			0.0000,  0.0000,  1.0000,
>>> +		};
>>> +		break;
>>> +
>>> +	case libcamera::ColorSpace::YcbcrEncoding::Rec601:
>>> +		yuv2rgb = {
>>> +			1.0000,  1.0000,  1.0000,
>>> +			0.0000, -0.3441,  1.7720,
>>> +			1.4020, -0.7141,  0.0000,
>>> +		};
>>> +		break;
>>> +
>>> +	case libcamera::ColorSpace::YcbcrEncoding::Rec709:
>>> +		yuv2rgb = {
>>> +			1.0000,  1.0000,  1.0000,
>>> +			0.0000, -0.1873,  1.8856,
>>> +			1.5748, -0.4681,  0.0000,
>>> +		};
>>> +		break;
>>> +
>>> +	case libcamera::ColorSpace::YcbcrEncoding::Rec2020:
>>> +		yuv2rgb = {
>>> +			1.0000,  1.0000,  1.0000,
>>> +			0.0000, -0.1646,  1.8814,
>>> +			1.4746, -0.5714,  0.0000,
>>> +		};
>>> +		break;
>>> +	}
>>> +
>>> +	double offset;
>>> +
>>> +	switch (colorSpace.range) {
>>> +	case libcamera::ColorSpace::Range::Full:
>>> +		offset = 0.0;
>>> +		break;
>>> +
>>> +	case libcamera::ColorSpace::Range::Limited:
>>> +		offset = 16.0;
>>> +
>>> +		for (unsigned int i = 0; i < 3; ++i)
>>> +			yuv2rgb[i] *= 255.0 / 219.0;
>>> +		for (unsigned int i = 4; i < 9; ++i)
>>> +			yuv2rgb[i] *= 255.0 / 224.0;
>>> +		break;
>>> +	}
>>> +
>>> +	QStringList matrix;
>>> +
>>> +	for (double coeff : yuv2rgb)
>>> +		matrix.append(QString::number(coeff, 'f'));
>>> +
>>> +	fragmentShaderDefines_.append("#define YUV2RGB_MATRIX " + matrix.join(", "));
>>> +	fragmentShaderDefines_.append(QString("#define YUV2RGB_Y_OFFSET %1")
>>> +		.arg(offset, 0, 'f', 1));
>>> +}
>>> +
>>>    bool ViewFinderGL::createVertexShader()
>>>    {
>>>    	/* Create Vertex Shader */
>>> diff --git a/src/qcam/viewfinder_gl.h b/src/qcam/viewfinder_gl.h
>>> index 798830a31cd2..68c2912df12f 100644
>>> --- a/src/qcam/viewfinder_gl.h
>>> +++ b/src/qcam/viewfinder_gl.h
>>> @@ -57,6 +57,7 @@ protected:
>>>    
>>>    private:
>>>    	bool selectFormat(const libcamera::PixelFormat &format);
>>> +	void selectColorSpace(const libcamera::ColorSpace &colorSpace);
>>>    
>>>    	void configureTexture(QOpenGLTexture &texture);
>>>    	bool createFragmentShader();
>>> @@ -67,6 +68,7 @@ private:
>>>    	/* Captured image size, format and buffer */
>>>    	libcamera::FrameBuffer *buffer_;
>>>    	libcamera::PixelFormat format_;
>>> +	libcamera::ColorSpace colorSpace_;
>>>    	QSize size_;
>>>    	unsigned int stride_;
>>>    	Image *image_;