[{"id":27847,"web_url":"https://patchwork.libcamera.org/comment/27847/","msgid":"<p5lv775564h53rejsevybtlqwtgl5xkdyag4q4mlbmfagf3f73@y4isajuax3u4>","date":"2023-09-22T13:18:30","subject":"Re: [libcamera-devel] [PATCH RFC 7/7] WIP: android: add YUYV->NV12\n\tformat conversion via libyuv","submitter":{"id":143,"url":"https://patchwork.libcamera.org/api/people/143/","name":"Jacopo Mondi","email":"jacopo.mondi@ideasonboard.com"},"content":"Hi Mattijs\n\nOn Fri, Sep 15, 2023 at 09:57:31AM +0200, Mattijs Korpershoek via libcamera-devel wrote:\n> For some platforms, it's possible that the gralloc implementation\n> and the CSI receiver cannot agree on a pixel format. When that happens,\n> there is usually a m2m converter in the pipeline which handles pixel format\n> conversion.\n>\n> On platforms without pixel format converters, such as the AM62x, we need to do\n> software conversion.\n>\n> The AM62x platform:\n> * uses a CSI receiver (j721e-csi2rx), that only supports\n>   packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.\n> * Has a gralloc implementation that only supports of semi-planar\n>   YUV420 formats such as NV12.\n>\n> Implement YUYV->NV12 format conversion using libyuv.\n>\n> This is mainly done by transforming the first stream from Type::Direct into\n> Type::Internal so that it goes through the post-processor loop.\n>\n> ```\n> The WIP: part is mainly around computeYUYVSize():\n>\n> Since gralloc and j721e-csi2rx are incompatible, we need a way to get\n> gralloc to allocate (NV12) the kernel-requested buffer length (YUYV).\n> In other words, we should make sure that the first plane of the NV12\n> allocated buffer is long enough to fit a YUYV image.\n>\n> According to [1], NV12 has 8 bits (one byte) per component, and the\n> first plane is the Y component.\n> So a 1920x1080 image in NV12 has plane[0].length=1920*1080=2073600\n>\n> According to [2], YUYV stores 2 pixels per container of 32 bits, which\n> gives us 16 bits (2 bytes for one pixel).\n> So a 
1920x1080 image in YUYV has plane[0].length=1920*1080*2=4147200\n>\n> So apply a *2 factor to make the kernel believe it's receiving a YUYV buffer.\n>\n> Note: this also means that we are wasting NV12's plane[1] buffer with\n> each allocation.\n>\n> [1] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html\n> [2] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html\n> ```\n>\n> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>\n> ---\n>  src/android/camera_capabilities.cpp | 90 ++++++++++++++++++++++++++++++++++++-\n>  src/android/camera_capabilities.h   |  4 ++\n>  src/android/camera_device.cpp       |  6 ++-\n>  src/android/camera_stream.cpp       | 54 +++++++++++++++++++++-\n>  src/android/camera_stream.h         |  5 +++\n>  5 files changed, 154 insertions(+), 5 deletions(-)\n>\n> diff --git a/src/android/camera_capabilities.cpp b/src/android/camera_capabilities.cpp\n> index 1bfeaea4b121..e2e0f7409e94 100644\n> --- a/src/android/camera_capabilities.cpp\n> +++ b/src/android/camera_capabilities.cpp\n> @@ -124,6 +124,16 @@ const std::map<int, const Camera3Format> camera3FormatsMap = {\n>  \t},\n>  };\n>\n> +/**\n> + * \\var yuvConversions\n> + * \\brief list of supported pixel formats for an input pixel format\n> + *\n> + * \\todo This should be retrieved statically from yuv/post_processor_yuv instead\n> + */\n> +const std::map<PixelFormat, const std::vector<PixelFormat>> yuvConversions = {\n> +\t{ formats::YUYV, { formats::NV12 } },\n> +};\n> +\n>  const std::map<camera_metadata_enum_android_info_supported_hardware_level, std::string>\n>  hwLevelStrings = {\n>  \t{ ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED,  \"LIMITED\" },\n> @@ -582,8 +592,10 @@ int CameraCapabilities::initializeStreamConfigurations()\n>  \t\t\tLOG(HAL, Debug) << \"Testing \" << pixelFormat;\n>\n>  \t\t\t/*\n> -\t\t\t * The stream configuration size can be adjusted,\n> -\t\t\t * not the pixel format.\n> +\t\t\t * 
The stream configuration size can be adjusted.\n> +\t\t\t * The pixel format might be converted via libyuv.\n> +\t\t\t * Conversion check is done in another loop after\n> +\t\t\t * testing native supported formats.\n>  \t\t\t *\n>  \t\t\t * \\todo This could be simplified once all pipeline\n>  \t\t\t * handlers will report the StreamFormats list of\n> @@ -603,7 +615,46 @@ int CameraCapabilities::initializeStreamConfigurations()\n>  \t\t\t/* If the format is not mandatory, skip it. */\n>  \t\t\tif (!camera3Format.mandatory)\n>  \t\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * Test if we can map the format via a software conversion.\n> +\t\t * This means that the converter can produce an \"output\" that is\n> +\t\t * compatible with the format defined in Android.\n> +\t\t */\n> +\t\tbool needConversion = false;\n> +\t\tfor (const PixelFormat &pixelFormat : libcameraFormats) {\n>\n> +\t\t\tLOG(HAL, Debug) << \"Testing \" << pixelFormat << \" using conversion\";\n> +\n> +\t\t\t/* \\todo move this into a separate function */\n\nMight be a good idea\n\n> +\t\t\tfor (const auto &[inputFormat, outputFormats] : yuvConversions) {\n> +\t\t\t\t/* check if the converter can produce pixelFormat */\n> +\t\t\t\tauto it = std::find(outputFormats.begin(), outputFormats.end(), pixelFormat);\n> +\t\t\t\tif (it == outputFormats.end())\n> +\t\t\t\t\tcontinue;\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * The converter can produce output pixelFormat, see if we can configure\n> +\t\t\t\t * the camera with the associated input pixelFormat.\n> +\t\t\t\t */\n> +\t\t\t\tcfg.pixelFormat = inputFormat;\n> +\t\t\t\tCameraConfiguration::Status status = cameraConfig->validate();\n> +\n> +\t\t\t\tif (status != CameraConfiguration::Invalid && cfg.pixelFormat == inputFormat) {\n> +\t\t\t\t\tmappedFormat = inputFormat;\n> +\t\t\t\t\tconversionMap_[androidFormat] = std::make_pair(inputFormat, *it);\n> +\t\t\t\t\tneedConversion = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\t}\n> +\n> +\t\t\t/* We found 
a valid conversion format, so bail out */\n> +\t\t\tif (mappedFormat.isValid())\n> +\t\t\t\tbreak;\n> +\t\t}\n\nI quite like this.\n\nWhen I first thought about this problem I was considering expanding\ncamera3FormatsMap with an additional field to list there the possible\nsource formats an android-format could be software converted to.\n\nKeeping a separate yuvConversions[] map and checking it here\nseparately is however nice, but as you noted, this should come from\nthe post-processor. For this first version I think it's good\n\n> +\n> +\t\tif (!mappedFormat.isValid()) {\n>  \t\t\tLOG(HAL, Error)\n>  \t\t\t\t<< \"Failed to map mandatory Android format \"\n>  \t\t\t\t<< camera3Format.name << \" (\"\n> @@ -619,6 +670,11 @@ int CameraCapabilities::initializeStreamConfigurations()\n>  \t\tLOG(HAL, Debug) << \"Mapped Android format \"\n>  \t\t\t\t<< camera3Format.name << \" to \"\n>  \t\t\t\t<< mappedFormat;\n> +\t\tif (needConversion) {\n> +\t\t\tLOG(HAL, Debug) << mappedFormat\n> +\t\t\t\t\t<< \" will be converted into \"\n> +\t\t\t\t\t<< conversionMap_[androidFormat].second;\n> +\t\t}\n\nnit: no {} for single liners\n\nUnless it's:\n                if () {\n                        instruction1;\n                        instruction2;\n                } else {\n                        instruction1;\n                }\n\nHere and in other parts of the code\n\n>\n>  \t\tstd::vector<Size> resolutions;\n>  \t\tconst PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);\n> @@ -1457,6 +1513,36 @@ PixelFormat CameraCapabilities::toPixelFormat(int format) const\n>  \treturn it->second;\n>  }\n>\n> +/*\n> + * Check if we need to do software conversion via a post-processor\n> + * for an Android format code\n> + */\n> +bool CameraCapabilities::needConversion(int format) const\n> +{\n> +\tauto it = conversionMap_.find(format);\n> +\tif (it == conversionMap_.end()) {\n> +\t\tLOG(HAL, Error) << \"Requested format \" << utils::hex(format)\n> +\t\t\t\t<< \" not supported for 
conversion\";\n> +\t\treturn false;\n> +\t}\n> +\n> +\treturn true;\n\nThis could just be\n\n        auto it = conversionMap_.find(format);\n        return it != conversionMap_.end();\n\n> +}\n> +\n> +/*\n> + * Returns a conversion (input,output) pair for a given Android format code\n> + */\n> +std::pair<PixelFormat, PixelFormat> CameraCapabilities::conversionFormats(int format) const\n> +{\n> +\tauto it = conversionMap_.find(format);\n> +\tif (it == conversionMap_.end()) {\n> +\t\tLOG(HAL, Error) << \"Requested format \" << utils::hex(format)\n> +\t\t\t\t<< \" not supported for conversion\";\n> +\t}\n> +\n> +\treturn it->second;\n> +}\n> +\n>  std::unique_ptr<CameraMetadata> CameraCapabilities::requestTemplateManual() const\n>  {\n>  \tif (!capabilities_.count(ANDROID_REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR)) {\n> diff --git a/src/android/camera_capabilities.h b/src/android/camera_capabilities.h\n> index 6f66f221d33f..c3e6b48ab91d 100644\n> --- a/src/android/camera_capabilities.h\n> +++ b/src/android/camera_capabilities.h\n> @@ -30,6 +30,9 @@ public:\n>\n>  \tCameraMetadata *staticMetadata() const { return staticMetadata_.get(); }\n>  \tlibcamera::PixelFormat toPixelFormat(int format) const;\n> +\tbool needConversion(int format) const;\n> +\tstd::pair<libcamera::PixelFormat, libcamera::PixelFormat>\n> +\tconversionFormats(int format) const;\n>  \tunsigned int maxJpegBufferSize() const { return maxJpegBufferSize_; }\n>\n>  \tstd::unique_ptr<CameraMetadata> requestTemplateManual() const;\n> @@ -77,6 +80,7 @@ private:\n>\n>  \tstd::vector<Camera3StreamConfiguration> streamConfigurations_;\n>  \tstd::map<int, libcamera::PixelFormat> formatsMap_;\n> +\tstd::map<int, std::pair<libcamera::PixelFormat, libcamera::PixelFormat>> conversionMap_;\n>  \tstd::unique_ptr<CameraMetadata> staticMetadata_;\n>  \tunsigned int maxJpegBufferSize_;\n>\n> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp\n> index d34bae715a47..842cbb06d345 100644\n> 
--- a/src/android/camera_device.cpp\n> +++ b/src/android/camera_device.cpp\n> @@ -635,8 +635,12 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)\n>  \t\t\tcontinue;\n>  \t\t}\n>\n> +\t\tCameraStream::Type type = CameraStream::Type::Direct;\n> +\t\tif (capabilities_.needConversion(stream->format))\n> +\t\t\ttype = CameraStream::Type::Internal;\n> +\n\nOk, now patch #4 makes more sense indeed :)\n\nI think it can be squashed here\n\n>  \t\tCamera3StreamConfig streamConfig;\n> -\t\tstreamConfig.streams = { { stream, CameraStream::Type::Direct } };\n> +\t\tstreamConfig.streams = { { stream, type } };\n>  \t\tstreamConfig.config.size = size;\n>  \t\tstreamConfig.config.pixelFormat = format;\n>  \t\tstreamConfigs.push_back(std::move(streamConfig));\n> diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp\n> index 4fd05dda5ed3..961ee40017f1 100644\n> --- a/src/android/camera_stream.cpp\n> +++ b/src/android/camera_stream.cpp\n> @@ -95,6 +95,7 @@ int CameraStream::configure()\n>\n>  \t\tswitch (outFormat) {\n>  \t\tcase formats::NV12:\n> +\t\tcase formats::YUYV:\n>  \t\t\tpostProcessor_ = std::make_unique<PostProcessorYuv>();\n>  \t\t\tbreak;\n>\n> @@ -107,6 +108,16 @@ int CameraStream::configure()\n>  \t\t\treturn -EINVAL;\n>  \t\t}\n>\n> +\t\tneedConversion_ =\n> +\t\t\tcameraDevice_->capabilities()->needConversion(camera3Stream_->format);\n> +\n> +\t\tif (needConversion_) {\n> +\t\t\tauto conv = cameraDevice_->capabilities()->conversionFormats(camera3Stream_->format);\n> +\t\t\tLOG(HAL, Debug) << \"Configuring the post processor to convert \"\n> +\t\t\t\t\t<< conv.first << \" -> \" << conv.second;\n> +\t\t\toutput.pixelFormat = conv.second;\n> +\t\t}\n> +\n>  \t\tint ret = postProcessor_->configure(input, output);\n>  \t\tif (ret)\n>  \t\t\treturn ret;\n> @@ -183,7 +194,12 @@ int CameraStream::process(Camera3RequestDescriptor::StreamBuffer *streamBuffer)\n>  \t\tstreamBuffer->fence.reset();\n>  \t}\n>\n> -\tconst 
StreamConfiguration &output = configuration();\n> +\tStreamConfiguration output = configuration();\n> +\tif (needConversion_) {\n> +\t\toutput.pixelFormat =\n> +\t\t\tcameraDevice_->capabilities()->conversionFormats(camera3Stream_->format).second;\n\nnit: 80 cols preferred, 120 when necessary :)\nalso no {}\n\nDoes stride need adjustment too, or is it not considered during\npost-processing?\n\n> +\t}\n> +\n>  \tstreamBuffer->dstBuffer = std::make_unique<CameraBuffer>(\n>  \t\t*streamBuffer->camera3Buffer, output.pixelFormat, output.size,\n>  \t\tPROT_READ | PROT_WRITE);\n> @@ -205,6 +221,39 @@ void CameraStream::flush()\n>  {\n>  \tworker_->flush();\n>  }\n>\n> +Size CameraStream::computeYUYVSize(const Size &nv12Size)\n> +{\n> +\t/*\n> +\t * On am62x platforms, the receiver driver (j721e-csi2rx) only\n> +\t * supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.\n> +\t *\n> +\t * However, the gralloc implementation is only capable of semiplanar\n> +\t * YUV420 such as NV12.\n> +\t *\n> +\t * To trick the kernel into believing it's receiving a YUYV buffer, we adjust the\n> +\t * size we request to gralloc so that plane(0) of the NV12 buffer is long enough to\n> +\t * match the length of a YUYV plane.\n> +\t *\n> +\t * for NV12, one pixel is encoded on 1.5 bytes, but plane 0 has 1 byte per pixel.\n> +\t * for YUYV, one pixel is encoded on 2 bytes.\n> +\t *\n> +\t * So apply a *2 factor.\n> +\t *\n> +\t * See:\n> +\t * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html\n> +\t * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html\n> +\t */\n> +\tconstexpr unsigned int YUYVfactor = 2;\n> +\n> +\tunsigned int width = nv12Size.width;\n> +\tunsigned int height = nv12Size.height;\n> +\n> +\tif (needConversion_)\n> +\t\twidth = width * YUYVfactor;\n> +\n> +\treturn Size{ width, height };\n> +}\n> +\n>  FrameBuffer *CameraStream::getBuffer()\n>  {\n>  \tif (!allocator_)\n> @@ -222,8 +271,9 @@ 
FrameBuffer *CameraStream::getBuffer()\n>  \t\t * \\todo Store a reference to the format of the source stream\n>  \t\t * instead of hardcoding.\n>  \t\t */\n> +\t\tconst Size hackedSize = computeYUYVSize(configuration().size);\n>  \t\tauto frameBuffer = allocator_->allocate(HAL_PIXEL_FORMAT_YCBCR_420_888,\n> -\t\t\t\t\t\t\tconfiguration().size,\n> +\t\t\t\t\t\t\thackedSize,\n>  \t\t\t\t\t\t\tcamera3Stream_->usage);\n\nI see your point about this being problematic, and I wonder if we\nshouldn't stop assuming HAL_PIXEL_FORMAT_YCBCR_420_888 and instead map\nthis to the actual format produced by libcamera (YUYV in this case).\n\nCameraStream has access to the libcamera::StreamConfiguration it maps\nto. If I'm not mistaken that StreamConfiguration::pixelFormat will be\n== formats::YUYV, right? Could we associate it with the corresponding\nAndroid format (HAL_PIXEL_FORMAT_YCrCb_420_SP?) instead? Would this\nremove the need to trick gralloc into allocating a larger buffer?\n\n>  \t\tallocatedBuffers_.push_back(std::move(frameBuffer));\n>  \t\tbuffers_.emplace_back(allocatedBuffers_.back().get());\n> diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h\n> index 4c5078b2c26d..52a5606399c5 100644\n> --- a/src/android/camera_stream.h\n> +++ b/src/android/camera_stream.h\n> @@ -128,10 +128,13 @@ public:\n>\n>  \tint configure();\n>  \tint process(Camera3RequestDescriptor::StreamBuffer *streamBuffer);\n> +\tlibcamera::Size computeYUYVSize(const libcamera::Size &nv12Size);\n>  \tlibcamera::FrameBuffer *getBuffer();\n>  \tvoid putBuffer(libcamera::FrameBuffer *buffer);\n>  \tvoid flush();\n>\n> +\tbool needConversion() const { return needConversion_; }\n\nNot used?\n\nOk, lots of work, very nice! 
With a few adjustments I hope we can see\nthis as a proper patch series.\n\nI understand this is very specific to your use case (YUYV-to-NV12) and\nmight not work out-of-the-box for other systems, but I think it's fine\nand it's a good first step on which others can build.\n\nThanks!\n   j\n\n> +\n>  private:\n>  \tclass PostProcessorWorker : public libcamera::Thread\n>  \t{\n> @@ -184,4 +187,6 @@ private:\n>  \tstd::unique_ptr<PostProcessor> postProcessor_;\n>\n>  \tstd::unique_ptr<PostProcessorWorker> worker_;\n> +\n> +\tbool needConversion_;\n>  };\n>\n> --\n> 2.41.0\n>"},{"id":27855,"web_url":"https://patchwork.libcamera.org/comment/27855/","msgid":"<877cofiv8e.fsf@baylibre.com>","date":"2023-09-24T12:58:09","subject":"Re: [libcamera-devel] [PATCH RFC 7/7] WIP: android: add YUYV->NV12\n\tformat conversion via libyuv","submitter":{"id":153,"url":"https://patchwork.libcamera.org/api/people/153/","name":"Mattijs Korpershoek","email":"mkorpershoek@baylibre.com"},"content":"Hi Jacopo,\n\nThank you for your review\n\nOn ven., sept. 22, 2023 at 15:18, Jacopo Mondi <jacopo.mondi@ideasonboard.com> wrote:\n\n> Hi Mattijs\n>\n> On Fri, Sep 15, 2023 at 09:57:31AM +0200, Mattijs Korpershoek via libcamera-devel wrote:\n>> For some platforms, it's possible that the gralloc implementation\n>> and the CSI receiver cannot agree on a pixel format. 
When that happens,\n>> there is usually a m2m converter in the pipeline which handles pixel format\n>> conversion.\n>>\n>> On platforms without pixel format converters, such as the AM62x, we need to do\n>> software conversion.\n>>\n>> The AM62x platform:\n>> * uses a CSI receiver (j721e-csi2rx), that only supports\n>>   packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.\n>> * Has a gralloc implementation that only supports of semi-planar\n>>   YUV420 formats such as NV12.\n>>\n>> Implement YUYV->NV12 format conversion using libyuv.\n>>\n>> This is mainly done by transforming the first stream from Type::Direct into\n>> Type::Internal so that it goes through the post-processor loop.\n>>\n>> ```\n>> The WIP: part is mainly around computeYUYVSize():\n>>\n>> Since gralloc and j721e-csi2rx are incompatible, we need a way to get\n>> gralloc to allocate (NV12) the kernel-requested buffer length (YUYV).\n>> In other words, we should make sure that the first plane of the NV12\n>> allocated buffer is long enough to fit a YUYV image.\n>>\n>> According to [1], NV12 has 8 bits (one byte) per component, and the\n>> first plane is the Y component.\n>> So a 1920x1080 image in NV12 has plane[0].length=1920*1080=2073600\n>>\n>> According to [2], YUYV stores 2 pixels per container of 32 bits, which\n>> gives us 16 bits (2 bytes for one pixel).\n>> So a 1920x1080 image in YUYV has plane[0].length=1920*1080*2=4147200\n>>\n>> So apply a *2 factor to make the kernel believe it's receiving a YUYV buffer.\n>>\n>> Note: this also means that we are wasting NV12's plane[1] buffer with\n>> each allocation.\n>>\n>> [1] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html\n>> [2] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html\n>> ```\n>>\n>> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>\n>> ---\n>>  src/android/camera_capabilities.cpp | 90 ++++++++++++++++++++++++++++++++++++-\n>>  
src/android/camera_capabilities.h   |  4 ++\n>>  src/android/camera_device.cpp       |  6 ++-\n>>  src/android/camera_stream.cpp       | 54 +++++++++++++++++++++-\n>>  src/android/camera_stream.h         |  5 +++\n>>  5 files changed, 154 insertions(+), 5 deletions(-)\n>>\n>> diff --git a/src/android/camera_capabilities.cpp b/src/android/camera_capabilities.cpp\n>> index 1bfeaea4b121..e2e0f7409e94 100644\n>> --- a/src/android/camera_capabilities.cpp\n>> +++ b/src/android/camera_capabilities.cpp\n>> @@ -124,6 +124,16 @@ const std::map<int, const Camera3Format> camera3FormatsMap = {\n>>  \t},\n>>  };\n>>\n>> +/**\n>> + * \\var yuvConversions\n>> + * \\brief list of supported pixel formats for an input pixel format\n>> + *\n>> + * \\todo This should be retrieved statically from yuv/post_processor_yuv instead\n>> + */\n>> +const std::map<PixelFormat, const std::vector<PixelFormat>> yuvConversions = {\n>> +\t{ formats::YUYV, { formats::NV12 } },\n>> +};\n>> +\n>>  const std::map<camera_metadata_enum_android_info_supported_hardware_level, std::string>\n>>  hwLevelStrings = {\n>>  \t{ ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED,  \"LIMITED\" },\n>> @@ -582,8 +592,10 @@ int CameraCapabilities::initializeStreamConfigurations()\n>>  \t\t\tLOG(HAL, Debug) << \"Testing \" << pixelFormat;\n>>\n>>  \t\t\t/*\n>> -\t\t\t * The stream configuration size can be adjusted,\n>> -\t\t\t * not the pixel format.\n>> +\t\t\t * The stream configuration size can be adjusted.\n>> +\t\t\t * The pixel format might be converted via libyuv.\n>> +\t\t\t * Conversion check is done in another loop after\n>> +\t\t\t * testing native supported formats.\n>>  \t\t\t *\n>>  \t\t\t * \\todo This could be simplified once all pipeline\n>>  \t\t\t * handlers will report the StreamFormats list of\n>> @@ -603,7 +615,46 @@ int CameraCapabilities::initializeStreamConfigurations()\n>>  \t\t\t/* If the format is not mandatory, skip it. 
*/\n>>  \t\t\tif (!camera3Format.mandatory)\n>>  \t\t\t\tcontinue;\n>> +\t\t}\n>> +\n>> +\t\t/*\n>> +\t\t * Test if we can map the format via a software conversion.\n>> +\t\t * This means that the converter can produce an \"output\" that is\n>> +\t\t * compatible with the format defined in Android.\n>> +\t\t */\n>> +\t\tbool needConversion = false;\n>> +\t\tfor (const PixelFormat &pixelFormat : libcameraFormats) {\n>>\n>> +\t\t\tLOG(HAL, Debug) << \"Testing \" << pixelFormat << \" using conversion\";\n>> +\n>> +\t\t\t/* \\todo move this into a separate function */\n>\n> Might be a good idea\n\nWill consider it for v2.\n\n>\n>> +\t\t\tfor (const auto &[inputFormat, outputFormats] : yuvConversions) {\n>> +\t\t\t\t/* check if the converter can produce pixelFormat */\n>> +\t\t\t\tauto it = std::find(outputFormats.begin(), outputFormats.end(), pixelFormat);\n>> +\t\t\t\tif (it == outputFormats.end())\n>> +\t\t\t\t\tcontinue;\n>> +\n>> +\t\t\t\t/*\n>> +\t\t\t\t * The converter can produce output pixelFormat, see if we can configure\n>> +\t\t\t\t * the camera with the associated input pixelFormat.\n>> +\t\t\t\t */\n>> +\t\t\t\tcfg.pixelFormat = inputFormat;\n>> +\t\t\t\tCameraConfiguration::Status status = cameraConfig->validate();\n>> +\n>> +\t\t\t\tif (status != CameraConfiguration::Invalid && cfg.pixelFormat == inputFormat) {\n>> +\t\t\t\t\tmappedFormat = inputFormat;\n>> +\t\t\t\t\tconversionMap_[androidFormat] = std::make_pair(inputFormat, *it);\n>> +\t\t\t\t\tneedConversion = true;\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\t}\n>> +\t\t\t}\n>> +\n>> +\t\t\t/* We found a valid conversion format, so bail out */\n>> +\t\t\tif (mappedFormat.isValid())\n>> +\t\t\t\tbreak;\n>> +\t\t}\n>\n> I quite like this.\n>\n> When I first thought about this problem I was considering expanding\n> camera3FormatsMap with an additional field to list there the possible\n> source formats an android-format could be software converted to.\n>\n> Keeping a separate yuvConversions[] map and checking it 
here\n> separately is however nice, but as you noted, this should come from\n> the post-processor. For this first version I think it's good\n\nI'm glad you like it. This feels like a good solution to me as well. We\nprioritize \"natively supported\" formats and only try this as a \"last resort\".\n\nSo if I understand correctly, it's okay if I keep it as-is for v2?\n\n>\n>> +\n>> +\t\tif (!mappedFormat.isValid()) {\n>>  \t\t\tLOG(HAL, Error)\n>>  \t\t\t\t<< \"Failed to map mandatory Android format \"\n>>  \t\t\t\t<< camera3Format.name << \" (\"\n>> @@ -619,6 +670,11 @@ int CameraCapabilities::initializeStreamConfigurations()\n>>  \t\tLOG(HAL, Debug) << \"Mapped Android format \"\n>>  \t\t\t\t<< camera3Format.name << \" to \"\n>>  \t\t\t\t<< mappedFormat;\n>> +\t\tif (needConversion) {\n>> +\t\t\tLOG(HAL, Debug) << mappedFormat\n>> +\t\t\t\t\t<< \" will be converted into \"\n>> +\t\t\t\t\t<< conversionMap_[androidFormat].second;\n>> +\t\t}\n>\n> nit: no {} for single liners\n>\n> Unless it's:\n>                 if () {\n>                         instruction1;\n>                         instruction2;\n>                 } else {\n>                         instruction1;\n>                 }\n>\n> Here and in other parts of the code\n\nSorry I missed this. 
utils/checkstyle.py did not complain to me.\nI will make sure there won't be any {} for single liners.\n\n>\n>>\n>>  \t\tstd::vector<Size> resolutions;\n>>  \t\tconst PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);\n>> @@ -1457,6 +1513,36 @@ PixelFormat CameraCapabilities::toPixelFormat(int format) const\n>>  \treturn it->second;\n>>  }\n>>\n>> +/*\n>> + * Check if we need to do software conversion via a post-processor\n>> + * for an Android format code\n>> + */\n>> +bool CameraCapabilities::needConversion(int format) const\n>> +{\n>> +\tauto it = conversionMap_.find(format);\n>> +\tif (it == conversionMap_.end()) {\n>> +\t\tLOG(HAL, Error) << \"Requested format \" << utils::hex(format)\n>> +\t\t\t\t<< \" not supported for conversion\";\n>> +\t\treturn false;\n>> +\t}\n>> +\n>> +\treturn true;\n>\n> This could just be\n>\n>         auto formats = conversionFormats(format);\n>         return formats != conversionMap_.end();\n\nWill do in v2.\n\n>\n>> +}\n>> +\n>> +/*\n>> + * Returns a conversion (input,output) pair for a given Android format code\n>> + */\n>> +std::pair<PixelFormat, PixelFormat> CameraCapabilities::conversionFormats(int format) const\n>> +{\n>> +\tauto it = conversionMap_.find(format);\n>> +\tif (it == conversionMap_.end()) {\n>> +\t\tLOG(HAL, Error) << \"Requested format \" << utils::hex(format)\n>> +\t\t\t\t<< \" not supported for conversion\";\n>> +\t}\n>> +\n>> +\treturn it->second;\n>> +}\n>> +\n>>  std::unique_ptr<CameraMetadata> CameraCapabilities::requestTemplateManual() const\n>>  {\n>>  \tif (!capabilities_.count(ANDROID_REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR)) {\n>> diff --git a/src/android/camera_capabilities.h b/src/android/camera_capabilities.h\n>> index 6f66f221d33f..c3e6b48ab91d 100644\n>> --- a/src/android/camera_capabilities.h\n>> +++ b/src/android/camera_capabilities.h\n>> @@ -30,6 +30,9 @@ public:\n>>\n>>  \tCameraMetadata *staticMetadata() const { return staticMetadata_.get(); }\n>>  \tlibcamera::PixelFormat 
toPixelFormat(int format) const;\n>> +\tbool needConversion(int format) const;\n>> +\tstd::pair<libcamera::PixelFormat, libcamera::PixelFormat>\n>> +\tconversionFormats(int format) const;\n>>  \tunsigned int maxJpegBufferSize() const { return maxJpegBufferSize_; }\n>>\n>>  \tstd::unique_ptr<CameraMetadata> requestTemplateManual() const;\n>> @@ -77,6 +80,7 @@ private:\n>>\n>>  \tstd::vector<Camera3StreamConfiguration> streamConfigurations_;\n>>  \tstd::map<int, libcamera::PixelFormat> formatsMap_;\n>> +\tstd::map<int, std::pair<libcamera::PixelFormat, libcamera::PixelFormat>> conversionMap_;\n>>  \tstd::unique_ptr<CameraMetadata> staticMetadata_;\n>>  \tunsigned int maxJpegBufferSize_;\n>>\n>> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp\n>> index d34bae715a47..842cbb06d345 100644\n>> --- a/src/android/camera_device.cpp\n>> +++ b/src/android/camera_device.cpp\n>> @@ -635,8 +635,12 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)\n>>  \t\t\tcontinue;\n>>  \t\t}\n>>\n>> +\t\tCameraStream::Type type = CameraStream::Type::Direct;\n>> +\t\tif (capabilities_.needConversion(stream->format))\n>> +\t\t\ttype = CameraStream::Type::Internal;\n>> +\n>\n> Ok, now patch #4 makes more sense indeed :)\n>\n> I think it can be squashed here\n\nWill squash patch #4 into #7\n\n>\n>>  \t\tCamera3StreamConfig streamConfig;\n>> -\t\tstreamConfig.streams = { { stream, CameraStream::Type::Direct } };\n>> +\t\tstreamConfig.streams = { { stream, type } };\n>>  \t\tstreamConfig.config.size = size;\n>>  \t\tstreamConfig.config.pixelFormat = format;\n>>  \t\tstreamConfigs.push_back(std::move(streamConfig));\n>> diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp\n>> index 4fd05dda5ed3..961ee40017f1 100644\n>> --- a/src/android/camera_stream.cpp\n>> +++ b/src/android/camera_stream.cpp\n>> @@ -95,6 +95,7 @@ int CameraStream::configure()\n>>\n>>  \t\tswitch (outFormat) {\n>>  \t\tcase formats::NV12:\n>> +\t\tcase 
formats::YUYV:\n>>  \t\t\tpostProcessor_ = std::make_unique<PostProcessorYuv>();\n>>  \t\t\tbreak;\n>>\n>> @@ -107,6 +108,16 @@ int CameraStream::configure()\n>>  \t\t\treturn -EINVAL;\n>>  \t\t}\n>>\n>> +\t\tneedConversion_ =\n>> +\t\t\tcameraDevice_->capabilities()->needConversion(camera3Stream_->format);\n>> +\n>> +\t\tif (needConversion_) {\n>> +\t\t\tauto conv = cameraDevice_->capabilities()->conversionFormats(camera3Stream_->format);\n>> +\t\t\tLOG(HAL, Debug) << \"Configuring the post processor to convert \"\n>> +\t\t\t\t\t<< conv.first << \" -> \" << conv.second;\n>> +\t\t\toutput.pixelFormat = conv.second;\n>> +\t\t}\n>> +\n>>  \t\tint ret = postProcessor_->configure(input, output);\n>>  \t\tif (ret)\n>>  \t\t\treturn ret;\n>> @@ -183,7 +194,12 @@ int CameraStream::process(Camera3RequestDescriptor::StreamBuffer *streamBuffer)\n>>  \t\tstreamBuffer->fence.reset();\n>>  \t}\n>>\n>> -\tconst StreamConfiguration &output = configuration();\n>> +\tStreamConfiguration output = configuration();\n>> +\tif (needConversion_) {\n>> +\t\toutput.pixelFormat =\n>> +\t\t\tcameraDevice_->capabilities()->conversionFormats(camera3Stream_->format).second;\n>\n> nit: 80 cols preferred, 120 when necessary :)\n> also no {}\n\nWill do on one line and remove the {} in v2.\n\n>\n> Does stride need adjustment too, or is it not considered during\n> post-processing?\n\nStride is definitely used in the YUV post-processor, but it is set at\nconfigure time when calling CameraStream::configure().\n\ncalculateLengths() derives it from the destination PixelFormatInfo.\n\nHere we only re-fetch the output.pixelFormat because we need it to\nallocate the dstBuffer below.\n\nShould I cache the output.pixelFormat as a class member, or is it okay\nto keep this as-is?\n\n>\n>> +\t}\n>> +\n>>  \tstreamBuffer->dstBuffer = std::make_unique<CameraBuffer>(\n>>  \t\t*streamBuffer->camera3Buffer, output.pixelFormat, output.size,\n>>  \t\tPROT_READ | PROT_WRITE);\n>> @@ -205,6 +221,39 @@ void 
CameraStream::flush()\n>>  \tworker_->flush();\n>>  }\n>>\n>> +Size CameraStream::computeYUYVSize(const Size &nv12Size)\n>> +{\n>> +\t/*\n>> +\t * On AM62x platforms, the receiver driver (j721e-csi2rx) only\n>> +\t * supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.\n>> +\t *\n>> +\t * However, the gralloc implementation is only capable of semi-planar\n>> +\t * YUV420 formats such as NV12.\n>> +\t *\n>> +\t * To trick the kernel into believing it's receiving a YUYV buffer,\n>> +\t * we adjust the size we request from gralloc so that plane(0) of\n>> +\t * the NV12 buffer is long enough to match the length of a YUYV\n>> +\t * plane.\n>> +\t *\n>> +\t * For NV12, one pixel is encoded in 1.5 bytes, but plane 0 has\n>> +\t * 1 byte per pixel. For YUYV, one pixel is encoded in 2 bytes.\n>> +\t *\n>> +\t * So apply a *2 factor.\n>> +\t *\n>> +\t * See:\n>> +\t * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html\n>> +\t * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html\n>> +\t */\n>> +\tconstexpr unsigned int YUYVfactor = 2;\n>> +\n>> +\tunsigned int width = nv12Size.width;\n>> +\tunsigned int height = nv12Size.height;\n>> +\n>> +\tif (needConversion_)\n>> +\t\twidth = width * YUYVfactor;\n>> +\n>> +\treturn Size{ width, height };\n>> +}\n>> +\n>>  FrameBuffer *CameraStream::getBuffer()\n>>  {\n>>  \tif (!allocator_)\n>> @@ -222,8 +271,9 @@ FrameBuffer *CameraStream::getBuffer()\n>>  \t\t * \\todo Store a reference to the format of the source stream\n>>  \t\t * instead of hardcoding.\n>>  \t\t */\n>> +\t\tconst Size hackedSize = computeYUYVSize(configuration().size);\n>>  \t\tauto frameBuffer = allocator_->allocate(HAL_PIXEL_FORMAT_YCBCR_420_888,\n>> -\t\t\t\t\t\t\tconfiguration().size,\n>> +\t\t\t\t\t\t\thackedSize,\n>>  \t\t\t\t\t\t\tcamera3Stream_->usage);\n>\n> I see your point about this being problematic, and I wonder if we\n> shouldn't stop assuming HAL_PIXEL_FORMAT_YCBCR_420_888 and instead map\n> 
this to the actual format produced by libcamera (YUYV in this case).\n\nRight.\n\n>\n> CameraStream has access to the libcamera::StreamConfiguration it maps\n> to. If I'm not mistaken, that StreamConfiguration::pixelFormat will be\n> == formats::YUYV, right? Could we associate it with the corresponding\n> Android format (HAL_PIXEL_FORMAT_YCrCb_420_SP?) instead? Would this\n\nI think it would be HAL_PIXEL_FORMAT_YCBCR_422_888_SP (because YUYV is\n422), but I'm not sure that format exists.\n\n> remove the need to trick gralloc into allocating a larger buffer?\n\nYes, you are not mistaken.\nThe StreamConfiguration::pixelFormat will be formats::YUYV.\n\nThe problem I'm having is mapping a libcamera PixelFormat to an Android\nformat.\nRight now, we have Android format -> PixelFormat, but not the opposite.\n\nI will investigate this a little more. I have not yet studied\ndrm_hwcomposer, which might have similar problems (Android formats\nvs DRM formats).\n\n>\n>>  \t\tallocatedBuffers_.push_back(std::move(frameBuffer));\n>>  \t\tbuffers_.emplace_back(allocatedBuffers_.back().get());\n>> diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h\n>> index 4c5078b2c26d..52a5606399c5 100644\n>> --- a/src/android/camera_stream.h\n>> +++ b/src/android/camera_stream.h\n>> @@ -128,10 +128,13 @@ public:\n>>\n>>  \tint configure();\n>>  \tint process(Camera3RequestDescriptor::StreamBuffer *streamBuffer);\n>> +\tlibcamera::Size computeYUYVSize(const libcamera::Size &nv12Size);\n>>  \tlibcamera::FrameBuffer *getBuffer();\n>>  \tvoid putBuffer(libcamera::FrameBuffer *buffer);\n>>  \tvoid flush();\n>>\n>> +\tbool needConversion() const { return needConversion_; }\n>\n> Not used?\n\nSorry, this is a leftover from a previous design. I will remove it in v2.\n\n>\n> Ok, lot of work, very nice! 
With a few adjustments I hope we can see\n> this as a proper patch series.\n>\n> I understand this is very specific to your use case (YUYV-to-NV12) and\n> might not work out-of-the-box for other systems, but I think it's fine\n> and it's a good first step on which others can build.\n\nThank you, that is very encouraging. I am glad you are considering it\nfor master, and I will try my best to polish it up to libcamera's standards!\n\n>\n> Thanks!\n>    j\n>\n>> +\n>>  private:\n>>  \tclass PostProcessorWorker : public libcamera::Thread\n>>  \t{\n>> @@ -184,4 +187,6 @@ private:\n>>  \tstd::unique_ptr<PostProcessor> postProcessor_;\n>>\n>>  \tstd::unique_ptr<PostProcessorWorker> worker_;\n>> +\n>> +\tbool needConversion_;\n>>  };\n>>\n>> --\n>> 2.41.0\n>>","headers":{"To":"Jacopo Mondi <jacopo.mondi@ideasonboard.com>","In-Reply-To":"<p5lv775564h53rejsevybtlqwtgl5xkdyag4q4mlbmfagf3f73@y4isajuax3u4>","References":"<20230915-libyuv-convert-v1-0-1e5bcf68adac@baylibre.com> <20230915-libyuv-convert-v1-7-1e5bcf68adac@baylibre.com> <p5lv775564h53rejsevybtlqwtgl5xkdyag4q4mlbmfagf3f73@y4isajuax3u4>","Date":"Sun, 24 Sep 2023 14:58:09 +0200","Message-ID":"<877cofiv8e.fsf@baylibre.com>","Subject":"Re: [libcamera-devel] [PATCH RFC 7/7] WIP: android: add YUYV->NV12 format conversion via libyuv","From":"Mattijs Korpershoek via libcamera-devel <libcamera-devel@lists.libcamera.org>","Reply-To":"Mattijs Korpershoek <mkorpershoek@baylibre.com>","Cc":"libcamera-devel@lists.libcamera.org, Guillaume La Roque <glaroque@baylibre.com>"}}]
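Editor's note: the buffer-size reasoning behind computeYUYVSize() (NV12 plane 0 holds one byte of Y per pixel, while packed YUYV needs two bytes per pixel, hence the *2 width factor) can be checked with a small standalone sketch. The helper names below are illustrative, not part of the libcamera HAL:

```cpp
#include <cassert>
#include <cstddef>

// NV12: plane 0 is the full-resolution Y plane, one byte per pixel.
std::size_t nv12Plane0Length(unsigned int width, unsigned int height)
{
	return static_cast<std::size_t>(width) * height;
}

// YUYV: packed 4:2:2, two bytes per pixel in a single plane.
std::size_t yuyvPlaneLength(unsigned int width, unsigned int height)
{
	return static_cast<std::size_t>(width) * height * 2;
}

// Doubling the width requested from gralloc makes NV12 plane 0 large
// enough to hold a full YUYV frame of the original size.
unsigned int grallocWidthForYuyv(unsigned int width)
{
	return width * 2;
}
```

For a 1920x1080 frame this reproduces the figures from the commit message: 2073600 bytes for NV12 plane 0 versus 4147200 bytes for the YUYV plane.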
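Editor's note: for reference, the per-pixel work the patch delegates to libyuv can be illustrated with a naive, unoptimized YUYV-to-NV12 conversion. This is a sketch, not libyuv's implementation: chroma is simply taken from even rows rather than filtered across row pairs, and the type and function names are made up:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// NV12 destination: full-resolution Y plane plus a half-resolution,
// interleaved U/V plane.
struct Nv12Image {
	std::vector<std::uint8_t> y;  /* width * height bytes */
	std::vector<std::uint8_t> uv; /* width * height / 2 bytes */
};

// Convert packed YUYV (two pixels per 4 bytes: Y0 U Y1 V) to NV12.
// width and height are assumed even.
Nv12Image yuyvToNv12(const std::vector<std::uint8_t> &src,
		     unsigned int width, unsigned int height)
{
	Nv12Image dst;
	dst.y.resize(static_cast<std::size_t>(width) * height);
	dst.uv.resize(static_cast<std::size_t>(width) * height / 2);

	for (unsigned int row = 0; row < height; row++) {
		const std::uint8_t *in =
			&src[static_cast<std::size_t>(row) * width * 2];
		for (unsigned int x = 0; x < width; x += 2) {
			/* Copy the two luma samples of this pixel pair. */
			dst.y[static_cast<std::size_t>(row) * width + x] = in[2 * x];
			dst.y[static_cast<std::size_t>(row) * width + x + 1] = in[2 * x + 2];
			/* Keep chroma from even rows only (420 subsampling). */
			if (row % 2 == 0) {
				std::size_t uvIdx =
					static_cast<std::size_t>(row / 2) * width + x;
				dst.uv[uvIdx] = in[2 * x + 1];     /* U */
				dst.uv[uvIdx + 1] = in[2 * x + 3]; /* V */
			}
		}
	}

	return dst;
}
```

In the actual series this loop is replaced by a single libyuv call in the YUV post-processor, which also handles strides and SIMD optimization.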