{"id":18561,"url":"https://patchwork.libcamera.org/api/patches/18561/?format=json","web_url":"https://patchwork.libcamera.org/patch/18561/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20230426131057.21550-12-naush@raspberrypi.com>","date":"2023-04-26T13:10:55","name":"[libcamera-devel,11/13] pipeline: raspberrypi: Introduce PipelineHandlerBase class","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"462173a35f6c45bc30c71f8bf4821e2bfbc7e5e1","submitter":{"id":34,"url":"https://patchwork.libcamera.org/api/people/34/?format=json","name":"Naushir Patuck","email":"naush@raspberrypi.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/18561/mbox/","series":[{"id":3847,"url":"https://patchwork.libcamera.org/api/series/3847/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=3847","date":"2023-04-26T13:10:44","name":"Raspberry Pi: Code refactoring","version":1,"mbox":"https://patchwork.libcamera.org/series/3847/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/18561/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/18561/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 59F3CBDCBD\n\tfor <parsemail@patchwork.libcamera.org>;\n\tWed, 26 Apr 2023 13:13:45 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 0DC3C627E3;\n\tWed, 26 Apr 2023 15:13:45 +0200 (CEST)","from mail-wm1-x333.google.com (mail-wm1-x333.google.com\n\t[IPv6:2a00:1450:4864:20::333])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id E15B9627E2\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 26 Apr 2023 15:13:42 +0200 (CEST)","by mail-wm1-x333.google.com with SMTP id\n\t5b1f17b1804b1-3f09b4a1527so73161185e9.0\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 26 Apr 2023 06:13:42 -0700 (PDT)","from localhost.localdomain ([93.93.133.154])\n\tby smtp.gmail.com with ESMTPSA id\n\tk5-20020adff5c5000000b002f103ca90cdsm15780949wrp.101.2023.04.26.06.13.38\n\t(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n\tWed, 26 Apr 2023 06:13:39 -0700 (PDT)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\n\ts=mail; t=1682514825;\n\tbh=vY+CVfGNwYGyUN7UJHdJRIs19CLiVZZt1CKrKRy8Ues=;\n\th=To:Date:In-Reply-To:References:Subject:List-Id:List-Unsubscribe:\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:\n\tFrom;\n\tb=WCOJBh9KYERPKWoOVbJVqXXlsrIe+HSK5gveJWNPGDttQnixLzqx0bgQ8o7hfucGX\n\t7XLIf0jHfaaasc0G/G4cGSir6W7BEs0D3YwM4D5K1FVKG+n03eP0gV+TuC8GTGuwkG\n\td99wfurbBnJ5RQsJROcNZe3hKeyFit60OjZDTW6SX07wnRvBJmJPTDQQLIhvO2pJUZ\n\t2bp9FUCZTy3BsIc/mX6TmqNwr68+CWvQoFI63UbUEejsShMorzOK1cpuGQ+uEQDr6v\n\tKMrp2HgdW8ZDCzp8ycMhqVmPNbfgx6Sax3Q/c6GsLmLbAc6iN44Ijxfz/1K0P1HXvW\n\tW/Za/aGhPNb8w==","v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=raspberrypi.com; s=google; t=1682514822; 
x=1685106822;\n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:from:to:cc:subject:date\n\t:message-id:reply-to;\n\tbh=ZYvMG3JlfJ2iPI34zyy+2kQQVY/C+zcG7SR5d24k+NU=;\n\tb=QugtJzQoXZPNgnUSONgYxnzTgfnO4hpK5PTXvLh4WRL11+3j7jUzNR0HRQ9yhwHo50\n\tz/z+Lq6hr3dfudcZ4U8jXECWImr42hmbJLU0kp+KUT7kw6FbV8ifUKTJDt12N2q2Q2cC\n\tPorQIyKDK4cERr7aV18iz0yqzqaFejxqmqn+1u97pIYMSt+VaE7Me2I+QZW4N0i8eTnR\n\tdLzzOPfLUySevGiJp/5KpctHSRZbixOQ+x75M1Wl76KFuenuFWS4G3QhWU929TO5fQPz\n\tEttwgOOgx3pxovxmWZixkD2pb1sfjSdeal53IkMkZcnJGipi/7EDaO96I6hlfC4l9Qnn\n\tnZOQ=="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected) header.d=raspberrypi.com\n\theader.i=@raspberrypi.com\n\theader.b=\"QugtJzQo\"; dkim-atps=neutral","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20221208; t=1682514822; x=1685106822;\n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc\n\t:subject:date:message-id:reply-to;\n\tbh=ZYvMG3JlfJ2iPI34zyy+2kQQVY/C+zcG7SR5d24k+NU=;\n\tb=gjxv8bcW3rrNZ2uNlRqMf4kpE6VxCtP9QfF1EzL6+BwDvQQRpZFvop9DC14VOqYXby\n\txhsngi7iRyAoMFhQZ6QFwtMd0BtKt9aXv+nZ6L4466F9UCvv/za2QtAKrHk7FJbn66WY\n\taKQ0SSly1msmHSp5sMGJZgIgTEPnVHCZfpN6RBj/KPpqKPoptDitWAxAm1SuSaZILMOC\n\tifKFru4UWfXlInoMsN2uzk2zXAb/AYObBY434sIrX32EZaAyxaW8aUiptEJgYoL4DJ68\n\tkWiPNsp6K8lGf2ZAm5M4YZxUR9XsekMK6Wp+vrqVq57rgbX4w0YFkPmSJ0yASjSnOlBg\n\tZX4g==","X-Gm-Message-State":"AAQBX9dqp2y7cHIChkkH1IHn4c60T+tGAV9jY+Bepz+NSzQogPZJP8Dm\n\tffNMgvReHcOzicBpmsnA4nYYE5uOii18SLgfq/NnYA==","X-Google-Smtp-Source":"AKy350b0vsAZMsG4bmKBjIB4FDoCm6BncuNiukzJhZzSi/GnALRhheLSukNHwszNECO16nFb9uNjkg==","X-Received":"by 2002:a05:600c:24cd:b0:3f0:7f4b:f3c0 with SMTP id\n\t13-20020a05600c24cd00b003f07f4bf3c0mr13483215wmu.19.1682514819909; \n\tWed, 26 Apr 2023 06:13:39 -0700 (PDT)","To":"libcamera-devel@lists.libcamera.org","Date":"Wed, 26 Apr 2023 14:10:55 +0100","Message-Id":"<20230426131057.21550-12-naush@raspberrypi.com>","X-Mailer":"git-send-email 2.34.1","In-Reply-To":"<20230426131057.21550-1-naush@raspberrypi.com>","References":"<20230426131057.21550-1-naush@raspberrypi.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH 11/13] pipeline: raspberrypi: Introduce\n\tPipelineHandlerBase class","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Naushir Patuck via libcamera-devel\n\t<libcamera-devel@lists.libcamera.org>","Reply-To":"Naushir Patuck <naush@raspberrypi.com>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"Create a new PipelineHandlerBase class that handles general purpose\nhousekeeping duties for the Raspberry Pi pipeline handler. 
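These duties include camera registration, configuration generation and validation, buffer management, IPA loading and the start/stop and request queuing flow. 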
The code\nimplementing the new class is essentially pulled from the existing\npipeline/rpi/vc4/raspberrypi.cpp file with a small amount of\nrefactoring.\n\nCreate a PipelineHandlerVc4 class, derived from PipelineHandlerBase, that\nhandles the VC4-specific tasks of the pipeline handler. Again, the\ncode for this class implementation is taken from the existing\npipeline/rpi/vc4/raspberrypi.cpp with a small amount of\nrefactoring.\n\nThe goal of this change is to allow third parties to implement their own\npipeline handlers running on the Raspberry Pi without duplicating all of\nthe pipeline handler housekeeping tasks.\n\nSigned-off-by: Naushir Patuck <naush@raspberrypi.com>\n---\n src/ipa/rpi/vc4/vc4.cpp                       |    2 +-\n src/libcamera/pipeline/rpi/common/meson.build |    1 +\n .../pipeline/rpi/common/pipeline_base.cpp     | 1447 ++++++++++\n .../pipeline/rpi/common/pipeline_base.h       |  276 ++\n .../pipeline/rpi/vc4/data/example.yaml        |    4 +-\n src/libcamera/pipeline/rpi/vc4/meson.build    |    2 +-\n .../pipeline/rpi/vc4/raspberrypi.cpp          | 2428 -----------------\n src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 1001 +++++++\n 8 files changed, 2729 insertions(+), 2432 deletions(-)\n create mode 100644 src/libcamera/pipeline/rpi/common/pipeline_base.cpp\n create mode 100644 src/libcamera/pipeline/rpi/common/pipeline_base.h\n delete mode 100644 src/libcamera/pipeline/rpi/vc4/raspberrypi.cpp\n create mode 100644 src/libcamera/pipeline/rpi/vc4/vc4.cpp","diff":"diff --git a/src/ipa/rpi/vc4/vc4.cpp b/src/ipa/rpi/vc4/vc4.cpp\nindex 0e022c2aeed3..f3d83b2afadf 100644\n--- a/src/ipa/rpi/vc4/vc4.cpp\n+++ b/src/ipa/rpi/vc4/vc4.cpp\n@@ -538,7 +538,7 @@ extern \"C\" {\n const struct IPAModuleInfo ipaModuleInfo = {\n \tIPA_MODULE_API_VERSION,\n \t1,\n-\t\"PipelineHandlerRPi\",\n+\t\"PipelineHandlerVc4\",\n \t\"vc4\",\n };\n \ndiff --git a/src/libcamera/pipeline/rpi/common/meson.build b/src/libcamera/pipeline/rpi/common/meson.build\nindex 1dec6d3d028b..f8ea790b42a1 100644\n--- a/src/libcamera/pipeline/rpi/common/meson.build\n+++ b/src/libcamera/pipeline/rpi/common/meson.build\n@@ -2,6 +2,7 @@\n \n libcamera_sources += files([\n     'delayed_controls.cpp',\n+    'pipeline_base.cpp',\n     'rpi_stream.cpp',\n ])\n \ndiff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp\nnew file mode 100644\nindex 000000000000..012766b38c32\n--- /dev/null\n+++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp\n@@ -0,0 +1,1447 @@\n+/* SPDX-License-Identifier: LGPL-2.1-or-later */\n+/*\n+ * Copyright (C) 2019-2023, Raspberry Pi Ltd\n+ *\n+ * pipeline_base.cpp - Pipeline handler base class for Raspberry Pi devices\n+ */\n+\n+#include \"pipeline_base.h\"\n+\n+#include <chrono>\n+\n+#include <linux/media-bus-format.h>\n+#include <linux/videodev2.h>\n+\n+#include <libcamera/base/file.h>\n+#include <libcamera/base/utils.h>\n+\n+#include <libcamera/formats.h>\n+#include <libcamera/logging.h>\n+#include <libcamera/property_ids.h>\n+\n+#include \"libcamera/internal/camera_lens.h\"\n+#include \"libcamera/internal/ipa_manager.h\"\n+#include \"libcamera/internal/v4l2_subdevice.h\"\n+\n+using namespace std::chrono_literals;\n+\n+namespace libcamera {\n+\n+using namespace RPi;\n+\n+LOG_DEFINE_CATEGORY(RPI)\n+\n+namespace {\n+\n+constexpr unsigned int defaultRawBitDepth = 12;\n+\n+bool isRaw(const PixelFormat &pixFmt)\n+{\n+\t/* This test works for both Bayer and raw mono formats. 
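For illustration (hypothetical examples), a Bayer format such as formats::SRGGB12 and a mono format such as formats::R12 would both be treated as raw here, while formats::NV12 would not. 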
*/\n+\treturn BayerFormat::fromPixelFormat(pixFmt).isValid();\n+}\n+\n+PixelFormat mbusCodeToPixelFormat(unsigned int mbus_code,\n+\t\t\t\t  BayerFormat::Packing packingReq)\n+{\n+\tBayerFormat bayer = BayerFormat::fromMbusCode(mbus_code);\n+\n+\tASSERT(bayer.isValid());\n+\n+\tbayer.packing = packingReq;\n+\tPixelFormat pix = bayer.toPixelFormat();\n+\n+\t/*\n+\t * Not all formats (e.g. 8-bit or 16-bit Bayer formats) can have packed\n+\t * variants. So if the PixelFormat returns as invalid, use the non-packed\n+\t * conversion instead.\n+\t */\n+\tif (!pix.isValid()) {\n+\t\tbayer.packing = BayerFormat::Packing::None;\n+\t\tpix = bayer.toPixelFormat();\n+\t}\n+\n+\treturn pix;\n+}\n+\n+SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor> &sensor)\n+{\n+\tSensorFormats formats;\n+\n+\tfor (auto const mbusCode : sensor->mbusCodes())\n+\t\tformats.emplace(mbusCode, sensor->sizes(mbusCode));\n+\n+\treturn formats;\n+}\n+\n+bool isMonoSensor(std::unique_ptr<CameraSensor> &sensor)\n+{\n+\tunsigned int mbusCode = sensor->mbusCodes()[0];\n+\tconst BayerFormat &bayer = BayerFormat::fromMbusCode(mbusCode);\n+\n+\treturn bayer.order == BayerFormat::Order::MONO;\n+}\n+\n+double scoreFormat(double desired, double actual)\n+{\n+\tdouble score = desired - actual;\n+\t/* Smaller desired dimensions are preferred. */\n+\tif (score < 0.0)\n+\t\tscore = (-score) / 8;\n+\t/* Penalise non-exact matches. */\n+\tif (actual != desired)\n+\t\tscore *= 2;\n+\n+\treturn score;\n+}\n+\n+V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req, unsigned int bitDepth)\n+{\n+\tdouble bestScore = std::numeric_limits<double>::max(), score;\n+\tV4L2SubdeviceFormat bestFormat;\n+\tbestFormat.colorSpace = ColorSpace::Raw;\n+\n+\tconstexpr float penaltyAr = 1500.0;\n+\tconstexpr float penaltyBitDepth = 500.0;\n+\n+\t/* Calculate the closest/best mode from the user requested size. */\n+\tfor (const auto &iter : formatsMap) {\n+\t\tconst unsigned int mbusCode = iter.first;\n+\t\tconst PixelFormat format = mbusCodeToPixelFormat(mbusCode,\n+\t\t\t\t\t\t\t\t BayerFormat::Packing::None);\n+\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(format);\n+\n+\t\tfor (const Size &size : iter.second) {\n+\t\t\tdouble reqAr = static_cast<double>(req.width) / req.height;\n+\t\t\tdouble fmtAr = static_cast<double>(size.width) / size.height;\n+\n+\t\t\t/* Score the dimensions for closeness. */\n+\t\t\tscore = scoreFormat(req.width, size.width);\n+\t\t\tscore += scoreFormat(req.height, size.height);\n+\t\t\tscore += penaltyAr * scoreFormat(reqAr, fmtAr);\n+\n+\t\t\t/* Add any penalties... this is not an exact science! 
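As a hypothetical worked example, a 1920x1080 request scored against a 1280x720 mode of the same bit depth accrues (640 + 360) * 2 = 2000 from the dimension terms alone (the aspect ratios match), so an exact 1920x1080 mode, scoring 0, wins. 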
*/\n+\t\t\tscore += utils::abs_diff(info.bitsPerPixel, bitDepth) * penaltyBitDepth;\n+\n+\t\t\tif (score <= bestScore) {\n+\t\t\t\tbestScore = score;\n+\t\t\t\tbestFormat.mbus_code = mbusCode;\n+\t\t\t\tbestFormat.size = size;\n+\t\t\t}\n+\n+\t\t\tLOG(RPI, Debug) << \"Format: \" << size\n+\t\t\t\t\t<< \" fmt \" << format\n+\t\t\t\t\t<< \" Score: \" << score\n+\t\t\t\t\t<< \" (best \" << bestScore << \")\";\n+\t\t}\n+\t}\n+\n+\treturn bestFormat;\n+}\n+\n+const std::vector<ColorSpace> validColorSpaces = {\n+\tColorSpace::Sycc,\n+\tColorSpace::Smpte170m,\n+\tColorSpace::Rec709\n+};\n+\n+std::optional<ColorSpace> findValidColorSpace(const ColorSpace &colourSpace)\n+{\n+\tfor (auto cs : validColorSpaces) {\n+\t\tif (colourSpace.primaries == cs.primaries &&\n+\t\t    colourSpace.transferFunction == cs.transferFunction)\n+\t\t\treturn cs;\n+\t}\n+\n+\treturn std::nullopt;\n+}\n+\n+bool isRgb(const PixelFormat &pixFmt)\n+{\n+\tconst PixelFormatInfo &info = PixelFormatInfo::info(pixFmt);\n+\treturn info.colourEncoding == PixelFormatInfo::ColourEncodingRGB;\n+}\n+\n+bool isYuv(const PixelFormat &pixFmt)\n+{\n+\t/* The code below would return true for raw mono streams, so weed those out first. */\n+\tif (isRaw(pixFmt))\n+\t\treturn false;\n+\n+\tconst PixelFormatInfo &info = PixelFormatInfo::info(pixFmt);\n+\treturn info.colourEncoding == PixelFormatInfo::ColourEncodingYUV;\n+}\n+\n+} /* namespace */\n+\n+/*\n+ * Raspberry Pi drivers expect the following colour spaces:\n+ * - V4L2_COLORSPACE_RAW for raw streams.\n+ * - One of V4L2_COLORSPACE_JPEG, V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_REC709 for\n+ *   non-raw streams. Other fields such as transfer function, YCbCr encoding and\n+ *   quantisation are not used.\n+ *\n+ * The libcamera colour spaces that we wish to use corresponding to these are therefore:\n+ * - ColorSpace::Raw for V4L2_COLORSPACE_RAW\n+ * - ColorSpace::Sycc for V4L2_COLORSPACE_JPEG\n+ * - ColorSpace::Smpte170m for V4L2_COLORSPACE_SMPTE170M\n+ * - ColorSpace::Rec709 for V4L2_COLORSPACE_REC709\n+ */\n+CameraConfiguration::Status RPiCameraConfiguration::validateColorSpaces([[maybe_unused]] ColorSpaceFlags flags)\n+{\n+\tStatus status = Valid;\n+\tyuvColorSpace_.reset();\n+\n+\tfor (auto cfg : config_) {\n+\t\t/* First fix up raw streams to have the \"raw\" colour space. */\n+\t\tif (isRaw(cfg.pixelFormat)) {\n+\t\t\t/* If there was no value here, that doesn't count as \"adjusted\". */\n+\t\t\tif (cfg.colorSpace && cfg.colorSpace != ColorSpace::Raw)\n+\t\t\t\tstatus = Adjusted;\n+\t\t\tcfg.colorSpace = ColorSpace::Raw;\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\t/* Next we need to find our shared colour space. The first valid one will do. */\n+\t\tif (cfg.colorSpace && !yuvColorSpace_)\n+\t\t\tyuvColorSpace_ = findValidColorSpace(cfg.colorSpace.value());\n+\t}\n+\n+\t/* If no colour space was given anywhere, choose sYCC. */\n+\tif (!yuvColorSpace_)\n+\t\tyuvColorSpace_ = ColorSpace::Sycc;\n+\n+\t/* Note the version of this that any RGB streams will have to use. */\n+\trgbColorSpace_ = yuvColorSpace_;\n+\trgbColorSpace_->ycbcrEncoding = ColorSpace::YcbcrEncoding::None;\n+\trgbColorSpace_->range = ColorSpace::Range::Full;\n+\n+\t/* Go through the streams again and force everyone to the same colour space. */\n+\tfor (auto cfg : config_) {\n+\t\tif (cfg.colorSpace == ColorSpace::Raw)\n+\t\t\tcontinue;\n+\n+\t\tif (isYuv(cfg.pixelFormat) && cfg.colorSpace != yuvColorSpace_) {\n+\t\t\t/* Again, no value means \"not adjusted\". 
*/\n+\t\t\tif (cfg.colorSpace)\n+\t\t\t\tstatus = Adjusted;\n+\t\t\tcfg.colorSpace = yuvColorSpace_;\n+\t\t}\n+\t\tif (isRgb(cfg.pixelFormat) && cfg.colorSpace != rgbColorSpace_) {\n+\t\t\t/* Be nice, and let the YUV version count as non-adjusted too. */\n+\t\t\tif (cfg.colorSpace && cfg.colorSpace != yuvColorSpace_)\n+\t\t\t\tstatus = Adjusted;\n+\t\t\tcfg.colorSpace = rgbColorSpace_;\n+\t\t}\n+\t}\n+\n+\treturn status;\n+}\n+\n+CameraConfiguration::Status RPiCameraConfiguration::validate()\n+{\n+\tStatus status = Valid;\n+\n+\tif (config_.empty())\n+\t\treturn Invalid;\n+\n+\tstatus = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);\n+\n+\t/*\n+\t * Validate the requested transform against the sensor capabilities and\n+\t * rotation and store the final combined transform that configure() will\n+\t * need to apply to the sensor to save us working it out again.\n+\t */\n+\tTransform requestedTransform = transform;\n+\tcombinedTransform_ = data_->sensor_->validateTransform(&transform);\n+\tif (transform != requestedTransform)\n+\t\tstatus = Adjusted;\n+\n+\tstd::vector<CameraData::StreamParams> rawStreams, outStreams;\n+\tfor (const auto &[index, cfg] : utils::enumerate(config_)) {\n+\t\tif (isRaw(cfg.pixelFormat))\n+\t\t\trawStreams.emplace_back(index, &cfg);\n+\t\telse\n+\t\t\toutStreams.emplace_back(index, &cfg);\n+\t}\n+\n+\t/* Sort the streams so the highest resolution is first. */\n+\tstd::sort(rawStreams.begin(), rawStreams.end(),\n+\t\t  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });\n+\n+\tstd::sort(outStreams.begin(), outStreams.end(),\n+\t\t  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });\n+\n+\t/* Do any platform specific fixups. */\n+\tstatus = data_->platformValidate(rawStreams, outStreams);\n+\tif (status == Invalid)\n+\t\treturn Invalid;\n+\n+\t/* Further fixups on the RAW streams. */\n+\tfor (auto &raw : rawStreams) {\n+\t\tStreamConfiguration &cfg = config_.at(raw.index);\n+\t\tV4L2DeviceFormat rawFormat;\n+\n+\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(cfg.pixelFormat);\n+\t\tunsigned int bitDepth = info.isValid() ? info.bitsPerPixel : defaultRawBitDepth;\n+\t\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size, bitDepth);\n+\n+\t\trawFormat.size = sensorFormat.size;\n+\t\trawFormat.fourcc = raw.dev->toV4L2PixelFormat(cfg.pixelFormat);\n+\n+\t\tint ret = raw.dev->tryFormat(&rawFormat);\n+\t\tif (ret)\n+\t\t\treturn Invalid;\n+\t\t/*\n+\t\t * Some sensors change their Bayer order when they are h-flipped\n+\t\t * or v-flipped, according to the transform. If this one does, we\n+\t\t * must advertise the transformed Bayer order in the raw stream.\n+\t\t * Note how we must fetch the \"native\" (i.e. 
untransformed) Bayer\n+\t\t * order, because the sensor may currently be flipped!\n+\t\t */\n+\t\tV4L2PixelFormat fourcc = rawFormat.fourcc;\n+\t\tif (data_->flipsAlterBayerOrder_) {\n+\t\t\tBayerFormat bayer = BayerFormat::fromV4L2PixelFormat(fourcc);\n+\t\t\tbayer.order = data_->nativeBayerOrder_;\n+\t\t\tbayer = bayer.transform(combinedTransform_);\n+\t\t\tfourcc = bayer.toV4L2PixelFormat();\n+\t\t}\n+\n+\t\tPixelFormat inputPixFormat = fourcc.toPixelFormat();\n+\t\tif (raw.cfg->size != rawFormat.size || raw.cfg->pixelFormat != inputPixFormat) {\n+\t\t\traw.cfg->size = rawFormat.size;\n+\t\t\traw.cfg->pixelFormat = inputPixFormat;\n+\t\t\tstatus = Adjusted;\n+\t\t}\n+\n+\t\traw.cfg->stride = rawFormat.planes[0].bpl;\n+\t\traw.cfg->frameSize = rawFormat.planes[0].size;\n+\t}\n+\n+\t/* Further fixups on the ISP output streams. */\n+\tfor (auto &out : outStreams) {\n+\t\tStreamConfiguration &cfg = config_.at(out.index);\n+\t\tPixelFormat &cfgPixFmt = cfg.pixelFormat;\n+\t\tV4L2VideoDevice::Formats fmts = out.dev->formats();\n+\n+\t\tif (fmts.find(out.dev->toV4L2PixelFormat(cfgPixFmt)) == fmts.end()) {\n+\t\t\t/* If we cannot find a native format, use a default one. */\n+\t\t\tcfgPixFmt = formats::NV12;\n+\t\t\tstatus = Adjusted;\n+\t\t}\n+\n+\t\tV4L2DeviceFormat format;\n+\t\tformat.fourcc = out.dev->toV4L2PixelFormat(cfg.pixelFormat);\n+\t\tformat.size = cfg.size;\n+\t\t/* We want to send the associated YCbCr info through to the driver. */\n+\t\tformat.colorSpace = yuvColorSpace_;\n+\n+\t\tLOG(RPI, Debug)\n+\t\t\t<< \"Try color space \" << ColorSpace::toString(cfg.colorSpace);\n+\n+\t\tint ret = out.dev->tryFormat(&format);\n+\t\tif (ret)\n+\t\t\treturn Invalid;\n+\n+\t\t/*\n+\t\t * But for RGB streams, the YCbCr info gets overwritten on the way back\n+\t\t * so we must check against what the stream cfg says, not what we actually\n+\t\t * requested (which carefully included the YCbCr info)!\n+\t\t */\n+\t\tif (cfg.colorSpace != format.colorSpace) {\n+\t\t\tstatus = Adjusted;\n+\t\t\tLOG(RPI, Debug)\n+\t\t\t\t<< \"Color space changed from \"\n+\t\t\t\t<< ColorSpace::toString(cfg.colorSpace) << \" to \"\n+\t\t\t\t<< ColorSpace::toString(format.colorSpace);\n+\t\t}\n+\n+\t\tcfg.colorSpace = format.colorSpace;\n+\t\tcfg.stride = format.planes[0].bpl;\n+\t\tcfg.frameSize = format.planes[0].size;\n+\t}\n+\n+\treturn status;\n+}\n+\n+V4L2DeviceFormat PipelineHandlerBase::toV4L2DeviceFormat(const V4L2VideoDevice *dev,\n+\t\t\t\t\t\t\t const V4L2SubdeviceFormat &format,\n+\t\t\t\t\t\t\t BayerFormat::Packing packingReq)\n+{\n+\tunsigned int mbus_code = format.mbus_code;\n+\tconst PixelFormat pix = mbusCodeToPixelFormat(mbus_code, packingReq);\n+\tV4L2DeviceFormat deviceFormat;\n+\n+\tdeviceFormat.fourcc = dev->toV4L2PixelFormat(pix);\n+\tdeviceFormat.size = format.size;\n+\tdeviceFormat.colorSpace = format.colorSpace;\n+\treturn deviceFormat;\n+}\n+\n+std::unique_ptr<CameraConfiguration>\n+PipelineHandlerBase::generateConfiguration(Camera *camera, const StreamRoles &roles)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tstd::unique_ptr<CameraConfiguration> config =\n+\t\tstd::make_unique<RPiCameraConfiguration>(data);\n+\tV4L2SubdeviceFormat sensorFormat;\n+\tunsigned int bufferCount;\n+\tPixelFormat pixelFormat;\n+\tV4L2VideoDevice::Formats fmts;\n+\tSize size;\n+\tstd::optional<ColorSpace> colorSpace;\n+\n+\tif (roles.empty())\n+\t\treturn config;\n+\n+\tSize sensorSize = data->sensor_->resolution();\n+\tfor (const StreamRole role : roles) {\n+\t\tswitch (role) {\n+\t\tcase 
StreamRole::Raw:\n+\t\t\tsize = sensorSize;\n+\t\t\tsensorFormat = findBestFormat(data->sensorFormats_, size, defaultRawBitDepth);\n+\t\t\tpixelFormat = mbusCodeToPixelFormat(sensorFormat.mbus_code,\n+\t\t\t\t\t\t\t    BayerFormat::Packing::CSI2);\n+\t\t\tASSERT(pixelFormat.isValid());\n+\t\t\tcolorSpace = ColorSpace::Raw;\n+\t\t\tbufferCount = 2;\n+\t\t\tbreak;\n+\n+\t\tcase StreamRole::StillCapture:\n+\t\t\tfmts = data->ispFormats();\n+\t\t\tpixelFormat = formats::NV12;\n+\t\t\t/*\n+\t\t\t * Still image codecs usually expect the sYCC color space.\n+\t\t\t * Even RGB codecs will be fine as the RGB we get with the\n+\t\t\t * sYCC color space is the same as sRGB.\n+\t\t\t */\n+\t\t\tcolorSpace = ColorSpace::Sycc;\n+\t\t\t/* Return the largest sensor resolution. */\n+\t\t\tsize = sensorSize;\n+\t\t\tbufferCount = 1;\n+\t\t\tbreak;\n+\n+\t\tcase StreamRole::VideoRecording:\n+\t\t\t/*\n+\t\t\t * The colour denoise algorithm requires the analysis\n+\t\t\t * image, produced by the second ISP output, to be in\n+\t\t\t * YUV420 format. Select this format as the default, to\n+\t\t\t * maximize chances that it will be picked by\n+\t\t\t * applications and enable usage of the colour denoise\n+\t\t\t * algorithm.\n+\t\t\t */\n+\t\t\tfmts = data->ispFormats();\n+\t\t\tpixelFormat = formats::YUV420;\n+\t\t\t/*\n+\t\t\t * Choose a color space appropriate for video recording.\n+\t\t\t * Rec.709 will be a good default for HD resolutions.\n+\t\t\t */\n+\t\t\tcolorSpace = ColorSpace::Rec709;\n+\t\t\tsize = { 1920, 1080 };\n+\t\t\tbufferCount = 4;\n+\t\t\tbreak;\n+\n+\t\tcase StreamRole::Viewfinder:\n+\t\t\tfmts = data->ispFormats();\n+\t\t\tpixelFormat = formats::ARGB8888;\n+\t\t\tcolorSpace = ColorSpace::Sycc;\n+\t\t\tsize = { 800, 600 };\n+\t\t\tbufferCount = 4;\n+\t\t\tbreak;\n+\n+\t\tdefault:\n+\t\t\tLOG(RPI, Error) << \"Requested stream role not supported: \"\n+\t\t\t\t\t<< role;\n+\t\t\treturn nullptr;\n+\t\t}\n+\n+\t\tstd::map<PixelFormat, std::vector<SizeRange>> deviceFormats;\n+\t\tif (role == StreamRole::Raw) {\n+\t\t\t/* Translate the MBUS codes to a PixelFormat. */\n+\t\t\tfor (const auto &format : data->sensorFormats_) {\n+\t\t\t\tPixelFormat pf = mbusCodeToPixelFormat(format.first,\n+\t\t\t\t\t\t\t\t       BayerFormat::Packing::CSI2);\n+\t\t\t\tif (pf.isValid())\n+\t\t\t\t\tdeviceFormats.emplace(std::piecewise_construct, std::forward_as_tuple(pf),\n+\t\t\t\t\t\t\t      std::forward_as_tuple(format.second.begin(), format.second.end()));\n+\t\t\t}\n+\t\t} else {\n+\t\t\t/*\n+\t\t\t * Translate the V4L2PixelFormat to PixelFormat. Note that we\n+\t\t\t * limit the recommended largest ISP output size to match the\n+\t\t\t * sensor resolution.\n+\t\t\t */\n+\t\t\tfor (const auto &format : fmts) {\n+\t\t\t\tPixelFormat pf = format.first.toPixelFormat();\n+\t\t\t\tif (pf.isValid()) {\n+\t\t\t\t\tconst SizeRange &ispSizes = format.second[0];\n+\t\t\t\t\tdeviceFormats[pf].emplace_back(ispSizes.min, sensorSize,\n+\t\t\t\t\t\t\t\t       ispSizes.hStep, ispSizes.vStep);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t}\n+\n+\t\t/* Add the stream format based on the device node used for the use case. 
*/\n+\t\tStreamFormats formats(deviceFormats);\n+\t\tStreamConfiguration cfg(formats);\n+\t\tcfg.size = size;\n+\t\tcfg.pixelFormat = pixelFormat;\n+\t\tcfg.colorSpace = colorSpace;\n+\t\tcfg.bufferCount = bufferCount;\n+\t\tconfig->addConfiguration(cfg);\n+\t}\n+\n+\tconfig->validate();\n+\n+\treturn config;\n+}\n+\n+int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tint ret;\n+\n+\t/* Start by freeing all buffers and resetting the stream states. */\n+\tdata->freeBuffers();\n+\tfor (auto const stream : data->streams_)\n+\t\tstream->setExternal(false);\n+\n+\tstd::vector<CameraData::StreamParams> rawStreams, ispStreams;\n+\tstd::optional<BayerFormat::Packing> packing;\n+\tunsigned int bitDepth = defaultRawBitDepth;\n+\n+\tfor (unsigned i = 0; i < config->size(); i++) {\n+\t\tStreamConfiguration *cfg = &config->at(i);\n+\n+\t\tif (isRaw(cfg->pixelFormat))\n+\t\t\trawStreams.emplace_back(i, cfg);\n+\t\telse\n+\t\t\tispStreams.emplace_back(i, cfg);\n+\t}\n+\n+\t/* Sort the streams so the highest resolution is first. */\n+\tstd::sort(rawStreams.begin(), rawStreams.end(),\n+\t\t  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });\n+\n+\tstd::sort(ispStreams.begin(), ispStreams.end(),\n+\t\t  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });\n+\n+\t/*\n+\t * Calculate the best sensor mode we can use based on the user's request,\n+\t * and apply it to the sensor with the cached transform, if any.\n+\t *\n+\t * If we have been given a RAW stream, use that size for setting up the sensor.\n+\t */\n+\tif (!rawStreams.empty()) {\n+\t\tBayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);\n+\t\t/* Replace the user-requested packing/bit-depth. */\n+\t\tpacking = bayerFormat.packing;\n+\t\tbitDepth = bayerFormat.bitDepth;\n+\t}\n+\n+\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_,\n+\t\t\t\t\t\t\t  rawStreams.empty() ? ispStreams[0].cfg->size\n+\t\t\t\t\t\t\t\t\t     : rawStreams[0].cfg->size,\n+\t\t\t\t\t\t\t  bitDepth);\n+\t/* Apply any cached transform. */\n+\tconst RPiCameraConfiguration *rpiConfig = static_cast<const RPiCameraConfiguration *>(config);\n+\n+\t/* Then apply the format on the sensor. */\n+\tret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Platform specific internal stream configuration. This also assigns\n+\t * external streams which get configured below.\n+\t */\n+\tret = data->platformConfigure(sensorFormat, packing, rawStreams, ispStreams);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tipa::RPi::ConfigResult result;\n+\tret = data->configureIPA(config, &result);\n+\tif (ret) {\n+\t\tLOG(RPI, Error) << \"Failed to configure the IPA: \" << ret;\n+\t\treturn ret;\n+\t}\n+\n+\t/*\n+\t * Set the scaler crop to the value we are using (scaled to native sensor\n+\t * coordinates).\n+\t */\n+\tdata->scalerCrop_ = data->scaleIspCrop(data->ispCrop_);\n+\n+\t/*\n+\t * Update the ScalerCropMaximum to the correct value for this camera mode.\n+\t * For us, it's the same as the \"analogue crop\".\n+\t *\n+\t * \\todo Make this property the ScalerCrop maximum value when dynamic\n+\t * controls are available and set it at validate() time\n+\t */\n+\tdata->properties_.set(properties::ScalerCropMaximum, data->sensorInfo_.analogCrop);\n+\n+\t/* Store the mode sensitivity for the application. 
*/\n+\tdata->properties_.set(properties::SensorSensitivity, result.modeSensitivity);\n+\n+\t/* Update the controls that the Raspberry Pi IPA can handle. */\n+\tControlInfoMap::Map ctrlMap;\n+\tfor (auto const &c : result.controlInfo)\n+\t\tctrlMap.emplace(c.first, c.second);\n+\n+\t/* Add the ScalerCrop control limits based on the current mode. */\n+\tRectangle ispMinCrop = data->scaleIspCrop(Rectangle(data->ispMinCropSize_));\n+\tctrlMap[&controls::ScalerCrop] = ControlInfo(ispMinCrop, data->sensorInfo_.analogCrop, data->scalerCrop_);\n+\n+\tdata->controlInfo_ = ControlInfoMap(std::move(ctrlMap), result.controlInfo.idmap());\n+\n+\t/* Setup the Video Mux/Bridge entities. */\n+\tfor (auto &[device, link] : data->bridgeDevices_) {\n+\t\t/*\n+\t\t * Start by disabling all the sink pad links on the devices in the\n+\t\t * cascade, with the exception of the link connecting the device.\n+\t\t */\n+\t\tfor (const MediaPad *p : device->entity()->pads()) {\n+\t\t\tif (!(p->flags() & MEDIA_PAD_FL_SINK))\n+\t\t\t\tcontinue;\n+\n+\t\t\tfor (MediaLink *l : p->links()) {\n+\t\t\t\tif (l != link)\n+\t\t\t\t\tl->setEnabled(false);\n+\t\t\t}\n+\t\t}\n+\n+\t\t/*\n+\t\t * Next, enable the entity -> entity links, and set up the pad format.\n+\t\t *\n+\t\t * \\todo Some bridge devices may change the media bus code, so we\n+\t\t * ought to read the source pad format and propagate it to the sink pad.\n+\t\t */\n+\t\tlink->setEnabled(true);\n+\t\tconst MediaPad *sinkPad = link->sink();\n+\t\tret = device->setFormat(sinkPad->index(), &sensorFormat);\n+\t\tif (ret) {\n+\t\t\tLOG(RPI, Error) << \"Failed to set format on \" << device->entity()->name()\n+\t\t\t\t\t<< \" pad \" << sinkPad->index()\n+\t\t\t\t\t<< \" with format \" << sensorFormat\n+\t\t\t\t\t<< \": \" << ret;\n+\t\t\treturn ret;\n+\t\t}\n+\n+\t\tLOG(RPI, Debug) << \"Configured media link on device \" << device->entity()->name()\n+\t\t\t\t<< \" on pad \" << sinkPad->index();\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int PipelineHandlerBase::exportFrameBuffers([[maybe_unused]] Camera *camera, libcamera::Stream *stream,\n+\t\t\t\t\t    std::vector<std::unique_ptr<FrameBuffer>> *buffers)\n+{\n+\tRPi::Stream *s = static_cast<RPi::Stream *>(stream);\n+\tunsigned int count = stream->configuration().bufferCount;\n+\tint ret = s->dev()->exportBuffers(count, buffers);\n+\n+\ts->setExportedBuffers(buffers);\n+\n+\treturn ret;\n+}\n+\n+int PipelineHandlerBase::start(Camera *camera, const ControlList *controls)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tint ret;\n+\n+\t/* Check if a ScalerCrop control was specified. */\n+\tif (controls)\n+\t\tdata->calculateScalerCrop(*controls);\n+\n+\t/* Start the IPA. */\n+\tipa::RPi::StartResult result;\n+\tdata->ipa_->start(controls ? *controls : ControlList{ controls::controls },\n+\t\t\t  &result);\n+\n+\t/* Apply any gain/exposure settings that the IPA may have passed back. */\n+\tif (!result.controls.empty())\n+\t\tdata->setSensorControls(result.controls);\n+\n+\t/* Configure the number of dropped frames required on startup. */\n+\tdata->dropFrameCount_ = data->config_.disableStartupFrameDrops ? 0\n+\t\t\t\t\t\t\t\t       : result.dropFrameCount;\n+\n+\tfor (auto const stream : data->streams_)\n+\t\tstream->resetBuffers();\n+\n+\tif (!data->buffersAllocated_) {\n+\t\t/* Allocate buffers for internal pipeline usage. 
*/\n+\t\tret = prepareBuffers(camera);\n+\t\tif (ret) {\n+\t\t\tLOG(RPI, Error) << \"Failed to allocate buffers\";\n+\t\t\tdata->freeBuffers();\n+\t\t\tstop(camera);\n+\t\t\treturn ret;\n+\t\t}\n+\t\tdata->buffersAllocated_ = true;\n+\t}\n+\n+\t/* We need to set the dropFrameCount_ before queueing buffers. */\n+\tret = queueAllBuffers(camera);\n+\tif (ret) {\n+\t\tLOG(RPI, Error) << \"Failed to queue buffers\";\n+\t\tstop(camera);\n+\t\treturn ret;\n+\t}\n+\n+\t/*\n+\t * Reset the delayed controls with the gain and exposure values set by\n+\t * the IPA.\n+\t */\n+\tdata->delayedCtrls_->reset(0);\n+\tdata->state_ = CameraData::State::Idle;\n+\n+\t/* Enable SOF event generation. */\n+\tdata->frontendDevice()->setFrameStartEnabled(true);\n+\n+\tdata->platformStart();\n+\n+\t/* Start all streams. */\n+\tfor (auto const stream : data->streams_) {\n+\t\tret = stream->dev()->streamOn();\n+\t\tif (ret) {\n+\t\t\tstop(camera);\n+\t\t\treturn ret;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+void PipelineHandlerBase::stopDevice(Camera *camera)\n+{\n+\tCameraData *data = cameraData(camera);\n+\n+\tdata->state_ = CameraData::State::Stopped;\n+\tdata->platformStop();\n+\n+\tfor (auto const stream : data->streams_)\n+\t\tstream->dev()->streamOff();\n+\n+\t/* Disable SOF event generation. */\n+\tdata->frontendDevice()->setFrameStartEnabled(false);\n+\n+\tdata->clearIncompleteRequests();\n+\n+\t/* Stop the IPA. */\n+\tdata->ipa_->stop();\n+}\n+\n+void PipelineHandlerBase::releaseDevice(Camera *camera)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tdata->freeBuffers();\n+}\n+\n+int PipelineHandlerBase::queueRequestDevice(Camera *camera, Request *request)\n+{\n+\tCameraData *data = cameraData(camera);\n+\n+\tif (!data->isRunning())\n+\t\treturn -EINVAL;\n+\n+\tLOG(RPI, Debug) << \"queueRequestDevice: New request.\";\n+\n+\t/* Push all buffers supplied in the Request to the respective streams. */\n+\tfor (auto stream : data->streams_) {\n+\t\tif (!stream->isExternal())\n+\t\t\tcontinue;\n+\n+\t\tFrameBuffer *buffer = request->findBuffer(stream);\n+\t\tif (buffer && !stream->getBufferId(buffer)) {\n+\t\t\t/*\n+\t\t\t * This buffer is not recognised, so it must have been allocated\n+\t\t\t * outside the v4l2 device. Store it in the stream buffer list\n+\t\t\t * so we can track it.\n+\t\t\t */\n+\t\t\tstream->setExternalBuffer(buffer);\n+\t\t}\n+\n+\t\t/*\n+\t\t * If no buffer is provided by the request for this stream, we\n+\t\t * queue a nullptr to the stream to signify that it must use an\n+\t\t * internally allocated buffer for this capture request. This\n+\t\t * buffer will not be given back to the application, but is used\n+\t\t * to support the internal pipeline flow.\n+\t\t *\n+\t\t * The below queueBuffer() call will do nothing if there are not\n+\t\t * enough internal buffers allocated, but this will be handled by\n+\t\t * queuing the request for buffers in the RPiStream object.\n+\t\t */\n+\t\tint ret = stream->queueBuffer(buffer);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\t/* Push the request to the back of the queue. 
*/\n+\tdata->requestQueue_.push(request);\n+\tdata->handleState();\n+\n+\treturn 0;\n+}\n+\n+int PipelineHandlerBase::registerCamera(MediaDevice *frontend, const std::string &frontendName,\n+\t\t\t\t\tMediaDevice *backend, MediaEntity *sensorEntity)\n+{\n+\tstd::unique_ptr<CameraData> cameraData = allocateCameraData();\n+\tCameraData *data = cameraData.get();\n+\tint ret;\n+\n+\tdata->sensor_ = std::make_unique<CameraSensor>(sensorEntity);\n+\tif (!data->sensor_)\n+\t\treturn -EINVAL;\n+\n+\tif (data->sensor_->init())\n+\t\treturn -EINVAL;\n+\n+\tdata->sensorFormats_ = populateSensorFormats(data->sensor_);\n+\n+\t/*\n+\t * Enumerate all the Video Mux/Bridge devices across the sensor -> frontend\n+\t * chain. There may be a cascade of devices in this chain!\n+\t */\n+\tMediaLink *link = sensorEntity->getPadByIndex(0)->links()[0];\n+\tdata->enumerateVideoDevices(link, frontendName);\n+\n+\tipa::RPi::InitResult result;\n+\tif (data->loadIPA(&result)) {\n+\t\tLOG(RPI, Error) << \"Failed to load a suitable IPA library\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t * Set up our delayed control writer with the sensor default\n+\t * gain and exposure delays. Mark VBLANK for priority write.\n+\t */\n+\tstd::unordered_map<uint32_t, RPi::DelayedControls::ControlParams> params = {\n+\t\t{ V4L2_CID_ANALOGUE_GAIN, { result.sensorConfig.gainDelay, false } },\n+\t\t{ V4L2_CID_EXPOSURE, { result.sensorConfig.exposureDelay, false } },\n+\t\t{ V4L2_CID_HBLANK, { result.sensorConfig.hblankDelay, false } },\n+\t\t{ V4L2_CID_VBLANK, { result.sensorConfig.vblankDelay, true } }\n+\t};\n+\tdata->delayedCtrls_ = std::make_unique<RPi::DelayedControls>(data->sensor_->device(), params);\n+\tdata->sensorMetadata_ = result.sensorConfig.sensorMetadata;\n+\n+\t/* Register initial controls that the Raspberry Pi IPA can handle. */\n+\tdata->controlInfo_ = std::move(result.controlInfo);\n+\n+\t/* Initialize the camera properties. */\n+\tdata->properties_ = data->sensor_->properties();\n+\n+\t/*\n+\t * The V4L2_CID_NOTIFY_GAINS control, if present, is used to inform the\n+\t * sensor of the colour gains. It is defined to be a linear gain where\n+\t * the default value represents a gain of exactly one.\n+\t */\n+\tauto it = data->sensor_->controls().find(V4L2_CID_NOTIFY_GAINS);\n+\tif (it != data->sensor_->controls().end())\n+\t\tdata->notifyGainsUnity_ = it->second.def().get<int32_t>();\n+\n+\t/*\n+\t * Set a default value for the ScalerCropMaximum property to show\n+\t * that we support its use; however, initialise it to zero because\n+\t * it's not meaningful until a camera mode has been chosen.\n+\t */\n+\tdata->properties_.set(properties::ScalerCropMaximum, Rectangle{});\n+\n+\t/*\n+\t * We cache two things about the sensor in relation to transforms\n+\t * (meaning horizontal and vertical flips): whether they affect the\n+\t * Bayer ordering, and what the \"native\" Bayer order is when no\n+\t * transforms are applied.\n+\t *\n+\t * We note that the sensor's cached list of supported formats is\n+\t * already in the \"native\" order, with any flips having been undone.\n+\t */\n+\tconst V4L2Subdevice *sensor = data->sensor_->device();\n+\tconst struct v4l2_query_ext_ctrl *hflipCtrl = sensor->controlInfo(V4L2_CID_HFLIP);\n+\tif (hflipCtrl) {\n+\t\t/* We assume it will support vflips too... 
*/\n+\t\tdata->flipsAlterBayerOrder_ = hflipCtrl->flags & V4L2_CTRL_FLAG_MODIFY_LAYOUT;\n+\t}\n+\n+\t/* Look for a valid Bayer format. */\n+\tBayerFormat bayerFormat;\n+\tfor (const auto &iter : data->sensorFormats_) {\n+\t\tbayerFormat = BayerFormat::fromMbusCode(iter.first);\n+\t\tif (bayerFormat.isValid())\n+\t\t\tbreak;\n+\t}\n+\n+\tif (!bayerFormat.isValid()) {\n+\t\tLOG(RPI, Error) << \"No Bayer format found\";\n+\t\treturn -EINVAL;\n+\t}\n+\tdata->nativeBayerOrder_ = bayerFormat.order;\n+\n+\tret = data->loadPipelineConfiguration();\n+\tif (ret) {\n+\t\tLOG(RPI, Error) << \"Unable to load pipeline configuration\";\n+\t\treturn ret;\n+\t}\n+\n+\tret = platformRegister(cameraData, frontend, backend);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Setup the general IPA signal handlers. */\n+\tdata->frontendDevice()->dequeueTimeout.connect(data, &RPi::CameraData::cameraTimeout);\n+\tdata->frontendDevice()->frameStart.connect(data, &RPi::CameraData::frameStarted);\n+\tdata->ipa_->setDelayedControls.connect(data, &CameraData::setDelayedControls);\n+\tdata->ipa_->setLensControls.connect(data, &CameraData::setLensControls);\n+\n+\treturn 0;\n+}\n+\n+void PipelineHandlerBase::mapBuffers(Camera *camera, const BufferMap &buffers, unsigned int mask)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tstd::vector<IPABuffer> bufferIds;\n+\t/*\n+\t * Link the FrameBuffers with the id (key value) in the map stored in\n+\t * the RPi stream object - along with an identifier mask.\n+\t *\n+\t * This will allow us to identify buffers passed between the pipeline\n+\t * handler and the IPA.\n+\t */\n+\tfor (auto const &it : buffers) {\n+\t\tbufferIds.push_back(IPABuffer(mask | it.first,\n+\t\t\t\t\t      it.second->planes()));\n+\t\tdata->bufferIds_.insert(mask | it.first);\n+\t}\n+\n+\tdata->ipa_->mapBuffers(bufferIds);\n+}\n+\n+int PipelineHandlerBase::queueAllBuffers(Camera *camera)\n+{\n+\tCameraData *data = cameraData(camera);\n+\tint ret;\n+\n+\tfor (auto const stream : data->streams_) {\n+\t\tif (!stream->isExternal()) {\n+\t\t\tret = stream->queueAllBuffers();\n+\t\t\tif (ret < 0)\n+\t\t\t\treturn ret;\n+\t\t} else {\n+\t\t\t/*\n+\t\t\t * For external streams, we must queue up a set of internal\n+\t\t\t * buffers to handle the number of drop frames requested by\n+\t\t\t * the IPA. This is done by passing nullptr in queueBuffer().\n+\t\t\t *\n+\t\t\t * The below queueBuffer() call will do nothing if there\n+\t\t\t * are not enough internal buffers allocated, but this will\n+\t\t\t * be handled by queuing the request for buffers in the\n+\t\t\t * RPiStream object.\n+\t\t\t */\n+\t\t\tunsigned int i;\n+\t\t\tfor (i = 0; i < data->dropFrameCount_; i++) {\n+\t\t\t\tret = stream->queueBuffer(nullptr);\n+\t\t\t\tif (ret)\n+\t\t\t\t\treturn ret;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+void CameraData::freeBuffers()\n+{\n+\tif (ipa_) {\n+\t\t/*\n+\t\t * Copy the buffer ids from the unordered_set to a vector to\n+\t\t * pass to the IPA.\n+\t\t */\n+\t\tstd::vector<unsigned int> bufferIds(bufferIds_.begin(),\n+\t\t\t\t\t\t    bufferIds_.end());\n+\t\tipa_->unmapBuffers(bufferIds);\n+\t\tbufferIds_.clear();\n+\t}\n+\n+\tfor (auto const stream : streams_)\n+\t\tstream->releaseBuffers();\n+\n+\tplatformFreeBuffers();\n+\n+\tbuffersAllocated_ = false;\n+}\n+\n+/*\n+ * enumerateVideoDevices() iterates over the Media Controller topology, starting\n+ * at the sensor and finishing at the frontend. 
For each sensor, CameraData stores\n+ * a unique list of any intermediate video mux or bridge devices connected in a\n+ * cascade, together with the entity-to-entity link.\n+ *\n+ * Entity pad configuration and link enabling happen at the end of configure().\n+ * We first disable all pad links on each entity device in the chain, and then\n+ * selectively enable the specific links that connect the sensor to the frontend\n+ * across all intermediate muxes and bridges.\n+ *\n+ * In the cascaded topology below, if Sensor1 is used, the Mux2 -> Mux1 link\n+ * will be disabled, and Sensor1 -> Mux1 -> Frontend links enabled. Alternatively,\n+ * if Sensor3 is used, the Sensor2 -> Mux2 and Sensor1 -> Mux1 links are disabled,\n+ * and Sensor3 -> Mux2 -> Mux1 -> Frontend links are enabled. All other links will\n+ * remain unchanged.\n+ *\n+ *  +----------+\n+ *  |     FE   |\n+ *  +-----^----+\n+ *        |\n+ *    +---+---+\n+ *    | Mux1  |<------+\n+ *    +--^----        |\n+ *       |            |\n+ * +-----+---+    +---+---+\n+ * | Sensor1 |    |  Mux2 |<--+\n+ * +---------+    +-^-----+   |\n+ *                  |         |\n+ *          +-------+-+   +---+-----+\n+ *          | Sensor2 |   | Sensor3 |\n+ *          +---------+   +---------+\n+ */\n+void CameraData::enumerateVideoDevices(MediaLink *link, const std::string &frontend)\n+{\n+\tconst MediaPad *sinkPad = link->sink();\n+\tconst MediaEntity *entity = sinkPad->entity();\n+\tbool frontendFound = false;\n+\n+\t/* We only deal with Video Mux and Bridge devices in cascade. */\n+\tif (entity->function() != MEDIA_ENT_F_VID_MUX &&\n+\t    entity->function() != MEDIA_ENT_F_VID_IF_BRIDGE)\n+\t\treturn;\n+\n+\t/* Find the source pad for this Video Mux or Bridge device. */\n+\tconst MediaPad *sourcePad = nullptr;\n+\tfor (const MediaPad *pad : entity->pads()) {\n+\t\tif (pad->flags() & MEDIA_PAD_FL_SOURCE) {\n+\t\t\t/*\n+\t\t\t * We can only deal with devices that have a single source\n+\t\t\t * pad. If this device has multiple source pads, ignore it\n+\t\t\t * and this branch in the cascade.\n+\t\t\t */\n+\t\t\tif (sourcePad)\n+\t\t\t\treturn;\n+\n+\t\t\tsourcePad = pad;\n+\t\t}\n+\t}\n+\n+\tLOG(RPI, Debug) << \"Found video mux device \" << entity->name()\n+\t\t\t<< \" linked to sink pad \" << sinkPad->index();\n+\n+\tbridgeDevices_.emplace_back(std::make_unique<V4L2Subdevice>(entity), link);\n+\tbridgeDevices_.back().first->open();\n+\n+\t/*\n+\t * Iterate through all the sink pad links down the cascade to find any\n+\t * other Video Mux and Bridge devices.\n+\t */\n+\tfor (MediaLink *l : sourcePad->links()) {\n+\t\tenumerateVideoDevices(l, frontend);\n+\t\t/* Once we reach the Frontend entity, we are done. */\n+\t\tif (l->sink()->entity()->name() == frontend) {\n+\t\t\tfrontendFound = true;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\t/* This identifies the end of our entity enumeration recursion. 
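(In other words, the source entity of the current link is the camera sensor itself, so we are back in the outermost call.) 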
*/\n+\tif (link->source()->entity()->function() == MEDIA_ENT_F_CAM_SENSOR) {\n+\t\t/*\n+\t\t * If the frontend is not at the end of this cascade, we cannot\n+\t\t * configure this topology automatically, so remove all entity references.\n+\t\t */\n+\t\tif (!frontendFound) {\n+\t\t\tLOG(RPI, Warning) << \"Cannot automatically configure this MC topology!\";\n+\t\t\tbridgeDevices_.clear();\n+\t\t}\n+\t}\n+}\n+\n+int CameraData::loadPipelineConfiguration()\n+{\n+\tconfig_ = {\n+\t\t.disableStartupFrameDrops = false,\n+\t\t.cameraTimeoutValue = 0,\n+\t};\n+\n+\t/* Initial configuration of the platform, in case no config file is present. */\n+\tplatformPipelineConfigure({});\n+\n+\tchar const *configFromEnv = utils::secure_getenv(\"LIBCAMERA_RPI_CONFIG_FILE\");\n+\tif (!configFromEnv || *configFromEnv == '\\0')\n+\t\treturn 0;\n+\n+\tstd::string filename = std::string(configFromEnv);\n+\tFile file(filename);\n+\n+\tif (!file.open(File::OpenModeFlag::ReadOnly)) {\n+\t\tLOG(RPI, Error) << \"Failed to open configuration file '\" << filename << \"'\";\n+\t\treturn -EIO;\n+\t}\n+\n+\tLOG(RPI, Info) << \"Using configuration file '\" << filename << \"'\";\n+\n+\tstd::unique_ptr<YamlObject> root = YamlParser::parse(file);\n+\tif (!root) {\n+\t\tLOG(RPI, Warning) << \"Failed to parse configuration file, using defaults\";\n+\t\treturn 0;\n+\t}\n+\n+\tstd::optional<double> ver = (*root)[\"version\"].get<double>();\n+\tif (!ver || *ver != 1.0) {\n+\t\tLOG(RPI, Error) << \"Unexpected configuration file version reported\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tconst YamlObject &phConfig = (*root)[\"pipeline_handler\"];\n+\n+\tconfig_.disableStartupFrameDrops =\n+\t\tphConfig[\"disable_startup_frame_drops\"].get<bool>(config_.disableStartupFrameDrops);\n+\n+\tconfig_.cameraTimeoutValue =\n+\t\tphConfig[\"camera_timeout_value_ms\"].get<unsigned int>(config_.cameraTimeoutValue);\n+\n+\tif (config_.cameraTimeoutValue) {\n+\t\t/* Disable the IPA signal to control timeout and set the user-requested value. 
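\n+\t\t * For illustration, a configuration file (in the format of the\n+\t\t * example.yaml touched by this patch) could request a hypothetical\n+\t\t * 5 second timeout with:\n+\t\t *\n+\t\t *   version: 1.0\n+\t\t *   pipeline_handler:\n+\t\t *       camera_timeout_value_ms: 5000\n+\t\t 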
*/\n+\t\tipa_->setCameraTimeout.disconnect();\n+\t\tfrontendDevice()->setDequeueTimeout(config_.cameraTimeoutValue * 1ms);\n+\t}\n+\n+\treturn platformPipelineConfigure(root);\n+}\n+\n+int CameraData::loadIPA(ipa::RPi::InitResult *result)\n+{\n+\tipa_ = IPAManager::createIPA<ipa::RPi::IPAProxyRPi>(pipe(), 1, 1);\n+\n+\tif (!ipa_)\n+\t\treturn -ENOENT;\n+\n+\t/*\n+\t * The configuration (tuning file) is made from the sensor name unless\n+\t * the environment variable overrides it.\n+\t */\n+\tstd::string configurationFile;\n+\tchar const *configFromEnv = utils::secure_getenv(\"LIBCAMERA_RPI_TUNING_FILE\");\n+\tif (!configFromEnv || *configFromEnv == '\\0') {\n+\t\tstd::string model = sensor_->model();\n+\t\tif (isMonoSensor(sensor_))\n+\t\t\tmodel += \"_mono\";\n+\t\tconfigurationFile = ipa_->configurationFile(model + \".json\", \"rpi\");\n+\t} else {\n+\t\tconfigurationFile = std::string(configFromEnv);\n+\t}\n+\n+\tIPASettings settings(configurationFile, sensor_->model());\n+\tipa::RPi::InitParams params;\n+\n+\tparams.lensPresent = !!sensor_->focusLens();\n+\tint ret = platformInitIpa(params);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\treturn ipa_->init(settings, params, result);\n+}\n+\n+int CameraData::configureIPA(const CameraConfiguration *config, ipa::RPi::ConfigResult *result)\n+{\n+\tstd::map<unsigned int, ControlInfoMap> entityControls;\n+\tipa::RPi::ConfigParams params;\n+\tint ret;\n+\n+\tparams.sensorControls = sensor_->controls();\n+\tif (sensor_->focusLens())\n+\t\tparams.lensControls = sensor_->focusLens()->controls();\n+\n+\tret = platformConfigureIpa(params);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* We store the IPACameraSensorInfo for digital zoom calculations. */\n+\tret = sensor_->sensorInfo(&sensorInfo_);\n+\tif (ret) {\n+\t\tLOG(RPI, Error) << \"Failed to retrieve camera sensor info\";\n+\t\treturn ret;\n+\t}\n+\n+\t/* Always send the user transform to the IPA. */\n+\tparams.transform = static_cast<unsigned int>(config->transform);\n+\n+\t/* Ready the IPA - it must know about the sensor resolution. */\n+\tret = ipa_->configure(sensorInfo_, params, result);\n+\tif (ret < 0) {\n+\t\tLOG(RPI, Error) << \"IPA configuration failed!\";\n+\t\treturn -EPIPE;\n+\t}\n+\n+\tif (!result->controls.empty())\n+\t\tsetSensorControls(result->controls);\n+\n+\treturn 0;\n+}\n+\n+void CameraData::setDelayedControls(const ControlList &controls, uint32_t delayContext)\n+{\n+\tif (!delayedCtrls_->push(controls, delayContext))\n+\t\tLOG(RPI, Error) << \"V4L2 DelayedControl set failed\";\n+}\n+\n+void CameraData::setLensControls(const ControlList &controls)\n+{\n+\tCameraLens *lens = sensor_->focusLens();\n+\n+\tif (lens && controls.contains(V4L2_CID_FOCUS_ABSOLUTE)) {\n+\t\tControlValue const &focusValue = controls.get(V4L2_CID_FOCUS_ABSOLUTE);\n+\t\tlens->setFocusPosition(focusValue.get<int32_t>());\n+\t}\n+}\n+\n+void CameraData::setSensorControls(ControlList &controls)\n+{\n+\t/*\n+\t * We need to ensure that if both VBLANK and EXPOSURE are present, the\n+\t * former must be written ahead of, and separately from EXPOSURE to avoid\n+\t * V4L2 rejecting the latter. 
This is identical to what DelayedControls\n+\t * does with the priority write flag.\n+\t *\n+\t * As a consequence of the below logic, VBLANK gets set twice, and we\n+\t * rely on the v4l2 framework to not pass the second control set to the\n+\t * driver as the actual control value has not changed.\n+\t */\n+\tif (controls.contains(V4L2_CID_EXPOSURE) && controls.contains(V4L2_CID_VBLANK)) {\n+\t\tControlList vblank_ctrl;\n+\n+\t\tvblank_ctrl.set(V4L2_CID_VBLANK, controls.get(V4L2_CID_VBLANK));\n+\t\tsensor_->setControls(&vblank_ctrl);\n+\t}\n+\n+\tsensor_->setControls(&controls);\n+}\n+\n+Rectangle CameraData::scaleIspCrop(const Rectangle &ispCrop) const\n+{\n+\t/*\n+\t * Scale a crop rectangle defined in the ISP's coordinates into native sensor\n+\t * coordinates.\n+\t */\n+\tRectangle nativeCrop = ispCrop.scaledBy(sensorInfo_.analogCrop.size(),\n+\t\t\t\t\t\tsensorInfo_.outputSize);\n+\tnativeCrop.translateBy(sensorInfo_.analogCrop.topLeft());\n+\treturn nativeCrop;\n+}\n+\n+void CameraData::calculateScalerCrop(const ControlList &controls)\n+{\n+\tconst auto &scalerCrop = controls.get<Rectangle>(controls::ScalerCrop);\n+\tif (scalerCrop) {\n+\t\tRectangle nativeCrop = *scalerCrop;\n+\n+\t\tif (!nativeCrop.width || !nativeCrop.height)\n+\t\t\tnativeCrop = { 0, 0, 1, 1 };\n+\n+\t\t/* Create a version of the crop scaled to ISP (camera mode) pixels. */\n+\t\tRectangle ispCrop = nativeCrop.translatedBy(-sensorInfo_.analogCrop.topLeft());\n+\t\tispCrop.scaleBy(sensorInfo_.outputSize, sensorInfo_.analogCrop.size());\n+\n+\t\t/*\n+\t\t * The crop that we set must be:\n+\t\t * 1. At least as big as ispMinCropSize_, once that's been\n+\t\t *    enlarged to the same aspect ratio.\n+\t\t * 2. With the same mid-point, if possible.\n+\t\t * 3. But it can't go outside the sensor area.\n+\t\t */\n+\t\tSize minSize = ispMinCropSize_.expandedToAspectRatio(nativeCrop.size());\n+\t\tSize size = ispCrop.size().expandedTo(minSize);\n+\t\tispCrop = size.centeredTo(ispCrop.center()).enclosedIn(Rectangle(sensorInfo_.outputSize));\n+\n+\t\tif (ispCrop != ispCrop_) {\n+\t\t\tispCrop_ = ispCrop;\n+\t\t\tplatformIspCrop();\n+\n+\t\t\t/*\n+\t\t\t * Also update the ScalerCrop in the metadata with what we actually\n+\t\t\t * used. But we must first rescale that from ISP (camera mode) pixels\n+\t\t\t * back into sensor native pixels.\n+\t\t\t */\n+\t\t\tscalerCrop_ = scaleIspCrop(ispCrop_);\n+\t\t}\n+\t}\n+}\n+\n+void CameraData::cameraTimeout()\n+{\n+\tLOG(RPI, Error) << \"Camera frontend has timed out!\";\n+\tLOG(RPI, Error) << \"Please check that your camera sensor connector is attached securely.\";\n+\tLOG(RPI, Error) << \"Alternatively, try another cable and/or sensor.\";\n+\n+\tstate_ = CameraData::State::Error;\n+\tplatformStop();\n+\n+\t/*\n+\t * To allow the application to attempt a recovery from this timeout,\n+\t * stop all devices streaming, and return any outstanding requests as\n+\t * incomplete and cancelled.\n+\t */\n+\tfor (auto const stream : streams_)\n+\t\tstream->dev()->streamOff();\n+\n+\tclearIncompleteRequests();\n+}\n+\n+void CameraData::frameStarted(uint32_t sequence)\n+{\n+\tLOG(RPI, Debug) << \"Frame start \" << sequence;\n+\n+\t/* Write any controls for the next frame as soon as we can. 
*/\n+\tdelayedCtrls_->applyControls(sequence);\n+}\n+\n+void CameraData::clearIncompleteRequests()\n+{\n+\t/*\n+\t * All outstanding requests (and associated buffers) must be returned\n+\t * back to the application.\n+\t */\n+\twhile (!requestQueue_.empty()) {\n+\t\tRequest *request = requestQueue_.front();\n+\n+\t\tfor (auto &b : request->buffers()) {\n+\t\t\tFrameBuffer *buffer = b.second;\n+\t\t\t/*\n+\t\t\t * Has the buffer already been handed back to the\n+\t\t\t * request? If not, do so now.\n+\t\t\t */\n+\t\t\tif (buffer->request()) {\n+\t\t\t\tbuffer->_d()->cancel();\n+\t\t\t\tpipe()->completeBuffer(request, buffer);\n+\t\t\t}\n+\t\t}\n+\n+\t\tpipe()->completeRequest(request);\n+\t\trequestQueue_.pop();\n+\t}\n+}\n+\n+void CameraData::handleStreamBuffer(FrameBuffer *buffer, RPi::Stream *stream)\n+{\n+\t/*\n+\t * It is possible to be here without a pending request, so check\n+\t * that we actually have one to action, otherwise we just return\n+\t * buffer back to the stream.\n+\t */\n+\tRequest *request = requestQueue_.empty() ? nullptr : requestQueue_.front();\n+\tif (!dropFrameCount_ && request && request->findBuffer(stream) == buffer) {\n+\t\t/*\n+\t\t * Check if this is an externally provided buffer, and if\n+\t\t * so, we must stop tracking it in the pipeline handler.\n+\t\t */\n+\t\thandleExternalBuffer(buffer, stream);\n+\t\t/*\n+\t\t * Tag the buffer as completed, returning it to the\n+\t\t * application.\n+\t\t */\n+\t\tpipe()->completeBuffer(request, buffer);\n+\t} else {\n+\t\t/*\n+\t\t * This buffer was not part of the Request (which happens if an\n+\t\t * internal buffer was used for an external stream, or\n+\t\t * unconditionally for internal streams), or there is no pending\n+\t\t * request, so we can recycle it.\n+\t\t */\n+\t\tstream->returnBuffer(buffer);\n+\t}\n+}\n+\n+void CameraData::handleState()\n+{\n+\tswitch (state_) {\n+\tcase State::Stopped:\n+\tcase State::Busy:\n+\tcase State::Error:\n+\t\tbreak;\n+\n+\tcase State::IpaComplete:\n+\t\t/* If the request is completed, we will switch to Idle state. */\n+\t\tcheckRequestCompleted();\n+\t\t/*\n+\t\t * No break here, we want to try running the pipeline again.\n+\t\t * The fallthrough clause below suppresses compiler warnings.\n+\t\t */\n+\t\t[[fallthrough]];\n+\n+\tcase State::Idle:\n+\t\ttryRunPipeline();\n+\t\tbreak;\n+\t}\n+}\n+\n+void CameraData::handleExternalBuffer(FrameBuffer *buffer, RPi::Stream *stream)\n+{\n+\tunsigned int id = stream->getBufferId(buffer);\n+\n+\tif (!(id & MaskExternalBuffer))\n+\t\treturn;\n+\n+\t/* Stop the Stream object from tracking the buffer. */\n+\tstream->removeExternalBuffer(buffer);\n+}\n+\n+void CameraData::checkRequestCompleted()\n+{\n+\tbool requestCompleted = false;\n+\t/*\n+\t * If we are dropping this frame, do not touch the request, simply\n+\t * change the state to IDLE when ready.\n+\t */\n+\tif (!dropFrameCount_) {\n+\t\tRequest *request = requestQueue_.front();\n+\t\tif (request->hasPendingBuffers())\n+\t\t\treturn;\n+\n+\t\t/* Must wait for metadata to be filled in before completing. 
*/\n+\t\tif (state_ != State::IpaComplete)\n+\t\t\treturn;\n+\n+\t\tpipe()->completeRequest(request);\n+\t\trequestQueue_.pop();\n+\t\trequestCompleted = true;\n+\t}\n+\n+\t/*\n+\t * Make sure we have three outputs completed in the case of a dropped\n+\t * frame.\n+\t */\n+\tif (state_ == State::IpaComplete &&\n+\t    ((ispOutputCount_ == ispOutputTotal_ && dropFrameCount_) || requestCompleted)) {\n+\t\tstate_ = State::Idle;\n+\t\tif (dropFrameCount_) {\n+\t\t\tdropFrameCount_--;\n+\t\t\tLOG(RPI, Debug) << \"Dropping frame at the request of the IPA (\"\n+\t\t\t\t\t<< dropFrameCount_ << \" left)\";\n+\t\t}\n+\t}\n+}\n+\n+void CameraData::fillRequestMetadata(const ControlList &bufferControls, Request *request)\n+{\n+\trequest->metadata().set(controls::SensorTimestamp,\n+\t\t\t\tbufferControls.get(controls::SensorTimestamp).value_or(0));\n+\n+\trequest->metadata().set(controls::ScalerCrop, scalerCrop_);\n+}\n+\n+} /* namespace libcamera */\ndiff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h\nnew file mode 100644\nindex 000000000000..318266a6fb51\n--- /dev/null\n+++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h\n@@ -0,0 +1,276 @@\n+/* SPDX-License-Identifier: LGPL-2.1-or-later */\n+/*\n+ * Copyright (C) 2019-2023, Raspberry Pi Ltd\n+ *\n+ * pipeline_base.h - Pipeline handler base class for Raspberry Pi devices\n+ */\n+\n+#include <map>\n+#include <memory>\n+#include <optional>\n+#include <queue>\n+#include <string>\n+#include <unordered_set>\n+#include <utility>\n+#include <vector>\n+\n+#include <libcamera/controls.h>\n+#include <libcamera/request.h>\n+\n+#include \"libcamera/internal/bayer_format.h\"\n+#include \"libcamera/internal/camera.h\"\n+#include \"libcamera/internal/camera_sensor.h\"\n+#include \"libcamera/internal/framebuffer.h\"\n+#include \"libcamera/internal/media_device.h\"\n+#include \"libcamera/internal/media_object.h\"\n+#include \"libcamera/internal/pipeline_handler.h\"\n+#include \"libcamera/internal/v4l2_videodevice.h\"\n+#include \"libcamera/internal/yaml_parser.h\"\n+\n+#include <libcamera/ipa/raspberrypi_ipa_interface.h>\n+#include <libcamera/ipa/raspberrypi_ipa_proxy.h>\n+\n+#include \"delayed_controls.h\"\n+#include \"rpi_stream.h\"\n+\n+using namespace std::chrono_literals;\n+\n+namespace libcamera {\n+\n+namespace RPi {\n+\n+/* Map of mbus codes to supported sizes reported by the sensor. 
*/\n+using SensorFormats = std::map<unsigned int, std::vector<Size>>;\n+\n+class CameraData : public Camera::Private\n+{\n+public:\n+\tCameraData(PipelineHandler *pipe)\n+\t\t: Camera::Private(pipe), state_(State::Stopped),\n+\t\t  flipsAlterBayerOrder_(false), dropFrameCount_(0), buffersAllocated_(false),\n+\t\t  ispOutputCount_(0), ispOutputTotal_(0)\n+\t{\n+\t}\n+\n+\tvirtual ~CameraData()\n+\t{\n+\t}\n+\n+\tstruct StreamParams {\n+\t\tStreamParams()\n+\t\t\t: index(0), cfg(nullptr), dev(nullptr)\n+\t\t{\n+\t\t}\n+\n+\t\tStreamParams(unsigned int index_, StreamConfiguration *cfg_)\n+\t\t\t: index(index_), cfg(cfg_), dev(nullptr)\n+\t\t{\n+\t\t}\n+\n+\t\tunsigned int index;\n+\t\tStreamConfiguration *cfg;\n+\t\tV4L2VideoDevice *dev;\n+\t};\n+\n+\tvirtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,\n+\t\t\t\t\t\t\t     std::vector<StreamParams> &outStreams) const = 0;\n+\tvirtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,\n+\t\t\t\t      std::optional<BayerFormat::Packing> packing,\n+\t\t\t\t      std::vector<StreamParams> &rawStreams,\n+\t\t\t\t      std::vector<StreamParams> &outStreams) = 0;\n+\tvirtual void platformStart() = 0;\n+\tvirtual void platformStop() = 0;\n+\n+\tvoid freeBuffers();\n+\tvirtual void platformFreeBuffers() = 0;\n+\n+\tvoid enumerateVideoDevices(MediaLink *link, const std::string &frontend);\n+\n+\tint loadPipelineConfiguration();\n+\tint loadIPA(ipa::RPi::InitResult *result);\n+\tint configureIPA(const CameraConfiguration *config, ipa::RPi::ConfigResult *result);\n+\tvirtual int platformInitIpa(ipa::RPi::InitParams &params) = 0;\n+\tvirtual int platformConfigureIpa(ipa::RPi::ConfigParams &params) = 0;\n+\n+\tvoid setDelayedControls(const ControlList &controls, uint32_t delayContext);\n+\tvoid setLensControls(const ControlList &controls);\n+\tvoid setSensorControls(ControlList &controls);\n+\n+\tRectangle scaleIspCrop(const Rectangle &ispCrop) const;\n+\tvoid calculateScalerCrop(const ControlList &controls);\n+\tvirtual void platformIspCrop() = 0;\n+\n+\tvoid cameraTimeout();\n+\tvoid frameStarted(uint32_t sequence);\n+\n+\tvoid clearIncompleteRequests();\n+\tvoid handleStreamBuffer(FrameBuffer *buffer, Stream *stream);\n+\tvoid handleState();\n+\n+\tvirtual V4L2VideoDevice::Formats ispFormats() const = 0;\n+\tvirtual V4L2VideoDevice::Formats rawFormats() const = 0;\n+\tvirtual V4L2VideoDevice *frontendDevice() = 0;\n+\n+\tvirtual int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) = 0;\n+\n+\tstd::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;\n+\n+\tstd::unique_ptr<CameraSensor> sensor_;\n+\tSensorFormats sensorFormats_;\n+\n+\t/* The vector below is just for convenience when iterating over all streams. */\n+\tstd::vector<Stream *> streams_;\n+\t/* Stores the ids of the buffers mapped in the IPA. */\n+\tstd::unordered_set<unsigned int> bufferIds_;\n+\t/*\n+\t * Stores a cascade of Video Mux or Bridge devices between the sensor and\n+\t * Unicam, together with the media links across the entities.\n+\t */\n+\tstd::vector<std::pair<std::unique_ptr<V4L2Subdevice>, MediaLink *>> bridgeDevices_;\n+\n+\tstd::unique_ptr<DelayedControls> delayedCtrls_;\n+\tbool sensorMetadata_;\n+\n+\t/*\n+\t * All the functions in this class are called from a single calling\n+\t * thread. 
So, we do not need to have any mutex to protect access to any\n+\t * of the variables below.\n+\t */\n+\tenum class State { Stopped, Idle, Busy, IpaComplete, Error };\n+\tState state_;\n+\n+\tbool isRunning()\n+\t{\n+\t\treturn state_ != State::Stopped && state_ != State::Error;\n+\t}\n+\n+\tstd::queue<Request *> requestQueue_;\n+\n+\t/* Store the \"native\" Bayer order (that is, with no transforms applied). */\n+\tbool flipsAlterBayerOrder_;\n+\tBayerFormat::Order nativeBayerOrder_;\n+\n+\t/* For handling digital zoom. */\n+\tIPACameraSensorInfo sensorInfo_;\n+\tRectangle ispCrop_; /* crop in ISP (camera mode) pixels */\n+\tRectangle scalerCrop_; /* crop in sensor native pixels */\n+\tSize ispMinCropSize_;\n+\n+\tunsigned int dropFrameCount_;\n+\n+\t/*\n+\t * If set, this stores the value that represents a gain of one for\n+\t * the V4L2_CID_NOTIFY_GAINS control.\n+\t */\n+\tstd::optional<int32_t> notifyGainsUnity_;\n+\n+\t/* Have internal buffers been allocated? */\n+\tbool buffersAllocated_;\n+\n+\tstruct Config {\n+\t\t/*\n+\t\t * Override any request from the IPA to drop a number of startup\n+\t\t * frames.\n+\t\t */\n+\t\tbool disableStartupFrameDrops;\n+\t\t/*\n+\t\t * Override the camera timeout value calculated by the IPA based\n+\t\t * on frame durations.\n+\t\t */\n+\t\tunsigned int cameraTimeoutValue;\n+\t};\n+\n+\tConfig config_;\n+\n+protected:\n+\tvoid fillRequestMetadata(const ControlList &bufferControls,\n+\t\t\t\t Request *request);\n+\n+\tvirtual void tryRunPipeline() = 0;\n+\n+\tunsigned int ispOutputCount_;\n+\tunsigned int ispOutputTotal_;\n+\n+private:\n+\tvoid handleExternalBuffer(FrameBuffer *buffer, Stream *stream);\n+\tvoid checkRequestCompleted();\n+};\n+\n+class PipelineHandlerBase : public PipelineHandler\n+{\n+public:\n+\tPipelineHandlerBase(CameraManager *manager)\n+\t\t: PipelineHandler(manager)\n+\t{\n+\t}\n+\n+\tvirtual ~PipelineHandlerBase()\n+\t{\n+\t}\n+\n+\tstatic V4L2DeviceFormat toV4L2DeviceFormat(const V4L2VideoDevice *dev,\n+\t\t\t\t\t\t   const V4L2SubdeviceFormat &format,\n+\t\t\t\t\t\t   BayerFormat::Packing packingReq);\n+\n+\tstd::unique_ptr<CameraConfiguration>\n+\tgenerateConfiguration(Camera *camera, const StreamRoles &roles) override;\n+\tint configure(Camera *camera, CameraConfiguration *config) override;\n+\n+\tint exportFrameBuffers(Camera *camera, libcamera::Stream *stream,\n+\t\t\t       std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;\n+\n+\tint start(Camera *camera, const ControlList *controls) override;\n+\tvoid stopDevice(Camera *camera) override;\n+\tvoid releaseDevice(Camera *camera) override;\n+\n+\tint queueRequestDevice(Camera *camera, Request *request) override;\n+\n+protected:\n+\tint registerCamera(MediaDevice *frontend, const std::string &frontendName,\n+\t\t\t   MediaDevice *backend, MediaEntity *sensorEntity);\n+\n+\tvoid mapBuffers(Camera *camera, const BufferMap &buffers, unsigned int mask);\n+\n+\tvirtual std::unique_ptr<CameraData> allocateCameraData() = 0;\n+\tvirtual int platformRegister(std::unique_ptr<CameraData> &cameraData,\n+\t\t\t\t     MediaDevice *unicam, MediaDevice *isp) = 0;\n+\n+private:\n+\tCameraData *cameraData(Camera *camera)\n+\t{\n+\t\treturn static_cast<CameraData *>(camera->_d());\n+\t}\n+\n+\tint queueAllBuffers(Camera *camera);\n+\tvirtual int prepareBuffers(Camera *camera) = 0;\n+};\n+\n+class RPiCameraConfiguration final : public CameraConfiguration\n+{\n+public:\n+\tRPiCameraConfiguration(const CameraData *data)\n+\t\t: CameraConfiguration(), 
data_(data)\n+\t{\n+\t}\n+\n+\tCameraConfiguration::Status validateColorSpaces(ColorSpaceFlags flags);\n+\tStatus validate() override;\n+\n+\t/* Cache the combinedTransform_ that will be applied to the sensor */\n+\tTransform combinedTransform_;\n+\n+private:\n+\tconst CameraData *data_;\n+\n+\t/*\n+\t * Store the colour spaces that all our streams will have. RGB format streams\n+\t * will have the same colorspace as YUV streams, with YCbCr field cleared and\n+\t * range set to full.\n+\t */\n+\tstd::optional<ColorSpace> yuvColorSpace_;\n+\tstd::optional<ColorSpace> rgbColorSpace_;\n+};\n+\n+} /* namespace RPi */\n+\n+} /* namespace libcamera */\ndiff --git a/src/libcamera/pipeline/rpi/vc4/data/example.yaml b/src/libcamera/pipeline/rpi/vc4/data/example.yaml\nindex c90f518f8849..b8e01adeaf40 100644\n--- a/src/libcamera/pipeline/rpi/vc4/data/example.yaml\n+++ b/src/libcamera/pipeline/rpi/vc4/data/example.yaml\n@@ -34,13 +34,13 @@\n                 #\n                 # \"disable_startup_frame_drops\": false,\n \n-                # Custom timeout value (in ms) for Unicam to use. This overrides\n+                # Custom timeout value (in ms) for the camera to use. This overrides\n                 # the value computed by the pipeline handler based on frame\n                 # durations.\n                 #\n                 # Set this value to 0 to use the pipeline handler computed\n                 # timeout value.\n                 #\n-                # \"unicam_timeout_value_ms\": 0,\n+                # \"camera_timeout_value_ms\": 0,\n         }\n }\ndiff --git a/src/libcamera/pipeline/rpi/vc4/meson.build b/src/libcamera/pipeline/rpi/vc4/meson.build\nindex 228823f30922..cdb049c58d2c 100644\n--- a/src/libcamera/pipeline/rpi/vc4/meson.build\n+++ b/src/libcamera/pipeline/rpi/vc4/meson.build\n@@ -2,7 +2,7 @@\n \n libcamera_sources += files([\n     'dma_heaps.cpp',\n-    'raspberrypi.cpp',\n+    'vc4.cpp',\n ])\n \n subdir('data')\ndiff --git a/src/libcamera/pipeline/rpi/vc4/raspberrypi.cpp b/src/libcamera/pipeline/rpi/vc4/raspberrypi.cpp\ndeleted file mode 100644\nindex bd66468683df..000000000000\n--- a/src/libcamera/pipeline/rpi/vc4/raspberrypi.cpp\n+++ /dev/null\n@@ -1,2428 +0,0 @@\n-/* SPDX-License-Identifier: LGPL-2.1-or-later */\n-/*\n- * Copyright (C) 2019-2021, Raspberry Pi Ltd\n- *\n- * raspberrypi.cpp - Pipeline handler for VC4 based Raspberry Pi devices\n- */\n-#include <algorithm>\n-#include <assert.h>\n-#include <cmath>\n-#include <fcntl.h>\n-#include <memory>\n-#include <mutex>\n-#include <queue>\n-#include <unordered_set>\n-#include <utility>\n-\n-#include <libcamera/base/file.h>\n-#include <libcamera/base/shared_fd.h>\n-#include <libcamera/base/utils.h>\n-\n-#include <libcamera/camera.h>\n-#include <libcamera/control_ids.h>\n-#include <libcamera/formats.h>\n-#include <libcamera/ipa/raspberrypi_ipa_interface.h>\n-#include <libcamera/ipa/raspberrypi_ipa_proxy.h>\n-#include <libcamera/logging.h>\n-#include <libcamera/property_ids.h>\n-#include <libcamera/request.h>\n-\n-#include <linux/bcm2835-isp.h>\n-#include <linux/media-bus-format.h>\n-#include <linux/videodev2.h>\n-\n-#include \"libcamera/internal/bayer_format.h\"\n-#include \"libcamera/internal/camera.h\"\n-#include \"libcamera/internal/camera_lens.h\"\n-#include \"libcamera/internal/camera_sensor.h\"\n-#include \"libcamera/internal/device_enumerator.h\"\n-#include \"libcamera/internal/framebuffer.h\"\n-#include \"libcamera/internal/ipa_manager.h\"\n-#include \"libcamera/internal/media_device.h\"\n-#include 
\"libcamera/internal/pipeline_handler.h\"\n-#include \"libcamera/internal/v4l2_videodevice.h\"\n-#include \"libcamera/internal/yaml_parser.h\"\n-\n-#include \"delayed_controls.h\"\n-#include \"dma_heaps.h\"\n-#include \"rpi_stream.h\"\n-\n-using namespace std::chrono_literals;\n-\n-namespace libcamera {\n-\n-LOG_DEFINE_CATEGORY(RPI)\n-\n-namespace {\n-\n-constexpr unsigned int defaultRawBitDepth = 12;\n-\n-/* Map of mbus codes to supported sizes reported by the sensor. */\n-using SensorFormats = std::map<unsigned int, std::vector<Size>>;\n-\n-SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor> &sensor)\n-{\n-\tSensorFormats formats;\n-\n-\tfor (auto const mbusCode : sensor->mbusCodes())\n-\t\tformats.emplace(mbusCode, sensor->sizes(mbusCode));\n-\n-\treturn formats;\n-}\n-\n-bool isMonoSensor(std::unique_ptr<CameraSensor> &sensor)\n-{\n-\tunsigned int mbusCode = sensor->mbusCodes()[0];\n-\tconst BayerFormat &bayer = BayerFormat::fromMbusCode(mbusCode);\n-\n-\treturn bayer.order == BayerFormat::Order::MONO;\n-}\n-\n-PixelFormat mbusCodeToPixelFormat(unsigned int mbus_code,\n-\t\t\t\t  BayerFormat::Packing packingReq)\n-{\n-\tBayerFormat bayer = BayerFormat::fromMbusCode(mbus_code);\n-\n-\tASSERT(bayer.isValid());\n-\n-\tbayer.packing = packingReq;\n-\tPixelFormat pix = bayer.toPixelFormat();\n-\n-\t/*\n-\t * Not all formats (e.g. 8-bit or 16-bit Bayer formats) can have packed\n-\t * variants. So if the PixelFormat returns as invalid, use the non-packed\n-\t * conversion instead.\n-\t */\n-\tif (!pix.isValid()) {\n-\t\tbayer.packing = BayerFormat::Packing::None;\n-\t\tpix = bayer.toPixelFormat();\n-\t}\n-\n-\treturn pix;\n-}\n-\n-V4L2DeviceFormat toV4L2DeviceFormat(const V4L2VideoDevice *dev,\n-\t\t\t\t    const V4L2SubdeviceFormat &format,\n-\t\t\t\t    BayerFormat::Packing packingReq)\n-{\n-\tconst PixelFormat pix = mbusCodeToPixelFormat(format.mbus_code, packingReq);\n-\tV4L2DeviceFormat deviceFormat;\n-\n-\tdeviceFormat.fourcc = dev->toV4L2PixelFormat(pix);\n-\tdeviceFormat.size = format.size;\n-\tdeviceFormat.colorSpace = format.colorSpace;\n-\treturn deviceFormat;\n-}\n-\n-bool isRaw(const PixelFormat &pixFmt)\n-{\n-\t/* This test works for both Bayer and raw mono formats. */\n-\treturn BayerFormat::fromPixelFormat(pixFmt).isValid();\n-}\n-\n-double scoreFormat(double desired, double actual)\n-{\n-\tdouble score = desired - actual;\n-\t/* Smaller desired dimensions are preferred. */\n-\tif (score < 0.0)\n-\t\tscore = (-score) / 8;\n-\t/* Penalise non-exact matches. */\n-\tif (actual != desired)\n-\t\tscore *= 2;\n-\n-\treturn score;\n-}\n-\n-V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req, unsigned int bitDepth)\n-{\n-\tdouble bestScore = std::numeric_limits<double>::max(), score;\n-\tV4L2SubdeviceFormat bestFormat;\n-\tbestFormat.colorSpace = ColorSpace::Raw;\n-\n-\tconstexpr float penaltyAr = 1500.0;\n-\tconstexpr float penaltyBitDepth = 500.0;\n-\n-\t/* Calculate the closest/best mode from the user requested size. */\n-\tfor (const auto &iter : formatsMap) {\n-\t\tconst unsigned int mbusCode = iter.first;\n-\t\tconst PixelFormat format = mbusCodeToPixelFormat(mbusCode,\n-\t\t\t\t\t\t\t\t BayerFormat::Packing::None);\n-\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(format);\n-\n-\t\tfor (const Size &size : iter.second) {\n-\t\t\tdouble reqAr = static_cast<double>(req.width) / req.height;\n-\t\t\tdouble fmtAr = static_cast<double>(size.width) / size.height;\n-\n-\t\t\t/* Score the dimensions for closeness. 
*/\n-\t\t\tscore = scoreFormat(req.width, size.width);\n-\t\t\tscore += scoreFormat(req.height, size.height);\n-\t\t\tscore += penaltyAr * scoreFormat(reqAr, fmtAr);\n-\n-\t\t\t/* Add any penalties... this is not an exact science! */\n-\t\t\tscore += utils::abs_diff(info.bitsPerPixel, bitDepth) * penaltyBitDepth;\n-\n-\t\t\tif (score <= bestScore) {\n-\t\t\t\tbestScore = score;\n-\t\t\t\tbestFormat.mbus_code = mbusCode;\n-\t\t\t\tbestFormat.size = size;\n-\t\t\t}\n-\n-\t\t\tLOG(RPI, Debug) << \"Format: \" << size\n-\t\t\t\t\t<< \" fmt \" << format\n-\t\t\t\t\t<< \" Score: \" << score\n-\t\t\t\t\t<< \" (best \" << bestScore << \")\";\n-\t\t}\n-\t}\n-\n-\treturn bestFormat;\n-}\n-\n-enum class Unicam : unsigned int { Image, Embedded };\n-enum class Isp : unsigned int { Input, Output0, Output1, Stats };\n-\n-} /* namespace */\n-\n-class RPiCameraData : public Camera::Private\n-{\n-public:\n-\tRPiCameraData(PipelineHandler *pipe)\n-\t\t: Camera::Private(pipe), state_(State::Stopped),\n-\t\t  flipsAlterBayerOrder_(false), dropFrameCount_(0),\n-\t\t  buffersAllocated_(false), ispOutputCount_(0)\n-\t{\n-\t}\n-\n-\t~RPiCameraData()\n-\t{\n-\t\tfreeBuffers();\n-\t}\n-\n-\tvoid freeBuffers();\n-\tvoid frameStarted(uint32_t sequence);\n-\n-\tint loadIPA(ipa::RPi::InitResult *result);\n-\tint configureIPA(const CameraConfiguration *config, ipa::RPi::ConfigResult *result);\n-\tint loadPipelineConfiguration();\n-\n-\tvoid enumerateVideoDevices(MediaLink *link);\n-\n-\tvoid processStatsComplete(const ipa::RPi::BufferIds &buffers);\n-\tvoid prepareIspComplete(const ipa::RPi::BufferIds &buffers);\n-\tvoid setIspControls(const ControlList &controls);\n-\tvoid setDelayedControls(const ControlList &controls, uint32_t delayContext);\n-\tvoid setLensControls(const ControlList &controls);\n-\tvoid setCameraTimeout(uint32_t maxExposureTimeMs);\n-\tvoid setSensorControls(ControlList &controls);\n-\tvoid unicamTimeout();\n-\n-\t/* bufferComplete signal handlers. */\n-\tvoid unicamBufferDequeue(FrameBuffer *buffer);\n-\tvoid ispInputDequeue(FrameBuffer *buffer);\n-\tvoid ispOutputDequeue(FrameBuffer *buffer);\n-\n-\tvoid clearIncompleteRequests();\n-\tvoid handleStreamBuffer(FrameBuffer *buffer, RPi::Stream *stream);\n-\tvoid handleExternalBuffer(FrameBuffer *buffer, RPi::Stream *stream);\n-\tvoid handleState();\n-\tRectangle scaleIspCrop(const Rectangle &ispCrop) const;\n-\tvoid applyScalerCrop(const ControlList &controls);\n-\n-\tstd::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;\n-\n-\tstd::unique_ptr<CameraSensor> sensor_;\n-\tSensorFormats sensorFormats_;\n-\t/* Array of Unicam and ISP device streams and associated buffers/streams. */\n-\tRPi::Device<Unicam, 2> unicam_;\n-\tRPi::Device<Isp, 4> isp_;\n-\t/* The vector below is just for convenience when iterating over all streams. */\n-\tstd::vector<RPi::Stream *> streams_;\n-\t/* Stores the ids of the buffers mapped in the IPA. */\n-\tstd::unordered_set<unsigned int> bufferIds_;\n-\t/*\n-\t * Stores a cascade of Video Mux or Bridge devices between the sensor and\n-\t * Unicam together with media link across the entities.\n-\t */\n-\tstd::vector<std::pair<std::unique_ptr<V4L2Subdevice>, MediaLink *>> bridgeDevices_;\n-\n-\t/* DMAHEAP allocation helper. */\n-\tRPi::DmaHeap dmaHeap_;\n-\tSharedFD lsTable_;\n-\n-\tstd::unique_ptr<RPi::DelayedControls> delayedCtrls_;\n-\tbool sensorMetadata_;\n-\n-\t/*\n-\t * All the functions in this class are called from a single calling\n-\t * thread. 
So, we do not need to have any mutex to protect access to any\n-\t * of the variables below.\n-\t */\n-\tenum class State { Stopped, Idle, Busy, IpaComplete, Error };\n-\tState state_;\n-\n-\tbool isRunning()\n-\t{\n-\t\treturn state_ != State::Stopped && state_ != State::Error;\n-\t}\n-\n-\tstruct BayerFrame {\n-\t\tFrameBuffer *buffer;\n-\t\tControlList controls;\n-\t\tunsigned int delayContext;\n-\t};\n-\n-\tstd::queue<BayerFrame> bayerQueue_;\n-\tstd::queue<FrameBuffer *> embeddedQueue_;\n-\tstd::deque<Request *> requestQueue_;\n-\n-\t/*\n-\t * Store the \"native\" Bayer order (that is, with no transforms\n-\t * applied).\n-\t */\n-\tbool flipsAlterBayerOrder_;\n-\tBayerFormat::Order nativeBayerOrder_;\n-\n-\t/* For handling digital zoom. */\n-\tIPACameraSensorInfo sensorInfo_;\n-\tRectangle ispCrop_; /* crop in ISP (camera mode) pixels */\n-\tRectangle scalerCrop_; /* crop in sensor native pixels */\n-\tSize ispMinCropSize_;\n-\n-\tunsigned int dropFrameCount_;\n-\n-\t/*\n-\t * If set, this stores the value that represets a gain of one for\n-\t * the V4L2_CID_NOTIFY_GAINS control.\n-\t */\n-\tstd::optional<int32_t> notifyGainsUnity_;\n-\n-\t/* Have internal buffers been allocated? */\n-\tbool buffersAllocated_;\n-\n-\tstruct Config {\n-\t\t/*\n-\t\t * The minimum number of internal buffers to be allocated for\n-\t\t * the Unicam Image stream.\n-\t\t */\n-\t\tunsigned int minUnicamBuffers;\n-\t\t/*\n-\t\t * The minimum total (internal + external) buffer count used for\n-\t\t * the Unicam Image stream.\n-\t\t *\n-\t\t * Note that:\n-\t\t * minTotalUnicamBuffers must be >= 1, and\n-\t\t * minTotalUnicamBuffers >= minUnicamBuffers\n-\t\t */\n-\t\tunsigned int minTotalUnicamBuffers;\n-\t\t/*\n-\t\t * Override any request from the IPA to drop a number of startup\n-\t\t * frames.\n-\t\t */\n-\t\tbool disableStartupFrameDrops;\n-\t\t/*\n-\t\t * Override the Unicam timeout value calculated by the IPA based\n-\t\t * on frame durations.\n-\t\t */\n-\t\tunsigned int unicamTimeoutValue;\n-\t};\n-\n-\tConfig config_;\n-\n-private:\n-\tvoid checkRequestCompleted();\n-\tvoid fillRequestMetadata(const ControlList &bufferControls,\n-\t\t\t\t Request *request);\n-\tvoid tryRunPipeline();\n-\tbool findMatchingBuffers(BayerFrame &bayerFrame, FrameBuffer *&embeddedBuffer);\n-\n-\tunsigned int ispOutputCount_;\n-};\n-\n-class RPiCameraConfiguration : public CameraConfiguration\n-{\n-public:\n-\tRPiCameraConfiguration(const RPiCameraData *data);\n-\n-\tCameraConfiguration::Status validateColorSpaces(ColorSpaceFlags flags);\n-\tStatus validate() override;\n-\n-\t/* Cache the combinedTransform_ that will be applied to the sensor */\n-\tTransform combinedTransform_;\n-\n-private:\n-\tconst RPiCameraData *data_;\n-\n-\t/*\n-\t * Store the colour spaces that all our streams will have. 
RGB format streams\n-\t * will have the same colorspace as YUV streams, with YCbCr field cleared and\n-\t * range set to full.\n-         */\n-\tstd::optional<ColorSpace> yuvColorSpace_;\n-\tstd::optional<ColorSpace> rgbColorSpace_;\n-};\n-\n-class PipelineHandlerRPi : public PipelineHandler\n-{\n-public:\n-\tPipelineHandlerRPi(CameraManager *manager);\n-\n-\tstd::unique_ptr<CameraConfiguration> generateConfiguration(Camera *camera,\n-\t\tconst StreamRoles &roles) override;\n-\tint configure(Camera *camera, CameraConfiguration *config) override;\n-\n-\tint exportFrameBuffers(Camera *camera, Stream *stream,\n-\t\t\t       std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;\n-\n-\tint start(Camera *camera, const ControlList *controls) override;\n-\tvoid stopDevice(Camera *camera) override;\n-\n-\tint queueRequestDevice(Camera *camera, Request *request) override;\n-\n-\tbool match(DeviceEnumerator *enumerator) override;\n-\n-\tvoid releaseDevice(Camera *camera) override;\n-\n-private:\n-\tRPiCameraData *cameraData(Camera *camera)\n-\t{\n-\t\treturn static_cast<RPiCameraData *>(camera->_d());\n-\t}\n-\n-\tint registerCamera(MediaDevice *unicam, MediaDevice *isp, MediaEntity *sensorEntity);\n-\tint queueAllBuffers(Camera *camera);\n-\tint prepareBuffers(Camera *camera);\n-\tvoid mapBuffers(Camera *camera, const RPi::BufferMap &buffers, unsigned int mask);\n-};\n-\n-RPiCameraConfiguration::RPiCameraConfiguration(const RPiCameraData *data)\n-\t: CameraConfiguration(), data_(data)\n-{\n-}\n-\n-static const std::vector<ColorSpace> validColorSpaces = {\n-\tColorSpace::Sycc,\n-\tColorSpace::Smpte170m,\n-\tColorSpace::Rec709\n-};\n-\n-static std::optional<ColorSpace> findValidColorSpace(const ColorSpace &colourSpace)\n-{\n-\tfor (auto cs : validColorSpaces) {\n-\t\tif (colourSpace.primaries == cs.primaries &&\n-\t\t    colourSpace.transferFunction == cs.transferFunction)\n-\t\t\treturn cs;\n-\t}\n-\n-\treturn std::nullopt;\n-}\n-\n-static bool isRgb(const PixelFormat &pixFmt)\n-{\n-\tconst PixelFormatInfo &info = PixelFormatInfo::info(pixFmt);\n-\treturn info.colourEncoding == PixelFormatInfo::ColourEncodingRGB;\n-}\n-\n-static bool isYuv(const PixelFormat &pixFmt)\n-{\n-\t/* The code below would return true for raw mono streams, so weed those out first. */\n-\tif (isRaw(pixFmt))\n-\t\treturn false;\n-\n-\tconst PixelFormatInfo &info = PixelFormatInfo::info(pixFmt);\n-\treturn info.colourEncoding == PixelFormatInfo::ColourEncodingYUV;\n-}\n-\n-/*\n- * Raspberry Pi drivers expect the following colour spaces:\n- * - V4L2_COLORSPACE_RAW for raw streams.\n- * - One of V4L2_COLORSPACE_JPEG, V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_REC709 for\n- *   non-raw streams. Other fields such as transfer function, YCbCr encoding and\n- *   quantisation are not used.\n- *\n- * The libcamera colour spaces that we wish to use corresponding to these are therefore:\n- * - ColorSpace::Raw for V4L2_COLORSPACE_RAW\n- * - ColorSpace::Sycc for V4L2_COLORSPACE_JPEG\n- * - ColorSpace::Smpte170m for V4L2_COLORSPACE_SMPTE170M\n- * - ColorSpace::Rec709 for V4L2_COLORSPACE_REC709\n- */\n-\n-CameraConfiguration::Status RPiCameraConfiguration::validateColorSpaces([[maybe_unused]] ColorSpaceFlags flags)\n-{\n-\tStatus status = Valid;\n-\tyuvColorSpace_.reset();\n-\n-\tfor (auto cfg : config_) {\n-\t\t/* First fix up raw streams to have the \"raw\" colour space. */\n-\t\tif (isRaw(cfg.pixelFormat)) {\n-\t\t\t/* If there was no value here, that doesn't count as \"adjusted\". 
*/\n-\t\t\tif (cfg.colorSpace && cfg.colorSpace != ColorSpace::Raw)\n-\t\t\t\tstatus = Adjusted;\n-\t\t\tcfg.colorSpace = ColorSpace::Raw;\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\t/* Next we need to find our shared colour space. The first valid one will do. */\n-\t\tif (cfg.colorSpace && !yuvColorSpace_)\n-\t\t\tyuvColorSpace_ = findValidColorSpace(cfg.colorSpace.value());\n-\t}\n-\n-\t/* If no colour space was given anywhere, choose sYCC. */\n-\tif (!yuvColorSpace_)\n-\t\tyuvColorSpace_ = ColorSpace::Sycc;\n-\n-\t/* Note the version of this that any RGB streams will have to use. */\n-\trgbColorSpace_ = yuvColorSpace_;\n-\trgbColorSpace_->ycbcrEncoding = ColorSpace::YcbcrEncoding::None;\n-\trgbColorSpace_->range = ColorSpace::Range::Full;\n-\n-\t/* Go through the streams again and force everyone to the same colour space. */\n-\tfor (auto cfg : config_) {\n-\t\tif (cfg.colorSpace == ColorSpace::Raw)\n-\t\t\tcontinue;\n-\n-\t\tif (isYuv(cfg.pixelFormat) && cfg.colorSpace != yuvColorSpace_) {\n-\t\t\t/* Again, no value means \"not adjusted\". */\n-\t\t\tif (cfg.colorSpace)\n-\t\t\t\tstatus = Adjusted;\n-\t\t\tcfg.colorSpace = yuvColorSpace_;\n-\t\t}\n-\t\tif (isRgb(cfg.pixelFormat) && cfg.colorSpace != rgbColorSpace_) {\n-\t\t\t/* Be nice, and let the YUV version count as non-adjusted too. */\n-\t\t\tif (cfg.colorSpace && cfg.colorSpace != yuvColorSpace_)\n-\t\t\t\tstatus = Adjusted;\n-\t\t\tcfg.colorSpace = rgbColorSpace_;\n-\t\t}\n-\t}\n-\n-\treturn status;\n-}\n-\n-CameraConfiguration::Status RPiCameraConfiguration::validate()\n-{\n-\tStatus status = Valid;\n-\n-\tif (config_.empty())\n-\t\treturn Invalid;\n-\n-\tstatus = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);\n-\n-\t/*\n-\t * Validate the requested transform against the sensor capabilities and\n-\t * rotation and store the final combined transform that configure() will\n-\t * need to apply to the sensor to save us working it out again.\n-\t */\n-\tTransform requestedTransform = transform;\n-\tcombinedTransform_ = data_->sensor_->validateTransform(&transform);\n-\tif (transform != requestedTransform)\n-\t\tstatus = Adjusted;\n-\n-\tunsigned int rawCount = 0, outCount = 0, count = 0, maxIndex = 0;\n-\tstd::pair<int, Size> outSize[2];\n-\tSize maxSize;\n-\tfor (StreamConfiguration &cfg : config_) {\n-\t\tif (isRaw(cfg.pixelFormat)) {\n-\t\t\t/*\n-\t\t\t * Calculate the best sensor mode we can use based on\n-\t\t\t * the user request.\n-\t\t\t */\n-\t\t\tV4L2VideoDevice *unicam = data_->unicam_[Unicam::Image].dev();\n-\t\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(cfg.pixelFormat);\n-\t\t\tunsigned int bitDepth = info.isValid() ? info.bitsPerPixel : defaultRawBitDepth;\n-\t\t\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size, bitDepth);\n-\t\t\tBayerFormat::Packing packing = BayerFormat::Packing::CSI2;\n-\t\t\tif (info.isValid() && !info.packed)\n-\t\t\t\tpacking = BayerFormat::Packing::None;\n-\t\t\tV4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(unicam, sensorFormat, packing);\n-\t\t\tint ret = unicam->tryFormat(&unicamFormat);\n-\t\t\tif (ret)\n-\t\t\t\treturn Invalid;\n-\n-\t\t\t/*\n-\t\t\t * Some sensors change their Bayer order when they are\n-\t\t\t * h-flipped or v-flipped, according to the transform.\n-\t\t\t * If this one does, we must advertise the transformed\n-\t\t\t * Bayer order in the raw stream. Note how we must\n-\t\t\t * fetch the \"native\" (i.e. 
untransformed) Bayer order,\n-\t\t\t * because the sensor may currently be flipped!\n-\t\t\t */\n-\t\t\tV4L2PixelFormat fourcc = unicamFormat.fourcc;\n-\t\t\tif (data_->flipsAlterBayerOrder_) {\n-\t\t\t\tBayerFormat bayer = BayerFormat::fromV4L2PixelFormat(fourcc);\n-\t\t\t\tbayer.order = data_->nativeBayerOrder_;\n-\t\t\t\tbayer = bayer.transform(combinedTransform_);\n-\t\t\t\tfourcc = bayer.toV4L2PixelFormat();\n-\t\t\t}\n-\n-\t\t\tPixelFormat unicamPixFormat = fourcc.toPixelFormat();\n-\t\t\tif (cfg.size != unicamFormat.size ||\n-\t\t\t    cfg.pixelFormat != unicamPixFormat) {\n-\t\t\t\tcfg.size = unicamFormat.size;\n-\t\t\t\tcfg.pixelFormat = unicamPixFormat;\n-\t\t\t\tstatus = Adjusted;\n-\t\t\t}\n-\n-\t\t\tcfg.stride = unicamFormat.planes[0].bpl;\n-\t\t\tcfg.frameSize = unicamFormat.planes[0].size;\n-\n-\t\t\trawCount++;\n-\t\t} else {\n-\t\t\toutSize[outCount] = std::make_pair(count, cfg.size);\n-\t\t\t/* Record the largest resolution for fixups later. */\n-\t\t\tif (maxSize < cfg.size) {\n-\t\t\t\tmaxSize = cfg.size;\n-\t\t\t\tmaxIndex = outCount;\n-\t\t\t}\n-\t\t\toutCount++;\n-\t\t}\n-\n-\t\tcount++;\n-\n-\t\t/* Can only output 1 RAW stream, or 2 YUV/RGB streams. */\n-\t\tif (rawCount > 1 || outCount > 2) {\n-\t\t\tLOG(RPI, Error) << \"Invalid number of streams requested\";\n-\t\t\treturn Invalid;\n-\t\t}\n-\t}\n-\n-\t/*\n-\t * Now do any fixups needed. For the two ISP outputs, one stream must be\n-\t * equal or smaller than the other in all dimensions.\n-\t */\n-\tfor (unsigned int i = 0; i < outCount; i++) {\n-\t\toutSize[i].second.width = std::min(outSize[i].second.width,\n-\t\t\t\t\t\t   maxSize.width);\n-\t\toutSize[i].second.height = std::min(outSize[i].second.height,\n-\t\t\t\t\t\t    maxSize.height);\n-\n-\t\tif (config_.at(outSize[i].first).size != outSize[i].second) {\n-\t\t\tconfig_.at(outSize[i].first).size = outSize[i].second;\n-\t\t\tstatus = Adjusted;\n-\t\t}\n-\n-\t\t/*\n-\t\t * Also validate the correct pixel formats here.\n-\t\t * Note that Output0 and Output1 support a different\n-\t\t * set of formats.\n-\t\t *\n-\t\t * Output 0 must be for the largest resolution. We will\n-\t\t * have that fixed up in the code above.\n-\t\t *\n-\t\t */\n-\t\tStreamConfiguration &cfg = config_.at(outSize[i].first);\n-\t\tPixelFormat &cfgPixFmt = cfg.pixelFormat;\n-\t\tV4L2VideoDevice *dev;\n-\n-\t\tif (i == maxIndex)\n-\t\t\tdev = data_->isp_[Isp::Output0].dev();\n-\t\telse\n-\t\t\tdev = data_->isp_[Isp::Output1].dev();\n-\n-\t\tV4L2VideoDevice::Formats fmts = dev->formats();\n-\n-\t\tif (fmts.find(dev->toV4L2PixelFormat(cfgPixFmt)) == fmts.end()) {\n-\t\t\t/* If we cannot find a native format, use a default one. */\n-\t\t\tcfgPixFmt = formats::NV12;\n-\t\t\tstatus = Adjusted;\n-\t\t}\n-\n-\t\tV4L2DeviceFormat format;\n-\t\tformat.fourcc = dev->toV4L2PixelFormat(cfg.pixelFormat);\n-\t\tformat.size = cfg.size;\n-\t\t/* We want to send the associated YCbCr info through to the driver. 
*/\n-\t\tformat.colorSpace = yuvColorSpace_;\n-\n-\t\tLOG(RPI, Debug)\n-\t\t\t<< \"Try color space \" << ColorSpace::toString(cfg.colorSpace);\n-\n-\t\tint ret = dev->tryFormat(&format);\n-\t\tif (ret)\n-\t\t\treturn Invalid;\n-\n-\t\t/*\n-\t\t * But for RGB streams, the YCbCr info gets overwritten on the way back\n-\t\t * so we must check against what the stream cfg says, not what we actually\n-\t\t * requested (which carefully included the YCbCr info)!\n-\t\t */\n-\t\tif (cfg.colorSpace != format.colorSpace) {\n-\t\t\tstatus = Adjusted;\n-\t\t\tLOG(RPI, Debug)\n-\t\t\t\t<< \"Color space changed from \"\n-\t\t\t\t<< ColorSpace::toString(cfg.colorSpace) << \" to \"\n-\t\t\t\t<< ColorSpace::toString(format.colorSpace);\n-\t\t}\n-\n-\t\tcfg.colorSpace = format.colorSpace;\n-\n-\t\tcfg.stride = format.planes[0].bpl;\n-\t\tcfg.frameSize = format.planes[0].size;\n-\n-\t}\n-\n-\treturn status;\n-}\n-\n-PipelineHandlerRPi::PipelineHandlerRPi(CameraManager *manager)\n-\t: PipelineHandler(manager)\n-{\n-}\n-\n-std::unique_ptr<CameraConfiguration>\n-PipelineHandlerRPi::generateConfiguration(Camera *camera, const StreamRoles &roles)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tstd::unique_ptr<CameraConfiguration> config =\n-\t\tstd::make_unique<RPiCameraConfiguration>(data);\n-\tV4L2SubdeviceFormat sensorFormat;\n-\tunsigned int bufferCount;\n-\tPixelFormat pixelFormat;\n-\tV4L2VideoDevice::Formats fmts;\n-\tSize size;\n-\tstd::optional<ColorSpace> colorSpace;\n-\n-\tif (roles.empty())\n-\t\treturn config;\n-\n-\tunsigned int rawCount = 0;\n-\tunsigned int outCount = 0;\n-\tSize sensorSize = data->sensor_->resolution();\n-\tfor (const StreamRole role : roles) {\n-\t\tswitch (role) {\n-\t\tcase StreamRole::Raw:\n-\t\t\tsize = sensorSize;\n-\t\t\tsensorFormat = findBestFormat(data->sensorFormats_, size, defaultRawBitDepth);\n-\t\t\tpixelFormat = mbusCodeToPixelFormat(sensorFormat.mbus_code,\n-\t\t\t\t\t\t\t    BayerFormat::Packing::CSI2);\n-\t\t\tASSERT(pixelFormat.isValid());\n-\t\t\tcolorSpace = ColorSpace::Raw;\n-\t\t\tbufferCount = 2;\n-\t\t\trawCount++;\n-\t\t\tbreak;\n-\n-\t\tcase StreamRole::StillCapture:\n-\t\t\tfmts = data->isp_[Isp::Output0].dev()->formats();\n-\t\t\tpixelFormat = formats::NV12;\n-\t\t\t/*\n-\t\t\t * Still image codecs usually expect the sYCC color space.\n-\t\t\t * Even RGB codecs will be fine as the RGB we get with the\n-\t\t\t * sYCC color space is the same as sRGB.\n-\t\t\t */\n-\t\t\tcolorSpace = ColorSpace::Sycc;\n-\t\t\t/* Return the largest sensor resolution. */\n-\t\t\tsize = sensorSize;\n-\t\t\tbufferCount = 1;\n-\t\t\toutCount++;\n-\t\t\tbreak;\n-\n-\t\tcase StreamRole::VideoRecording:\n-\t\t\t/*\n-\t\t\t * The colour denoise algorithm requires the analysis\n-\t\t\t * image, produced by the second ISP output, to be in\n-\t\t\t * YUV420 format. 
Select this format as the default, to\n-\t\t\t * maximize chances that it will be picked by\n-\t\t\t * applications and enable usage of the colour denoise\n-\t\t\t * algorithm.\n-\t\t\t */\n-\t\t\tfmts = data->isp_[Isp::Output0].dev()->formats();\n-\t\t\tpixelFormat = formats::YUV420;\n-\t\t\t/*\n-\t\t\t * Choose a color space appropriate for video recording.\n-\t\t\t * Rec.709 will be a good default for HD resolutions.\n-\t\t\t */\n-\t\t\tcolorSpace = ColorSpace::Rec709;\n-\t\t\tsize = { 1920, 1080 };\n-\t\t\tbufferCount = 4;\n-\t\t\toutCount++;\n-\t\t\tbreak;\n-\n-\t\tcase StreamRole::Viewfinder:\n-\t\t\tfmts = data->isp_[Isp::Output0].dev()->formats();\n-\t\t\tpixelFormat = formats::ARGB8888;\n-\t\t\tcolorSpace = ColorSpace::Sycc;\n-\t\t\tsize = { 800, 600 };\n-\t\t\tbufferCount = 4;\n-\t\t\toutCount++;\n-\t\t\tbreak;\n-\n-\t\tdefault:\n-\t\t\tLOG(RPI, Error) << \"Requested stream role not supported: \"\n-\t\t\t\t\t<< role;\n-\t\t\treturn nullptr;\n-\t\t}\n-\n-\t\tif (rawCount > 1 || outCount > 2) {\n-\t\t\tLOG(RPI, Error) << \"Invalid stream roles requested\";\n-\t\t\treturn nullptr;\n-\t\t}\n-\n-\t\tstd::map<PixelFormat, std::vector<SizeRange>> deviceFormats;\n-\t\tif (role == StreamRole::Raw) {\n-\t\t\t/* Translate the MBUS codes to a PixelFormat. */\n-\t\t\tfor (const auto &format : data->sensorFormats_) {\n-\t\t\t\tPixelFormat pf = mbusCodeToPixelFormat(format.first,\n-\t\t\t\t\t\t\t\t       BayerFormat::Packing::CSI2);\n-\t\t\t\tif (pf.isValid())\n-\t\t\t\t\tdeviceFormats.emplace(std::piecewise_construct,\tstd::forward_as_tuple(pf),\n-\t\t\t\t\t\tstd::forward_as_tuple(format.second.begin(), format.second.end()));\n-\t\t\t}\n-\t\t} else {\n-\t\t\t/*\n-\t\t\t * Translate the V4L2PixelFormat to PixelFormat. Note that we\n-\t\t\t * limit the recommended largest ISP output size to match the\n-\t\t\t * sensor resolution.\n-\t\t\t */\n-\t\t\tfor (const auto &format : fmts) {\n-\t\t\t\tPixelFormat pf = format.first.toPixelFormat();\n-\t\t\t\tif (pf.isValid()) {\n-\t\t\t\t\tconst SizeRange &ispSizes = format.second[0];\n-\t\t\t\t\tdeviceFormats[pf].emplace_back(ispSizes.min, sensorSize,\n-\t\t\t\t\t\t\t\t       ispSizes.hStep, ispSizes.vStep);\n-\t\t\t\t}\n-\t\t\t}\n-\t\t}\n-\n-\t\t/* Add the stream format based on the device node used for the use case. */\n-\t\tStreamFormats formats(deviceFormats);\n-\t\tStreamConfiguration cfg(formats);\n-\t\tcfg.size = size;\n-\t\tcfg.pixelFormat = pixelFormat;\n-\t\tcfg.colorSpace = colorSpace;\n-\t\tcfg.bufferCount = bufferCount;\n-\t\tconfig->addConfiguration(cfg);\n-\t}\n-\n-\tconfig->validate();\n-\n-\treturn config;\n-}\n-\n-int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tint ret;\n-\n-\t/* Start by freeing all buffers and reset the Unicam and ISP stream states. 
*/\n-\tdata->freeBuffers();\n-\tfor (auto const stream : data->streams_)\n-\t\tstream->setExternal(false);\n-\n-\tBayerFormat::Packing packing = BayerFormat::Packing::CSI2;\n-\tSize maxSize, sensorSize;\n-\tunsigned int maxIndex = 0;\n-\tbool rawStream = false;\n-\tunsigned int bitDepth = defaultRawBitDepth;\n-\n-\t/*\n-\t * Look for the RAW stream (if given) size as well as the largest\n-\t * ISP output size.\n-\t */\n-\tfor (unsigned i = 0; i < config->size(); i++) {\n-\t\tStreamConfiguration &cfg = config->at(i);\n-\n-\t\tif (isRaw(cfg.pixelFormat)) {\n-\t\t\t/*\n-\t\t\t * If we have been given a RAW stream, use that size\n-\t\t\t * for setting up the sensor.\n-\t\t\t */\n-\t\t\tsensorSize = cfg.size;\n-\t\t\trawStream = true;\n-\t\t\t/* Check if the user has explicitly set an unpacked format. */\n-\t\t\tBayerFormat bayerFormat = BayerFormat::fromPixelFormat(cfg.pixelFormat);\n-\t\t\tpacking = bayerFormat.packing;\n-\t\t\tbitDepth = bayerFormat.bitDepth;\n-\t\t} else {\n-\t\t\tif (cfg.size > maxSize) {\n-\t\t\t\tmaxSize = config->at(i).size;\n-\t\t\t\tmaxIndex = i;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\t/*\n-\t * Calculate the best sensor mode we can use based on the user's\n-\t * request, and apply it to the sensor with the cached transform, if\n-\t * any.\n-\t */\n-\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize, bitDepth);\n-\tconst RPiCameraConfiguration *rpiConfig = static_cast<const RPiCameraConfiguration *>(config);\n-\tret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tV4L2VideoDevice *unicam = data->unicam_[Unicam::Image].dev();\n-\tV4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(unicam, sensorFormat, packing);\n-\tret = unicam->setFormat(&unicamFormat);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tLOG(RPI, Info) << \"Sensor: \" << camera->id()\n-\t\t       << \" - Selected sensor format: \" << sensorFormat\n-\t\t       << \" - Selected unicam format: \" << unicamFormat;\n-\n-\tret = data->isp_[Isp::Input].dev()->setFormat(&unicamFormat);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * See which streams are requested, and route the user\n-\t * StreamConfiguration appropriately.\n-\t */\n-\tV4L2DeviceFormat format;\n-\tbool output0Set = false, output1Set = false;\n-\tfor (unsigned i = 0; i < config->size(); i++) {\n-\t\tStreamConfiguration &cfg = config->at(i);\n-\n-\t\tif (isRaw(cfg.pixelFormat)) {\n-\t\t\tcfg.setStream(&data->unicam_[Unicam::Image]);\n-\t\t\tdata->unicam_[Unicam::Image].setExternal(true);\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\t/* The largest resolution gets routed to the ISP Output 0 node. */\n-\t\tRPi::Stream *stream = i == maxIndex ? 
&data->isp_[Isp::Output0]\n-\t\t\t\t\t\t    : &data->isp_[Isp::Output1];\n-\n-\t\tV4L2PixelFormat fourcc = stream->dev()->toV4L2PixelFormat(cfg.pixelFormat);\n-\t\tformat.size = cfg.size;\n-\t\tformat.fourcc = fourcc;\n-\t\tformat.colorSpace = cfg.colorSpace;\n-\n-\t\tLOG(RPI, Debug) << \"Setting \" << stream->name() << \" to \"\n-\t\t\t\t<< format;\n-\n-\t\tret = stream->dev()->setFormat(&format);\n-\t\tif (ret)\n-\t\t\treturn -EINVAL;\n-\n-\t\tif (format.size != cfg.size || format.fourcc != fourcc) {\n-\t\t\tLOG(RPI, Error)\n-\t\t\t\t<< \"Failed to set requested format on \" << stream->name()\n-\t\t\t\t<< \", returned \" << format;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\n-\t\tLOG(RPI, Debug)\n-\t\t\t<< \"Stream \" << stream->name() << \" has color space \"\n-\t\t\t<< ColorSpace::toString(cfg.colorSpace);\n-\n-\t\tcfg.setStream(stream);\n-\t\tstream->setExternal(true);\n-\n-\t\tif (i != maxIndex)\n-\t\t\toutput1Set = true;\n-\t\telse\n-\t\t\toutput0Set = true;\n-\t}\n-\n-\t/*\n-\t * If ISP::Output0 stream has not been configured by the application,\n-\t * we must allow the hardware to generate an output so that the data\n-\t * flow in the pipeline handler remains consistent, and we still generate\n-\t * statistics for the IPA to use. So enable the output at a very low\n-\t * resolution for internal use.\n-\t *\n-\t * \\todo Allow the pipeline to work correctly without Output0 and only\n-\t * statistics coming from the hardware.\n-\t */\n-\tif (!output0Set) {\n-\t\tV4L2VideoDevice *dev = data->isp_[Isp::Output0].dev();\n-\n-\t\tmaxSize = Size(320, 240);\n-\t\tformat = {};\n-\t\tformat.size = maxSize;\n-\t\tformat.fourcc = dev->toV4L2PixelFormat(formats::YUV420);\n-\t\t/* No one asked for output, so the color space doesn't matter. */\n-\t\tformat.colorSpace = ColorSpace::Sycc;\n-\t\tret = dev->setFormat(&format);\n-\t\tif (ret) {\n-\t\t\tLOG(RPI, Error)\n-\t\t\t\t<< \"Failed to set default format on ISP Output0: \"\n-\t\t\t\t<< ret;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\n-\t\tLOG(RPI, Debug) << \"Defaulting ISP Output0 format to \"\n-\t\t\t\t<< format;\n-\t}\n-\n-\t/*\n-\t * If ISP::Output1 stream has not been requested by the application, we\n-\t * set it up for internal use now. This second stream will be used for\n-\t * fast colour denoise, and must be a quarter resolution of the ISP::Output0\n-\t * stream. However, also limit the maximum size to 1200 pixels in the\n-\t * larger dimension, just to avoid being wasteful with buffer allocations\n-\t * and memory bandwidth.\n-\t *\n-\t * \\todo If Output 1 format is not YUV420, Output 1 ought to be disabled as\n-\t * colour denoise will not run.\n-\t */\n-\tif (!output1Set) {\n-\t\tV4L2VideoDevice *dev = data->isp_[Isp::Output1].dev();\n-\n-\t\tV4L2DeviceFormat output1Format;\n-\t\tconstexpr Size maxDimensions(1200, 1200);\n-\t\tconst Size limit = maxDimensions.boundedToAspectRatio(format.size);\n-\n-\t\toutput1Format.size = (format.size / 2).boundedTo(limit).alignedDownTo(2, 2);\n-\t\toutput1Format.colorSpace = format.colorSpace;\n-\t\toutput1Format.fourcc = dev->toV4L2PixelFormat(formats::YUV420);\n-\n-\t\tLOG(RPI, Debug) << \"Setting ISP Output1 (internal) to \"\n-\t\t\t\t<< output1Format;\n-\n-\t\tret = dev->setFormat(&output1Format);\n-\t\tif (ret) {\n-\t\t\tLOG(RPI, Error) << \"Failed to set format on ISP Output1: \"\n-\t\t\t\t\t<< ret;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\t/* ISP statistics output format. 
*/\n-\tformat = {};\n-\tformat.fourcc = V4L2PixelFormat(V4L2_META_FMT_BCM2835_ISP_STATS);\n-\tret = data->isp_[Isp::Stats].dev()->setFormat(&format);\n-\tif (ret) {\n-\t\tLOG(RPI, Error) << \"Failed to set format on ISP stats stream: \"\n-\t\t\t\t<< format;\n-\t\treturn ret;\n-\t}\n-\n-\t/* Figure out the smallest selection the ISP will allow. */\n-\tRectangle testCrop(0, 0, 1, 1);\n-\tdata->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &testCrop);\n-\tdata->ispMinCropSize_ = testCrop.size();\n-\n-\t/* Adjust aspect ratio by providing crops on the input image. */\n-\tSize size = unicamFormat.size.boundedToAspectRatio(maxSize);\n-\tRectangle crop = size.centeredTo(Rectangle(unicamFormat.size).center());\n-\tRectangle defaultCrop = crop;\n-\tdata->ispCrop_ = crop;\n-\n-\tdata->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &crop);\n-\n-\tipa::RPi::ConfigResult result;\n-\tret = data->configureIPA(config, &result);\n-\tif (ret)\n-\t\tLOG(RPI, Error) << \"Failed to configure the IPA: \" << ret;\n-\n-\t/*\n-\t * Set the scaler crop to the value we are using (scaled to native sensor\n-\t * coordinates).\n-\t */\n-\tdata->scalerCrop_ = data->scaleIspCrop(data->ispCrop_);\n-\n-\t/*\n-\t * Configure the Unicam embedded data output format only if the sensor\n-\t * supports it.\n-\t */\n-\tif (data->sensorMetadata_) {\n-\t\tV4L2SubdeviceFormat embeddedFormat;\n-\n-\t\tdata->sensor_->device()->getFormat(1, &embeddedFormat);\n-\t\tformat.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);\n-\t\tformat.planes[0].size = embeddedFormat.size.width * embeddedFormat.size.height;\n-\n-\t\tLOG(RPI, Debug) << \"Setting embedded data format.\";\n-\t\tret = data->unicam_[Unicam::Embedded].dev()->setFormat(&format);\n-\t\tif (ret) {\n-\t\t\tLOG(RPI, Error) << \"Failed to set format on Unicam embedded: \"\n-\t\t\t\t\t<< format;\n-\t\t\treturn ret;\n-\t\t}\n-\t}\n-\n-\t/*\n-\t * Update the ScalerCropMaximum to the correct value for this camera mode.\n-\t * For us, it's the same as the \"analogue crop\".\n-\t *\n-\t * \\todo Make this property the ScalerCrop maximum value when dynamic\n-\t * controls are available and set it at validate() time\n-\t */\n-\tdata->properties_.set(properties::ScalerCropMaximum, data->sensorInfo_.analogCrop);\n-\n-\t/* Store the mode sensitivity for the application. */\n-\tdata->properties_.set(properties::SensorSensitivity, result.modeSensitivity);\n-\n-\t/* Update the controls that the Raspberry Pi IPA can handle. */\n-\tControlInfoMap::Map ctrlMap;\n-\tfor (auto const &c : result.controlInfo)\n-\t\tctrlMap.emplace(c.first, c.second);\n-\n-\t/* Add the ScalerCrop control limits based on the current mode. */\n-\tRectangle ispMinCrop = data->scaleIspCrop(Rectangle(data->ispMinCropSize_));\n-\tdefaultCrop = data->scaleIspCrop(defaultCrop);\n-\tctrlMap[&controls::ScalerCrop] = ControlInfo(ispMinCrop, data->sensorInfo_.analogCrop, defaultCrop);\n-\n-\tdata->controlInfo_ = ControlInfoMap(std::move(ctrlMap), result.controlInfo.idmap());\n-\n-\t/* Setup the Video Mux/Bridge entities. 
*/\n-\tfor (auto &[device, link] : data->bridgeDevices_) {\n-\t\t/*\n-\t\t * Start by disabling all the sink pad links on the devices in the\n-\t\t * cascade, with the exception of the link connecting the device.\n-\t\t */\n-\t\tfor (const MediaPad *p : device->entity()->pads()) {\n-\t\t\tif (!(p->flags() & MEDIA_PAD_FL_SINK))\n-\t\t\t\tcontinue;\n-\n-\t\t\tfor (MediaLink *l : p->links()) {\n-\t\t\t\tif (l != link)\n-\t\t\t\t\tl->setEnabled(false);\n-\t\t\t}\n-\t\t}\n-\n-\t\t/*\n-\t\t * Next, enable the entity -> entity links, and setup the pad format.\n-\t\t *\n-\t\t * \\todo Some bridge devices may chainge the media bus code, so we\n-\t\t * ought to read the source pad format and propagate it to the sink pad.\n-\t\t */\n-\t\tlink->setEnabled(true);\n-\t\tconst MediaPad *sinkPad = link->sink();\n-\t\tret = device->setFormat(sinkPad->index(), &sensorFormat);\n-\t\tif (ret) {\n-\t\t\tLOG(RPI, Error) << \"Failed to set format on \" << device->entity()->name()\n-\t\t\t\t\t<< \" pad \" << sinkPad->index()\n-\t\t\t\t\t<< \" with format  \" << format\n-\t\t\t\t\t<< \": \" << ret;\n-\t\t\treturn ret;\n-\t\t}\n-\n-\t\tLOG(RPI, Debug) << \"Configured media link on device \" << device->entity()->name()\n-\t\t\t\t<< \" on pad \" << sinkPad->index();\n-\t}\n-\n-\treturn ret;\n-}\n-\n-int PipelineHandlerRPi::exportFrameBuffers([[maybe_unused]] Camera *camera, Stream *stream,\n-\t\t\t\t\t   std::vector<std::unique_ptr<FrameBuffer>> *buffers)\n-{\n-\tRPi::Stream *s = static_cast<RPi::Stream *>(stream);\n-\tunsigned int count = stream->configuration().bufferCount;\n-\tint ret = s->dev()->exportBuffers(count, buffers);\n-\n-\ts->setExportedBuffers(buffers);\n-\n-\treturn ret;\n-}\n-\n-int PipelineHandlerRPi::start(Camera *camera, const ControlList *controls)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tint ret;\n-\n-\t/* Check if a ScalerCrop control was specified. */\n-\tif (controls)\n-\t\tdata->applyScalerCrop(*controls);\n-\n-\t/* Start the IPA. */\n-\tipa::RPi::StartResult result;\n-\tdata->ipa_->start(controls ? *controls : ControlList{ controls::controls },\n-\t\t\t  &result);\n-\n-\t/* Apply any gain/exposure settings that the IPA may have passed back. */\n-\tif (!result.controls.empty())\n-\t\tdata->setSensorControls(result.controls);\n-\n-\t/* Configure the number of dropped frames required on startup. */\n-\tdata->dropFrameCount_ = data->config_.disableStartupFrameDrops\n-\t\t\t\t? 0 : result.dropFrameCount;\n-\n-\tfor (auto const stream : data->streams_)\n-\t\tstream->resetBuffers();\n-\n-\tif (!data->buffersAllocated_) {\n-\t\t/* Allocate buffers for internal pipeline usage. */\n-\t\tret = prepareBuffers(camera);\n-\t\tif (ret) {\n-\t\t\tLOG(RPI, Error) << \"Failed to allocate buffers\";\n-\t\t\tdata->freeBuffers();\n-\t\t\tstop(camera);\n-\t\t\treturn ret;\n-\t\t}\n-\t\tdata->buffersAllocated_ = true;\n-\t}\n-\n-\t/* We need to set the dropFrameCount_ before queueing buffers. */\n-\tret = queueAllBuffers(camera);\n-\tif (ret) {\n-\t\tLOG(RPI, Error) << \"Failed to queue buffers\";\n-\t\tstop(camera);\n-\t\treturn ret;\n-\t}\n-\n-\t/* Enable SOF event generation. */\n-\tdata->unicam_[Unicam::Image].dev()->setFrameStartEnabled(true);\n-\n-\t/*\n-\t * Reset the delayed controls with the gain and exposure values set by\n-\t * the IPA.\n-\t */\n-\tdata->delayedCtrls_->reset(0);\n-\n-\tdata->state_ = RPiCameraData::State::Idle;\n-\n-\t/* Start all streams. 
*/\n-\tfor (auto const stream : data->streams_) {\n-\t\tret = stream->dev()->streamOn();\n-\t\tif (ret) {\n-\t\t\tstop(camera);\n-\t\t\treturn ret;\n-\t\t}\n-\t}\n-\n-\treturn 0;\n-}\n-\n-void PipelineHandlerRPi::stopDevice(Camera *camera)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\n-\tdata->state_ = RPiCameraData::State::Stopped;\n-\n-\t/* Disable SOF event generation. */\n-\tdata->unicam_[Unicam::Image].dev()->setFrameStartEnabled(false);\n-\n-\tfor (auto const stream : data->streams_)\n-\t\tstream->dev()->streamOff();\n-\n-\tdata->clearIncompleteRequests();\n-\tdata->bayerQueue_ = {};\n-\tdata->embeddedQueue_ = {};\n-\n-\t/* Stop the IPA. */\n-\tdata->ipa_->stop();\n-}\n-\n-int PipelineHandlerRPi::queueRequestDevice(Camera *camera, Request *request)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\n-\tif (!data->isRunning())\n-\t\treturn -EINVAL;\n-\n-\tLOG(RPI, Debug) << \"queueRequestDevice: New request.\";\n-\n-\t/* Push all buffers supplied in the Request to the respective streams. */\n-\tfor (auto stream : data->streams_) {\n-\t\tif (!stream->isExternal())\n-\t\t\tcontinue;\n-\n-\t\tFrameBuffer *buffer = request->findBuffer(stream);\n-\t\tif (buffer && !stream->getBufferId(buffer)) {\n-\t\t\t/*\n-\t\t\t * This buffer is not recognised, so it must have been allocated\n-\t\t\t * outside the v4l2 device. Store it in the stream buffer list\n-\t\t\t * so we can track it.\n-\t\t\t */\n-\t\t\tstream->setExternalBuffer(buffer);\n-\t\t}\n-\n-\t\t/*\n-\t\t * If no buffer is provided by the request for this stream, we\n-\t\t * queue a nullptr to the stream to signify that it must use an\n-\t\t * internally allocated buffer for this capture request. This\n-\t\t * buffer will not be given back to the application, but is used\n-\t\t * to support the internal pipeline flow.\n-\t\t *\n-\t\t * The below queueBuffer() call will do nothing if there are not\n-\t\t * enough internal buffers allocated, but this will be handled by\n-\t\t * queuing the request for buffers in the RPiStream object.\n-\t\t */\n-\t\tint ret = stream->queueBuffer(buffer);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\t/* Push the request to the back of the queue. 
*/\n-\tdata->requestQueue_.push_back(request);\n-\tdata->handleState();\n-\n-\treturn 0;\n-}\n-\n-bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n-{\n-\tconstexpr unsigned int numUnicamDevices = 2;\n-\n-\t/*\n-\t * Loop over all Unicam instances, but return out once a match is found.\n-\t * This is to ensure we correctly enumrate the camera when an instance\n-\t * of Unicam has registered with media controller, but has not registered\n-\t * device nodes due to a sensor subdevice failure.\n-\t */\n-\tfor (unsigned int i = 0; i < numUnicamDevices; i++) {\n-\t\tDeviceMatch unicam(\"unicam\");\n-\t\tMediaDevice *unicamDevice = acquireMediaDevice(enumerator, unicam);\n-\n-\t\tif (!unicamDevice) {\n-\t\t\tLOG(RPI, Debug) << \"Unable to acquire a Unicam instance\";\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\tDeviceMatch isp(\"bcm2835-isp\");\n-\t\tMediaDevice *ispDevice = acquireMediaDevice(enumerator, isp);\n-\n-\t\tif (!ispDevice) {\n-\t\t\tLOG(RPI, Debug) << \"Unable to acquire ISP instance\";\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\t/*\n-\t\t * The loop below is used to register multiple cameras behind one or more\n-\t\t * video mux devices that are attached to a particular Unicam instance.\n-\t\t * Obviously these cameras cannot be used simultaneously.\n-\t\t */\n-\t\tunsigned int numCameras = 0;\n-\t\tfor (MediaEntity *entity : unicamDevice->entities()) {\n-\t\t\tif (entity->function() != MEDIA_ENT_F_CAM_SENSOR)\n-\t\t\t\tcontinue;\n-\n-\t\t\tint ret = registerCamera(unicamDevice, ispDevice, entity);\n-\t\t\tif (ret)\n-\t\t\t\tLOG(RPI, Error) << \"Failed to register camera \"\n-\t\t\t\t\t\t<< entity->name() << \": \" << ret;\n-\t\t\telse\n-\t\t\t\tnumCameras++;\n-\t\t}\n-\n-\t\tif (numCameras)\n-\t\t\treturn true;\n-\t}\n-\n-\treturn false;\n-}\n-\n-void PipelineHandlerRPi::releaseDevice(Camera *camera)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tdata->freeBuffers();\n-}\n-\n-int PipelineHandlerRPi::registerCamera(MediaDevice *unicam, MediaDevice *isp, MediaEntity *sensorEntity)\n-{\n-\tstd::unique_ptr<RPiCameraData> data = std::make_unique<RPiCameraData>(this);\n-\n-\tif (!data->dmaHeap_.isValid())\n-\t\treturn -ENOMEM;\n-\n-\tMediaEntity *unicamImage = unicam->getEntityByName(\"unicam-image\");\n-\tMediaEntity *ispOutput0 = isp->getEntityByName(\"bcm2835-isp0-output0\");\n-\tMediaEntity *ispCapture1 = isp->getEntityByName(\"bcm2835-isp0-capture1\");\n-\tMediaEntity *ispCapture2 = isp->getEntityByName(\"bcm2835-isp0-capture2\");\n-\tMediaEntity *ispCapture3 = isp->getEntityByName(\"bcm2835-isp0-capture3\");\n-\n-\tif (!unicamImage || !ispOutput0 || !ispCapture1 || !ispCapture2 || !ispCapture3)\n-\t\treturn -ENOENT;\n-\n-\t/* Locate and open the unicam video streams. */\n-\tdata->unicam_[Unicam::Image] = RPi::Stream(\"Unicam Image\", unicamImage);\n-\n-\t/* An embedded data node will not be present if the sensor does not support it. */\n-\tMediaEntity *unicamEmbedded = unicam->getEntityByName(\"unicam-embedded\");\n-\tif (unicamEmbedded) {\n-\t\tdata->unicam_[Unicam::Embedded] = RPi::Stream(\"Unicam Embedded\", unicamEmbedded);\n-\t\tdata->unicam_[Unicam::Embedded].dev()->bufferReady.connect(data.get(),\n-\t\t\t\t\t\t\t\t\t   &RPiCameraData::unicamBufferDequeue);\n-\t}\n-\n-\t/* Tag the ISP input stream as an import stream. 
*/\n-\tdata->isp_[Isp::Input] = RPi::Stream(\"ISP Input\", ispOutput0, true);\n-\tdata->isp_[Isp::Output0] = RPi::Stream(\"ISP Output0\", ispCapture1);\n-\tdata->isp_[Isp::Output1] = RPi::Stream(\"ISP Output1\", ispCapture2);\n-\tdata->isp_[Isp::Stats] = RPi::Stream(\"ISP Stats\", ispCapture3);\n-\n-\t/* Wire up all the buffer connections. */\n-\tdata->unicam_[Unicam::Image].dev()->dequeueTimeout.connect(data.get(), &RPiCameraData::unicamTimeout);\n-\tdata->unicam_[Unicam::Image].dev()->frameStart.connect(data.get(), &RPiCameraData::frameStarted);\n-\tdata->unicam_[Unicam::Image].dev()->bufferReady.connect(data.get(), &RPiCameraData::unicamBufferDequeue);\n-\tdata->isp_[Isp::Input].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispInputDequeue);\n-\tdata->isp_[Isp::Output0].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue);\n-\tdata->isp_[Isp::Output1].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue);\n-\tdata->isp_[Isp::Stats].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue);\n-\n-\tdata->sensor_ = std::make_unique<CameraSensor>(sensorEntity);\n-\tif (!data->sensor_)\n-\t\treturn -EINVAL;\n-\n-\tif (data->sensor_->init())\n-\t\treturn -EINVAL;\n-\n-\t/*\n-\t * Enumerate all the Video Mux/Bridge devices across the sensor -> unicam\n-\t * chain. There may be a cascade of devices in this chain!\n-\t */\n-\tMediaLink *link = sensorEntity->getPadByIndex(0)->links()[0];\n-\tdata->enumerateVideoDevices(link);\n-\n-\tdata->sensorFormats_ = populateSensorFormats(data->sensor_);\n-\n-\tipa::RPi::InitResult result;\n-\tif (data->loadIPA(&result)) {\n-\t\tLOG(RPI, Error) << \"Failed to load a suitable IPA library\";\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (result.sensorConfig.sensorMetadata ^ !!unicamEmbedded) {\n-\t\tLOG(RPI, Warning) << \"Mismatch between Unicam and CamHelper for embedded data usage!\";\n-\t\tresult.sensorConfig.sensorMetadata = false;\n-\t\tif (unicamEmbedded)\n-\t\t\tdata->unicam_[Unicam::Embedded].dev()->bufferReady.disconnect();\n-\t}\n-\n-\t/*\n-\t * Open all Unicam and ISP streams. The exception is the embedded data\n-\t * stream, which only gets opened below if the IPA reports that the sensor\n-\t * supports embedded data.\n-\t *\n-\t * The below grouping is just for convenience so that we can easily\n-\t * iterate over all streams in one go.\n-\t */\n-\tdata->streams_.push_back(&data->unicam_[Unicam::Image]);\n-\tif (result.sensorConfig.sensorMetadata)\n-\t\tdata->streams_.push_back(&data->unicam_[Unicam::Embedded]);\n-\n-\tfor (auto &stream : data->isp_)\n-\t\tdata->streams_.push_back(&stream);\n-\n-\tfor (auto stream : data->streams_) {\n-\t\tint ret = stream->dev()->open();\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\tif (!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {\n-\t\tLOG(RPI, Error) << \"Unicam driver does not use the MediaController, please update your kernel!\";\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/*\n-\t * Setup our delayed control writer with the sensor default\n-\t * gain and exposure delays. 
Mark VBLANK for priority write.\n-\t */\n-\tstd::unordered_map<uint32_t, RPi::DelayedControls::ControlParams> params = {\n-\t\t{ V4L2_CID_ANALOGUE_GAIN, { result.sensorConfig.gainDelay, false } },\n-\t\t{ V4L2_CID_EXPOSURE, { result.sensorConfig.exposureDelay, false } },\n-\t\t{ V4L2_CID_HBLANK, { result.sensorConfig.hblankDelay, false } },\n-\t\t{ V4L2_CID_VBLANK, { result.sensorConfig.vblankDelay, true } }\n-\t};\n-\tdata->delayedCtrls_ = std::make_unique<RPi::DelayedControls>(data->sensor_->device(), params);\n-\tdata->sensorMetadata_ = result.sensorConfig.sensorMetadata;\n-\n-\t/* Register initial controls that the Raspberry Pi IPA can handle. */\n-\tdata->controlInfo_ = std::move(result.controlInfo);\n-\n-\t/* Initialize the camera properties. */\n-\tdata->properties_ = data->sensor_->properties();\n-\n-\t/*\n-\t * The V4L2_CID_NOTIFY_GAINS control, if present, is used to inform the\n-\t * sensor of the colour gains. It is defined to be a linear gain where\n-\t * the default value represents a gain of exactly one.\n-\t */\n-\tauto it = data->sensor_->controls().find(V4L2_CID_NOTIFY_GAINS);\n-\tif (it != data->sensor_->controls().end())\n-\t\tdata->notifyGainsUnity_ = it->second.def().get<int32_t>();\n-\n-\t/*\n-\t * Set a default value for the ScalerCropMaximum property to show\n-\t * that we support its use, however, initialise it to zero because\n-\t * it's not meaningful until a camera mode has been chosen.\n-\t */\n-\tdata->properties_.set(properties::ScalerCropMaximum, Rectangle{});\n-\n-\t/*\n-\t * We cache two things about the sensor in relation to transforms\n-\t * (meaning horizontal and vertical flips): if they affect the Bayer\n-\t * ordering, and what the \"native\" Bayer order is, when no transforms\n-\t * are applied.\n-\t *\n-\t * We note that the sensor's cached list of supported formats is\n-\t * already in the \"native\" order, with any flips having been undone.\n-\t */\n-\tconst V4L2Subdevice *sensor = data->sensor_->device();\n-\tconst struct v4l2_query_ext_ctrl *hflipCtrl = sensor->controlInfo(V4L2_CID_HFLIP);\n-\tif (hflipCtrl) {\n-\t\t/* We assume it will support vflips too... */\n-\t\tdata->flipsAlterBayerOrder_ = hflipCtrl->flags & V4L2_CTRL_FLAG_MODIFY_LAYOUT;\n-\t}\n-\n-\t/* Look for a valid Bayer format. */\n-\tBayerFormat bayerFormat;\n-\tfor (const auto &iter : data->sensorFormats_) {\n-\t\tbayerFormat = BayerFormat::fromMbusCode(iter.first);\n-\t\tif (bayerFormat.isValid())\n-\t\t\tbreak;\n-\t}\n-\n-\tif (!bayerFormat.isValid()) {\n-\t\tLOG(RPI, Error) << \"No Bayer format found\";\n-\t\treturn -EINVAL;\n-\t}\n-\tdata->nativeBayerOrder_ = bayerFormat.order;\n-\n-\t/*\n-\t * List the available streams an application may request. At present, we\n-\t * do not advertise Unicam Embedded and ISP Statistics streams, as there\n-\t * is no mechanism for the application to request non-image buffer formats.\n-\t */\n-\tstd::set<Stream *> streams;\n-\tstreams.insert(&data->unicam_[Unicam::Image]);\n-\tstreams.insert(&data->isp_[Isp::Output0]);\n-\tstreams.insert(&data->isp_[Isp::Output1]);\n-\n-\tint ret = data->loadPipelineConfiguration();\n-\tif (ret) {\n-\t\tLOG(RPI, Error) << \"Unable to load pipeline configuration\";\n-\t\treturn ret;\n-\t}\n-\n-\t/* Create and register the camera. 
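For clarity on the delay semantics above: a delay of N means a control written now is only applied by the sensor N frames later, so DelayedControls queues each write against a future frame sequence. A sketch with hypothetical delay values (the { delay, priorityWrite } pairs mirror the ControlParams initialisers above):

	/*
	 * Hypothetical delays, for illustration only:
	 *   { V4L2_CID_ANALOGUE_GAIN, { 1, false } }  -> applied at frame N + 1
	 *   { V4L2_CID_EXPOSURE,      { 2, false } }  -> applied at frame N + 2
	 *   { V4L2_CID_VBLANK,        { 2, true  } }  -> as above, but the
	 *     priority-write flag sends VBLANK in its own, earlier control
	 *     write so EXPOSURE is validated against the new frame length.
	 */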
*/\n-\tconst std::string &id = data->sensor_->id();\n-\tstd::shared_ptr<Camera> camera =\n-\t\tCamera::create(std::move(data), id, streams);\n-\tPipelineHandler::registerCamera(std::move(camera));\n-\n-\tLOG(RPI, Info) << \"Registered camera \" << id\n-\t\t       << \" to Unicam device \" << unicam->deviceNode()\n-\t\t       << \" and ISP device \" << isp->deviceNode();\n-\treturn 0;\n-}\n-\n-int PipelineHandlerRPi::queueAllBuffers(Camera *camera)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tint ret;\n-\n-\tfor (auto const stream : data->streams_) {\n-\t\tif (!stream->isExternal()) {\n-\t\t\tret = stream->queueAllBuffers();\n-\t\t\tif (ret < 0)\n-\t\t\t\treturn ret;\n-\t\t} else {\n-\t\t\t/*\n-\t\t\t * For external streams, we must queue up a set of internal\n-\t\t\t * buffers to handle the number of drop frames requested by\n-\t\t\t * the IPA. This is done by passing nullptr in queueBuffer().\n-\t\t\t *\n-\t\t\t * The below queueBuffer() call will do nothing if there\n-\t\t\t * are not enough internal buffers allocated, but this will\n-\t\t\t * be handled by queuing the request for buffers in the\n-\t\t\t * RPiStream object.\n-\t\t\t */\n-\t\t\tunsigned int i;\n-\t\t\tfor (i = 0; i < data->dropFrameCount_; i++) {\n-\t\t\t\tret = stream->queueBuffer(nullptr);\n-\t\t\t\tif (ret)\n-\t\t\t\t\treturn ret;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\treturn 0;\n-}\n-\n-int PipelineHandlerRPi::prepareBuffers(Camera *camera)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tunsigned int numRawBuffers = 0;\n-\tint ret;\n-\n-\tfor (Stream *s : camera->streams()) {\n-\t\tif (isRaw(s->configuration().pixelFormat)) {\n-\t\t\tnumRawBuffers = s->configuration().bufferCount;\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\t/* Decide how many internal buffers to allocate. */\n-\tfor (auto const stream : data->streams_) {\n-\t\tunsigned int numBuffers;\n-\t\t/*\n-\t\t * For Unicam, allocate a minimum number of buffers for internal\n-\t\t * use as we want to avoid any frame drops.\n-\t\t */\n-\t\tconst unsigned int minBuffers = data->config_.minTotalUnicamBuffers;\n-\t\tif (stream == &data->unicam_[Unicam::Image]) {\n-\t\t\t/*\n-\t\t\t * If an application has configured a RAW stream, allocate\n-\t\t\t * additional buffers to make up the minimum, but ensure\n-\t\t\t * we have at least minUnicamBuffers of internal buffers\n-\t\t\t * to use to minimise frame drops.\n-\t\t\t */\n-\t\t\tnumBuffers = std::max<int>(data->config_.minUnicamBuffers,\n-\t\t\t\t\t\t   minBuffers - numRawBuffers);\n-\t\t} else if (stream == &data->isp_[Isp::Input]) {\n-\t\t\t/*\n-\t\t\t * ISP input buffers are imported from Unicam, so follow\n-\t\t\t * similar logic as above to count all the RAW buffers\n-\t\t\t * available.\n-\t\t\t */\n-\t\t\tnumBuffers = numRawBuffers +\n-\t\t\t\t     std::max<int>(data->config_.minUnicamBuffers,\n-\t\t\t\t\t\t   minBuffers - numRawBuffers);\n-\n-\t\t} else if (stream == &data->unicam_[Unicam::Embedded]) {\n-\t\t\t/*\n-\t\t\t * Embedded data buffers are (currently) for internal use,\n-\t\t\t * so allocate the minimum required to avoid frame drops.\n-\t\t\t */\n-\t\t\tnumBuffers = minBuffers;\n-\t\t} else {\n-\t\t\t/*\n-\t\t\t * Since the ISP runs synchronous with the IPA and requests,\n-\t\t\t * we only ever need one set of internal buffers. 
Any buffers\n-\t\t\t * the application wants to hold onto will already be exported\n-\t\t\t * through PipelineHandlerRPi::exportFrameBuffers().\n-\t\t\t */\n-\t\t\tnumBuffers = 1;\n-\t\t}\n-\n-\t\tret = stream->prepareBuffers(numBuffers);\n-\t\tif (ret < 0)\n-\t\t\treturn ret;\n-\t}\n-\n-\t/*\n-\t * Pass the stats and embedded data buffers to the IPA. No other\n-\t * buffers need to be passed.\n-\t */\n-\tmapBuffers(camera, data->isp_[Isp::Stats].getBuffers(), RPi::MaskStats);\n-\tif (data->sensorMetadata_)\n-\t\tmapBuffers(camera, data->unicam_[Unicam::Embedded].getBuffers(),\n-\t\t\t   RPi::MaskEmbeddedData);\n-\n-\treturn 0;\n-}\n-\n-void PipelineHandlerRPi::mapBuffers(Camera *camera, const RPi::BufferMap &buffers, unsigned int mask)\n-{\n-\tRPiCameraData *data = cameraData(camera);\n-\tstd::vector<IPABuffer> bufferIds;\n-\t/*\n-\t * Link the FrameBuffers with the id (key value) in the map stored in\n-\t * the RPi stream object - along with an identifier mask.\n-\t *\n-\t * This will allow us to identify buffers passed between the pipeline\n-\t * handler and the IPA.\n-\t */\n-\tfor (auto const &it : buffers) {\n-\t\tbufferIds.push_back(IPABuffer(mask | it.first,\n-\t\t\t\t\t       it.second->planes()));\n-\t\tdata->bufferIds_.insert(mask | it.first);\n-\t}\n-\n-\tdata->ipa_->mapBuffers(bufferIds);\n-}\n-\n-void RPiCameraData::freeBuffers()\n-{\n-\tif (ipa_) {\n-\t\t/*\n-\t\t * Copy the buffer ids from the unordered_set to a vector to\n-\t\t * pass to the IPA.\n-\t\t */\n-\t\tstd::vector<unsigned int> bufferIds(bufferIds_.begin(),\n-\t\t\t\t\t\t    bufferIds_.end());\n-\t\tipa_->unmapBuffers(bufferIds);\n-\t\tbufferIds_.clear();\n-\t}\n-\n-\tfor (auto const stream : streams_)\n-\t\tstream->releaseBuffers();\n-\n-\tbuffersAllocated_ = false;\n-}\n-\n-void RPiCameraData::frameStarted(uint32_t sequence)\n-{\n-\tLOG(RPI, Debug) << \"frame start \" << sequence;\n-\n-\t/* Write any controls for the next frame as soon as we can. 
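The buffer-count logic in prepareBuffers() above is easiest to follow with numbers. A worked example, assuming the default configuration (minUnicamBuffers = 2, minTotalUnicamBuffers = 4) and a hypothetical application RAW stream with bufferCount = 3:

	/*
	 * Illustrative arithmetic only:
	 *   numRawBuffers           = 3
	 *   Unicam Image (internal) = std::max(2, 4 - 3)     = 2
	 *   ISP Input               = 3 + std::max(2, 4 - 3) = 5
	 *   Unicam Embedded         = 4 (minTotalUnicamBuffers)
	 *   Other ISP streams       = 1
	 */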
*/\n-\tdelayedCtrls_->applyControls(sequence);\n-}\n-\n-int RPiCameraData::loadIPA(ipa::RPi::InitResult *result)\n-{\n-\tipa_ = IPAManager::createIPA<ipa::RPi::IPAProxyRPi>(pipe(), 1, 1);\n-\n-\tif (!ipa_)\n-\t\treturn -ENOENT;\n-\n-\tipa_->processStatsComplete.connect(this, &RPiCameraData::processStatsComplete);\n-\tipa_->prepareIspComplete.connect(this, &RPiCameraData::prepareIspComplete);\n-\tipa_->setIspControls.connect(this, &RPiCameraData::setIspControls);\n-\tipa_->setDelayedControls.connect(this, &RPiCameraData::setDelayedControls);\n-\tipa_->setLensControls.connect(this, &RPiCameraData::setLensControls);\n-\tipa_->setCameraTimeout.connect(this, &RPiCameraData::setCameraTimeout);\n-\n-\t/*\n-\t * The configuration (tuning file) is made from the sensor name unless\n-\t * the environment variable overrides it.\n-\t */\n-\tstd::string configurationFile;\n-\tchar const *configFromEnv = utils::secure_getenv(\"LIBCAMERA_RPI_TUNING_FILE\");\n-\tif (!configFromEnv || *configFromEnv == '\\0') {\n-\t\tstd::string model = sensor_->model();\n-\t\tif (isMonoSensor(sensor_))\n-\t\t\tmodel += \"_mono\";\n-\t\tconfigurationFile = ipa_->configurationFile(model + \".json\", \"rpi\");\n-\t} else {\n-\t\tconfigurationFile = std::string(configFromEnv);\n-\t}\n-\n-\tIPASettings settings(configurationFile, sensor_->model());\n-\tipa::RPi::InitParams params;\n-\n-\tparams.lensPresent = !!sensor_->focusLens();\n-\treturn ipa_->init(settings, params, result);\n-}\n-\n-int RPiCameraData::configureIPA(const CameraConfiguration *config, ipa::RPi::ConfigResult *result)\n-{\n-\tstd::map<unsigned int, ControlInfoMap> entityControls;\n-\tipa::RPi::ConfigParams params;\n-\n-\t/* \\todo Move passing of ispControls and lensControls to ipa::init() */\n-\tparams.sensorControls = sensor_->controls();\n-\tparams.ispControls = isp_[Isp::Input].dev()->controls();\n-\tif (sensor_->focusLens())\n-\t\tparams.lensControls = sensor_->focusLens()->controls();\n-\n-\t/* Always send the user transform to the IPA. */\n-\tparams.transform = static_cast<unsigned int>(config->transform);\n-\n-\t/* Allocate the lens shading table via dmaHeap and pass to the IPA. */\n-\tif (!lsTable_.isValid()) {\n-\t\tlsTable_ = SharedFD(dmaHeap_.alloc(\"ls_grid\", ipa::RPi::MaxLsGridSize));\n-\t\tif (!lsTable_.isValid())\n-\t\t\treturn -ENOMEM;\n-\n-\t\t/* Allow the IPA to mmap the LS table via the file descriptor. */\n-\t\t/*\n-\t\t * \\todo Investigate if mapping the lens shading table buffer\n-\t\t * could be handled with mapBuffers().\n-\t\t */\n-\t\tparams.lsTableHandle = lsTable_;\n-\t}\n-\n-\t/* We store the IPACameraSensorInfo for digital zoom calculations. */\n-\tint ret = sensor_->sensorInfo(&sensorInfo_);\n-\tif (ret) {\n-\t\tLOG(RPI, Error) << \"Failed to retrieve camera sensor info\";\n-\t\treturn ret;\n-\t}\n-\n-\t/* Ready the IPA - it must know about the sensor resolution. 
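To illustrate the tuning file selection in loadIPA() above (the sensor model is hypothetical):

	/*
	 * Illustration only:
	 *   model "imx219"                        -> configurationFile("imx219.json", "rpi")
	 *   model "imx219", mono sensor           -> configurationFile("imx219_mono.json", "rpi")
	 *   LIBCAMERA_RPI_TUNING_FILE=/tmp/t.json -> "/tmp/t.json" used verbatim
	 */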
*/\n-\tret = ipa_->configure(sensorInfo_, params, result);\n-\tif (ret < 0) {\n-\t\tLOG(RPI, Error) << \"IPA configuration failed!\";\n-\t\treturn -EPIPE;\n-\t}\n-\n-\tif (!result->controls.empty())\n-\t\tsetSensorControls(result->controls);\n-\n-\treturn 0;\n-}\n-\n-int RPiCameraData::loadPipelineConfiguration()\n-{\n-\tconfig_ = {\n-\t\t.minUnicamBuffers = 2,\n-\t\t.minTotalUnicamBuffers = 4,\n-\t\t.disableStartupFrameDrops = false,\n-\t\t.unicamTimeoutValue = 0,\n-\t};\n-\n-\tchar const *configFromEnv = utils::secure_getenv(\"LIBCAMERA_RPI_CONFIG_FILE\");\n-\tif (!configFromEnv || *configFromEnv == '\\0')\n-\t\treturn 0;\n-\n-\tstd::string filename = std::string(configFromEnv);\n-\tFile file(filename);\n-\n-\tif (!file.open(File::OpenModeFlag::ReadOnly)) {\n-\t\tLOG(RPI, Error) << \"Failed to open configuration file '\" << filename << \"'\";\n-\t\treturn -EIO;\n-\t}\n-\n-\tLOG(RPI, Info) << \"Using configuration file '\" << filename << \"'\";\n-\n-\tstd::unique_ptr<YamlObject> root = YamlParser::parse(file);\n-\tif (!root) {\n-\t\tLOG(RPI, Warning) << \"Failed to parse configuration file, using defaults\";\n-\t\treturn 0;\n-\t}\n-\n-\tstd::optional<double> ver = (*root)[\"version\"].get<double>();\n-\tif (!ver || *ver != 1.0) {\n-\t\tLOG(RPI, Error) << \"Unexpected configuration file version reported\";\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tconst YamlObject &phConfig = (*root)[\"pipeline_handler\"];\n-\tconfig_.minUnicamBuffers =\n-\t\tphConfig[\"min_unicam_buffers\"].get<unsigned int>(config_.minUnicamBuffers);\n-\tconfig_.minTotalUnicamBuffers =\n-\t\tphConfig[\"min_total_unicam_buffers\"].get<unsigned int>(config_.minTotalUnicamBuffers);\n-\tconfig_.disableStartupFrameDrops =\n-\t\tphConfig[\"disable_startup_frame_drops\"].get<bool>(config_.disableStartupFrameDrops);\n-\tconfig_.unicamTimeoutValue =\n-\t\tphConfig[\"unicam_timeout_value_ms\"].get<unsigned int>(config_.unicamTimeoutValue);\n-\n-\tif (config_.unicamTimeoutValue) {\n-\t\t/* Disable the IPA signal to control timeout and set the user requested value. */\n-\t\tipa_->setCameraTimeout.disconnect();\n-\t\tunicam_[Unicam::Image].dev()->setDequeueTimeout(config_.unicamTimeoutValue * 1ms);\n-\t}\n-\n-\tif (config_.minTotalUnicamBuffers < config_.minUnicamBuffers) {\n-\t\tLOG(RPI, Error) << \"Invalid configuration: min_total_unicam_buffers must be >= min_unicam_buffers\";\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (config_.minTotalUnicamBuffers < 1) {\n-\t\tLOG(RPI, Error) << \"Invalid configuration: min_total_unicam_buffers must be >= 1\";\n-\t\treturn -EINVAL;\n-\t}\n-\n-\treturn 0;\n-}\n-\n-/*\n- * enumerateVideoDevices() iterates over the Media Controller topology, starting\n- * at the sensor and finishing at Unicam. For each sensor, RPiCameraData stores\n- * a unique list of any intermediate video mux or bridge devices connected in a\n- * cascade, together with the entity to entity link.\n- *\n- * Entity pad configuration and link enabling happens at the end of configure().\n- * We first disable all pad links on each entity device in the chain, and then\n- * selectively enabling the specific links to link sensor to Unicam across all\n- * intermediate muxes and bridges.\n- *\n- * In the cascaded topology below, if Sensor1 is used, the Mux2 -> Mux1 link\n- * will be disabled, and Sensor1 -> Mux1 -> Unicam links enabled. Alternatively,\n- * if Sensor3 is used, the Sensor2 -> Mux2 and Sensor1 -> Mux1 links are disabled,\n- * and Sensor3 -> Mux2 -> Mux1 -> Unicam links are enabled. 
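For reference, an illustrative LIBCAMERA_RPI_CONFIG_FILE exercising every key parsed by loadPipelineConfiguration() above (the values shown are just the built-in defaults, not recommendations):

        version: 1.0
        pipeline_handler:
            min_unicam_buffers: 2
            min_total_unicam_buffers: 4
            disable_startup_frame_drops: false
            unicam_timeout_value_ms: 0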
All other links will\n- * remain unchanged.\n- *\n- *  +----------+\n- *  |  Unicam  |\n- *  +-----^----+\n- *        |\n- *    +---+---+\n- *    |  Mux1 <-------+\n- *    +--^----+       |\n- *       |            |\n- * +-----+---+    +---+---+\n- * | Sensor1 |    |  Mux2 |<--+\n- * +---------+    +-^-----+   |\n- *                  |         |\n- *          +-------+-+   +---+-----+\n- *          | Sensor2 |   | Sensor3 |\n- *          +---------+   +---------+\n- */\n-void RPiCameraData::enumerateVideoDevices(MediaLink *link)\n-{\n-\tconst MediaPad *sinkPad = link->sink();\n-\tconst MediaEntity *entity = sinkPad->entity();\n-\tbool unicamFound = false;\n-\n-\t/* We only deal with Video Mux and Bridge devices in cascade. */\n-\tif (entity->function() != MEDIA_ENT_F_VID_MUX &&\n-\t    entity->function() != MEDIA_ENT_F_VID_IF_BRIDGE)\n-\t\treturn;\n-\n-\t/* Find the source pad for this Video Mux or Bridge device. */\n-\tconst MediaPad *sourcePad = nullptr;\n-\tfor (const MediaPad *pad : entity->pads()) {\n-\t\tif (pad->flags() & MEDIA_PAD_FL_SOURCE) {\n-\t\t\t/*\n-\t\t\t * We can only deal with devices that have a single source\n-\t\t\t * pad. If this device has multiple source pads, ignore it\n-\t\t\t * and this branch in the cascade.\n-\t\t\t */\n-\t\t\tif (sourcePad)\n-\t\t\t\treturn;\n-\n-\t\t\tsourcePad = pad;\n-\t\t}\n-\t}\n-\n-\tLOG(RPI, Debug) << \"Found video mux device \" << entity->name()\n-\t\t\t<< \" linked to sink pad \" << sinkPad->index();\n-\n-\tbridgeDevices_.emplace_back(std::make_unique<V4L2Subdevice>(entity), link);\n-\tbridgeDevices_.back().first->open();\n-\n-\t/*\n-\t * Iterate through all the sink pad links down the cascade to find any\n-\t * other Video Mux and Bridge devices.\n-\t */\n-\tfor (MediaLink *l : sourcePad->links()) {\n-\t\tenumerateVideoDevices(l);\n-\t\t/* Once we reach the Unicam entity, we are done. */\n-\t\tif (l->sink()->entity()->name() == \"unicam-image\") {\n-\t\t\tunicamFound = true;\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\t/* This identifies the end of our entity enumeration recursion. */\n-\tif (link->source()->entity()->function() == MEDIA_ENT_F_CAM_SENSOR) {\n-\t\t/*\n-\t\t* If Unicam is not at the end of this cascade, we cannot configure\n-\t\t* this topology automatically, so remove all entity references.\n-\t\t*/\n-\t\tif (!unicamFound) {\n-\t\t\tLOG(RPI, Warning) << \"Cannot automatically configure this MC topology!\";\n-\t\t\tbridgeDevices_.clear();\n-\t\t}\n-\t}\n-}\n-\n-void RPiCameraData::processStatsComplete(const ipa::RPi::BufferIds &buffers)\n-{\n-\tif (!isRunning())\n-\t\treturn;\n-\n-\tFrameBuffer *buffer = isp_[Isp::Stats].getBuffers().at(buffers.stats & RPi::MaskID);\n-\n-\thandleStreamBuffer(buffer, &isp_[Isp::Stats]);\n-\n-\t/* Last thing to do is to fill up the request metadata. */\n-\tRequest *request = requestQueue_.front();\n-\tControlList metadata;\n-\n-\tipa_->reportMetadata(request->sequence(), &metadata);\n-\trequest->metadata().merge(metadata);\n-\n-\t/*\n-\t * Inform the sensor of the latest colour gains if it has the\n-\t * V4L2_CID_NOTIFY_GAINS control (which means notifyGainsUnity_ is set).\n-\t */\n-\tconst auto &colourGains = metadata.get(libcamera::controls::ColourGains);\n-\tif (notifyGainsUnity_ && colourGains) {\n-\t\t/* The control wants linear gains in the order B, Gb, Gr, R. 
*/\n-\t\tControlList ctrls(sensor_->controls());\n-\t\tstd::array<int32_t, 4> gains{\n-\t\t\tstatic_cast<int32_t>((*colourGains)[1] * *notifyGainsUnity_),\n-\t\t\t*notifyGainsUnity_,\n-\t\t\t*notifyGainsUnity_,\n-\t\t\tstatic_cast<int32_t>((*colourGains)[0] * *notifyGainsUnity_)\n-\t\t};\n-\t\tctrls.set(V4L2_CID_NOTIFY_GAINS, Span<const int32_t>{ gains });\n-\n-\t\tsensor_->setControls(&ctrls);\n-\t}\n-\n-\tstate_ = State::IpaComplete;\n-\thandleState();\n-}\n-\n-void RPiCameraData::prepareIspComplete(const ipa::RPi::BufferIds &buffers)\n-{\n-\tunsigned int embeddedId = buffers.embedded & RPi::MaskID;\n-\tunsigned int bayer = buffers.bayer & RPi::MaskID;\n-\tFrameBuffer *buffer;\n-\n-\tif (!isRunning())\n-\t\treturn;\n-\n-\tbuffer = unicam_[Unicam::Image].getBuffers().at(bayer & RPi::MaskID);\n-\tLOG(RPI, Debug) << \"Input re-queue to ISP, buffer id \" << (bayer & RPi::MaskID)\n-\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n-\n-\tisp_[Isp::Input].queueBuffer(buffer);\n-\tispOutputCount_ = 0;\n-\n-\tif (sensorMetadata_ && embeddedId) {\n-\t\tbuffer = unicam_[Unicam::Embedded].getBuffers().at(embeddedId & RPi::MaskID);\n-\t\thandleStreamBuffer(buffer, &unicam_[Unicam::Embedded]);\n-\t}\n-\n-\thandleState();\n-}\n-\n-void RPiCameraData::setIspControls(const ControlList &controls)\n-{\n-\tControlList ctrls = controls;\n-\n-\tif (ctrls.contains(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING)) {\n-\t\tControlValue &value =\n-\t\t\tconst_cast<ControlValue &>(ctrls.get(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING));\n-\t\tSpan<uint8_t> s = value.data();\n-\t\tbcm2835_isp_lens_shading *ls =\n-\t\t\treinterpret_cast<bcm2835_isp_lens_shading *>(s.data());\n-\t\tls->dmabuf = lsTable_.get();\n-\t}\n-\n-\tisp_[Isp::Input].dev()->setControls(&ctrls);\n-\thandleState();\n-}\n-\n-void RPiCameraData::setDelayedControls(const ControlList &controls, uint32_t delayContext)\n-{\n-\tif (!delayedCtrls_->push(controls, delayContext))\n-\t\tLOG(RPI, Error) << \"V4L2 DelayedControl set failed\";\n-\thandleState();\n-}\n-\n-void RPiCameraData::setLensControls(const ControlList &controls)\n-{\n-\tCameraLens *lens = sensor_->focusLens();\n-\n-\tif (lens && controls.contains(V4L2_CID_FOCUS_ABSOLUTE)) {\n-\t\tControlValue const &focusValue = controls.get(V4L2_CID_FOCUS_ABSOLUTE);\n-\t\tlens->setFocusPosition(focusValue.get<int32_t>());\n-\t}\n-}\n-\n-void RPiCameraData::setCameraTimeout(uint32_t maxFrameLengthMs)\n-{\n-\t/*\n-\t * Set the dequeue timeout to the larger of 5x the maximum reported\n-\t * frame length advertised by the IPA over a number of frames. Allow\n-\t * a minimum timeout value of 1s.\n-\t */\n-\tutils::Duration timeout =\n-\t\tstd::max<utils::Duration>(1s, 5 * maxFrameLengthMs * 1ms);\n-\n-\tLOG(RPI, Debug) << \"Setting Unicam timeout to \" << timeout;\n-\tunicam_[Unicam::Image].dev()->setDequeueTimeout(timeout);\n-}\n-\n-void RPiCameraData::setSensorControls(ControlList &controls)\n-{\n-\t/*\n-\t * We need to ensure that if both VBLANK and EXPOSURE are present, the\n-\t * former must be written ahead of, and separately from EXPOSURE to avoid\n-\t * V4L2 rejecting the latter. 
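A numeric illustration of the NOTIFY_GAINS packing above, with hypothetical values:

	/*
	 * With notifyGainsUnity_ = 256 and ColourGains = { 1.5 (red), 2.0 (blue) }:
	 *   gains = { B, Gb, Gr, R } = { 512, 256, 256, 384 }
	 * i.e. each float gain is scaled so that the unity value represents 1.0x.
	 */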
This is identical to what DelayedControls\n-\t * does with the priority write flag.\n-\t *\n-\t * As a consequence of the below logic, VBLANK gets set twice, and we\n-\t * rely on the v4l2 framework to not pass the second control set to the\n-\t * driver as the actual control value has not changed.\n-\t */\n-\tif (controls.contains(V4L2_CID_EXPOSURE) && controls.contains(V4L2_CID_VBLANK)) {\n-\t\tControlList vblank_ctrl;\n-\n-\t\tvblank_ctrl.set(V4L2_CID_VBLANK, controls.get(V4L2_CID_VBLANK));\n-\t\tsensor_->setControls(&vblank_ctrl);\n-\t}\n-\n-\tsensor_->setControls(&controls);\n-}\n-\n-void RPiCameraData::unicamTimeout()\n-{\n-\tLOG(RPI, Error) << \"Unicam has timed out!\";\n-\tLOG(RPI, Error) << \"Please check that your camera sensor connector is attached securely.\";\n-\tLOG(RPI, Error) << \"Alternatively, try another cable and/or sensor.\";\n-\n-\tstate_ = RPiCameraData::State::Error;\n-\t/*\n-\t * To allow the application to attempt a recovery from this timeout,\n-\t * stop all devices streaming, and return any outstanding requests as\n-\t * incomplete and cancelled.\n-\t */\n-\tfor (auto const stream : streams_)\n-\t\tstream->dev()->streamOff();\n-\n-\tclearIncompleteRequests();\n-}\n-\n-void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer)\n-{\n-\tRPi::Stream *stream = nullptr;\n-\tint index;\n-\n-\tif (!isRunning())\n-\t\treturn;\n-\n-\tfor (RPi::Stream &s : unicam_) {\n-\t\tindex = s.getBufferId(buffer);\n-\t\tif (index) {\n-\t\t\tstream = &s;\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\t/* The buffer must belong to one of our streams. */\n-\tASSERT(stream);\n-\n-\tLOG(RPI, Debug) << \"Stream \" << stream->name() << \" buffer dequeue\"\n-\t\t\t<< \", buffer id \" << index\n-\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n-\n-\tif (stream == &unicam_[Unicam::Image]) {\n-\t\t/*\n-\t\t * Lookup the sensor controls used for this frame sequence from\n-\t\t * DelayedControl and queue them along with the frame buffer.\n-\t\t */\n-\t\tauto [ctrl, delayContext] = delayedCtrls_->get(buffer->metadata().sequence);\n-\t\t/*\n-\t\t * Add the frame timestamp to the ControlList for the IPA to use\n-\t\t * as it does not receive the FrameBuffer object.\n-\t\t */\n-\t\tctrl.set(controls::SensorTimestamp, buffer->metadata().timestamp);\n-\t\tbayerQueue_.push({ buffer, std::move(ctrl), delayContext });\n-\t} else {\n-\t\tembeddedQueue_.push(buffer);\n-\t}\n-\n-\thandleState();\n-}\n-\n-void RPiCameraData::ispInputDequeue(FrameBuffer *buffer)\n-{\n-\tif (!isRunning())\n-\t\treturn;\n-\n-\tLOG(RPI, Debug) << \"Stream ISP Input buffer complete\"\n-\t\t\t<< \", buffer id \" << unicam_[Unicam::Image].getBufferId(buffer)\n-\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n-\n-\t/* The ISP input buffer gets re-queued into Unicam. */\n-\thandleStreamBuffer(buffer, &unicam_[Unicam::Image]);\n-\thandleState();\n-}\n-\n-void RPiCameraData::ispOutputDequeue(FrameBuffer *buffer)\n-{\n-\tRPi::Stream *stream = nullptr;\n-\tint index;\n-\n-\tif (!isRunning())\n-\t\treturn;\n-\n-\tfor (RPi::Stream &s : isp_) {\n-\t\tindex = s.getBufferId(buffer);\n-\t\tif (index) {\n-\t\t\tstream = &s;\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\t/* The buffer must belong to one of our ISP output streams. 
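The ordering constraint in setSensorControls() is worth a concrete illustration (numbers hypothetical): sensor drivers typically clamp EXPOSURE to the current frame length (active lines plus VBLANK), so the larger VBLANK must land first.

	/*
	 * Illustration only: on a 1080-line mode with VBLANK = 100, the
	 * frame length is 1180 lines, so EXPOSURE = 2000 lines would be
	 * rejected or clamped. Writing VBLANK = 1000 first raises the
	 * frame length to 2080 lines, after which EXPOSURE = 2000 is
	 * accepted.
	 */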
*/\n-\tASSERT(stream);\n-\n-\tLOG(RPI, Debug) << \"Stream \" << stream->name() << \" buffer complete\"\n-\t\t\t<< \", buffer id \" << index\n-\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n-\n-\t/*\n-\t * ISP statistics buffer must not be re-queued or sent back to the\n-\t * application until after the IPA signals so.\n-\t */\n-\tif (stream == &isp_[Isp::Stats]) {\n-\t\tipa::RPi::ProcessParams params;\n-\t\tparams.buffers.stats = index | RPi::MaskStats;\n-\t\tparams.ipaContext = requestQueue_.front()->sequence();\n-\t\tipa_->processStats(params);\n-\t} else {\n-\t\t/* Any other ISP output can be handed back to the application now. */\n-\t\thandleStreamBuffer(buffer, stream);\n-\t}\n-\n-\t/*\n-\t * Increment the number of ISP outputs generated.\n-\t * This is needed to track dropped frames.\n-\t */\n-\tispOutputCount_++;\n-\n-\thandleState();\n-}\n-\n-void RPiCameraData::clearIncompleteRequests()\n-{\n-\t/*\n-\t * All outstanding requests (and associated buffers) must be returned\n-\t * back to the application.\n-\t */\n-\twhile (!requestQueue_.empty()) {\n-\t\tRequest *request = requestQueue_.front();\n-\n-\t\tfor (auto &b : request->buffers()) {\n-\t\t\tFrameBuffer *buffer = b.second;\n-\t\t\t/*\n-\t\t\t * Has the buffer already been handed back to the\n-\t\t\t * request? If not, do so now.\n-\t\t\t */\n-\t\t\tif (buffer->request()) {\n-\t\t\t\tbuffer->_d()->cancel();\n-\t\t\t\tpipe()->completeBuffer(request, buffer);\n-\t\t\t}\n-\t\t}\n-\n-\t\tpipe()->completeRequest(request);\n-\t\trequestQueue_.pop_front();\n-\t}\n-}\n-\n-void RPiCameraData::handleStreamBuffer(FrameBuffer *buffer, RPi::Stream *stream)\n-{\n-\t/*\n-\t * It is possible to be here without a pending request, so check\n-\t * that we actually have one to action, otherwise we just return\n-\t * buffer back to the stream.\n-\t */\n-\tRequest *request = requestQueue_.empty() ? nullptr : requestQueue_.front();\n-\tif (!dropFrameCount_ && request && request->findBuffer(stream) == buffer) {\n-\t\t/*\n-\t\t * Check if this is an externally provided buffer, and if\n-\t\t * so, we must stop tracking it in the pipeline handler.\n-\t\t */\n-\t\thandleExternalBuffer(buffer, stream);\n-\t\t/*\n-\t\t * Tag the buffer as completed, returning it to the\n-\t\t * application.\n-\t\t */\n-\t\tpipe()->completeBuffer(request, buffer);\n-\t} else {\n-\t\t/*\n-\t\t * This buffer was not part of the Request (which happens if an\n-\t\t * internal buffer was used for an external stream, or\n-\t\t * unconditionally for internal streams), or there is no pending\n-\t\t * request, so we can recycle it.\n-\t\t */\n-\t\tstream->returnBuffer(buffer);\n-\t}\n-}\n-\n-void RPiCameraData::handleExternalBuffer(FrameBuffer *buffer, RPi::Stream *stream)\n-{\n-\tunsigned int id = stream->getBufferId(buffer);\n-\n-\tif (!(id & RPi::MaskExternalBuffer))\n-\t\treturn;\n-\n-\t/* Stop the Stream object from tracking the buffer. */\n-\tstream->removeExternalBuffer(buffer);\n-}\n-\n-void RPiCameraData::handleState()\n-{\n-\tswitch (state_) {\n-\tcase State::Stopped:\n-\tcase State::Busy:\n-\tcase State::Error:\n-\t\tbreak;\n-\n-\tcase State::IpaComplete:\n-\t\t/* If the request is completed, we will switch to Idle state. 
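The index | RPi::MaskStats arithmetic above packs a per-stream buffer index and a stream-type flag into a single integer, which is what mapBuffers() registers with the IPA. A sketch with illustrative mask values (the authoritative constants live in rpi_stream.h):

	/*
	 * Sketch only - bit layout for illustration, not the real values:
	 *   id    = index | MaskStats;   e.g. 3 | 0x010000 = 0x010003
	 *   index = id & MaskID;         e.g. 0x010003 & 0x00ffff = 3
	 * One integer thus identifies both the stream type and the buffer
	 * within that stream on either side of the IPA boundary.
	 */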
*/\n-\t\tcheckRequestCompleted();\n-\t\t/*\n-\t\t * No break here, we want to try running the pipeline again.\n-\t\t * The fallthrough clause below suppresses compiler warnings.\n-\t\t */\n-\t\t[[fallthrough]];\n-\n-\tcase State::Idle:\n-\t\ttryRunPipeline();\n-\t\tbreak;\n-\t}\n-}\n-\n-void RPiCameraData::checkRequestCompleted()\n-{\n-\tbool requestCompleted = false;\n-\t/*\n-\t * If we are dropping this frame, do not touch the request, simply\n-\t * change the state to IDLE when ready.\n-\t */\n-\tif (!dropFrameCount_) {\n-\t\tRequest *request = requestQueue_.front();\n-\t\tif (request->hasPendingBuffers())\n-\t\t\treturn;\n-\n-\t\t/* Must wait for metadata to be filled in before completing. */\n-\t\tif (state_ != State::IpaComplete)\n-\t\t\treturn;\n-\n-\t\tpipe()->completeRequest(request);\n-\t\trequestQueue_.pop_front();\n-\t\trequestCompleted = true;\n-\t}\n-\n-\t/*\n-\t * Make sure we have three outputs completed in the case of a dropped\n-\t * frame.\n-\t */\n-\tif (state_ == State::IpaComplete &&\n-\t    ((ispOutputCount_ == 3 && dropFrameCount_) || requestCompleted)) {\n-\t\tstate_ = State::Idle;\n-\t\tif (dropFrameCount_) {\n-\t\t\tdropFrameCount_--;\n-\t\t\tLOG(RPI, Debug) << \"Dropping frame at the request of the IPA (\"\n-\t\t\t\t\t<< dropFrameCount_ << \" left)\";\n-\t\t}\n-\t}\n-}\n-\n-Rectangle RPiCameraData::scaleIspCrop(const Rectangle &ispCrop) const\n-{\n-\t/*\n-\t * Scale a crop rectangle defined in the ISP's coordinates into native sensor\n-\t * coordinates.\n-\t */\n-\tRectangle nativeCrop = ispCrop.scaledBy(sensorInfo_.analogCrop.size(),\n-\t\t\t\t\t\tsensorInfo_.outputSize);\n-\tnativeCrop.translateBy(sensorInfo_.analogCrop.topLeft());\n-\treturn nativeCrop;\n-}\n-\n-void RPiCameraData::applyScalerCrop(const ControlList &controls)\n-{\n-\tconst auto &scalerCrop = controls.get<Rectangle>(controls::ScalerCrop);\n-\tif (scalerCrop) {\n-\t\tRectangle nativeCrop = *scalerCrop;\n-\n-\t\tif (!nativeCrop.width || !nativeCrop.height)\n-\t\t\tnativeCrop = { 0, 0, 1, 1 };\n-\n-\t\t/* Create a version of the crop scaled to ISP (camera mode) pixels. */\n-\t\tRectangle ispCrop = nativeCrop.translatedBy(-sensorInfo_.analogCrop.topLeft());\n-\t\tispCrop.scaleBy(sensorInfo_.outputSize, sensorInfo_.analogCrop.size());\n-\n-\t\t/*\n-\t\t * The crop that we set must be:\n-\t\t * 1. At least as big as ispMinCropSize_, once that's been\n-\t\t *    enlarged to the same aspect ratio.\n-\t\t * 2. With the same mid-point, if possible.\n-\t\t * 3. But it can't go outside the sensor area.\n-\t\t */\n-\t\tSize minSize = ispMinCropSize_.expandedToAspectRatio(nativeCrop.size());\n-\t\tSize size = ispCrop.size().expandedTo(minSize);\n-\t\tispCrop = size.centeredTo(ispCrop.center()).enclosedIn(Rectangle(sensorInfo_.outputSize));\n-\n-\t\tif (ispCrop != ispCrop_) {\n-\t\t\tisp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &ispCrop);\n-\t\t\tispCrop_ = ispCrop;\n-\n-\t\t\t/*\n-\t\t\t * Also update the ScalerCrop in the metadata with what we actually\n-\t\t\t * used. 
But we must first rescale that from ISP (camera mode) pixels\n-\t\t\t * back into sensor native pixels.\n-\t\t\t */\n-\t\t\tscalerCrop_ = scaleIspCrop(ispCrop_);\n-\t\t}\n-\t}\n-}\n-\n-void RPiCameraData::fillRequestMetadata(const ControlList &bufferControls,\n-\t\t\t\t\tRequest *request)\n-{\n-\trequest->metadata().set(controls::SensorTimestamp,\n-\t\t\t\tbufferControls.get(controls::SensorTimestamp).value_or(0));\n-\n-\trequest->metadata().set(controls::ScalerCrop, scalerCrop_);\n-}\n-\n-void RPiCameraData::tryRunPipeline()\n-{\n-\tFrameBuffer *embeddedBuffer;\n-\tBayerFrame bayerFrame;\n-\n-\t/* If any of our request or buffer queues are empty, we cannot proceed. */\n-\tif (state_ != State::Idle || requestQueue_.empty() ||\n-\t    bayerQueue_.empty() || (embeddedQueue_.empty() && sensorMetadata_))\n-\t\treturn;\n-\n-\tif (!findMatchingBuffers(bayerFrame, embeddedBuffer))\n-\t\treturn;\n-\n-\t/* Take the first request from the queue and action the IPA. */\n-\tRequest *request = requestQueue_.front();\n-\n-\t/* See if a new ScalerCrop value needs to be applied. */\n-\tapplyScalerCrop(request->controls());\n-\n-\t/*\n-\t * Clear the request metadata and fill it with some initial non-IPA\n-\t * related controls. We clear it first because the request metadata\n-\t * may have been populated if we have dropped the previous frame.\n-\t */\n-\trequest->metadata().clear();\n-\tfillRequestMetadata(bayerFrame.controls, request);\n-\n-\t/* Set our state to say the pipeline is active. */\n-\tstate_ = State::Busy;\n-\n-\tunsigned int bayer = unicam_[Unicam::Image].getBufferId(bayerFrame.buffer);\n-\n-\tLOG(RPI, Debug) << \"Signalling prepareIsp:\"\n-\t\t\t<< \" Bayer buffer id: \" << bayer;\n-\n-\tipa::RPi::PrepareParams params;\n-\tparams.buffers.bayer = RPi::MaskBayerData | bayer;\n-\tparams.sensorControls = std::move(bayerFrame.controls);\n-\tparams.requestControls = request->controls();\n-\tparams.ipaContext = request->sequence();\n-\tparams.delayContext = bayerFrame.delayContext;\n-\n-\tif (embeddedBuffer) {\n-\t\tunsigned int embeddedId = unicam_[Unicam::Embedded].getBufferId(embeddedBuffer);\n-\n-\t\tparams.buffers.embedded = RPi::MaskEmbeddedData | embeddedId;\n-\t\tLOG(RPI, Debug) << \"Signalling prepareIsp:\"\n-\t\t\t\t<< \" Embedded buffer id: \" << embeddedId;\n-\t}\n-\n-\tipa_->prepareIsp(params);\n-}\n-\n-bool RPiCameraData::findMatchingBuffers(BayerFrame &bayerFrame, FrameBuffer *&embeddedBuffer)\n-{\n-\tif (bayerQueue_.empty())\n-\t\treturn false;\n-\n-\t/*\n-\t * Find the embedded data buffer with a matching timestamp to pass to\n-\t * the IPA. Any embedded buffers with a timestamp lower than the\n-\t * current bayer buffer will be removed and re-queued to the driver.\n-\t */\n-\tuint64_t ts = bayerQueue_.front().buffer->metadata().timestamp;\n-\tembeddedBuffer = nullptr;\n-\twhile (!embeddedQueue_.empty()) {\n-\t\tFrameBuffer *b = embeddedQueue_.front();\n-\t\tif (b->metadata().timestamp < ts) {\n-\t\t\tembeddedQueue_.pop();\n-\t\t\tunicam_[Unicam::Embedded].returnBuffer(b);\n-\t\t\tLOG(RPI, Debug) << \"Dropping unmatched input frame in stream \"\n-\t\t\t\t\t<< unicam_[Unicam::Embedded].name();\n-\t\t} else if (b->metadata().timestamp == ts) {\n-\t\t\t/* Found a match! */\n-\t\t\tembeddedBuffer = b;\n-\t\t\tembeddedQueue_.pop();\n-\t\t\tbreak;\n-\t\t} else {\n-\t\t\tbreak; /* Only higher timestamps from here. 
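The timestamp matching in findMatchingBuffers() is easiest to see on example queues (timestamps hypothetical):

	/*
	 * Illustration: bayer front timestamp = 1000, embeddedQueue_ holds
	 * { 900, 1000, 1100 }. The 900 buffer is stale and is returned to
	 * the driver; 1000 matches and is consumed with the bayer frame;
	 * 1100 stays queued, as only higher timestamps can follow.
	 */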
*/\n-\t\t}\n-\t}\n-\n-\tif (!embeddedBuffer && sensorMetadata_) {\n-\t\tif (embeddedQueue_.empty()) {\n-\t\t\t/*\n-\t\t\t * If the embedded buffer queue is empty, wait for the next\n-\t\t\t * buffer to arrive - dequeue ordering may send the image\n-\t\t\t * buffer first.\n-\t\t\t */\n-\t\t\tLOG(RPI, Debug) << \"Waiting for next embedded buffer.\";\n-\t\t\treturn false;\n-\t\t}\n-\n-\t\t/* Log if there is no matching embedded data buffer found. */\n-\t\tLOG(RPI, Debug) << \"Returning bayer frame without a matching embedded buffer.\";\n-\t}\n-\n-\tbayerFrame = std::move(bayerQueue_.front());\n-\tbayerQueue_.pop();\n-\n-\treturn true;\n-}\n-\n-REGISTER_PIPELINE_HANDLER(PipelineHandlerRPi)\n-\n-} /* namespace libcamera */\ndiff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp\nnew file mode 100644\nindex 000000000000..a085a06376a8\n--- /dev/null\n+++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp\n@@ -0,0 +1,1001 @@\n+/* SPDX-License-Identifier: LGPL-2.1-or-later */\n+/*\n+ * Copyright (C) 2019-2023, Raspberry Pi Ltd\n+ *\n+ * vc4.cpp - Pipeline handler for VC4 based Raspberry Pi devices\n+ */\n+\n+#include <linux/bcm2835-isp.h>\n+#include <linux/v4l2-controls.h>\n+#include <linux/videodev2.h>\n+\n+#include <libcamera/formats.h>\n+\n+#include \"libcamera/internal/device_enumerator.h\"\n+\n+#include \"dma_heaps.h\"\n+#include \"pipeline_base.h\"\n+#include \"rpi_stream.h\"\n+\n+using namespace std::chrono_literals;\n+\n+namespace libcamera {\n+\n+LOG_DECLARE_CATEGORY(RPI)\n+\n+namespace {\n+\n+enum class Unicam : unsigned int { Image, Embedded };\n+enum class Isp : unsigned int { Input, Output0, Output1, Stats };\n+\n+} /* namespace */\n+\n+class Vc4CameraData final : public RPi::CameraData\n+{\n+public:\n+\tVc4CameraData(PipelineHandler *pipe)\n+\t\t: RPi::CameraData(pipe)\n+\t{\n+\t}\n+\n+\t~Vc4CameraData()\n+\t{\n+\t\tfreeBuffers();\n+\t}\n+\n+\tV4L2VideoDevice::Formats ispFormats() const override\n+\t{\n+\t\treturn isp_[Isp::Output0].dev()->formats();\n+\t}\n+\n+\tV4L2VideoDevice::Formats rawFormats() const override\n+\t{\n+\t\treturn unicam_[Unicam::Image].dev()->formats();\n+\t}\n+\n+\tV4L2VideoDevice *frontendDevice() override\n+\t{\n+\t\treturn unicam_[Unicam::Image].dev();\n+\t}\n+\n+\tvoid platformFreeBuffers() override\n+\t{\n+\t}\n+\n+\tCameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,\n+\t\t\t\t\t\t     std::vector<StreamParams> &outStreams) const override;\n+\n+\tint platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;\n+\n+\tvoid platformStart() override;\n+\tvoid platformStop() override;\n+\n+\tvoid unicamBufferDequeue(FrameBuffer *buffer);\n+\tvoid ispInputDequeue(FrameBuffer *buffer);\n+\tvoid ispOutputDequeue(FrameBuffer *buffer);\n+\n+\tvoid processStatsComplete(const ipa::RPi::BufferIds &buffers);\n+\tvoid prepareIspComplete(const ipa::RPi::BufferIds &buffers);\n+\tvoid setIspControls(const ControlList &controls);\n+\tvoid setCameraTimeout(uint32_t maxFrameLengthMs);\n+\n+\t/* Array of Unicam and ISP device streams and associated buffers/streams. */\n+\tRPi::Device<Unicam, 2> unicam_;\n+\tRPi::Device<Isp, 4> isp_;\n+\n+\t/* DMAHEAP allocation helper. 
*/\n+\tRPi::DmaHeap dmaHeap_;\n+\tSharedFD lsTable_;\n+\n+\tstruct Config {\n+\t\t/*\n+\t\t * The minimum number of internal buffers to be allocated for\n+\t\t * the Unicam Image stream.\n+\t\t */\n+\t\tunsigned int minUnicamBuffers;\n+\t\t/*\n+\t\t * The minimum total (internal + external) buffer count used for\n+\t\t * the Unicam Image stream.\n+\t\t *\n+\t\t * Note that:\n+\t\t * minTotalUnicamBuffers must be >= 1, and\n+\t\t * minTotalUnicamBuffers >= minUnicamBuffers\n+\t\t */\n+\t\tunsigned int minTotalUnicamBuffers;\n+\t};\n+\n+\tConfig config_;\n+\n+private:\n+\tvoid platformIspCrop() override\n+\t{\n+\t\tisp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &ispCrop_);\n+\t}\n+\n+\tint platformConfigure(const V4L2SubdeviceFormat &sensorFormat,\n+\t\t\t      std::optional<BayerFormat::Packing> packing,\n+\t\t\t      std::vector<StreamParams> &rawStreams,\n+\t\t\t      std::vector<StreamParams> &outStreams) override;\n+\tint platformConfigureIpa(ipa::RPi::ConfigParams &params) override;\n+\n+\tint platformInitIpa([[maybe_unused]] ipa::RPi::InitParams &params) override\n+\t{\n+\t\treturn 0;\n+\t}\n+\n+\tstruct BayerFrame {\n+\t\tFrameBuffer *buffer;\n+\t\tControlList controls;\n+\t\tunsigned int delayContext;\n+\t};\n+\n+\tvoid tryRunPipeline() override;\n+\tbool findMatchingBuffers(BayerFrame &bayerFrame, FrameBuffer *&embeddedBuffer);\n+\n+\tstd::queue<BayerFrame> bayerQueue_;\n+\tstd::queue<FrameBuffer *> embeddedQueue_;\n+};\n+\n+class PipelineHandlerVc4 : public RPi::PipelineHandlerBase\n+{\n+public:\n+\tPipelineHandlerVc4(CameraManager *manager)\n+\t\t: RPi::PipelineHandlerBase(manager)\n+\t{\n+\t}\n+\n+\t~PipelineHandlerVc4()\n+\t{\n+\t}\n+\n+\tbool match(DeviceEnumerator *enumerator) override;\n+\n+private:\n+\tVc4CameraData *cameraData(Camera *camera)\n+\t{\n+\t\treturn static_cast<Vc4CameraData *>(camera->_d());\n+\t}\n+\n+\tstd::unique_ptr<RPi::CameraData> allocateCameraData() override\n+\t{\n+\t\treturn std::make_unique<Vc4CameraData>(this);\n+\t}\n+\n+\tint prepareBuffers(Camera *camera) override;\n+\tint platformRegister(std::unique_ptr<RPi::CameraData> &cameraData,\n+\t\t\t     MediaDevice *unicam, MediaDevice *isp) override;\n+};\n+\n+bool PipelineHandlerVc4::match(DeviceEnumerator *enumerator)\n+{\n+\tconstexpr unsigned int numUnicamDevices = 2;\n+\n+\t/*\n+\t * Loop over all Unicam instances, but return as soon as a match is found.\n+\t * This is to ensure we correctly enumerate the camera when an instance\n+\t * of Unicam has registered with the media controller, but has not registered\n+\t * device nodes due to a sensor subdevice failure.\n+\t */\n+\tfor (unsigned int i = 0; i < numUnicamDevices; i++) {\n+\t\tDeviceMatch unicam(\"unicam\");\n+\t\tMediaDevice *unicamDevice = acquireMediaDevice(enumerator, unicam);\n+\n+\t\tif (!unicamDevice) {\n+\t\t\tLOG(RPI, Debug) << \"Unable to acquire a Unicam instance\";\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tDeviceMatch isp(\"bcm2835-isp\");\n+\t\tMediaDevice *ispDevice = acquireMediaDevice(enumerator, isp);\n+\n+\t\tif (!ispDevice) {\n+\t\t\tLOG(RPI, Debug) << \"Unable to acquire ISP instance\";\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\t/*\n+\t\t * The loop below is used to register multiple cameras behind one or more\n+\t\t * video mux devices that are attached to a particular Unicam instance.\n+\t\t * Obviously these cameras cannot be used simultaneously.\n+\t\t */\n+\t\tunsigned int numCameras = 0;\n+\t\tfor (MediaEntity *entity : unicamDevice->entities()) {\n+\t\t\tif (entity->function() != 
MEDIA_ENT_F_CAM_SENSOR)\n+\t\t\t\tcontinue;\n+\n+\t\t\tint ret = RPi::PipelineHandlerBase::registerCamera(unicamDevice, \"unicam-image\",\n+\t\t\t\t\t\t\t\t\t   ispDevice, entity);\n+\t\t\tif (ret)\n+\t\t\t\tLOG(RPI, Error) << \"Failed to register camera \"\n+\t\t\t\t\t\t<< entity->name() << \": \" << ret;\n+\t\t\telse\n+\t\t\t\tnumCameras++;\n+\t\t}\n+\n+\t\tif (numCameras)\n+\t\t\treturn true;\n+\t}\n+\n+\treturn false;\n+}\n+\n+int PipelineHandlerVc4::prepareBuffers(Camera *camera)\n+{\n+\tVc4CameraData *data = cameraData(camera);\n+\tunsigned int numRawBuffers = 0;\n+\tint ret;\n+\n+\tfor (Stream *s : camera->streams()) {\n+\t\tif (BayerFormat::fromPixelFormat(s->configuration().pixelFormat).isValid()) {\n+\t\t\tnumRawBuffers = s->configuration().bufferCount;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\t/* Decide how many internal buffers to allocate. */\n+\tfor (auto const stream : data->streams_) {\n+\t\tunsigned int numBuffers;\n+\t\t/*\n+\t\t * For Unicam, allocate a minimum number of buffers for internal\n+\t\t * use as we want to avoid any frame drops.\n+\t\t */\n+\t\tconst unsigned int minBuffers = data->config_.minTotalUnicamBuffers;\n+\t\tif (stream == &data->unicam_[Unicam::Image]) {\n+\t\t\t/*\n+\t\t\t * If an application has configured a RAW stream, allocate\n+\t\t\t * additional buffers to make up the minimum, but ensure\n+\t\t\t * we have at least minUnicamBuffers of internal buffers\n+\t\t\t * to use to minimise frame drops.\n+\t\t\t */\n+\t\t\tnumBuffers = std::max<int>(data->config_.minUnicamBuffers,\n+\t\t\t\t\t\t   minBuffers - numRawBuffers);\n+\t\t} else if (stream == &data->isp_[Isp::Input]) {\n+\t\t\t/*\n+\t\t\t * ISP input buffers are imported from Unicam, so follow\n+\t\t\t * similar logic as above to count all the RAW buffers\n+\t\t\t * available.\n+\t\t\t */\n+\t\t\tnumBuffers = numRawBuffers +\n+\t\t\t\t     std::max<int>(data->config_.minUnicamBuffers,\n+\t\t\t\t\t\t   minBuffers - numRawBuffers);\n+\n+\t\t} else if (stream == &data->unicam_[Unicam::Embedded]) {\n+\t\t\t/*\n+\t\t\t * Embedded data buffers are (currently) for internal use,\n+\t\t\t * so allocate the minimum required to avoid frame drops.\n+\t\t\t */\n+\t\t\tnumBuffers = minBuffers;\n+\t\t} else {\n+\t\t\t/*\n+\t\t\t * Since the ISP runs synchronous with the IPA and requests,\n+\t\t\t * we only ever need one set of internal buffers. Any buffers\n+\t\t\t * the application wants to hold onto will already be exported\n+\t\t\t * through PipelineHandlerRPi::exportFrameBuffers().\n+\t\t\t */\n+\t\t\tnumBuffers = 1;\n+\t\t}\n+\n+\t\tret = stream->prepareBuffers(numBuffers);\n+\t\tif (ret < 0)\n+\t\t\treturn ret;\n+\t}\n+\n+\t/*\n+\t * Pass the stats and embedded data buffers to the IPA. 
No other\n+\t * buffers need to be passed.\n+\t */\n+\tmapBuffers(camera, data->isp_[Isp::Stats].getBuffers(), RPi::MaskStats);\n+\tif (data->sensorMetadata_)\n+\t\tmapBuffers(camera, data->unicam_[Unicam::Embedded].getBuffers(),\n+\t\t\t   RPi::MaskEmbeddedData);\n+\n+\treturn 0;\n+}\n+\n+int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &cameraData, MediaDevice *unicam, MediaDevice *isp)\n+{\n+\tVc4CameraData *data = static_cast<Vc4CameraData *>(cameraData.get());\n+\n+\tif (!data->dmaHeap_.isValid())\n+\t\treturn -ENOMEM;\n+\n+\tMediaEntity *unicamImage = unicam->getEntityByName(\"unicam-image\");\n+\tMediaEntity *ispOutput0 = isp->getEntityByName(\"bcm2835-isp0-output0\");\n+\tMediaEntity *ispCapture1 = isp->getEntityByName(\"bcm2835-isp0-capture1\");\n+\tMediaEntity *ispCapture2 = isp->getEntityByName(\"bcm2835-isp0-capture2\");\n+\tMediaEntity *ispCapture3 = isp->getEntityByName(\"bcm2835-isp0-capture3\");\n+\n+\tif (!unicamImage || !ispOutput0 || !ispCapture1 || !ispCapture2 || !ispCapture3)\n+\t\treturn -ENOENT;\n+\n+\t/* Locate and open the unicam video streams. */\n+\tdata->unicam_[Unicam::Image] = RPi::Stream(\"Unicam Image\", unicamImage);\n+\n+\t/* An embedded data node will not be present if the sensor does not support it. */\n+\tMediaEntity *unicamEmbedded = unicam->getEntityByName(\"unicam-embedded\");\n+\tif (unicamEmbedded) {\n+\t\tdata->unicam_[Unicam::Embedded] = RPi::Stream(\"Unicam Embedded\", unicamEmbedded);\n+\t\tdata->unicam_[Unicam::Embedded].dev()->bufferReady.connect(data,\n+\t\t\t\t\t\t\t\t\t   &Vc4CameraData::unicamBufferDequeue);\n+\t}\n+\n+\t/* Tag the ISP input stream as an import stream. */\n+\tdata->isp_[Isp::Input] = RPi::Stream(\"ISP Input\", ispOutput0, true);\n+\tdata->isp_[Isp::Output0] = RPi::Stream(\"ISP Output0\", ispCapture1);\n+\tdata->isp_[Isp::Output1] = RPi::Stream(\"ISP Output1\", ispCapture2);\n+\tdata->isp_[Isp::Stats] = RPi::Stream(\"ISP Stats\", ispCapture3);\n+\n+\t/* Wire up all the buffer connections. */\n+\tdata->unicam_[Unicam::Image].dev()->bufferReady.connect(data, &Vc4CameraData::unicamBufferDequeue);\n+\tdata->isp_[Isp::Input].dev()->bufferReady.connect(data, &Vc4CameraData::ispInputDequeue);\n+\tdata->isp_[Isp::Output0].dev()->bufferReady.connect(data, &Vc4CameraData::ispOutputDequeue);\n+\tdata->isp_[Isp::Output1].dev()->bufferReady.connect(data, &Vc4CameraData::ispOutputDequeue);\n+\tdata->isp_[Isp::Stats].dev()->bufferReady.connect(data, &Vc4CameraData::ispOutputDequeue);\n+\n+\tif (data->sensorMetadata_ ^ !!data->unicam_[Unicam::Embedded].dev()) {\n+\t\tLOG(RPI, Warning) << \"Mismatch between Unicam and CamHelper for embedded data usage!\";\n+\t\tdata->sensorMetadata_ = false;\n+\t\tif (data->unicam_[Unicam::Embedded].dev())\n+\t\t\tdata->unicam_[Unicam::Embedded].dev()->bufferReady.disconnect();\n+\t}\n+\n+\t/*\n+\t * Open all Unicam and ISP streams. 
The exception is the embedded data\n+\t * stream, which only gets opened below if the IPA reports that the sensor\n+\t * supports embedded data.\n+\t *\n+\t * The below grouping is just for convenience so that we can easily\n+\t * iterate over all streams in one go.\n+\t */\n+\tdata->streams_.push_back(&data->unicam_[Unicam::Image]);\n+\tif (data->sensorMetadata_)\n+\t\tdata->streams_.push_back(&data->unicam_[Unicam::Embedded]);\n+\n+\tfor (auto &stream : data->isp_)\n+\t\tdata->streams_.push_back(&stream);\n+\n+\tfor (auto stream : data->streams_) {\n+\t\tint ret = stream->dev()->open();\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\tif (!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {\n+\t\tLOG(RPI, Error) << \"Unicam driver does not use the MediaController, please update your kernel!\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Wire up all the IPA connections. */\n+\tdata->ipa_->processStatsComplete.connect(data, &Vc4CameraData::processStatsComplete);\n+\tdata->ipa_->prepareIspComplete.connect(data, &Vc4CameraData::prepareIspComplete);\n+\tdata->ipa_->setIspControls.connect(data, &Vc4CameraData::setIspControls);\n+\tdata->ipa_->setCameraTimeout.connect(data, &Vc4CameraData::setCameraTimeout);\n+\n+\t/*\n+\t * List the available streams an application may request. At present, we\n+\t * do not advertise Unicam Embedded and ISP Statistics streams, as there\n+\t * is no mechanism for the application to request non-image buffer formats.\n+\t */\n+\tstd::set<Stream *> streams;\n+\tstreams.insert(&data->unicam_[Unicam::Image]);\n+\tstreams.insert(&data->isp_[Isp::Output0]);\n+\tstreams.insert(&data->isp_[Isp::Output1]);\n+\n+\t/* Create and register the camera. */\n+\tconst std::string &id = data->sensor_->id();\n+\tstd::shared_ptr<Camera> camera =\n+\t\tCamera::create(std::move(cameraData), id, streams);\n+\tPipelineHandler::registerCamera(std::move(camera));\n+\n+\tLOG(RPI, Info) << \"Registered camera \" << id\n+\t\t       << \" to Unicam device \" << unicam->deviceNode()\n+\t\t       << \" and ISP device \" << isp->deviceNode();\n+\n+\treturn 0;\n+}\n+\n+CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,\n+\t\t\t\t\t\t\t    std::vector<StreamParams> &outStreams) const\n+{\n+\tCameraConfiguration::Status status = CameraConfiguration::Status::Valid;\n+\n+\t/* Can only output 1 RAW stream, or 2 YUV/RGB streams. */\n+\tif (rawStreams.size() > 1 || outStreams.size() > 2) {\n+\t\tLOG(RPI, Error) << \"Invalid number of streams requested\";\n+\t\treturn CameraConfiguration::Status::Invalid;\n+\t}\n+\n+\tif (!rawStreams.empty())\n+\t\trawStreams[0].dev = unicam_[Unicam::Image].dev();\n+\n+\t/*\n+\t * For the two ISP outputs, one stream must be equal to or smaller than the\n+\t * other in all dimensions.\n+\t *\n+\t * Index 0 contains the largest requested resolution.\n+\t */\n+\tfor (unsigned int i = 0; i < outStreams.size(); i++) {\n+\t\tSize size;\n+\n+\t\tsize.width = std::min(outStreams[i].cfg->size.width,\n+\t\t\t\t      outStreams[0].cfg->size.width);\n+\t\tsize.height = std::min(outStreams[i].cfg->size.height,\n+\t\t\t\t       outStreams[0].cfg->size.height);\n+\n+\t\tif (outStreams[i].cfg->size != size) {\n+\t\t\toutStreams[i].cfg->size = size;\n+\t\t\tstatus = CameraConfiguration::Status::Adjusted;\n+\t\t}\n+\n+\t\t/*\n+\t\t * Output 0 must be for the largest resolution. We will\n+\t\t * have that fixed up in the code above.\n+\t\t */\n+\t\toutStreams[i].dev = isp_[i == 0 ? 
Isp::Output0 : Isp::Output1].dev();\n+\t}\n+\n+\treturn status;\n+}\n+\n+int Vc4CameraData::platformPipelineConfigure(const std::unique_ptr<YamlObject> &root)\n+{\n+\tconfig_ = {\n+\t\t.minUnicamBuffers = 2,\n+\t\t.minTotalUnicamBuffers = 4,\n+\t};\n+\n+\tif (!root)\n+\t\treturn 0;\n+\n+\tstd::optional<double> ver = (*root)[\"version\"].get<double>();\n+\tif (!ver || *ver != 1.0) {\n+\t\tLOG(RPI, Error) << \"Unexpected configuration file version reported\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tstd::optional<std::string> target = (*root)[\"target\"].get<std::string>();\n+\tif (!target || *target != \"bcm2835\") {\n+\t\tLOG(RPI, Error) << \"Unexpected target reported: expected \\\"bcm2835\\\", got \"\n+\t\t\t\t<< target.value_or(\"(missing)\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tconst YamlObject &phConfig = (*root)[\"pipeline_handler\"];\n+\tconfig_.minUnicamBuffers =\n+\t\tphConfig[\"min_unicam_buffers\"].get<unsigned int>(config_.minUnicamBuffers);\n+\tconfig_.minTotalUnicamBuffers =\n+\t\tphConfig[\"min_total_unicam_buffers\"].get<unsigned int>(config_.minTotalUnicamBuffers);\n+\n+\tif (config_.minTotalUnicamBuffers < config_.minUnicamBuffers) {\n+\t\tLOG(RPI, Error) << \"Invalid configuration: min_total_unicam_buffers must be >= min_unicam_buffers\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (config_.minTotalUnicamBuffers < 1) {\n+\t\tLOG(RPI, Error) << \"Invalid configuration: min_total_unicam_buffers must be >= 1\";\n+\t\treturn -EINVAL;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int Vc4CameraData::platformConfigure(const V4L2SubdeviceFormat &sensorFormat,\n+\t\t\t\t     std::optional<BayerFormat::Packing> packing,\n+\t\t\t\t     std::vector<StreamParams> &rawStreams,\n+\t\t\t\t     std::vector<StreamParams> &outStreams)\n+{\n+\tint ret;\n+\n+\tif (!packing)\n+\t\tpacking = BayerFormat::Packing::CSI2;\n+\n+\tV4L2VideoDevice *unicam = unicam_[Unicam::Image].dev();\n+\tV4L2DeviceFormat unicamFormat = RPi::PipelineHandlerBase::toV4L2DeviceFormat(unicam, sensorFormat, *packing);\n+\n+\tret = unicam->setFormat(&unicamFormat);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * See which streams are requested, and route the user\n+\t * StreamConfiguration appropriately.\n+\t */\n+\tif (!rawStreams.empty()) {\n+\t\trawStreams[0].cfg->setStream(&unicam_[Unicam::Image]);\n+\t\tunicam_[Unicam::Image].setExternal(true);\n+\t}\n+\n+\tret = isp_[Isp::Input].dev()->setFormat(&unicamFormat);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tLOG(RPI, Info) << \"Sensor: \" << sensor_->id()\n+\t\t       << \" - Selected sensor format: \" << sensorFormat\n+\t\t       << \" - Selected unicam format: \" << unicamFormat;\n+\n+\t/* Use a sensible small default size if no output streams are configured. */\n+\tSize maxSize = outStreams.empty() ? Size(320, 240) : outStreams[0].cfg->size;\n+\tV4L2DeviceFormat format;\n+\n+\tfor (unsigned int i = 0; i < outStreams.size(); i++) {\n+\t\tStreamConfiguration *cfg = outStreams[i].cfg;\n+\n+\t\t/* The largest resolution gets routed to the ISP Output 0 node. */\n+\t\tRPi::Stream *stream = i == 0 ? 
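An illustrative configuration file satisfying the version and target checks in platformPipelineConfigure() above (the values are the built-in defaults):

        version: 1.0
        target: bcm2835
        pipeline_handler:
            min_unicam_buffers: 2
            min_total_unicam_buffers: 4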
&isp_[Isp::Output0] : &isp_[Isp::Output1];\n+\n+\t\tV4L2PixelFormat fourcc = stream->dev()->toV4L2PixelFormat(cfg->pixelFormat);\n+\t\tformat.size = cfg->size;\n+\t\tformat.fourcc = fourcc;\n+\t\tformat.colorSpace = cfg->colorSpace;\n+\n+\t\tLOG(RPI, Debug) << \"Setting \" << stream->name() << \" to \"\n+\t\t\t\t<< format;\n+\n+\t\tret = stream->dev()->setFormat(&format);\n+\t\tif (ret)\n+\t\t\treturn -EINVAL;\n+\n+\t\tif (format.size != cfg->size || format.fourcc != fourcc) {\n+\t\t\tLOG(RPI, Error)\n+\t\t\t\t<< \"Failed to set requested format on \" << stream->name()\n+\t\t\t\t<< \", returned \" << format;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\n+\t\tLOG(RPI, Debug)\n+\t\t\t<< \"Stream \" << stream->name() << \" has color space \"\n+\t\t\t<< ColorSpace::toString(cfg->colorSpace);\n+\n+\t\tcfg->setStream(stream);\n+\t\tstream->setExternal(true);\n+\t}\n+\n+\tispOutputTotal_ = outStreams.size();\n+\n+\t/*\n+\t * If ISP::Output0 stream has not been configured by the application,\n+\t * we must allow the hardware to generate an output so that the data\n+\t * flow in the pipeline handler remains consistent, and we still generate\n+\t * statistics for the IPA to use. So enable the output at a very low\n+\t * resolution for internal use.\n+\t *\n+\t * \\todo Allow the pipeline to work correctly without Output0 and only\n+\t * statistics coming from the hardware.\n+\t */\n+\tif (outStreams.empty()) {\n+\t\tV4L2VideoDevice *dev = isp_[Isp::Output0].dev();\n+\n+\t\tformat = {};\n+\t\tformat.size = maxSize;\n+\t\tformat.fourcc = dev->toV4L2PixelFormat(formats::YUV420);\n+\t\t/* No one asked for output, so the color space doesn't matter. */\n+\t\tformat.colorSpace = ColorSpace::Sycc;\n+\t\tret = dev->setFormat(&format);\n+\t\tif (ret) {\n+\t\t\tLOG(RPI, Error)\n+\t\t\t\t<< \"Failed to set default format on ISP Output0: \"\n+\t\t\t\t<< ret;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\n+\t\tispOutputTotal_++;\n+\n+\t\tLOG(RPI, Debug) << \"Defaulting ISP Output0 format to \"\n+\t\t\t\t<< format;\n+\t}\n+\n+\t/*\n+\t * If ISP::Output1 stream has not been requested by the application, we\n+\t * set it up for internal use now. This second stream will be used for\n+\t * fast colour denoise, and must be a quarter resolution of the ISP::Output0\n+\t * stream. However, also limit the maximum size to 1200 pixels in the\n+\t * larger dimension, just to avoid being wasteful with buffer allocations\n+\t * and memory bandwidth.\n+\t *\n+\t * \\todo If Output 1 format is not YUV420, Output 1 ought to be disabled as\n+\t * colour denoise will not run.\n+\t */\n+\tif (outStreams.size() == 1) {\n+\t\tV4L2VideoDevice *dev = isp_[Isp::Output1].dev();\n+\n+\t\tV4L2DeviceFormat output1Format;\n+\t\tconstexpr Size maxDimensions(1200, 1200);\n+\t\tconst Size limit = maxDimensions.boundedToAspectRatio(format.size);\n+\n+\t\toutput1Format.size = (format.size / 2).boundedTo(limit).alignedDownTo(2, 2);\n+\t\toutput1Format.colorSpace = format.colorSpace;\n+\t\toutput1Format.fourcc = dev->toV4L2PixelFormat(formats::YUV420);\n+\n+\t\tLOG(RPI, Debug) << \"Setting ISP Output1 (internal) to \"\n+\t\t\t\t<< output1Format;\n+\n+\t\tret = dev->setFormat(&output1Format);\n+\t\tif (ret) {\n+\t\t\tLOG(RPI, Error) << \"Failed to set format on ISP Output1: \"\n+\t\t\t\t\t<< ret;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\n+\t\tispOutputTotal_++;\n+\t}\n+\n+\t/* ISP statistics output format. 
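The ISP Output1 sizing above reduces to simple arithmetic; a worked example with a hypothetical 1920x1080 Output0:

	/*
	 * Illustration: half size = 960x540; maxDimensions(1200, 1200)
	 * bounded to the 16:9 aspect ratio gives a limit of 1200x675, and
	 * 960x540 already fits within it, so Output1 is set to 960x540.
	 * For larger modes (e.g. a 4056x3040 Output0) the 1200-pixel limit
	 * engages before the result is aligned down to even dimensions.
	 */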
*/\n+\tformat = {};\n+\tformat.fourcc = V4L2PixelFormat(V4L2_META_FMT_BCM2835_ISP_STATS);\n+\tret = isp_[Isp::Stats].dev()->setFormat(&format);\n+\tif (ret) {\n+\t\tLOG(RPI, Error) << \"Failed to set format on ISP stats stream: \"\n+\t\t\t\t<< format;\n+\t\treturn ret;\n+\t}\n+\n+\tispOutputTotal_++;\n+\n+\t/*\n+\t * Configure the Unicam embedded data output format only if the sensor\n+\t * supports it.\n+\t */\n+\tif (sensorMetadata_) {\n+\t\tV4L2SubdeviceFormat embeddedFormat;\n+\n+\t\tsensor_->device()->getFormat(1, &embeddedFormat);\n+\t\tformat = {};\n+\t\tformat.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);\n+\t\tformat.planes[0].size = embeddedFormat.size.width * embeddedFormat.size.height;\n+\n+\t\tLOG(RPI, Debug) << \"Setting embedded data format \" << format.toString();\n+\t\tret = unicam_[Unicam::Embedded].dev()->setFormat(&format);\n+\t\tif (ret) {\n+\t\t\tLOG(RPI, Error) << \"Failed to set format on Unicam embedded: \"\n+\t\t\t\t\t<< format;\n+\t\t\treturn ret;\n+\t\t}\n+\t}\n+\n+\t/* Figure out the smallest selection the ISP will allow. */\n+\tRectangle testCrop(0, 0, 1, 1);\n+\tisp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &testCrop);\n+\tispMinCropSize_ = testCrop.size();\n+\n+\t/* Adjust aspect ratio by providing crops on the input image. */\n+\tSize size = unicamFormat.size.boundedToAspectRatio(maxSize);\n+\tispCrop_ = size.centeredTo(Rectangle(unicamFormat.size).center());\n+\n+\tplatformIspCrop();\n+\n+\treturn 0;\n+}\n+\n+int Vc4CameraData::platformConfigureIpa(ipa::RPi::ConfigParams &params)\n+{\n+\tparams.ispControls = isp_[Isp::Input].dev()->controls();\n+\n+\t/* Allocate the lens shading table via dmaHeap and pass to the IPA. */\n+\tif (!lsTable_.isValid()) {\n+\t\tlsTable_ = SharedFD(dmaHeap_.alloc(\"ls_grid\", ipa::RPi::MaxLsGridSize));\n+\t\tif (!lsTable_.isValid())\n+\t\t\treturn -ENOMEM;\n+\n+\t\t/* Allow the IPA to mmap the LS table via the file descriptor. */\n+\t\t/*\n+\t\t * \\todo Investigate if mapping the lens shading table buffer\n+\t\t * could be handled with mapBuffers().\n+\t\t */\n+\t\tparams.lsTableHandle = lsTable_;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+void Vc4CameraData::platformStart()\n+{\n+}\n+\n+void Vc4CameraData::platformStop()\n+{\n+\tbayerQueue_ = {};\n+\tembeddedQueue_ = {};\n+}\n+\n+void Vc4CameraData::unicamBufferDequeue(FrameBuffer *buffer)\n+{\n+\tRPi::Stream *stream = nullptr;\n+\tunsigned int index;\n+\n+\tif (!isRunning())\n+\t\treturn;\n+\n+\tfor (RPi::Stream &s : unicam_) {\n+\t\tindex = s.getBufferId(buffer);\n+\t\tif (index) {\n+\t\t\tstream = &s;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\t/* The buffer must belong to one of our streams. 
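The ispCrop_ centring at the end of platformConfigure() keeps the output aspect ratio by cropping the unicam image; with hypothetical sizes:

	/*
	 * Illustration: unicamFormat = 1920x1440 and a largest output of
	 * 1280x720 (16:9). boundedToAspectRatio() yields 1920x1080, which
	 * centeredTo() places at (0, 180), so ispCrop_ = (0, 180)/1920x1080.
	 */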
*/\n+\tASSERT(stream);\n+\n+\tLOG(RPI, Debug) << \"Stream \" << stream->name() << \" buffer dequeue\"\n+\t\t\t<< \", buffer id \" << index\n+\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n+\n+\tif (stream == &unicam_[Unicam::Image]) {\n+\t\t/*\n+\t\t * Look up the sensor controls used for this frame sequence from\n+\t\t * DelayedControls and queue them along with the frame buffer.\n+\t\t */\n+\t\tauto [ctrl, delayContext] = delayedCtrls_->get(buffer->metadata().sequence);\n+\t\t/*\n+\t\t * Add the frame timestamp to the ControlList for the IPA to use,\n+\t\t * as it does not receive the FrameBuffer object.\n+\t\t */\n+\t\tctrl.set(controls::SensorTimestamp, buffer->metadata().timestamp);\n+\t\tbayerQueue_.push({ buffer, std::move(ctrl), delayContext });\n+\t} else {\n+\t\tembeddedQueue_.push(buffer);\n+\t}\n+\n+\thandleState();\n+}\n+\n+void Vc4CameraData::ispInputDequeue(FrameBuffer *buffer)\n+{\n+\tif (!isRunning())\n+\t\treturn;\n+\n+\tLOG(RPI, Debug) << \"Stream ISP Input buffer complete\"\n+\t\t\t<< \", buffer id \" << unicam_[Unicam::Image].getBufferId(buffer)\n+\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n+\n+\t/* The ISP input buffer gets re-queued into Unicam. */\n+\thandleStreamBuffer(buffer, &unicam_[Unicam::Image]);\n+\thandleState();\n+}\n+\n+void Vc4CameraData::ispOutputDequeue(FrameBuffer *buffer)\n+{\n+\tRPi::Stream *stream = nullptr;\n+\tunsigned int index;\n+\n+\tif (!isRunning())\n+\t\treturn;\n+\n+\tfor (RPi::Stream &s : isp_) {\n+\t\tindex = s.getBufferId(buffer);\n+\t\tif (index) {\n+\t\t\tstream = &s;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\t/* The buffer must belong to one of our ISP output streams. */\n+\tASSERT(stream);\n+\n+\tLOG(RPI, Debug) << \"Stream \" << stream->name() << \" buffer complete\"\n+\t\t\t<< \", buffer id \" << index\n+\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n+\n+\t/*\n+\t * The ISP statistics buffer must not be re-queued or sent back to the\n+\t * application until the IPA signals that it has finished with it.\n+\t */\n+\tif (stream == &isp_[Isp::Stats]) {\n+\t\tipa::RPi::ProcessParams params;\n+\t\tparams.buffers.stats = index | RPi::MaskStats;\n+\t\tparams.ipaContext = requestQueue_.front()->sequence();\n+\t\tipa_->processStats(params);\n+\t} else {\n+\t\t/* Any other ISP output can be handed back to the application now. */\n+\t\thandleStreamBuffer(buffer, stream);\n+\t}\n+\n+\t/*\n+\t * Increment the number of ISP outputs generated.\n+\t * This is needed to track dropped frames.\n+\t */\n+\tispOutputCount_++;\n+\n+\thandleState();\n+}\n+\n+void Vc4CameraData::processStatsComplete(const ipa::RPi::BufferIds &buffers)\n+{\n+\tif (!isRunning())\n+\t\treturn;\n+\n+\tFrameBuffer *buffer = isp_[Isp::Stats].getBuffers().at(buffers.stats & RPi::MaskID);\n+\n+\thandleStreamBuffer(buffer, &isp_[Isp::Stats]);\n+\n+\t/* The last thing to do is to fill in the request metadata. */\n+\tRequest *request = requestQueue_.front();\n+\tControlList metadata(controls::controls);\n+\n+\tipa_->reportMetadata(request->sequence(), &metadata);\n+\trequest->metadata().merge(metadata);\n+\n+\t/*\n+\t * Inform the sensor of the latest colour gains if it has the\n+\t * V4L2_CID_NOTIFY_GAINS control (which means notifyGainsUnity_ is set).\n+\t */\n+\tconst auto &colourGains = metadata.get(libcamera::controls::ColourGains);\n+\tif (notifyGainsUnity_ && colourGains) {\n+\t\t/* The control wants linear gains in the order B, Gb, Gr, R. 
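ColourGains reports { red, blue }, hence the swapped indexing below. For example (illustrative values), with notifyGainsUnity_ = 1000 and ColourGains = { 1.5, 2.0 }, the array becomes { 2000, 1000, 1000, 1500 }.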
*/\n+\t\tControlList ctrls(sensor_->controls());\n+\t\tstd::array<int32_t, 4> gains{\n+\t\t\tstatic_cast<int32_t>((*colourGains)[1] * *notifyGainsUnity_),\n+\t\t\t*notifyGainsUnity_,\n+\t\t\t*notifyGainsUnity_,\n+\t\t\tstatic_cast<int32_t>((*colourGains)[0] * *notifyGainsUnity_)\n+\t\t};\n+\t\tctrls.set(V4L2_CID_NOTIFY_GAINS, Span<const int32_t>{ gains });\n+\n+\t\tsensor_->setControls(&ctrls);\n+\t}\n+\n+\tstate_ = State::IpaComplete;\n+\thandleState();\n+}\n+\n+void Vc4CameraData::prepareIspComplete(const ipa::RPi::BufferIds &buffers)\n+{\n+\tunsigned int embeddedId = buffers.embedded & RPi::MaskID;\n+\tunsigned int bayer = buffers.bayer & RPi::MaskID;\n+\tFrameBuffer *buffer;\n+\n+\tif (!isRunning())\n+\t\treturn;\n+\n+\tbuffer = unicam_[Unicam::Image].getBuffers().at(bayer);\n+\tLOG(RPI, Debug) << \"Input re-queue to ISP, buffer id \" << bayer\n+\t\t\t<< \", timestamp: \" << buffer->metadata().timestamp;\n+\n+\tisp_[Isp::Input].queueBuffer(buffer);\n+\tispOutputCount_ = 0;\n+\n+\tif (sensorMetadata_ && embeddedId) {\n+\t\tbuffer = unicam_[Unicam::Embedded].getBuffers().at(embeddedId);\n+\t\thandleStreamBuffer(buffer, &unicam_[Unicam::Embedded]);\n+\t}\n+\n+\thandleState();\n+}\n+\n+void Vc4CameraData::setIspControls(const ControlList &controls)\n+{\n+\tControlList ctrls = controls;\n+\n+\tif (ctrls.contains(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING)) {\n+\t\tControlValue &value =\n+\t\t\tconst_cast<ControlValue &>(ctrls.get(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING));\n+\t\tSpan<uint8_t> s = value.data();\n+\t\tbcm2835_isp_lens_shading *ls =\n+\t\t\treinterpret_cast<bcm2835_isp_lens_shading *>(s.data());\n+\t\tls->dmabuf = lsTable_.get();\n+\t}\n+\n+\tisp_[Isp::Input].dev()->setControls(&ctrls);\n+\thandleState();\n+}\n+\n+void Vc4CameraData::setCameraTimeout(uint32_t maxFrameLengthMs)\n+{\n+\t/*\n+\t * Set the dequeue timeout to 5x the maximum frame length advertised\n+\t * by the IPA over a number of frames, with a minimum timeout of 1s.\n+\t * For example, a maximum frame length of 33ms gives 165ms, which is\n+\t * then raised to the 1s minimum.\n+\t */\n+\tutils::Duration timeout =\n+\t\tstd::max<utils::Duration>(1s, 5 * maxFrameLengthMs * 1ms);\n+\n+\tLOG(RPI, Debug) << \"Setting Unicam timeout to \" << timeout;\n+\tunicam_[Unicam::Image].dev()->setDequeueTimeout(timeout);\n+}\n+\n+void Vc4CameraData::tryRunPipeline()\n+{\n+\tFrameBuffer *embeddedBuffer;\n+\tBayerFrame bayerFrame;\n+\n+\t/* If any of our request or buffer queues are empty, we cannot proceed. */\n+\tif (state_ != State::Idle || requestQueue_.empty() ||\n+\t    bayerQueue_.empty() || (embeddedQueue_.empty() && sensorMetadata_))\n+\t\treturn;\n+\n+\tif (!findMatchingBuffers(bayerFrame, embeddedBuffer))\n+\t\treturn;\n+\n+\t/* Take the first request from the queue and start processing it. */\n+\tRequest *request = requestQueue_.front();\n+\n+\t/* See if a new ScalerCrop value needs to be applied. */\n+\tcalculateScalerCrop(request->controls());\n+\n+\t/*\n+\t * Clear the request metadata and fill it with some initial non-IPA\n+\t * related controls. We clear it first because the request metadata\n+\t * may have been populated if we have dropped the previous frame.\n+\t */\n+\trequest->metadata().clear();\n+\tfillRequestMetadata(bayerFrame.controls, request);\n+\n+\t/* Set our state to say the pipeline is active. 
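While the state is not Idle, the check at the top of tryRunPipeline() prevents another frame from being submitted to the IPA.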
*/\n+\tstate_ = State::Busy;\n+\n+\tunsigned int bayer = unicam_[Unicam::Image].getBufferId(bayerFrame.buffer);\n+\n+\tLOG(RPI, Debug) << \"Signalling prepareIsp:\"\n+\t\t\t<< \" Bayer buffer id: \" << bayer;\n+\n+\tipa::RPi::PrepareParams params;\n+\tparams.buffers.bayer = RPi::MaskBayerData | bayer;\n+\tparams.sensorControls = std::move(bayerFrame.controls);\n+\tparams.requestControls = request->controls();\n+\tparams.ipaContext = request->sequence();\n+\tparams.delayContext = bayerFrame.delayContext;\n+\n+\tif (embeddedBuffer) {\n+\t\tunsigned int embeddedId = unicam_[Unicam::Embedded].getBufferId(embeddedBuffer);\n+\n+\t\tparams.buffers.embedded = RPi::MaskEmbeddedData | embeddedId;\n+\t\tLOG(RPI, Debug) << \"Signalling prepareIsp:\"\n+\t\t\t\t<< \" Embedded buffer id: \" << embeddedId;\n+\t}\n+\n+\tipa_->prepareIsp(params);\n+}\n+\n+bool Vc4CameraData::findMatchingBuffers(BayerFrame &bayerFrame, FrameBuffer *&embeddedBuffer)\n+{\n+\tif (bayerQueue_.empty())\n+\t\treturn false;\n+\n+\t/*\n+\t * Find the embedded data buffer with a matching timestamp to pass to\n+\t * the IPA. Any embedded buffers with a timestamp lower than that of\n+\t * the current bayer buffer will be removed and re-queued to the driver.\n+\t */\n+\tuint64_t ts = bayerQueue_.front().buffer->metadata().timestamp;\n+\tembeddedBuffer = nullptr;\n+\twhile (!embeddedQueue_.empty()) {\n+\t\tFrameBuffer *b = embeddedQueue_.front();\n+\t\tif (b->metadata().timestamp < ts) {\n+\t\t\tembeddedQueue_.pop();\n+\t\t\tunicam_[Unicam::Embedded].returnBuffer(b);\n+\t\t\tLOG(RPI, Debug) << \"Dropping unmatched input frame in stream \"\n+\t\t\t\t\t<< unicam_[Unicam::Embedded].name();\n+\t\t} else if (b->metadata().timestamp == ts) {\n+\t\t\t/* Found a match! */\n+\t\t\tembeddedBuffer = b;\n+\t\t\tembeddedQueue_.pop();\n+\t\t\tbreak;\n+\t\t} else {\n+\t\t\tbreak; /* Only higher timestamps from here. */\n+\t\t}\n+\t}\n+\n+\tif (!embeddedBuffer && sensorMetadata_) {\n+\t\tif (embeddedQueue_.empty()) {\n+\t\t\t/*\n+\t\t\t * If the embedded buffer queue is empty, wait for the next\n+\t\t\t * buffer to arrive - dequeue ordering may send the image\n+\t\t\t * buffer first.\n+\t\t\t */\n+\t\t\tLOG(RPI, Debug) << \"Waiting for next embedded buffer.\";\n+\t\t\treturn false;\n+\t\t}\n+\n+\t\t/* Log that no matching embedded data buffer was found. */\n+\t\tLOG(RPI, Debug) << \"Returning bayer frame without a matching embedded buffer.\";\n+\t}\n+\n+\tbayerFrame = std::move(bayerQueue_.front());\n+\tbayerQueue_.pop();\n+\n+\treturn true;\n+}\n+\n+REGISTER_PIPELINE_HANDLER(PipelineHandlerVc4)\n+\n+} /* namespace libcamera */\n","prefixes":["libcamera-devel","11/13"]}