[{"id":20607,"web_url":"https://patchwork.libcamera.org/comment/20607/","msgid":"<163542705086.2498920.3363270468624530444@Monstersaurus>","date":"2021-10-28T13:17:30","subject":"Re: [libcamera-devel] [PATCH v4 08/10] pipeline: raspberrypi:\n\tConvert the pipeline handler to use media controller","submitter":{"id":4,"url":"https://patchwork.libcamera.org/api/people/4/","name":"Kieran Bingham","email":"kieran.bingham@ideasonboard.com"},"content":"Quoting Naushir Patuck (2021-10-28 09:46:44)\n> Switch the pipeline handler to use the new Unicam media controller based driver.\n> With this change, we directly talk to the sensor device driver to set controls\n> and set/get formats in the pipeline handler.\n> \n> This change requires the accompanying Raspberry Pi linux kernel change at\n> https://github.com/raspberrypi/linux/pull/4645. If this kernel change is not\n> present, the pipeline handler will fail to run with an error message informing\n> the user to update the kernel build.\n> \n> Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> ---\n>  .../pipeline/raspberrypi/raspberrypi.cpp      | 167 ++++++++++++------\n>  1 file changed, 112 insertions(+), 55 deletions(-)\n> \n> diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> index e078b8b9a811..8e3e96a4359f 100644\n> --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> @@ -26,6 +26,7 @@\n>  #include <libcamera/base/utils.h>\n>  \n>  #include <linux/bcm2835-isp.h>\n> +#include <linux/media-bus-format.h>\n>  #include <linux/videodev2.h>\n>  \n>  #include \"libcamera/internal/bayer_format.h\"\n> @@ -48,6 +49,53 @@ LOG_DEFINE_CATEGORY(RPI)\n>  \n>  namespace {\n>  \n> +/* Map of mbus codes to supported sizes reported by the sensor. 
*/\n> +using SensorFormats = std::map<unsigned int, std::vector<Size>>;\n> +\n> +SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor> &sensor)\n> +{\n> +       SensorFormats formats;\n> +\n> +       for (auto const mbusCode : sensor->mbusCodes())\n> +               formats.emplace(mbusCode, sensor->sizes(mbusCode));\n> +\n> +       return formats;\n> +}\n> +\n> +PixelFormat mbusCodeToPixelFormat(unsigned int mbus_code,\n> +                                 BayerFormat::Packing packingReq)\n> +{\n> +       BayerFormat bayer = BayerFormat::fromMbusCode(mbus_code);\n> +\n> +       ASSERT(bayer.isValid());\n> +\n> +       bayer.packing = packingReq;\n> +       PixelFormat pix = bayer.toPixelFormat();\n> +\n> +       /*\n> +        * Not all formats (e.g. 8-bit or 16-bit Bayer formats) can have packed\n> +        * variants. So if the PixelFormat returns as invalid, use the non-packed\n> +        * conversion instead.\n> +        */\n> +       if (!pix.isValid()) {\n> +               bayer.packing = BayerFormat::Packing::None;\n> +               pix = bayer.toPixelFormat();\n> +       }\n> +\n> +       return pix;\n> +}\n> +\n> +V4L2DeviceFormat toV4L2DeviceFormat(const V4L2SubdeviceFormat &format,\n> +                                   BayerFormat::Packing packingReq)\n> +{\n> +       const PixelFormat pix = mbusCodeToPixelFormat(format.mbus_code, packingReq);\n> +       V4L2DeviceFormat deviceFormat;\n> +\n> +       deviceFormat.fourcc = V4L2PixelFormat::fromPixelFormat(pix);\n> +       deviceFormat.size = format.size;\n\nDo we need to warn or log if we get an invalid V4L2PixelFormat at all?\nOr is it checked by the users of this call?\n\nI see the setFormats on the devices would likely fail. Is there enough\ninformation presented to determine that this was the fault path?\n\nSeems minor anyway. 
I think we might not even be able to get an\nunsupported format through here ...\n\n\nAnyway, It's a long haul through this patch, but nothing else is jumping\nout at me ... so\n\n\nReviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>\n\n> +       return deviceFormat;\n> +}\n> +\n>  bool isRaw(PixelFormat &pixFmt)\n>  {\n>         /*\n> @@ -74,11 +122,10 @@ double scoreFormat(double desired, double actual)\n>         return score;\n>  }\n>  \n> -V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n> -                             const Size &req)\n> +V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req)\n>  {\n>         double bestScore = std::numeric_limits<double>::max(), score;\n> -       V4L2DeviceFormat bestMode;\n> +       V4L2SubdeviceFormat bestFormat;\n>  \n>  #define PENALTY_AR             1500.0\n>  #define PENALTY_8BIT           2000.0\n> @@ -88,19 +135,19 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n>  \n>         /* Calculate the closest/best mode from the user requested size. */\n>         for (const auto &iter : formatsMap) {\n> -               V4L2PixelFormat v4l2Format = iter.first;\n> -               const PixelFormatInfo &info = PixelFormatInfo::info(v4l2Format);\n> +               const unsigned int mbusCode = iter.first;\n> +               const PixelFormat format = mbusCodeToPixelFormat(mbusCode,\n> +                                                                BayerFormat::Packing::None);\n> +               const PixelFormatInfo &info = PixelFormatInfo::info(format);\n>  \n> -               for (const SizeRange &sz : iter.second) {\n> -                       double modeWidth = sz.contains(req) ? req.width : sz.max.width;\n> -                       double modeHeight = sz.contains(req) ? 
req.height : sz.max.height;\n> +               for (const Size &size : iter.second) {\n>                         double reqAr = static_cast<double>(req.width) / req.height;\n> -                       double modeAr = modeWidth / modeHeight;\n> +                       double fmtAr = size.width / size.height;\n>  \n>                         /* Score the dimensions for closeness. */\n> -                       score = scoreFormat(req.width, modeWidth);\n> -                       score += scoreFormat(req.height, modeHeight);\n> -                       score += PENALTY_AR * scoreFormat(reqAr, modeAr);\n> +                       score = scoreFormat(req.width, size.width);\n> +                       score += scoreFormat(req.height, size.height);\n> +                       score += PENALTY_AR * scoreFormat(reqAr, fmtAr);\n>  \n>                         /* Add any penalties... this is not an exact science! */\n>                         if (!info.packed)\n> @@ -115,18 +162,18 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n>  \n>                         if (score <= bestScore) {\n>                                 bestScore = score;\n> -                               bestMode.fourcc = v4l2Format;\n> -                               bestMode.size = Size(modeWidth, modeHeight);\n> +                               bestFormat.mbus_code = mbusCode;\n> +                               bestFormat.size = size;\n>                         }\n>  \n> -                       LOG(RPI, Info) << \"Mode: \" << modeWidth << \"x\" << modeHeight\n> -                                      << \" fmt \" << v4l2Format.toString()\n> +                       LOG(RPI, Info) << \"Format: \" << size.toString()\n> +                                      << \" fmt \" << format.toString()\n>                                        << \" Score: \" << score\n>                                        << \" (best \" << bestScore << \")\";\n>                 }\n>         }\n>  \n> -       return 
bestMode;\n> +       return bestFormat;\n>  }\n>  \n>  enum class Unicam : unsigned int { Image, Embedded };\n> @@ -170,6 +217,7 @@ public:\n>         std::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;\n>  \n>         std::unique_ptr<CameraSensor> sensor_;\n> +       SensorFormats sensorFormats_;\n>         /* Array of Unicam and ISP device streams and associated buffers/streams. */\n>         RPi::Device<Unicam, 2> unicam_;\n>         RPi::Device<Isp, 4> isp_;\n> @@ -352,9 +400,10 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>                          * Calculate the best sensor mode we can use based on\n>                          * the user request.\n>                          */\n> -                       V4L2VideoDevice::Formats fmts = data_->unicam_[Unicam::Image].dev()->formats();\n> -                       V4L2DeviceFormat sensorFormat = findBestMode(fmts, cfg.size);\n> -                       int ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&sensorFormat);\n> +                       V4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size);\n> +                       V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat,\n> +                                                                          BayerFormat::Packing::CSI2);\n> +                       int ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&unicamFormat);\n>                         if (ret)\n>                                 return Invalid;\n>  \n> @@ -366,7 +415,7 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>                          * fetch the \"native\" (i.e. 
untransformed) Bayer order,\n>                          * because the sensor may currently be flipped!\n>                          */\n> -                       V4L2PixelFormat fourcc = sensorFormat.fourcc;\n> +                       V4L2PixelFormat fourcc = unicamFormat.fourcc;\n>                         if (data_->flipsAlterBayerOrder_) {\n>                                 BayerFormat bayer = BayerFormat::fromV4L2PixelFormat(fourcc);\n>                                 bayer.order = data_->nativeBayerOrder_;\n> @@ -375,15 +424,15 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>                         }\n>  \n>                         PixelFormat sensorPixFormat = fourcc.toPixelFormat();\n> -                       if (cfg.size != sensorFormat.size ||\n> +                       if (cfg.size != unicamFormat.size ||\n>                             cfg.pixelFormat != sensorPixFormat) {\n> -                               cfg.size = sensorFormat.size;\n> +                               cfg.size = unicamFormat.size;\n>                                 cfg.pixelFormat = sensorPixFormat;\n>                                 status = Adjusted;\n>                         }\n>  \n> -                       cfg.stride = sensorFormat.planes[0].bpl;\n> -                       cfg.frameSize = sensorFormat.planes[0].size;\n> +                       cfg.stride = unicamFormat.planes[0].bpl;\n> +                       cfg.frameSize = unicamFormat.planes[0].size;\n>  \n>                         rawCount++;\n>                 } else {\n> @@ -472,7 +521,7 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,\n>  {\n>         RPiCameraData *data = cameraData(camera);\n>         CameraConfiguration *config = new RPiCameraConfiguration(data);\n> -       V4L2DeviceFormat sensorFormat;\n> +       V4L2SubdeviceFormat sensorFormat;\n>         unsigned int bufferCount;\n>         PixelFormat pixelFormat;\n>         V4L2VideoDevice::Formats fmts;\n> @@ 
-487,9 +536,9 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,\n>                 switch (role) {\n>                 case StreamRole::Raw:\n>                         size = data->sensor_->resolution();\n> -                       fmts = data->unicam_[Unicam::Image].dev()->formats();\n> -                       sensorFormat = findBestMode(fmts, size);\n> -                       pixelFormat = sensorFormat.fourcc.toPixelFormat();\n> +                       sensorFormat = findBestFormat(data->sensorFormats_, size);\n> +                       pixelFormat = mbusCodeToPixelFormat(sensorFormat.mbus_code,\n> +                                                           BayerFormat::Packing::CSI2);\n>                         ASSERT(pixelFormat.isValid());\n>                         bufferCount = 2;\n>                         rawCount++;\n> @@ -572,6 +621,7 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>         for (auto const stream : data->streams_)\n>                 stream->reset();\n>  \n> +       BayerFormat::Packing packing = BayerFormat::Packing::CSI2;\n>         Size maxSize, sensorSize;\n>         unsigned int maxIndex = 0;\n>         bool rawStream = false;\n> @@ -590,6 +640,8 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>                          */\n>                         sensorSize = cfg.size;\n>                         rawStream = true;\n> +                       /* Check if the user has explicitly set an unpacked format. 
*/\n> +                       packing = BayerFormat::fromPixelFormat(cfg.pixelFormat).packing;\n>                 } else {\n>                         if (cfg.size > maxSize) {\n>                                 maxSize = config->at(i).size;\n> @@ -615,24 +667,21 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>         }\n>  \n>         /* First calculate the best sensor mode we can use based on the user request. */\n> -       V4L2VideoDevice::Formats fmts = data->unicam_[Unicam::Image].dev()->formats();\n> -       V4L2DeviceFormat sensorFormat = findBestMode(fmts, rawStream ? sensorSize : maxSize);\n> -\n> -       ret = data->unicam_[Unicam::Image].dev()->setFormat(&sensorFormat);\n> +       V4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize);\n> +       ret = data->sensor_->setFormat(&sensorFormat);\n>         if (ret)\n>                 return ret;\n>  \n> -       /*\n> -        * The control ranges associated with the sensor may need updating\n> -        * after a format change.\n> -        * \\todo Use the CameraSensor::setFormat API instead.\n> -        */\n> -       data->sensor_->updateControlInfo();\n> +       V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat, packing);\n> +       ret = data->unicam_[Unicam::Image].dev()->setFormat(&unicamFormat);\n> +       if (ret)\n> +               return ret;\n>  \n>         LOG(RPI, Info) << \"Sensor: \" << camera->id()\n> -                      << \" - Selected mode: \" << sensorFormat.toString();\n> +                      << \" - Selected sensor format: \" << sensorFormat.toString()\n> +                      << \" - Selected unicam format: \" << unicamFormat.toString();\n>  \n> -       ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat);\n> +       ret = data->isp_[Isp::Input].dev()->setFormat(&unicamFormat);\n>         if (ret)\n>                 return ret;\n>  \n> @@ -754,8 +803,8 @@ int 
PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>         data->ispMinCropSize_ = testCrop.size();\n>  \n>         /* Adjust aspect ratio by providing crops on the input image. */\n> -       Size size = sensorFormat.size.boundedToAspectRatio(maxSize);\n> -       Rectangle crop = size.centeredTo(Rectangle(sensorFormat.size).center());\n> +       Size size = unicamFormat.size.boundedToAspectRatio(maxSize);\n> +       Rectangle crop = size.centeredTo(Rectangle(unicamFormat.size).center());\n>         data->ispCrop_ = crop;\n>  \n>         data->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &crop);\n> @@ -769,8 +818,11 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>          * supports it.\n>          */\n>         if (data->sensorMetadata_) {\n> -               format = {};\n> +               V4L2SubdeviceFormat embeddedFormat;\n> +\n> +               data->sensor_->device()->getFormat(1, &embeddedFormat);\n>                 format.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);\n> +               format.planes[0].size = embeddedFormat.size.width * embeddedFormat.size.height;\n>  \n>                 LOG(RPI, Debug) << \"Setting embedded data format.\";\n>                 ret = data->unicam_[Unicam::Embedded].dev()->setFormat(&format);\n> @@ -1000,6 +1052,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>         if (data->sensor_->init())\n>                 return false;\n>  \n> +       data->sensorFormats_ = populateSensorFormats(data->sensor_);\n> +\n>         ipa::RPi::SensorConfig sensorConfig;\n>         if (data->loadIPA(&sensorConfig)) {\n>                 LOG(RPI, Error) << \"Failed to load a suitable IPA library\";\n> @@ -1026,6 +1080,11 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>                         return false;\n>         }\n>  \n> +       if (!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {\n> +               
LOG(RPI, Error) << \"Unicam driver does not use the MediaController, please update your kernel!\";\n> +               return false;\n> +       }\n> +\n>         /*\n>          * Setup our delayed control writer with the sensor default\n>          * gain and exposure delays. Mark VBLANK for priority write.\n> @@ -1035,7 +1094,7 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>                 { V4L2_CID_EXPOSURE, { sensorConfig.exposureDelay, false } },\n>                 { V4L2_CID_VBLANK, { sensorConfig.vblankDelay, true } }\n>         };\n> -       data->delayedCtrls_ = std::make_unique<DelayedControls>(data->unicam_[Unicam::Image].dev(), params);\n> +       data->delayedCtrls_ = std::make_unique<DelayedControls>(data->sensor_->device(), params);\n>         data->sensorMetadata_ = sensorConfig.sensorMetadata;\n>  \n>         /* Register the controls that the Raspberry Pi IPA can handle. */\n> @@ -1062,15 +1121,14 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>          * As part of answering the final question, we reset the camera to\n>          * no transform at all.\n>          */\n> -\n> -       V4L2VideoDevice *dev = data->unicam_[Unicam::Image].dev();\n> -       const struct v4l2_query_ext_ctrl *hflipCtrl = dev->controlInfo(V4L2_CID_HFLIP);\n> +       const V4L2Subdevice *sensor = data->sensor_->device();\n> +       const struct v4l2_query_ext_ctrl *hflipCtrl = sensor->controlInfo(V4L2_CID_HFLIP);\n>         if (hflipCtrl) {\n>                 /* We assume it will support vflips too... 
*/\n>                 data->supportsFlips_ = true;\n>                 data->flipsAlterBayerOrder_ = hflipCtrl->flags & V4L2_CTRL_FLAG_MODIFY_LAYOUT;\n>  \n> -               ControlList ctrls(dev->controls());\n> +               ControlList ctrls(data->sensor_->controls());\n>                 ctrls.set(V4L2_CID_HFLIP, 0);\n>                 ctrls.set(V4L2_CID_VFLIP, 0);\n>                 data->setSensorControls(ctrls);\n> @@ -1078,9 +1136,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>  \n>         /* Look for a valid Bayer format. */\n>         BayerFormat bayerFormat;\n> -       for (const auto &iter : dev->formats()) {\n> -               V4L2PixelFormat v4l2Format = iter.first;\n> -               bayerFormat = BayerFormat::fromV4L2PixelFormat(v4l2Format);\n> +       for (const auto &iter : data->sensorFormats_) {\n> +               bayerFormat = BayerFormat::fromMbusCode(iter.first);\n>                 if (bayerFormat.isValid())\n>                         break;\n>         }\n> @@ -1263,7 +1320,7 @@ int RPiCameraData::configureIPA(const CameraConfiguration *config)\n>                 }\n>         }\n>  \n> -       entityControls.emplace(0, unicam_[Unicam::Image].dev()->controls());\n> +       entityControls.emplace(0, sensor_->controls());\n>         entityControls.emplace(1, isp_[Isp::Input].dev()->controls());\n>  \n>         /* Always send the user transform to the IPA. 
*/\n> @@ -1387,10 +1444,10 @@ void RPiCameraData::setSensorControls(ControlList &controls)\n>                 ControlList vblank_ctrl;\n>  \n>                 vblank_ctrl.set(V4L2_CID_VBLANK, controls.get(V4L2_CID_VBLANK));\n> -               unicam_[Unicam::Image].dev()->setControls(&vblank_ctrl);\n> +               sensor_->setControls(&vblank_ctrl);\n>         }\n>  \n> -       unicam_[Unicam::Image].dev()->setControls(&controls);\n> +       sensor_->setControls(&controls);\n>  }\n>  \n>  void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer)\n> -- \n> 2.25.1\n>"},{"id":20626,"web_url":"https://patchwork.libcamera.org/comment/20626/","msgid":"<20211029084941.de7zc2ulmd4as4rt@uno.localdomain>","date":"2021-10-29T08:49:41","subject":"Re: 
[libcamera-devel] [PATCH v4 08/10] pipeline: raspberrypi:\n\tConvert the pipeline handler to use media controller","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"Hi Naush,\n   thanks for the update!\n\nOn Thu, Oct 28, 2021 at 09:46:44AM +0100, Naushir Patuck wrote:\n> Switch the pipeline handler to use the new Unicam media controller based driver.\n> With this change, we directly talk to the sensor device driver to set controls\n> and set/get formats in the pipeline handler.\n>\n> This change requires the accompanying Raspberry Pi linux kernel change at\n> https://github.com/raspberrypi/linux/pull/4645. If this kernel change is not\n> present, the pipeline handler will fail to run with an error message informing\n> the user to update the kernel build.\n>\n> Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> ---\n>  .../pipeline/raspberrypi/raspberrypi.cpp      | 167 ++++++++++++------\n>  1 file changed, 112 insertions(+), 55 deletions(-)\n>\n> diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> index e078b8b9a811..8e3e96a4359f 100644\n> --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> @@ -26,6 +26,7 @@\n>  #include <libcamera/base/utils.h>\n>\n>  #include <linux/bcm2835-isp.h>\n> +#include <linux/media-bus-format.h>\n>  #include <linux/videodev2.h>\n>\n>  #include \"libcamera/internal/bayer_format.h\"\n> @@ -48,6 +49,53 @@ LOG_DEFINE_CATEGORY(RPI)\n>\n>  namespace {\n>\n> +/* Map of mbus codes to supported sizes reported by the sensor. 
*/\n> +using SensorFormats = std::map<unsigned int, std::vector<Size>>;\n> +\n> +SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor> &sensor)\n> +{\n> +\tSensorFormats formats;\n> +\n> +\tfor (auto const mbusCode : sensor->mbusCodes())\n> +\t\tformats.emplace(mbusCode, sensor->sizes(mbusCode));\n> +\n> +\treturn formats;\n> +}\n\nWe have V4L2Subdevice::Formats defined as\n        using Formats = std::map<unsigned int, std::vector<SizeRange>>;\n\nbut we don't expose the v4l2 subdevice formats from CameraSensor, and\nI think it still makes sense as CameraSensor::sizes() filters and sorts\nsizes instead of returning the raw subdevice SizeRange.\n\nSo I think defining this part makes sense here even if it's a pattern\nwhich I assume is repeated in many pipeline handlers.\n\n> +\n> +PixelFormat mbusCodeToPixelFormat(unsigned int mbus_code,\n> +\t\t\t\t  BayerFormat::Packing packingReq)\n> +{\n> +\tBayerFormat bayer = BayerFormat::fromMbusCode(mbus_code);\n> +\n> +\tASSERT(bayer.isValid());\n> +\n> +\tbayer.packing = packingReq;\n> +\tPixelFormat pix = bayer.toPixelFormat();\n> +\n> +\t/*\n> +\t * Not all formats (e.g. 8-bit or 16-bit Bayer formats) can have packed\n> +\t * variants. 
So if the PixelFormat returns as invalid, use the non-packed\n> +\t * conversion instead.\n> +\t */\n> +\tif (!pix.isValid()) {\n> +\t\tbayer.packing = BayerFormat::Packing::None;\n> +\t\tpix = bayer.toPixelFormat();\n> +\t}\n> +\n> +\treturn pix;\n> +}\n> +\n> +V4L2DeviceFormat toV4L2DeviceFormat(const V4L2SubdeviceFormat &format,\n> +\t\t\t\t    BayerFormat::Packing packingReq)\n> +{\n> +\tconst PixelFormat pix = mbusCodeToPixelFormat(format.mbus_code, packingReq);\n> +\tV4L2DeviceFormat deviceFormat;\n> +\n> +\tdeviceFormat.fourcc = V4L2PixelFormat::fromPixelFormat(pix);\n> +\tdeviceFormat.size = format.size;\n> +\treturn deviceFormat;\n> +}\n> +\n>  bool isRaw(PixelFormat &pixFmt)\n>  {\n>  \t/*\n> @@ -74,11 +122,10 @@ double scoreFormat(double desired, double actual)\n>  \treturn score;\n>  }\n>\n> -V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n> -\t\t\t      const Size &req)\n> +V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req)\n>  {\n>  \tdouble bestScore = std::numeric_limits<double>::max(), score;\n> -\tV4L2DeviceFormat bestMode;\n> +\tV4L2SubdeviceFormat bestFormat;\n>\n>  #define PENALTY_AR\t\t1500.0\n>  #define PENALTY_8BIT\t\t2000.0\n> @@ -88,19 +135,19 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n>\n>  \t/* Calculate the closest/best mode from the user requested size. */\n>  \tfor (const auto &iter : formatsMap) {\n> -\t\tV4L2PixelFormat v4l2Format = iter.first;\n> -\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(v4l2Format);\n> +\t\tconst unsigned int mbusCode = iter.first;\n> +\t\tconst PixelFormat format = mbusCodeToPixelFormat(mbusCode,\n> +\t\t\t\t\t\t\t\t BayerFormat::Packing::None);\n> +\t\tconst PixelFormatInfo &info = PixelFormatInfo::info(format);\n>\n> -\t\tfor (const SizeRange &sz : iter.second) {\n> -\t\t\tdouble modeWidth = sz.contains(req) ? req.width : sz.max.width;\n> -\t\t\tdouble modeHeight = sz.contains(req) ? 
req.height : sz.max.height;\n> +\t\tfor (const Size &size : iter.second) {\n>  \t\t\tdouble reqAr = static_cast<double>(req.width) / req.height;\n> -\t\t\tdouble modeAr = modeWidth / modeHeight;\n> +\t\t\tdouble fmtAr = size.width / size.height;\n>\n>  \t\t\t/* Score the dimensions for closeness. */\n> -\t\t\tscore = scoreFormat(req.width, modeWidth);\n> -\t\t\tscore += scoreFormat(req.height, modeHeight);\n> -\t\t\tscore += PENALTY_AR * scoreFormat(reqAr, modeAr);\n> +\t\t\tscore = scoreFormat(req.width, size.width);\n> +\t\t\tscore += scoreFormat(req.height, size.height);\n> +\t\t\tscore += PENALTY_AR * scoreFormat(reqAr, fmtAr);\n>\n>  \t\t\t/* Add any penalties... this is not an exact science! */\n>  \t\t\tif (!info.packed)\n> @@ -115,18 +162,18 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n>\n>  \t\t\tif (score <= bestScore) {\n>  \t\t\t\tbestScore = score;\n> -\t\t\t\tbestMode.fourcc = v4l2Format;\n> -\t\t\t\tbestMode.size = Size(modeWidth, modeHeight);\n> +\t\t\t\tbestFormat.mbus_code = mbusCode;\n> +\t\t\t\tbestFormat.size = size;\n>  \t\t\t}\n>\n> -\t\t\tLOG(RPI, Info) << \"Mode: \" << modeWidth << \"x\" << modeHeight\n> -\t\t\t\t       << \" fmt \" << v4l2Format.toString()\n> +\t\t\tLOG(RPI, Info) << \"Format: \" << size.toString()\n> +\t\t\t\t       << \" fmt \" << format.toString()\n>  \t\t\t\t       << \" Score: \" << score\n>  \t\t\t\t       << \" (best \" << bestScore << \")\";\n>  \t\t}\n>  \t}\n>\n> -\treturn bestMode;\n> +\treturn bestFormat;\n>  }\n>\n>  enum class Unicam : unsigned int { Image, Embedded };\n> @@ -170,6 +217,7 @@ public:\n>  \tstd::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;\n>\n>  \tstd::unique_ptr<CameraSensor> sensor_;\n> +\tSensorFormats sensorFormats_;\n>  \t/* Array of Unicam and ISP device streams and associated buffers/streams. 
*/\n>  \tRPi::Device<Unicam, 2> unicam_;\n>  \tRPi::Device<Isp, 4> isp_;\n> @@ -352,9 +400,10 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>  \t\t\t * Calculate the best sensor mode we can use based on\n>  \t\t\t * the user request.\n>  \t\t\t */\n> -\t\t\tV4L2VideoDevice::Formats fmts = data_->unicam_[Unicam::Image].dev()->formats();\n> -\t\t\tV4L2DeviceFormat sensorFormat = findBestMode(fmts, cfg.size);\n> -\t\t\tint ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&sensorFormat);\n> +\t\t\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size);\n> +\t\t\tV4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat,\n> +\t\t\t\t\t\t\t\t\t   BayerFormat::Packing::CSI2);\n> +\t\t\tint ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&unicamFormat);\n>  \t\t\tif (ret)\n>  \t\t\t\treturn Invalid;\n>\n> @@ -366,7 +415,7 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>  \t\t\t * fetch the \"native\" (i.e. untransformed) Bayer order,\n>  \t\t\t * because the sensor may currently be flipped!\n>  \t\t\t */\n> -\t\t\tV4L2PixelFormat fourcc = sensorFormat.fourcc;\n> +\t\t\tV4L2PixelFormat fourcc = unicamFormat.fourcc;\n>  \t\t\tif (data_->flipsAlterBayerOrder_) {\n>  \t\t\t\tBayerFormat bayer = BayerFormat::fromV4L2PixelFormat(fourcc);\n>  \t\t\t\tbayer.order = data_->nativeBayerOrder_;\n> @@ -375,15 +424,15 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()\n>  \t\t\t}\n>\n>  \t\t\tPixelFormat sensorPixFormat = fourcc.toPixelFormat();\n\nThis is not the sensorPixFormat anymore, but rather the unicam one,\nright ?\n\nThe rest looks good to me! 
Thanks\nReviewed-by: Jacopo Mondi <jacopo@jmondi.org>\n\n> -\t\t\tif (cfg.size != sensorFormat.size ||\n> +\t\t\tif (cfg.size != unicamFormat.size ||\n>  \t\t\t    cfg.pixelFormat != sensorPixFormat) {\n> -\t\t\t\tcfg.size = sensorFormat.size;\n> +\t\t\t\tcfg.size = unicamFormat.size;\n>  \t\t\t\tcfg.pixelFormat = sensorPixFormat;\n>  \t\t\t\tstatus = Adjusted;\n>  \t\t\t}\n>\n> -\t\t\tcfg.stride = sensorFormat.planes[0].bpl;\n> -\t\t\tcfg.frameSize = sensorFormat.planes[0].size;\n> +\t\t\tcfg.stride = unicamFormat.planes[0].bpl;\n> +\t\t\tcfg.frameSize = unicamFormat.planes[0].size;\n>\n>  \t\t\trawCount++;\n>  \t\t} else {\n> @@ -472,7 +521,7 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,\n>  {\n>  \tRPiCameraData *data = cameraData(camera);\n>  \tCameraConfiguration *config = new RPiCameraConfiguration(data);\n> -\tV4L2DeviceFormat sensorFormat;\n> +\tV4L2SubdeviceFormat sensorFormat;\n>  \tunsigned int bufferCount;\n>  \tPixelFormat pixelFormat;\n>  \tV4L2VideoDevice::Formats fmts;\n> @@ -487,9 +536,9 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,\n>  \t\tswitch (role) {\n>  \t\tcase StreamRole::Raw:\n>  \t\t\tsize = data->sensor_->resolution();\n> -\t\t\tfmts = data->unicam_[Unicam::Image].dev()->formats();\n> -\t\t\tsensorFormat = findBestMode(fmts, size);\n> -\t\t\tpixelFormat = sensorFormat.fourcc.toPixelFormat();\n> +\t\t\tsensorFormat = findBestFormat(data->sensorFormats_, size);\n> +\t\t\tpixelFormat = mbusCodeToPixelFormat(sensorFormat.mbus_code,\n> +\t\t\t\t\t\t\t    BayerFormat::Packing::CSI2);\n>  \t\t\tASSERT(pixelFormat.isValid());\n>  \t\t\tbufferCount = 2;\n>  \t\t\trawCount++;\n> @@ -572,6 +621,7 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>  \tfor (auto const stream : data->streams_)\n>  \t\tstream->reset();\n>\n> +\tBayerFormat::Packing packing = BayerFormat::Packing::CSI2;\n>  \tSize maxSize, sensorSize;\n>  \tunsigned int maxIndex = 
0;\n>  \tbool rawStream = false;\n> @@ -590,6 +640,8 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>  \t\t\t */\n>  \t\t\tsensorSize = cfg.size;\n>  \t\t\trawStream = true;\n> +\t\t\t/* Check if the user has explicitly set an unpacked format. */\n> +\t\t\tpacking = BayerFormat::fromPixelFormat(cfg.pixelFormat).packing;\n>  \t\t} else {\n>  \t\t\tif (cfg.size > maxSize) {\n>  \t\t\t\tmaxSize = config->at(i).size;\n> @@ -615,24 +667,21 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>  \t}\n>\n>  \t/* First calculate the best sensor mode we can use based on the user request. */\n> -\tV4L2VideoDevice::Formats fmts = data->unicam_[Unicam::Image].dev()->formats();\n> -\tV4L2DeviceFormat sensorFormat = findBestMode(fmts, rawStream ? sensorSize : maxSize);\n> -\n> -\tret = data->unicam_[Unicam::Image].dev()->setFormat(&sensorFormat);\n> +\tV4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize);\n> +\tret = data->sensor_->setFormat(&sensorFormat);\n>  \tif (ret)\n>  \t\treturn ret;\n>\n> -\t/*\n> -\t * The control ranges associated with the sensor may need updating\n> -\t * after a format change.\n> -\t * \\todo Use the CameraSensor::setFormat API instead.\n> -\t */\n> -\tdata->sensor_->updateControlInfo();\n> +\tV4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat, packing);\n> +\tret = data->unicam_[Unicam::Image].dev()->setFormat(&unicamFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n>\n>  \tLOG(RPI, Info) << \"Sensor: \" << camera->id()\n> -\t\t       << \" - Selected mode: \" << sensorFormat.toString();\n> +\t\t       << \" - Selected sensor format: \" << sensorFormat.toString()\n> +\t\t       << \" - Selected unicam format: \" << unicamFormat.toString();\n>\n> -\tret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat);\n> +\tret = data->isp_[Isp::Input].dev()->setFormat(&unicamFormat);\n>  \tif (ret)\n>  \t\treturn ret;\n>\n> @@ 
-754,8 +803,8 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>  \tdata->ispMinCropSize_ = testCrop.size();\n>\n>  \t/* Adjust aspect ratio by providing crops on the input image. */\n> -\tSize size = sensorFormat.size.boundedToAspectRatio(maxSize);\n> -\tRectangle crop = size.centeredTo(Rectangle(sensorFormat.size).center());\n> +\tSize size = unicamFormat.size.boundedToAspectRatio(maxSize);\n> +\tRectangle crop = size.centeredTo(Rectangle(unicamFormat.size).center());\n>  \tdata->ispCrop_ = crop;\n>\n>  \tdata->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &crop);\n> @@ -769,8 +818,11 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)\n>  \t * supports it.\n>  \t */\n>  \tif (data->sensorMetadata_) {\n> -\t\tformat = {};\n> +\t\tV4L2SubdeviceFormat embeddedFormat;\n> +\n> +\t\tdata->sensor_->device()->getFormat(1, &embeddedFormat);\n>  \t\tformat.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);\n> +\t\tformat.planes[0].size = embeddedFormat.size.width * embeddedFormat.size.height;\n>\n>  \t\tLOG(RPI, Debug) << \"Setting embedded data format.\";\n>  \t\tret = data->unicam_[Unicam::Embedded].dev()->setFormat(&format);\n> @@ -1000,6 +1052,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>  \tif (data->sensor_->init())\n>  \t\treturn false;\n>\n> +\tdata->sensorFormats_ = populateSensorFormats(data->sensor_);\n> +\n>  \tipa::RPi::SensorConfig sensorConfig;\n>  \tif (data->loadIPA(&sensorConfig)) {\n>  \t\tLOG(RPI, Error) << \"Failed to load a suitable IPA library\";\n> @@ -1026,6 +1080,11 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>  \t\t\treturn false;\n>  \t}\n>\n> +\tif (!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {\n> +\t\tLOG(RPI, Error) << \"Unicam driver does not use the MediaController, please update your kernel!\";\n> +\t\treturn false;\n> +\t}\n> +\n>  \t/*\n>  \t * Setup our delayed control writer with the sensor 
default\n>  \t * gain and exposure delays. Mark VBLANK for priority write.\n> @@ -1035,7 +1094,7 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>  \t\t{ V4L2_CID_EXPOSURE, { sensorConfig.exposureDelay, false } },\n>  \t\t{ V4L2_CID_VBLANK, { sensorConfig.vblankDelay, true } }\n>  \t};\n> -\tdata->delayedCtrls_ = std::make_unique<DelayedControls>(data->unicam_[Unicam::Image].dev(), params);\n> +\tdata->delayedCtrls_ = std::make_unique<DelayedControls>(data->sensor_->device(), params);\n>  \tdata->sensorMetadata_ = sensorConfig.sensorMetadata;\n>\n>  \t/* Register the controls that the Raspberry Pi IPA can handle. */\n> @@ -1062,15 +1121,14 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>  \t * As part of answering the final question, we reset the camera to\n>  \t * no transform at all.\n>  \t */\n> -\n> -\tV4L2VideoDevice *dev = data->unicam_[Unicam::Image].dev();\n> -\tconst struct v4l2_query_ext_ctrl *hflipCtrl = dev->controlInfo(V4L2_CID_HFLIP);\n> +\tconst V4L2Subdevice *sensor = data->sensor_->device();\n> +\tconst struct v4l2_query_ext_ctrl *hflipCtrl = sensor->controlInfo(V4L2_CID_HFLIP);\n>  \tif (hflipCtrl) {\n>  \t\t/* We assume it will support vflips too... */\n>  \t\tdata->supportsFlips_ = true;\n>  \t\tdata->flipsAlterBayerOrder_ = hflipCtrl->flags & V4L2_CTRL_FLAG_MODIFY_LAYOUT;\n>\n> -\t\tControlList ctrls(dev->controls());\n> +\t\tControlList ctrls(data->sensor_->controls());\n>  \t\tctrls.set(V4L2_CID_HFLIP, 0);\n>  \t\tctrls.set(V4L2_CID_VFLIP, 0);\n>  \t\tdata->setSensorControls(ctrls);\n> @@ -1078,9 +1136,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)\n>\n>  \t/* Look for a valid Bayer format. 
*/\n>  \tBayerFormat bayerFormat;\n> -\tfor (const auto &iter : dev->formats()) {\n> -\t\tV4L2PixelFormat v4l2Format = iter.first;\n> -\t\tbayerFormat = BayerFormat::fromV4L2PixelFormat(v4l2Format);\n> +\tfor (const auto &iter : data->sensorFormats_) {\n> +\t\tbayerFormat = BayerFormat::fromMbusCode(iter.first);\n>  \t\tif (bayerFormat.isValid())\n>  \t\t\tbreak;\n>  \t}\n> @@ -1263,7 +1320,7 @@ int RPiCameraData::configureIPA(const CameraConfiguration *config)\n>  \t\t}\n>  \t}\n>\n> -\tentityControls.emplace(0, unicam_[Unicam::Image].dev()->controls());\n> +\tentityControls.emplace(0, sensor_->controls());\n>  \tentityControls.emplace(1, isp_[Isp::Input].dev()->controls());\n>\n>  \t/* Always send the user transform to the IPA. */\n> @@ -1387,10 +1444,10 @@ void RPiCameraData::setSensorControls(ControlList &controls)\n>  \t\tControlList vblank_ctrl;\n>\n>  \t\tvblank_ctrl.set(V4L2_CID_VBLANK, controls.get(V4L2_CID_VBLANK));\n> -\t\tunicam_[Unicam::Image].dev()->setControls(&vblank_ctrl);\n> +\t\tsensor_->setControls(&vblank_ctrl);\n>  \t}\n>\n> -\tunicam_[Unicam::Image].dev()->setControls(&controls);\n> +\tsensor_->setControls(&controls);\n>  }\n>\n>  void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer)\n> --\n> 2.25.1\n>","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id C19D7BF415\n\tfor <parsemail@patchwork.libcamera.org>;\n\tFri, 29 Oct 2021 08:48:53 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 16B9D600BF;\n\tFri, 29 Oct 2021 10:48:53 +0200 (CEST)","from relay2-d.mail.gandi.net (relay2-d.mail.gandi.net\n\t[217.70.183.194])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id C3DE0600B4\n\tfor 
<libcamera-devel@lists.libcamera.org>;\n\tFri, 29 Oct 2021 10:48:51 +0200 (CEST)"],"Date":"Fri, 29 Oct 2021 10:49:41 +0200","From":"Jacopo Mondi <jacopo@jmondi.org>","To":"Naushir Patuck <naush@raspberrypi.com>","Message-ID":"<20211029084941.de7zc2ulmd4as4rt@uno.localdomain>","References":"<20211028084646.453775-1-naush@raspberrypi.com>\n\t<20211028084646.453775-9-naush@raspberrypi.com>","In-Reply-To":"<20211028084646.453775-9-naush@raspberrypi.com>","Subject":"Re: [libcamera-devel] [PATCH v4 08/10] pipeline: raspberrypi:\n\tConvert the pipeline handler to use media controller","Cc":"libcamera-devel@lists.libcamera.org"}},{"id":20629,"web_url":"https://patchwork.libcamera.org/comment/20629/","msgid":"<CAEmqJPpYkUw2P4DAoPXWmEjuo=vFzAwQCEXhQw8znJA9nCVRHQ@mail.gmail.com>","date":"2021-10-29T09:07:42","subject":"Re: [libcamera-devel] [PATCH v4 08/10] pipeline: raspberrypi:\n\tConvert the pipeline handler to use media 
controller","submitter":{"id":34,"url":"https://patchwork.libcamera.org/api/people/34/","name":"Naushir Patuck","email":"naush@raspberrypi.com"},"content":"Hi Jacopo,\n\nThank you for your feedback.\n\nOn Fri, 29 Oct 2021 at 09:48, Jacopo Mondi <jacopo@jmondi.org> wrote:\n\n> Hi Naush,\n>    thanks for the update!\n>\n> On Thu, Oct 28, 2021 at 09:46:44AM +0100, Naushir Patuck wrote:\n> > Switch the pipeline handler to use the new Unicam media controller based\n> driver.\n> > With this change, we directly talk to the sensor device driver to set\n> controls\n> > and set/get formats in the pipeline handler.\n> >\n> > This change requires the accompanying Raspberry Pi linux kernel change at\n> > https://github.com/raspberrypi/linux/pull/4645. If this kernel change\n> is not\n> > present, the pipeline handler will fail to run with an error message\n> informing\n> > the user to update the kernel build.\n> >\n> > Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> > ---\n> >  .../pipeline/raspberrypi/raspberrypi.cpp      | 167 ++++++++++++------\n> >  1 file changed, 112 insertions(+), 55 deletions(-)\n> >\n> > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > index e078b8b9a811..8e3e96a4359f 100644\n> > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > @@ -26,6 +26,7 @@\n> >  #include <libcamera/base/utils.h>\n> >\n> >  #include <linux/bcm2835-isp.h>\n> > +#include <linux/media-bus-format.h>\n> >  #include <linux/videodev2.h>\n> >\n> >  #include \"libcamera/internal/bayer_format.h\"\n> > @@ -48,6 +49,53 @@ LOG_DEFINE_CATEGORY(RPI)\n> >\n> >  namespace {\n> >\n> > +/* Map of mbus codes to supported sizes reported by the sensor. 
*/\n> > +using SensorFormats = std::map<unsigned int, std::vector<Size>>;\n> > +\n> > +SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor>\n> &sensor)\n> > +{\n> > +     SensorFormats formats;\n> > +\n> > +     for (auto const mbusCode : sensor->mbusCodes())\n> > +             formats.emplace(mbusCode, sensor->sizes(mbusCode));\n> > +\n> > +     return formats;\n> > +}\n>\n> We have V4L2Subdevice::Formats defined as\n>         using Formats = std::map<unsigned int, std::vector<SizeRange>>;\n>\n> but we don't expose the v4l2 subdevice formats from CameraSensor, and\n> I think it still makes sense as CameraSensor::sizes() filters and sorts\n> sizes instead of returning the raw subdevice SizeRange.\n>\n> So I think defining this part makes sense here even if it's a pattern\n> which I assume is repeated in many pipeline handlers.\n>\n\nIndeed.  I would have used the v4l2 subdevice formats from CameraSensor,\nbut\nit was not a public member.  Not a problem though, as it is trivial for\npipeline\nhandlers to do the equivalent of my code above to get the formats list.\n\n\n>\n> > +\n> > +PixelFormat mbusCodeToPixelFormat(unsigned int mbus_code,\n> > +                               BayerFormat::Packing packingReq)\n> > +{\n> > +     BayerFormat bayer = BayerFormat::fromMbusCode(mbus_code);\n> > +\n> > +     ASSERT(bayer.isValid());\n> > +\n> > +     bayer.packing = packingReq;\n> > +     PixelFormat pix = bayer.toPixelFormat();\n> > +\n> > +     /*\n> > +      * Not all formats (e.g. 8-bit or 16-bit Bayer formats) can have\n> packed\n> > +      * variants. 
So if the PixelFormat returns as invalid, use the\n> non-packed\n> > +      * conversion instead.\n> > +      */\n> > +     if (!pix.isValid()) {\n> > +             bayer.packing = BayerFormat::Packing::None;\n> > +             pix = bayer.toPixelFormat();\n> > +     }\n> > +\n> > +     return pix;\n> > +}\n> > +\n> > +V4L2DeviceFormat toV4L2DeviceFormat(const V4L2SubdeviceFormat &format,\n> > +                                 BayerFormat::Packing packingReq)\n> > +{\n> > +     const PixelFormat pix = mbusCodeToPixelFormat(format.mbus_code,\n> packingReq);\n> > +     V4L2DeviceFormat deviceFormat;\n> > +\n> > +     deviceFormat.fourcc = V4L2PixelFormat::fromPixelFormat(pix);\n> > +     deviceFormat.size = format.size;\n> > +     return deviceFormat;\n> > +}\n> > +\n> >  bool isRaw(PixelFormat &pixFmt)\n> >  {\n> >       /*\n> > @@ -74,11 +122,10 @@ double scoreFormat(double desired, double actual)\n> >       return score;\n> >  }\n> >\n> > -V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,\n> > -                           const Size &req)\n> > +V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap,\n> const Size &req)\n> >  {\n> >       double bestScore = std::numeric_limits<double>::max(), score;\n> > -     V4L2DeviceFormat bestMode;\n> > +     V4L2SubdeviceFormat bestFormat;\n> >\n> >  #define PENALTY_AR           1500.0\n> >  #define PENALTY_8BIT         2000.0\n> > @@ -88,19 +135,19 @@ V4L2DeviceFormat\n> findBestMode(V4L2VideoDevice::Formats &formatsMap,\n> >\n> >       /* Calculate the closest/best mode from the user requested size. 
*/\n> >       for (const auto &iter : formatsMap) {\n> > -             V4L2PixelFormat v4l2Format = iter.first;\n> > -             const PixelFormatInfo &info =\n> PixelFormatInfo::info(v4l2Format);\n> > +             const unsigned int mbusCode = iter.first;\n> > +             const PixelFormat format = mbusCodeToPixelFormat(mbusCode,\n> > +\n> BayerFormat::Packing::None);\n> > +             const PixelFormatInfo &info =\n> PixelFormatInfo::info(format);\n> >\n> > -             for (const SizeRange &sz : iter.second) {\n> > -                     double modeWidth = sz.contains(req) ? req.width :\n> sz.max.width;\n> > -                     double modeHeight = sz.contains(req) ? req.height\n> : sz.max.height;\n> > +             for (const Size &size : iter.second) {\n> >                       double reqAr = static_cast<double>(req.width) /\n> req.height;\n> > -                     double modeAr = modeWidth / modeHeight;\n> > +                     double fmtAr = size.width / size.height;\n> >\n> >                       /* Score the dimensions for closeness. */\n> > -                     score = scoreFormat(req.width, modeWidth);\n> > -                     score += scoreFormat(req.height, modeHeight);\n> > -                     score += PENALTY_AR * scoreFormat(reqAr, modeAr);\n> > +                     score = scoreFormat(req.width, size.width);\n> > +                     score += scoreFormat(req.height, size.height);\n> > +                     score += PENALTY_AR * scoreFormat(reqAr, fmtAr);\n> >\n> >                       /* Add any penalties... this is not an exact\n> science! 
*/\n> >                       if (!info.packed)\n> > @@ -115,18 +162,18 @@ V4L2DeviceFormat\n> findBestMode(V4L2VideoDevice::Formats &formatsMap,\n> >\n> >                       if (score <= bestScore) {\n> >                               bestScore = score;\n> > -                             bestMode.fourcc = v4l2Format;\n> > -                             bestMode.size = Size(modeWidth,\n> modeHeight);\n> > +                             bestFormat.mbus_code = mbusCode;\n> > +                             bestFormat.size = size;\n> >                       }\n> >\n> > -                     LOG(RPI, Info) << \"Mode: \" << modeWidth << \"x\" <<\n> modeHeight\n> > -                                    << \" fmt \" << v4l2Format.toString()\n> > +                     LOG(RPI, Info) << \"Format: \" << size.toString()\n> > +                                    << \" fmt \" << format.toString()\n> >                                      << \" Score: \" << score\n> >                                      << \" (best \" << bestScore << \")\";\n> >               }\n> >       }\n> >\n> > -     return bestMode;\n> > +     return bestFormat;\n> >  }\n> >\n> >  enum class Unicam : unsigned int { Image, Embedded };\n> > @@ -170,6 +217,7 @@ public:\n> >       std::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;\n> >\n> >       std::unique_ptr<CameraSensor> sensor_;\n> > +     SensorFormats sensorFormats_;\n> >       /* Array of Unicam and ISP device streams and associated\n> buffers/streams. 
*/\n> >       RPi::Device<Unicam, 2> unicam_;\n> >       RPi::Device<Isp, 4> isp_;\n> > @@ -352,9 +400,10 @@ CameraConfiguration::Status\n> RPiCameraConfiguration::validate()\n> >                        * Calculate the best sensor mode we can use based\n> on\n> >                        * the user request.\n> >                        */\n> > -                     V4L2VideoDevice::Formats fmts =\n> data_->unicam_[Unicam::Image].dev()->formats();\n> > -                     V4L2DeviceFormat sensorFormat = findBestMode(fmts,\n> cfg.size);\n> > -                     int ret =\n> data_->unicam_[Unicam::Image].dev()->tryFormat(&sensorFormat);\n> > +                     V4L2SubdeviceFormat sensorFormat =\n> findBestFormat(data_->sensorFormats_, cfg.size);\n> > +                     V4L2DeviceFormat unicamFormat =\n> toV4L2DeviceFormat(sensorFormat,\n> > +\n> BayerFormat::Packing::CSI2);\n> > +                     int ret =\n> data_->unicam_[Unicam::Image].dev()->tryFormat(&unicamFormat);\n> >                       if (ret)\n> >                               return Invalid;\n> >\n> > @@ -366,7 +415,7 @@ CameraConfiguration::Status\n> RPiCameraConfiguration::validate()\n> >                        * fetch the \"native\" (i.e. 
untransformed) Bayer\n> order,\n> >                        * because the sensor may currently be flipped!\n> >                        */\n> > -                     V4L2PixelFormat fourcc = sensorFormat.fourcc;\n> > +                     V4L2PixelFormat fourcc = unicamFormat.fourcc;\n> >                       if (data_->flipsAlterBayerOrder_) {\n> >                               BayerFormat bayer =\n> BayerFormat::fromV4L2PixelFormat(fourcc);\n> >                               bayer.order = data_->nativeBayerOrder_;\n> > @@ -375,15 +424,15 @@ CameraConfiguration::Status\n> RPiCameraConfiguration::validate()\n> >                       }\n> >\n> >                       PixelFormat sensorPixFormat =\n> fourcc.toPixelFormat();\n>\n> This is not the sensorPixFormat anymore, but rather the unicam one,\n> right ?\n>\n\nYes, that's correct!  I will rename this variable to unicamPixFormat to\navoid any confusion in the next version.\n\n\n>\n> The rest looks good to me! Thanks\n> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>\n>\n\nThanks!\nNaush\n\n\n>\n> > -                     if (cfg.size != sensorFormat.size ||\n> > +                     if (cfg.size != unicamFormat.size ||\n> >                           cfg.pixelFormat != sensorPixFormat) {\n> > -                             cfg.size = sensorFormat.size;\n> > +                             cfg.size = unicamFormat.size;\n> >                               cfg.pixelFormat = sensorPixFormat;\n> >                               status = Adjusted;\n> >                       }\n> >\n> > -                     cfg.stride = sensorFormat.planes[0].bpl;\n> > -                     cfg.frameSize = sensorFormat.planes[0].size;\n> > +                     cfg.stride = unicamFormat.planes[0].bpl;\n> > +                     cfg.frameSize = unicamFormat.planes[0].size;\n> >\n> >                       rawCount++;\n> >               } else {\n> > @@ -472,7 +521,7 @@ CameraConfiguration\n> *PipelineHandlerRPi::generateConfiguration(Camera 
*camera,\n> >  {\n> >       RPiCameraData *data = cameraData(camera);\n> >       CameraConfiguration *config = new RPiCameraConfiguration(data);\n> > -     V4L2DeviceFormat sensorFormat;\n> > +     V4L2SubdeviceFormat sensorFormat;\n> >       unsigned int bufferCount;\n> >       PixelFormat pixelFormat;\n> >       V4L2VideoDevice::Formats fmts;\n> > @@ -487,9 +536,9 @@ CameraConfiguration\n> *PipelineHandlerRPi::generateConfiguration(Camera *camera,\n> >               switch (role) {\n> >               case StreamRole::Raw:\n> >                       size = data->sensor_->resolution();\n> > -                     fmts =\n> data->unicam_[Unicam::Image].dev()->formats();\n> > -                     sensorFormat = findBestMode(fmts, size);\n> > -                     pixelFormat = sensorFormat.fourcc.toPixelFormat();\n> > +                     sensorFormat =\n> findBestFormat(data->sensorFormats_, size);\n> > +                     pixelFormat =\n> mbusCodeToPixelFormat(sensorFormat.mbus_code,\n> > +\n>  BayerFormat::Packing::CSI2);\n> >                       ASSERT(pixelFormat.isValid());\n> >                       bufferCount = 2;\n> >                       rawCount++;\n> > @@ -572,6 +621,7 @@ int PipelineHandlerRPi::configure(Camera *camera,\n> CameraConfiguration *config)\n> >       for (auto const stream : data->streams_)\n> >               stream->reset();\n> >\n> > +     BayerFormat::Packing packing = BayerFormat::Packing::CSI2;\n> >       Size maxSize, sensorSize;\n> >       unsigned int maxIndex = 0;\n> >       bool rawStream = false;\n> > @@ -590,6 +640,8 @@ int PipelineHandlerRPi::configure(Camera *camera,\n> CameraConfiguration *config)\n> >                        */\n> >                       sensorSize = cfg.size;\n> >                       rawStream = true;\n> > +                     /* Check if the user has explicitly set an\n> unpacked format. 
*/\n> > +                     packing =\n> BayerFormat::fromPixelFormat(cfg.pixelFormat).packing;\n> >               } else {\n> >                       if (cfg.size > maxSize) {\n> >                               maxSize = config->at(i).size;\n> > @@ -615,24 +667,21 @@ int PipelineHandlerRPi::configure(Camera *camera,\n> CameraConfiguration *config)\n> >       }\n> >\n> >       /* First calculate the best sensor mode we can use based on the\n> user request. */\n> > -     V4L2VideoDevice::Formats fmts =\n> data->unicam_[Unicam::Image].dev()->formats();\n> > -     V4L2DeviceFormat sensorFormat = findBestMode(fmts, rawStream ?\n> sensorSize : maxSize);\n> > -\n> > -     ret = data->unicam_[Unicam::Image].dev()->setFormat(&sensorFormat);\n> > +     V4L2SubdeviceFormat sensorFormat =\n> findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize);\n> > +     ret = data->sensor_->setFormat(&sensorFormat);\n> >       if (ret)\n> >               return ret;\n> >\n> > -     /*\n> > -      * The control ranges associated with the sensor may need updating\n> > -      * after a format change.\n> > -      * \\todo Use the CameraSensor::setFormat API instead.\n> > -      */\n> > -     data->sensor_->updateControlInfo();\n> > +     V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat,\n> packing);\n> > +     ret = data->unicam_[Unicam::Image].dev()->setFormat(&unicamFormat);\n> > +     if (ret)\n> > +             return ret;\n> >\n> >       LOG(RPI, Info) << \"Sensor: \" << camera->id()\n> > -                    << \" - Selected mode: \" << sensorFormat.toString();\n> > +                    << \" - Selected sensor format: \" <<\n> sensorFormat.toString()\n> > +                    << \" - Selected unicam format: \" <<\n> unicamFormat.toString();\n> >\n> > -     ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat);\n> > +     ret = data->isp_[Isp::Input].dev()->setFormat(&unicamFormat);\n> >       if (ret)\n> >               return ret;\n> >\n> > @@ 
-754,8 +803,8 @@ int PipelineHandlerRPi::configure(Camera *camera,\n> CameraConfiguration *config)\n> >       data->ispMinCropSize_ = testCrop.size();\n> >\n> >       /* Adjust aspect ratio by providing crops on the input image. */\n> > -     Size size = sensorFormat.size.boundedToAspectRatio(maxSize);\n> > -     Rectangle crop =\n> size.centeredTo(Rectangle(sensorFormat.size).center());\n> > +     Size size = unicamFormat.size.boundedToAspectRatio(maxSize);\n> > +     Rectangle crop =\n> size.centeredTo(Rectangle(unicamFormat.size).center());\n> >       data->ispCrop_ = crop;\n> >\n> >       data->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP,\n> &crop);\n> > @@ -769,8 +818,11 @@ int PipelineHandlerRPi::configure(Camera *camera,\n> CameraConfiguration *config)\n> >        * supports it.\n> >        */\n> >       if (data->sensorMetadata_) {\n> > -             format = {};\n> > +             V4L2SubdeviceFormat embeddedFormat;\n> > +\n> > +             data->sensor_->device()->getFormat(1, &embeddedFormat);\n> >               format.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);\n> > +             format.planes[0].size = embeddedFormat.size.width *\n> embeddedFormat.size.height;\n> >\n> >               LOG(RPI, Debug) << \"Setting embedded data format.\";\n> >               ret =\n> data->unicam_[Unicam::Embedded].dev()->setFormat(&format);\n> > @@ -1000,6 +1052,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator\n> *enumerator)\n> >       if (data->sensor_->init())\n> >               return false;\n> >\n> > +     data->sensorFormats_ = populateSensorFormats(data->sensor_);\n> > +\n> >       ipa::RPi::SensorConfig sensorConfig;\n> >       if (data->loadIPA(&sensorConfig)) {\n> >               LOG(RPI, Error) << \"Failed to load a suitable IPA library\";\n> > @@ -1026,6 +1080,11 @@ bool PipelineHandlerRPi::match(DeviceEnumerator\n> *enumerator)\n> >                       return false;\n> >       }\n> >\n> > +     if\n> 
(!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {\n> > +             LOG(RPI, Error) << \"Unicam driver does not use the\n> MediaController, please update your kernel!\";\n> > +             return false;\n> > +     }\n> > +\n> >       /*\n> >        * Setup our delayed control writer with the sensor default\n> >        * gain and exposure delays. Mark VBLANK for priority write.\n> > @@ -1035,7 +1094,7 @@ bool PipelineHandlerRPi::match(DeviceEnumerator\n> *enumerator)\n> >               { V4L2_CID_EXPOSURE, { sensorConfig.exposureDelay, false }\n> },\n> >               { V4L2_CID_VBLANK, { sensorConfig.vblankDelay, true } }\n> >       };\n> > -     data->delayedCtrls_ =\n> std::make_unique<DelayedControls>(data->unicam_[Unicam::Image].dev(),\n> params);\n> > +     data->delayedCtrls_ =\n> std::make_unique<DelayedControls>(data->sensor_->device(), params);\n> >       data->sensorMetadata_ = sensorConfig.sensorMetadata;\n> >\n> >       /* Register the controls that the Raspberry Pi IPA can handle. */\n> > @@ -1062,15 +1121,14 @@ bool PipelineHandlerRPi::match(DeviceEnumerator\n> *enumerator)\n> >        * As part of answering the final question, we reset the camera to\n> >        * no transform at all.\n> >        */\n> > -\n> > -     V4L2VideoDevice *dev = data->unicam_[Unicam::Image].dev();\n> > -     const struct v4l2_query_ext_ctrl *hflipCtrl =\n> dev->controlInfo(V4L2_CID_HFLIP);\n> > +     const V4L2Subdevice *sensor = data->sensor_->device();\n> > +     const struct v4l2_query_ext_ctrl *hflipCtrl =\n> sensor->controlInfo(V4L2_CID_HFLIP);\n> >       if (hflipCtrl) {\n> >               /* We assume it will support vflips too... 
*/\n> >               data->supportsFlips_ = true;\n> >               data->flipsAlterBayerOrder_ = hflipCtrl->flags &\n> V4L2_CTRL_FLAG_MODIFY_LAYOUT;\n> >\n> > -             ControlList ctrls(dev->controls());\n> > +             ControlList ctrls(data->sensor_->controls());\n> >               ctrls.set(V4L2_CID_HFLIP, 0);\n> >               ctrls.set(V4L2_CID_VFLIP, 0);\n> >               data->setSensorControls(ctrls);\n> > @@ -1078,9 +1136,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator\n> *enumerator)\n> >\n> >       /* Look for a valid Bayer format. */\n> >       BayerFormat bayerFormat;\n> > -     for (const auto &iter : dev->formats()) {\n> > -             V4L2PixelFormat v4l2Format = iter.first;\n> > -             bayerFormat = BayerFormat::fromV4L2PixelFormat(v4l2Format);\n> > +     for (const auto &iter : data->sensorFormats_) {\n> > +             bayerFormat = BayerFormat::fromMbusCode(iter.first);\n> >               if (bayerFormat.isValid())\n> >                       break;\n> >       }\n> > @@ -1263,7 +1320,7 @@ int RPiCameraData::configureIPA(const\n> CameraConfiguration *config)\n> >               }\n> >       }\n> >\n> > -     entityControls.emplace(0,\n> unicam_[Unicam::Image].dev()->controls());\n> > +     entityControls.emplace(0, sensor_->controls());\n> >       entityControls.emplace(1, isp_[Isp::Input].dev()->controls());\n> >\n> >       /* Always send the user transform to the IPA. 
*/\n> > @@ -1387,10 +1444,10 @@ void\n> RPiCameraData::setSensorControls(ControlList &controls)\n> >               ControlList vblank_ctrl;\n> >\n> >               vblank_ctrl.set(V4L2_CID_VBLANK,\n> controls.get(V4L2_CID_VBLANK));\n> > -             unicam_[Unicam::Image].dev()->setControls(&vblank_ctrl);\n> > +             sensor_->setControls(&vblank_ctrl);\n> >       }\n> >\n> > -     unicam_[Unicam::Image].dev()->setControls(&controls);\n> > +     sensor_->setControls(&controls);\n> >  }\n> >\n> >  void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer)\n> > --\n> > 2.25.1\n> >\n>","headers":{"From":"Naushir Patuck <naush@raspberrypi.com>","Date":"Fri, 29 Oct 2021 10:07:42 +0100","To":"Jacopo Mondi <jacopo@jmondi.org>","Cc":"libcamera devel <libcamera-devel@lists.libcamera.org>","Message-ID":"<CAEmqJPpYkUw2P4DAoPXWmEjuo=vFzAwQCEXhQw8znJA9nCVRHQ@mail.gmail.com>","References":"<20211028084646.453775-1-naush@raspberrypi.com>\n\t<20211028084646.453775-9-naush@raspberrypi.com>\n\t<20211029084941.de7zc2ulmd4as4rt@uno.localdomain>","In-Reply-To":"<20211029084941.de7zc2ulmd4as4rt@uno.localdomain>","Subject":"Re: [libcamera-devel] [PATCH v4 08/10] pipeline: raspberrypi:\n\tConvert the pipeline handler to use media controller"}}]