[{"id":2339,"web_url":"https://patchwork.libcamera.org/comment/2339/","msgid":"<20190808074701.GA6055@pendragon.ideasonboard.com>","date":"2019-08-08T07:47:01","subject":"Re: [libcamera-devel] [PATCH v3 2/4] libcamera: pipeline: vimc:\n\tSwitch to using the RGB/YUV Capture video node","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Niklas,\n\nThank you for the patch.\n\nOn Thu, Aug 08, 2019 at 02:04:02AM +0200, Niklas Söderlund wrote:\n> Linux commit 85ab1aa1fac17bcd (\"media: vimc: deb: fix default sink bayer\n> format\") which is part of v5.2 changes the default media bus format for\n> the debayer subdevices. This leads to a -EPIPE error when trying to use\n> the raw capture video device nodes.\n> \n> Fix this by moving the vimc pipeline to use the RGB/YUV Capture capture\n> video node. As a consequence of this change the scaler in the vimc\n> pipeline is used and a hard coded upscale of 3 is present in the video\n> pipeline. 
This limits the sizes exposed and accepted by libcamera to\n> multiples of 3.\n\nYou should also mention that the tests are updated according to the new\nvimc format requirements.\n\n> The raw capture video node still needs to be handled by the pipeline as\n> its format needs to be updated to allow the pipeline format validation\n> to pass.\n> \n> Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>\n> ---\n>  src/libcamera/pipeline/vimc.cpp | 112 ++++++++++++++++++++++++++++----\n>  test/camera/buffer_import.cpp   |   8 +--\n>  2 files changed, 104 insertions(+), 16 deletions(-)\n> \n> diff --git a/src/libcamera/pipeline/vimc.cpp b/src/libcamera/pipeline/vimc.cpp\n> index 3d6808222a8a2c5d..55975cae066d71cd 100644\n> --- a/src/libcamera/pipeline/vimc.cpp\n> +++ b/src/libcamera/pipeline/vimc.cpp\n> @@ -10,6 +10,8 @@\n>  #include <iomanip>\n>  #include <tuple>\n>  \n> +#include <linux/media-bus-format.h>\n> +\n>  #include <libcamera/camera.h>\n>  #include <libcamera/controls.h>\n>  #include <libcamera/ipa/ipa_interface.h>\n> @@ -25,6 +27,7 @@\n>  #include \"pipeline_handler.h\"\n>  #include \"utils.h\"\n>  #include \"v4l2_controls.h\"\n> +#include \"v4l2_subdevice.h\"\n>  #include \"v4l2_videodevice.h\"\n>  \n>  namespace libcamera {\n> @@ -35,21 +38,28 @@ class VimcCameraData : public CameraData\n>  {\n>  public:\n>  \tVimcCameraData(PipelineHandler *pipe)\n> -\t\t: CameraData(pipe), video_(nullptr), sensor_(nullptr)\n> +\t\t: CameraData(pipe), sensor_(nullptr), debayer_(nullptr),\n> +\t\t  scaler_(nullptr), video_(nullptr), raw_(nullptr)\n>  \t{\n>  \t}\n>  \n>  \t~VimcCameraData()\n>  \t{\n>  \t\tdelete sensor_;\n> +\t\tdelete debayer_;\n> +\t\tdelete scaler_;\n>  \t\tdelete video_;\n> +\t\tdelete raw_;\n>  \t}\n>  \n>  \tint init(MediaDevice *media);\n>  \tvoid bufferReady(Buffer *buffer);\n>  \n> -\tV4L2VideoDevice *video_;\n>  \tCameraSensor *sensor_;\n> +\tV4L2Subdevice *debayer_;\n> +\tV4L2Subdevice *scaler_;\n> +\tV4L2VideoDevice *video_;\n> 
+\tV4L2VideoDevice *raw_;\n>  \tStream stream_;\n>  };\n>  \n> @@ -80,6 +90,8 @@ public:\n>  \n>  \tint queueRequest(Camera *camera, Request *request) override;\n>  \n> +\tint initLinks(MediaDevice *media);\n> +\n>  \tbool match(DeviceEnumerator *enumerator) override;\n>  \n>  private:\n> @@ -131,8 +143,11 @@ CameraConfiguration::Status VimcCameraConfiguration::validate()\n>  \t/* Clamp the size based on the device limits. */\n>  \tconst Size size = cfg.size;\n>  \n> -\tcfg.size.width = std::max(16U, std::min(4096U, cfg.size.width));\n> -\tcfg.size.height = std::max(16U, std::min(2160U, cfg.size.height));\n> +\t/* Dimensions need to be a multiple of 3 for the scaler to work. */\n\nThat's not entirely accurate. It's not that the scaler won't work, but\nthat the scaler hardcodes a x3 scale up factor. I would write this as\n\n/* The scaler hardcodes a x3 scale-up ratio. */\n\n> +\tcfg.size.width = std::max(18U, std::min(4096U, cfg.size.width));\n> +\tcfg.size.height = std::max(18U, std::min(2160U, cfg.size.height));\n\n18 isn't correct, as the sensor's minimum size is 16x16. The output\nminimum size is thus 48x48.\n\n> +\tcfg.size.width -= cfg.size.width % 3;\n> +\tcfg.size.height -= cfg.size.height % 3;\n>  \n>  \tif (cfg.size != size) {\n>  \t\tLOG(VIMC, Debug)\n> @@ -160,7 +175,7 @@ CameraConfiguration *PipelineHandlerVimc::generateConfiguration(Camera *camera,\n>  \n>  \tStreamConfiguration cfg{};\n>  \tcfg.pixelFormat = V4L2_PIX_FMT_RGB24;\n> -\tcfg.size = { 640, 480 };\n> +\tcfg.size = { 1920, 1080 };\n>  \tcfg.bufferCount = 4;\n>  \n>  \tconfig->addConfiguration(cfg);\n> @@ -176,6 +191,33 @@ int PipelineHandlerVimc::configure(Camera *camera, CameraConfiguration *config)\n>  \tStreamConfiguration &cfg = config->at(0);\n>  \tint ret;\n>  \n> +\t/* The scaler hardcodes a x3 scale-up ratio. 
*/\n> +\tV4L2SubdeviceFormat subformat = {};\n> +\tsubformat.mbus_code = MEDIA_BUS_FMT_SGRBG8_1X8;\n> +\tsubformat.size = { cfg.size.width / 3, cfg.size.height / 3 };\n> +\n> +\tret = data->sensor_->setFormat(&subformat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tret = data->debayer_->setFormat(0, &subformat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tsubformat.mbus_code = MEDIA_BUS_FMT_RGB888_1X24;\n> +\tret = data->debayer_->setFormat(1, &subformat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tret = data->scaler_->setFormat(0, &subformat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tsubformat.size = cfg.size;\n> +\tret = data->scaler_->setFormat(1, &subformat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n>  \tV4L2DeviceFormat format = {};\n>  \tformat.fourcc = cfg.pixelFormat;\n>  \tformat.size = cfg.size;\n> @@ -188,6 +230,17 @@ int PipelineHandlerVimc::configure(Camera *camera, CameraConfiguration *config)\n>  \t    format.fourcc != cfg.pixelFormat)\n>  \t\treturn -EINVAL;\n>  \n> +\t/*\n> +\t * Format has to be set on the raw capture video node, otherwise the\n> +\t * vimc driver will fail pipeline validation.\n> +\t */\n> +\tformat.fourcc = V4L2_PIX_FMT_SGRBG8;\n> +\tformat.size = { cfg.size.width / 3, cfg.size.height / 3 };\n> +\n> +\tret = data->raw_->setFormat(&format);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n>  \tcfg.setStream(&data->stream_);\n>  \n>  \treturn 0;\n> @@ -292,6 +345,26 @@ int PipelineHandlerVimc::queueRequest(Camera *camera, Request *request)\n>  \treturn 0;\n>  }\n>  \n> +int PipelineHandlerVimc::initLinks(MediaDevice *media)\n> +{\n> +\tMediaLink *link;\n> +\tint ret;\n> +\n> +\tret = media->disableLinks();\n> +\tif (ret < 0)\n> +\t\treturn ret;\n> +\n> +\tlink = media->link(\"Debayer B\", 1, \"Scaler\", 0);\n\nYou can declare the variable here instead of at the top of the function.\n\n> +\tif (!link)\n> +\t\treturn -ENODEV;\n> +\n> +\tret = link->setEnabled(true);\n> +\tif (ret < 0)\n> +\t\treturn ret;\n> +\n> +\treturn 0;\n> 
+}\n> +\n>  bool PipelineHandlerVimc::match(DeviceEnumerator *enumerator)\n>  {\n>  \tDeviceMatch dm(\"vimc\");\n> @@ -318,6 +391,9 @@ bool PipelineHandlerVimc::match(DeviceEnumerator *enumerator)\n>  \n>  \tstd::unique_ptr<VimcCameraData> data = utils::make_unique<VimcCameraData>(this);\n>  \n> +\tif (initLinks(media))\n> +\t\treturn false;\n> +\n\nI would move this to the VimcCameraData class to group all\ninitialisation together. As the amount of code is small you can inline\ninitLinks() in init(), or keep it separate, up to you.\n\nWith all these fixed,\n\nReviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n\n>  \t/* Locate and open the capture video node. */\n>  \tif (data->init(media))\n>  \t\treturn false;\n> @@ -335,18 +411,30 @@ int VimcCameraData::init(MediaDevice *media)\n>  {\n>  \tint ret;\n>  \n> -\t/* Create and open the video device and the camera sensor. */\n> -\tvideo_ = new V4L2VideoDevice(media->getEntityByName(\"Raw Capture 1\"));\n> -\tif (video_->open())\n> -\t\treturn -ENODEV;\n> -\n> -\tvideo_->bufferReady.connect(this, &VimcCameraData::bufferReady);\n> -\n> +\t/* Create and open the camera sensor, debayer, scaler and video device. */\n>  \tsensor_ = new CameraSensor(media->getEntityByName(\"Sensor B\"));\n>  \tret = sensor_->init();\n>  \tif (ret)\n>  \t\treturn ret;\n>  \n> +\tdebayer_ = new V4L2Subdevice(media->getEntityByName(\"Debayer B\"));\n> +\tif (debayer_->open())\n> +\t\treturn -ENODEV;\n> +\n> +\tscaler_ = new V4L2Subdevice(media->getEntityByName(\"Scaler\"));\n> +\tif (scaler_->open())\n> +\t\treturn -ENODEV;\n> +\n> +\tvideo_ = new V4L2VideoDevice(media->getEntityByName(\"RGB/YUV Capture\"));\n> +\tif (video_->open())\n> +\t\treturn -ENODEV;\n> +\n> +\tvideo_->bufferReady.connect(this, &VimcCameraData::bufferReady);\n> +\n> +\traw_ = new V4L2VideoDevice(media->getEntityByName(\"Raw Capture 1\"));\n> +\tif (raw_->open())\n> +\t\treturn -ENODEV;\n> +\n>  \t/* Initialise the supported controls. 
*/\n>  \tconst V4L2ControlInfoMap &controls = sensor_->controls();\n>  \tfor (const auto &ctrl : controls) {\n> diff --git a/test/camera/buffer_import.cpp b/test/camera/buffer_import.cpp\n> index 400d02b350c1aa8f..620ee66dee0570bc 100644\n> --- a/test/camera/buffer_import.cpp\n> +++ b/test/camera/buffer_import.cpp\n> @@ -72,12 +72,12 @@ public:\n>  \t\t\treturn ret;\n>  \t\t}\n>  \n> -\t\tformat_.size.width = 640;\n> -\t\tformat_.size.height = 480;\n> +\t\tformat_.size.width = 1920;\n> +\t\tformat_.size.height = 1080;\n>  \t\tformat_.fourcc = V4L2_PIX_FMT_RGB24;\n>  \t\tformat_.planesCount = 1;\n> -\t\tformat_.planes[0].size = 640 * 480 * 3;\n> -\t\tformat_.planes[0].bpl = 640 * 3;\n> +\t\tformat_.planes[0].size = 1920 * 1080 * 3;\n> +\t\tformat_.planes[0].bpl = 1920 * 3;\n>  \n>  \t\tif (video_->setFormat(&format_)) {\n>  \t\t\tcleanup();","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 1541760E2F\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu,  8 Aug 2019 09:47:05 +0200 (CEST)","from pendragon.ideasonboard.com\n\t(dfj612yhrgyx302h3jwwy-3.rev.dnainternet.fi\n\t[IPv6:2001:14ba:21f5:5b00:ce28:277f:58d7:3ca4])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 74225CC;\n\tThu,  8 Aug 2019 09:47:04 +0200 (CEST)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1565250424;\n\tbh=IJjhl0th9mJPgdcN3x8HRhGtRFXVXT1G8fiz7Pw1Fa8=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=c5nh7MKRzpGOYDIQDTmU1ym+KHhCaGLf14p3chYjmV9ZWP6KIBUZLhZkvYKvO+AXI\n\tRIFbx5XLYBtQceo1zr8jH0pESgx9oULO2V5gSTH9MHEL4M7k2UBufIhfJcd5sNMPGX\n\tx6pA4GrKRLLTVtc8oIXJU/2N/BGgE4DVp1ALsowI=","Date":"Thu, 8 Aug 2019 10:47:01 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Niklas =?utf-8?q?S=C3=B6derlund?= 
<niklas.soderlund@ragnatech.se>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20190808074701.GA6055@pendragon.ideasonboard.com>","References":"<20190808000404.10841-1-niklas.soderlund@ragnatech.se>\n\t<20190808000404.10841-3-niklas.soderlund@ragnatech.se>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<20190808000404.10841-3-niklas.soderlund@ragnatech.se>","User-Agent":"Mutt/1.10.1 (2018-07-13)","Subject":"Re: [libcamera-devel] [PATCH v3 2/4] libcamera: pipeline: vimc:\n\tSwitch to using the RGB/YUV Capture video node","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Thu, 08 Aug 2019 07:47:05 -0000"}}]