[{"id":1007,"web_url":"https://patchwork.libcamera.org/comment/1007/","msgid":"<20190302224423.GK4682@pendragon.ideasonboard.com>","date":"2019-03-02T22:44:23","subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage format","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Jacopo,\n\nThank you for the patch.\n\nOn Thu, Feb 28, 2019 at 09:04:04PM +0100, Jacopo Mondi wrote:\n> Apply the requested image format to the sensor device, and apply the\n> adjusted one to the CIO2 device, the ImgU subdevice and its input and\n> output video devices.\n> \n> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> ---\n>  src/libcamera/pipeline/ipu3/ipu3.cpp | 257 +++++++++++++++++++++++----\n>  1 file changed, 224 insertions(+), 33 deletions(-)\n> \n> diff --git a/src/libcamera/pipeline/ipu3/ipu3.cpp b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> index 9fa59c1bc97e..1e89e57f628b 100644\n> --- a/src/libcamera/pipeline/ipu3/ipu3.cpp\n> +++ b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> @@ -8,11 +8,14 @@\n>  #include <memory>\n>  #include <vector>\n>  \n> +#include <linux/media-bus-format.h>\n> +\n>  #include <libcamera/camera.h>\n>  #include <libcamera/request.h>\n>  #include <libcamera/stream.h>\n>  \n>  #include \"device_enumerator.h\"\n> +#include \"geometry.h\"\n>  #include \"log.h\"\n>  #include \"media_device.h\"\n>  #include \"pipeline_handler.h\"\n> @@ -106,6 +109,29 @@ private:\n>  \t\t\tPipelineHandler::cameraData(camera));\n>  \t}\n>  \n> +\tvoid printDevFormat(const V4L2Device *dev, const V4L2DeviceFormat &fmt)\n> +\t{\n> +\t\tLOG(IPU3, Info)\n\nI think this is way too verbose in general.\n\n> +\t\t\t<< dev->deviceNode() << \": \" << fmt.width << \"x\"\n> +\t\t\t<< fmt.height << \"- 0x\" << std::hex << fmt.fourcc\n> +\t\t\t<< \" planes: \" << fmt.planesCount;\n> +\t}\n> +\n> +\tvoid printSubdevFormat(const V4L2Subdevice *dev, unsigned int pad,\n> 
+\t\t\t       const V4L2SubdeviceFormat &fmt)\n> +\t{\n> +\t\tLOG(IPU3, Info)\n> +\t\t\t<< \"'\" << dev->deviceName() << \"':\" << pad << \" = \"\n> +\t\t\t<< fmt.width << \"x\" << fmt.height << \" - 0x\"\n> +\t\t\t<< std::hex << fmt.mbus_code;\n> +\t}\n\nHow about adding a std::string toString() const method to the\nV4L2DeviceFormat and V4L2SubdeviceFormat classes ? print methods in the\nPipelineHandlerIPU3 class aren't very nice :-/ (especially with the\nhardcoded log level)\n\n> +\n> +\tint setImguFormat(V4L2Subdevice *imgu,\n> +\t\t\t  const StreamConfiguration &config,\n> +\t\t\t  Rectangle *rect);\n> +\tint setSensorFormat(V4L2Subdevice *sensor,\n> +\t\t\t    const StreamConfiguration &config,\n> +\t\t\t    V4L2SubdeviceFormat *format);\n>  \tint linkImgu(ImguDevice *imgu);\n>  \n>  \tV4L2Device *openDevice(MediaDevice *media, std::string &name);\n> @@ -186,14 +212,29 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n>  \t\t\t\t\t  std::map<Stream *, StreamConfiguration> &config)\n>  {\n>  \tIPU3CameraData *data = cameraData(camera);\n> -\tStreamConfiguration *cfg = &config[&data->stream_];\n> -\tV4L2Subdevice *sensor = data->cio2.sensor;\n> +\tconst StreamConfiguration &cfg = config[&data->stream_];\n>  \tV4L2Subdevice *csi2 = data->cio2.csi2;\n>  \tV4L2Device *cio2 = data->cio2.output;\n> -\tV4L2SubdeviceFormat subdevFormat = {};\n> -\tV4L2DeviceFormat devFormat = {};\n>  \tint ret;\n>  \n> +\t/*\n> +\t * Verify that the requested size respects the IPU3 alignement\n> +\t * requirements: image size shall be 8-aligned in width and 4-aligned\n> +\t * in height.\n\nMaybe \"the image width shall be a multiple of 8 pixels and its height a\nmultiple of 4 pixels\" to clarify we're talking about pixels, not bytes ?\n\n> +\t *\n> +\t * TODO: consider the BDS scaling factor requirements: \"the downscaling\n> +\t * factor must be an integer value multiple of 1/32\"\n> +\t */\n> +\tif (cfg.width % 8 || cfg.height % 4) {\n> +\t\tLOG(IPU3, Error) << \"Stream 
format not support: bad alignement\";\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tLOG(IPU3, Info)\n> +\t\t<< \"Camera :'\" << camera->name() << \" - Stream format: \"\n> +\t\t<< cfg.width << \"x\" << cfg.height << \" - \"\n> +\t\t<< std::hex << cfg.pixelFormat;\n\nLogging is important, but this is all way too verbose. Please consider\nfor every log message if it's needed, and if it is, what the appropriate\nlog level should be. The messages should also be more informative. This\nmessage, for instance, should explicitly state that it prints the\nrequested format, and also mention which stream it applies to. You\nshould also move it before the alignment check.\n\n> +\n>  \t/*\n>  \t * TODO: dynamically assign ImgU devices; as of now, with a single\n>  \t * stream supported, always use 'imgu0'.\n> @@ -203,52 +244,73 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n>  \tif (ret)\n>  \t\treturn ret;\n>  \n> +\tV4L2Device *viewfinder = data->imgu->viewfinder;\n> +\tV4L2Subdevice *sensor = data->cio2.sensor;\n> +\tV4L2Device *output = data->imgu->output;\n> +\tV4L2Subdevice *imgu = data->imgu->imgu;\n> +\tV4L2Device *input = data->imgu->input;\n> +\n>  \t/*\n> -\t * FIXME: as of now, the format gets applied to the sensor and is\n> -\t * propagated along the pipeline. It should instead be applied on the\n> -\t * capture device and the sensor format calculated accordingly.\n> +\t * Pass the requested output image size to the sensor and get back the\n> +\t * adjusted one to be propagated to the CIO2 device and to the ImgU\n> +\t * input.\n>  \t */\n\nIf the sensor driver returns the closest match, you may get a size\nsmaller than requested, which would require up-scaling. I'm not sure if\nthe IPU3 supports that, but in any case it may not be a very good idea.\nI think you should instead iterate over the sizes supported by the\nsensor and pick the best one yourself. 
We may also want to use cropping\nat some point, but that can be left for later.\n\nAnd now that I've written this, I realized that your setSensorFormat()\nmethod has such an iteration. Could you update the above comment\naccordingly, and possibly rename setSensorFormat() to a more descriptive\nname ?\n\n> +\tV4L2SubdeviceFormat sensorFormat = {};\n> +\tret = setSensorFormat(sensor, cfg, &sensorFormat);\n> +\tif (ret) {\n> +\t\tLOG(IPU3, Error) << \"Stream format not supported: \";\n\nLooks like you're missing part of the message. I would also move it to\nthe setSensorFormat() function in order to print the exact reason of the\nerror.\n\n> +\t\treturn ret;\n> +\t}\n> +\tprintSubdevFormat(sensor, 0, sensorFormat);\n>  \n> -\tret = sensor->getFormat(0, &subdevFormat);\n> +\tret = csi2->setFormat(0, &sensorFormat);\n>  \tif (ret)\n>  \t\treturn ret;\n> -\n> -\tsubdevFormat.width = cfg->width;\n> -\tsubdevFormat.height = cfg->height;\n> -\tret = sensor->setFormat(0, &subdevFormat);\n> +\tprintSubdevFormat(csi2, 0, sensorFormat);\n> +\n> +\t/* Apply the CIO2 image format to the CIO2 output and ImgU input. */\n> +\tV4L2DeviceFormat cio2Format = {};\n> +\tcio2Format.width = sensorFormat.width;\n> +\tcio2Format.height = sensorFormat.height;\n> +\tcio2Format.fourcc = V4L2_PIX_FMT_IPU3_SGRBG10;\n\nLet's not hardcode a particular bayer pattern, please pick the\nappropriate one based on what the sensor provides.\n\n> +\tcio2Format.planesCount = 1;\n> +\tret = cio2->setFormat(&cio2Format);\n>  \tif (ret)\n>  \t\treturn ret;\n> +\tprintDevFormat(cio2, cio2Format);\n>  \n> -\t/* Return error if the requested format cannot be applied to sensor. 
*/\n> -\tif (subdevFormat.width != cfg->width ||\n> -\t    subdevFormat.height != cfg->height) {\n> -\t\tLOG(IPU3, Error)\n> -\t\t\t<< \"Failed to apply image format \"\n> -\t\t\t<< subdevFormat.width << \"x\" << subdevFormat.height\n> -\t\t\t<< \" - got: \" << cfg->width << \"x\" << cfg->height;\n> -\t\treturn -EINVAL;\n> -\t}\n> -\n> -\tret = csi2->setFormat(0, &subdevFormat);\n> +\tret = input->setFormat(&cio2Format);\n>  \tif (ret)\n>  \t\treturn ret;\n> -\n> -\tret = cio2->getFormat(&devFormat);\n> +\tprintDevFormat(input, cio2Format);\n> +\n> +\t/* Apply pad formats and crop/compose rectangle to the ImgU. */\n> +\tRectangle rect = {\n> +\t\t.x = 0,\n> +\t\t.y = 0,\n> +\t\t.w = cio2Format.width,\n> +\t\t.h = cio2Format.height,\n> +\t};\n> +\tret = setImguFormat(imgu, cfg, &rect);\n>  \tif (ret)\n>  \t\treturn ret;\n>  \n> -\tdevFormat.width = subdevFormat.width;\n> -\tdevFormat.height = subdevFormat.height;\n> -\tdevFormat.fourcc = cfg->pixelFormat;\n> +\t/* Apply the format to the ImgU output and viewfinder devices. 
*/\n\nAs commented on the previous patch, with a single stream supported, do\nyou really need both outputs ?\n\n> +\tV4L2DeviceFormat outputFormat = {};\n> +\toutputFormat.width = cfg.width;\n> +\toutputFormat.height = cfg.height;\n> +\toutputFormat.fourcc = V4L2_PIX_FMT_NV12;\n> +\toutputFormat.planesCount = 2;\n>  \n> -\tret = cio2->setFormat(&devFormat);\n> +\tret = output->setFormat(&outputFormat);\n>  \tif (ret)\n>  \t\treturn ret;\n> +\tprintDevFormat(output, outputFormat);\n>  \n> -\tLOG(IPU3, Info) << cio2->driverName() << \": \"\n> -\t\t\t<< devFormat.width << \"x\" << devFormat.height\n> -\t\t\t<< \"- 0x\" << std::hex << devFormat.fourcc << \" planes: \"\n> -\t\t\t<< devFormat.planes;\n> +\tret = viewfinder->setFormat(&outputFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tprintDevFormat(viewfinder, outputFormat);\n>  \n>  \treturn 0;\n>  }\n> @@ -409,6 +471,135 @@ error_release_mdev:\n>  \treturn false;\n>  }\n>  \n> +int PipelineHandlerIPU3::setImguFormat(V4L2Subdevice *imgu,\n> +\t\t\t\t       const StreamConfiguration &config,\n> +\t\t\t\t       Rectangle *rect)\n> +{\n> +\tint ret;\n> +\n> +\t/*\n> +\t * Configure the 'imgu' subdevice with the requested sizes.\n> +\t *\n> +\t * FIXME: the IPU3 driver implementation shall be changed to use the\n> +\t * actual input sizes as 'imgu input' subdevice sizes, and use the\n> +\t * desired output sizes to configure the crop/compose rectangles.  
The\n> +\t * current implementation uses output sizes as 'imgu input' sizes, and\n> +\t * uses the input dimension to configure the crop/compose rectangles,\n> +\t * which contradicts the V4L2 specifications.\n> +\t */\n> +\tret = imgu->setCrop(IMGU_PAD_INPUT, rect);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tLOG(IPU3, Info)\n> +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> +\t\t<< \" = crop: (0,0)/\" << rect->w << \"x\" << rect->h;\n> +\n> +\tret = imgu->setCompose(IMGU_PAD_INPUT, rect);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tLOG(IPU3, Info)\n> +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> +\t\t<< \" = compose: (0,0)/\" << rect->w << \"x\" << rect->h;\n> +\n> +\n> +\tV4L2SubdeviceFormat imguFormat = {};\n> +\timguFormat.width = config.width;\n> +\timguFormat.height = config.height;\n> +\timguFormat.mbus_code = MEDIA_BUS_FMT_FIXED;\n> +\n> +\tret = imgu->setFormat(IMGU_PAD_INPUT, &imguFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tprintSubdevFormat(imgu, IMGU_PAD_INPUT, imguFormat);\n> +\n> +\tret = imgu->setFormat(IMGU_PAD_OUTPUT, &imguFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tprintSubdevFormat(imgu, IMGU_PAD_OUTPUT, imguFormat);\n> +\n> +\tret = imgu->setFormat(IMGU_PAD_VF, &imguFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tprintSubdevFormat(imgu, IMGU_PAD_VF, imguFormat);\n> +\n> +\tret = imgu->setFormat(IMGU_PAD_STAT, &imguFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\tprintSubdevFormat(imgu, IMGU_PAD_STAT, imguFormat);\n> +\n> +\treturn 0;\n> +}\n> +\n> +int PipelineHandlerIPU3::setSensorFormat(V4L2Subdevice *sensor,\n> +\t\t\t\t\t const StreamConfiguration &config,\n> +\t\t\t\t\t V4L2SubdeviceFormat *format)\n> +{\n> +\tstd::map<unsigned int, std::vector<SizeRange>> formats;\n> +\tunsigned int best = ~0;\n\nPlease use -1 instead of ~0.\n\n> +\tbool found = false;\n> +\tint ret;\n> +\n> +\tformats = sensor->formats(0);\n> +\tif (formats.empty()) {\n> +\t\t/*\n> +\t\t * If the format list is 
empty, try with the currently\n> +\t\t * applied one.\n> +\t\t */\n\nThis shouldn't happen.\n\n> +\t\tret = sensor->getFormat(0, format);\n> +\t\tif (ret)\n> +\t\t\treturn ret;\n> +\n> +\t\tif (format->width < config.width ||\n> +\t\t    format->height < config.height)\n> +\t\t\treturn -EINVAL;\n> +\n> +\t\treturn 0;\n> +\t}\n> +\n> +\t/* Search for the best approximation the sensor can provide. */\n> +\tauto it = formats.begin();\n> +\twhile (it != formats.end()) {\n\nfor loop.\n\n> +\t\tfor (SizeRange &size : it->second) {\n> +\t\t\tif (size.maxWidth < config.width ||\n> +\t\t\t    size.maxHeight < config.height)\n> +\t\t\t\tcontinue;\n> +\n> +\t\t\tunsigned int diff =\n> +\t\t\t\t(abs(size.maxWidth - config.width) +\n> +\t\t\t\t abs(size.maxHeight - config.height));\n\nNo need for abs() as you check that maxWidth >= config.width above.\n\nShould you use the difference of area instead of the difference of\nlengths ?\n\n> +\t\t\tif (diff >= best)\n> +\t\t\t\tcontinue;\n> +\n> +\t\t\tbest = diff;\n> +\t\t\tfound = true;\n\nNo need for the found variable, you can just check best != -1.\n\n> +\n> +\t\t\tformat->width = size.maxWidth;\n> +\t\t\tformat->height = size.maxHeight;\n> +\t\t\tformat->mbus_code = it->first;\n> +\t\t}\n> +\n> +\t\t++it;\n> +\t}\n> +\tif (!found)\n> +\t\treturn -EINVAL;\n> +\n> +\tret = sensor->setFormat(0, format);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\t/*\n> +\t * Make sure everything is all right and the format did not get\n> +\t * adjusted.\n> +\t */\n> +\tif (format->width < config.width ||\n> +\t    format->height < config.height)\n> +\t\treturn -EINVAL;\n> +\n> +\treturn 0;\n> +}\n> +\n>  /* Link entities in the ImgU unit to prepare for capture operations. 
*/\n>  int PipelineHandlerIPU3::linkImgu(ImguDevice *imguDevice)\n>  {","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 154EE610BF\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat,  2 Mar 2019 23:44:30 +0100 (CET)","from pendragon.ideasonboard.com (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 705B254E;\n\tSat,  2 Mar 2019 23:44:29 +0100 (CET)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1551566669;\n\tbh=hhShLlKGQAoem9AKh36BPiZBFkVFWIurrjGjP/j+j7w=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=ipkhw2V8drlHYTEZbg7ZnRDbBZOyD8v95XH4DknfapdJa6XNXhIToMZYxyU638zjo\n\teY/ev61MTUVbyh/YIQl12xVUfBCVdecJkrB6oLE/AIadz7WXUuuW4KROB67J308dRJ\n\t+hauRuTrl4ZkGLzV5BJmDvRb5VZHchZR3vZB6Bkk=","Date":"Sun, 3 Mar 2019 00:44:23 +0200","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Jacopo Mondi <jacopo@jmondi.org>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20190302224423.GK4682@pendragon.ideasonboard.com>","References":"<20190228200410.3022-1-jacopo@jmondi.org>\n\t<20190228200410.3022-5-jacopo@jmondi.org>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20190228200410.3022-5-jacopo@jmondi.org>","User-Agent":"Mutt/1.10.1 (2018-07-13)","Subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage 
format","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Sat, 02 Mar 2019 22:44:30 -0000"}},{"id":1022,"web_url":"https://patchwork.libcamera.org/comment/1022/","msgid":"<20190309205816.obhovpo5qwy6kw2e@uno.localdomain>","date":"2019-03-09T20:58:16","subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage format","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"Hi Laurent,\n\nOn Sun, Mar 03, 2019 at 12:44:23AM +0200, Laurent Pinchart wrote:\n> Hi Jacopo,\n>\n> Thank you for the patch.\n>\n> On Thu, Feb 28, 2019 at 09:04:04PM +0100, Jacopo Mondi wrote:\n> > Apply the requested image format to the sensor device, and apply the\n> > adjusted one to the CIO2 device, the ImgU subdevice and its input and\n> > output video devices.\n> >\n> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> > ---\n> >  src/libcamera/pipeline/ipu3/ipu3.cpp | 257 +++++++++++++++++++++++----\n> >  1 file changed, 224 insertions(+), 33 deletions(-)\n> >\n> > diff --git a/src/libcamera/pipeline/ipu3/ipu3.cpp b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> > index 9fa59c1bc97e..1e89e57f628b 100644\n> > --- a/src/libcamera/pipeline/ipu3/ipu3.cpp\n> > +++ b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> > @@ -8,11 +8,14 @@\n> >  #include <memory>\n> >  #include 
<vector>\n> >\n> > +#include <linux/media-bus-format.h>\n> > +\n> >  #include <libcamera/camera.h>\n> >  #include <libcamera/request.h>\n> >  #include <libcamera/stream.h>\n> >\n> >  #include \"device_enumerator.h\"\n> > +#include \"geometry.h\"\n> >  #include \"log.h\"\n> >  #include \"media_device.h\"\n> >  #include \"pipeline_handler.h\"\n> > @@ -106,6 +109,29 @@ private:\n> >  \t\t\tPipelineHandler::cameraData(camera));\n> >  \t}\n> >\n> > +\tvoid printDevFormat(const V4L2Device *dev, const V4L2DeviceFormat &fmt)\n> > +\t{\n> > +\t\tLOG(IPU3, Info)\n>\n> I think this is way too verbose in general.\n>\n> > +\t\t\t<< dev->deviceNode() << \": \" << fmt.width << \"x\"\n> > +\t\t\t<< fmt.height << \"- 0x\" << std::hex << fmt.fourcc\n> > +\t\t\t<< \" planes: \" << fmt.planesCount;\n> > +\t}\n> > +\n> > +\tvoid printSubdevFormat(const V4L2Subdevice *dev, unsigned int pad,\n> > +\t\t\t       const V4L2SubdeviceFormat &fmt)\n> > +\t{\n> > +\t\tLOG(IPU3, Info)\n> > +\t\t\t<< \"'\" << dev->deviceName() << \"':\" << pad << \" = \"\n> > +\t\t\t<< fmt.width << \"x\" << fmt.height << \" - 0x\"\n> > +\t\t\t<< std::hex << fmt.mbus_code;\n> > +\t}\n>\n> How about adding a std::string toString() const method to the\n> V4L2DeviceFormat and V4L2SubdeviceFormat classes ? 
print methods in the\n> PipelineHandlerIPU3 class aren't very nice :-/ (especially with the\n> hardcoded log level)\n>\n\nMore code in the framework, less in pipeline handlers: it's a good idea!\n\n> > +\n> > +\tint setImguFormat(V4L2Subdevice *imgu,\n> > +\t\t\t  const StreamConfiguration &config,\n> > +\t\t\t  Rectangle *rect);\n> > +\tint setSensorFormat(V4L2Subdevice *sensor,\n> > +\t\t\t    const StreamConfiguration &config,\n> > +\t\t\t    V4L2SubdeviceFormat *format);\n> >  \tint linkImgu(ImguDevice *imgu);\n> >\n> >  \tV4L2Device *openDevice(MediaDevice *media, std::string &name);\n> > @@ -186,14 +212,29 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n> >  \t\t\t\t\t  std::map<Stream *, StreamConfiguration> &config)\n> >  {\n> >  \tIPU3CameraData *data = cameraData(camera);\n> > -\tStreamConfiguration *cfg = &config[&data->stream_];\n> > -\tV4L2Subdevice *sensor = data->cio2.sensor;\n> > +\tconst StreamConfiguration &cfg = config[&data->stream_];\n> >  \tV4L2Subdevice *csi2 = data->cio2.csi2;\n> >  \tV4L2Device *cio2 = data->cio2.output;\n> > -\tV4L2SubdeviceFormat subdevFormat = {};\n> > -\tV4L2DeviceFormat devFormat = {};\n> >  \tint ret;\n> >\n> > +\t/*\n> > +\t * Verify that the requested size respects the IPU3 alignement\n> > +\t * requirements: image size shall be 8-aligned in width and 4-aligned\n> > +\t * in height.\n>\n> Maybe \"the image width shall be a multiple of 8 pixels and its height a\n> multiple of 4 pixels\" to clarify we're talking about pixels, not bytes ?\n>\n\nAck\n\n> > +\t *\n> > +\t * TODO: consider the BDS scaling factor requirements: \"the downscaling\n> > +\t * factor must be an integer value multiple of 1/32\"\n> > +\t */\n> > +\tif (cfg.width % 8 || cfg.height % 4) {\n> > +\t\tLOG(IPU3, Error) << \"Stream format not support: bad alignement\";\n> > +\t\treturn -EINVAL;\n> > +\t}\n> > +\n> > +\tLOG(IPU3, Info)\n> > +\t\t<< \"Camera :'\" << camera->name() << \" - Stream format: \"\n> > +\t\t<< cfg.width << \"x\" << 
cfg.height << \" - \"\n> > +\t\t<< std::hex << cfg.pixelFormat;\n>\n> Logging is important, but this is all way too verbose. Please consider\n> for every log message if it's needed, and if it is, what the appropriate\n> log level should be. The messages should also be more informative. This\n> message, for instance, should explicitly state that it prints the\n> requested format, and also mention which stream it applies to. You\n> should also move it before the alignment check.\n\nAs we have clarified offline I'll make this Debug and will make this\nand the others more user-consumable.\n>\n> > +\n> >  \t/*\n> >  \t * TODO: dynamically assign ImgU devices; as of now, with a single\n> >  \t * stream supported, always use 'imgu0'.\n> > @@ -203,52 +244,73 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n> >  \tif (ret)\n> >  \t\treturn ret;\n> >\n> > +\tV4L2Device *viewfinder = data->imgu->viewfinder;\n> > +\tV4L2Subdevice *sensor = data->cio2.sensor;\n> > +\tV4L2Device *output = data->imgu->output;\n> > +\tV4L2Subdevice *imgu = data->imgu->imgu;\n> > +\tV4L2Device *input = data->imgu->input;\n> > +\n> >  \t/*\n> > -\t * FIXME: as of now, the format gets applied to the sensor and is\n> > -\t * propagated along the pipeline. It should instead be applied on the\n> > -\t * capture device and the sensor format calculated accordingly.\n> > +\t * Pass the requested output image size to the sensor and get back the\n> > +\t * adjusted one to be propagated to the CIO2 device and to the ImgU\n> > +\t * input.\n> >  \t */\n>\n> If the sensor driver returns the closest match, you may get a size\n> smaller than requested, which would require up-scaling. I'm not sure if\n> the IPU3 supports that, but in any case it may not be a very good idea.\n> I think you should instead iterate over the sizes supported by the\n> sensor and pick the best one yourself. 
We may also want to use cropping\n> at some point, but that can be left for later.\n>\n> And now that I've written this, I realized that your setSensorFormat()\n> method has such an iteration. Could you update the above comment\n> accordingly, and possibly rename setSensorFormat() to a more descriptive\n> name ?\n\nRe-reading the comment, I got why you got mis-lead... I'll change\nthis. Suggestions welcome for the method name.\n\n>\n> > +\tV4L2SubdeviceFormat sensorFormat = {};\n> > +\tret = setSensorFormat(sensor, cfg, &sensorFormat);\n> > +\tif (ret) {\n> > +\t\tLOG(IPU3, Error) << \"Stream format not supported: \";\n>\n> Looks like you're missing part of the message. I would also move it to\n> the setSensorFormat() function in order to print the exact reason of the\n> error.\n>\n> > +\t\treturn ret;\n> > +\t}\n> > +\tprintSubdevFormat(sensor, 0, sensorFormat);\n> >\n> > -\tret = sensor->getFormat(0, &subdevFormat);\n> > +\tret = csi2->setFormat(0, &sensorFormat);\n> >  \tif (ret)\n> >  \t\treturn ret;\n> > -\n> > -\tsubdevFormat.width = cfg->width;\n> > -\tsubdevFormat.height = cfg->height;\n> > -\tret = sensor->setFormat(0, &subdevFormat);\n> > +\tprintSubdevFormat(csi2, 0, sensorFormat);\n> > +\n> > +\t/* Apply the CIO2 image format to the CIO2 output and ImgU input. */\n> > +\tV4L2DeviceFormat cio2Format = {};\n> > +\tcio2Format.width = sensorFormat.width;\n> > +\tcio2Format.height = sensorFormat.height;\n> > +\tcio2Format.fourcc = V4L2_PIX_FMT_IPU3_SGRBG10;\n>\n> Let's not hardcode a particular bayer pattern, please pick the\n> appropriate one based on what the sensor provides.\n\nAh! 
Isn't the output of the CIO2 device IPU3 specific and fixed?\nThat's not what the sensor provides, but the CIO2 produced one\n(for which there are a few versions, which are different permutations\nof the RGB bayer components, but I don't think that' too relevant)\n\n>\n> > +\tcio2Format.planesCount = 1;\n> > +\tret = cio2->setFormat(&cio2Format);\n> >  \tif (ret)\n> >  \t\treturn ret;\n> > +\tprintDevFormat(cio2, cio2Format);\n> >\n> > -\t/* Return error if the requested format cannot be applied to sensor. */\n> > -\tif (subdevFormat.width != cfg->width ||\n> > -\t    subdevFormat.height != cfg->height) {\n> > -\t\tLOG(IPU3, Error)\n> > -\t\t\t<< \"Failed to apply image format \"\n> > -\t\t\t<< subdevFormat.width << \"x\" << subdevFormat.height\n> > -\t\t\t<< \" - got: \" << cfg->width << \"x\" << cfg->height;\n> > -\t\treturn -EINVAL;\n> > -\t}\n> > -\n> > -\tret = csi2->setFormat(0, &subdevFormat);\n> > +\tret = input->setFormat(&cio2Format);\n> >  \tif (ret)\n> >  \t\treturn ret;\n> > -\n> > -\tret = cio2->getFormat(&devFormat);\n> > +\tprintDevFormat(input, cio2Format);\n> > +\n> > +\t/* Apply pad formats and crop/compose rectangle to the ImgU. */\n> > +\tRectangle rect = {\n> > +\t\t.x = 0,\n> > +\t\t.y = 0,\n> > +\t\t.w = cio2Format.width,\n> > +\t\t.h = cio2Format.height,\n> > +\t};\n> > +\tret = setImguFormat(imgu, cfg, &rect);\n> >  \tif (ret)\n> >  \t\treturn ret;\n> >\n> > -\tdevFormat.width = subdevFormat.width;\n> > -\tdevFormat.height = subdevFormat.height;\n> > -\tdevFormat.fourcc = cfg->pixelFormat;\n> > +\t/* Apply the format to the ImgU output and viewfinder devices. 
*/\n>\n> As commented on the previous patch, with a single stream supported, do\n> you really need both outputs ?\n>\n\nSee my comments on the previous patch.\n\n> > +\tV4L2DeviceFormat outputFormat = {};\n> > +\toutputFormat.width = cfg.width;\n> > +\toutputFormat.height = cfg.height;\n> > +\toutputFormat.fourcc = V4L2_PIX_FMT_NV12;\n> > +\toutputFormat.planesCount = 2;\n> >\n> > -\tret = cio2->setFormat(&devFormat);\n> > +\tret = output->setFormat(&outputFormat);\n> >  \tif (ret)\n> >  \t\treturn ret;\n> > +\tprintDevFormat(output, outputFormat);\n> >\n> > -\tLOG(IPU3, Info) << cio2->driverName() << \": \"\n> > -\t\t\t<< devFormat.width << \"x\" << devFormat.height\n> > -\t\t\t<< \"- 0x\" << std::hex << devFormat.fourcc << \" planes: \"\n> > -\t\t\t<< devFormat.planes;\n> > +\tret = viewfinder->setFormat(&outputFormat);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tprintDevFormat(viewfinder, outputFormat);\n> >\n> >  \treturn 0;\n> >  }\n> > @@ -409,6 +471,135 @@ error_release_mdev:\n> >  \treturn false;\n> >  }\n> >\n> > +int PipelineHandlerIPU3::setImguFormat(V4L2Subdevice *imgu,\n> > +\t\t\t\t       const StreamConfiguration &config,\n> > +\t\t\t\t       Rectangle *rect)\n> > +{\n> > +\tint ret;\n> > +\n> > +\t/*\n> > +\t * Configure the 'imgu' subdevice with the requested sizes.\n> > +\t *\n> > +\t * FIXME: the IPU3 driver implementation shall be changed to use the\n> > +\t * actual input sizes as 'imgu input' subdevice sizes, and use the\n> > +\t * desired output sizes to configure the crop/compose rectangles.  
The\n> > +\t * current implementation uses output sizes as 'imgu input' sizes, and\n> > +\t * uses the input dimension to configure the crop/compose rectangles,\n> > +\t * which contradicts the V4L2 specifications.\n> > +\t */\n> > +\tret = imgu->setCrop(IMGU_PAD_INPUT, rect);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\tLOG(IPU3, Info)\n> > +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> > +\t\t<< \" = crop: (0,0)/\" << rect->w << \"x\" << rect->h;\n> > +\n> > +\tret = imgu->setCompose(IMGU_PAD_INPUT, rect);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\tLOG(IPU3, Info)\n> > +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> > +\t\t<< \" = compose: (0,0)/\" << rect->w << \"x\" << rect->h;\n> > +\n> > +\n> > +\tV4L2SubdeviceFormat imguFormat = {};\n> > +\timguFormat.width = config.width;\n> > +\timguFormat.height = config.height;\n> > +\timguFormat.mbus_code = MEDIA_BUS_FMT_FIXED;\n> > +\n> > +\tret = imgu->setFormat(IMGU_PAD_INPUT, &imguFormat);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tprintSubdevFormat(imgu, IMGU_PAD_INPUT, imguFormat);\n> > +\n> > +\tret = imgu->setFormat(IMGU_PAD_OUTPUT, &imguFormat);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tprintSubdevFormat(imgu, IMGU_PAD_OUTPUT, imguFormat);\n> > +\n> > +\tret = imgu->setFormat(IMGU_PAD_VF, &imguFormat);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tprintSubdevFormat(imgu, IMGU_PAD_VF, imguFormat);\n> > +\n> > +\tret = imgu->setFormat(IMGU_PAD_STAT, &imguFormat);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\tprintSubdevFormat(imgu, IMGU_PAD_STAT, imguFormat);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +int PipelineHandlerIPU3::setSensorFormat(V4L2Subdevice *sensor,\n> > +\t\t\t\t\t const StreamConfiguration &config,\n> > +\t\t\t\t\t V4L2SubdeviceFormat *format)\n> > +{\n> > +\tstd::map<unsigned int, std::vector<SizeRange>> formats;\n> > +\tunsigned int best = ~0;\n>\n> Please use -1 instead of ~0.\n>\n\nAck. 
Personal tastes or some reason I'm missing?\n\n> > +\tbool found = false;\n> > +\tint ret;\n> > +\n> > +\tformats = sensor->formats(0);\n> > +\tif (formats.empty()) {\n> > +\t\t/*\n> > +\t\t * If the format list is empty, try with the currently\n> > +\t\t * applied one.\n> > +\t\t */\n>\n> This shouldn't happen.\n>\n\nAgreed. Let's always assume sensor drivers are well written (creepy\nlaughs in background..)\n\n> > +\t\tret = sensor->getFormat(0, format);\n> > +\t\tif (ret)\n> > +\t\t\treturn ret;\n> > +\n> > +\t\tif (format->width < config.width ||\n> > +\t\t    format->height < config.height)\n> > +\t\t\treturn -EINVAL;\n> > +\n> > +\t\treturn 0;\n> > +\t}\n> > +\n> > +\t/* Search for the best approximation the sensor can provide. */\n> > +\tauto it = formats.begin();\n> > +\twhile (it != formats.end()) {\n>\n> for loop.\n>\n\nAck\n\n> > +\t\tfor (SizeRange &size : it->second) {\n> > +\t\t\tif (size.maxWidth < config.width ||\n> > +\t\t\t    size.maxHeight < config.height)\n> > +\t\t\t\tcontinue;\n> > +\n> > +\t\t\tunsigned int diff =\n> > +\t\t\t\t(abs(size.maxWidth - config.width) +\n> > +\t\t\t\t abs(size.maxHeight - config.height));\n>\n> No need for abs() as you check that maxWidth >= config.width above.\n>\n> Should you use the difference of area instead of the difference of\n> lengths ?\n>\n\nI'll try to move most of this to geometry\n\n> > +\t\t\tif (diff >= best)\n> > +\t\t\t\tcontinue;\n> > +\n> > +\t\t\tbest = diff;\n> > +\t\t\tfound = true;\n>\n> No need for the found variable, you can just check best != -1.\n>\n\nCorrect, this probably answers my above question.\n\nThanks\n  j\n\n> > +\n> > +\t\t\tformat->width = size.maxWidth;\n> > +\t\t\tformat->height = size.maxHeight;\n> > +\t\t\tformat->mbus_code = it->first;\n> > +\t\t}\n> > +\n> > +\t\t++it;\n> > +\t}\n> > +\tif (!found)\n> > +\t\treturn -EINVAL;\n> > +\n> > +\tret = sensor->setFormat(0, format);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\t/*\n> > +\t * Make sure everything is all 
right and the format did not get\n> > +\t * adjusted.\n> > +\t */\n> > +\tif (format->width < config.width ||\n> > +\t    format->height < config.height)\n> > +\t\treturn -EINVAL;\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> >  /* Link entities in the ImgU unit to prepare for capture operations. */\n> >  int PipelineHandlerIPU3::linkImgu(ImguDevice *imguDevice)\n> >  {\n>\n> --\n> Regards,\n>\n> Laurent Pinchart","headers":{"Return-Path":"<jacopo@jmondi.org>","Received":["from relay2-d.mail.gandi.net (relay2-d.mail.gandi.net\n\t[217.70.183.194])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id C9E87600F9\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat,  9 Mar 2019 21:57:43 +0100 (CET)","from uno.localdomain (2-224-242-101.ip172.fastwebnet.it\n\t[2.224.242.101]) (Authenticated sender: jacopo@jmondi.org)\n\tby relay2-d.mail.gandi.net (Postfix) with ESMTPSA id 34EB740005;\n\tSat,  9 Mar 2019 20:57:42 +0000 (UTC)"],"X-Originating-IP":"2.224.242.101","Date":"Sat, 9 Mar 2019 21:58:16 +0100","From":"Jacopo Mondi <jacopo@jmondi.org>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20190309205816.obhovpo5qwy6kw2e@uno.localdomain>","References":"<20190228200410.3022-1-jacopo@jmondi.org>\n\t<20190228200410.3022-5-jacopo@jmondi.org>\n\t<20190302224423.GK4682@pendragon.ideasonboard.com>","MIME-Version":"1.0","Content-Type":"multipart/signed; micalg=pgp-sha256;\n\tprotocol=\"application/pgp-signature\"; boundary=\"dymdxdmsxbae54e7\"","Content-Disposition":"inline","In-Reply-To":"<20190302224423.GK4682@pendragon.ideasonboard.com>","User-Agent":"NeoMutt/20180716","Subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage 
format","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Sat, 09 Mar 2019 20:57:44 -0000"}},{"id":1028,"web_url":"https://patchwork.libcamera.org/comment/1028/","msgid":"<20190310131403.GD4814@pendragon.ideasonboard.com>","date":"2019-03-10T13:14:03","subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage format","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Jacopo,\n\nOn Sat, Mar 09, 2019 at 09:58:16PM +0100, Jacopo Mondi wrote:\n> On Sun, Mar 03, 2019 at 12:44:23AM +0200, Laurent Pinchart wrote:\n> > On Thu, Feb 28, 2019 at 09:04:04PM +0100, Jacopo Mondi wrote:\n> >> Apply the requested image format to the sensor device, and apply the\n> >> adjusted one to the CIO2 device, the ImgU subdevice and its input and\n> >> output video devices.\n> >>\n> >> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> >> ---\n> >>  src/libcamera/pipeline/ipu3/ipu3.cpp | 257 +++++++++++++++++++++++----\n> >>  1 file changed, 224 insertions(+), 33 deletions(-)\n> >>\n> >> diff --git a/src/libcamera/pipeline/ipu3/ipu3.cpp b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> >> index 9fa59c1bc97e..1e89e57f628b 100644\n> >> --- a/src/libcamera/pipeline/ipu3/ipu3.cpp\n> >> +++ b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> >> @@ -8,11 
+8,14 @@\n> >>  #include <memory>\n> >>  #include <vector>\n> >>\n> >> +#include <linux/media-bus-format.h>\n> >> +\n> >>  #include <libcamera/camera.h>\n> >>  #include <libcamera/request.h>\n> >>  #include <libcamera/stream.h>\n> >>\n> >>  #include \"device_enumerator.h\"\n> >> +#include \"geometry.h\"\n> >>  #include \"log.h\"\n> >>  #include \"media_device.h\"\n> >>  #include \"pipeline_handler.h\"\n> >> @@ -106,6 +109,29 @@ private:\n> >>  \t\t\tPipelineHandler::cameraData(camera));\n> >>  \t}\n> >>\n> >> +\tvoid printDevFormat(const V4L2Device *dev, const V4L2DeviceFormat &fmt)\n> >> +\t{\n> >> +\t\tLOG(IPU3, Info)\n> >\n> > I think this is way too verbose in general.\n> >\n> >> +\t\t\t<< dev->deviceNode() << \": \" << fmt.width << \"x\"\n> >> +\t\t\t<< fmt.height << \"- 0x\" << std::hex << fmt.fourcc\n> >> +\t\t\t<< \" planes: \" << fmt.planesCount;\n> >> +\t}\n> >> +\n> >> +\tvoid printSubdevFormat(const V4L2Subdevice *dev, unsigned int pad,\n> >> +\t\t\t       const V4L2SubdeviceFormat &fmt)\n> >> +\t{\n> >> +\t\tLOG(IPU3, Info)\n> >> +\t\t\t<< \"'\" << dev->deviceName() << \"':\" << pad << \" = \"\n> >> +\t\t\t<< fmt.width << \"x\" << fmt.height << \" - 0x\"\n> >> +\t\t\t<< std::hex << fmt.mbus_code;\n> >> +\t}\n> >\n> > How about adding a std::string toString() const method to the\n> > V4L2DeviceFormat and V4L2SubdeviceFormat classes ? 
print methods in the\n> > PipelineHandlerIPU3 class aren't very nice :-/ (especially with the\n> > hardcoded log level)\n> \n> More code in the framework, less in pipeline handlers: it's a good idea!\n> \n> >> +\n> >> +\tint setImguFormat(V4L2Subdevice *imgu,\n> >> +\t\t\t  const StreamConfiguration &config,\n> >> +\t\t\t  Rectangle *rect);\n> >> +\tint setSensorFormat(V4L2Subdevice *sensor,\n> >> +\t\t\t    const StreamConfiguration &config,\n> >> +\t\t\t    V4L2SubdeviceFormat *format);\n> >>  \tint linkImgu(ImguDevice *imgu);\n> >>\n> >>  \tV4L2Device *openDevice(MediaDevice *media, std::string &name);\n> >> @@ -186,14 +212,29 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n> >>  \t\t\t\t\t  std::map<Stream *, StreamConfiguration> &config)\n> >>  {\n> >>  \tIPU3CameraData *data = cameraData(camera);\n> >> -\tStreamConfiguration *cfg = &config[&data->stream_];\n> >> -\tV4L2Subdevice *sensor = data->cio2.sensor;\n> >> +\tconst StreamConfiguration &cfg = config[&data->stream_];\n> >>  \tV4L2Subdevice *csi2 = data->cio2.csi2;\n> >>  \tV4L2Device *cio2 = data->cio2.output;\n> >> -\tV4L2SubdeviceFormat subdevFormat = {};\n> >> -\tV4L2DeviceFormat devFormat = {};\n> >>  \tint ret;\n> >>\n> >> +\t/*\n> >> +\t * Verify that the requested size respects the IPU3 alignement\n> >> +\t * requirements: image size shall be 8-aligned in width and 4-aligned\n> >> +\t * in height.\n> >\n> > Maybe \"the image width shall be a multiple of 8 pixels and its height a\n> > multiple of 4 pixels\" to clarify we're talking about pixels, not bytes ?\n> \n> Ack\n> \n> >> +\t *\n> >> +\t * TODO: consider the BDS scaling factor requirements: \"the downscaling\n> >> +\t * factor must be an integer value multiple of 1/32\"\n> >> +\t */\n> >> +\tif (cfg.width % 8 || cfg.height % 4) {\n> >> +\t\tLOG(IPU3, Error) << \"Stream format not support: bad alignement\";\n> >> +\t\treturn -EINVAL;\n> >> +\t}\n> >> +\n> >> +\tLOG(IPU3, Info)\n> >> +\t\t<< \"Camera :'\" << camera->name() << \" - 
Stream format: \"\n> >> +\t\t<< cfg.width << \"x\" << cfg.height << \" - \"\n> >> +\t\t<< std::hex << cfg.pixelFormat;\n> >\n> > Logging is important, but this is all way too verbose. Please consider\n> > for every log message if it's needed, and if it is, what the appropriate\n> > log level should be. The messages should also be more informative. This\n> > message, for instance, should explicitly state that it prints the\n> > requested format, and also mention which stream it applies to. You\n> > should also move it before the alignment check.\n> \n> As we have clarified offline I'll make this Debug and will make this\n> and the others more user-consumable.\n> \n> >> +\n> >>  \t/*\n> >>  \t * TODO: dynamically assign ImgU devices; as of now, with a single\n> >>  \t * stream supported, always use 'imgu0'.\n> >> @@ -203,52 +244,73 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >>\n> >> +\tV4L2Device *viewfinder = data->imgu->viewfinder;\n> >> +\tV4L2Subdevice *sensor = data->cio2.sensor;\n> >> +\tV4L2Device *output = data->imgu->output;\n> >> +\tV4L2Subdevice *imgu = data->imgu->imgu;\n> >> +\tV4L2Device *input = data->imgu->input;\n> >> +\n> >>  \t/*\n> >> -\t * FIXME: as of now, the format gets applied to the sensor and is\n> >> -\t * propagated along the pipeline. It should instead be applied on the\n> >> -\t * capture device and the sensor format calculated accordingly.\n> >> +\t * Pass the requested output image size to the sensor and get back the\n> >> +\t * adjusted one to be propagated to the CIO2 device and to the ImgU\n> >> +\t * input.\n> >>  \t */\n> >\n> > If the sensor driver returns the closest match, you may get a size\n> > smaller than requested, which would require up-scaling. I'm not sure if\n> > the IPU3 supports that, but in any case it may not be a very good idea.\n> > I think you should instead iterate over the sizes supported by the\n> > sensor and pick the best one yourself. 
We may also want to use cropping\n> > at some point, but that can be left for later.\n> >\n> > And now that I've written this, I realized that your setSensorFormat()\n> > method has such an iteration. Could you update the above comment\n> > accordingly, and possibly rename setSensorFormat() to a more descriptive\n> > name ?\n> \n> Re-reading the comment, I got why you got mis-lead... I'll change\n> this. Suggestions welcome for the method name.\n> \n> >> +\tV4L2SubdeviceFormat sensorFormat = {};\n> >> +\tret = setSensorFormat(sensor, cfg, &sensorFormat);\n> >> +\tif (ret) {\n> >> +\t\tLOG(IPU3, Error) << \"Stream format not supported: \";\n> >\n> > Looks like you're missing part of the message. I would also move it to\n> > the setSensorFormat() function in order to print the exact reason of the\n> > error.\n> >\n> >> +\t\treturn ret;\n> >> +\t}\n> >> +\tprintSubdevFormat(sensor, 0, sensorFormat);\n> >>\n> >> -\tret = sensor->getFormat(0, &subdevFormat);\n> >> +\tret = csi2->setFormat(0, &sensorFormat);\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >> -\n> >> -\tsubdevFormat.width = cfg->width;\n> >> -\tsubdevFormat.height = cfg->height;\n> >> -\tret = sensor->setFormat(0, &subdevFormat);\n> >> +\tprintSubdevFormat(csi2, 0, sensorFormat);\n> >> +\n> >> +\t/* Apply the CIO2 image format to the CIO2 output and ImgU input. */\n> >> +\tV4L2DeviceFormat cio2Format = {};\n> >> +\tcio2Format.width = sensorFormat.width;\n> >> +\tcio2Format.height = sensorFormat.height;\n> >> +\tcio2Format.fourcc = V4L2_PIX_FMT_IPU3_SGRBG10;\n> >\n> > Let's not hardcode a particular bayer pattern, please pick the\n> > appropriate one based on what the sensor provides.\n> \n> Ah! 
Isn't the output of the CIO2 device IPU3 specific and fixed?\n> That's not what the sensor provides, but the CIO2 produced one\n> (for which there are a few versions, which are different permutations\n> of the RGB bayer components, but I don't think that' too relevant)\n\nThe CIO2 writes a custom packing of 10-bit Bayer data to memory (25\npixels stored in 32 bytes if I remember correctly). As far as I know it\ndoesn't reorder Bayer components, so you'll have to pick the\nV4L2_PIX_FMT_IPU3_S[RGB]*10 variant that corresponds to the order\nproduced by the sensor.\n\n> >> +\tcio2Format.planesCount = 1;\n> >> +\tret = cio2->setFormat(&cio2Format);\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >> +\tprintDevFormat(cio2, cio2Format);\n> >>\n> >> -\t/* Return error if the requested format cannot be applied to sensor. */\n> >> -\tif (subdevFormat.width != cfg->width ||\n> >> -\t    subdevFormat.height != cfg->height) {\n> >> -\t\tLOG(IPU3, Error)\n> >> -\t\t\t<< \"Failed to apply image format \"\n> >> -\t\t\t<< subdevFormat.width << \"x\" << subdevFormat.height\n> >> -\t\t\t<< \" - got: \" << cfg->width << \"x\" << cfg->height;\n> >> -\t\treturn -EINVAL;\n> >> -\t}\n> >> -\n> >> -\tret = csi2->setFormat(0, &subdevFormat);\n> >> +\tret = input->setFormat(&cio2Format);\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >> -\n> >> -\tret = cio2->getFormat(&devFormat);\n> >> +\tprintDevFormat(input, cio2Format);\n> >> +\n> >> +\t/* Apply pad formats and crop/compose rectangle to the ImgU. */\n> >> +\tRectangle rect = {\n> >> +\t\t.x = 0,\n> >> +\t\t.y = 0,\n> >> +\t\t.w = cio2Format.width,\n> >> +\t\t.h = cio2Format.height,\n> >> +\t};\n> >> +\tret = setImguFormat(imgu, cfg, &rect);\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >>\n> >> -\tdevFormat.width = subdevFormat.width;\n> >> -\tdevFormat.height = subdevFormat.height;\n> >> -\tdevFormat.fourcc = cfg->pixelFormat;\n> >> +\t/* Apply the format to the ImgU output and viewfinder devices. 
*/\n> >\n> > As commented on the previous patch, with a single stream supported, do\n> > you really need both outputs ?\n> \n> See my comments on the previous patch.\n> \n> >> +\tV4L2DeviceFormat outputFormat = {};\n> >> +\toutputFormat.width = cfg.width;\n> >> +\toutputFormat.height = cfg.height;\n> >> +\toutputFormat.fourcc = V4L2_PIX_FMT_NV12;\n> >> +\toutputFormat.planesCount = 2;\n> >>\n> >> -\tret = cio2->setFormat(&devFormat);\n> >> +\tret = output->setFormat(&outputFormat);\n> >>  \tif (ret)\n> >>  \t\treturn ret;\n> >> +\tprintDevFormat(output, outputFormat);\n> >>\n> >> -\tLOG(IPU3, Info) << cio2->driverName() << \": \"\n> >> -\t\t\t<< devFormat.width << \"x\" << devFormat.height\n> >> -\t\t\t<< \"- 0x\" << std::hex << devFormat.fourcc << \" planes: \"\n> >> -\t\t\t<< devFormat.planes;\n> >> +\tret = viewfinder->setFormat(&outputFormat);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\tprintDevFormat(viewfinder, outputFormat);\n> >>\n> >>  \treturn 0;\n> >>  }\n> >> @@ -409,6 +471,135 @@ error_release_mdev:\n> >>  \treturn false;\n> >>  }\n> >>\n> >> +int PipelineHandlerIPU3::setImguFormat(V4L2Subdevice *imgu,\n> >> +\t\t\t\t       const StreamConfiguration &config,\n> >> +\t\t\t\t       Rectangle *rect)\n> >> +{\n> >> +\tint ret;\n> >> +\n> >> +\t/*\n> >> +\t * Configure the 'imgu' subdevice with the requested sizes.\n> >> +\t *\n> >> +\t * FIXME: the IPU3 driver implementation shall be changed to use the\n> >> +\t * actual input sizes as 'imgu input' subdevice sizes, and use the\n> >> +\t * desired output sizes to configure the crop/compose rectangles.  
The\n> >> +\t * current implementation uses output sizes as 'imgu input' sizes, and\n> >> +\t * uses the input dimension to configure the crop/compose rectangles,\n> >> +\t * which contradicts the V4L2 specifications.\n> >> +\t */\n> >> +\tret = imgu->setCrop(IMGU_PAD_INPUT, rect);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\n> >> +\tLOG(IPU3, Info)\n> >> +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> >> +\t\t<< \" = crop: (0,0)/\" << rect->w << \"x\" << rect->h;\n> >> +\n> >> +\tret = imgu->setCompose(IMGU_PAD_INPUT, rect);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\n> >> +\tLOG(IPU3, Info)\n> >> +\t\t<< \"'\" << imgu->deviceName() << \"':\" << IMGU_PAD_INPUT\n> >> +\t\t<< \" = compose: (0,0)/\" << rect->w << \"x\" << rect->h;\n> >> +\n> >> +\n> >> +\tV4L2SubdeviceFormat imguFormat = {};\n> >> +\timguFormat.width = config.width;\n> >> +\timguFormat.height = config.height;\n> >> +\timguFormat.mbus_code = MEDIA_BUS_FMT_FIXED;\n> >> +\n> >> +\tret = imgu->setFormat(IMGU_PAD_INPUT, &imguFormat);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\tprintSubdevFormat(imgu, IMGU_PAD_INPUT, imguFormat);\n> >> +\n> >> +\tret = imgu->setFormat(IMGU_PAD_OUTPUT, &imguFormat);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\tprintSubdevFormat(imgu, IMGU_PAD_OUTPUT, imguFormat);\n> >> +\n> >> +\tret = imgu->setFormat(IMGU_PAD_VF, &imguFormat);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\tprintSubdevFormat(imgu, IMGU_PAD_VF, imguFormat);\n> >> +\n> >> +\tret = imgu->setFormat(IMGU_PAD_STAT, &imguFormat);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\tprintSubdevFormat(imgu, IMGU_PAD_STAT, imguFormat);\n> >> +\n> >> +\treturn 0;\n> >> +}\n> >> +\n> >> +int PipelineHandlerIPU3::setSensorFormat(V4L2Subdevice *sensor,\n> >> +\t\t\t\t\t const StreamConfiguration &config,\n> >> +\t\t\t\t\t V4L2SubdeviceFormat *format)\n> >> +{\n> >> +\tstd::map<unsigned int, std::vector<SizeRange>> formats;\n> >> +\tunsigned int best = ~0;\n> >\n> > Please use -1 
instead of ~0.\n> \n> Ack. Personal tastes or some reason I'm missing?\n> \n> >> +\tbool found = false;\n> >> +\tint ret;\n> >> +\n> >> +\tformats = sensor->formats(0);\n> >> +\tif (formats.empty()) {\n> >> +\t\t/*\n> >> +\t\t * If the format list is empty, try with the currently\n> >> +\t\t * applied one.\n> >> +\t\t */\n> >\n> > This shouldn't happen.\n> \n> Agreed. Let's always assume sensor drivers are well written (creepy\n> laughs in background..)\n\n:-) Well, sanity checks are nice, but I'd move them to match() time to\nfail early instead of creating a camera that won't be able to work. Some\nchecks could even be moved to the V4L2 device and subdevice classes.\n\n> >> +\t\tret = sensor->getFormat(0, format);\n> >> +\t\tif (ret)\n> >> +\t\t\treturn ret;\n> >> +\n> >> +\t\tif (format->width < config.width ||\n> >> +\t\t    format->height < config.height)\n> >> +\t\t\treturn -EINVAL;\n> >> +\n> >> +\t\treturn 0;\n> >> +\t}\n> >> +\n> >> +\t/* Search for the best approximation the sensor can provide. 
*/\n> >> +\tauto it = formats.begin();\n> >> +\twhile (it != formats.end()) {\n> >\n> > for loop.\n> \n> Ack\n> \n> >> +\t\tfor (SizeRange &size : it->second) {\n> >> +\t\t\tif (size.maxWidth < config.width ||\n> >> +\t\t\t    size.maxHeight < config.height)\n> >> +\t\t\t\tcontinue;\n> >> +\n> >> +\t\t\tunsigned int diff =\n> >> +\t\t\t\t(abs(size.maxWidth - config.width) +\n> >> +\t\t\t\t abs(size.maxHeight - config.height));\n> >\n> > No need for abs() as you check that maxWidth >= config.width above.\n> >\n> > Should you use the difference of area instead of the difference of\n> > lengths ?\n> \n> I'll try to move most of this to geometry\n> \n> >> +\t\t\tif (diff >= best)\n> >> +\t\t\t\tcontinue;\n> >> +\n> >> +\t\t\tbest = diff;\n> >> +\t\t\tfound = true;\n> >\n> > No need for the found variable, you can just check best != -1.\n> \n> Correct, this probably answers my above question.\n> \n> >> +\n> >> +\t\t\tformat->width = size.maxWidth;\n> >> +\t\t\tformat->height = size.maxHeight;\n> >> +\t\t\tformat->mbus_code = it->first;\n> >> +\t\t}\n> >> +\n> >> +\t\t++it;\n> >> +\t}\n> >> +\tif (!found)\n> >> +\t\treturn -EINVAL;\n> >> +\n> >> +\tret = sensor->setFormat(0, format);\n> >> +\tif (ret)\n> >> +\t\treturn ret;\n> >> +\n> >> +\t/*\n> >> +\t * Make sure everything is all right and the format did not get\n> >> +\t * adjusted.\n> >> +\t */\n> >> +\tif (format->width < config.width ||\n> >> +\t    format->height < config.height)\n> >> +\t\treturn -EINVAL;\n> >> +\n> >> +\treturn 0;\n> >> +}\n> >> +\n> >>  /* Link entities in the ImgU unit to prepare for capture operations. 
*/\n> >>  int PipelineHandlerIPU3::linkImgu(ImguDevice *imguDevice)\n> >>  {","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 65703600CB\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSun, 10 Mar 2019 14:14:09 +0100 (CET)","from pendragon.ideasonboard.com\n\t(dfj612yhrgyx302h3jwwy-3.rev.dnainternet.fi\n\t[IPv6:2001:14ba:21f5:5b00:ce28:277f:58d7:3ca4])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id CFA7057D;\n\tSun, 10 Mar 2019 14:14:08 +0100 (CET)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1552223649;\n\tbh=asTeTk1VeVjKJwh1ai5+QPlvrXsL0dSWT6+q9uyfN3g=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=XKiLxSHD+ss+yaPRw6VI/Dg44UL27XwnMxA8CbIwPKMHFHut1xVs0fyqrGXNXl5wm\n\tsHvTR6ZdprfmPkAvJ1w/he6/X5CnmCVo+/2LDM1FaXLgIFT4E+V/DetTRCzdgIYB2b\n\tn96Ptm9Ool+Rn5+Jl5u6hFojssOSHNnByoSmyKWg=","Date":"Sun, 10 Mar 2019 15:14:03 +0200","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Jacopo Mondi <jacopo@jmondi.org>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20190310131403.GD4814@pendragon.ideasonboard.com>","References":"<20190228200410.3022-1-jacopo@jmondi.org>\n\t<20190228200410.3022-5-jacopo@jmondi.org>\n\t<20190302224423.GK4682@pendragon.ideasonboard.com>\n\t<20190309205816.obhovpo5qwy6kw2e@uno.localdomain>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20190309205816.obhovpo5qwy6kw2e@uno.localdomain>","User-Agent":"Mutt/1.10.1 (2018-07-13)","Subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage 
format","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Sun, 10 Mar 2019 13:14:09 -0000"}},{"id":1045,"web_url":"https://patchwork.libcamera.org/comment/1045/","msgid":"<20190311181101.b6g5vjmkbzpzjcur@uno.localdomain>","date":"2019-03-11T18:11:01","subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage format","submitter":{"id":3,"url":"https://patchwork.libcamera.org/api/people/3/","name":"Jacopo Mondi","email":"jacopo@jmondi.org"},"content":"Hi Laurent,\n   two updates so that v2 comes with no surprises\n\nOn Sun, Mar 03, 2019 at 12:44:23AM +0200, Laurent Pinchart wrote:\n> Hi Jacopo,\n>\n> Thank you for the patch.\n>\n\n[snip]\n\n> > +\t\tfor (SizeRange &size : it->second) {\n> > +\t\t\tif (size.maxWidth < config.width ||\n> > +\t\t\t    size.maxHeight < config.height)\n> > +\t\t\t\tcontinue;\n> > +\n> > +\t\t\tunsigned int diff =\n> > +\t\t\t\t(abs(size.maxWidth - config.width) +\n> > +\t\t\t\t abs(size.maxHeight - config.height));\n>\n> No need for abs() as you check that maxWidth >= config.width above.\n>\n> Should you use the difference of area instead of the difference of\n> lengths ?\n>\n\nI suggested moving this to geometry, but as in this pipeline\nimplementation we often compare SizeRanges with other structure, it's\nactually nicer to calculate areas here directly. 
It's a basic\nmultiplication after all...\n\n> > +\t\t\tif (diff >= best)\n> > +\t\t\t\tcontinue;\n> > +\n> > +\t\t\tbest = diff;\n> > +\t\t\tfound = true;\n>\n> No need for the found variable, you can just check best != -1.\n>\n\nI went down a path of casts between signed and unsigned int this way.\nI'm keeping the extra variable.\n\nThanks\n  j\n\n> > +\n> > +\t\t\tformat->width = size.maxWidth;\n> > +\t\t\tformat->height = size.maxHeight;\n> > +\t\t\tformat->mbus_code = it->first;\n> > +\t\t}\n> > +\n> > +\t\t++it;\n> > +\t}\n> > +\tif (!found)\n> > +\t\treturn -EINVAL;\n> > +\n> > +\tret = sensor->setFormat(0, format);\n> > +\tif (ret)\n> > +\t\treturn ret;\n> > +\n> > +\t/*\n> > +\t * Make sure everything is all right and the format did not get\n> > +\t * adjusted.\n> > +\t */\n> > +\tif (format->width < config.width ||\n> > +\t    format->height < config.height)\n> > +\t\treturn -EINVAL;\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> >  /* Link entities in the ImgU unit to prepare for capture operations. 
*/\n> >  int PipelineHandlerIPU3::linkImgu(ImguDevice *imguDevice)\n> >  {\n>\n> --\n> Regards,\n>\n> Laurent Pinchart","headers":{"Return-Path":"<jacopo@jmondi.org>","Received":["from relay1-d.mail.gandi.net (relay1-d.mail.gandi.net\n\t[217.70.183.193])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 4317A600FD\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 11 Mar 2019 19:10:28 +0100 (CET)","from uno.localdomain (2-224-242-101.ip172.fastwebnet.it\n\t[2.224.242.101]) (Authenticated sender: jacopo@jmondi.org)\n\tby relay1-d.mail.gandi.net (Postfix) with ESMTPSA id A967524000A;\n\tMon, 11 Mar 2019 18:10:27 +0000 (UTC)"],"X-Originating-IP":"2.224.242.101","Date":"Mon, 11 Mar 2019 19:11:01 +0100","From":"Jacopo Mondi <jacopo@jmondi.org>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20190311181101.b6g5vjmkbzpzjcur@uno.localdomain>","References":"<20190228200410.3022-1-jacopo@jmondi.org>\n\t<20190228200410.3022-5-jacopo@jmondi.org>\n\t<20190302224423.GK4682@pendragon.ideasonboard.com>","MIME-Version":"1.0","Content-Type":"multipart/signed; micalg=pgp-sha256;\n\tprotocol=\"application/pgp-signature\"; boundary=\"cfdpmzlq6kchomy3\"","Content-Disposition":"inline","In-Reply-To":"<20190302224423.GK4682@pendragon.ideasonboard.com>","User-Agent":"NeoMutt/20180716","Subject":"Re: [libcamera-devel] [PATCH 04/10] libcamera: ipu3: Propagate\n\timage 
format","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Mon, 11 Mar 2019 18:10:28 -0000"}}]
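
The size-selection logic debated in this thread (iterate over the sizes each media bus code supports, skip anything smaller than the request so no up-scaling is needed, keep the closest match) can be sketched as below. This is an illustrative sketch only, not the actual libcamera code: `SizeRange`, `Selection`, and `pickBestSize` are hypothetical stand-ins, and it adopts Laurent's suggestion of comparing area differences rather than summed width/height differences.

```cpp
#include <cstdint>
#include <map>
#include <vector>

/* Hypothetical stand-in for the libcamera SizeRange type. */
struct SizeRange {
	unsigned int maxWidth;
	unsigned int maxHeight;
};

/* Result of the search; 'valid' plays the role of the 'found' flag. */
struct Selection {
	unsigned int width = 0;
	unsigned int height = 0;
	unsigned int mbusCode = 0;
	bool valid = false;
};

Selection pickBestSize(const std::map<unsigned int, std::vector<SizeRange>> &formats,
		       unsigned int reqWidth, unsigned int reqHeight)
{
	Selection sel;
	uint64_t best = UINT64_MAX;

	/* A for loop over the map, as requested in the review. */
	for (const auto &[code, sizes] : formats) {
		for (const SizeRange &size : sizes) {
			/* Skip sizes that would require up-scaling. */
			if (size.maxWidth < reqWidth ||
			    size.maxHeight < reqHeight)
				continue;

			/*
			 * Area difference instead of summed length
			 * differences; no abs() needed since the candidate
			 * is known to be at least as large as the request.
			 */
			uint64_t area = uint64_t(size.maxWidth) * size.maxHeight;
			uint64_t diff = area - uint64_t(reqWidth) * reqHeight;
			if (diff >= best)
				continue;

			best = diff;
			sel.width = size.maxWidth;
			sel.height = size.maxHeight;
			sel.mbusCode = code;
			sel.valid = true;
		}
	}

	return sel;
}
```

Using a 64-bit unsigned difference with a `valid` flag sidesteps the signed/unsigned cast issue Jacopo mentions when checking `best != -1` directly.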