[{"id":842,"web_url":"https://patchwork.libcamera.org/comment/842/","msgid":"<20190222000926.GM3485@pendragon.ideasonboard.com>","date":"2019-02-22T00:09:26","subject":"Re: [libcamera-devel] [PATCH 4/5] libcamera: ipu3: Add CIO2Device\n\tclass","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Jacopo,\n\nThank you for the patch.\n\nOn Wed, Feb 20, 2019 at 02:17:56PM +0100, Jacopo Mondi wrote:\n> Add a CIO2Device class that represents the CIO2 unit associated with a\n> Camera. Implement image format negotiation in the CIO2Device and create\n> a CIO2Device instance for each camera created by the IPU3 pipeline\n> handler.\n> \n> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>\n> ---\n>  src/libcamera/pipeline/ipu3/cio2.cpp    | 266 ++++++++++++++++++++++++\n>  src/libcamera/pipeline/ipu3/ipu3.cpp    | 158 +++-----------\n>  src/libcamera/pipeline/ipu3/ipu3.h      |  51 +++--\n>  src/libcamera/pipeline/ipu3/meson.build |   1 +\n>  4 files changed, 335 insertions(+), 141 deletions(-)\n>  create mode 100644 src/libcamera/pipeline/ipu3/cio2.cpp\n> \n> diff --git a/src/libcamera/pipeline/ipu3/cio2.cpp b/src/libcamera/pipeline/ipu3/cio2.cpp\n> new file mode 100644\n> index 000000000000..b0eeff987f64\n> --- /dev/null\n> +++ b/src/libcamera/pipeline/ipu3/cio2.cpp\n> @@ -0,0 +1,266 @@\n> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * Copyright (C) 2019, Google Inc.\n> + *\n> + * cio2.cpp - IPU3 CIO2 unit\n> + */\n> +\n> +#include <string>\n> +#include <vector>\n> +\n> +#include \"ipu3.h\"\n> +#include \"media_device.h\"\n> +#include \"v4l2_device.h\"\n> +#include \"v4l2_subdevice.h\"\n> +\n> +/*\n> + * CIO2Device represents one of the four 'CIO2' units the Intel IPU3 ISP\n\ns/ISP //\n\n> + * provides.\n> + *\n> + * Each CIO2 unit has associated one image sensor, which provides RAW image\n\ns/has associated/is associated with/ (or \"has one image 
sensor\nassociated to it\")\n\n> + * data to be processed by one of the IPU3's IMGU units.\n\ns/IMGU/ImgU/\n\n> + */\n> +\n> +namespace libcamera {\n> +\n> +LOG_DECLARE_CATEGORY(IPU3)\n> +\n> +CIO2Device::CIO2Device()\n> +\t: cio2_(nullptr), csi2_(nullptr), sensor_(nullptr)\n> +{\n> +}\n> +\n> +CIO2Device::~CIO2Device()\n> +{\n> +\tdelete cio2_;\n> +\tdelete csi2_;\n> +\tdelete sensor_;\n> +}\n> +\n> +/*\n> + * Create and open the video device and video subdevices associated with\n> + * this CIO2 unit.\n\nCould you document the parameters too ? I'd format this using the Doxygen\nformat.\n\n> + */\n> +int CIO2Device::open(unsigned int index, MediaDevice *cio2MediaDev)\n\nI would name the second parameter mediaDevice as I don't think it could\nbe confused with another media device. I would also swap the two\nparameters, to have the more generic first and the more specific last.\n\n> +{\n> +\tint ret;\n> +\n> +\tstd::string cio2Name = \"ipu3-cio2 \" + std::to_string(index);\n> +\tMediaEntity *cio2Entity = cio2MediaDev->getEntityByName(cio2Name);\n> +\tif (!cio2Entity) {\n> +\t\tLOG(IPU3, Error)\n> +\t\t\t<< \"Failed to get entity '\" << cio2Name << \"'\";\n> +\t\treturn -EINVAL;\n\nMaybe -ENODEV here and below, to indicate the CIO2 device isn't found ?\n\n> +\t}\n> +\n> +\tstd::string csi2Name = \"ipu3-csi2 \" + std::to_string(index);\n> +\tMediaEntity *csi2Entity = cio2MediaDev->getEntityByName(csi2Name);\n> +\tif (!csi2Entity) {\n> +\t\tLOG(IPU3, Error)\n> +\t\t\t<< \"Failed to get entity '\" << csi2Name << \"'\";\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tconst std::vector<MediaPad *> &pads = csi2Entity->pads();\n> +\tif (pads.empty())\n> +\t\treturn -EINVAL;\n> +\n> +\t/* IPU3 CSI-2 receivers have a single sink pad at index 0. 
*/\n> +\tMediaPad *sink = pads[0];\n> +\tconst std::vector<MediaLink *> &links = sink->links();\n> +\tif (links.empty())\n> +\t\treturn -EINVAL;\n> +\n> +\t/*\n> +\t * Verify that the CSI-2 receiver is connected to a sensor and enable\n> +\t * the media link between the two.\n> +\t */\n> +\tMediaLink *link = links[0];\n> +\tMediaEntity *sensorEntity = link->source()->entity();\n> +\tif (sensorEntity->function() != MEDIA_ENT_F_CAM_SENSOR)\n> +\t\treturn -EINVAL;\n> +\n> +\tif (link->setEnabled(true))\n> +\t\treturn -EINVAL;\n> +\n> +\t/*\n> +\t * Create and open video devices and subdevices associated with\n> +\t * the camera.\n> +\t */\n> +\tcio2_ = new V4L2Device(cio2Entity);\n> +\tret = cio2_->open();\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tsensor_ = new V4L2Subdevice(sensorEntity);\n> +\tret = sensor_->open();\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tcsi2_ = new V4L2Subdevice(csi2Entity);\n> +\tret = csi2_->open();\n> +\tif (ret)\n> +\t\treturn ret;\n\nI would call close() for the last two errors (or possibly even the last\nthree for symmetry, it won't hurt).\n\n> +\n> +\treturn 0;\n> +}\n> +\n> +int CIO2Device::sensorSetFormat(unsigned int width, unsigned int height,\n> +\t\t\t\tV4L2SubdeviceFormat *format)\n\nHow about just setFormat() ?\n\n> +{\n> +\tunsigned int best = ~0;\n> +\tbool found = false;\n> +\tint ret;\n> +\n> +\tret = sensor_->getFormat(0, format);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\t/*\n> +\t * Enumerate the sensor formats until a matching one is found.\n> +\t * Re-use the default media_bus pixel format as it is configured\n> +\t * in the sensor.\n> +\t */\n\nI think this isn't the right layer to perform these operations. The code\nshould be moved to the IPU3 pipeline handler that will enumerate all\nresolutions supported by the sensor and decide what to offer to\napplications, taking into account cropping and scaling capabilities of\nthe ImgU. 
This class shouldn't try to be smart.\n\n> +\tV4L2SubdeviceFormatEnum bestEnum = {};\n> +\tV4L2SubdeviceFormatEnum formatEnum = {};\n> +\tformatEnum.index = 0;\n> +\tformatEnum.mbus_code = format->mbus_code;\n> +\n> +\twhile (true) {\n> +\t\tunsigned int diff;\n> +\n> +\t\tint ret = sensor_->enumFormat(0, &formatEnum);\n> +\t\tif (ret)\n> +\t\t\tbreak;\n> +\n> +\t\tdiff = (abs(formatEnum.minWidth - width) +\n> +\t\t\tabs(formatEnum.minHeight - height));\n> +\n> +\t\tif (formatEnum.minWidth >= width &&\n> +\t\t    formatEnum.minHeight >= height &&\n> +\t\t    diff < best) {\n> +\t\t\tbest = diff;\n> +\t\t\tbestEnum = formatEnum;\n> +\t\t\tfound = true;\n> +\t\t}\n> +\n> +\t\tLOG(IPU3, Debug)\n> +\t\t\t<< \"Enumerate image sizes : \" << formatEnum.index\n> +\t\t\t<< \" = \" << formatEnum.minWidth << \"x\"\n> +\t\t\t<< formatEnum.minHeight\n> +\t\t\t<< \"Diff \" << diff << \" Best \" << best;\n> +\n> +\t\tformatEnum.index++;\n> +\t}\n> +\tif (!found) {\n> +\t\tLOG(IPU3, Error)\n> +\t\t\t<< \"Failed to find image resolution for: \"\n> +\t\t\t<< width << \"x\" << height;\n> +\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\tformat->width = bestEnum.minWidth;\n> +\tformat->height = bestEnum.minHeight;\n> +\tret = sensor_->setFormat(0, format);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\t/* Make sure everything is all right. 
*/\n> +\tif (format->width < width || format->height < height) {\n> +\t\tLOG(IPU3, Error)\n> +\t\t\t<< \"Failed to find image resolution for: \"\n> +\t\t\t<< width << \"x\" << height << \" - Got : \"\n> +\t\t\t<< format->width << \"x\" << format->height\n> +\t\t\t<< \" instead\";\n> +\n> +\t\treturn -EINVAL;\n> +\t}\n> +\n> +\treturn 0;\n> +}\n> +\n> +/*\n> + * Configure the CIO2 unit to provide big enough images to satisfy\n> + * the requested width and height.\n\nSame thing here, the IPU3 pipeline handler should make the smart\ndecisions, and request an exact resolution that the CIO2 can provide.\n\n> + *\n> + * The images captured from the CIO2 unit will be fed to an IMGU unit that\n> + * can scale and crop on them to obtain the desired output sizes.\n> + */\n> +int CIO2Device::configure(const IPU3DeviceFormat &format)\n> +{\n> +\tV4L2SubdeviceFormat subdevFormat = {};\n> +\tV4L2DeviceFormat devFormat = {};\n> +\tint ret;\n> +\n> +\tret = sensorSetFormat(format.width, format.height, &subdevFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tLOG(IPU3, Debug)\n> +\t\t<< \"'\" << sensor_->deviceName() << \"':0 = \"\n> +\t\t<< subdevFormat.width << \"x\" << subdevFormat.height << \" - \"\n> +\t\t<< std::hex << subdevFormat.mbus_code;\n> +\n> +\t/* Propagate the format along the pipeline. 
*/\n> +\tret = csi2_->setFormat(0, &subdevFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tret = cio2_->getFormat(&devFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tdevFormat.width = subdevFormat.width;\n> +\tdevFormat.height = subdevFormat.height;\n> +\tdevFormat.fourcc = V4L2_PIX_FMT_IPU3_SGRBG10;\n> +\n> +\tret = cio2_->setFormat(&devFormat);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tLOG(IPU3, Debug)\n> +\t\t<< cio2_->driverName() << \": \"\n> +\t\t<< devFormat.width << \"x\" << devFormat.height\n> +\t\t<< \"- 0x\" << std::hex << devFormat.fourcc << \" planes: \"\n> +\t\t<< devFormat.planesCount;\n> +\n> +\t/* Store the applied format in the cio2Format_ member for re-use. */\n> +\tcio2Format_.width = devFormat.width;\n> +\tcio2Format_.height = devFormat.height;\n> +\tcio2Format_.fourcc = devFormat.fourcc;\n> +\n> +\treturn 0;\n> +}\n> +\n> +const IPU3DeviceFormat &CIO2Device::format() const\n> +{\n> +\treturn cio2Format_;\n> +}\n> +\n> +int CIO2Device::exportBuffers(BufferPool *pool)\n> +{\n> +\treturn cio2_->exportBuffers(pool);\n> +}\n> +\n> +int CIO2Device::releaseBuffers()\n> +{\n> +\treturn cio2_->releaseBuffers();\n> +}\n> +\n> +int CIO2Device::queueBuffer(Buffer *buffer)\n> +{\n> +\treturn cio2_->queueBuffer(buffer);\n> +}\n> +\n> +int CIO2Device::streamOn()\n> +{\n> +\treturn cio2_->streamOn();\n> +}\n> +\n> +int CIO2Device::streamOff()\n> +{\n> +\treturn cio2_->streamOff();\n> +}\n\nThis almost makes me feel you should inherit from V4L2Device.\n\n> +\n> +} /* namespace libcamera */\n> diff --git a/src/libcamera/pipeline/ipu3/ipu3.cpp b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> index 07029dd763c9..5fcdd6335db6 100644\n> --- a/src/libcamera/pipeline/ipu3/ipu3.cpp\n> +++ b/src/libcamera/pipeline/ipu3/ipu3.cpp\n> @@ -60,74 +60,37 @@ PipelineHandlerIPU3::streamConfiguration(Camera *camera,\n>  int PipelineHandlerIPU3::configureStreams(Camera *camera,\n>  \t\t\t\t\t  std::map<Stream *, StreamConfiguration> &config)\n>  {\n> -\tIPU3CameraData 
*data = cameraData(camera);\n> -\tStreamConfiguration *cfg = &config[&data->stream_];\n> -\tV4L2Subdevice *sensor = data->sensor_;\n> -\tV4L2Subdevice *csi2 = data->csi2_;\n> -\tV4L2Device *cio2 = data->cio2_;\n> -\tV4L2SubdeviceFormat subdevFormat = {};\n> -\tV4L2DeviceFormat devFormat = {};\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n> +\tStream *stream = &cameraData(camera)->stream_;\n> +\tStreamConfiguration *cfg = &config[stream];\n> +\tIPU3DeviceFormat outputFormat = {};\n>  \tint ret;\n>  \n> -\t/*\n> -\t * FIXME: as of now, the format gets applied to the sensor and is\n> -\t * propagated along the pipeline. It should instead be applied on the\n> -\t * capture device and the sensor format calculated accordingly.\n> -\t */\n> -\n> -\tret = sensor->getFormat(0, &subdevFormat);\n> -\tif (ret)\n> -\t\treturn ret;\n> -\n> -\tsubdevFormat.width = cfg->width;\n> -\tsubdevFormat.height = cfg->height;\n> -\tret = sensor->setFormat(0, &subdevFormat);\n> -\tif (ret)\n> -\t\treturn ret;\n> -\n> -\t/* Return error if the requested format cannot be applied to sensor. */\n> -\tif (subdevFormat.width != cfg->width ||\n> -\t    subdevFormat.height != cfg->height) {\n> -\t\tLOG(IPU3, Error)\n> -\t\t\t<< \"Failed to apply image format \"\n> -\t\t\t<< subdevFormat.width << \"x\" << subdevFormat.height\n> -\t\t\t<< \" - got: \" << cfg->width << \"x\" << cfg->height;\n> +\t/* Validate the requested image format and resolution. 
*/\n> +\tif (cfg->pixelFormat != V4L2_PIX_FMT_NV12) {\n> +\t\tLOG(IPU3, Error) << \"Image format not supported\";\n>  \t\treturn -EINVAL;\n>  \t}\n> +\toutputFormat.width = cfg->width;\n> +\toutputFormat.height = cfg->height;\n> +\toutputFormat.fourcc = cfg->pixelFormat;\n>  \n> -\tret = csi2->setFormat(0, &subdevFormat);\n> +\tret = cio2->configure(outputFormat);\n>  \tif (ret)\n>  \t\treturn ret;\n>  \n> -\tret = cio2->getFormat(&devFormat);\n> -\tif (ret)\n> -\t\treturn ret;\n> -\n> -\tdevFormat.width = subdevFormat.width;\n> -\tdevFormat.height = subdevFormat.height;\n> -\tdevFormat.fourcc = cfg->pixelFormat;\n> -\n> -\tret = cio2->setFormat(&devFormat);\n> -\tif (ret)\n> -\t\treturn ret;\n> -\n> -\tLOG(IPU3, Info) << cio2->driverName() << \": \"\n> -\t\t\t<< devFormat.width << \"x\" << devFormat.height\n> -\t\t\t<< \"- 0x\" << std::hex << devFormat.fourcc << \" planes: \"\n> -\t\t\t<< devFormat.planes;\n> -\n>  \treturn 0;\n>  }\n>  \n>  int PipelineHandlerIPU3::allocateBuffers(Camera *camera, Stream *stream)\n>  {\n> -\tIPU3CameraData *data = cameraData(camera);\n>  \tconst StreamConfiguration &cfg = stream->configuration();\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n\nYou could use a reference instead of a pointer.\n\n>  \n>  \tif (!cfg.bufferCount)\n>  \t\treturn -EINVAL;\n>  \n> -\tint ret = data->cio2_->exportBuffers(&stream->bufferPool());\n> +\tint ret = cio2->exportBuffers(&stream->bufferPool());\n>  \tif (ret) {\n>  \t\tLOG(IPU3, Error) << \"Failed to request memory\";\n>  \t\treturn ret;\n> @@ -138,9 +101,9 @@ int PipelineHandlerIPU3::allocateBuffers(Camera *camera, Stream *stream)\n>  \n>  int PipelineHandlerIPU3::freeBuffers(Camera *camera, Stream *stream)\n>  {\n> -\tIPU3CameraData *data = cameraData(camera);\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n>  \n> -\tint ret = data->cio2_->releaseBuffers();\n> +\tint ret = cio2->releaseBuffers();\n>  \tif (ret) {\n>  \t\tLOG(IPU3, Error) << \"Failed to release memory\";\n>  
\t\treturn ret;\n> @@ -151,10 +114,10 @@ int PipelineHandlerIPU3::freeBuffers(Camera *camera, Stream *stream)\n>  \n>  int PipelineHandlerIPU3::start(const Camera *camera)\n>  {\n> -\tIPU3CameraData *data = cameraData(camera);\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n>  \tint ret;\n>  \n> -\tret = data->cio2_->streamOn();\n> +\tret = cio2->streamOn();\n>  \tif (ret) {\n>  \t\tLOG(IPU3, Info) << \"Failed to start camera \" << camera->name();\n>  \t\treturn ret;\n> @@ -165,16 +128,16 @@ int PipelineHandlerIPU3::start(const Camera *camera)\n>  \n>  void PipelineHandlerIPU3::stop(const Camera *camera)\n>  {\n> -\tIPU3CameraData *data = cameraData(camera);\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n>  \n> -\tif (data->cio2_->streamOff())\n> +\tif (cio2->streamOff())\n>  \t\tLOG(IPU3, Info) << \"Failed to stop camera \" << camera->name();\n>  }\n>  \n>  int PipelineHandlerIPU3::queueRequest(const Camera *camera, Request *request)\n>  {\n> -\tIPU3CameraData *data = cameraData(camera);\n> -\tStream *stream = &data->stream_;\n> +\tCIO2Device *cio2 = &cameraData(camera)->cio2Unit_;\n> +\tStream *stream = &cameraData(camera)->stream_;\n>  \n>  \tBuffer *buffer = request->findBuffer(stream);\n>  \tif (!buffer) {\n> @@ -183,7 +146,7 @@ int PipelineHandlerIPU3::queueRequest(const Camera *camera, Request *request)\n>  \t\treturn -ENOENT;\n>  \t}\n>  \n> -\tdata->cio2_->queueBuffer(buffer);\n> +\tcio2->queueBuffer(buffer);\n>  \n>  \treturn 0;\n>  }\n> @@ -271,79 +234,22 @@ void PipelineHandlerIPU3::registerCameras()\n>  \t */\n>  \tunsigned int numCameras = 0;\n>  \tfor (unsigned int id = 0; id < 4; ++id) {\n> -\t\tstd::string csi2Name = \"ipu3-csi2 \" + std::to_string(id);\n> -\t\tMediaEntity *csi2 = cio2_->getEntityByName(csi2Name);\n> -\t\tint ret;\n> +\t\tstd::unique_ptr<IPU3CameraData> data =\n> +\t\t\tutils::make_unique<IPU3CameraData>();\n>  \n>  \t\t/*\n> -\t\t * This shall not happen, as the device enumerator matched\n> -\t\t * all 
entities described in the cio2_dm DeviceMatch.\n> -\t\t *\n> -\t\t * As this check is basically free, better stay safe than sorry.\n> +\t\t * If opening the CIO2 unit fails, the Camera instance won't\n> +\t\t * be registered and the 'data' unique pointers goes out of\n> +\t\t * scope and delete the objects they manage.\n>  \t\t */\n> -\t\tif (!csi2)\n> -\t\t\tcontinue;\n> -\n> -\t\tconst std::vector<MediaPad *> &pads = csi2->pads();\n> -\t\tif (pads.empty())\n> -\t\t\tcontinue;\n> -\n> -\t\t/* IPU3 CSI-2 receivers have a single sink pad at index 0. */\n> -\t\tMediaPad *sink = pads[0];\n> -\t\tconst std::vector<MediaLink *> &links = sink->links();\n> -\t\tif (links.empty())\n> -\t\t\tcontinue;\n> -\n> -\t\t/*\n> -\t\t * Verify that the receiver is connected to a sensor, enable\n> -\t\t * the media link between the two, and create a Camera with\n> -\t\t * a unique name.\n> -\t\t */\n> -\t\tMediaLink *link = links[0];\n> -\t\tMediaEntity *sensor = link->source()->entity();\n> -\t\tif (sensor->function() != MEDIA_ENT_F_CAM_SENSOR)\n> -\t\t\tcontinue;\n> -\n> -\t\tif (link->setEnabled(true))\n> +\t\tif (data->cio2Unit_.open(id, cio2_.get()))\n>  \t\t\tcontinue;\n>  \n> -\t\tstd::unique_ptr<IPU3CameraData> data = utils::make_unique<IPU3CameraData>();\n> -\n> -\t\tstd::string cameraName = sensor->name() + \" \" + std::to_string(id);\n> +\t\tstd::string cameraName = data->cio2Unit_.sensor_->deviceName()\n> +\t\t\t\t       + \" \" + std::to_string(id);\n>  \t\tstd::vector<Stream *> streams{ &data->stream_ };\n> -\t\tstd::shared_ptr<Camera> camera = Camera::create(this, cameraName, streams);\n> -\n> -\t\t/*\n> -\t\t * Create and open video devices and subdevices associated with\n> -\t\t * the camera.\n> -\t\t *\n> -\t\t * If any of these operations fails, the Camera instance won't\n> -\t\t * be registered. 
The 'camera' shared pointer and the 'data'\n> -\t\t * unique pointers go out of scope and delete the objects they\n> -\t\t * manage.\n> -\t\t */\n> -\t\tstd::string cio2Name = \"ipu3-cio2 \" + std::to_string(id);\n> -\t\tMediaEntity *cio2 = cio2_->getEntityByName(cio2Name);\n> -\t\tif (!cio2) {\n> -\t\t\tLOG(IPU3, Error)\n> -\t\t\t\t<< \"Failed to get entity '\" << cio2Name << \"'\";\n> -\t\t\tcontinue;\n> -\t\t}\n> -\n> -\t\tdata->cio2_ = new V4L2Device(cio2);\n> -\t\tret = data->cio2_->open();\n> -\t\tif (ret)\n> -\t\t\tcontinue;\n> -\n> -\t\tdata->sensor_ = new V4L2Subdevice(sensor);\n> -\t\tret = data->sensor_->open();\n> -\t\tif (ret)\n> -\t\t\tcontinue;\n> -\n> -\t\tdata->csi2_ = new V4L2Subdevice(csi2);\n> -\t\tret = data->csi2_->open();\n> -\t\tif (ret)\n> -\t\t\tcontinue;\n> +\t\tstd::shared_ptr<Camera> camera = Camera::create(this, cameraName,\n> +\t\t\t\t\t\t\t\tstreams);\n>  \n>  \t\tsetCameraData(camera.get(), std::move(data));\n>  \t\tregisterCamera(std::move(camera));\n> diff --git a/src/libcamera/pipeline/ipu3/ipu3.h b/src/libcamera/pipeline/ipu3/ipu3.h\n> index 48c2a3e16980..2a8b6f13b1c7 100644\n> --- a/src/libcamera/pipeline/ipu3/ipu3.h\n> +++ b/src/libcamera/pipeline/ipu3/ipu3.h\n> @@ -23,6 +23,40 @@\n>  \n>  namespace libcamera {\n>  \n> +struct IPU3DeviceFormat {\n> +\tunsigned int width;\n> +\tunsigned int height;\n> +\tunsigned int fourcc;\n> +};\n\nThis doesn't seem very IPU3-specific to me, could we use a more generic\nclass ?\n\n> +\n> +class CIO2Device\n> +{\n> +public:\n> +\tCIO2Device();\n> +\tCIO2Device(const CIO2Device &) = delete;\n\nShould you also delete the copy assignment operator ?\n\n> +\t~CIO2Device();\n> +\n> +\tint open(unsigned int index, MediaDevice *cio2MediaDev);\n\nHow about a close() (that you would call from the destructor) ?\n\n> +\tint sensorSetFormat(unsigned int width, unsigned int height,\n> +\t\t\t    V4L2SubdeviceFormat *format);\n> +\tint configure(const IPU3DeviceFormat &format);\n> +\tconst IPU3DeviceFormat 
&format() const;\n> +\n> +\tint exportBuffers(BufferPool *pool);\n> +\tint releaseBuffers();\n> +\n> +\tint queueBuffer(Buffer *buffer);\n> +\n> +\tint streamOn();\n> +\tint streamOff();\n> +\n> +\tV4L2Device *cio2_;\n> +\tV4L2Subdevice *csi2_;\n> +\tV4L2Subdevice *sensor_;\n> +\n> +\tIPU3DeviceFormat cio2Format_;\n\nShould the last four members be private ? Otherwise the cio2_ operations\nwrappers are a bit useless. As this is an internal class I'm tempted to\nkeep the members public and remove the wrappers.\n\nGiven the above comments regarding handling the smart decisions in the\nIPU3 pipeline handler, this class will become a bit of an empty shell, I\nwonder if it's really worth splitting it. You may want to instead expand\nthe IPU3CameraData() class a bit, and add a cio2 initialization function\nfor instance, containing more or less the code from CIO2Device::open(),\nas that should certainly be split out of\nPipelineHandlerIPU3::registerCameras().\n\n> +};\n> +\n>  class PipelineHandlerIPU3 : public PipelineHandler\n>  {\n>  public:\n> @@ -49,19 +83,8 @@ private:\n>  \tclass IPU3CameraData : public CameraData\n>  \t{\n>  \tpublic:\n> -\t\tIPU3CameraData()\n> -\t\t\t: cio2_(nullptr), csi2_(nullptr), sensor_(nullptr) {}\n> -\n> -\t\t~IPU3CameraData()\n> -\t\t{\n> -\t\t\tdelete cio2_;\n> -\t\t\tdelete csi2_;\n> -\t\t\tdelete sensor_;\n> -\t\t}\n> -\n> -\t\tV4L2Device *cio2_;\n> -\t\tV4L2Subdevice *csi2_;\n> -\t\tV4L2Subdevice *sensor_;\n> +\t\t/* Each camera has a CIO2 unit associated. 
*/\n\nSame comment as above.\n\n> +\t\tCIO2Device cio2Unit_;\n>  \n>  \t\tStream stream_;\n>  \t};\n> @@ -76,8 +99,6 @@ private:\n>  \n>  \tstd::shared_ptr<MediaDevice> cio2_;\n>  \tstd::shared_ptr<MediaDevice> imgu_;\n> -\tIMGUDevice imgu0_;\n> -\tIMGUDevice imgu1_;\n>  };\n>  \n>  } /* namespace libcamera */\n> diff --git a/src/libcamera/pipeline/ipu3/meson.build b/src/libcamera/pipeline/ipu3/meson.build\n> index 0ab766a257a0..fcb2d319d517 100644\n> --- a/src/libcamera/pipeline/ipu3/meson.build\n> +++ b/src/libcamera/pipeline/ipu3/meson.build\n> @@ -1,3 +1,4 @@\n>  libcamera_sources += files([\n> +    'cio2.cpp',\n>      'ipu3.cpp',\n>  ])"}]