{"id":3389,"url":"https://patchwork.libcamera.org/api/1.1/patches/3389/?format=json","web_url":"https://patchwork.libcamera.org/patch/3389/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20200404011624.19728-1-laurent.pinchart@ideasonboard.com>","date":"2020-04-04T01:16:24","name":"[libcamera-devel,v4,12/11] libcamera: pipeline: simple: Support multiple capture video nodes","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"646839372eb296303e25a0f4fc33775c0b302833","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/1.1/people/2/?format=json","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/3389/mbox/","series":[{"id":796,"url":"https://patchwork.libcamera.org/api/1.1/series/796/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=796","date":"2020-04-04T00:44:27","name":"Simple pipeline handler","version":4,"mbox":"https://patchwork.libcamera.org/series/796/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/3389/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/3389/checks/","tags":{},"headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 8A08960409\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat,  4 Apr 2020 03:16:38 +0200 (CEST)","from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id E9550321;\n\tSat,  4 Apr 2020 03:16:37 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; 
dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"p4yPhB5y\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1585962998;\n\tbh=Gh6eGiCVZoiIvC6IvOZ9ipOxTaFpb00WKQf2eMwsTPI=;\n\th=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n\tb=p4yPhB5yu6Y8c1+zmmfLMisDKp8hD2m81dMQyDo7Dow27JHOps1Hkq4K8tOFVC85G\n\tFe/+OBDpbDV41NCDywCx/1ivSrWfHHhj3Bl55AbVBkPxvm+OYXV50jII+tmDv4mIfL\n\tx85W5NfGKkM6wajhJdi4kYGOhOq77rvNGolQAXKw=","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"libcamera-devel@lists.libcamera.org","Date":"Sat,  4 Apr 2020 04:16:24 +0300","Message-Id":"<20200404011624.19728-1-laurent.pinchart@ideasonboard.com>","X-Mailer":"git-send-email 2.24.1","In-Reply-To":"<20200404004438.17992-1-laurent.pinchart@ideasonboard.com>","References":"<20200404004438.17992-1-laurent.pinchart@ideasonboard.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH v4 12/11] libcamera: pipeline: simple:\n\tSupport multiple capture video nodes","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Sat, 04 Apr 2020 01:16:38 -0000"},"content":"The simple pipeline handler rejects devices that have multiple capture\nvideo nodes. 
There's no real reason to do so; a more dynamic approach is\npossible as the pipeline handler already locates the video device by\nwalking the media graph.\n\nRework the match sequence by skipping any check on the video nodes, and\ncreate the V4L2VideoDevice for the media entity at the end of the\npipeline when initializing the camera data. The V4L2VideoDevice\ninstances are managed by the pipeline handler itself, to avoid creating\nseparate instances in the camera data if multiple sensors are routed to\nthe same video device.\n\nSigned-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n---\n src/libcamera/pipeline/simple/simple.cpp | 136 +++++++++++++----------\n 1 file changed, 75 insertions(+), 61 deletions(-)\n\nAndrey, does this fix your issue with the qcom-camss?","diff":"diff --git a/src/libcamera/pipeline/simple/simple.cpp b/src/libcamera/pipeline/simple/simple.cpp\nindex 8e7dd091b4ab..b5f9177dd383 100644\n--- a/src/libcamera/pipeline/simple/simple.cpp\n+++ b/src/libcamera/pipeline/simple/simple.cpp\n@@ -57,8 +57,7 @@ static const SimplePipelineInfo supportedDevices[] = {\n class SimpleCameraData : public CameraData\n {\n public:\n-\tSimpleCameraData(PipelineHandler *pipe, MediaEntity *sensor,\n-\t\t\t MediaEntity *video);\n+\tSimpleCameraData(SimplePipelineHandler *pipe, MediaEntity *sensor);\n \n \tbool isValid() const { return sensor_ != nullptr; }\n \tstd::set<Stream *> streams() { return { &stream_ }; }\n@@ -82,6 +81,7 @@ public:\n \tStream stream_;\n \tstd::unique_ptr<CameraSensor> sensor_;\n \tstd::list<Entity> entities_;\n+\tV4L2VideoDevice *video_;\n \n \tstd::vector<Configuration> configs_;\n \tstd::map<PixelFormat, Configuration> formats_;\n@@ -129,7 +129,7 @@ public:\n \n \tbool match(DeviceEnumerator *enumerator) override;\n \n-\tV4L2VideoDevice *video() { return video_; }\n+\tV4L2VideoDevice *video(const MediaEntity *entity);\n \tV4L2Subdevice *subdev(const MediaEntity *entity);\n \tSimpleConverter *converter() { return converter_; 
}\n \n@@ -151,7 +151,7 @@ private:\n \tvoid converterDone(FrameBuffer *input, FrameBuffer *output);\n \n \tMediaDevice *media_;\n-\tV4L2VideoDevice *video_;\n+\tstd::map<const MediaEntity *, std::unique_ptr<V4L2VideoDevice>> videos_;\n \tstd::map<const MediaEntity *, V4L2Subdevice> subdevs_;\n \n \tSimpleConverter *converter_;\n@@ -166,8 +166,8 @@ private:\n  * Camera Data\n  */\n \n-SimpleCameraData::SimpleCameraData(PipelineHandler *pipe, MediaEntity *sensor,\n-\t\t\t\t   MediaEntity *video)\n+SimpleCameraData::SimpleCameraData(SimplePipelineHandler *pipe,\n+\t\t\t\t   MediaEntity *sensor)\n \t: CameraData(pipe)\n {\n \tint ret;\n@@ -179,8 +179,8 @@ SimpleCameraData::SimpleCameraData(PipelineHandler *pipe, MediaEntity *sensor,\n \tMediaEntity *source = sensor;\n \n \twhile (source) {\n-\t\t/* If we have reached the video node, we're done. */\n-\t\tif (source == video)\n+\t\t/* If we have reached a video node, we're done. */\n+\t\tif (source->function() == MEDIA_ENT_F_IO_V4L)\n \t\t\tbreak;\n \n \t\t/* Use the first output pad that has links. */\n@@ -224,7 +224,14 @@ SimpleCameraData::SimpleCameraData(PipelineHandler *pipe, MediaEntity *sensor,\n \t\t}\n \t}\n \n-\t/* We have a valid pipeline, create the camera sensor. 
*/\n+\t/*\n+\t * We have a valid pipeline, get the video device and create the camera\n+\t * sensor.\n+\t */\n+\tvideo_ = pipe->video(source);\n+\tif (!video_)\n+\t\treturn;\n+\n \tsensor_ = std::make_unique<CameraSensor>(sensor);\n \tret = sensor_->init();\n \tif (ret) {\n@@ -236,7 +243,6 @@ SimpleCameraData::SimpleCameraData(PipelineHandler *pipe, MediaEntity *sensor,\n int SimpleCameraData::init()\n {\n \tSimplePipelineHandler *pipe = static_cast<SimplePipelineHandler *>(pipe_);\n-\tV4L2VideoDevice *video = pipe->video();\n \tSimpleConverter *converter = pipe->converter();\n \tint ret;\n \n@@ -266,7 +272,7 @@ int SimpleCameraData::init()\n \t\t}\n \n \t\tstd::map<V4L2PixelFormat, std::vector<SizeRange>> videoFormats =\n-\t\t\tvideo->formats(format.mbus_code);\n+\t\t\tvideo_->formats(format.mbus_code);\n \n \t\tLOG(SimplePipeline, Debug)\n \t\t\t<< \"Adding configuration for \" << format.size.toString()\n@@ -285,7 +291,7 @@ int SimpleCameraData::init()\n \t\t * PixelFormat is achieved.\n \t\t */\n \t\tfor (const auto &videoFormat : videoFormats) {\n-\t\t\tPixelFormat pixelFormat = video->toPixelFormat(videoFormat.first);\n+\t\t\tPixelFormat pixelFormat = video_->toPixelFormat(videoFormat.first);\n \t\t\tif (!pixelFormat)\n \t\t\t\tcontinue;\n \n@@ -451,13 +457,12 @@ CameraConfiguration::Status SimpleCameraConfiguration::validate()\n  */\n \n SimplePipelineHandler::SimplePipelineHandler(CameraManager *manager)\n-\t: PipelineHandler(manager), video_(nullptr), converter_(nullptr)\n+\t: PipelineHandler(manager), converter_(nullptr)\n {\n }\n \n SimplePipelineHandler::~SimplePipelineHandler()\n {\n-\tdelete video_;\n \tdelete converter_;\n }\n \n@@ -503,6 +508,7 @@ int SimplePipelineHandler::configure(Camera *camera, CameraConfiguration *c)\n \tSimpleCameraConfiguration *config =\n \t\tstatic_cast<SimpleCameraConfiguration *>(c);\n \tSimpleCameraData *data = cameraData(camera);\n+\tV4L2VideoDevice *video = data->video_;\n \tStreamConfiguration &cfg = config->at(0);\n 
\tint ret;\n \n@@ -524,13 +530,13 @@ int SimplePipelineHandler::configure(Camera *camera, CameraConfiguration *c)\n \t\treturn ret;\n \n \t/* Configure the video node. */\n-\tV4L2PixelFormat videoFormat = video_->toV4L2PixelFormat(pipeConfig.pixelFormat);\n+\tV4L2PixelFormat videoFormat = video->toV4L2PixelFormat(pipeConfig.pixelFormat);\n \n \tV4L2DeviceFormat captureFormat = {};\n \tcaptureFormat.fourcc = videoFormat;\n \tcaptureFormat.size = cfg.size;\n \n-\tret = video_->setFormat(&captureFormat);\n+\tret = video->setFormat(&captureFormat);\n \tif (ret)\n \t\treturn ret;\n \n@@ -563,6 +569,7 @@ int SimplePipelineHandler::configure(Camera *camera, CameraConfiguration *c)\n int SimplePipelineHandler::exportFrameBuffers(Camera *camera, Stream *stream,\n \t\t\t\t\t      std::vector<std::unique_ptr<FrameBuffer>> *buffers)\n {\n+\tSimpleCameraData *data = cameraData(camera);\n \tunsigned int count = stream->configuration().bufferCount;\n \n \t/*\n@@ -572,23 +579,24 @@ int SimplePipelineHandler::exportFrameBuffers(Camera *camera, Stream *stream,\n \tif (useConverter_)\n \t\treturn converter_->exportBuffers(count, buffers);\n \telse\n-\t\treturn video_->exportBuffers(count, buffers);\n+\t\treturn data->video_->exportBuffers(count, buffers);\n }\n \n int SimplePipelineHandler::start(Camera *camera)\n {\n \tSimpleCameraData *data = cameraData(camera);\n+\tV4L2VideoDevice *video = data->video_;\n \tunsigned int count = data->stream_.configuration().bufferCount;\n \tint ret;\n \n \tif (useConverter_)\n-\t\tret = video_->allocateBuffers(count, &converterBuffers_);\n+\t\tret = video->allocateBuffers(count, &converterBuffers_);\n \telse\n-\t\tret = video_->importBuffers(count);\n+\t\tret = video->importBuffers(count);\n \tif (ret < 0)\n \t\treturn ret;\n \n-\tret = video_->streamOn();\n+\tret = video->streamOn();\n \tif (ret < 0) {\n \t\tstop(camera);\n \t\treturn ret;\n@@ -603,7 +611,7 @@ int SimplePipelineHandler::start(Camera *camera)\n \n \t\t/* Queue all internal buffers 
for capture. */\n \t\tfor (std::unique_ptr<FrameBuffer> &buffer : converterBuffers_)\n-\t\t\tvideo_->queueBuffer(buffer.get());\n+\t\t\tvideo->queueBuffer(buffer.get());\n \t}\n \n \tactiveCamera_ = camera;\n@@ -613,11 +621,14 @@ int SimplePipelineHandler::start(Camera *camera)\n \n void SimplePipelineHandler::stop(Camera *camera)\n {\n+\tSimpleCameraData *data = cameraData(camera);\n+\tV4L2VideoDevice *video = data->video_;\n+\n \tif (useConverter_)\n \t\tconverter_->stop();\n \n-\tvideo_->streamOff();\n-\tvideo_->releaseBuffers();\n+\tvideo->streamOff();\n+\tvideo->releaseBuffers();\n \n \tconverterBuffers_.clear();\n \tactiveCamera_ = nullptr;\n@@ -644,7 +655,7 @@ int SimplePipelineHandler::queueRequestDevice(Camera *camera, Request *request)\n \t\treturn 0;\n \t}\n \n-\treturn video_->queueBuffer(buffer);\n+\treturn data->video_->queueBuffer(buffer);\n }\n \n /* -----------------------------------------------------------------------------\n@@ -672,12 +683,8 @@ bool SimplePipelineHandler::match(DeviceEnumerator *enumerator)\n \tif (!media_)\n \t\treturn false;\n \n-\t/*\n-\t * Locate sensors and video nodes. We only support pipelines with at\n-\t * least one sensor and exactly one video capture node.\n-\t */\n+\t/* Locate the sensors. 
*/\n \tstd::vector<MediaEntity *> sensors;\n-\tstd::vector<MediaEntity *> videos;\n \n \tfor (MediaEntity *entity : media_->entities()) {\n \t\tswitch (entity->function()) {\n@@ -685,12 +692,6 @@ bool SimplePipelineHandler::match(DeviceEnumerator *enumerator)\n \t\t\tsensors.push_back(entity);\n \t\t\tbreak;\n \n-\t\tcase MEDIA_ENT_F_IO_V4L:\n-\t\t\tif (entity->pads().size() == 1 &&\n-\t\t\t    (entity->pads()[0]->flags() & MEDIA_PAD_FL_SINK))\n-\t\t\t\tvideos.push_back(entity);\n-\t\t\tbreak;\n-\n \t\tdefault:\n \t\t\tbreak;\n \t\t}\n@@ -701,26 +702,6 @@ bool SimplePipelineHandler::match(DeviceEnumerator *enumerator)\n \t\treturn false;\n \t}\n \n-\tif (videos.size() != 1) {\n-\t\tLOG(SimplePipeline, Error)\n-\t\t\t<< \"Pipeline with \" << videos.size()\n-\t\t\t<< \" video capture nodes is not supported\";\n-\t\treturn false;\n-\t}\n-\n-\t/* Locate and open the capture video node. */\n-\tvideo_ = new V4L2VideoDevice(videos[0]);\n-\tif (video_->open() < 0)\n-\t\treturn false;\n-\n-\tif (video_->caps().isMultiplanar()) {\n-\t\tLOG(SimplePipeline, Error)\n-\t\t\t<< \"V4L2 multiplanar devices are not supported\";\n-\t\treturn false;\n-\t}\n-\n-\tvideo_->bufferReady.connect(this, &SimplePipelineHandler::bufferReady);\n-\n \t/* Open the converter, if any. 
*/\n \tif (converter) {\n \t\tconverter_ = new SimpleConverter(converter);\n@@ -745,8 +726,7 @@ bool SimplePipelineHandler::match(DeviceEnumerator *enumerator)\n \n \tfor (MediaEntity *sensor : sensors) {\n \t\tstd::unique_ptr<SimpleCameraData> data =\n-\t\t\tstd::make_unique<SimpleCameraData>(this, sensor,\n-\t\t\t\t\t\t\t   videos[0]);\n+\t\t\tstd::make_unique<SimpleCameraData>(this, sensor);\n \t\tif (!data->isValid()) {\n \t\t\tLOG(SimplePipeline, Error)\n \t\t\t\t<< \"No valid pipeline for sensor '\"\n@@ -793,6 +773,35 @@ bool SimplePipelineHandler::match(DeviceEnumerator *enumerator)\n \treturn true;\n }\n \n+V4L2VideoDevice *SimplePipelineHandler::video(const MediaEntity *entity)\n+{\n+\t/*\n+\t * Return the V4L2VideoDevice corresponding to the media entity, either\n+\t * as a previously constructed device if available from the cache, or\n+\t * by constructing a new one.\n+\t */\n+\n+\tauto iter = videos_.find(entity);\n+\tif (iter != videos_.end())\n+\t\treturn iter->second.get();\n+\n+\tstd::unique_ptr<V4L2VideoDevice> video =\n+\t\tstd::make_unique<V4L2VideoDevice>(entity);\n+\tif (video->open() < 0)\n+\t\treturn nullptr;\n+\n+\tif (video->caps().isMultiplanar()) {\n+\t\tLOG(SimplePipeline, Error)\n+\t\t\t<< \"V4L2 multiplanar devices are not supported\";\n+\t\treturn nullptr;\n+\t}\n+\n+\tvideo->bufferReady.connect(this, &SimplePipelineHandler::bufferReady);\n+\n+\tauto element = videos_.emplace(entity, std::move(video));\n+\treturn element.first->second.get();\n+}\n+\n V4L2Subdevice *SimplePipelineHandler::subdev(const MediaEntity *entity)\n {\n \tauto iter = subdevs_.find(entity);\n@@ -808,6 +817,9 @@ V4L2Subdevice *SimplePipelineHandler::subdev(const MediaEntity *entity)\n \n void SimplePipelineHandler::bufferReady(FrameBuffer *buffer)\n {\n+\tASSERT(activeCamera_);\n+\tSimpleCameraData *data = cameraData(activeCamera_);\n+\n \t/*\n \t * If an error occurred during capture, or if the buffer was cancelled,\n \t * complete the request, even if the 
converter is in use as there's no\n@@ -816,7 +828,7 @@ void SimplePipelineHandler::bufferReady(FrameBuffer *buffer)\n \tif (buffer->metadata().status != FrameMetadata::FrameSuccess) {\n \t\tif (useConverter_) {\n \t\t\t/* Requeue the buffer for capture. */\n-\t\t\tvideo_->queueBuffer(buffer);\n+\t\t\tdata->video_->queueBuffer(buffer);\n \n \t\t\t/*\n \t\t\t * Get the next user-facing buffer to complete the\n@@ -842,7 +854,7 @@ void SimplePipelineHandler::bufferReady(FrameBuffer *buffer)\n \t */\n \tif (useConverter_) {\n \t\tif (converterQueue_.empty()) {\n-\t\t\tvideo_->queueBuffer(buffer);\n+\t\t\tdata->video_->queueBuffer(buffer);\n \t\t\treturn;\n \t\t}\n \n@@ -862,14 +874,16 @@ void SimplePipelineHandler::bufferReady(FrameBuffer *buffer)\n void SimplePipelineHandler::converterDone(FrameBuffer *input,\n \t\t\t\t\t  FrameBuffer *output)\n {\n-\t/* Complete the request. */\n \tASSERT(activeCamera_);\n+\tSimpleCameraData *data = cameraData(activeCamera_);\n+\n+\t/* Complete the request. */\n \tRequest *request = output->request();\n \tcompleteBuffer(activeCamera_, request, output);\n \tcompleteRequest(activeCamera_, request);\n \n \t/* Queue the input buffer back for capture. */\n-\tvideo_->queueBuffer(input);\n+\tdata->video_->queueBuffer(input);\n }\n \n REGISTER_PIPELINE_HANDLER(SimplePipelineHandler);\n","prefixes":["libcamera-devel","v4","12/11"]}