[{"id":19119,"web_url":"https://patchwork.libcamera.org/comment/19119/","msgid":"<YShWV1FNwakkoR92@pendragon.ideasonboard.com>","date":"2021-08-27T03:04:55","subject":"Re: [libcamera-devel] [RFC PATCH 5/5] android: Run post processing\n\tin a separate thread","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Umang,\n\nThank you for the patch.\n\nI'll start with a few high-level comments.\n\nOn Fri, Aug 27, 2021 at 03:00:16AM +0530, Umang Jain wrote:\n> In CameraStream, introduce a new private PostProcessorWorker\n> class derived from Thread. The post-processor shall run in\n> this derived thread instance asynchronously.\n> \n> Running PostProcessor asynchronously should entail that all the\n> data needed by the PostProcessor should remain valid for the entire\n> duration of its run. In this instance, we currently need to ensure:\n> \n> - 'source' FrameBuffer for the post processor\n> - Camera result metadata\n> \n> Should be valid and saved with preserving the entire context. The\n> 'source' framebuffer is copied internally for streams other than\n> internal (since internal ones are allocated by CameraStream class\n> itself) and the copy is passed along to post-processor.\n> \n> Coming to preserving the context, a quick reference on how captured\n> results are sent to android framework. Completed captured results\n> should be sent using process_capture_result() callback. The order\n> of sending them should match the order the capture request\n> (camera3_capture_request_t), that was submitted to the HAL in the first\n> place.\n> \n> In order to enforce the ordering, we need to maintain a queue. When a\n> stream requires post-processing, we save the associated capture results\n> (via Camera3RequestDescriptor) and add it to the queue. All transient\n> completed captured results are queued as well. 
When the post-processing\n> results are available, we can simply start de-queuing all the queued\n> completed captured results and invoke process_capture_result() callback\n> on each of them.\n> \n> The context saving part is done by Camera3RequestDescriptor. It is\n> further extended to include data-structs to fulfill the demands of\n> async post-processing.\n\nTo ease review, do you think it would be possible to split this patch in\ntwo, with a first part that starts making use of the signals but doesn't\nmove the post-processor to a separate thread ?\n\n> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>\n> ---\n>  src/android/camera_device.cpp | 92 +++++++++++++++++++++++++++++++----\n>  src/android/camera_device.h   | 21 +++++++-\n>  src/android/camera_stream.cpp | 35 ++++++++++---\n>  src/android/camera_stream.h   | 40 +++++++++++++++\n>  4 files changed, 170 insertions(+), 18 deletions(-)\n> \n> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp\n> index fea59ce6..08237187 100644\n> --- a/src/android/camera_device.cpp\n> +++ b/src/android/camera_device.cpp\n> @@ -960,6 +960,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n>  \t\t * and add them to the Request if required.\n>  \t\t */\n>  \t\tFrameBuffer *buffer = nullptr;\n> +\t\tdescriptor.srcInternalBuffer_ = nullptr;\n>  \t\tswitch (cameraStream->type()) {\n>  \t\tcase CameraStream::Type::Mapped:\n>  \t\t\t/*\n> @@ -990,6 +991,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n>  \t\t\t * once it has been processed.\n>  \t\t\t */\n>  \t\t\tbuffer = cameraStream->getBuffer();\n> +\t\t\tdescriptor.srcInternalBuffer_ = buffer;\n>  \t\t\tLOG(HAL, Debug) << ss.str() << \" (internal)\";\n>  \t\t\tbreak;\n>  \t\t}\n> @@ -1149,25 +1151,95 @@ void CameraDevice::requestComplete(Request *request)\n>  \t\t\tcontinue;\n>  \t\t}\n>  \n> +\t\t/*\n> +\t\t * Save the current context of capture result and queue the\n> 
+\t\t * descriptor. cameraStream will process asynchronously, so\n> +\t\t * we can only send the results back after the processing has\n> +\t\t * been completed.\n> +\t\t */\n> +\t\tdescriptor.status_ = Camera3RequestDescriptor::NOT_READY;\n> +\t\tdescriptor.resultMetadata_ = std::move(resultMetadata);\n> +\t\tdescriptor.captureResult_ = captureResult;\n> +\n> +\t\tcameraStream->processComplete.connect(this, &CameraDevice::streamProcessingComplete);\n\nWhile signals can be connected and disconnected on demand, it often\nindicates a design issue. Is there a reason why you can't connect the\nsignal when the cameraStream instance is created, and keep it connected\nfor the whole lifetime of the stream ?\n\n>  \t\tint ret = cameraStream->process(src, *buffer.buffer,\n>  \t\t\t\t\t\tdescriptor.settings_,\n> -\t\t\t\t\t\tresultMetadata.get());\n> +\t\t\t\t\t\tdescriptor.resultMetadata_.get());\n\nYou have a race condition here, if cameraStream->process() runs in a\nseparate thread, it could complete and emit the processComplete signal\nbefore the code below gets executed.\n\n> +\t\tstd::unique_ptr<Camera3RequestDescriptor> reqDescriptor =\n> +\t\t\tstd::make_unique<Camera3RequestDescriptor>();\n> +\t\t*reqDescriptor = std::move(descriptor);\n> +\t\tqueuedDescriptor_.push_back(std::move(reqDescriptor));\n> +\n> +\t\treturn;\n> +\t}\n> +\n> +\tif (queuedDescriptor_.empty()) {\n> +\t\tcaptureResult.result = resultMetadata->get();\n> +\t\tcallbacks_->process_capture_result(callbacks_, &captureResult);\n> +\t} else {\n> +\t\t/*\n> +\t\t * Previous capture results waiting to be sent to framework\n> +\t\t * hence, queue the current capture results as well. 
After that,\n> +\t\t * check if any results are ready to be sent back to the\n> +\t\t * framework.\n> +\t\t */\n> +\t\tdescriptor.status_ = Camera3RequestDescriptor::READY_SUCCESS;\n> +\t\tdescriptor.resultMetadata_ = std::move(resultMetadata);\n> +\t\tdescriptor.captureResult_ = captureResult;\n> +\t\tstd::unique_ptr<Camera3RequestDescriptor> reqDescriptor =\n> +\t\t\tstd::make_unique<Camera3RequestDescriptor>();\n> +\t\t*reqDescriptor = std::move(descriptor);\n> +\t\tqueuedDescriptor_.push_back(std::move(reqDescriptor));\n> +\n> +\t\twhile (!queuedDescriptor_.empty()) {\n> +\t\t\tstd::unique_ptr<Camera3RequestDescriptor> &d = queuedDescriptor_.front();\n> +\t\t\tif (d->status_ != Camera3RequestDescriptor::NOT_READY) {\n> +\t\t\t\td->captureResult_.result = d->resultMetadata_->get();\n> +\t\t\t\tcallbacks_->process_capture_result(callbacks_, &(d->captureResult_));\n> +\t\t\t\tqueuedDescriptor_.pop_front();\n> +\t\t\t} else {\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t}\n> +}\n> +\n> +void CameraDevice::streamProcessingComplete(CameraStream *cameraStream,\n> +\t\t\t\t\t    CameraStream::ProcessStatus status,\n> +\t\t\t\t\t    CameraMetadata *resultMetadata)\n> +{\n> +\tcameraStream->processComplete.disconnect(this, &CameraDevice::streamProcessingComplete);\n> +\n> +\tfor (auto &d : queuedDescriptor_) {\n> +\t\tif (d->resultMetadata_.get() != resultMetadata)\n> +\t\t\tcontinue;\n> +\n>  \t\t/*\n>  \t\t * Return the FrameBuffer to the CameraStream now that we're\n>  \t\t * done processing it.\n>  \t\t */\n>  \t\tif (cameraStream->type() == CameraStream::Type::Internal)\n> -\t\t\tcameraStream->putBuffer(src);\n> -\n> -\t\tif (ret) {\n> -\t\t\tbuffer.status = CAMERA3_BUFFER_STATUS_ERROR;\n> -\t\t\tnotifyError(descriptor.frameNumber_, buffer.stream,\n> -\t\t\t\t    CAMERA3_MSG_ERROR_BUFFER);\n> +\t\t\tcameraStream->putBuffer(d->srcInternalBuffer_);\n> +\n> +\t\tif (status == CameraStream::ProcessStatus::Success) {\n> +\t\t\td->status_ = 
Camera3RequestDescriptor::READY_SUCCESS;\n> +\t\t} else {\n> +\t\t\t/* \\todo this is clumsy error handling, be better. */\n> +\t\t\td->status_ = Camera3RequestDescriptor::READY_FAILED;\n> +\t\t\tfor (camera3_stream_buffer_t &buffer : d->buffers_) {\n> +\t\t\t\tCameraStream *cs = static_cast<CameraStream *>(buffer.stream->priv);\n> +\n> +\t\t\t\tif (cs->camera3Stream().format != HAL_PIXEL_FORMAT_BLOB)\n> +\t\t\t\t\tcontinue;\n> +\n> +\t\t\t\tbuffer.status = CAMERA3_BUFFER_STATUS_ERROR;\n> +\t\t\t\tnotifyError(d->frameNumber_, buffer.stream,\n> +\t\t\t\t\t    CAMERA3_MSG_ERROR_BUFFER);\n> +\t\t\t}\n>  \t\t}\n> -\t}\n>  \n> -\tcaptureResult.result = resultMetadata->get();\n> -\tcallbacks_->process_capture_result(callbacks_, &captureResult);\n> +\t\treturn;\n> +\t}\n>  }\n>  \n>  std::string CameraDevice::logPrefix() const\n> diff --git a/src/android/camera_device.h b/src/android/camera_device.h\n> index dd9aebba..4e4bb970 100644\n> --- a/src/android/camera_device.h\n> +++ b/src/android/camera_device.h\n> @@ -7,6 +7,7 @@\n>  #ifndef __ANDROID_CAMERA_DEVICE_H__\n>  #define __ANDROID_CAMERA_DEVICE_H__\n>  \n> +#include <deque>\n>  #include <map>\n>  #include <memory>\n>  #include <mutex>\n> @@ -81,6 +82,20 @@ private:\n>  \t\tstd::vector<std::unique_ptr<libcamera::FrameBuffer>> frameBuffers_;\n>  \t\tCameraMetadata settings_;\n>  \t\tstd::unique_ptr<CaptureRequest> request_;\n> +\n> +\t\t/*\n> +\t\t * Only set when a capture result needs to be queued before\n> +\t\t * completion via process_capture_result() callback.\n> +\t\t */\n> +\t\tenum processingStatus {\n> +\t\t\tNOT_READY,\n> +\t\t\tREADY_SUCCESS,\n> +\t\t\tREADY_FAILED,\n> +\t\t};\n> +\t\tstd::unique_ptr<CameraMetadata> resultMetadata_;\n> +\t\tcamera3_capture_result_t captureResult_;\n> +\t\tlibcamera::FrameBuffer *srcInternalBuffer_;\n> +\t\tprocessingStatus status_;\n>  \t};\n>  \n>  \tenum class State {\n> @@ -100,7 +115,9 @@ private:\n>  \tint processControls(Camera3RequestDescriptor *descriptor);\n>  
\tstd::unique_ptr<CameraMetadata> getResultMetadata(\n>  \t\tconst Camera3RequestDescriptor &descriptor) const;\n> -\n> +\tvoid streamProcessingComplete(CameraStream *cameraStream,\n> +\t\t\t\t      CameraStream::ProcessStatus status,\n> +\t\t\t\t      CameraMetadata *resultMetadata);\n>  \tunsigned int id_;\n>  \tcamera3_device_t camera3Device_;\n>  \n> @@ -128,6 +145,8 @@ private:\n>  \tint orientation_;\n>  \n>  \tCameraMetadata lastSettings_;\n> +\n> +\tstd::deque<std::unique_ptr<Camera3RequestDescriptor>> queuedDescriptor_;\n>  };\n>  \n>  #endif /* __ANDROID_CAMERA_DEVICE_H__ */\n> diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp\n> index bdcc7cf9..d5981c0e 100644\n> --- a/src/android/camera_stream.cpp\n> +++ b/src/android/camera_stream.cpp\n> @@ -55,6 +55,7 @@ CameraStream::CameraStream(CameraDevice *const cameraDevice,\n>  \t\t * is what we instantiate here.\n>  \t\t */\n>  \t\tpostProcessor_ = std::make_unique<PostProcessorJpeg>(cameraDevice_);\n> +\t\tppWorker_ = std::make_unique<PostProcessorWorker>(postProcessor_.get());\n>  \t}\n>  \n>  \tif (type == Type::Internal) {\n> @@ -108,22 +109,41 @@ int CameraStream::process(const libcamera::FrameBuffer *source,\n>  \tif (!postProcessor_)\n>  \t\treturn 0;\n>  \n> -\t/*\n> -\t * \\todo Buffer mapping and processing should be moved to a\n> -\t * separate thread.\n> -\t */\n> -\tCameraBuffer dest(camera3Dest, PROT_READ | PROT_WRITE);\n> -\tif (!dest.isValid()) {\n> +\t/* \\todo Should buffer mapping be moved to post-processor's thread? 
*/\n\nGood point :-)\n\n> +\tdest_ = std::make_unique<CameraBuffer>(camera3Dest, PROT_READ | PROT_WRITE);\n> +\tif (!dest_->isValid()) {\n>  \t\tLOG(HAL, Error) << \"Failed to map android blob buffer\";\n>  \t\treturn -EINVAL;\n>  \t}\n>  \n> -\treturn postProcessor_->process(source, &dest, requestMetadata, resultMetadata);\n> +\tif (type() != CameraStream::Type::Internal) {\n> +\t\t/*\n> +\t\t * The source buffer is owned by Request object which can be\n> +\t\t * reused for future capture request. Since post-processor will\n> +\t\t * run asynchrnously, we need to copy the source buffer and use\n> +\t\t * it as source data for post-processor.\n> +\t\t */\n> +\t\tsrcPlanes_.clear();\n> +\t\tfor (const auto &plane : source->planes())\n> +\t\t\tsrcPlanes_.push_back(plane);\n> +\n> +\t\tsrcFramebuffer_ = std::make_unique<libcamera::FrameBuffer>(srcPlanes_);\n> +\t\tsource = srcFramebuffer_.get();\n> +\t}\n\nThis will break if the next frame needs to be post-processed before the\ncurrent frame completes. You need a queue of frames to post-process, not\njust one. One option would be to store the data in\nCamera3RequestDescriptor instead, as that is the full context for a\nrequest, including all its frames. Some refactoring would be required,\nas the structure is currently private to CameraDevice.\n\n> +\n> +\tppWorker_->start();\n> +\n> +\treturn postProcessor_->invokeMethod(&PostProcessor::process,\n> +\t\t\t\t\t    ConnectionTypeQueued,\n> +\t\t\t\t\t    source, dest_.get(),\n> +\t\t\t\t\t    requestMetadata, resultMetadata);\n\nIs the CameraStream actually the right place to spawn a thread and\ndispatch processing jobs ? Brainstorming a bit, I wonder if this\nshouldn't be moved to the PostProcessor class instead, as the need for a\nthread depends on the type of post-processor. 
The current software-based\npost-processors definitely need a thread, but a post-processor that\nwould use a hardware-accelerated JPEG encoder would be asynchronous by\nnature (posting jobs to the device and being notified of their\ncompletion through callbacks) and may not need to be wrapped in a\nthread. It would actually also depend on how the hardware JPEG encoder\nwould be interfaced. If the post-processor were to deal with the kernel\ndevice directly, it would likely need an event loop, which we're missing\nin the HAL implementation. A thread would provide that event loop, but\nit could be best to use a single thread for all event-driven\npost-processors instead of one per post-processor. This is largely\ntheoretical though, we don't have hardware-backed post-processors today.\n\n>  }\n>  \n>  void CameraStream::handlePostProcessing(PostProcessor::Status status,\n>  \t\t\t\t\tCameraMetadata *resultMetadata)\n>  {\n> +\tppWorker_->exit();\n\nThreads won't consume many resources when they're idle, but starting and\nstopping a thread is a costly operation. 
It would be better to start the\nthread when starting the camera capture session.\n\n> +\n>  \tswitch (status) {\n>  \tcase PostProcessor::Status::Success:\n>  \t\tprocessComplete.emit(this, ProcessStatus::Success, resultMetadata);\n> @@ -134,6 +154,7 @@ void CameraStream::handlePostProcessing(PostProcessor::Status status,\n>  \tdefault:\n>  \t\tLOG(HAL, Error) << \"PostProcessor status invalid\";\n>  \t}\n> +\tsrcFramebuffer_.reset();\n>  }\n>  \n>  FrameBuffer *CameraStream::getBuffer()\n> diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h\n> index d54d3f58..567d896f 100644\n> --- a/src/android/camera_stream.h\n> +++ b/src/android/camera_stream.h\n> @@ -13,7 +13,9 @@\n>  \n>  #include <hardware/camera3.h>\n>  \n> +#include <libcamera/base/object.h>\n>  #include <libcamera/base/signal.h>\n> +#include <libcamera/base/thread.h>\n>  \n>  #include <libcamera/camera.h>\n>  #include <libcamera/framebuffer.h>\n> @@ -23,6 +25,7 @@\n>  \n>  #include \"post_processor.h\"\n>  \n> +class CameraBuffer;\n>  class CameraDevice;\n>  class CameraMetadata;\n>  \n> @@ -135,6 +138,38 @@ public:\n>  \tlibcamera::Signal<CameraStream *, ProcessStatus, CameraMetadata *> processComplete;\n>  \n>  private:\n> +\tclass PostProcessorWorker : public libcamera::Thread\n> +\t{\n> +\tpublic:\n> +\t\tPostProcessorWorker(PostProcessor *postProcessor)\n> +\t\t{\n> +\t\t\tpostProcessor->moveToThread(this);\n> +\t\t}\n\nAssuming we want to keep the thread inside the CameraStream, I think\nthis needs to be reworked a little bit. The post-processor, in this\ncase, isn't thread-aware, it gets run in a thread, but doesn't really\nknow much about that fact. However, in patch 1/5, you make PostProcessor\ninherit from Object to enable the invokeMethod() call. 
There's a design\nconflict between these two facts.\n\nI would propose instead keeping the PostProcessor unaware of threads,\nwithout inheriting from Object, and making PostProcessorWorker the class\nthat inherits from Object and is moved to the thread. The thread could\nstay in PostProcessorWorker by inheriting from both Object and Thread,\nbut that's a bit confusing, so I'd recommend adding a libcamera::Thread\nthread_ member variable to CameraStream for the thread. This is the\ndesign used by the IPA proxies, you can look at the generated code to\nsee how it's implemented (it's more readable than the templates). The\nworker would have a process() function, called from\nCameraStream::process() using invokeMethod(), and that worker's process()\nfunction would simply call PostProcessor::process() directly.\n\n> +\n> +\t\tvoid start()\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * \\todo [BLOCKER] !!!\n> +\t\t\t * On second shutter capture, this fails to start the\n> +\t\t\t * thread(again). No errors for first shutter capture.\n> +\t\t\t */\n> +\t\t\tThread::start();\n> +\t\t}\n\nWhy do you reimplement start(), if you only call the same function in\nthe parent class ?\n\n> +\n> +\t\tvoid stop()\n> +\t\t{\n> +\t\t\texit();\n> +\t\t\twait();\n> +\t\t}\n\nThis is never called.\n\n> +\n> +\tprotected:\n> +\t\tvoid run() override\n> +\t\t{\n> +\t\t\texec();\n> +\t\t\tdispatchMessages();\n> +\t\t}\n> +\t};\n> +\n>  \tvoid handlePostProcessing(PostProcessor::Status status,\n>  \t\t\t\t  CameraMetadata *resultMetadata);\n>  \n> @@ -152,6 +187,11 @@ private:\n>  \t */\n>  \tstd::unique_ptr<std::mutex> mutex_;\n>  \tstd::unique_ptr<PostProcessor> postProcessor_;\n> +\tstd::unique_ptr<PostProcessorWorker> ppWorker_;\n> +\n> +\tstd::unique_ptr<CameraBuffer> dest_;\n> +\tstd::unique_ptr<libcamera::FrameBuffer> srcFramebuffer_;\n> +\tstd::vector<libcamera::FrameBuffer::Plane> srcPlanes_;\n>  };\n>  \n>  #endif /* __ANDROID_CAMERA_STREAM__ 
*/","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 98B6BBD87C\n\tfor <parsemail@patchwork.libcamera.org>;\n\tFri, 27 Aug 2021 03:05:10 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 20C0A6893A;\n\tFri, 27 Aug 2021 05:05:10 +0200 (CEST)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 0C99768920\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 27 Aug 2021 05:05:09 +0200 (CEST)","from pendragon.ideasonboard.com (62-78-145-57.bb.dnainternet.fi\n\t[62.78.145.57])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 7C89B5A1;\n\tFri, 27 Aug 2021 05:05:08 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"d8pRNOzr\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1630033508;\n\tbh=D0zX7ljk44YHF08rFYyG4KcEvki9/TbJEe/5flwHteA=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=d8pRNOzre/cGCV1qT8YgBJuRjpcm3FfcRizU+n+ITp2qG8kZ1lzD78FvqpIvXkBq3\n\t2ChKNDsnYfQ/C9H/vVH07uW6jO36m22OckHw/+j1aXElKq6ifMT4puqrnghnp/xVFc\n\triyuEBZ/R1DqO+KMf1VOM+IjrKql0KhN6kURjsZY=","Date":"Fri, 27 Aug 2021 06:04:55 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Umang Jain <umang.jain@ideasonboard.com>","Message-ID":"<YShWV1FNwakkoR92@pendragon.ideasonboard.com>","References":"<20210826213016.1829504-1-umang.jain@ideasonboard.com>\n\t<20210826213016.1829504-6-umang.jain@ideasonboard.com>","MIME-Version":"1.0","Content-Type":"text/plain; 
charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20210826213016.1829504-6-umang.jain@ideasonboard.com>","Subject":"Re: [libcamera-devel] [RFC PATCH 5/5] android: Run post processing\n\tin a separate thread","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Cc":"libcamera-devel@lists.libcamera.org","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":19148,"web_url":"https://patchwork.libcamera.org/comment/19148/","msgid":"<2cd0eea7-2c13-1a7a-8859-1ba3a68490e5@ideasonboard.com>","date":"2021-08-27T14:00:56","subject":"Re: [libcamera-devel] [RFC PATCH 5/5] android: Run post processing\n\tin a separate thread","submitter":{"id":86,"url":"https://patchwork.libcamera.org/api/people/86/","name":"Umang Jain","email":"umang.jain@ideasonboard.com"},"content":"Hi Laurent\n\nOn 8/27/21 8:34 AM, Laurent Pinchart wrote:\n> Hi Umang,\n>\n> Thank you for the patch.\n>\n> I'll start with a few high-level comments.\nGreat, I was expecting exactly this.\n>\n> On Fri, Aug 27, 2021 at 03:00:16AM +0530, Umang Jain wrote:\n>> In CameraStream, introduce a new private PostProcessorWorker\n>> class derived from Thread. 
The post-processor shall run in\n>> this derived thread instance asynchronously.\n>>\n>> Running PostProcessor asynchronously should entail that all the\n>> data needed by the PostProcessor should remain valid for the entire\n>> duration of its run. In this instance, we currently need to ensure:\n>>\n>> - 'source' FrameBuffer for the post processor\n>> - Camera result metadata\n>>\n>> Should be valid and saved with preserving the entire context. The\n>> 'source' framebuffer is copied internally for streams other than\n>> internal (since internal ones are allocated by CameraStream class\n>> itself) and the copy is passed along to post-processor.\n>>\n>> Coming to preserving the context, a quick reference on how captured\n>> results are sent to android framework. Completed captured results\n>> should be sent using process_capture_result() callback. The order\n>> of sending them should match the order the capture request\n>> (camera3_capture_request_t), that was submitted to the HAL in the first\n>> place.\n>>\n>> In order to enforce the ordering, we need to maintain a queue. When a\n>> stream requires post-processing, we save the associated capture results\n>> (via Camera3RequestDescriptor) and add it to the queue. All transient\n>> completed captured results are queued as well. When the post-processing\n>> results are available, we can simply start de-queuing all the queued\n>> completed captured results and invoke process_capture_result() callback\n>> on each of them.\n>>\n>> The context saving part is done by Camera3RequestDescriptor. 
It is\n>> further extended to include data-structs to fulfill the demands of\n>> async post-processing.\n> To ease review, do you think it would be possible to split this patch in\n> two, with a first part that starts making use of the signals but doesn't\n> move the post-processor to a separate threads\n\n\nAh, indeed, we can break it in two: first make use of the signals with\nemission in the same thread, and then introduce the threading mechanism\nin a separate patch. (Why didn't I think of this?)\n\n>\n>> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>\n>> ---\n>>   src/android/camera_device.cpp | 92 +++++++++++++++++++++++++++++++----\n>>   src/android/camera_device.h   | 21 +++++++-\n>>   src/android/camera_stream.cpp | 35 ++++++++++---\n>>   src/android/camera_stream.h   | 40 +++++++++++++++\n>>   4 files changed, 170 insertions(+), 18 deletions(-)\n>>\n>> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp\n>> index fea59ce6..08237187 100644\n>> --- a/src/android/camera_device.cpp\n>> +++ b/src/android/camera_device.cpp\n>> @@ -960,6 +960,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n>>   \t\t * and add them to the Request if required.\n>>   \t\t */\n>>   \t\tFrameBuffer *buffer = nullptr;\n>> +\t\tdescriptor.srcInternalBuffer_ = nullptr;\n>>   \t\tswitch (cameraStream->type()) {\n>>   \t\tcase CameraStream::Type::Mapped:\n>>   \t\t\t/*\n>> @@ -990,6 +991,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n>>   \t\t\t * once it has been processed.\n>>   \t\t\t */\n>>   \t\t\tbuffer = cameraStream->getBuffer();\n>> +\t\t\tdescriptor.srcInternalBuffer_ = buffer;\n>>   \t\t\tLOG(HAL, Debug) << ss.str() << \" (internal)\";\n>>   \t\t\tbreak;\n>>   \t\t}\n>> @@ -1149,25 +1151,95 @@ void CameraDevice::requestComplete(Request *request)\n>>   \t\t\tcontinue;\n>>   \t\t}\n>>   \n>> +\t\t/*\n>> +\t\t * Save the current context of capture result and queue the\n>> +\t\t * 
descriptor. cameraStream will process asynchronously, so\n>> +\t\t * we can only send the results back after the processing has\n>> +\t\t * been completed.\n>> +\t\t */\n>> +\t\tdescriptor.status_ = Camera3RequestDescriptor::NOT_READY;\n>> +\t\tdescriptor.resultMetadata_ = std::move(resultMetadata);\n>> +\t\tdescriptor.captureResult_ = captureResult;\n>> +\n>> +\t\tcameraStream->processComplete.connect(this, &CameraDevice::streamProcessingComplete);\n> While signals can be connected and disconnected on demand, it often\n> indicates a design issue. Is there a reason why you can't connect the\n> signal when the cameraStream instance is created, and keep it connected\n> for the whole lifetime of the stream ?\n\n\nyes, I can keep it connected for the whole lifetime of the stream too...\n\n>\n>>   \t\tint ret = cameraStream->process(src, *buffer.buffer,\n>>   \t\t\t\t\t\tdescriptor.settings_,\n>> -\t\t\t\t\t\tresultMetadata.get());\n> You have a race condition here, if cameraStream->process() runs in a\n> separate thread, it could complete and emit the processComplete signal\n> before the code below gets executed.\n\n\nouch, that's not the right order. You are correct. I should queue the \ndescriptor first before process(). Good spot!\n\n>\n>> +\t\tstd::unique_ptr<Camera3RequestDescriptor> reqDescriptor =\n>> +\t\t\tstd::make_unique<Camera3RequestDescriptor>();\n>> +\t\t*reqDescriptor = std::move(descriptor);\n>> +\t\tqueuedDescriptor_.push_back(std::move(reqDescriptor));\n>> +\n>> +\t\treturn;\n>> +\t}\n>> +\n>> +\tif (queuedDescriptor_.empty()) {\n>> +\t\tcaptureResult.result = resultMetadata->get();\n>> +\t\tcallbacks_->process_capture_result(callbacks_, &captureResult);\n>> +\t} else {\n>> +\t\t/*\n>> +\t\t * Previous capture results waiting to be sent to framework\n>> +\t\t * hence, queue the current capture results as well. 
After that,\n>> +\t\t * check if any results are ready to be sent back to the\n>> +\t\t * framework.\n>> +\t\t */\n>> +\t\tdescriptor.status_ = Camera3RequestDescriptor::READY_SUCCESS;\n>> +\t\tdescriptor.resultMetadata_ = std::move(resultMetadata);\n>> +\t\tdescriptor.captureResult_ = captureResult;\n>> +\t\tstd::unique_ptr<Camera3RequestDescriptor> reqDescriptor =\n>> +\t\t\tstd::make_unique<Camera3RequestDescriptor>();\n>> +\t\t*reqDescriptor = std::move(descriptor);\n>> +\t\tqueuedDescriptor_.push_back(std::move(reqDescriptor));\n>> +\n>> +\t\twhile (!queuedDescriptor_.empty()) {\n>> +\t\t\tstd::unique_ptr<Camera3RequestDescriptor> &d = queuedDescriptor_.front();\n>> +\t\t\tif (d->status_ != Camera3RequestDescriptor::NOT_READY) {\n>> +\t\t\t\td->captureResult_.result = d->resultMetadata_->get();\n>> +\t\t\t\tcallbacks_->process_capture_result(callbacks_, &(d->captureResult_));\n>> +\t\t\t\tqueuedDescriptor_.pop_front();\n>> +\t\t\t} else {\n>> +\t\t\t\tbreak;\n>> +\t\t\t}\n>> +\t\t}\n>> +\t}\n>> +}\n>> +\n>> +void CameraDevice::streamProcessingComplete(CameraStream *cameraStream,\n>> +\t\t\t\t\t    CameraStream::ProcessStatus status,\n>> +\t\t\t\t\t    CameraMetadata *resultMetadata)\n>> +{\n>> +\tcameraStream->processComplete.disconnect(this, &CameraDevice::streamProcessingComplete);\n>> +\n>> +\tfor (auto &d : queuedDescriptor_) {\n>> +\t\tif (d->resultMetadata_.get() != resultMetadata)\n>> +\t\t\tcontinue;\n>> +\n>>   \t\t/*\n>>   \t\t * Return the FrameBuffer to the CameraStream now that we're\n>>   \t\t * done processing it.\n>>   \t\t */\n>>   \t\tif (cameraStream->type() == CameraStream::Type::Internal)\n>> -\t\t\tcameraStream->putBuffer(src);\n>> -\n>> -\t\tif (ret) {\n>> -\t\t\tbuffer.status = CAMERA3_BUFFER_STATUS_ERROR;\n>> -\t\t\tnotifyError(descriptor.frameNumber_, buffer.stream,\n>> -\t\t\t\t    CAMERA3_MSG_ERROR_BUFFER);\n>> +\t\t\tcameraStream->putBuffer(d->srcInternalBuffer_);\n>> +\n>> +\t\tif (status == CameraStream::ProcessStatus::Success) 
{\n>> +\t\t\td->status_ = Camera3RequestDescriptor::READY_SUCCESS;\n>> +\t\t} else {\n>> +\t\t\t/* \\todo this is clumsy error handling, be better. */\n>> +\t\t\td->status_ = Camera3RequestDescriptor::READY_FAILED;\n>> +\t\t\tfor (camera3_stream_buffer_t &buffer : d->buffers_) {\n>> +\t\t\t\tCameraStream *cs = static_cast<CameraStream *>(buffer.stream->priv);\n>> +\n>> +\t\t\t\tif (cs->camera3Stream().format != HAL_PIXEL_FORMAT_BLOB)\n>> +\t\t\t\t\tcontinue;\n>> +\n>> +\t\t\t\tbuffer.status = CAMERA3_BUFFER_STATUS_ERROR;\n>> +\t\t\t\tnotifyError(d->frameNumber_, buffer.stream,\n>> +\t\t\t\t\t    CAMERA3_MSG_ERROR_BUFFER);\n>> +\t\t\t}\n>>   \t\t}\n>> -\t}\n>>   \n>> -\tcaptureResult.result = resultMetadata->get();\n>> -\tcallbacks_->process_capture_result(callbacks_, &captureResult);\n>> +\t\treturn;\n>> +\t}\n>>   }\n>>   \n>>   std::string CameraDevice::logPrefix() const\n>> diff --git a/src/android/camera_device.h b/src/android/camera_device.h\n>> index dd9aebba..4e4bb970 100644\n>> --- a/src/android/camera_device.h\n>> +++ b/src/android/camera_device.h\n>> @@ -7,6 +7,7 @@\n>>   #ifndef __ANDROID_CAMERA_DEVICE_H__\n>>   #define __ANDROID_CAMERA_DEVICE_H__\n>>   \n>> +#include <deque>\n>>   #include <map>\n>>   #include <memory>\n>>   #include <mutex>\n>> @@ -81,6 +82,20 @@ private:\n>>   \t\tstd::vector<std::unique_ptr<libcamera::FrameBuffer>> frameBuffers_;\n>>   \t\tCameraMetadata settings_;\n>>   \t\tstd::unique_ptr<CaptureRequest> request_;\n>> +\n>> +\t\t/*\n>> +\t\t * Only set when a capture result needs to be queued before\n>> +\t\t * completion via process_capture_result() callback.\n>> +\t\t */\n>> +\t\tenum processingStatus {\n>> +\t\t\tNOT_READY,\n>> +\t\t\tREADY_SUCCESS,\n>> +\t\t\tREADY_FAILED,\n>> +\t\t};\n>> +\t\tstd::unique_ptr<CameraMetadata> resultMetadata_;\n>> +\t\tcamera3_capture_result_t captureResult_;\n>> +\t\tlibcamera::FrameBuffer *srcInternalBuffer_;\n>> +\t\tprocessingStatus status_;\n>>   \t};\n>>   \n>>   \tenum class State {\n>> @@ 
-100,7 +115,9 @@ private:\n>>   \tint processControls(Camera3RequestDescriptor *descriptor);\n>>   \tstd::unique_ptr<CameraMetadata> getResultMetadata(\n>>   \t\tconst Camera3RequestDescriptor &descriptor) const;\n>> -\n>> +\tvoid streamProcessingComplete(CameraStream *cameraStream,\n>> +\t\t\t\t      CameraStream::ProcessStatus status,\n>> +\t\t\t\t      CameraMetadata *resultMetadata);\n>>   \tunsigned int id_;\n>>   \tcamera3_device_t camera3Device_;\n>>   \n>> @@ -128,6 +145,8 @@ private:\n>>   \tint orientation_;\n>>   \n>>   \tCameraMetadata lastSettings_;\n>> +\n>> +\tstd::deque<std::unique_ptr<Camera3RequestDescriptor>> queuedDescriptor_;\n>>   };\n>>   \n>>   #endif /* __ANDROID_CAMERA_DEVICE_H__ */\n>> diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp\n>> index bdcc7cf9..d5981c0e 100644\n>> --- a/src/android/camera_stream.cpp\n>> +++ b/src/android/camera_stream.cpp\n>> @@ -55,6 +55,7 @@ CameraStream::CameraStream(CameraDevice *const cameraDevice,\n>>   \t\t * is what we instantiate here.\n>>   \t\t */\n>>   \t\tpostProcessor_ = std::make_unique<PostProcessorJpeg>(cameraDevice_);\n>> +\t\tppWorker_ = std::make_unique<PostProcessorWorker>(postProcessor_.get());\n>>   \t}\n>>   \n>>   \tif (type == Type::Internal) {\n>> @@ -108,22 +109,41 @@ int CameraStream::process(const libcamera::FrameBuffer *source,\n>>   \tif (!postProcessor_)\n>>   \t\treturn 0;\n>>   \n>> -\t/*\n>> -\t * \\todo Buffer mapping and processing should be moved to a\n>> -\t * separate thread.\n>> -\t */\n>> -\tCameraBuffer dest(camera3Dest, PROT_READ | PROT_WRITE);\n>> -\tif (!dest.isValid()) {\n>> +\t/* \\todo Should buffer mapping be moved to post-processor's thread? */\n> Good point :-)\n\numm, what does this indicate?\n\nI prefer to leave the mapping as it is, for now, with \\todo '?' \nconverted to a statement. 
We can decide on it later and it can be done on top?\n\n\n>\n>> +\tdest_ = std::make_unique<CameraBuffer>(camera3Dest, PROT_READ | PROT_WRITE);\n>> +\tif (!dest_->isValid()) {\n>>   \t\tLOG(HAL, Error) << \"Failed to map android blob buffer\";\n>>   \t\treturn -EINVAL;\n>>   \t}\n>>   \n>> -\treturn postProcessor_->process(source, &dest, requestMetadata, resultMetadata);\n>> +\tif (type() != CameraStream::Type::Internal) {\n>> +\t\t/*\n>> +\t\t * The source buffer is owned by the Request object, which can\n>> +\t\t * be reused for future capture requests. Since the post-processor\n>> +\t\t * will run asynchronously, we need to copy the source buffer and\n>> +\t\t * use it as the source data for the post-processor.\n>> +\t\t */\n>> +\t\tsrcPlanes_.clear();\n>> +\t\tfor (const auto &plane : source->planes())\n>> +\t\t\tsrcPlanes_.push_back(plane);\n>> +\n>> +\t\tsrcFramebuffer_ = std::make_unique<libcamera::FrameBuffer>(srcPlanes_);\n>> +\t\tsource = srcFramebuffer_.get();\n>> +\t}\n> This will break if the next frame needs to be post-processed before the\n> current frame completes. You need a queue of frames to post-process, not\n> just one. One option would be to store the data in\n> Camera3RequestDescriptor instead, as that's the full context for a\n\n\nAh, yes, this is thin ice.\n\nBut exposing Camera3RequestDescriptor data, I am not sure... maybe we \ncan just expose the frame data /only/. Something I'll meditate upon... \nbut I agree with the data storage location in Camera3RequestDescriptor.\n\nDid you notice, I am /copying/ the frame buffer 'source'? :-) Does the \nassociated comment make sense to you? Asking because Kieran and I had \nquite a bit of discussion on this just now.\n\n... In my head right now, I see that 'source' comes from \nlibcamera::Request in CameraDevice::requestComplete. So after the signal \nhandler, the libcamera::Request is disposed of or reused (I don't know, I \nhaven't seen the 'higher' context of what's happening with the request). 
In \nboth cases, the 'source' framebuffer (used for post-processing) might \nget invalidated (causing a segfault), hence the reason to copy the \nframebuffer for post-processing purposes.\n\nThis is the context in my head right now, I might be wrong. Does it make \nsense? I'll try to look around for the 'higher' context of the request, \nwhether or not it's getting reused or something else.\n\n> request, including all its frames. Some refactoring would be required,\n> as the structure is currently private to CameraDevice.\n>\n>> +\n>> +\tppWorker_->start();\n>> +\n>> +\treturn postProcessor_->invokeMethod(&PostProcessor::process,\n>> +\t\t\t\t\t    ConnectionTypeQueued,\n>> +\t\t\t\t\t    source, dest_.get(),\n>> +\t\t\t\t\t    requestMetadata, resultMetadata);\n> Is the CameraStream actually the right place to spawn a thread and\n> dispatch processing jobs? Brainstorming a bit, I wonder if this\n> shouldn't be moved to the PostProcessor class instead, as the need for a\n> thread depends on the type of post-processor. The current software-based\n> post-processors definitely need a thread, but a post-processor that\n> would use a hardware-accelerated JPEG encoder would be asynchronous by\n> nature (posting jobs to the device and being notified of their\n> completion through callbacks) and may not need to be wrapped in a\n> thread. It would actually also depend on how the hardware JPEG encoder\n\n\nYes, this bit needs discussion, but you certainly have more forward-looking \nideas than me. The way I envisioned it, we have 'n' post-processors \nin the future, and CameraStream is the one that creates the post-processor \ninstances.\n\nFor example, currently we have in the CameraStream constructor:\n\n             postProcessor_ = \nstd::make_unique<PostProcessorJpeg>(cameraDevice_);\n\nIn a future of multiple post-processors, I see that CameraStream decides \nwhich postProcessor_ it needs, depending on the context. 
Speaking from a \nhigh level, it feels CameraStream is the one that creates and manages \nthe post-processor objects. Hence, it should be the one to decide \nwhether to post-process asynchronously or synchronously, and \nthe post-processor implementation should just care about the \npost-processing-specific bits.\n\nSo the hardware-accelerated JPEG encoder you mentioned kind of fits \ninto this design, i.e. letting CameraStream decide if a post-processor \nneeds to run in a separate thread or not. As the h/w accelerated encoder \nis async by nature, the CameraStream shouldn't use a thread for this \npost-processor.\n\nThere can also be multiple post-processors, right, for a single \nCameraStream instance? One chaining into another... I would need to \nthink about that too, I guess.\n\nAnyway, this is my vision while writing the series. As you can already \nsee, it's limited and I am open to suggestions / experimenting with new \ndesign ideas.\n\n\n> would be interfaced. If the post-processor were to deal with the kernel\n> device directly, it would likely need an event loop, which we're missing\n> in the HAL implementation. A thread would provide that event loop, but\n> it could be best to use a single thread for all event-driven\n> post-processors instead of one per post-processor. This is largely\n> theoretical though, we don't have hardware-backed post-processors today.\n>\n>>   }\n>>   \n>>   void CameraStream::handlePostProcessing(PostProcessor::Status status,\n>>   \t\t\t\t\tCameraMetadata *resultMetadata)\n>>   {\n>> +\tppWorker_->exit();\n> Threads won't consume many resources when they're idle, but starting and\n> stopping a thread is a costly operation. 
It would be better to start the\n> thread when starting the camera capture session.\n\nOk, noted.\n\nAlso, I am sure I had stop() here; it seems a bad rebase/cherry-pick \nfiasco let the exit() creep in (from earlier implementations).\n\n>\n>> +\n>>   \tswitch (status) {\n>>   \tcase PostProcessor::Status::Success:\n>>   \t\tprocessComplete.emit(this, ProcessStatus::Success, resultMetadata);\n>> @@ -134,6 +154,7 @@ void CameraStream::handlePostProcessing(PostProcessor::Status status,\n>>   \tdefault:\n>>   \t\tLOG(HAL, Error) << \"PostProcessor status invalid\";\n>>   \t}\n>> +\tsrcFramebuffer_.reset();\n>>   }\n>>   \n>>   FrameBuffer *CameraStream::getBuffer()\n>> diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h\n>> index d54d3f58..567d896f 100644\n>> --- a/src/android/camera_stream.h\n>> +++ b/src/android/camera_stream.h\n>> @@ -13,7 +13,9 @@\n>>   \n>>   #include <hardware/camera3.h>\n>>   \n>> +#include <libcamera/base/object.h>\n>>   #include <libcamera/base/signal.h>\n>> +#include <libcamera/base/thread.h>\n>>   \n>>   #include <libcamera/camera.h>\n>>   #include <libcamera/framebuffer.h>\n>> @@ -23,6 +25,7 @@\n>>   \n>>   #include \"post_processor.h\"\n>>   \n>> +class CameraBuffer;\n>>   class CameraDevice;\n>>   class CameraMetadata;\n>>   \n>> @@ -135,6 +138,38 @@ public:\n>>   \tlibcamera::Signal<CameraStream *, ProcessStatus, CameraMetadata *> processComplete;\n>>   \n>>   private:\n>> +\tclass PostProcessorWorker : public libcamera::Thread\n>> +\t{\n>> +\tpublic:\n>> +\t\tPostProcessorWorker(PostProcessor *postProcessor)\n>> +\t\t{\n>> +\t\t\tpostProcessor->moveToThread(this);\n>> +\t\t}\n> Assuming we want to keep the thread inside the CameraStream, I think\n> this needs to be reworked a little bit. The post-processor, in this\n> case, isn't thread-aware, it gets run in a thread, but doesn't really\n> know much about that fact. 
However, in patch 1/5, you make PostProcessor\n> inherit from Object to enable the invokeMethod() call. There's a design\n> conflict between these two facts.\n>\n> I would propose instead keeping the PostProcessor unaware of threads,\n> without inheriting from Object, and making PostProcessorWorker the class\n> that inherits from Object and is moved to the thread. The thread could\n> stay in PostProcessorWorker by inheriting from both Object and Thread,\n> but that's a bit confusing, so I'd recommend adding a libcamera::Thread\n> thread_ member variable to CameraStream for the thread. This is the\n> design used by the IPA proxies, you can look at the generated code to\n\n\nOk, I have noted this down but am not commenting right now. I think I need \nto read the code bits again and visualize the flow first.\n\n> see how it's implemented (it's more readable than the templates). The\n> worker would have a process() function, called from\n> CameraStream::process() using invokeMethod(), and that worker process()\n> function would simply call PostProcessor::process() directly.\n>\n>> +\n>> +\t\tvoid start()\n>> +\t\t{\n>> +\t\t\t/*\n>> +\t\t\t * \\todo [BLOCKER] !!!\n>> +\t\t\t * On second shutter capture, this fails to start the\n>> +\t\t\t * thread (again). No errors for first shutter capture.\n>> +\t\t\t */\n>> +\t\t\tThread::start();\n>> +\t\t}\n> Why do you reimplement start(), if you only call the same function in\n> the parent class?\n\nOh, hmm :-S\n\nI think I had moveToThread() here for post-processor but it changed over \niterations. 
And I failed to clean up the ruins.\n\n>\n>> +\n>> +\t\tvoid stop()\n>> +\t\t{\n>> +\t\t\texit();\n>> +\t\t\twait();\n>> +\t\t}\n> This is never called.\n>\n>> +\n>> +\tprotected:\n>> +\t\tvoid run() override\n>> +\t\t{\n>> +\t\t\texec();\n>> +\t\t\tdispatchMessages();\n>> +\t\t}\n>> +\t};\n>> +\n>>   \tvoid handlePostProcessing(PostProcessor::Status status,\n>>   \t\t\t\t  CameraMetadata *resultMetadata);\n>>   \n>> @@ -152,6 +187,11 @@ private:\n>>   \t */\n>>   \tstd::unique_ptr<std::mutex> mutex_;\n>>   \tstd::unique_ptr<PostProcessor> postProcessor_;\n>> +\tstd::unique_ptr<PostProcessorWorker> ppWorker_;\n>> +\n>> +\tstd::unique_ptr<CameraBuffer> dest_;\n>> +\tstd::unique_ptr<libcamera::FrameBuffer> srcFramebuffer_;\n>> +\tstd::vector<libcamera::FrameBuffer::Plane> srcPlanes_;\n>>   };\n>>   \n>>   #endif /* __ANDROID_CAMERA_STREAM__ */","headers":{"From":"Umang Jain <umang.jain@ideasonboard.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Date":"Fri, 27 Aug 2021 19:30:56 +0530","Message-ID":"<2cd0eea7-2c13-1a7a-8859-1ba3a68490e5@ideasonboard.com>","In-Reply-To":"<YShWV1FNwakkoR92@pendragon.ideasonboard.com>","Subject":"Re: [libcamera-devel] [RFC PATCH 5/5] android: Run post processing\n\tin a separate thread"}}]