{"id":22120,"url":"https://patchwork.libcamera.org/api/1.1/patches/22120/?format=json","web_url":"https://patchwork.libcamera.org/patch/22120/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20241127092632.3145984-10-chenghaoyang@chromium.org>","date":"2024-11-27T09:25:59","name":"[v2,9/9] android: Support partial results","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"34c1cb969c26511e857777671b4687cd31d79b2f","submitter":{"id":117,"url":"https://patchwork.libcamera.org/api/1.1/people/117/?format=json","name":"Cheng-Hao Yang","email":"chenghaoyang@chromium.org"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/22120/mbox/","series":[{"id":4828,"url":"https://patchwork.libcamera.org/api/1.1/series/4828/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=4828","date":"2024-11-27T09:25:50","name":"Signal metadataAvailable and Android partial result","version":2,"mbox":"https://patchwork.libcamera.org/series/4828/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/22120/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/22120/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 65A83C3213\n\tfor <parsemail@patchwork.libcamera.org>;\n\tWed, 27 Nov 2024 09:27:02 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id D9EC6660DA;\n\tWed, 27 Nov 2024 10:27:01 +0100 (CET)","from 
mail-pg1-x52e.google.com (mail-pg1-x52e.google.com\n\t[IPv6:2607:f8b0:4864:20::52e])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id EA850660D1\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 27 Nov 2024 10:26:53 +0100 (CET)","by mail-pg1-x52e.google.com with SMTP id\n\t41be03b00d2f7-7fbc29b3145so459468a12.0\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 27 Nov 2024 01:26:53 -0800 (PST)","from chenghaoyang-low.c.googlers.com.com\n\t(27.247.221.35.bc.googleusercontent.com. [35.221.247.27])\n\tby smtp.gmail.com with ESMTPSA id\n\t41be03b00d2f7-7fbcbfc41f9sm8693027a12.8.2024.11.27.01.26.49\n\t(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n\tWed, 27 Nov 2024 01:26:50 -0800 (PST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key;\n\tunprotected) header.d=chromium.org header.i=@chromium.org\n\theader.b=\"n6+11Lh4\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=chromium.org; s=google; t=1732699612; x=1733304412;\n\tdarn=lists.libcamera.org; \n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:from:to:cc:subject:date\n\t:message-id:reply-to;\n\tbh=h4NeHvcJcw/iNl+9WLU3pCWOhShUrqZjicvR0HM/C9M=;\n\tb=n6+11Lh4fUEy4HPVW2HGQhI1jhLPSUVx2WGC5Uy3iI8bW3Gzxplu/RIyPIIT9kb5ms\n\t4qI7hUqiTockGKa1F2EGWn34OorXgsF2q/YGS5Ui4H+c/hXk6GrXA638Y+eix9fgrhDM\n\tambH9H/36QxDSP4DazuQspFQLHpAy37llRobY=","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20230601; t=1732699612; 
x=1733304412;\n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc\n\t:subject:date:message-id:reply-to;\n\tbh=h4NeHvcJcw/iNl+9WLU3pCWOhShUrqZjicvR0HM/C9M=;\n\tb=rt2mNhD3t6JcpvxAWxNjf2z/koh7AM8MW+P2xh3lVL3F27uNxy32OrJctazuPkjqSP\n\tDdYxWpsO2g/Um/p9AAM2HVsr+s4RppoNY5HvKZfDyml79+BpcpP6Q0YqxlhyhJgAdDJE\n\trDRzY7dWJNTUWEEiKbOHgCeQCh9+qbV8CoWfbSBnv0j+86zkNqP0PKO5tPkJas2dOlua\n\t7EwgxVza9sS/j+Q7vIjsv4ATS6hMNyl9+OZBbevNHLUc2YI9SnmEtUHng+ESvouf0nAM\n\tOFUZzw3EFrtkAzo3m5FVbrT7H5PPFjXMkd9hLDaNbC9/FGxf4A2Pnyg41L0KpTKVy20P\n\tjc/w==","X-Gm-Message-State":"AOJu0YxyogkgoU0b0UyeZmDbm4JGZKoxJcXXp9quG+4lCXOCh03meENu\n\tvBECNpQ9vZoPPfeLypqwLXsjvAniL/K0KOJEA9xBJFYFJwBY0nZtCjnhVq13K2uUJzJoj5/pvlI\n\t=","X-Gm-Gg":"ASbGncuxUsnRuNphpj4nFZJZFKRDI0xjGjR46ARnNF9AsVkk9+uEaGDP8kgoCcinw5H\n\tzGLzdNIpbQBIgO/7JWfKkDgPKiqvOHhp58XpfI/eNPm0Sp5YFq+QG9XAJeBMthpTblDjtHlm60e\n\tGnk84k2JL7vuA1/TdGmXl6FBRwqTlUCS/p3mCEWYiSTCVI96vl7C1O/FQVCLE3k4NufJ1eGop5B\n\tyXhPcUbxEAAMO2xGRTFli1NSPN5+OYkz4o+iTFJMVYOQbLCtd2U1ofWUpOsz4utt/QXSEoBE88A\n\tgIYEmg+/ZWFWxAmsV92nnJ1XvaTwvKd+ppeQv6bj7Sb+he5/RWN51g==","X-Google-Smtp-Source":"AGHT+IHyW5+aq2GchDfLw2B+xVi6QvHTr/Wa3l6PDNmnoaDNY3QtE4hij+hz1jbccypPmStM+omGdQ==","X-Received":"by 2002:a05:6a20:4393:b0:1e0:d5be:bf75 with SMTP id\n\tadf61e73a8af0-1e0e11badd8mr3569104637.17.1732699611400; \n\tWed, 27 Nov 2024 01:26:51 -0800 (PST)","From":"Harvey Yang <chenghaoyang@chromium.org>","To":"libcamera-devel@lists.libcamera.org","Cc":"Harvey Yang <chenghaoyang@chromium.org>,\n\tHan-Lin Chen <hanlinchen@chromium.org>","Subject":"[PATCH v2 9/9] android: Support partial results","Date":"Wed, 27 Nov 2024 09:25:59 +0000","Message-ID":"<20241127092632.3145984-10-chenghaoyang@chromium.org>","X-Mailer":"git-send-email 
2.47.0.338.g60cca15819-goog","In-Reply-To":"<20241127092632.3145984-1-chenghaoyang@chromium.org>","References":"<20241127092632.3145984-1-chenghaoyang@chromium.org>","MIME-Version":"1.0","Content-Type":"text/plain; charset=UTF-8","Content-Transfer-Encoding":"8bit","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"With bufferCompleted and metadataAvailable signals, CameraDevice can\nsupport partial results. 
This allows applications to get results\nearlier, especially for buffers that would be blocked by other streams.\n\nSigned-off-by: Han-Lin Chen <hanlinchen@chromium.org>\nCo-developed-by: Harvey Yang <chenghaoyang@chromium.org>\nSigned-off-by: Harvey Yang <chenghaoyang@chromium.org>\n---\n src/android/camera_capabilities.cpp      |  11 +-\n src/android/camera_capabilities.h        |   2 +\n src/android/camera_device.cpp            | 750 ++++++++++++++++-------\n src/android/camera_device.h              |  39 +-\n src/android/camera_request.cpp           |  54 +-\n src/android/camera_request.h             |  36 +-\n src/android/camera_stream.cpp            |   4 +-\n src/android/jpeg/post_processor_jpeg.cpp |   2 +-\n 8 files changed, 621 insertions(+), 277 deletions(-)","diff":"diff --git a/src/android/camera_capabilities.cpp b/src/android/camera_capabilities.cpp\nindex b161bc6b3..bb0a3b755 100644\n--- a/src/android/camera_capabilities.cpp\n+++ b/src/android/camera_capabilities.cpp\n@@ -223,6 +223,14 @@ std::vector<U> setMetadata(CameraMetadata *metadata, uint32_t tag,\n \n } /* namespace */\n \n+/**\n+ * \\var CameraCapabilities::kMaxMetadataPackIndex\n+ *\n+ * It defines how many sub-components a result will be composed of. This enables\n+ * partial results. It's currently identical to\n+ * ANDROID_REQUEST_PARTIAL_RESULT_COUNT.\n+ */\n+\n bool CameraCapabilities::validateManualSensorCapability()\n {\n \tconst char *noMode = \"Manual sensor capability unavailable: \";\n@@ -1416,9 +1424,8 @@ int CameraCapabilities::initializeStaticMetadata()\n \tstaticMetadata_->addEntry(ANDROID_SCALER_CROPPING_TYPE, croppingType);\n \n \t/* Request static metadata. */\n-\tint32_t partialResultCount = 1;\n \tstaticMetadata_->addEntry(ANDROID_REQUEST_PARTIAL_RESULT_COUNT,\n-\t\t\t\t  partialResultCount);\n+\t\t\t\t  kMaxMetadataPackIndex);\n \n \t{\n \t\t/* Default the value to 2 if not reported by the camera. 
*/\ndiff --git a/src/android/camera_capabilities.h b/src/android/camera_capabilities.h\nindex 56ac1efeb..b11f93241 100644\n--- a/src/android/camera_capabilities.h\n+++ b/src/android/camera_capabilities.h\n@@ -23,6 +23,8 @@\n class CameraCapabilities\n {\n public:\n+\tstatic constexpr int32_t kMaxMetadataPackIndex = 64;\n+\n \tCameraCapabilities() = default;\n \n \tint initialize(std::shared_ptr<libcamera::Camera> camera,\ndiff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp\nindex e085e18b2..f03440b79 100644\n--- a/src/android/camera_device.cpp\n+++ b/src/android/camera_device.cpp\n@@ -252,6 +252,8 @@ CameraDevice::CameraDevice(unsigned int id, std::shared_ptr<Camera> camera)\n \t  facing_(CAMERA_FACING_FRONT), orientation_(0)\n {\n \tcamera_->requestCompleted.connect(this, &CameraDevice::requestComplete);\n+\tcamera_->bufferCompleted.connect(this, &CameraDevice::bufferComplete);\n+\tcamera_->metadataAvailable.connect(this, &CameraDevice::metadataAvailable);\n \n \tmaker_ = \"libcamera\";\n \tmodel_ = \"cameraModel\";\n@@ -438,8 +440,9 @@ void CameraDevice::stop()\n \tcamera_->stop();\n \n \t{\n-\t\tMutexLocker descriptorsLock(descriptorsMutex_);\n-\t\tdescriptors_ = {};\n+\t\tMutexLocker descriptorsLock(pendingRequestMutex_);\n+\t\tpendingRequests_ = {};\n+\t\tpendingStreamBuffers_ = {};\n \t}\n \n \tstreams_.clear();\n@@ -860,16 +863,39 @@ int CameraDevice::processControls(Camera3RequestDescriptor *descriptor)\n \treturn 0;\n }\n \n+/*\n+ * abortRequest() is only called before the request is queued into the device,\n+ * i.e., there is no need to remove it from pendingRequests_ and\n+ * pendingStreamBuffers_.\n+ */\n void CameraDevice::abortRequest(Camera3RequestDescriptor *descriptor) const\n {\n-\tnotifyError(descriptor->frameNumber_, nullptr, CAMERA3_MSG_ERROR_REQUEST);\n+\t/*\n+\t * Since the failed buffers do not have to follow the strict ordering\n+\t * valid buffers do, and could be out-of-order with respect to valid\n+\t * buffers, 
it's safe to send the aborted result back to the framework\n+\t * immediately.\n+\t */\n+\tdescriptor->status_ = Camera3RequestDescriptor::Status::Error;\n+\tdescriptor->finalResult_ = std::make_unique<Camera3ResultDescriptor>(descriptor);\n \n-\tfor (auto &buffer : descriptor->buffers_)\n+\tCamera3ResultDescriptor *result = descriptor->finalResult_.get();\n+\n+\tresult->metadataPackIndex_ = 0;\n+\tfor (auto &buffer : descriptor->buffers_) {\n \t\tbuffer.status = StreamBuffer::Status::Error;\n+\t\tresult->buffers_.emplace_back(&buffer);\n+\t}\n \n-\tdescriptor->status_ = Camera3RequestDescriptor::Status::Error;\n+\t/*\n+\t * After CAMERA3_MSG_ERROR_REQUEST is notified, for a given frame,\n+\t * only process_capture_results with buffers of the status\n+\t * CAMERA3_BUFFER_STATUS_ERROR are allowed. No further notifies or\n+\t * process_capture_result with non-null metadata is allowed.\n+\t */\n+\tnotifyError(descriptor->frameNumber_, nullptr, CAMERA3_MSG_ERROR_REQUEST);\n \n-\tsendCaptureResult(descriptor);\n+\tsendCaptureResult(result);\n }\n \n bool CameraDevice::isValidRequest(camera3_capture_request_t *camera3Request) const\n@@ -1031,9 +1057,6 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n \t\t\t * */\n \t\t\tdescriptor->internalBuffers_[cameraStream] = frameBuffer;\n \t\t\tLOG(HAL, Debug) << ss.str() << \" (internal)\";\n-\n-\t\t\tdescriptor->pendingStreamsToProcess_.insert(\n-\t\t\t\t{ cameraStream, &buffer });\n \t\t\tbreak;\n \t\t}\n \n@@ -1066,8 +1089,6 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n \t\t\t\t<< cameraStream->configuration().pixelFormat << \"]\"\n \t\t\t\t<< \" (mapped)\";\n \n-\t\tdescriptor->pendingStreamsToProcess_.insert({ cameraStream, &buffer });\n-\n \t\t/*\n \t\t * Make sure the CameraStream this stream is mapped on has been\n \t\t * added to the request.\n@@ -1154,8 +1175,10 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t 
*camera3Reques\n \tRequest *request = descriptor->request_.get();\n \n \t{\n-\t\tMutexLocker descriptorsLock(descriptorsMutex_);\n-\t\tdescriptors_.push(std::move(descriptor));\n+\t\tMutexLocker descriptorsLock(pendingRequestMutex_);\n+\t\tfor (auto &buffer : descriptor->buffers_)\n+\t\t\tpendingStreamBuffers_[buffer.stream].push_back(&buffer);\n+\t\tpendingRequests_.emplace(std::move(descriptor));\n \t}\n \n \tcamera_->queueRequest(request);\n@@ -1163,132 +1186,279 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques\n \treturn 0;\n }\n \n-void CameraDevice::requestComplete(Request *request)\n+void CameraDevice::bufferComplete(libcamera::Request *request,\n+\t\t\t\t  libcamera::FrameBuffer *frameBuffer)\n {\n \tCamera3RequestDescriptor *descriptor =\n \t\treinterpret_cast<Camera3RequestDescriptor *>(request->cookie());\n \n-\t/*\n-\t * Prepare the capture result for the Android camera stack.\n-\t *\n-\t * The buffer status is set to Success and later changed to Error if\n-\t * post-processing/compression fails.\n-\t */\n+\tdescriptor->partialResults_.emplace_back(new Camera3ResultDescriptor(descriptor));\n+\tCamera3ResultDescriptor *camera3Result = descriptor->partialResults_.back().get();\n+\n \tfor (auto &buffer : descriptor->buffers_) {\n-\t\tCameraStream *stream = buffer.stream;\n+\t\tCameraStream *cameraStream = buffer.stream;\n+\t\tif (buffer.srcBuffer != frameBuffer &&\n+\t\t    buffer.frameBuffer.get() != frameBuffer)\n+\t\t\tcontinue;\n \n-\t\t/*\n-\t\t * Streams of type Direct have been queued to the\n-\t\t * libcamera::Camera and their acquire fences have\n-\t\t * already been waited on by the library.\n-\t\t *\n-\t\t * Acquire fences of streams of type Internal and Mapped\n-\t\t * will be handled during post-processing.\n-\t\t */\n-\t\tif (stream->type() == CameraStream::Type::Direct) {\n-\t\t\t/* If handling of the fence has failed restore buffer.fence. 
*/\n+\t\tbuffer.result = camera3Result;\n+\t\tcamera3Result->buffers_.emplace_back(&buffer);\n+\n+\t\tStreamBuffer::Status status = StreamBuffer::Status::Success;\n+\t\tif (frameBuffer->metadata().status != FrameMetadata::FrameSuccess) {\n+\t\t\tstatus = StreamBuffer::Status::Error;\n+\t\t}\n+\t\tsetBufferStatus(buffer, status);\n+\n+\t\tswitch (cameraStream->type()) {\n+\t\tcase CameraStream::Type::Direct: {\n+\t\t\tASSERT(buffer.frameBuffer.get() == frameBuffer);\n+\t\t\t/*\n+\t\t\t\t * Streams of type Direct have been queued to the\n+\t\t\t\t * libcamera::Camera and their acquire fences have\n+\t\t\t\t * already been waited on by the library.\n+\t\t\t\t */\n \t\t\tstd::unique_ptr<Fence> fence = buffer.frameBuffer->releaseFence();\n \t\t\tif (fence)\n \t\t\t\tbuffer.fence = fence->release();\n+\t\t\tbreak;\n+\t\t}\n+\t\tcase CameraStream::Type::Mapped:\n+\t\tcase CameraStream::Type::Internal:\n+\t\t\tASSERT(buffer.srcBuffer == frameBuffer);\n+\t\t\tif (status == StreamBuffer::Status::Error)\n+\t\t\t\tbreak;\n+\n+\t\t\tcamera3Result->pendingBuffersToProcess_.emplace_back(&buffer);\n+\n+\t\t\tif (cameraStream->isJpegStream()) {\n+\t\t\t\tgenerateJpegExifMetadata(descriptor, &buffer);\n+\n+\t\t\t\t/*\n+\t\t\t\t * Allocate for post-processor to fill\n+\t\t\t\t * in JPEG related metadata.\n+\t\t\t\t */\n+\t\t\t\tcamera3Result->resultMetadata_ = getJpegPartialResultMetadata();\n+\t\t\t}\n+\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tfor (auto iter = camera3Result->pendingBuffersToProcess_.begin();\n+\t     iter != camera3Result->pendingBuffersToProcess_.end();) {\n+\t\tStreamBuffer *buffer = *iter;\n+\t\tint ret = buffer->stream->process(buffer);\n+\t\tif (ret) {\n+\t\t\titer = camera3Result->pendingBuffersToProcess_.erase(iter);\n+\t\t\tsetBufferStatus(*buffer, StreamBuffer::Status::Error);\n+\t\t\tLOG(HAL, Error) << \"Failed to run post process of request \"\n+\t\t\t\t\t<< descriptor->frameNumber_;\n+\t\t} else {\n+\t\t\titer++;\n \t\t}\n-\t\tbuffer.status = 
StreamBuffer::Status::Success;\n \t}\n \n+\tif (camera3Result->pendingBuffersToProcess_.empty())\n+\t\tcheckAndCompleteReadyPartialResults(camera3Result);\n+}\n+\n+void CameraDevice::metadataAvailable(libcamera::Request *request,\n+\t\t\t\t     const libcamera::ControlList &metadata)\n+{\n+\tASSERT(!metadata.empty());\n+\n+\tCamera3RequestDescriptor *descriptor =\n+\t\treinterpret_cast<Camera3RequestDescriptor *>(request->cookie());\n+\n+\tdescriptor->partialResults_.emplace_back(new Camera3ResultDescriptor(descriptor));\n+\tCamera3ResultDescriptor *camera3Result = descriptor->partialResults_.back().get();\n+\n \t/*\n-\t * If the Request has failed, abort the request by notifying the error\n-\t * and complete the request with all buffers in error state.\n+\t * Notify shutter as soon as we have received SensorTimestamp.\n \t */\n-\tif (request->status() != Request::RequestComplete) {\n-\t\tLOG(HAL, Error) << \"Request \" << request->cookie()\n-\t\t\t\t<< \" not successfully completed: \"\n-\t\t\t\t<< request->status();\n+\tconst auto &timestamp = metadata.get(controls::SensorTimestamp);\n+\tif (timestamp) {\n+\t\tnotifyShutter(descriptor->frameNumber_, *timestamp);\n+\t\tLOG(HAL, Debug) << \"Request \" << request->cookie() << \" notifies shutter\";\n+\t}\n+\n+\tcamera3Result->resultMetadata_ = getPartialResultMetadata(metadata);\n+\n+\tcheckAndCompleteReadyPartialResults(camera3Result);\n+}\n+\n+void CameraDevice::requestComplete(Request *request)\n+{\n+\tCamera3RequestDescriptor *camera3Request =\n+\t\treinterpret_cast<Camera3RequestDescriptor *>(request->cookie());\n \n-\t\tdescriptor->status_ = Camera3RequestDescriptor::Status::Error;\n+\tswitch (request->status()) {\n+\tcase Request::RequestComplete:\n+\t\tcamera3Request->status_ = Camera3RequestDescriptor::Status::Success;\n+\t\tbreak;\n+\tcase Request::RequestCancelled:\n+\t\tcamera3Request->status_ = Camera3RequestDescriptor::Status::Error;\n+\t\tbreak;\n+\tcase Request::RequestPending:\n+\t\tLOG(HAL, Fatal) 
<< \"Try to complete an unfinished request\";\n+\t\tbreak;\n \t}\n \n+\tcamera3Request->finalResult_ = std::make_unique<Camera3ResultDescriptor>(camera3Request);\n+\tCamera3ResultDescriptor *result = camera3Request->finalResult_.get();\n+\n \t/*\n-\t * Notify shutter as soon as we have verified we have a valid request.\n-\t *\n-\t * \\todo The shutter event notification should be sent to the framework\n-\t * as soon as possible, earlier than request completion time.\n+\t * On Android, the final result with metadata has to set metadataPackIndex_ to\n+\t * CameraCapabilities::kMaxMetadataPackIndex, and should be returned in\n+\t * the submission order of the requests. Create a result as the final\n+\t * result, which is guaranteed to be sent in order by completeRequestDescriptor().\n+\t */\n+\tresult->resultMetadata_ = getFinalResultMetadata(camera3Request->settings_);\n+\tresult->metadataPackIndex_ = CameraCapabilities::kMaxMetadataPackIndex;\n+\n+\t/*\n+\t * We need to check whether there are partial results pending for\n+\t * post-processing, before we complete the request descriptor. Otherwise,\n+\t * the callback of post-processing will complete the request instead.\n \t */\n-\tuint64_t sensorTimestamp = static_cast<uint64_t>(request->metadata()\n-\t\t\t\t\t\t\t\t .get(controls::SensorTimestamp)\n-\t\t\t\t\t\t\t\t .value_or(0));\n-\tnotifyShutter(descriptor->frameNumber_, sensorTimestamp);\n+\tfor (auto &r : camera3Request->partialResults_)\n+\t\tif (!r->completed_)\n+\t\t\treturn;\n \n-\tLOG(HAL, Debug) << \"Request \" << request->cookie() << \" completed with \"\n-\t\t\t<< descriptor->request_->buffers().size() << \" streams\";\n+\tcompleteRequestDescriptor(camera3Request);\n+}\n \n+void CameraDevice::checkAndCompleteReadyPartialResults(Camera3ResultDescriptor *result)\n+{\n \t/*\n-\t * Generate the metadata associated with the captured buffers.\n+\t * Android requires that buffers for a given stream be returned in FIFO\n+\t * order. 
However, different streams are independent of each other, so\n+\t * it is acceptable and expected that the buffer for request 5 for\n+\t * stream A may be returned after the buffer for request 6 for stream\n+\t * B is. And it is acceptable that the result metadata for request 6\n+\t * for stream B is returned before the buffer for request 5 for stream\n+\t * A is. As a result, if all buffers of a result are the frontmost\n+\t * buffers of their streams, or the result contains no buffers, the result\n+\t * is allowed to be sent. Collect ready results to send in an order that\n+\t * follows the above rule.\n \t *\n-\t * Notify if the metadata generation has failed, but continue processing\n-\t * buffers and return an empty metadata pack.\n+\t * \\todo The reprocessing result can be returned ahead of the pending\n+\t * normal output results. But the FIFO ordering must be maintained for\n+\t * all reprocessing results. Track the reprocessing buffer's order\n+\t * independently when we have a reprocessing API.\n \t */\n-\tdescriptor->resultMetadata_ = getResultMetadata(*descriptor);\n-\tif (!descriptor->resultMetadata_) {\n-\t\tdescriptor->status_ = Camera3RequestDescriptor::Status::Error;\n+\tMutexLocker lock(pendingRequestMutex_);\n \n-\t\t/*\n-\t\t * The camera framework expects an empty metadata pack on error.\n-\t\t *\n-\t\t * \\todo Check that the post-processor code handles this situation\n-\t\t * correctly.\n-\t\t */\n-\t\tdescriptor->resultMetadata_ = std::make_unique<CameraMetadata>(0, 0);\n-\t}\n+\tpendingPartialResults_.emplace_front(result);\n+\tstd::list<Camera3ResultDescriptor *> readyResults;\n \n \t/*\n-\t * Queue all the post-processing streams request at once. The completion\n-\t * slot streamProcessingComplete() can only execute when we are out\n-\t * this critical section. This helps to handle synchronous errors here\n-\t * itself.\n+\t * Error buffers do not have to follow the strict ordering that valid\n+\t * buffers do. 
They're ready to be sent directly. Therefore, remove them\n+\t * from pendingStreamBuffers_ so they won't block the following valid buffers.\n \t */\n-\tauto iter = descriptor->pendingStreamsToProcess_.begin();\n-\twhile (iter != descriptor->pendingStreamsToProcess_.end()) {\n-\t\tCameraStream *stream = iter->first;\n-\t\tStreamBuffer *buffer = iter->second;\n+\tfor (auto &buffer : result->buffers_)\n+\t\tif (buffer->status == StreamBuffer::Status::Error)\n+\t\t\tpendingStreamBuffers_[buffer->stream].remove(buffer);\n \n-\t\tif (stream->isJpegStream()) {\n-\t\t\tgenerateJpegExifMetadata(descriptor, buffer);\n-\t\t}\n+\t/*\n+\t * Exhaustively collect the results which are ready to be sent.\n+\t */\n+\tbool keepChecking;\n+\tdo {\n+\t\tkeepChecking = false;\n+\t\tauto iter = pendingPartialResults_.begin();\n+\t\twhile (iter != pendingPartialResults_.end()) {\n+\t\t\t/*\n+\t\t\t * A result is considered ready when all of the valid\n+\t\t\t * buffers of the result are at the front of the pending\n+\t\t\t * buffers associated with their streams.\n+\t\t\t */\n+\t\t\tbool ready = true;\n+\t\t\tfor (auto &buffer : (*iter)->buffers_) {\n+\t\t\t\tif (buffer->status == StreamBuffer::Status::Error)\n+\t\t\t\t\tcontinue;\n \n-\t\tFrameBuffer *src = request->findBuffer(stream->stream());\n-\t\tif (!src) {\n-\t\t\tLOG(HAL, Error) << \"Failed to find a source stream buffer\";\n-\t\t\tsetBufferStatus(*buffer, StreamBuffer::Status::Error);\n-\t\t\titer = descriptor->pendingStreamsToProcess_.erase(iter);\n-\t\t\tcontinue;\n-\t\t}\n+\t\t\t\tauto &pendingBuffers = pendingStreamBuffers_[buffer->stream];\n \n-\t\t++iter;\n-\t\tint ret = stream->process(buffer);\n-\t\tif (ret) {\n-\t\t\tsetBufferStatus(*buffer, StreamBuffer::Status::Error);\n-\t\t\tdescriptor->pendingStreamsToProcess_.erase(stream);\n+\t\t\t\tASSERT(!pendingBuffers.empty());\n+\n+\t\t\t\tif (pendingBuffers.front() != buffer) {\n+\t\t\t\t\tready = false;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\t\t\t}\n+\n+\t\t\tif (!ready) 
{\n+\t\t\t\titer++;\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\n+\t\t\tfor (auto &buffer : (*iter)->buffers_)\n+\t\t\t\tif (buffer->status != StreamBuffer::Status::Error)\n+\t\t\t\t\tpendingStreamBuffers_[buffer->stream].pop_front();\n+\n+\t\t\t/* Keep checking since pendingStreamBuffers_ has been updated. */\n+\t\t\tkeepChecking = true;\n+\n+\t\t\treadyResults.emplace_back(*iter);\n+\t\t\titer = pendingPartialResults_.erase(iter);\n \t\t}\n+\t} while (keepChecking);\n+\n+\tlock.unlock();\n+\n+\tfor (auto &res : readyResults) {\n+\t\tcompletePartialResultDescriptor(res);\n \t}\n+}\n+\n+void CameraDevice::completePartialResultDescriptor(Camera3ResultDescriptor *result)\n+{\n+\tCamera3RequestDescriptor *request = result->request_;\n+\tresult->completed_ = true;\n+\n+\t/*\n+\t * Android requires the metadataPackIndex of a partial result to be\n+\t * set to 0 if the result contains only buffers. Otherwise, set it\n+\t * incrementally from 1 to kMaxMetadataPackIndex - 1.\n+\t */\n+\tif (result->resultMetadata_)\n+\t\tresult->metadataPackIndex_ = request->nextPartialResultIndex_++;\n+\telse\n+\t\tresult->metadataPackIndex_ = 0;\n+\n+\tsendCaptureResult(result);\n \n-\tif (descriptor->pendingStreamsToProcess_.empty())\n-\t\tcompleteDescriptor(descriptor);\n+\t/*\n+\t * The status is changed from Pending to Success or Error only\n+\t * when requestComplete() has been called. It's guaranteed that no\n+\t * more partial results will be added to the request and the final result\n+\t * is ready. 
In that case, if all partial results are completed, we can\n+\t * complete the request.\n+\t */\n+\tif (request->status_ == Camera3RequestDescriptor::Status::Pending)\n+\t\treturn;\n+\n+\tfor (auto &r : request->partialResults_)\n+\t\tif (!r->completed_)\n+\t\t\treturn;\n+\n+\tcompleteRequestDescriptor(request);\n }\n \n /**\n  * \\brief Complete the Camera3RequestDescriptor\n- * \\param[in] descriptor The Camera3RequestDescriptor that has completed\n+ * \\param[in] request The Camera3RequestDescriptor to complete\n  *\n- * The function marks the Camera3RequestDescriptor as 'complete'. It shall be\n- * called when all the streams in the Camera3RequestDescriptor have completed\n- * capture (or have been generated via post-processing) and the request is ready\n- * to be sent back to the framework.\n- *\n- * \\context This function is \\threadsafe.\n+ * The function shall complete the descriptor only when all of the partial\n+ * results have been sent back to the framework, and send the final result\n+ * according to the submission order of the requests.\n  */\n-void CameraDevice::completeDescriptor(Camera3RequestDescriptor *descriptor)\n+void CameraDevice::completeRequestDescriptor(Camera3RequestDescriptor *request)\n {\n-\tMutexLocker lock(descriptorsMutex_);\n-\tdescriptor->complete_ = true;\n+\trequest->complete_ = true;\n \n \tsendCaptureResults();\n }\n@@ -1304,15 +1474,23 @@ void CameraDevice::completeDescriptor(Camera3RequestDescriptor *descriptor)\n  * Stop iterating if the descriptor at the front of the queue is not complete.\n  *\n  * This function should never be called directly in the codebase. 
Use\n- * completeDescriptor() instead.\n+ * completeRequestDescriptor() instead.\n  */\n void CameraDevice::sendCaptureResults()\n {\n-\twhile (!descriptors_.empty() && !descriptors_.front()->isPending()) {\n-\t\tauto descriptor = std::move(descriptors_.front());\n-\t\tdescriptors_.pop();\n+\tMutexLocker descriptorsLock(pendingRequestMutex_);\n+\n+\twhile (!pendingRequests_.empty()) {\n+\t\tauto &descriptor = pendingRequests_.front();\n+\t\tif (!descriptor->complete_)\n+\t\t\tbreak;\n \n-\t\tsendCaptureResult(descriptor.get());\n+\t\t/*\n+\t\t * Android requires the final result of each request to be\n+\t\t * returned in submission order.\n+\t\t */\n+\t\tASSERT(descriptor->finalResult_);\n+\t\tsendCaptureResult(descriptor->finalResult_.get());\n \n \t\t/*\n \t\t * Call notify with CAMERA3_MSG_ERROR_RESULT to indicate some\n@@ -1323,18 +1501,20 @@ void CameraDevice::sendCaptureResults()\n \t\t */\n \t\tif (descriptor->status_ == Camera3RequestDescriptor::Status::Error)\n \t\t\tnotifyError(descriptor->frameNumber_, nullptr, CAMERA3_MSG_ERROR_RESULT);\n+\n+\t\tpendingRequests_.pop();\n \t}\n }\n \n-void CameraDevice::sendCaptureResult(Camera3RequestDescriptor *request) const\n+void CameraDevice::sendCaptureResult(Camera3ResultDescriptor *result) const\n {\n \tstd::vector<camera3_stream_buffer_t> resultBuffers;\n-\tresultBuffers.reserve(request->buffers_.size());\n+\tresultBuffers.reserve(result->buffers_.size());\n \n-\tfor (auto &buffer : request->buffers_) {\n+\tfor (auto &buffer : result->buffers_) {\n \t\tcamera3_buffer_status status = CAMERA3_BUFFER_STATUS_ERROR;\n \n-\t\tif (buffer.status == StreamBuffer::Status::Success)\n+\t\tif (buffer->status == StreamBuffer::Status::Success)\n \t\t\tstatus = CAMERA3_BUFFER_STATUS_OK;\n \n \t\t/*\n@@ -1343,22 +1523,20 @@ void CameraDevice::sendCaptureResult(Camera3RequestDescriptor *request) const\n \t\t * on the acquire fence in case we haven't done so\n \t\t * ourselves for any reason.\n \t\t 
*/\n-\t\tresultBuffers.push_back({ buffer.stream->camera3Stream(),\n-\t\t\t\t\t  buffer.camera3Buffer, status,\n-\t\t\t\t\t  -1, buffer.fence.release() });\n+\t\tresultBuffers.push_back({ buffer->stream->camera3Stream(),\n+\t\t\t\t\t  buffer->camera3Buffer, status,\n+\t\t\t\t\t  -1, buffer->fence.release() });\n \t}\n \n \tcamera3_capture_result_t captureResult = {};\n \n-\tcaptureResult.frame_number = request->frameNumber_;\n+\tcaptureResult.frame_number = result->request_->frameNumber_;\n \tcaptureResult.num_output_buffers = resultBuffers.size();\n \tcaptureResult.output_buffers = resultBuffers.data();\n+\tcaptureResult.partial_result = result->metadataPackIndex_;\n \n-\tif (request->status_ == Camera3RequestDescriptor::Status::Success)\n-\t\tcaptureResult.partial_result = 1;\n-\n-\tif (request->resultMetadata_)\n-\t\tcaptureResult.result = request->resultMetadata_->getMetadata();\n+\tif (result->resultMetadata_)\n+\t\tcaptureResult.result = result->resultMetadata_->getMetadata();\n \n \tcallbacks_->process_capture_result(callbacks_, &captureResult);\n }\n@@ -1371,10 +1549,6 @@ void CameraDevice::setBufferStatus(StreamBuffer &streamBuffer,\n \t\tnotifyError(streamBuffer.request->frameNumber_,\n \t\t\t    streamBuffer.stream->camera3Stream(),\n \t\t\t    CAMERA3_MSG_ERROR_BUFFER);\n-\n-\t\t/* Also set error status on entire request descriptor. */\n-\t\tstreamBuffer.request->status_ =\n-\t\t\tCamera3RequestDescriptor::Status::Error;\n \t}\n }\n \n@@ -1396,26 +1570,25 @@ void CameraDevice::streamProcessingCompleteDelegate(StreamBuffer *streamBuffer,\n  * \\param[in] streamBuffer The StreamBuffer for which processing is complete\n  * \\param[in] status Stream post-processing status\n  *\n- * This function is called from the post-processor's thread whenever a camera\n+ * This function is called from the camera's thread whenever a camera\n  * stream has finished post processing. 
The corresponding entry is dropped from\n- * the descriptor's pendingStreamsToProcess_ map.\n+ * the result's pendingBuffersToProcess_ list.\n  *\n- * If the pendingStreamsToProcess_ map is then empty, all streams requiring to\n- * be generated from post-processing have been completed. Mark the descriptor as\n- * complete using completeDescriptor() in that case.\n+ * If the pendingBuffersToProcess_ list is then empty, all buffers that require\n+ * post-processing have been completed.\n  */\n void CameraDevice::streamProcessingComplete(StreamBuffer *streamBuffer,\n \t\t\t\t\t    StreamBuffer::Status status)\n {\n \tsetBufferStatus(*streamBuffer, status);\n \n-\tCamera3RequestDescriptor *request = streamBuffer->request;\n+\tCamera3ResultDescriptor *result = streamBuffer->result;\n+\tresult->pendingBuffersToProcess_.remove(streamBuffer);\n \n-\trequest->pendingStreamsToProcess_.erase(streamBuffer->stream);\n-\tif (!request->pendingStreamsToProcess_.empty())\n+\tif (!result->pendingBuffersToProcess_.empty())\n \t\treturn;\n \n-\tcompleteDescriptor(streamBuffer->request);\n+\tcheckAndCompleteReadyPartialResults(result);\n }\n \n std::string CameraDevice::logPrefix() const\n@@ -1469,23 +1642,12 @@ void CameraDevice::generateJpegExifMetadata(Camera3RequestDescriptor *request,\n \tjpegExifMetadata->sensorSensitivityISO = 100;\n }\n \n-/*\n- * Produce a set of fixed result metadata.\n- */\n-std::unique_ptr<CameraMetadata>\n-CameraDevice::getResultMetadata(const Camera3RequestDescriptor &descriptor) const\n+std::unique_ptr<CameraMetadata> CameraDevice::getJpegPartialResultMetadata() const\n {\n-\tconst ControlList &metadata = descriptor.request_->metadata();\n-\tconst CameraMetadata &settings = descriptor.settings_;\n-\tcamera_metadata_ro_entry_t entry;\n-\tbool found;\n-\n \t/*\n-\t * \\todo Keep this in sync with the actual number of entries.\n-\t * Currently: 40 entries, 156 bytes\n+\t * Reserve more capacity for the JPEG metadata set by the 
post-processor.\n+\t * Currently: 8 entries, 82 bytes extra capacity.\n \t *\n-\t * Reserve more space for the JPEG metadata set by the post-processor.\n-\t * Currently:\n \t * ANDROID_JPEG_GPS_COORDINATES (double x 3) = 24 bytes\n \t * ANDROID_JPEG_GPS_PROCESSING_METHOD (byte x 32) = 32 bytes\n \t * ANDROID_JPEG_GPS_TIMESTAMP (int64) = 8 bytes\n@@ -1497,7 +1659,215 @@ CameraDevice::getResultMetadata(const Camera3RequestDescriptor &descriptor) cons\n \t * Total bytes for JPEG metadata: 82\n \t */\n \tstd::unique_ptr<CameraMetadata> resultMetadata =\n-\t\tstd::make_unique<CameraMetadata>(88, 166);\n+\t\tstd::make_unique<CameraMetadata>(8, 82);\n+\tif (!resultMetadata->isValid()) {\n+\t\tLOG(HAL, Error) << \"Failed to allocate result metadata\";\n+\t\treturn nullptr;\n+\t}\n+\n+\treturn resultMetadata;\n+}\n+\n+std::unique_ptr<CameraMetadata>\n+CameraDevice::getPartialResultMetadata(const ControlList &metadata) const\n+{\n+\t/*\n+\t * \\todo Keep this in sync with the actual number of entries.\n+\t *\n+\t * Reserve capacity for the metadata larger than 4 bytes which cannot\n+\t * be stored in entries.\n+\t * Currently: 6 entries, 40 bytes extra capacity.\n+\t *\n+\t * ANDROID_SENSOR_TIMESTAMP (int64) = 8 bytes\n+\t * ANDROID_REQUEST_PIPELINE_DEPTH (byte) = 1 byte\n+\t * ANDROID_SENSOR_EXPOSURE_TIME (int64) = 8 bytes\n+\t * ANDROID_CONTROL_AE_STATE (enum) = 4 bytes\n+\t * ANDROID_CONTROL_AF_STATE (enum) = 4 bytes\n+\t * ANDROID_SENSOR_SENSITIVITY (int32) = 4 bytes\n+\t * ANDROID_CONTROL_AWB_STATE (enum) = 4 bytes\n+\t * ANDROID_SENSOR_FRAME_DURATION (int64) = 8 bytes\n+\t * ANDROID_SCALER_CROP_REGION (int32 X 4) = 16 bytes\n+\t * ANDROID_SENSOR_TEST_PATTERN_MODE (enum) = 4 bytes\n+\t * Total bytes for capacity: 61\n+\t *\n+\t * ANDROID_STATISTICS_FACE_RECTANGLES (int32[]) = 4*4*n bytes\n+\t * ANDROID_STATISTICS_FACE_SCORES (byte[]) = n bytes\n+\t * ANDROID_STATISTICS_FACE_LANDMARKS (int32[]) = 4*2*n bytes\n+\t * ANDROID_STATISTICS_FACE_IDS (int32[]) = 4*n bytes\n+\t 
*\n+\t * \\todo Calculate the entries and capacity by the input ControlList.\n+\t */\n+\n+\tconst auto &faceDetectRectangles =\n+\t\tmetadata.get(controls::draft::FaceDetectFaceRectangles);\n+\n+\tsize_t entryCapacity = 10;\n+\tsize_t dataCapacity = 61;\n+\tif (faceDetectRectangles) {\n+\t\tentryCapacity += 4;\n+\t\tdataCapacity += faceDetectRectangles->size() * 29;\n+\t}\n+\n+\tstd::unique_ptr<CameraMetadata> resultMetadata =\n+\t\tstd::make_unique<CameraMetadata>(entryCapacity, dataCapacity);\n+\tif (!resultMetadata->isValid()) {\n+\t\tLOG(HAL, Error) << \"Failed to allocate result metadata\";\n+\t\treturn nullptr;\n+\t}\n+\n+\tif (faceDetectRectangles) {\n+\t\tstd::vector<int32_t> flatRectangles;\n+\t\tfor (const Rectangle &rect : *faceDetectRectangles) {\n+\t\t\tflatRectangles.push_back(rect.x);\n+\t\t\tflatRectangles.push_back(rect.y);\n+\t\t\tflatRectangles.push_back(rect.x + rect.width);\n+\t\t\tflatRectangles.push_back(rect.y + rect.height);\n+\t\t}\n+\t\tresultMetadata->addEntry(\n+\t\t\tANDROID_STATISTICS_FACE_RECTANGLES, flatRectangles);\n+\t}\n+\n+\tconst auto &faceDetectFaceScores =\n+\t\tmetadata.get(controls::draft::FaceDetectFaceScores);\n+\tif (faceDetectRectangles && faceDetectFaceScores) {\n+\t\tif (faceDetectFaceScores->size() != faceDetectRectangles->size()) {\n+\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face scores; \"\n+\t\t\t\t\t<< \"Expected: \" << faceDetectRectangles->size()\n+\t\t\t\t\t<< \", got: \" << faceDetectFaceScores->size();\n+\t\t}\n+\t\tresultMetadata->addEntry(ANDROID_STATISTICS_FACE_SCORES,\n+\t\t\t\t\t *faceDetectFaceScores);\n+\t}\n+\n+\tconst auto &faceDetectFaceLandmarks =\n+\t\tmetadata.get(controls::draft::FaceDetectFaceLandmarks);\n+\tif (faceDetectRectangles && faceDetectFaceLandmarks) {\n+\t\tsize_t expectedLandmarks = faceDetectRectangles->size() * 3;\n+\t\tif (faceDetectFaceLandmarks->size() != expectedLandmarks) {\n+\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face landmarks; 
\"\n+\t\t\t\t\t<< \"Expected: \" << expectedLandmarks\n+\t\t\t\t\t<< \", got: \" << faceDetectFaceLandmarks->size();\n+\t\t}\n+\n+\t\tstd::vector<int32_t> androidLandmarks;\n+\t\tfor (const Point &landmark : *faceDetectFaceLandmarks) {\n+\t\t\tandroidLandmarks.push_back(landmark.x);\n+\t\t\tandroidLandmarks.push_back(landmark.y);\n+\t\t}\n+\t\tresultMetadata->addEntry(\n+\t\t\tANDROID_STATISTICS_FACE_LANDMARKS, androidLandmarks);\n+\t}\n+\n+\tconst auto &faceDetectFaceIds = metadata.get(controls::draft::FaceDetectFaceIds);\n+\tif (faceDetectRectangles && faceDetectFaceIds) {\n+\t\tif (faceDetectFaceIds->size() != faceDetectRectangles->size()) {\n+\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face ids; \"\n+\t\t\t\t\t<< \"Expected: \" << faceDetectRectangles->size()\n+\t\t\t\t\t<< \", got: \" << faceDetectFaceIds->size();\n+\t\t}\n+\t\tresultMetadata->addEntry(ANDROID_STATISTICS_FACE_IDS, *faceDetectFaceIds);\n+\t}\n+\n+\t/* Add metadata tags reported by libcamera. */\n+\tconst auto &timestamp = metadata.get(controls::SensorTimestamp);\n+\tif (timestamp)\n+\t\tresultMetadata->addEntry(ANDROID_SENSOR_TIMESTAMP, *timestamp);\n+\n+\tconst auto &pipelineDepth = metadata.get(controls::draft::PipelineDepth);\n+\tif (pipelineDepth)\n+\t\tresultMetadata->addEntry(ANDROID_REQUEST_PIPELINE_DEPTH,\n+\t\t\t\t\t *pipelineDepth);\n+\n+\tif (metadata.contains(controls::EXPOSURE_TIME)) {\n+\t\tconst auto &exposureTime = metadata.get(controls::ExposureTime);\n+\t\tint64_t exposure_time = static_cast<int64_t>(exposureTime.value_or(33'333));\n+\t\tresultMetadata->addEntry(ANDROID_SENSOR_EXPOSURE_TIME, exposure_time * 1000ULL);\n+\t}\n+\n+\tif (metadata.contains(controls::draft::AE_STATE)) {\n+\t\tconst auto &aeState = metadata.get(controls::draft::AeState);\n+\t\tresultMetadata->addEntry(ANDROID_CONTROL_AE_STATE, aeState.value_or(0));\n+\t}\n+\n+\tif (metadata.contains(controls::AF_STATE)) {\n+\t\tconst auto &afState = 
metadata.get(controls::AfState);\n+\t\tresultMetadata->addEntry(ANDROID_CONTROL_AF_STATE, afState.value_or(0));\n+\t}\n+\n+\tif (metadata.contains(controls::ANALOGUE_GAIN)) {\n+\t\tconst auto &sensorSensitivity = metadata.get(controls::AnalogueGain).value_or(100);\n+\t\tresultMetadata->addEntry(ANDROID_SENSOR_SENSITIVITY, static_cast<int>(sensorSensitivity));\n+\t}\n+\n+\tconst auto &awbState = metadata.get(controls::draft::AwbState);\n+\tif (metadata.contains(controls::draft::AWB_STATE)) {\n+\t\tresultMetadata->addEntry(ANDROID_CONTROL_AWB_STATE, awbState.value_or(0));\n+\t}\n+\n+\tconst auto &frameDuration = metadata.get(controls::FrameDuration);\n+\tif (metadata.contains(controls::FRAME_DURATION)) {\n+\t\tresultMetadata->addEntry(ANDROID_SENSOR_FRAME_DURATION, frameDuration.value_or(33'333'333));\n+\t}\n+\n+\tconst auto &scalerCrop = metadata.get(controls::ScalerCrop);\n+\tif (scalerCrop) {\n+\t\tconst Rectangle &crop = *scalerCrop;\n+\t\tint32_t cropRect[] = {\n+\t\t\tcrop.x,\n+\t\t\tcrop.y,\n+\t\t\tstatic_cast<int32_t>(crop.width),\n+\t\t\tstatic_cast<int32_t>(crop.height),\n+\t\t};\n+\t\tresultMetadata->addEntry(ANDROID_SCALER_CROP_REGION, cropRect);\n+\t}\n+\n+\tconst auto &testPatternMode = metadata.get(controls::draft::TestPatternMode);\n+\tif (testPatternMode)\n+\t\tresultMetadata->addEntry(ANDROID_SENSOR_TEST_PATTERN_MODE,\n+\t\t\t\t\t *testPatternMode);\n+\n+\t/*\n+\t * Return the result metadata pack even if it is not valid: get() will\n+\t * return nullptr.\n+\t */\n+\tif (!resultMetadata->isValid()) {\n+\t\tLOG(HAL, Error) << \"Failed to construct result metadata\";\n+\t}\n+\n+\tif (resultMetadata->resized()) {\n+\t\tauto [entryCount, dataCount] = resultMetadata->usage();\n+\t\tLOG(HAL, Info)\n+\t\t\t<< \"Result metadata resized: \" << entryCount\n+\t\t\t<< \" entries and \" << dataCount << \" bytes used\";\n+\t}\n+\n+\treturn resultMetadata;\n+}\n+\n+/*\n+ * Produce a set of fixed result metadata.\n+ 
*/\n+std::unique_ptr<CameraMetadata>\n+CameraDevice::getFinalResultMetadata(const CameraMetadata &settings) const\n+{\n+\tcamera_metadata_ro_entry_t entry;\n+\tbool found;\n+\n+\t/*\n+\t * \\todo Retrieve metadata from corresponding libcamera controls.\n+\t * \\todo Keep this in sync with the actual number of entries.\n+\t *\n+\t * Reserve capacity for the metadata larger than 4 bytes which cannot\n+\t * be stored in entries.\n+\t * Currently: 31 entries, 16 bytes\n+\t *\n+\t * ANDROID_CONTROL_AE_TARGET_FPS_RANGE (int32 X 2) = 8 bytes\n+\t * ANDROID_SENSOR_ROLLING_SHUTTER_SKEW (int64) = 8 bytes\n+\t *\n+\t * Total bytes: 16\n+\t */\n+\tstd::unique_ptr<CameraMetadata> resultMetadata =\n+\t\tstd::make_unique<CameraMetadata>(31, 16);\n \tif (!resultMetadata->isValid()) {\n \t\tLOG(HAL, Error) << \"Failed to allocate result metadata\";\n \t\treturn nullptr;\n \t}\n@@ -1536,8 +1906,7 @@ CameraDevice::getResultMetadata(const Camera3RequestDescriptor &descriptor) cons\n \t\t\t\t\t entry.data.i32, 2);\n \n \tfound = settings.getEntry(ANDROID_CONTROL_AE_PRECAPTURE_TRIGGER, &entry);\n-\tvalue = found ? *entry.data.u8 :\n-\t\t\t(uint8_t)ANDROID_CONTROL_AE_PRECAPTURE_TRIGGER_IDLE;\n+\tvalue = found ? *entry.data.u8 : (uint8_t)ANDROID_CONTROL_AE_PRECAPTURE_TRIGGER_IDLE;\n \tresultMetadata->addEntry(ANDROID_CONTROL_AE_PRECAPTURE_TRIGGER, value);\n \n \tvalue = ANDROID_CONTROL_AE_STATE_CONVERGED;\n@@ -1620,95 +1989,6 @@ CameraDevice::getResultMetadata(const Camera3RequestDescriptor &descriptor) cons\n \tresultMetadata->addEntry(ANDROID_SENSOR_ROLLING_SHUTTER_SKEW,\n \t\t\t\t rolling_shutter_skew);\n \n-\t/* Add metadata tags reported by libcamera. 
*/\n-\tconst int64_t timestamp = metadata.get(controls::SensorTimestamp).value_or(0);\n-\tresultMetadata->addEntry(ANDROID_SENSOR_TIMESTAMP, timestamp);\n-\n-\tconst auto &pipelineDepth = metadata.get(controls::draft::PipelineDepth);\n-\tif (pipelineDepth)\n-\t\tresultMetadata->addEntry(ANDROID_REQUEST_PIPELINE_DEPTH,\n-\t\t\t\t\t *pipelineDepth);\n-\n-\tconst auto &exposureTime = metadata.get(controls::ExposureTime);\n-\tif (exposureTime)\n-\t\tresultMetadata->addEntry(ANDROID_SENSOR_EXPOSURE_TIME,\n-\t\t\t\t\t *exposureTime * 1000ULL);\n-\n-\tconst auto &frameDuration = metadata.get(controls::FrameDuration);\n-\tif (frameDuration)\n-\t\tresultMetadata->addEntry(ANDROID_SENSOR_FRAME_DURATION,\n-\t\t\t\t\t *frameDuration * 1000);\n-\n-\tconst auto &faceDetectRectangles =\n-\t\tmetadata.get(controls::draft::FaceDetectFaceRectangles);\n-\tif (faceDetectRectangles) {\n-\t\tstd::vector<int32_t> flatRectangles;\n-\t\tfor (const Rectangle &rect : *faceDetectRectangles) {\n-\t\t\tflatRectangles.push_back(rect.x);\n-\t\t\tflatRectangles.push_back(rect.y);\n-\t\t\tflatRectangles.push_back(rect.x + rect.width);\n-\t\t\tflatRectangles.push_back(rect.y + rect.height);\n-\t\t}\n-\t\tresultMetadata->addEntry(\n-\t\t\tANDROID_STATISTICS_FACE_RECTANGLES, flatRectangles);\n-\t}\n-\n-\tconst auto &faceDetectFaceScores =\n-\t\tmetadata.get(controls::draft::FaceDetectFaceScores);\n-\tif (faceDetectRectangles && faceDetectFaceScores) {\n-\t\tif (faceDetectFaceScores->size() != faceDetectRectangles->size()) {\n-\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face scores; \"\n-\t\t\t\t\t<< \"Expected: \" << faceDetectRectangles->size()\n-\t\t\t\t\t<< \", got: \" << faceDetectFaceScores->size();\n-\t\t}\n-\t\tresultMetadata->addEntry(ANDROID_STATISTICS_FACE_SCORES,\n-\t\t\t\t\t *faceDetectFaceScores);\n-\t}\n-\n-\tconst auto &faceDetectFaceLandmarks =\n-\t\tmetadata.get(controls::draft::FaceDetectFaceLandmarks);\n-\tif (faceDetectRectangles && faceDetectFaceLandmarks) 
{\n-\t\tsize_t expectedLandmarks = faceDetectRectangles->size() * 3;\n-\t\tif (faceDetectFaceLandmarks->size() != expectedLandmarks) {\n-\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face landmarks; \"\n-\t\t\t\t\t<< \"Expected: \" << expectedLandmarks\n-\t\t\t\t\t<< \", got: \" << faceDetectFaceLandmarks->size();\n-\t\t}\n-\n-\t\tstd::vector<int32_t> androidLandmarks;\n-\t\tfor (const Point &landmark : *faceDetectFaceLandmarks) {\n-\t\t\tandroidLandmarks.push_back(landmark.x);\n-\t\t\tandroidLandmarks.push_back(landmark.y);\n-\t\t}\n-\t\tresultMetadata->addEntry(\n-\t\t\tANDROID_STATISTICS_FACE_LANDMARKS, androidLandmarks);\n-\t}\n-\n-\tconst auto &faceDetectFaceIds = metadata.get(controls::draft::FaceDetectFaceIds);\n-\tif (faceDetectRectangles && faceDetectFaceIds) {\n-\t\tif (faceDetectFaceIds->size() != faceDetectRectangles->size()) {\n-\t\t\tLOG(HAL, Error) << \"Pipeline returned wrong number of face ids; \"\n-\t\t\t\t\t<< \"Expected: \" << faceDetectRectangles->size()\n-\t\t\t\t\t<< \", got: \" << faceDetectFaceIds->size();\n-\t\t}\n-\t\tresultMetadata->addEntry(ANDROID_STATISTICS_FACE_IDS, *faceDetectFaceIds);\n-\t}\n-\n-\tconst auto &scalerCrop = metadata.get(controls::ScalerCrop);\n-\tif (scalerCrop) {\n-\t\tconst Rectangle &crop = *scalerCrop;\n-\t\tint32_t cropRect[] = {\n-\t\t\tcrop.x, crop.y, static_cast<int32_t>(crop.width),\n-\t\t\tstatic_cast<int32_t>(crop.height),\n-\t\t};\n-\t\tresultMetadata->addEntry(ANDROID_SCALER_CROP_REGION, cropRect);\n-\t}\n-\n-\tconst auto &testPatternMode = metadata.get(controls::draft::TestPatternMode);\n-\tif (testPatternMode)\n-\t\tresultMetadata->addEntry(ANDROID_SENSOR_TEST_PATTERN_MODE,\n-\t\t\t\t\t *testPatternMode);\n-\n \t/*\n \t * Return the result metadata pack even is not valid: get() will return\n \t * nullptr.\ndiff --git a/src/android/camera_device.h b/src/android/camera_device.h\nindex 3c46ff918..2aa6b2c09 100644\n--- a/src/android/camera_device.h\n+++ b/src/android/camera_device.h\n@@ 
-64,11 +64,13 @@ public:\n \tconst camera_metadata_t *constructDefaultRequestSettings(int type);\n \tint configureStreams(camera3_stream_configuration_t *stream_list);\n \tint processCaptureRequest(camera3_capture_request_t *request);\n+\tvoid bufferComplete(libcamera::Request *request,\n+\t\t\t    libcamera::FrameBuffer *buffer);\n+\tvoid metadataAvailable(libcamera::Request *request,\n+\t\t\t       const libcamera::ControlList &metadata);\n \tvoid requestComplete(libcamera::Request *request);\n \tvoid streamProcessingCompleteDelegate(StreamBuffer *bufferStream,\n \t\t\t\t\t      StreamBuffer::Status status);\n-\tvoid streamProcessingComplete(StreamBuffer *bufferStream,\n-\t\t\t\t      StreamBuffer::Status status);\n \n protected:\n \tstd::string logPrefix() const override;\n@@ -96,16 +98,26 @@ private:\n \tvoid notifyError(uint32_t frameNumber, camera3_stream_t *stream,\n \t\t\t camera3_error_msg_code code) const;\n \tint processControls(Camera3RequestDescriptor *descriptor);\n-\tvoid completeDescriptor(Camera3RequestDescriptor *descriptor)\n-\t\tLIBCAMERA_TSA_EXCLUDES(descriptorsMutex_);\n-\tvoid sendCaptureResults() LIBCAMERA_TSA_REQUIRES(descriptorsMutex_);\n-\tvoid sendCaptureResult(Camera3RequestDescriptor *request) const;\n+\n+\tvoid checkAndCompleteReadyPartialResults(Camera3ResultDescriptor *result);\n+\tvoid completePartialResultDescriptor(Camera3ResultDescriptor *result);\n+\tvoid completeRequestDescriptor(Camera3RequestDescriptor *descriptor);\n+\n+\tvoid streamProcessingComplete(StreamBuffer *bufferStream,\n+\t\t\t\t      StreamBuffer::Status status);\n+\n+\tvoid sendCaptureResults();\n+\tvoid sendCaptureResult(Camera3ResultDescriptor *result) const;\n \tvoid setBufferStatus(StreamBuffer &buffer,\n \t\t\t     StreamBuffer::Status status);\n \tvoid generateJpegExifMetadata(Camera3RequestDescriptor *request,\n \t\t\t\t      StreamBuffer *buffer) const;\n-\tstd::unique_ptr<CameraMetadata> getResultMetadata(\n-\t\tconst Camera3RequestDescriptor 
&descriptor) const;\n+\n+\tstd::unique_ptr<CameraMetadata> getJpegPartialResultMetadata() const;\n+\tstd::unique_ptr<CameraMetadata> getPartialResultMetadata(\n+\t\tconst libcamera::ControlList &metadata) const;\n+\tstd::unique_ptr<CameraMetadata> getFinalResultMetadata(\n+\t\tconst CameraMetadata &settings) const;\n \n \tunsigned int id_;\n \tcamera3_device_t camera3Device_;\n@@ -122,9 +134,14 @@ private:\n \n \tstd::vector<CameraStream> streams_;\n \n-\tlibcamera::Mutex descriptorsMutex_ LIBCAMERA_TSA_ACQUIRED_AFTER(stateMutex_);\n-\tstd::queue<std::unique_ptr<Camera3RequestDescriptor>> descriptors_\n-\t\tLIBCAMERA_TSA_GUARDED_BY(descriptorsMutex_);\n+\t/* Protects access to the pending requests and stream buffers. */\n+\tlibcamera::Mutex pendingRequestMutex_;\n+\tstd::queue<std::unique_ptr<Camera3RequestDescriptor>> pendingRequests_\n+\t\tLIBCAMERA_TSA_GUARDED_BY(pendingRequestMutex_);\n+\tstd::map<CameraStream *, std::list<StreamBuffer *>> pendingStreamBuffers_\n+\t\tLIBCAMERA_TSA_GUARDED_BY(pendingRequestMutex_);\n+\n+\tstd::list<Camera3ResultDescriptor *> pendingPartialResults_;\n \n \tstd::string maker_;\n \tstd::string model_;\ndiff --git a/src/android/camera_request.cpp b/src/android/camera_request.cpp\nindex a9240a83c..20e1b3e54 100644\n--- a/src/android/camera_request.cpp\n+++ b/src/android/camera_request.cpp\n@@ -43,25 +43,25 @@ using namespace libcamera;\n  * │  processCaptureRequest(camera3_capture_request_t request)   │\n  * │                                                             │\n  * │   - Create Camera3RequestDescriptor tracking this request   │\n- * │   - Streams requiring post-processing are stored in the     │\n- * │     pendingStreamsToProcess map                             │\n+ * │   - Buffers requiring post-processing are marked by the     │\n+ * │     CameraStream::Type as Mapped or Internal                │\n  * │   - Add this Camera3RequestDescriptor to descriptors' queue │\n- * │     CameraDevice::descriptors_                     
         │\n- * │                                                             │ ┌─────────────────────────┐\n- * │   - Queue the capture request to libcamera core ────────────┤►│libcamera core           │\n- * │                                                             │ ├─────────────────────────┤\n- * │                                                             │ │- Capture from Camera    │\n- * │                                                             │ │                         │\n- * │                                                             │ │- Emit                   │\n- * │                                                             │ │  Camera::requestComplete│\n- * │  requestCompleted(Request *request) ◄───────────────────────┼─┼────                     │\n- * │                                                             │ │                         │\n- * │   - Check request completion status                         │ └─────────────────────────┘\n+ * │     CameraDevice::pendingRequests_                          │\n+ * │                                                             │ ┌────────────────────────────────┐\n+ * │   - Queue the capture request to libcamera core ────────────┤►│libcamera core                  │\n+ * │                                                             │ ├────────────────────────────────┤\n+ * │                                                             │ │- Capture from Camera           │\n+ * │                                                             │ │                                │\n+ * │                                                             │ │- Emit                          │\n+ * │                                                             │ │  Camera::partialResultCompleted│\n+ * │  partialResultComplete(Request *request, Result result*) ◄──┼─┼────                            │\n+ * │                                                             │ │                                │\n+ * │   - 
Check request completion status                         │ └────────────────────────────────┘\n  * │                                                             │\n- * │   - if (pendingStreamsToProcess > 0)                        │\n- * │      Queue all entries from pendingStreamsToProcess         │\n+ * │   - if (pendingBuffersToProcess > 0)                        │\n+ * │      Queue all entries from pendingBuffersToProcess         │\n  * │    else                                   │                 │\n- * │      completeDescriptor()                 │                 └──────────────────────┐\n+ * │      completeResultDescriptor()           │                 └──────────────────────┐\n  * │                                           │                                        │\n  * │                ┌──────────────────────────┴───┬──────────────────┐                 │\n  * │                │                              │                  │                 │\n@@ -94,10 +94,10 @@ using namespace libcamera;\n  * │ |                                       |     |              |                     │\n  * │ | - Check and set buffer status         |     |     ....     
|                     │\n  * │ | - Remove post+processing entry        |     |              |                     │\n- * │ |   from pendingStreamsToProcess        |     |              |                     │\n+ * │ |   from pendingBuffersToProcess        |     |              |                     │\n  * │ |                                       |     |              |                     │\n- * │ | - if (pendingStreamsToProcess.empty())|     |              |                     │\n- * │ |        completeDescriptor             |     |              |                     │\n+ * │ | - if (pendingBuffersToProcess.empty())|     |              |                     │\n+ * │ |        completeResultDescriptor       |     |              |                     │\n  * │ |                                       |     |              |                     │\n  * │ +---------------------------------------+     +--------------+                     │\n  * │                                                                                    │\n@@ -148,6 +148,19 @@ Camera3RequestDescriptor::~Camera3RequestDescriptor()\n \t\tsourceStream->putBuffer(frameBuffer);\n }\n \n+/*\n+ * \\class Camera3ResultDescriptor\n+ *\n+ * A utility class that groups information about a capture result to be sent to\n+ * framework.\n+ */\n+Camera3ResultDescriptor::Camera3ResultDescriptor(Camera3RequestDescriptor *request)\n+\t: request_(request), metadataPackIndex_(1), completed_(false)\n+{\n+}\n+\n+Camera3ResultDescriptor::~Camera3ResultDescriptor() = default;\n+\n /**\n  * \\class StreamBuffer\n  * \\brief Group information for per-stream buffer of Camera3RequestDescriptor\n@@ -182,6 +195,9 @@ Camera3RequestDescriptor::~Camera3RequestDescriptor()\n  *\n  * \\var StreamBuffer::request\n  * \\brief Back pointer to the Camera3RequestDescriptor to which the StreamBuffer belongs\n+ *\n+ * \\var StreamBuffer::result\n+ * \\brief Back pointer to the Camera3ResultDescriptor to which the StreamBuffer belongs\n  */\n 
StreamBuffer::StreamBuffer(\n \tCameraStream *cameraStream, const camera3_stream_buffer_t &buffer,\ndiff --git a/src/android/camera_request.h b/src/android/camera_request.h\nindex bd87b36fd..18386a905 100644\n--- a/src/android/camera_request.h\n+++ b/src/android/camera_request.h\n@@ -26,6 +26,7 @@\n class CameraBuffer;\n class CameraStream;\n \n+class Camera3ResultDescriptor;\n class Camera3RequestDescriptor;\n \n class StreamBuffer\n@@ -57,41 +58,62 @@ public:\n \tconst libcamera::FrameBuffer *srcBuffer = nullptr;\n \tstd::unique_ptr<CameraBuffer> dstBuffer;\n \tstd::optional<JpegExifMetadata> jpegExifMetadata;\n+\tCamera3ResultDescriptor *result;\n \tCamera3RequestDescriptor *request;\n \n private:\n \tLIBCAMERA_DISABLE_COPY(StreamBuffer)\n };\n \n+class Camera3ResultDescriptor\n+{\n+public:\n+\tCamera3ResultDescriptor(Camera3RequestDescriptor *request);\n+\t~Camera3ResultDescriptor();\n+\n+\tCamera3RequestDescriptor *request_;\n+\tuint32_t metadataPackIndex_;\n+\n+\tstd::unique_ptr<CameraMetadata> resultMetadata_;\n+\tstd::vector<StreamBuffer *> buffers_;\n+\n+\t/* Keeps track of buffers waiting for post-processing. */\n+\tstd::list<StreamBuffer *> pendingBuffersToProcess_;\n+\n+\tbool completed_;\n+\n+private:\n+\tLIBCAMERA_DISABLE_COPY(Camera3ResultDescriptor)\n+};\n+\n class Camera3RequestDescriptor\n {\n public:\n \tenum class Status {\n+\t\tPending,\n \t\tSuccess,\n \t\tError,\n \t};\n \n-\t/* Keeps track of streams requiring post-processing. 
*/\n-\tstd::map<CameraStream *, StreamBuffer *> pendingStreamsToProcess_;\n-\n \tCamera3RequestDescriptor(libcamera::Camera *camera,\n \t\t\t\t const camera3_capture_request_t *camera3Request);\n \t~Camera3RequestDescriptor();\n \n-\tbool isPending() const { return !complete_; }\n-\n \tuint32_t frameNumber_ = 0;\n \n \tstd::vector<StreamBuffer> buffers_;\n \n \tCameraMetadata settings_;\n \tstd::unique_ptr<libcamera::Request> request_;\n-\tstd::unique_ptr<CameraMetadata> resultMetadata_;\n \n \tstd::map<CameraStream *, libcamera::FrameBuffer *> internalBuffers_;\n \n \tbool complete_ = false;\n-\tStatus status_ = Status::Success;\n+\tStatus status_ = Status::Pending;\n+\n+\tuint32_t nextPartialResultIndex_ = 1;\n+\tstd::unique_ptr<Camera3ResultDescriptor> finalResult_;\n+\tstd::vector<std::unique_ptr<Camera3ResultDescriptor>> partialResults_;\n \n private:\n \tLIBCAMERA_DISABLE_COPY(Camera3RequestDescriptor)\ndiff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp\nindex 53f292d4b..7837fd7aa 100644\n--- a/src/android/camera_stream.cpp\n+++ b/src/android/camera_stream.cpp\n@@ -121,8 +121,8 @@ int CameraStream::configure()\n \t\t\t\telse\n \t\t\t\t\tbufferStatus = StreamBuffer::Status::Error;\n \n-\t\t\t\tcameraDevice_->streamProcessingComplete(streamBuffer,\n-\t\t\t\t\t\t\t\t\tbufferStatus);\n+\t\t\t\tcameraDevice_->streamProcessingCompleteDelegate(streamBuffer,\n+\t\t\t\t\t\t\t\t\t\tbufferStatus);\n \t\t\t});\n \n \t\tworker_->start();\ndiff --git a/src/android/jpeg/post_processor_jpeg.cpp b/src/android/jpeg/post_processor_jpeg.cpp\nindex 48782b574..671e560ec 100644\n--- a/src/android/jpeg/post_processor_jpeg.cpp\n+++ b/src/android/jpeg/post_processor_jpeg.cpp\n@@ -119,7 +119,7 @@ void PostProcessorJpeg::process(StreamBuffer *streamBuffer)\n \tASSERT(jpegExifMetadata.has_value());\n \n \tconst CameraMetadata &requestMetadata = streamBuffer->request->settings_;\n-\tCameraMetadata *resultMetadata = 
streamBuffer->request->resultMetadata_.get();\n+\tCameraMetadata *resultMetadata = streamBuffer->result->resultMetadata_.get();\n \tcamera_metadata_ro_entry_t entry;\n \tint ret;\n \n","prefixes":["v2","9/9"]}