{"id":15238,"url":"https://patchwork.libcamera.org/api/patches/15238/?format=json","web_url":"https://patchwork.libcamera.org/patch/15238/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20220103170956.323025-5-umang.jain@ideasonboard.com>","date":"2022-01-03T17:09:56","name":"[libcamera-devel,4/4] ipa: ipu3: Add a IPAFrameContext queue","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"d32ef6f41aba1be598819179568d4931e6f37d74","submitter":{"id":86,"url":"https://patchwork.libcamera.org/api/people/86/?format=json","name":"Umang Jain","email":"umang.jain@ideasonboard.com"},"delegate":{"id":12,"url":"https://patchwork.libcamera.org/api/users/12/?format=json","username":"uajain","first_name":"Umang","last_name":"Jain","email":"umang.jain@ideasonboard.com"},"mbox":"https://patchwork.libcamera.org/patch/15238/mbox/","series":[{"id":2873,"url":"https://patchwork.libcamera.org/api/series/2873/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=2873","date":"2022-01-03T17:09:52","name":"IPAIPU3 - Rework interface and introduce context","version":1,"mbox":"https://patchwork.libcamera.org/series/2873/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/15238/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/15238/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 90EC2C3259\n\tfor <parsemail@patchwork.libcamera.org>;\n\tMon,  3 Jan 2022 17:10:19 +0000 (UTC)","from 
lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 08BED60915;\n\tMon,  3 Jan 2022 18:10:19 +0100 (CET)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 71452604F4\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon,  3 Jan 2022 18:10:18 +0100 (CET)","from perceval.ideasonboard.com (unknown\n\t[IPv6:2401:4900:1f3e:193e:9a73:f356:8c6a:a1aa])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 28A84CC;\n\tMon,  3 Jan 2022 18:10:16 +0100 (CET)"],"Authentication-Results":"lancelot.ideasonboard.com;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"SZZA1CFt\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1641229818;\n\tbh=GbL0mvguMBDWLn/Hc3KT+sigc0IjfR0x/sQ+mE8HHxc=;\n\th=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n\tb=SZZA1CFtacT35zMURgi1KC/KxhFpEP9glGVnX5Z56S/snAEAC2Viy1cxiDNke1eTs\n\tJxt3uR0GyuUsnz+O52SoTJC1uUjWsz1tMzAaNlr/oSd+BurUZi3SzCKbIITyjZoBna\n\tnjcsaWAG2NypfnH4RosH+tr6lxxK70yU8y4OkCi8=","From":"Umang Jain <umang.jain@ideasonboard.com>","To":"libcamera-devel@lists.libcamera.org","Date":"Mon,  3 Jan 2022 22:39:56 +0530","Message-Id":"<20220103170956.323025-5-umang.jain@ideasonboard.com>","X-Mailer":"git-send-email 2.31.1","In-Reply-To":"<20220103170956.323025-1-umang.jain@ideasonboard.com>","References":"<20220103170956.323025-1-umang.jain@ideasonboard.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH 4/4] ipa: ipu3: Add a IPAFrameContext 
queue","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"Having a single IPAFrameContext is limiting, especially when\nwe need to preserve per-frame controls. Right now we do not process any\ncontrols on the IPA side (processControls()), but sooner or later we\nwill need to preserve the control settings of the frames in the context\nin a retrievable fashion. Hence, a std::deque is introduced to preserve\nthe frame context of an incoming request's settings as soon as the\nrequest is queued.\n\nSince IPAIPU3::processControls() is executed on the\nIPU3CameraData::queuePendingRequests() code path, we need to store the\nincoming control settings in a separate IPAFrameContext and push it\ninto the queue. 
The IPAFrameContext is then dropped once processing of\nthat frame has finished.\n\nSigned-off-by: Umang Jain <umang.jain@ideasonboard.com>\n---\n src/ipa/ipu3/algorithms/agc.cpp          | 18 ++++----\n src/ipa/ipu3/algorithms/agc.h            |  2 +-\n src/ipa/ipu3/algorithms/awb.cpp          | 18 ++++----\n src/ipa/ipu3/algorithms/tone_mapping.cpp | 11 ++---\n src/ipa/ipu3/ipa_context.cpp             | 57 +++++++++++++++++++++---\n src/ipa/ipu3/ipa_context.h               | 13 +++++-\n src/ipa/ipu3/ipu3.cpp                    | 41 ++++++++++++++---\n 7 files changed, 124 insertions(+), 36 deletions(-)","diff":"diff --git a/src/ipa/ipu3/algorithms/agc.cpp b/src/ipa/ipu3/algorithms/agc.cpp\nindex 1d0778d8..f8e1fef7 100644\n--- a/src/ipa/ipu3/algorithms/agc.cpp\n+++ b/src/ipa/ipu3/algorithms/agc.cpp\n@@ -99,8 +99,9 @@ int Agc::configure(IPAContext &context, const IPAConfigInfo &configInfo)\n \tmaxAnalogueGain_ = std::min(context.configuration.agc.maxAnalogueGain, kMaxAnalogueGain);\n \n \t/* Configure the default exposure and gain. 
*/\n-\tcontext.frameContext.agc.gain = minAnalogueGain_;\n-\tcontext.frameContext.agc.exposure = minShutterSpeed_ / lineDuration_;\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n+\tframeContext.agc.gain = minAnalogueGain_;\n+\tframeContext.agc.exposure = minShutterSpeed_ / lineDuration_;\n \n \treturn 0;\n }\n@@ -174,16 +175,17 @@ void Agc::filterExposure()\n \n /**\n  * \\brief Estimate the new exposure and gain values\n- * \\param[inout] frameContext The shared IPA frame Context\n+ * \\param[in] frame The frame number\n+ * \\param[inout] context The shared IPA context\n  * \\param[in] yGain The gain calculated based on the relative luminance target\n  * \\param[in] iqMeanGain The gain calculated based on the relative luminance target\n  */\n-void Agc::computeExposure(IPAFrameContext &frameContext, double yGain,\n-\t\t\t  double iqMeanGain)\n+void Agc::computeExposure(const uint32_t frame, IPAContext &context, double yGain, double iqMeanGain)\n {\n \t/* Get the effective exposure and gain applied on the sensor. */\n-\tuint32_t exposure = frameContext.sensor.exposure;\n-\tdouble analogueGain = frameContext.sensor.gain;\n+\tuint32_t exposure = context.prevFrameContext.sensor.exposure;\n+\tdouble analogueGain = context.prevFrameContext.sensor.gain;\n+\tIPAFrameContext &frameContext = context.getFrameContext(frame);\n \n \t/* Use the highest of the two gain estimates. 
*/\n \tdouble evGain = std::max(yGain, iqMeanGain);\n@@ -336,7 +338,7 @@ void Agc::process(const uint32_t frame, IPAContext &context, const ipu3_uapi_sta\n \tdouble yTarget = kRelativeLuminanceTarget;\n \n \tfor (unsigned int i = 0; i < 8; i++) {\n-\t\tdouble yValue = estimateLuminance(context.frameContext,\n+\t\tdouble yValue = estimateLuminance(context.prevFrameContext,\n \t\t\t\t\t\t  context.configuration.grid.bdsGrid,\n \t\t\t\t\t\t  stats, yGain);\n \t\tdouble extraGain = std::min(10.0, yTarget / (yValue + .001));\n@@ -349,7 +351,7 @@ void Agc::process(const uint32_t frame, IPAContext &context, const ipu3_uapi_sta\n \t\t\tbreak;\n \t}\n \n-\tcomputeExposure(context.frameContext, yGain, iqMeanGain);\n+\tcomputeExposure(frame, context, yGain, iqMeanGain);\n \tframeCount_++;\n }\n \ndiff --git a/src/ipa/ipu3/algorithms/agc.h b/src/ipa/ipu3/algorithms/agc.h\nindex c6ab8e91..a3c52fc7 100644\n--- a/src/ipa/ipu3/algorithms/agc.h\n+++ b/src/ipa/ipu3/algorithms/agc.h\n@@ -34,7 +34,7 @@ private:\n \tdouble measureBrightness(const ipu3_uapi_stats_3a *stats,\n \t\t\t\t const ipu3_uapi_grid_config &grid) const;\n \tvoid filterExposure();\n-\tvoid computeExposure(IPAFrameContext &frameContext, double yGain,\n+\tvoid computeExposure(const uint32_t frame, IPAContext &context, double yGain,\n \t\t\t     double iqMeanGain);\n \tdouble estimateLuminance(IPAFrameContext &frameContext,\n \t\t\t\t const ipu3_uapi_grid_config &grid,\ndiff --git a/src/ipa/ipu3/algorithms/awb.cpp b/src/ipa/ipu3/algorithms/awb.cpp\nindex 99fb5305..a8347d0f 100644\n--- a/src/ipa/ipu3/algorithms/awb.cpp\n+++ b/src/ipa/ipu3/algorithms/awb.cpp\n@@ -382,16 +382,17 @@ void Awb::calculateWBGains(const ipu3_uapi_stats_3a *stats)\n void Awb::process(const uint32_t frame, IPAContext &context, const ipu3_uapi_stats_3a *stats)\n {\n \tcalculateWBGains(stats);\n+\tIPAFrameContext &frameContext = context.getFrameContext(frame);\n \n \t/*\n \t * Gains are only recalculated if enough zones were detected.\n \t * The 
results are cached, so if no results were calculated, we set the\n \t * cached values from asyncResults_ here.\n \t */\n-\tcontext.frameContext.awb.gains.blue = asyncResults_.blueGain;\n-\tcontext.frameContext.awb.gains.green = asyncResults_.greenGain;\n-\tcontext.frameContext.awb.gains.red = asyncResults_.redGain;\n-\tcontext.frameContext.awb.temperatureK = asyncResults_.temperatureK;\n+\tframeContext.awb.gains.blue = asyncResults_.blueGain;\n+\tframeContext.awb.gains.green = asyncResults_.greenGain;\n+\tframeContext.awb.gains.red = asyncResults_.redGain;\n+\tframeContext.awb.temperatureK = asyncResults_.temperatureK;\n }\n \n constexpr uint16_t Awb::threshold(float value)\n@@ -434,6 +435,7 @@ void Awb::prepare([[maybe_unused]] const uint32_t frame, IPAContext &context, ip\n \t */\n \tparams->acc_param.bnr = imguCssBnrDefaults;\n \tSize &bdsOutputSize = context.configuration.grid.bdsOutputSize;\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \tparams->acc_param.bnr.column_size = bdsOutputSize.width;\n \tparams->acc_param.bnr.opt_center.x_reset = grid.x_start - (bdsOutputSize.width / 2);\n \tparams->acc_param.bnr.opt_center.y_reset = grid.y_start - (bdsOutputSize.height / 2);\n@@ -442,10 +444,10 @@ void Awb::prepare([[maybe_unused]] const uint32_t frame, IPAContext &context, ip\n \tparams->acc_param.bnr.opt_center_sqr.y_sqr_reset = params->acc_param.bnr.opt_center.y_reset\n \t\t\t\t\t\t\t* params->acc_param.bnr.opt_center.y_reset;\n \t/* Convert to u3.13 fixed point values */\n-\tparams->acc_param.bnr.wb_gains.gr = 8192 * context.frameContext.awb.gains.green;\n-\tparams->acc_param.bnr.wb_gains.r  = 8192 * context.frameContext.awb.gains.red;\n-\tparams->acc_param.bnr.wb_gains.b  = 8192 * context.frameContext.awb.gains.blue;\n-\tparams->acc_param.bnr.wb_gains.gb = 8192 * context.frameContext.awb.gains.green;\n+\tparams->acc_param.bnr.wb_gains.gr = 8192 * frameContext.awb.gains.green;\n+\tparams->acc_param.bnr.wb_gains.r  = 8192 * 
frameContext.awb.gains.red;\n+\tparams->acc_param.bnr.wb_gains.b  = 8192 * frameContext.awb.gains.blue;\n+\tparams->acc_param.bnr.wb_gains.gb = 8192 * frameContext.awb.gains.green;\n \n \tLOG(IPU3Awb, Debug) << \"Color temperature estimated: \" << asyncResults_.temperatureK;\n \ndiff --git a/src/ipa/ipu3/algorithms/tone_mapping.cpp b/src/ipa/ipu3/algorithms/tone_mapping.cpp\nindex bba5bc9a..ce6c330d 100644\n--- a/src/ipa/ipu3/algorithms/tone_mapping.cpp\n+++ b/src/ipa/ipu3/algorithms/tone_mapping.cpp\n@@ -42,7 +42,7 @@ int ToneMapping::configure(IPAContext &context,\n \t\t\t   [[maybe_unused]] const IPAConfigInfo &configInfo)\n {\n \t/* Initialise tone mapping gamma value. */\n-\tcontext.frameContext.toneMapping.gamma = 0.0;\n+\tcontext.frameContextQueue.front().toneMapping.gamma = 0.0;\n \n \treturn 0;\n }\n@@ -62,7 +62,7 @@ void ToneMapping::prepare([[maybe_unused]] const uint32_t frame,\n {\n \t/* Copy the calculated LUT into the parameters buffer. */\n \tmemcpy(params->acc_param.gamma.gc_lut.lut,\n-\t       context.frameContext.toneMapping.gammaCorrection.lut,\n+\t       context.frameContextQueue.front().toneMapping.gammaCorrection.lut,\n \t       IPU3_UAPI_GAMMA_CORR_LUT_ENTRIES *\n \t       sizeof(params->acc_param.gamma.gc_lut.lut[0]));\n \n@@ -83,6 +83,7 @@ void ToneMapping::prepare([[maybe_unused]] const uint32_t frame,\n void ToneMapping::process(const uint32_t frame, IPAContext &context,\n \t\t\t  [[maybe_unused]] const ipu3_uapi_stats_3a *stats)\n {\n+\tIPAFrameContext &frameContext = context.getFrameContext(frame);\n \t/*\n \t * Hardcode gamma to 1.1 as a default for now.\n \t *\n@@ -90,11 +91,11 @@ void ToneMapping::process(const uint32_t frame, IPAContext &context,\n \t */\n \tgamma_ = 1.1;\n \n-\tif (context.frameContext.toneMapping.gamma == gamma_)\n+\tif (frameContext.toneMapping.gamma == gamma_)\n \t\treturn;\n \n \tstruct ipu3_uapi_gamma_corr_lut &lut 
=\n-\t\tcontext.frameContext.toneMapping.gammaCorrection;\n+\t\tframeContext.toneMapping.gammaCorrection;\n \n \tfor (uint32_t i = 0; i < std::size(lut.lut); i++) {\n \t\tdouble j = static_cast<double>(i) / (std::size(lut.lut) - 1);\n@@ -104,7 +105,7 @@ void ToneMapping::process(const uint32_t frame, IPAContext &context,\n \t\tlut.lut[i] = gamma * 8191;\n \t}\n \n-\tcontext.frameContext.toneMapping.gamma = gamma_;\n+\tframeContext.toneMapping.gamma = gamma_;\n }\n \n } /* namespace ipa::ipu3::algorithms */\ndiff --git a/src/ipa/ipu3/ipa_context.cpp b/src/ipa/ipu3/ipa_context.cpp\nindex 86794ac1..95a08547 100644\n--- a/src/ipa/ipu3/ipa_context.cpp\n+++ b/src/ipa/ipu3/ipa_context.cpp\n@@ -39,6 +39,48 @@ namespace libcamera::ipa::ipu3 {\n  * algorithm, but should only be written by its owner.\n  */\n \n+/**\n+ * \\brief Retrieve the context of a particular frame\n+ * \\param[in] frame Frame number\n+ *\n+ * Retrieve the frame context associated with \\a frame.\n+ *\n+ * \\return A reference to the frame context of the given frame number\n+ */\n+IPAFrameContext &IPAContext::getFrameContext(const uint32_t frame)\n+{\n+\tauto iter = frameContextQueue.begin();\n+\twhile (iter != frameContextQueue.end()) {\n+\t\tif (iter->frame == frame)\n+\t\t\treturn *iter;\n+\n+\t\titer++;\n+\t}\n+\n+\t/*\n+\t * \\todo Handle the case where the frame context is not found.\n+\t * Should this be FATAL ?\n+\t */\n+\treturn *iter; /* \\todo Invalid, dereferences frameContextQueue.end() */\n+}\n+\n+/**\n+ * \\brief Construct an IPAFrameContext instance\n+ */\n+IPAFrameContext::IPAFrameContext() = default;\n+\n+/**\n+ * \\brief Move constructor for IPAFrameContext\n+ * \\param[in] other The other IPAFrameContext\n+ */\n+IPAFrameContext::IPAFrameContext(IPAFrameContext &&other) = default;\n+\n+/**\n+ * \\brief Move assignment operator for IPAFrameContext\n+ * \\param[in] other The other IPAFrameContext\n+ */\n+IPAFrameContext &IPAFrameContext::operator=(IPAFrameContext &&other) = default;\n+\n /**\n  * \\struct 
IPAContext\n  * \\brief Global IPA context data shared between all algorithms\n@@ -46,13 +88,11 @@ namespace libcamera::ipa::ipu3 {\n  * \\var IPAContext::configuration\n  * \\brief The IPA session configuration, immutable during the session\n  *\n- * \\var IPAContext::frameContext\n- * \\brief The frame context for the frame being processed\n+ * \\var IPAContext::frameContextQueue\n+ * \\brief A queue of frame contexts to be processed by the IPA\n  *\n- * \\todo While the frame context is supposed to be per-frame, this\n- * single frame context stores data related to both the current frame\n- * and the previous frames, with fields being updated as the algorithms\n- * are run. This needs to be turned into real per-frame data storage.\n+ * \\var IPAContext::prevFrameContext\n+ * \\brief The latest frame context which the IPA has finished processing\n  */\n \n /**\n@@ -86,6 +126,11 @@ namespace libcamera::ipa::ipu3 {\n  * \\brief Maximum analogue gain supported with the configured sensor\n  */\n \n+/**\n+ * \\var IPAFrameContext::frame\n+ * \\brief Frame number of the corresponding frame context\n+ */\n+\n /**\n  * \\var IPAFrameContext::agc\n  * \\brief Context for the Automatic Gain Control algorithm\ndiff --git a/src/ipa/ipu3/ipa_context.h b/src/ipa/ipu3/ipa_context.h\nindex c6dc0814..df2a9779 100644\n--- a/src/ipa/ipu3/ipa_context.h\n+++ b/src/ipa/ipu3/ipa_context.h\n@@ -8,6 +8,8 @@\n \n #pragma once\n \n+#include <deque>\n+\n #include <linux/intel-ipu3.h>\n \n #include <libcamera/base/utils.h>\n@@ -34,6 +36,12 @@ struct IPASessionConfiguration {\n };\n \n struct IPAFrameContext {\n+\tuint32_t frame;\n+\n+\tIPAFrameContext();\n+\tIPAFrameContext(IPAFrameContext &&other);\n+\tIPAFrameContext &operator=(IPAFrameContext &&other);\n+\n \tstruct {\n \t\tuint32_t exposure;\n \t\tdouble gain;\n@@ -61,8 +69,11 @@ struct IPAFrameContext {\n };\n \n struct IPAContext {\n+\tIPAFrameContext &getFrameContext(const uint32_t frame);\n+\n \tIPASessionConfiguration 
configuration;\n-\tIPAFrameContext frameContext;\n+\tstd::deque<IPAFrameContext> frameContextQueue;\n+\tIPAFrameContext prevFrameContext;\n };\n \n } /* namespace ipa::ipu3 */\ndiff --git a/src/ipa/ipu3/ipu3.cpp b/src/ipa/ipu3/ipu3.cpp\nindex fa40c41f..9c3d5ff4 100644\n--- a/src/ipa/ipu3/ipu3.cpp\n+++ b/src/ipa/ipu3/ipu3.cpp\n@@ -336,6 +336,8 @@ int IPAIPU3::start()\n  */\n void IPAIPU3::stop()\n {\n+\twhile (!context_.frameContextQueue.empty())\n+\t\tcontext_.frameContextQueue.pop_front();\n }\n \n /**\n@@ -469,6 +471,14 @@ int IPAIPU3::configure(const IPAConfigInfo &configInfo,\n \t/* Clean context at configuration */\n \tcontext_ = {};\n \n+\t/*\n+\t * Insert an initial context into the queue to facilitate\n+\t * algo->configure() below.\n+\t */\n+\tIPAFrameContext initContext;\n+\tinitContext.frame = 0;\n+\tcontext_.frameContextQueue.push_back(std::move(initContext));\n+\n \tcalculateBdsGrid(configInfo.bdsOutputSize);\n \n \tlineDuration_ = sensorInfo_.lineLength * 1.0s / sensorInfo_.pixelRate;\n@@ -518,10 +528,25 @@ void IPAIPU3::unmapBuffers(const std::vector<unsigned int> &ids)\n \n void IPAIPU3::frameStarted([[maybe_unused]] const uint32_t frame)\n {\n+\tIPAFrameContext newContext;\n+\tnewContext.frame = frame;\n+\n+\tcontext_.frameContextQueue.push_back(std::move(newContext));\n }\n \n void IPAIPU3::frameCompleted([[maybe_unused]] const uint32_t frame)\n {\n+\twhile (!context_.frameContextQueue.empty()) {\n+\t\tauto &fc = context_.frameContextQueue.front();\n+\t\tif (fc.frame < frame) {\n+\t\t\tcontext_.frameContextQueue.pop_front();\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\t/* Keep this and newer frame contexts. */\n+\t\tcontext_.prevFrameContext = std::move(fc);\n+\t\tbreak;\n+\t}\n }\n \n /**\n@@ -564,8 +589,9 @@ void IPAIPU3::statsReady(const uint32_t frame, const int64_t frameTimestamp,\n \tconst ipu3_uapi_stats_3a *stats =\n \t\treinterpret_cast<ipu3_uapi_stats_3a *>(mem.data());\n \n-\tcontext_.frameContext.sensor.exposure = 
sensorControls.get(V4L2_CID_EXPOSURE).get<int32_t>();\n-\tcontext_.frameContext.sensor.gain = camHelper_->gain(sensorControls.get(V4L2_CID_ANALOGUE_GAIN).get<int32_t>());\n+\tIPAFrameContext &curFrameContext = context_.frameContextQueue.front();\n+\tcurFrameContext.sensor.exposure = sensorControls.get(V4L2_CID_EXPOSURE).get<int32_t>();\n+\tcurFrameContext.sensor.gain = camHelper_->gain(sensorControls.get(V4L2_CID_ANALOGUE_GAIN).get<int32_t>());\n \n \tparseStatistics(frame, frameTimestamp, stats);\n }\n@@ -645,11 +671,11 @@ void IPAIPU3::parseStatistics(unsigned int frame,\n \tint64_t frameDuration = (defVBlank_ + sensorInfo_.outputSize.height) * lineDuration_.get<std::micro>();\n \tctrls.set(controls::FrameDuration, frameDuration);\n \n-\tctrls.set(controls::AnalogueGain, context_.frameContext.sensor.gain);\n+\tctrls.set(controls::AnalogueGain, context_.prevFrameContext.sensor.gain);\n \n-\tctrls.set(controls::ColourTemperature, context_.frameContext.awb.temperatureK);\n+\tctrls.set(controls::ColourTemperature, context_.prevFrameContext.awb.temperatureK);\n \n-\tctrls.set(controls::ExposureTime, context_.frameContext.sensor.exposure * lineDuration_.get<std::micro>());\n+\tctrls.set(controls::ExposureTime, context_.prevFrameContext.sensor.exposure * lineDuration_.get<std::micro>());\n \n \t/*\n \t * \\todo The Metadata provides a path to getting extended data\n@@ -679,8 +705,9 @@ void IPAIPU3::parseStatistics(unsigned int frame,\n  */\n void IPAIPU3::setControls(unsigned int frame)\n {\n-\texposure_ = context_.frameContext.agc.exposure;\n-\tgain_ = camHelper_->gainCode(context_.frameContext.agc.gain);\n+\tIPAFrameContext &context = context_.frameContextQueue.front();\n+\texposure_ = context.agc.exposure;\n+\tgain_ = camHelper_->gainCode(context.agc.gain);\n \n \tControlList ctrls(ctrls_);\n \tControlList lensCtrls;\n","prefixes":["libcamera-devel","4/4"]}