{"id":15154,"url":"https://patchwork.libcamera.org/api/1.1/patches/15154/?format=json","web_url":"https://patchwork.libcamera.org/patch/15154/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20211213153705.353256-1-umang.jain@ideasonboard.com>","date":"2021-12-13T15:37:05","name":"[libcamera-devel] ipa: ipu3: Add a IPAFrameContext queue","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"04ccb8c3d93ae6d763e4813b7bbcd83f7c476d4f","submitter":{"id":86,"url":"https://patchwork.libcamera.org/api/1.1/people/86/?format=json","name":"Umang Jain","email":"umang.jain@ideasonboard.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/15154/mbox/","series":[{"id":2842,"url":"https://patchwork.libcamera.org/api/1.1/series/2842/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=2842","date":"2021-12-13T15:37:05","name":"[libcamera-devel] ipa: ipu3: Add a IPAFrameContext queue","version":1,"mbox":"https://patchwork.libcamera.org/series/2842/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/15154/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/15154/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id CC9AEBDB13\n\tfor <parsemail@patchwork.libcamera.org>;\n\tMon, 13 Dec 2021 15:37:28 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 2FDEB60876;\n\tMon, 13 Dec 2021 16:37:28 +0100 
(CET)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 28BF260868\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 13 Dec 2021 16:37:26 +0100 (CET)","from perceval.ideasonboard.com (unknown\n\t[IPv6:2401:4900:1f3e:37a3:270b:b541:a3a9:933b])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 182A151C;\n\tMon, 13 Dec 2021 16:37:23 +0100 (CET)"],"Authentication-Results":"lancelot.ideasonboard.com;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"SdjChnta\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1639409845;\n\tbh=buf/F9hJzhgr+nc1RjnEtswqCCrjMZvH2EN173K6z5Q=;\n\th=From:To:Cc:Subject:Date:From;\n\tb=SdjChntabjf8E/ZO/mChvL2Pi5CwqJ8VZDbgaMguBRwRvbv2EKhbcwHQKunVmi5gG\n\tip1tAh7rOPBr5bCxjUDvwR4YDBH4d3ag64uY65L7vxcKc2aF/mmusmnB9yvtkJ4g5V\n\t3goQ1+FmiBObI4F4oaPL5e2g72mlU8FNdlQQJcFQ=","From":"Umang Jain <umang.jain@ideasonboard.com>","To":"libcamera-devel@lists.libcamera.org","Date":"Mon, 13 Dec 2021 21:07:05 +0530","Message-Id":"<20211213153705.353256-1-umang.jain@ideasonboard.com>","X-Mailer":"git-send-email 2.31.1","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH] ipa: ipu3: Add a IPAFrameContext 
queue","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"Having a single IPAFrameContext is limiting, especially when\nwe need to preserve per-frame controls. Right now, we are not processing\nany controls on the IPA side (processControls()), but sooner or later\nwe will need to preserve the control settings for the frames in a sequential\nfashion. 
Since IPAIPU3::processControls() is executed on the\nIPU3CameraData::queuePendingRequests() code path, we need to store the\nincoming control settings in a separate IPAFrameContext and push that\ninto the queue.\n\nThe controls for the next frame to be processed by the IPA are retrieved\nback from the queue in parseStatistics() and set via setControls().\nMeanwhile, the last IPAFrameContext is preserved as prevFrameContext to\npopulate the metadata (for ActionMetadataReady).\n\nSigned-off-by: Umang Jain <umang.jain@ideasonboard.com>\n---\n- Patch is developed as part of the CTS-FULL tests investigation\n\t- testing manual AE control (work in progress)\n\t- The test inputs a manual ExposureTime on Request X and requires\n\t  reading the same ExposureTime on X's capture result 'exactly'.\n\t- In order to facilitate that, we need per-frame context\n\t  preservation (of control settings)\n- No regressions on CTS (Limited).\n- Sometimes I now face an over-exposed stream as a result\n  of this patch. It's not obvious to me why it happens, but my educated\n  guess is that it relates to the initial exposure / gain settings. 
It is also hard to\n  reproduce (this issue) deterministically.\n---\n src/ipa/ipu3/algorithms/agc.cpp          | 19 ++++++-------\n src/ipa/ipu3/algorithms/agc.h            |  2 +-\n src/ipa/ipu3/algorithms/awb.cpp          | 18 +++++++------\n src/ipa/ipu3/algorithms/tone_mapping.cpp | 11 ++++----\n src/ipa/ipu3/ipa_context.cpp             | 32 +++++++++++++++++-----\n src/ipa/ipu3/ipa_context.h               | 11 +++++++-\n src/ipa/ipu3/ipu3.cpp                    | 34 +++++++++++++++++++-----\n 7 files changed, 90 insertions(+), 37 deletions(-)","diff":"diff --git a/src/ipa/ipu3/algorithms/agc.cpp b/src/ipa/ipu3/algorithms/agc.cpp\nindex 8d6f18f6..dbbcdd53 100644\n--- a/src/ipa/ipu3/algorithms/agc.cpp\n+++ b/src/ipa/ipu3/algorithms/agc.cpp\n@@ -99,8 +99,9 @@ int Agc::configure(IPAContext &context, const IPAConfigInfo &configInfo)\n \tmaxAnalogueGain_ = std::min(context.configuration.agc.maxAnalogueGain, kMaxAnalogueGain);\n \n \t/* Configure the default exposure and gain. */\n-\tcontext.frameContext.agc.gain = minAnalogueGain_;\n-\tcontext.frameContext.agc.exposure = minShutterSpeed_ / lineDuration_;\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n+\tframeContext.agc.gain = minAnalogueGain_;\n+\tframeContext.agc.exposure = minShutterSpeed_ / lineDuration_;\n \n \treturn 0;\n }\n@@ -178,12 +179,12 @@ void Agc::filterExposure()\n  * \\param[in] yGain The gain calculated based on the relative luminance target\n  * \\param[in] iqMeanGain The gain calculated based on the relative luminance target\n  */\n-void Agc::computeExposure(IPAFrameContext &frameContext, double yGain,\n-\t\t\t  double iqMeanGain)\n+void Agc::computeExposure(IPAContext &context, double yGain, double iqMeanGain)\n {\n \t/* Get the effective exposure and gain applied on the sensor. 
*/\n-\tuint32_t exposure = frameContext.sensor.exposure;\n-\tdouble analogueGain = frameContext.sensor.gain;\n+\tuint32_t exposure = context.prevFrameContext.sensor.exposure;\n+\tdouble analogueGain = context.prevFrameContext.sensor.gain;\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \n \t/* Use the highest of the two gain estimates. */\n \tdouble evGain = std::max(yGain, iqMeanGain);\n@@ -333,9 +334,9 @@ void Agc::process(IPAContext &context, const ipu3_uapi_stats_3a *stats)\n \t */\n \tdouble yGain = 1.0;\n \tdouble yTarget = kRelativeLuminanceTarget;\n-\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \tfor (unsigned int i = 0; i < 8; i++) {\n-\t\tdouble yValue = estimateLuminance(context.frameContext,\n+\t\tdouble yValue = estimateLuminance(frameContext,\n \t\t\t\t\t\t  context.configuration.grid.bdsGrid,\n \t\t\t\t\t\t  stats, yGain);\n \t\tdouble extraGain = std::min(10.0, yTarget / (yValue + .001));\n@@ -348,7 +349,7 @@ void Agc::process(IPAContext &context, const ipu3_uapi_stats_3a *stats)\n \t\t\tbreak;\n \t}\n \n-\tcomputeExposure(context.frameContext, yGain, iqMeanGain);\n+\tcomputeExposure(context, yGain, iqMeanGain);\n \tframeCount_++;\n }\n \ndiff --git a/src/ipa/ipu3/algorithms/agc.h b/src/ipa/ipu3/algorithms/agc.h\nindex 96ec7005..1b444b12 100644\n--- a/src/ipa/ipu3/algorithms/agc.h\n+++ b/src/ipa/ipu3/algorithms/agc.h\n@@ -34,7 +34,7 @@ private:\n \tdouble measureBrightness(const ipu3_uapi_stats_3a *stats,\n \t\t\t\t const ipu3_uapi_grid_config &grid) const;\n \tvoid filterExposure();\n-\tvoid computeExposure(IPAFrameContext &frameContext, double yGain,\n+\tvoid computeExposure(IPAContext &context, double yGain,\n \t\t\t     double iqMeanGain);\n \tdouble estimateLuminance(IPAFrameContext &frameContext,\n \t\t\t\t const ipu3_uapi_grid_config &grid,\ndiff --git a/src/ipa/ipu3/algorithms/awb.cpp b/src/ipa/ipu3/algorithms/awb.cpp\nindex 1dc27fc9..3c004928 100644\n--- 
a/src/ipa/ipu3/algorithms/awb.cpp\n+++ b/src/ipa/ipu3/algorithms/awb.cpp\n@@ -382,16 +382,17 @@ void Awb::calculateWBGains(const ipu3_uapi_stats_3a *stats)\n void Awb::process(IPAContext &context, const ipu3_uapi_stats_3a *stats)\n {\n \tcalculateWBGains(stats);\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \n \t/*\n \t * Gains are only recalculated if enough zones were detected.\n \t * The results are cached, so if no results were calculated, we set the\n \t * cached values from asyncResults_ here.\n \t */\n-\tcontext.frameContext.awb.gains.blue = asyncResults_.blueGain;\n-\tcontext.frameContext.awb.gains.green = asyncResults_.greenGain;\n-\tcontext.frameContext.awb.gains.red = asyncResults_.redGain;\n-\tcontext.frameContext.awb.temperatureK = asyncResults_.temperatureK;\n+\tframeContext.awb.gains.blue = asyncResults_.blueGain;\n+\tframeContext.awb.gains.green = asyncResults_.greenGain;\n+\tframeContext.awb.gains.red = asyncResults_.redGain;\n+\tframeContext.awb.temperatureK = asyncResults_.temperatureK;\n }\n \n constexpr uint16_t Awb::threshold(float value)\n@@ -434,6 +435,7 @@ void Awb::prepare(IPAContext &context, ipu3_uapi_params *params)\n \t */\n \tparams->acc_param.bnr = imguCssBnrDefaults;\n \tSize &bdsOutputSize = context.configuration.grid.bdsOutputSize;\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \tparams->acc_param.bnr.column_size = bdsOutputSize.width;\n \tparams->acc_param.bnr.opt_center.x_reset = grid.x_start - (bdsOutputSize.width / 2);\n \tparams->acc_param.bnr.opt_center.y_reset = grid.y_start - (bdsOutputSize.height / 2);\n@@ -442,10 +444,10 @@ void Awb::prepare(IPAContext &context, ipu3_uapi_params *params)\n \tparams->acc_param.bnr.opt_center_sqr.y_sqr_reset = params->acc_param.bnr.opt_center.y_reset\n \t\t\t\t\t\t\t* params->acc_param.bnr.opt_center.y_reset;\n \t/* Convert to u3.13 fixed point values */\n-\tparams->acc_param.bnr.wb_gains.gr = 8192 * 
context.frameContext.awb.gains.green;\n-\tparams->acc_param.bnr.wb_gains.r  = 8192 * context.frameContext.awb.gains.red;\n-\tparams->acc_param.bnr.wb_gains.b  = 8192 * context.frameContext.awb.gains.blue;\n-\tparams->acc_param.bnr.wb_gains.gb = 8192 * context.frameContext.awb.gains.green;\n+\tparams->acc_param.bnr.wb_gains.gr = 8192 * frameContext.awb.gains.green;\n+\tparams->acc_param.bnr.wb_gains.r  = 8192 * frameContext.awb.gains.red;\n+\tparams->acc_param.bnr.wb_gains.b  = 8192 * frameContext.awb.gains.blue;\n+\tparams->acc_param.bnr.wb_gains.gb = 8192 * frameContext.awb.gains.green;\n \n \tLOG(IPU3Awb, Debug) << \"Color temperature estimated: \" << asyncResults_.temperatureK;\n \ndiff --git a/src/ipa/ipu3/algorithms/tone_mapping.cpp b/src/ipa/ipu3/algorithms/tone_mapping.cpp\nindex 2040eda5..bf710bd1 100644\n--- a/src/ipa/ipu3/algorithms/tone_mapping.cpp\n+++ b/src/ipa/ipu3/algorithms/tone_mapping.cpp\n@@ -42,7 +42,7 @@ int ToneMapping::configure(IPAContext &context,\n \t\t\t   [[maybe_unused]] const IPAConfigInfo &configInfo)\n {\n \t/* Initialise tone mapping gamma value. */\n-\tcontext.frameContext.toneMapping.gamma = 0.0;\n+\tcontext.frameContextQueue.front().toneMapping.gamma = 0.0;\n \n \treturn 0;\n }\n@@ -60,7 +60,7 @@ void ToneMapping::prepare([[maybe_unused]] IPAContext &context,\n {\n \t/* Copy the calculated LUT into the parameters buffer. 
*/\n \tmemcpy(params->acc_param.gamma.gc_lut.lut,\n-\t       context.frameContext.toneMapping.gammaCorrection.lut,\n+\t       context.frameContextQueue.front().toneMapping.gammaCorrection.lut,\n \t       IPU3_UAPI_GAMMA_CORR_LUT_ENTRIES *\n \t       sizeof(params->acc_param.gamma.gc_lut.lut[0]));\n \n@@ -80,6 +80,7 @@ void ToneMapping::prepare([[maybe_unused]] IPAContext &context,\n void ToneMapping::process(IPAContext &context,\n \t\t\t  [[maybe_unused]] const ipu3_uapi_stats_3a *stats)\n {\n+\tIPAFrameContext &frameContext = context.frameContextQueue.front();\n \t/*\n \t * Hardcode gamma to 1.1 as a default for now.\n \t *\n@@ -87,11 +88,11 @@ void ToneMapping::process(IPAContext &context,\n \t */\n \tgamma_ = 1.1;\n \n-\tif (context.frameContext.toneMapping.gamma == gamma_)\n+\tif (frameContext.toneMapping.gamma == gamma_)\n \t\treturn;\n \n \tstruct ipu3_uapi_gamma_corr_lut &lut =\n-\t\tcontext.frameContext.toneMapping.gammaCorrection;\n+\t\tframeContext.toneMapping.gammaCorrection;\n \n \tfor (uint32_t i = 0; i < std::size(lut.lut); i++) {\n \t\tdouble j = static_cast<double>(i) / (std::size(lut.lut) - 1);\n@@ -101,7 +102,7 @@ void ToneMapping::process(IPAContext &context,\n \t\tlut.lut[i] = gamma * 8191;\n \t}\n \n-\tcontext.frameContext.toneMapping.gamma = gamma_;\n+\tframeContext.toneMapping.gamma = gamma_;\n }\n \n } /* namespace ipa::ipu3::algorithms */\ndiff --git a/src/ipa/ipu3/ipa_context.cpp b/src/ipa/ipu3/ipa_context.cpp\nindex 86794ac1..f503b93d 100644\n--- a/src/ipa/ipu3/ipa_context.cpp\n+++ b/src/ipa/ipu3/ipa_context.cpp\n@@ -39,6 +39,23 @@ namespace libcamera::ipa::ipu3 {\n  * algorithm, but should only be written by its owner.\n  */\n \n+/**\n+ * \\brief Construct a IPAFrameContext instance\n+ */\n+IPAFrameContext::IPAFrameContext() = default;\n+\n+/**\n+ * \\brief Move constructor for IPAFrameContext\n+ * \\param[in] other The other IPAFrameContext\n+ */\n+IPAFrameContext::IPAFrameContext(IPAFrameContext &&other) = default;\n+\n+/**\n+ * 
\\brief Move assignment operator for IPAFrameContext\n+ * \\param[in] other The other IPAFrameContext\n+ */\n+IPAFrameContext &IPAFrameContext::operator=(IPAFrameContext &&other) = default;\n+\n /**\n  * \\struct IPAContext\n  * \\brief Global IPA context data shared between all algorithms\n@@ -46,13 +63,11 @@ namespace libcamera::ipa::ipu3 {\n  * \\var IPAContext::configuration\n  * \\brief The IPA session configuration, immutable during the session\n  *\n- * \\var IPAContext::frameContext\n- * \\brief The frame context for the frame being processed\n+ * \\var IPAContext::frameContextQueue\n+ * \\brief A queue of frame contexts to be processed by the IPA\n  *\n- * \\todo While the frame context is supposed to be per-frame, this\n- * single frame context stores data related to both the current frame\n- * and the previous frames, with fields being updated as the algorithms\n- * are run. This needs to be turned into real per-frame data storage.\n+ * \\var IPAContext::prevFrameContext\n+ * \\brief The latest frame context which the IPA has finished processing\n  */\n \n /**\n@@ -86,6 +101,11 @@ namespace libcamera::ipa::ipu3 {\n  * \\brief Maximum analogue gain supported with the configured sensor\n  */\n \n+/**\n+ * \\var IPAFrameContext::frame\n+ * \\brief Frame number of the corresponding frame context\n+ */\n+\n /**\n  * \\var IPAFrameContext::agc\n  * \\brief Context for the Automatic Gain Control algorithm\ndiff --git a/src/ipa/ipu3/ipa_context.h b/src/ipa/ipu3/ipa_context.h\nindex c6dc0814..a5e298bd 100644\n--- a/src/ipa/ipu3/ipa_context.h\n+++ b/src/ipa/ipu3/ipa_context.h\n@@ -14,6 +14,8 @@\n \n #include <libcamera/geometry.h>\n \n+#include <queue>\n+\n namespace libcamera {\n \n namespace ipa::ipu3 {\n@@ -34,6 +36,12 @@ struct IPASessionConfiguration {\n };\n \n struct IPAFrameContext {\n+\tuint32_t frame;\n+\n+\tIPAFrameContext();\n+\tIPAFrameContext(IPAFrameContext &&other);\n+\tIPAFrameContext &operator=(IPAFrameContext &&other);\n+\n \tstruct {\n 
\t\tuint32_t exposure;\n \t\tdouble gain;\n@@ -62,7 +70,8 @@ struct IPAFrameContext {\n \n struct IPAContext {\n \tIPASessionConfiguration configuration;\n-\tIPAFrameContext frameContext;\n+\tstd::queue<IPAFrameContext> frameContextQueue;\n+\tIPAFrameContext prevFrameContext;\n };\n \n } /* namespace ipa::ipu3 */\ndiff --git a/src/ipa/ipu3/ipu3.cpp b/src/ipa/ipu3/ipu3.cpp\nindex 3d307708..763dbd7d 100644\n--- a/src/ipa/ipu3/ipu3.cpp\n+++ b/src/ipa/ipu3/ipu3.cpp\n@@ -324,6 +324,8 @@ int IPAIPU3::start()\n  */\n void IPAIPU3::stop()\n {\n+\twhile (!context_.frameContextQueue.empty())\n+\t\tcontext_.frameContextQueue.pop();\n }\n \n /**\n@@ -457,6 +459,14 @@ int IPAIPU3::configure(const IPAConfigInfo &configInfo,\n \t/* Clean context at configuration */\n \tcontext_ = {};\n \n+\t/*\n+\t * Insert an initial context into the queue to facilitate\n+\t * algo->configure() below.\n+\t */\n+\tIPAFrameContext initContext;\n+\tinitContext.frame = 0;\n+\tcontext_.frameContextQueue.push(std::move(initContext));\n+\n \tcalculateBdsGrid(configInfo.bdsOutputSize);\n \n \tlineDuration_ = sensorInfo_.lineLength * 1.0s / sensorInfo_.pixelRate;\n@@ -547,8 +557,9 @@ void IPAIPU3::processEvent(const IPU3Event &event)\n \t\tconst ipu3_uapi_stats_3a *stats =\n \t\t\treinterpret_cast<ipu3_uapi_stats_3a *>(mem.data());\n \n-\t\tcontext_.frameContext.sensor.exposure = event.sensorControls.get(V4L2_CID_EXPOSURE).get<int32_t>();\n-\t\tcontext_.frameContext.sensor.gain = camHelper_->gain(event.sensorControls.get(V4L2_CID_ANALOGUE_GAIN).get<int32_t>());\n+\t\tIPAFrameContext &curFrameContext = context_.frameContextQueue.front();\n+\t\tcurFrameContext.sensor.exposure = event.sensorControls.get(V4L2_CID_EXPOSURE).get<int32_t>();\n+\t\tcurFrameContext.sensor.gain = camHelper_->gain(event.sensorControls.get(V4L2_CID_ANALOGUE_GAIN).get<int32_t>());\n \n \t\tparseStatistics(event.frame, event.frameTimestamp, stats);\n \t\tbreak;\n@@ -571,6 +582,11 @@ void IPAIPU3::processControls([[maybe_unused]] 
unsigned int frame,\n \t\t\t      [[maybe_unused]] const ControlList &controls)\n {\n \t/* \\todo Start processing for 'frame' based on 'controls'. */\n+\n+\tIPAFrameContext newContext;\n+\tnewContext.frame = frame;\n+\n+\tcontext_.frameContextQueue.push(std::move(newContext));\n }\n \n /**\n@@ -619,6 +635,9 @@ void IPAIPU3::parseStatistics(unsigned int frame,\n {\n \tControlList ctrls(controls::controls);\n \n+\tcontext_.prevFrameContext = std::move(context_.frameContextQueue.front());\n+\tcontext_.frameContextQueue.pop();\n+\n \tfor (auto const &algo : algorithms_)\n \t\talgo->process(context_, stats);\n \n@@ -628,11 +647,11 @@ void IPAIPU3::parseStatistics(unsigned int frame,\n \tint64_t frameDuration = (defVBlank_ + sensorInfo_.outputSize.height) * lineDuration_.get<std::micro>();\n \tctrls.set(controls::FrameDuration, frameDuration);\n \n-\tctrls.set(controls::AnalogueGain, context_.frameContext.sensor.gain);\n+\tctrls.set(controls::AnalogueGain, context_.prevFrameContext.sensor.gain);\n \n-\tctrls.set(controls::ColourTemperature, context_.frameContext.awb.temperatureK);\n+\tctrls.set(controls::ColourTemperature, context_.prevFrameContext.awb.temperatureK);\n \n-\tctrls.set(controls::ExposureTime, context_.frameContext.sensor.exposure * lineDuration_.get<std::micro>());\n+\tctrls.set(controls::ExposureTime, context_.prevFrameContext.sensor.exposure * lineDuration_.get<std::micro>());\n \n \t/*\n \t * \\todo The Metadata provides a path to getting extended data\n@@ -661,8 +680,9 @@ void IPAIPU3::setControls(unsigned int frame)\n \tIPU3Action op;\n \top.op = ActionSetSensorControls;\n \n-\texposure_ = context_.frameContext.agc.exposure;\n-\tgain_ = camHelper_->gainCode(context_.frameContext.agc.gain);\n+\tIPAFrameContext &context = context_.frameContextQueue.front();\n+\texposure_ = context.agc.exposure;\n+\tgain_ = camHelper_->gainCode(context.agc.gain);\n \n \tControlList ctrls(ctrls_);\n \tctrls.set(V4L2_CID_EXPOSURE, 
static_cast<int32_t>(exposure_));\n","prefixes":["libcamera-devel"]}