[{"id":29056,"web_url":"https://patchwork.libcamera.org/comment/29056/","msgid":"<l3xzffnlkm7npaqn67qc52move4hpurhrfpfezdbomqtlrelp2@4eccse56yvlq>","date":"2024-03-25T20:16:00","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":143,"url":"https://patchwork.libcamera.org/api/people/143/","name":"Jacopo Mondi","email":"jacopo.mondi@ideasonboard.com"},"content":"Hi Dan\n\nOn Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> very large part; following the Rpi IPA's algorithm in spirit with a\n> few tunable values in that IPA being hardcoded in the libipa ones.\n> Add a new base class for MeanLuminanceAgc which implements the same\n\nnit: I would rather call this one AgcMeanLuminance.\n\nOne other note, not sure if applicable here, from the experience of\ntrying to upstream an auto-focus algorithm for RkISP1. We had there a\nbase class to define the algorithm interface, one derived class for\nthe common calculation and one platform-specific part for statistics\ncollection and IPA module interfacing.\n\nThe base class was there mostly to handle the algorithm state machine\nby handling the controls that influence the algorithm behaviour. 
In\ncase of AEGC I can think, for example, about handling the switch\nbetween enable/disable of auto-mode (and consequently handling a\nmanually set ExposureTime and AnalogueGain), switching between\ndifferent ExposureModes etc.\n\nThis is the last attempt for AF\nhttps://patchwork.libcamera.org/patch/18510/\n\nand I'm wondering if it would be desirable to abstract away from the\nMeanLuminance part the part that is tightly coupled with\nlibcamera's controls definition.\n\nLet's make a concrete example: look how the rkisp1 and the ipu3 agc\nimplementations handle AeEnable (or better, look at how the rkisp1\ndoes that and the IPU3 does not).\n\nIdeally, my goal would be to abstract the handling of the control and\nall the state machine that decides if the manual or auto-computed\nvalues should be used to an AEGCAlgorithm base class.\n\nYour series does that for the tuning file parsing already but does\nthat in the MeanLuminance method implementation, while it should or\ncould be common to all AEGC methods.\n\nOne very interesting experiment could be starting with this and then\nplumb in AeEnable support in the IPU3, for example, and move everything\ncommon to a base class.\n\n> algorithm and additionally parses yaml tuning files to inform an IPA\n> module's Agc algorithm about valid constraint and exposure modes and\n> their associated bounds.\n>\n> Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> ---\n>  src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n>  src/ipa/libipa/agc.h       |  82 ++++++\n>  src/ipa/libipa/meson.build |   2 +\n>  3 files changed, 610 insertions(+)\n>  create mode 100644 src/ipa/libipa/agc.cpp\n>  create mode 100644 src/ipa/libipa/agc.h\n>\n> diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> new file mode 100644\n> index 00000000..af57a571\n> --- /dev/null\n> +++ b/src/ipa/libipa/agc.cpp\n> @@ -0,0 +1,526 @@\n> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * Copyright (C) 
2024 Ideas on Board Oy\n> + *\n> + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> + */\n> +\n> +#include \"agc.h\"\n> +\n> +#include <cmath>\n> +\n> +#include <libcamera/base/log.h>\n> +#include <libcamera/control_ids.h>\n> +\n> +#include \"exposure_mode_helper.h\"\n> +\n> +using namespace libcamera::controls;\n> +\n> +/**\n> + * \\file agc.h\n> + * \\brief Base class implementing mean luminance AEGC.\n\nnit: no '.' at the end of briefs\n\n> + */\n> +\n> +namespace libcamera {\n> +\n> +using namespace std::literals::chrono_literals;\n> +\n> +LOG_DEFINE_CATEGORY(Agc)\n> +\n> +namespace ipa {\n> +\n> +/*\n> + * Number of frames to wait before calculating stats on minimum exposure\n> + * \\todo should this be a tunable value?\n\nDoes this depend on the ISP (comes from IPA), the sensor (comes from\ntuning file) or... both ? :)\n\n> + */\n> +static constexpr uint32_t kNumStartupFrames = 10;\n> +\n> +/*\n> + * Default relative luminance target\n> + *\n> + * This value should be chosen so that when the camera points at a grey target,\n> + * the resulting image brightness looks \"right\". Custom values can be passed\n> + * as the relativeLuminanceTarget value in sensor tuning files.\n> + */\n> +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> +\n> +/**\n> + * \\struct MeanLuminanceAgc::AgcConstraint\n> + * \\brief The boundaries and target for an AeConstraintMode constraint\n> + *\n> + * This structure describes an AeConstraintMode constraint for the purposes of\n> + * this algorithm. 
The algorithm will apply the constraints by calculating the\n> + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> + * the resulting value is the right side of the given target (as defined by the\n> + * boundary and luminance target).\n> + */\n\nHere, for example.\n\ncontrols::AeConstraintMode and the supported values are defined as\n(core|vendor) controls in control_ids_*.yaml\n\nThe tuning file expresses the constraint modes using the Control\ndefinition (I wonder if this has always been like this) but it\ndefinitely ties the tuning file to the controls definition.\n\nApplications use controls::AeConstraintMode to select one of the\nconstraint modes to have the algorithm use it.\n\nIn all of this, how much is part of the MeanLuminance implementation\nand how much is shared between possibly multiple implementations ?\n\n> +\n> +/**\n> + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> + * \\brief Specify whether the constraint defines a lower or upper bound\n> + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> + * \\brief The constraint defines a lower bound\n> + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n> + * \\brief The constraint defines an upper bound\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> + * \\brief The type of constraint bound\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n> + * \\brief The lower quantile to use for the constraint\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n> + * \\brief The upper quantile to use for the constraint\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n> + * \\brief The luminance target for the constraint\n> + */\n> +\n> +/**\n> + * \\class MeanLuminanceAgc\n> + * \\brief a mean-based auto-exposure algorithm\n> + *\n> + * This algorithm calculates a shutter time, analogue and digital gain such that\n> + * the normalised mean luminance value of an image is driven towards a 
target,\n> + * which itself is discovered from tuning data. The algorithm is a two-stage\n> + * process:\n> + *\n> + * In the first stage, an initial gain value is derived by iteratively comparing\n> + * the gain-adjusted mean luminance across an entire image against a target, and\n> + * selecting a value which pushes it as closely as possible towards the target.\n> + *\n> + * In the second stage we calculate the gain required to drive the average of a\n> + * section of a histogram to a target value, where the target and the boundaries\n> + * of the section of the histogram used in the calculation are taken from the\n> + * values defined for the currently configured AeConstraintMode within the\n> + * tuning data. The gain from the first stage is then clamped to the gain from\n> + * this stage.\n> + *\n> + * The final gain is used to adjust the effective exposure value of the image,\n> + * and that new exposure value divided into shutter time, analogue gain and\n> + * digital gain according to the selected AeExposureMode.\n> + */\n> +\n> +MeanLuminanceAgc::MeanLuminanceAgc()\n> +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> +{\n> +}\n> +\n> +/**\n> + * \\brief Parse the relative luminance target from the tuning data\n> + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> + */\n> +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> +{\n> +\trelativeLuminanceTarget_ =\n> +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n\nHow do you expect this to be computed in the tuning file ?\n\n> +}\n> +\n> +/**\n> + * \\brief Parse an AeConstraintMode constraint from tuning data\n> + * \\param[in] modeDict the YamlObject holding the constraint data\n> + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> + */\n> +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> +{\n> +\tfor (const auto &[boundName, 
content] : modeDict.asDict()) {\n> +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n> +\t\t\tLOG(Agc, Warning)\n> +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n> +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> +\t\tdouble yTarget =\n> +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n> +\n> +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> +\n> +\t\tif (!constraintModes_.count(id))\n> +\t\t\tconstraintModes_[id] = {};\n> +\n> +\t\tif (idx)\n> +\t\t\tconstraintModes_[id].push_back(constraint);\n> +\t\telse\n> +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> +\t}\n> +}\n> +\n> +/**\n> + * \\brief Parse tuning data file to populate AeConstraintMode control\n> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> + *\n> + * The Agc algorithm's tuning data should contain a dictionary called\n> + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> + * a value from \\ref controls::AeConstraintModeNameValueMap. 
Each mode dict may\ncontain either a \"lower\" or \"upper\" key, or both, in this format:\n> + *\n> + * \\code{.unparsed}\n> + * algorithms:\n> + *   - Agc:\n> + *       AeConstraintMode:\n> + *         ConstraintNormal:\n> + *           lower:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.5\n\nOk, so this ties the tuning file not just to the libcamera controls\ndefinition, but to this specific implementation of the algorithm ?\n\nNot that it was unexpected, and I think it's fine, as using a\nlibcamera-defined control value as 'index' makes sure the applications\nwill deal with the same interface, but this largely conflicts with the\nidea to have shared parsing for all algorithms in a base class.\n\nAlso, we made AeConstraintMode a 'core control' because at the time\nwhen RPi first implemented AGC support there was no alternative to\nthat. Then the RPi implementation has been copied in all other\nplatforms and this is still fine as a 'core control'. 
This however\nseems to be a tuning parameter for this specific algorithm\nimplementation, isn't it ?\n\n> + *         ConstraintHighlight:\n> + *           lower:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.5\n> + *           upper:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.8\n> + *\n> + * \\endcode\n> + *\n> + * The parsed dictionaries are used to populate an array of available values for\n> + * the AeConstraintMode control and stored for later use in the algorithm.\n> + *\n> + * \\return -EINVAL Where a defined constraint mode is invalid\n> + * \\return 0 on success\n> + */\n> +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> +{\n> +\tstd::vector<ControlValue> availableConstraintModes;\n> +\n> +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> +\tif (yamlConstraintModes.isDictionary()) {\n> +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> +\t\t\t\tLOG(Agc, Warning)\n> +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tif (!modeDict.isDictionary()) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> +\t\t\t\treturn -EINVAL;\n> +\t\t\t}\n> +\n> +\t\t\tparseConstraint(modeDict,\n> +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> +\t\t\tavailableConstraintModes.push_back(\n> +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * If the tuning data file contains no constraints then we use the\n> +\t * default constraint that the various Agc algorithms were adhering to\n> +\t * anyway before centralisation.\n> +\t */\n> +\tif (constraintModes_.empty()) {\n> +\t\tAgcConstraint constraint = {\n> 
+\t\t\tAgcConstraint::Bound::LOWER,\n> +\t\t\t0.98,\n> +\t\t\t1.0,\n> +\t\t\t0.5\n> +\t\t};\n> +\n> +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> +\t\t\tconstraint);\n> +\t\tavailableConstraintModes.push_back(\n> +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> +\t}\n> +\n> +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> +\n> +\treturn 0;\n> +}\n> +\n> +/**\n> + * \\brief Parse tuning data file to populate AeExposureMode control\n> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> + *\n> + * The Agc algorithm's tuning data should contain a dictionary called\n> + * AeExposureMode containing per-mode setting dictionaries with the key being\n> + * a value from \\ref controls::AeExposureModeNameValueMap. Each mode dict should\n> + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> + * values with the key \"gain\", in this format:\n> + *\n> + * \\code{.unparsed}\n> + * algorithms:\n> + *   - Agc:\n> + *       AeExposureMode:\n\nSame reasoning as per the constraints, really.\n\nThere's nothing bad here, if not me realizing our controls definition\nis intimately tied with the algorithms implementation, so I wonder\nagain if even handling things like AeEnable in a common form makes\nany sense..\n\nNot going to review the actual implementation now, as it comes from\nthe existing ones...\n\n\n> + *         ExposureNormal:\n> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> + *         ExposureShort:\n> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> + *\n> + * \\endcode\n> + *\n> + * The parsed dictionaries are used to populate an array of available values for\n> + * the AeExposureMode control and to create ExposureModeHelpers\n> + *\n> + * \\return 
-EINVAL Where a defined constraint mode is invalid\n> + * \\return 0 on success\n> + */\n> +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> +{\n> +\tstd::vector<ControlValue> availableExposureModes;\n> +\tint ret;\n> +\n> +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> +\tif (yamlExposureModes.isDictionary()) {\n> +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> +\t\t\t    AeExposureModeNameValueMap.end()) {\n> +\t\t\t\tLOG(Agc, Warning)\n> +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tif (!modeValues.isDictionary()) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n> +\t\t\t\treturn -EINVAL;\n> +\t\t\t}\n> +\n> +\t\t\tstd::vector<uint32_t> shutters =\n> +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n> +\t\t\tstd::vector<double> gains =\n> +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n> +\n> +\t\t\tstd::vector<utils::Duration> shutterDurations;\n> +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n> +\t\t\t\tstd::back_inserter(shutterDurations),\n> +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n> +\n> +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n> +\t\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n> +\t\t\t\treturn ret;\n> +\t\t\t}\n> +\n> +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n> +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * If we don't have any exposure modes in the tuning data we create 
an\n> +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n> +\t * will then internally simply drive the shutter as high as possible\n> +\t * before touching gain\n> +\t */\n> +\tif (availableExposureModes.empty()) {\n> +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n> +\t\tstd::vector<utils::Duration> shutterDurations = {};\n> +\t\tstd::vector<double> gains = {};\n> +\n> +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> +\t\t\tstd::make_shared<ExposureModeHelper>();\n> +\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> +\t\t\tLOG(Agc, Error)\n> +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n> +\t\t\treturn ret;\n> +\t\t}\n> +\n> +\t\texposureModeHelpers_[exposureModeId] = helper;\n> +\t\tavailableExposureModes.push_back(exposureModeId);\n> +\t}\n> +\n> +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n> +\n> +\treturn 0;\n> +}\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::constraintModes()\n> + * \\brief Get the constraint modes that have been parsed from tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n> + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::controls()\n> + * \\brief Get the controls that have been generated after parsing tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n> + * \\brief Estimate the luminance of an image, adjusted by a given gain\n> + * \\param[in] gain The gain with which to adjust the luminance estimate\n> + *\n> + * This function is a pure virtual function because estimation of luminance is a\n> + * hardware-specific operation, which depends wholly on the format of the stats\n> + * that are delivered to libcamera from the ISP. 
Derived classes must implement\n> + * an overriding function that calculates the normalised mean luminance value\n> + * across the entire image.\n> + *\n> + * \\return The normalised relative luminance of the image\n> + */\n> +\n> +/**\n> + * \\brief Estimate the initial gain needed to achieve a relative luminance\n> + *        target\n\nnit: we don't usually indent in briefs, or in general when breaking\nlines in doxygen as far as I can tell\n\n> + *\n> + * To account for non-linearity caused by saturation, the value needs to be\n> + * estimated in an iterative process, as multiplying by a gain will not increase\n> + * the relative luminance by the same factor if some image regions are saturated\n> + *\n> + * \\return The calculated initial gain\n> + */\n> +double MeanLuminanceAgc::estimateInitialGain()\n> +{\n> +\tdouble yTarget = relativeLuminanceTarget_;\n> +\tdouble yGain = 1.0;\n> +\n> +\tfor (unsigned int i = 0; i < 8; i++) {\n> +\t\tdouble yValue = estimateLuminance(yGain);\n> +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n> +\n> +\t\tyGain *= extra_gain;\n> +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n> +\t\t\t\t<< \", Y target: \" << yTarget\n> +\t\t\t\t<< \", gives gain \" << yGain;\n> +\n> +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\treturn yGain;\n> +}\n> +\n> +/**\n> + * \\brief Clamp gain within the bounds of a defined constraint\n> + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n> + * \\param[in] hist A histogram over which to calculate inter-quantile means\n> + * \\param[in] gain The gain to clamp\n> + *\n> + * \\return The gain clamped within the constraint bounds\n> + */\n> +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n> +\t\t\t\t\t     const Histogram &hist,\n> +\t\t\t\t\t     double gain)\n> +{\n> +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n> +\tfor (const 
AgcConstraint &constraint : constraints) {\n> +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n> +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n> +\n> +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n> +\t\t    newGain > gain)\n> +\t\t\tgain = newGain;\n> +\n> +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n> +\t\t    newGain < gain)\n> +\t\t\tgain = newGain;\n> +\t}\n> +\n> +\treturn gain;\n> +}\n> +\n> +/**\n> + * \\brief Apply a filter on the exposure value to limit the speed of changes\n> + * \\param[in] exposureValue The target exposure from the AGC algorithm\n> + *\n> + * The speed of the filter is adaptive, and will produce the target quicker\n> + * during startup, or when the target exposure is within 20% of the most recent\n> + * filter output.\n> + *\n> + * \\return The filtered exposure\n> + */\n> +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n> +{\n> +\tdouble speed = 0.2;\n> +\n> +\t/* Adapt instantly if we are in startup phase. 
*/\n> +\tif (frameCount_ < kNumStartupFrames)\n> +\t\tspeed = 1.0;\n> +\n> +\t/*\n> +\t * If we are close to the desired result, go faster to avoid making\n> +\t * multiple micro-adjustments.\n> +\t * \\todo Make this customisable?\n> +\t */\n> +\tif (filteredExposure_ < 1.2 * exposureValue &&\n> +\t    filteredExposure_ > 0.8 * exposureValue)\n> +\t\tspeed = sqrt(speed);\n> +\n> +\tfilteredExposure_ = speed * exposureValue +\n> +\t\t\t    filteredExposure_ * (1.0 - speed);\n> +\n> +\treturn filteredExposure_;\n> +}\n> +\n> +/**\n> + * \\brief Calculate the new exposure value\n> + * \\param[in] constraintModeIndex The index of the current constraint mode\n> + * \\param[in] exposureModeIndex The index of the current exposure mode\n> + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n> + *\t      the calculated gain\n> + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n> + *\t      statistics in use derive\n> + *\n> + * Calculate a new exposure value to try to obtain the target. 
The calculated\n> + * exposure value is filtered to prevent rapid changes from frame to frame, and\n> + * divided into shutter time, analogue and digital gain.\n> + *\n> + * \\return Tuple of shutter time, analogue gain, and digital gain\n> + */\n> +std::tuple<utils::Duration, double, double>\n> +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n> +\t\t\t\t uint32_t exposureModeIndex,\n> +\t\t\t\t const Histogram &yHist,\n> +\t\t\t\t utils::Duration effectiveExposureValue)\n> +{\n> +\t/*\n> +\t * The pipeline handler should validate that we have received an allowed\n> +\t * value for AeExposureMode.\n> +\t */\n> +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n> +\t\texposureModeHelpers_.at(exposureModeIndex);\n> +\n> +\tdouble gain = estimateInitialGain();\n> +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n> +\n> +\t/*\n> +\t * We don't check whether we're already close to the target, because\n> +\t * even if the effective exposure value is the same as the last frame's\n> +\t * we could have switched to an exposure mode that would require a new\n> +\t * pass through the splitExposure() function.\n> +\t */\n> +\n> +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n> +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n> +\t\t\t\t\t   * exposureModeHelper->maxGain();\n> +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n> +\n> +\t/*\n> +\t * We filter the exposure value to make sure changes are not too jarring\n> +\t * from frame to frame.\n> +\t */\n> +\tnewExposureValue = filterExposure(newExposureValue);\n> +\n> +\tframeCount_++;\n> +\treturn exposureModeHelper->splitExposure(newExposureValue);\n> +}\n> +\n> +}; /* namespace ipa */\n> +\n> +}; /* namespace libcamera */\n> diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n> new file mode 100644\n> index 00000000..902a359a\n> --- /dev/null\n> +++ b/src/ipa/libipa/agc.h\n> @@ -0,0 +1,82 @@\n> +/* 
SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * Copyright (C) 2024 Ideas on Board Oy\n> + *\n> + agc.h - Base class for libipa-compliant AGC algorithms\n> + */\n> +\n> +#pragma once\n> +\n> +#include <tuple>\n> +#include <vector>\n> +\n> +#include <libcamera/controls.h>\n> +\n> +#include \"libcamera/internal/yaml_parser.h\"\n> +\n> +#include \"exposure_mode_helper.h\"\n> +#include \"histogram.h\"\n> +\n> +namespace libcamera {\n> +\n> +namespace ipa {\n> +\n> +class MeanLuminanceAgc\n> +{\n> +public:\n> +\tMeanLuminanceAgc();\n> +\tvirtual ~MeanLuminanceAgc() = default;\n> +\n> +\tstruct AgcConstraint {\n> +\t\tenum class Bound {\n> +\t\t\tLOWER = 0,\n> +\t\t\tUPPER = 1\n> +\t\t};\n> +\t\tBound bound;\n> +\t\tdouble qLo;\n> +\t\tdouble qHi;\n> +\t\tdouble yTarget;\n> +\t};\n> +\n> +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n> +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n> +\tint parseConstraintModes(const YamlObject &tuningData);\n> +\tint parseExposureModes(const YamlObject &tuningData);\n> +\n> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n> +\t{\n> +\t\treturn constraintModes_;\n> +\t}\n> +\n> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n> +\t{\n> +\t\treturn exposureModeHelpers_;\n> +\t}\n> +\n> +\tControlInfoMap::Map controls()\n> +\t{\n> +\t\treturn controls_;\n> +\t}\n> +\n> +\tvirtual double estimateLuminance(const double gain) = 0;\n> +\tdouble estimateInitialGain();\n> +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n> +\t\t\t\t   const Histogram &hist,\n> +\t\t\t\t   double gain);\n> +\tutils::Duration filterExposure(utils::Duration exposureValue);\n> +\tstd::tuple<utils::Duration, double, double>\n> +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n> +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n> +private:\n> +\tuint64_t frameCount_;\n> +\tutils::Duration 
filteredExposure_;\n> +\tdouble relativeLuminanceTarget_;\n> +\n> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n> +\tControlInfoMap::Map controls_;\n> +};\n> +\n> +}; /* namespace ipa */\n> +\n> +}; /* namespace libcamera */\n> diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n> index 37fbd177..31cc8d70 100644\n> --- a/src/ipa/libipa/meson.build\n> +++ b/src/ipa/libipa/meson.build\n> @@ -1,6 +1,7 @@\n>  # SPDX-License-Identifier: CC0-1.0\n>\n>  libipa_headers = files([\n> +    'agc.h',\n>      'algorithm.h',\n>      'camera_sensor_helper.h',\n>      'exposure_mode_helper.h',\n> @@ -10,6 +11,7 @@ libipa_headers = files([\n>  ])\n>\n>  libipa_sources = files([\n> +    'agc.cpp',\n>      'algorithm.cpp',\n>      'camera_sensor_helper.cpp',\n>      'exposure_mode_helper.cpp',\n> --\n> 2.34.1\n>","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id B1C98BD160\n\tfor <parsemail@patchwork.libcamera.org>;\n\tMon, 25 Mar 2024 20:16:06 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 5B253632EA;\n\tMon, 25 Mar 2024 21:16:05 +0100 (CET)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id E0A6063036\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 25 Mar 2024 21:16:03 +0100 (CET)","from ideasonboard.com (93-61-96-190.ip145.fastwebnet.it\n\t[93.61.96.190])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 7BAA47E4;\n\tMon, 25 Mar 2024 21:15:32 +0100 (CET)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass 
(1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"rU+qeoNk\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1711397732;\n\tbh=AF0P6+CJ1YUq9mZOBNSxCfC8zhPbHey84Zk+3T6X+rA=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=rU+qeoNkL+YJv4k9iNha+HR3Y4t45z9cQB6p+eE8zf7UZN19aP7yZ8Dqzcc4tYuNG\n\tDMn6cf/1CzOrqYfnQ9nF5kp8Cza2+08TEoQCsauAOoge0bAEXdR/yEfvRY0/1sn+vf\n\tae8c2gCawIMUE/ftGDIh0Q4Kn6lMfo65TXniuPjM=","Date":"Mon, 25 Mar 2024 21:16:00 +0100","From":"Jacopo Mondi <jacopo.mondi@ideasonboard.com>","To":"Daniel Scally <dan.scally@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","Message-ID":"<l3xzffnlkm7npaqn67qc52move4hpurhrfpfezdbomqtlrelp2@4eccse56yvlq>","References":"<20240322131451.3092931-1-dan.scally@ideasonboard.com>\n\t<20240322131451.3092931-5-dan.scally@ideasonboard.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20240322131451.3092931-5-dan.scally@ideasonboard.com>","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" 
<libcamera-devel-bounces@lists.libcamera.org>"}},{"id":29092,"web_url":"https://patchwork.libcamera.org/comment/29092/","msgid":"<20240327152545.wy5gj3wqecejrtfu@jasper>","date":"2024-03-27T15:25:45","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":184,"url":"https://patchwork.libcamera.org/api/people/184/","name":"Stefan Klug","email":"stefan.klug@ideasonboard.com"},"content":"Hi Daniel,\n\nthanks for the patch. Jacopo already did the whole review, so only a\nsmall comment...\n\nOn Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> very large part; following the Rpi IPA's algorithm in spirit with a\n> few tunable values in that IPA being hardcoded in the libipa ones.\n> Add a new base class for MeanLuminanceAgc which implements the same\n> algorithm and additionally parses yaml tuning files to inform an IPA\n> module's Agc algorithm about valid constraint and exposure modes and\n> their associated bounds.\n> \n> Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> ---\n>  src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n>  src/ipa/libipa/agc.h       |  82 ++++++\n>  src/ipa/libipa/meson.build |   2 +\n>  3 files changed, 610 insertions(+)\n>  create mode 100644 src/ipa/libipa/agc.cpp\n>  create mode 100644 src/ipa/libipa/agc.h\n> \n> diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> new file mode 100644\n> index 00000000..af57a571\n> --- /dev/null\n> +++ b/src/ipa/libipa/agc.cpp\n> @@ -0,0 +1,526 @@\n> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * Copyright (C) 2024 Ideas on Board Oy\n> + *\n> + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> + */\n> +\n> +#include \"agc.h\"\n> +\n> +#include <cmath>\n> +\n> +#include <libcamera/base/log.h>\n> +#include <libcamera/control_ids.h>\n> +\n> +#include \"exposure_mode_helper.h\"\n> +\n> +using namespace 
libcamera::controls;\n> +\n> +/**\n> + * \\file agc.h\n> + * \\brief Base class implementing mean luminance AEGC.\n> + */\n> +\n> +namespace libcamera {\n> +\n> +using namespace std::literals::chrono_literals;\n> +\n> +LOG_DEFINE_CATEGORY(Agc)\n> +\n> +namespace ipa {\n> +\n> +/*\n> + * Number of frames to wait before calculating stats on minimum exposure\n> + * \\todo should this be a tunable value?\n> + */\n> +static constexpr uint32_t kNumStartupFrames = 10;\n> +\n> +/*\n> + * Default relative luminance target\n> + *\n> + * This value should be chosen so that when the camera points at a grey target,\n> + * the resulting image brightness looks \"right\". Custom values can be passed\n> + * as the relativeLuminanceTarget value in sensor tuning files.\n> + */\n> +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> +\n> +/**\n> + * \\struct MeanLuminanceAgc::AgcConstraint\n> + * \\brief The boundaries and target for an AeConstraintMode constraint\n> + *\n> + * This structure describes an AeConstraintMode constraint for the purposes of\n> + * this algorithm. 
The algorithm will apply the constraints by calculating the\n> + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> + * the resulting value is the right side of the given target (as defined by the\n> + * boundary and luminance target).\n> + */\n> +\n> +/**\n> + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> + * \\brief Specify whether the constraint defines a lower or upper bound\n> + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> + * \\brief The constraint defines a lower bound\n> + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n> + * \\brief The constraint defines an upper bound\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> + * \\brief The type of constraint bound\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n> + * \\brief The lower quantile to use for the constraint\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n> + * \\brief The upper quantile to use for the constraint\n> + */\n> +\n> +/**\n> + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n> + * \\brief The luminance target for the constraint\n> + */\n> +\n> +/**\n> + * \\class MeanLuminanceAgc\n> + * \\brief a mean-based auto-exposure algorithm\n> + *\n> + * This algorithm calculates a shutter time, analogue and digital gain such that\n> + * the normalised mean luminance value of an image is driven towards a target,\n> + * which itself is discovered from tuning data. 
The algorithm is a two-stage\n> + * process:\n> + *\n> + * In the first stage, an initial gain value is derived by iteratively comparing\n> + * the gain-adjusted mean luminance across an entire image against a target, and\n> + * selecting a value which pushes it as closely as possible towards the target.\n> + *\n> + * In the second stage we calculate the gain required to drive the average of a\n> + * section of a histogram to a target value, where the target and the boundaries\n> + * of the section of the histogram used in the calculation are taken from the\n> + * values defined for the currently configured AeConstraintMode within the\n> + * tuning data. The gain from the first stage is then clamped to the gain from\n> + * this stage.\n> + *\n> + * The final gain is used to adjust the effective exposure value of the image,\n> + * and that new exposure value divided into shutter time, analogue gain and\n> + * digital gain according to the selected AeExposureMode.\n> + */\n> +\n> +MeanLuminanceAgc::MeanLuminanceAgc()\n> +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> +{\n> +}\n> +\n> +/**\n> + * \\brief Parse the relative luminance target from the tuning data\n> + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> + */\n> +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> +{\n> +\trelativeLuminanceTarget_ =\n> +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n> +}\n> +\n> +/**\n> + * \\brief Parse an AeConstraintMode constraint from tuning data\n> + * \\param[in] modeDict the YamlObject holding the constraint data\n> + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> + */\n> +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> +{\n> +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n> +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n> +\t\t\tLOG(Agc, Warning)\n> 
+\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n> +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> +\t\tdouble yTarget =\n> +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n> +\n> +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> +\n> +\t\tif (!constraintModes_.count(id))\n> +\t\t\tconstraintModes_[id] = {};\n> +\n> +\t\tif (idx)\n> +\t\t\tconstraintModes_[id].push_back(constraint);\n> +\t\telse\n> +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> +\t}\n> +}\n> +\n> +/**\n> + * \\brief Parse tuning data file to populate AeConstraintMode control\n> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> + *\n> + * The Agc algorithm's tuning data should contain a dictionary called\n> + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> + * a value from \\ref controls::AeConstraintModeNameValueMap. 
Each mode dict may\n> + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n> + *\n> + * \\code{.unparsed}\n> + * algorithms:\n> + *   - Agc:\n> + *       AeConstraintMode:\n> + *         ConstraintNormal:\n> + *           lower:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.5\n> + *         ConstraintHighlight:\n> + *           lower:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.5\n> + *           upper:\n> + *             qLo: 0.98\n> + *             qHi: 1.0\n> + *             yTarget: 0.8\n> + *\n> + * \\endcode\n> + *\n> + * The parsed dictionaries are used to populate an array of available values for\n> + * the AeConstraintMode control and stored for later use in the algorithm.\n> + *\n> + * \\return -EINVAL Where a defined constraint mode is invalid\n> + * \\return 0 on success\n> + */\n> +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> +{\n> +\tstd::vector<ControlValue> availableConstraintModes;\n> +\n> +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> +\tif (yamlConstraintModes.isDictionary()) {\n> +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> +\t\t\t\tLOG(Agc, Warning)\n> +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tif (!modeDict.isDictionary()) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> +\t\t\t\treturn -EINVAL;\n> +\t\t\t}\n> +\n> +\t\t\tparseConstraint(modeDict,\n> +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> +\t\t\tavailableConstraintModes.push_back(\n> +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * If the tuning data file contains no 
constraints then we use the\n> +\t * default constraint that the various Agc algorithms were adhering to\n> +\t * anyway before centralisation.\n> +\t */\n> +\tif (constraintModes_.empty()) {\n> +\t\tAgcConstraint constraint = {\n> +\t\t\tAgcConstraint::Bound::LOWER,\n> +\t\t\t0.98,\n> +\t\t\t1.0,\n> +\t\t\t0.5\n> +\t\t};\n> +\n> +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> +\t\t\tconstraint);\n> +\t\tavailableConstraintModes.push_back(\n> +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> +\t}\n> +\n> +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> +\n> +\treturn 0;\n> +}\n> +\n> +/**\n> + * \\brief Parse tuning data file to populate AeExposureMode control\n> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> + *\n> + * The Agc algorithm's tuning data should contain a dictionary called\n> + * AeExposureMode containing per-mode setting dictionaries with the key being\n> + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n> + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> + * values with the key \"gain\", in this format:\n> + *\n> + * \\code{.unparsed}\n> + * algorithms:\n> + *   - Agc:\n> + *       AeExposureMode:\n> + *         ExposureNormal:\n> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> + *         ExposureShort:\n> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> + *\n> + * \\endcode\n> + *\n> + * The parsed dictionaries are used to populate an array of available values for\n> + * the AeExposureMode control and to create ExposureModeHelpers\n> + *\n> + * \\return -EINVAL Where a defined constraint mode is invalid\n> + * \\return 0 on success\n> + */\n> +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> +{\n> +\tstd::vector<ControlValue> availableExposureModes;\n> +\tint ret;\n> +\n> +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> +\tif (yamlExposureModes.isDictionary()) {\n> +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> +\t\t\t    AeExposureModeNameValueMap.end()) {\n> +\t\t\t\tLOG(Agc, Warning)\n> +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tif (!modeValues.isDictionary()) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n> +\t\t\t\treturn -EINVAL;\n> +\t\t\t}\n> +\n> +\t\t\tstd::vector<uint32_t> shutters =\n> +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n> +\t\t\tstd::vector<double> gains =\n> +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n> +\n> +\t\t\tstd::vector<utils::Duration> shutterDurations;\n> 
+\t\t\tstd::transform(shutters.begin(), shutters.end(),\n> +\t\t\t\tstd::back_inserter(shutterDurations),\n> +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n> +\n> +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n> +\t\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> +\t\t\t\tLOG(Agc, Error)\n> +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n> +\t\t\t\treturn ret;\n> +\t\t\t}\n> +\n> +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n> +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * If we don't have any exposure modes in the tuning data we create an\n> +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n> +\t * will then internally simply drive the shutter as high as possible\n> +\t * before touching gain\n> +\t */\n> +\tif (availableExposureModes.empty()) {\n> +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n> +\t\tstd::vector<utils::Duration> shutterDurations = {};\n> +\t\tstd::vector<double> gains = {};\n> +\n> +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> +\t\t\tstd::make_shared<ExposureModeHelper>();\n> +\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> +\t\t\tLOG(Agc, Error)\n> +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n> +\t\t\treturn ret;\n> +\t\t}\n> +\n> +\t\texposureModeHelpers_[exposureModeId] = helper;\n> +\t\tavailableExposureModes.push_back(exposureModeId);\n> +\t}\n> +\n> +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n> +\n> +\treturn 0;\n> +}\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::constraintModes()\n> + * \\brief Get the constraint modes that have been parsed from tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n> + * \\brief Get the ExposureModeHelpers that 
have been parsed from tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::controls()\n> + * \\brief Get the controls that have been generated after parsing tuning data\n> + */\n> +\n> +/**\n> + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n> + * \\brief Estimate the luminance of an image, adjusted by a given gain\n> + * \\param[in] gain The gain with which to adjust the luminance estimate\n> + *\n> + * This function is a pure virtual function because estimation of luminance is a\n> + * hardware-specific operation, which depends wholly on the format of the stats\n> + * that are delivered to libcamera from the ISP. Derived classes must implement\n> + * an overriding function that calculates the normalised mean luminance value\n> + * across the entire image.\n> + *\n> + * \\return The normalised relative luminance of the image\n> + */\n> +\n> +/**\n> + * \\brief Estimate the initial gain needed to achieve a relative luminance\n> + *        target\n> + *\n> + * To account for non-linearity caused by saturation, the value needs to be\n> + * estimated in an iterative process, as multiplying by a gain will not increase\n> + * the relative luminance by the same factor if some image regions are saturated\n> + *\n> + * \\return The calculated initial gain\n> + */\n> +double MeanLuminanceAgc::estimateInitialGain()\n> +{\n> +\tdouble yTarget = relativeLuminanceTarget_;\n> +\tdouble yGain = 1.0;\n> +\n> +\tfor (unsigned int i = 0; i < 8; i++) {\n> +\t\tdouble yValue = estimateLuminance(yGain);\n> +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n> +\n> +\t\tyGain *= extra_gain;\n> +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n> +\t\t\t\t<< \", Y target: \" << yTarget\n> +\t\t\t\t<< \", gives gain \" << yGain;\n> +\n> +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\treturn yGain;\n> +}\n> +\n> +/**\n> + * \\brief Clamp gain within the bounds of a defined constraint\n> + * \\param[in] 
constraintModeIndex The index of the constraint to adhere to\n> + * \\param[in] hist A histogram over which to calculate inter-quantile means\n> + * \\param[in] gain The gain to clamp\n> + *\n> + * \\return The gain clamped within the constraint bounds\n> + */\n> +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n> +\t\t\t\t\t     const Histogram &hist,\n> +\t\t\t\t\t     double gain)\n> +{\n> +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n> +\tfor (const AgcConstraint &constraint : constraints) {\n> +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n> +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n> +\n> +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n> +\t\t    newGain > gain)\n> +\t\t\tgain = newGain;\n> +\n> +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n> +\t\t    newGain < gain)\n> +\t\t\tgain = newGain;\n> +\t}\n> +\n> +\treturn gain;\n> +}\n> +\n> +/**\n> + * \\brief Apply a filter on the exposure value to limit the speed of changes\n> + * \\param[in] exposureValue The target exposure from the AGC algorithm\n> + *\n> + * The speed of the filter is adaptive, and will produce the target quicker\n> + * during startup, or when the target exposure is within 20% of the most recent\n> + * filter output.\n> + *\n> + * \\return The filtered exposure\n> + */\n> +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n> +{\n> +\tdouble speed = 0.2;\n> +\n> +\t/* Adapt instantly if we are in startup phase. */\n> +\tif (frameCount_ < kNumStartupFrames)\n> +\t\tspeed = 1.0;\n\nframeCount is only reset in the constructor. If I got it right, this\nclass doesn't know when the camera is stopped/restarted. This might be\non purpose, with the expectation that a camera continues with the last\nstate after a stop()/start(). 
Could we fix the comment to express that\nsomehow ( Adapt instantly if we are in the first startup phase ) or\nsomething. I'm not a native speaker, so maybe you have better ideas :-)\n\nCheers Stefan\n\n> +\n> +\t/*\n> +\t * If we are close to the desired result, go faster to avoid making\n> +\t * multiple micro-adjustments.\n> +\t * \\todo Make this customisable?\n> +\t */\n> +\tif (filteredExposure_ < 1.2 * exposureValue &&\n> +\t    filteredExposure_ > 0.8 * exposureValue)\n> +\t\tspeed = sqrt(speed);\n> +\n> +\tfilteredExposure_ = speed * exposureValue +\n> +\t\t\t    filteredExposure_ * (1.0 - speed);\n> +\n> +\treturn filteredExposure_;\n> +}\n> +\n> +/**\n> + * \\brief Calculate the new exposure value\n> + * \\param[in] constraintModeIndex The index of the current constraint mode\n> + * \\param[in] exposureModeIndex The index of the current exposure mode\n> + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n> + *\t      the calculated gain\n> + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n> + *\t      statistics in use derive\n> + *\n> + * Calculate a new exposure value to try to obtain the target. 
The calculated\n> + * exposure value is filtered to prevent rapid changes from frame to frame, and\n> + * divided into shutter time, analogue and digital gain.\n> + *\n> + * \\return Tuple of shutter time, analogue gain, and digital gain\n> + */\n> +std::tuple<utils::Duration, double, double>\n> +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n> +\t\t\t\t uint32_t exposureModeIndex,\n> +\t\t\t\t const Histogram &yHist,\n> +\t\t\t\t utils::Duration effectiveExposureValue)\n> +{\n> +\t/*\n> +\t * The pipeline handler should validate that we have received an allowed\n> +\t * value for AeExposureMode.\n> +\t */\n> +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n> +\t\texposureModeHelpers_.at(exposureModeIndex);\n> +\n> +\tdouble gain = estimateInitialGain();\n> +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n> +\n> +\t/*\n> +\t * We don't check whether we're already close to the target, because\n> +\t * even if the effective exposure value is the same as the last frame's\n> +\t * we could have switched to an exposure mode that would require a new\n> +\t * pass through the splitExposure() function.\n> +\t */\n> +\n> +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n> +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n> +\t\t\t\t\t   * exposureModeHelper->maxGain();\n> +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n> +\n> +\t/*\n> +\t * We filter the exposure value to make sure changes are not too jarring\n> +\t * from frame to frame.\n> +\t */\n> +\tnewExposureValue = filterExposure(newExposureValue);\n> +\n> +\tframeCount_++;\n> +\treturn exposureModeHelper->splitExposure(newExposureValue);\n> +}\n> +\n> +}; /* namespace ipa */\n> +\n> +}; /* namespace libcamera */\n> diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n> new file mode 100644\n> index 00000000..902a359a\n> --- /dev/null\n> +++ b/src/ipa/libipa/agc.h\n> @@ -0,0 +1,82 @@\n> +/* 
SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * Copyright (C) 2024 Ideas on Board Oy\n> + *\n> + * agc.h - Base class for libipa-compliant AGC algorithms\n> + */\n> +\n> +#pragma once\n> +\n> +#include <tuple>\n> +#include <vector>\n> +\n> +#include <libcamera/controls.h>\n> +\n> +#include \"libcamera/internal/yaml_parser.h\"\n> +\n> +#include \"exposure_mode_helper.h\"\n> +#include \"histogram.h\"\n> +\n> +namespace libcamera {\n> +\n> +namespace ipa {\n> +\n> +class MeanLuminanceAgc\n> +{\n> +public:\n> +\tMeanLuminanceAgc();\n> +\tvirtual ~MeanLuminanceAgc() = default;\n> +\n> +\tstruct AgcConstraint {\n> +\t\tenum class Bound {\n> +\t\t\tLOWER = 0,\n> +\t\t\tUPPER = 1\n> +\t\t};\n> +\t\tBound bound;\n> +\t\tdouble qLo;\n> +\t\tdouble qHi;\n> +\t\tdouble yTarget;\n> +\t};\n> +\n> +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n> +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n> +\tint parseConstraintModes(const YamlObject &tuningData);\n> +\tint parseExposureModes(const YamlObject &tuningData);\n> +\n> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n> +\t{\n> +\t\treturn constraintModes_;\n> +\t}\n> +\n> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n> +\t{\n> +\t\treturn exposureModeHelpers_;\n> +\t}\n> +\n> +\tControlInfoMap::Map controls()\n> +\t{\n> +\t\treturn controls_;\n> +\t}\n> +\n> +\tvirtual double estimateLuminance(const double gain) = 0;\n> +\tdouble estimateInitialGain();\n> +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n> +\t\t\t\t   const Histogram &hist,\n> +\t\t\t\t   double gain);\n> +\tutils::Duration filterExposure(utils::Duration exposureValue);\n> +\tstd::tuple<utils::Duration, double, double>\n> +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n> +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n> +private:\n> +\tuint64_t frameCount_;\n> +\tutils::Duration 
filteredExposure_;\n> +\tdouble relativeLuminanceTarget_;\n> +\n> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n> +\tControlInfoMap::Map controls_;\n> +};\n> +\n> +}; /* namespace ipa */\n> +\n> +}; /* namespace libcamera */\n> diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n> index 37fbd177..31cc8d70 100644\n> --- a/src/ipa/libipa/meson.build\n> +++ b/src/ipa/libipa/meson.build\n> @@ -1,6 +1,7 @@\n>  # SPDX-License-Identifier: CC0-1.0\n>  \n>  libipa_headers = files([\n> +    'agc.h',\n>      'algorithm.h',\n>      'camera_sensor_helper.h',\n>      'exposure_mode_helper.h',\n> @@ -10,6 +11,7 @@ libipa_headers = files([\n>  ])\n>  \n>  libipa_sources = files([\n> +    'agc.cpp',\n>      'algorithm.cpp',\n>      'camera_sensor_helper.cpp',\n>      'exposure_mode_helper.cpp',\n> -- \n> 2.34.1\n>","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 964AEC0DA4\n\tfor <parsemail@patchwork.libcamera.org>;\n\tWed, 27 Mar 2024 15:25:50 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id C7A716308D;\n\tWed, 27 Mar 2024 16:25:49 +0100 (CET)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 5DB3F6308D\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 27 Mar 2024 16:25:48 +0100 (CET)","from ideasonboard.com (unknown\n\t[IPv6:2a00:6020:448c:6c00:da09:7e54:ae7f:d731])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id D4EC71571;\n\tWed, 27 Mar 2024 16:25:15 +0100 
(CET)"],"Sender":"\"libcamera-devel\" 
<libcamera-devel-bounces@lists.libcamera.org>"}},{"id":29165,"web_url":"https://patchwork.libcamera.org/comment/29165/","msgid":"<20240405230823.GK12507@pendragon.ideasonboard.com>","date":"2024-04-05T23:08:23","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"On Wed, Mar 27, 2024 at 04:25:45PM +0100, Stefan Klug wrote:\n> Hi Daniel,\n> \n> thanks for the patch. Jacopo already did the whole review, so only a\n> small comment...\n> \n> On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> > The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> > very large part; following the Rpi IPA's algorithm in spirit with a\n> > few tunable values in that IPA being hardcoded in the libipa ones.\n> > Add a new base class for MeanLuminanceAgc which implements the same\n> > algorithm and additionally parses yaml tuning files to inform an IPA\n> > module's Agc algorithm about valid constraint and exposure modes and\n> > their associated bounds.\n> > \n> > Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> > ---\n> >  src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n> >  src/ipa/libipa/agc.h       |  82 ++++++\n> >  src/ipa/libipa/meson.build |   2 +\n> >  3 files changed, 610 insertions(+)\n> >  create mode 100644 src/ipa/libipa/agc.cpp\n> >  create mode 100644 src/ipa/libipa/agc.h\n> > \n> > diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> > new file mode 100644\n> > index 00000000..af57a571\n> > --- /dev/null\n> > +++ b/src/ipa/libipa/agc.cpp\n> > @@ -0,0 +1,526 @@\n> > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > +/*\n> > + * Copyright (C) 2024 Ideas on Board Oy\n> > + *\n> > + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> > + */\n> > +\n> > +#include \"agc.h\"\n> > +\n> > +#include <cmath>\n> > 
+\n> > +#include <libcamera/base/log.h>\n> > +#include <libcamera/control_ids.h>\n> > +\n> > +#include \"exposure_mode_helper.h\"\n> > +\n> > +using namespace libcamera::controls;\n> > +\n> > +/**\n> > + * \\file agc.h\n> > + * \\brief Base class implementing mean luminance AEGC.\n> > + */\n> > +\n> > +namespace libcamera {\n> > +\n> > +using namespace std::literals::chrono_literals;\n> > +\n> > +LOG_DEFINE_CATEGORY(Agc)\n> > +\n> > +namespace ipa {\n> > +\n> > +/*\n> > + * Number of frames to wait before calculating stats on minimum exposure\n> > + * \\todo should this be a tunable value?\n> > + */\n> > +static constexpr uint32_t kNumStartupFrames = 10;\n> > +\n> > +/*\n> > + * Default relative luminance target\n> > + *\n> > + * This value should be chosen so that when the camera points at a grey target,\n> > + * the resulting image brightness looks \"right\". Custom values can be passed\n> > + * as the relativeLuminanceTarget value in sensor tuning files.\n> > + */\n> > +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> > +\n> > +/**\n> > + * \\struct MeanLuminanceAgc::AgcConstraint\n> > + * \\brief The boundaries and target for an AeConstraintMode constraint\n> > + *\n> > + * This structure describes an AeConstraintMode constraint for the purposes of\n> > + * this algorithm. 
The algorithm will apply the constraints by calculating the\n> > + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> > + * the resulting value is the right side of the given target (as defined by the\n> > + * boundary and luminance target).\n> > + */\n> > +\n> > +/**\n> > + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> > + * \\brief Specify whether the constraint defines a lower or upper bound\n> > + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> > + * \\brief The constraint defines a lower bound\n> > + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n> > + * \\brief The constraint defines an upper bound\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> > + * \\brief The type of constraint bound\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n> > + * \\brief The lower quantile to use for the constraint\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n> > + * \\brief The upper quantile to use for the constraint\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n> > + * \\brief The luminance target for the constraint\n> > + */\n> > +\n> > +/**\n> > + * \\class MeanLuminanceAgc\n> > + * \\brief a mean-based auto-exposure algorithm\n> > + *\n> > + * This algorithm calculates a shutter time, analogue and digital gain such that\n> > + * the normalised mean luminance value of an image is driven towards a target,\n> > + * which itself is discovered from tuning data. 
The algorithm is a two-stage\n> > + * process:\n> > + *\n> > + * In the first stage, an initial gain value is derived by iteratively comparing\n> > + * the gain-adjusted mean luminance across an entire image against a target, and\n> > + * selecting a value which pushes it as closely as possible towards the target.\n> > + *\n> > + * In the second stage we calculate the gain required to drive the average of a\n> > + * section of a histogram to a target value, where the target and the boundaries\n> > + * of the section of the histogram used in the calculation are taken from the\n> > + * values defined for the currently configured AeConstraintMode within the\n> > + * tuning data. The gain from the first stage is then clamped to the gain from\n> > + * this stage.\n> > + *\n> > + * The final gain is used to adjust the effective exposure value of the image,\n> > + * and that new exposure value divided into shutter time, analogue gain and\n> > + * digital gain according to the selected AeExposureMode.\n> > + */\n> > +\n> > +MeanLuminanceAgc::MeanLuminanceAgc()\n> > +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> > +{\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse the relative luminance target from the tuning data\n> > + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> > + */\n> > +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> > +{\n> > +\trelativeLuminanceTarget_ =\n> > +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse an AeConstraintMode constraint from tuning data\n> > + * \\param[in] modeDict the YamlObject holding the constraint data\n> > + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> > + */\n> > +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> > +{\n> > +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n> > +\t\tif 
(boundName != \"upper\" && boundName != \"lower\") {\n> > +\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\n> > +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n> > +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> > +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> > +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> > +\t\tdouble yTarget =\n> > +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n> > +\n> > +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> > +\n> > +\t\tif (!constraintModes_.count(id))\n> > +\t\t\tconstraintModes_[id] = {};\n> > +\n> > +\t\tif (idx)\n> > +\t\t\tconstraintModes_[id].push_back(constraint);\n> > +\t\telse\n> > +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> > +\t}\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse tuning data file to populate AeConstraintMode control\n> > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > + *\n> > + * The Agc algorithm's tuning data should contain a dictionary called\n> > + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> > + * a value from \\ref controls::AeConstraintModeNameValueMap. 
Each mode dict may\n> > + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n> > + *\n> > + * \\code{.unparsed}\n> > + * algorithms:\n> > + *   - Agc:\n> > + *       AeConstraintMode:\n> > + *         ConstraintNormal:\n> > + *           lower:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.5\n> > + *         ConstraintHighlight:\n> > + *           lower:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.5\n> > + *           upper:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.8\n> > + *\n> > + * \\endcode\n> > + *\n> > + * The parsed dictionaries are used to populate an array of available values for\n> > + * the AeConstraintMode control and stored for later use in the algorithm.\n> > + *\n> > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > + * \\return 0 on success\n> > + */\n> > +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> > +{\n> > +\tstd::vector<ControlValue> availableConstraintModes;\n> > +\n> > +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> > +\tif (yamlConstraintModes.isDictionary()) {\n> > +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> > +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> > +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> > +\t\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> > +\t\t\t\tcontinue;\n> > +\t\t\t}\n> > +\n> > +\t\t\tif (!modeDict.isDictionary()) {\n> > +\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn -EINVAL;\n> > +\t\t\t}\n> > +\n> > +\t\t\tparseConstraint(modeDict,\n> > +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > +\t\t\tavailableConstraintModes.push_back(\n> > 
+\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > +\t\t}\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If the tuning data file contains no constraints then we use the\n> > +\t * default constraint that the various Agc algorithms were adhering to\n> > +\t * anyway before centralisation.\n> > +\t */\n> > +\tif (constraintModes_.empty()) {\n> > +\t\tAgcConstraint constraint = {\n> > +\t\t\tAgcConstraint::Bound::LOWER,\n> > +\t\t\t0.98,\n> > +\t\t\t1.0,\n> > +\t\t\t0.5\n> > +\t\t};\n> > +\n> > +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> > +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> > +\t\t\tconstraint);\n> > +\t\tavailableConstraintModes.push_back(\n> > +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> > +\t}\n> > +\n> > +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse tuning data file to populate AeExposureMode control\n> > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > + *\n> > + * The Agc algorithm's tuning data should contain a dictionary called\n> > + * AeExposureMode containing per-mode setting dictionaries with the key being\n> > + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n> > + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> > + * values with the key \"gain\", in this format:\n> > + *\n> > + * \\code{.unparsed}\n> > + * algorithms:\n> > + *   - Agc:\n> > + *       AeExposureMode:\n> > + *         ExposureNormal:\n> > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > + *         ExposureShort:\n> > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > + *\n> > + * \\endcode\n> > + *\n> > + * The parsed dictionaries are used to populate an array of available values for\n> > + * the AeExposureMode control and to create ExposureModeHelpers\n> > + *\n> > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > + * \\return 0 on success\n> > + */\n> > +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> > +{\n> > +\tstd::vector<ControlValue> availableExposureModes;\n> > +\tint ret;\n> > +\n> > +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> > +\tif (yamlExposureModes.isDictionary()) {\n> > +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> > +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> > +\t\t\t    AeExposureModeNameValueMap.end()) {\n> > +\t\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\tcontinue;\n> > +\t\t\t}\n> > +\n> > +\t\t\tif (!modeValues.isDictionary()) {\n> > +\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn -EINVAL;\n> > +\t\t\t}\n> > +\n> > +\t\t\tstd::vector<uint32_t> shutters =\n> > +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n> > +\t\t\tstd::vector<double> gains =\n> > 
+\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n> > +\n> > +\t\t\tstd::vector<utils::Duration> shutterDurations;\n> > +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n> > +\t\t\t\tstd::back_inserter(shutterDurations),\n> > +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n> > +\n> > +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n> > +\t\t\tif ((ret = helper->init(shutterDurations, gains)) < 0) {\n> > +\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn ret;\n> > +\t\t\t}\n> > +\n> > +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n> > +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n> > +\t\t}\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If we don't have any exposure modes in the tuning data we create an\n> > +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n> > +\t * will then internally simply drive the shutter as high as possible\n> > +\t * before touching gain\n> > +\t */\n> > +\tif (availableExposureModes.empty()) {\n> > +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n> > +\t\tstd::vector<utils::Duration> shutterDurations = {};\n> > +\t\tstd::vector<double> gains = {};\n> > +\n> > +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > +\t\t\tstd::make_shared<ExposureModeHelper>();\n> > +\t\tif ((ret = helper->init(shutterDurations, gains)) < 0) {\n> > +\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n> > +\t\t\treturn ret;\n> > +\t\t}\n> > +\n> > +\t\texposureModeHelpers_[exposureModeId] = helper;\n> > +\t\tavailableExposureModes.push_back(exposureModeId);\n> > +\t}\n> > +\n> > +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +/**\n> > + * 
\\fn MeanLuminanceAgc::constraintModes()\n> > + * \\brief Get the constraint modes that have been parsed from tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n> > + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::controls()\n> > + * \\brief Get the controls that have been generated after parsing tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n> > + * \\brief Estimate the luminance of an image, adjusted by a given gain\n> > + * \\param[in] gain The gain with which to adjust the luminance estimate\n> > + *\n> > + * This function is a pure virtual function because estimation of luminance is a\n> > + * hardware-specific operation, which depends wholly on the format of the stats\n> > + * that are delivered to libcamera from the ISP. Derived classes must implement\n> > + * an overriding function that calculates the normalised mean luminance value\n> > + * across the entire image.\n> > + *\n> > + * \\return The normalised relative luminance of the image\n> > + */\n> > +\n> > +/**\n> > + * \\brief Estimate the initial gain needed to achieve a relative luminance\n> > + *        target\n> > + *\n> > + * To account for non-linearity caused by saturation, the value needs to be\n> > + * estimated in an iterative process, as multiplying by a gain will not increase\n> > + * the relative luminance by the same factor if some image regions are saturated\n> > + *\n> > + * \\return The calculated initial gain\n> > + */\n> > +double MeanLuminanceAgc::estimateInitialGain()\n> > +{\n> > +\tdouble yTarget = relativeLuminanceTarget_;\n> > +\tdouble yGain = 1.0;\n> > +\n> > +\tfor (unsigned int i = 0; i < 8; i++) {\n> > +\t\tdouble yValue = estimateLuminance(yGain);\n> > +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n> > +\n> > +\t\tyGain *= extra_gain;\n> > +\t\tLOG(Agc, Debug) 
<< \"Y value: \" << yValue\n> > +\t\t\t\t<< \", Y target: \" << yTarget\n> > +\t\t\t\t<< \", gives gain \" << yGain;\n> > +\n> > +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n> > +\t\t\tbreak;\n> > +\t}\n> > +\n> > +\treturn yGain;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Clamp gain within the bounds of a defined constraint\n> > + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n> > + * \\param[in] hist A histogram over which to calculate inter-quantile means\n> > + * \\param[in] gain The gain to clamp\n> > + *\n> > + * \\return The gain clamped within the constraint bounds\n> > + */\n> > +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n> > +\t\t\t\t\t     const Histogram &hist,\n> > +\t\t\t\t\t     double gain)\n> > +{\n> > +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n> > +\tfor (const AgcConstraint &constraint : constraints) {\n> > +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n> > +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n> > +\n> > +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n> > +\t\t    newGain > gain)\n> > +\t\t\tgain = newGain;\n> > +\n> > +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n> > +\t\t    newGain < gain)\n> > +\t\t\tgain = newGain;\n> > +\t}\n> > +\n> > +\treturn gain;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Apply a filter on the exposure value to limit the speed of changes\n> > + * \\param[in] exposureValue The target exposure from the AGC algorithm\n> > + *\n> > + * The speed of the filter is adaptive, and will produce the target quicker\n> > + * during startup, or when the target exposure is within 20% of the most recent\n> > + * filter output.\n> > + *\n> > + * \\return The filtered exposure\n> > + */\n> > +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n> > +{\n> > +\tdouble speed = 0.2;\n> > +\n> > +\t/* Adapt instantly if we are in startup 
phase. */\n> > +\tif (frameCount_ < kNumStartupFrames)\n> > +\t\tspeed = 1.0;\n> \n> frameCount is only reset in the constructor. If I got it right, this\n> class doesn't know when the camera is stopped/restarted. This might be\n> on purpose, with the expectation that a camera continues with the last\n> state after a stop()/start(). Could we fix the comment to express that\n> somehow ( Adapt instantly if we are in the first startup phase ) or\n> something. I'm no native speaker, so maybe you have better ideas :-)\n\nI think it would be better to reset the counter when the camera is\nstopped and restarted. Otherwise we'll have inconsistent behaviour and\ndifficult to debug problems. We can later improve the restart case if\nneeded, possibly taking into account how long the camera has been\nstopped for.\n\n> > +\n> > +\t/*\n> > +\t * If we are close to the desired result, go faster to avoid making\n> > +\t * multiple micro-adjustments.\n> > +\t * \\todo Make this customisable?\n> > +\t */\n> > +\tif (filteredExposure_ < 1.2 * exposureValue &&\n> > +\t    filteredExposure_ > 0.8 * exposureValue)\n> > +\t\tspeed = sqrt(speed);\n> > +\n> > +\tfilteredExposure_ = speed * exposureValue +\n> > +\t\t\t    filteredExposure_ * (1.0 - speed);\n> > +\n> > +\treturn filteredExposure_;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Calculate the new exposure value\n> > + * \\param[in] constraintModeIndex The index of the current constraint mode\n> > + * \\param[in] exposureModeIndex The index of the current exposure mode\n> > + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n> > + *\t      the calculated gain\n> > + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n> > + *\t      statistics in use derive\n> > + *\n> > + * Calculate a new exposure value to try to obtain the target. 
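The adaptive filter quoted above can be modelled with a short, self-contained sketch. This is a simplified illustration, not libcamera code: plain doubles (think microseconds) replace utils::Duration, and the frame counter is bumped inside the filter itself rather than in calculateNewEv() as the patch does.

```cpp
#include <cassert>
#include <cmath>

/* Simplified model of the adaptive exposure filter's behaviour. */
struct ExposureFilter {
	static constexpr unsigned int kNumStartupFrames = 10;

	double filteredExposure = 0.0;
	unsigned int frameCount = 0;

	double apply(double exposureValue)
	{
		double speed = 0.2;

		/* Adapt instantly during the startup phase. */
		if (frameCount < kNumStartupFrames)
			speed = 1.0;

		/* Within 20% of the target: speed up to avoid micro-adjustments. */
		if (filteredExposure < 1.2 * exposureValue &&
		    filteredExposure > 0.8 * exposureValue)
			speed = std::sqrt(speed);

		/* First-order low-pass filter towards the requested exposure. */
		filteredExposure = speed * exposureValue +
				   filteredExposure * (1.0 - speed);
		frameCount++;
		return filteredExposure;
	}
};
```

During startup the filter tracks the target exactly (speed 1.0); afterwards a large step towards a new target only moves 20% of the way per frame, which is what keeps frame-to-frame changes from being jarring.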
The calculated\n> > + * exposure value is filtered to prevent rapid changes from frame to frame, and\n> > + * divided into shutter time, analogue and digital gain.\n> > + *\n> > + * \\return Tuple of shutter time, analogue gain, and digital gain\n> > + */\n> > +std::tuple<utils::Duration, double, double>\n> > +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n> > +\t\t\t\t uint32_t exposureModeIndex,\n> > +\t\t\t\t const Histogram &yHist,\n> > +\t\t\t\t utils::Duration effectiveExposureValue)\n> > +{\n> > +\t/*\n> > +\t * The pipeline handler should validate that we have received an allowed\n> > +\t * value for AeExposureMode.\n> > +\t */\n> > +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n> > +\t\texposureModeHelpers_.at(exposureModeIndex);\n> > +\n> > +\tdouble gain = estimateInitialGain();\n> > +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n> > +\n> > +\t/*\n> > +\t * We don't check whether we're already close to the target, because\n> > +\t * even if the effective exposure value is the same as the last frame's\n> > +\t * we could have switched to an exposure mode that would require a new\n> > +\t * pass through the splitExposure() function.\n> > +\t */\n> > +\n> > +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n> > +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n> > +\t\t\t\t\t   * exposureModeHelper->maxGain();\n> > +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n> > +\n> > +\t/*\n> > +\t * We filter the exposure value to make sure changes are not too jarring\n> > +\t * from frame to frame.\n> > +\t */\n> > +\tnewExposureValue = filterExposure(newExposureValue);\n> > +\n> > +\tframeCount_++;\n> > +\treturn exposureModeHelper->splitExposure(newExposureValue);\n> > +}\n> > +\n> > +}; /* namespace ipa */\n> > +\n> > +}; /* namespace libcamera */\n> > diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n> > new file mode 100644\n> > index 
00000000..902a359a\n> > --- /dev/null\n> > +++ b/src/ipa/libipa/agc.h\n> > @@ -0,0 +1,82 @@\n> > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > +/*\n> > + * Copyright (C) 2024 Ideas on Board Oy\n> > + *\n> > + agc.h - Base class for libipa-compliant AGC algorithms\n> > + */\n> > +\n> > +#pragma once\n> > +\n> > +#include <tuple>\n> > +#include <vector>\n> > +\n> > +#include <libcamera/controls.h>\n> > +\n> > +#include \"libcamera/internal/yaml_parser.h\"\n> > +\n> > +#include \"exposure_mode_helper.h\"\n> > +#include \"histogram.h\"\n> > +\n> > +namespace libcamera {\n> > +\n> > +namespace ipa {\n> > +\n> > +class MeanLuminanceAgc\n> > +{\n> > +public:\n> > +\tMeanLuminanceAgc();\n> > +\tvirtual ~MeanLuminanceAgc() = default;\n> > +\n> > +\tstruct AgcConstraint {\n> > +\t\tenum class Bound {\n> > +\t\t\tLOWER = 0,\n> > +\t\t\tUPPER = 1\n> > +\t\t};\n> > +\t\tBound bound;\n> > +\t\tdouble qLo;\n> > +\t\tdouble qHi;\n> > +\t\tdouble yTarget;\n> > +\t};\n> > +\n> > +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n> > +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n> > +\tint parseConstraintModes(const YamlObject &tuningData);\n> > +\tint parseExposureModes(const YamlObject &tuningData);\n> > +\n> > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n> > +\t{\n> > +\t\treturn constraintModes_;\n> > +\t}\n> > +\n> > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n> > +\t{\n> > +\t\treturn exposureModeHelpers_;\n> > +\t}\n> > +\n> > +\tControlInfoMap::Map controls()\n> > +\t{\n> > +\t\treturn controls_;\n> > +\t}\n> > +\n> > +\tvirtual double estimateLuminance(const double gain) = 0;\n> > +\tdouble estimateInitialGain();\n> > +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n> > +\t\t\t\t   const Histogram &hist,\n> > +\t\t\t\t   double gain);\n> > +\tutils::Duration filterExposure(utils::Duration exposureValue);\n> > +\tstd::tuple<utils::Duration, double, double>\n> 
> +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n> > +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n> > +private:\n> > +\tuint64_t frameCount_;\n> > +\tutils::Duration filteredExposure_;\n> > +\tdouble relativeLuminanceTarget_;\n> > +\n> > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n> > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n> > +\tControlInfoMap::Map controls_;\n> > +};\n> > +\n> > +}; /* namespace ipa */\n> > +\n> > +}; /* namespace libcamera */\n> > diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n> > index 37fbd177..31cc8d70 100644\n> > --- a/src/ipa/libipa/meson.build\n> > +++ b/src/ipa/libipa/meson.build\n> > @@ -1,6 +1,7 @@\n> >  # SPDX-License-Identifier: CC0-1.0\n> >  \n> >  libipa_headers = files([\n> > +    'agc.h',\n> >      'algorithm.h',\n> >      'camera_sensor_helper.h',\n> >      'exposure_mode_helper.h',\n> > @@ -10,6 +11,7 @@ libipa_headers = files([\n> >  ])\n> >  \n> >  libipa_sources = files([\n> > +    'agc.cpp',\n> >      'algorithm.cpp',\n> >      'camera_sensor_helper.cpp',\n> >      'exposure_mode_helper.cpp',","headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 5BCEABD16B\n\tfor <parsemail@patchwork.libcamera.org>;\n\tFri,  5 Apr 2024 23:08:39 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 582936333E;\n\tSat,  6 Apr 2024 01:08:38 +0200 (CEST)","from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id CC5E261C15\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat,  6 Apr 
2024 01:08:34 +0200 (CEST)","from pendragon.ideasonboard.com (81-175-209-231.bb.dnainternet.fi\n\t[81.175.209.231])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 7CC5C1A2;\n\tSat,  6 Apr 2024 01:07:55 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"kU4BPcn2\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1712358475;\n\tbh=+EYNr+8sGrFvJhjIpjqtfYL3W69LWjStiCbJ51fJKYg=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=kU4BPcn2rkvMz4zcv2kEGXcsa7Go80jeRsBUlxRMSLImc77YAVqsUuE5/4pMncHHr\n\tC8eS1WjvSG1e8rK0zwXKPYjCgn7P5/9NHL0F3ABixshHKv9UuPTpYIiSqCIilqlRbx\n\tmxT1Cdjxk6wPxaaIXI2S8Q+8MnmEvRju2YCD5IpY=","Date":"Sat, 6 Apr 2024 02:08:23 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Stefan Klug <stefan.klug@ideasonboard.com>","Cc":"Daniel Scally <dan.scally@ideasonboard.com>,\n\tlibcamera-devel@lists.libcamera.org","Subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","Message-ID":"<20240405230823.GK12507@pendragon.ideasonboard.com>","References":"<20240322131451.3092931-1-dan.scally@ideasonboard.com>\n\t<20240322131451.3092931-5-dan.scally@ideasonboard.com>\n\t<20240327152545.wy5gj3wqecejrtfu@jasper>","MIME-Version":"1.0","Content-Type":"text/plain; 
charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20240327152545.wy5gj3wqecejrtfu@jasper>","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":29167,"web_url":"https://patchwork.libcamera.org/comment/29167/","msgid":"<20240406010752.GL12507@pendragon.ideasonboard.com>","date":"2024-04-06T01:07:52","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hello,\n\nOn Mon, Mar 25, 2024 at 09:16:00PM +0100, Jacopo Mondi wrote:\n> On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> > The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> > very large part; following the Rpi IPA's algorithm in spirit with a\n> > few tunable values in that IPA being hardcoded in the libipa ones.\n> > Add a new base class for MeanLuminanceAgc which implements the same\n> \n> nit: I would rather call this one AgcMeanLuminance.\n\nAnd rename the files accordingly ?\n\n> One other note, not sure if applicable here, from the experience of\n> trying to upstream an auto-focus algorithm for RkISP1. 
We had there a\n> base class to define the algorithm interface, one derived class for\n> the common calculation and one platform-specific part for statistics\n> collection and IPA module interfacing.\n> \n> The base class was there mostly to handle the algorithm state machine\n> by handling the controls that influence the algorithm behaviour. In\n> case of AEGC I can think, in example, about handling the switch\n> between enable/disable of auto-mode (and consequentially handling a\n> manually set ExposureTime and AnalogueGain), switching between\n> different ExposureModes etc\n> \n> This is the last attempt for AF\n> https://patchwork.libcamera.org/patch/18510/\n> \n> and I'm wondering if it would be desirable to abstract away from the\n> MeanLuminance part the part that is tightly coupled with the\n> libcamera's controls definition.\n> \n> Let's make a concrete example: look how the rkisp1 and the ipu3 agc\n> implementations handle AeEnabled (or better, look at how the rkisp1\n> does that and the IPU3 does not).\n> \n> Ideally, my goal would be to abstract the handling of the control and\n> all the state machine that decides if the manual or auto-computed\n> values should be used to an AEGCAlgorithm base class.\n> \n> Your series does that for the tuning file parsing already but does\n> that in the MeanLuminance method implementation, while it should or\n> could be common to all AEGC methods.\n> \n> One very interesting experiment could be starting with this and then\n> plumb-in AeEnable support in the IPU3 in example and move everything\n> common to a base class.\n> \n> > algorithm and additionally parses yaml tuning files to inform an IPA\n> > module's Agc algorithm about valid constraint and exposure modes and\n> > their associated bounds.\n> >\n> > Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> > ---\n> >  src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n> >  src/ipa/libipa/agc.h       |  82 ++++++\n> >  
src/ipa/libipa/meson.build |   2 +\n> >  3 files changed, 610 insertions(+)\n> >  create mode 100644 src/ipa/libipa/agc.cpp\n> >  create mode 100644 src/ipa/libipa/agc.h\n> >\n> > diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> > new file mode 100644\n> > index 00000000..af57a571\n> > --- /dev/null\n> > +++ b/src/ipa/libipa/agc.cpp\n> > @@ -0,0 +1,526 @@\n> > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > +/*\n> > + * Copyright (C) 2024 Ideas on Board Oy\n> > + *\n> > + * agc.cpp - Base class for libipa-compliant AGC algorithms\n\nWe could have different AGC algorithms that are all libipa-compliant.\nThis is one particular AGC algorithm, right ?\n\n> > + */\n> > +\n> > +#include \"agc.h\"\n> > +\n> > +#include <cmath>\n> > +\n> > +#include <libcamera/base/log.h>\n> > +#include <libcamera/control_ids.h>\n> > +\n> > +#include \"exposure_mode_helper.h\"\n> > +\n> > +using namespace libcamera::controls;\n> > +\n> > +/**\n> > + * \\file agc.h\n> > + * \\brief Base class implementing mean luminance AEGC.\n> \n> nit: not '.' at the end of briefs\n> \n> > + */\n> > +\n> > +namespace libcamera {\n> > +\n> > +using namespace std::literals::chrono_literals;\n> > +\n> > +LOG_DEFINE_CATEGORY(Agc)\n> > +\n> > +namespace ipa {\n> > +\n> > +/*\n> > + * Number of frames to wait before calculating stats on minimum exposure\n\nThis doesn't seem to match what the value is used for.\n\n> > + * \\todo should this be a tunable value?\n> \n> Does this depend on the ISP (comes from IPA), the sensor (comes from\n> tuning file) or... both ? :)\n> \n> > + */\n> > +static constexpr uint32_t kNumStartupFrames = 10;\n> > +\n> > +/*\n> > + * Default relative luminance target\n> > + *\n> > + * This value should be chosen so that when the camera points at a grey target,\n> > + * the resulting image brightness looks \"right\". 
Custom values can be passed\n> > + * as the relativeLuminanceTarget value in sensor tuning files.\n> > + */\n> > +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> > +\n> > +/**\n> > + * \\struct MeanLuminanceAgc::AgcConstraint\n> > + * \\brief The boundaries and target for an AeConstraintMode constraint\n> > + *\n> > + * This structure describes an AeConstraintMode constraint for the purposes of\n> > + * this algorithm. The algorithm will apply the constraints by calculating the\n> > + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> > + * the resulting value is the right side of the given target (as defined by the\n> > + * boundary and luminance target).\n> > + */\n> \n> Here, in example.\n> \n> controls::AeConstraintMode and the supported values are defined as\n> (core|vendor) controls in control_ids_*.yaml\n> \n> The tuning file expresses the constraint modes using the Control\n> definition (I wonder if this has always been like this) but it\n> definitely ties the tuning file to the controls definition.\n> \n> Applications use controls::AeConstraintMode to select one of the\n> constrained modes to have the algorithm use it.\n> \n> In all of this, how much is part of the MeanLuminance implementation\n> and how much shared between possibly multiple implementations ?\n> \n> > +\n> > +/**\n> > + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> > + * \\brief Specify whether the constraint defines a lower or upper bound\n> > + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> > + * \\brief The constraint defines a lower bound\n> > + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n> > + * \\brief The constraint defines an upper bound\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> > + * \\brief The type of constraint bound\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n> > + * \\brief The lower quantile to use for the constraint\n> > + */\n> > +\n> > 
+/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n> > + * \\brief The upper quantile to use for the constraint\n> > + */\n> > +\n> > +/**\n> > + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n> > + * \\brief The luminance target for the constraint\n> > + */\n> > +\n> > +/**\n> > + * \\class MeanLuminanceAgc\n> > + * \\brief a mean-based auto-exposure algorithm\n\ns/a/A/\n\n> > + *\n> > + * This algorithm calculates a shutter time, analogue and digital gain such that\n> > + * the normalised mean luminance value of an image is driven towards a target,\n> > + * which itself is discovered from tuning data. The algorithm is a two-stage\n> > + * process:\n\ns/:/./\n\nor make the next two paragraph bullet list entries.\n\n> > + *\n> > + * In the first stage, an initial gain value is derived by iteratively comparing\n> > + * the gain-adjusted mean luminance across an entire image against a target, and\n> > + * selecting a value which pushes it as closely as possible towards the target.\n> > + *\n> > + * In the second stage we calculate the gain required to drive the average of a\n> > + * section of a histogram to a target value, where the target and the boundaries\n> > + * of the section of the histogram used in the calculation are taken from the\n> > + * values defined for the currently configured AeConstraintMode within the\n> > + * tuning data. The gain from the first stage is then clamped to the gain from\n> > + * this stage.\n> > + *\n> > + * The final gain is used to adjust the effective exposure value of the image,\n> > + * and that new exposure value divided into shutter time, analogue gain and\n\ns/divided/is divided/\n\n> > + * digital gain according to the selected AeExposureMode.\n\nThere should be a mention here that this class handles tuning file\nparsing and expects a particular tuning data format. 
You can reference\nthe function that parses the tuning data for detailed documentation\nabout the format.\n\n> > + */\n> > +\n> > +MeanLuminanceAgc::MeanLuminanceAgc()\n> > +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> > +{\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse the relative luminance target from the tuning data\n> > + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> > + */\n> > +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> > +{\n> > +\trelativeLuminanceTarget_ =\n> > +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n> \n> How do you expect this to be computed in the tuning file ?\n> \n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse an AeConstraintMode constraint from tuning data\n> > + * \\param[in] modeDict the YamlObject holding the constraint data\n> > + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> > + */\n> > +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> > +{\n> > +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n> > +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n> > +\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\n> > +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n> > +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> > +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> > +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> > +\t\tdouble yTarget =\n> > +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n> > +\n> > +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> > +\n> > +\t\tif (!constraintModes_.count(id))\n> > +\t\t\tconstraintModes_[id] = {};\n> > +\n> > +\t\tif (idx)\n> > 
+\t\t\tconstraintModes_[id].push_back(constraint);\n> > +\t\telse\n> > +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> > +\t}\n> > +}\n> > +\n> > +/**\n> > + * \brief Parse tuning data file to populate AeConstraintMode control\n> > + * \param[in] tuningData the YamlObject representing the tuning data for Agc\n> > + *\n> > + * The Agc algorithm's tuning data should contain a dictionary called\n> > + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> > + * a value from \ref controls::AeConstraintModeNameValueMap. Each mode dict may\n> > + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n> > + *\n> > + * \code{.unparsed}\n> > + * algorithms:\n> > + *   - Agc:\n> > + *       AeConstraintMode:\n> > + *         ConstraintNormal:\n> > + *           lower:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.5\n> \n> Ok, so this ties the tuning file not just to the libcamera controls\n> definition, but to this specific implementation of the algorithm ?\n\nTying the algorithm implementation with the tuning data format seems\nunavoidable. I'm slightly concerned about handling the YAML parsing here\nthough, as I'm wondering if we can always guarantee that the file will\nnot need to diverge somehow (possibly with additional data somewhere\nthat would make the shared parser fail), but that's probably such a\nsmall risk that we can worry about it later.\n\n> Not that it was not expected, and I think it's fine, as using a\n> libcamera defined control value as 'index' makes sure the applications\n> will deal with the same interface, but this largely conflicts with the\n> idea to have shared parsing for all algorithms in a base class.\n\nThe idea comes straight from the RPi implementation, which expects users\nto customize tuning files with new modes. 
Indexing into a non-standard\nlist of modes isn't an API I really like, and that's something I think\nwe need to fix eventually.\n\n> Also, we made AeConstraintMode a 'core control' because at the time\n> when RPi first implemented AGC support there was no alternative to\n> that. Then the RPi implementation has been copied in all other\n> platforms and this is still fine as a 'core control'. This however\n> seems to be a tuning parameter for this specific algorithm\n> implementation, isn't it ?\n\nI should read your whole reply before writing anything :-)\n\n> > + *         ConstraintHighlight:\n> > + *           lower:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.5\n> > + *           upper:\n> > + *             qLo: 0.98\n> > + *             qHi: 1.0\n> > + *             yTarget: 0.8\n> > + *\n> > + * \endcode\n> > + *\n> > + * The parsed dictionaries are used to populate an array of available values for\n> > + * the AeConstraintMode control and stored for later use in the algorithm.\n> > + *\n> > + * \return -EINVAL Where a defined constraint mode is invalid\n> > + * \return 0 on success\n> > + */\n> > +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> > +{\n> > +\tstd::vector<ControlValue> availableConstraintModes;\n> > +\n> > +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> > +\tif (yamlConstraintModes.isDictionary()) {\n> > +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> > +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> > +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> > +\t\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> > +\t\t\t\tcontinue;\n> > +\t\t\t}\n> > +\n> > +\t\t\tif (!modeDict.isDictionary()) {\n> > +\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn 
-EINVAL;\n> > +\t\t\t}\n> > +\n> > +\t\t\tparseConstraint(modeDict,\n> > +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > +\t\t\tavailableConstraintModes.push_back(\n> > +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > +\t\t}\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If the tuning data file contains no constraints then we use the\n> > +\t * default constraint that the various Agc algorithms were adhering to\n> > +\t * anyway before centralisation.\n> > +\t */\n> > +\tif (constraintModes_.empty()) {\n> > +\t\tAgcConstraint constraint = {\n> > +\t\t\tAgcConstraint::Bound::LOWER,\n> > +\t\t\t0.98,\n> > +\t\t\t1.0,\n> > +\t\t\t0.5\n> > +\t\t};\n> > +\n> > +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> > +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> > +\t\t\tconstraint);\n> > +\t\tavailableConstraintModes.push_back(\n> > +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> > +\t}\n> > +\n> > +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Parse tuning data file to populate AeExposureMode control\n> > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > + *\n> > + * The Agc algorithm's tuning data should contain a dictionary called\n> > + * AeExposureMode containing per-mode setting dictionaries with the key being\n> > + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n> > + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> > + * values with the key \"gain\", in this format:\n> > + *\n> > + * \\code{.unparsed}\n> > + * algorithms:\n> > + *   - Agc:\n> > + *       AeExposureMode:\n> \n> Same reasoning as per the constraints, really.\n> \n> There's nothing bad here, if not me realizing our controls definition\n> is intimately tied with the algorithms implementation, so I wonder\n> again if even handling things like AeEnable in a common form makes\n> any sense..\n> \n> Not going to review the actual implementation now, as it comes from\n> the existing ones...\n> \n> \n> > + *         ExposureNormal:\n> > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > + *         ExposureShort:\n> > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > + *\n> > + * \\endcode\n> > + *\n> > + * The parsed dictionaries are used to populate an array of available values for\n> > + * the AeExposureMode control and to create ExposureModeHelpers\n> > + *\n> > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > + * \\return 0 on success\n> > + */\n> > +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> > +{\n> > +\tstd::vector<ControlValue> availableExposureModes;\n> > +\tint ret;\n> > +\n> > +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> > +\tif (yamlExposureModes.isDictionary()) {\n> > +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> > +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> > +\t\t\t    AeExposureModeNameValueMap.end()) {\n> > +\t\t\t\tLOG(Agc, Warning)\n> > +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\tcontinue;\n> > +\t\t\t}\n> > +\n> > +\t\t\tif (!modeValues.isDictionary()) {\n> > 
+\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn -EINVAL;\n> > +\t\t\t}\n> > +\n> > +\t\t\tstd::vector<uint32_t> shutters =\n> > +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n> > +\t\t\tstd::vector<double> gains =\n> > +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n> > +\n> > +\t\t\tstd::vector<utils::Duration> shutterDurations;\n> > +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n> > +\t\t\t\tstd::back_inserter(shutterDurations),\n> > +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n> > +\n> > +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n> > +\t\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> > +\t\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n> > +\t\t\t\treturn ret;\n> > +\t\t\t}\n> > +\n> > +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n> > +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n> > +\t\t}\n> > +\t}\n> > +\n> > +\t/*\n> > +\t * If we don't have any exposure modes in the tuning data we create an\n> > +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n> > +\t * will then internally simply drive the shutter as high as possible\n> > +\t * before touching gain\n> > +\t */\n> > +\tif (availableExposureModes.empty()) {\n> > +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n> > +\t\tstd::vector<utils::Duration> shutterDurations = {};\n> > +\t\tstd::vector<double> gains = {};\n> > +\n> > +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > +\t\t\tstd::make_shared<ExposureModeHelper>();\n> > +\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> > +\t\t\tLOG(Agc, Error)\n> > +\t\t\t\t<< \"Failed to create default 
ExposureModeHelper\";\n> > +\t\t\treturn ret;\n> > +\t\t}\n> > +\n> > +\t\texposureModeHelpers_[exposureModeId] = helper;\n> > +\t\tavailableExposureModes.push_back(exposureModeId);\n> > +\t}\n> > +\n> > +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n> > +\n> > +\treturn 0;\n> > +}\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::constraintModes()\n> > + * \\brief Get the constraint modes that have been parsed from tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n> > + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::controls()\n> > + * \\brief Get the controls that have been generated after parsing tuning data\n> > + */\n> > +\n> > +/**\n> > + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n> > + * \\brief Estimate the luminance of an image, adjusted by a given gain\n> > + * \\param[in] gain The gain with which to adjust the luminance estimate\n> > + *\n> > + * This function is a pure virtual function because estimation of luminance is a\n> > + * hardware-specific operation, which depends wholly on the format of the stats\n> > + * that are delivered to libcamera from the ISP. 
Derived classes must implement\n> > + * an overriding function that calculates the normalised mean luminance value\n> > + * across the entire image.\n> > + *\n> > + * \\return The normalised relative luminance of the image\n> > + */\n> > +\n> > +/**\n> > + * \\brief Estimate the initial gain needed to achieve a relative luminance\n> > + *        target\n> \n> nit: we don't usually indent in briefs, or in general when breaking\n> lines in doxygen as far as I can tell\n> \n> Not going\n> \n> > + *\n> > + * To account for non-linearity caused by saturation, the value needs to be\n> > + * estimated in an iterative process, as multiplying by a gain will not increase\n> > + * the relative luminance by the same factor if some image regions are saturated\n> > + *\n> > + * \\return The calculated initial gain\n> > + */\n> > +double MeanLuminanceAgc::estimateInitialGain()\n> > +{\n> > +\tdouble yTarget = relativeLuminanceTarget_;\n> > +\tdouble yGain = 1.0;\n> > +\n> > +\tfor (unsigned int i = 0; i < 8; i++) {\n> > +\t\tdouble yValue = estimateLuminance(yGain);\n> > +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n> > +\n> > +\t\tyGain *= extra_gain;\n> > +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n> > +\t\t\t\t<< \", Y target: \" << yTarget\n> > +\t\t\t\t<< \", gives gain \" << yGain;\n> > +\n> > +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n> > +\t\t\tbreak;\n> > +\t}\n> > +\n> > +\treturn yGain;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Clamp gain within the bounds of a defined constraint\n> > + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n> > + * \\param[in] hist A histogram over which to calculate inter-quantile means\n> > + * \\param[in] gain The gain to clamp\n> > + *\n> > + * \\return The gain clamped within the constraint bounds\n> > + */\n> > +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n> > +\t\t\t\t\t     const Histogram &hist,\n> > +\t\t\t\t\t     double gain)\n> > +{\n> 
> +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n> > +\tfor (const AgcConstraint &constraint : constraints) {\n> > +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n> > +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n> > +\n> > +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n> > +\t\t    newGain > gain)\n> > +\t\t\tgain = newGain;\n> > +\n> > +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n> > +\t\t    newGain < gain)\n> > +\t\t\tgain = newGain;\n> > +\t}\n> > +\n> > +\treturn gain;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Apply a filter on the exposure value to limit the speed of changes\n> > + * \\param[in] exposureValue The target exposure from the AGC algorithm\n> > + *\n> > + * The speed of the filter is adaptive, and will produce the target quicker\n> > + * during startup, or when the target exposure is within 20% of the most recent\n> > + * filter output.\n> > + *\n> > + * \\return The filtered exposure\n> > + */\n> > +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n> > +{\n> > +\tdouble speed = 0.2;\n> > +\n> > +\t/* Adapt instantly if we are in startup phase. 
*/\n> > +\tif (frameCount_ < kNumStartupFrames)\n> > +\t\tspeed = 1.0;\n> > +\n> > +\t/*\n> > +\t * If we are close to the desired result, go faster to avoid making\n> > +\t * multiple micro-adjustments.\n> > +\t * \\todo Make this customisable?\n> > +\t */\n> > +\tif (filteredExposure_ < 1.2 * exposureValue &&\n> > +\t    filteredExposure_ > 0.8 * exposureValue)\n> > +\t\tspeed = sqrt(speed);\n> > +\n> > +\tfilteredExposure_ = speed * exposureValue +\n> > +\t\t\t    filteredExposure_ * (1.0 - speed);\n> > +\n> > +\treturn filteredExposure_;\n> > +}\n> > +\n> > +/**\n> > + * \\brief Calculate the new exposure value\n> > + * \\param[in] constraintModeIndex The index of the current constraint mode\n> > + * \\param[in] exposureModeIndex The index of the current exposure mode\n> > + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n> > + *\t      the calculated gain\n> > + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n> > + *\t      statistics in use derive\n> > + *\n> > + * Calculate a new exposure value to try to obtain the target. 
The calculated\n> > + * exposure value is filtered to prevent rapid changes from frame to frame, and\n> > + * divided into shutter time, analogue and digital gain.\n> > + *\n> > + * \\return Tuple of shutter time, analogue gain, and digital gain\n> > + */\n> > +std::tuple<utils::Duration, double, double>\n> > +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n> > +\t\t\t\t uint32_t exposureModeIndex,\n> > +\t\t\t\t const Histogram &yHist,\n> > +\t\t\t\t utils::Duration effectiveExposureValue)\n> > +{\n> > +\t/*\n> > +\t * The pipeline handler should validate that we have received an allowed\n> > +\t * value for AeExposureMode.\n> > +\t */\n> > +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n> > +\t\texposureModeHelpers_.at(exposureModeIndex);\n> > +\n> > +\tdouble gain = estimateInitialGain();\n> > +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n> > +\n> > +\t/*\n> > +\t * We don't check whether we're already close to the target, because\n> > +\t * even if the effective exposure value is the same as the last frame's\n> > +\t * we could have switched to an exposure mode that would require a new\n> > +\t * pass through the splitExposure() function.\n> > +\t */\n> > +\n> > +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n> > +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n> > +\t\t\t\t\t   * exposureModeHelper->maxGain();\n> > +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n> > +\n> > +\t/*\n> > +\t * We filter the exposure value to make sure changes are not too jarring\n> > +\t * from frame to frame.\n> > +\t */\n> > +\tnewExposureValue = filterExposure(newExposureValue);\n> > +\n> > +\tframeCount_++;\n> > +\treturn exposureModeHelper->splitExposure(newExposureValue);\n> > +}\n> > +\n> > +}; /* namespace ipa */\n> > +\n> > +}; /* namespace libcamera */\n> > diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n> > new file mode 100644\n> > index 
00000000..902a359a\n> > --- /dev/null\n> > +++ b/src/ipa/libipa/agc.h\n> > @@ -0,0 +1,82 @@\n> > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > +/*\n> > + * Copyright (C) 2024 Ideas on Board Oy\n> > + *\n> > + agc.h - Base class for libipa-compliant AGC algorithms\n> > + */\n> > +\n> > +#pragma once\n> > +\n> > +#include <tuple>\n> > +#include <vector>\n> > +\n> > +#include <libcamera/controls.h>\n> > +\n> > +#include \"libcamera/internal/yaml_parser.h\"\n> > +\n> > +#include \"exposure_mode_helper.h\"\n> > +#include \"histogram.h\"\n> > +\n> > +namespace libcamera {\n> > +\n> > +namespace ipa {\n> > +\n> > +class MeanLuminanceAgc\n> > +{\n> > +public:\n> > +\tMeanLuminanceAgc();\n> > +\tvirtual ~MeanLuminanceAgc() = default;\n\nNo need to make this inline, define the destructor in the .cpp with\n\nMeanLuminanceAgc::~MeanLuminanceAgc() = default;\n\n> > +\n> > +\tstruct AgcConstraint {\n> > +\t\tenum class Bound {\n> > +\t\t\tLOWER = 0,\n> > +\t\t\tUPPER = 1\n\nWrong coding style.\n\n> > +\t\t};\n> > +\t\tBound bound;\n> > +\t\tdouble qLo;\n> > +\t\tdouble qHi;\n> > +\t\tdouble yTarget;\n> > +\t};\n> > +\n> > +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n> > +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n\nThis function doesn't seem to be called from the derived classes, you\ncan make it private. 
Same for other functions below.\n\n> > +\tint parseConstraintModes(const YamlObject &tuningData);\n> > +\tint parseExposureModes(const YamlObject &tuningData);\n\nIs there a reason to split the parsing in multiple functions, given that\nthey are called the same from both the rkisp1 and ipu3\nimplementations ?\n\n> > +\n> > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n> > +\t{\n> > +\t\treturn constraintModes_;\n> > +\t}\n\nThis seems unused, so the AgcConstraint structure can be made private.\n\n> > +\n> > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n> > +\t{\n> > +\t\treturn exposureModeHelpers_;\n> > +\t}\n\nCan we avoid exposing this too ?\n\nI never expected there would be multiple helpers when reading the\ndocumentation of the ExposureModeHelper class. That may be due to\nmissing documentation.\n\n> > +\n> > +\tControlInfoMap::Map controls()\n> > +\t{\n> > +\t\treturn controls_;\n> > +\t}\n\nHmmm... We don't have precedents for pushing ControlInfo to algorithms.\nAre we sure we want to go that way ?\n\n> > +\n> > +\tvirtual double estimateLuminance(const double gain) = 0;\n> > +\tdouble estimateInitialGain();\n> > +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n> > +\t\t\t\t   const Histogram &hist,\n> > +\t\t\t\t   double gain);\n> > +\tutils::Duration filterExposure(utils::Duration exposureValue);\n> > +\tstd::tuple<utils::Duration, double, double>\n> > +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n> > +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n\nMissing blank line.\n\n> > +private:\n> > +\tuint64_t frameCount_;\n> > +\tutils::Duration filteredExposure_;\n> > +\tdouble relativeLuminanceTarget_;\n> > +\n> > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n> > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n> > +\tControlInfoMap::Map controls_;\n> > +};\n> > +\n> > 
+}; /* namespace ipa */\n> > +\n> > +}; /* namespace libcamera */\n> > diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n> > index 37fbd177..31cc8d70 100644\n> > --- a/src/ipa/libipa/meson.build\n> > +++ b/src/ipa/libipa/meson.build\n> > @@ -1,6 +1,7 @@\n> >  # SPDX-License-Identifier: CC0-1.0\n> >\n> >  libipa_headers = files([\n> > +    'agc.h',\n> >      'algorithm.h',\n> >      'camera_sensor_helper.h',\n> >      'exposure_mode_helper.h',\n> > @@ -10,6 +11,7 @@ libipa_headers = files([\n> >  ])\n> >\n> >  libipa_sources = files([\n> > +    'agc.cpp',\n> >      'algorithm.cpp',\n> >      'camera_sensor_helper.cpp',\n> >      'exposure_mode_helper.cpp',","headers":{"Date":"Sat, 6 Apr 2024 04:07:52 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Jacopo Mondi <jacopo.mondi@ideasonboard.com>","Cc":"Daniel Scally <dan.scally@ideasonboard.com>,\n\tlibcamera-devel@lists.libcamera.org","Subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","Message-ID":"<20240406010752.GL12507@pendragon.ideasonboard.com>","References":"<20240322131451.3092931-1-dan.scally@ideasonboard.com>\n\t<20240322131451.3092931-5-dan.scally@ideasonboard.com>\n\t<l3xzffnlkm7npaqn67qc52move4hpurhrfpfezdbomqtlrelp2@4eccse56yvlq>","In-Reply-To":"<l3xzffnlkm7npaqn67qc52move4hpurhrfpfezdbomqtlrelp2@4eccse56yvlq>"}},{"id":29168,"web_url":"https://patchwork.libcamera.org/comment/29168/","msgid":"<20240406012752.GA13085@pendragon.ideasonboard.com>","date":"2024-04-06T01:27:52","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"On Sat, Apr 06, 2024 at 04:07:52AM +0300, Laurent Pinchart wrote:\n> Hello,\n> \n> On Mon, Mar 25, 2024 at 09:16:00PM +0100, Jacopo Mondi wrote:\n> > On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> > > The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> > > very large part; following the Rpi IPA's algorithm in spirit with a\n> > > few tunable values in that IPA being hardcoded in the libipa ones.\n> > > Add a new base class for MeanLuminanceAgc which implements the same\n> > \n> > nit: I would rather call this one AgcMeanLuminance.\n> \n> And rename the files accordingly ?\n> \n> > One other note, not sure if applicable here, from the experience of\n> > trying to upstream an auto-focus algorithm for RkISP1. We had there a\n> > base class to define the algorithm interface, one derived class for\n> > the common calculation and one platform-specific part for statistics\n> > collection and IPA module interfacing.\n> > \n> > The base class was there mostly to handle the algorithm state machine\n> > by handling the controls that influence the algorithm behaviour. 
In\n> > case of AEGC I can think, in example, about handling the switch\n> > between enable/disable of auto-mode (and consequentially handling a\n> > manually set ExposureTime and AnalogueGain), switching between\n> > different ExposureModes etc\n> > \n> > This is the last attempt for AF\n> > https://patchwork.libcamera.org/patch/18510/\n> > \n> > and I'm wondering if it would be desirable to abstract away from the\n> > MeanLuminance part the part that is tightly coupled with the\n> > libcamera's controls definition.\n> > \n> > Let's make a concrete example: look how the rkisp1 and the ipu3 agc\n> > implementations handle AeEnabled (or better, look at how the rkisp1\n> > does that and the IPU3 does not).\n> > \n> > Ideally, my goal would be to abstract the handling of the control and\n> > all the state machine that decides if the manual or auto-computed\n> > values should be used to an AEGCAlgorithm base class.\n> > \n> > Your series does that for the tuning file parsing already but does\n> > that in the MeanLuminance method implementation, while it should or\n> > could be common to all AEGC methods.\n> > \n> > One very interesting experiment could be starting with this and then\n> > plumb-in AeEnable support in the IPU3 in example and move everything\n> > common to a base class.\n> > \n> > > algorithm and additionally parses yaml tuning files to inform an IPA\n> > > module's Agc algorithm about valid constraint and exposure modes and\n> > > their associated bounds.\n> > >\n> > > Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> > > ---\n> > >  src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n> > >  src/ipa/libipa/agc.h       |  82 ++++++\n> > >  src/ipa/libipa/meson.build |   2 +\n> > >  3 files changed, 610 insertions(+)\n> > >  create mode 100644 src/ipa/libipa/agc.cpp\n> > >  create mode 100644 src/ipa/libipa/agc.h\n> > >\n> > > diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> > > new file mode 100644\n> > > 
index 00000000..af57a571\n> > > --- /dev/null\n> > > +++ b/src/ipa/libipa/agc.cpp\n> > > @@ -0,0 +1,526 @@\n> > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > > +/*\n> > > + * Copyright (C) 2024 Ideas on Board Oy\n> > > + *\n> > > + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> \n> We could have different AGC algorithms that are all libipa-compliant.\n> This is one particular AGC algorithm, right ?\n> \n> > > + */\n> > > +\n> > > +#include \"agc.h\"\n> > > +\n> > > +#include <cmath>\n> > > +\n> > > +#include <libcamera/base/log.h>\n> > > +#include <libcamera/control_ids.h>\n> > > +\n> > > +#include \"exposure_mode_helper.h\"\n> > > +\n> > > +using namespace libcamera::controls;\n> > > +\n> > > +/**\n> > > + * \\file agc.h\n> > > + * \\brief Base class implementing mean luminance AEGC.\n> > \n> > nit: not '.' at the end of briefs\n> > \n> > > + */\n> > > +\n> > > +namespace libcamera {\n> > > +\n> > > +using namespace std::literals::chrono_literals;\n> > > +\n> > > +LOG_DEFINE_CATEGORY(Agc)\n> > > +\n> > > +namespace ipa {\n> > > +\n> > > +/*\n> > > + * Number of frames to wait before calculating stats on minimum exposure\n> \n> This doesn't seem to match what the value is used for.\n> \n> > > + * \\todo should this be a tunable value?\n> > \n> > Does this depend on the ISP (comes from IPA), the sensor (comes from\n> > tuning file) or... both ? :)\n> > \n> > > + */\n> > > +static constexpr uint32_t kNumStartupFrames = 10;\n> > > +\n> > > +/*\n> > > + * Default relative luminance target\n> > > + *\n> > > + * This value should be chosen so that when the camera points at a grey target,\n> > > + * the resulting image brightness looks \"right\". 
Custom values can be passed\n> > > + * as the relativeLuminanceTarget value in sensor tuning files.\n> > > + */\n> > > +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> > > +\n> > > +/**\n> > > + * \\struct MeanLuminanceAgc::AgcConstraint\n> > > + * \\brief The boundaries and target for an AeConstraintMode constraint\n> > > + *\n> > > + * This structure describes an AeConstraintMode constraint for the purposes of\n> > > + * this algorithm. The algorithm will apply the constraints by calculating the\n> > > + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> > > + * the resulting value is the right side of the given target (as defined by the\n> > > + * boundary and luminance target).\n> > > + */\n> > \n> > Here, in example.\n> > \n> > controls::AeConstraintMode and the supported values are defined as\n> > (core|vendor) controls in control_ids_*.yaml\n> > \n> > The tuning file expresses the constraint modes using the Control\n> > definition (I wonder if this has always been like this) but it\n> > definitely ties the tuning file to the controls definition.\n> > \n> > Applications use controls::AeConstraintMode to select one of the\n> > constrained modes to have the algorithm use it.\n> > \n> > In all of this, how much is part of the MeanLuminance implementation\n> > and how much shared between possibly multiple implementations ?\n> > \n> > > +\n> > > +/**\n> > > + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> > > + * \\brief Specify whether the constraint defines a lower or upper bound\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> > > + * \\brief The constraint defines a lower bound\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n> > > + * \\brief The constraint defines an upper bound\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> > > + * \\brief The type of constraint bound\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var 
MeanLuminanceAgc::AgcConstraint::qLo\n> > > + * \brief The lower quantile to use for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \var MeanLuminanceAgc::AgcConstraint::qHi\n> > > + * \brief The upper quantile to use for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \var MeanLuminanceAgc::AgcConstraint::yTarget\n> > > + * \brief The luminance target for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \class MeanLuminanceAgc\n> > > + * \brief a mean-based auto-exposure algorithm\n> \n> s/a/A/\n> \n> > > + *\n> > > + * This algorithm calculates a shutter time, analogue and digital gain such that\n> > > + * the normalised mean luminance value of an image is driven towards a target,\n> > > + * which itself is discovered from tuning data. The algorithm is a two-stage\n> > > + * process:\n> \n> s/:/./\n> \n> or make the next two paragraphs bullet list entries.\n> \n> > > + *\n> > > + * In the first stage, an initial gain value is derived by iteratively comparing\n> > > + * the gain-adjusted mean luminance across an entire image against a target, and\n> > > + * selecting a value which pushes it as closely as possible towards the target.\n> > > + *\n> > > + * In the second stage we calculate the gain required to drive the average of a\n> > > + * section of a histogram to a target value, where the target and the boundaries\n> > > + * of the section of the histogram used in the calculation are taken from the\n> > > + * values defined for the currently configured AeConstraintMode within the\n> > > + * tuning data. The gain from the first stage is then clamped to the gain from\n> > > + * this stage.\n\nAnother thing we need to document is the requirements to use this\nclass. The IPU3 can provide the required histogram because we compute a\npoor man's histogram from the grid of AWB stats, as the ImgU doesn't\ngive us a histogram. Platforms that can't provide a histogram at all\nwon't be able to use this. 
What precision do we need for the histogram\n(number of bins, and number of pixels used for the calculation) for it\nto be usable with this class ?\n\n> > > + *\n> > > + * The final gain is used to adjust the effective exposure value of the image,\n> > > + * and that new exposure value divided into shutter time, analogue gain and\n> \n> s/divided/is divided/\n> \n> > > + * digital gain according to the selected AeExposureMode.\n> \n> There should be a mention here that this class handles tuning file\n> parsing and expects a particular tuning data format. You can reference\n> the function that parses the tuning data for detailed documentation\n> about the format.\n> \n> > > + */\n> > > +\n> > > +MeanLuminanceAgc::MeanLuminanceAgc()\n> > > +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> > > +{\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse the relative luminance target from the tuning data\n> > > + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> > > + */\n> > > +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> > > +{\n> > > +\trelativeLuminanceTarget_ =\n> > > +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n> > \n> > How do you expect this to be computed in the tuning file ?\n> > \n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse an AeConstraintMode constraint from tuning data\n> > > + * \\param[in] modeDict the YamlObject holding the constraint data\n> > > + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> > > + */\n> > > +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> > > +{\n> > > +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n> > > +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n> > > +\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> > > +\t\t\tcontinue;\n> > > +\t\t}\n> > 
> +\n> > > +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n> > > +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> > > +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> > > +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> > > +\t\tdouble yTarget =\n> > > +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n> > > +\n> > > +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> > > +\n> > > +\t\tif (!constraintModes_.count(id))\n> > > +\t\t\tconstraintModes_[id] = {};\n> > > +\n> > > +\t\tif (idx)\n> > > +\t\t\tconstraintModes_[id].push_back(constraint);\n> > > +\t\telse\n> > > +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> > > +\t}\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse tuning data file to populate AeConstraintMode control\n> > > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > > + *\n> > > + * The Agc algorithm's tuning data should contain a dictionary called\n> > > + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> > > + * a value from \\ref controls::AeConstraintModeNameValueMap. Each mode dict may\n> > > + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n> > > + *\n> > > + * \\code{.unparsed}\n> > > + * algorithms:\n> > > + *   - Agc:\n> > > + *       AeConstraintMode:\n> > > + *         ConstraintNormal:\n> > > + *           lower:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.5\n> > \n> > Ok, so this ties the tuning file not just to the libcamera controls\n> > definition, but to this specific implementation of the algorithm ?\n> \n> Tying the algorithm implementation with the tuning data format seems\n> unavoidable. 
I'm slightly concerned about handling the YAML parsing here\n> though, as I'm wondering if we can always guarantee that the file will\n> not need to diverge somehow (possibly with additional data somewhere\n> that would make the shared parser fail), but that's probably such a\n> small risk that we can worry about it later.\n> \n> > Not that it was not expected, and I think it's fine, as using a\n> > libcamera defined control value as 'index' makes sure the applications\n> > will deal with the same interface, but this largely conflicts with the\n> > idea to have shared parsing for all algorithms in a base class.\n> \n> The idea comes straight from the RPi implementation, which expects users\n> to customize tuning files with new modes. Indexing into a non-standard\n> list of modes isn't an API I really like, and that's something I think\n> we need to fix eventually.\n> \n> > Also, we made AeConstraintMode a 'core control' because at the time\n> > when RPi first implemented AGC support there was no alternative to\n> > that. Then the RPi implementation has been copied in all other\n> > platforms and this is still fine as a 'core control'. 
This however\n> > seems to be a tuning parameter for this specific algorithm\n> > implementation, isn't it ?\n> \n> I should read your whole reply before writing anything :-)\n> \n> > > + *         ConstraintHighlight:\n> > > + *           lower:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.5\n> > > + *           upper:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.8\n> > > + *\n> > > + * \\endcode\n> > > + *\n> > > + * The parsed dictionaries are used to populate an array of available values for\n> > > + * the AeConstraintMode control and stored for later use in the algorithm.\n> > > + *\n> > > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > > + * \\return 0 on success\n> > > + */\n> > > +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> > > +{\n> > > +\tstd::vector<ControlValue> availableConstraintModes;\n> > > +\n> > > +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> > > +\tif (yamlConstraintModes.isDictionary()) {\n> > > +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> > > +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> > > +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> > > +\t\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> > > +\t\t\t\tcontinue;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tif (!modeDict.isDictionary()) {\n> > > +\t\t\t\tLOG(Agc, Error)\n> > > +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> > > +\t\t\t\treturn -EINVAL;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tparseConstraint(modeDict,\n> > > +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > > +\t\t\tavailableConstraintModes.push_back(\n> > > +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > > +\t\t}\n> > > +\t}\n> > > +\n> > > +\t/*\n> > > +\t * If the tuning 
data file contains no constraints then we use the\n> > > +\t * default constraint that the various Agc algorithms were adhering to\n> > > +\t * anyway before centralisation.\n> > > +\t */\n> > > +\tif (constraintModes_.empty()) {\n> > > +\t\tAgcConstraint constraint = {\n> > > +\t\t\tAgcConstraint::Bound::LOWER,\n> > > +\t\t\t0.98,\n> > > +\t\t\t1.0,\n> > > +\t\t\t0.5\n> > > +\t\t};\n> > > +\n> > > +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> > > +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> > > +\t\t\tconstraint);\n> > > +\t\tavailableConstraintModes.push_back(\n> > > +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> > > +\t}\n> > > +\n> > > +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> > > +\n> > > +\treturn 0;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse tuning data file to populate AeExposureMode control\n> > > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > > + *\n> > > + * The Agc algorithm's tuning data should contain a dictionary called\n> > > + * AeExposureMode containing per-mode setting dictionaries with the key being\n> > > + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n> > > + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> > > + * values with the key \"gain\", in this format:\n> > > + *\n> > > + * \\code{.unparsed}\n> > > + * algorithms:\n> > > + *   - Agc:\n> > > + *       AeExposureMode:\n> > \n> > Same reasoning as per the constraints, really.\n> > \n> > There's nothing bad here, if not me realizing our controls definition\n> > is intimately tied with the algorithms implementation, so I wonder\n> > again if even handling things like AeEnable in a common form makes\n> > any sense..\n> > \n> > Not going to review the actual implementation now, as it comes from\n> > the existing ones...\n> > \n> > \n> > > + *         ExposureNormal:\n> > > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > > + *         ExposureShort:\n> > > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > > + *\n> > > + * \\endcode\n> > > + *\n> > > + * The parsed dictionaries are used to populate an array of available values for\n> > > + * the AeExposureMode control and to create ExposureModeHelpers\n> > > + *\n> > > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > > + * \\return 0 on success\n> > > + */\n> > > +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> > > +{\n> > > +\tstd::vector<ControlValue> availableExposureModes;\n> > > +\tint ret;\n> > > +\n> > > +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> > > +\tif (yamlExposureModes.isDictionary()) {\n> > > +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> > > +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> > > +\t\t\t    AeExposureModeNameValueMap.end()) {\n> > > +\t\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> > > 
+\t\t\t\tcontinue;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tif (!modeValues.isDictionary()) {\n> > > +\t\t\t\tLOG(Agc, Error)\n> > > +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n> > > +\t\t\t\treturn -EINVAL;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tstd::vector<uint32_t> shutters =\n> > > +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n> > > +\t\t\tstd::vector<double> gains =\n> > > +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n> > > +\n> > > +\t\t\tstd::vector<utils::Duration> shutterDurations;\n> > > +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n> > > +\t\t\t\tstd::back_inserter(shutterDurations),\n> > > +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n> > > +\n> > > +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > > +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n> > > +\t\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> > > +\t\t\t\tLOG(Agc, Error)\n> > > +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n> > > +\t\t\t\treturn ret;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n> > > +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n> > > +\t\t}\n> > > +\t}\n> > > +\n> > > +\t/*\n> > > +\t * If we don't have any exposure modes in the tuning data we create an\n> > > +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n> > > +\t * will then internally simply drive the shutter as high as possible\n> > > +\t * before touching gain\n> > > +\t */\n> > > +\tif (availableExposureModes.empty()) {\n> > > +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n> > > +\t\tstd::vector<utils::Duration> shutterDurations = {};\n> > > +\t\tstd::vector<double> gains = {};\n> > > +\n> > > +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n> > > 
+\t\t\tstd::make_shared<ExposureModeHelper>();\n> > > +\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n> > > +\t\t\tLOG(Agc, Error)\n> > > +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n> > > +\t\t\treturn ret;\n> > > +\t\t}\n> > > +\n> > > +\t\texposureModeHelpers_[exposureModeId] = helper;\n> > > +\t\tavailableExposureModes.push_back(exposureModeId);\n> > > +\t}\n> > > +\n> > > +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n> > > +\n> > > +\treturn 0;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\fn MeanLuminanceAgc::constraintModes()\n> > > + * \\brief Get the constraint modes that have been parsed from tuning data\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n> > > + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\fn MeanLuminanceAgc::controls()\n> > > + * \\brief Get the controls that have been generated after parsing tuning data\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n> > > + * \\brief Estimate the luminance of an image, adjusted by a given gain\n> > > + * \\param[in] gain The gain with which to adjust the luminance estimate\n> > > + *\n> > > + * This function is a pure virtual function because estimation of luminance is a\n> > > + * hardware-specific operation, which depends wholly on the format of the stats\n> > > + * that are delivered to libcamera from the ISP. 
Derived classes must implement\n> > > + * an overriding function that calculates the normalised mean luminance value\n> > > + * across the entire image.\n> > > + *\n> > > + * \\return The normalised relative luminance of the image\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\brief Estimate the initial gain needed to achieve a relative luminance\n> > > + *        target\n> > \n> > nit: we don't usually indent in briefs, or in general when breaking\n> > lines in doxygen as far as I can tell\n> > \n> > Not going\n> > \n> > > + *\n> > > + * To account for non-linearity caused by saturation, the value needs to be\n> > > + * estimated in an iterative process, as multiplying by a gain will not increase\n> > > + * the relative luminance by the same factor if some image regions are saturated\n> > > + *\n> > > + * \\return The calculated initial gain\n> > > + */\n> > > +double MeanLuminanceAgc::estimateInitialGain()\n> > > +{\n> > > +\tdouble yTarget = relativeLuminanceTarget_;\n> > > +\tdouble yGain = 1.0;\n> > > +\n> > > +\tfor (unsigned int i = 0; i < 8; i++) {\n> > > +\t\tdouble yValue = estimateLuminance(yGain);\n> > > +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n> > > +\n> > > +\t\tyGain *= extra_gain;\n> > > +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n> > > +\t\t\t\t<< \", Y target: \" << yTarget\n> > > +\t\t\t\t<< \", gives gain \" << yGain;\n> > > +\n> > > +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n> > > +\t\t\tbreak;\n> > > +\t}\n> > > +\n> > > +\treturn yGain;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Clamp gain within the bounds of a defined constraint\n> > > + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n> > > + * \\param[in] hist A histogram over which to calculate inter-quantile means\n> > > + * \\param[in] gain The gain to clamp\n> > > + *\n> > > + * \\return The gain clamped within the constraint bounds\n> > > + */\n> > > +double MeanLuminanceAgc::constraintClampGain(uint32_t 
constraintModeIndex,\n> > > +\t\t\t\t\t     const Histogram &hist,\n> > > +\t\t\t\t\t     double gain)\n> > > +{\n> > > +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n> > > +\tfor (const AgcConstraint &constraint : constraints) {\n> > > +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n> > > +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n> > > +\n> > > +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n> > > +\t\t    newGain > gain)\n> > > +\t\t\tgain = newGain;\n> > > +\n> > > +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n> > > +\t\t    newGain < gain)\n> > > +\t\t\tgain = newGain;\n> > > +\t}\n> > > +\n> > > +\treturn gain;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Apply a filter on the exposure value to limit the speed of changes\n> > > + * \\param[in] exposureValue The target exposure from the AGC algorithm\n> > > + *\n> > > + * The speed of the filter is adaptive, and will produce the target quicker\n> > > + * during startup, or when the target exposure is within 20% of the most recent\n> > > + * filter output.\n> > > + *\n> > > + * \\return The filtered exposure\n> > > + */\n> > > +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n> > > +{\n> > > +\tdouble speed = 0.2;\n> > > +\n> > > +\t/* Adapt instantly if we are in startup phase. 
*/\n> > > +\tif (frameCount_ < kNumStartupFrames)\n> > > +\t\tspeed = 1.0;\n> > > +\n> > > +\t/*\n> > > +\t * If we are close to the desired result, go faster to avoid making\n> > > +\t * multiple micro-adjustments.\n> > > +\t * \\todo Make this customisable?\n> > > +\t */\n> > > +\tif (filteredExposure_ < 1.2 * exposureValue &&\n> > > +\t    filteredExposure_ > 0.8 * exposureValue)\n> > > +\t\tspeed = sqrt(speed);\n> > > +\n> > > +\tfilteredExposure_ = speed * exposureValue +\n> > > +\t\t\t    filteredExposure_ * (1.0 - speed);\n> > > +\n> > > +\treturn filteredExposure_;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Calculate the new exposure value\n> > > + * \\param[in] constraintModeIndex The index of the current constraint mode\n> > > + * \\param[in] exposureModeIndex The index of the current exposure mode\n> > > + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n> > > + *\t      the calculated gain\n> > > + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n> > > + *\t      statistics in use derive\n> > > + *\n> > > + * Calculate a new exposure value to try to obtain the target. 
The calculated\n> > > + * exposure value is filtered to prevent rapid changes from frame to frame, and\n> > > + * divided into shutter time, analogue and digital gain.\n> > > + *\n> > > + * \\return Tuple of shutter time, analogue gain, and digital gain\n> > > + */\n> > > +std::tuple<utils::Duration, double, double>\n> > > +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n> > > +\t\t\t\t uint32_t exposureModeIndex,\n> > > +\t\t\t\t const Histogram &yHist,\n> > > +\t\t\t\t utils::Duration effectiveExposureValue)\n> > > +{\n> > > +\t/*\n> > > +\t * The pipeline handler should validate that we have received an allowed\n> > > +\t * value for AeExposureMode.\n> > > +\t */\n> > > +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n> > > +\t\texposureModeHelpers_.at(exposureModeIndex);\n> > > +\n> > > +\tdouble gain = estimateInitialGain();\n> > > +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n> > > +\n> > > +\t/*\n> > > +\t * We don't check whether we're already close to the target, because\n> > > +\t * even if the effective exposure value is the same as the last frame's\n> > > +\t * we could have switched to an exposure mode that would require a new\n> > > +\t * pass through the splitExposure() function.\n> > > +\t */\n> > > +\n> > > +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n> > > +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n> > > +\t\t\t\t\t   * exposureModeHelper->maxGain();\n> > > +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n> > > +\n> > > +\t/*\n> > > +\t * We filter the exposure value to make sure changes are not too jarring\n> > > +\t * from frame to frame.\n> > > +\t */\n> > > +\tnewExposureValue = filterExposure(newExposureValue);\n> > > +\n> > > +\tframeCount_++;\n> > > +\treturn exposureModeHelper->splitExposure(newExposureValue);\n> > > +}\n> > > +\n> > > +}; /* namespace ipa */\n> > > +\n> > > +}; /* namespace libcamera */\n> > > diff 
--git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n> > > new file mode 100644\n> > > index 00000000..902a359a\n> > > --- /dev/null\n> > > +++ b/src/ipa/libipa/agc.h\n> > > @@ -0,0 +1,82 @@\n> > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > > +/*\n> > > + * Copyright (C) 2024 Ideas on Board Oy\n> > > + *\n> > > + agc.h - Base class for libipa-compliant AGC algorithms\n> > > + */\n> > > +\n> > > +#pragma once\n> > > +\n> > > +#include <tuple>\n> > > +#include <vector>\n> > > +\n> > > +#include <libcamera/controls.h>\n> > > +\n> > > +#include \"libcamera/internal/yaml_parser.h\"\n> > > +\n> > > +#include \"exposure_mode_helper.h\"\n> > > +#include \"histogram.h\"\n> > > +\n> > > +namespace libcamera {\n> > > +\n> > > +namespace ipa {\n> > > +\n> > > +class MeanLuminanceAgc\n> > > +{\n> > > +public:\n> > > +\tMeanLuminanceAgc();\n> > > +\tvirtual ~MeanLuminanceAgc() = default;\n> \n> No need to make this inline, define the destructor in the .cpp with\n> \n> MeanLumimanceAgc::~MeanLuminanceAgc() = default;\n> \n> > > +\n> > > +\tstruct AgcConstraint {\n> > > +\t\tenum class Bound {\n> > > +\t\t\tLOWER = 0,\n> > > +\t\t\tUPPER = 1\n> \n> Wrong coding style.\n> \n> > > +\t\t};\n> > > +\t\tBound bound;\n> > > +\t\tdouble qLo;\n> > > +\t\tdouble qHi;\n> > > +\t\tdouble yTarget;\n> > > +\t};\n> > > +\n> > > +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n> > > +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n> \n> This function doesn't seem to be called from the derived classes, you\n> can make it private. 
Same for other functions below.\n> \n> > > +\tint parseConstraintModes(const YamlObject &tuningData);\n> > > +\tint parseExposureModes(const YamlObject &tuningData);\n> \n> Is there a reason to split the parsing in multiple functions, given that\n> they are called the same from both the rkisp1 and ipu3\n> implementations ?\n> \n> > > +\n> > > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n> > > +\t{\n> > > +\t\treturn constraintModes_;\n> > > +\t}\n> \n> This seems unused, so the AgcConstraint structure can be made private.\n> \n> > > +\n> > > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n> > > +\t{\n> > > +\t\treturn exposureModeHelpers_;\n> > > +\t}\n> \n> Can we avoid exposing this too ?\n> \n> I never expected there would be multiple helpers when reading the\n> documentation of the ExposureModeHelper class. That may be due to\n> missing documentation.\n> \n> > > +\n> > > +\tControlInfoMap::Map controls()\n> > > +\t{\n> > > +\t\treturn controls_;\n> > > +\t}\n> \n> Hmmm... 
We don't have precedents for pushing ControlInfo to algorithms.\n> Are we sure we want to go that way ?\n> \n> > > +\n> > > +\tvirtual double estimateLuminance(const double gain) = 0;\n> > > +\tdouble estimateInitialGain();\n> > > +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n> > > +\t\t\t\t   const Histogram &hist,\n> > > +\t\t\t\t   double gain);\n> > > +\tutils::Duration filterExposure(utils::Duration exposureValue);\n> > > +\tstd::tuple<utils::Duration, double, double>\n> > > +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n> > > +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n> \n> Missing blank line.\n> \n> > > +private:\n> > > +\tuint64_t frameCount_;\n> > > +\tutils::Duration filteredExposure_;\n> > > +\tdouble relativeLuminanceTarget_;\n> > > +\n> > > +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n> > > +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n> > > +\tControlInfoMap::Map controls_;\n> > > +};\n> > > +\n> > > +}; /* namespace ipa */\n> > > +\n> > > +}; /* namespace libcamera */\n> > > diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n> > > index 37fbd177..31cc8d70 100644\n> > > --- a/src/ipa/libipa/meson.build\n> > > +++ b/src/ipa/libipa/meson.build\n> > > @@ -1,6 +1,7 @@\n> > >  # SPDX-License-Identifier: CC0-1.0\n> > >\n> > >  libipa_headers = files([\n> > > +    'agc.h',\n> > >      'algorithm.h',\n> > >      'camera_sensor_helper.h',\n> > >      'exposure_mode_helper.h',\n> > > @@ -10,6 +11,7 @@ libipa_headers = files([\n> > >  ])\n> > >\n> > >  libipa_sources = files([\n> > > +    'agc.cpp',\n> > >      'algorithm.cpp',\n> > >      'camera_sensor_helper.cpp',\n> > >      'exposure_mode_helper.cpp',\n> \n> -- \n> Regards,\n> \n> Laurent 
Pinchart","headers":{"Date":"Sat, 6 Apr 2024 04:27:52 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Jacopo Mondi <jacopo.mondi@ideasonboard.com>","Cc":"Daniel Scally <dan.scally@ideasonboard.com>,\n\tlibcamera-devel@lists.libcamera.org","Subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base 
class","Message-ID":"<20240406012752.GA13085@pendragon.ideasonboard.com>","In-Reply-To":"<20240406010752.GL12507@pendragon.ideasonboard.com>"}},{"id":29179,"web_url":"https://patchwork.libcamera.org/comment/29179/","msgid":"<65144b63-4035-4dc1-8bb5-240d2db3e748@ideasonboard.com>","date":"2024-04-08T10:57:07","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":156,"url":"https://patchwork.libcamera.org/api/people/156/","name":"Dan Scally","email":"dan.scally@ideasonboard.com"},"content":"Hi Jacopo, Laurent\n\nOn 06/04/2024 02:07, Laurent Pinchart wrote:\n> Hello,\n>\n> On Mon, Mar 25, 2024 at 09:16:00PM +0100, Jacopo Mondi wrote:\n>> On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n>>> The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n>>> very large part; following 
the Rpi IPA's algorithm in spirit with a\n>>> few tunable values in that IPA being hardcoded in the libipa ones.\n>>> Add a new base class for MeanLuminanceAgc which implements the same\n>> nit: I would rather call this one AgcMeanLuminance.\n> And rename the files accordingly ?\n>\n>> One other note, not sure if applicable here, from the experience of\n>> trying to upstream an auto-focus algorithm for RkISP1. We had there a\n>> base class to define the algorithm interface, one derived class for\n>> the common calculation and one platform-specific part for statistics\n>> collection and IPA module interfacing.\n>>\n>> The base class was there mostly to handle the algorithm state machine\n>> by handling the controls that influence the algorithm behaviour. In\n>> case of AEGC I can think, in example, about handling the switch\n>> between enable/disable of auto-mode (and consequentially handling a\n>> manually set ExposureTime and AnalogueGain), switching between\n>> different ExposureModes etc\n>>\n>> This is the last attempt for AF\n>> https://patchwork.libcamera.org/patch/18510/\n>>\n>> and I'm wondering if it would be desirable to abstract away from the\n>> MeanLuminance part the part that is tightly coupled with the\n>> libcamera's controls definition.\n>>\n>> Let's make a concrete example: look how the rkisp1 and the ipu3 agc\n>> implementations handle AeEnabled (or better, look at how the rkisp1\n>> does that and the IPU3 does not).\n>>\n>> Ideally, my goal would be to abstract the handling of the control and\n>> all the state machine that decides if the manual or auto-computed\n>> values should be used to an AEGCAlgorithm base class.\n>>\n>> Your series does that for the tuning file parsing already but does\n>> that in the MeanLuminance method implementation, while it should or\n>> could be common to all AEGC methods.\n>>\n>> One very interesting experiment could be starting with this and then\n>> plumb-in AeEnable support in the IPU3 in example and move 
everything\n>> common to a base class.\n\n\nI agree that conceptually that seems like a good direction of travel.\n\n>>\n>>> algorithm and additionally parses yaml tuning files to inform an IPA\n>>> module's Agc algorithm about valid constraint and exposure modes and\n>>> their associated bounds.\n>>>\n>>> Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n>>> ---\n>>>   src/ipa/libipa/agc.cpp     | 526 +++++++++++++++++++++++++++++++++++++\n>>>   src/ipa/libipa/agc.h       |  82 ++++++\n>>>   src/ipa/libipa/meson.build |   2 +\n>>>   3 files changed, 610 insertions(+)\n>>>   create mode 100644 src/ipa/libipa/agc.cpp\n>>>   create mode 100644 src/ipa/libipa/agc.h\n>>>\n>>> diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n>>> new file mode 100644\n>>> index 00000000..af57a571\n>>> --- /dev/null\n>>> +++ b/src/ipa/libipa/agc.cpp\n>>> @@ -0,0 +1,526 @@\n>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>> +/*\n>>> + * Copyright (C) 2024 Ideas on Board Oy\n>>> + *\n>>> + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> We could have different AGC algorithms that are all libipa-compliant.\n> This is one particular AGC algorithm, right ?\n\n\nYes - the filenames do need to change.\n\n>\n>>> + */\n>>> +\n>>> +#include \"agc.h\"\n>>> +\n>>> +#include <cmath>\n>>> +\n>>> +#include <libcamera/base/log.h>\n>>> +#include <libcamera/control_ids.h>\n>>> +\n>>> +#include \"exposure_mode_helper.h\"\n>>> +\n>>> +using namespace libcamera::controls;\n>>> +\n>>> +/**\n>>> + * \\file agc.h\n>>> + * \\brief Base class implementing mean luminance AEGC.\n>> nit: not '.' 
at the end of briefs\n>>\n>>> + */\n>>> +\n>>> +namespace libcamera {\n>>> +\n>>> +using namespace std::literals::chrono_literals;\n>>> +\n>>> +LOG_DEFINE_CATEGORY(Agc)\n>>> +\n>>> +namespace ipa {\n>>> +\n>>> +/*\n>>> + * Number of frames to wait before calculating stats on minimum exposure\n> This doesn't seem to match what the value is used for.\n>\n>>> + * \\todo should this be a tunable value?\n>> Does this depend on the ISP (comes from IPA), the sensor (comes from\n>> tuning file) or... both ? :)\n>>\n>>> + */\n>>> +static constexpr uint32_t kNumStartupFrames = 10;\n>>> +\n>>> +/*\n>>> + * Default relative luminance target\n>>> + *\n>>> + * This value should be chosen so that when the camera points at a grey target,\n>>> + * the resulting image brightness looks \"right\". Custom values can be passed\n>>> + * as the relativeLuminanceTarget value in sensor tuning files.\n>>> + */\n>>> +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n>>> +\n>>> +/**\n>>> + * \\struct MeanLuminanceAgc::AgcConstraint\n>>> + * \\brief The boundaries and target for an AeConstraintMode constraint\n>>> + *\n>>> + * This structure describes an AeConstraintMode constraint for the purposes of\n>>> + * this algorithm. 
The algorithm will apply the constraints by calculating the\n>>> + * Histogram's inter-quantile mean between the given quantiles and ensure that\n>>> + * the resulting value is the right side of the given target (as defined by the\n>>> + * boundary and luminance target).\n>>> + */\n>> Here, in example.\n>>\n>> controls::AeConstraintMode and the supported values are defined as\n>> (core|vendor) controls in control_ids_*.yaml\n>>\n>> The tuning file expresses the constraint modes using the Control\n>> definition (I wonder if this has always been like this) but it\n>> definitely ties the tuning file to the controls definition.\n>>\n>> Applications use controls::AeConstraintMode to select one of the\n>> constrained modes to have the algorithm use it.\n>>\n>> In all of this, how much is part of the MeanLuminance implementation\n>> and how much shared between possibly multiple implementations ?\n>>\n>>> +\n>>> +/**\n>>> + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n>>> + * \\brief Specify whether the constraint defines a lower or upper bound\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n>>> + * \\brief The constraint defines a lower bound\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n>>> + * \\brief The constraint defines an upper bound\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::bound\n>>> + * \\brief The type of constraint bound\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n>>> + * \\brief The lower quantile to use for the constraint\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n>>> + * \\brief The upper quantile to use for the constraint\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n>>> + * \\brief The luminance target for the constraint\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\class MeanLuminanceAgc\n>>> + * \\brief a mean-based auto-exposure algorithm\n> s/a/A/\n>\n>>> + *\n>>> + * This algorithm calculates a 
shutter time, analogue and digital gain such that\n>>> + * the normalised mean luminance value of an image is driven towards a target,\n>>> + * which itself is discovered from tuning data. The algorithm is a two-stage\n>>> + * process:\n> s/:/./\n>\n> or make the next two paragraph bullet list entries.\n>\n>>> + *\n>>> + * In the first stage, an initial gain value is derived by iteratively comparing\n>>> + * the gain-adjusted mean luminance across an entire image against a target, and\n>>> + * selecting a value which pushes it as closely as possible towards the target.\n>>> + *\n>>> + * In the second stage we calculate the gain required to drive the average of a\n>>> + * section of a histogram to a target value, where the target and the boundaries\n>>> + * of the section of the histogram used in the calculation are taken from the\n>>> + * values defined for the currently configured AeConstraintMode within the\n>>> + * tuning data. The gain from the first stage is then clamped to the gain from\n>>> + * this stage.\n>>> + *\n>>> + * The final gain is used to adjust the effective exposure value of the image,\n>>> + * and that new exposure value divided into shutter time, analogue gain and\n> s/divided/is divided/\n>\n>>> + * digital gain according to the selected AeExposureMode.\n> There should be a mention here that this class handles tuning file\n> parsing and expects a particular tuning data format. 
You can reference\n> the function that parses the tuning data for detailed documentation\n> about the format.\n>\n>>> + */\n>>> +\n>>> +MeanLuminanceAgc::MeanLuminanceAgc()\n>>> +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n>>> +{\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Parse the relative luminance target from the tuning data\n>>> + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n>>> + */\n>>> +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n>>> +{\n>>> +\trelativeLuminanceTarget_ =\n>>> +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n>> How do you expect this to be computed in the tuning file ?\n>>\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Parse an AeConstraintMode constraint from tuning data\n>>> + * \\param[in] modeDict the YamlObject holding the constraint data\n>>> + * \\param[in] id The constraint ID from AeConstraintModeEnum\n>>> + */\n>>> +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n>>> +{\n>>> +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n>>> +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n>>> +\t\t\tLOG(Agc, Warning)\n>>> +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n>>> +\t\t\tcontinue;\n>>> +\t\t}\n>>> +\n>>> +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n>>> +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n>>> +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n>>> +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n>>> +\t\tdouble yTarget =\n>>> +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n>>> +\n>>> +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n>>> +\n>>> +\t\tif (!constraintModes_.count(id))\n>>> +\t\t\tconstraintModes_[id] = {};\n>>> +\n>>> +\t\tif (idx)\n>>> 
+\t\t\tconstraintModes_[id].push_back(constraint);\n>>> +\t\telse\n>>> +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n>>> +\t}\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Parse tuning data file to populate AeConstraintMode control\n>>> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n>>> + *\n>>> + * The Agc algorithm's tuning data should contain a dictionary called\n>>> + * AeConstraintMode containing per-mode setting dictionaries with the key being\n>>> + * a value from \\ref controls::AeConstraintModeNameValueMap. Each mode dict may\n>>> + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n>>> + *\n>>> + * \\code{.unparsed}\n>>> + * algorithms:\n>>> + *   - Agc:\n>>> + *       AeConstraintMode:\n>>> + *         ConstraintNormal:\n>>> + *           lower:\n>>> + *             qLo: 0.98\n>>> + *             qHi: 1.0\n>>> + *             yTarget: 0.5\n>> Ok, so this ties the tuning file not just to the libcamera controls\n>> definition, but to this specific implementation of the algorithm ?\n> Tying the algorithm implementation with the tuning data format seems\n> unavoidable. I'm slightly concerned about handling the YAML parsing here\n> though, as I'm wondering if we can always guarantee that the file will\n> not need to diverge somehow (possibly with additional data somewhere\n> that would make the shared parser fail), but that's probably such a\n> small risk that we can worry about it later.\n>\n>> Not that it was not expected, and I think it's fine, as using a\n>> libcamera defined control value as 'index' makes sure the applications\n>> will deal with the same interface, but this largely conflicts with the\n>> idea to have shared parsing for all algorithms in a base class.\n> The idea comes straight from the RPi implementation, which expect users\n> to customize tuning files with new modes. 
Indexing into a non standard\n> list of modes isn't an API I really like, and that's something I think\n> we need to fix eventually.\n>\n>> Also, we made AeConstrainModes a 'core control' because at the time\n>> when RPi first implemented AGC support there was no alternative to\n>> that. Then the RPi implementation has been copied in all other\n>> platforms and this is still fine as a 'core control'. This however\n>> seems to be a tuning parameter for this specific algorithm\n>> implementation, isn't it ?\n> I should read your whole reply before writing anything :-)\n\n\nWe (Jacopo, Stefan and I) discussed this a little in a call a while ago too; I think that although \nthe ExposureMode control and probably AeEnable could be centralised across all implementations of \nAgc, the ConstraintMode and the luminance target are really quite specific to this mean luminance \nalgorithm and so parsing the settings for those probably does need to live here. For the same reason \nthe question below about whether it's right to pass a ControlInfoMap::Map to the algorithms is, I \nthink, necessary - the controls that get instantiated might depend on the version of an algorithm \nthat is \"selected\" by the tuning file.\n\n\nIn recognition of that, in addition to renaming the file to AgcMeanLuminance I think that the key \nfor the parameters for the algorithm in the tuning files will need to change to that too.\n\n>\n>>> + *         ConstraintHighlight:\n>>> + *           lower:\n>>> + *             qLo: 0.98\n>>> + *             qHi: 1.0\n>>> + *             yTarget: 0.5\n>>> + *           upper:\n>>> + *             qLo: 0.98\n>>> + *             qHi: 1.0\n>>> + *             yTarget: 0.8\n>>> + *\n>>> + * \\endcode\n>>> + *\n>>> + * The parsed dictionaries are used to populate an array of available values for\n>>> + * the AeConstraintMode control and stored for later use in the algorithm.\n>>> + *\n>>> + * \\return -EINVAL Where a defined constraint mode is invalid\n>>> + * 
\\return 0 on success\n>>> + */\n>>> +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n>>> +{\n>>> +\tstd::vector<ControlValue> availableConstraintModes;\n>>> +\n>>> +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n>>> +\tif (yamlConstraintModes.isDictionary()) {\n>>> +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n>>> +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n>>> +\t\t\t    AeConstraintModeNameValueMap.end()) {\n>>> +\t\t\t\tLOG(Agc, Warning)\n>>> +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n>>> +\t\t\t\tcontinue;\n>>> +\t\t\t}\n>>> +\n>>> +\t\t\tif (!modeDict.isDictionary()) {\n>>> +\t\t\t\tLOG(Agc, Error)\n>>> +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n>>> +\t\t\t\treturn -EINVAL;\n>>> +\t\t\t}\n>>> +\n>>> +\t\t\tparseConstraint(modeDict,\n>>> +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n>>> +\t\t\tavailableConstraintModes.push_back(\n>>> +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n>>> +\t\t}\n>>> +\t}\n>>> +\n>>> +\t/*\n>>> +\t * If the tuning data file contains no constraints then we use the\n>>> +\t * default constraint that the various Agc algorithms were adhering to\n>>> +\t * anyway before centralisation.\n>>> +\t */\n>>> +\tif (constraintModes_.empty()) {\n>>> +\t\tAgcConstraint constraint = {\n>>> +\t\t\tAgcConstraint::Bound::LOWER,\n>>> +\t\t\t0.98,\n>>> +\t\t\t1.0,\n>>> +\t\t\t0.5\n>>> +\t\t};\n>>> +\n>>> +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n>>> +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n>>> +\t\t\tconstraint);\n>>> +\t\tavailableConstraintModes.push_back(\n>>> +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n>>> +\t}\n>>> +\n>>> +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Parse tuning data file to populate 
AeExposureMode control\n>>> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n>>> + *\n>>> + * The Agc algorithm's tuning data should contain a dictionary called\n>>> + * AeExposureMode containing per-mode setting dictionaries with the key being\n>>> + * a value from \\ref controls::AeExposureModeNameValueMap. Each mode dict should\n>>> + * contain an array of shutter times with the key \"shutter\" and an array of gain\n>>> + * values with the key \"gain\", in this format:\n>>> + *\n>>> + * \\code{.unparsed}\n>>> + * algorithms:\n>>> + *   - Agc:\n>>> + *       AeExposureMode:\n>> Same reasoning as per the constraints, really.\n>>\n>> There's nothing bad here, if not me realizing our controls definition\n>> is intimately tied with the algorithms implementation, so I wonder\n>> again if even handling things like AeEnable in a common form makes\n>> any sense..\n>>\n>> Not going to review the actual implementation now, as it comes from\n>> the existing ones...\n>>\n>>\n>>> + *         ExposureNormal:\n>>> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n>>> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n>>> + *         ExposureShort:\n>>> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n>>> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n>>> + *\n>>> + * \\endcode\n>>> + *\n>>> + * The parsed dictionaries are used to populate an array of available values for\n>>> + * the AeExposureMode control and to create ExposureModeHelpers\n>>> + *\n>>> + * \\return -EINVAL Where a defined constraint mode is invalid\n>>> + * \\return 0 on success\n>>> + */\n>>> +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n>>> +{\n>>> +\tstd::vector<ControlValue> availableExposureModes;\n>>> +\tint ret;\n>>> +\n>>> +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n>>> +\tif (yamlExposureModes.isDictionary()) {\n>>> +\t\tfor (const auto &[modeName, modeValues] : 
yamlExposureModes.asDict()) {\n>>> +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n>>> +\t\t\t    AeExposureModeNameValueMap.end()) {\n>>> +\t\t\t\tLOG(Agc, Warning)\n>>> +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n>>> +\t\t\t\tcontinue;\n>>> +\t\t\t}\n>>> +\n>>> +\t\t\tif (!modeValues.isDictionary()) {\n>>> +\t\t\t\tLOG(Agc, Error)\n>>> +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n>>> +\t\t\t\treturn -EINVAL;\n>>> +\t\t\t}\n>>> +\n>>> +\t\t\tstd::vector<uint32_t> shutters =\n>>> +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n>>> +\t\t\tstd::vector<double> gains =\n>>> +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n>>> +\n>>> +\t\t\tstd::vector<utils::Duration> shutterDurations;\n>>> +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n>>> +\t\t\t\tstd::back_inserter(shutterDurations),\n>>> +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n>>> +\n>>> +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n>>> +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n>>> +\t\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n>>> +\t\t\t\tLOG(Agc, Error)\n>>> +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n>>> +\t\t\t\treturn ret;\n>>> +\t\t\t}\n>>> +\n>>> +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n>>> +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n>>> +\t\t}\n>>> +\t}\n>>> +\n>>> +\t/*\n>>> +\t * If we don't have any exposure modes in the tuning data we create an\n>>> +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n>>> +\t * will then internally simply drive the shutter as high as possible\n>>> +\t * before touching gain\n>>> +\t */\n>>> +\tif (availableExposureModes.empty()) {\n>>> +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n>>> 
+\t\tstd::vector<utils::Duration> shutterDurations = {};\n>>> +\t\tstd::vector<double> gains = {};\n>>> +\n>>> +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n>>> +\t\t\tstd::make_shared<ExposureModeHelper>();\n>>> +\t\tif ((ret = helper->init(shutterDurations, gains) < 0)) {\n>>> +\t\t\tLOG(Agc, Error)\n>>> +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n>>> +\t\t\treturn ret;\n>>> +\t\t}\n>>> +\n>>> +\t\texposureModeHelpers_[exposureModeId] = helper;\n>>> +\t\tavailableExposureModes.push_back(exposureModeId);\n>>> +\t}\n>>> +\n>>> +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n>>> +\n>>> +\treturn 0;\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\fn MeanLuminanceAgc::constraintModes()\n>>> + * \\brief Get the constraint modes that have been parsed from tuning data\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n>>> + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\fn MeanLuminanceAgc::controls()\n>>> + * \\brief Get the controls that have been generated after parsing tuning data\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n>>> + * \\brief Estimate the luminance of an image, adjusted by a given gain\n>>> + * \\param[in] gain The gain with which to adjust the luminance estimate\n>>> + *\n>>> + * This function is a pure virtual function because estimation of luminance is a\n>>> + * hardware-specific operation, which depends wholly on the format of the stats\n>>> + * that are delivered to libcamera from the ISP. 
Derived classes must implement\n>>> + * an overriding function that calculates the normalised mean luminance value\n>>> + * across the entire image.\n>>> + *\n>>> + * \\return The normalised relative luminance of the image\n>>> + */\n>>> +\n>>> +/**\n>>> + * \\brief Estimate the initial gain needed to achieve a relative luminance\n>>> + *        target\n>> nit: we don't usually indent in briefs, or in general when breaking\n>> lines in doxygen as far as I can tell\n>>\n>> Not going\n>>\n>>> + *\n>>> + * To account for non-linearity caused by saturation, the value needs to be\n>>> + * estimated in an iterative process, as multiplying by a gain will not increase\n>>> + * the relative luminance by the same factor if some image regions are saturated\n>>> + *\n>>> + * \\return The calculated initial gain\n>>> + */\n>>> +double MeanLuminanceAgc::estimateInitialGain()\n>>> +{\n>>> +\tdouble yTarget = relativeLuminanceTarget_;\n>>> +\tdouble yGain = 1.0;\n>>> +\n>>> +\tfor (unsigned int i = 0; i < 8; i++) {\n>>> +\t\tdouble yValue = estimateLuminance(yGain);\n>>> +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n>>> +\n>>> +\t\tyGain *= extra_gain;\n>>> +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n>>> +\t\t\t\t<< \", Y target: \" << yTarget\n>>> +\t\t\t\t<< \", gives gain \" << yGain;\n>>> +\n>>> +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n>>> +\t\t\tbreak;\n>>> +\t}\n>>> +\n>>> +\treturn yGain;\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Clamp gain within the bounds of a defined constraint\n>>> + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n>>> + * \\param[in] hist A histogram over which to calculate inter-quantile means\n>>> + * \\param[in] gain The gain to clamp\n>>> + *\n>>> + * \\return The gain clamped within the constraint bounds\n>>> + */\n>>> +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n>>> +\t\t\t\t\t     const Histogram &hist,\n>>> +\t\t\t\t\t     double gain)\n>>> +{\n>>> 
+\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n>>> +\tfor (const AgcConstraint &constraint : constraints) {\n>>> +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n>>> +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n>>> +\n>>> +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n>>> +\t\t    newGain > gain)\n>>> +\t\t\tgain = newGain;\n>>> +\n>>> +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n>>> +\t\t    newGain < gain)\n>>> +\t\t\tgain = newGain;\n>>> +\t}\n>>> +\n>>> +\treturn gain;\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Apply a filter on the exposure value to limit the speed of changes\n>>> + * \\param[in] exposureValue The target exposure from the AGC algorithm\n>>> + *\n>>> + * The speed of the filter is adaptive, and will produce the target quicker\n>>> + * during startup, or when the target exposure is within 20% of the most recent\n>>> + * filter output.\n>>> + *\n>>> + * \\return The filtered exposure\n>>> + */\n>>> +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n>>> +{\n>>> +\tdouble speed = 0.2;\n>>> +\n>>> +\t/* Adapt instantly if we are in startup phase. 
*/\n>>> +\tif (frameCount_ < kNumStartupFrames)\n>>> +\t\tspeed = 1.0;\n>>> +\n>>> +\t/*\n>>> +\t * If we are close to the desired result, go faster to avoid making\n>>> +\t * multiple micro-adjustments.\n>>> +\t * \\todo Make this customisable?\n>>> +\t */\n>>> +\tif (filteredExposure_ < 1.2 * exposureValue &&\n>>> +\t    filteredExposure_ > 0.8 * exposureValue)\n>>> +\t\tspeed = sqrt(speed);\n>>> +\n>>> +\tfilteredExposure_ = speed * exposureValue +\n>>> +\t\t\t    filteredExposure_ * (1.0 - speed);\n>>> +\n>>> +\treturn filteredExposure_;\n>>> +}\n>>> +\n>>> +/**\n>>> + * \\brief Calculate the new exposure value\n>>> + * \\param[in] constraintModeIndex The index of the current constraint mode\n>>> + * \\param[in] exposureModeIndex The index of the current exposure mode\n>>> + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n>>> + *\t      the calculated gain\n>>> + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n>>> + *\t      statistics in use derive\n>>> + *\n>>> + * Calculate a new exposure value to try to obtain the target. 
The calculated\n>>> + * exposure value is filtered to prevent rapid changes from frame to frame, and\n>>> + * divided into shutter time, analogue and digital gain.\n>>> + *\n>>> + * \\return Tuple of shutter time, analogue gain, and digital gain\n>>> + */\n>>> +std::tuple<utils::Duration, double, double>\n>>> +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n>>> +\t\t\t\t uint32_t exposureModeIndex,\n>>> +\t\t\t\t const Histogram &yHist,\n>>> +\t\t\t\t utils::Duration effectiveExposureValue)\n>>> +{\n>>> +\t/*\n>>> +\t * The pipeline handler should validate that we have received an allowed\n>>> +\t * value for AeExposureMode.\n>>> +\t */\n>>> +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n>>> +\t\texposureModeHelpers_.at(exposureModeIndex);\n>>> +\n>>> +\tdouble gain = estimateInitialGain();\n>>> +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n>>> +\n>>> +\t/*\n>>> +\t * We don't check whether we're already close to the target, because\n>>> +\t * even if the effective exposure value is the same as the last frame's\n>>> +\t * we could have switched to an exposure mode that would require a new\n>>> +\t * pass through the splitExposure() function.\n>>> +\t */\n>>> +\n>>> +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n>>> +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n>>> +\t\t\t\t\t   * exposureModeHelper->maxGain();\n>>> +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n>>> +\n>>> +\t/*\n>>> +\t * We filter the exposure value to make sure changes are not too jarring\n>>> +\t * from frame to frame.\n>>> +\t */\n>>> +\tnewExposureValue = filterExposure(newExposureValue);\n>>> +\n>>> +\tframeCount_++;\n>>> +\treturn exposureModeHelper->splitExposure(newExposureValue);\n>>> +}\n>>> +\n>>> +}; /* namespace ipa */\n>>> +\n>>> +}; /* namespace libcamera */\n>>> diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h\n>>> new file mode 100644\n>>> index 
00000000..902a359a\n>>> --- /dev/null\n>>> +++ b/src/ipa/libipa/agc.h\n>>> @@ -0,0 +1,82 @@\n>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>> +/*\n>>> + * Copyright (C) 2024 Ideas on Board Oy\n>>> + *\n>>> + agc.h - Base class for libipa-compliant AGC algorithms\n>>> + */\n>>> +\n>>> +#pragma once\n>>> +\n>>> +#include <tuple>\n>>> +#include <vector>\n>>> +\n>>> +#include <libcamera/controls.h>\n>>> +\n>>> +#include \"libcamera/internal/yaml_parser.h\"\n>>> +\n>>> +#include \"exposure_mode_helper.h\"\n>>> +#include \"histogram.h\"\n>>> +\n>>> +namespace libcamera {\n>>> +\n>>> +namespace ipa {\n>>> +\n>>> +class MeanLuminanceAgc\n>>> +{\n>>> +public:\n>>> +\tMeanLuminanceAgc();\n>>> +\tvirtual ~MeanLuminanceAgc() = default;\n> No need to make this inline, define the destructor in the .cpp with\n>\n> MeanLumimanceAgc::~MeanLuminanceAgc() = default;\n>\n>>> +\n>>> +\tstruct AgcConstraint {\n>>> +\t\tenum class Bound {\n>>> +\t\t\tLOWER = 0,\n>>> +\t\t\tUPPER = 1\n> Wrong coding style.\n>\n>>> +\t\t};\n>>> +\t\tBound bound;\n>>> +\t\tdouble qLo;\n>>> +\t\tdouble qHi;\n>>> +\t\tdouble yTarget;\n>>> +\t};\n>>> +\n>>> +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n>>> +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n> This function doesn't seem to be called from the derived classes, you\n> can make it private. 
Same for other functions below.\n>\n>>> +\tint parseConstraintModes(const YamlObject &tuningData);\n>>> +\tint parseExposureModes(const YamlObject &tuningData);\n> Is there a reason to split the parsing in multiple functions, given that\n> they are called the same from from both the rkisp1 and ipu3\n> implementations ?\n>\n>>> +\n>>> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n>>> +\t{\n>>> +\t\treturn constraintModes_;\n>>> +\t}\n> This seems unused, so the AgcConstraint structure can be made private.\n>\n>>> +\n>>> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n>>> +\t{\n>>> +\t\treturn exposureModeHelpers_;\n>>> +\t}\n> Can we avoid exposing this too ?\n>\n> I never expected there would be multiple helpers when reading the\n> documentation of the ExposureModeHelper class. That may be due to\n> missing documentation.\n>\n>>> +\n>>> +\tControlInfoMap::Map controls()\n>>> +\t{\n>>> +\t\treturn controls_;\n>>> +\t}\n> Hmmm... We don't have precendents for pushing ControlInfo to algorithms.\n> Are we sure we want to go that way ?\n>\n>>> +\n>>> +\tvirtual double estimateLuminance(const double gain) = 0;\n>>> +\tdouble estimateInitialGain();\n>>> +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n>>> +\t\t\t\t   const Histogram &hist,\n>>> +\t\t\t\t   double gain);\n>>> +\tutils::Duration filterExposure(utils::Duration exposureValue);\n>>> +\tstd::tuple<utils::Duration, double, double>\n>>> +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n>>> +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n> Missing blank line.\n>\n>>> +private:\n>>> +\tuint64_t frameCount_;\n>>> +\tutils::Duration filteredExposure_;\n>>> +\tdouble relativeLuminanceTarget_;\n>>> +\n>>> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n>>> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n>>> +\tControlInfoMap::Map controls_;\n>>> 
+};\n>>> +\n>>> +}; /* namespace ipa */\n>>> +\n>>> +}; /* namespace libcamera */\n>>> diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n>>> index 37fbd177..31cc8d70 100644\n>>> --- a/src/ipa/libipa/meson.build\n>>> +++ b/src/ipa/libipa/meson.build\n>>> @@ -1,6 +1,7 @@\n>>>   # SPDX-License-Identifier: CC0-1.0\n>>>\n>>>   libipa_headers = files([\n>>> +    'agc.h',\n>>>       'algorithm.h',\n>>>       'camera_sensor_helper.h',\n>>>       'exposure_mode_helper.h',\n>>> @@ -10,6 +11,7 @@ libipa_headers = files([\n>>>   ])\n>>>\n>>>   libipa_sources = files([\n>>> +    'agc.cpp',\n>>>       'algorithm.cpp',\n>>>       'camera_sensor_helper.cpp',\n>>>       'exposure_mode_helper.cpp',"},
format=flowed","Content-Transfer-Encoding":"7bit","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"}},{"id":29180,"web_url":"https://patchwork.libcamera.org/comment/29180/","msgid":"<855c8500-16bb-4c17-814e-8ffe0dd25c5a@ideasonboard.com>","date":"2024-04-08T11:09:21","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":156,"url":"https://patchwork.libcamera.org/api/people/156/","name":"Dan Scally","email":"dan.scally@ideasonboard.com"},"content":"Hi Laurent\n\nOn 06/04/2024 02:27, Laurent Pinchart wrote:\n> On Sat, Apr 06, 2024 at 04:07:52AM +0300, Laurent Pinchart wrote:\n>> Hello,\n>>\n>> On Mon, Mar 25, 2024 at 09:16:00PM +0100, Jacopo Mondi wrote:\n>>> On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n>>>> The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n>>>> very large part; following the Rpi IPA's algorithm in spirit with a\n>>>> few tunable values in that IPA being hardcoded in the libipa ones.\n>>>> Add a new base class for MeanLuminanceAgc which implements the same\n>>> nit: I would rather call this one AgcMeanLuminance.\n>> And rename the files accordingly ?\n>>\n>>> One other note, not sure if applicable here, from the experience of\n>>> trying 
to upstream an auto-focus algorithm for RkISP1. We had there a\n>>> base class to define the algorithm interface, one derived class for\n>>> the common calculation and one platform-specific part for statistics\n>>> collection and IPA module interfacing.\n>>>\n>>> The base class was there mostly to handle the algorithm state machine\n>>> by handling the controls that influence the algorithm behaviour. In\n>>> case of AEGC I can think, in example, about handling the switch\n>>> between enable/disable of auto-mode (and consequentially handling a\n>>> manually set ExposureTime and AnalogueGain), switching between\n>>> different ExposureModes etc\n>>>\n>>> This is the last attempt for AF\n>>> https://patchwork.libcamera.org/patch/18510/\n>>>\n>>> and I'm wondering if it would be desirable to abstract away from the\n>>> MeanLuminance part the part that is tightly coupled with the\n>>> libcamera's controls definition.\n>>>\n>>> Let's make a concrete example: look how the rkisp1 and the ipu3 agc\n>>> implementations handle AeEnabled (or better, look at how the rkisp1\n>>> does that and the IPU3 does not).\n>>>\n>>> Ideally, my goal would be to abstract the handling of the control and\n>>> all the state machine that decides if the manual or auto-computed\n>>> values should be used to an AEGCAlgorithm base class.\n>>>\n>>> Your series does that for the tuning file parsing already but does\n>>> that in the MeanLuminance method implementation, while it should or\n>>> could be common to all AEGC methods.\n>>>\n>>> One very interesting experiment could be starting with this and then\n>>> plumb-in AeEnable support in the IPU3 in example and move everything\n>>> common to a base class.\n>>>\n>>>> algorithm and additionally parses yaml tuning files to inform an IPA\n>>>> module's Agc algorithm about valid constraint and exposure modes and\n>>>> their associated bounds.\n>>>>\n>>>> Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n>>>> ---\n>>>>   src/ipa/libipa/agc.cpp    
 | 526 +++++++++++++++++++++++++++++++++++++\n>>>>   src/ipa/libipa/agc.h       |  82 ++++++\n>>>>   src/ipa/libipa/meson.build |   2 +\n>>>>   3 files changed, 610 insertions(+)\n>>>>   create mode 100644 src/ipa/libipa/agc.cpp\n>>>>   create mode 100644 src/ipa/libipa/agc.h\n>>>>\n>>>> diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n>>>> new file mode 100644\n>>>> index 00000000..af57a571\n>>>> --- /dev/null\n>>>> +++ b/src/ipa/libipa/agc.cpp\n>>>> @@ -0,0 +1,526 @@\n>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>>> +/*\n>>>> + * Copyright (C) 2024 Ideas on Board Oy\n>>>> + *\n>>>> + * agc.cpp - Base class for libipa-compliant AGC algorithms\n>> We could have different AGC algorithms that are all libipa-compliant.\n>> This is one particular AGC algorithm, right ?\n>>\n>>>> + */\n>>>> +\n>>>> +#include \"agc.h\"\n>>>> +\n>>>> +#include <cmath>\n>>>> +\n>>>> +#include <libcamera/base/log.h>\n>>>> +#include <libcamera/control_ids.h>\n>>>> +\n>>>> +#include \"exposure_mode_helper.h\"\n>>>> +\n>>>> +using namespace libcamera::controls;\n>>>> +\n>>>> +/**\n>>>> + * \\file agc.h\n>>>> + * \\brief Base class implementing mean luminance AEGC.\n>>> nit: not '.' at the end of briefs\n>>>\n>>>> + */\n>>>> +\n>>>> +namespace libcamera {\n>>>> +\n>>>> +using namespace std::literals::chrono_literals;\n>>>> +\n>>>> +LOG_DEFINE_CATEGORY(Agc)\n>>>> +\n>>>> +namespace ipa {\n>>>> +\n>>>> +/*\n>>>> + * Number of frames to wait before calculating stats on minimum exposure\n>> This doesn't seem to match what the value is used for.\n>>\n>>>> + * \\todo should this be a tunable value?\n>>> Does this depend on the ISP (comes from IPA), the sensor (comes from\n>>> tuning file) or... both ? 
:)\n>>>\n>>>> + */\n>>>> +static constexpr uint32_t kNumStartupFrames = 10;\n>>>> +\n>>>> +/*\n>>>> + * Default relative luminance target\n>>>> + *\n>>>> + * This value should be chosen so that when the camera points at a grey target,\n>>>> + * the resulting image brightness looks \"right\". Custom values can be passed\n>>>> + * as the relativeLuminanceTarget value in sensor tuning files.\n>>>> + */\n>>>> +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n>>>> +\n>>>> +/**\n>>>> + * \\struct MeanLuminanceAgc::AgcConstraint\n>>>> + * \\brief The boundaries and target for an AeConstraintMode constraint\n>>>> + *\n>>>> + * This structure describes an AeConstraintMode constraint for the purposes of\n>>>> + * this algorithm. The algorithm will apply the constraints by calculating the\n>>>> + * Histogram's inter-quantile mean between the given quantiles and ensure that\n>>>> + * the resulting value is the right side of the given target (as defined by the\n>>>> + * boundary and luminance target).\n>>>> + */\n>>> Here, in example.\n>>>\n>>> controls::AeConstraintMode and the supported values are defined as\n>>> (core|vendor) controls in control_ids_*.yaml\n>>>\n>>> The tuning file expresses the constraint modes using the Control\n>>> definition (I wonder if this has always been like this) but it\n>>> definitely ties the tuning file to the controls definition.\n>>>\n>>> Applications use controls::AeConstraintMode to select one of the\n>>> constrained modes to have the algorithm use it.\n>>>\n>>> In all of this, how much is part of the MeanLuminance implementation\n>>> and how much shared between possibly multiple implementations ?\n>>>\n>>>> +\n>>>> +/**\n>>>> + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n>>>> + * \\brief Specify whether the constraint defines a lower or upper bound\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n>>>> + * \\brief The constraint defines a lower bound\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::UPPER\n>>>> + * 
\\brief The constraint defines an upper bound\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::bound\n>>>> + * \\brief The type of constraint bound\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n>>>> + * \\brief The lower quantile to use for the constraint\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n>>>> + * \\brief The upper quantile to use for the constraint\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n>>>> + * \\brief The luminance target for the constraint\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\class MeanLuminanceAgc\n>>>> + * \\brief a mean-based auto-exposure algorithm\n>> s/a/A/\n>>\n>>>> + *\n>>>> + * This algorithm calculates a shutter time, analogue and digital gain such that\n>>>> + * the normalised mean luminance value of an image is driven towards a target,\n>>>> + * which itself is discovered from tuning data. The algorithm is a two-stage\n>>>> + * process:\n>> s/:/./\n>>\n>> or make the next two paragraphs bullet list entries.\n>>\n>>>> + *\n>>>> + * In the first stage, an initial gain value is derived by iteratively comparing\n>>>> + * the gain-adjusted mean luminance across an entire image against a target, and\n>>>> + * selecting a value which pushes it as closely as possible towards the target.\n>>>> + *\n>>>> + * In the second stage we calculate the gain required to drive the average of a\n>>>> + * section of a histogram to a target value, where the target and the boundaries\n>>>> + * of the section of the histogram used in the calculation are taken from the\n>>>> + * values defined for the currently configured AeConstraintMode within the\n>>>> + * tuning data. The gain from the first stage is then clamped to the gain from\n>>>> + * this stage.\n> Another thing we need to document is the requirements to use this\n> class. 
The IPU3 can provide the required histogram because we compute a\n> poor man's histogram from the grid of AWB stats, as the ImgU doesn't\n> give us a histogram. Platforms that can't provide a histogram at all\n> won't be able to use this. What precision do we need for the histogram\n> (number of bins, and number of pixels used for the calculation) for it\n> to be usable with this class ?\n\n\nI'm not sure how to go about that last question really; the precision that's necessary for the \nhistogram is itself dependent on the precision of the constraints that you want to set...which ought \nto be informed by the capabilities of the hardware that you're writing the tuning file for anyway. \nOr in other words, even a very small histogram could be used as long as you account for the \nresulting imprecision when you define the constraints that you want to adhere to...and I suspect \nthat would affect the quality of the algorithm.\n\n>\n>>>> + *\n>>>> + * The final gain is used to adjust the effective exposure value of the image,\n>>>> + * and that new exposure value divided into shutter time, analogue gain and\n>> s/divided/is divided/\n>>\n>>>> + * digital gain according to the selected AeExposureMode.\n>> There should be a mention here that this class handles tuning file\n>> parsing and expects a particular tuning data format. 
You can reference\n>> the function that parses the tuning data for detailed documentation\n>> about the format.\n>>\n>>>> + */\n>>>> +\n>>>> +MeanLuminanceAgc::MeanLuminanceAgc()\n>>>> +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n>>>> +{\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Parse the relative luminance target from the tuning data\n>>>> + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n>>>> + */\n>>>> +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n>>>> +{\n>>>> +\trelativeLuminanceTarget_ =\n>>>> +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n>>> How do you expect this to be computed in the tuning file ?\n>>>\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Parse an AeConstraintMode constraint from tuning data\n>>>> + * \\param[in] modeDict the YamlObject holding the constraint data\n>>>> + * \\param[in] id The constraint ID from AeConstraintModeEnum\n>>>> + */\n>>>> +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n>>>> +{\n>>>> +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n>>>> +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n>>>> +\t\t\tLOG(Agc, Warning)\n>>>> +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n>>>> +\t\t\tcontinue;\n>>>> +\t\t}\n>>>> +\n>>>> +\t\tunsigned int idx = static_cast<unsigned int>(boundName == \"upper\");\n>>>> +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n>>>> +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n>>>> +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n>>>> +\t\tdouble yTarget =\n>>>> +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n>>>> +\n>>>> +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n>>>> +\n>>>> +\t\tif (!constraintModes_.count(id))\n>>>> +\t\t\tconstraintModes_[id] = 
{};\n>>>> +\n>>>> +\t\tif (idx)\n>>>> +\t\t\tconstraintModes_[id].push_back(constraint);\n>>>> +\t\telse\n>>>> +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n>>>> +\t}\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Parse tuning data file to populate AeConstraintMode control\n>>>> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n>>>> + *\n>>>> + * The Agc algorithm's tuning data should contain a dictionary called\n>>>> + * AeConstraintMode containing per-mode setting dictionaries with the key being\n>>>> + * a value from \\ref controls::AeConstraintModeNameValueMap. Each mode dict may\n>>>> + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n>>>> + *\n>>>> + * \\code{.unparsed}\n>>>> + * algorithms:\n>>>> + *   - Agc:\n>>>> + *       AeConstraintMode:\n>>>> + *         ConstraintNormal:\n>>>> + *           lower:\n>>>> + *             qLo: 0.98\n>>>> + *             qHi: 1.0\n>>>> + *             yTarget: 0.5\n>>> Ok, so this ties the tuning file not just to the libcamera controls\n>>> definition, but to this specific implementation of the algorithm ?\n>> Tying the algorithm implementation with the tuning data format seems\n>> unavoidable. I'm slightly concerned about handling the YAML parsing here\n>> though, as I'm wondering if we can always guarantee that the file will\n>> not need to diverge somehow (possibly with additional data somewhere\n>> that would make the shared parser fail), but that's probably such a\n>> small risk that we can worry about it later.\n>>\n>>> Not that it was not expected, and I think it's fine, as using a\n>>> libcamera defined control value as 'index' makes sure the applications\n>>> will deal with the same interface, but this largely conflicts with the\n>>> idea to have shared parsing for all algorithms in a base class.\n>> The idea comes straight from the RPi implementation, which expects users\n>> to customize tuning files with new modes. 
Indexing into a non standard\n>> list of modes isn't an API I really like, and that's something I think\n>> we need to fix eventually.\n>>\n>>> Also, we made AeConstrainModes a 'core control' because at the time\n>>> when RPi first implemented AGC support there was no alternative to\n>>> that. Then the RPi implementation has been copied in all other\n>>> platforms and this is still fine as a 'core control'. This however\n>>> seems to be a tuning parameter for this specific algorithm\n>>> implementation, isn't it ?\n>> I should read your whole reply before writing anything :-)\n>>\n>>>> + *         ConstraintHighlight:\n>>>> + *           lower:\n>>>> + *             qLo: 0.98\n>>>> + *             qHi: 1.0\n>>>> + *             yTarget: 0.5\n>>>> + *           upper:\n>>>> + *             qLo: 0.98\n>>>> + *             qHi: 1.0\n>>>> + *             yTarget: 0.8\n>>>> + *\n>>>> + * \\endcode\n>>>> + *\n>>>> + * The parsed dictionaries are used to populate an array of available values for\n>>>> + * the AeConstraintMode control and stored for later use in the algorithm.\n>>>> + *\n>>>> + * \\return -EINVAL Where a defined constraint mode is invalid\n>>>> + * \\return 0 on success\n>>>> + */\n>>>> +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n>>>> +{\n>>>> +\tstd::vector<ControlValue> availableConstraintModes;\n>>>> +\n>>>> +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n>>>> +\tif (yamlConstraintModes.isDictionary()) {\n>>>> +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n>>>> +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n>>>> +\t\t\t    AeConstraintModeNameValueMap.end()) {\n>>>> +\t\t\t\tLOG(Agc, Warning)\n>>>> +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n>>>> +\t\t\t\tcontinue;\n>>>> +\t\t\t}\n>>>> +\n>>>> +\t\t\tif (!modeDict.isDictionary()) {\n>>>> +\t\t\t\tLOG(Agc, Error)\n>>>> +\t\t\t\t\t<< \"Invalid constraint mode 
'\" << modeName << \"'\";\n>>>> +\t\t\t\treturn -EINVAL;\n>>>> +\t\t\t}\n>>>> +\n>>>> +\t\t\tparseConstraint(modeDict,\n>>>> +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n>>>> +\t\t\tavailableConstraintModes.push_back(\n>>>> +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n>>>> +\t\t}\n>>>> +\t}\n>>>> +\n>>>> +\t/*\n>>>> +\t * If the tuning data file contains no constraints then we use the\n>>>> +\t * default constraint that the various Agc algorithms were adhering to\n>>>> +\t * anyway before centralisation.\n>>>> +\t */\n>>>> +\tif (constraintModes_.empty()) {\n>>>> +\t\tAgcConstraint constraint = {\n>>>> +\t\t\tAgcConstraint::Bound::LOWER,\n>>>> +\t\t\t0.98,\n>>>> +\t\t\t1.0,\n>>>> +\t\t\t0.5\n>>>> +\t\t};\n>>>> +\n>>>> +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n>>>> +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n>>>> +\t\t\tconstraint);\n>>>> +\t\tavailableConstraintModes.push_back(\n>>>> +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n>>>> +\t}\n>>>> +\n>>>> +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Parse tuning data file to populate AeExposureMode control\n>>>> + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n>>>> + *\n>>>> + * The Agc algorithm's tuning data should contain a dictionary called\n>>>> + * AeExposureMode containing per-mode setting dictionaries with the key being\n>>>> + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n>>>> + * contain an array of shutter times with the key \"shutter\" and an array of gain\n>>>> + * values with the key \"gain\", in this format:\n>>>> + *\n>>>> + * \\code{.unparsed}\n>>>> + * algorithms:\n>>>> + *   - Agc:\n>>>> + *       AeExposureMode:\n>>> Same reasoning as per the constraints, really.\n>>>\n>>> There's nothing bad here, if not me realizing our controls definition\n>>> is intimately tied with the algorithms implementation, so I wonder\n>>> again if even handling things like AeEnable in a common form makes\n>>> any sense..\n>>>\n>>> Not going to review the actual implementation now, as it comes from\n>>> the existing ones...\n>>>\n>>>\n>>>> + *         ExposureNormal:\n>>>> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n>>>> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n>>>> + *         ExposureShort:\n>>>> + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n>>>> + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n>>>> + *\n>>>> + * \\endcode\n>>>> + *\n>>>> + * The parsed dictionaries are used to populate an array of available values for\n>>>> + * the AeExposureMode control and to create ExposureModeHelpers\n>>>> + *\n>>>> + * \\return -EINVAL Where a defined exposure mode is invalid\n>>>> + * \\return 0 on success\n>>>> + */\n>>>> +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n>>>> +{\n>>>> +\tstd::vector<ControlValue> availableExposureModes;\n>>>> +\tint ret;\n>>>> +\n>>>> +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n>>>> +\tif (yamlExposureModes.isDictionary()) {\n>>>> +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n>>>> +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n>>>> +\t\t\t    AeExposureModeNameValueMap.end()) {\n>>>> +\t\t\t\tLOG(Agc, Warning)\n>>>> +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n>>>> +\t\t\t\tcontinue;\n>>>> +\t\t\t}\n>>>> 
+\t\t\tif (!modeValues.isDictionary()) {\n>>>> +\t\t\t\tLOG(Agc, Error)\n>>>> +\t\t\t\t\t<< \"Invalid exposure mode '\" << modeName << \"'\";\n>>>> +\t\t\t\treturn -EINVAL;\n>>>> +\t\t\t}\n>>>> +\n>>>> +\t\t\tstd::vector<uint32_t> shutters =\n>>>> +\t\t\t\tmodeValues[\"shutter\"].getList<uint32_t>().value_or(std::vector<uint32_t>{});\n>>>> +\t\t\tstd::vector<double> gains =\n>>>> +\t\t\t\tmodeValues[\"gain\"].getList<double>().value_or(std::vector<double>{});\n>>>> +\n>>>> +\t\t\tstd::vector<utils::Duration> shutterDurations;\n>>>> +\t\t\tstd::transform(shutters.begin(), shutters.end(),\n>>>> +\t\t\t\tstd::back_inserter(shutterDurations),\n>>>> +\t\t\t\t[](uint32_t time) { return std::chrono::microseconds(time); });\n>>>> +\n>>>> +\t\t\tstd::shared_ptr<ExposureModeHelper> helper =\n>>>> +\t\t\t\tstd::make_shared<ExposureModeHelper>();\n>>>> +\t\t\tif ((ret = helper->init(shutterDurations, gains)) < 0) {\n>>>> +\t\t\t\tLOG(Agc, Error)\n>>>> +\t\t\t\t\t<< \"Failed to parse exposure mode '\" << modeName << \"'\";\n>>>> +\t\t\t\treturn ret;\n>>>> +\t\t\t}\n>>>> +\n>>>> +\t\t\texposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;\n>>>> +\t\t\tavailableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));\n>>>> +\t\t}\n>>>> +\t}\n>>>> +\n>>>> +\t/*\n>>>> +\t * If we don't have any exposure modes in the tuning data we create an\n>>>> +\t * ExposureModeHelper using empty shutter time and gain arrays, which\n>>>> +\t * will then internally simply drive the shutter as high as possible\n>>>> +\t * before touching gain\n>>>> +\t */\n>>>> +\tif (availableExposureModes.empty()) {\n>>>> +\t\tint32_t exposureModeId = AeExposureModeNameValueMap.at(\"ExposureNormal\");\n>>>> +\t\tstd::vector<utils::Duration> shutterDurations = {};\n>>>> +\t\tstd::vector<double> gains = {};\n>>>> +\n>>>> +\t\tstd::shared_ptr<ExposureModeHelper> helper =\n>>>> +\t\t\tstd::make_shared<ExposureModeHelper>();\n>>>> +\t\tif ((ret = helper->init(shutterDurations, gains)) < 0) 
{\n>>>> +\t\t\tLOG(Agc, Error)\n>>>> +\t\t\t\t<< \"Failed to create default ExposureModeHelper\";\n>>>> +\t\t\treturn ret;\n>>>> +\t\t}\n>>>> +\n>>>> +\t\texposureModeHelpers_[exposureModeId] = helper;\n>>>> +\t\tavailableExposureModes.push_back(exposureModeId);\n>>>> +\t}\n>>>> +\n>>>> +\tcontrols_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);\n>>>> +\n>>>> +\treturn 0;\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\fn MeanLuminanceAgc::constraintModes()\n>>>> + * \\brief Get the constraint modes that have been parsed from tuning data\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\fn MeanLuminanceAgc::exposureModeHelpers()\n>>>> + * \\brief Get the ExposureModeHelpers that have been parsed from tuning data\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\fn MeanLuminanceAgc::controls()\n>>>> + * \\brief Get the controls that have been generated after parsing tuning data\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\fn MeanLuminanceAgc::estimateLuminance(const double gain)\n>>>> + * \\brief Estimate the luminance of an image, adjusted by a given gain\n>>>> + * \\param[in] gain The gain with which to adjust the luminance estimate\n>>>> + *\n>>>> + * This function is a pure virtual function because estimation of luminance is a\n>>>> + * hardware-specific operation, which depends wholly on the format of the stats\n>>>> + * that are delivered to libcamera from the ISP. 
Derived classes must implement\n>>>> + * an overriding function that calculates the normalised mean luminance value\n>>>> + * across the entire image.\n>>>> + *\n>>>> + * \\return The normalised relative luminance of the image\n>>>> + */\n>>>> +\n>>>> +/**\n>>>> + * \\brief Estimate the initial gain needed to achieve a relative luminance\n>>>> + *        target\n>>> nit: we don't usually indent in briefs, or in general when breaking\n>>> lines in doxygen as far as I can tell\n>>>\n>>> Not going\n>>>\n>>>> + *\n>>>> + * To account for non-linearity caused by saturation, the value needs to be\n>>>> + * estimated in an iterative process, as multiplying by a gain will not increase\n>>>> + * the relative luminance by the same factor if some image regions are saturated\n>>>> + *\n>>>> + * \\return The calculated initial gain\n>>>> + */\n>>>> +double MeanLuminanceAgc::estimateInitialGain()\n>>>> +{\n>>>> +\tdouble yTarget = relativeLuminanceTarget_;\n>>>> +\tdouble yGain = 1.0;\n>>>> +\n>>>> +\tfor (unsigned int i = 0; i < 8; i++) {\n>>>> +\t\tdouble yValue = estimateLuminance(yGain);\n>>>> +\t\tdouble extra_gain = std::min(10.0, yTarget / (yValue + .001));\n>>>> +\n>>>> +\t\tyGain *= extra_gain;\n>>>> +\t\tLOG(Agc, Debug) << \"Y value: \" << yValue\n>>>> +\t\t\t\t<< \", Y target: \" << yTarget\n>>>> +\t\t\t\t<< \", gives gain \" << yGain;\n>>>> +\n>>>> +\t\tif (utils::abs_diff(extra_gain, 1.0) < 0.01)\n>>>> +\t\t\tbreak;\n>>>> +\t}\n>>>> +\n>>>> +\treturn yGain;\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Clamp gain within the bounds of a defined constraint\n>>>> + * \\param[in] constraintModeIndex The index of the constraint to adhere to\n>>>> + * \\param[in] hist A histogram over which to calculate inter-quantile means\n>>>> + * \\param[in] gain The gain to clamp\n>>>> + *\n>>>> + * \\return The gain clamped within the constraint bounds\n>>>> + */\n>>>> +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,\n>>>> +\t\t\t\t\t     const Histogram 
&hist,\n>>>> +\t\t\t\t\t     double gain)\n>>>> +{\n>>>> +\tstd::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];\n>>>> +\tfor (const AgcConstraint &constraint : constraints) {\n>>>> +\t\tdouble newGain = constraint.yTarget * hist.bins() /\n>>>> +\t\t\t\t hist.interQuantileMean(constraint.qLo, constraint.qHi);\n>>>> +\n>>>> +\t\tif (constraint.bound == AgcConstraint::Bound::LOWER &&\n>>>> +\t\t    newGain > gain)\n>>>> +\t\t\tgain = newGain;\n>>>> +\n>>>> +\t\tif (constraint.bound == AgcConstraint::Bound::UPPER &&\n>>>> +\t\t    newGain < gain)\n>>>> +\t\t\tgain = newGain;\n>>>> +\t}\n>>>> +\n>>>> +\treturn gain;\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Apply a filter on the exposure value to limit the speed of changes\n>>>> + * \\param[in] exposureValue The target exposure from the AGC algorithm\n>>>> + *\n>>>> + * The speed of the filter is adaptive, and will produce the target quicker\n>>>> + * during startup, or when the target exposure is within 20% of the most recent\n>>>> + * filter output.\n>>>> + *\n>>>> + * \\return The filtered exposure\n>>>> + */\n>>>> +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)\n>>>> +{\n>>>> +\tdouble speed = 0.2;\n>>>> +\n>>>> +\t/* Adapt instantly if we are in startup phase. 
*/\n>>>> +\tif (frameCount_ < kNumStartupFrames)\n>>>> +\t\tspeed = 1.0;\n>>>> +\n>>>> +\t/*\n>>>> +\t * If we are close to the desired result, go faster to avoid making\n>>>> +\t * multiple micro-adjustments.\n>>>> +\t * \\todo Make this customisable?\n>>>> +\t */\n>>>> +\tif (filteredExposure_ < 1.2 * exposureValue &&\n>>>> +\t    filteredExposure_ > 0.8 * exposureValue)\n>>>> +\t\tspeed = sqrt(speed);\n>>>> +\n>>>> +\tfilteredExposure_ = speed * exposureValue +\n>>>> +\t\t\t    filteredExposure_ * (1.0 - speed);\n>>>> +\n>>>> +\treturn filteredExposure_;\n>>>> +}\n>>>> +\n>>>> +/**\n>>>> + * \\brief Calculate the new exposure value\n>>>> + * \\param[in] constraintModeIndex The index of the current constraint mode\n>>>> + * \\param[in] exposureModeIndex The index of the current exposure mode\n>>>> + * \\param[in] yHist A Histogram from the ISP statistics to use in constraining\n>>>> + *\t      the calculated gain\n>>>> + * \\param[in] effectiveExposureValue The EV applied to the frame from which the\n>>>> + *\t      statistics in use derive\n>>>> + *\n>>>> + * Calculate a new exposure value to try to obtain the target. 
The calculated\n>>>> + * exposure value is filtered to prevent rapid changes from frame to frame, and\n>>>> + * divided into shutter time, analogue and digital gain.\n>>>> + *\n>>>> + * \\return Tuple of shutter time, analogue gain, and digital gain\n>>>> + */\n>>>> +std::tuple<utils::Duration, double, double>\n>>>> +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,\n>>>> +\t\t\t\t uint32_t exposureModeIndex,\n>>>> +\t\t\t\t const Histogram &yHist,\n>>>> +\t\t\t\t utils::Duration effectiveExposureValue)\n>>>> +{\n>>>> +\t/*\n>>>> +\t * The pipeline handler should validate that we have received an allowed\n>>>> +\t * value for AeExposureMode.\n>>>> +\t */\n>>>> +\tstd::shared_ptr<ExposureModeHelper> exposureModeHelper =\n>>>> +\t\texposureModeHelpers_.at(exposureModeIndex);\n>>>> +\n>>>> +\tdouble gain = estimateInitialGain();\n>>>> +\tgain = constraintClampGain(constraintModeIndex, yHist, gain);\n>>>> +\n>>>> +\t/*\n>>>> +\t * We don't check whether we're already close to the target, because\n>>>> +\t * even if the effective exposure value is the same as the last frame's\n>>>> +\t * we could have switched to an exposure mode that would require a new\n>>>> +\t * pass through the splitExposure() function.\n>>>> +\t */\n>>>> +\n>>>> +\tutils::Duration newExposureValue = effectiveExposureValue * gain;\n>>>> +\tutils::Duration maxTotalExposure = exposureModeHelper->maxShutter()\n>>>> +\t\t\t\t\t   * exposureModeHelper->maxGain();\n>>>> +\tnewExposureValue = std::min(newExposureValue, maxTotalExposure);\n>>>> +\n>>>> +\t/*\n>>>> +\t * We filter the exposure value to make sure changes are not too jarring\n>>>> +\t * from frame to frame.\n>>>> +\t */\n>>>> +\tnewExposureValue = filterExposure(newExposureValue);\n>>>> +\n>>>> +\tframeCount_++;\n>>>> +\treturn exposureModeHelper->splitExposure(newExposureValue);\n>>>> +}\n>>>> +\n>>>> +}; /* namespace ipa */\n>>>> +\n>>>> +}; /* namespace libcamera */\n>>>> diff --git a/src/ipa/libipa/agc.h 
b/src/ipa/libipa/agc.h\n>>>> new file mode 100644\n>>>> index 00000000..902a359a\n>>>> --- /dev/null\n>>>> +++ b/src/ipa/libipa/agc.h\n>>>> @@ -0,0 +1,82 @@\n>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>>> +/*\n>>>> + * Copyright (C) 2024 Ideas on Board Oy\n>>>> + *\n>>>> + agc.h - Base class for libipa-compliant AGC algorithms\n>>>> + */\n>>>> +\n>>>> +#pragma once\n>>>> +\n>>>> +#include <tuple>\n>>>> +#include <vector>\n>>>> +\n>>>> +#include <libcamera/controls.h>\n>>>> +\n>>>> +#include \"libcamera/internal/yaml_parser.h\"\n>>>> +\n>>>> +#include \"exposure_mode_helper.h\"\n>>>> +#include \"histogram.h\"\n>>>> +\n>>>> +namespace libcamera {\n>>>> +\n>>>> +namespace ipa {\n>>>> +\n>>>> +class MeanLuminanceAgc\n>>>> +{\n>>>> +public:\n>>>> +\tMeanLuminanceAgc();\n>>>> +\tvirtual ~MeanLuminanceAgc() = default;\n>> No need to make this inline, define the destructor in the .cpp with\n>>\n>> MeanLumimanceAgc::~MeanLuminanceAgc() = default;\n>>\n>>>> +\n>>>> +\tstruct AgcConstraint {\n>>>> +\t\tenum class Bound {\n>>>> +\t\t\tLOWER = 0,\n>>>> +\t\t\tUPPER = 1\n>> Wrong coding style.\n>>\n>>>> +\t\t};\n>>>> +\t\tBound bound;\n>>>> +\t\tdouble qLo;\n>>>> +\t\tdouble qHi;\n>>>> +\t\tdouble yTarget;\n>>>> +\t};\n>>>> +\n>>>> +\tvoid parseRelativeLuminanceTarget(const YamlObject &tuningData);\n>>>> +\tvoid parseConstraint(const YamlObject &modeDict, int32_t id);\n>> This function doesn't seem to be called from the derived classes, you\n>> can make it private. 
Same for other functions below.\n>>\n>>>> +\tint parseConstraintModes(const YamlObject &tuningData);\n>>>> +\tint parseExposureModes(const YamlObject &tuningData);\n>> Is there a reason to split the parsing in multiple functions, given that\n>> they are called the same from from both the rkisp1 and ipu3\n>> implementations ?\n>>\n>>>> +\n>>>> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes()\n>>>> +\t{\n>>>> +\t\treturn constraintModes_;\n>>>> +\t}\n>> This seems unused, so the AgcConstraint structure can be made private.\n>>\n>>>> +\n>>>> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()\n>>>> +\t{\n>>>> +\t\treturn exposureModeHelpers_;\n>>>> +\t}\n>> Can we avoid exposing this too ?\n>>\n>> I never expected there would be multiple helpers when reading the\n>> documentation of the ExposureModeHelper class. That may be due to\n>> missing documentation.\n>>\n>>>> +\n>>>> +\tControlInfoMap::Map controls()\n>>>> +\t{\n>>>> +\t\treturn controls_;\n>>>> +\t}\n>> Hmmm... 
We don't have precendents for pushing ControlInfo to algorithms.\n>> Are we sure we want to go that way ?\n>>\n>>>> +\n>>>> +\tvirtual double estimateLuminance(const double gain) = 0;\n>>>> +\tdouble estimateInitialGain();\n>>>> +\tdouble constraintClampGain(uint32_t constraintModeIndex,\n>>>> +\t\t\t\t   const Histogram &hist,\n>>>> +\t\t\t\t   double gain);\n>>>> +\tutils::Duration filterExposure(utils::Duration exposureValue);\n>>>> +\tstd::tuple<utils::Duration, double, double>\n>>>> +\tcalculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,\n>>>> +\t\t       const Histogram &yHist, utils::Duration effectiveExposureValue);\n>> Missing blank line.\n>>\n>>>> +private:\n>>>> +\tuint64_t frameCount_;\n>>>> +\tutils::Duration filteredExposure_;\n>>>> +\tdouble relativeLuminanceTarget_;\n>>>> +\n>>>> +\tstd::map<int32_t, std::vector<AgcConstraint>> constraintModes_;\n>>>> +\tstd::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;\n>>>> +\tControlInfoMap::Map controls_;\n>>>> +};\n>>>> +\n>>>> +}; /* namespace ipa */\n>>>> +\n>>>> +}; /* namespace libcamera */\n>>>> diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build\n>>>> index 37fbd177..31cc8d70 100644\n>>>> --- a/src/ipa/libipa/meson.build\n>>>> +++ b/src/ipa/libipa/meson.build\n>>>> @@ -1,6 +1,7 @@\n>>>>   # SPDX-License-Identifier: CC0-1.0\n>>>>\n>>>>   libipa_headers = files([\n>>>> +    'agc.h',\n>>>>       'algorithm.h',\n>>>>       'camera_sensor_helper.h',\n>>>>       'exposure_mode_helper.h',\n>>>> @@ -10,6 +11,7 @@ libipa_headers = files([\n>>>>   ])\n>>>>\n>>>>   libipa_sources = files([\n>>>> +    'agc.cpp',\n>>>>       'algorithm.cpp',\n>>>>       'camera_sensor_helper.cpp',\n>>>>       'exposure_mode_helper.cpp',\n>> -- \n>> Regards,\n>>\n>> Laurent 
Pinchart","headers":{"Message-ID":"<855c8500-16bb-4c17-814e-8ffe0dd25c5a@ideasonboard.com>","Date":"Mon, 8 Apr 2024 12:09:21 +0100","Subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>,\n\tJacopo Mondi <jacopo.mondi@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","From":"Dan Scally <dan.scally@ideasonboard.com>","References":"<20240322131451.3092931-1-dan.scally@ideasonboard.com>\n\t<20240322131451.3092931-5-dan.scally@ideasonboard.com>\n\t<l3xzffnlkm7npaqn67qc52move4hpurhrfpfezdbomqtlrelp2@4eccse56yvlq>\n\t<20240406010752.GL12507@pendragon.ideasonboard.com>\n\t<20240406012752.GA13085@pendragon.ideasonboard.com>","In-Reply-To":"<20240406012752.GA13085@pendragon.ideasonboard.com>","Content-Type":"text/plain; charset=UTF-8; format=flowed"}},{"id":29184,"web_url":"https://patchwork.libcamera.org/comment/29184/","msgid":"<ZhUMT42GutkJ_gYt@pyrite.rasen.tech>","date":"2024-04-09T09:37:19","subject":"Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class","submitter":{"id":17,"url":"https://patchwork.libcamera.org/api/people/17/","name":"Paul Elder","email":"paul.elder@ideasonboard.com"},"content":"On Sat, Apr 06, 2024 at 04:07:52AM +0300, Laurent Pinchart wrote:\n> Hello,\n> \n> On Mon, Mar 25, 2024 at 09:16:00PM +0100, Jacopo Mondi wrote:\n> > On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:\n> > > The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in\n> > > very large part; following the Rpi IPA's algorithm in spirit with a\n> > > few tunable values in that IPA being hardcoded in the libipa ones.\n> > > Add a new base class for MeanLuminanceAgc which implements the same\n> > \n> > nit: I would rather call this one AgcMeanLuminance.\n> \n> And rename the files accordingly ?\n> \n> > One other note, not sure if applicable here, from the experience of\n> > trying to upstream an auto-focus algorithm for RkISP1. 
We had there a\n> > base class to define the algorithm interface, one derived class for\n> > the common calculation and one platform-specific part for statistics\n> > collection and IPA module interfacing.\n> > \n> > The base class was there mostly to handle the algorithm state machine\n> > by handling the controls that influence the algorithm behaviour. In\n> > case of AEGC I can think, in example, about handling the switch\n> > between enable/disable of auto-mode (and consequentially handling a\n> > manually set ExposureTime and AnalogueGain), switching between\n> > different ExposureModes etc\n> > \n> > This is the last attempt for AF\n> > https://patchwork.libcamera.org/patch/18510/\n> > \n> > and I'm wondering if it would be desirable to abstract away from the\n> > MeanLuminance part the part that is tightly coupled with the\n> > libcamera's controls definition.\n> > \n> > Let's make a concrete example: look how the rkisp1 and the ipu3 agc\n> > implementations handle AeEnabled (or better, look at how the rkisp1\n> > does that and the IPU3 does not).\n> > \n> > Ideally, my goal would be to abstract the handling of the control and\n> > all the state machine that decides if the manual or auto-computed\n> > values should be used to an AEGCAlgorithm base class.\n> > \n> > Your series does that for the tuning file parsing already but does\n> > that in the MeanLuminance method implementation, while it should or\n> > could be common to all AEGC methods.\n> > \n> > One very interesting experiment could be starting with this and then\n> > plumb-in AeEnable support in the IPU3 in example and move everything\n> > common to a base class.\n> > \n> > > algorithm and additionally parses yaml tuning files to inform an IPA\n> > > module's Agc algorithm about valid constraint and exposure modes and\n> > > their associated bounds.\n> > >\n> > > Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>\n> > > ---\n> > >  src/ipa/libipa/agc.cpp     | 526 
+++++++++++++++++++++++++++++++++++++\n> > >  src/ipa/libipa/agc.h       |  82 ++++++\n> > >  src/ipa/libipa/meson.build |   2 +\n> > >  3 files changed, 610 insertions(+)\n> > >  create mode 100644 src/ipa/libipa/agc.cpp\n> > >  create mode 100644 src/ipa/libipa/agc.h\n> > >\n> > > diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp\n> > > new file mode 100644\n> > > index 00000000..af57a571\n> > > --- /dev/null\n> > > +++ b/src/ipa/libipa/agc.cpp\n> > > @@ -0,0 +1,526 @@\n> > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> > > +/*\n> > > + * Copyright (C) 2024 Ideas on Board Oy\n> > > + *\n> > > + * agc.cpp - Base class for libipa-compliant AGC algorithms\n> \n> We could have different AGC algorithms that are all libipa-compliant.\n> This is one particular AGC algorithm, right ?\n> \n> > > + */\n> > > +\n> > > +#include \"agc.h\"\n> > > +\n> > > +#include <cmath>\n> > > +\n> > > +#include <libcamera/base/log.h>\n> > > +#include <libcamera/control_ids.h>\n> > > +\n> > > +#include \"exposure_mode_helper.h\"\n> > > +\n> > > +using namespace libcamera::controls;\n> > > +\n> > > +/**\n> > > + * \\file agc.h\n> > > + * \\brief Base class implementing mean luminance AEGC.\n> > \n> > nit: not '.' at the end of briefs\n> > \n> > > + */\n> > > +\n> > > +namespace libcamera {\n> > > +\n> > > +using namespace std::literals::chrono_literals;\n> > > +\n> > > +LOG_DEFINE_CATEGORY(Agc)\n> > > +\n> > > +namespace ipa {\n> > > +\n> > > +/*\n> > > + * Number of frames to wait before calculating stats on minimum exposure\n> \n> This doesn't seem to match what the value is used for.\n> \n> > > + * \\todo should this be a tunable value?\n> > \n> > Does this depend on the ISP (comes from IPA), the sensor (comes from\n> > tuning file) or... both ? 
:)\n> > \n> > > + */\n> > > +static constexpr uint32_t kNumStartupFrames = 10;\n> > > +\n> > > +/*\n> > > + * Default relative luminance target\n> > > + *\n> > > + * This value should be chosen so that when the camera points at a grey target,\n> > > + * the resulting image brightness looks \"right\". Custom values can be passed\n> > > + * as the relativeLuminanceTarget value in sensor tuning files.\n> > > + */\n> > > +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;\n> > > +\n> > > +/**\n> > > + * \\struct MeanLuminanceAgc::AgcConstraint\n> > > + * \\brief The boundaries and target for an AeConstraintMode constraint\n> > > + *\n> > > + * This structure describes an AeConstraintMode constraint for the purposes of\n> > > + * this algorithm. The algorithm will apply the constraints by calculating the\n> > > + * Histogram's inter-quantile mean between the given quantiles and ensure that\n> > > + * the resulting value is the right side of the given target (as defined by the\n> > > + * boundary and luminance target).\n> > > + */\n> > \n> > Here, in example.\n> > \n> > controls::AeConstraintMode and the supported values are defined as\n> > (core|vendor) controls in control_ids_*.yaml\n> > \n> > The tuning file expresses the constraint modes using the Control\n> > definition (I wonder if this has always been like this) but it\n> > definitely ties the tuning file to the controls definition.\n> > \n> > Applications use controls::AeConstraintMode to select one of the\n> > constrained modes to have the algorithm use it.\n> > \n> > In all of this, how much is part of the MeanLuminance implementation\n> > and how much shared between possibly multiple implementations ?\n> > \n> > > +\n> > > +/**\n> > > + * \\enum MeanLuminanceAgc::AgcConstraint::Bound\n> > > + * \\brief Specify whether the constraint defines a lower or upper bound\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::LOWER\n> > > + * \\brief The constraint defines a lower bound\n> > > + * \\var 
MeanLuminanceAgc::AgcConstraint::UPPER\n> > > + * \\brief The constraint defines an upper bound\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::bound\n> > > + * \\brief The type of constraint bound\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::qLo\n> > > + * \\brief The lower quantile to use for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::qHi\n> > > + * \\brief The upper quantile to use for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\var MeanLuminanceAgc::AgcConstraint::yTarget\n> > > + * \\brief The luminance target for the constraint\n> > > + */\n> > > +\n> > > +/**\n> > > + * \\class MeanLuminanceAgc\n> > > + * \\brief a mean-based auto-exposure algorithm\n> \n> s/a/A/\n> \n> > > + *\n> > > + * This algorithm calculates a shutter time, analogue and digital gain such that\n> > > + * the normalised mean luminance value of an image is driven towards a target,\n> > > + * which itself is discovered from tuning data. The algorithm is a two-stage\n> > > + * process:\n> \n> s/:/./\n> \n> or make the next two paragraph bullet list entries.\n> \n> > > + *\n> > > + * In the first stage, an initial gain value is derived by iteratively comparing\n> > > + * the gain-adjusted mean luminance across an entire image against a target, and\n> > > + * selecting a value which pushes it as closely as possible towards the target.\n> > > + *\n> > > + * In the second stage we calculate the gain required to drive the average of a\n> > > + * section of a histogram to a target value, where the target and the boundaries\n> > > + * of the section of the histogram used in the calculation are taken from the\n> > > + * values defined for the currently configured AeConstraintMode within the\n> > > + * tuning data. 
The gain from the first stage is then clamped to the gain from\n> > > + * this stage.\n> > > + *\n> > > + * The final gain is used to adjust the effective exposure value of the image,\n> > > + * and that new exposure value divided into shutter time, analogue gain and\n> \n> s/divided/is divided/\n> \n> > > + * digital gain according to the selected AeExposureMode.\n> \n> There should be a mention here that this class handles tuning file\n> parsing and expects a particular tuning data format. You can reference\n> the function that parses the tuning data for detailed documentation\n> about the format.\n> \n> > > + */\n> > > +\n> > > +MeanLuminanceAgc::MeanLuminanceAgc()\n> > > +\t: frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)\n> > > +{\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse the relative luminance target from the tuning data\n> > > + * \\param[in] tuningData The YamlObject holding the algorithm's tuning data\n> > > + */\n> > > +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)\n> > > +{\n> > > +\trelativeLuminanceTarget_ =\n> > > +\t\ttuningData[\"relativeLuminanceTarget\"].get<double>(kDefaultRelativeLuminanceTarget);\n> > \n> > How do you expect this to be computed in the tuning file ?\n> > \n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse an AeConstraintMode constraint from tuning data\n> > > + * \\param[in] modeDict the YamlObject holding the constraint data\n> > > + * \\param[in] id The constraint ID from AeConstraintModeEnum\n> > > + */\n> > > +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)\n> > > +{\n> > > +\tfor (const auto &[boundName, content] : modeDict.asDict()) {\n> > > +\t\tif (boundName != \"upper\" && boundName != \"lower\") {\n> > > +\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t<< \"Ignoring unknown constraint bound '\" << boundName << \"'\";\n> > > +\t\t\tcontinue;\n> > > +\t\t}\n> > > +\n> > > +\t\tunsigned int idx = static_cast<unsigned 
int>(boundName == \"upper\");\n> > > +\t\tAgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);\n> > > +\t\tdouble qLo = content[\"qLo\"].get<double>().value_or(0.98);\n> > > +\t\tdouble qHi = content[\"qHi\"].get<double>().value_or(1.0);\n> > > +\t\tdouble yTarget =\n> > > +\t\t\tcontent[\"yTarget\"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);\n\nSo this should really be a piecewise linear function and not a scalar.\nThat's why I had it parse as a vector from the get-go so that we don't\nrun into compatibility issues later when we do add it.\n\nI was going to fix it on top in \"ipa: libipa: agc: Change luminance\ntarget to piecewise linear function\" [1] but clearly I forgot and only\ndid it for the main luminance target. Up to you if you want to fix it in\nv2 or if you want me to do it on top in my v2.\n\n[1] https://patchwork.libcamera.org/patch/19854/\n\n> > > +\n> > > +\t\tAgcConstraint constraint = { bound, qLo, qHi, yTarget };\n> > > +\n> > > +\t\tif (!constraintModes_.count(id))\n> > > +\t\t\tconstraintModes_[id] = {};\n> > > +\n> > > +\t\tif (idx)\n> > > +\t\t\tconstraintModes_[id].push_back(constraint);\n> > > +\t\telse\n> > > +\t\t\tconstraintModes_[id].insert(constraintModes_[id].begin(), constraint);\n> > > +\t}\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse tuning data file to populate AeConstraintMode control\n> > > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > > + *\n> > > + * The Agc algorithm's tuning data should contain a dictionary called\n> > > + * AeConstraintMode containing per-mode setting dictionaries with the key being\n> > > + * a value from \\ref controls::AeConstraintModeNameValueMap. 
Each mode dict may\n> > > + * contain either a \"lower\" or \"upper\" key, or both, in this format:\n> > > + *\n> > > + * \\code{.unparsed}\n> > > + * algorithms:\n> > > + *   - Agc:\n> > > + *       AeConstraintMode:\n> > > + *         ConstraintNormal:\n> > > + *           lower:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.5\n> > \n> > Ok, so this ties the tuning file not just to the libcamera controls\n> > definition, but to this specific implementation of the algorithm ?\n> \n> Tying the algorithm implementation with the tuning data format seems\n> unavoidable. I'm slightly concerned about handling the YAML parsing here\n> though, as I'm wondering if we can always guarantee that the file will\n> not need to diverge somehow (possibly with additional data somewhere\n> that would make the shared parser fail), but that's probably such a\n> small risk that we can worry about it later.\n> \n> > Not that it was not expected, and I think it's fine, as using a\n> > libcamera defined control value as 'index' makes sure the applications\n> > will deal with the same interface, but this largely conflicts with the\n> > idea to have shared parsing for all algorithms in a base class.\n> \n> The idea comes straight from the RPi implementation, which expect users\n> to customize tuning files with new modes. Indexing into a non standard\n> list of modes isn't an API I really like, and that's something I think\n> we need to fix eventually.\n> \n> > Also, we made AeConstrainModes a 'core control' because at the time\n> > when RPi first implemented AGC support there was no alternative to\n> > that. Then the RPi implementation has been copied in all other\n> > platforms and this is still fine as a 'core control'. 
This however\n> > seems to be a tuning parameter for this specific algorithm\n> > implementation, isn't it ?\n> \n> I should read your whole reply before writing anything :-)\n> \n> > > + *         ConstraintHighlight:\n> > > + *           lower:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.5\n> > > + *           upper:\n> > > + *             qLo: 0.98\n> > > + *             qHi: 1.0\n> > > + *             yTarget: 0.8\n> > > + *\n> > > + * \\endcode\n> > > + *\n> > > + * The parsed dictionaries are used to populate an array of available values for\n> > > + * the AeConstraintMode control and stored for later use in the algorithm.\n> > > + *\n> > > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > > + * \\return 0 on success\n> > > + */\n> > > +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)\n> > > +{\n> > > +\tstd::vector<ControlValue> availableConstraintModes;\n> > > +\n> > > +\tconst YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];\n> > > +\tif (yamlConstraintModes.isDictionary()) {\n> > > +\t\tfor (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {\n> > > +\t\t\tif (AeConstraintModeNameValueMap.find(modeName) ==\n> > > +\t\t\t    AeConstraintModeNameValueMap.end()) {\n> > > +\t\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t\t<< \"Skipping unknown constraint mode '\" << modeName << \"'\";\n> > > +\t\t\t\tcontinue;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tif (!modeDict.isDictionary()) {\n> > > +\t\t\t\tLOG(Agc, Error)\n> > > +\t\t\t\t\t<< \"Invalid constraint mode '\" << modeName << \"'\";\n> > > +\t\t\t\treturn -EINVAL;\n> > > +\t\t\t}\n> > > +\n> > > +\t\t\tparseConstraint(modeDict,\n> > > +\t\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > > +\t\t\tavailableConstraintModes.push_back(\n> > > +\t\t\t\tAeConstraintModeNameValueMap.at(modeName));\n> > > +\t\t}\n> > > +\t}\n> > > +\n> > > +\t/*\n> > > +\t * If the tuning 
data file contains no constraints then we use the\n> > > +\t * default constraint that the various Agc algorithms were adhering to\n> > > +\t * anyway before centralisation.\n> > > +\t */\n> > > +\tif (constraintModes_.empty()) {\n> > > +\t\tAgcConstraint constraint = {\n> > > +\t\t\tAgcConstraint::Bound::LOWER,\n> > > +\t\t\t0.98,\n> > > +\t\t\t1.0,\n> > > +\t\t\t0.5\n> > > +\t\t};\n> > > +\n> > > +\t\tconstraintModes_[controls::ConstraintNormal].insert(\n> > > +\t\t\tconstraintModes_[controls::ConstraintNormal].begin(),\n> > > +\t\t\tconstraint);\n> > > +\t\tavailableConstraintModes.push_back(\n> > > +\t\t\tAeConstraintModeNameValueMap.at(\"ConstraintNormal\"));\n> > > +\t}\n> > > +\n> > > +\tcontrols_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);\n> > > +\n> > > +\treturn 0;\n> > > +}\n> > > +\n> > > +/**\n> > > + * \\brief Parse tuning data file to populate AeExposureMode control\n> > > + * \\param[in] tuningData the YamlObject representing the tuning data for Agc\n> > > + *\n> > > + * The Agc algorithm's tuning data should contain a dictionary called\n> > > + * AeExposureMode containing per-mode setting dictionaries with the key being\n> > > + * a value from \\ref controls::AeExposureModeNameValueMap. 
Each mode dict should\n> > > + * contain an array of shutter times with the key \"shutter\" and an array of gain\n> > > + * values with the key \"gain\", in this format:\n> > > + *\n> > > + * \\code{.unparsed}\n> > > + * algorithms:\n> > > + *   - Agc:\n> > > + *       AeExposureMode:\n> > \n> > Same reasoning as per the constraints, really.\n> > \n> > There's nothing bad here, if not me realizing our controls definition\n> > is intimately tied with the algorithms implementation, so I wonder\n> > again if even handling things like AeEnable in a common form makes\n> > any sense..\n> > \n> > Not going to review the actual implementation now, as it comes from\n> > the existing ones...\n> > \n> > \n> > > + *         ExposureNormal:\n> > > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > > + *         ExposureShort:\n> > > + *           shutter: [ 100, 10000, 30000, 60000, 120000 ]\n> > > + *           gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]\n> > > + *\n> > > + * \\endcode\n> > > + *\n> > > + * The parsed dictionaries are used to populate an array of available values for\n> > > + * the AeExposureMode control and to create ExposureModeHelpers\n> > > + *\n> > > + * \\return -EINVAL Where a defined constraint mode is invalid\n> > > + * \\return 0 on success\n> > > + */\n> > > +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)\n> > > +{\n> > > +\tstd::vector<ControlValue> availableExposureModes;\n> > > +\tint ret;\n> > > +\n> > > +\tconst YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];\n> > > +\tif (yamlExposureModes.isDictionary()) {\n> > > +\t\tfor (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {\n> > > +\t\t\tif (AeExposureModeNameValueMap.find(modeName) ==\n> > > +\t\t\t    AeExposureModeNameValueMap.end()) {\n> > > +\t\t\t\tLOG(Agc, Warning)\n> > > +\t\t\t\t\t<< \"Skipping unknown exposure mode '\" << modeName << \"'\";\n> > > 
+				continue;
> > > +			}
> > > +
> > > +			if (!modeValues.isDictionary()) {
> > > +				LOG(Agc, Error)
> > > +					<< "Invalid exposure mode '" << modeName << "'";
> > > +				return -EINVAL;
> > > +			}
> > > +
> > > +			std::vector<uint32_t> shutters =
> > > +				modeValues["shutter"].getList<uint32_t>().value_or(std::vector<uint32_t>{});
> > > +			std::vector<double> gains =
> > > +				modeValues["gain"].getList<double>().value_or(std::vector<double>{});
> > > +
> > > +			std::vector<utils::Duration> shutterDurations;
> > > +			std::transform(shutters.begin(), shutters.end(),
> > > +				std::back_inserter(shutterDurations),
> > > +				[](uint32_t time) { return std::chrono::microseconds(time); });
> > > +
> > > +			std::shared_ptr<ExposureModeHelper> helper =
> > > +				std::make_shared<ExposureModeHelper>();
> > > +			if ((ret = helper->init(shutterDurations, gains)) < 0) {
> > > +				LOG(Agc, Error)
> > > +					<< "Failed to parse exposure mode '" << modeName << "'";
> > > +				return ret;
> > > +			}
> > > +
> > > +			exposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;
> > > +			availableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));
> > > +		}
> > > +	}
> > > +
> > > +	/*
> > > +	 * If we don't have any exposure modes in the tuning data we create an
> > > +	 * ExposureModeHelper using empty shutter time and gain arrays, which
> > > +	 * will then internally simply drive the shutter as high as possible
> > > +	 * before touching gain
> > > +	 */
> > > +	if (availableExposureModes.empty()) {
> > > +		int32_t exposureModeId = AeExposureModeNameValueMap.at("ExposureNormal");
> > > +		std::vector<utils::Duration> shutterDurations = {};
> > > +		std::vector<double> gains = {};
> > > +
> > > +		std::shared_ptr<ExposureModeHelper> helper =
> > > +			std::make_shared<ExposureModeHelper>();
> > > +		if ((ret = helper->init(shutterDurations, gains)) < 0) {
> > > +			LOG(Agc, Error)
> > > +				<< "Failed to create default ExposureModeHelper";
> > > +			return ret;
> > > +		}
> > > +
> > > +		exposureModeHelpers_[exposureModeId] = helper;
> > > +		availableExposureModes.push_back(exposureModeId);
> > > +	}
> > > +
> > > +	controls_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * \fn MeanLuminanceAgc::constraintModes()
> > > + * \brief Get the constraint modes that have been parsed from tuning data
> > > + */
> > > +
> > > +/**
> > > + * \fn MeanLuminanceAgc::exposureModeHelpers()
> > > + * \brief Get the ExposureModeHelpers that have been parsed from tuning data
> > > + */
> > > +
> > > +/**
> > > + * \fn MeanLuminanceAgc::controls()
> > > + * \brief Get the controls that have been generated after parsing tuning data
> > > + */
> > > +
> > > +/**
> > > + * \fn MeanLuminanceAgc::estimateLuminance(const double gain)
> > > + * \brief Estimate the luminance of an image, adjusted by a given gain
> > > + * \param[in] gain The gain with which to adjust the luminance estimate
> > > + *
> > > + * This function is a pure virtual function because estimation of luminance is a
> > > + * hardware-specific operation, which depends wholly on the format of the stats
> > > + * that are delivered to libcamera from the ISP. Derived classes must implement
> > > + * an overriding function that calculates the normalised mean luminance value
> > > + * across the entire image.
> > > + *
> > > + * \return The normalised relative luminance of the image
> > > + */
> > > +
> > > +/**
> > > + * \brief Estimate the initial gain needed to achieve a relative luminance
> > > + *        target
> > 
> > nit: we don't usually indent in briefs, or in general when breaking
> > lines in doxygen as far as I can tell
> > 
> > Not going
> > 
> > > + *
> > > + * To account for non-linearity caused by saturation, the value needs to be
> > > + * estimated in an iterative process, as multiplying by a gain will not increase
> > > + * the relative luminance by the same factor if some image regions are saturated
> > > + *
> > > + * \return The calculated initial gain
> > > + */
> > > +double MeanLuminanceAgc::estimateInitialGain()
> > > +{
> > > +	double yTarget = relativeLuminanceTarget_;
> > > +	double yGain = 1.0;
> > > +
> > > +	for (unsigned int i = 0; i < 8; i++) {
> > > +		double yValue = estimateLuminance(yGain);
> > > +		double extra_gain = std::min(10.0, yTarget / (yValue + .001));
> > > +
> > > +		yGain *= extra_gain;
> > > +		LOG(Agc, Debug) << "Y value: " << yValue
> > > +				<< ", Y target: " << yTarget
> > > +				<< ", gives gain " << yGain;
> > > +
> > > +		if (utils::abs_diff(extra_gain, 1.0) < 0.01)
> > > +			break;
> > > +	}
> > > +
> > > +	return yGain;
> > > +}
> > > +
> > > +/**
> > > + * \brief Clamp gain within the bounds of a defined constraint
> > > + * \param[in] constraintModeIndex The index of the constraint to adhere to
> > > + * \param[in] hist A histogram over which to calculate inter-quantile means
> > > + * \param[in] gain The gain to clamp
> > > + *
> > > + * \return The gain clamped within the constraint bounds
> > > + */
> > > +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,
> > > +					     const Histogram &hist,
> > > +					     double gain)
> > > +{
> > > +	std::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];
> > > +	for (const AgcConstraint &constraint : constraints) {
> > > +		double newGain = constraint.yTarget * hist.bins() /
> > > +				 hist.interQuantileMean(constraint.qLo, constraint.qHi);
> > > +
> > > +		if (constraint.bound == AgcConstraint::Bound::LOWER &&
> > > +		    newGain > gain)
> > > +			gain = newGain;
> > > +
> > > +		if (constraint.bound == AgcConstraint::Bound::UPPER &&
> > > +		    newGain < gain)
> > > +			gain = newGain;
> > > +	}
> > > +
> > > +	return gain;
> > > +}
> > > +
> > > +/**
> > > + * \brief Apply a filter on the exposure value to limit the speed of changes
> > > + * \param[in] exposureValue The target exposure from the AGC algorithm
> > > + *
> > > + * The speed of the filter is adaptive, and will produce the target quicker
> > > + * during startup, or when the target exposure is within 20% of the most recent
> > > + * filter output.
> > > + *
> > > + * \return The filtered exposure
> > > + */
> > > +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)
> > > +{
> > > +	double speed = 0.2;
> > > +
> > > +	/* Adapt instantly if we are in startup phase. */
> > > +	if (frameCount_ < kNumStartupFrames)
> > > +		speed = 1.0;
> > > +
> > > +	/*
> > > +	 * If we are close to the desired result, go faster to avoid making
> > > +	 * multiple micro-adjustments.
> > > +	 * \todo Make this customisable?
> > > +	 */
> > > +	if (filteredExposure_ < 1.2 * exposureValue &&
> > > +	    filteredExposure_ > 0.8 * exposureValue)
> > > +		speed = sqrt(speed);
> > > +
> > > +	filteredExposure_ = speed * exposureValue +
> > > +			    filteredExposure_ * (1.0 - speed);
> > > +
> > > +	return filteredExposure_;
> > > +}
> > > +
> > > +/**
> > > + * \brief Calculate the new exposure value
> > > + * \param[in] constraintModeIndex The index of the current constraint mode
> > > + * \param[in] exposureModeIndex The index of the current exposure mode
> > > + * \param[in] yHist A Histogram from the ISP statistics to use in constraining
> > > + *	      the calculated gain
> > > + * \param[in] effectiveExposureValue The EV applied to the frame from which the
> > > + *	      statistics in use derive
> > > + *
> > > + * Calculate a new exposure value to try to obtain the target. The calculated
> > > + * exposure value is filtered to prevent rapid changes from frame to frame, and
> > > + * divided into shutter time, analogue and digital gain.
> > > + *
> > > + * \return Tuple of shutter time, analogue gain, and digital gain
> > > + */
> > > +std::tuple<utils::Duration, double, double>
> > > +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,
> > > +				 uint32_t exposureModeIndex,
> > > +				 const Histogram &yHist,
> > > +				 utils::Duration effectiveExposureValue)
> > > +{
> > > +	/*
> > > +	 * The pipeline handler should validate that we have received an allowed
> > > +	 * value for AeExposureMode.
> > > +	 */
> > > +	std::shared_ptr<ExposureModeHelper> exposureModeHelper =
> > > +		exposureModeHelpers_.at(exposureModeIndex);
> > > +
> > > +	double gain = estimateInitialGain();
> > > +	gain = constraintClampGain(constraintModeIndex, yHist, gain);
> > > +
> > > +	/*
> > > +	 * We don't check whether we're already close to the target, because
> > > +	 * even if the effective exposure value is the same as the last frame's
> > > +	 * we could have switched to an exposure mode that would require a new
> > > +	 * pass through the splitExposure() function.
> > > +	 */
> > > +
> > > +	utils::Duration newExposureValue = effectiveExposureValue * gain;
> > > +	utils::Duration maxTotalExposure = exposureModeHelper->maxShutter()
> > > +					   * exposureModeHelper->maxGain();
> > > +	newExposureValue = std::min(newExposureValue, maxTotalExposure);
> > > +
> > > +	/*
> > > +	 * We filter the exposure value to make sure changes are not too jarring
> > > +	 * from frame to frame.
> > > +	 */
> > > +	newExposureValue = filterExposure(newExposureValue);
> > > +
> > > +	frameCount_++;
> > > +	return exposureModeHelper->splitExposure(newExposureValue);
> > > +}
> > > +
> > > +}; /* namespace ipa */
> > > +
> > > +}; /* namespace libcamera */
> > > diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h
> > > new file mode 100644
> > > index 00000000..902a359a
> > > --- /dev/null
> > > +++ b/src/ipa/libipa/agc.h
> > > @@ -0,0 +1,82 @@
> > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> > > +/*
> > > + * Copyright (C) 2024 Ideas on Board Oy
> > > + *
> > > + agc.h - Base class for libipa-compliant AGC algorithms
> > > + */
> > > +
> > > +#pragma once
> > > +
> > > +#include <tuple>
> > > +#include <vector>
> > > +
> > > +#include <libcamera/controls.h>
> > > +
> > > +#include "libcamera/internal/yaml_parser.h"
> > > +
> > > +#include "exposure_mode_helper.h"
> > > +#include "histogram.h"
> > > +
> > > +namespace libcamera {
> > > +
> > > +namespace ipa {
> > > +
> > > +class MeanLuminanceAgc
> > > +{
> > > +public:
> > > +	MeanLuminanceAgc();
> > > +	virtual ~MeanLuminanceAgc() = default;
> 
> No need to make this inline, define the destructor in the .cpp with
> 
> MeanLuminanceAgc::~MeanLuminanceAgc() = default;
> 
> > > +
> > > +	struct AgcConstraint {
> > > +		enum class Bound {
> > > +			LOWER = 0,
> > > +			UPPER = 1
> 
> Wrong coding style.
> 
> > > +		};
> > > +		Bound bound;
> > > +		double qLo;
> > > +		double qHi;
> > > +		double yTarget;
> > > +	};
> > > +
> > > +	void parseRelativeLuminanceTarget(const YamlObject &tuningData);
> > > +	void parseConstraint(const YamlObject &modeDict, int32_t id);
> 
> This function doesn't seem to be called from the derived classes, you
> can make it private. Same for other functions below.
> 
> > > +	int parseConstraintModes(const YamlObject &tuningData);
> > > +	int parseExposureModes(const YamlObject &tuningData);
> 
> Is there a reason to split the parsing in multiple functions, given that
> they are called the same from both the rkisp1 and ipu3
> implementations ?

(I assume this code came from what I wrote) It made the parent parsing
function more readable/nicer imo :)

> 
> > > +
> > > +	std::map<int32_t, std::vector<AgcConstraint>> constraintModes()
> > > +	{
> > > +		return constraintModes_;
> > > +	}
> 
> This seems unused, so the AgcConstraint structure can be made private.

I'm using it! (on patches on top)

The alternative is to make these protected as opposed to private, which
might be better than getter functions.

> 
> > > +
> > > +	std::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()
> > > +	{
> > > +		return exposureModeHelpers_;
> > > +	}
> 
> Can we avoid exposing this too ?

Same here.

> 
> I never expected there would be multiple helpers when reading the
> documentation of the ExposureModeHelper class. That may be due to
> missing documentation.

Yeah I didn't write any documentation for it...

The ExposureModeHelper only helps for a single ExposureMode so...

Should we instead have an ExposuresModeHelper? I didn't feel too
strongly that I needed that level (when I wrote it originally) and that
a map of ExposureModeHelpers was enough.

> 
> > > +
> > > +	ControlInfoMap::Map controls()
> > > +	{
> > > +		return controls_;
> > > +	}
> 
> Hmmm... We don't have precedents for pushing ControlInfo to algorithms.
> Are we sure we want to go that way ?

I added a similar pattern in the rkisp1 AGC/IPA (which I assume is where
this comes from?). But that was so that the algorithms could report to
the IPA what controls and values are supported so that the IPA can in
turn report it to the pipeline handler (and then to the application).

So this hunk actually conflicts with what I added (in "ipa: rkisp1: agc:
Plumb mode-selection and frame duration controls" [2]) because the
rkisp1 agc now inherits from both this and rkisp1 algo. That's why I
have to specify Algorithm::controls_.

[2] https://patchwork.libcamera.org/patch/19855/


Thanks,

Paul

> 
> > > +
> > > +	virtual double estimateLuminance(const double gain) = 0;
> > > +	double estimateInitialGain();
> > > +	double constraintClampGain(uint32_t constraintModeIndex,
> > > +				   const Histogram &hist,
> > > +				   double gain);
> > > +	utils::Duration filterExposure(utils::Duration exposureValue);
> > > +	std::tuple<utils::Duration, double, double>
> > > +	calculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,
> > > +		       const Histogram &yHist, utils::Duration effectiveExposureValue);
> 
> Missing blank line.
> 
> > > +private:
> > > +	uint64_t frameCount_;
> > > +	utils::Duration filteredExposure_;
> > > +	double relativeLuminanceTarget_;
> > > +
> > > +	std::map<int32_t, std::vector<AgcConstraint>> constraintModes_;
> > > +	std::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;
> > > +	ControlInfoMap::Map controls_;
> > > +};
> > > +
> > > +}; /* namespace ipa */
> > > +
> > > +}; /* namespace libcamera */
> > > diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build
> > > index 37fbd177..31cc8d70 100644
> > > --- a/src/ipa/libipa/meson.build
> > > +++ b/src/ipa/libipa/meson.build
> > > @@ -1,6 +1,7 @@
> > >  # SPDX-License-Identifier: CC0-1.0
> > >
> > >  libipa_headers = files([
> > > +    'agc.h',
> > >      'algorithm.h',
> > >      'camera_sensor_helper.h',
> > >      'exposure_mode_helper.h',
> > > @@ -10,6 +11,7 @@ libipa_headers = files([
> > >  ])
> > >
> > >  libipa_sources = files([
> > > +    'agc.cpp',
> > >      'algorithm.cpp',
> > >      'camera_sensor_helper.cpp',
> > >      'exposure_mode_helper.cpp',

From: Paul Elder <paul.elder@ideasonboard.com>
To: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Jacopo Mondi <jacopo.mondi@ideasonboard.com>, Daniel Scally <dan.scally@ideasonboard.com>, libcamera-devel@lists.libcamera.org
Date: Tue, 9 Apr 2024 18:37:19 +0900
Subject: Re: [PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class