{"id":25972,"url":"https://patchwork.libcamera.org/api/patches/25972/?format=json","web_url":"https://patchwork.libcamera.org/patch/25972/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20260127120604.6560-3-david.plowman@raspberrypi.com>","date":"2026-01-27T11:59:57","name":"[v5,2/4] ipa: rpi: controller: awb: Add Neural Network AWB","commit_ref":null,"pull_url":null,"state":"accepted","archived":false,"hash":"922af8f8363ab6082b9ed1a24615aa6e1e49769f","submitter":{"id":42,"url":"https://patchwork.libcamera.org/api/people/42/?format=json","name":"David Plowman","email":"david.plowman@raspberrypi.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/25972/mbox/","series":[{"id":5741,"url":"https://patchwork.libcamera.org/api/series/5741/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=5741","date":"2026-01-27T11:59:55","name":"Raspberry Pi AWB using neural networks","version":5,"mbox":"https://patchwork.libcamera.org/series/5741/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/25972/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/25972/checks/","tags":{},"headers":{"From":"David Plowman <david.plowman@raspberrypi.com>","To":"libcamera-devel@lists.libcamera.org","Cc":"Peter Bailey <peter.bailey@raspberrypi.com>,\n\tDavid Plowman <david.plowman@raspberrypi.com>,\n\tNaushir Patuck <naush@raspberrypi.com>","Subject":"[PATCH v5 2/4] ipa: rpi: controller: awb: Add Neural Network AWB","Date":"Tue, 27 Jan 2026 11:59:57 +0000","Message-ID":"<20260127120604.6560-3-david.plowman@raspberrypi.com>","X-Mailer":"git-send-email 
2.47.3","In-Reply-To":"<20260127120604.6560-1-david.plowman@raspberrypi.com>","References":"<20260127120604.6560-1-david.plowman@raspberrypi.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"From: Peter Bailey <peter.bailey@raspberrypi.com>\n\nAdd an AWB algorithm which uses neural networks.\n\nSigned-off-by: Peter Bailey <peter.bailey@raspberrypi.com>\nReviewed-by: David Plowman <david.plowman@raspberrypi.com>\nReviewed-by: Naushir Patuck <naush@raspberrypi.com>\n---\n meson_options.txt                     |   5 +\n src/ipa/rpi/controller/meson.build    |   9 +\n src/ipa/rpi/controller/rpi/awb_nn.cpp | 456 ++++++++++++++++++++++++++\n 3 files changed, 470 insertions(+)\n create mode 100644 src/ipa/rpi/controller/rpi/awb_nn.cpp","diff":"diff --git a/meson_options.txt b/meson_options.txt\nindex c052e85a..07847294 100644\n--- a/meson_options.txt\n+++ b/meson_options.txt\n@@ -76,6 +76,11 @@ option('qcam',\n         value : 'auto',\n         description : 'Compile the qcam test application')\n \n+option('rpi-awb-nn',\n+        type : 'feature',\n+        value : 'auto',\n+        description : 'Enable the Raspberry Pi Neural Network AWB algorithm')\n+\n 
option('test',\n         type : 'boolean',\n         value : false,\ndiff --git a/src/ipa/rpi/controller/meson.build b/src/ipa/rpi/controller/meson.build\nindex c8637906..03ee7c20 100644\n--- a/src/ipa/rpi/controller/meson.build\n+++ b/src/ipa/rpi/controller/meson.build\n@@ -32,6 +32,15 @@ rpi_ipa_controller_deps = [\n     libcamera_private,\n ]\n \n+tflite_dep = dependency('tensorflow-lite', required : get_option('rpi-awb-nn'))\n+\n+if tflite_dep.found()\n+    rpi_ipa_controller_sources += files([\n+        'rpi/awb_nn.cpp',\n+    ])\n+    rpi_ipa_controller_deps += tflite_dep\n+endif\n+\n rpi_ipa_controller_lib = static_library('rpi_ipa_controller', rpi_ipa_controller_sources,\n                                         include_directories : libipa_includes,\n                                         dependencies : rpi_ipa_controller_deps)\ndiff --git a/src/ipa/rpi/controller/rpi/awb_nn.cpp b/src/ipa/rpi/controller/rpi/awb_nn.cpp\nnew file mode 100644\nindex 00000000..395add85\n--- /dev/null\n+++ b/src/ipa/rpi/controller/rpi/awb_nn.cpp\n@@ -0,0 +1,456 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2025, Raspberry Pi Ltd\n+ *\n+ * AWB control algorithm using neural network\n+ *\n+ * The AWB Neural Network algorithm can be run entirely with the code here\n+ * and the supplied TFLite models. 
Those interested in the full model\n+ * definitions, or who may want to re-train the models should visit\n+ *\n+ * https://github.com/raspberrypi/awb_nn\n+ *\n+ * where you will find full source code for the models, the full datasets\n+ * used for training our supplied models, and full instructions for capturing\n+ * your own images and re-training the models for your own use cases.\n+ */\n+\n+#include <chrono>\n+#include <condition_variable>\n+#include <thread>\n+\n+#include <libcamera/base/file.h>\n+#include <libcamera/base/log.h>\n+\n+#include <tensorflow/lite/interpreter.h>\n+#include <tensorflow/lite/kernels/register.h>\n+#include <tensorflow/lite/model.h>\n+\n+#include \"../awb_algorithm.h\"\n+#include \"../awb_status.h\"\n+#include \"../lux_status.h\"\n+#include \"libipa/pwl.h\"\n+\n+#include \"alsc_status.h\"\n+#include \"awb.h\"\n+\n+using namespace libcamera;\n+\n+LOG_DECLARE_CATEGORY(RPiAwb)\n+\n+constexpr double kDefaultCT = 4500.0;\n+\n+/*\n+ * The neural networks are trained to work on images rendered at a canonical\n+ * colour temperature. 
That value is 5000K, which must be reproduced here.\n+ */\n+constexpr double kNetworkCanonicalCT = 5000.0;\n+\n+#define NAME \"rpi.nn.awb\"\n+\n+namespace RPiController {\n+\n+struct AwbNNConfig {\n+\tAwbNNConfig() {}\n+\tint read(const libcamera::YamlObject &params, AwbConfig &config);\n+\n+\t/* An empty model will check default locations for awb_model.tflite */\n+\tstd::string model;\n+\tfloat minTemp;\n+\tfloat maxTemp;\n+\n+\tbool enableNn;\n+\n+\t/* CCM matrix for canonical network CT */\n+\tdouble ccm[9];\n+};\n+\n+class AwbNN : public Awb\n+{\n+public:\n+\tAwbNN(Controller *controller = NULL);\n+\t~AwbNN();\n+\tchar const *name() const override;\n+\tvoid initialise() override;\n+\tint read(const libcamera::YamlObject &params) override;\n+\n+protected:\n+\tvoid doAwb() override;\n+\tvoid prepareStats() override;\n+\n+private:\n+\tbool isAutoEnabled() const;\n+\tAwbNNConfig nnConfig_;\n+\tvoid transverseSearch(double t, double &r, double &b);\n+\tRGB processZone(RGB zone, float redGain, float blueGain);\n+\tvoid awbNN();\n+\tvoid loadModel();\n+\n+\tlibcamera::Size zoneSize_;\n+\tstd::unique_ptr<tflite::FlatBufferModel> model_;\n+\tstd::unique_ptr<tflite::Interpreter> interpreter_;\n+};\n+\n+int AwbNNConfig::read(const libcamera::YamlObject &params, AwbConfig &config)\n+{\n+\tmodel = params[\"model\"].get<std::string>(\"\");\n+\tminTemp = params[\"min_temp\"].get<float>(2800.0);\n+\tmaxTemp = params[\"max_temp\"].get<float>(7600.0);\n+\n+\tfor (int i = 0; i < 9; i++)\n+\t\tccm[i] = params[\"ccm\"][i].get<double>(0.0);\n+\n+\tenableNn = params[\"enable_nn\"].get<int>(1);\n+\n+\tif (enableNn) {\n+\t\tif (!config.hasCtCurve()) {\n+\t\t\tLOG(RPiAwb, Error) << \"CT curve not specified\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tif (!model.empty() && model.find(\".tflite\") == std::string::npos) {\n+\t\t\tLOG(RPiAwb, Error) << \"Model must be a .tflite file\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tbool validCcm = true;\n+\t\tfor (int i = 0; i < 9; 
i++)\n+\t\t\tif (ccm[i] == 0.0)\n+\t\t\t\tvalidCcm = false;\n+\n+\t\tif (!validCcm) {\n+\t\t\tLOG(RPiAwb, Error) << \"CCM not specified or invalid\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tif (!enableNn) {\n+\t\t\tLOG(RPiAwb, Warning) << \"Neural Network AWB mis-configured - switch to Grey method\";\n+\t\t}\n+\t}\n+\n+\tif (!enableNn) {\n+\t\tconfig.sensitivityR = config.sensitivityB = 1.0;\n+\t\tconfig.greyWorld = true;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+AwbNN::AwbNN(Controller *controller)\n+\t: Awb(controller)\n+{\n+\tzoneSize_ = getHardwareConfig().awbRegions;\n+}\n+\n+AwbNN::~AwbNN()\n+{\n+}\n+\n+char const *AwbNN::name() const\n+{\n+\treturn NAME;\n+}\n+\n+int AwbNN::read(const libcamera::YamlObject &params)\n+{\n+\tint ret;\n+\n+\tret = config_.read(params);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = nnConfig_.read(params, config_);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\treturn 0;\n+}\n+\n+static bool checkTensorShape(TfLiteTensor *tensor, const int *expectedDims, const int expectedDimsSize)\n+{\n+\tif (tensor->dims->size != expectedDimsSize)\n+\t\treturn false;\n+\n+\tfor (int i = 0; i < tensor->dims->size; i++) {\n+\t\tif (tensor->dims->data[i] != expectedDims[i]) {\n+\t\t\treturn false;\n+\t\t}\n+\t}\n+\treturn true;\n+}\n+\n+static std::string buildDimString(const int *dims, const int dimsSize)\n+{\n+\tstd::string s = \"[\";\n+\tfor (int i = 0; i < dimsSize; i++) {\n+\t\ts += std::to_string(dims[i]);\n+\t\tif (i < dimsSize - 1)\n+\t\t\ts += \",\";\n+\t\telse\n+\t\t\ts += \"]\";\n+\t}\n+\treturn s;\n+}\n+\n+void AwbNN::loadModel()\n+{\n+\tstd::string modelPath;\n+\tif (getTarget() == \"bcm2835\") {\n+\t\tmodelPath = \"/ipa/rpi/vc4/awb_model.tflite\";\n+\t} else {\n+\t\tmodelPath = \"/ipa/rpi/pisp/awb_model.tflite\";\n+\t}\n+\n+\tif (nnConfig_.model.empty()) {\n+\t\tstd::string root = utils::libcameraSourcePath();\n+\t\tif (!root.empty()) {\n+\t\t\tmodelPath = root + modelPath;\n+\t\t} else {\n+\t\t\tmodelPath = LIBCAMERA_DATA_DIR + 
modelPath;\n+\t\t}\n+\n+\t\tif (!File::exists(modelPath)) {\n+\t\t\tLOG(RPiAwb, Error) << \"No model file found in standard locations\";\n+\t\t\tnnConfig_.enableNn = false;\n+\t\t\treturn;\n+\t\t}\n+\t} else {\n+\t\tmodelPath = nnConfig_.model;\n+\t}\n+\n+\tLOG(RPiAwb, Debug) << \"Attempting to load model from: \" << modelPath;\n+\n+\tmodel_ = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());\n+\n+\tif (!model_) {\n+\t\tLOG(RPiAwb, Error) << \"Failed to load model from \" << modelPath;\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\ttflite::MutableOpResolver resolver;\n+\ttflite::ops::builtin::BuiltinOpResolver builtin_resolver;\n+\tresolver.AddAll(builtin_resolver);\n+\ttflite::InterpreterBuilder(*model_, resolver)(&interpreter_);\n+\tif (!interpreter_) {\n+\t\tLOG(RPiAwb, Error) << \"Failed to build interpreter for model \" << nnConfig_.model;\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tinterpreter_->AllocateTensors();\n+\tTfLiteTensor *inputTensor = interpreter_->input_tensor(0);\n+\tTfLiteTensor *inputLuxTensor = interpreter_->input_tensor(1);\n+\tTfLiteTensor *outputTensor = interpreter_->output_tensor(0);\n+\tif (!inputTensor || !inputLuxTensor || !outputTensor) {\n+\t\tLOG(RPiAwb, Error) << \"Model missing input or output tensor\";\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tconst int expectedInputDims[] = { 1, (int)zoneSize_.height, (int)zoneSize_.width, 3 };\n+\tconst int expectedInputLuxDims[] = { 1 };\n+\tconst int expectedOutputDims[] = { 1 };\n+\n+\tif (!checkTensorShape(inputTensor, expectedInputDims, 4)) {\n+\t\tLOG(RPiAwb, Error) << \"Model input tensor dimension mismatch. 
Expected: \" << buildDimString(expectedInputDims, 4)\n+\t\t\t\t   << \", Got: \" << buildDimString(inputTensor->dims->data, inputTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (!checkTensorShape(inputLuxTensor, expectedInputLuxDims, 1)) {\n+\t\tLOG(RPiAwb, Error) << \"Model input lux tensor dimension mismatch. Expected: \" << buildDimString(expectedInputLuxDims, 1)\n+\t\t\t\t   << \", Got: \" << buildDimString(inputLuxTensor->dims->data, inputLuxTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (!checkTensorShape(outputTensor, expectedOutputDims, 1)) {\n+\t\tLOG(RPiAwb, Error) << \"Model output tensor dimension mismatch. Expected: \" << buildDimString(expectedOutputDims, 1)\n+\t\t\t\t   << \", Got: \" << buildDimString(outputTensor->dims->data, outputTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (inputTensor->type != kTfLiteFloat32 || inputLuxTensor->type != kTfLiteFloat32 || outputTensor->type != kTfLiteFloat32) {\n+\t\tLOG(RPiAwb, Error) << \"Model input and output tensors must be float32\";\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tLOG(RPiAwb, Info) << \"Model loaded successfully from \" << modelPath;\n+\tLOG(RPiAwb, Debug) << \"Model validation successful - Input Image: \"\n+\t\t\t   << buildDimString(expectedInputDims, 4)\n+\t\t\t   << \", Input Lux: \" << buildDimString(expectedInputLuxDims, 1)\n+\t\t\t   << \", Output: \" << buildDimString(expectedOutputDims, 1) << \" floats\";\n+}\n+\n+void AwbNN::initialise()\n+{\n+\tAwb::initialise();\n+\n+\tif (nnConfig_.enableNn) {\n+\t\tloadModel();\n+\t\tif (!nnConfig_.enableNn) {\n+\t\t\tLOG(RPiAwb, Warning) << \"Neural Network AWB failed to load - switch to Grey method\";\n+\t\t\tconfig_.greyWorld = true;\n+\t\t\tconfig_.sensitivityR = config_.sensitivityB = 1.0;\n+\t\t}\n+\t}\n+}\n+\n+void AwbNN::prepareStats()\n+{\n+\tzones_.clear();\n+\t/*\n+\t * LSC has already been applied to the 
stats in this pipeline, so stop\n+\t * any LSC compensation.  We also ignore config_.fast in this version.\n+\t */\n+\tgenerateStats(zones_, statistics_, 0.0, 0.0, getGlobalMetadata(), 0.0, 0.0, 0.0);\n+\t/*\n+\t * apply sensitivities, so values appear to come from our \"canonical\"\n+\t * sensor.\n+\t */\n+\tfor (auto &zone : zones_) {\n+\t\tzone.R *= config_.sensitivityR;\n+\t\tzone.B *= config_.sensitivityB;\n+\t}\n+}\n+\n+void AwbNN::transverseSearch(double t, double &r, double &b)\n+{\n+\tint spanR = -1, spanB = -1;\n+\tconfig_.ctR.eval(t, &spanR);\n+\tconfig_.ctB.eval(t, &spanB);\n+\n+\tconst int diff = 10;\n+\tdouble rDiff = config_.ctR.eval(t + diff, &spanR) -\n+\t\t       config_.ctR.eval(t - diff, &spanR);\n+\tdouble bDiff = config_.ctB.eval(t + diff, &spanB) -\n+\t\t       config_.ctB.eval(t - diff, &spanB);\n+\n+\tipa::Pwl::Point transverse({ bDiff, -rDiff });\n+\tif (transverse.length2() < 1e-6)\n+\t\treturn;\n+\n+\ttransverse = transverse / transverse.length();\n+\tdouble transverseRange = config_.transverseNeg + config_.transversePos;\n+\tconst int maxNumDeltas = 12;\n+\tint numDeltas = floor(transverseRange * 100 + 0.5) + 1;\n+\tnumDeltas = numDeltas < 3 ? 3 : (numDeltas > maxNumDeltas ? 
maxNumDeltas : numDeltas);\n+\n+\tipa::Pwl::Point points[maxNumDeltas];\n+\tint bestPoint = 0;\n+\n+\tfor (int i = 0; i < numDeltas; i++) {\n+\t\tpoints[i][0] = -config_.transverseNeg +\n+\t\t\t       (transverseRange * i) / (numDeltas - 1);\n+\t\tipa::Pwl::Point rbTest = ipa::Pwl::Point({ r, b }) +\n+\t\t\t\t\t transverse * points[i].x();\n+\t\tdouble rTest = rbTest.x(), bTest = rbTest.y();\n+\t\tdouble gainR = 1 / rTest, gainB = 1 / bTest;\n+\t\tdouble delta2Sum = computeDelta2Sum(gainR, gainB, 0.0, 0.0);\n+\t\tpoints[i][1] = delta2Sum;\n+\t\tif (points[i].y() < points[bestPoint].y())\n+\t\t\tbestPoint = i;\n+\t}\n+\n+\tbestPoint = std::clamp(bestPoint, 1, numDeltas - 2);\n+\tipa::Pwl::Point rbBest = ipa::Pwl::Point({ r, b }) +\n+\t\t\t\t transverse * interpolateQuadatric(points[bestPoint - 1],\n+\t\t\t\t\t\t\t\t   points[bestPoint],\n+\t\t\t\t\t\t\t\t   points[bestPoint + 1]);\n+\tdouble rBest = rbBest.x(), bBest = rbBest.y();\n+\n+\tr = rBest, b = bBest;\n+}\n+\n+AwbNN::RGB AwbNN::processZone(AwbNN::RGB zone, float redGain, float blueGain)\n+{\n+\t/*\n+\t * Renders the pixel at canonical network colour temperature\n+\t */\n+\tRGB zoneGains = zone;\n+\n+\tzoneGains.R *= redGain;\n+\tzoneGains.G *= 1.0;\n+\tzoneGains.B *= blueGain;\n+\n+\tRGB zoneCcm;\n+\n+\tzoneCcm.R = nnConfig_.ccm[0] * zoneGains.R + nnConfig_.ccm[1] * zoneGains.G + nnConfig_.ccm[2] * zoneGains.B;\n+\tzoneCcm.G = nnConfig_.ccm[3] * zoneGains.R + nnConfig_.ccm[4] * zoneGains.G + nnConfig_.ccm[5] * zoneGains.B;\n+\tzoneCcm.B = nnConfig_.ccm[6] * zoneGains.R + nnConfig_.ccm[7] * zoneGains.G + nnConfig_.ccm[8] * zoneGains.B;\n+\n+\treturn zoneCcm;\n+}\n+\n+void AwbNN::awbNN()\n+{\n+\tfloat *inputData = interpreter_->typed_input_tensor<float>(0);\n+\tfloat *inputLux = interpreter_->typed_input_tensor<float>(1);\n+\n+\tfloat redGain = 1.0 / config_.ctR.eval(kNetworkCanonicalCT);\n+\tfloat blueGain = 1.0 / config_.ctB.eval(kNetworkCanonicalCT);\n+\n+\tfor (uint i = 0; i < zoneSize_.height; i++) 
{\n+\t\tfor (uint j = 0; j < zoneSize_.width; j++) {\n+\t\t\tuint zoneIdx = i * zoneSize_.width + j;\n+\n+\t\t\tRGB processedZone = processZone(zones_[zoneIdx] * (1.0 / 65535), redGain, blueGain);\n+\t\t\tuint baseIdx = zoneIdx * 3;\n+\n+\t\t\tinputData[baseIdx + 0] = static_cast<float>(processedZone.R);\n+\t\t\tinputData[baseIdx + 1] = static_cast<float>(processedZone.G);\n+\t\t\tinputData[baseIdx + 2] = static_cast<float>(processedZone.B);\n+\t\t}\n+\t}\n+\n+\tinputLux[0] = static_cast<float>(lux_);\n+\n+\tTfLiteStatus status = interpreter_->Invoke();\n+\tif (status != kTfLiteOk) {\n+\t\tLOG(RPiAwb, Error) << \"Model inference failed with status: \" << status;\n+\t\treturn;\n+\t}\n+\n+\tfloat *outputData = interpreter_->typed_output_tensor<float>(0);\n+\n+\tdouble t = outputData[0];\n+\n+\tLOG(RPiAwb, Debug) << \"Model output temperature: \" << t;\n+\n+\tt = std::clamp(t, mode_->ctLo, mode_->ctHi);\n+\n+\tdouble r = config_.ctR.eval(t);\n+\tdouble b = config_.ctB.eval(t);\n+\n+\ttransverseSearch(t, r, b);\n+\n+\tLOG(RPiAwb, Debug) << \"After transverse search: Temperature: \" << t << \" Red gain: \" << 1.0 / r << \" Blue gain: \" << 1.0 / b;\n+\n+\tasyncResults_.temperatureK = t;\n+\tasyncResults_.gainR = 1.0 / r * config_.sensitivityR;\n+\tasyncResults_.gainG = 1.0;\n+\tasyncResults_.gainB = 1.0 / b * config_.sensitivityB;\n+}\n+\n+void AwbNN::doAwb()\n+{\n+\tprepareStats();\n+\tif (zones_.size() == (zoneSize_.width * zoneSize_.height) && nnConfig_.enableNn)\n+\t\tawbNN();\n+\telse\n+\t\tawbGrey();\n+\tstatistics_.reset();\n+}\n+\n+/* Register algorithm with the system. */\n+static Algorithm *create(Controller *controller)\n+{\n+\treturn (Algorithm *)new AwbNN(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &create);\n+\n+} /* namespace RPiController */\n","prefixes":["v5","2/4"]}