{"id":25545,"url":"https://patchwork.libcamera.org/api/patches/25545/?format=json","web_url":"https://patchwork.libcamera.org/patch/25545/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20251212103401.3776-3-david.plowman@raspberrypi.com>","date":"2025-12-12T10:23:51","name":"[v3,2/4] ipa: rpi: controller: awb: Add Neural Network AWB","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"c9a4654cfdede598616fd170c876ae5242557ba4","submitter":{"id":42,"url":"https://patchwork.libcamera.org/api/people/42/?format=json","name":"David Plowman","email":"david.plowman@raspberrypi.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/25545/mbox/","series":[{"id":5659,"url":"https://patchwork.libcamera.org/api/series/5659/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=5659","date":"2025-12-12T10:23:49","name":"Raspberry Pi AWB using neural networks","version":3,"mbox":"https://patchwork.libcamera.org/series/5659/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/25545/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/25545/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 3F2E8C3257\n\tfor <parsemail@patchwork.libcamera.org>;\n\tFri, 12 Dec 2025 10:34:16 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id E64DB6169C;\n\tFri, 12 Dec 2025 11:34:15 +0100 (CET)","from 
mail-wr1-x42e.google.com (mail-wr1-x42e.google.com\n\t[IPv6:2a00:1450:4864:20::42e])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 0D7DF6069A\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 12 Dec 2025 11:34:12 +0100 (CET)","by mail-wr1-x42e.google.com with SMTP id\n\tffacd0b85a97d-42e2e40582eso631430f8f.1\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 12 Dec 2025 02:34:12 -0800 (PST)","from localhost.localdomain ([2a06:61c0:f337:0:9c1f:b517:931a:3b19])\n\tby smtp.gmail.com with ESMTPSA id\n\tffacd0b85a97d-42fa8a7044csm12232495f8f.15.2025.12.12.02.34.08\n\t(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n\tFri, 12 Dec 2025 02:34:09 -0800 (PST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key;\n\tunprotected) header.d=raspberrypi.com header.i=@raspberrypi.com\n\theader.b=\"M9wHzFNV\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=raspberrypi.com; s=google; t=1765535651; x=1766140451;\n\tdarn=lists.libcamera.org; \n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:from:to:cc:subject:date\n\t:message-id:reply-to;\n\tbh=+b4JBKOYlGoy+e3nKceizu6lfQGxgB8DEXt0UrESb8Y=;\n\tb=M9wHzFNV5qVHUQIrPAvKHlOGzXPr1T5Hw3BwEN7iT88EByZNNWprUOnMljjPZX+3Ea\n\tKoNu2TPhXoFiCOTZXmkiW5BJaWuZI8gh2N+IMed6nzxt7EPcspYtF6elU96CujFlLkgE\n\txH1jrgog8oAEPS4GnwPZxtbPHYJgsQMa25GdQGUbQR8yo5d4f4ymZweIojaDR/zQFq3n\n\tFlkHGGqot2yq8smbxtq9gn+1bY+562WjENqMLr/oFE6eq+Tkdh5zb1Ps45nTqdRQ5Gt3\n\t1GbJD2TldAyJHVhem3LwYIDIQDdynml0Ihf2Rw/tuV4ih51rQ+Al7c4ZILpYzbJ+YR9J\n\tIjMw==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20230601; t=1765535651; 
x=1766140451;\n\th=content-transfer-encoding:mime-version:references:in-reply-to\n\t:message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n\t:to:cc:subject:date:message-id:reply-to;\n\tbh=+b4JBKOYlGoy+e3nKceizu6lfQGxgB8DEXt0UrESb8Y=;\n\tb=uVizxdnhKg6akXfIJhYO87cI7fFTNz4Hr37DaySMkLDh994yh87NIUOahz6SYYWCpf\n\t9NiOH50YG8Zb4g3RtiwzdibmSR5a3xBoaLPFKkg8g6RFi0aJdMCHMfBjfqra85lrGiY3\n\tL15ncXdtFPFoygyKKs8qXMKm2UHbPbeG0APfvlPXHrPjTc9en6Qfj7pnUnFgAY6DT3Tk\n\tbqBXcaBpvt0+T60HpXNHCakcfqPRyySn8rROCXAwfJTSEww1BM+hJQ+ZKwLYPiOP4oR8\n\tA/Vdy3NJdTf4rEDNUeIBqANGEYP2DO3ElcEnmF6/9SaqjRN8GwoaX2w+Zwn/uqblxBzE\n\tmomA==","X-Gm-Message-State":"AOJu0Yy9eRzLFMEINGbAhuZqzpHr+JeMWUx6pUyHFmWOyXQzVHLe6dma\n\tbhbUn8RwHkjFxGGvkZ1t4/O+aECvemRJ/Yk08kLILUBDDgmB4qcvfCH0gyzxFSx9f6b4uJnTIcj\n\tB+Hgw","X-Gm-Gg":"AY/fxX7KS8U7Wwtc7URod9ka2cvohkuqjq3nQwr6k9LHQH4TBNQ5VS2+3qzn4KlBSC8\n\tGrvZtIz1LbMC+fi59/77rSPwOTm4L2qdseifcQ0eWkibeodaKgh1BtClUP8yhHz7HXqntNVrdGB\n\t7urUgmaDRKmLqNxPBuMrZoR4Sx6B4S3K3lOLSSnEich+lxMoRAfU818sGIHR5vyvJQ0MzjlyU3I\n\tYjspk8uMM86cYHg6Pifow1cWT5YzORMlQP+a1P+DUxYGhwToT5kJsl8kGc9/xHj3RH97Vu2cgz0\n\tSOyxWEMUK3TpVKrASFomoNn97YIysAMFaWg0NGiGxbO2Q3qkRxknSe+ukQdoJu2je1o6wneiZfx\n\tw58AG1wgXk+yvxiOEq+bqnPwg+KcmxlS7kzf5oKZQemk9IWzegAun/bsSSRNzef4zCp2lFP0r4F\n\tsXlJLrHRUXHNdaVyxULaA2diyfmJDy4EJTiYlH2EzOeF9H3JDpAgjxGmVa9RzgJ7jHW2lMD7YkQ\n\tpqW/Zp1MBTraRj6GN7c+fhjqQ==","X-Google-Smtp-Source":"AGHT+IHzzCuHQbs27zr81Btj9mOns6nIpuhdIk05E1c/QwN5of0PWDf2EcE4RS3YTye4mhYIZWdahw==","X-Received":"by 2002:a5d:5886:0:b0:42b:3a84:1ec3 with SMTP id\n\tffacd0b85a97d-42fb48e5309mr1845633f8f.29.1765535651105; \n\tFri, 12 Dec 2025 02:34:11 -0800 (PST)","From":"David Plowman <david.plowman@raspberrypi.com>","To":"libcamera-devel@lists.libcamera.org","Cc":"Peter Bailey <peter.bailey@raspberrypi.com>,\n\tDavid Plowman <david.plowman@raspberrypi.com>,\n\tNaushir Patuck <naush@raspberrypi.com>","Subject":"[PATCH v3 2/4] ipa: rpi: controller: awb: Add Neural Network AWB","Date":"Fri, 12 Dec 2025 10:23:51 
+0000","Message-ID":"<20251212103401.3776-3-david.plowman@raspberrypi.com>","X-Mailer":"git-send-email 2.47.3","In-Reply-To":"<20251212103401.3776-1-david.plowman@raspberrypi.com>","References":"<20251212103401.3776-1-david.plowman@raspberrypi.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"From: Peter Bailey <peter.bailey@raspberrypi.com>\n\nAdd an AWB algorithm which uses neural networks.\n\nSigned-off-by: Peter Bailey <peter.bailey@raspberrypi.com>\nReviewed-by: David Plowman <david.plowman@raspberrypi.com>\nReviewed-by: Naushir Patuck <naush@raspberrypi.com>\n---\n meson_options.txt                     |   5 +\n src/ipa/rpi/controller/meson.build    |   9 +\n src/ipa/rpi/controller/rpi/awb_nn.cpp | 446 ++++++++++++++++++++++++++\n 3 files changed, 460 insertions(+)\n create mode 100644 src/ipa/rpi/controller/rpi/awb_nn.cpp","diff":"diff --git a/meson_options.txt b/meson_options.txt\nindex 5954e028..89eece52 100644\n--- a/meson_options.txt\n+++ b/meson_options.txt\n@@ -78,6 +78,11 @@ option('qcam',\n         value : 'disabled',\n         description : 'Compile the qcam test application')\n \n+option('rpi-awb-nn',\n+        type : 'feature',\n+        
value : 'auto',\n+        description : 'Enable the Raspberry Pi Neural Network AWB algorithm')\n+\n option('test',\n         type : 'boolean',\n         value : false,\ndiff --git a/src/ipa/rpi/controller/meson.build b/src/ipa/rpi/controller/meson.build\nindex 90d9e285..eba6cb28 100644\n--- a/src/ipa/rpi/controller/meson.build\n+++ b/src/ipa/rpi/controller/meson.build\n@@ -33,6 +33,15 @@ rpi_ipa_controller_deps = [\n     libcamera_private,\n ]\n \n+tflite_dep = dependency('tensorflow-lite', required : get_option('rpi-awb-nn'))\n+\n+if tflite_dep.found()\n+    rpi_ipa_controller_sources += files([\n+        'rpi/awb_nn.cpp',\n+    ])\n+    rpi_ipa_controller_deps += tflite_dep\n+endif\n+\n rpi_ipa_controller_lib = static_library('rpi_ipa_controller', rpi_ipa_controller_sources,\n                                         include_directories : libipa_includes,\n                                         dependencies : rpi_ipa_controller_deps)\ndiff --git a/src/ipa/rpi/controller/rpi/awb_nn.cpp b/src/ipa/rpi/controller/rpi/awb_nn.cpp\nnew file mode 100644\nindex 00000000..35d1270e\n--- /dev/null\n+++ b/src/ipa/rpi/controller/rpi/awb_nn.cpp\n@@ -0,0 +1,446 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2025, Raspberry Pi Ltd\n+ *\n+ * AWB control algorithm using neural network\n+ */\n+\n+#include <chrono>\n+#include <condition_variable>\n+#include <thread>\n+\n+#include <libcamera/base/file.h>\n+#include <libcamera/base/log.h>\n+\n+#include <tensorflow/lite/interpreter.h>\n+#include <tensorflow/lite/kernels/register.h>\n+#include <tensorflow/lite/model.h>\n+\n+#include \"../awb_algorithm.h\"\n+#include \"../awb_status.h\"\n+#include \"../lux_status.h\"\n+#include \"libipa/pwl.h\"\n+\n+#include \"alsc_status.h\"\n+#include \"awb.h\"\n+\n+using namespace libcamera;\n+\n+LOG_DECLARE_CATEGORY(RPiAwb)\n+\n+constexpr double kDefaultCT = 4500.0;\n+\n+/*\n+ * The neural networks are trained to work on images rendered at a canonical\n+ * colour 
temperature. That value is 5000K, which must be reproduced here.\n+ */\n+constexpr double kNetworkCanonicalCT = 5000.0;\n+\n+#define NAME \"rpi.nn.awb\"\n+\n+namespace RPiController {\n+\n+struct AwbNNConfig {\n+\tAwbNNConfig() {}\n+\tint read(const libcamera::YamlObject &params, AwbConfig &config);\n+\n+\t/* An empty model will check default locations for model.tflite */\n+\tstd::string model;\n+\tfloat minTemp;\n+\tfloat maxTemp;\n+\n+\tbool enableNn;\n+\n+\t/* CCM matrix for canonical network CT */\n+\tdouble ccm[9];\n+};\n+\n+class AwbNN : public Awb\n+{\n+public:\n+\tAwbNN(Controller *controller = NULL);\n+\t~AwbNN();\n+\tchar const *name() const override;\n+\tvoid initialise() override;\n+\tint read(const libcamera::YamlObject &params) override;\n+\n+protected:\n+\tvoid doAwb() override;\n+\tvoid prepareStats() override;\n+\n+private:\n+\tbool isAutoEnabled() const;\n+\tAwbNNConfig nnConfig_;\n+\tvoid transverseSearch(double t, double &r, double &b);\n+\tRGB processZone(RGB zone, float redGain, float blueGain);\n+\tvoid awbNN();\n+\tvoid loadModel();\n+\n+\tlibcamera::Size zoneSize_;\n+\tstd::unique_ptr<tflite::FlatBufferModel> model_;\n+\tstd::unique_ptr<tflite::Interpreter> interpreter_;\n+};\n+\n+int AwbNNConfig::read(const libcamera::YamlObject &params, AwbConfig &config)\n+{\n+\tmodel = params[\"model\"].get<std::string>(\"\");\n+\tminTemp = params[\"min_temp\"].get<float>(2800.0);\n+\tmaxTemp = params[\"max_temp\"].get<float>(7600.0);\n+\n+\tfor (int i = 0; i < 9; i++)\n+\t\tccm[i] = params[\"ccm\"][i].get<double>(0.0);\n+\n+\tenableNn = params[\"enable_nn\"].get<int>(1);\n+\n+\tif (enableNn) {\n+\t\tif (!config.hasCtCurve()) {\n+\t\t\tLOG(RPiAwb, Error) << \"CT curve not specified\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tif (!model.empty() && model.find(\".tflite\") == std::string::npos) {\n+\t\t\tLOG(RPiAwb, Error) << \"Model must be a .tflite file\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tbool validCcm = true;\n+\t\tfor (int i = 0; i < 9; 
i++)\n+\t\t\tif (ccm[i] == 0.0)\n+\t\t\t\tvalidCcm = false;\n+\n+\t\tif (!validCcm) {\n+\t\t\tLOG(RPiAwb, Error) << \"CCM not specified or invalid\";\n+\t\t\tenableNn = false;\n+\t\t}\n+\n+\t\tif (!enableNn) {\n+\t\t\tLOG(RPiAwb, Warning) << \"Neural Network AWB mis-configured - switch to Grey method\";\n+\t\t}\n+\t}\n+\n+\tif (!enableNn) {\n+\t\tconfig.sensitivityR = config.sensitivityB = 1.0;\n+\t\tconfig.greyWorld = true;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+AwbNN::AwbNN(Controller *controller)\n+\t: Awb(controller)\n+{\n+\tzoneSize_ = getHardwareConfig().awbRegions;\n+}\n+\n+AwbNN::~AwbNN()\n+{\n+}\n+\n+char const *AwbNN::name() const\n+{\n+\treturn NAME;\n+}\n+\n+int AwbNN::read(const libcamera::YamlObject &params)\n+{\n+\tint ret;\n+\n+\tret = config_.read(params);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = nnConfig_.read(params, config_);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\treturn 0;\n+}\n+\n+static bool checkTensorShape(TfLiteTensor *tensor, const int *expectedDims, const int expectedDimsSize)\n+{\n+\tif (tensor->dims->size != expectedDimsSize)\n+\t\treturn false;\n+\n+\tfor (int i = 0; i < tensor->dims->size; i++) {\n+\t\tif (tensor->dims->data[i] != expectedDims[i]) {\n+\t\t\treturn false;\n+\t\t}\n+\t}\n+\treturn true;\n+}\n+\n+static std::string buildDimString(const int *dims, const int dimsSize)\n+{\n+\tstd::string s = \"[\";\n+\tfor (int i = 0; i < dimsSize; i++) {\n+\t\ts += std::to_string(dims[i]);\n+\t\tif (i < dimsSize - 1)\n+\t\t\ts += \",\";\n+\t\telse\n+\t\t\ts += \"]\";\n+\t}\n+\treturn s;\n+}\n+\n+void AwbNN::loadModel()\n+{\n+\tstd::string modelPath;\n+\tif (getTarget() == \"bcm2835\") {\n+\t\tmodelPath = \"/ipa/rpi/vc4/awb_model.tflite\";\n+\t} else {\n+\t\tmodelPath = \"/ipa/rpi/pisp/awb_model.tflite\";\n+\t}\n+\n+\tif (nnConfig_.model.empty()) {\n+\t\tstd::string root = utils::libcameraSourcePath();\n+\t\tif (!root.empty()) {\n+\t\t\tmodelPath = root + modelPath;\n+\t\t} else {\n+\t\t\tmodelPath = LIBCAMERA_DATA_DIR + 
modelPath;\n+\t\t}\n+\n+\t\tif (!File::exists(modelPath)) {\n+\t\t\tLOG(RPiAwb, Error) << \"No model file found in standard locations\";\n+\t\t\tnnConfig_.enableNn = false;\n+\t\t\treturn;\n+\t\t}\n+\t} else {\n+\t\tmodelPath = nnConfig_.model;\n+\t}\n+\n+\tLOG(RPiAwb, Debug) << \"Attempting to load model from: \" << modelPath;\n+\n+\tmodel_ = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());\n+\n+\tif (!model_) {\n+\t\tLOG(RPiAwb, Error) << \"Failed to load model from \" << modelPath;\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\ttflite::MutableOpResolver resolver;\n+\ttflite::ops::builtin::BuiltinOpResolver builtin_resolver;\n+\tresolver.AddAll(builtin_resolver);\n+\ttflite::InterpreterBuilder(*model_, resolver)(&interpreter_);\n+\tif (!interpreter_) {\n+\t\tLOG(RPiAwb, Error) << \"Failed to build interpreter for model \" << modelPath;\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tinterpreter_->AllocateTensors();\n+\tTfLiteTensor *inputTensor = interpreter_->input_tensor(0);\n+\tTfLiteTensor *inputLuxTensor = interpreter_->input_tensor(1);\n+\tTfLiteTensor *outputTensor = interpreter_->output_tensor(0);\n+\tif (!inputTensor || !inputLuxTensor || !outputTensor) {\n+\t\tLOG(RPiAwb, Error) << \"Model missing input or output tensor\";\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tconst int expectedInputDims[] = { 1, (int)zoneSize_.height, (int)zoneSize_.width, 3 };\n+\tconst int expectedInputLuxDims[] = { 1 };\n+\tconst int expectedOutputDims[] = { 1 };\n+\n+\tif (!checkTensorShape(inputTensor, expectedInputDims, 4)) {\n+\t\tLOG(RPiAwb, Error) << \"Model input tensor dimension mismatch. 
Expected: \" << buildDimString(expectedInputDims, 4)\n+\t\t\t\t   << \", Got: \" << buildDimString(inputTensor->dims->data, inputTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (!checkTensorShape(inputLuxTensor, expectedInputLuxDims, 1)) {\n+\t\tLOG(RPiAwb, Error) << \"Model input lux tensor dimension mismatch. Expected: \" << buildDimString(expectedInputLuxDims, 1)\n+\t\t\t\t   << \", Got: \" << buildDimString(inputLuxTensor->dims->data, inputLuxTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (!checkTensorShape(outputTensor, expectedOutputDims, 1)) {\n+\t\tLOG(RPiAwb, Error) << \"Model output tensor dimension mismatch. Expected: \" << buildDimString(expectedOutputDims, 1)\n+\t\t\t\t   << \", Got: \" << buildDimString(outputTensor->dims->data, outputTensor->dims->size);\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tif (inputTensor->type != kTfLiteFloat32 || inputLuxTensor->type != kTfLiteFloat32 || outputTensor->type != kTfLiteFloat32) {\n+\t\tLOG(RPiAwb, Error) << \"Model input and output tensors must be float32\";\n+\t\tnnConfig_.enableNn = false;\n+\t\treturn;\n+\t}\n+\n+\tLOG(RPiAwb, Info) << \"Model loaded successfully from \" << modelPath;\n+\tLOG(RPiAwb, Debug) << \"Model validation successful - Input Image: \"\n+\t\t\t   << buildDimString(expectedInputDims, 4)\n+\t\t\t   << \", Input Lux: \" << buildDimString(expectedInputLuxDims, 1)\n+\t\t\t   << \", Output: \" << buildDimString(expectedOutputDims, 1) << \" floats\";\n+}\n+\n+void AwbNN::initialise()\n+{\n+\tAwb::initialise();\n+\n+\tif (nnConfig_.enableNn) {\n+\t\tloadModel();\n+\t\tif (!nnConfig_.enableNn) {\n+\t\t\tLOG(RPiAwb, Warning) << \"Neural Network AWB failed to load - switch to Grey method\";\n+\t\t\tconfig_.greyWorld = true;\n+\t\t\tconfig_.sensitivityR = config_.sensitivityB = 1.0;\n+\t\t}\n+\t}\n+}\n+\n+void AwbNN::prepareStats()\n+{\n+\tzones_.clear();\n+\t/*\n+\t * LSC has already been applied to the 
stats in this pipeline, so stop\n+\t * any LSC compensation.  We also ignore config_.fast in this version.\n+\t */\n+\tgenerateStats(zones_, statistics_, 0.0, 0.0, getGlobalMetadata(), 0.0, 0.0, 0.0);\n+\t/*\n+\t * apply sensitivities, so values appear to come from our \"canonical\"\n+\t * sensor.\n+\t */\n+\tfor (auto &zone : zones_) {\n+\t\tzone.R *= config_.sensitivityR;\n+\t\tzone.B *= config_.sensitivityB;\n+\t}\n+}\n+\n+void AwbNN::transverseSearch(double t, double &r, double &b)\n+{\n+\tint spanR = -1, spanB = -1;\n+\tconfig_.ctR.eval(t, &spanR);\n+\tconfig_.ctB.eval(t, &spanB);\n+\n+\tconst int diff = 10;\n+\tdouble rDiff = config_.ctR.eval(t + diff, &spanR) -\n+\t\t       config_.ctR.eval(t - diff, &spanR);\n+\tdouble bDiff = config_.ctB.eval(t + diff, &spanB) -\n+\t\t       config_.ctB.eval(t - diff, &spanB);\n+\n+\tipa::Pwl::Point transverse({ bDiff, -rDiff });\n+\tif (transverse.length2() < 1e-6)\n+\t\treturn;\n+\n+\ttransverse = transverse / transverse.length();\n+\tdouble transverseRange = config_.transverseNeg + config_.transversePos;\n+\tconst int maxNumDeltas = 12;\n+\tint numDeltas = floor(transverseRange * 100 + 0.5) + 1;\n+\tnumDeltas = numDeltas < 3 ? 3 : (numDeltas > maxNumDeltas ? 
maxNumDeltas : numDeltas);\n+\n+\tipa::Pwl::Point points[maxNumDeltas];\n+\tint bestPoint = 0;\n+\n+\tfor (int i = 0; i < numDeltas; i++) {\n+\t\tpoints[i][0] = -config_.transverseNeg +\n+\t\t\t       (transverseRange * i) / (numDeltas - 1);\n+\t\tipa::Pwl::Point rbTest = ipa::Pwl::Point({ r, b }) +\n+\t\t\t\t\t transverse * points[i].x();\n+\t\tdouble rTest = rbTest.x(), bTest = rbTest.y();\n+\t\tdouble gainR = 1 / rTest, gainB = 1 / bTest;\n+\t\tdouble delta2Sum = computeDelta2Sum(gainR, gainB, 0.0, 0.0);\n+\t\tpoints[i][1] = delta2Sum;\n+\t\tif (points[i].y() < points[bestPoint].y())\n+\t\t\tbestPoint = i;\n+\t}\n+\n+\tbestPoint = std::clamp(bestPoint, 1, numDeltas - 2);\n+\tipa::Pwl::Point rbBest = ipa::Pwl::Point({ r, b }) +\n+\t\t\t\t transverse * interpolateQuadatric(points[bestPoint - 1],\n+\t\t\t\t\t\t\t\t   points[bestPoint],\n+\t\t\t\t\t\t\t\t   points[bestPoint + 1]);\n+\tdouble rBest = rbBest.x(), bBest = rbBest.y();\n+\n+\tr = rBest, b = bBest;\n+}\n+\n+AwbNN::RGB AwbNN::processZone(AwbNN::RGB zone, float redGain, float blueGain)\n+{\n+\t/*\n+\t * Renders the pixel at canonical network colour temperature\n+\t */\n+\tRGB zoneGains = zone;\n+\n+\tzoneGains.R *= redGain;\n+\tzoneGains.G *= 1.0;\n+\tzoneGains.B *= blueGain;\n+\n+\tRGB zoneCcm;\n+\n+\tzoneCcm.R = nnConfig_.ccm[0] * zoneGains.R + nnConfig_.ccm[1] * zoneGains.G + nnConfig_.ccm[2] * zoneGains.B;\n+\tzoneCcm.G = nnConfig_.ccm[3] * zoneGains.R + nnConfig_.ccm[4] * zoneGains.G + nnConfig_.ccm[5] * zoneGains.B;\n+\tzoneCcm.B = nnConfig_.ccm[6] * zoneGains.R + nnConfig_.ccm[7] * zoneGains.G + nnConfig_.ccm[8] * zoneGains.B;\n+\n+\treturn zoneCcm;\n+}\n+\n+void AwbNN::awbNN()\n+{\n+\tfloat *inputData = interpreter_->typed_input_tensor<float>(0);\n+\tfloat *inputLux = interpreter_->typed_input_tensor<float>(1);\n+\n+\tfloat redGain = 1.0 / config_.ctR.eval(kNetworkCanonicalCT);\n+\tfloat blueGain = 1.0 / config_.ctB.eval(kNetworkCanonicalCT);\n+\n+\tfor (uint i = 0; i < zoneSize_.height; i++) 
{\n+\t\tfor (uint j = 0; j < zoneSize_.width; j++) {\n+\t\t\tuint zoneIdx = i * zoneSize_.width + j;\n+\n+\t\t\tRGB processedZone = processZone(zones_[zoneIdx] * (1.0 / 65535), redGain, blueGain);\n+\t\t\tuint baseIdx = zoneIdx * 3;\n+\n+\t\t\tinputData[baseIdx + 0] = static_cast<float>(processedZone.R);\n+\t\t\tinputData[baseIdx + 1] = static_cast<float>(processedZone.G);\n+\t\t\tinputData[baseIdx + 2] = static_cast<float>(processedZone.B);\n+\t\t}\n+\t}\n+\n+\tinputLux[0] = static_cast<float>(lux_);\n+\n+\tTfLiteStatus status = interpreter_->Invoke();\n+\tif (status != kTfLiteOk) {\n+\t\tLOG(RPiAwb, Error) << \"Model inference failed with status: \" << status;\n+\t\treturn;\n+\t}\n+\n+\tfloat *outputData = interpreter_->typed_output_tensor<float>(0);\n+\n+\tdouble t = outputData[0];\n+\n+\tLOG(RPiAwb, Debug) << \"Model output temperature: \" << t;\n+\n+\tt = std::clamp(t, mode_->ctLo, mode_->ctHi);\n+\n+\tdouble r = config_.ctR.eval(t);\n+\tdouble b = config_.ctB.eval(t);\n+\n+\ttransverseSearch(t, r, b);\n+\n+\tLOG(RPiAwb, Debug) << \"After transverse search: Temperature: \" << t << \" Red gain: \" << 1.0 / r << \" Blue gain: \" << 1.0 / b;\n+\n+\tasyncResults_.temperatureK = t;\n+\tasyncResults_.gainR = 1.0 / r * config_.sensitivityR;\n+\tasyncResults_.gainG = 1.0;\n+\tasyncResults_.gainB = 1.0 / b * config_.sensitivityB;\n+}\n+\n+void AwbNN::doAwb()\n+{\n+\tprepareStats();\n+\tif (zones_.size() == (zoneSize_.width * zoneSize_.height) && nnConfig_.enableNn)\n+\t\tawbNN();\n+\telse\n+\t\tawbGrey();\n+\tstatistics_.reset();\n+}\n+\n+/* Register algorithm with the system. */\n+static Algorithm *create(Controller *controller)\n+{\n+\treturn (Algorithm *)new AwbNN(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &create);\n+\n+} /* namespace RPiController */\n","prefixes":["v3","2/4"]}
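For reference, `AwbNNConfig::read()` in the patch parses the keys `model`, `min_temp`, `max_temp`, `ccm` (nine values, row-major) and `enable_nn`, and requires the base AWB config to provide a CT curve (`hasCtCurve()`). A hypothetical tuning-file fragment exercising those keys might look like the following; the file path, the CCM coefficients and the CT-curve points are purely illustrative, not values from the patch or a real tuning:

```json
{
    "rpi.nn.awb":
    {
        "model": "/usr/share/libcamera/awb_model.tflite",
        "min_temp": 2800,
        "max_temp": 7600,
        "enable_nn": 1,
        "ccm": [  1.80, -0.60, -0.20,
                 -0.40,  1.70, -0.30,
                 -0.10, -0.70,  1.80 ],
        "ct_curve": [ 2500.0, 0.8, 0.4,
                      6000.0, 0.4, 0.8 ]
    }
}
```

If any of these checks fail (missing CT curve, zero CCM entries, non-`.tflite` model), the patch logs a warning and falls back to the grey-world method by setting `config.greyWorld = true`.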
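The per-zone preprocessing in the diff's `processZone()` amounts to the following standalone sketch: white-balance the zone with the gains for the network's canonical 5000K colour temperature (in the patch, `1 / ctR.eval(kNetworkCanonicalCT)` and `1 / ctB.eval(kNetworkCanonicalCT)`), then apply the row-major 3x3 CCM from the tuning. The `Rgb` struct and free-function name here are illustrative stand-ins, not libcamera types:

```cpp
#include <array>

/* A zone's mean R, G, B statistics, normalised to [0, 1]. */
struct Rgb {
	double r, g, b;
};

/*
 * Re-render a zone as if captured at the canonical colour temperature:
 * apply the white-balance gains for that CT (green gain fixed at 1.0),
 * then the 3x3 colour-correction matrix, row-major.
 */
Rgb renderAtCanonicalCt(const Rgb &zone, double redGain, double blueGain,
			const std::array<double, 9> &ccm)
{
	/* White balance first. */
	Rgb g{ zone.r * redGain, zone.g, zone.b * blueGain };

	/* Then colour correction: each output channel is one CCM row. */
	return {
		ccm[0] * g.r + ccm[1] * g.g + ccm[2] * g.b,
		ccm[3] * g.r + ccm[4] * g.g + ccm[5] * g.b,
		ccm[6] * g.r + ccm[7] * g.g + ccm[8] * g.b,
	};
}
```

In `awbNN()` the result of this transform, computed for every zone of the `height x width` statistics grid, fills the model's float32 input tensor (scaled by `1 / 65535` first), alongside the scalar lux input.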