{"id":3687,"url":"https://patchwork.libcamera.org/api/1.1/patches/3687/?format=json","web_url":"https://patchwork.libcamera.org/patch/3687/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20200504092829.10099-5-laurent.pinchart@ideasonboard.com>","date":"2020-05-04T09:28:27","name":"[libcamera-devel,4/6] libcamera: ipa: Raspberry Pi IPA","commit_ref":null,"pull_url":null,"state":"accepted","archived":false,"hash":"04cf22b1e921294f5273bb024eeaa1737a6e8361","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/1.1/people/2/?format=json","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/3687/mbox/","series":[{"id":880,"url":"https://patchwork.libcamera.org/api/1.1/series/880/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=880","date":"2020-05-04T09:28:23","name":"libcamera: Raspberry Pi camera support","version":1,"mbox":"https://patchwork.libcamera.org/series/880/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/3687/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/3687/checks/","tags":{},"headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id D324C616BB\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon,  4 May 2020 11:28:40 +0200 (CEST)","from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 6A853304;\n\tMon,  4 May 2020 11:28:39 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; 
dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"fcSWUhc4\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1588584520;\n\tbh=9lE8fvH7mu8XmMc8Nfp6vXrPoxD6Py9EUt5z7urq6vM=;\n\th=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n\tb=fcSWUhc4iH3IqQAwObbCJ671R/2mv5ZqO7T14UY5k2M4vmNa2qeEIENKHB56Yxffh\n\tPGNZaZLsytAskIJgaNzPl1UKSSLUJmLhx/A1szZGZEyvglk9i2nL+63AVCSkZleHph\n\t8XVxj7BZJzu2KbYJmumHhdJru50xsREzYjXhP4UQ=","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"libcamera-devel@lists.libcamera.org","Date":"Mon,  4 May 2020 12:28:27 +0300","Message-Id":"<20200504092829.10099-5-laurent.pinchart@ideasonboard.com>","X-Mailer":"git-send-email 2.26.2","In-Reply-To":"<20200504092829.10099-1-laurent.pinchart@ideasonboard.com>","References":"<20200504092829.10099-1-laurent.pinchart@ideasonboard.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-Mailman-Approved-At":"Mon, 04 May 2020 11:29:25 +0200","Subject":"[libcamera-devel] [PATCH 4/6] libcamera: ipa: Raspberry Pi IPA","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Mon, 04 May 2020 09:28:42 -0000"},"content":"From: Naushir Patuck <naush@raspberrypi.com>\n\nInitial implementation of the Raspberry Pi (BCM2835) libcamera IPA and\nassociated 
libraries.\n\nAll code is licensed under the BSD-2-Clause terms.\nCopyright (c) 2019-2020 Raspberry Pi Trading Ltd.\n\nSigned-off-by: Naushir Patuck <naush@raspberrypi.com>\nSigned-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\n---\n src/ipa/raspberrypi/README.md                 |   23 +\n src/ipa/raspberrypi/cam_helper.cpp            |  119 ++\n src/ipa/raspberrypi/cam_helper.hpp            |  102 ++\n src/ipa/raspberrypi/cam_helper_imx219.cpp     |  180 +++\n src/ipa/raspberrypi/cam_helper_imx477.cpp     |  162 +++\n src/ipa/raspberrypi/cam_helper_ov5647.cpp     |   89 ++\n .../raspberrypi/controller/agc_algorithm.hpp  |   28 +\n src/ipa/raspberrypi/controller/agc_status.h   |   39 +\n src/ipa/raspberrypi/controller/algorithm.cpp  |   47 +\n src/ipa/raspberrypi/controller/algorithm.hpp  |   62 +\n src/ipa/raspberrypi/controller/alsc_status.h  |   27 +\n .../raspberrypi/controller/awb_algorithm.hpp  |   22 +\n src/ipa/raspberrypi/controller/awb_status.h   |   26 +\n .../controller/black_level_status.h           |   23 +\n src/ipa/raspberrypi/controller/camera_mode.h  |   40 +\n .../raspberrypi/controller/ccm_algorithm.hpp  |   21 +\n src/ipa/raspberrypi/controller/ccm_status.h   |   22 +\n .../controller/contrast_algorithm.hpp         |   22 +\n .../raspberrypi/controller/contrast_status.h  |   31 +\n src/ipa/raspberrypi/controller/controller.cpp |  109 ++\n src/ipa/raspberrypi/controller/controller.hpp |   54 +\n .../raspberrypi/controller/device_status.h    |   30 +\n src/ipa/raspberrypi/controller/dpc_status.h   |   21 +\n src/ipa/raspberrypi/controller/geq_status.h   |   22 +\n src/ipa/raspberrypi/controller/histogram.cpp  |   64 +\n src/ipa/raspberrypi/controller/histogram.hpp  |   44 +\n src/ipa/raspberrypi/controller/logging.hpp    |   30 +\n src/ipa/raspberrypi/controller/lux_status.h   |   29 +\n src/ipa/raspberrypi/controller/metadata.hpp   |   77 ++\n src/ipa/raspberrypi/controller/noise_status.h |   22 +\n 
src/ipa/raspberrypi/controller/pwl.cpp        |  216 ++++\n src/ipa/raspberrypi/controller/pwl.hpp        |  109 ++\n src/ipa/raspberrypi/controller/rpi/agc.cpp    |  642 ++++++++++\n src/ipa/raspberrypi/controller/rpi/agc.hpp    |  123 ++\n src/ipa/raspberrypi/controller/rpi/alsc.cpp   |  705 +++++++++++\n src/ipa/raspberrypi/controller/rpi/alsc.hpp   |  104 ++\n src/ipa/raspberrypi/controller/rpi/awb.cpp    |  608 +++++++++\n src/ipa/raspberrypi/controller/rpi/awb.hpp    |  178 +++\n .../controller/rpi/black_level.cpp            |   56 +\n .../controller/rpi/black_level.hpp            |   30 +\n src/ipa/raspberrypi/controller/rpi/ccm.cpp    |  163 +++\n src/ipa/raspberrypi/controller/rpi/ccm.hpp    |   76 ++\n .../raspberrypi/controller/rpi/contrast.cpp   |  176 +++\n .../raspberrypi/controller/rpi/contrast.hpp   |   51 +\n src/ipa/raspberrypi/controller/rpi/dpc.cpp    |   49 +\n src/ipa/raspberrypi/controller/rpi/dpc.hpp    |   32 +\n src/ipa/raspberrypi/controller/rpi/geq.cpp    |   75 ++\n src/ipa/raspberrypi/controller/rpi/geq.hpp    |   34 +\n src/ipa/raspberrypi/controller/rpi/lux.cpp    |  104 ++\n src/ipa/raspberrypi/controller/rpi/lux.hpp    |   42 +\n src/ipa/raspberrypi/controller/rpi/noise.cpp  |   71 ++\n src/ipa/raspberrypi/controller/rpi/noise.hpp  |   32 +\n src/ipa/raspberrypi/controller/rpi/sdn.cpp    |   63 +\n src/ipa/raspberrypi/controller/rpi/sdn.hpp    |   29 +\n .../raspberrypi/controller/rpi/sharpen.cpp    |   60 +\n .../raspberrypi/controller/rpi/sharpen.hpp    |   32 +\n src/ipa/raspberrypi/controller/sdn_status.h   |   23 +\n .../raspberrypi/controller/sharpen_status.h   |   26 +\n src/ipa/raspberrypi/data/imx219.json          |  401 ++++++\n src/ipa/raspberrypi/data/imx477.json          |  416 +++++++\n src/ipa/raspberrypi/data/meson.build          |    9 +\n src/ipa/raspberrypi/data/ov5647.json          |  398 ++++++\n src/ipa/raspberrypi/data/uncalibrated.json    |   82 ++\n src/ipa/raspberrypi/md_parser.cpp             |  101 ++\n 
src/ipa/raspberrypi/md_parser.hpp             |  123 ++\n src/ipa/raspberrypi/md_parser_rpi.cpp         |   37 +\n src/ipa/raspberrypi/md_parser_rpi.hpp         |   32 +\n src/ipa/raspberrypi/meson.build               |   59 +\n src/ipa/raspberrypi/raspberrypi.cpp           | 1081 +++++++++++++++++\n 69 files changed, 8235 insertions(+)\n create mode 100644 src/ipa/raspberrypi/README.md\n create mode 100644 src/ipa/raspberrypi/cam_helper.cpp\n create mode 100644 src/ipa/raspberrypi/cam_helper.hpp\n create mode 100644 src/ipa/raspberrypi/cam_helper_imx219.cpp\n create mode 100644 src/ipa/raspberrypi/cam_helper_imx477.cpp\n create mode 100644 src/ipa/raspberrypi/cam_helper_ov5647.cpp\n create mode 100644 src/ipa/raspberrypi/controller/agc_algorithm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/agc_status.h\n create mode 100644 src/ipa/raspberrypi/controller/algorithm.cpp\n create mode 100644 src/ipa/raspberrypi/controller/algorithm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/alsc_status.h\n create mode 100644 src/ipa/raspberrypi/controller/awb_algorithm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/awb_status.h\n create mode 100644 src/ipa/raspberrypi/controller/black_level_status.h\n create mode 100644 src/ipa/raspberrypi/controller/camera_mode.h\n create mode 100644 src/ipa/raspberrypi/controller/ccm_algorithm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/ccm_status.h\n create mode 100644 src/ipa/raspberrypi/controller/contrast_algorithm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/contrast_status.h\n create mode 100644 src/ipa/raspberrypi/controller/controller.cpp\n create mode 100644 src/ipa/raspberrypi/controller/controller.hpp\n create mode 100644 src/ipa/raspberrypi/controller/device_status.h\n create mode 100644 src/ipa/raspberrypi/controller/dpc_status.h\n create mode 100644 src/ipa/raspberrypi/controller/geq_status.h\n create mode 100644 src/ipa/raspberrypi/controller/histogram.cpp\n create mode 100644 
src/ipa/raspberrypi/controller/histogram.hpp\n create mode 100644 src/ipa/raspberrypi/controller/logging.hpp\n create mode 100644 src/ipa/raspberrypi/controller/lux_status.h\n create mode 100644 src/ipa/raspberrypi/controller/metadata.hpp\n create mode 100644 src/ipa/raspberrypi/controller/noise_status.h\n create mode 100644 src/ipa/raspberrypi/controller/pwl.cpp\n create mode 100644 src/ipa/raspberrypi/controller/pwl.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/agc.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/agc.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/alsc.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/alsc.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/awb.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/awb.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/black_level.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/black_level.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/ccm.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/ccm.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/contrast.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/contrast.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/dpc.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/dpc.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/geq.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/geq.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/lux.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/lux.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/noise.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/noise.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/sdn.cpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/sdn.hpp\n create mode 100644 src/ipa/raspberrypi/controller/rpi/sharpen.cpp\n create mode 100644 
src/ipa/raspberrypi/controller/rpi/sharpen.hpp\n create mode 100644 src/ipa/raspberrypi/controller/sdn_status.h\n create mode 100644 src/ipa/raspberrypi/controller/sharpen_status.h\n create mode 100644 src/ipa/raspberrypi/data/imx219.json\n create mode 100644 src/ipa/raspberrypi/data/imx477.json\n create mode 100644 src/ipa/raspberrypi/data/meson.build\n create mode 100644 src/ipa/raspberrypi/data/ov5647.json\n create mode 100644 src/ipa/raspberrypi/data/uncalibrated.json\n create mode 100644 src/ipa/raspberrypi/md_parser.cpp\n create mode 100644 src/ipa/raspberrypi/md_parser.hpp\n create mode 100644 src/ipa/raspberrypi/md_parser_rpi.cpp\n create mode 100644 src/ipa/raspberrypi/md_parser_rpi.hpp\n create mode 100644 src/ipa/raspberrypi/meson.build\n create mode 100644 src/ipa/raspberrypi/raspberrypi.cpp","diff":"diff --git a/src/ipa/raspberrypi/README.md b/src/ipa/raspberrypi/README.md\nnew file mode 100644\nindex 000000000000..68bdff12fbeb\n--- /dev/null\n+++ b/src/ipa/raspberrypi/README.md\n@@ -0,0 +1,23 @@\n+# _libcamera_ for the Raspberry Pi\n+\n+Raspberry Pi provides a fully featured pipeline handler and control algorithms\n+(IPAs, or \"Image Processing Algorithms\") to work with _libcamera_. Support is\n+included for all existing Raspberry Pi camera modules.\n+\n+_libcamera_ for the Raspberry Pi allows users to:\n+\n+1. Use their existing Raspberry Pi cameras.\n+1. Change the tuning of the image processing for their Raspberry Pi cameras.\n+1. Alter or amend the control algorithms (such as AGC/AEC, AWB or any others)\n+   that control the sensor and ISP.\n+1. Implement their own custom control algorithms.\n+1. 
Supply new tunings and/or algorithms for completely new sensors.\n+\n+## How to install and run _libcamera_ on the Raspberry Pi\n+\n+Please follow the instructions [here](https://github.com/raspberrypi/documentation/tree/master/linux/software/libcamera/README.md).\n+\n+## Documentation\n+\n+Full documentation for the _Raspberry Pi Camera Algorithm and Tuning Guide_ can\n+be found [here](https://github.com/raspberrypi/documentation/tree/master/linux/software/libcamera/rpi_SOFT_libcamera_1p0.pdf).\ndiff --git a/src/ipa/raspberrypi/cam_helper.cpp b/src/ipa/raspberrypi/cam_helper.cpp\nnew file mode 100644\nindex 000000000000..167508f7bf38\n--- /dev/null\n+++ b/src/ipa/raspberrypi/cam_helper.cpp\n@@ -0,0 +1,119 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * cam_helper.cpp - helper information for different sensors\n+ */\n+\n+#include <linux/videodev2.h>\n+\n+#include <assert.h>\n+#include <map>\n+#include <string.h>\n+\n+#include \"cam_helper.hpp\"\n+#include \"md_parser.hpp\"\n+\n+#include \"v4l2_videodevice.h\"\n+\n+using namespace RPi;\n+\n+static std::map<std::string, CamHelperCreateFunc> cam_helpers;\n+\n+CamHelper *CamHelper::Create(std::string const &cam_name)\n+{\n+\t/*\n+\t * CamHelpers get registered by static RegisterCamHelper\n+\t * initialisers.\n+\t */\n+\tfor (auto &p : cam_helpers) {\n+\t\tif (cam_name.find(p.first) != std::string::npos)\n+\t\t\treturn p.second();\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+CamHelper::CamHelper(MdParser *parser)\n+\t: parser_(parser), initialized_(false)\n+{\n+}\n+\n+CamHelper::~CamHelper()\n+{\n+\tdelete parser_;\n+}\n+\n+uint32_t CamHelper::ExposureLines(double exposure_us) const\n+{\n+\tassert(initialized_);\n+\treturn exposure_us * 1000.0 / mode_.line_length;\n+}\n+\n+double CamHelper::Exposure(uint32_t exposure_lines) const\n+{\n+\tassert(initialized_);\n+\treturn exposure_lines * mode_.line_length / 1000.0;\n+}\n+\n+void CamHelper::SetCameraMode(const 
CameraMode &mode)\n+{\n+\tmode_ = mode;\n+\tparser_->SetBitsPerPixel(mode.bitdepth);\n+\tparser_->SetLineLengthBytes(0); /* We use SetBufferSize. */\n+\tinitialized_ = true;\n+}\n+\n+void CamHelper::GetDelays(int &exposure_delay, int &gain_delay) const\n+{\n+\t/*\n+\t * These values are correct for many sensors. Other sensors will\n+\t * need to over-ride this method.\n+\t */\n+\texposure_delay = 2;\n+\tgain_delay = 1;\n+}\n+\n+bool CamHelper::SensorEmbeddedDataPresent() const\n+{\n+\treturn false;\n+}\n+\n+unsigned int CamHelper::HideFramesStartup() const\n+{\n+\t/*\n+\t * By default, hide 6 frames completely at start-up while AGC etc. sort\n+\t * themselves out (converge).\n+\t */\n+\treturn 6;\n+}\n+\n+unsigned int CamHelper::HideFramesModeSwitch() const\n+{\n+\t/* After a mode switch, many sensors return valid frames immediately. */\n+\treturn 0;\n+}\n+\n+unsigned int CamHelper::MistrustFramesStartup() const\n+{\n+\t/* Many sensors return a single bad frame on start-up. */\n+\treturn 1;\n+}\n+\n+unsigned int CamHelper::MistrustFramesModeSwitch() const\n+{\n+\t/* Many sensors return valid metadata immediately. */\n+\treturn 0;\n+}\n+\n+CamTransform CamHelper::GetOrientation() const\n+{\n+\t/* Most sensors will be mounted the \"right\" way up? 
*/\n+\treturn CamTransform_IDENTITY;\n+}\n+\n+RegisterCamHelper::RegisterCamHelper(char const *cam_name,\n+\t\t\t\t     CamHelperCreateFunc create_func)\n+{\n+\tcam_helpers[std::string(cam_name)] = create_func;\n+}\ndiff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\nnew file mode 100644\nindex 000000000000..0c8aa29a4cff\n--- /dev/null\n+++ b/src/ipa/raspberrypi/cam_helper.hpp\n@@ -0,0 +1,102 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * cam_helper.hpp - helper class providing camera information\n+ */\n+#pragma once\n+\n+#include <string>\n+\n+#include \"camera_mode.h\"\n+#include \"md_parser.hpp\"\n+\n+#include \"v4l2_videodevice.h\"\n+\n+namespace RPi {\n+\n+// The CamHelper class provides a number of facilities that anyone trying\n+// to drive a camera will need to know, but which are not provided by\n+// the standard driver framework. Specifically, it provides:\n+//\n+// A \"CameraMode\" structure to describe extra information about the chosen\n+// mode of the driver. For example, how it is cropped from the full sensor\n+// area, how it is scaled, whether pixels are averaged compared to the full\n+// resolution.\n+//\n+// The ability to convert between number of lines of exposure and actual\n+// exposure time, and to convert between the sensor's gain codes and actual\n+// gains.\n+//\n+// A method to return the number of frames of delay between updating exposure\n+// and analogue gain and the changes taking effect. For many sensors these\n+// take the values 2 and 1 respectively, but sensors that are different will\n+// need to over-ride the default method provided.\n+//\n+// A method to query if the sensor outputs embedded data that can be parsed.\n+//\n+// A parser to parse the metadata buffers provided by some sensors (for\n+// example, the imx219 does; the ov5647 doesn't). 
This allows us to know for\n+// sure the exposure and gain of the frame we're looking at. CamHelper\n+// provides methods for converting analogue gains to and from the sensor's\n+// native gain codes.\n+//\n+// Finally, a set of methods that determine how to handle the vagaries of\n+// different camera modules on start-up or when switching modes. Some\n+// modules may produce one or more frames that are not yet correctly exposed,\n+// or where the metadata may be suspect. We have the following methods:\n+// HideFramesStartup(): Tell the pipeline handler not to return this many\n+//     frames at start-up. This can also be used to hide initial frames\n+//     while the AGC and other algorithms are sorting themselves out.\n+// HideFramesModeSwitch(): Tell the pipeline handler not to return this\n+//     many frames after a mode switch (other than start-up). Some sensors\n+//     may produce invalid frames after a mode switch; others may not.\n+// MistrustFramesStartup(): At start-up a sensor may return frames for\n+//    which we should not run any control algorithms (for example, metadata\n+//    may be invalid).\n+// MistrustFramesModeSwitch(): The number of frames, after a mode switch\n+//    (other than start-up), for which control algorithms should not run\n+//    (for example, metadata may be unreliable).\n+\n+// Bitfield to represent the default orientation of the camera.\n+typedef int CamTransform;\n+static constexpr CamTransform CamTransform_IDENTITY = 0;\n+static constexpr CamTransform CamTransform_HFLIP    = 1;\n+static constexpr CamTransform CamTransform_VFLIP    = 2;\n+\n+class CamHelper\n+{\n+public:\n+\tstatic CamHelper *Create(std::string const &cam_name);\n+\tCamHelper(MdParser *parser);\n+\tvirtual ~CamHelper();\n+\tvoid SetCameraMode(const CameraMode &mode);\n+\tMdParser &Parser() const { return *parser_; }\n+\tuint32_t ExposureLines(double exposure_us) const;\n+\tdouble Exposure(uint32_t exposure_lines) const; // in us\n+\tvirtual uint32_t 
GainCode(double gain) const = 0;\n+\tvirtual double Gain(uint32_t gain_code) const = 0;\n+\tvirtual void GetDelays(int &exposure_delay, int &gain_delay) const;\n+\tvirtual bool SensorEmbeddedDataPresent() const;\n+\tvirtual unsigned int HideFramesStartup() const;\n+\tvirtual unsigned int HideFramesModeSwitch() const;\n+\tvirtual unsigned int MistrustFramesStartup() const;\n+\tvirtual unsigned int MistrustFramesModeSwitch() const;\n+\tvirtual CamTransform GetOrientation() const;\n+protected:\n+\tMdParser *parser_;\n+\tCameraMode mode_;\n+\tbool initialized_;\n+};\n+\n+// This is for registering camera helpers with the system, so that the\n+// CamHelper::Create function picks them up automatically.\n+\n+typedef CamHelper *(*CamHelperCreateFunc)();\n+struct RegisterCamHelper\n+{\n+\tRegisterCamHelper(char const *cam_name,\n+\t\t\t  CamHelperCreateFunc create_func);\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp\nnew file mode 100644\nindex 000000000000..35c6597c2016\n--- /dev/null\n+++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n@@ -0,0 +1,180 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * cam_helper_imx219.cpp - camera helper for imx219 sensor\n+ */\n+\n+#include <assert.h>\n+#include <stddef.h>\n+#include <stdio.h>\n+#include <stdlib.h>\n+\n+/*\n+ * We have observed the imx219 embedded data stream randomly return junk\n+ * register values.  Do not rely on embedded data until this has been resolved.\n+ */\n+#define ENABLE_EMBEDDED_DATA 0\n+\n+#include \"cam_helper.hpp\"\n+#if ENABLE_EMBEDDED_DATA\n+#include \"md_parser.hpp\"\n+#else\n+#include \"md_parser_rpi.hpp\"\n+#endif\n+\n+using namespace RPi;\n+\n+/* Metadata parser implementation specific to Sony IMX219 sensors. 
*/\n+\n+class MdParserImx219 : public MdParserSmia\n+{\n+public:\n+\tMdParserImx219();\n+\tStatus Parse(void *data) override;\n+\tStatus GetExposureLines(unsigned int &lines) override;\n+\tStatus GetGainCode(unsigned int &gain_code) override;\n+private:\n+\t/* Offset of the register's value in the metadata block. */\n+\tint reg_offsets_[3];\n+\t/* Value of the register, once read from the metadata block. */\n+\tint reg_values_[3];\n+};\n+\n+class CamHelperImx219 : public CamHelper\n+{\n+public:\n+\tCamHelperImx219();\n+\tuint32_t GainCode(double gain) const override;\n+\tdouble Gain(uint32_t gain_code) const override;\n+\tunsigned int MistrustFramesModeSwitch() const override;\n+\tbool SensorEmbeddedDataPresent() const override;\n+\tCamTransform GetOrientation() const override;\n+};\n+\n+CamHelperImx219::CamHelperImx219()\n+#if ENABLE_EMBEDDED_DATA\n+\t: CamHelper(new MdParserImx219())\n+#else\n+\t: CamHelper(new MdParserRPi())\n+#endif\n+{\n+}\n+\n+uint32_t CamHelperImx219::GainCode(double gain) const\n+{\n+\treturn (uint32_t)(256 - 256 / gain);\n+}\n+\n+double CamHelperImx219::Gain(uint32_t gain_code) const\n+{\n+\treturn 256.0 / (256 - gain_code);\n+}\n+\n+unsigned int CamHelperImx219::MistrustFramesModeSwitch() const\n+{\n+\t/*\n+\t * For reasons unknown, we do occasionally get a bogus metadata frame\n+\t * at a mode switch (though not at start-up). Possibly warrants some\n+\t * investigation, though not a big deal.\n+\t */\n+\treturn 1;\n+}\n+\n+bool CamHelperImx219::SensorEmbeddedDataPresent() const\n+{\n+\treturn ENABLE_EMBEDDED_DATA;\n+}\n+\n+CamTransform CamHelperImx219::GetOrientation() const\n+{\n+\t/* Camera is \"upside down\" on this board. */\n+\treturn CamTransform_HFLIP | CamTransform_VFLIP;\n+}\n+\n+static CamHelper *Create()\n+{\n+\treturn new CamHelperImx219();\n+}\n+\n+static RegisterCamHelper reg(\"imx219\", &Create);\n+\n+/*\n+ * We care about one gain register and a pair of exposure registers. 
Their I2C\n+ * addresses from the Sony IMX219 datasheet:\n+ */\n+#define GAIN_REG 0x157\n+#define EXPHI_REG 0x15A\n+#define EXPLO_REG 0x15B\n+\n+/*\n+ * Index of each into the reg_offsets and reg_values arrays. Must be in\n+ * register address order.\n+ */\n+#define GAIN_INDEX 0\n+#define EXPHI_INDEX 1\n+#define EXPLO_INDEX 2\n+\n+MdParserImx219::MdParserImx219()\n+{\n+\treg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = -1;\n+}\n+\n+MdParser::Status MdParserImx219::Parse(void *data)\n+{\n+\tbool try_again = false;\n+\n+\tif (reset_) {\n+\t\t/*\n+\t\t * Search again through the metadata for the gain and exposure\n+\t\t * registers.\n+\t\t */\n+\t\tassert(bits_per_pixel_);\n+\t\tassert(num_lines_ || buffer_size_bytes_);\n+\t\t/* Need to be ordered */\n+\t\tuint32_t regs[3] = { GAIN_REG, EXPHI_REG, EXPLO_REG };\n+\t\treg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = -1;\n+\t\tint ret = static_cast<int>(findRegs(static_cast<uint8_t *>(data),\n+\t\t\t\t\t\t    regs, reg_offsets_, 3));\n+\t\t/*\n+\t\t * > 0 means \"worked partially but parse again next time\",\n+\t\t * < 0 means \"hard error\".\n+\t\t */\n+\t\tif (ret > 0)\n+\t\t\ttry_again = true;\n+\t\telse if (ret < 0)\n+\t\t\treturn ERROR;\n+\t}\n+\n+\tfor (int i = 0; i < 3; i++) {\n+\t\tif (reg_offsets_[i] == -1)\n+\t\t\tcontinue;\n+\n+\t\treg_values_[i] = static_cast<uint8_t *>(data)[reg_offsets_[i]];\n+\t}\n+\n+\t/* Re-parse next time if we were unhappy in some way. 
*/\n+\treset_ = try_again;\n+\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserImx219::GetExposureLines(unsigned int &lines)\n+{\n+\tif (reg_offsets_[EXPHI_INDEX] == -1 || reg_offsets_[EXPLO_INDEX] == -1)\n+\t\treturn NOTFOUND;\n+\n+\tlines = reg_values_[EXPHI_INDEX] * 256 + reg_values_[EXPLO_INDEX];\n+\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserImx219::GetGainCode(unsigned int &gain_code)\n+{\n+\tif (reg_offsets_[GAIN_INDEX] == -1)\n+\t\treturn NOTFOUND;\n+\n+\tgain_code = reg_values_[GAIN_INDEX];\n+\n+\treturn OK;\n+}\ndiff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp\nnew file mode 100644\nindex 000000000000..695444567bb2\n--- /dev/null\n+++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n@@ -0,0 +1,162 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2020, Raspberry Pi (Trading) Limited\n+ *\n+ * cam_helper_imx477.cpp - camera helper for imx477 sensor\n+ */\n+\n+#include <assert.h>\n+#include <stddef.h>\n+#include <stdio.h>\n+#include <stdlib.h>\n+\n+#include \"cam_helper.hpp\"\n+#include \"md_parser.hpp\"\n+\n+using namespace RPi;\n+\n+/* Metadata parser implementation specific to Sony IMX477 sensors. */\n+\n+class MdParserImx477 : public MdParserSmia\n+{\n+public:\n+\tMdParserImx477();\n+\tStatus Parse(void *data) override;\n+\tStatus GetExposureLines(unsigned int &lines) override;\n+\tStatus GetGainCode(unsigned int &gain_code) override;\n+private:\n+\t/* Offset of the register's value in the metadata block. */\n+\tint reg_offsets_[4];\n+\t/* Value of the register, once read from the metadata block. 
*/\n+\tint reg_values_[4];\n+};\n+\n+class CamHelperImx477 : public CamHelper\n+{\n+public:\n+\tCamHelperImx477();\n+\tuint32_t GainCode(double gain) const override;\n+\tdouble Gain(uint32_t gain_code) const override;\n+\tbool SensorEmbeddedDataPresent() const override;\n+\tCamTransform GetOrientation() const override;\n+};\n+\n+CamHelperImx477::CamHelperImx477()\n+\t: CamHelper(new MdParserImx477())\n+{\n+}\n+\n+uint32_t CamHelperImx477::GainCode(double gain) const\n+{\n+\treturn static_cast<uint32_t>(1024 - 1024 / gain);\n+}\n+\n+double CamHelperImx477::Gain(uint32_t gain_code) const\n+{\n+\treturn 1024.0 / (1024 - gain_code);\n+}\n+\n+bool CamHelperImx477::SensorEmbeddedDataPresent() const\n+{\n+\treturn true;\n+}\n+\n+CamTransform CamHelperImx477::GetOrientation() const\n+{\n+\t/* Camera is \"upside down\" on this board. */\n+\treturn CamTransform_HFLIP | CamTransform_VFLIP;\n+}\n+\n+static CamHelper *Create()\n+{\n+\treturn new CamHelperImx477();\n+}\n+\n+static RegisterCamHelper reg(\"imx477\", &Create);\n+\n+/*\n+ * We care about two gain registers and a pair of exposure registers. Their\n+ * I2C addresses from the Sony IMX477 datasheet:\n+ */\n+#define EXPHI_REG 0x0202\n+#define EXPLO_REG 0x0203\n+#define GAINHI_REG 0x0204\n+#define GAINLO_REG 0x0205\n+\n+/*\n+ * Index of each into the reg_offsets and reg_values arrays. 
Must be in register\n+ * address order.\n+ */\n+#define EXPHI_INDEX 0\n+#define EXPLO_INDEX 1\n+#define GAINHI_INDEX 2\n+#define GAINLO_INDEX 3\n+\n+MdParserImx477::MdParserImx477()\n+{\n+\treg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = reg_offsets_[3] = -1;\n+}\n+\n+MdParser::Status MdParserImx477::Parse(void *data)\n+{\n+\tbool try_again = false;\n+\n+\tif (reset_) {\n+\t\t/*\n+\t\t * Search again through the metadata for the gain and exposure\n+\t\t * registers.\n+\t\t */\n+\t\tassert(bits_per_pixel_);\n+\t\tassert(num_lines_ || buffer_size_bytes_);\n+\t\t/* Need to be ordered */\n+\t\tuint32_t regs[4] = {\n+\t\t\tEXPHI_REG,\n+\t\t\tEXPLO_REG,\n+\t\t\tGAINHI_REG,\n+\t\t\tGAINLO_REG\n+\t\t};\n+\t\treg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = reg_offsets_[3] = -1;\n+\t\tint ret = static_cast<int>(findRegs(static_cast<uint8_t *>(data),\n+\t\t\t\t\t\t    regs, reg_offsets_, 4));\n+\t\t/*\n+\t\t * > 0 means \"worked partially but parse again next time\",\n+\t\t * < 0 means \"hard error\".\n+\t\t */\n+\t\tif (ret > 0)\n+\t\t\ttry_again = true;\n+\t\telse if (ret < 0)\n+\t\t\treturn ERROR;\n+\t}\n+\n+\tfor (int i = 0; i < 4; i++) {\n+\t\tif (reg_offsets_[i] == -1)\n+\t\t\tcontinue;\n+\n+\t\treg_values_[i] = static_cast<uint8_t *>(data)[reg_offsets_[i]];\n+\t}\n+\n+\t/* Re-parse next time if we were unhappy in some way. 
*/\n+\treset_ = try_again;\n+\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserImx477::GetExposureLines(unsigned int &lines)\n+{\n+\tif (reg_offsets_[EXPHI_INDEX] == -1 || reg_offsets_[EXPLO_INDEX] == -1)\n+\t\treturn NOTFOUND;\n+\n+\tlines = reg_values_[EXPHI_INDEX] * 256 + reg_values_[EXPLO_INDEX];\n+\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserImx477::GetGainCode(unsigned int &gain_code)\n+{\n+\tif (reg_offsets_[GAINHI_INDEX] == -1 || reg_offsets_[GAINLO_INDEX] == -1)\n+\t\treturn NOTFOUND;\n+\n+\tgain_code = reg_values_[GAINHI_INDEX] * 256 + reg_values_[GAINLO_INDEX];\n+\n+\treturn OK;\n+}\ndiff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\nnew file mode 100644\nindex 000000000000..3dbcb164451f\n--- /dev/null\n+++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n@@ -0,0 +1,89 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * cam_helper_ov5647.cpp - camera information for ov5647 sensor\n+ */\n+\n+#include <assert.h>\n+\n+#include \"cam_helper.hpp\"\n+#include \"md_parser_rpi.hpp\"\n+\n+using namespace RPi;\n+\n+class CamHelperOv5647 : public CamHelper\n+{\n+public:\n+\tCamHelperOv5647();\n+\tuint32_t GainCode(double gain) const override;\n+\tdouble Gain(uint32_t gain_code) const override;\n+\tvoid GetDelays(int &exposure_delay, int &gain_delay) const override;\n+\tunsigned int HideFramesModeSwitch() const override;\n+\tunsigned int MistrustFramesStartup() const override;\n+\tunsigned int MistrustFramesModeSwitch() const override;\n+};\n+\n+/*\n+ * OV5647 doesn't output metadata, so we have to use the \"unicam parser\" which\n+ * works by counting frames.\n+ */\n+\n+CamHelperOv5647::CamHelperOv5647()\n+\t: CamHelper(new MdParserRPi())\n+{\n+}\n+\n+uint32_t CamHelperOv5647::GainCode(double gain) const\n+{\n+\treturn static_cast<uint32_t>(gain * 16.0);\n+}\n+\n+double CamHelperOv5647::Gain(uint32_t gain_code) const\n+{\n+\treturn 
static_cast<double>(gain_code) / 16.0;\n+}\n+\n+void CamHelperOv5647::GetDelays(int &exposure_delay, int &gain_delay) const\n+{\n+\t/*\n+\t * We run this sensor in a mode where the gain delay is bumped up to\n+\t * 2. It seems to be the only way to make the delays \"predictable\".\n+\t */\n+\texposure_delay = 2;\n+\tgain_delay = 2;\n+}\n+\n+unsigned int CamHelperOv5647::HideFramesModeSwitch() const\n+{\n+\t/*\n+\t * After a mode switch, we get a couple of under-exposed frames which\n+\t * we don't want shown.\n+\t */\n+\treturn 2;\n+}\n+\n+unsigned int CamHelperOv5647::MistrustFramesStartup() const\n+{\n+\t/*\n+\t * First couple of frames are under-exposed and are no good for control\n+\t * algos.\n+\t */\n+\treturn 2;\n+}\n+\n+unsigned int CamHelperOv5647::MistrustFramesModeSwitch() const\n+{\n+\t/*\n+\t * First couple of frames are under-exposed even after a simple\n+\t * mode switch, and are no good for control algos.\n+\t */\n+\treturn 2;\n+}\n+\n+static CamHelper *Create()\n+{\n+\treturn new CamHelperOv5647();\n+}\n+\n+static RegisterCamHelper reg(\"ov5647\", &Create);\ndiff --git a/src/ipa/raspberrypi/controller/agc_algorithm.hpp b/src/ipa/raspberrypi/controller/agc_algorithm.hpp\nnew file mode 100644\nindex 000000000000..f29bb3ac941c\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/agc_algorithm.hpp\n@@ -0,0 +1,28 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * agc_algorithm.hpp - AGC/AEC control algorithm interface\n+ */\n+#pragma once\n+\n+#include \"algorithm.hpp\"\n+\n+namespace RPi {\n+\n+class AgcAlgorithm : public Algorithm\n+{\n+public:\n+\tAgcAlgorithm(Controller *controller) : Algorithm(controller) {}\n+\t// An AGC algorithm must provide the following:\n+\tvirtual void SetEv(double ev) = 0;\n+\tvirtual void SetFlickerPeriod(double flicker_period) = 0;\n+\tvirtual void SetFixedShutter(double fixed_shutter) = 0; // microseconds\n+\tvirtual void SetFixedAnalogueGain(double 
fixed_analogue_gain) = 0;\n+\tvirtual void SetMeteringMode(std::string const &metering_mode_name) = 0;\n+\tvirtual void SetExposureMode(std::string const &exposure_mode_name) = 0;\n+\tvirtual void\n+\tSetConstraintMode(std::string const &constraint_mode_name) = 0;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/agc_status.h b/src/ipa/raspberrypi/controller/agc_status.h\nnew file mode 100644\nindex 000000000000..10381c90a313\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/agc_status.h\n@@ -0,0 +1,39 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * agc_status.h - AGC/AEC control algorithm status\n+ */\n+#pragma once\n+\n+// The AGC algorithm should post the following structure into the image's\n+// \"agc.status\" metadata.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+// Note: total_exposure_value will be reported as zero until the algorithm has\n+// seen statistics and calculated meaningful values. 
The contents should be\n+// ignored until then.\n+\n+struct AgcStatus {\n+\tdouble total_exposure_value; // value for all exposure and gain for this image\n+\tdouble target_exposure_value; // (unfiltered) target total exposure AGC is aiming for\n+\tdouble shutter_time;\n+\tdouble analogue_gain;\n+\tchar exposure_mode[32];\n+\tchar constraint_mode[32];\n+\tchar metering_mode[32];\n+\tdouble ev;\n+\tdouble flicker_period;\n+\tint floating_region_enable;\n+\tdouble fixed_shutter;\n+\tdouble fixed_analogue_gain;\n+\tdouble digital_gain;\n+\tint locked;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/algorithm.cpp b/src/ipa/raspberrypi/controller/algorithm.cpp\nnew file mode 100644\nindex 000000000000..9bd3df8615f8\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/algorithm.cpp\n@@ -0,0 +1,47 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * algorithm.cpp - ISP control algorithms\n+ */\n+\n+#include \"algorithm.hpp\"\n+\n+using namespace RPi;\n+\n+void Algorithm::Read(boost::property_tree::ptree const &params)\n+{\n+\t(void)params;\n+}\n+\n+void Algorithm::Initialise() {}\n+\n+void Algorithm::SwitchMode(CameraMode const &camera_mode)\n+{\n+\t(void)camera_mode;\n+}\n+\n+void Algorithm::Prepare(Metadata *image_metadata)\n+{\n+\t(void)image_metadata;\n+}\n+\n+void Algorithm::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\t(void)stats;\n+\t(void)image_metadata;\n+}\n+\n+// For registering algorithms with the system:\n+\n+static std::map<std::string, AlgoCreateFunc> algorithms;\n+std::map<std::string, AlgoCreateFunc> const &RPi::GetAlgorithms()\n+{\n+\treturn algorithms;\n+}\n+\n+RegisterAlgorithm::RegisterAlgorithm(char const *name,\n+\t\t\t\t     AlgoCreateFunc create_func)\n+{\n+\talgorithms[std::string(name)] = create_func;\n+}\ndiff --git a/src/ipa/raspberrypi/controller/algorithm.hpp b/src/ipa/raspberrypi/controller/algorithm.hpp\nnew file 
mode 100644\nindex 000000000000..b82c184168ce\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/algorithm.hpp\n@@ -0,0 +1,62 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * algorithm.hpp - ISP control algorithm interface\n+ */\n+#pragma once\n+\n+// All algorithms should be derived from this class and made available to the\n+// Controller.\n+\n+#include <string>\n+#include <memory>\n+#include <map>\n+#include <atomic>\n+\n+#include \"logging.hpp\"\n+#include \"controller.hpp\"\n+\n+#include <boost/property_tree/ptree.hpp>\n+\n+namespace RPi {\n+\n+// This defines the basic interface for all control algorithms.\n+\n+class Algorithm\n+{\n+public:\n+\tAlgorithm(Controller *controller)\n+\t\t: controller_(controller), paused_(false)\n+\t{\n+\t}\n+\tvirtual ~Algorithm() {}\n+\tvirtual char const *Name() const = 0;\n+\tvirtual bool IsPaused() const { return paused_; }\n+\tvirtual void Pause() { paused_ = true; }\n+\tvirtual void Resume() { paused_ = false; }\n+\tvirtual void Read(boost::property_tree::ptree const &params);\n+\tvirtual void Initialise();\n+\tvirtual void SwitchMode(CameraMode const &camera_mode);\n+\tvirtual void Prepare(Metadata *image_metadata);\n+\tvirtual void Process(StatisticsPtr &stats, Metadata *image_metadata);\n+\tMetadata &GetGlobalMetadata() const\n+\t{\n+\t\treturn controller_->GetGlobalMetadata();\n+\t}\n+\n+private:\n+\tController *controller_;\n+\tstd::atomic<bool> paused_;\n+};\n+\n+// This code is for automatic registration of Front End algorithms with the\n+// system.\n+\n+typedef Algorithm *(*AlgoCreateFunc)(Controller *controller);\n+struct RegisterAlgorithm {\n+\tRegisterAlgorithm(char const *name, AlgoCreateFunc create_func);\n+};\n+std::map<std::string, AlgoCreateFunc> const &GetAlgorithms();\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/alsc_status.h b/src/ipa/raspberrypi/controller/alsc_status.h\nnew file mode 100644\nindex 
000000000000..d3f579715594\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/alsc_status.h\n@@ -0,0 +1,27 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * alsc_status.h - ALSC (auto lens shading correction) control algorithm status\n+ */\n+#pragma once\n+\n+// The ALSC algorithm should post the following structure into the image's\n+// \"alsc.status\" metadata.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#define ALSC_CELLS_X 16\n+#define ALSC_CELLS_Y 12\n+\n+struct AlscStatus {\n+\tdouble r[ALSC_CELLS_Y][ALSC_CELLS_X];\n+\tdouble g[ALSC_CELLS_Y][ALSC_CELLS_X];\n+\tdouble b[ALSC_CELLS_Y][ALSC_CELLS_X];\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/awb_algorithm.hpp b/src/ipa/raspberrypi/controller/awb_algorithm.hpp\nnew file mode 100644\nindex 000000000000..22508ddd75a1\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/awb_algorithm.hpp\n@@ -0,0 +1,22 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * awb_algorithm.hpp - AWB control algorithm interface\n+ */\n+#pragma once\n+\n+#include \"algorithm.hpp\"\n+\n+namespace RPi {\n+\n+class AwbAlgorithm : public Algorithm\n+{\n+public:\n+\tAwbAlgorithm(Controller *controller) : Algorithm(controller) {}\n+\t// An AWB algorithm must provide the following:\n+\tvirtual void SetMode(std::string const &mode_name) = 0;\n+\tvirtual void SetManualGains(double manual_r, double manual_b) = 0;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/awb_status.h b/src/ipa/raspberrypi/controller/awb_status.h\nnew file mode 100644\nindex 000000000000..46d7c842299a\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/awb_status.h\n@@ -0,0 +1,26 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * awb_status.h - AWB control algorithm status\n+ */\n+#pragma 
once\n+\n+// The AWB algorithm places its results into both the image and global metadata,\n+// under the tag \"awb.status\".\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct AwbStatus {\n+\tchar mode[32];\n+\tdouble temperature_K;\n+\tdouble gain_r;\n+\tdouble gain_g;\n+\tdouble gain_b;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/black_level_status.h b/src/ipa/raspberrypi/controller/black_level_status.h\nnew file mode 100644\nindex 000000000000..d085f64b27fe\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/black_level_status.h\n@@ -0,0 +1,23 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * black_level_status.h - black level control algorithm status\n+ */\n+#pragma once\n+\n+// The \"black level\" algorithm stores the black levels to use.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct BlackLevelStatus {\n+\tuint16_t black_level_r; // out of 16 bits\n+\tuint16_t black_level_g;\n+\tuint16_t black_level_b;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/camera_mode.h b/src/ipa/raspberrypi/controller/camera_mode.h\nnew file mode 100644\nindex 000000000000..875bab3161af\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/camera_mode.h\n@@ -0,0 +1,40 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019-2020, Raspberry Pi (Trading) Limited\n+ *\n+ * camera_mode.h - description of a particular operating mode of a sensor\n+ */\n+#pragma once\n+\n+// Description of a \"camera mode\", holding enough information for control\n+// algorithms to adapt their behaviour to the different modes of the camera,\n+// including binning, scaling, cropping etc.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#define CAMERA_MODE_NAME_LEN 32\n+\n+struct CameraMode {\n+\t// bit depth of the raw camera output\n+\tuint32_t bitdepth;\n+\t// size in pixels of frames in this 
mode\n+\tuint16_t width, height;\n+\t// size of full resolution uncropped frame (\"sensor frame\")\n+\tuint16_t sensor_width, sensor_height;\n+\t// binning factor (1 = no binning, 2 = 2-pixel binning etc.)\n+\tuint8_t bin_x, bin_y;\n+\t// location of top left pixel in the sensor frame\n+\tuint16_t crop_x, crop_y;\n+\t// scaling factor (so if uncropped, width*scale_x is sensor_width)\n+\tdouble scale_x, scale_y;\n+\t// scaling of the noise compared to the native sensor mode\n+\tdouble noise_factor;\n+\t// line time in nanoseconds\n+\tdouble line_length;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/ccm_algorithm.hpp b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp\nnew file mode 100644\nindex 000000000000..21806cb0ddf9\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp\n@@ -0,0 +1,21 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * ccm_algorithm.hpp - CCM (colour correction matrix) control algorithm interface\n+ */\n+#pragma once\n+\n+#include \"algorithm.hpp\"\n+\n+namespace RPi {\n+\n+class CcmAlgorithm : public Algorithm\n+{\n+public:\n+\tCcmAlgorithm(Controller *controller) : Algorithm(controller) {}\n+\t// A CCM algorithm must provide the following:\n+\tvirtual void SetSaturation(double saturation) = 0;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/ccm_status.h b/src/ipa/raspberrypi/controller/ccm_status.h\nnew file mode 100644\nindex 000000000000..7e41dd1ff3c0\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/ccm_status.h\n@@ -0,0 +1,22 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * ccm_status.h - CCM (colour correction matrix) control algorithm status\n+ */\n+#pragma once\n+\n+// The \"ccm\" algorithm generates an appropriate colour matrix.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct CcmStatus {\n+\tdouble 
matrix[9];\n+\tdouble saturation;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/contrast_algorithm.hpp b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp\nnew file mode 100644\nindex 000000000000..9780322b28bc\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp\n@@ -0,0 +1,22 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * contrast_algorithm.hpp - contrast (gamma) control algorithm interface\n+ */\n+#pragma once\n+\n+#include \"algorithm.hpp\"\n+\n+namespace RPi {\n+\n+class ContrastAlgorithm : public Algorithm\n+{\n+public:\n+\tContrastAlgorithm(Controller *controller) : Algorithm(controller) {}\n+\t// A contrast algorithm must provide the following:\n+\tvirtual void SetBrightness(double brightness) = 0;\n+\tvirtual void SetContrast(double contrast) = 0;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/contrast_status.h b/src/ipa/raspberrypi/controller/contrast_status.h\nnew file mode 100644\nindex 000000000000..d7edd4e9990d\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/contrast_status.h\n@@ -0,0 +1,31 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * contrast_status.h - contrast (gamma) control algorithm status\n+ */\n+#pragma once\n+\n+// The \"contrast\" algorithm creates a gamma curve, optionally doing a little bit\n+// of contrast stretching based on the AGC histogram.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#define CONTRAST_NUM_POINTS 33\n+\n+struct ContrastPoint {\n+\tuint16_t x;\n+\tuint16_t y;\n+};\n+\n+struct ContrastStatus {\n+\tstruct ContrastPoint points[CONTRAST_NUM_POINTS];\n+\tdouble brightness;\n+\tdouble contrast;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/controller.cpp b/src/ipa/raspberrypi/controller/controller.cpp\nnew file mode 100644\nindex 
000000000000..20dd4c787d14\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/controller.cpp\n@@ -0,0 +1,109 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * controller.cpp - ISP controller\n+ */\n+\n+#include \"algorithm.hpp\"\n+#include \"controller.hpp\"\n+\n+#include <boost/property_tree/json_parser.hpp>\n+#include <boost/property_tree/ptree.hpp>\n+\n+using namespace RPi;\n+\n+Controller::Controller()\n+\t: switch_mode_called_(false) {}\n+\n+Controller::Controller(char const *json_filename)\n+\t: switch_mode_called_(false)\n+{\n+\tRead(json_filename);\n+\tInitialise();\n+}\n+\n+Controller::~Controller() {}\n+\n+void Controller::Read(char const *filename)\n+{\n+\tRPI_LOG(\"Controller starting\");\n+\tboost::property_tree::ptree root;\n+\tboost::property_tree::read_json(filename, root);\n+\tfor (auto const &key_and_value : root) {\n+\t\tAlgorithm *algo = CreateAlgorithm(key_and_value.first.c_str());\n+\t\tif (algo) {\n+\t\t\talgo->Read(key_and_value.second);\n+\t\t\talgorithms_.push_back(AlgorithmPtr(algo));\n+\t\t} else\n+\t\t\tRPI_LOG(\"WARNING: No algorithm found for \\\"\"\n+\t\t\t\t<< key_and_value.first << \"\\\"\");\n+\t}\n+\tRPI_LOG(\"Controller finished\");\n+}\n+\n+Algorithm *Controller::CreateAlgorithm(char const *name)\n+{\n+\tauto it = GetAlgorithms().find(std::string(name));\n+\treturn it != GetAlgorithms().end() ? 
(*it->second)(this) : nullptr;\n+}\n+\n+void Controller::Initialise()\n+{\n+\tRPI_LOG(\"Controller starting\");\n+\tfor (auto &algo : algorithms_)\n+\t\talgo->Initialise();\n+\tRPI_LOG(\"Controller finished\");\n+}\n+\n+void Controller::SwitchMode(CameraMode const &camera_mode)\n+{\n+\tRPI_LOG(\"Controller starting\");\n+\tfor (auto &algo : algorithms_)\n+\t\talgo->SwitchMode(camera_mode);\n+\tswitch_mode_called_ = true;\n+\tRPI_LOG(\"Controller finished\");\n+}\n+\n+void Controller::Prepare(Metadata *image_metadata)\n+{\n+\tRPI_LOG(\"Controller::Prepare starting\");\n+\tassert(switch_mode_called_);\n+\tfor (auto &algo : algorithms_)\n+\t\tif (!algo->IsPaused())\n+\t\t\talgo->Prepare(image_metadata);\n+\tRPI_LOG(\"Controller::Prepare finished\");\n+}\n+\n+void Controller::Process(StatisticsPtr stats, Metadata *image_metadata)\n+{\n+\tRPI_LOG(\"Controller::Process starting\");\n+\tassert(switch_mode_called_);\n+\tfor (auto &algo : algorithms_)\n+\t\tif (!algo->IsPaused())\n+\t\t\talgo->Process(stats, image_metadata);\n+\tRPI_LOG(\"Controller::Process finished\");\n+}\n+\n+Metadata &Controller::GetGlobalMetadata()\n+{\n+\treturn global_metadata_;\n+}\n+\n+Algorithm *Controller::GetAlgorithm(std::string const &name) const\n+{\n+\t// The passed name must be the entire algorithm name, or must match the\n+\t// last part of it with a period (.) 
just before.\n+\tsize_t name_len = name.length();\n+\tfor (auto &algo : algorithms_) {\n+\t\tchar const *algo_name = algo->Name();\n+\t\tsize_t algo_name_len = strlen(algo_name);\n+\t\tif (algo_name_len >= name_len &&\n+\t\t    strcasecmp(name.c_str(),\n+\t\t\t       algo_name + algo_name_len - name_len) == 0 &&\n+\t\t    (name_len == algo_name_len ||\n+\t\t     algo_name[algo_name_len - name_len - 1] == '.'))\n+\t\t\treturn algo.get();\n+\t}\n+\treturn nullptr;\n+}\ndiff --git a/src/ipa/raspberrypi/controller/controller.hpp b/src/ipa/raspberrypi/controller/controller.hpp\nnew file mode 100644\nindex 000000000000..d6853866d33c\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/controller.hpp\n@@ -0,0 +1,54 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * controller.hpp - ISP controller interface\n+ */\n+#pragma once\n+\n+// The Controller is simply a container for collecting together a number of\n+// \"control algorithms\" (such as AWB etc.) and for running them all in a\n+// convenient manner.\n+\n+#include <vector>\n+#include <string>\n+\n+#include <linux/bcm2835-isp.h>\n+\n+#include \"camera_mode.h\"\n+#include \"device_status.h\"\n+#include \"metadata.hpp\"\n+\n+namespace RPi {\n+\n+class Algorithm;\n+typedef std::unique_ptr<Algorithm> AlgorithmPtr;\n+typedef std::shared_ptr<bcm2835_isp_stats> StatisticsPtr;\n+\n+// The Controller holds a pointer to some global_metadata, which is how\n+// different controllers and control algorithms within them can exchange\n+// information. 
The Prepare method returns a pointer to metadata for this\n+// specific image, and which should be passed on to the Process method.\n+\n+class Controller\n+{\n+public:\n+\tController();\n+\tController(char const *json_filename);\n+\t~Controller();\n+\tAlgorithm *CreateAlgorithm(char const *name);\n+\tvoid Read(char const *filename);\n+\tvoid Initialise();\n+\tvoid SwitchMode(CameraMode const &camera_mode);\n+\tvoid Prepare(Metadata *image_metadata);\n+\tvoid Process(StatisticsPtr stats, Metadata *image_metadata);\n+\tMetadata &GetGlobalMetadata();\n+\tAlgorithm *GetAlgorithm(std::string const &name) const;\n+\n+protected:\n+\tMetadata global_metadata_;\n+\tstd::vector<AlgorithmPtr> algorithms_;\n+\tbool switch_mode_called_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/device_status.h b/src/ipa/raspberrypi/controller/device_status.h\nnew file mode 100644\nindex 000000000000..aa08608b5d40\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/device_status.h\n@@ -0,0 +1,30 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * device_status.h - device (image sensor) status\n+ */\n+#pragma once\n+\n+// Definition of \"device metadata\" which stores things like shutter time and\n+// analogue gain that downstream control algorithms will want to know.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct DeviceStatus {\n+\t// time shutter is open, in microseconds\n+\tdouble shutter_speed;\n+\tdouble analogue_gain;\n+\t// 1.0/distance-in-metres, or 0 if unknown\n+\tdouble lens_position;\n+\t// 1/f so that brightness quadruples when this doubles, or 0 if unknown\n+\tdouble aperture;\n+\t// proportional to brightness with 0 = no flash, 1 = maximum flash\n+\tdouble flash_intensity;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/dpc_status.h b/src/ipa/raspberrypi/controller/dpc_status.h\nnew file mode 100644\nindex 
000000000000..a3ec2762573b\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/dpc_status.h\n@@ -0,0 +1,21 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * dpc_status.h - DPC (defective pixel correction) control algorithm status\n+ */\n+#pragma once\n+\n+// The \"DPC\" algorithm sets defective pixel correction strength.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct DpcStatus {\n+\tint strength; // 0 = \"off\", 1 = \"normal\", 2 = \"strong\"\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/geq_status.h b/src/ipa/raspberrypi/controller/geq_status.h\nnew file mode 100644\nindex 000000000000..07fd5f0347ef\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/geq_status.h\n@@ -0,0 +1,22 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * geq_status.h - GEQ (green equalisation) control algorithm status\n+ */\n+#pragma once\n+\n+// The \"GEQ\" algorithm calculates the green equalisation thresholds\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct GeqStatus {\n+\tuint16_t offset;\n+\tdouble slope;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/histogram.cpp b/src/ipa/raspberrypi/controller/histogram.cpp\nnew file mode 100644\nindex 000000000000..103d3f606a76\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/histogram.cpp\n@@ -0,0 +1,64 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * histogram.cpp - histogram calculations\n+ */\n+#include <math.h>\n+#include <stdio.h>\n+\n+#include \"histogram.hpp\"\n+\n+using namespace RPi;\n+\n+uint64_t Histogram::CumulativeFreq(double bin) const\n+{\n+\tif (bin <= 0)\n+\t\treturn 0;\n+\telse if (bin >= Bins())\n+\t\treturn Total();\n+\tint b = (int)bin;\n+\treturn cumulative_[b] +\n+\t       (bin - b) * (cumulative_[b + 1] 
- cumulative_[b]);\n+}\n+\n+double Histogram::Quantile(double q, int first, int last) const\n+{\n+\tif (first == -1)\n+\t\tfirst = 0;\n+\tif (last == -1)\n+\t\tlast = cumulative_.size() - 2;\n+\tassert(first <= last);\n+\tuint64_t items = q * Total();\n+\twhile (first < last) // binary search to find the right bin\n+\t{\n+\t\tint middle = (first + last) / 2;\n+\t\tif (cumulative_[middle + 1] > items)\n+\t\t\tlast = middle; // between first and middle\n+\t\telse\n+\t\t\tfirst = middle + 1; // after middle\n+\t}\n+\tassert(items >= cumulative_[first] && items <= cumulative_[last + 1]);\n+\tdouble frac = cumulative_[first + 1] == cumulative_[first] ? 0\n+\t\t      : (double)(items - cumulative_[first]) /\n+\t\t\t\t  (cumulative_[first + 1] - cumulative_[first]);\n+\treturn first + frac;\n+}\n+\n+double Histogram::InterQuantileMean(double q_lo, double q_hi) const\n+{\n+\tassert(q_hi > q_lo);\n+\tdouble p_lo = Quantile(q_lo);\n+\tdouble p_hi = Quantile(q_hi, (int)p_lo);\n+\tdouble sum_bin_freq = 0, cumul_freq = 0;\n+\tfor (double p_next = floor(p_lo) + 1.0; p_next <= ceil(p_hi);\n+\t     p_lo = p_next, p_next += 1.0) {\n+\t\tint bin = floor(p_lo);\n+\t\tdouble freq = (cumulative_[bin + 1] - cumulative_[bin]) *\n+\t\t\t      (std::min(p_next, p_hi) - p_lo);\n+\t\tsum_bin_freq += bin * freq;\n+\t\tcumul_freq += freq;\n+\t}\n+\t// add 0.5 to give an average for bin mid-points\n+\treturn sum_bin_freq / cumul_freq + 0.5;\n+}\ndiff --git a/src/ipa/raspberrypi/controller/histogram.hpp b/src/ipa/raspberrypi/controller/histogram.hpp\nnew file mode 100644\nindex 000000000000..06fc3aa7b9e8\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/histogram.hpp\n@@ -0,0 +1,44 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * histogram.hpp - histogram calculation interface\n+ */\n+#pragma once\n+\n+#include <stdint.h>\n+#include <vector>\n+#include <cassert>\n+\n+// A simple histogram class, for use in particular to 
find \"quantiles\" and\n+// averages between \"quantiles\".\n+\n+namespace RPi {\n+\n+class Histogram\n+{\n+public:\n+\ttemplate<typename T> Histogram(T *histogram, int num)\n+\t{\n+\t\tassert(num);\n+\t\tcumulative_.reserve(num + 1);\n+\t\tcumulative_.push_back(0);\n+\t\tfor (int i = 0; i < num; i++)\n+\t\t\tcumulative_.push_back(cumulative_.back() +\n+\t\t\t\t\t      histogram[i]);\n+\t}\n+\tuint32_t Bins() const { return cumulative_.size() - 1; }\n+\tuint64_t Total() const { return cumulative_[cumulative_.size() - 1]; }\n+\t// Cumulative frequency up to a (fractional) point in a bin.\n+\tuint64_t CumulativeFreq(double bin) const;\n+\t// Return the (fractional) bin of the point q (0 <= q <= 1) through the\n+\t// histogram. Optionally provide limits to help.\n+\tdouble Quantile(double q, int first = -1, int last = -1) const;\n+\t// Return the average histogram bin value between the two quantiles.\n+\tdouble InterQuantileMean(double q_lo, double q_hi) const;\n+\n+private:\n+\tstd::vector<uint64_t> cumulative_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/logging.hpp b/src/ipa/raspberrypi/controller/logging.hpp\nnew file mode 100644\nindex 000000000000..f0d306b6e7f4\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/logging.hpp\n@@ -0,0 +1,30 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019-2020, Raspberry Pi (Trading) Limited\n+ *\n+ * logging.hpp - logging macros\n+ */\n+#pragma once\n+\n+#include <iostream>\n+\n+#ifndef RPI_LOGGING_ENABLE\n+#define RPI_LOGGING_ENABLE 0\n+#endif\n+\n+#ifndef RPI_WARNING_ENABLE\n+#define RPI_WARNING_ENABLE 1\n+#endif\n+\n+#define RPI_LOG(stuff)                                                         \\\n+\tdo {                                                                   \\\n+\t\tif (RPI_LOGGING_ENABLE)                                        \\\n+\t\t\tstd::cout << __FUNCTION__ << \": \" << stuff << \"\\n\";    \\\n+\t} while (0)\n+\n+#define RPI_WARN(stuff)           
                                             \\\n+\tdo {                                                                   \\\n+\t\tif (RPI_WARNING_ENABLE)                                        \\\n+\t\t\tstd::cout << __FUNCTION__ << \" ***WARNING*** \"         \\\n+\t\t\t\t  << stuff << \"\\n\";                            \\\n+\t} while (0)\ndiff --git a/src/ipa/raspberrypi/controller/lux_status.h b/src/ipa/raspberrypi/controller/lux_status.h\nnew file mode 100644\nindex 000000000000..8ccfd933829b\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/lux_status.h\n@@ -0,0 +1,29 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * lux_status.h - Lux control algorithm status\n+ */\n+#pragma once\n+\n+// The \"lux\" algorithm looks at the (AGC) histogram statistics of the frame and\n+// estimates the current lux level of the scene. It does this by a simple ratio\n+// calculation comparing to a reference image that was taken in known conditions\n+// with known statistics and a properly measured lux level. There is a slight\n+// problem with aperture, in that it may be variable without the system knowing\n+// or being aware of it. 
In this case an external application may set a\n+// \"current_aperture\" value if it wishes, which would be used in place of the\n+// (presumably meaningless) value in the image metadata.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct LuxStatus {\n+\tdouble lux;\n+\tdouble aperture;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/metadata.hpp b/src/ipa/raspberrypi/controller/metadata.hpp\nnew file mode 100644\nindex 000000000000..1d7624a0911e\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/metadata.hpp\n@@ -0,0 +1,77 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * metadata.hpp - general metadata class\n+ */\n+#pragma once\n+\n+// A simple class for carrying arbitrary metadata, for example about an image.\n+\n+#include <string>\n+#include <mutex>\n+#include <map>\n+#include <memory>\n+\n+#include <boost/any.hpp>\n+\n+namespace RPi {\n+\n+class Metadata\n+{\n+public:\n+\ttemplate<typename T> void Set(std::string const &tag, T const &value)\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tdata_[tag] = value;\n+\t}\n+\ttemplate<typename T> int Get(std::string const &tag, T &value) const\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tauto it = data_.find(tag);\n+\t\tif (it == data_.end())\n+\t\t\treturn -1;\n+\t\tvalue = boost::any_cast<T>(it->second);\n+\t\treturn 0;\n+\t}\n+\tvoid Clear()\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tdata_.clear();\n+\t}\n+\tMetadata &operator=(Metadata const &other)\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tstd::lock_guard<std::mutex> other_lock(other.mutex_);\n+\t\tdata_ = other.data_;\n+\t\treturn *this;\n+\t}\n+\ttemplate<typename T> T *GetLocked(std::string const &tag)\n+\t{\n+\t\t// This allows in-place access to the Metadata contents,\n+\t\t// for which you should be holding the lock.\n+\t\tauto it = data_.find(tag);\n+\t\tif (it == 
data_.end())\n+\t\t\treturn nullptr;\n+\t\treturn boost::any_cast<T>(&it->second);\n+\t}\n+\ttemplate<typename T>\n+\tvoid SetLocked(std::string const &tag, T const &value)\n+\t{\n+\t\t// Use this only if you're holding the lock yourself.\n+\t\tdata_[tag] = value;\n+\t}\n+\t// Note: use of (lowercase) lock and unlock means you can create scoped\n+\t// locks with the standard lock classes.\n+\t// e.g. std::lock_guard<RPi::Metadata> lock(metadata)\n+\tvoid lock() { mutex_.lock(); }\n+\tvoid unlock() { mutex_.unlock(); }\n+\n+private:\n+\tmutable std::mutex mutex_;\n+\tstd::map<std::string, boost::any> data_;\n+};\n+\n+typedef std::shared_ptr<Metadata> MetadataPtr;\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/noise_status.h b/src/ipa/raspberrypi/controller/noise_status.h\nnew file mode 100644\nindex 000000000000..8439a40213aa\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/noise_status.h\n@@ -0,0 +1,22 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * noise_status.h - Noise control algorithm status\n+ */\n+#pragma once\n+\n+// The \"noise\" algorithm stores an estimate of the noise profile for this image.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct NoiseStatus {\n+\tdouble noise_constant;\n+\tdouble noise_slope;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/pwl.cpp b/src/ipa/raspberrypi/controller/pwl.cpp\nnew file mode 100644\nindex 000000000000..7e11d8f3c3db\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/pwl.cpp\n@@ -0,0 +1,216 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * pwl.cpp - piecewise linear functions\n+ */\n+\n+#include <cassert>\n+#include <stdexcept>\n+\n+#include \"pwl.hpp\"\n+\n+using namespace RPi;\n+\n+void Pwl::Read(boost::property_tree::ptree const &params)\n+{\n+\tfor (auto it = params.begin(); it != params.end(); 
it++) {\n+\t\tdouble x = it->second.get_value<double>();\n+\t\tassert(it == params.begin() || x > points_.back().x);\n+\t\tit++;\n+\t\tdouble y = it->second.get_value<double>();\n+\t\tpoints_.push_back(Point(x, y));\n+\t}\n+\tassert(points_.size() >= 2);\n+}\n+\n+void Pwl::Append(double x, double y, const double eps)\n+{\n+\tif (points_.empty() || points_.back().x + eps < x)\n+\t\tpoints_.push_back(Point(x, y));\n+}\n+\n+void Pwl::Prepend(double x, double y, const double eps)\n+{\n+\tif (points_.empty() || points_.front().x - eps > x)\n+\t\tpoints_.insert(points_.begin(), Point(x, y));\n+}\n+\n+Pwl::Interval Pwl::Domain() const\n+{\n+\treturn Interval(points_[0].x, points_[points_.size() - 1].x);\n+}\n+\n+Pwl::Interval Pwl::Range() const\n+{\n+\tdouble lo = points_[0].y, hi = lo;\n+\tfor (auto &p : points_)\n+\t\tlo = std::min(lo, p.y), hi = std::max(hi, p.y);\n+\treturn Interval(lo, hi);\n+}\n+\n+bool Pwl::Empty() const\n+{\n+\treturn points_.empty();\n+}\n+\n+double Pwl::Eval(double x, int *span_ptr, bool update_span) const\n+{\n+\tint span = findSpan(x, span_ptr && *span_ptr != -1\n+\t\t\t\t       ? 
*span_ptr\n+\t\t\t\t       : points_.size() / 2 - 1);\n+\tif (span_ptr && update_span)\n+\t\t*span_ptr = span;\n+\treturn points_[span].y +\n+\t       (x - points_[span].x) * (points_[span + 1].y - points_[span].y) /\n+\t\t       (points_[span + 1].x - points_[span].x);\n+}\n+\n+int Pwl::findSpan(double x, int span) const\n+{\n+\t// Pwls are generally small, so linear search may well be faster than\n+\t// binary, though could review this if large Pwls start turning up.\n+\tint last_span = points_.size() - 2;\n+\t// some algorithms may call us with span pointing directly at the last\n+\t// control point\n+\tspan = std::max(0, std::min(last_span, span));\n+\twhile (span < last_span && x >= points_[span + 1].x)\n+\t\tspan++;\n+\twhile (span && x < points_[span].x)\n+\t\tspan--;\n+\treturn span;\n+}\n+\n+Pwl::PerpType Pwl::Invert(Point const &xy, Point &perp, int &span,\n+\t\t\t  const double eps) const\n+{\n+\tassert(span >= -1);\n+\tbool prev_off_end = false;\n+\tfor (span = span + 1; span < (int)points_.size() - 1; span++) {\n+\t\tPoint span_vec = points_[span + 1] - points_[span];\n+\t\tdouble t = ((xy - points_[span]) % span_vec) / span_vec.Len2();\n+\t\tif (t < -eps) // off the start of this span\n+\t\t{\n+\t\t\tif (span == 0) {\n+\t\t\t\tperp = points_[span];\n+\t\t\t\treturn PerpType::Start;\n+\t\t\t} else if (prev_off_end) {\n+\t\t\t\tperp = points_[span];\n+\t\t\t\treturn PerpType::Vertex;\n+\t\t\t}\n+\t\t} else if (t > 1 + eps) // off the end of this span\n+\t\t{\n+\t\t\tif (span == (int)points_.size() - 2) {\n+\t\t\t\tperp = points_[span + 1];\n+\t\t\t\treturn PerpType::End;\n+\t\t\t}\n+\t\t\tprev_off_end = true;\n+\t\t} else // a true perpendicular\n+\t\t{\n+\t\t\tperp = points_[span] + span_vec * t;\n+\t\t\treturn PerpType::Perpendicular;\n+\t\t}\n+\t}\n+\treturn PerpType::None;\n+}\n+\n+Pwl Pwl::Compose(Pwl const &other, const double eps) const\n+{\n+\tdouble this_x = points_[0].x, this_y = points_[0].y;\n+\tint this_span = 0, other_span = 
other.findSpan(this_y, 0);\n+\tPwl result({ { this_x, other.Eval(this_y, &other_span, false) } });\n+\twhile (this_span != (int)points_.size() - 1) {\n+\t\tdouble dx = points_[this_span + 1].x - points_[this_span].x,\n+\t\t       dy = points_[this_span + 1].y - points_[this_span].y;\n+\t\tif (fabs(dy) > eps &&\n+\t\t    other_span + 1 < (int)other.points_.size() &&\n+\t\t    points_[this_span + 1].y >=\n+\t\t\t    other.points_[other_span + 1].x + eps) {\n+\t\t\t// next control point in result will be where this\n+\t\t\t// function's y reaches the next span in other\n+\t\t\tthis_x = points_[this_span].x +\n+\t\t\t\t (other.points_[other_span + 1].x -\n+\t\t\t\t  points_[this_span].y) * dx / dy;\n+\t\t\tthis_y = other.points_[++other_span].x;\n+\t\t} else if (fabs(dy) > eps && other_span > 0 &&\n+\t\t\t   points_[this_span + 1].y <=\n+\t\t\t\t   other.points_[other_span - 1].x - eps) {\n+\t\t\t// next control point in result will be where this\n+\t\t\t// function's y reaches the previous span in other\n+\t\t\tthis_x = points_[this_span].x +\n+\t\t\t\t (other.points_[other_span - 1].x -\n+\t\t\t\t  points_[this_span].y) * dx / dy;\n+\t\t\tthis_y = other.points_[--other_span].x;\n+\t\t} else {\n+\t\t\t// we stay in the same span in other\n+\t\t\tthis_span++;\n+\t\t\tthis_x = points_[this_span].x,\n+\t\t\tthis_y = points_[this_span].y;\n+\t\t}\n+\t\tresult.Append(this_x, other.Eval(this_y, &other_span, false),\n+\t\t\t      eps);\n+\t}\n+\treturn result;\n+}\n+\n+void Pwl::Map(std::function<void(double x, double y)> f) const\n+{\n+\tfor (auto &pt : points_)\n+\t\tf(pt.x, pt.y);\n+}\n+\n+void Pwl::Map2(Pwl const &pwl0, Pwl const &pwl1,\n+\t       std::function<void(double x, double y0, double y1)> f)\n+{\n+\tint span0 = 0, span1 = 0;\n+\tdouble x = std::min(pwl0.points_[0].x, pwl1.points_[0].x);\n+\tf(x, pwl0.Eval(x, &span0, false), pwl1.Eval(x, &span1, false));\n+\twhile (span0 < (int)pwl0.points_.size() - 1 ||\n+\t       span1 < (int)pwl1.points_.size() - 1) {\n+\t\tif 
(span0 == (int)pwl0.points_.size() - 1)\n+\t\t\tx = pwl1.points_[++span1].x;\n+\t\telse if (span1 == (int)pwl1.points_.size() - 1)\n+\t\t\tx = pwl0.points_[++span0].x;\n+\t\telse if (pwl0.points_[span0 + 1].x > pwl1.points_[span1 + 1].x)\n+\t\t\tx = pwl1.points_[++span1].x;\n+\t\telse\n+\t\t\tx = pwl0.points_[++span0].x;\n+\t\tf(x, pwl0.Eval(x, &span0, false), pwl1.Eval(x, &span1, false));\n+\t}\n+}\n+\n+Pwl Pwl::Combine(Pwl const &pwl0, Pwl const &pwl1,\n+\t\t std::function<double(double x, double y0, double y1)> f,\n+\t\t const double eps)\n+{\n+\tPwl result;\n+\tMap2(pwl0, pwl1, [&](double x, double y0, double y1) {\n+\t\tresult.Append(x, f(x, y0, y1), eps);\n+\t});\n+\treturn result;\n+}\n+\n+void Pwl::MatchDomain(Interval const &domain, bool clip, const double eps)\n+{\n+\tint span = 0;\n+\tPrepend(domain.start, Eval(clip ? points_[0].x : domain.start, &span),\n+\t\teps);\n+\tspan = points_.size() - 2;\n+\tAppend(domain.end, Eval(clip ? points_.back().x : domain.end, &span),\n+\t       eps);\n+}\n+\n+Pwl &Pwl::operator*=(double d)\n+{\n+\tfor (auto &pt : points_)\n+\t\tpt.y *= d;\n+\treturn *this;\n+}\n+\n+void Pwl::Debug(FILE *fp) const\n+{\n+\tfprintf(fp, \"Pwl {\\n\");\n+\tfor (auto &p : points_)\n+\t\tfprintf(fp, \"\\t(%g, %g)\\n\", p.x, p.y);\n+\tfprintf(fp, \"}\\n\");\n+}\ndiff --git a/src/ipa/raspberrypi/controller/pwl.hpp b/src/ipa/raspberrypi/controller/pwl.hpp\nnew file mode 100644\nindex 000000000000..bd7c76683103\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/pwl.hpp\n@@ -0,0 +1,109 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * pwl.hpp - piecewise linear functions interface\n+ */\n+#pragma once\n+\n+#include <math.h>\n+#include <vector>\n+\n+#include <boost/property_tree/ptree.hpp>\n+\n+namespace RPi {\n+\n+class Pwl\n+{\n+public:\n+\tstruct Interval {\n+\t\tInterval(double _start, double _end) : start(_start), end(_end)\n+\t\t{\n+\t\t}\n+\t\tdouble start, 
end;\n+\t\tbool Contains(double value)\n+\t\t{\n+\t\t\treturn value >= start && value <= end;\n+\t\t}\n+\t\tdouble Clip(double value)\n+\t\t{\n+\t\t\treturn value < start ? start\n+\t\t\t\t\t     : (value > end ? end : value);\n+\t\t}\n+\t\tdouble Len() const { return end - start; }\n+\t};\n+\tstruct Point {\n+\t\tPoint() : x(0), y(0) {}\n+\t\tPoint(double _x, double _y) : x(_x), y(_y) {}\n+\t\tdouble x, y;\n+\t\tPoint operator-(Point const &p) const\n+\t\t{\n+\t\t\treturn Point(x - p.x, y - p.y);\n+\t\t}\n+\t\tPoint operator+(Point const &p) const\n+\t\t{\n+\t\t\treturn Point(x + p.x, y + p.y);\n+\t\t}\n+\t\tdouble operator%(Point const &p) const\n+\t\t{\n+\t\t\treturn x * p.x + y * p.y;\n+\t\t}\n+\t\tPoint operator*(double f) const { return Point(x * f, y * f); }\n+\t\tPoint operator/(double f) const { return Point(x / f, y / f); }\n+\t\tdouble Len2() const { return x * x + y * y; }\n+\t\tdouble Len() const { return sqrt(Len2()); }\n+\t};\n+\tPwl() {}\n+\tPwl(std::vector<Point> const &points) : points_(points) {}\n+\tvoid Read(boost::property_tree::ptree const &params);\n+\tvoid Append(double x, double y, const double eps = 1e-6);\n+\tvoid Prepend(double x, double y, const double eps = 1e-6);\n+\tInterval Domain() const;\n+\tInterval Range() const;\n+\tbool Empty() const;\n+\t// Evaluate Pwl, optionally supplying an initial guess for the\n+\t// \"span\". The \"span\" may optionally be updated.  If you want to know\n+\t// the \"span\" value but don't have an initial guess you can set it to\n+\t// -1.\n+\tdouble Eval(double x, int *span_ptr = nullptr,\n+\t\t    bool update_span = true) const;\n+\t// Find perpendicular closest to xy, starting from span+1 so you can\n+\t// call it repeatedly to check for multiple closest points (set span to\n+\t// -1 on the first call). 
Also returns \"pseudo\" perpendiculars; see\n+\t// PerpType enum.\n+\tenum class PerpType {\n+\t\tNone, // no perpendicular found\n+\t\tStart, // start of Pwl is closest point\n+\t\tEnd, // end of Pwl is closest point\n+\t\tVertex, // vertex of Pwl is closest point\n+\t\tPerpendicular // true perpendicular found\n+\t};\n+\tPerpType Invert(Point const &xy, Point &perp, int &span,\n+\t\t\tconst double eps = 1e-6) const;\n+\t// Compose two Pwls together, doing \"this\" first and \"other\" after.\n+\tPwl Compose(Pwl const &other, const double eps = 1e-6) const;\n+\t// Apply function to (x,y) values at every control point.\n+\tvoid Map(std::function<void(double x, double y)> f) const;\n+\t// Apply function to (x, y0, y1) values wherever either Pwl has a\n+\t// control point.\n+\tstatic void Map2(Pwl const &pwl0, Pwl const &pwl1,\n+\t\t\t std::function<void(double x, double y0, double y1)> f);\n+\t// Combine two Pwls, meaning we create a new Pwl where the y values are\n+\t// given by running f wherever either has a knot.\n+\tstatic Pwl\n+\tCombine(Pwl const &pwl0, Pwl const &pwl1,\n+\t\tstd::function<double(double x, double y0, double y1)> f,\n+\t\tconst double eps = 1e-6);\n+\t// Make \"this\" match (at least) the given domain. 
Any extension may be\n+\t// clipped or linear.\n+\tvoid MatchDomain(Interval const &domain, bool clip = true,\n+\t\t\t const double eps = 1e-6);\n+\tPwl &operator*=(double d);\n+\tvoid Debug(FILE *fp = stdout) const;\n+\n+private:\n+\tint findSpan(double x, int span) const;\n+\tstd::vector<Point> points_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/agc.cpp b/src/ipa/raspberrypi/controller/rpi/agc.cpp\nnew file mode 100644\nindex 000000000000..a4742872e3f9\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/agc.cpp\n@@ -0,0 +1,642 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * agc.cpp - AGC/AEC control algorithm\n+ */\n+\n+#include <map>\n+\n+#include \"linux/bcm2835-isp.h\"\n+\n+#include \"../awb_status.h\"\n+#include \"../device_status.h\"\n+#include \"../histogram.hpp\"\n+#include \"../logging.hpp\"\n+#include \"../lux_status.h\"\n+#include \"../metadata.hpp\"\n+\n+#include \"agc.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.agc\"\n+\n+#define PIPELINE_BITS 13 // seems to be a 13-bit pipeline\n+\n+void AgcMeteringMode::Read(boost::property_tree::ptree const &params)\n+{\n+\tint num = 0;\n+\tfor (auto &p : params.get_child(\"weights\")) {\n+\t\tif (num == AGC_STATS_SIZE)\n+\t\t\tthrow std::runtime_error(\"AgcConfig: too many weights\");\n+\t\tweights[num++] = p.second.get_value<double>();\n+\t}\n+\tif (num != AGC_STATS_SIZE)\n+\t\tthrow std::runtime_error(\"AgcConfig: insufficient weights\");\n+}\n+\n+static std::string\n+read_metering_modes(std::map<std::string, AgcMeteringMode> &metering_modes,\n+\t\t    boost::property_tree::ptree const &params)\n+{\n+\tstd::string first;\n+\tfor (auto &p : params) {\n+\t\tAgcMeteringMode metering_mode;\n+\t\tmetering_mode.Read(p.second);\n+\t\tmetering_modes[p.first] = std::move(metering_mode);\n+\t\tif (first.empty())\n+\t\t\tfirst = p.first;\n+\t}\n+\treturn first;\n+}\n+\n+static int 
read_double_list(std::vector<double> &list,\n+\t\t\t    boost::property_tree::ptree const &params)\n+{\n+\tfor (auto &p : params)\n+\t\tlist.push_back(p.second.get_value<double>());\n+\treturn list.size();\n+}\n+\n+void AgcExposureMode::Read(boost::property_tree::ptree const &params)\n+{\n+\tint num_shutters =\n+\t\tread_double_list(shutter, params.get_child(\"shutter\"));\n+\tint num_ags = read_double_list(gain, params.get_child(\"gain\"));\n+\tif (num_shutters < 2 || num_ags < 2)\n+\t\tthrow std::runtime_error(\n+\t\t\t\"AgcConfig: must have at least two entries in exposure profile\");\n+\tif (num_shutters != num_ags)\n+\t\tthrow std::runtime_error(\n+\t\t\t\"AgcConfig: expect same number of exposure and gain entries in exposure profile\");\n+}\n+\n+static std::string\n+read_exposure_modes(std::map<std::string, AgcExposureMode> &exposure_modes,\n+\t\t    boost::property_tree::ptree const &params)\n+{\n+\tstd::string first;\n+\tfor (auto &p : params) {\n+\t\tAgcExposureMode exposure_mode;\n+\t\texposure_mode.Read(p.second);\n+\t\texposure_modes[p.first] = std::move(exposure_mode);\n+\t\tif (first.empty())\n+\t\t\tfirst = p.first;\n+\t}\n+\treturn first;\n+}\n+\n+void AgcConstraint::Read(boost::property_tree::ptree const &params)\n+{\n+\tstd::string bound_string = params.get<std::string>(\"bound\", \"\");\n+\ttransform(bound_string.begin(), bound_string.end(),\n+\t\t  bound_string.begin(), ::toupper);\n+\tif (bound_string != \"UPPER\" && bound_string != \"LOWER\")\n+\t\tthrow std::runtime_error(\n+\t\t\t\"AGC constraint type should be UPPER or LOWER\");\n+\tbound = bound_string == \"UPPER\" ? 
Bound::UPPER : Bound::LOWER;\n+\tq_lo = params.get<double>(\"q_lo\");\n+\tq_hi = params.get<double>(\"q_hi\");\n+\tY_target.Read(params.get_child(\"y_target\"));\n+}\n+\n+static AgcConstraintMode\n+read_constraint_mode(boost::property_tree::ptree const &params)\n+{\n+\tAgcConstraintMode mode;\n+\tfor (auto &p : params) {\n+\t\tAgcConstraint constraint;\n+\t\tconstraint.Read(p.second);\n+\t\tmode.push_back(std::move(constraint));\n+\t}\n+\treturn mode;\n+}\n+\n+static std::string read_constraint_modes(\n+\tstd::map<std::string, AgcConstraintMode> &constraint_modes,\n+\tboost::property_tree::ptree const &params)\n+{\n+\tstd::string first;\n+\tfor (auto &p : params) {\n+\t\tconstraint_modes[p.first] = read_constraint_mode(p.second);\n+\t\tif (first.empty())\n+\t\t\tfirst = p.first;\n+\t}\n+\treturn first;\n+}\n+\n+void AgcConfig::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(\"AgcConfig\");\n+\tdefault_metering_mode = read_metering_modes(\n+\t\tmetering_modes, params.get_child(\"metering_modes\"));\n+\tdefault_exposure_mode = read_exposure_modes(\n+\t\texposure_modes, params.get_child(\"exposure_modes\"));\n+\tdefault_constraint_mode = read_constraint_modes(\n+\t\tconstraint_modes, params.get_child(\"constraint_modes\"));\n+\tY_target.Read(params.get_child(\"y_target\"));\n+\tspeed = params.get<double>(\"speed\", 0.2);\n+\tstartup_frames = params.get<uint16_t>(\"startup_frames\", 10);\n+\tfast_reduce_threshold =\n+\t\tparams.get<double>(\"fast_reduce_threshold\", 0.4);\n+\tbase_ev = params.get<double>(\"base_ev\", 1.0);\n+}\n+\n+Agc::Agc(Controller *controller)\n+\t: AgcAlgorithm(controller), metering_mode_(nullptr),\n+\t  exposure_mode_(nullptr), constraint_mode_(nullptr),\n+\t  frame_count_(0), lock_count_(0)\n+{\n+\tev_ = status_.ev = 1.0;\n+\tflicker_period_ = status_.flicker_period = 0.0;\n+\tfixed_shutter_ = status_.fixed_shutter = 0;\n+\tfixed_analogue_gain_ = status_.fixed_analogue_gain = 0.0;\n+\t// set to zero initially, so we can tell it's 
not been calculated\n+\tstatus_.total_exposure_value = 0.0;\n+\tstatus_.target_exposure_value = 0.0;\n+\tstatus_.locked = false;\n+\toutput_status_ = status_;\n+}\n+\n+char const *Agc::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Agc::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(\"Agc\");\n+\tconfig_.Read(params);\n+\t// Set the config's defaults (which are the first ones it read) as our\n+\t// current modes, until someone changes them.  (they're all known to\n+\t// exist at this point)\n+\tmetering_mode_name_ = config_.default_metering_mode;\n+\tmetering_mode_ = &config_.metering_modes[metering_mode_name_];\n+\texposure_mode_name_ = config_.default_exposure_mode;\n+\texposure_mode_ = &config_.exposure_modes[exposure_mode_name_];\n+\tconstraint_mode_name_ = config_.default_constraint_mode;\n+\tconstraint_mode_ = &config_.constraint_modes[constraint_mode_name_];\n+}\n+\n+void Agc::SetEv(double ev)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tev_ = ev;\n+}\n+\n+void Agc::SetFlickerPeriod(double flicker_period)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tflicker_period_ = flicker_period;\n+}\n+\n+void Agc::SetFixedShutter(double fixed_shutter)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tfixed_shutter_ = fixed_shutter;\n+}\n+\n+void Agc::SetFixedAnalogueGain(double fixed_analogue_gain)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tfixed_analogue_gain_ = fixed_analogue_gain;\n+}\n+\n+void Agc::SetMeteringMode(std::string const &metering_mode_name)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tmetering_mode_name_ = metering_mode_name;\n+}\n+\n+void Agc::SetExposureMode(std::string const &exposure_mode_name)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\texposure_mode_name_ = exposure_mode_name;\n+}\n+\n+void Agc::SetConstraintMode(std::string const &constraint_mode_name)\n+{\n+\tstd::unique_lock<std::mutex> 
lock(settings_mutex_);\n+\tconstraint_mode_name_ = constraint_mode_name;\n+}\n+\n+void Agc::Prepare(Metadata *image_metadata)\n+{\n+\tAgcStatus status;\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(output_mutex_);\n+\t\tstatus = output_status_;\n+\t}\n+\tint lock_count = lock_count_;\n+\tlock_count_ = 0;\n+\tstatus.digital_gain = 1.0;\n+\tif (status_.total_exposure_value) {\n+\t\t// Process has run, so we have meaningful values.\n+\t\tDeviceStatus device_status;\n+\t\tif (image_metadata->Get(\"device.status\", device_status) == 0) {\n+\t\t\tdouble actual_exposure = device_status.shutter_speed *\n+\t\t\t\t\t\t device_status.analogue_gain;\n+\t\t\tif (actual_exposure) {\n+\t\t\t\tstatus.digital_gain =\n+\t\t\t\t\tstatus_.total_exposure_value /\n+\t\t\t\t\tactual_exposure;\n+\t\t\t\tRPI_LOG(\"Want total exposure \" << status_.total_exposure_value);\n+\t\t\t\t// Never ask for a gain < 1.0, and also impose\n+\t\t\t\t// some upper limit. Make it customisable?\n+\t\t\t\tstatus.digital_gain = std::max(\n+\t\t\t\t\t1.0,\n+\t\t\t\t\tstd::min(status.digital_gain, 4.0));\n+\t\t\t\tRPI_LOG(\"Actual exposure \" << actual_exposure);\n+\t\t\t\tRPI_LOG(\"Use digital_gain \" << status.digital_gain);\n+\t\t\t\tRPI_LOG(\"Effective exposure \" << actual_exposure * status.digital_gain);\n+\t\t\t\t// Decide whether AEC/AGC has converged.\n+\t\t\t\t// Insist AGC is steady for MAX_LOCK_COUNT\n+\t\t\t\t// frames before we say we are \"locked\".\n+\t\t\t\t// (The hard-coded constants may need to\n+\t\t\t\t// become customisable.)\n+\t\t\t\tif (status.target_exposure_value) {\n+#define MAX_LOCK_COUNT 3\n+\t\t\t\t\tdouble err = 0.10 * status.target_exposure_value + 200;\n+\t\t\t\t\tif (actual_exposure <\n+\t\t\t\t\t    status.target_exposure_value + err\n+\t\t\t\t\t    && actual_exposure >\n+\t\t\t\t\t    status.target_exposure_value - err)\n+\t\t\t\t\t\tlock_count_ =\n+\t\t\t\t\t\t\tstd::min(lock_count + 1,\n+\t\t\t\t\t\t\t       MAX_LOCK_COUNT);\n+\t\t\t\t\telse if (actual_exposure 
<\n+\t\t\t\t\t\t status.target_exposure_value\n+\t\t\t\t\t\t + 1.5 * err &&\n+\t\t\t\t\t\t actual_exposure >\n+\t\t\t\t\t\t status.target_exposure_value\n+\t\t\t\t\t\t - 1.5 * err)\n+\t\t\t\t\t\tlock_count_ = lock_count;\n+\t\t\t\t\tRPI_LOG(\"Lock count: \" << lock_count_);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t} else\n+\t\t\tRPI_LOG(Name() << \": no device metadata\");\n+\t\tstatus.locked = lock_count_ >= MAX_LOCK_COUNT;\n+\t\t//printf(\"%s\\n\", status.locked ? \"+++++++++\" : \"-\");\n+\t\timage_metadata->Set(\"agc.status\", status);\n+\t}\n+}\n+\n+void Agc::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\tframe_count_++;\n+\t// First a little bit of housekeeping, fetching up-to-date settings and\n+\t// configuration, that kind of thing.\n+\thousekeepConfig();\n+\t// Get the current exposure values for the frame that's just arrived.\n+\tfetchCurrentExposure(image_metadata);\n+\t// Compute the total gain we require relative to the current exposure.\n+\tdouble gain, target_Y;\n+\tcomputeGain(stats.get(), image_metadata, gain, target_Y);\n+\t// Now compute the target (final) exposure which we think we want.\n+\tcomputeTargetExposure(gain);\n+\t// Some of the exposure has to be applied as digital gain, so work out\n+\t// what that is. 
This function also tells us whether it's decided to\n+\t// \"desaturate\" the image more quickly.\n+\tbool desaturate = applyDigitalGain(image_metadata, gain, target_Y);\n+\t// The results have to be filtered so as not to change too rapidly.\n+\tfilterExposure(desaturate);\n+\t// The last thing is to divvy up the exposure value into a shutter time\n+\t// and analogue_gain, according to the current exposure mode.\n+\tdivvyupExposure();\n+\t// Finally advertise what we've done.\n+\twriteAndFinish(image_metadata, desaturate);\n+}\n+\n+static void copy_string(std::string const &s, char *d, size_t size)\n+{\n+\tsize_t length = s.copy(d, size - 1);\n+\td[length] = '\\0';\n+}\n+\n+void Agc::housekeepConfig()\n+{\n+\t// First fetch all the up-to-date settings, so no one else has to do it.\n+\tstd::string new_exposure_mode_name, new_constraint_mode_name,\n+\t\tnew_metering_mode_name;\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\t\tnew_metering_mode_name = metering_mode_name_;\n+\t\tnew_exposure_mode_name = exposure_mode_name_;\n+\t\tnew_constraint_mode_name = constraint_mode_name_;\n+\t\tstatus_.ev = ev_;\n+\t\tstatus_.fixed_shutter = fixed_shutter_;\n+\t\tstatus_.fixed_analogue_gain = fixed_analogue_gain_;\n+\t\tstatus_.flicker_period = flicker_period_;\n+\t}\n+\tRPI_LOG(\"ev \" << status_.ev << \" fixed_shutter \"\n+\t\t      << status_.fixed_shutter << \" fixed_analogue_gain \"\n+\t\t      << status_.fixed_analogue_gain);\n+\t// Make sure the \"mode\" pointers point to the up-to-date things, if\n+\t// they've changed.\n+\tif (strcmp(new_metering_mode_name.c_str(), status_.metering_mode)) {\n+\t\tauto it = config_.metering_modes.find(new_metering_mode_name);\n+\t\tif (it == config_.metering_modes.end())\n+\t\t\tthrow std::runtime_error(\"Agc: no metering mode \" +\n+\t\t\t\t\t\t new_metering_mode_name);\n+\t\tmetering_mode_ = &it->second;\n+\t\tcopy_string(new_metering_mode_name, status_.metering_mode,\n+\t\t\t    
sizeof(status_.metering_mode));\n+\t}\n+\tif (strcmp(new_exposure_mode_name.c_str(), status_.exposure_mode)) {\n+\t\tauto it = config_.exposure_modes.find(new_exposure_mode_name);\n+\t\tif (it == config_.exposure_modes.end())\n+\t\t\tthrow std::runtime_error(\"Agc: no exposure profile \" +\n+\t\t\t\t\t\t new_exposure_mode_name);\n+\t\texposure_mode_ = &it->second;\n+\t\tcopy_string(new_exposure_mode_name, status_.exposure_mode,\n+\t\t\t    sizeof(status_.exposure_mode));\n+\t}\n+\tif (strcmp(new_constraint_mode_name.c_str(), status_.constraint_mode)) {\n+\t\tauto it =\n+\t\t\tconfig_.constraint_modes.find(new_constraint_mode_name);\n+\t\tif (it == config_.constraint_modes.end())\n+\t\t\tthrow std::runtime_error(\"Agc: no constraint list \" +\n+\t\t\t\t\t\t new_constraint_mode_name);\n+\t\tconstraint_mode_ = &it->second;\n+\t\tcopy_string(new_constraint_mode_name, status_.constraint_mode,\n+\t\t\t    sizeof(status_.constraint_mode));\n+\t}\n+\tRPI_LOG(\"exposure_mode \"\n+\t\t<< new_exposure_mode_name << \" constraint_mode \"\n+\t\t<< new_constraint_mode_name << \" metering_mode \"\n+\t\t<< new_metering_mode_name);\n+}\n+\n+void Agc::fetchCurrentExposure(Metadata *image_metadata)\n+{\n+\tstd::unique_lock<Metadata> lock(*image_metadata);\n+\tDeviceStatus *device_status =\n+\t\timage_metadata->GetLocked<DeviceStatus>(\"device.status\");\n+\tif (!device_status)\n+\t\tthrow std::runtime_error(\"Agc: no device metadata\");\n+\tcurrent_.shutter = device_status->shutter_speed;\n+\tcurrent_.analogue_gain = device_status->analogue_gain;\n+\tAgcStatus *agc_status =\n+\t\timage_metadata->GetLocked<AgcStatus>(\"agc.status\");\n+\tcurrent_.total_exposure = agc_status ? 
agc_status->total_exposure_value : 0;\n+\tcurrent_.total_exposure_no_dg = current_.shutter * current_.analogue_gain;\n+}\n+\n+static double compute_initial_Y(bcm2835_isp_stats *stats, Metadata *image_metadata,\n+\t\t\t\tdouble weights[])\n+{\n+\tbcm2835_isp_stats_region *regions = stats->agc_stats;\n+\tstruct AwbStatus awb;\n+\tawb.gain_r = awb.gain_g = awb.gain_b = 1.0; // in case no metadata\n+\tif (image_metadata->Get(\"awb.status\", awb) != 0)\n+\t\tRPI_WARN(\"Agc: no AWB status found\");\n+\tdouble Y_sum = 0, weight_sum = 0;\n+\tfor (int i = 0; i < AGC_STATS_SIZE; i++) {\n+\t\tif (regions[i].counted == 0)\n+\t\t\tcontinue;\n+\t\tweight_sum += weights[i];\n+\t\tdouble Y = regions[i].r_sum * awb.gain_r * .299 +\n+\t\t\t   regions[i].g_sum * awb.gain_g * .587 +\n+\t\t\t   regions[i].b_sum * awb.gain_b * .114;\n+\t\tY /= regions[i].counted;\n+\t\tY_sum += Y * weights[i];\n+\t}\n+\treturn Y_sum / weight_sum / (1 << PIPELINE_BITS);\n+}\n+\n+// We handle extra gain through EV by adjusting our Y targets. However, you\n+// simply can't monitor histograms once they get very close to (or beyond!)\n+// saturation, so we clamp the Y targets to this value. 
It does mean that EV\n+// increases don't necessarily do quite what you might expect in certain\n+// (contrived) cases.\n+\n+#define EV_GAIN_Y_TARGET_LIMIT 0.9\n+\n+static double constraint_compute_gain(AgcConstraint &c, Histogram &h,\n+\t\t\t\t      double lux, double ev_gain,\n+\t\t\t\t      double &target_Y)\n+{\n+\ttarget_Y = c.Y_target.Eval(c.Y_target.Domain().Clip(lux));\n+\ttarget_Y = std::min(EV_GAIN_Y_TARGET_LIMIT, target_Y * ev_gain);\n+\tdouble iqm = h.InterQuantileMean(c.q_lo, c.q_hi);\n+\treturn (target_Y * NUM_HISTOGRAM_BINS) / iqm;\n+}\n+\n+void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *image_metadata,\n+\t\t      double &gain, double &target_Y)\n+{\n+\tstruct LuxStatus lux = {};\n+\tlux.lux = 400; // default lux level to 400 in case no metadata found\n+\tif (image_metadata->Get(\"lux.status\", lux) != 0)\n+\t\tRPI_WARN(\"Agc: no lux level found\");\n+\tHistogram h(statistics->hist[0].g_hist, NUM_HISTOGRAM_BINS);\n+\tdouble ev_gain = status_.ev * config_.base_ev;\n+\t// The initial gain and target_Y come from some of the regions. 
After\n+\t// that we consider the histogram constraints.\n+\ttarget_Y =\n+\t\tconfig_.Y_target.Eval(config_.Y_target.Domain().Clip(lux.lux));\n+\ttarget_Y = std::min(EV_GAIN_Y_TARGET_LIMIT, target_Y * ev_gain);\n+\tdouble initial_Y = compute_initial_Y(statistics, image_metadata,\n+\t\t\t\t\t     metering_mode_->weights);\n+\tgain = std::min(10.0, target_Y / (initial_Y + .001));\n+\tRPI_LOG(\"Initially Y \" << initial_Y << \" target \" << target_Y\n+\t\t\t       << \" gives gain \" << gain);\n+\tfor (auto &c : *constraint_mode_) {\n+\t\tdouble new_target_Y;\n+\t\tdouble new_gain =\n+\t\t\tconstraint_compute_gain(c, h, lux.lux, ev_gain,\n+\t\t\t\t\t\tnew_target_Y);\n+\t\tRPI_LOG(\"Constraint has target_Y \"\n+\t\t\t<< new_target_Y << \" giving gain \" << new_gain);\n+\t\tif (c.bound == AgcConstraint::Bound::LOWER &&\n+\t\t    new_gain > gain) {\n+\t\t\tRPI_LOG(\"Lower bound constraint adopted\");\n+\t\t\tgain = new_gain, target_Y = new_target_Y;\n+\t\t} else if (c.bound == AgcConstraint::Bound::UPPER &&\n+\t\t\t   new_gain < gain) {\n+\t\t\tRPI_LOG(\"Upper bound constraint adopted\");\n+\t\t\tgain = new_gain, target_Y = new_target_Y;\n+\t\t}\n+\t}\n+\tRPI_LOG(\"Final gain \" << gain << \" (target_Y \" << target_Y << \" ev \"\n+\t\t\t      << status_.ev << \" base_ev \" << config_.base_ev\n+\t\t\t      << \")\");\n+}\n+\n+void Agc::computeTargetExposure(double gain)\n+{\n+\t// The statistics reflect the image without digital gain, so the final\n+\t// total exposure we're aiming for is:\n+\ttarget_.total_exposure = current_.total_exposure_no_dg * gain;\n+\t// The final target exposure is also limited to what the exposure\n+\t// mode allows.\n+\tdouble max_total_exposure =\n+\t\t(status_.fixed_shutter != 0.0\n+\t\t\t ? status_.fixed_shutter\n+\t\t\t : exposure_mode_->shutter.back()) *\n+\t\t(status_.fixed_analogue_gain != 0.0\n+\t\t\t ? 
status_.fixed_analogue_gain\n+\t\t\t : exposure_mode_->gain.back());\n+\ttarget_.total_exposure = std::min(target_.total_exposure,\n+\t\t\t\t\t  max_total_exposure);\n+\tRPI_LOG(\"Target total_exposure \" << target_.total_exposure);\n+}\n+\n+bool Agc::applyDigitalGain(Metadata *image_metadata, double gain,\n+\t\t\t   double target_Y)\n+{\n+\tdouble dg = 1.0;\n+\t// I think this pipeline subtracts black level and rescales before we\n+\t// get the stats, so no need to worry about it.\n+\tstruct AwbStatus awb;\n+\tif (image_metadata->Get(\"awb.status\", awb) == 0) {\n+\t\tdouble min_gain = std::min(awb.gain_r,\n+\t\t\t\t\t   std::min(awb.gain_g, awb.gain_b));\n+\t\tdg *= std::max(1.0, 1.0 / min_gain);\n+\t} else\n+\t\tRPI_WARN(\"Agc: no AWB status found\");\n+\tRPI_LOG(\"after AWB, target dg \" << dg << \" gain \" << gain\n+\t\t\t\t\t<< \" target_Y \" << target_Y);\n+\t// Finally, if we're trying to reduce exposure but the target_Y is\n+\t// \"close\" to 1.0, then the gain computed for that constraint will be\n+\t// only slightly less than one, because the measured Y can never be\n+\t// larger than 1.0. When this happens, demand a large digital gain so\n+\t// that the exposure can be reduced, de-saturating the image much more\n+\t// quickly (and we then approach the correct value more quickly from\n+\t// below).\n+\tbool desaturate = target_Y > config_.fast_reduce_threshold &&\n+\t\t\t  gain < sqrt(target_Y);\n+\tif (desaturate)\n+\t\tdg /= config_.fast_reduce_threshold;\n+\tRPI_LOG(\"Digital gain \" << dg << \" desaturate? \" << desaturate);\n+\ttarget_.total_exposure_no_dg = target_.total_exposure / dg;\n+\tRPI_LOG(\"Target total_exposure_no_dg \" << target_.total_exposure_no_dg);\n+\treturn desaturate;\n+}\n+\n+void Agc::filterExposure(bool desaturate)\n+{\n+\tdouble speed = frame_count_ <= config_.startup_frames ? 
1.0 : config_.speed;\n+\tif (filtered_.total_exposure == 0.0) {\n+\t\tfiltered_.total_exposure = target_.total_exposure;\n+\t\tfiltered_.total_exposure_no_dg = target_.total_exposure_no_dg;\n+\t} else {\n+\t\t// If close to the result go faster, to save making so many\n+\t\t// micro-adjustments on the way. (Make this customisable?)\n+\t\tif (filtered_.total_exposure < 1.2 * target_.total_exposure &&\n+\t\t    filtered_.total_exposure > 0.8 * target_.total_exposure)\n+\t\t\tspeed = sqrt(speed);\n+\t\tfiltered_.total_exposure = speed * target_.total_exposure +\n+\t\t\t\t\t   filtered_.total_exposure * (1.0 - speed);\n+\t\t// When desaturating, take a big jump down in exposure_no_dg,\n+\t\t// which we'll hide with digital gain.\n+\t\tif (desaturate)\n+\t\t\tfiltered_.total_exposure_no_dg =\n+\t\t\t\ttarget_.total_exposure_no_dg;\n+\t\telse\n+\t\t\tfiltered_.total_exposure_no_dg =\n+\t\t\t\tspeed * target_.total_exposure_no_dg +\n+\t\t\t\tfiltered_.total_exposure_no_dg * (1.0 - speed);\n+\t}\n+\t// We can't let the no_dg exposure deviate too far below the\n+\t// total exposure, as there might not be enough digital gain available\n+\t// in the ISP to hide it (which will cause nasty oscillation).\n+\tif (filtered_.total_exposure_no_dg <\n+\t    filtered_.total_exposure * config_.fast_reduce_threshold)\n+\t\tfiltered_.total_exposure_no_dg = filtered_.total_exposure *\n+\t\t\t\t\t\t config_.fast_reduce_threshold;\n+\tRPI_LOG(\"After filtering, total_exposure \" << filtered_.total_exposure <<\n+\t\t\" no dg \" << filtered_.total_exposure_no_dg);\n+}\n+\n+void Agc::divvyupExposure()\n+{\n+\t// Sending the fixed shutter/gain cases through the same code may seem\n+\t// unnecessary, but it will make more sense when we extend this to cover\n+\t// variable aperture.\n+\tdouble exposure_value = filtered_.total_exposure_no_dg;\n+\tdouble shutter_time, analogue_gain;\n+\tshutter_time = status_.fixed_shutter != 0.0\n+\t\t\t       ? 
status_.fixed_shutter\n+\t\t\t       : exposure_mode_->shutter[0];\n+\tanalogue_gain = status_.fixed_analogue_gain != 0.0\n+\t\t\t\t? status_.fixed_analogue_gain\n+\t\t\t\t: exposure_mode_->gain[0];\n+\tif (shutter_time * analogue_gain < exposure_value) {\n+\t\tfor (unsigned int stage = 1;\n+\t\t     stage < exposure_mode_->gain.size(); stage++) {\n+\t\t\tif (status_.fixed_shutter == 0.0) {\n+\t\t\t\tif (exposure_mode_->shutter[stage] *\n+\t\t\t\t\t    analogue_gain >=\n+\t\t\t\t    exposure_value) {\n+\t\t\t\t\tshutter_time =\n+\t\t\t\t\t\texposure_value / analogue_gain;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\t\t\t\tshutter_time = exposure_mode_->shutter[stage];\n+\t\t\t}\n+\t\t\tif (status_.fixed_analogue_gain == 0.0) {\n+\t\t\t\tif (exposure_mode_->gain[stage] *\n+\t\t\t\t\t    shutter_time >=\n+\t\t\t\t    exposure_value) {\n+\t\t\t\t\tanalogue_gain =\n+\t\t\t\t\t\texposure_value / shutter_time;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\t\t\t\tanalogue_gain = exposure_mode_->gain[stage];\n+\t\t\t}\n+\t\t}\n+\t}\n+\tRPI_LOG(\"Divided up shutter and gain are \" << shutter_time << \" and \"\n+\t\t\t\t\t\t   << analogue_gain);\n+\t// Finally adjust shutter time for flicker avoidance (require both\n+\t// shutter and gain not to be fixed).\n+\tif (status_.fixed_shutter == 0.0 &&\n+\t    status_.fixed_analogue_gain == 0.0 &&\n+\t    status_.flicker_period != 0.0) {\n+\t\tint flicker_periods = shutter_time / status_.flicker_period;\n+\t\tif (flicker_periods > 0) {\n+\t\t\tdouble new_shutter_time = flicker_periods * status_.flicker_period;\n+\t\t\tanalogue_gain *= shutter_time / new_shutter_time;\n+\t\t\t// We should still not allow the ag to go over the\n+\t\t\t// largest value in the exposure mode. 
Note that this\n+\t\t\t// may force more of the total exposure into the digital\n+\t\t\t// gain as a side-effect.\n+\t\t\tanalogue_gain = std::min(analogue_gain,\n+\t\t\t\t\t\t exposure_mode_->gain.back());\n+\t\t\tshutter_time = new_shutter_time;\n+\t\t}\n+\t\tRPI_LOG(\"After flicker avoidance, shutter \"\n+\t\t\t<< shutter_time << \" gain \" << analogue_gain);\n+\t}\n+\tfiltered_.shutter = shutter_time;\n+\tfiltered_.analogue_gain = analogue_gain;\n+}\n+\n+void Agc::writeAndFinish(Metadata *image_metadata, bool desaturate)\n+{\n+\tstatus_.total_exposure_value = filtered_.total_exposure;\n+\tstatus_.target_exposure_value = desaturate ? 0 : target_.total_exposure_no_dg;\n+\tstatus_.shutter_time = filtered_.shutter;\n+\tstatus_.analogue_gain = filtered_.analogue_gain;\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(output_mutex_);\n+\t\toutput_status_ = status_;\n+\t}\n+\t// Write to metadata as well, in case anyone wants to update the camera\n+\t// immediately.\n+\timage_metadata->Set(\"agc.status\", status_);\n+\tRPI_LOG(\"Output written, total exposure requested is \"\n+\t\t<< filtered_.total_exposure);\n+\tRPI_LOG(\"Camera exposure update: shutter time \" << filtered_.shutter <<\n+\t\t\" analogue gain \" << filtered_.analogue_gain);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Agc(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/agc.hpp b/src/ipa/raspberrypi/controller/rpi/agc.hpp\nnew file mode 100644\nindex 000000000000..dbcefba65d31\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/agc.hpp\n@@ -0,0 +1,123 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * agc.hpp - AGC/AEC control algorithm\n+ */\n+#pragma once\n+\n+#include <vector>\n+#include <mutex>\n+\n+#include \"../agc_algorithm.hpp\"\n+#include \"../agc_status.h\"\n+#include 
\"../pwl.hpp\"\n+\n+// This is our implementation of AGC.\n+\n+// This is the number actually set up by the firmware, not the maximum possible\n+// number (which is 16).\n+\n+#define AGC_STATS_SIZE 15\n+\n+namespace RPi {\n+\n+struct AgcMeteringMode {\n+\tdouble weights[AGC_STATS_SIZE];\n+\tvoid Read(boost::property_tree::ptree const &params);\n+};\n+\n+struct AgcExposureMode {\n+\tstd::vector<double> shutter;\n+\tstd::vector<double> gain;\n+\tvoid Read(boost::property_tree::ptree const &params);\n+};\n+\n+struct AgcConstraint {\n+\tenum class Bound { LOWER = 0, UPPER = 1 };\n+\tBound bound;\n+\tdouble q_lo;\n+\tdouble q_hi;\n+\tPwl Y_target;\n+\tvoid Read(boost::property_tree::ptree const &params);\n+};\n+\n+typedef std::vector<AgcConstraint> AgcConstraintMode;\n+\n+struct AgcConfig {\n+\tvoid Read(boost::property_tree::ptree const &params);\n+\tstd::map<std::string, AgcMeteringMode> metering_modes;\n+\tstd::map<std::string, AgcExposureMode> exposure_modes;\n+\tstd::map<std::string, AgcConstraintMode> constraint_modes;\n+\tPwl Y_target;\n+\tdouble speed;\n+\tuint16_t startup_frames;\n+\tdouble max_change;\n+\tdouble min_change;\n+\tdouble fast_reduce_threshold;\n+\tdouble speed_up_threshold;\n+\tstd::string default_metering_mode;\n+\tstd::string default_exposure_mode;\n+\tstd::string default_constraint_mode;\n+\tdouble base_ev;\n+};\n+\n+class Agc : public AgcAlgorithm\n+{\n+public:\n+\tAgc(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid SetEv(double ev) override;\n+\tvoid SetFlickerPeriod(double flicker_period) override;\n+\tvoid SetFixedShutter(double fixed_shutter) override; // microseconds\n+\tvoid SetFixedAnalogueGain(double fixed_analogue_gain) override;\n+\tvoid SetMeteringMode(std::string const &metering_mode_name) override;\n+\tvoid SetExposureMode(std::string const &exposure_mode_name) override;\n+\tvoid SetConstraintMode(std::string const &constraint_mode_name) 
override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\tvoid Process(StatisticsPtr &stats, Metadata *image_metadata) override;\n+\n+private:\n+\tAgcConfig config_;\n+\tvoid housekeepConfig();\n+\tvoid fetchCurrentExposure(Metadata *image_metadata);\n+\tvoid computeGain(bcm2835_isp_stats *statistics, Metadata *image_metadata,\n+\t\t\t double &gain, double &target_Y);\n+\tvoid computeTargetExposure(double gain);\n+\tbool applyDigitalGain(Metadata *image_metadata, double gain,\n+\t\t\t      double target_Y);\n+\tvoid filterExposure(bool desaturate);\n+\tvoid divvyupExposure();\n+\tvoid writeAndFinish(Metadata *image_metadata, bool desaturate);\n+\tAgcMeteringMode *metering_mode_;\n+\tAgcExposureMode *exposure_mode_;\n+\tAgcConstraintMode *constraint_mode_;\n+\tuint64_t frame_count_;\n+\tstruct ExposureValues {\n+\t\tExposureValues() : shutter(0), analogue_gain(0),\n+\t\t\t\t   total_exposure(0), total_exposure_no_dg(0) {}\n+\t\tdouble shutter;\n+\t\tdouble analogue_gain;\n+\t\tdouble total_exposure;\n+\t\tdouble total_exposure_no_dg; // without digital gain\n+\t};\n+\tExposureValues current_;  // values for the current frame\n+\tExposureValues target_;   // calculate the values we want here\n+\tExposureValues filtered_; // these values are filtered towards target\n+\tAgcStatus status_;        // to \"latch\" settings so they can't change\n+\tAgcStatus output_status_; // the status we will write out\n+\tstd::mutex output_mutex_;\n+\tint lock_count_;\n+\t// Below here the \"settings\" that applications can change.\n+\tstd::mutex settings_mutex_;\n+\tstd::string metering_mode_name_;\n+\tstd::string exposure_mode_name_;\n+\tstd::string constraint_mode_name_;\n+\tdouble ev_;\n+\tdouble flicker_period_;\n+\tdouble fixed_shutter_;\n+\tdouble fixed_analogue_gain_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/alsc.cpp b/src/ipa/raspberrypi/controller/rpi/alsc.cpp\nnew file mode 100644\nindex 000000000000..821a0ca34a2c\n--- 
/dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/alsc.cpp\n@@ -0,0 +1,705 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * alsc.cpp - ALSC (auto lens shading correction) control algorithm\n+ */\n+#include <math.h>\n+\n+#include \"../awb_status.h\"\n+#include \"alsc.hpp\"\n+\n+// Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm.\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.alsc\"\n+\n+static const int X = ALSC_CELLS_X;\n+static const int Y = ALSC_CELLS_Y;\n+static const int XY = X * Y;\n+static const double INSUFFICIENT_DATA = -1.0;\n+\n+Alsc::Alsc(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+\tasync_abort_ = async_start_ = async_started_ = async_finished_ = false;\n+\tasync_thread_ = std::thread(std::bind(&Alsc::asyncFunc, this));\n+}\n+\n+Alsc::~Alsc()\n+{\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tasync_abort_ = true;\n+\t\tasync_signal_.notify_one();\n+\t}\n+\tasync_thread_.join();\n+}\n+\n+char const *Alsc::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+static void generate_lut(double *lut, boost::property_tree::ptree const &params)\n+{\n+\tdouble cstrength = params.get<double>(\"corner_strength\", 2.0);\n+\tif (cstrength <= 1.0)\n+\t\tthrow std::runtime_error(\"Alsc: corner_strength must be > 1.0\");\n+\tdouble asymmetry = params.get<double>(\"asymmetry\", 1.0);\n+\tif (asymmetry < 0)\n+\t\tthrow std::runtime_error(\"Alsc: asymmetry must be >= 0\");\n+\tdouble f1 = cstrength - 1, f2 = 1 + sqrt(cstrength);\n+\tdouble R2 = X * Y / 4 * (1 + asymmetry * asymmetry);\n+\tint num = 0;\n+\tfor (int y = 0; y < Y; y++) {\n+\t\tfor (int x = 0; x < X; x++) {\n+\t\t\tdouble dy = y - Y / 2 + 0.5,\n+\t\t\t       dx = (x - X / 2 + 0.5) * asymmetry;\n+\t\t\tdouble r2 = (dx * dx + dy * dy) / R2;\n+\t\t\tlut[num++] =\n+\t\t\t\t(f1 * r2 + f2) * (f1 * r2 + f2) /\n+\t\t\t\t(f2 * f2); // this reproduces the cos^4 rule\n+\t\t}\n+\t}\n+}\n+\n+static void read_lut(double 
*lut, boost::property_tree::ptree const &params)\n+{\n+\tint num = 0;\n+\tconst int max_num = XY;\n+\tfor (auto &p : params) {\n+\t\tif (num == max_num)\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"Alsc: too many entries in LSC table\");\n+\t\tlut[num++] = p.second.get_value<double>();\n+\t}\n+\tif (num < max_num)\n+\t\tthrow std::runtime_error(\"Alsc: too few entries in LSC table\");\n+}\n+\n+static void read_calibrations(std::vector<AlscCalibration> &calibrations,\n+\t\t\t      boost::property_tree::ptree const &params,\n+\t\t\t      std::string const &name)\n+{\n+\tif (params.get_child_optional(name)) {\n+\t\tdouble last_ct = 0;\n+\t\tfor (auto &p : params.get_child(name)) {\n+\t\t\tdouble ct = p.second.get<double>(\"ct\");\n+\t\t\tif (ct <= last_ct)\n+\t\t\t\tthrow std::runtime_error(\n+\t\t\t\t\t\"Alsc: entries in \" + name +\n+\t\t\t\t\t\" must be in increasing ct order\");\n+\t\t\tAlscCalibration calibration;\n+\t\t\tcalibration.ct = last_ct = ct;\n+\t\t\tboost::property_tree::ptree const &table =\n+\t\t\t\tp.second.get_child(\"table\");\n+\t\t\tint num = 0;\n+\t\t\tfor (auto it = table.begin(); it != table.end(); it++) {\n+\t\t\t\tif (num == XY)\n+\t\t\t\t\tthrow std::runtime_error(\n+\t\t\t\t\t\t\"Alsc: too many values for ct \" +\n+\t\t\t\t\t\tstd::to_string(ct) + \" in \" +\n+\t\t\t\t\t\tname);\n+\t\t\t\tcalibration.table[num++] =\n+\t\t\t\t\tit->second.get_value<double>();\n+\t\t\t}\n+\t\t\tif (num != XY)\n+\t\t\t\tthrow std::runtime_error(\n+\t\t\t\t\t\"Alsc: too few values for ct \" +\n+\t\t\t\t\tstd::to_string(ct) + \" in \" + name);\n+\t\t\tcalibrations.push_back(calibration);\n+\t\t\tRPI_LOG(\"Read \" << name << \" calibration for ct \"\n+\t\t\t\t\t<< ct);\n+\t\t}\n+\t}\n+}\n+\n+void Alsc::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(\"Alsc\");\n+\tconfig_.frame_period = params.get<uint16_t>(\"frame_period\", 12);\n+\tconfig_.startup_frames = params.get<uint16_t>(\"startup_frames\", 10);\n+\tconfig_.speed = 
params.get<double>(\"speed\", 0.05);\n+\tdouble sigma = params.get<double>(\"sigma\", 0.01);\n+\tconfig_.sigma_Cr = params.get<double>(\"sigma_Cr\", sigma);\n+\tconfig_.sigma_Cb = params.get<double>(\"sigma_Cb\", sigma);\n+\tconfig_.min_count = params.get<double>(\"min_count\", 10.0);\n+\tconfig_.min_G = params.get<uint16_t>(\"min_G\", 50);\n+\tconfig_.omega = params.get<double>(\"omega\", 1.3);\n+\tconfig_.n_iter = params.get<uint32_t>(\"n_iter\", X + Y);\n+\tconfig_.luminance_strength =\n+\t\tparams.get<double>(\"luminance_strength\", 1.0);\n+\tfor (int i = 0; i < XY; i++)\n+\t\tconfig_.luminance_lut[i] = 1.0;\n+\tif (params.get_child_optional(\"corner_strength\"))\n+\t\tgenerate_lut(config_.luminance_lut, params);\n+\telse if (params.get_child_optional(\"luminance_lut\"))\n+\t\tread_lut(config_.luminance_lut,\n+\t\t\t params.get_child(\"luminance_lut\"));\n+\telse\n+\t\tRPI_WARN(\"Alsc: no luminance table - assume unity everywhere\");\n+\tread_calibrations(config_.calibrations_Cr, params, \"calibrations_Cr\");\n+\tread_calibrations(config_.calibrations_Cb, params, \"calibrations_Cb\");\n+\tconfig_.default_ct = params.get<double>(\"default_ct\", 4500.0);\n+\tconfig_.threshold = params.get<double>(\"threshold\", 1e-3);\n+}\n+\n+static void get_cal_table(double ct,\n+\t\t\t  std::vector<AlscCalibration> const &calibrations,\n+\t\t\t  double cal_table[XY]);\n+static void resample_cal_table(double const cal_table_in[XY],\n+\t\t\t       CameraMode const &camera_mode,\n+\t\t\t       double cal_table_out[XY]);\n+static void compensate_lambdas_for_cal(double const cal_table[XY],\n+\t\t\t\t       double const old_lambdas[XY],\n+\t\t\t\t       double new_lambdas[XY]);\n+static void add_luminance_to_tables(double results[3][Y][X],\n+\t\t\t\t    double const lambda_r[XY], double lambda_g,\n+\t\t\t\t    double const lambda_b[XY],\n+\t\t\t\t    double const luminance_lut[XY],\n+\t\t\t\t    double luminance_strength);\n+\n+void 
Alsc::Initialise()\n+{\n+\tRPI_LOG(\"Alsc\");\n+\tframe_count2_ = frame_count_ = frame_phase_ = 0;\n+\tfirst_time_ = true;\n+\t// Initialise the lambdas. Each call to Process then restarts from the\n+\t// previous results.  Also initialise the previous frame tables to the\n+\t// same harmless values.\n+\tfor (int i = 0; i < XY; i++)\n+\t\tlambda_r_[i] = lambda_b_[i] = 1.0;\n+}\n+\n+void Alsc::SwitchMode(CameraMode const &camera_mode)\n+{\n+\t// There's a bit of a question what we should do if the \"crop\" of the\n+\t// camera mode has changed.  Any calculation currently in flight would\n+\t// not be useful to the new mode, so arguably we should abort it, and\n+\t// generate a new table (like the \"first_time\" code already here).  When\n+\t// the crop doesn't change, we can presumably just leave things\n+\t// alone. For now, I think we'll just wait and see. When the crop does\n+\t// change, any effects should be transient, and if they're not transient\n+\t// enough, we'll revisit the question then.\n+\tcamera_mode_ = camera_mode;\n+\tif (first_time_) {\n+\t\t// On the first time, arrange for something sensible in the\n+\t\t// initial tables. Construct the tables for some default colour\n+\t\t// temperature. 
This echoes the code in doAlsc, without the\n+\t\t// adaptive algorithm.\n+\t\tdouble cal_table_r[XY], cal_table_b[XY], cal_table_tmp[XY];\n+\t\tget_cal_table(4000, config_.calibrations_Cr, cal_table_tmp);\n+\t\tresample_cal_table(cal_table_tmp, camera_mode_, cal_table_r);\n+\t\tget_cal_table(4000, config_.calibrations_Cb, cal_table_tmp);\n+\t\tresample_cal_table(cal_table_tmp, camera_mode_, cal_table_b);\n+\t\tcompensate_lambdas_for_cal(cal_table_r, lambda_r_,\n+\t\t\t\t\t   async_lambda_r_);\n+\t\tcompensate_lambdas_for_cal(cal_table_b, lambda_b_,\n+\t\t\t\t\t   async_lambda_b_);\n+\t\tadd_luminance_to_tables(sync_results_, async_lambda_r_, 1.0,\n+\t\t\t\t\tasync_lambda_b_, config_.luminance_lut,\n+\t\t\t\t\tconfig_.luminance_strength);\n+\t\tmemcpy(prev_sync_results_, sync_results_,\n+\t\t       sizeof(prev_sync_results_));\n+\t\tfirst_time_ = false;\n+\t}\n+}\n+\n+void Alsc::fetchAsyncResults()\n+{\n+\tRPI_LOG(\"Fetch ALSC results\");\n+\tasync_finished_ = false;\n+\tasync_started_ = false;\n+\tmemcpy(sync_results_, async_results_, sizeof(sync_results_));\n+}\n+\n+static double get_ct(Metadata *metadata, double default_ct)\n+{\n+\tAwbStatus awb_status;\n+\tawb_status.temperature_K = default_ct; // in case nothing found\n+\tif (metadata->Get(\"awb.status\", awb_status) != 0)\n+\t\tRPI_WARN(\"Alsc: no AWB results found, using \"\n+\t\t\t << awb_status.temperature_K);\n+\telse\n+\t\tRPI_LOG(\"Alsc: AWB results found, using \"\n+\t\t\t<< awb_status.temperature_K);\n+\treturn awb_status.temperature_K;\n+}\n+\n+static void copy_stats(bcm2835_isp_stats_region regions[XY], StatisticsPtr &stats,\n+\t\t       AlscStatus const &status)\n+{\n+\tbcm2835_isp_stats_region *input_regions = stats->awb_stats;\n+\tdouble *r_table = (double *)status.r;\n+\tdouble *g_table = (double *)status.g;\n+\tdouble *b_table = (double *)status.b;\n+\tfor (int i = 0; i < XY; i++) {\n+\t\tregions[i].r_sum = input_regions[i].r_sum / r_table[i];\n+\t\tregions[i].g_sum = input_regions[i].g_sum / 
g_table[i];\n+\t\tregions[i].b_sum = input_regions[i].b_sum / b_table[i];\n+\t\tregions[i].counted = input_regions[i].counted;\n+\t\t// (don't care about the uncounted value)\n+\t}\n+}\n+\n+void Alsc::restartAsync(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\tRPI_LOG(\"Starting ALSC thread\");\n+\t// Get the current colour temperature. It's all we need from the\n+\t// metadata.\n+\tct_ = get_ct(image_metadata, config_.default_ct);\n+\t// We have to copy the statistics here, dividing out our best guess of\n+\t// the LSC table that the pipeline applied to them.\n+\tAlscStatus alsc_status;\n+\tif (image_metadata->Get(\"alsc.status\", alsc_status) != 0) {\n+\t\tRPI_WARN(\"No ALSC status found for applied gains!\");\n+\t\tfor (int y = 0; y < Y; y++)\n+\t\t\tfor (int x = 0; x < X; x++) {\n+\t\t\t\talsc_status.r[y][x] = 1.0;\n+\t\t\t\talsc_status.g[y][x] = 1.0;\n+\t\t\t\talsc_status.b[y][x] = 1.0;\n+\t\t\t}\n+\t}\n+\tcopy_stats(statistics_, stats, alsc_status);\n+\tframe_phase_ = 0;\n+\t// copy the camera mode so it won't change during the calculations\n+\tasync_camera_mode_ = camera_mode_;\n+\tasync_start_ = true;\n+\tasync_started_ = true;\n+\tasync_signal_.notify_one();\n+}\n+\n+void Alsc::Prepare(Metadata *image_metadata)\n+{\n+\t// Count frames since we started, and since we last poked the async\n+\t// thread.\n+\tif (frame_count_ < (int)config_.startup_frames)\n+\t\tframe_count_++;\n+\tdouble speed = frame_count_ < (int)config_.startup_frames\n+\t\t\t       ? 
1.0\n+\t\t\t       : config_.speed;\n+\tRPI_LOG(\"Alsc: frame_count \" << frame_count_ << \" speed \" << speed);\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\tif (async_started_ && async_finished_) {\n+\t\t\tRPI_LOG(\"ALSC thread finished\");\n+\t\t\tfetchAsyncResults();\n+\t\t}\n+\t}\n+\t// Apply IIR filter to results and program into the pipeline.\n+\tdouble *ptr = (double *)sync_results_,\n+\t       *pptr = (double *)prev_sync_results_;\n+\tfor (unsigned int i = 0;\n+\t     i < sizeof(sync_results_) / sizeof(double); i++)\n+\t\tpptr[i] = speed * ptr[i] + (1.0 - speed) * pptr[i];\n+\t// Put output values into status metadata.\n+\tAlscStatus status;\n+\tmemcpy(status.r, prev_sync_results_[0], sizeof(status.r));\n+\tmemcpy(status.g, prev_sync_results_[1], sizeof(status.g));\n+\tmemcpy(status.b, prev_sync_results_[2], sizeof(status.b));\n+\timage_metadata->Set(\"alsc.status\", status);\n+}\n+\n+void Alsc::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\t// Count frames since we started, and since we last poked the async\n+\t// thread.\n+\tif (frame_phase_ < (int)config_.frame_period)\n+\t\tframe_phase_++;\n+\tif (frame_count2_ < (int)config_.startup_frames)\n+\t\tframe_count2_++;\n+\tRPI_LOG(\"Alsc: frame_phase \" << frame_phase_);\n+\tif (frame_phase_ >= (int)config_.frame_period ||\n+\t    frame_count2_ < (int)config_.startup_frames) {\n+\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\tif (async_started_ == false) {\n+\t\t\tRPI_LOG(\"ALSC thread starting\");\n+\t\t\trestartAsync(stats, image_metadata);\n+\t\t}\n+\t}\n+}\n+\n+void Alsc::asyncFunc()\n+{\n+\twhile (true) {\n+\t\t{\n+\t\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\t\tasync_signal_.wait(lock, [&] {\n+\t\t\t\treturn async_start_ || async_abort_;\n+\t\t\t});\n+\t\t\tasync_start_ = false;\n+\t\t\tif (async_abort_)\n+\t\t\t\tbreak;\n+\t\t}\n+\t\tdoAlsc();\n+\t\t{\n+\t\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\t\tasync_finished_ = 
true;\n+\t\t\tsync_signal_.notify_one();\n+\t\t}\n+\t}\n+}\n+\n+void get_cal_table(double ct, std::vector<AlscCalibration> const &calibrations,\n+\t\t   double cal_table[XY])\n+{\n+\tif (calibrations.empty()) {\n+\t\tfor (int i = 0; i < XY; i++)\n+\t\t\tcal_table[i] = 1.0;\n+\t\tRPI_LOG(\"Alsc: no calibrations found\");\n+\t} else if (ct <= calibrations.front().ct) {\n+\t\tmemcpy(cal_table, calibrations.front().table,\n+\t\t       XY * sizeof(double));\n+\t\tRPI_LOG(\"Alsc: using calibration for \"\n+\t\t\t<< calibrations.front().ct);\n+\t} else if (ct >= calibrations.back().ct) {\n+\t\tmemcpy(cal_table, calibrations.back().table,\n+\t\t       XY * sizeof(double));\n+\t\tRPI_LOG(\"Alsc: using calibration for \"\n+\t\t\t<< calibrations.back().ct);\n+\t} else {\n+\t\tint idx = 0;\n+\t\twhile (ct > calibrations[idx + 1].ct)\n+\t\t\tidx++;\n+\t\tdouble ct0 = calibrations[idx].ct,\n+\t\t       ct1 = calibrations[idx + 1].ct;\n+\t\tRPI_LOG(\"Alsc: ct is \" << ct << \", interpolating between \"\n+\t\t\t\t       << ct0 << \" and \" << ct1);\n+\t\tfor (int i = 0; i < XY; i++)\n+\t\t\tcal_table[i] =\n+\t\t\t\t(calibrations[idx].table[i] * (ct1 - ct) +\n+\t\t\t\t calibrations[idx + 1].table[i] * (ct - ct0)) /\n+\t\t\t\t(ct1 - ct0);\n+\t}\n+}\n+\n+void resample_cal_table(double const cal_table_in[XY],\n+\t\t\tCameraMode const &camera_mode, double cal_table_out[XY])\n+{\n+\t// Precalculate and cache the x sampling locations and phases to save\n+\t// recomputing them on every row.\n+\tint x_lo[X], x_hi[X];\n+\tdouble xf[X];\n+\tdouble scale_x = camera_mode.sensor_width /\n+\t\t\t (camera_mode.width * camera_mode.scale_x);\n+\tdouble x_off = camera_mode.crop_x / (double)camera_mode.sensor_width;\n+\tdouble x = .5 / scale_x + x_off * X - .5;\n+\tdouble x_inc = 1 / scale_x;\n+\tfor (int i = 0; i < X; i++, x += x_inc) {\n+\t\tx_lo[i] = floor(x);\n+\t\txf[i] = x - x_lo[i];\n+\t\tx_hi[i] = std::min(x_lo[i] + 1, X - 1);\n+\t\tx_lo[i] = std::max(x_lo[i], 0);\n+\t}\n+\t// Now march over 
the output table generating the new values.\n+\tdouble scale_y = camera_mode.sensor_height /\n+\t\t\t (camera_mode.height * camera_mode.scale_y);\n+\tdouble y_off = camera_mode.crop_y / (double)camera_mode.sensor_height;\n+\tdouble y = .5 / scale_y + y_off * Y - .5;\n+\tdouble y_inc = 1 / scale_y;\n+\tfor (int j = 0; j < Y; j++, y += y_inc) {\n+\t\tint y_lo = floor(y);\n+\t\tdouble yf = y - y_lo;\n+\t\tint y_hi = std::min(y_lo + 1, Y - 1);\n+\t\ty_lo = std::max(y_lo, 0);\n+\t\tdouble const *row_above = cal_table_in + X * y_lo;\n+\t\tdouble const *row_below = cal_table_in + X * y_hi;\n+\t\tfor (int i = 0; i < X; i++) {\n+\t\t\tdouble above = row_above[x_lo[i]] * (1 - xf[i]) +\n+\t\t\t\t       row_above[x_hi[i]] * xf[i];\n+\t\t\tdouble below = row_below[x_lo[i]] * (1 - xf[i]) +\n+\t\t\t\t       row_below[x_hi[i]] * xf[i];\n+\t\t\t*(cal_table_out++) = above * (1 - yf) + below * yf;\n+\t\t}\n+\t}\n+}\n+\n+// Calculate chrominance statistics (R/G and B/G) for each region.\n+static_assert(XY == AWB_REGIONS, \"ALSC/AWB statistics region mismatch\");\n+static void calculate_Cr_Cb(bcm2835_isp_stats_region *awb_region, double Cr[XY],\n+\t\t\t    double Cb[XY], uint32_t min_count, uint16_t min_G)\n+{\n+\tfor (int i = 0; i < XY; i++) {\n+\t\tbcm2835_isp_stats_region &zone = awb_region[i];\n+\t\tif (zone.counted <= min_count ||\n+\t\t    zone.g_sum / zone.counted <= min_G) {\n+\t\t\tCr[i] = Cb[i] = INSUFFICIENT_DATA;\n+\t\t\tcontinue;\n+\t\t}\n+\t\tCr[i] = zone.r_sum / (double)zone.g_sum;\n+\t\tCb[i] = zone.b_sum / (double)zone.g_sum;\n+\t}\n+}\n+\n+static void apply_cal_table(double const cal_table[XY], double C[XY])\n+{\n+\tfor (int i = 0; i < XY; i++)\n+\t\tif (C[i] != INSUFFICIENT_DATA)\n+\t\t\tC[i] *= cal_table[i];\n+}\n+\n+void compensate_lambdas_for_cal(double const cal_table[XY],\n+\t\t\t\tdouble const old_lambdas[XY],\n+\t\t\t\tdouble new_lambdas[XY])\n+{\n+\tdouble min_new_lambda = std::numeric_limits<double>::max();\n+\tfor (int i = 0; i < XY; i++) 
{\n+\t\tnew_lambdas[i] = old_lambdas[i] * cal_table[i];\n+\t\tmin_new_lambda = std::min(min_new_lambda, new_lambdas[i]);\n+\t}\n+\tfor (int i = 0; i < XY; i++)\n+\t\tnew_lambdas[i] /= min_new_lambda;\n+}\n+\n+static void print_cal_table(double const C[XY])\n+{\n+\tprintf(\"table: [\\n\");\n+\tfor (int j = 0; j < Y; j++) {\n+\t\tfor (int i = 0; i < X; i++) {\n+\t\t\tprintf(\"%5.3f\", 1.0 / C[j * X + i]);\n+\t\t\tif (i != X - 1 || j != Y - 1)\n+\t\t\t\tprintf(\",\");\n+\t\t}\n+\t\tprintf(\"\\n\");\n+\t}\n+\tprintf(\"]\\n\");\n+}\n+\n+// Compute weight out of 1.0 which reflects how similar we wish to make the\n+// colours of these two regions.\n+static double compute_weight(double C_i, double C_j, double sigma)\n+{\n+\tif (C_i == INSUFFICIENT_DATA || C_j == INSUFFICIENT_DATA)\n+\t\treturn 0;\n+\tdouble diff = (C_i - C_j) / sigma;\n+\treturn exp(-diff * diff / 2);\n+}\n+\n+// Compute all weights.\n+static void compute_W(double const C[XY], double sigma, double W[XY][4])\n+{\n+\tfor (int i = 0; i < XY; i++) {\n+\t\t// Start with neighbour above and go clockwise.\n+\t\tW[i][0] = i >= X ? compute_weight(C[i], C[i - X], sigma) : 0;\n+\t\tW[i][1] = i % X < X - 1 ? compute_weight(C[i], C[i + 1], sigma)\n+\t\t\t\t\t: 0;\n+\t\tW[i][2] =\n+\t\t\ti < XY - X ? compute_weight(C[i], C[i + X], sigma) : 0;\n+\t\tW[i][3] = i % X ? 
compute_weight(C[i], C[i - 1], sigma) : 0;\n+\t}\n+}\n+\n+// Compute M, the large but sparse matrix such that M * lambdas = 0.\n+static void construct_M(double const C[XY], double const W[XY][4],\n+\t\t\tdouble M[XY][4])\n+{\n+\tdouble epsilon = 0.001;\n+\tfor (int i = 0; i < XY; i++) {\n+\t\t// Note how, if C[i] == INSUFFICIENT_DATA, the weights will all\n+\t\t// be zero so the equation is still set up correctly.\n+\t\tint m = !!(i >= X) + !!(i % X < X - 1) + !!(i < XY - X) +\n+\t\t\t!!(i % X); // total number of neighbours\n+\t\t// we'll divide the diagonal out straight away\n+\t\tdouble diagonal =\n+\t\t\t(epsilon + W[i][0] + W[i][1] + W[i][2] + W[i][3]) *\n+\t\t\tC[i];\n+\t\tM[i][0] = i >= X ? (W[i][0] * C[i - X] + epsilon / m * C[i]) /\n+\t\t\t\t\t   diagonal\n+\t\t\t\t : 0;\n+\t\tM[i][1] = i % X < X - 1\n+\t\t\t\t  ? (W[i][1] * C[i + 1] + epsilon / m * C[i]) /\n+\t\t\t\t\t    diagonal\n+\t\t\t\t  : 0;\n+\t\tM[i][2] = i < XY - X\n+\t\t\t\t  ? (W[i][2] * C[i + X] + epsilon / m * C[i]) /\n+\t\t\t\t\t    diagonal\n+\t\t\t\t  : 0;\n+\t\tM[i][3] = i % X ? 
(W[i][3] * C[i - 1] + epsilon / m * C[i]) /\n+\t\t\t\t\t  diagonal\n+\t\t\t\t: 0;\n+\t}\n+}\n+\n+// In the compute_lambda_ functions, note that the matrix coefficients for the\n+// left/right neighbours are zero down the left/right edges, so we don't\n+// need to test the i value to exclude them.\n+static double compute_lambda_bottom(int i, double const M[XY][4],\n+\t\t\t\t    double lambda[XY])\n+{\n+\treturn M[i][1] * lambda[i + 1] + M[i][2] * lambda[i + X] +\n+\t       M[i][3] * lambda[i - 1];\n+}\n+static double compute_lambda_bottom_start(int i, double const M[XY][4],\n+\t\t\t\t\t  double lambda[XY])\n+{\n+\treturn M[i][1] * lambda[i + 1] + M[i][2] * lambda[i + X];\n+}\n+static double compute_lambda_interior(int i, double const M[XY][4],\n+\t\t\t\t      double lambda[XY])\n+{\n+\treturn M[i][0] * lambda[i - X] + M[i][1] * lambda[i + 1] +\n+\t       M[i][2] * lambda[i + X] + M[i][3] * lambda[i - 1];\n+}\n+static double compute_lambda_top(int i, double const M[XY][4],\n+\t\t\t\t double lambda[XY])\n+{\n+\treturn M[i][0] * lambda[i - X] + M[i][1] * lambda[i + 1] +\n+\t       M[i][3] * lambda[i - 1];\n+}\n+static double compute_lambda_top_end(int i, double const M[XY][4],\n+\t\t\t\t     double lambda[XY])\n+{\n+\treturn M[i][0] * lambda[i - X] + M[i][3] * lambda[i - 1];\n+}\n+\n+// Gauss-Seidel iteration with over-relaxation.\n+static double gauss_seidel2_SOR(double const M[XY][4], double omega,\n+\t\t\t\tdouble lambda[XY])\n+{\n+\tdouble old_lambda[XY];\n+\tfor (int i = 0; i < XY; i++)\n+\t\told_lambda[i] = lambda[i];\n+\tint i;\n+\tlambda[0] = compute_lambda_bottom_start(0, M, lambda);\n+\tfor (i = 1; i < X; i++)\n+\t\tlambda[i] = compute_lambda_bottom(i, M, lambda);\n+\tfor (; i < XY - X; i++)\n+\t\tlambda[i] = compute_lambda_interior(i, M, lambda);\n+\tfor (; i < XY - 1; i++)\n+\t\tlambda[i] = compute_lambda_top(i, M, lambda);\n+\tlambda[i] = compute_lambda_top_end(i, M, lambda);\n+\t// Also solve the system from bottom to top, to help spread the 
updates\n+\t// better.\n+\tlambda[i] = compute_lambda_top_end(i, M, lambda);\n+\tfor (i = XY - 2; i >= XY - X; i--)\n+\t\tlambda[i] = compute_lambda_top(i, M, lambda);\n+\tfor (; i >= X; i--)\n+\t\tlambda[i] = compute_lambda_interior(i, M, lambda);\n+\tfor (; i >= 1; i--)\n+\t\tlambda[i] = compute_lambda_bottom(i, M, lambda);\n+\tlambda[0] = compute_lambda_bottom_start(0, M, lambda);\n+\tdouble max_diff = 0;\n+\tfor (int i = 0; i < XY; i++) {\n+\t\tlambda[i] = old_lambda[i] + (lambda[i] - old_lambda[i]) * omega;\n+\t\tif (fabs(lambda[i] - old_lambda[i]) > fabs(max_diff))\n+\t\t\tmax_diff = lambda[i] - old_lambda[i];\n+\t}\n+\treturn max_diff;\n+}\n+\n+// Normalise the values so that the smallest value is 1.\n+static void normalise(double *ptr, size_t n)\n+{\n+\tdouble minval = ptr[0];\n+\tfor (size_t i = 1; i < n; i++)\n+\t\tminval = std::min(minval, ptr[i]);\n+\tfor (size_t i = 0; i < n; i++)\n+\t\tptr[i] /= minval;\n+}\n+\n+static void run_matrix_iterations(double const C[XY], double lambda[XY],\n+\t\t\t\t  double const W[XY][4], double omega,\n+\t\t\t\t  int n_iter, double threshold)\n+{\n+\tdouble M[XY][4];\n+\tconstruct_M(C, W, M);\n+\tdouble last_max_diff = std::numeric_limits<double>::max();\n+\tfor (int i = 0; i < n_iter; i++) {\n+\t\tdouble max_diff = fabs(gauss_seidel2_SOR(M, omega, lambda));\n+\t\tif (max_diff < threshold) {\n+\t\t\tRPI_LOG(\"Stop after \" << i + 1 << \" iterations\");\n+\t\t\tbreak;\n+\t\t}\n+\t\t// this happens very occasionally (so make a note), though\n+\t\t// doesn't seem to matter\n+\t\tif (max_diff > last_max_diff)\n+\t\t\tRPI_LOG(\"Iteration \" << i << \": max_diff gone up \"\n+\t\t\t\t\t     << last_max_diff << \" to \"\n+\t\t\t\t\t     << max_diff);\n+\t\tlast_max_diff = max_diff;\n+\t}\n+\t// We're going to normalise the lambdas so the smallest is 1. 
Not sure\n+\t// this is really necessary as they get renormalised later, but I\n+\t// suppose it does stop these quantities from wandering off...\n+\tnormalise(lambda, XY);\n+}\n+\n+static void add_luminance_rb(double result[XY], double const lambda[XY],\n+\t\t\t     double const luminance_lut[XY],\n+\t\t\t     double luminance_strength)\n+{\n+\tfor (int i = 0; i < XY; i++)\n+\t\tresult[i] = lambda[i] *\n+\t\t\t    ((luminance_lut[i] - 1) * luminance_strength + 1);\n+}\n+\n+static void add_luminance_g(double result[XY], double lambda,\n+\t\t\t    double const luminance_lut[XY],\n+\t\t\t    double luminance_strength)\n+{\n+\tfor (int i = 0; i < XY; i++)\n+\t\tresult[i] = lambda *\n+\t\t\t    ((luminance_lut[i] - 1) * luminance_strength + 1);\n+}\n+\n+void add_luminance_to_tables(double results[3][Y][X], double const lambda_r[XY],\n+\t\t\t     double lambda_g, double const lambda_b[XY],\n+\t\t\t     double const luminance_lut[XY],\n+\t\t\t     double luminance_strength)\n+{\n+\tadd_luminance_rb((double *)results[0], lambda_r, luminance_lut,\n+\t\t\t luminance_strength);\n+\tadd_luminance_g((double *)results[1], lambda_g, luminance_lut,\n+\t\t\tluminance_strength);\n+\tadd_luminance_rb((double *)results[2], lambda_b, luminance_lut,\n+\t\t\t luminance_strength);\n+\tnormalise((double *)results, 3 * XY);\n+}\n+\n+void Alsc::doAlsc()\n+{\n+\tdouble Cr[XY], Cb[XY], Wr[XY][4], Wb[XY][4], cal_table_r[XY],\n+\t\tcal_table_b[XY], cal_table_tmp[XY];\n+\t// Calculate our R/B (\"Cr\"/\"Cb\") colour statistics, and assess which are\n+\t// usable.\n+\tcalculate_Cr_Cb(statistics_, Cr, Cb, config_.min_count, config_.min_G);\n+\t// Fetch the new calibrations (if any) for this CT. 
Resample them in\n+\t// case the camera mode is not full-frame.\n+\tget_cal_table(ct_, config_.calibrations_Cr, cal_table_tmp);\n+\tresample_cal_table(cal_table_tmp, async_camera_mode_, cal_table_r);\n+\tget_cal_table(ct_, config_.calibrations_Cb, cal_table_tmp);\n+\tresample_cal_table(cal_table_tmp, async_camera_mode_, cal_table_b);\n+\t// You could print out the cal tables for this image here, if you're\n+\t// tuning the algorithm...\n+\t(void)print_cal_table;\n+\t// Apply any calibration to the statistics, so the adaptive algorithm\n+\t// makes only the extra adjustments.\n+\tapply_cal_table(cal_table_r, Cr);\n+\tapply_cal_table(cal_table_b, Cb);\n+\t// Compute weights between zones.\n+\tcompute_W(Cr, config_.sigma_Cr, Wr);\n+\tcompute_W(Cb, config_.sigma_Cb, Wb);\n+\t// Run Gauss-Seidel iterations over the resulting matrix, for R and B.\n+\trun_matrix_iterations(Cr, lambda_r_, Wr, config_.omega, config_.n_iter,\n+\t\t\t      config_.threshold);\n+\trun_matrix_iterations(Cb, lambda_b_, Wb, config_.omega, config_.n_iter,\n+\t\t\t      config_.threshold);\n+\t// Fold the calibrated gains into our final lambda values. 
(Note that on\n+\t// the next run, we re-start with the lambda values that don't have the\n+\t// calibration gains included.)\n+\tcompensate_lambdas_for_cal(cal_table_r, lambda_r_, async_lambda_r_);\n+\tcompensate_lambdas_for_cal(cal_table_b, lambda_b_, async_lambda_b_);\n+\t// Fold in the luminance table at the appropriate strength.\n+\tadd_luminance_to_tables(async_results_, async_lambda_r_, 1.0,\n+\t\t\t\tasync_lambda_b_, config_.luminance_lut,\n+\t\t\t\tconfig_.luminance_strength);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Alsc(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/alsc.hpp b/src/ipa/raspberrypi/controller/rpi/alsc.hpp\nnew file mode 100644\nindex 000000000000..c8ed3d2190b5\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/alsc.hpp\n@@ -0,0 +1,104 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * alsc.hpp - ALSC (auto lens shading correction) control algorithm\n+ */\n+#pragma once\n+\n+#include <mutex>\n+#include <condition_variable>\n+#include <thread>\n+\n+#include \"../algorithm.hpp\"\n+#include \"../alsc_status.h\"\n+\n+namespace RPi {\n+\n+// Algorithm to generate automagic LSC (Lens Shading Correction) tables.\n+\n+struct AlscCalibration {\n+\tdouble ct;\n+\tdouble table[ALSC_CELLS_X * ALSC_CELLS_Y];\n+};\n+\n+struct AlscConfig {\n+\t// Only repeat the ALSC calculation every \"this many\" frames\n+\tuint16_t frame_period;\n+\t// number of initial frames for which speed taken as 1.0 (maximum)\n+\tuint16_t startup_frames;\n+\t// IIR filter speed applied to algorithm results\n+\tdouble speed;\n+\tdouble sigma_Cr;\n+\tdouble sigma_Cb;\n+\tdouble min_count;\n+\tuint16_t min_G;\n+\tdouble omega;\n+\tuint32_t n_iter;\n+\tdouble luminance_lut[ALSC_CELLS_X * ALSC_CELLS_Y];\n+\tdouble 
luminance_strength;\n+\tstd::vector<AlscCalibration> calibrations_Cr;\n+\tstd::vector<AlscCalibration> calibrations_Cb;\n+\tdouble default_ct; // colour temperature if no metadata found\n+\tdouble threshold; // iteration termination threshold\n+};\n+\n+class Alsc : public Algorithm\n+{\n+public:\n+\tAlsc(Controller *controller = NULL);\n+\t~Alsc();\n+\tchar const *Name() const override;\n+\tvoid Initialise() override;\n+\tvoid SwitchMode(CameraMode const &camera_mode) override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\tvoid Process(StatisticsPtr &stats, Metadata *image_metadata) override;\n+\n+private:\n+\t// configuration is read-only, and available to both threads\n+\tAlscConfig config_;\n+\tbool first_time_;\n+\tstd::atomic<CameraMode> camera_mode_;\n+\tstd::thread async_thread_;\n+\tvoid asyncFunc(); // asynchronous thread function\n+\tstd::mutex mutex_;\n+\tCameraMode async_camera_mode_;\n+\t// condvar for async thread to wait on\n+\tstd::condition_variable async_signal_;\n+\t// condvar for synchronous thread to wait on\n+\tstd::condition_variable sync_signal_;\n+\t// for sync thread to check if async thread finished (requires mutex)\n+\tbool async_finished_;\n+\t// for async thread to check if it's been told to run (requires mutex)\n+\tbool async_start_;\n+\t// for async thread to check if it's been told to quit (requires mutex)\n+\tbool async_abort_;\n+\n+\t// The following are only for the synchronous thread to use:\n+\t// for sync thread to note it has asked async thread to run\n+\tbool async_started_;\n+\t// counts up to frame_period before restarting the async thread\n+\tint frame_phase_;\n+\t// counts up to startup_frames\n+\tint frame_count_;\n+\t// counts up to startup_frames for Process method\n+\tint frame_count2_;\n+\tdouble sync_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X];\n+\tdouble prev_sync_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X];\n+\t// The following are for the 
asynchronous thread to use, though the main\n+\t// thread can set/reset them if the async thread is known to be idle:\n+\tvoid restartAsync(StatisticsPtr &stats, Metadata *image_metadata);\n+\t// copy out the results from the async thread so that it can be restarted\n+\tvoid fetchAsyncResults();\n+\tdouble ct_;\n+\tbcm2835_isp_stats_region statistics_[ALSC_CELLS_Y * ALSC_CELLS_X];\n+\tdouble async_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X];\n+\tdouble async_lambda_r_[ALSC_CELLS_X * ALSC_CELLS_Y];\n+\tdouble async_lambda_b_[ALSC_CELLS_X * ALSC_CELLS_Y];\n+\tvoid doAlsc();\n+\tdouble lambda_r_[ALSC_CELLS_X * ALSC_CELLS_Y];\n+\tdouble lambda_b_[ALSC_CELLS_X * ALSC_CELLS_Y];\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/awb.cpp b/src/ipa/raspberrypi/controller/rpi/awb.cpp\nnew file mode 100644\nindex 000000000000..a58fa11df863\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/awb.cpp\n@@ -0,0 +1,608 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * awb.cpp - AWB control algorithm\n+ */\n+\n+#include \"../logging.hpp\"\n+#include \"../lux_status.h\"\n+\n+#include \"awb.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.awb\"\n+\n+#define AWB_STATS_SIZE_X DEFAULT_AWB_REGIONS_X\n+#define AWB_STATS_SIZE_Y DEFAULT_AWB_REGIONS_Y\n+\n+const double Awb::RGB::INVALID = -1.0;\n+\n+void AwbMode::Read(boost::property_tree::ptree const &params)\n+{\n+\tct_lo = params.get<double>(\"lo\");\n+\tct_hi = params.get<double>(\"hi\");\n+}\n+\n+void AwbPrior::Read(boost::property_tree::ptree const &params)\n+{\n+\tlux = params.get<double>(\"lux\");\n+\tprior.Read(params.get_child(\"prior\"));\n+}\n+\n+static void read_ct_curve(Pwl &ct_r, Pwl &ct_b,\n+\t\t\t  boost::property_tree::ptree const &params)\n+{\n+\tint num = 0;\n+\tfor (auto it = params.begin(); it != params.end(); it++) {\n+\t\tdouble ct = it->second.get_value<double>();\n+\t\tassert(it == params.begin() || ct != 
ct_r.Domain().end);\n+\t\tif (++it == params.end())\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"AwbConfig: incomplete CT curve entry\");\n+\t\tct_r.Append(ct, it->second.get_value<double>());\n+\t\tif (++it == params.end())\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"AwbConfig: incomplete CT curve entry\");\n+\t\tct_b.Append(ct, it->second.get_value<double>());\n+\t\tnum++;\n+\t}\n+\tif (num < 2)\n+\t\tthrow std::runtime_error(\n+\t\t\t\"AwbConfig: insufficient points in CT curve\");\n+}\n+\n+void AwbConfig::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(\"AwbConfig\");\n+\tbayes = params.get<int>(\"bayes\", 1);\n+\tframe_period = params.get<uint16_t>(\"frame_period\", 10);\n+\tstartup_frames = params.get<uint16_t>(\"startup_frames\", 10);\n+\tspeed = params.get<double>(\"speed\", 0.05);\n+\tif (params.get_child_optional(\"ct_curve\"))\n+\t\tread_ct_curve(ct_r, ct_b, params.get_child(\"ct_curve\"));\n+\tif (params.get_child_optional(\"priors\")) {\n+\t\tfor (auto &p : params.get_child(\"priors\")) {\n+\t\t\tAwbPrior prior;\n+\t\t\tprior.Read(p.second);\n+\t\t\tif (!priors.empty() && prior.lux <= priors.back().lux)\n+\t\t\t\tthrow std::runtime_error(\n+\t\t\t\t\t\"AwbConfig: Prior must be ordered in increasing lux value\");\n+\t\t\tpriors.push_back(prior);\n+\t\t}\n+\t\tif (priors.empty())\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"AwbConfig: no AWB priors configured\");\n+\t}\n+\tif (params.get_child_optional(\"modes\")) {\n+\t\tfor (auto &p : params.get_child(\"modes\")) {\n+\t\t\tmodes[p.first].Read(p.second);\n+\t\t\tif (default_mode == nullptr)\n+\t\t\t\tdefault_mode = &modes[p.first];\n+\t\t}\n+\t\tif (default_mode == nullptr)\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"AwbConfig: no AWB modes configured\");\n+\t}\n+\tmin_pixels = params.get<double>(\"min_pixels\", 16.0);\n+\tmin_G = params.get<uint16_t>(\"min_G\", 32);\n+\tmin_regions = params.get<uint32_t>(\"min_regions\", 10);\n+\tdelta_limit = params.get<double>(\"delta_limit\", 
0.2);\n+\tcoarse_step = params.get<double>(\"coarse_step\", 0.2);\n+\ttransverse_pos = params.get<double>(\"transverse_pos\", 0.01);\n+\ttransverse_neg = params.get<double>(\"transverse_neg\", 0.01);\n+\tif (transverse_pos <= 0 || transverse_neg <= 0)\n+\t\tthrow std::runtime_error(\n+\t\t\t\"AwbConfig: transverse_pos/neg must be > 0\");\n+\tsensitivity_r = params.get<double>(\"sensitivity_r\", 1.0);\n+\tsensitivity_b = params.get<double>(\"sensitivity_b\", 1.0);\n+\tif (bayes) {\n+\t\tif (ct_r.Empty() || ct_b.Empty() || priors.empty() ||\n+\t\t    default_mode == nullptr) {\n+\t\t\tRPI_WARN(\n+\t\t\t\t\"Bayesian AWB mis-configured - switch to Grey method\");\n+\t\t\tbayes = false;\n+\t\t}\n+\t}\n+\tfast = params.get<int>(\n+\t\t\"fast\", bayes); // default to fast for Bayesian, otherwise slow\n+\twhitepoint_r = params.get<double>(\"whitepoint_r\", 0.0);\n+\twhitepoint_b = params.get<double>(\"whitepoint_b\", 0.0);\n+\tif (bayes == false)\n+\t\tsensitivity_r = sensitivity_b =\n+\t\t\t1.0; // nor do sensitivities make any sense\n+}\n+\n+Awb::Awb(Controller *controller)\n+\t: AwbAlgorithm(controller)\n+{\n+\tasync_abort_ = async_start_ = async_started_ = async_finished_ = false;\n+\tmode_ = nullptr;\n+\tmanual_r_ = manual_b_ = 0.0;\n+\tasync_thread_ = std::thread(std::bind(&Awb::asyncFunc, this));\n+}\n+\n+Awb::~Awb()\n+{\n+\t{\n+\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\tasync_abort_ = true;\n+\t\tasync_signal_.notify_one();\n+\t}\n+\tasync_thread_.join();\n+}\n+\n+char const *Awb::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Awb::Read(boost::property_tree::ptree const &params)\n+{\n+\tconfig_.Read(params);\n+}\n+\n+void Awb::Initialise()\n+{\n+\tframe_count2_ = frame_count_ = frame_phase_ = 0;\n+\t// Put something sane into the status that we are filtering towards,\n+\t// just in case the first few frames don't have anything meaningful in\n+\t// them.\n+\tif (!config_.ct_r.Empty() && !config_.ct_b.Empty()) {\n+\t\tsync_results_.temperature_K = 
config_.ct_r.Domain().Clip(4000);\n+\t\tsync_results_.gain_r =\n+\t\t\t1.0 / config_.ct_r.Eval(sync_results_.temperature_K);\n+\t\tsync_results_.gain_g = 1.0;\n+\t\tsync_results_.gain_b =\n+\t\t\t1.0 / config_.ct_b.Eval(sync_results_.temperature_K);\n+\t} else {\n+\t\t// random values just to stop the world blowing up\n+\t\tsync_results_.temperature_K = 4500;\n+\t\tsync_results_.gain_r = sync_results_.gain_g =\n+\t\t\tsync_results_.gain_b = 1.0;\n+\t}\n+\tprev_sync_results_ = sync_results_;\n+}\n+\n+void Awb::SetMode(std::string const &mode_name)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\tmode_name_ = mode_name;\n+}\n+\n+void Awb::SetManualGains(double manual_r, double manual_b)\n+{\n+\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\t// If any of these are 0.0, we switch back to auto.\n+\tmanual_r_ = manual_r;\n+\tmanual_b_ = manual_b;\n+}\n+\n+void Awb::fetchAsyncResults()\n+{\n+\tRPI_LOG(\"Fetch AWB results\");\n+\tasync_finished_ = false;\n+\tasync_started_ = false;\n+\tsync_results_ = async_results_;\n+}\n+\n+void Awb::restartAsync(StatisticsPtr &stats, std::string const &mode_name,\n+\t\t       double lux)\n+{\n+\tRPI_LOG(\"Starting AWB thread\");\n+\t// this makes a new reference which belongs to the asynchronous thread\n+\tstatistics_ = stats;\n+\t// store the mode as it could technically change\n+\tauto m = config_.modes.find(mode_name);\n+\tmode_ = m != config_.modes.end()\n+\t\t\t? &m->second\n+\t\t\t: (mode_ == nullptr ? config_.default_mode : mode_);\n+\tlux_ = lux;\n+\tframe_phase_ = 0;\n+\tasync_start_ = true;\n+\tasync_started_ = true;\n+\tsize_t len = mode_name.copy(async_results_.mode,\n+\t\t\t\t    sizeof(async_results_.mode) - 1);\n+\tasync_results_.mode[len] = '\\0';\n+\tasync_signal_.notify_one();\n+}\n+\n+void Awb::Prepare(Metadata *image_metadata)\n+{\n+\tif (frame_count_ < (int)config_.startup_frames)\n+\t\tframe_count_++;\n+\tdouble speed = frame_count_ < (int)config_.startup_frames\n+\t\t\t       ? 
1.0\n+\t\t\t       : config_.speed;\n+\tRPI_LOG(\"Awb: frame_count \" << frame_count_ << \" speed \" << speed);\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\tif (async_started_ && async_finished_) {\n+\t\t\tRPI_LOG(\"AWB thread finished\");\n+\t\t\tfetchAsyncResults();\n+\t\t}\n+\t}\n+\t// Finally apply IIR filter to results and put into metadata.\n+\tmemcpy(prev_sync_results_.mode, sync_results_.mode,\n+\t       sizeof(prev_sync_results_.mode));\n+\tprev_sync_results_.temperature_K =\n+\t\tspeed * sync_results_.temperature_K +\n+\t\t(1.0 - speed) * prev_sync_results_.temperature_K;\n+\tprev_sync_results_.gain_r = speed * sync_results_.gain_r +\n+\t\t\t\t    (1.0 - speed) * prev_sync_results_.gain_r;\n+\tprev_sync_results_.gain_g = speed * sync_results_.gain_g +\n+\t\t\t\t    (1.0 - speed) * prev_sync_results_.gain_g;\n+\tprev_sync_results_.gain_b = speed * sync_results_.gain_b +\n+\t\t\t\t    (1.0 - speed) * prev_sync_results_.gain_b;\n+\timage_metadata->Set(\"awb.status\", prev_sync_results_);\n+\tRPI_LOG(\"Using AWB gains r \" << prev_sync_results_.gain_r << \" g \"\n+\t\t\t\t     << prev_sync_results_.gain_g << \" b \"\n+\t\t\t\t     << prev_sync_results_.gain_b);\n+}\n+\n+void Awb::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\t// Count frames since we last poked the async thread.\n+\tif (frame_phase_ < (int)config_.frame_period)\n+\t\tframe_phase_++;\n+\tif (frame_count2_ < (int)config_.startup_frames)\n+\t\tframe_count2_++;\n+\tRPI_LOG(\"Awb: frame_phase \" << frame_phase_);\n+\tif (frame_phase_ >= (int)config_.frame_period ||\n+\t    frame_count2_ < (int)config_.startup_frames) {\n+\t\t// Update any settings and any image metadata that we need.\n+\t\tstd::string mode_name;\n+\t\t{\n+\t\t\tstd::unique_lock<std::mutex> lock(settings_mutex_);\n+\t\t\tmode_name = mode_name_;\n+\t\t}\n+\t\tstruct LuxStatus lux_status = {};\n+\t\tlux_status.lux = 400; // in case no metadata\n+\t\tif (image_metadata->Get(\"lux.status\", lux_status) 
!= 0)\n+\t\t\tRPI_LOG(\"No lux metadata found\");\n+\t\tRPI_LOG(\"Awb lux value is \" << lux_status.lux);\n+\n+\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\tif (async_started_ == false) {\n+\t\t\tRPI_LOG(\"AWB thread starting\");\n+\t\t\trestartAsync(stats, mode_name, lux_status.lux);\n+\t\t}\n+\t}\n+}\n+\n+void Awb::asyncFunc()\n+{\n+\twhile (true) {\n+\t\t{\n+\t\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\t\tasync_signal_.wait(lock, [&] {\n+\t\t\t\treturn async_start_ || async_abort_;\n+\t\t\t});\n+\t\t\tasync_start_ = false;\n+\t\t\tif (async_abort_)\n+\t\t\t\tbreak;\n+\t\t}\n+\t\tdoAwb();\n+\t\t{\n+\t\t\tstd::lock_guard<std::mutex> lock(mutex_);\n+\t\t\tasync_finished_ = true;\n+\t\t\tsync_signal_.notify_one();\n+\t\t}\n+\t}\n+}\n+\n+static void generate_stats(std::vector<Awb::RGB> &zones,\n+\t\t\t   bcm2835_isp_stats_region *stats, double min_pixels,\n+\t\t\t   double min_G)\n+{\n+\tfor (int i = 0; i < AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y; i++) {\n+\t\tAwb::RGB zone; // this is \"invalid\", unless R gets overwritten later\n+\t\tdouble counted = stats[i].counted;\n+\t\tif (counted >= min_pixels) {\n+\t\t\tzone.G = stats[i].g_sum / counted;\n+\t\t\tif (zone.G >= min_G) {\n+\t\t\t\tzone.R = stats[i].r_sum / counted;\n+\t\t\t\tzone.B = stats[i].b_sum / counted;\n+\t\t\t}\n+\t\t}\n+\t\tzones.push_back(zone);\n+\t}\n+}\n+\n+void Awb::prepareStats()\n+{\n+\tzones_.clear();\n+\t// LSC has already been applied to the stats in this pipeline, so stop\n+\t// any LSC compensation.  
We also ignore config_.fast in this version.\n+\tgenerate_stats(zones_, statistics_->awb_stats, config_.min_pixels,\n+\t\t       config_.min_G);\n+\t// we're done with these; we may as well relinquish our hold on the\n+\t// pointer.\n+\tstatistics_.reset();\n+\t// apply sensitivities, so values appear to come from our \"canonical\"\n+\t// sensor.\n+\tfor (auto &zone : zones_)\n+\t\tzone.R *= config_.sensitivity_r,\n+\t\t\tzone.B *= config_.sensitivity_b;\n+}\n+\n+double Awb::computeDelta2Sum(double gain_r, double gain_b)\n+{\n+\t// Compute the sum of the squared colour error (non-greyness) as it\n+\t// appears in the log likelihood equation.\n+\tdouble delta2_sum = 0;\n+\tfor (auto &z : zones_) {\n+\t\tdouble delta_r = gain_r * z.R - 1 - config_.whitepoint_r;\n+\t\tdouble delta_b = gain_b * z.B - 1 - config_.whitepoint_b;\n+\t\tdouble delta2 = delta_r * delta_r + delta_b * delta_b;\n+\t\t//RPI_LOG(\"delta_r \" << delta_r << \" delta_b \" << delta_b << \" delta2 \" << delta2);\n+\t\tdelta2 = std::min(delta2, config_.delta_limit);\n+\t\tdelta2_sum += delta2;\n+\t}\n+\treturn delta2_sum;\n+}\n+\n+Pwl Awb::interpolatePrior()\n+{\n+\t// Interpolate the prior log likelihood function for our current lux\n+\t// value.\n+\tif (lux_ <= config_.priors.front().lux)\n+\t\treturn config_.priors.front().prior;\n+\telse if (lux_ >= config_.priors.back().lux)\n+\t\treturn config_.priors.back().prior;\n+\telse {\n+\t\tint idx = 0;\n+\t\t// find which two we lie between\n+\t\twhile (config_.priors[idx + 1].lux < lux_)\n+\t\t\tidx++;\n+\t\tdouble lux0 = config_.priors[idx].lux,\n+\t\t       lux1 = config_.priors[idx + 1].lux;\n+\t\treturn Pwl::Combine(config_.priors[idx].prior,\n+\t\t\t\t    config_.priors[idx + 1].prior,\n+\t\t\t\t    [&](double /*x*/, double y0, double y1) {\n+\t\t\t\t\t    return y0 + (y1 - y0) *\n+\t\t\t\t\t\t\t(lux_ - lux0) / (lux1 - lux0);\n+\t\t\t\t    });\n+\t}\n+}\n+\n+static double interpolate_quadatric(Pwl::Point const &A, Pwl::Point const &B,\n+\t\t\t\t    
Pwl::Point const &C)\n+{\n+\t// Given 3 points on a curve, find the extremum of the function in that\n+\t// interval by fitting a quadratic.\n+\tconst double eps = 1e-3;\n+\tPwl::Point CA = C - A, BA = B - A;\n+\tdouble denominator = 2 * (BA.y * CA.x - CA.y * BA.x);\n+\tif (abs(denominator) > eps) {\n+\t\tdouble numerator = BA.y * CA.x * CA.x - CA.y * BA.x * BA.x;\n+\t\tdouble result = numerator / denominator + A.x;\n+\t\treturn std::max(A.x, std::min(C.x, result));\n+\t}\n+\t// has degenerated to straight line segment\n+\treturn A.y < C.y - eps ? A.x : (C.y < A.y - eps ? C.x : B.x);\n+}\n+\n+double Awb::coarseSearch(Pwl const &prior)\n+{\n+\tpoints_.clear(); // assume doesn't deallocate memory\n+\tsize_t best_point = 0;\n+\tdouble t = mode_->ct_lo;\n+\tint span_r = 0, span_b = 0;\n+\t// Step down the CT curve evaluating log likelihood.\n+\twhile (true) {\n+\t\tdouble r = config_.ct_r.Eval(t, &span_r);\n+\t\tdouble b = config_.ct_b.Eval(t, &span_b);\n+\t\tdouble gain_r = 1 / r, gain_b = 1 / b;\n+\t\tdouble delta2_sum = computeDelta2Sum(gain_r, gain_b);\n+\t\tdouble prior_log_likelihood =\n+\t\t\tprior.Eval(prior.Domain().Clip(t));\n+\t\tdouble final_log_likelihood = delta2_sum - prior_log_likelihood;\n+\t\tRPI_LOG(\"t: \" << t << \" gain_r \" << gain_r << \" gain_b \"\n+\t\t\t      << gain_b << \" delta2_sum \" << delta2_sum\n+\t\t\t      << \" prior \" << prior_log_likelihood << \" final \"\n+\t\t\t      << final_log_likelihood);\n+\t\tpoints_.push_back(Pwl::Point(t, final_log_likelihood));\n+\t\tif (points_.back().y < points_[best_point].y)\n+\t\t\tbest_point = points_.size() - 1;\n+\t\tif (t == mode_->ct_hi)\n+\t\t\tbreak;\n+\t\t// for even steps along the r/b curve scale them by the current t\n+\t\tt = std::min(t + t / 10 * config_.coarse_step,\n+\t\t\t     mode_->ct_hi);\n+\t}\n+\tt = points_[best_point].x;\n+\tRPI_LOG(\"Coarse search found CT \" << t);\n+\t// We have the best point of the search, but refine it with a quadratic\n+\t// interpolation around its 
neighbours.\n+\tif (points_.size() > 2) {\n+\t\tunsigned long bp = std::min(best_point, points_.size() - 2);\n+\t\tbest_point = std::max(1UL, bp);\n+\t\tt = interpolate_quadatric(points_[best_point - 1],\n+\t\t\t\t\t  points_[best_point],\n+\t\t\t\t\t  points_[best_point + 1]);\n+\t\tRPI_LOG(\"After quadratic refinement, coarse search has CT \"\n+\t\t\t<< t);\n+\t}\n+\treturn t;\n+}\n+\n+void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)\n+{\n+\tint span_r, span_b;\n+\tconfig_.ct_r.Eval(t, &span_r);\n+\tconfig_.ct_b.Eval(t, &span_b);\n+\tdouble step = t / 10 * config_.coarse_step * 0.1;\n+\tint nsteps = 5;\n+\tdouble r_diff = config_.ct_r.Eval(t + nsteps * step, &span_r) -\n+\t\t\tconfig_.ct_r.Eval(t - nsteps * step, &span_r);\n+\tdouble b_diff = config_.ct_b.Eval(t + nsteps * step, &span_b) -\n+\t\t\tconfig_.ct_b.Eval(t - nsteps * step, &span_b);\n+\tPwl::Point transverse(b_diff, -r_diff);\n+\tif (transverse.Len2() < 1e-6)\n+\t\treturn;\n+\t// unit vector orthogonal to the b vs. r function (pointing outwards\n+\t// with r and b increasing)\n+\ttransverse = transverse / transverse.Len();\n+\tdouble best_log_likelihood = 0, best_t = 0, best_r = 0, best_b = 0;\n+\tdouble transverse_range =\n+\t\tconfig_.transverse_neg + config_.transverse_pos;\n+\tconst int MAX_NUM_DELTAS = 12;\n+\t// a transverse step approximately every 0.01 r/b units\n+\tint num_deltas = floor(transverse_range * 100 + 0.5) + 1;\n+\tnum_deltas = num_deltas < 3 ? 3 :\n+\t\t     (num_deltas > MAX_NUM_DELTAS ? MAX_NUM_DELTAS : num_deltas);\n+\t// Step down CT curve. 
March a bit further if the transverse range is\n+\t// large.\n+\tnsteps += num_deltas;\n+\tfor (int i = -nsteps; i <= nsteps; i++) {\n+\t\tdouble t_test = t + i * step;\n+\t\tdouble prior_log_likelihood =\n+\t\t\tprior.Eval(prior.Domain().Clip(t_test));\n+\t\tdouble r_curve = config_.ct_r.Eval(t_test, &span_r);\n+\t\tdouble b_curve = config_.ct_b.Eval(t_test, &span_b);\n+\t\t// x will be distance off the curve, y the log likelihood there\n+\t\tPwl::Point points[MAX_NUM_DELTAS];\n+\t\tint best_point = 0;\n+\t\t// Take some measurements transversely *off* the CT curve.\n+\t\tfor (int j = 0; j < num_deltas; j++) {\n+\t\t\tpoints[j].x = -config_.transverse_neg +\n+\t\t\t\t      (transverse_range * j) / (num_deltas - 1);\n+\t\t\tPwl::Point rb_test = Pwl::Point(r_curve, b_curve) +\n+\t\t\t\t\t     transverse * points[j].x;\n+\t\t\tdouble r_test = rb_test.x, b_test = rb_test.y;\n+\t\t\tdouble gain_r = 1 / r_test, gain_b = 1 / b_test;\n+\t\t\tdouble delta2_sum = computeDelta2Sum(gain_r, gain_b);\n+\t\t\tpoints[j].y = delta2_sum - prior_log_likelihood;\n+\t\t\tRPI_LOG(\"At t \" << t_test << \" r \" << r_test << \" b \"\n+\t\t\t\t\t<< b_test << \": \" << points[j].y);\n+\t\t\tif (points[j].y < points[best_point].y)\n+\t\t\t\tbest_point = j;\n+\t\t}\n+\t\t// We have NUM_DELTAS points transversely across the CT curve,\n+\t\t// now let's do a quadratic interpolation for the best result.\n+\t\tbest_point = std::max(1, std::min(best_point, num_deltas - 2));\n+\t\tPwl::Point rb_test =\n+\t\t\tPwl::Point(r_curve, b_curve) +\n+\t\t\ttransverse *\n+\t\t\t\tinterpolate_quadatric(points[best_point - 1],\n+\t\t\t\t\t\t      points[best_point],\n+\t\t\t\t\t\t      points[best_point + 1]);\n+\t\tdouble r_test = rb_test.x, b_test = rb_test.y;\n+\t\tdouble gain_r = 1 / r_test, gain_b = 1 / b_test;\n+\t\tdouble delta2_sum = computeDelta2Sum(gain_r, gain_b);\n+\t\tdouble final_log_likelihood = delta2_sum - prior_log_likelihood;\n+\t\tRPI_LOG(\"Finally \"\n+\t\t\t<< t_test << \" r \" << r_test 
<< \" b \" << b_test << \": \"\n+\t\t\t<< final_log_likelihood\n+\t\t\t<< (final_log_likelihood < best_log_likelihood ? \" BEST\"\n+\t\t\t\t\t\t\t\t       : \"\"));\n+\t\tif (best_t == 0 || final_log_likelihood < best_log_likelihood)\n+\t\t\tbest_log_likelihood = final_log_likelihood,\n+\t\t\tbest_t = t_test, best_r = r_test, best_b = b_test;\n+\t}\n+\tt = best_t, r = best_r, b = best_b;\n+\tRPI_LOG(\"Fine search found t \" << t << \" r \" << r << \" b \" << b);\n+}\n+\n+void Awb::awbBayes()\n+{\n+\t// May as well divide out G to save computeDelta2Sum from doing it over\n+\t// and over.\n+\tfor (auto &z : zones_)\n+\t\tz.R = z.R / (z.G + 1), z.B = z.B / (z.G + 1);\n+\t// Get the current prior, and scale according to how many zones are\n+\t// valid... not entirely sure about this.\n+\tPwl prior = interpolatePrior();\n+\tprior *= zones_.size() / (double)(AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y);\n+\tprior.Map([](double x, double y) {\n+\t\tRPI_LOG(\"(\" << x << \",\" << y << \")\");\n+\t});\n+\tdouble t = coarseSearch(prior);\n+\tdouble r = config_.ct_r.Eval(t);\n+\tdouble b = config_.ct_b.Eval(t);\n+\tRPI_LOG(\"After coarse search: r \" << r << \" b \" << b << \" (gains r \"\n+\t\t\t\t\t  << 1 / r << \" b \" << 1 / b << \")\");\n+\t// Not entirely sure how to handle the fine search yet. Mostly the\n+\t// estimated CT is already good enough, but the fine search allows us to\n+\t// wander transverely off the CT curve. Under some illuminants, where\n+\t// there may be more or less green light, this may prove beneficial,\n+\t// though I probably need more real datasets before deciding exactly how\n+\t// this should be controlled and tuned.\n+\tfineSearch(t, r, b, prior);\n+\tRPI_LOG(\"After fine search: r \" << r << \" b \" << b << \" (gains r \"\n+\t\t\t\t\t<< 1 / r << \" b \" << 1 / b << \")\");\n+\t// Write results out for the main thread to pick up. 
Remember to adjust\n+\t// the gains from the ones that the \"canonical sensor\" would require to\n+\t// the ones needed by *this* sensor.\n+\tasync_results_.temperature_K = t;\n+\tasync_results_.gain_r = 1.0 / r * config_.sensitivity_r;\n+\tasync_results_.gain_g = 1.0;\n+\tasync_results_.gain_b = 1.0 / b * config_.sensitivity_b;\n+}\n+\n+void Awb::awbGrey()\n+{\n+\tRPI_LOG(\"Grey world AWB\");\n+\t// Make a separate list of the derivatives for each of red and blue, so\n+\t// that we can sort them to exclude the extreme gains.  We could\n+\t// consider some variations, such as normalising all the zones first, or\n+\t// doing an L2 average etc.\n+\tstd::vector<RGB> &derivs_R(zones_);\n+\tstd::vector<RGB> derivs_B(derivs_R);\n+\tstd::sort(derivs_R.begin(), derivs_R.end(),\n+\t\t  [](RGB const &a, RGB const &b) {\n+\t\t\t  return a.G * b.R < b.G * a.R;\n+\t\t  });\n+\tstd::sort(derivs_B.begin(), derivs_B.end(),\n+\t\t  [](RGB const &a, RGB const &b) {\n+\t\t\t  return a.G * b.B < b.G * a.B;\n+\t\t  });\n+\t// Average the middle half of the values.\n+\tint discard = derivs_R.size() / 4;\n+\tRGB sum_R(0, 0, 0), sum_B(0, 0, 0);\n+\tfor (auto ri = derivs_R.begin() + discard,\n+\t\t  bi = derivs_B.begin() + discard;\n+\t     ri != derivs_R.end() - discard; ri++, bi++)\n+\t\tsum_R += *ri, sum_B += *bi;\n+\tdouble gain_r = sum_R.G / (sum_R.R + 1),\n+\t       gain_b = sum_B.G / (sum_B.B + 1);\n+\tasync_results_.temperature_K = 4500; // don't know what it is\n+\tasync_results_.gain_r = gain_r;\n+\tasync_results_.gain_g = 1.0;\n+\tasync_results_.gain_b = gain_b;\n+}\n+\n+void Awb::doAwb()\n+{\n+\tif (manual_r_ != 0.0 && manual_b_ != 0.0) {\n+\t\tasync_results_.temperature_K = 4500; // don't know what it is\n+\t\tasync_results_.gain_r = manual_r_;\n+\t\tasync_results_.gain_g = 1.0;\n+\t\tasync_results_.gain_b = manual_b_;\n+\t\tRPI_LOG(\"Using manual white balance: gain_r \"\n+\t\t\t<< async_results_.gain_r << \" gain_b \"\n+\t\t\t<< async_results_.gain_b);\n+\t} else 
{\n+\t\tprepareStats();\n+\t\tRPI_LOG(\"Valid zones: \" << zones_.size());\n+\t\tif (zones_.size() > config_.min_regions) {\n+\t\t\tif (config_.bayes)\n+\t\t\t\tawbBayes();\n+\t\t\telse\n+\t\t\t\tawbGrey();\n+\t\t\tRPI_LOG(\"CT found is \"\n+\t\t\t\t<< async_results_.temperature_K\n+\t\t\t\t<< \" with gains r \" << async_results_.gain_r\n+\t\t\t\t<< \" and b \" << async_results_.gain_b);\n+\t\t}\n+\t}\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Awb(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/awb.hpp b/src/ipa/raspberrypi/controller/rpi/awb.hpp\nnew file mode 100644\nindex 000000000000..369252523200\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/awb.hpp\n@@ -0,0 +1,178 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * awb.hpp - AWB control algorithm\n+ */\n+#pragma once\n+\n+#include <mutex>\n+#include <condition_variable>\n+#include <thread>\n+\n+#include \"../awb_algorithm.hpp\"\n+#include \"../pwl.hpp\"\n+#include \"../awb_status.h\"\n+\n+namespace RPi {\n+\n+// Control algorithm to perform AWB calculations.\n+\n+struct AwbMode {\n+\tvoid Read(boost::property_tree::ptree const &params);\n+\tdouble ct_lo; // low CT value for search\n+\tdouble ct_hi; // high CT value for search\n+};\n+\n+struct AwbPrior {\n+\tvoid Read(boost::property_tree::ptree const &params);\n+\tdouble lux; // lux level\n+\tPwl prior; // maps CT to prior log likelihood for this lux level\n+};\n+\n+struct AwbConfig {\n+\tAwbConfig() : default_mode(nullptr) {}\n+\tvoid Read(boost::property_tree::ptree const &params);\n+\t// Only repeat the AWB calculation every \"this many\" frames\n+\tuint16_t frame_period;\n+\t// number of initial frames for which speed taken as 1.0 (maximum)\n+\tuint16_t startup_frames;\n+\tdouble speed; // IIR filter speed applied to algorithm 
results\n+\tbool fast; // \"fast\" mode uses a 16x16 rather than 32x32 grid\n+\tPwl ct_r; // function maps CT to r (= R/G)\n+\tPwl ct_b; // function maps CT to b (= B/G)\n+\t// table of illuminant priors at different lux levels\n+\tstd::vector<AwbPrior> priors;\n+\t// AWB \"modes\" (determines the search range)\n+\tstd::map<std::string, AwbMode> modes;\n+\tAwbMode *default_mode; // mode used if no mode selected\n+\t// minimum proportion of pixels counted within AWB region for it to be\n+\t// \"useful\"\n+\tdouble min_pixels;\n+\t// minimum G value of those pixels, to be regarded as \"useful\"\n+\tuint16_t min_G;\n+\t// number of AWB regions that must be \"useful\" in order to do the AWB\n+\t// calculation\n+\tuint32_t min_regions;\n+\t// clamp on colour error term (so as not to penalise non-grey excessively)\n+\tdouble delta_limit;\n+\t// step size control in coarse search\n+\tdouble coarse_step;\n+\t// how far to wander off CT curve towards \"more purple\"\n+\tdouble transverse_pos;\n+\t// how far to wander off CT curve towards \"more green\"\n+\tdouble transverse_neg;\n+\t// red sensitivity ratio (set to canonical sensor's R/G divided by this\n+\t// sensor's R/G)\n+\tdouble sensitivity_r;\n+\t// blue sensitivity ratio (set to canonical sensor's B/G divided by this\n+\t// sensor's B/G)\n+\tdouble sensitivity_b;\n+\t// The whitepoint (which we normally \"aim\" for) can be moved.\n+\tdouble whitepoint_r;\n+\tdouble whitepoint_b;\n+\tbool bayes; // use Bayesian algorithm\n+};\n+\n+class Awb : public AwbAlgorithm\n+{\n+public:\n+\tAwb(Controller *controller = NULL);\n+\t~Awb();\n+\tchar const *Name() const override;\n+\tvoid Initialise() override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid SetMode(std::string const &name) override;\n+\tvoid SetManualGains(double manual_r, double manual_b) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\tvoid Process(StatisticsPtr &stats, Metadata *image_metadata) override;\n+\tstruct 
RGB {\n+\t\tRGB(double _R = INVALID, double _G = INVALID,\n+\t\t    double _B = INVALID)\n+\t\t\t: R(_R), G(_G), B(_B)\n+\t\t{\n+\t\t}\n+\t\tdouble R, G, B;\n+\t\tstatic const double INVALID;\n+\t\tbool Valid() const { return G != INVALID; }\n+\t\tbool Invalid() const { return G == INVALID; }\n+\t\tRGB &operator+=(RGB const &other)\n+\t\t{\n+\t\t\tR += other.R, G += other.G, B += other.B;\n+\t\t\treturn *this;\n+\t\t}\n+\t\tRGB Square() const { return RGB(R * R, G * G, B * B); }\n+\t};\n+\n+private:\n+\t// configuration is read-only, and available to both threads\n+\tAwbConfig config_;\n+\tstd::thread async_thread_;\n+\tvoid asyncFunc(); // asynchronous thread function\n+\tstd::mutex mutex_;\n+\t// condvar for async thread to wait on\n+\tstd::condition_variable async_signal_;\n+\t// condvar for synchronous thread to wait on\n+\tstd::condition_variable sync_signal_;\n+\t// for sync thread to check if async thread finished (requires mutex)\n+\tbool async_finished_;\n+\t// for async thread to check if it's been told to run (requires mutex)\n+\tbool async_start_;\n+\t// for async thread to check if it's been told to quit (requires mutex)\n+\tbool async_abort_;\n+\n+\t// The following are only for the synchronous thread to use:\n+\t// for sync thread to note it has asked async thread to run\n+\tbool async_started_;\n+\t// counts up to frame_period before restarting the async thread\n+\tint frame_phase_;\n+\tint frame_count_; // counts up to startup_frames\n+\tint frame_count2_; // counts up to startup_frames for Process method\n+\tAwbStatus sync_results_;\n+\tAwbStatus prev_sync_results_;\n+\tstd::string mode_name_;\n+\tstd::mutex settings_mutex_;\n+\t// The following are for the asynchronous thread to use, though the main\n+\t// thread can set/reset them if the async thread is known to be idle:\n+\tvoid restartAsync(StatisticsPtr &stats, std::string const &mode_name,\n+\t\t\t  double lux);\n+\t// copy out the results from the async thread so that it can be 
restarted\n+\tvoid fetchAsyncResults();\n+\tStatisticsPtr statistics_;\n+\tAwbMode *mode_;\n+\tdouble lux_;\n+\tAwbStatus async_results_;\n+\tvoid doAwb();\n+\tvoid awbBayes();\n+\tvoid awbGrey();\n+\tvoid prepareStats();\n+\tdouble computeDelta2Sum(double gain_r, double gain_b);\n+\tPwl interpolatePrior();\n+\tdouble coarseSearch(Pwl const &prior);\n+\tvoid fineSearch(double &t, double &r, double &b, Pwl const &prior);\n+\tstd::vector<RGB> zones_;\n+\tstd::vector<Pwl::Point> points_;\n+\t// manual r setting\n+\tdouble manual_r_;\n+\t// manual b setting\n+\tdouble manual_b_;\n+};\n+\n+static inline Awb::RGB operator+(Awb::RGB const &a, Awb::RGB const &b)\n+{\n+\treturn Awb::RGB(a.R + b.R, a.G + b.G, a.B + b.B);\n+}\n+static inline Awb::RGB operator-(Awb::RGB const &a, Awb::RGB const &b)\n+{\n+\treturn Awb::RGB(a.R - b.R, a.G - b.G, a.B - b.B);\n+}\n+static inline Awb::RGB operator*(double d, Awb::RGB const &rgb)\n+{\n+\treturn Awb::RGB(d * rgb.R, d * rgb.G, d * rgb.B);\n+}\n+static inline Awb::RGB operator*(Awb::RGB const &rgb, double d)\n+{\n+\treturn d * rgb;\n+}\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/black_level.cpp b/src/ipa/raspberrypi/controller/rpi/black_level.cpp\nnew file mode 100644\nindex 000000000000..59c9f5a62379\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/black_level.cpp\n@@ -0,0 +1,56 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * black_level.cpp - black level control algorithm\n+ */\n+\n+#include <math.h>\n+#include <stdint.h>\n+\n+#include \"../black_level_status.h\"\n+#include \"../logging.hpp\"\n+\n+#include \"black_level.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.black_level\"\n+\n+BlackLevel::BlackLevel(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+}\n+\n+char const *BlackLevel::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void BlackLevel::Read(boost::property_tree::ptree const 
&params)\n+{\n+\tRPI_LOG(Name());\n+\tuint16_t black_level = params.get<uint16_t>(\n+\t\t\"black_level\", 4096); // 64 in 10 bits scaled to 16 bits\n+\tblack_level_r_ = params.get<uint16_t>(\"black_level_r\", black_level);\n+\tblack_level_g_ = params.get<uint16_t>(\"black_level_g\", black_level);\n+\tblack_level_b_ = params.get<uint16_t>(\"black_level_b\", black_level);\n+}\n+\n+void BlackLevel::Prepare(Metadata *image_metadata)\n+{\n+\t// Possibly we should think about doing this in a switch_mode or\n+\t// something?\n+\tstruct BlackLevelStatus status;\n+\tstatus.black_level_r = black_level_r_;\n+\tstatus.black_level_g = black_level_g_;\n+\tstatus.black_level_b = black_level_b_;\n+\timage_metadata->Set(\"black_level.status\", status);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn new BlackLevel(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/black_level.hpp b/src/ipa/raspberrypi/controller/rpi/black_level.hpp\nnew file mode 100644\nindex 000000000000..5d74c6da038f\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/black_level.hpp\n@@ -0,0 +1,30 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * black_level.hpp - black level control algorithm\n+ */\n+#pragma once\n+\n+#include \"../algorithm.hpp\"\n+#include \"../black_level_status.h\"\n+\n+// This is our implementation of the \"black level algorithm\".\n+\n+namespace RPi {\n+\n+class BlackLevel : public Algorithm\n+{\n+public:\n+\tBlackLevel(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tdouble black_level_r_;\n+\tdouble black_level_g_;\n+\tdouble black_level_b_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/ccm.cpp 
b/src/ipa/raspberrypi/controller/rpi/ccm.cpp\nnew file mode 100644\nindex 000000000000..327cb71ce42a\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/ccm.cpp\n@@ -0,0 +1,163 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * ccm.cpp - CCM (colour correction matrix) control algorithm\n+ */\n+\n+#include \"../awb_status.h\"\n+#include \"../ccm_status.h\"\n+#include \"../logging.hpp\"\n+#include \"../lux_status.h\"\n+#include \"../metadata.hpp\"\n+\n+#include \"ccm.hpp\"\n+\n+using namespace RPi;\n+\n+// This algorithm selects a CCM (Colour Correction Matrix) according to the\n+// colour temperature estimated by AWB (interpolating between known matrices as\n+// necessary). Additionally the amount of colour saturation can be controlled\n+// both according to the current estimated lux level and according to a\n+// saturation setting that is exposed to applications.\n+\n+#define NAME \"rpi.ccm\"\n+\n+Matrix::Matrix()\n+{\n+\tmemset(m, 0, sizeof(m));\n+}\n+Matrix::Matrix(double m0, double m1, double m2, double m3, double m4, double m5,\n+\t       double m6, double m7, double m8)\n+{\n+\tm[0][0] = m0, m[0][1] = m1, m[0][2] = m2, m[1][0] = m3, m[1][1] = m4,\n+\tm[1][2] = m5, m[2][0] = m6, m[2][1] = m7, m[2][2] = m8;\n+}\n+void Matrix::Read(boost::property_tree::ptree const &params)\n+{\n+\tdouble *ptr = (double *)m;\n+\tint n = 0;\n+\tfor (auto it = params.begin(); it != params.end(); it++) {\n+\t\tif (n++ == 9)\n+\t\t\tthrow std::runtime_error(\"Ccm: too many values in CCM\");\n+\t\t*ptr++ = it->second.get_value<double>();\n+\t}\n+\tif (n < 9)\n+\t\tthrow std::runtime_error(\"Ccm: too few values in CCM\");\n+}\n+\n+Ccm::Ccm(Controller *controller)\n+\t: CcmAlgorithm(controller), saturation_(1.0) {}\n+\n+char const *Ccm::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Ccm::Read(boost::property_tree::ptree const &params)\n+{\n+\tif 
(params.get_child_optional(\"saturation\"))\n+\t\tconfig_.saturation.Read(params.get_child(\"saturation\"));\n+\tfor (auto &p : params.get_child(\"ccms\")) {\n+\t\tCtCcm ct_ccm;\n+\t\tct_ccm.ct = p.second.get<double>(\"ct\");\n+\t\tct_ccm.ccm.Read(p.second.get_child(\"ccm\"));\n+\t\tif (!config_.ccms.empty() &&\n+\t\t    ct_ccm.ct <= config_.ccms.back().ct)\n+\t\t\tthrow std::runtime_error(\n+\t\t\t\t\"Ccm: CCM not in increasing colour temperature order\");\n+\t\tconfig_.ccms.push_back(std::move(ct_ccm));\n+\t}\n+\tif (config_.ccms.empty())\n+\t\tthrow std::runtime_error(\"Ccm: no CCMs specified\");\n+}\n+\n+void Ccm::SetSaturation(double saturation)\n+{\n+\tsaturation_ = saturation;\n+}\n+\n+void Ccm::Initialise() {}\n+\n+template<typename T>\n+static bool get_locked(Metadata *metadata, std::string const &tag, T &value)\n+{\n+\tT *ptr = metadata->GetLocked<T>(tag);\n+\tif (ptr == nullptr)\n+\t\treturn false;\n+\tvalue = *ptr;\n+\treturn true;\n+}\n+\n+Matrix calculate_ccm(std::vector<CtCcm> const &ccms, double ct)\n+{\n+\tif (ct <= ccms.front().ct)\n+\t\treturn ccms.front().ccm;\n+\telse if (ct >= ccms.back().ct)\n+\t\treturn ccms.back().ccm;\n+\telse {\n+\t\tint i = 0;\n+\t\tfor (; ct > ccms[i].ct; i++)\n+\t\t\t;\n+\t\tdouble lambda =\n+\t\t\t(ct - ccms[i - 1].ct) / (ccms[i].ct - ccms[i - 1].ct);\n+\t\treturn lambda * ccms[i].ccm + (1.0 - lambda) * ccms[i - 1].ccm;\n+\t}\n+}\n+\n+Matrix apply_saturation(Matrix const &ccm, double saturation)\n+{\n+\tMatrix RGB2Y(0.299, 0.587, 0.114, -0.169, -0.331, 0.500, 0.500, -0.419,\n+\t\t     -0.081);\n+\tMatrix Y2RGB(1.000, 0.000, 1.402, 1.000, -0.345, -0.714, 1.000, 1.771,\n+\t\t     0.000);\n+\tMatrix S(1, 0, 0, 0, saturation, 0, 0, 0, saturation);\n+\treturn Y2RGB * S * RGB2Y * ccm;\n+}\n+\n+void Ccm::Prepare(Metadata *image_metadata)\n+{\n+\tbool awb_ok = false, lux_ok = false;\n+\tstruct AwbStatus awb = {};\n+\tawb.temperature_K = 4000; // in case no metadata\n+\tstruct LuxStatus lux = {};\n+\tlux.lux = 400; // in case 
no metadata\n+\t{\n+\t\t// grab mutex just once to get everything\n+\t\tstd::lock_guard<Metadata> lock(*image_metadata);\n+\t\tawb_ok = get_locked(image_metadata, \"awb.status\", awb);\n+\t\tlux_ok = get_locked(image_metadata, \"lux.status\", lux);\n+\t}\n+\tif (!awb_ok)\n+\t\tRPI_WARN(\"Ccm: no colour temperature found\");\n+\tif (!lux_ok)\n+\t\tRPI_WARN(\"Ccm: no lux value found\");\n+\tMatrix ccm = calculate_ccm(config_.ccms, awb.temperature_K);\n+\tdouble saturation = saturation_;\n+\tstruct CcmStatus ccm_status;\n+\tccm_status.saturation = saturation;\n+\tif (!config_.saturation.Empty())\n+\t\tsaturation *= config_.saturation.Eval(\n+\t\t\tconfig_.saturation.Domain().Clip(lux.lux));\n+\tccm = apply_saturation(ccm, saturation);\n+\tfor (int j = 0; j < 3; j++)\n+\t\tfor (int i = 0; i < 3; i++)\n+\t\t\tccm_status.matrix[j * 3 + i] =\n+\t\t\t\tstd::max(-8.0, std::min(7.9999, ccm.m[j][i]));\n+\tRPI_LOG(\"CCM: colour temperature \" << awb.temperature_K << \"K\");\n+\tRPI_LOG(\"CCM: \" << ccm_status.matrix[0] << \" \" << ccm_status.matrix[1]\n+\t\t\t<< \" \" << ccm_status.matrix[2] << \"     \"\n+\t\t\t<< ccm_status.matrix[3] << \" \" << ccm_status.matrix[4]\n+\t\t\t<< \" \" << ccm_status.matrix[5] << \"     \"\n+\t\t\t<< ccm_status.matrix[6] << \" \" << ccm_status.matrix[7]\n+\t\t\t<< \" \" << ccm_status.matrix[8]);\n+\timage_metadata->Set(\"ccm.status\", ccm_status);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Ccm(controller);\n+\t;\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/ccm.hpp b/src/ipa/raspberrypi/controller/rpi/ccm.hpp\nnew file mode 100644\nindex 000000000000..f6f4dee1bb8f\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/ccm.hpp\n@@ -0,0 +1,76 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * ccm.hpp - CCM (colour correction matrix) 
control algorithm\n+ */\n+#pragma once\n+\n+#include <vector>\n+#include <atomic>\n+\n+#include \"../ccm_algorithm.hpp\"\n+#include \"../pwl.hpp\"\n+\n+namespace RPi {\n+\n+// Algorithm to calculate colour matrix. Should be placed after AWB.\n+\n+struct Matrix {\n+\tMatrix(double m0, double m1, double m2, double m3, double m4, double m5,\n+\t       double m6, double m7, double m8);\n+\tMatrix();\n+\tdouble m[3][3];\n+\tvoid Read(boost::property_tree::ptree const &params);\n+};\n+static inline Matrix operator*(double d, Matrix const &m)\n+{\n+\treturn Matrix(m.m[0][0] * d, m.m[0][1] * d, m.m[0][2] * d,\n+\t\t      m.m[1][0] * d, m.m[1][1] * d, m.m[1][2] * d,\n+\t\t      m.m[2][0] * d, m.m[2][1] * d, m.m[2][2] * d);\n+}\n+static inline Matrix operator*(Matrix const &m1, Matrix const &m2)\n+{\n+\tMatrix m;\n+\tfor (int i = 0; i < 3; i++)\n+\t\tfor (int j = 0; j < 3; j++)\n+\t\t\tm.m[i][j] = m1.m[i][0] * m2.m[0][j] +\n+\t\t\t\t    m1.m[i][1] * m2.m[1][j] +\n+\t\t\t\t    m1.m[i][2] * m2.m[2][j];\n+\treturn m;\n+}\n+static inline Matrix operator+(Matrix const &m1, Matrix const &m2)\n+{\n+\tMatrix m;\n+\tfor (int i = 0; i < 3; i++)\n+\t\tfor (int j = 0; j < 3; j++)\n+\t\t\tm.m[i][j] = m1.m[i][j] + m2.m[i][j];\n+\treturn m;\n+}\n+\n+struct CtCcm {\n+\tdouble ct;\n+\tMatrix ccm;\n+};\n+\n+struct CcmConfig {\n+\tstd::vector<CtCcm> ccms;\n+\tPwl saturation;\n+};\n+\n+class Ccm : public CcmAlgorithm\n+{\n+public:\n+\tCcm(Controller *controller = NULL);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid SetSaturation(double saturation) override;\n+\tvoid Initialise() override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tCcmConfig config_;\n+\tstd::atomic<double> saturation_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/contrast.cpp b/src/ipa/raspberrypi/controller/rpi/contrast.cpp\nnew file mode 100644\nindex 000000000000..e4967990c577\n--- /dev/null\n+++ 
b/src/ipa/raspberrypi/controller/rpi/contrast.cpp\n@@ -0,0 +1,176 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * contrast.cpp - contrast (gamma) control algorithm\n+ */\n+#include <stdint.h>\n+\n+#include \"../contrast_status.h\"\n+#include \"../histogram.hpp\"\n+\n+#include \"contrast.hpp\"\n+\n+using namespace RPi;\n+\n+// This is a relatively simple algorithm which adjusts the gamma curve using\n+// the luminance histogram: within configured limits, the bottom of the\n+// histogram can be pulled down and the top pulled up (adaptive contrast\n+// enhancement), after which any manually requested brightness and contrast\n+// adjustments are applied.\n+\n+#define NAME \"rpi.contrast\"\n+\n+Contrast::Contrast(Controller *controller)\n+\t: ContrastAlgorithm(controller), brightness_(0.0), contrast_(1.0)\n+{\n+}\n+\n+char const *Contrast::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Contrast::Read(boost::property_tree::ptree const &params)\n+{\n+\t// enable adaptive enhancement by default\n+\tconfig_.ce_enable = params.get<int>(\"ce_enable\", 1);\n+\t// the point near the bottom of the histogram to move\n+\tconfig_.lo_histogram = params.get<double>(\"lo_histogram\", 0.01);\n+\t// where in the range to try and move it to\n+\tconfig_.lo_level = params.get<double>(\"lo_level\", 0.015);\n+\t// but don't move by more than this\n+\tconfig_.lo_max = params.get<double>(\"lo_max\", 500);\n+\t// equivalent values for the top of the histogram...\n+\tconfig_.hi_histogram = params.get<double>(\"hi_histogram\", 0.95);\n+\tconfig_.hi_level = params.get<double>(\"hi_level\", 0.95);\n+\tconfig_.hi_max = params.get<double>(\"hi_max\", 2000);\n+\tconfig_.gamma_curve.Read(params.get_child(\"gamma_curve\"));\n+}\n+\n+void Contrast::SetBrightness(double brightness)\n+{\n+\tbrightness_ = brightness;\n+}\n+\n+void Contrast::SetContrast(double contrast)\n+{\n+\tcontrast_ = contrast;\n+}\n+\n+static void 
fill_in_status(ContrastStatus &status, double brightness,\n+\t\t\t   double contrast, Pwl &gamma_curve)\n+{\n+\tstatus.brightness = brightness;\n+\tstatus.contrast = contrast;\n+\tfor (int i = 0; i < CONTRAST_NUM_POINTS - 1; i++) {\n+\t\tint x = i < 16 ? i * 1024\n+\t\t\t       : (i < 24 ? (i - 16) * 2048 + 16384\n+\t\t\t\t\t : (i - 24) * 4096 + 32768);\n+\t\tstatus.points[i].x = x;\n+\t\tstatus.points[i].y = std::min(65535.0, gamma_curve.Eval(x));\n+\t}\n+\tstatus.points[CONTRAST_NUM_POINTS - 1].x = 65535;\n+\tstatus.points[CONTRAST_NUM_POINTS - 1].y = 65535;\n+}\n+\n+void Contrast::Initialise()\n+{\n+\t// Fill in some default values as Prepare will run before Process gets\n+\t// called.\n+\tfill_in_status(status_, brightness_, contrast_, config_.gamma_curve);\n+}\n+\n+void Contrast::Prepare(Metadata *image_metadata)\n+{\n+\tstd::unique_lock<std::mutex> lock(mutex_);\n+\timage_metadata->Set(\"contrast.status\", status_);\n+}\n+\n+Pwl compute_stretch_curve(Histogram const &histogram,\n+\t\t\t  ContrastConfig const &config)\n+{\n+\tPwl enhance;\n+\tenhance.Append(0, 0);\n+\t// If the start of the histogram is rather empty, try to pull it down a\n+\t// bit.\n+\tdouble hist_lo = histogram.Quantile(config.lo_histogram) *\n+\t\t\t (65536 / NUM_HISTOGRAM_BINS);\n+\tdouble level_lo = config.lo_level * 65536;\n+\tRPI_LOG(\"Move histogram point \" << hist_lo << \" to \" << level_lo);\n+\thist_lo = std::max(\n+\t\tlevel_lo,\n+\t\tstd::min(65535.0, std::min(hist_lo, level_lo + config.lo_max)));\n+\tRPI_LOG(\"Final values \" << hist_lo << \" -> \" << level_lo);\n+\tenhance.Append(hist_lo, level_lo);\n+\t// Keep the mid-point (median) in the same place, though, to limit the\n+\t// apparent amount of global brightness shift.\n+\tdouble mid = histogram.Quantile(0.5) * (65536 / NUM_HISTOGRAM_BINS);\n+\tenhance.Append(mid, mid);\n+\n+\t// If the top of the histogram is empty, try to pull the pixel values\n+\t// there up.\n+\tdouble hist_hi = histogram.Quantile(config.hi_histogram) 
*\n+\t\t\t (65536 / NUM_HISTOGRAM_BINS);\n+\tdouble level_hi = config.hi_level * 65536;\n+\tRPI_LOG(\"Move histogram point \" << hist_hi << \" to \" << level_hi);\n+\thist_hi = std::min(\n+\t\tlevel_hi,\n+\t\tstd::max(0.0, std::max(hist_hi, level_hi - config.hi_max)));\n+\tRPI_LOG(\"Final values \" << hist_hi << \" -> \" << level_hi);\n+\tenhance.Append(hist_hi, level_hi);\n+\tenhance.Append(65535, 65535);\n+\treturn enhance;\n+}\n+\n+Pwl apply_manual_contrast(Pwl const &gamma_curve, double brightness,\n+\t\t\t  double contrast)\n+{\n+\tPwl new_gamma_curve;\n+\tRPI_LOG(\"Manual brightness \" << brightness << \" contrast \" << contrast);\n+\tgamma_curve.Map([&](double x, double y) {\n+\t\tnew_gamma_curve.Append(\n+\t\t\tx, std::max(0.0, std::min(65535.0,\n+\t\t\t\t\t\t  (y - 32768) * contrast +\n+\t\t\t\t\t\t\t  32768 + brightness)));\n+\t});\n+\treturn new_gamma_curve;\n+}\n+\n+void Contrast::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\t(void)image_metadata;\n+\tdouble brightness = brightness_, contrast = contrast_;\n+\tHistogram histogram(stats->hist[0].g_hist, NUM_HISTOGRAM_BINS);\n+\t// We look at the histogram and adjust the gamma curve in the following\n+\t// ways: 1. Adjust the gamma curve so as to pull the start of the\n+\t// histogram down, and possibly push the end up.\n+\tPwl gamma_curve = config_.gamma_curve;\n+\tif (config_.ce_enable) {\n+\t\tif (config_.lo_max != 0 || config_.hi_max != 0)\n+\t\t\tgamma_curve = compute_stretch_curve(histogram, config_)\n+\t\t\t\t\t      .Compose(gamma_curve);\n+\t\t// We could apply other adjustments (e.g. partial equalisation)\n+\t\t// based on the histogram...?\n+\t}\n+\t// 2. Finally apply any manually selected brightness/contrast\n+\t// adjustment.\n+\tif (brightness != 0 || contrast != 1.0)\n+\t\tgamma_curve = apply_manual_contrast(gamma_curve, brightness,\n+\t\t\t\t\t\t    contrast);\n+\t// And fill in the status for output. 
Use more points towards the bottom\n+\t// of the curve.\n+\tContrastStatus status;\n+\tfill_in_status(status, brightness, contrast, gamma_curve);\n+\t{\n+\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\tstatus_ = status;\n+\t}\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Contrast(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/contrast.hpp b/src/ipa/raspberrypi/controller/rpi/contrast.hpp\nnew file mode 100644\nindex 000000000000..2e38a762feff\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/contrast.hpp\n@@ -0,0 +1,51 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * contrast.hpp - contrast (gamma) control algorithm\n+ */\n+#pragma once\n+\n+#include <atomic>\n+#include <mutex>\n+\n+#include \"../contrast_algorithm.hpp\"\n+#include \"../pwl.hpp\"\n+\n+namespace RPi {\n+\n+// Back End algorithm to apply a gamma curve and contrast adjustment. 
Should be placed after\n+// Back End AWB.\n+\n+struct ContrastConfig {\n+\tbool ce_enable;\n+\tdouble lo_histogram;\n+\tdouble lo_level;\n+\tdouble lo_max;\n+\tdouble hi_histogram;\n+\tdouble hi_level;\n+\tdouble hi_max;\n+\tPwl gamma_curve;\n+};\n+\n+class Contrast : public ContrastAlgorithm\n+{\n+public:\n+\tContrast(Controller *controller = NULL);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid SetBrightness(double brightness) override;\n+\tvoid SetContrast(double contrast) override;\n+\tvoid Initialise() override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\tvoid Process(StatisticsPtr &stats, Metadata *image_metadata) override;\n+\n+private:\n+\tContrastConfig config_;\n+\tstd::atomic<double> brightness_;\n+\tstd::atomic<double> contrast_;\n+\tContrastStatus status_;\n+\tstd::mutex mutex_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/dpc.cpp b/src/ipa/raspberrypi/controller/rpi/dpc.cpp\nnew file mode 100644\nindex 000000000000..d31fae977367\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/dpc.cpp\n@@ -0,0 +1,49 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * dpc.cpp - DPC (defective pixel correction) control algorithm\n+ */\n+\n+#include \"../logging.hpp\"\n+#include \"dpc.hpp\"\n+\n+using namespace RPi;\n+\n+// We use the lux status so that we can apply stronger settings in darkness (if\n+// necessary).\n+\n+#define NAME \"rpi.dpc\"\n+\n+Dpc::Dpc(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+}\n+\n+char const *Dpc::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Dpc::Read(boost::property_tree::ptree const &params)\n+{\n+\tconfig_.strength = params.get<int>(\"strength\", 1);\n+\tif (config_.strength < 0 || config_.strength > 2)\n+\t\tthrow std::runtime_error(\"Dpc: bad strength value\");\n+}\n+\n+void Dpc::Prepare(Metadata *image_metadata)\n+{\n+\tDpcStatus 
dpc_status = {};\n+\t// Should we vary this with lux level or analogue gain? TBD.\n+\tdpc_status.strength = config_.strength;\n+\tRPI_LOG(\"Dpc: strength \" << dpc_status.strength);\n+\timage_metadata->Set(\"dpc.status\", dpc_status);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Dpc(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/dpc.hpp b/src/ipa/raspberrypi/controller/rpi/dpc.hpp\nnew file mode 100644\nindex 000000000000..9fb72867a1f0\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/dpc.hpp\n@@ -0,0 +1,32 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * dpc.hpp - DPC (defective pixel correction) control algorithm\n+ */\n+#pragma once\n+\n+#include \"../algorithm.hpp\"\n+#include \"../dpc_status.h\"\n+\n+namespace RPi {\n+\n+// Back End algorithm to apply appropriate DPC settings.\n+\n+struct DpcConfig {\n+\tint strength;\n+};\n+\n+class Dpc : public Algorithm\n+{\n+public:\n+\tDpc(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tDpcConfig config_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/geq.cpp b/src/ipa/raspberrypi/controller/rpi/geq.cpp\nnew file mode 100644\nindex 000000000000..ee0cb95d2389\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/geq.cpp\n@@ -0,0 +1,75 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * geq.cpp - GEQ (green equalisation) control algorithm\n+ */\n+\n+#include \"../device_status.h\"\n+#include \"../logging.hpp\"\n+#include \"../lux_status.h\"\n+#include \"../pwl.hpp\"\n+\n+#include \"geq.hpp\"\n+\n+using namespace RPi;\n+\n+// We use the lux status 
so that we can apply stronger settings in darkness (if\n+// necessary).\n+\n+#define NAME \"rpi.geq\"\n+\n+Geq::Geq(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+}\n+\n+char const *Geq::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Geq::Read(boost::property_tree::ptree const &params)\n+{\n+\tconfig_.offset = params.get<uint16_t>(\"offset\", 0);\n+\tconfig_.slope = params.get<double>(\"slope\", 0.0);\n+\tif (config_.slope < 0.0 || config_.slope >= 1.0)\n+\t\tthrow std::runtime_error(\"Geq: bad slope value\");\n+\tif (params.get_child_optional(\"strength\"))\n+\t\tconfig_.strength.Read(params.get_child(\"strength\"));\n+}\n+\n+void Geq::Prepare(Metadata *image_metadata)\n+{\n+\tLuxStatus lux_status = {};\n+\tlux_status.lux = 400;\n+\tif (image_metadata->Get(\"lux.status\", lux_status))\n+\t\tRPI_WARN(\"Geq: no lux data found\");\n+\tDeviceStatus device_status = {};\n+\tdevice_status.analogue_gain = 1.0; // in case not found\n+\tif (image_metadata->Get(\"device.status\", device_status))\n+\t\tRPI_WARN(\"Geq: no device metadata - use analogue gain of 1x\");\n+\tGeqStatus geq_status = {};\n+\tdouble strength =\n+\t\tconfig_.strength.Empty()\n+\t\t\t? 
1.0\n+\t\t\t: config_.strength.Eval(config_.strength.Domain().Clip(\n+\t\t\t\t  lux_status.lux));\n+\tstrength *= device_status.analogue_gain;\n+\tdouble offset = config_.offset * strength;\n+\tdouble slope = config_.slope * strength;\n+\tgeq_status.offset = std::min(65535.0, std::max(0.0, offset));\n+\tgeq_status.slope = std::min(.99999, std::max(0.0, slope));\n+\tRPI_LOG(\"Geq: offset \" << geq_status.offset << \" slope \"\n+\t\t\t       << geq_status.slope << \" (analogue gain \"\n+\t\t\t       << device_status.analogue_gain << \" lux \"\n+\t\t\t       << lux_status.lux << \")\");\n+\timage_metadata->Set(\"geq.status\", geq_status);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Geq(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/geq.hpp b/src/ipa/raspberrypi/controller/rpi/geq.hpp\nnew file mode 100644\nindex 000000000000..7d4bd38d5070\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/geq.hpp\n@@ -0,0 +1,34 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * geq.hpp - GEQ (green equalisation) control algorithm\n+ */\n+#pragma once\n+\n+#include \"../algorithm.hpp\"\n+#include \"../geq_status.h\"\n+\n+namespace RPi {\n+\n+// Back End algorithm to apply appropriate GEQ settings.\n+\n+struct GeqConfig {\n+\tuint16_t offset;\n+\tdouble slope;\n+\tPwl strength; // lux to strength factor\n+};\n+\n+class Geq : public Algorithm\n+{\n+public:\n+\tGeq(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tGeqConfig config_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/lux.cpp b/src/ipa/raspberrypi/controller/rpi/lux.cpp\nnew file mode 100644\nindex 
000000000000..154db153b119\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/lux.cpp\n@@ -0,0 +1,104 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * lux.cpp - Lux control algorithm\n+ */\n+#include <math.h>\n+\n+#include \"linux/bcm2835-isp.h\"\n+\n+#include \"../device_status.h\"\n+#include \"../logging.hpp\"\n+\n+#include \"lux.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.lux\"\n+\n+Lux::Lux(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+\t// Put in some defaults as there will be no meaningful values until\n+\t// Process has run.\n+\tstatus_.aperture = 1.0;\n+\tstatus_.lux = 400;\n+}\n+\n+char const *Lux::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Lux::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(Name());\n+\treference_shutter_speed_ =\n+\t\tparams.get<double>(\"reference_shutter_speed\");\n+\treference_gain_ = params.get<double>(\"reference_gain\");\n+\treference_aperture_ = params.get<double>(\"reference_aperture\", 1.0);\n+\treference_Y_ = params.get<double>(\"reference_Y\");\n+\treference_lux_ = params.get<double>(\"reference_lux\");\n+\tcurrent_aperture_ = reference_aperture_;\n+}\n+\n+void Lux::Prepare(Metadata *image_metadata)\n+{\n+\tstd::unique_lock<std::mutex> lock(mutex_);\n+\timage_metadata->Set(\"lux.status\", status_);\n+}\n+\n+void Lux::Process(StatisticsPtr &stats, Metadata *image_metadata)\n+{\n+\t// set some initial values to shut the compiler up\n+\tDeviceStatus device_status =\n+\t\t{ .shutter_speed = 1.0,\n+\t\t  .analogue_gain = 1.0,\n+\t\t  .lens_position = 0.0,\n+\t\t  .aperture = 0.0,\n+\t\t  .flash_intensity = 0.0 };\n+\tif (image_metadata->Get(\"device.status\", device_status) == 0) {\n+\t\tdouble current_gain = device_status.analogue_gain;\n+\t\tdouble current_shutter_speed = device_status.shutter_speed;\n+\t\tdouble current_aperture = device_status.aperture;\n+\t\tif (current_aperture == 
0)\n+\t\t\tcurrent_aperture = current_aperture_;\n+\t\tuint64_t sum = 0;\n+\t\tuint32_t num = 0;\n+\t\tuint32_t *bin = stats->hist[0].g_hist;\n+\t\tconst int num_bins = sizeof(stats->hist[0].g_hist) /\n+\t\t\t\t     sizeof(stats->hist[0].g_hist[0]);\n+\t\tfor (int i = 0; i < num_bins; i++)\n+\t\t\tsum += bin[i] * (uint64_t)i, num += bin[i];\n+\t\t// add .5 to reflect the mid-points of bins\n+\t\tdouble current_Y = sum / (double)num + .5;\n+\t\tdouble gain_ratio = reference_gain_ / current_gain;\n+\t\tdouble shutter_speed_ratio =\n+\t\t\treference_shutter_speed_ / current_shutter_speed;\n+\t\tdouble aperture_ratio = reference_aperture_ / current_aperture;\n+\t\tdouble Y_ratio = current_Y * (65536 / num_bins) / reference_Y_;\n+\t\tdouble estimated_lux = shutter_speed_ratio * gain_ratio *\n+\t\t\t\t       aperture_ratio * aperture_ratio *\n+\t\t\t\t       Y_ratio * reference_lux_;\n+\t\tLuxStatus status;\n+\t\tstatus.lux = estimated_lux;\n+\t\tstatus.aperture = current_aperture;\n+\t\tRPI_LOG(Name() << \": estimated lux \" << estimated_lux);\n+\t\t{\n+\t\t\tstd::unique_lock<std::mutex> lock(mutex_);\n+\t\t\tstatus_ = status;\n+\t\t}\n+\t\t// Overwrite the metadata here as well, so that downstream\n+\t\t// algorithms get the latest value.\n+\t\timage_metadata->Set(\"lux.status\", status);\n+\t} else\n+\t\tRPI_WARN(Name() << \": no device metadata\");\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Lux(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/lux.hpp b/src/ipa/raspberrypi/controller/rpi/lux.hpp\nnew file mode 100644\nindex 000000000000..eb9354091452\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/lux.hpp\n@@ -0,0 +1,42 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * lux.hpp - Lux control algorithm\n+ */\n+#pragma once\n+\n+#include 
<atomic>\n+#include <mutex>\n+\n+#include \"../lux_status.h\"\n+#include \"../algorithm.hpp\"\n+\n+// This is our implementation of the \"lux control algorithm\".\n+\n+namespace RPi {\n+\n+class Lux : public Algorithm\n+{\n+public:\n+\tLux(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\tvoid Process(StatisticsPtr &stats, Metadata *image_metadata) override;\n+\tvoid SetCurrentAperture(double aperture);\n+\n+private:\n+\t// These values define the conditions of the reference image, against\n+\t// which we compare the new image.\n+\tdouble reference_shutter_speed_; // in micro-seconds\n+\tdouble reference_gain_;\n+\tdouble reference_aperture_; // units of 1/f\n+\tdouble reference_Y_; // out of 65536\n+\tdouble reference_lux_;\n+\tstd::atomic<double> current_aperture_;\n+\tLuxStatus status_;\n+\tstd::mutex mutex_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/noise.cpp b/src/ipa/raspberrypi/controller/rpi/noise.cpp\nnew file mode 100644\nindex 000000000000..2209d791315e\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/noise.cpp\n@@ -0,0 +1,71 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * noise.cpp - Noise control algorithm\n+ */\n+\n+#include <math.h>\n+\n+#include \"../device_status.h\"\n+#include \"../logging.hpp\"\n+#include \"../noise_status.h\"\n+\n+#include \"noise.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.noise\"\n+\n+Noise::Noise(Controller *controller)\n+\t: Algorithm(controller), mode_factor_(1.0)\n+{\n+}\n+\n+char const *Noise::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Noise::SwitchMode(CameraMode const &camera_mode)\n+{\n+\t// For example, we would expect a 2x2 binned mode to have a \"noise\n+\t// factor\" of sqrt(2x2) = 2. 
(can't be less than one, right?)\n+\tmode_factor_ = std::max(1.0, camera_mode.noise_factor);\n+}\n+\n+void Noise::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(Name());\n+\treference_constant_ = params.get<double>(\"reference_constant\");\n+\treference_slope_ = params.get<double>(\"reference_slope\");\n+}\n+\n+void Noise::Prepare(Metadata *image_metadata)\n+{\n+\tstruct DeviceStatus device_status;\n+\tdevice_status.analogue_gain = 1.0; // keep compiler calm\n+\tif (image_metadata->Get(\"device.status\", device_status) == 0) {\n+\t\t// There is a slight question as to exactly how the noise\n+\t\t// profile, specifically the constant part of it, scales. For\n+\t\t// now we assume it all scales the same, and we'll revisit this\n+\t\t// if it proves substantially wrong.  NOTE: we may also want to\n+\t\t// make some adjustments based on the camera mode (such as\n+\t\t// binning), if we knew how to discover it...\n+\t\tdouble factor = sqrt(device_status.analogue_gain) / mode_factor_;\n+\t\tstruct NoiseStatus status;\n+\t\tstatus.noise_constant = reference_constant_ * factor;\n+\t\tstatus.noise_slope = reference_slope_ * factor;\n+\t\timage_metadata->Set(\"noise.status\", status);\n+\t\tRPI_LOG(Name() << \": constant \" << status.noise_constant\n+\t\t\t       << \" slope \" << status.noise_slope);\n+\t} else\n+\t\tRPI_WARN(Name() << \" no metadata\");\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn new Noise(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/noise.hpp b/src/ipa/raspberrypi/controller/rpi/noise.hpp\nnew file mode 100644\nindex 000000000000..51d46a3dad09\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/noise.hpp\n@@ -0,0 +1,32 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * noise.hpp - Noise control algorithm\n+ */\n+#pragma 
once\n+\n+#include \"../algorithm.hpp\"\n+#include \"../noise_status.h\"\n+\n+// This is our implementation of the \"noise algorithm\".\n+\n+namespace RPi {\n+\n+class Noise : public Algorithm\n+{\n+public:\n+\tNoise(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid SwitchMode(CameraMode const &camera_mode) override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\t// the noise profile for analogue gain of 1.0\n+\tdouble reference_constant_;\n+\tdouble reference_slope_;\n+\tstd::atomic<double> mode_factor_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sdn.cpp b/src/ipa/raspberrypi/controller/rpi/sdn.cpp\nnew file mode 100644\nindex 000000000000..28d9d983da0b\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/sdn.cpp\n@@ -0,0 +1,63 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * sdn.cpp - SDN (spatial denoise) control algorithm\n+ */\n+\n+#include \"../noise_status.h\"\n+#include \"../sdn_status.h\"\n+\n+#include \"sdn.hpp\"\n+\n+using namespace RPi;\n+\n+// Calculate settings for the spatial denoise block using the noise profile in\n+// the image metadata.\n+\n+#define NAME \"rpi.sdn\"\n+\n+Sdn::Sdn(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+}\n+\n+char const *Sdn::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Sdn::Read(boost::property_tree::ptree const &params)\n+{\n+\tdeviation_ = params.get<double>(\"deviation\", 3.2);\n+\tstrength_ = params.get<double>(\"strength\", 0.75);\n+}\n+\n+void Sdn::Initialise() {}\n+\n+void Sdn::Prepare(Metadata *image_metadata)\n+{\n+\tstruct NoiseStatus noise_status = {};\n+\tnoise_status.noise_slope = 3.0; // in case no metadata\n+\tif (image_metadata->Get(\"noise.status\", noise_status) != 0)\n+\t\tRPI_WARN(\"Sdn: no noise profile found\");\n+\tRPI_LOG(\"Noise profile: constant \" << 
noise_status.noise_constant\n+\t\t\t\t\t   << \" slope \"\n+\t\t\t\t\t   << noise_status.noise_slope);\n+\tstruct SdnStatus status;\n+\tstatus.noise_constant = noise_status.noise_constant * deviation_;\n+\tstatus.noise_slope = noise_status.noise_slope * deviation_;\n+\tstatus.strength = strength_;\n+\timage_metadata->Set(\"sdn.status\", status);\n+\tRPI_LOG(\"Sdn: programmed constant \" << status.noise_constant\n+\t\t\t\t\t    << \" slope \" << status.noise_slope\n+\t\t\t\t\t    << \" strength \"\n+\t\t\t\t\t    << status.strength);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn (Algorithm *)new Sdn(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sdn.hpp b/src/ipa/raspberrypi/controller/rpi/sdn.hpp\nnew file mode 100644\nindex 000000000000..d48aab7eaf95\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/sdn.hpp\n@@ -0,0 +1,29 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * sdn.hpp - SDN (spatial denoise) control algorithm\n+ */\n+#pragma once\n+\n+#include \"../algorithm.hpp\"\n+\n+namespace RPi {\n+\n+// Algorithm to calculate correct spatial denoise (SDN) settings.\n+\n+class Sdn : public Algorithm\n+{\n+public:\n+\tSdn(Controller *controller = NULL);\n+\tchar const *Name() const override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Initialise() override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tdouble deviation_;\n+\tdouble strength_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.cpp b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp\nnew file mode 100644\nindex 000000000000..1f07bb626500\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp\n@@ -0,0 +1,60 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry 
Pi (Trading) Limited\n+ *\n+ * sharpen.cpp - sharpening control algorithm\n+ */\n+\n+#include <math.h>\n+\n+#include \"../logging.hpp\"\n+#include \"../sharpen_status.h\"\n+\n+#include \"sharpen.hpp\"\n+\n+using namespace RPi;\n+\n+#define NAME \"rpi.sharpen\"\n+\n+Sharpen::Sharpen(Controller *controller)\n+\t: Algorithm(controller)\n+{\n+}\n+\n+char const *Sharpen::Name() const\n+{\n+\treturn NAME;\n+}\n+\n+void Sharpen::SwitchMode(CameraMode const &camera_mode)\n+{\n+\t// can't be less than one, right?\n+\tmode_factor_ = std::max(1.0, camera_mode.noise_factor);\n+}\n+\n+void Sharpen::Read(boost::property_tree::ptree const &params)\n+{\n+\tRPI_LOG(Name());\n+\tthreshold_ = params.get<double>(\"threshold\", 1.0);\n+\tstrength_ = params.get<double>(\"strength\", 1.0);\n+\tlimit_ = params.get<double>(\"limit\", 1.0);\n+}\n+\n+void Sharpen::Prepare(Metadata *image_metadata)\n+{\n+\tdouble mode_factor = mode_factor_;\n+\tstruct SharpenStatus status;\n+\t// Binned modes seem to need the sharpening toned down with this\n+\t// pipeline.\n+\tstatus.threshold = threshold_ * mode_factor;\n+\tstatus.strength = strength_ / mode_factor;\n+\tstatus.limit = limit_ / mode_factor;\n+\timage_metadata->Set(\"sharpen.status\", status);\n+}\n+\n+// Register algorithm with the system.\n+static Algorithm *Create(Controller *controller)\n+{\n+\treturn new Sharpen(controller);\n+}\n+static RegisterAlgorithm reg(NAME, &Create);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.hpp b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp\nnew file mode 100644\nindex 000000000000..3b0d6801d281\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp\n@@ -0,0 +1,32 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * sharpen.hpp - sharpening control algorithm\n+ */\n+#pragma once\n+\n+#include \"../algorithm.hpp\"\n+#include \"../sharpen_status.h\"\n+\n+// This is our implementation of the \"sharpen 
algorithm\".\n+\n+namespace RPi {\n+\n+class Sharpen : public Algorithm\n+{\n+public:\n+\tSharpen(Controller *controller);\n+\tchar const *Name() const override;\n+\tvoid SwitchMode(CameraMode const &camera_mode) override;\n+\tvoid Read(boost::property_tree::ptree const &params) override;\n+\tvoid Prepare(Metadata *image_metadata) override;\n+\n+private:\n+\tdouble threshold_;\n+\tdouble strength_;\n+\tdouble limit_;\n+\tstd::atomic<double> mode_factor_;\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/controller/sdn_status.h b/src/ipa/raspberrypi/controller/sdn_status.h\nnew file mode 100644\nindex 000000000000..871e0b62af1f\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/sdn_status.h\n@@ -0,0 +1,23 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * sdn_status.h - SDN (spatial denoise) control algorithm status\n+ */\n+#pragma once\n+\n+// This stores the parameters required for Spatial Denoise (SDN).\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct SdnStatus {\n+\tdouble noise_constant;\n+\tdouble noise_slope;\n+\tdouble strength;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/controller/sharpen_status.h b/src/ipa/raspberrypi/controller/sharpen_status.h\nnew file mode 100644\nindex 000000000000..6de80f407c74\n--- /dev/null\n+++ b/src/ipa/raspberrypi/controller/sharpen_status.h\n@@ -0,0 +1,26 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * sharpen_status.h - Sharpen control algorithm status\n+ */\n+#pragma once\n+\n+// The \"sharpen\" algorithm stores the strength to use.\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+struct SharpenStatus {\n+\t// controls the smallest level of detail (or noise!) 
that sharpening will pick up\n+\tdouble threshold;\n+\t// the rate at which the sharpening response ramps once above the threshold\n+\tdouble strength;\n+\t// upper limit of the allowed sharpening response\n+\tdouble limit;\n+};\n+\n+#ifdef __cplusplus\n+}\n+#endif\ndiff --git a/src/ipa/raspberrypi/data/imx219.json b/src/ipa/raspberrypi/data/imx219.json\nnew file mode 100644\nindex 000000000000..ce7ff36faf09\n--- /dev/null\n+++ b/src/ipa/raspberrypi/data/imx219.json\n@@ -0,0 +1,401 @@\n+{\n+    \"rpi.black_level\":\n+    {\n+        \"black_level\": 4096\n+    },\n+    \"rpi.dpc\":\n+    {\n+\n+    },\n+    \"rpi.lux\":\n+    {\n+        \"reference_shutter_speed\": 27685,\n+        \"reference_gain\": 1.0,\n+        \"reference_aperture\": 1.0,\n+        \"reference_lux\": 998,\n+        \"reference_Y\": 12744\n+    },\n+    \"rpi.noise\":\n+    {\n+        \"reference_constant\": 0,\n+        \"reference_slope\": 3.67\n+    },\n+    \"rpi.geq\":\n+    {\n+        \"offset\": 204,\n+        \"slope\": 0.01633\n+    },\n+    \"rpi.sdn\":\n+    {\n+\n+    },\n+    \"rpi.awb\":\n+    {\n+        \"priors\":\n+        [\n+            {\n+                \"lux\": 0, \"prior\":\n+                [\n+                    2000, 1.0, 3000, 0.0, 13000, 0.0\n+                ]\n+            },\n+            {\n+                \"lux\": 800, \"prior\":\n+                [\n+                    2000, 0.0, 6000, 2.0, 13000, 2.0\n+                ]\n+            },\n+            {\n+                \"lux\": 1500, \"prior\":\n+                [\n+                    2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0\n+                ]\n+            }\n+        ],\n+        \"modes\":\n+        {\n+            \"auto\":\n+            {\n+                \"lo\": 2500,\n+                \"hi\": 8000\n+            },\n+            \"incandescent\":\n+            {\n+                \"lo\": 2500,\n+                \"hi\": 3000\n+            },\n+            
\"tungsten\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 3500\n+            },\n+            \"fluorescent\":\n+            {\n+                \"lo\": 4000,\n+                \"hi\": 4700\n+            },\n+            \"indoor\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 5000\n+            },\n+            \"daylight\":\n+            {\n+                \"lo\": 5500,\n+                \"hi\": 6500\n+            },\n+            \"cloudy\":\n+            {\n+                \"lo\": 7000,\n+                \"hi\": 8600\n+            }\n+        },\n+        \"bayes\": 1,\n+        \"ct_curve\":\n+        [\n+            2498.0, 0.9309, 0.3599, 2911.0, 0.8682, 0.4283, 2919.0, 0.8358, 0.4621, 3627.0, 0.7646, 0.5327, 4600.0, 0.6079, 0.6721, 5716.0,\n+            0.5712, 0.7017, 8575.0, 0.4331, 0.8037\n+        ],\n+        \"sensitivity_r\": 1.05,\n+        \"sensitivity_b\": 1.05,\n+        \"transverse_pos\": 0.04791,\n+        \"transverse_neg\": 0.04881\n+    },\n+    \"rpi.agc\":\n+    {\n+        \"metering_modes\":\n+        {\n+            \"centre-weighted\":\n+            {\n+                \"weights\":\n+                [\n+                    3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0\n+                ]\n+            },\n+            \"spot\":\n+            {\n+                \"weights\":\n+                [\n+                    2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\n+                ]\n+            },\n+            \"matrix\":\n+            {\n+                \"weights\":\n+                [\n+                    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\n+                ]\n+            }\n+        },\n+        \"exposure_modes\":\n+        {\n+            \"normal\":\n+            {\n+                \"shutter\":\n+                [\n+                    100, 10000, 30000, 60000, 120000\n+                ],\n+                \"gain\":\n+                [\n+                    
1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            },\n+            \"sport\":\n+            {\n+                \"shutter\":\n+                [\n+                    100, 5000, 10000, 20000, 120000\n+                ],\n+                \"gain\":\n+                [\n+                    1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            }\n+        },\n+        \"constraint_modes\":\n+        {\n+            \"normal\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.5, 1000, 0.5\n+                    ]\n+                }\n+            ],\n+            \"highlight\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.5, 1000, 0.5\n+                    ]\n+                },\n+                {\n+                    \"bound\": \"UPPER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.8, 1000, 0.8\n+                    ]\n+                }\n+            ],\n+\t    \"shadows\":\n+\t    [\n+\t\t{\n+\t\t    \"bound\": \"LOWER\", \"q_lo\": 0.0, \"q_hi\": 0.5, \"y_target\":\n+                    [\n+                        0, 0.17, 1000, 0.17\n+                    ]\n+                }\n+            ]\n+        },\n+        \"y_target\":\n+        [\n+            0, 0.16, 1000, 0.165, 10000, 0.17\n+        ]\n+    },\n+    \"rpi.alsc\":\n+    {\n+        \"omega\": 1.3,\n+        \"n_iter\": 100,\n+        \"luminance_strength\": 0.7,\n+        \"calibrations_Cr\":\n+        [\n+            {\n+                \"ct\": 3000, \"table\":\n+                [\n+                    1.487, 1.481, 1.481, 1.445, 1.389, 1.327, 1.307, 1.307, 1.307, 1.309, 1.341, 1.405, 1.458, 1.494, 1.494, 1.497,\n+                    1.491, 1.481, 1.448, 1.397, 1.331, 
1.275, 1.243, 1.229, 1.229, 1.249, 1.287, 1.349, 1.409, 1.463, 1.494, 1.497,\n+                    1.491, 1.469, 1.405, 1.331, 1.275, 1.217, 1.183, 1.172, 1.172, 1.191, 1.231, 1.287, 1.349, 1.424, 1.484, 1.499,\n+                    1.487, 1.444, 1.363, 1.283, 1.217, 1.183, 1.148, 1.138, 1.138, 1.159, 1.191, 1.231, 1.302, 1.385, 1.461, 1.492,\n+                    1.481, 1.423, 1.334, 1.253, 1.189, 1.148, 1.135, 1.119, 1.123, 1.137, 1.159, 1.203, 1.272, 1.358, 1.442, 1.488,\n+                    1.479, 1.413, 1.321, 1.236, 1.176, 1.139, 1.118, 1.114, 1.116, 1.123, 1.149, 1.192, 1.258, 1.344, 1.432, 1.487,\n+                    1.479, 1.413, 1.321, 1.236, 1.176, 1.139, 1.116, 1.114, 1.115, 1.123, 1.149, 1.192, 1.258, 1.344, 1.432, 1.487,\n+                    1.479, 1.425, 1.336, 1.251, 1.189, 1.149, 1.136, 1.118, 1.121, 1.138, 1.158, 1.206, 1.275, 1.358, 1.443, 1.488,\n+                    1.488, 1.448, 1.368, 1.285, 1.219, 1.189, 1.149, 1.139, 1.139, 1.158, 1.195, 1.235, 1.307, 1.387, 1.462, 1.493,\n+                    1.496, 1.475, 1.411, 1.337, 1.284, 1.219, 1.189, 1.176, 1.176, 1.195, 1.235, 1.296, 1.356, 1.429, 1.487, 1.501,\n+                    1.495, 1.489, 1.458, 1.407, 1.337, 1.287, 1.253, 1.239, 1.239, 1.259, 1.296, 1.356, 1.419, 1.472, 1.499, 1.499,\n+                    1.494, 1.489, 1.489, 1.453, 1.398, 1.336, 1.317, 1.317, 1.317, 1.321, 1.351, 1.416, 1.467, 1.501, 1.501, 1.499\n+                ]\n+            },\n+            {\n+                \"ct\": 3850, \"table\":\n+                [\n+                    1.694, 1.688, 1.688, 1.649, 1.588, 1.518, 1.495, 1.495, 1.495, 1.497, 1.532, 1.602, 1.659, 1.698, 1.698, 1.703,\n+                    1.698, 1.688, 1.653, 1.597, 1.525, 1.464, 1.429, 1.413, 1.413, 1.437, 1.476, 1.542, 1.606, 1.665, 1.698, 1.703,\n+                    1.697, 1.673, 1.605, 1.525, 1.464, 1.401, 1.369, 1.354, 1.354, 1.377, 1.417, 1.476, 1.542, 1.623, 1.687, 1.705,\n+                    1.692, 1.646, 1.561, 1.472, 1.401, 1.368, 
1.337, 1.323, 1.324, 1.348, 1.377, 1.417, 1.492, 1.583, 1.661, 1.697,\n+                    1.686, 1.625, 1.528, 1.439, 1.372, 1.337, 1.321, 1.311, 1.316, 1.324, 1.348, 1.389, 1.461, 1.553, 1.642, 1.694,\n+                    1.684, 1.613, 1.514, 1.423, 1.359, 1.328, 1.311, 1.306, 1.306, 1.316, 1.339, 1.378, 1.446, 1.541, 1.633, 1.693,\n+                    1.684, 1.613, 1.514, 1.423, 1.359, 1.328, 1.311, 1.305, 1.305, 1.316, 1.339, 1.378, 1.446, 1.541, 1.633, 1.693,\n+                    1.685, 1.624, 1.529, 1.438, 1.372, 1.336, 1.324, 1.309, 1.314, 1.323, 1.348, 1.392, 1.462, 1.555, 1.646, 1.694,\n+                    1.692, 1.648, 1.561, 1.473, 1.403, 1.372, 1.336, 1.324, 1.324, 1.348, 1.378, 1.423, 1.495, 1.585, 1.667, 1.701,\n+                    1.701, 1.677, 1.608, 1.527, 1.471, 1.403, 1.375, 1.359, 1.359, 1.378, 1.423, 1.488, 1.549, 1.631, 1.694, 1.709,\n+                    1.702, 1.694, 1.656, 1.601, 1.527, 1.473, 1.441, 1.424, 1.424, 1.443, 1.488, 1.549, 1.621, 1.678, 1.706, 1.707,\n+                    1.699, 1.694, 1.694, 1.654, 1.593, 1.525, 1.508, 1.508, 1.508, 1.509, 1.546, 1.614, 1.674, 1.708, 1.708, 1.707\n+                ]\n+            },\n+            {\n+                \"ct\": 6000, \"table\":\n+                [\n+                    2.179, 2.176, 2.176, 2.125, 2.048, 1.975, 1.955, 1.954, 1.954, 1.956, 1.993, 2.071, 2.141, 2.184, 2.185, 2.188,\n+                    2.189, 2.176, 2.128, 2.063, 1.973, 1.908, 1.872, 1.856, 1.856, 1.876, 1.922, 1.999, 2.081, 2.144, 2.184, 2.192,\n+                    2.187, 2.152, 2.068, 1.973, 1.907, 1.831, 1.797, 1.786, 1.786, 1.804, 1.853, 1.922, 1.999, 2.089, 2.166, 2.191,\n+                    2.173, 2.117, 2.013, 1.908, 1.831, 1.791, 1.755, 1.749, 1.749, 1.767, 1.804, 1.853, 1.939, 2.041, 2.135, 2.181,\n+                    2.166, 2.089, 1.975, 1.869, 1.792, 1.755, 1.741, 1.731, 1.734, 1.749, 1.767, 1.818, 1.903, 2.005, 2.111, 2.173,\n+                    2.165, 2.074, 1.956, 1.849, 1.777, 1.742, 1.729, 
1.725, 1.729, 1.734, 1.758, 1.804, 1.884, 1.991, 2.099, 2.172,\n+                    2.165, 2.074, 1.956, 1.849, 1.777, 1.742, 1.727, 1.724, 1.725, 1.734, 1.758, 1.804, 1.884, 1.991, 2.099, 2.172,\n+                    2.166, 2.085, 1.975, 1.869, 1.791, 1.755, 1.741, 1.729, 1.733, 1.749, 1.769, 1.819, 1.904, 2.009, 2.114, 2.174,\n+                    2.174, 2.118, 2.015, 1.913, 1.831, 1.791, 1.755, 1.749, 1.749, 1.769, 1.811, 1.855, 1.943, 2.047, 2.139, 2.183,\n+                    2.187, 2.151, 2.072, 1.979, 1.911, 1.831, 1.801, 1.791, 1.791, 1.811, 1.855, 1.933, 2.006, 2.101, 2.173, 2.197,\n+                    2.189, 2.178, 2.132, 2.069, 1.979, 1.913, 1.879, 1.867, 1.867, 1.891, 1.933, 2.006, 2.091, 2.156, 2.195, 2.197,\n+                    2.181, 2.179, 2.178, 2.131, 2.057, 1.981, 1.965, 1.965, 1.965, 1.969, 1.999, 2.083, 2.153, 2.197, 2.197, 2.196\n+                ]\n+            }\n+        ],\n+        \"calibrations_Cb\":\n+        [\n+            {\n+                \"ct\": 3000, \"table\":\n+                [\n+                    1.967, 1.961, 1.955, 1.953, 1.954, 1.957, 1.961, 1.963, 1.963, 1.961, 1.959, 1.957, 1.954, 1.951, 1.951, 1.955,\n+                    1.961, 1.959, 1.957, 1.956, 1.962, 1.967, 1.975, 1.979, 1.979, 1.975, 1.971, 1.967, 1.957, 1.952, 1.951, 1.951,\n+                    1.959, 1.959, 1.959, 1.966, 1.976, 1.989, 1.999, 2.004, 2.003, 1.997, 1.991, 1.981, 1.967, 1.956, 1.951, 1.951,\n+                    1.959, 1.962, 1.967, 1.978, 1.993, 2.009, 2.021, 2.028, 2.026, 2.021, 2.011, 1.995, 1.981, 1.964, 1.953, 1.951,\n+                    1.961, 1.965, 1.977, 1.993, 2.009, 2.023, 2.041, 2.047, 2.047, 2.037, 2.024, 2.011, 1.995, 1.975, 1.958, 1.953,\n+                    1.963, 1.968, 1.981, 2.001, 2.019, 2.039, 2.046, 2.052, 2.052, 2.051, 2.035, 2.021, 2.001, 1.978, 1.959, 1.955,\n+                    1.961, 1.966, 1.981, 2.001, 2.019, 2.038, 2.043, 2.051, 2.052, 2.042, 2.034, 2.019, 2.001, 1.978, 1.959, 1.954,\n+                    
1.957, 1.961, 1.972, 1.989, 2.003, 2.021, 2.038, 2.039, 2.039, 2.034, 2.019, 2.004, 1.988, 1.971, 1.954, 1.949,\n+                    1.952, 1.953, 1.959, 1.972, 1.989, 2.003, 2.016, 2.019, 2.019, 2.014, 2.003, 1.988, 1.971, 1.955, 1.948, 1.947,\n+                    1.949, 1.948, 1.949, 1.957, 1.971, 1.978, 1.991, 1.994, 1.994, 1.989, 1.979, 1.967, 1.954, 1.946, 1.947, 1.947,\n+                    1.949, 1.946, 1.944, 1.946, 1.949, 1.954, 1.962, 1.967, 1.967, 1.963, 1.956, 1.948, 1.943, 1.943, 1.946, 1.949,\n+                    1.951, 1.946, 1.944, 1.942, 1.943, 1.943, 1.947, 1.948, 1.949, 1.947, 1.945, 1.941, 1.938, 1.939, 1.948, 1.952\n+                ]\n+            },\n+            {\n+                \"ct\": 3850, \"table\":\n+                [\n+                    1.726, 1.724, 1.722, 1.723, 1.731, 1.735, 1.743, 1.746, 1.746, 1.741, 1.735, 1.729, 1.725, 1.721, 1.721, 1.721,\n+                    1.724, 1.723, 1.723, 1.727, 1.735, 1.744, 1.749, 1.756, 1.756, 1.749, 1.744, 1.735, 1.727, 1.719, 1.719, 1.719,\n+                    1.723, 1.723, 1.724, 1.735, 1.746, 1.759, 1.767, 1.775, 1.775, 1.766, 1.758, 1.746, 1.735, 1.723, 1.718, 1.716,\n+                    1.723, 1.725, 1.732, 1.746, 1.759, 1.775, 1.782, 1.792, 1.792, 1.782, 1.772, 1.759, 1.745, 1.729, 1.718, 1.716,\n+                    1.725, 1.729, 1.738, 1.756, 1.775, 1.785, 1.796, 1.803, 1.804, 1.794, 1.783, 1.772, 1.757, 1.736, 1.722, 1.718,\n+                    1.728, 1.731, 1.741, 1.759, 1.781, 1.795, 1.803, 1.806, 1.808, 1.805, 1.791, 1.779, 1.762, 1.739, 1.722, 1.721,\n+                    1.727, 1.731, 1.741, 1.759, 1.781, 1.791, 1.799, 1.804, 1.806, 1.801, 1.791, 1.779, 1.762, 1.739, 1.722, 1.717,\n+                    1.722, 1.724, 1.733, 1.751, 1.768, 1.781, 1.791, 1.796, 1.799, 1.791, 1.781, 1.766, 1.754, 1.731, 1.717, 1.714,\n+                    1.718, 1.718, 1.724, 1.737, 1.752, 1.768, 1.776, 1.782, 1.784, 1.781, 1.766, 1.754, 1.737, 1.724, 1.713, 1.709,\n+                    1.716, 
1.715, 1.716, 1.725, 1.737, 1.749, 1.756, 1.763, 1.764, 1.762, 1.749, 1.737, 1.724, 1.717, 1.709, 1.708,\n+                    1.715, 1.714, 1.712, 1.715, 1.722, 1.729, 1.736, 1.741, 1.742, 1.739, 1.731, 1.723, 1.717, 1.712, 1.711, 1.709,\n+                    1.716, 1.714, 1.711, 1.712, 1.715, 1.719, 1.723, 1.728, 1.731, 1.729, 1.723, 1.718, 1.711, 1.711, 1.713, 1.713\n+                ]\n+            },\n+            {\n+                \"ct\": 6000, \"table\":\n+                [\n+                    1.374, 1.372, 1.373, 1.374, 1.375, 1.378, 1.378, 1.381, 1.382, 1.382, 1.378, 1.373, 1.372, 1.369, 1.365, 1.365,\n+                    1.371, 1.371, 1.372, 1.374, 1.378, 1.381, 1.384, 1.386, 1.388, 1.387, 1.384, 1.377, 1.372, 1.368, 1.364, 1.362,\n+                    1.369, 1.371, 1.372, 1.377, 1.383, 1.391, 1.394, 1.396, 1.397, 1.395, 1.391, 1.382, 1.374, 1.369, 1.362, 1.361,\n+                    1.369, 1.371, 1.375, 1.383, 1.391, 1.399, 1.402, 1.404, 1.405, 1.403, 1.398, 1.391, 1.379, 1.371, 1.363, 1.361,\n+                    1.371, 1.373, 1.378, 1.388, 1.399, 1.407, 1.411, 1.413, 1.413, 1.411, 1.405, 1.397, 1.385, 1.374, 1.366, 1.362,\n+                    1.371, 1.374, 1.379, 1.389, 1.405, 1.411, 1.414, 1.414, 1.415, 1.415, 1.411, 1.401, 1.388, 1.376, 1.367, 1.363,\n+                    1.371, 1.373, 1.379, 1.389, 1.405, 1.408, 1.413, 1.414, 1.414, 1.413, 1.409, 1.401, 1.388, 1.376, 1.367, 1.362,\n+                    1.366, 1.369, 1.374, 1.384, 1.396, 1.404, 1.407, 1.408, 1.408, 1.408, 1.401, 1.395, 1.382, 1.371, 1.363, 1.359,\n+                    1.364, 1.365, 1.368, 1.375, 1.386, 1.396, 1.399, 1.401, 1.399, 1.399, 1.395, 1.385, 1.374, 1.365, 1.359, 1.357,\n+                    1.361, 1.363, 1.365, 1.368, 1.377, 1.384, 1.388, 1.391, 1.391, 1.388, 1.385, 1.375, 1.366, 1.361, 1.358, 1.356,\n+                    1.361, 1.362, 1.362, 1.364, 1.367, 1.373, 1.376, 1.377, 1.377, 1.375, 1.373, 1.366, 1.362, 1.358, 1.358, 1.358,\n+                    1.361, 1.362, 
1.362, 1.362, 1.363, 1.367, 1.369, 1.368, 1.367, 1.367, 1.367, 1.364, 1.358, 1.357, 1.358, 1.359\n+                ]\n+            }\n+        ],\n+        \"luminance_lut\":\n+        [\n+            2.716, 2.568, 2.299, 2.065, 1.845, 1.693, 1.605, 1.597, 1.596, 1.634, 1.738, 1.914, 2.145, 2.394, 2.719, 2.901,\n+            2.593, 2.357, 2.093, 1.876, 1.672, 1.528, 1.438, 1.393, 1.394, 1.459, 1.569, 1.731, 1.948, 2.169, 2.481, 2.756,\n+            2.439, 2.197, 1.922, 1.691, 1.521, 1.365, 1.266, 1.222, 1.224, 1.286, 1.395, 1.573, 1.747, 1.988, 2.299, 2.563,\n+            2.363, 2.081, 1.797, 1.563, 1.376, 1.244, 1.152, 1.099, 1.101, 1.158, 1.276, 1.421, 1.607, 1.851, 2.163, 2.455,\n+            2.342, 2.003, 1.715, 1.477, 1.282, 1.152, 1.074, 1.033, 1.035, 1.083, 1.163, 1.319, 1.516, 1.759, 2.064, 2.398,\n+            2.342, 1.985, 1.691, 1.446, 1.249, 1.111, 1.034, 1.004, 1.004, 1.028, 1.114, 1.274, 1.472, 1.716, 2.019, 2.389,\n+            2.342, 1.991, 1.691, 1.446, 1.249, 1.112, 1.034, 1.011, 1.005, 1.035, 1.114, 1.274, 1.472, 1.716, 2.019, 2.389,\n+            2.365, 2.052, 1.751, 1.499, 1.299, 1.171, 1.089, 1.039, 1.042, 1.084, 1.162, 1.312, 1.516, 1.761, 2.059, 2.393,\n+            2.434, 2.159, 1.856, 1.601, 1.403, 1.278, 1.166, 1.114, 1.114, 1.162, 1.266, 1.402, 1.608, 1.847, 2.146, 2.435,\n+            2.554, 2.306, 2.002, 1.748, 1.563, 1.396, 1.299, 1.247, 1.243, 1.279, 1.386, 1.551, 1.746, 1.977, 2.272, 2.518,\n+            2.756, 2.493, 2.195, 1.947, 1.739, 1.574, 1.481, 1.429, 1.421, 1.457, 1.559, 1.704, 1.929, 2.159, 2.442, 2.681,\n+            2.935, 2.739, 2.411, 2.151, 1.922, 1.749, 1.663, 1.628, 1.625, 1.635, 1.716, 1.872, 2.113, 2.368, 2.663, 2.824\n+        ],\n+        \"sigma\": 0.00381,\n+        \"sigma_Cb\": 0.00216\n+    },\n+    \"rpi.contrast\":\n+    {\n+        \"ce_enable\": 1,\n+        \"gamma_curve\":\n+        [\n+            0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193,\n+        
    8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168,\n+            16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796,\n+            32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476,\n+            65535, 65535\n+        ]\n+    },\n+    \"rpi.ccm\":\n+    {\n+        \"ccms\":\n+        [\n+            {\n+                \"ct\": 2498, \"ccm\":\n+                [\n+                    1.58731, -0.18011, -0.40721, -0.60639, 2.03422, -0.42782, -0.19612, -1.69203, 2.88815\n+                ]\n+            },\n+            {\n+                \"ct\": 2811, \"ccm\":\n+                [\n+                    1.61593, -0.33164, -0.28429, -0.55048, 1.97779, -0.42731, -0.12042, -1.42847, 2.54889\n+                ]\n+            },\n+            {\n+                \"ct\": 2911, \"ccm\":\n+                [\n+                    1.62771, -0.41282, -0.21489, -0.57991, 2.04176, -0.46186, -0.07613, -1.13359, 2.20972\n+                ]\n+            },\n+            {\n+                \"ct\": 2919, \"ccm\":\n+                [\n+                    1.62661, -0.37736, -0.24925, -0.52519, 1.95233, -0.42714, -0.10842, -1.34929, 2.45771\n+                ]\n+            },\n+            {\n+                \"ct\": 3627, \"ccm\":\n+                [\n+                    1.70385, -0.57231, -0.13154, -0.47763, 1.85998, -0.38235, -0.07467, -0.82678, 1.90145\n+                ]\n+            },\n+            {\n+                \"ct\": 4600, \"ccm\":\n+                [\n+                    1.68486, -0.61085, -0.07402, -0.41927, 2.04016, -0.62089, -0.08633, -0.67672, 1.76305\n+                ]\n+            },\n+            {\n+                \"ct\": 5716, \"ccm\":\n+                [\n+                    1.80439, -0.73699, -0.06739, -0.36073, 1.83327, -0.47255, -0.08378, -0.56403, 1.64781\n+        
        ]\n+            },\n+            {\n+                \"ct\": 8575, \"ccm\":\n+                [\n+                    1.89357, -0.76427, -0.12931, -0.27399, 2.15605, -0.88206, -0.12035, -0.68256, 1.80292\n+                ]\n+            }\n+        ]\n+    },\n+    \"rpi.sharpen\":\n+    {\n+\n+    },\n+    \"rpi.dpc\":\n+    {\n+\n+    }\n+}\ndiff --git a/src/ipa/raspberrypi/data/imx477.json b/src/ipa/raspberrypi/data/imx477.json\nnew file mode 100644\nindex 000000000000..dce5234f7209\n--- /dev/null\n+++ b/src/ipa/raspberrypi/data/imx477.json\n@@ -0,0 +1,416 @@\n+{\n+    \"rpi.black_level\":\n+    {\n+        \"black_level\": 4096\n+    },\n+    \"rpi.dpc\":\n+    {\n+\n+    },\n+    \"rpi.lux\":\n+    {\n+        \"reference_shutter_speed\": 27242,\n+        \"reference_gain\": 1.0,\n+        \"reference_aperture\": 1.0,\n+        \"reference_lux\": 830,\n+        \"reference_Y\": 17755\n+    },\n+    \"rpi.noise\":\n+    {\n+        \"reference_constant\": 0,\n+        \"reference_slope\": 2.767\n+    },\n+    \"rpi.geq\":\n+    {\n+        \"offset\": 204,\n+        \"slope\": 0.01078\n+    },\n+    \"rpi.sdn\":\n+    {\n+\n+    },\n+    \"rpi.awb\":\n+    {\n+        \"priors\":\n+        [\n+            {\n+                \"lux\": 0, \"prior\":\n+                [\n+                    2000, 1.0, 3000, 0.0, 13000, 0.0\n+                ]\n+            },\n+            {\n+                \"lux\": 800, \"prior\":\n+                [\n+                    2000, 0.0, 6000, 2.0, 13000, 2.0\n+                ]\n+            },\n+            {\n+                \"lux\": 1500, \"prior\":\n+                [\n+                    2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0\n+                ]\n+            }\n+        ],\n+        \"modes\":\n+        {\n+            \"auto\":\n+            {\n+                \"lo\": 2500,\n+                \"hi\": 8000\n+            },\n+            \"incandescent\":\n+            {\n+                
\"lo\": 2500,\n+                \"hi\": 3000\n+            },\n+            \"tungsten\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 3500\n+            },\n+            \"fluorescent\":\n+            {\n+                \"lo\": 4000,\n+                \"hi\": 4700\n+            },\n+            \"indoor\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 5000\n+            },\n+            \"daylight\":\n+            {\n+                \"lo\": 5500,\n+                \"hi\": 6500\n+            },\n+            \"cloudy\":\n+            {\n+                \"lo\": 7000,\n+                \"hi\": 8600\n+            }\n+        },\n+        \"bayes\": 1,\n+        \"ct_curve\":\n+        [\n+            2360.0, 0.6009, 0.3093, 2870.0, 0.5047, 0.3936, 2970.0, 0.4782, 0.4221, 3700.0, 0.4212, 0.4923, 3870.0, 0.4037, 0.5166, 4000.0,\n+            0.3965, 0.5271, 4400.0, 0.3703, 0.5666, 4715.0, 0.3411, 0.6147, 5920.0, 0.3108, 0.6687, 9050.0, 0.2524, 0.7856\n+        ],\n+        \"sensitivity_r\": 1.05,\n+        \"sensitivity_b\": 1.05,\n+        \"transverse_pos\": 0.0238,\n+        \"transverse_neg\": 0.04429\n+    },\n+    \"rpi.agc\":\n+    {\n+        \"metering_modes\":\n+        {\n+            \"centre-weighted\":\n+            {\n+                \"weights\":\n+                [\n+                    3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0\n+                ]\n+            },\n+            \"spot\":\n+            {\n+                \"weights\":\n+                [\n+                    2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\n+                ]\n+            },\n+            \"matrix\":\n+            {\n+                \"weights\":\n+                [\n+                    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\n+                ]\n+            }\n+        },\n+        \"exposure_modes\":\n+        {\n+            \"normal\":\n+            {\n+                \"shutter\":\n+                
[\n+                    100, 10000, 30000, 60000, 120000\n+                ],\n+                \"gain\":\n+                [\n+                    1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            },\n+            \"sport\":\n+            {\n+                \"shutter\":\n+                [\n+                    100, 5000, 10000, 20000, 120000\n+                ],\n+                \"gain\":\n+                [\n+                    1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            }\n+        },\n+        \"constraint_modes\":\n+        {\n+            \"normal\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.3, 1000, 0.3\n+                    ]\n+                }\n+            ],\n+            \"highlight\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.3, 1000, 0.3\n+                    ]\n+                },\n+                {\n+                    \"bound\": \"UPPER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.8, 1000, 0.8\n+                    ]\n+                }\n+            ],\n+            \"shadows\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.0, \"q_hi\": 0.5, \"y_target\":\n+                    [\n+                        0, 0.17, 1000, 0.17\n+                    ]\n+                }\n+            ]\n+        },\n+        \"y_target\":\n+        [\n+            0, 0.16, 1000, 0.165, 10000, 0.17\n+        ]\n+    },\n+    \"rpi.alsc\":\n+    {\n+        \"omega\": 1.3,\n+        \"n_iter\": 100,\n+        \"luminance_strength\": 0.5,\n+        \"calibrations_Cr\":\n+        [\n+            {\n+                \"ct\": 2960, \"table\":\n+                [\n+                    2.088, 2.086, 2.082, 
2.081, 2.077, 2.071, 2.068, 2.068, 2.072, 2.073, 2.075, 2.078, 2.084, 2.092, 2.095, 2.098,\n+                    2.086, 2.084, 2.079, 2.078, 2.075, 2.068, 2.064, 2.063, 2.068, 2.071, 2.072, 2.075, 2.081, 2.089, 2.092, 2.094,\n+                    2.083, 2.081, 2.077, 2.072, 2.069, 2.062, 2.059, 2.059, 2.063, 2.067, 2.069, 2.072, 2.079, 2.088, 2.089, 2.089,\n+                    2.081, 2.077, 2.072, 2.068, 2.065, 2.058, 2.055, 2.054, 2.057, 2.062, 2.066, 2.069, 2.077, 2.084, 2.086, 2.086,\n+                    2.078, 2.075, 2.069, 2.065, 2.061, 2.055, 2.052, 2.049, 2.051, 2.056, 2.062, 2.065, 2.072, 2.079, 2.081, 2.079,\n+                    2.079, 2.075, 2.069, 2.064, 2.061, 2.053, 2.049, 2.046, 2.049, 2.051, 2.057, 2.062, 2.069, 2.075, 2.077, 2.075,\n+                    2.082, 2.079, 2.072, 2.065, 2.061, 2.054, 2.049, 2.047, 2.049, 2.051, 2.056, 2.061, 2.066, 2.073, 2.073, 2.069,\n+                    2.086, 2.082, 2.075, 2.068, 2.062, 2.054, 2.051, 2.049, 2.051, 2.052, 2.056, 2.061, 2.066, 2.073, 2.073, 2.072,\n+                    2.088, 2.086, 2.079, 2.074, 2.066, 2.057, 2.051, 2.051, 2.054, 2.055, 2.056, 2.061, 2.067, 2.072, 2.073, 2.072,\n+                    2.091, 2.087, 2.079, 2.075, 2.068, 2.057, 2.052, 2.052, 2.056, 2.055, 2.055, 2.059, 2.066, 2.072, 2.072, 2.072,\n+                    2.093, 2.088, 2.081, 2.077, 2.069, 2.059, 2.054, 2.054, 2.057, 2.056, 2.056, 2.058, 2.066, 2.072, 2.073, 2.073,\n+                    2.095, 2.091, 2.084, 2.078, 2.075, 2.067, 2.057, 2.057, 2.059, 2.059, 2.058, 2.059, 2.068, 2.073, 2.075, 2.078\n+                ]\n+            },\n+            {\n+                \"ct\": 4850, \"table\":\n+                [\n+                    2.973, 2.968, 2.956, 2.943, 2.941, 2.932, 2.923, 2.921, 2.924, 2.929, 2.931, 2.939, 2.953, 2.965, 2.966, 2.976,\n+                    2.969, 2.962, 2.951, 2.941, 2.934, 2.928, 2.919, 2.918, 2.919, 2.923, 2.927, 2.933, 2.945, 2.957, 2.962, 2.962,\n+                    2.964, 2.956, 2.944, 2.932, 
2.929, 2.924, 2.915, 2.914, 2.915, 2.919, 2.924, 2.928, 2.941, 2.952, 2.958, 2.959,\n+                    2.957, 2.951, 2.939, 2.928, 2.924, 2.919, 2.913, 2.911, 2.911, 2.915, 2.919, 2.925, 2.936, 2.947, 2.952, 2.953,\n+                    2.954, 2.947, 2.935, 2.924, 2.919, 2.915, 2.908, 2.906, 2.906, 2.907, 2.914, 2.921, 2.932, 2.941, 2.943, 2.942,\n+                    2.953, 2.946, 2.932, 2.921, 2.916, 2.911, 2.904, 2.902, 2.901, 2.904, 2.909, 2.919, 2.926, 2.937, 2.939, 2.939,\n+                    2.953, 2.947, 2.932, 2.918, 2.915, 2.909, 2.903, 2.901, 2.901, 2.906, 2.911, 2.918, 2.924, 2.936, 2.936, 2.932,\n+                    2.956, 2.948, 2.934, 2.919, 2.916, 2.908, 2.903, 2.901, 2.902, 2.907, 2.909, 2.917, 2.926, 2.936, 2.939, 2.939,\n+                    2.957, 2.951, 2.936, 2.923, 2.917, 2.907, 2.904, 2.901, 2.902, 2.908, 2.911, 2.919, 2.929, 2.939, 2.942, 2.942,\n+                    2.961, 2.951, 2.936, 2.922, 2.918, 2.906, 2.904, 2.901, 2.901, 2.907, 2.911, 2.921, 2.931, 2.941, 2.942, 2.944,\n+                    2.964, 2.954, 2.936, 2.924, 2.918, 2.909, 2.905, 2.905, 2.905, 2.907, 2.912, 2.923, 2.933, 2.942, 2.944, 2.944,\n+                    2.964, 2.958, 2.943, 2.927, 2.921, 2.914, 2.909, 2.907, 2.907, 2.912, 2.916, 2.928, 2.936, 2.944, 2.947, 2.952\n+                ]\n+            },\n+            {\n+                \"ct\": 5930, \"table\":\n+                [\n+                    3.312, 3.308, 3.301, 3.294, 3.288, 3.277, 3.268, 3.261, 3.259, 3.261, 3.267, 3.273, 3.285, 3.301, 3.303, 3.312,\n+                    3.308, 3.304, 3.294, 3.291, 3.283, 3.271, 3.263, 3.259, 3.257, 3.258, 3.261, 3.268, 3.278, 3.293, 3.299, 3.299,\n+                    3.302, 3.296, 3.288, 3.282, 3.276, 3.267, 3.259, 3.254, 3.252, 3.253, 3.256, 3.261, 3.273, 3.289, 3.292, 3.292,\n+                    3.296, 3.289, 3.282, 3.276, 3.269, 3.263, 3.256, 3.251, 3.248, 3.249, 3.251, 3.257, 3.268, 3.279, 3.284, 3.284,\n+                    3.292, 3.285, 3.279, 3.271, 3.264, 
3.257, 3.249, 3.243, 3.241, 3.241, 3.246, 3.252, 3.261, 3.274, 3.275, 3.273,\n+                    3.291, 3.285, 3.276, 3.268, 3.259, 3.251, 3.242, 3.239, 3.236, 3.238, 3.244, 3.248, 3.258, 3.268, 3.269, 3.265,\n+                    3.294, 3.288, 3.275, 3.266, 3.257, 3.248, 3.239, 3.238, 3.237, 3.238, 3.243, 3.246, 3.255, 3.264, 3.264, 3.257,\n+                    3.297, 3.293, 3.279, 3.268, 3.258, 3.249, 3.238, 3.237, 3.239, 3.239, 3.243, 3.245, 3.255, 3.264, 3.264, 3.263,\n+                    3.301, 3.295, 3.281, 3.271, 3.259, 3.248, 3.237, 3.237, 3.239, 3.241, 3.243, 3.246, 3.257, 3.265, 3.266, 3.264,\n+                    3.306, 3.295, 3.279, 3.271, 3.261, 3.247, 3.235, 3.234, 3.239, 3.239, 3.243, 3.247, 3.258, 3.265, 3.265, 3.264,\n+                    3.308, 3.297, 3.279, 3.272, 3.261, 3.249, 3.239, 3.239, 3.241, 3.243, 3.245, 3.248, 3.261, 3.265, 3.266, 3.265,\n+                    3.309, 3.301, 3.286, 3.276, 3.267, 3.256, 3.246, 3.242, 3.244, 3.244, 3.249, 3.253, 3.263, 3.267, 3.271, 3.274\n+                ]\n+            }\n+        ],\n+        \"calibrations_Cb\":\n+        [\n+            {\n+                \"ct\": 2960, \"table\":\n+                [\n+                    2.133, 2.134, 2.139, 2.143, 2.148, 2.155, 2.158, 2.158, 2.158, 2.161, 2.161, 2.162, 2.159, 2.156, 2.152, 2.151,\n+                    2.132, 2.133, 2.135, 2.142, 2.147, 2.153, 2.158, 2.158, 2.158, 2.158, 2.159, 2.159, 2.157, 2.154, 2.151, 2.148,\n+                    2.133, 2.133, 2.135, 2.142, 2.149, 2.154, 2.158, 2.158, 2.157, 2.156, 2.158, 2.157, 2.155, 2.153, 2.148, 2.146,\n+                    2.133, 2.133, 2.138, 2.145, 2.149, 2.154, 2.158, 2.159, 2.158, 2.155, 2.157, 2.156, 2.153, 2.149, 2.146, 2.144,\n+                    2.133, 2.134, 2.139, 2.146, 2.149, 2.154, 2.158, 2.159, 2.159, 2.156, 2.154, 2.154, 2.149, 2.145, 2.143, 2.139,\n+                    2.135, 2.135, 2.139, 2.146, 2.151, 2.155, 2.158, 2.159, 2.158, 2.156, 2.153, 2.151, 2.146, 2.143, 2.139, 2.136,\n+         
           2.135, 2.135, 2.138, 2.145, 2.151, 2.154, 2.157, 2.158, 2.157, 2.156, 2.153, 2.151, 2.147, 2.143, 2.141, 2.137,\n+                    2.135, 2.134, 2.135, 2.141, 2.149, 2.154, 2.157, 2.157, 2.157, 2.157, 2.157, 2.153, 2.149, 2.146, 2.142, 2.139,\n+                    2.132, 2.133, 2.135, 2.139, 2.148, 2.153, 2.158, 2.159, 2.159, 2.161, 2.161, 2.157, 2.154, 2.149, 2.144, 2.141,\n+                    2.132, 2.133, 2.135, 2.141, 2.149, 2.155, 2.161, 2.161, 2.162, 2.162, 2.163, 2.159, 2.154, 2.149, 2.144, 2.138,\n+                    2.136, 2.136, 2.137, 2.143, 2.149, 2.156, 2.162, 2.163, 2.162, 2.163, 2.164, 2.161, 2.157, 2.152, 2.146, 2.138,\n+                    2.137, 2.137, 2.141, 2.147, 2.152, 2.157, 2.162, 2.162, 2.159, 2.161, 2.162, 2.162, 2.157, 2.152, 2.148, 2.148\n+                ]\n+            },\n+            {\n+                \"ct\": 4850, \"table\":\n+                [\n+                    1.463, 1.464, 1.471, 1.478, 1.479, 1.483, 1.484, 1.486, 1.486, 1.484, 1.483, 1.481, 1.478, 1.475, 1.471, 1.468,\n+                    1.463, 1.463, 1.468, 1.476, 1.479, 1.482, 1.484, 1.487, 1.486, 1.484, 1.483, 1.482, 1.478, 1.473, 1.469, 1.468,\n+                    1.463, 1.464, 1.468, 1.476, 1.479, 1.483, 1.484, 1.486, 1.486, 1.485, 1.484, 1.482, 1.477, 1.473, 1.469, 1.468,\n+                    1.463, 1.464, 1.469, 1.477, 1.481, 1.483, 1.485, 1.487, 1.487, 1.485, 1.485, 1.482, 1.478, 1.474, 1.469, 1.468,\n+                    1.465, 1.465, 1.471, 1.478, 1.481, 1.484, 1.486, 1.488, 1.488, 1.487, 1.485, 1.482, 1.477, 1.472, 1.468, 1.467,\n+                    1.465, 1.466, 1.472, 1.479, 1.482, 1.485, 1.486, 1.488, 1.488, 1.486, 1.484, 1.479, 1.475, 1.472, 1.468, 1.466,\n+                    1.466, 1.466, 1.472, 1.478, 1.482, 1.484, 1.485, 1.488, 1.487, 1.485, 1.483, 1.479, 1.475, 1.472, 1.469, 1.468,\n+                    1.465, 1.466, 1.469, 1.476, 1.481, 1.485, 1.485, 1.486, 1.486, 1.485, 1.483, 1.479, 1.477, 1.474, 1.471, 1.469,\n+                  
  1.464, 1.465, 1.469, 1.476, 1.481, 1.484, 1.485, 1.487, 1.487, 1.486, 1.485, 1.481, 1.478, 1.475, 1.471, 1.469,\n+                    1.463, 1.464, 1.469, 1.477, 1.481, 1.485, 1.485, 1.488, 1.488, 1.487, 1.486, 1.481, 1.478, 1.475, 1.471, 1.468,\n+                    1.464, 1.465, 1.471, 1.478, 1.482, 1.486, 1.486, 1.488, 1.488, 1.487, 1.486, 1.481, 1.478, 1.475, 1.472, 1.468,\n+                    1.465, 1.466, 1.472, 1.481, 1.483, 1.487, 1.487, 1.488, 1.488, 1.486, 1.485, 1.481, 1.479, 1.476, 1.473, 1.472\n+                ]\n+            },\n+            {\n+                \"ct\": 5930, \"table\":\n+                [\n+                    1.443, 1.444, 1.448, 1.453, 1.459, 1.463, 1.465, 1.467, 1.469, 1.469, 1.467, 1.466, 1.462, 1.457, 1.454, 1.451,\n+                    1.443, 1.444, 1.445, 1.451, 1.459, 1.463, 1.465, 1.467, 1.469, 1.469, 1.467, 1.465, 1.461, 1.456, 1.452, 1.451,\n+                    1.444, 1.444, 1.445, 1.451, 1.459, 1.463, 1.466, 1.468, 1.469, 1.469, 1.467, 1.465, 1.461, 1.456, 1.452, 1.449,\n+                    1.444, 1.444, 1.447, 1.452, 1.459, 1.464, 1.467, 1.469, 1.471, 1.469, 1.467, 1.466, 1.461, 1.456, 1.452, 1.449,\n+                    1.444, 1.445, 1.448, 1.452, 1.459, 1.465, 1.469, 1.471, 1.471, 1.471, 1.468, 1.465, 1.461, 1.455, 1.451, 1.449,\n+                    1.445, 1.446, 1.449, 1.453, 1.461, 1.466, 1.469, 1.471, 1.472, 1.469, 1.467, 1.465, 1.459, 1.455, 1.451, 1.447,\n+                    1.446, 1.446, 1.449, 1.453, 1.461, 1.466, 1.469, 1.469, 1.469, 1.469, 1.467, 1.465, 1.459, 1.455, 1.452, 1.449,\n+                    1.446, 1.446, 1.447, 1.451, 1.459, 1.466, 1.469, 1.469, 1.469, 1.469, 1.467, 1.465, 1.461, 1.457, 1.454, 1.451,\n+                    1.444, 1.444, 1.447, 1.451, 1.459, 1.466, 1.469, 1.469, 1.471, 1.471, 1.468, 1.466, 1.462, 1.458, 1.454, 1.452,\n+                    1.444, 1.444, 1.448, 1.453, 1.459, 1.466, 1.469, 1.471, 1.472, 1.472, 1.468, 1.466, 1.462, 1.458, 1.454, 1.449,\n+                    1.446, 
1.447, 1.449, 1.454, 1.461, 1.466, 1.471, 1.471, 1.471, 1.471, 1.468, 1.466, 1.462, 1.459, 1.455, 1.449,\n+                    1.447, 1.447, 1.452, 1.457, 1.462, 1.468, 1.472, 1.472, 1.471, 1.471, 1.468, 1.466, 1.462, 1.459, 1.456, 1.455\n+                ]\n+            }\n+        ],\n+        \"luminance_lut\":\n+        [\n+            1.548, 1.499, 1.387, 1.289, 1.223, 1.183, 1.164, 1.154, 1.153, 1.169, 1.211, 1.265, 1.345, 1.448, 1.581, 1.619,\n+            1.513, 1.412, 1.307, 1.228, 1.169, 1.129, 1.105, 1.098, 1.103, 1.127, 1.157, 1.209, 1.272, 1.361, 1.481, 1.583,\n+            1.449, 1.365, 1.257, 1.175, 1.124, 1.085, 1.062, 1.054, 1.059, 1.079, 1.113, 1.151, 1.211, 1.293, 1.407, 1.488,\n+            1.424, 1.324, 1.222, 1.139, 1.089, 1.056, 1.034, 1.031, 1.034, 1.049, 1.075, 1.115, 1.164, 1.241, 1.351, 1.446,\n+            1.412, 1.297, 1.203, 1.119, 1.069, 1.039, 1.021, 1.016, 1.022, 1.032, 1.052, 1.086, 1.135, 1.212, 1.321, 1.439,\n+            1.406, 1.287, 1.195, 1.115, 1.059, 1.028, 1.014, 1.012, 1.015, 1.026, 1.041, 1.074, 1.125, 1.201, 1.302, 1.425,\n+            1.406, 1.294, 1.205, 1.126, 1.062, 1.031, 1.013, 1.009, 1.011, 1.019, 1.042, 1.079, 1.129, 1.203, 1.302, 1.435,\n+            1.415, 1.318, 1.229, 1.146, 1.076, 1.039, 1.019, 1.014, 1.017, 1.031, 1.053, 1.093, 1.144, 1.219, 1.314, 1.436,\n+            1.435, 1.348, 1.246, 1.164, 1.094, 1.059, 1.036, 1.032, 1.037, 1.049, 1.072, 1.114, 1.167, 1.257, 1.343, 1.462,\n+            1.471, 1.385, 1.278, 1.189, 1.124, 1.084, 1.064, 1.061, 1.069, 1.078, 1.101, 1.146, 1.207, 1.298, 1.415, 1.496,\n+            1.522, 1.436, 1.323, 1.228, 1.169, 1.118, 1.101, 1.094, 1.099, 1.113, 1.146, 1.194, 1.265, 1.353, 1.474, 1.571,\n+            1.578, 1.506, 1.378, 1.281, 1.211, 1.156, 1.135, 1.134, 1.139, 1.158, 1.194, 1.251, 1.327, 1.427, 1.559, 1.611\n+        ],\n+        \"sigma\": 0.00121,\n+        \"sigma_Cb\": 0.00115\n+    },\n+    \"rpi.contrast\":\n+    {\n+        \"ce_enable\": 1,\n+        
\"gamma_curve\":\n+        [\n+            0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193,\n+            8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168,\n+            16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796,\n+            32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476,\n+            65535, 65535\n+        ]\n+    },\n+    \"rpi.ccm\":\n+    {\n+        \"ccms\":\n+        [\n+            {\n+                \"ct\": 2360, \"ccm\":\n+                [\n+                    1.66078, -0.23588, -0.42491, -0.47456, 1.82763, -0.35307, -0.00545, -1.44729, 2.45273\n+                ]\n+            },\n+            {\n+                \"ct\": 2870, \"ccm\":\n+                [\n+                    1.78373, -0.55344, -0.23029, -0.39951, 1.69701, -0.29751, 0.01986, -1.06525, 2.04539\n+                ]\n+            },\n+            {\n+                \"ct\": 2970, \"ccm\":\n+                [\n+                    1.73511, -0.56973, -0.16537, -0.36338, 1.69878, -0.33539, -0.02354, -0.76813, 1.79168\n+                ]\n+            },\n+            {\n+                \"ct\": 3000, \"ccm\":\n+                [\n+                    2.06374, -0.92218, -0.14156, -0.41721, 1.69289, -0.27568, -0.00554, -0.92741, 1.93295\n+                ]\n+            },\n+            {\n+                \"ct\": 3700, \"ccm\":\n+                [\n+                    2.13792, -1.08136, -0.05655, -0.34739, 1.58989, -0.24249, -0.00349, -0.76789, 1.77138\n+                ]\n+            },\n+            {\n+                \"ct\": 3870, \"ccm\":\n+                [\n+                    1.83834, -0.70528, -0.13307, -0.30499, 1.60523, -0.30024, -0.05701, -0.58313, 1.64014\n+                ]\n+            },\n+            {\n+                \"ct\": 4000, 
\"ccm\":\n+                [\n+                    2.15741, -1.10295, -0.05447, -0.34631, 1.61158, -0.26528, -0.02723, -0.70288, 1.73011\n+                ]\n+            },\n+            {\n+                \"ct\": 4400, \"ccm\":\n+                [\n+                    2.05729, -0.95007, -0.10723, -0.41712, 1.78606, -0.36894, -0.11899, -0.55727, 1.67626\n+                ]\n+            },\n+\n+            {\n+                \"ct\": 4715, \"ccm\":\n+                [\n+                    1.90255, -0.77478, -0.12777, -0.31338, 1.88197, -0.56858, -0.06001, -0.61785, 1.67786\n+                ]\n+            },\n+            {\n+                \"ct\": 5920, \"ccm\":\n+                [\n+                    1.98691, -0.84671, -0.14019, -0.26581, 1.70615, -0.44035, -0.09532, -0.47332, 1.56864\n+                ]\n+            },\n+            {\n+                \"ct\": 9050, \"ccm\":\n+                [\n+                    2.09255, -0.76541, -0.32714, -0.28973, 2.27462, -0.98489, -0.17299, -0.61275, 1.78574\n+                ]\n+            }\n+        ]\n+    },\n+    \"rpi.sharpen\":\n+    {\n+\n+    }\n+}\ndiff --git a/src/ipa/raspberrypi/data/meson.build b/src/ipa/raspberrypi/data/meson.build\nnew file mode 100644\nindex 000000000000..6ff27745ff80\n--- /dev/null\n+++ b/src/ipa/raspberrypi/data/meson.build\n@@ -0,0 +1,9 @@\n+conf_files = files([\n+    'imx219.json',\n+    'imx477.json',\n+    'ov5647.json',\n+    'uncalibrated.json',\n+])\n+\n+install_data(conf_files,\n+             install_dir : join_paths(ipa_data_dir, 'raspberrypi'))\ndiff --git a/src/ipa/raspberrypi/data/ov5647.json b/src/ipa/raspberrypi/data/ov5647.json\nnew file mode 100644\nindex 000000000000..a2469059a2e8\n--- /dev/null\n+++ b/src/ipa/raspberrypi/data/ov5647.json\n@@ -0,0 +1,398 @@\n+{\n+    \"rpi.black_level\":\n+    {\n+        \"black_level\": 1024\n+    },\n+    \"rpi.dpc\":\n+    {\n+\n+    },\n+    \"rpi.lux\":\n+    {\n+        \"reference_shutter_speed\": 21663,\n+        
\"reference_gain\": 1.0,\n+        \"reference_aperture\": 1.0,\n+        \"reference_lux\": 987,\n+        \"reference_Y\": 8961\n+    },\n+    \"rpi.noise\":\n+    {\n+        \"reference_constant\": 0,\n+        \"reference_slope\": 4.25\n+    },\n+    \"rpi.geq\":\n+    {\n+        \"offset\": 401,\n+        \"slope\": 0.05619\n+    },\n+    \"rpi.sdn\":\n+    {\n+\n+    },\n+    \"rpi.awb\":\n+    {\n+        \"priors\":\n+        [\n+            {\n+                \"lux\": 0, \"prior\":\n+                [\n+                    2000, 1.0, 3000, 0.0, 13000, 0.0\n+                ]\n+            },\n+            {\n+                \"lux\": 800, \"prior\":\n+                [\n+                    2000, 0.0, 6000, 2.0, 13000, 2.0\n+                ]\n+            },\n+            {\n+                \"lux\": 1500, \"prior\":\n+                [\n+                    2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0\n+                ]\n+            }\n+        ],\n+        \"modes\":\n+        {\n+            \"auto\":\n+            {\n+                \"lo\": 2500,\n+                \"hi\": 8000\n+            },\n+            \"incandescent\":\n+            {\n+                \"lo\": 2500,\n+                \"hi\": 3000\n+            },\n+            \"tungsten\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 3500\n+            },\n+            \"fluorescent\":\n+            {\n+                \"lo\": 4000,\n+                \"hi\": 4700\n+            },\n+            \"indoor\":\n+            {\n+                \"lo\": 3000,\n+                \"hi\": 5000\n+            },\n+            \"daylight\":\n+            {\n+                \"lo\": 5500,\n+                \"hi\": 6500\n+            },\n+            \"cloudy\":\n+            {\n+                \"lo\": 7000,\n+                \"hi\": 8600\n+            }\n+        },\n+        \"bayes\": 1,\n+        \"ct_curve\":\n+        [\n+            2500.0, 
1.0289, 0.4503, 2803.0, 0.9428, 0.5108, 2914.0, 0.9406, 0.5127, 3605.0, 0.8261, 0.6249, 4540.0, 0.7331, 0.7533, 5699.0,\n+            0.6715, 0.8627, 8625.0, 0.6081, 1.0012\n+        ],\n+        \"sensitivity_r\": 1.05,\n+        \"sensitivity_b\": 1.05,\n+        \"transverse_pos\": 0.0321,\n+        \"transverse_neg\": 0.04313\n+    },\n+    \"rpi.agc\":\n+    {\n+        \"metering_modes\":\n+        {\n+            \"centre-weighted\":\n+            {\n+                \"weights\":\n+                [\n+                    3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0\n+                ]\n+            },\n+            \"spot\":\n+            {\n+                \"weights\":\n+                [\n+                    2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\n+                ]\n+            },\n+            \"matrix\":\n+            {\n+                \"weights\":\n+                [\n+                    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\n+                ]\n+            }\n+        },\n+        \"exposure_modes\":\n+        {\n+            \"normal\":\n+            {\n+                \"shutter\":\n+                [\n+                    100, 10000, 30000, 30000, 30000\n+                ],\n+                \"gain\":\n+                [\n+                    1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            },\n+            \"sport\":\n+            {\n+                \"shutter\":\n+                [\n+                    100, 5000, 10000, 20000, 30000\n+                ],\n+                \"gain\":\n+                [\n+                    1.0, 2.0, 4.0, 6.0, 6.0\n+                ]\n+            }\n+        },\n+        \"constraint_modes\":\n+        {\n+            \"normal\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.5, 1000, 0.5\n+                    ]\n+                }\n+            ],\n+       
     \"highlight\":\n+            [\n+                {\n+                    \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.5, 1000, 0.5\n+                    ]\n+                },\n+                {\n+                    \"bound\": \"UPPER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\":\n+                    [\n+                        0, 0.8, 1000, 0.8\n+                    ]\n+                }\n+            ],\n+\t    \"shadows\":\n+\t    [\n+\t\t{\n+\t\t    \"bound\": \"LOWER\", \"q_lo\": 0.0, \"q_hi\": 0.5, \"y_target\":\n+                    [\n+                        0, 0.17, 1000, 0.17\n+                    ]\n+                }\n+            ]\n+        },\n+        \"y_target\":\n+        [\n+            0, 0.16, 1000, 0.165, 10000, 0.17\n+        ],\n+\t\"base_ev\": 1.25\n+    },\n+    \"rpi.alsc\":\n+    {\n+        \"omega\": 1.3,\n+        \"n_iter\": 100,\n+        \"luminance_strength\": 0.5,\n+        \"calibrations_Cr\":\n+        [\n+            {\n+                \"ct\": 3000, \"table\":\n+                [\n+                    1.105, 1.103, 1.093, 1.083, 1.071, 1.065, 1.065, 1.065, 1.066, 1.069, 1.072, 1.077, 1.084, 1.089, 1.093, 1.093,\n+                    1.103, 1.096, 1.084, 1.072, 1.059, 1.051, 1.047, 1.047, 1.051, 1.053, 1.059, 1.067, 1.075, 1.082, 1.085, 1.086,\n+                    1.096, 1.084, 1.072, 1.059, 1.051, 1.045, 1.039, 1.038, 1.039, 1.045, 1.049, 1.057, 1.063, 1.072, 1.081, 1.082,\n+                    1.092, 1.075, 1.061, 1.052, 1.045, 1.039, 1.036, 1.035, 1.035, 1.039, 1.044, 1.049, 1.056, 1.063, 1.072, 1.081,\n+                    1.092, 1.073, 1.058, 1.048, 1.043, 1.038, 1.035, 1.033, 1.033, 1.035, 1.039, 1.044, 1.051, 1.057, 1.069, 1.078,\n+                    1.091, 1.068, 1.054, 1.045, 1.041, 1.038, 1.035, 1.032, 1.032, 1.032, 1.036, 1.041, 1.045, 1.055, 1.069, 1.078,\n+                    1.091, 1.068, 1.052, 1.043, 1.041, 1.038, 1.035, 
1.032, 1.031, 1.032, 1.034, 1.036, 1.043, 1.055, 1.069, 1.078,\n+                    1.092, 1.068, 1.052, 1.047, 1.042, 1.041, 1.038, 1.035, 1.032, 1.032, 1.035, 1.039, 1.043, 1.055, 1.071, 1.079,\n+                    1.092, 1.073, 1.057, 1.051, 1.047, 1.047, 1.044, 1.041, 1.038, 1.038, 1.039, 1.043, 1.051, 1.059, 1.076, 1.083,\n+                    1.092, 1.081, 1.068, 1.058, 1.056, 1.056, 1.053, 1.052, 1.049, 1.048, 1.048, 1.051, 1.059, 1.066, 1.083, 1.085,\n+                    1.091, 1.087, 1.081, 1.068, 1.065, 1.064, 1.062, 1.062, 1.061, 1.056, 1.056, 1.056, 1.064, 1.069, 1.084, 1.089,\n+                    1.091, 1.089, 1.085, 1.079, 1.069, 1.068, 1.067, 1.067, 1.067, 1.063, 1.061, 1.063, 1.068, 1.069, 1.081, 1.092\n+                ]\n+            },\n+            {\n+                \"ct\": 5000, \"table\":\n+                [\n+                    1.486, 1.484, 1.468, 1.449, 1.427, 1.403, 1.399, 1.399, 1.399, 1.404, 1.413, 1.433, 1.454, 1.473, 1.482, 1.488,\n+                    1.484, 1.472, 1.454, 1.431, 1.405, 1.381, 1.365, 1.365, 1.367, 1.373, 1.392, 1.411, 1.438, 1.458, 1.476, 1.481,\n+                    1.476, 1.458, 1.433, 1.405, 1.381, 1.361, 1.339, 1.334, 1.334, 1.346, 1.362, 1.391, 1.411, 1.438, 1.462, 1.474,\n+                    1.471, 1.443, 1.417, 1.388, 1.361, 1.339, 1.321, 1.313, 1.313, 1.327, 1.346, 1.362, 1.391, 1.422, 1.453, 1.473,\n+                    1.469, 1.439, 1.408, 1.377, 1.349, 1.321, 1.312, 1.299, 1.299, 1.311, 1.327, 1.348, 1.378, 1.415, 1.446, 1.468,\n+                    1.468, 1.434, 1.402, 1.371, 1.341, 1.316, 1.299, 1.296, 1.295, 1.299, 1.314, 1.338, 1.371, 1.408, 1.441, 1.466,\n+                    1.468, 1.434, 1.401, 1.371, 1.341, 1.316, 1.301, 1.296, 1.295, 1.297, 1.314, 1.338, 1.369, 1.408, 1.441, 1.465,\n+                    1.469, 1.436, 1.401, 1.374, 1.348, 1.332, 1.315, 1.301, 1.301, 1.313, 1.324, 1.342, 1.372, 1.409, 1.442, 1.465,\n+                    1.471, 1.444, 1.413, 1.388, 1.371, 1.348, 1.332, 1.323, 
1.323, 1.324, 1.342, 1.362, 1.386, 1.418, 1.449, 1.467,\n+                    1.473, 1.454, 1.431, 1.407, 1.388, 1.371, 1.359, 1.352, 1.351, 1.351, 1.362, 1.383, 1.404, 1.433, 1.462, 1.472,\n+                    1.474, 1.461, 1.447, 1.424, 1.407, 1.394, 1.385, 1.381, 1.379, 1.381, 1.383, 1.401, 1.419, 1.444, 1.466, 1.481,\n+                    1.474, 1.464, 1.455, 1.442, 1.421, 1.408, 1.403, 1.403, 1.403, 1.399, 1.402, 1.415, 1.432, 1.446, 1.467, 1.483\n+                ]\n+            },\n+            {\n+                \"ct\": 6500, \"table\":\n+                [\n+                    1.567, 1.565, 1.555, 1.541, 1.525, 1.518, 1.518, 1.518, 1.521, 1.527, 1.532, 1.541, 1.551, 1.559, 1.567, 1.569,\n+                    1.565, 1.557, 1.542, 1.527, 1.519, 1.515, 1.511, 1.516, 1.519, 1.524, 1.528, 1.533, 1.542, 1.553, 1.559, 1.562,\n+                    1.561, 1.546, 1.532, 1.521, 1.518, 1.515, 1.511, 1.516, 1.519, 1.524, 1.528, 1.529, 1.533, 1.542, 1.554, 1.559,\n+                    1.561, 1.539, 1.526, 1.524, 1.521, 1.521, 1.522, 1.524, 1.525, 1.531, 1.529, 1.529, 1.531, 1.538, 1.549, 1.558,\n+                    1.559, 1.538, 1.526, 1.525, 1.524, 1.528, 1.534, 1.536, 1.536, 1.536, 1.532, 1.529, 1.531, 1.537, 1.548, 1.556,\n+                    1.561, 1.537, 1.525, 1.524, 1.526, 1.532, 1.537, 1.539, 1.538, 1.537, 1.532, 1.529, 1.529, 1.537, 1.546, 1.556,\n+                    1.561, 1.536, 1.524, 1.522, 1.525, 1.532, 1.538, 1.538, 1.537, 1.533, 1.528, 1.526, 1.527, 1.536, 1.546, 1.555,\n+                    1.561, 1.537, 1.522, 1.521, 1.524, 1.531, 1.536, 1.537, 1.534, 1.529, 1.526, 1.522, 1.523, 1.534, 1.547, 1.555,\n+                    1.561, 1.538, 1.524, 1.522, 1.526, 1.531, 1.535, 1.535, 1.534, 1.527, 1.524, 1.522, 1.522, 1.535, 1.549, 1.556,\n+                    1.558, 1.543, 1.532, 1.526, 1.526, 1.529, 1.534, 1.535, 1.533, 1.526, 1.523, 1.522, 1.524, 1.537, 1.552, 1.557,\n+                    1.555, 1.546, 1.541, 1.528, 1.527, 1.528, 1.531, 1.533, 1.531, 
1.527, 1.522, 1.522, 1.526, 1.536, 1.552, 1.561,\n+                    1.555, 1.547, 1.542, 1.538, 1.526, 1.526, 1.529, 1.531, 1.529, 1.528, 1.519, 1.519, 1.527, 1.531, 1.543, 1.561\n+                ]\n+            }\n+        ],\n+        \"calibrations_Cb\":\n+        [\n+            {\n+                \"ct\": 3000, \"table\":\n+                [\n+                    1.684, 1.688, 1.691, 1.697, 1.709, 1.722, 1.735, 1.745, 1.747, 1.745, 1.731, 1.719, 1.709, 1.705, 1.699, 1.699,\n+                    1.684, 1.689, 1.694, 1.708, 1.721, 1.735, 1.747, 1.762, 1.762, 1.758, 1.745, 1.727, 1.716, 1.707, 1.701, 1.699,\n+                    1.684, 1.691, 1.704, 1.719, 1.734, 1.755, 1.772, 1.786, 1.789, 1.788, 1.762, 1.745, 1.724, 1.709, 1.702, 1.698,\n+                    1.682, 1.694, 1.709, 1.729, 1.755, 1.773, 1.798, 1.815, 1.817, 1.808, 1.788, 1.762, 1.733, 1.714, 1.704, 1.699,\n+                    1.682, 1.693, 1.713, 1.742, 1.772, 1.798, 1.815, 1.829, 1.831, 1.821, 1.807, 1.773, 1.742, 1.716, 1.703, 1.699,\n+                    1.681, 1.693, 1.713, 1.742, 1.772, 1.799, 1.828, 1.839, 1.839, 1.828, 1.807, 1.774, 1.742, 1.715, 1.699, 1.695,\n+                    1.679, 1.691, 1.712, 1.739, 1.771, 1.798, 1.825, 1.829, 1.831, 1.818, 1.801, 1.774, 1.738, 1.712, 1.695, 1.691,\n+                    1.676, 1.685, 1.703, 1.727, 1.761, 1.784, 1.801, 1.817, 1.817, 1.801, 1.779, 1.761, 1.729, 1.706, 1.691, 1.684,\n+                    1.669, 1.678, 1.692, 1.714, 1.741, 1.764, 1.784, 1.795, 1.795, 1.779, 1.761, 1.738, 1.713, 1.696, 1.683, 1.679,\n+                    1.664, 1.671, 1.679, 1.693, 1.716, 1.741, 1.762, 1.769, 1.769, 1.753, 1.738, 1.713, 1.701, 1.687, 1.681, 1.676,\n+                    1.661, 1.664, 1.671, 1.679, 1.693, 1.714, 1.732, 1.739, 1.739, 1.729, 1.708, 1.701, 1.685, 1.679, 1.676, 1.677,\n+                    1.659, 1.661, 1.664, 1.671, 1.679, 1.693, 1.712, 1.714, 1.714, 1.708, 1.701, 1.687, 1.679, 1.672, 1.673, 1.677\n+                ]\n+            },\n+ 
           {\n+                \"ct\": 5000, \"table\":\n+                [\n+                    1.177, 1.183, 1.187, 1.191, 1.197, 1.206, 1.213, 1.215, 1.215, 1.215, 1.211, 1.204, 1.196, 1.191, 1.183, 1.182,\n+                    1.179, 1.185, 1.191, 1.196, 1.206, 1.217, 1.224, 1.229, 1.229, 1.226, 1.221, 1.212, 1.202, 1.195, 1.188, 1.182,\n+                    1.183, 1.191, 1.196, 1.206, 1.217, 1.229, 1.239, 1.245, 1.245, 1.245, 1.233, 1.221, 1.212, 1.199, 1.193, 1.187,\n+                    1.183, 1.192, 1.201, 1.212, 1.229, 1.241, 1.252, 1.259, 1.259, 1.257, 1.245, 1.233, 1.217, 1.201, 1.194, 1.192,\n+                    1.183, 1.192, 1.202, 1.219, 1.238, 1.252, 1.261, 1.269, 1.268, 1.261, 1.257, 1.241, 1.223, 1.204, 1.194, 1.191,\n+                    1.182, 1.192, 1.202, 1.219, 1.239, 1.255, 1.266, 1.271, 1.271, 1.265, 1.258, 1.242, 1.223, 1.205, 1.192, 1.191,\n+                    1.181, 1.189, 1.199, 1.218, 1.239, 1.254, 1.262, 1.268, 1.268, 1.258, 1.253, 1.241, 1.221, 1.204, 1.191, 1.187,\n+                    1.179, 1.184, 1.193, 1.211, 1.232, 1.243, 1.254, 1.257, 1.256, 1.253, 1.242, 1.232, 1.216, 1.199, 1.187, 1.183,\n+                    1.174, 1.179, 1.187, 1.202, 1.218, 1.232, 1.243, 1.246, 1.246, 1.239, 1.232, 1.218, 1.207, 1.191, 1.183, 1.179,\n+                    1.169, 1.175, 1.181, 1.189, 1.202, 1.218, 1.229, 1.232, 1.232, 1.224, 1.218, 1.207, 1.199, 1.185, 1.181, 1.174,\n+                    1.164, 1.168, 1.175, 1.179, 1.189, 1.201, 1.209, 1.213, 1.213, 1.209, 1.201, 1.198, 1.186, 1.181, 1.174, 1.173,\n+                    1.161, 1.166, 1.171, 1.175, 1.179, 1.189, 1.197, 1.198, 1.198, 1.197, 1.196, 1.186, 1.182, 1.175, 1.173, 1.173\n+                ]\n+            },\n+            {\n+                \"ct\": 6500, \"table\":\n+                [\n+                    1.166, 1.171, 1.173, 1.178, 1.187, 1.193, 1.201, 1.205, 1.205, 1.205, 1.199, 1.191, 1.184, 1.179, 1.174, 1.171,\n+                    1.166, 1.172, 1.176, 1.184, 1.195, 1.202, 
1.209, 1.216, 1.216, 1.213, 1.208, 1.201, 1.189, 1.182, 1.176, 1.171,\n+                    1.166, 1.173, 1.183, 1.195, 1.202, 1.214, 1.221, 1.228, 1.229, 1.228, 1.221, 1.209, 1.201, 1.186, 1.179, 1.174,\n+                    1.165, 1.174, 1.187, 1.201, 1.214, 1.223, 1.235, 1.241, 1.242, 1.241, 1.229, 1.221, 1.205, 1.188, 1.181, 1.177,\n+                    1.165, 1.174, 1.189, 1.207, 1.223, 1.235, 1.242, 1.253, 1.252, 1.245, 1.241, 1.228, 1.211, 1.189, 1.181, 1.178,\n+                    1.164, 1.173, 1.189, 1.207, 1.224, 1.238, 1.249, 1.255, 1.255, 1.249, 1.242, 1.228, 1.211, 1.191, 1.179, 1.176,\n+                    1.163, 1.172, 1.187, 1.207, 1.223, 1.237, 1.245, 1.253, 1.252, 1.243, 1.237, 1.228, 1.207, 1.188, 1.176, 1.173,\n+                    1.159, 1.167, 1.179, 1.199, 1.217, 1.227, 1.237, 1.241, 1.241, 1.237, 1.228, 1.217, 1.201, 1.184, 1.174, 1.169,\n+                    1.156, 1.164, 1.172, 1.189, 1.205, 1.217, 1.226, 1.229, 1.229, 1.222, 1.217, 1.204, 1.192, 1.177, 1.171, 1.166,\n+                    1.154, 1.159, 1.166, 1.177, 1.189, 1.205, 1.213, 1.216, 1.216, 1.209, 1.204, 1.192, 1.183, 1.172, 1.168, 1.162,\n+                    1.152, 1.155, 1.161, 1.166, 1.177, 1.188, 1.195, 1.198, 1.199, 1.196, 1.187, 1.183, 1.173, 1.168, 1.163, 1.162,\n+                    1.151, 1.154, 1.158, 1.162, 1.168, 1.177, 1.183, 1.184, 1.184, 1.184, 1.182, 1.172, 1.168, 1.165, 1.162, 1.161\n+                ]\n+            }\n+        ],\n+        \"luminance_lut\":\n+        [\n+            2.236, 2.111, 1.912, 1.741, 1.579, 1.451, 1.379, 1.349, 1.349, 1.361, 1.411, 1.505, 1.644, 1.816, 2.034, 2.159,\n+            2.139, 1.994, 1.796, 1.625, 1.467, 1.361, 1.285, 1.248, 1.239, 1.265, 1.321, 1.408, 1.536, 1.703, 1.903, 2.087,\n+            2.047, 1.898, 1.694, 1.511, 1.373, 1.254, 1.186, 1.152, 1.142, 1.166, 1.226, 1.309, 1.441, 1.598, 1.799, 1.978,\n+            1.999, 1.824, 1.615, 1.429, 1.281, 1.179, 1.113, 1.077, 1.071, 1.096, 1.153, 1.239, 1.357, 1.525, 1.726, 
1.915,\n+            1.976, 1.773, 1.563, 1.374, 1.222, 1.119, 1.064, 1.032, 1.031, 1.049, 1.099, 1.188, 1.309, 1.478, 1.681, 1.893,\n+            1.973, 1.756, 1.542, 1.351, 1.196, 1.088, 1.028, 1.011, 1.004, 1.029, 1.077, 1.169, 1.295, 1.459, 1.663, 1.891,\n+            1.973, 1.761, 1.541, 1.349, 1.193, 1.087, 1.031, 1.006, 1.006, 1.023, 1.075, 1.169, 1.298, 1.463, 1.667, 1.891,\n+            1.982, 1.789, 1.568, 1.373, 1.213, 1.111, 1.051, 1.029, 1.024, 1.053, 1.106, 1.199, 1.329, 1.495, 1.692, 1.903,\n+            2.015, 1.838, 1.621, 1.426, 1.268, 1.159, 1.101, 1.066, 1.068, 1.099, 1.166, 1.259, 1.387, 1.553, 1.751, 1.937,\n+            2.076, 1.911, 1.692, 1.507, 1.346, 1.236, 1.169, 1.136, 1.139, 1.174, 1.242, 1.349, 1.475, 1.641, 1.833, 2.004,\n+            2.193, 2.011, 1.798, 1.604, 1.444, 1.339, 1.265, 1.235, 1.237, 1.273, 1.351, 1.461, 1.598, 1.758, 1.956, 2.125,\n+            2.263, 2.154, 1.916, 1.711, 1.549, 1.432, 1.372, 1.356, 1.356, 1.383, 1.455, 1.578, 1.726, 1.914, 2.119, 2.211\n+        ],\n+        \"sigma\": 0.006,\n+        \"sigma_Cb\": 0.00208\n+    },\n+    \"rpi.contrast\":\n+    {\n+        \"ce_enable\": 1,\n+        \"gamma_curve\":\n+        [\n+            0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193,\n+            8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168,\n+            16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796,\n+            32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476,\n+            65535, 65535\n+        ]\n+    },\n+    \"rpi.ccm\":\n+    {\n+        \"ccms\":\n+        [\n+            {\n+                \"ct\": 2500, \"ccm\":\n+                [\n+                    1.70741, -0.05307, -0.65433, -0.62822, 1.68836, -0.06014, -0.04452, -1.87628, 2.92079\n+                ]\n+      
      },\n+            {\n+                \"ct\": 2803, \"ccm\":\n+                [\n+                    1.74383, -0.18731, -0.55652, -0.56491, 1.67772, -0.11281, -0.01522, -1.60635, 2.62157\n+                ]\n+            },\n+            {\n+                \"ct\": 2912, \"ccm\":\n+                [\n+                    1.75215, -0.22221, -0.52995, -0.54568, 1.63522, -0.08954, 0.02633, -1.56997, 2.54364\n+                ]\n+            },\n+            {\n+                \"ct\": 2914, \"ccm\":\n+                [\n+                    1.72423, -0.28939, -0.43484, -0.55188, 1.62925, -0.07737, 0.01959, -1.28661, 2.26702\n+                ]\n+            },\n+            {\n+                \"ct\": 3605, \"ccm\":\n+                [\n+                    1.80381, -0.43646, -0.36735, -0.46505, 1.56814, -0.10309, 0.00929, -1.00424, 1.99495\n+                ]\n+            },\n+            {\n+                \"ct\": 4540, \"ccm\":\n+                [\n+                    1.85263, -0.46545, -0.38719, -0.44136, 1.68443, -0.24307, 0.04108, -0.85599, 1.81491\n+                ]\n+            },\n+            {\n+                \"ct\": 5699, \"ccm\":\n+                [\n+                    1.98595, -0.63542, -0.35054, -0.34623, 1.54146, -0.19522, 0.00411, -0.70936, 1.70525\n+                ]\n+            },\n+            {\n+                \"ct\": 8625, \"ccm\":\n+                [\n+                    2.21637, -0.56663, -0.64974, -0.41133, 1.96625, -0.55492, -0.02307, -0.83529, 1.85837\n+                ]\n+            }\n+        ]\n+    },\n+    \"rpi.sharpen\":\n+    {\n+\n+    }\n+}\ndiff --git a/src/ipa/raspberrypi/data/uncalibrated.json b/src/ipa/raspberrypi/data/uncalibrated.json\nnew file mode 100644\nindex 000000000000..16a01e940b42\n--- /dev/null\n+++ b/src/ipa/raspberrypi/data/uncalibrated.json\n@@ -0,0 +1,82 @@\n+{\n+    \"rpi.black_level\":\n+    {\n+        \"black_level\": 4096\n+    },\n+    \"rpi.awb\":\n+    {\n+        
\"use_derivatives\": 0,\n+        \"bayes\": 0\n+    },\n+    \"rpi.agc\":\n+    {\n+        \"metering_modes\":\n+        {\n+            \"centre-weighted\": {\n+                \"weights\": [4, 4, 4, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0]\n+            }\n+        },\n+        \"exposure_modes\":\n+        {\n+            \"normal\":\n+            {\n+                \"shutter\": [ 100, 15000, 30000, 60000, 120000 ],\n+                \"gain\":    [ 1.0, 2.0,   3.0,   4.0,   6.0    ]\n+            }\n+        },\n+        \"constraint_modes\":\n+        {\n+            \"normal\":\n+            [\n+                { \"bound\": \"LOWER\", \"q_lo\": 0.98, \"q_hi\": 1.0, \"y_target\": [ 0, 0.4, 1000, 0.4 ] }\n+            ]\n+        },\n+        \"y_target\": [ 0, 0.16, 1000, 0.165, 10000, 0.17 ]\n+    },\n+    \"rpi.ccm\":\n+    {\n+        \"ccms\":\n+        [\n+            { \"ct\": 4000, \"ccm\": [ 2.0, -1.0, 0.0,   -0.5, 2.0, -0.5,   0, -1.0, 2.0 ] }\n+        ]\n+    },\n+    \"rpi.contrast\":\n+    {\n+        \"ce_enable\": 0,\n+        \"gamma_curve\": [\n+            0,     0,\n+            1024,  5040,\n+            2048,  9338,\n+            3072,  12356,\n+            4096,  15312,\n+            5120,  18051,\n+            6144,  20790,\n+            7168,  23193,\n+            8192,  25744,\n+            9216,  27942,\n+            10240, 30035,\n+            11264, 32005,\n+            12288, 33975,\n+            13312, 35815,\n+            14336, 37600,\n+            15360, 39168,\n+            16384, 40642,\n+            18432, 43379,\n+            20480, 45749,\n+            22528, 47753,\n+            24576, 49621,\n+            26624, 51253,\n+            28672, 52698,\n+            30720, 53796,\n+            32768, 54876,\n+            36864, 57012,\n+            40960, 58656,\n+            45056, 59954,\n+            49152, 61183,\n+            53248, 62355,\n+            57344, 63419,\n+            61440, 64476,\n+            65535, 65535\n+  
      ]\n+    }\n+}\ndiff --git a/src/ipa/raspberrypi/md_parser.cpp b/src/ipa/raspberrypi/md_parser.cpp\nnew file mode 100644\nindex 000000000000..ca809aa266e4\n--- /dev/null\n+++ b/src/ipa/raspberrypi/md_parser.cpp\n@@ -0,0 +1,101 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * md_parser.cpp - image sensor metadata parsers\n+ */\n+\n+#include <assert.h>\n+#include <map>\n+#include <string.h>\n+\n+#include \"md_parser.hpp\"\n+\n+using namespace RPi;\n+\n+// This function goes through the embedded data to find the offsets (not\n+// values!), in the data block, where the values of the given registers can\n+// subsequently be found.\n+\n+// Embedded data tag bytes, from Sony IMX219 datasheet but general to all SMIA\n+// sensors, I think.\n+\n+#define LINE_START 0x0a\n+#define LINE_END_TAG 0x07\n+#define REG_HI_BITS 0xaa\n+#define REG_LOW_BITS 0xa5\n+#define REG_VALUE 0x5a\n+#define REG_SKIP 0x55\n+\n+MdParserSmia::ParseStatus MdParserSmia::findRegs(unsigned char *data,\n+\t\t\t\t\t\t uint32_t regs[], int offsets[],\n+\t\t\t\t\t\t unsigned int num_regs)\n+{\n+\tassert(num_regs > 0);\n+\tif (data[0] != LINE_START)\n+\t\treturn NO_LINE_START;\n+\n+\tunsigned int current_offset = 1; // after the LINE_START\n+\tunsigned int current_line_start = 0, current_line = 0;\n+\tunsigned int reg_num = 0, first_reg = 0;\n+\tParseStatus retcode = PARSE_OK;\n+\twhile (1) {\n+\t\tint tag = data[current_offset++];\n+\t\tif ((bits_per_pixel_ == 10 &&\n+\t\t     (current_offset + 1 - current_line_start) % 5 == 0) ||\n+\t\t    (bits_per_pixel_ == 12 &&\n+\t\t     (current_offset + 1 - current_line_start) % 3 == 0)) {\n+\t\t\tif (data[current_offset++] != REG_SKIP)\n+\t\t\t\treturn BAD_DUMMY;\n+\t\t}\n+\t\tint data_byte = data[current_offset++];\n+\t\t//printf(\"Offset %u, tag 0x%02x data_byte 0x%02x\\n\", current_offset-1, tag, data_byte);\n+\t\tif (tag == LINE_END_TAG) {\n+\t\t\tif (data_byte != 
LINE_END_TAG)\n+\t\t\t\treturn BAD_LINE_END;\n+\t\t\tif (num_lines_ && ++current_line == num_lines_)\n+\t\t\t\treturn MISSING_REGS;\n+\t\t\tif (line_length_bytes_) {\n+\t\t\t\tcurrent_offset =\n+\t\t\t\t\tcurrent_line_start + line_length_bytes_;\n+\t\t\t\t// Require whole line to be in the buffer (if buffer size set).\n+\t\t\t\tif (buffer_size_bytes_ &&\n+\t\t\t\t    current_offset + line_length_bytes_ >\n+\t\t\t\t\t    buffer_size_bytes_)\n+\t\t\t\t\treturn MISSING_REGS;\n+\t\t\t\tif (data[current_offset] != LINE_START)\n+\t\t\t\t\treturn NO_LINE_START;\n+\t\t\t} else {\n+\t\t\t\t// allow a zero line length to mean \"hunt for the next line\"\n+\t\t\t\twhile (data[current_offset] != LINE_START &&\n+\t\t\t\t       current_offset < buffer_size_bytes_)\n+\t\t\t\t\tcurrent_offset++;\n+\t\t\t\tif (current_offset == buffer_size_bytes_)\n+\t\t\t\t\treturn NO_LINE_START;\n+\t\t\t}\n+\t\t\t// inc current_offset to after LINE_START\n+\t\t\tcurrent_line_start =\n+\t\t\t\tcurrent_offset++;\n+\t\t} else {\n+\t\t\tif (tag == REG_HI_BITS)\n+\t\t\t\treg_num = (reg_num & 0xff) | (data_byte << 8);\n+\t\t\telse if (tag == REG_LOW_BITS)\n+\t\t\t\treg_num = (reg_num & 0xff00) | data_byte;\n+\t\t\telse if (tag == REG_SKIP)\n+\t\t\t\treg_num++;\n+\t\t\telse if (tag == REG_VALUE) {\n+\t\t\t\twhile (reg_num >=\n+\t\t\t\t       // assumes registers are in order...\n+\t\t\t\t       regs[first_reg]) {\n+\t\t\t\t\tif (reg_num == regs[first_reg])\n+\t\t\t\t\t\toffsets[first_reg] =\n+\t\t\t\t\t\t\tcurrent_offset - 1;\n+\t\t\t\t\tif (++first_reg == num_regs)\n+\t\t\t\t\t\treturn retcode;\n+\t\t\t\t}\n+\t\t\t\treg_num++;\n+\t\t\t} else\n+\t\t\t\treturn ILLEGAL_TAG;\n+\t\t}\n+\t}\n+}\ndiff --git a/src/ipa/raspberrypi/md_parser.hpp b/src/ipa/raspberrypi/md_parser.hpp\nnew file mode 100644\nindex 000000000000..70d054b2c013\n--- /dev/null\n+++ b/src/ipa/raspberrypi/md_parser.hpp\n@@ -0,0 +1,123 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) 
Limited\n+ *\n+ * md_parser.hpp - image sensor metadata parser interface\n+ */\n+#pragma once\n+\n+#include <stdint.h>\n+\n+/* Camera metadata parser class. Usage as shown below.\n+\n+Setup:\n+\n+Usually the metadata parser will be made as part of the CamHelper class so\n+application code doesn't have to worry which kind to instantiate. But for\n+the sake of example let's suppose we're parsing imx219 metadata.\n+\n+MdParser *parser = new MdParserImx219();  // for example\n+parser->SetBitsPerPixel(bpp);\n+parser->SetLineLengthBytes(pitch);\n+parser->SetNumLines(2);\n+\n+Note 1: if you don't know how many lines there are, you can use SetBufferSize\n+instead to limit the total buffer size.\n+\n+Note 2: if you don't know the line length, you can leave the line length unset\n+(or set to zero) and the parser will hunt for the line start instead. In this\n+case SetBufferSize *must* be used so that the parser won't run off the end of\n+the buffer.\n+\n+Then on every frame:\n+\n+if (parser->Parse(data) != MdParser::OK)\n+    much badness;\n+unsigned int exposure_lines, gain_code;\n+if (parser->GetExposureLines(exposure_lines) != MdParser::OK)\n+    exposure was not found;\n+if (parser->GetGainCode(gain_code) != MdParser::OK)\n+    gain code was not found;\n+\n+(Note that the CamHelper class converts to/from exposure lines and time,\n+and gain_code / actual gain.)\n+\n+If you suspect your embedded data may have changed its layout, change any line\n+lengths, number of lines, bits per pixel etc. that are different, and\n+then:\n+\n+parser->Reset();\n+\n+before calling Parse again. 
*/\n+\n+namespace RPi {\n+\n+// Abstract base class from which other metadata parsers are derived.\n+\n+class MdParser\n+{\n+public:\n+\t// Parser status codes:\n+\t// OK       - success\n+\t// NOTFOUND - value such as exposure or gain was not found\n+\t// ERROR    - all other errors\n+\tenum Status {\n+\t\tOK = 0,\n+\t\tNOTFOUND = 1,\n+\t\tERROR = 2\n+\t};\n+\tMdParser() : reset_(true) {}\n+\tvirtual ~MdParser() {}\n+\tvoid Reset() { reset_ = true; }\n+\tvoid SetBitsPerPixel(int bpp) { bits_per_pixel_ = bpp; }\n+\tvoid SetNumLines(unsigned int num_lines) { num_lines_ = num_lines; }\n+\tvoid SetLineLengthBytes(unsigned int num_bytes)\n+\t{\n+\t\tline_length_bytes_ = num_bytes;\n+\t}\n+\tvoid SetBufferSize(unsigned int num_bytes)\n+\t{\n+\t\tbuffer_size_bytes_ = num_bytes;\n+\t}\n+\tvirtual Status Parse(void *data) = 0;\n+\tvirtual Status GetExposureLines(unsigned int &lines) = 0;\n+\tvirtual Status GetGainCode(unsigned int &gain_code) = 0;\n+\n+protected:\n+\tbool reset_;\n+\tint bits_per_pixel_;\n+\tunsigned int num_lines_;\n+\tunsigned int line_length_bytes_;\n+\tunsigned int buffer_size_bytes_;\n+};\n+\n+// This isn't a full implementation of a metadata parser for SMIA sensors,\n+// however, it does provide the findRegs method which will prove useful and make\n+// it easier to implement parsers for other SMIA-like sensors (see\n+// md_parser_imx219.cpp for an example).\n+\n+class MdParserSmia : public MdParser\n+{\n+public:\n+\tMdParserSmia() : MdParser() {}\n+\n+protected:\n+\t// Note that error codes > 0 are regarded as non-fatal; codes < 0\n+\t// indicate a bad data buffer. 
Status codes are:\n+\t// PARSE_OK     - found all registers, much happiness\n+\t// MISSING_REGS - some registers found; should this be a hard error?\n+\t// The remaining codes are all hard errors.\n+\tenum ParseStatus {\n+\t\tPARSE_OK      =  0,\n+\t\tMISSING_REGS  =  1,\n+\t\tNO_LINE_START = -1,\n+\t\tILLEGAL_TAG   = -2,\n+\t\tBAD_DUMMY     = -3,\n+\t\tBAD_LINE_END  = -4,\n+\t\tBAD_PADDING   = -5\n+\t};\n+\tParseStatus findRegs(unsigned char *data, uint32_t regs[],\n+\t\t\t     int offsets[], unsigned int num_regs);\n+};\n+\n+} // namespace RPi\ndiff --git a/src/ipa/raspberrypi/md_parser_rpi.cpp b/src/ipa/raspberrypi/md_parser_rpi.cpp\nnew file mode 100644\nindex 000000000000..a42b28f775b4\n--- /dev/null\n+++ b/src/ipa/raspberrypi/md_parser_rpi.cpp\n@@ -0,0 +1,37 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2020, Raspberry Pi (Trading) Limited\n+ *\n+ * md_parser_rpi.cpp - Metadata parser for generic Raspberry Pi metadata\n+ */\n+\n+#include <string.h>\n+\n+#include \"md_parser_rpi.hpp\"\n+\n+using namespace RPi;\n+\n+MdParserRPi::MdParserRPi()\n+{\n+}\n+\n+MdParser::Status MdParserRPi::Parse(void *data)\n+{\n+\tif (buffer_size_bytes_ < sizeof(rpiMetadata))\n+\t\treturn ERROR;\n+\n+\tmemcpy(&metadata, data, sizeof(rpiMetadata));\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserRPi::GetExposureLines(unsigned int &lines)\n+{\n+\tlines = metadata.exposure;\n+\treturn OK;\n+}\n+\n+MdParser::Status MdParserRPi::GetGainCode(unsigned int &gain_code)\n+{\n+\tgain_code = metadata.gain;\n+\treturn OK;\n+}\ndiff --git a/src/ipa/raspberrypi/md_parser_rpi.hpp b/src/ipa/raspberrypi/md_parser_rpi.hpp\nnew file mode 100644\nindex 000000000000..1fa334f45bd1\n--- /dev/null\n+++ b/src/ipa/raspberrypi/md_parser_rpi.hpp\n@@ -0,0 +1,32 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019, Raspberry Pi (Trading) Limited\n+ *\n+ * md_parser_rpi.hpp - Raspberry Pi metadata parser interface\n+ */\n+#pragma once\n+\n+#include 
\"md_parser.hpp\"\n+\n+namespace RPi {\n+\n+class MdParserRPi : public MdParser\n+{\n+public:\n+\tMdParserRPi();\n+\tStatus Parse(void *data) override;\n+\tStatus GetExposureLines(unsigned int &lines) override;\n+\tStatus GetGainCode(unsigned int &gain_code) override;\n+\n+private:\n+\t// This must be the same struct that is filled into the metadata buffer\n+\t// in the pipeline handler.\n+\tstruct rpiMetadata\n+\t{\n+\t\tuint32_t exposure;\n+\t\tuint32_t gain;\n+\t};\n+\trpiMetadata metadata;\n+};\n+\n+}\ndiff --git a/src/ipa/raspberrypi/meson.build b/src/ipa/raspberrypi/meson.build\nnew file mode 100644\nindex 000000000000..2dece3a468e8\n--- /dev/null\n+++ b/src/ipa/raspberrypi/meson.build\n@@ -0,0 +1,59 @@\n+ipa_name = 'ipa_rpi'\n+\n+rpi_ipa_deps = [\n+    libcamera_dep,\n+    dependency('boost'),\n+    libatomic,\n+]\n+\n+rpi_ipa_includes = [\n+    ipa_includes,\n+    libipa_includes,\n+    include_directories('controller')\n+]\n+\n+rpi_ipa_sources = files([\n+    'raspberrypi.cpp',\n+    'md_parser.cpp',\n+    'md_parser_rpi.cpp',\n+    'cam_helper.cpp',\n+    'cam_helper_ov5647.cpp',\n+    'cam_helper_imx219.cpp',\n+    'cam_helper_imx477.cpp',\n+    'controller/controller.cpp',\n+    'controller/histogram.cpp',\n+    'controller/algorithm.cpp',\n+    'controller/rpi/alsc.cpp',\n+    'controller/rpi/awb.cpp',\n+    'controller/rpi/sharpen.cpp',\n+    'controller/rpi/black_level.cpp',\n+    'controller/rpi/geq.cpp',\n+    'controller/rpi/noise.cpp',\n+    'controller/rpi/lux.cpp',\n+    'controller/rpi/agc.cpp',\n+    'controller/rpi/dpc.cpp',\n+    'controller/rpi/ccm.cpp',\n+    'controller/rpi/contrast.cpp',\n+    'controller/rpi/sdn.cpp',\n+    'controller/pwl.cpp',\n+])\n+\n+mod = shared_module(ipa_name,\n+                    rpi_ipa_sources,\n+                    name_prefix : '',\n+                    include_directories : rpi_ipa_includes,\n+                    dependencies : rpi_ipa_deps,\n+                    link_with : libipa,\n+                    
install : true,\n+                    install_dir : ipa_install_dir)\n+\n+if ipa_sign_module\n+    custom_target(ipa_name + '.so.sign',\n+                  input : mod,\n+                  output : ipa_name + '.so.sign',\n+                  command : [ ipa_sign, ipa_priv_key, '@INPUT@', '@OUTPUT@' ],\n+                  install : false,\n+                  build_by_default : true)\n+endif\n+\n+subdir('data')\ndiff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp\nnew file mode 100644\nindex 000000000000..80dc77a3b1ae\n--- /dev/null\n+++ b/src/ipa/raspberrypi/raspberrypi.cpp\n@@ -0,0 +1,1081 @@\n+/* SPDX-License-Identifier: BSD-2-Clause */\n+/*\n+ * Copyright (C) 2019-2020, Raspberry Pi (Trading) Ltd.\n+ *\n+ * rpi.cpp - Raspberry Pi Image Processing Algorithms\n+ */\n+\n+#include <algorithm>\n+#include <fcntl.h>\n+#include <math.h>\n+#include <stdint.h>\n+#include <string.h>\n+#include <sys/mman.h>\n+\n+#include <ipa/ipa_interface.h>\n+#include <ipa/ipa_module_info.h>\n+#include <ipa/raspberrypi.h>\n+#include <libcamera/buffer.h>\n+#include <libcamera/control_ids.h>\n+#include <libcamera/controls.h>\n+#include <libcamera/request.h>\n+#include <libcamera/span.h>\n+#include <libipa/ipa_interface_wrapper.h>\n+\n+#include <linux/bcm2835-isp.h>\n+\n+#include \"agc_algorithm.hpp\"\n+#include \"agc_status.h\"\n+#include \"alsc_status.h\"\n+#include \"awb_algorithm.hpp\"\n+#include \"awb_status.h\"\n+#include \"black_level_status.h\"\n+#include \"cam_helper.hpp\"\n+#include \"ccm_algorithm.hpp\"\n+#include \"ccm_status.h\"\n+#include \"contrast_algorithm.hpp\"\n+#include \"contrast_status.h\"\n+#include \"controller.hpp\"\n+#include \"dpc_status.h\"\n+#include \"geq_status.h\"\n+#include \"lux_status.h\"\n+#include \"metadata.hpp\"\n+#include \"noise_status.h\"\n+#include \"sdn_status.h\"\n+#include \"sharpen_status.h\"\n+\n+#include \"camera_sensor.h\"\n+#include \"log.h\"\n+#include \"utils.h\"\n+\n+namespace libcamera {\n+\n+/* Configure 
the sensor with these values initially. */\n+#define DEFAULT_ANALOGUE_GAIN 1.0\n+#define DEFAULT_EXPOSURE_TIME 20000\n+\n+LOG_DEFINE_CATEGORY(IPARPI)\n+\n+class IPARPi : public IPAInterface\n+{\n+public:\n+\tIPARPi()\n+\t\t: lastMode_({}), controller_(), controllerInit_(false),\n+\t\t  frame_count_(0), check_count_(0), hide_count_(0),\n+\t\t  mistrust_count_(0), lsTableHandle_(0), lsTable_(nullptr)\n+\t{\n+\t}\n+\n+\t~IPARPi()\n+\t{\n+\t}\n+\n+\tint init(const IPASettings &settings) override;\n+\tint start() override { return 0; }\n+\tvoid stop() override {}\n+\n+\tvoid configure(const CameraSensorInfo &sensorInfo,\n+\t\t       const std::map<unsigned int, IPAStream> &streamConfig,\n+\t\t       const std::map<unsigned int, const ControlInfoMap &> &entityControls) override;\n+\tvoid mapBuffers(const std::vector<IPABuffer> &buffers) override;\n+\tvoid unmapBuffers(const std::vector<unsigned int> &ids) override;\n+\tvoid processEvent(const IPAOperationData &event) override;\n+\n+private:\n+\tvoid setMode(const CameraSensorInfo &sensorInfo);\n+\tvoid queueRequest(const ControlList &controls);\n+\tvoid returnEmbeddedBuffer(unsigned int bufferId);\n+\tvoid prepareISP(unsigned int bufferId);\n+\tvoid reportMetadata();\n+\tbool parseEmbeddedData(unsigned int bufferId, struct DeviceStatus &deviceStatus);\n+\tvoid processStats(unsigned int bufferId);\n+\tvoid applyAGC(const struct AgcStatus *agcStatus);\n+\tvoid applyAWB(const struct AwbStatus *awbStatus, ControlList &ctrls);\n+\tvoid applyDG(const struct AgcStatus *dgStatus, ControlList &ctrls);\n+\tvoid applyCCM(const struct CcmStatus *ccmStatus, ControlList &ctrls);\n+\tvoid applyBlackLevel(const struct BlackLevelStatus *blackLevelStatus, ControlList &ctrls);\n+\tvoid applyGamma(const struct ContrastStatus *contrastStatus, ControlList &ctrls);\n+\tvoid applyGEQ(const struct GeqStatus *geqStatus, ControlList &ctrls);\n+\tvoid applyDenoise(const struct SdnStatus *denoiseStatus, ControlList &ctrls);\n+\tvoid 
applySharpen(const struct SharpenStatus *sharpenStatus, ControlList &ctrls);\n+\tvoid applyDPC(const struct DpcStatus *dpcStatus, ControlList &ctrls);\n+\tvoid applyLS(const struct AlscStatus *lsStatus, ControlList &ctrls);\n+\tvoid resampleTable(uint16_t dest[], double const src[12][16], int dest_w, int dest_h);\n+\n+\tstd::map<unsigned int, FrameBuffer> buffers_;\n+\tstd::map<unsigned int, void *> buffersMemory_;\n+\n+\tControlInfoMap unicam_ctrls_;\n+\tControlInfoMap isp_ctrls_;\n+\tControlList libcameraMetadata_;\n+\n+\t/* IPA configuration. */\n+\tstd::string tuningFile_;\n+\n+\t/* Camera sensor params. */\n+\tCameraMode mode_;\n+\tCameraMode lastMode_;\n+\n+\t/* Raspberry Pi controller specific defines. */\n+\tstd::unique_ptr<RPi::CamHelper> helper_;\n+\tRPi::Controller controller_;\n+\tbool controllerInit_;\n+\tRPi::Metadata rpiMetadata_;\n+\n+\t/*\n+\t * We count frames to decide if the frame must be hidden (e.g. from\n+\t * display) or mistrusted (i.e. not given to the control algos).\n+\t */\n+\tuint64_t frame_count_;\n+\t/* For checking the sequencing of Prepare/Process calls. */\n+\tuint64_t check_count_;\n+\t/* How many frames the pipeline handler should hide, or \"drop\". */\n+\tunsigned int hide_count_;\n+\t/* How many frames we should avoid running control algos on. */\n+\tunsigned int mistrust_count_;\n+\t/* LS table allocation passed in from the pipeline handler. 
*/\n+\tuint32_t lsTableHandle_;\n+\tvoid *lsTable_;\n+};\n+\n+int IPARPi::init(const IPASettings &settings)\n+{\n+\ttuningFile_ = settings.configurationFile;\n+\treturn 0;\n+}\n+\n+void IPARPi::setMode(const CameraSensorInfo &sensorInfo)\n+{\n+\tmode_.bitdepth = sensorInfo.bitsPerPixel;\n+\tmode_.width = sensorInfo.outputSize.width;\n+\tmode_.height = sensorInfo.outputSize.height;\n+\tmode_.sensor_width = sensorInfo.activeAreaSize.width;\n+\tmode_.sensor_height = sensorInfo.activeAreaSize.height;\n+\tmode_.crop_x = sensorInfo.analogCrop.x;\n+\tmode_.crop_y = sensorInfo.analogCrop.y;\n+\n+\t/*\n+\t * Calculate scaling parameters. The scale_[xy] factors are determined\n+\t * by the ratio between the crop rectangle size and the output size.\n+\t */\n+\tmode_.scale_x = sensorInfo.analogCrop.width / sensorInfo.outputSize.width;\n+\tmode_.scale_y = sensorInfo.analogCrop.height / sensorInfo.outputSize.height;\n+\n+\t/*\n+\t * We're not told by the pipeline handler how scaling is split between\n+\t * binning and digital scaling. For now, as a heuristic, assume that\n+\t * downscaling up to 2 is achieved through binning, and that any\n+\t * additional scaling is achieved through digital scaling.\n+\t *\n+\t * \\todo Get the pipeline handler to provide the full data\n+\t */\n+\tmode_.bin_x = std::min(2, static_cast<int>(mode_.scale_x));\n+\tmode_.bin_y = std::min(2, static_cast<int>(mode_.scale_y));\n+\n+\t/* The noise factor is the square root of the total binning factor. 
*/\n+\tmode_.noise_factor = sqrt(mode_.bin_x * mode_.bin_y);\n+\n+\t/*\n+\t * Calculate the line length in nanoseconds as the ratio between\n+\t * the line length in pixels and the pixel rate.\n+\t */\n+\tmode_.line_length = 1e9 * sensorInfo.lineLength / sensorInfo.pixelRate;\n+}\n+\n+void IPARPi::configure(const CameraSensorInfo &sensorInfo,\n+\t\t       const std::map<unsigned int, IPAStream> &streamConfig,\n+\t\t       const std::map<unsigned int, const ControlInfoMap &> &entityControls)\n+{\n+\tif (entityControls.empty())\n+\t\treturn;\n+\n+\tunicam_ctrls_ = entityControls.at(0);\n+\tisp_ctrls_ = entityControls.at(1);\n+\t/* Setup a metadata ControlList to output metadata. */\n+\tlibcameraMetadata_ = ControlList(controls::controls);\n+\n+\t/*\n+\t * Load the \"helper\" for this sensor. This tells us all the device specific stuff\n+\t * that the kernel driver doesn't. We only do this the first time; we don't need\n+\t * to re-parse the metadata after a simple mode-switch for no reason.\n+\t */\n+\tstd::string cameraName(sensorInfo.model);\n+\tif (!helper_) {\n+\t\thelper_ = std::unique_ptr<RPi::CamHelper>(RPi::CamHelper::Create(cameraName));\n+\t\t/*\n+\t\t * Pass out the sensor config to the pipeline handler in order\n+\t\t * to setup the staggered writer class.\n+\t\t */\n+\t\tint gainDelay, exposureDelay, sensorMetadata;\n+\t\thelper_->GetDelays(exposureDelay, gainDelay);\n+\t\tsensorMetadata = helper_->SensorEmbeddedDataPresent();\n+\t\tRPi::CamTransform orientation = helper_->GetOrientation();\n+\n+\t\tIPAOperationData op;\n+\t\top.operation = RPI_IPA_ACTION_SET_SENSOR_CONFIG;\n+\t\top.data.push_back(gainDelay);\n+\t\top.data.push_back(exposureDelay);\n+\t\top.data.push_back(sensorMetadata);\n+\n+\t\tControlList ctrls(unicam_ctrls_);\n+\t\tctrls.set(V4L2_CID_HFLIP, (int32_t) !!(orientation & RPi::CamTransform_HFLIP));\n+\t\tctrls.set(V4L2_CID_VFLIP, (int32_t) !!(orientation & 
RPi::CamTransform_VFLIP));\n+\t\top.controls.push_back(ctrls);\n+\n+\t\tqueueFrameAction.emit(0, op);\n+\t}\n+\n+\t/* Re-assemble camera mode using the sensor info. */\n+\tsetMode(sensorInfo);\n+\n+\t/* Pass the camera mode to the CamHelper to setup algorithms. */\n+\thelper_->SetCameraMode(mode_);\n+\n+\t/*\n+\t * Initialise frame counts, and decide how many frames must be hidden or\n+\t * \"mistrusted\", which depends on whether this is a startup from cold,\n+\t * or merely a mode switch in a running system.\n+\t */\n+\tframe_count_ = 0;\n+\tcheck_count_ = 0;\n+\tif (controllerInit_) {\n+\t\thide_count_ = helper_->HideFramesModeSwitch();\n+\t\tmistrust_count_ = helper_->MistrustFramesModeSwitch();\n+\t} else {\n+\t\thide_count_ = helper_->HideFramesStartup();\n+\t\tmistrust_count_ = helper_->MistrustFramesStartup();\n+\t}\n+\n+\tif (!controllerInit_) {\n+\t\t/* Load the tuning file for this sensor. */\n+\t\tcontroller_.Read(tuningFile_.c_str());\n+\t\tcontroller_.Initialise();\n+\t\tcontrollerInit_ = true;\n+\n+\t\t/* Calculate initial values for gain and exposure. 
*/\n+\t\tint32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN);\n+\t\tint32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME);\n+\n+\t\tControlList ctrls(unicam_ctrls_);\n+\t\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n+\t\tctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n+\n+\t\tIPAOperationData op;\n+\t\top.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n+\t\top.controls.push_back(ctrls);\n+\t\tqueueFrameAction.emit(0, op);\n+\t}\n+\n+\tcontroller_.SwitchMode(mode_);\n+\n+\tlastMode_ = mode_;\n+}\n+\n+void IPARPi::mapBuffers(const std::vector<IPABuffer> &buffers)\n+{\n+\tfor (const IPABuffer &buffer : buffers) {\n+\t\tauto elem = buffers_.emplace(std::piecewise_construct,\n+\t\t\t\t\t     std::forward_as_tuple(buffer.id),\n+\t\t\t\t\t     std::forward_as_tuple(buffer.planes));\n+\t\tconst FrameBuffer &fb = elem.first->second;\n+\n+\t\tbuffersMemory_[buffer.id] = mmap(NULL, fb.planes()[0].length, PROT_READ | PROT_WRITE,\n+\t\t\t\t\t\t MAP_SHARED, fb.planes()[0].fd.fd(), 0);\n+\n+\t\tif (buffersMemory_[buffer.id] == MAP_FAILED) {\n+\t\t\tint ret = -errno;\n+\t\t\tLOG(IPARPI, Fatal) << \"Failed to mmap buffer: \" << strerror(-ret);\n+\t\t}\n+\t}\n+}\n+\n+void IPARPi::unmapBuffers(const std::vector<unsigned int> &ids)\n+{\n+\tfor (unsigned int id : ids) {\n+\t\tconst auto fb = buffers_.find(id);\n+\t\tif (fb == buffers_.end())\n+\t\t\tcontinue;\n+\n+\t\tmunmap(buffersMemory_[id], fb->second.planes()[0].length);\n+\t\tbuffersMemory_.erase(id);\n+\t\tbuffers_.erase(id);\n+\t}\n+}\n+\n+void IPARPi::processEvent(const IPAOperationData &event)\n+{\n+\tswitch (event.operation) {\n+\tcase RPI_IPA_EVENT_SIGNAL_STAT_READY: {\n+\t\tunsigned int bufferId = event.data[0];\n+\n+\t\tif (++check_count_ != frame_count_) /* assert here? 
*/\n+\t\t\tLOG(IPARPI, Error) << \"WARNING: Prepare/Process mismatch!!!\";\n+\t\tif (frame_count_ > mistrust_count_)\n+\t\t\tprocessStats(bufferId);\n+\n+\t\tIPAOperationData op;\n+\t\top.operation = RPI_IPA_ACTION_STATS_METADATA_COMPLETE;\n+\t\top.data = { bufferId & RPiIpaMask::ID };\n+\t\top.controls = { libcameraMetadata_ };\n+\t\tqueueFrameAction.emit(0, op);\n+\t\tbreak;\n+\t}\n+\n+\tcase RPI_IPA_EVENT_SIGNAL_ISP_PREPARE: {\n+\t\tunsigned int embeddedbufferId = event.data[0];\n+\t\tunsigned int bayerbufferId = event.data[1];\n+\n+\t\t/*\n+\t\t * At start-up, or after a mode-switch, we may want to\n+\t\t * avoid running the control algos for a few frames in case\n+\t\t * they are \"unreliable\".\n+\t\t */\n+\t\tprepareISP(embeddedbufferId);\n+\t\treportMetadata();\n+\n+\t\t/* Ready to push the input buffer into the ISP. */\n+\t\tIPAOperationData op;\n+\t\tif (++frame_count_ > hide_count_)\n+\t\t\top.operation = RPI_IPA_ACTION_RUN_ISP;\n+\t\telse\n+\t\t\top.operation = RPI_IPA_ACTION_RUN_ISP_AND_DROP_FRAME;\n+\t\top.data = { bayerbufferId & RPiIpaMask::ID };\n+\t\tqueueFrameAction.emit(0, op);\n+\t\tbreak;\n+\t}\n+\n+\tcase RPI_IPA_EVENT_QUEUE_REQUEST: {\n+\t\tqueueRequest(event.controls[0]);\n+\t\tbreak;\n+\t}\n+\n+\tcase RPI_IPA_EVENT_LS_TABLE_ALLOCATION: {\n+\t\tlsTable_ = reinterpret_cast<void *>(event.data[0]);\n+\t\tlsTableHandle_ = event.data[1];\n+\t\tbreak;\n+\t}\n+\n+\tdefault:\n+\t\tLOG(IPARPI, Error) << \"Unknown event \" << event.operation;\n+\t\tbreak;\n+\t}\n+}\n+\n+void IPARPi::reportMetadata()\n+{\n+\tstd::unique_lock<RPi::Metadata> lock(rpiMetadata_);\n+\n+\t/*\n+\t * Certain information about the current frame and how it will be\n+\t * processed can be extracted and placed into the libcamera metadata\n+\t * buffer, where an application could query it.\n+\t */\n+\n+\tDeviceStatus *deviceStatus = rpiMetadata_.GetLocked<DeviceStatus>(\"device.status\");\n+\tif (deviceStatus) {\n+\t\tlibcameraMetadata_.set(controls::ExposureTime, 
deviceStatus->shutter_speed);\n+\t\tlibcameraMetadata_.set(controls::AnalogueGain, deviceStatus->analogue_gain);\n+\t}\n+\n+\tAgcStatus *agcStatus = rpiMetadata_.GetLocked<AgcStatus>(\"agc.status\");\n+\tif (agcStatus)\n+\t\tlibcameraMetadata_.set(controls::AeLocked, agcStatus->locked);\n+\n+\tLuxStatus *luxStatus = rpiMetadata_.GetLocked<LuxStatus>(\"lux.status\");\n+\tif (luxStatus)\n+\t\tlibcameraMetadata_.set(controls::Lux, luxStatus->lux);\n+\n+\tAwbStatus *awbStatus = rpiMetadata_.GetLocked<AwbStatus>(\"awb.status\");\n+\tif (awbStatus) {\n+\t\tlibcameraMetadata_.set(controls::ColourGains, { static_cast<float>(awbStatus->gain_r),\n+\t\t\t\t\t\t\t\tstatic_cast<float>(awbStatus->gain_b) });\n+\t\tlibcameraMetadata_.set(controls::ColourTemperature, awbStatus->temperature_K);\n+\t}\n+\n+\tBlackLevelStatus *blackLevelStatus = rpiMetadata_.GetLocked<BlackLevelStatus>(\"black_level.status\");\n+\tif (blackLevelStatus)\n+\t\tlibcameraMetadata_.set(controls::SensorBlackLevels,\n+\t\t\t\t       { static_cast<int32_t>(blackLevelStatus->black_level_r),\n+\t\t\t\t\t static_cast<int32_t>(blackLevelStatus->black_level_g),\n+\t\t\t\t\t static_cast<int32_t>(blackLevelStatus->black_level_g),\n+\t\t\t\t\t static_cast<int32_t>(blackLevelStatus->black_level_b) });\n+}\n+\n+/*\n+ * Converting between enums (used in the libcamera API) and the names that\n+ * we use to identify different modes. 
Unfortunately, the conversion tables\n+ * must be kept up-to-date by hand.\n+ */\n+\n+static const std::map<int32_t, std::string> MeteringModeTable = {\n+\t{ controls::MeteringCentreWeighted, \"centre-weighted\" },\n+\t{ controls::MeteringSpot, \"spot\" },\n+\t{ controls::MeteringMatrix, \"matrix\" },\n+\t{ controls::MeteringCustom, \"custom\" },\n+};\n+\n+static const std::map<int32_t, std::string> ConstraintModeTable = {\n+\t{ controls::ConstraintNormal, \"normal\" },\n+\t{ controls::ConstraintHighlight, \"highlight\" },\n+\t{ controls::ConstraintCustom, \"custom\" },\n+};\n+\n+static const std::map<int32_t, std::string> ExposureModeTable = {\n+\t{ controls::ExposureNormal, \"normal\" },\n+\t{ controls::ExposureShort, \"short\" },\n+\t{ controls::ExposureLong, \"long\" },\n+\t{ controls::ExposureCustom, \"custom\" },\n+};\n+\n+static const std::map<int32_t, std::string> AwbModeTable = {\n+\t{ controls::AwbAuto, \"normal\" },\n+\t{ controls::AwbIncandescent, \"incandescent\" },\n+\t{ controls::AwbTungsten, \"tungsten\" },\n+\t{ controls::AwbFluorescent, \"fluorescent\" },\n+\t{ controls::AwbIndoor, \"indoor\" },\n+\t{ controls::AwbDaylight, \"daylight\" },\n+\t{ controls::AwbCustom, \"custom\" },\n+};\n+\n+void IPARPi::queueRequest(const ControlList &controls)\n+{\n+\t/* Clear the return metadata buffer. 
*/\n+\tlibcameraMetadata_.clear();\n+\n+\tfor (auto const &ctrl : controls) {\n+\t\tLOG(IPARPI, Info) << \"Request ctrl: \"\n+\t\t\t\t  << controls::controls.at(ctrl.first)->name()\n+\t\t\t\t  << \" = \" << ctrl.second.toString();\n+\n+\t\tswitch (ctrl.first) {\n+\t\tcase controls::AE_ENABLE: {\n+\t\t\tRPi::Algorithm *agc = controller_.GetAlgorithm(\"agc\");\n+\t\t\tASSERT(agc);\n+\t\t\tif (ctrl.second.get<bool>() == false)\n+\t\t\t\tagc->Pause();\n+\t\t\telse\n+\t\t\t\tagc->Resume();\n+\n+\t\t\tlibcameraMetadata_.set(controls::AeEnable, ctrl.second.get<bool>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::EXPOSURE_TIME: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\t\t\t/* This expects units of micro-seconds. */\n+\t\t\tagc->SetFixedShutter(ctrl.second.get<int32_t>());\n+\t\t\t/* For the manual values to take effect, AGC must be unpaused. */\n+\t\t\tif (agc->IsPaused())\n+\t\t\t\tagc->Resume();\n+\n+\t\t\tlibcameraMetadata_.set(controls::ExposureTime, ctrl.second.get<int32_t>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::ANALOGUE_GAIN: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\t\t\tagc->SetFixedAnalogueGain(ctrl.second.get<float>());\n+\t\t\t/* For the manual values to take effect, AGC must be unpaused. 
*/\n+\t\t\tif (agc->IsPaused())\n+\t\t\t\tagc->Resume();\n+\n+\t\t\tlibcameraMetadata_.set(controls::AnalogueGain,\n+\t\t\t\t\t       ctrl.second.get<float>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::AE_METERING_MODE: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\n+\t\t\tint32_t idx = ctrl.second.get<int32_t>();\n+\t\t\tif (MeteringModeTable.count(idx)) {\n+\t\t\t\tagc->SetMeteringMode(MeteringModeTable.at(idx));\n+\t\t\t\tlibcameraMetadata_.set(controls::AeMeteringMode, idx);\n+\t\t\t} else {\n+\t\t\t\tLOG(IPARPI, Error) << \"Metering mode \" << idx\n+\t\t\t\t\t\t   << \" not recognised\";\n+\t\t\t}\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::AE_CONSTRAINT_MODE: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\n+\t\t\tint32_t idx = ctrl.second.get<int32_t>();\n+\t\t\tif (ConstraintModeTable.count(idx)) {\n+\t\t\t\tagc->SetConstraintMode(ConstraintModeTable.at(idx));\n+\t\t\t\tlibcameraMetadata_.set(controls::AeConstraintMode, idx);\n+\t\t\t} else {\n+\t\t\t\tLOG(IPARPI, Error) << \"Constraint mode \" << idx\n+\t\t\t\t\t\t   << \" not recognised\";\n+\t\t\t}\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::AE_EXPOSURE_MODE: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\n+\t\t\tint32_t idx = ctrl.second.get<int32_t>();\n+\t\t\tif (ExposureModeTable.count(idx)) {\n+\t\t\t\tagc->SetExposureMode(ExposureModeTable.at(idx));\n+\t\t\t\tlibcameraMetadata_.set(controls::AeExposureMode, idx);\n+\t\t\t} else {\n+\t\t\t\tLOG(IPARPI, Error) << \"Exposure mode \" << idx\n+\t\t\t\t\t\t   << \" not recognised\";\n+\t\t\t}\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::EXPOSURE_VALUE: {\n+\t\t\tRPi::AgcAlgorithm *agc = dynamic_cast<RPi::AgcAlgorithm 
*>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"agc\"));\n+\t\t\tASSERT(agc);\n+\n+\t\t\t/*\n+\t\t\t * The SetEv() method takes in a direct exposure multiplier.\n+\t\t\t * So convert to 2^EV\n+\t\t\t */\n+\t\t\tdouble ev = pow(2.0, ctrl.second.get<float>());\n+\t\t\tagc->SetEv(ev);\n+\t\t\tlibcameraMetadata_.set(controls::ExposureValue,\n+\t\t\t\t\t       ctrl.second.get<float>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::AWB_ENABLE: {\n+\t\t\tRPi::Algorithm *awb = controller_.GetAlgorithm(\"awb\");\n+\t\t\tASSERT(awb);\n+\n+\t\t\tif (ctrl.second.get<bool>() == false)\n+\t\t\t\tawb->Pause();\n+\t\t\telse\n+\t\t\t\tawb->Resume();\n+\n+\t\t\tlibcameraMetadata_.set(controls::AwbEnable,\n+\t\t\t\t\t       ctrl.second.get<bool>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::AWB_MODE: {\n+\t\t\tRPi::AwbAlgorithm *awb = dynamic_cast<RPi::AwbAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"awb\"));\n+\t\t\tASSERT(awb);\n+\n+\t\t\tint32_t idx = ctrl.second.get<int32_t>();\n+\t\t\tif (AwbModeTable.count(idx)) {\n+\t\t\t\tawb->SetMode(AwbModeTable.at(idx));\n+\t\t\t\tlibcameraMetadata_.set(controls::AwbMode, idx);\n+\t\t\t} else {\n+\t\t\t\tLOG(IPARPI, Error) << \"AWB mode \" << idx\n+\t\t\t\t\t\t   << \" not recognised\";\n+\t\t\t}\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::COLOUR_GAINS: {\n+\t\t\tauto gains = ctrl.second.get<Span<const float>>();\n+\t\t\tRPi::AwbAlgorithm *awb = dynamic_cast<RPi::AwbAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"awb\"));\n+\t\t\tASSERT(awb);\n+\n+\t\t\tawb->SetManualGains(gains[0], gains[1]);\n+\t\t\tif (gains[0] != 0.0f && gains[1] != 0.0f)\n+\t\t\t\t/* A gain of 0.0f will switch back to auto mode. 
*/\n+\t\t\t\tlibcameraMetadata_.set(controls::ColourGains,\n+\t\t\t\t\t\t       { gains[0], gains[1] });\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::BRIGHTNESS: {\n+\t\t\tRPi::ContrastAlgorithm *contrast = dynamic_cast<RPi::ContrastAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"contrast\"));\n+\t\t\tASSERT(contrast);\n+\n+\t\t\tcontrast->SetBrightness(ctrl.second.get<float>() * 65536);\n+\t\t\tlibcameraMetadata_.set(controls::Brightness,\n+\t\t\t\t\t       ctrl.second.get<float>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::CONTRAST: {\n+\t\t\tRPi::ContrastAlgorithm *contrast = dynamic_cast<RPi::ContrastAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"contrast\"));\n+\t\t\tASSERT(contrast);\n+\n+\t\t\tcontrast->SetContrast(ctrl.second.get<float>());\n+\t\t\tlibcameraMetadata_.set(controls::Contrast,\n+\t\t\t\t\t       ctrl.second.get<float>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tcase controls::SATURATION: {\n+\t\t\tRPi::CcmAlgorithm *ccm = dynamic_cast<RPi::CcmAlgorithm *>(\n+\t\t\t\tcontroller_.GetAlgorithm(\"ccm\"));\n+\t\t\tASSERT(ccm);\n+\n+\t\t\tccm->SetSaturation(ctrl.second.get<float>());\n+\t\t\tlibcameraMetadata_.set(controls::Saturation,\n+\t\t\t\t\t       ctrl.second.get<float>());\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tdefault:\n+\t\t\tLOG(IPARPI, Warning)\n+\t\t\t\t<< \"Ctrl \" << controls::controls.at(ctrl.first)->name()\n+\t\t\t\t<< \" is not handled.\";\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+}\n+\n+void IPARPi::returnEmbeddedBuffer(unsigned int bufferId)\n+{\n+\tIPAOperationData op;\n+\top.operation = RPI_IPA_ACTION_EMBEDDED_COMPLETE;\n+\top.data = { bufferId & RPiIpaMask::ID };\n+\tqueueFrameAction.emit(0, op);\n+}\n+\n+void IPARPi::prepareISP(unsigned int bufferId)\n+{\n+\tstruct DeviceStatus deviceStatus = {};\n+\tbool success = parseEmbeddedData(bufferId, deviceStatus);\n+\n+\t/* Done with embedded data now, return to pipeline handler asap. 
*/\n+\treturnEmbeddedBuffer(bufferId);\n+\n+\tif (success) {\n+\t\tControlList ctrls(isp_ctrls_);\n+\n+\t\trpiMetadata_.Clear();\n+\t\trpiMetadata_.Set(\"device.status\", deviceStatus);\n+\t\tcontroller_.Prepare(&rpiMetadata_);\n+\n+\t\t/* Lock the metadata buffer to avoid constant locks/unlocks. */\n+\t\tstd::unique_lock<RPi::Metadata> lock(rpiMetadata_);\n+\n+\t\tAwbStatus *awbStatus = rpiMetadata_.GetLocked<AwbStatus>(\"awb.status\");\n+\t\tif (awbStatus)\n+\t\t\tapplyAWB(awbStatus, ctrls);\n+\n+\t\tCcmStatus *ccmStatus = rpiMetadata_.GetLocked<CcmStatus>(\"ccm.status\");\n+\t\tif (ccmStatus)\n+\t\t\tapplyCCM(ccmStatus, ctrls);\n+\n+\t\tAgcStatus *dgStatus = rpiMetadata_.GetLocked<AgcStatus>(\"agc.status\");\n+\t\tif (dgStatus)\n+\t\t\tapplyDG(dgStatus, ctrls);\n+\n+\t\tAlscStatus *lsStatus = rpiMetadata_.GetLocked<AlscStatus>(\"alsc.status\");\n+\t\tif (lsStatus)\n+\t\t\tapplyLS(lsStatus, ctrls);\n+\n+\t\tContrastStatus *contrastStatus = rpiMetadata_.GetLocked<ContrastStatus>(\"contrast.status\");\n+\t\tif (contrastStatus)\n+\t\t\tapplyGamma(contrastStatus, ctrls);\n+\n+\t\tBlackLevelStatus *blackLevelStatus = rpiMetadata_.GetLocked<BlackLevelStatus>(\"black_level.status\");\n+\t\tif (blackLevelStatus)\n+\t\t\tapplyBlackLevel(blackLevelStatus, ctrls);\n+\n+\t\tGeqStatus *geqStatus = rpiMetadata_.GetLocked<GeqStatus>(\"geq.status\");\n+\t\tif (geqStatus)\n+\t\t\tapplyGEQ(geqStatus, ctrls);\n+\n+\t\tSdnStatus *denoiseStatus = rpiMetadata_.GetLocked<SdnStatus>(\"sdn.status\");\n+\t\tif (denoiseStatus)\n+\t\t\tapplyDenoise(denoiseStatus, ctrls);\n+\n+\t\tSharpenStatus *sharpenStatus = rpiMetadata_.GetLocked<SharpenStatus>(\"sharpen.status\");\n+\t\tif (sharpenStatus)\n+\t\t\tapplySharpen(sharpenStatus, ctrls);\n+\n+\t\tDpcStatus *dpcStatus = rpiMetadata_.GetLocked<DpcStatus>(\"dpc.status\");\n+\t\tif (dpcStatus)\n+\t\t\tapplyDPC(dpcStatus, ctrls);\n+\n+\t\tif (!ctrls.empty()) {\n+\t\t\tIPAOperationData op;\n+\t\t\top.operation = 
RPI_IPA_ACTION_V4L2_SET_ISP;\n+\t\t\top.controls.push_back(ctrls);\n+\t\t\tqueueFrameAction.emit(0, op);\n+\t\t}\n+\t}\n+}\n+\n+bool IPARPi::parseEmbeddedData(unsigned int bufferId, struct DeviceStatus &deviceStatus)\n+{\n+\tauto it = buffersMemory_.find(bufferId);\n+\tif (it == buffersMemory_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Could not find embedded buffer!\";\n+\t\treturn false;\n+\t}\n+\n+\tint size = buffers_.find(bufferId)->second.planes()[0].length;\n+\thelper_->Parser().SetBufferSize(size);\n+\tRPi::MdParser::Status status = helper_->Parser().Parse(it->second);\n+\tif (status != RPi::MdParser::Status::OK) {\n+\t\tLOG(IPARPI, Error) << \"Embedded Buffer parsing failed, error \" << status;\n+\t} else {\n+\t\tuint32_t exposure_lines, gain_code;\n+\t\tif (helper_->Parser().GetExposureLines(exposure_lines) != RPi::MdParser::Status::OK) {\n+\t\t\tLOG(IPARPI, Error) << \"Exposure time failed\";\n+\t\t\treturn false;\n+\t\t}\n+\n+\t\tdeviceStatus.shutter_speed = helper_->Exposure(exposure_lines);\n+\t\tif (helper_->Parser().GetGainCode(gain_code) != RPi::MdParser::Status::OK) {\n+\t\t\tLOG(IPARPI, Error) << \"Gain failed\";\n+\t\t\treturn false;\n+\t\t}\n+\n+\t\tdeviceStatus.analogue_gain = helper_->Gain(gain_code);\n+\t\tLOG(IPARPI, Debug) << \"Metadata - Exposure : \"\n+\t\t\t\t   << deviceStatus.shutter_speed << \" Gain : \"\n+\t\t\t\t   << deviceStatus.analogue_gain;\n+\t}\n+\n+\treturn true;\n+}\n+\n+void IPARPi::processStats(unsigned int bufferId)\n+{\n+\tauto it = buffersMemory_.find(bufferId);\n+\tif (it == buffersMemory_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Could not find stats buffer!\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_stats *stats = static_cast<bcm2835_isp_stats *>(it->second);\n+\tRPi::StatisticsPtr statistics = std::make_shared<bcm2835_isp_stats>(*stats);\n+\tcontroller_.Process(statistics, &rpiMetadata_);\n+\n+\tstruct AgcStatus agcStatus;\n+\tif (rpiMetadata_.Get(\"agc.status\", agcStatus) == 0)\n+\t\tapplyAGC(&agcStatus);\n+}\n+\n+void 
IPARPi::applyAWB(const struct AwbStatus *awbStatus, ControlList &ctrls)\n+{\n+\tconst auto gainR = isp_ctrls_.find(V4L2_CID_RED_BALANCE);\n+\tif (gainR == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find red gain control\";\n+\t\treturn;\n+\t}\n+\n+\tconst auto gainB = isp_ctrls_.find(V4L2_CID_BLUE_BALANCE);\n+\tif (gainB == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find blue gain control\";\n+\t\treturn;\n+\t}\n+\n+\tLOG(IPARPI, Debug) << \"Applying WB R: \" << awbStatus->gain_r << \" B: \"\n+\t\t\t   << awbStatus->gain_b;\n+\n+\tctrls.set(V4L2_CID_RED_BALANCE,\n+\t\t  static_cast<int32_t>(awbStatus->gain_r * 1000));\n+\tctrls.set(V4L2_CID_BLUE_BALANCE,\n+\t\t  static_cast<int32_t>(awbStatus->gain_b * 1000));\n+}\n+\n+void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n+{\n+\tIPAOperationData op;\n+\top.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n+\n+\tint32_t gain_code = helper_->GainCode(agcStatus->analogue_gain);\n+\tint32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time);\n+\n+\tif (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find analogue gain control\";\n+\t\treturn;\n+\t}\n+\n+\tif (unicam_ctrls_.find(V4L2_CID_EXPOSURE) == unicam_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find exposure control\";\n+\t\treturn;\n+\t}\n+\n+\tLOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << agcStatus->shutter_time\n+\t\t\t   << \" (Shutter lines: \" << exposure_lines << \") Gain: \"\n+\t\t\t   << agcStatus->analogue_gain << \" (Gain Code: \"\n+\t\t\t   << gain_code << \")\";\n+\n+\tControlList ctrls(unicam_ctrls_);\n+\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n+\tctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n+\top.controls.push_back(ctrls);\n+\tqueueFrameAction.emit(0, op);\n+}\n+\n+void IPARPi::applyDG(const struct AgcStatus *dgStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_DIGITAL_GAIN) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, 
Error) << \"Can't find digital gain control\";\n+\t\treturn;\n+\t}\n+\n+\tctrls.set(V4L2_CID_DIGITAL_GAIN,\n+\t\t  static_cast<int32_t>(dgStatus->digital_gain * 1000));\n+}\n+\n+void IPARPi::applyCCM(const struct CcmStatus *ccmStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_CC_MATRIX) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find CCM control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_custom_ccm ccm;\n+\tfor (int i = 0; i < 9; i++) {\n+\t\tccm.ccm.ccm[i / 3][i % 3].den = 1000;\n+\t\tccm.ccm.ccm[i / 3][i % 3].num = 1000 * ccmStatus->matrix[i];\n+\t}\n+\n+\tccm.enabled = 1;\n+\tccm.ccm.offsets[0] = ccm.ccm.offsets[1] = ccm.ccm.offsets[2] = 0;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&ccm),\n+\t\t\t\t\t    sizeof(ccm) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_CC_MATRIX, c);\n+}\n+\n+void IPARPi::applyGamma(const struct ContrastStatus *contrastStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_GAMMA) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find Gamma control\";\n+\t\treturn;\n+\t}\n+\n+\tstruct bcm2835_isp_gamma gamma;\n+\tgamma.enabled = 1;\n+\tfor (int i = 0; i < CONTRAST_NUM_POINTS; i++) {\n+\t\tgamma.x[i] = contrastStatus->points[i].x;\n+\t\tgamma.y[i] = contrastStatus->points[i].y;\n+\t}\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&gamma),\n+\t\t\t\t\t    sizeof(gamma) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_GAMMA, c);\n+}\n+\n+void IPARPi::applyBlackLevel(const struct BlackLevelStatus *blackLevelStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find black level control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_black_level blackLevel;\n+\tblackLevel.enabled = 1;\n+\tblackLevel.black_level_r = blackLevelStatus->black_level_r;\n+\tblackLevel.black_level_g = 
blackLevelStatus->black_level_g;\n+\tblackLevel.black_level_b = blackLevelStatus->black_level_b;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&blackLevel),\n+\t\t\t\t\t    sizeof(blackLevel) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL, c);\n+}\n+\n+void IPARPi::applyGEQ(const struct GeqStatus *geqStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_GEQ) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find geq control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_geq geq;\n+\tgeq.enabled = 1;\n+\tgeq.offset = geqStatus->offset;\n+\tgeq.slope.den = 1000;\n+\tgeq.slope.num = 1000 * geqStatus->slope;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&geq),\n+\t\t\t\t\t    sizeof(geq) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_GEQ, c);\n+}\n+\n+void IPARPi::applyDenoise(const struct SdnStatus *denoiseStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_DENOISE) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find denoise control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_denoise denoise;\n+\tdenoise.enabled = 1;\n+\tdenoise.constant = denoiseStatus->noise_constant;\n+\tdenoise.slope.num = 1000 * denoiseStatus->noise_slope;\n+\tdenoise.slope.den = 1000;\n+\tdenoise.strength.num = 1000 * denoiseStatus->strength;\n+\tdenoise.strength.den = 1000;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&denoise),\n+\t\t\t\t\t    sizeof(denoise) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_DENOISE, c);\n+}\n+\n+void IPARPi::applySharpen(const struct SharpenStatus *sharpenStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_SHARPEN) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find sharpen control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_sharpen sharpen;\n+\tsharpen.enabled = 1;\n+\tsharpen.threshold.num = 1000 * sharpenStatus->threshold;\n+\tsharpen.threshold.den = 
1000;\n+\tsharpen.strength.num = 1000 * sharpenStatus->strength;\n+\tsharpen.strength.den = 1000;\n+\tsharpen.limit.num = 1000 * sharpenStatus->limit;\n+\tsharpen.limit.den = 1000;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&sharpen),\n+\t\t\t\t\t    sizeof(sharpen) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_SHARPEN, c);\n+}\n+\n+void IPARPi::applyDPC(const struct DpcStatus *dpcStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_DPC) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find DPC control\";\n+\t\treturn;\n+\t}\n+\n+\tbcm2835_isp_dpc dpc;\n+\tdpc.enabled = 1;\n+\tdpc.strength = dpcStatus->strength;\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&dpc),\n+\t\t\t\t\t    sizeof(dpc) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_DPC, c);\n+}\n+\n+void IPARPi::applyLS(const struct AlscStatus *lsStatus, ControlList &ctrls)\n+{\n+\tif (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING) == isp_ctrls_.end()) {\n+\t\tLOG(IPARPI, Error) << \"Can't find LS control\";\n+\t\treturn;\n+\t}\n+\n+\t/*\n+\t * Program lens shading tables into pipeline.\n+\t * Choose smallest cell size that won't exceed 63x48 cells.\n+\t */\n+\tconst int cell_sizes[] = { 16, 32, 64, 128, 256 };\n+\tunsigned int num_cells = ARRAY_SIZE(cell_sizes);\n+\tunsigned int i, w, h, cell_size;\n+\tfor (i = 0; i < num_cells; i++) {\n+\t\tcell_size = cell_sizes[i];\n+\t\tw = (mode_.width + cell_size - 1) / cell_size;\n+\t\th = (mode_.height + cell_size - 1) / cell_size;\n+\t\tif (w < 64 && h <= 48)\n+\t\t\tbreak;\n+\t}\n+\n+\tif (i == num_cells) {\n+\t\tLOG(IPARPI, Error) << \"Cannot find cell size\";\n+\t\treturn;\n+\t}\n+\n+\t/* We're going to supply corner sampled tables, 16 bit samples. 
*/\n+\tw++, h++;\n+\tbcm2835_isp_lens_shading ls = {\n+\t\t.enabled = 1,\n+\t\t.grid_cell_size = cell_size,\n+\t\t.grid_width = w,\n+\t\t.grid_stride = w,\n+\t\t.grid_height = h,\n+\t\t.mem_handle_table = lsTableHandle_,\n+\t\t.ref_transform = 0,\n+\t\t.corner_sampled = 1,\n+\t\t.gain_format = GAIN_FORMAT_U4P10\n+\t};\n+\n+\tif (!lsTable_ || w * h * 4 * sizeof(uint16_t) > MAX_LS_GRID_SIZE) {\n+\t\tLOG(IPARPI, Error) << \"Do not have a correctly allocated lens shading table!\";\n+\t\treturn;\n+\t}\n+\n+\tif (lsStatus) {\n+\t\t/* Format will be u4.10 */\n+\t\tuint16_t *grid = static_cast<uint16_t *>(lsTable_);\n+\n+\t\tresampleTable(grid, lsStatus->r, w, h);\n+\t\tresampleTable(grid + w * h, lsStatus->g, w, h);\n+\t\tstd::memcpy(grid + 2 * w * h, grid + w * h, w * h * sizeof(uint16_t));\n+\t\tresampleTable(grid + 3 * w * h, lsStatus->b, w, h);\n+\t}\n+\n+\tControlValue c(Span<const uint8_t>{ reinterpret_cast<uint8_t *>(&ls),\n+\t\t\t\t\t    sizeof(ls) });\n+\tctrls.set(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING, c);\n+}\n+\n+/* Resamples a 16x12 table with central sampling to dest_w x dest_h with corner sampling. */\n+void IPARPi::resampleTable(uint16_t dest[], double const src[12][16], int dest_w, int dest_h)\n+{\n+\t/*\n+\t * Precalculate and cache the x sampling locations and phases to\n+\t * save recomputing them on every row.\n+\t */\n+\tassert(dest_w > 1 && dest_h > 1 && dest_w <= 64);\n+\tint x_lo[64], x_hi[64];\n+\tdouble xf[64];\n+\tdouble x = -0.5, x_inc = 16.0 / (dest_w - 1);\n+\tfor (int i = 0; i < dest_w; i++, x += x_inc) {\n+\t\tx_lo[i] = floor(x);\n+\t\txf[i] = x - x_lo[i];\n+\t\tx_hi[i] = x_lo[i] < 15 ? x_lo[i] + 1 : 15;\n+\t\tx_lo[i] = x_lo[i] > 0 ? x_lo[i] : 0;\n+\t}\n+\n+\t/* Now march over the output table generating the new values. */\n+\tdouble y = -0.5, y_inc = 12.0 / (dest_h - 1);\n+\tfor (int j = 0; j < dest_h; j++, y += y_inc) {\n+\t\tint y_lo = floor(y);\n+\t\tdouble yf = y - y_lo;\n+\t\tint y_hi = y_lo < 11 ? 
y_lo + 1 : 11;\n+\t\ty_lo = y_lo > 0 ? y_lo : 0;\n+\t\tdouble const *row_above = src[y_lo];\n+\t\tdouble const *row_below = src[y_hi];\n+\t\tfor (int i = 0; i < dest_w; i++) {\n+\t\t\tdouble above = row_above[x_lo[i]] * (1 - xf[i]) + row_above[x_hi[i]] * xf[i];\n+\t\t\tdouble below = row_below[x_lo[i]] * (1 - xf[i]) + row_below[x_hi[i]] * xf[i];\n+\t\t\tint result = floor(1024 * (above * (1 - yf) + below * yf) + .5);\n+\t\t\t*(dest++) = result > 16383 ? 16383 : result; /* want u4.10 */\n+\t\t}\n+\t}\n+}\n+\n+/*\n+ * External IPA module interface\n+ */\n+\n+extern \"C\" {\n+const struct IPAModuleInfo ipaModuleInfo = {\n+\tIPA_MODULE_API_VERSION,\n+\t1,\n+\t\"PipelineHandlerRPi\",\n+\t\"raspberrypi\",\n+};\n+\n+struct ipa_context *ipaCreate()\n+{\n+\treturn new IPAInterfaceWrapper(std::make_unique<IPARPi>());\n+}\n+\n+}; /* extern \"C\" */\n+\n+} /* namespace libcamera */\n","prefixes":["libcamera-devel","4/6"]}