{"id":16766,"url":"https://patchwork.libcamera.org/api/1.1/patches/16766/?format=json","web_url":"https://patchwork.libcamera.org/patch/16766/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20220725134639.4572-12-naush@raspberrypi.com>","date":"2022-07-25T13:46:35","name":"[libcamera-devel,11/15] ipa: raspberrypi: Change to C style code comments","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"3653bef9dd0e807822caec152a60f276269987ad","submitter":{"id":34,"url":"https://patchwork.libcamera.org/api/1.1/people/34/?format=json","name":"Naushir Patuck","email":"naush@raspberrypi.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/16766/mbox/","series":[{"id":3323,"url":"https://patchwork.libcamera.org/api/1.1/series/3323/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=3323","date":"2022-07-25T13:46:24","name":"Raspberry Pi IPA code refactor","version":1,"mbox":"https://patchwork.libcamera.org/series/3323/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/16766/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/16766/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id C6531C3275\n\tfor <parsemail@patchwork.libcamera.org>;\n\tMon, 25 Jul 2022 13:47:01 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 8A64B63336;\n\tMon, 25 Jul 2022 15:47:01 +0200 (CEST)","from 
mail-wm1-x330.google.com (mail-wm1-x330.google.com\n\t[IPv6:2a00:1450:4864:20::330])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 4B9316332C\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 25 Jul 2022 15:46:55 +0200 (CEST)","by mail-wm1-x330.google.com with SMTP id\n\tj29-20020a05600c1c1d00b003a2fdafdefbso6396007wms.2\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 25 Jul 2022 06:46:55 -0700 (PDT)","from naush-laptop.localdomain ([93.93.133.154])\n\tby smtp.gmail.com with ESMTPSA id\n\ta20-20020a05600c225400b003a32167b8d4sm18054320wmm.13.2022.07.25.06.46.51\n\t(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n\tMon, 25 Jul 2022 06:46:51 -0700 (PDT)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\n\ts=mail; t=1658756821;\n\tbh=VH+cMqzAjFMUF6gP4pL8/5xjnYGkbOaAN34B+0gU5vw=;\n\th=To:Date:In-Reply-To:References:Subject:List-Id:List-Unsubscribe:\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:\n\tFrom;\n\tb=HhGca/pSePOoZ/+1cSWXb943uBytNZml2bgK/qhrnIvMhu7Bp/7341LDKKvrG5Tpe\n\tvDTLLw+3WvhEi6gV078kG7pL5mCnD8YlkD2BF3KQvaaDwZzC4SsWPxPebAHUUp8w88\n\tSNHONhxRtThKUI0e42o+zn9xhjjk4xQMewTNfeEnChzsr3o2mJ2iHTuRvcTHq1oYfG\n\tRHuv3x1+wMy2BI6dxg7O3JlMgojVxaZuSmWpRI/JH8a9wwSuNC8WkXbSGUMhG7gTLF\n\t49trVrZTgz8oVKCdDA3wBQmxNzJiSrCoQ7YdhvhsnqwEBgMRxjVja/avPK3iyHnR7m\n\tbPQe++6ukCGTg==","v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=raspberrypi.com; 
s=google;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references\n\t:mime-version:content-transfer-encoding;\n\tbh=6FjY6WN+r/Z7EK23A4EC/a+TpOYs99V1qqD59adpGqg=;\n\tb=Z+osmpG6OLn2v8gpiFT+WJXDjuIQqBMNnIdmPg8YKAzEeo76QZ6KUlH9M3czg1AELn\n\tqWE0QapqEJfW1HGaLoy1yPBZU1spc5em2MXdgC3OmBSGO/mBXRzZBV9LpXVUb4iDYIfI\n\tvy7PuoeqfdATOU6xjEUWE1qvKNKw5fFsITbFS58vhGFkP2G90IEw3sM1nou7ofzIRMca\n\t1Owb9ROprMtpYMQGl0Ojbz0+XCCH4B4QCXOM3MefmoN8SLzFqL7EA298Oev3u2AHu4DL\n\toLhBB77xcETafdzW8JmH/MHLVcxq0aNwE88SQPxq3WLzBcNzonCeb4OL+/GK87ItfCIl\n\tLH/A=="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected) header.d=raspberrypi.com\n\theader.i=@raspberrypi.com\n\theader.b=\"Z+osmpG6\"; dkim-atps=neutral","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20210112;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to\n\t:references:mime-version:content-transfer-encoding;\n\tbh=6FjY6WN+r/Z7EK23A4EC/a+TpOYs99V1qqD59adpGqg=;\n\tb=Qn5M35IRD2sQuUNHIpXS0wXxAQ3IgEpYtzcsRJpLO3L9m7NDxvYfCU6b1ycL6wEb2O\n\t6LbjhayiaLYOVdjHuMEHx886weAEKJ2bm75YhAJQkqc9QtGK+QyFfjoCW66wA13PGobD\n\t12TKkfeTobb2999w7/SrIjD2RUVWh99fkAxSxjL47Io3tIF5lpFGGUFr7E4ZIOm+/DMw\n\tA2otpcpvBjZeS9LjKkPE/7XBIu1pnJDMTumnzmYwTFu7xyp6nbISlthKrIIlbhTDsNim\n\t13k9hjFzE4mXSh+27k9LtJ+AvidIHaa8FYuzlEtZjQqZfIiKvwJUvNTGQS7PTIAuIJ0s\n\tsFjw==","X-Gm-Message-State":"AJIora+CGvIe7U+kVdfA712HiA0E5bxj7TLTfsJT5M5tT00fHSkyqecR\n\tgrDHbD+dvYI+VlFrrcv7lpWL6OwbDAp7lA==","X-Google-Smtp-Source":"AGRyM1vIuSg0EpXwn0ADGbFZGemzHdGA5os3xLyae3EkMlzPZNsyf4E2a+5wjMMoZDdm1ST33ZjLkg==","X-Received":"by 2002:a05:600c:2c46:b0:3a3:3248:32a6 with SMTP id\n\tr6-20020a05600c2c4600b003a3324832a6mr14437696wmg.179.1658756812113; \n\tMon, 25 Jul 2022 06:46:52 -0700 (PDT)","To":"libcamera-devel@lists.libcamera.org","Date":"Mon, 25 Jul 2022 14:46:35 +0100","Message-Id":"<20220725134639.4572-12-naush@raspberrypi.com>","X-Mailer":"git-send-email 
2.25.1","In-Reply-To":"<20220725134639.4572-1-naush@raspberrypi.com>","References":"<20220725134639.4572-1-naush@raspberrypi.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH 11/15] ipa: raspberrypi: Change to C style\n\tcode comments","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Naushir Patuck via libcamera-devel\n\t<libcamera-devel@lists.libcamera.org>","Reply-To":"Naushir Patuck <naush@raspberrypi.com>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"As part of the on-going refactor efforts for the source files in\nsrc/ipa/raspberrypi/, switch all C++ style comments to C style comments.\n\nSigned-off-by: Naushir Patuck <naush@raspberrypi.com>\n---\n src/ipa/raspberrypi/cam_helper.hpp            |  98 ++++---\n .../raspberrypi/controller/agc_algorithm.hpp  |   4 +-\n src/ipa/raspberrypi/controller/agc_status.h   |  18 +-\n src/ipa/raspberrypi/controller/algorithm.cpp  |   2 +-\n src/ipa/raspberrypi/controller/algorithm.hpp  |  16 +-\n src/ipa/raspberrypi/controller/alsc_status.h  |   6 +-\n .../raspberrypi/controller/awb_algorithm.hpp  |   4 +-\n src/ipa/raspberrypi/controller/awb_status.h   |   6 +-\n .../controller/black_level_status.h           |   4 +-\n src/ipa/raspberrypi/controller/camera_mode.h  
|  30 +-\n .../raspberrypi/controller/ccm_algorithm.hpp  |   4 +-\n src/ipa/raspberrypi/controller/ccm_status.h   |   2 +-\n .../controller/contrast_algorithm.hpp         |   4 +-\n .../raspberrypi/controller/contrast_status.h  |   6 +-\n src/ipa/raspberrypi/controller/controller.cpp |   6 +-\n src/ipa/raspberrypi/controller/controller.hpp |  20 +-\n .../controller/denoise_algorithm.hpp          |   4 +-\n .../raspberrypi/controller/denoise_status.h   |   2 +-\n src/ipa/raspberrypi/controller/dpc_status.h   |   4 +-\n src/ipa/raspberrypi/controller/focus_status.h |   8 +-\n src/ipa/raspberrypi/controller/geq_status.h   |   2 +-\n src/ipa/raspberrypi/controller/histogram.cpp  |   8 +-\n src/ipa/raspberrypi/controller/histogram.hpp  |  18 +-\n src/ipa/raspberrypi/controller/lux_status.h   |  18 +-\n src/ipa/raspberrypi/controller/metadata.hpp   |  20 +-\n src/ipa/raspberrypi/controller/noise_status.h |   2 +-\n src/ipa/raspberrypi/controller/pwl.cpp        |  40 ++-\n src/ipa/raspberrypi/controller/pwl.hpp        |  60 ++--\n src/ipa/raspberrypi/controller/rpi/agc.cpp    | 269 +++++++++++-------\n src/ipa/raspberrypi/controller/rpi/agc.hpp    |  24 +-\n src/ipa/raspberrypi/controller/rpi/alsc.cpp   | 180 +++++++-----\n src/ipa/raspberrypi/controller/rpi/alsc.hpp   |  50 ++--\n src/ipa/raspberrypi/controller/rpi/awb.cpp    | 192 ++++++++-----\n src/ipa/raspberrypi/controller/rpi/awb.hpp    | 112 ++++----\n .../controller/rpi/black_level.cpp            |  10 +-\n .../controller/rpi/black_level.hpp            |   4 +-\n src/ipa/raspberrypi/controller/rpi/ccm.cpp    |  20 +-\n src/ipa/raspberrypi/controller/rpi/ccm.hpp    |   4 +-\n .../raspberrypi/controller/rpi/contrast.cpp   |  74 +++--\n .../raspberrypi/controller/rpi/contrast.hpp   |   8 +-\n src/ipa/raspberrypi/controller/rpi/dpc.cpp    |  10 +-\n src/ipa/raspberrypi/controller/rpi/dpc.hpp    |   4 +-\n src/ipa/raspberrypi/controller/rpi/geq.cpp    |  10 +-\n src/ipa/raspberrypi/controller/rpi/geq.hpp    |   6 +-\n 
src/ipa/raspberrypi/controller/rpi/lux.cpp    |  16 +-\n src/ipa/raspberrypi/controller/rpi/lux.hpp    |  14 +-\n src/ipa/raspberrypi/controller/rpi/noise.cpp  |  24 +-\n src/ipa/raspberrypi/controller/rpi/noise.hpp  |   6 +-\n src/ipa/raspberrypi/controller/rpi/sdn.cpp    |  12 +-\n src/ipa/raspberrypi/controller/rpi/sdn.hpp    |   4 +-\n .../raspberrypi/controller/rpi/sharpen.cpp    |  32 ++-\n .../raspberrypi/controller/rpi/sharpen.hpp    |   4 +-\n .../controller/sharpen_algorithm.hpp          |   4 +-\n .../raspberrypi/controller/sharpen_status.h   |  10 +-\n src/ipa/raspberrypi/md_parser.hpp             |   2 +-\n 55 files changed, 890 insertions(+), 631 deletions(-)","diff":"diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\nindex 0cd718c4bc4e..2408fa154d3d 100644\n--- a/src/ipa/raspberrypi/cam_helper.hpp\n+++ b/src/ipa/raspberrypi/cam_helper.hpp\n@@ -21,50 +21,52 @@\n \n namespace RPiController {\n \n-// The CamHelper class provides a number of facilities that anyone trying\n-// to drive a camera will need to know, but which are not provided by the\n-// standard driver framework. Specifically, it provides:\n-//\n-// A \"CameraMode\" structure to describe extra information about the chosen\n-// mode of the driver. For example, how it is cropped from the full sensor\n-// area, how it is scaled, whether pixels are averaged compared to the full\n-// resolution.\n-//\n-// The ability to convert between number of lines of exposure and actual\n-// exposure time, and to convert between the sensor's gain codes and actual\n-// gains.\n-//\n-// A function to return the number of frames of delay between updating exposure,\n-// analogue gain and vblanking, and for the changes to take effect. 
For many\n-// sensors these take the values 2, 1 and 2 respectively, but sensors that are\n-// different will need to over-ride the default function provided.\n-//\n-// A function to query if the sensor outputs embedded data that can be parsed.\n-//\n-// A function to return the sensitivity of a given camera mode.\n-//\n-// A parser to parse the embedded data buffers provided by some sensors (for\n-// example, the imx219 does; the ov5647 doesn't). This allows us to know for\n-// sure the exposure and gain of the frame we're looking at. CamHelper\n-// provides functions for converting analogue gains to and from the sensor's\n-// native gain codes.\n-//\n-// Finally, a set of functions that determine how to handle the vagaries of\n-// different camera modules on start-up or when switching modes. Some\n-// modules may produce one or more frames that are not yet correctly exposed,\n-// or where the metadata may be suspect. We have the following functions:\n-// HideFramesStartup(): Tell the pipeline handler not to return this many\n-//     frames at start-up. This can also be used to hide initial frames\n-//     while the AGC and other algorithms are sorting themselves out.\n-// HideFramesModeSwitch(): Tell the pipeline handler not to return this\n-//     many frames after a mode switch (other than start-up). Some sensors\n-//     may produce innvalid frames after a mode switch; others may not.\n-// MistrustFramesStartup(): At start-up a sensor may return frames for\n-//    which we should not run any control algorithms (for example, metadata\n-//    may be invalid).\n-// MistrustFramesModeSwitch(): The number of frames, after a mode switch\n-//    (other than start-up), for which control algorithms should not run\n-//    (for example, metadata may be unreliable).\n+/*\n+ * The CamHelper class provides a number of facilities that anyone trying\n+ * to drive a camera will need to know, but which are not provided by the\n+ * standard driver framework. 
Specifically, it provides:\n+ *\n+ * A \"CameraMode\" structure to describe extra information about the chosen\n+ * mode of the driver. For example, how it is cropped from the full sensor\n+ * area, how it is scaled, whether pixels are averaged compared to the full\n+ * resolution.\n+ *\n+ * The ability to convert between number of lines of exposure and actual\n+ * exposure time, and to convert between the sensor's gain codes and actual\n+ * gains.\n+ *\n+ * A function to return the number of frames of delay between updating exposure,\n+ * analogue gain and vblanking, and for the changes to take effect. For many\n+ * sensors these take the values 2, 1 and 2 respectively, but sensors that are\n+ * different will need to over-ride the default function provided.\n+ *\n+ * A function to query if the sensor outputs embedded data that can be parsed.\n+ *\n+ * A function to return the sensitivity of a given camera mode.\n+ *\n+ * A parser to parse the embedded data buffers provided by some sensors (for\n+ * example, the imx219 does; the ov5647 doesn't). This allows us to know for\n+ * sure the exposure and gain of the frame we're looking at. CamHelper\n+ * provides functions for converting analogue gains to and from the sensor's\n+ * native gain codes.\n+ *\n+ * Finally, a set of functions that determine how to handle the vagaries of\n+ * different camera modules on start-up or when switching modes. Some\n+ * modules may produce one or more frames that are not yet correctly exposed,\n+ * or where the metadata may be suspect. We have the following functions:\n+ * HideFramesStartup(): Tell the pipeline handler not to return this many\n+ *     frames at start-up. This can also be used to hide initial frames\n+ *     while the AGC and other algorithms are sorting themselves out.\n+ * HideFramesModeSwitch(): Tell the pipeline handler not to return this\n+ *     many frames after a mode switch (other than start-up). 
Some sensors\n+ *     may produce innvalid frames after a mode switch; others may not.\n+ * MistrustFramesStartup(): At start-up a sensor may return frames for\n+ *    which we should not run any control algorithms (for example, metadata\n+ *    may be invalid).\n+ * MistrustFramesModeSwitch(): The number of frames, after a mode switch\n+ *    (other than start-up), for which control algorithms should not run\n+ *    (for example, metadata may be unreliable).\n+ */\n \n class CamHelper\n {\n@@ -110,8 +112,10 @@ private:\n \tunsigned int frameIntegrationDiff_;\n };\n \n-// This is for registering camera helpers with the system, so that the\n-// CamHelper::Create function picks them up automatically.\n+/*\n+ * This is for registering camera helpers with the system, so that the\n+ * CamHelper::Create function picks them up automatically.\n+ */\n \n typedef CamHelper *(*CamHelperCreateFunc)();\n struct RegisterCamHelper\n@@ -120,4 +124,4 @@ struct RegisterCamHelper\n \t\t\t  CamHelperCreateFunc createFunc);\n };\n \n-} // namespace RPi\n+} /* namespace RPi */\ndiff --git a/src/ipa/raspberrypi/controller/agc_algorithm.hpp b/src/ipa/raspberrypi/controller/agc_algorithm.hpp\nindex 51900b687778..b718e595193b 100644\n--- a/src/ipa/raspberrypi/controller/agc_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/agc_algorithm.hpp\n@@ -16,7 +16,7 @@ class AgcAlgorithm : public Algorithm\n {\n public:\n \tAgcAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// An AGC algorithm must provide the following:\n+\t/* An AGC algorithm must provide the following: */\n \tvirtual unsigned int getConvergenceFrames() const = 0;\n \tvirtual void setEv(double ev) = 0;\n \tvirtual void setFlickerPeriod(libcamera::utils::Duration flickerPeriod) = 0;\n@@ -28,4 +28,4 @@ public:\n \tvirtual void setConstraintMode(std::string const &contraintModeName) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/agc_status.h 
b/src/ipa/raspberrypi/controller/agc_status.h\nindex 5d04c61d04bd..6723bc9e8200 100644\n--- a/src/ipa/raspberrypi/controller/agc_status.h\n+++ b/src/ipa/raspberrypi/controller/agc_status.h\n@@ -8,20 +8,24 @@\n \n #include <libcamera/base/utils.h>\n \n-// The AGC algorithm should post the following structure into the image's\n-// \"agc.status\" metadata.\n+/*\n+ * The AGC algorithm should post the following structure into the image's\n+ * \"agc.status\" metadata.\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\n #endif\n \n-// Note: total_exposure_value will be reported as zero until the algorithm has\n-// seen statistics and calculated meaningful values. The contents should be\n-// ignored until then.\n+/*\n+ * Note: total_exposure_value will be reported as zero until the algorithm has\n+ * seen statistics and calculated meaningful values. The contents should be\n+ * ignored until then.\n+ */\n \n struct AgcStatus {\n-\tlibcamera::utils::Duration totalExposureValue; // value for all exposure and gain for this image\n-\tlibcamera::utils::Duration targetExposureValue; // (unfiltered) target total exposure AGC is aiming for\n+\tlibcamera::utils::Duration totalExposureValue; /* value for all exposure and gain for this image */\n+\tlibcamera::utils::Duration targetExposureValue; /* (unfiltered) target total exposure AGC is aiming for */\n \tlibcamera::utils::Duration shutterTime;\n \tdouble analogueGain;\n \tchar exposureMode[32];\ndiff --git a/src/ipa/raspberrypi/controller/algorithm.cpp b/src/ipa/raspberrypi/controller/algorithm.cpp\nindex cfcd18a96c93..e3afa647bdd2 100644\n--- a/src/ipa/raspberrypi/controller/algorithm.cpp\n+++ b/src/ipa/raspberrypi/controller/algorithm.cpp\n@@ -31,7 +31,7 @@ void Algorithm::process([[maybe_unused]] StatisticsPtr &stats,\n {\n }\n \n-// For registering algorithms with the system:\n+/* For registering algorithms with the system: */\n \n static std::map<std::string, AlgoCreateFunc> algorithms;\n std::map<std::string, AlgoCreateFunc> 
const &RPiController::getAlgorithms()\ndiff --git a/src/ipa/raspberrypi/controller/algorithm.hpp b/src/ipa/raspberrypi/controller/algorithm.hpp\nindex a33b14da2726..cad7c15ba5c8 100644\n--- a/src/ipa/raspberrypi/controller/algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/algorithm.hpp\n@@ -6,8 +6,10 @@\n  */\n #pragma once\n \n-// All algorithms should be derived from this class and made available to the\n-// Controller.\n+/*\n+ * All algorithms should be derived from this class and made available to the\n+ * Controller.\n+ */\n \n #include <string>\n #include <memory>\n@@ -19,7 +21,7 @@\n \n namespace RPiController {\n \n-// This defines the basic interface for all control algorithms.\n+/* This defines the basic interface for all control algorithms. */\n \n class Algorithm\n {\n@@ -48,8 +50,10 @@ private:\n \tbool paused_;\n };\n \n-// This code is for automatic registration of Front End algorithms with the\n-// system.\n+/*\n+ * This code is for automatic registration of Front End algorithms with the\n+ * system.\n+ */\n \n typedef Algorithm *(*AlgoCreateFunc)(Controller *controller);\n struct RegisterAlgorithm {\n@@ -57,4 +61,4 @@ struct RegisterAlgorithm {\n };\n std::map<std::string, AlgoCreateFunc> const &getAlgorithms();\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/alsc_status.h b/src/ipa/raspberrypi/controller/alsc_status.h\nindex d3f579715594..e074f9359faa 100644\n--- a/src/ipa/raspberrypi/controller/alsc_status.h\n+++ b/src/ipa/raspberrypi/controller/alsc_status.h\n@@ -6,8 +6,10 @@\n  */\n #pragma once\n \n-// The ALSC algorithm should post the following structure into the image's\n-// \"alsc.status\" metadata.\n+/*\n+ * The ALSC algorithm should post the following structure into the image's\n+ * \"alsc.status\" metadata.\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/awb_algorithm.hpp 
b/src/ipa/raspberrypi/controller/awb_algorithm.hpp\nindex c5d2ca90263c..0de74fce4269 100644\n--- a/src/ipa/raspberrypi/controller/awb_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/awb_algorithm.hpp\n@@ -14,10 +14,10 @@ class AwbAlgorithm : public Algorithm\n {\n public:\n \tAwbAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// An AWB algorithm must provide the following:\n+\t/* An AWB algorithm must provide the following: */\n \tvirtual unsigned int getConvergenceFrames() const = 0;\n \tvirtual void setMode(std::string const &modeName) = 0;\n \tvirtual void setManualGains(double manualR, double manualB) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/awb_status.h b/src/ipa/raspberrypi/controller/awb_status.h\nindex bc428ed3206a..2f6e88ef6e7f 100644\n--- a/src/ipa/raspberrypi/controller/awb_status.h\n+++ b/src/ipa/raspberrypi/controller/awb_status.h\n@@ -6,8 +6,10 @@\n  */\n #pragma once\n \n-// The AWB algorithm places its results into both the image and global metadata,\n-// under the tag \"awb.status\".\n+/*\n+ * The AWB algorithm places its results into both the image and global metadata,\n+ * under the tag \"awb.status\".\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/black_level_status.h b/src/ipa/raspberrypi/controller/black_level_status.h\nindex d085f64b27fe..ad83ddad5283 100644\n--- a/src/ipa/raspberrypi/controller/black_level_status.h\n+++ b/src/ipa/raspberrypi/controller/black_level_status.h\n@@ -6,14 +6,14 @@\n  */\n #pragma once\n \n-// The \"black level\" algorithm stores the black levels to use.\n+/* The \"black level\" algorithm stores the black levels to use. 
*/\n \n #ifdef __cplusplus\n extern \"C\" {\n #endif\n \n struct BlackLevelStatus {\n-\tuint16_t black_level_r; // out of 16 bits\n+\tuint16_t black_level_r; /* out of 16 bits */\n \tuint16_t black_level_g;\n \tuint16_t black_level_b;\n };\ndiff --git a/src/ipa/raspberrypi/controller/camera_mode.h b/src/ipa/raspberrypi/controller/camera_mode.h\nindex 8b81ca9df725..47a0fea424ca 100644\n--- a/src/ipa/raspberrypi/controller/camera_mode.h\n+++ b/src/ipa/raspberrypi/controller/camera_mode.h\n@@ -10,9 +10,11 @@\n \n #include <libcamera/base/utils.h>\n \n-// Description of a \"camera mode\", holding enough information for control\n-// algorithms to adapt their behaviour to the different modes of the camera,\n-// including binning, scaling, cropping etc.\n+/*\n+ * Description of a \"camera mode\", holding enough information for control\n+ * algorithms to adapt their behaviour to the different modes of the camera,\n+ * including binning, scaling, cropping etc.\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\n@@ -21,27 +23,27 @@ extern \"C\" {\n #define CAMERA_MODE_NAME_LEN 32\n \n struct CameraMode {\n-\t// bit depth of the raw camera output\n+\t/* bit depth of the raw camera output */\n \tuint32_t bitdepth;\n-\t// size in pixels of frames in this mode\n+\t/* size in pixels of frames in this mode */\n \tuint16_t width, height;\n-\t// size of full resolution uncropped frame (\"sensor frame\")\n+\t/* size of full resolution uncropped frame (\"sensor frame\") */\n \tuint16_t sensorWidth, sensorHeight;\n-\t// binning factor (1 = no binning, 2 = 2-pixel binning etc.)\n+\t/* binning factor (1 = no binning, 2 = 2-pixel binning etc.) 
*/\n \tuint8_t binX, binY;\n-\t// location of top left pixel in the sensor frame\n+\t/* location of top left pixel in the sensor frame */\n \tuint16_t cropX, cropY;\n-\t// scaling factor (so if uncropped, width*scaleX is sensorWidth)\n+\t/* scaling factor (so if uncropped, width*scaleX is sensorWidth) */\n \tdouble scaleX, scaleY;\n-\t// scaling of the noise compared to the native sensor mode\n+\t/* scaling of the noise compared to the native sensor mode */\n \tdouble noiseFactor;\n-\t// line time\n+\t/* line time */\n \tlibcamera::utils::Duration lineLength;\n-\t// any camera transform *not* reflected already in the camera tuning\n+\t/* any camera transform *not* reflected already in the camera tuning */\n \tlibcamera::Transform transform;\n-\t// minimum and maximum fame lengths in units of lines\n+\t/* minimum and maximum fame lengths in units of lines */\n \tuint32_t minFrameLength, maxFrameLength;\n-\t// sensitivity of this mode\n+\t/* sensitivity of this mode */\n \tdouble sensitivity;\n };\n \ndiff --git a/src/ipa/raspberrypi/controller/ccm_algorithm.hpp b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp\nindex b8b5879ba99c..9c7172f5782d 100644\n--- a/src/ipa/raspberrypi/controller/ccm_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp\n@@ -14,8 +14,8 @@ class CcmAlgorithm : public Algorithm\n {\n public:\n \tCcmAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// A CCM algorithm must provide the following:\n+\t/* A CCM algorithm must provide the following: */\n \tvirtual void setSaturation(double saturation) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/ccm_status.h b/src/ipa/raspberrypi/controller/ccm_status.h\nindex 7e41dd1ff3c0..4cdd8bed0311 100644\n--- a/src/ipa/raspberrypi/controller/ccm_status.h\n+++ b/src/ipa/raspberrypi/controller/ccm_status.h\n@@ -6,7 +6,7 @@\n  */\n #pragma once\n \n-// The \"ccm\" algorithm generates an appropriate colour 
matrix.\n+/* The \"ccm\" algorithm generates an appropriate colour matrix. */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/contrast_algorithm.hpp b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp\nindex c76f3cd759ba..1c0562e1c4a2 100644\n--- a/src/ipa/raspberrypi/controller/contrast_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp\n@@ -14,9 +14,9 @@ class ContrastAlgorithm : public Algorithm\n {\n public:\n \tContrastAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// A contrast algorithm must provide the following:\n+\t/* A contrast algorithm must provide the following: */\n \tvirtual void setBrightness(double brightness) = 0;\n \tvirtual void setContrast(double contrast) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/contrast_status.h b/src/ipa/raspberrypi/controller/contrast_status.h\nindex d7edd4e9990d..5eb084f78e71 100644\n--- a/src/ipa/raspberrypi/controller/contrast_status.h\n+++ b/src/ipa/raspberrypi/controller/contrast_status.h\n@@ -6,8 +6,10 @@\n  */\n #pragma once\n \n-// The \"contrast\" algorithm creates a gamma curve, optionally doing a little bit\n-// of contrast stretching based on the AGC histogram.\n+/*\n+ * The \"contrast\" algorithm creates a gamma curve, optionally doing a little bit\n+ * of contrast stretching based on the AGC histogram.\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/controller.cpp b/src/ipa/raspberrypi/controller/controller.cpp\nindex e0b152c74384..6d95fa55d1e4 100644\n--- a/src/ipa/raspberrypi/controller/controller.cpp\n+++ b/src/ipa/raspberrypi/controller/controller.cpp\n@@ -89,8 +89,10 @@ Metadata &Controller::getGlobalMetadata()\n \n Algorithm *Controller::getAlgorithm(std::string const &name) const\n {\n-\t// The passed name must be the entire algorithm name, or must match the\n-\t// last part of it with a period 
(.) just before.\n+\t/*\n+\t * The passed name must be the entire algorithm name, or must match the\n+\t * last part of it with a period (.) just before.\n+\t */\n \tsize_t nameLen = name.length();\n \tfor (auto &algo : algorithms_) {\n \t\tchar const *algoName = algo->name();\ndiff --git a/src/ipa/raspberrypi/controller/controller.hpp b/src/ipa/raspberrypi/controller/controller.hpp\nindex a5e1eb38ab9d..29b2e8f34826 100644\n--- a/src/ipa/raspberrypi/controller/controller.hpp\n+++ b/src/ipa/raspberrypi/controller/controller.hpp\n@@ -6,9 +6,11 @@\n  */\n #pragma once\n \n-// The Controller is simply a container for a collecting together a number of\n-// \"control algorithms\" (such as AWB etc.) and for running them all in a\n-// convenient manner.\n+/*\n+ * The Controller is simply a container for a collecting together a number of\n+ * \"control algorithms\" (such as AWB etc.) and for running them all in a\n+ * convenient manner.\n+ */\n \n #include <vector>\n #include <string>\n@@ -25,10 +27,12 @@ class Algorithm;\n typedef std::unique_ptr<Algorithm> AlgorithmPtr;\n typedef std::shared_ptr<bcm2835_isp_stats> StatisticsPtr;\n \n-// The Controller holds a pointer to some global_metadata, which is how\n-// different controllers and control algorithms within them can exchange\n-// information. The Prepare function returns a pointer to metadata for this\n-// specific image, and which should be passed on to the Process function.\n+/*\n+ * The Controller holds a pointer to some global_metadata, which is how\n+ * different controllers and control algorithms within them can exchange\n+ * information. 
The Prepare function returns a pointer to metadata for this\n+ * specific image, and which should be passed on to the Process function.\n+ */\n \n class Controller\n {\n@@ -51,4 +55,4 @@ protected:\n \tbool switchModeCalled_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/denoise_algorithm.hpp b/src/ipa/raspberrypi/controller/denoise_algorithm.hpp\nindex 48de542ac4f3..7004fe55b41f 100644\n--- a/src/ipa/raspberrypi/controller/denoise_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/denoise_algorithm.hpp\n@@ -16,8 +16,8 @@ class DenoiseAlgorithm : public Algorithm\n {\n public:\n \tDenoiseAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// A Denoise algorithm must provide the following:\n+\t/* A Denoise algorithm must provide the following: */\n \tvirtual void setMode(DenoiseMode mode) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/denoise_status.h b/src/ipa/raspberrypi/controller/denoise_status.h\nindex 67a3c361387e..a41e7e89c428 100644\n--- a/src/ipa/raspberrypi/controller/denoise_status.h\n+++ b/src/ipa/raspberrypi/controller/denoise_status.h\n@@ -6,7 +6,7 @@\n  */\n #pragma once\n \n-// This stores the parameters required for Denoise.\n+/* This stores the parameters required for Denoise. */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/dpc_status.h b/src/ipa/raspberrypi/controller/dpc_status.h\nindex a3ec2762573b..c99ad8c500a6 100644\n--- a/src/ipa/raspberrypi/controller/dpc_status.h\n+++ b/src/ipa/raspberrypi/controller/dpc_status.h\n@@ -6,14 +6,14 @@\n  */\n #pragma once\n \n-// The \"DPC\" algorithm sets defective pixel correction strength.\n+/* The \"DPC\" algorithm sets defective pixel correction strength. 
*/\n \n #ifdef __cplusplus\n extern \"C\" {\n #endif\n \n struct DpcStatus {\n-\tint strength; // 0 = \"off\", 1 = \"normal\", 2 = \"strong\"\n+\tint strength; /* 0 = \"off\", 1 = \"normal\", 2 = \"strong\" */\n };\n \n #ifdef __cplusplus\ndiff --git a/src/ipa/raspberrypi/controller/focus_status.h b/src/ipa/raspberrypi/controller/focus_status.h\nindex 656455100b45..c75795dc0621 100644\n--- a/src/ipa/raspberrypi/controller/focus_status.h\n+++ b/src/ipa/raspberrypi/controller/focus_status.h\n@@ -8,9 +8,11 @@\n \n #include <linux/bcm2835-isp.h>\n \n-// The focus algorithm should post the following structure into the image's\n-// \"focus.status\" metadata. Recall that it's only reporting focus (contrast)\n-// measurements, it's not driving any kind of auto-focus algorithm!\n+/*\n+ * The focus algorithm should post the following structure into the image's\n+ * \"focus.status\" metadata. Recall that it's only reporting focus (contrast)\n+ * measurements, it's not driving any kind of auto-focus algorithm!\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/geq_status.h b/src/ipa/raspberrypi/controller/geq_status.h\nindex 07fd5f0347ef..0ebb7ce71d5b 100644\n--- a/src/ipa/raspberrypi/controller/geq_status.h\n+++ b/src/ipa/raspberrypi/controller/geq_status.h\n@@ -6,7 +6,7 @@\n  */\n #pragma once\n \n-// The \"GEQ\" algorithm calculates the green equalisation thresholds\n+/* The \"GEQ\" algorithm calculates the green equalisation thresholds */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/histogram.cpp b/src/ipa/raspberrypi/controller/histogram.cpp\nindex e865bef0057b..91a759b53d34 100644\n--- a/src/ipa/raspberrypi/controller/histogram.cpp\n+++ b/src/ipa/raspberrypi/controller/histogram.cpp\n@@ -30,13 +30,13 @@ double Histogram::quantile(double q, int first, int last) const\n \t\tlast = cumulative_.size() - 2;\n \tassert(first <= last);\n \tuint64_t items = q * total();\n-\twhile (first < last) // 
binary search to find the right bin\n+\twhile (first < last) /* binary search to find the right bin */\n \t{\n \t\tint middle = (first + last) / 2;\n \t\tif (cumulative_[middle + 1] > items)\n-\t\t\tlast = middle; // between first and middle\n+\t\t\tlast = middle; /* between first and middle */\n \t\telse\n-\t\t\tfirst = middle + 1; // after middle\n+\t\t\tfirst = middle + 1; /* after middle */\n \t}\n \tassert(items >= cumulative_[first] && items <= cumulative_[last + 1]);\n \tdouble frac = cumulative_[first + 1] == cumulative_[first] ? 0\n@@ -59,6 +59,6 @@ double Histogram::interQuantileMean(double qLo, double qHi) const\n \t\tsumBinFreq += bin * freq;\n \t\tcumulFreq += freq;\n \t}\n-\t// add 0.5 to give an average for bin mid-points\n+\t/* add 0.5 to give an average for bin mid-points */\n \treturn sumBinFreq / cumulFreq + 0.5;\n }\ndiff --git a/src/ipa/raspberrypi/controller/histogram.hpp b/src/ipa/raspberrypi/controller/histogram.hpp\nindex 4ff5a56b0243..2ed8d9713764 100644\n--- a/src/ipa/raspberrypi/controller/histogram.hpp\n+++ b/src/ipa/raspberrypi/controller/histogram.hpp\n@@ -10,8 +10,10 @@\n #include <vector>\n #include <cassert>\n \n-// A simple histogram class, for use in particular to find \"quantiles\" and\n-// averages between \"quantiles\".\n+/*\n+ * A simple histogram class, for use in particular to find \"quantiles\" and\n+ * averages between \"quantiles\".\n+ */\n \n namespace RPiController {\n \n@@ -29,16 +31,18 @@ public:\n \t}\n \tuint32_t bins() const { return cumulative_.size() - 1; }\n \tuint64_t total() const { return cumulative_[cumulative_.size() - 1]; }\n-\t// Cumulative frequency up to a (fractional) point in a bin.\n+\t/* Cumulative frequency up to a (fractional) point in a bin. */\n \tuint64_t cumulativeFreq(double bin) const;\n-\t// Return the (fractional) bin of the point q (0 <= q <= 1) through the\n-\t// histogram. 
Optionally provide limits to help.\n+\t/*\n+\t * Return the (fractional) bin of the point q (0 <= q <= 1) through the\n+\t * histogram. Optionally provide limits to help.\n+\t */\n \tdouble quantile(double q, int first = -1, int last = -1) const;\n-\t// Return the average histogram bin value between the two quantiles.\n+\t/* Return the average histogram bin value between the two quantiles. */\n \tdouble interQuantileMean(double qLo, double qHi) const;\n \n private:\n \tstd::vector<uint64_t> cumulative_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/lux_status.h b/src/ipa/raspberrypi/controller/lux_status.h\nindex 8ccfd933829b..c1134bec3694 100644\n--- a/src/ipa/raspberrypi/controller/lux_status.h\n+++ b/src/ipa/raspberrypi/controller/lux_status.h\n@@ -6,14 +6,16 @@\n  */\n #pragma once\n \n-// The \"lux\" algorithm looks at the (AGC) histogram statistics of the frame and\n-// estimates the current lux level of the scene. It does this by a simple ratio\n-// calculation comparing to a reference image that was taken in known conditions\n-// with known statistics and a properly measured lux level. There is a slight\n-// problem with aperture, in that it may be variable without the system knowing\n-// or being aware of it. In this case an external application may set a\n-// \"current_aperture\" value if it wishes, which would be used in place of the\n-// (presumably meaningless) value in the image metadata.\n+/*\n+ * The \"lux\" algorithm looks at the (AGC) histogram statistics of the frame and\n+ * estimates the current lux level of the scene. It does this by a simple ratio\n+ * calculation comparing to a reference image that was taken in known conditions\n+ * with known statistics and a properly measured lux level. There is a slight\n+ * problem with aperture, in that it may be variable without the system knowing\n+ * or being aware of it. 
In this case an external application may set a\n+ * \"current_aperture\" value if it wishes, which would be used in place of the\n+ * (presumably meaningless) value in the image metadata.\n+ */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/metadata.hpp b/src/ipa/raspberrypi/controller/metadata.hpp\nindex a79a67d42cce..9f73e61ef91f 100644\n--- a/src/ipa/raspberrypi/controller/metadata.hpp\n+++ b/src/ipa/raspberrypi/controller/metadata.hpp\n@@ -6,7 +6,7 @@\n  */\n #pragma once\n \n-// A simple class for carrying arbitrary metadata, for example about an image.\n+/* A simple class for carrying arbitrary metadata, for example about an image. */\n \n #include <any>\n #include <map>\n@@ -81,8 +81,10 @@ public:\n \ttemplate<typename T>\n \tT *getLocked(std::string const &tag)\n \t{\n-\t\t// This allows in-place access to the Metadata contents,\n-\t\t// for which you should be holding the lock.\n+\t\t/*\n+\t\t * This allows in-place access to the Metadata contents,\n+\t\t * for which you should be holding the lock.\n+\t\t */\n \t\tauto it = data_.find(tag);\n \t\tif (it == data_.end())\n \t\t\treturn nullptr;\n@@ -92,13 +94,15 @@ public:\n \ttemplate<typename T>\n \tvoid setLocked(std::string const &tag, T const &value)\n \t{\n-\t\t// Use this only if you're holding the lock yourself.\n+\t\t/* Use this only if you're holding the lock yourself. */\n \t\tdata_[tag] = value;\n \t}\n \n-\t// Note: use of (lowercase) lock and unlock means you can create scoped\n-\t// locks with the standard lock classes.\n-\t// e.g. std::lock_guard<RPiController::Metadata> lock(metadata)\n+\t/*\n+\t * Note: use of (lowercase) lock and unlock means you can create scoped\n+\t * locks with the standard lock classes.\n+\t * e.g. 
std::lock_guard<RPiController::Metadata> lock(metadata)\n+\t */\n \tvoid lock() { mutex_.lock(); }\n \tvoid unlock() { mutex_.unlock(); }\n \n@@ -107,4 +111,4 @@ private:\n \tstd::map<std::string, std::any> data_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/noise_status.h b/src/ipa/raspberrypi/controller/noise_status.h\nindex 8439a40213aa..60b995f4fa4f 100644\n--- a/src/ipa/raspberrypi/controller/noise_status.h\n+++ b/src/ipa/raspberrypi/controller/noise_status.h\n@@ -6,7 +6,7 @@\n  */\n #pragma once\n \n-// The \"noise\" algorithm stores an estimate of the noise profile for this image.\n+/* The \"noise\" algorithm stores an estimate of the noise profile for this image. */\n \n #ifdef __cplusplus\n extern \"C\" {\ndiff --git a/src/ipa/raspberrypi/controller/pwl.cpp b/src/ipa/raspberrypi/controller/pwl.cpp\nindex 24ff3ea34f5f..d93cd2016dcf 100644\n--- a/src/ipa/raspberrypi/controller/pwl.cpp\n+++ b/src/ipa/raspberrypi/controller/pwl.cpp\n@@ -66,11 +66,15 @@ double Pwl::eval(double x, int *spanPtr, bool updateSpan) const\n \n int Pwl::findSpan(double x, int span) const\n {\n-\t// Pwls are generally small, so linear search may well be faster than\n-\t// binary, though could review this if large PWls start turning up.\n+\t/*\n+\t * Pwls are generally small, so linear search may well be faster than\n+\t * binary, though could review this if large PWls start turning up.\n+\t */\n \tint lastSpan = points_.size() - 2;\n-\t// some algorithms may call us with span pointing directly at the last\n-\t// control point\n+\t/*\n+\t * some algorithms may call us with span pointing directly at the last\n+\t * control point\n+\t */\n \tspan = std::max(0, std::min(lastSpan, span));\n \twhile (span < lastSpan && x >= points_[span + 1].x)\n \t\tspan++;\n@@ -87,7 +91,7 @@ Pwl::PerpType Pwl::invert(Point const &xy, Point &perp, int &span,\n \tfor (span = span + 1; span < (int)points_.size() - 1; span++) {\n 
\t\tPoint spanVec = points_[span + 1] - points_[span];\n \t\tdouble t = ((xy - points_[span]) % spanVec) / spanVec.len2();\n-\t\tif (t < -eps) // off the start of this span\n+\t\tif (t < -eps) /* off the start of this span */\n \t\t{\n \t\t\tif (span == 0) {\n \t\t\t\tperp = points_[span];\n@@ -96,14 +100,14 @@ Pwl::PerpType Pwl::invert(Point const &xy, Point &perp, int &span,\n \t\t\t\tperp = points_[span];\n \t\t\t\treturn PerpType::Vertex;\n \t\t\t}\n-\t\t} else if (t > 1 + eps) // off the end of this span\n+\t\t} else if (t > 1 + eps) /* off the end of this span */\n \t\t{\n \t\t\tif (span == (int)points_.size() - 2) {\n \t\t\t\tperp = points_[span + 1];\n \t\t\t\treturn PerpType::End;\n \t\t\t}\n \t\t\tprevOffEnd = true;\n-\t\t} else // a true perpendicular\n+\t\t} else /* a true perpendicular */\n \t\t{\n \t\t\tperp = points_[span] + spanVec * t;\n \t\t\treturn PerpType::Perpendicular;\n@@ -133,9 +137,11 @@ Pwl Pwl::inverse(bool *trueInverse, const double eps) const\n \t\t\tneither = true;\n \t}\n \n-\t// This is not a proper inverse if we found ourselves putting points\n-\t// onto both ends of the inverse, or if there were points that couldn't\n-\t// go on either.\n+\t/*\n+\t * This is not a proper inverse if we found ourselves putting points\n+\t * onto both ends of the inverse, or if there were points that couldn't\n+\t * go on either.\n+\t */\n \tif (trueInverse)\n \t\t*trueInverse = !(neither || (appended && prepended));\n \n@@ -154,8 +160,10 @@ Pwl Pwl::compose(Pwl const &other, const double eps) const\n \t\t    otherSpan + 1 < (int)other.points_.size() &&\n \t\t    points_[thisSpan + 1].y >=\n \t\t\t    other.points_[otherSpan + 1].x + eps) {\n-\t\t\t// next control point in result will be where this\n-\t\t\t// function's y reaches the next span in other\n+\t\t\t/*\n+\t\t\t * next control point in result will be where this\n+\t\t\t * function's y reaches the next span in other\n+\t\t\t */\n \t\t\tthisX = points_[thisSpan].x +\n 
\t\t\t\t(other.points_[otherSpan + 1].x -\n \t\t\t\t points_[thisSpan].y) *\n@@ -164,15 +172,17 @@ Pwl Pwl::compose(Pwl const &other, const double eps) const\n \t\t} else if (abs(dy) > eps && otherSpan > 0 &&\n\t\t\t   points_[thisSpan + 1].y <=\n\t\t\t\t   other.points_[otherSpan - 1].x - eps) {\n-\t\t\t// next control point in result will be where this\n-\t\t\t// function's y reaches the previous span in other\n+\t\t\t/*\n+\t\t\t * next control point in result will be where this\n+\t\t\t * function's y reaches the previous span in other\n+\t\t\t */\n \t\t\tthisX = points_[thisSpan].x +\n \t\t\t\t(other.points_[otherSpan + 1].x -\n \t\t\t\t points_[thisSpan].y) *\n \t\t\t\t\tdx / dy;\n \t\t\tthisY = other.points_[--otherSpan].x;\n \t\t} else {\n-\t\t\t// we stay in the same span in other\n+\t\t\t/* we stay in the same span in other */\n \t\t\tthisSpan++;\n \t\t\tthisX = points_[thisSpan].x,\n \t\t\tthisY = points_[thisSpan].y;\ndiff --git a/src/ipa/raspberrypi/controller/pwl.hpp b/src/ipa/raspberrypi/controller/pwl.hpp\nindex 4a38d1df5a33..e409c966baa0 100644\n--- a/src/ipa/raspberrypi/controller/pwl.hpp\n+++ b/src/ipa/raspberrypi/controller/pwl.hpp\n@@ -63,44 +63,56 @@ public:\n \tInterval domain() const;\n \tInterval range() const;\n \tbool empty() const;\n-\t// Evaluate Pwl, optionally supplying an initial guess for the\n-\t// \"span\". The \"span\" may be optionally be updated.  If you want to know\n-\t// the \"span\" value but don't have an initial guess you can set it to\n-\t// -1.\n+\t/*\n+\t * Evaluate Pwl, optionally supplying an initial guess for the\n+\t * \"span\". The \"span\" may optionally be updated.  
If you want to know\n+\t * the \"span\" value but don't have an initial guess you can set it to\n+\t * -1.\n+\t */\n \tdouble eval(double x, int *spanPtr = nullptr,\n \t\t    bool updateSpan = true) const;\n-\t// Find perpendicular closest to xy, starting from span+1 so you can\n-\t// call it repeatedly to check for multiple closest points (set span to\n-\t// -1 on the first call). Also returns \"pseudo\" perpendiculars; see\n-\t// PerpType enum.\n+\t/*\n+\t * Find perpendicular closest to xy, starting from span+1 so you can\n+\t * call it repeatedly to check for multiple closest points (set span to\n+\t * -1 on the first call). Also returns \"pseudo\" perpendiculars; see\n+\t * PerpType enum.\n+\t */\n \tenum class PerpType {\n-\t\tNone, // no perpendicular found\n-\t\tStart, // start of Pwl is closest point\n-\t\tEnd, // end of Pwl is closest point\n-\t\tVertex, // vertex of Pwl is closest point\n-\t\tPerpendicular // true perpendicular found\n+\t\tNone, /* no perpendicular found */\n+\t\tStart, /* start of Pwl is closest point */\n+\t\tEnd, /* end of Pwl is closest point */\n+\t\tVertex, /* vertex of Pwl is closest point */\n+\t\tPerpendicular /* true perpendicular found */\n \t};\n \tPerpType invert(Point const &xy, Point &perp, int &span,\n \t\t\tconst double eps = 1e-6) const;\n-\t// Compute the inverse function. Indicate if it is a proper (true)\n-\t// inverse, or only a best effort (e.g. input was non-monotonic).\n+\t/*\n+\t * Compute the inverse function. Indicate if it is a proper (true)\n+\t * inverse, or only a best effort (e.g. input was non-monotonic).\n+\t */\n \tPwl inverse(bool *trueInverse = nullptr, const double eps = 1e-6) const;\n-\t// Compose two Pwls together, doing \"this\" first and \"other\" after.\n+\t/* Compose two Pwls together, doing \"this\" first and \"other\" after. 
*/\n \tPwl compose(Pwl const &other, const double eps = 1e-6) const;\n-\t// Apply function to (x,y) values at every control point.\n+\t/* Apply function to (x,y) values at every control point. */\n \tvoid map(std::function<void(double x, double y)> f) const;\n-\t// Apply function to (x, y0, y1) values wherever either Pwl has a\n-\t// control point.\n+\t/*\n+\t * Apply function to (x, y0, y1) values wherever either Pwl has a\n+\t * control point.\n+\t */\n \tstatic void map2(Pwl const &pwl0, Pwl const &pwl1,\n \t\t\t std::function<void(double x, double y0, double y1)> f);\n-\t// Combine two Pwls, meaning we create a new Pwl where the y values are\n-\t// given by running f wherever either has a knot.\n+\t/*\n+\t * Combine two Pwls, meaning we create a new Pwl where the y values are\n+\t * given by running f wherever either has a knot.\n+\t */\n \tstatic Pwl\n \tcombine(Pwl const &pwl0, Pwl const &pwl1,\n \t\tstd::function<double(double x, double y0, double y1)> f,\n \t\tconst double eps = 1e-6);\n-\t// Make \"this\" match (at least) the given domain. Any extension my be\n-\t// clipped or linear.\n+\t/*\n+\t * Make \"this\" match (at least) the given domain. 
Any extension may be\n+\t * clipped or linear.\n+\t */\n \tvoid matchDomain(Interval const &domain, bool clip = true,\n\t\t\t const double eps = 1e-6);\n \tPwl &operator*=(double d);\n@@ -111,4 +123,4 @@ private:\n \tstd::vector<Point> points_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/agc.cpp b/src/ipa/raspberrypi/controller/rpi/agc.cpp\nindex 738cf56c6be0..ec737ea13332 100644\n--- a/src/ipa/raspberrypi/controller/rpi/agc.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/agc.cpp\n@@ -28,7 +28,7 @@ LOG_DEFINE_CATEGORY(RPiAgc)\n \n #define NAME \"rpi.agc\"\n \n-#define PIPELINE_BITS 13 // seems to be a 13-bit pipeline\n+#define PIPELINE_BITS 13 /* seems to be a 13-bit pipeline */\n \n void AgcMeteringMode::read(boost::property_tree::ptree const &params)\n {\n@@ -150,7 +150,7 @@ void AgcConfig::read(boost::property_tree::ptree const &params)\n \tconvergenceFrames = params.get<unsigned int>(\"convergence_frames\", 6);\n \tfastReduceThreshold = params.get<double>(\"fast_reduce_threshold\", 0.4);\n \tbaseEv = params.get<double>(\"base_ev\", 1.0);\n-\t// Start with quite a low value as ramping up is easier than ramping down.\n+\t/* Start with quite a low value as ramping up is easier than ramping down. */\n \tdefaultExposureTime = params.get<double>(\"default_exposure_time\", 1000) * 1us;\n \tdefaultAnalogueGain = params.get<double>(\"default_analogueGain\", 1.0);\n }\n@@ -170,8 +170,10 @@ Agc::Agc(Controller *controller)\n \t  maxShutter_(0s), fixedShutter_(0s), fixedAnalogueGain_(0.0)\n {\n \tmemset(&awb_, 0, sizeof(awb_));\n-\t// Setting status_.totalExposureValue_ to zero initially tells us\n-\t// it's not been calculated yet (i.e. Process hasn't yet run).\n+\t/*\n+\t * Setting status_.totalExposureValue_ to zero initially tells us\n+\t * it's not been calculated yet (i.e. 
Process hasn't yet run).\n+\t */\n \tmemset(&status_, 0, sizeof(status_));\n \tstatus_.ev = ev_;\n }\n@@ -185,16 +187,18 @@ void Agc::read(boost::property_tree::ptree const &params)\n {\n \tLOG(RPiAgc, Debug) << \"Agc\";\n \tconfig_.read(params);\n-\t// Set the config's defaults (which are the first ones it read) as our\n-\t// current modes, until someone changes them.  (they're all known to\n-\t// exist at this point)\n+\t/*\n+\t * Set the config's defaults (which are the first ones it read) as our\n+\t * current modes, until someone changes them.  (they're all known to\n+\t * exist at this point)\n+\t */\n \tmeteringModeName_ = config_.defaultMeteringMode;\n \tmeteringMode_ = &config_.meteringModes[meteringModeName_];\n \texposureModeName_ = config_.defaultExposureMode;\n \texposureMode_ = &config_.exposureModes[exposureModeName_];\n \tconstraintModeName_ = config_.defaultConstraintMode;\n \tconstraintMode_ = &config_.constraintModes[constraintModeName_];\n-\t// Set up the \"last shutter/gain\" values, in case AGC starts \"disabled\".\n+\t/* Set up the \"last shutter/gain\" values, in case AGC starts \"disabled\". */\n \tstatus_.shutterTime = config_.defaultExposureTime;\n \tstatus_.analogueGain = config_.defaultAnalogueGain;\n }\n@@ -218,8 +222,10 @@ void Agc::resume()\n \n unsigned int Agc::getConvergenceFrames() const\n {\n-\t// If shutter and gain have been explicitly set, there is no\n-\t// convergence to happen, so no need to drop any frames - return zero.\n+\t/*\n+\t * If shutter and gain have been explicitly set, there is no\n+\t * convergence to happen, so no need to drop any frames - return zero.\n+\t */\n \tif (fixedShutter_ && fixedAnalogueGain_)\n \t\treturn 0;\n \telse\n@@ -244,14 +250,14 @@ void Agc::setMaxShutter(Duration maxShutter)\n void Agc::setFixedShutter(Duration fixedShutter)\n {\n \tfixedShutter_ = fixedShutter;\n-\t// Set this in case someone calls Pause() straight after.\n+\t/* Set this in case someone calls Pause() straight after. 
*/\n \tstatus_.shutterTime = clipShutter(fixedShutter_);\n }\n \n void Agc::setFixedAnalogueGain(double fixedAnalogueGain)\n {\n \tfixedAnalogueGain_ = fixedAnalogueGain_;\n-\t// Set this in case someone calls Pause() straight after.\n+\t/* Set this in case someone calls Pause() straight after. */\n \tstatus_.analogueGain = fixedAnalogueGain;\n }\n \n@@ -280,30 +286,32 @@ void Agc::switchMode(CameraMode const &cameraMode,\n \n \tDuration fixedShutter = clipShutter(fixedShutter_);\n \tif (fixedShutter && fixedAnalogueGain_) {\n-\t\t// We're going to reset the algorithm here with these fixed values.\n+\t\t/* We're going to reset the algorithm here with these fixed values. */\n \n \t\tfetchAwbStatus(metadata);\n \t\tdouble minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });\n \t\tASSERT(minColourGain != 0.0);\n \n-\t\t// This is the equivalent of computeTargetExposure and applyDigitalGain.\n+\t\t/* This is the equivalent of computeTargetExposure and applyDigitalGain. */\n \t\ttarget_.totalExposureNoDG = fixedShutter_ * fixedAnalogueGain_;\n \t\ttarget_.totalExposure = target_.totalExposureNoDG / minColourGain;\n \n-\t\t// Equivalent of filterExposure. This resets any \"history\".\n+\t\t/* Equivalent of filterExposure. This resets any \"history\". */\n \t\tfiltered_ = target_;\n \n-\t\t// Equivalent of divideUpExposure.\n+\t\t/* Equivalent of divideUpExposure. */\n \t\tfiltered_.shutter = fixedShutter;\n \t\tfiltered_.analogueGain = fixedAnalogueGain_;\n \t} else if (status_.totalExposureValue) {\n-\t\t// On a mode switch, various things could happen:\n-\t\t// - the exposure profile might change\n-\t\t// - a fixed exposure or gain might be set\n-\t\t// - the new mode's sensitivity might be different\n-\t\t// We cope with the last of these by scaling the target values. 
After\n-\t\t// that we just need to re-divide the exposure/gain according to the\n-\t\t// current exposure profile, which takes care of everything else.\n+\t\t/*\n+\t\t * On a mode switch, various things could happen:\n+\t\t * - the exposure profile might change\n+\t\t * - a fixed exposure or gain might be set\n+\t\t * - the new mode's sensitivity might be different\n+\t\t * We cope with the last of these by scaling the target values. After\n+\t\t * that we just need to re-divide the exposure/gain according to the\n+\t\t * current exposure profile, which takes care of everything else.\n+\t\t */\n \n \t\tdouble ratio = lastSensitivity_ / cameraMode.sensitivity;\n \t\ttarget_.totalExposureNoDG *= ratio;\n@@ -313,29 +321,31 @@ void Agc::switchMode(CameraMode const &cameraMode,\n \n \t\tdivideUpExposure();\n \t} else {\n-\t\t// We come through here on startup, when at least one of the shutter\n-\t\t// or gain has not been fixed. We must still write those values out so\n-\t\t// that they will be applied immediately. We supply some arbitrary defaults\n-\t\t// for any that weren't set.\n-\n-\t\t// Equivalent of divideUpExposure.\n+\t\t/*\n+\t\t * We come through here on startup, when at least one of the shutter\n+\t\t * or gain has not been fixed. We must still write those values out so\n+\t\t * that they will be applied immediately. We supply some arbitrary defaults\n+\t\t * for any that weren't set.\n+\t\t */\n+\n+\t\t/* Equivalent of divideUpExposure. */\n \t\tfiltered_.shutter = fixedShutter ? fixedShutter : config_.defaultExposureTime;\n \t\tfiltered_.analogueGain = fixedAnalogueGain_ ? fixedAnalogueGain_ : config_.defaultAnalogueGain;\n \t}\n \n \twriteAndFinish(metadata, false);\n \n-\t// We must remember the sensitivity of this mode for the next SwitchMode.\n+\t/* We must remember the sensitivity of this mode for the next SwitchMode. 
*/\n \tlastSensitivity_ = cameraMode.sensitivity;\n }\n \n void Agc::prepare(Metadata *imageMetadata)\n {\n \tstatus_.digitalGain = 1.0;\n-\tfetchAwbStatus(imageMetadata); // always fetch it so that Process knows it's been done\n+\tfetchAwbStatus(imageMetadata); /* always fetch it so that Process knows it's been done */\n \n \tif (status_.totalExposureValue) {\n-\t\t// Process has run, so we have meaningful values.\n+\t\t/* Process has run, so we have meaningful values. */\n \t\tDeviceStatus deviceStatus;\n \t\tif (imageMetadata->get(\"device.status\", deviceStatus) == 0) {\n \t\t\tDuration actualExposure = deviceStatus.shutterSpeed *\n@@ -343,14 +353,16 @@ void Agc::prepare(Metadata *imageMetadata)\n \t\t\tif (actualExposure) {\n \t\t\t\tstatus_.digitalGain = status_.totalExposureValue / actualExposure;\n \t\t\t\tLOG(RPiAgc, Debug) << \"Want total exposure \" << status_.totalExposureValue;\n-\t\t\t\t// Never ask for a gain < 1.0, and also impose\n-\t\t\t\t// some upper limit. Make it customisable?\n+\t\t\t\t/*\n+\t\t\t\t * Never ask for a gain < 1.0, and also impose\n+\t\t\t\t * some upper limit. Make it customisable?\n+\t\t\t\t */\n \t\t\t\tstatus_.digitalGain = std::max(1.0, std::min(status_.digitalGain, 4.0));\n \t\t\t\tLOG(RPiAgc, Debug) << \"Actual exposure \" << actualExposure;\n \t\t\t\tLOG(RPiAgc, Debug) << \"Use digital_gain \" << status_.digitalGain;\n \t\t\t\tLOG(RPiAgc, Debug) << \"Effective exposure \"\n \t\t\t\t\t\t   << actualExposure * status_.digitalGain;\n-\t\t\t\t// Decide whether AEC/AGC has converged.\n+\t\t\t\t/* Decide whether AEC/AGC has converged. 
*/\n \t\t\t\tupdateLockStatus(deviceStatus);\n \t\t\t}\n \t\t} else\n@@ -362,44 +374,52 @@ void Agc::prepare(Metadata *imageMetadata)\n void Agc::process(StatisticsPtr &stats, Metadata *imageMetadata)\n {\n \tframeCount_++;\n-\t// First a little bit of housekeeping, fetching up-to-date settings and\n-\t// configuration, that kind of thing.\n+\t/*\n+\t * First a little bit of housekeeping, fetching up-to-date settings and\n+\t * configuration, that kind of thing.\n+\t */\n \thousekeepConfig();\n-\t// Get the current exposure values for the frame that's just arrived.\n+\t/* Get the current exposure values for the frame that's just arrived. */\n \tfetchCurrentExposure(imageMetadata);\n-\t// Compute the total gain we require relative to the current exposure.\n+\t/* Compute the total gain we require relative to the current exposure. */\n \tdouble gain, targetY;\n \tcomputeGain(stats.get(), imageMetadata, gain, targetY);\n-\t// Now compute the target (final) exposure which we think we want.\n+\t/* Now compute the target (final) exposure which we think we want. */\n \tcomputeTargetExposure(gain);\n-\t// Some of the exposure has to be applied as digital gain, so work out\n-\t// what that is. This function also tells us whether it's decided to\n-\t// \"desaturate\" the image more quickly.\n+\t/*\n+\t * Some of the exposure has to be applied as digital gain, so work out\n+\t * what that is. This function also tells us whether it's decided to\n+\t * \"desaturate\" the image more quickly.\n+\t */\n \tbool desaturate = applyDigitalGain(gain, targetY);\n-\t// The results have to be filtered so as not to change too rapidly.\n+\t/* The results have to be filtered so as not to change too rapidly. 
*/\n \tfilterExposure(desaturate);\n-\t// The last thing is to divide up the exposure value into a shutter time\n-\t// and analogue gain, according to the current exposure mode.\n+\t/*\n+\t * The last thing is to divide up the exposure value into a shutter time\n+\t * and analogue gain, according to the current exposure mode.\n+\t */\n \tdivideUpExposure();\n-\t// Finally advertise what we've done.\n+\t/* Finally advertise what we've done. */\n \twriteAndFinish(imageMetadata, desaturate);\n }\n \n void Agc::updateLockStatus(DeviceStatus const &deviceStatus)\n {\n-\tconst double errorFactor = 0.10; // make these customisable?\n+\tconst double errorFactor = 0.10; /* make these customisable? */\n \tconst int maxLockCount = 5;\n-\t// Reset \"lock count\" when we exceed this multiple of errorFactor\n+\t/* Reset \"lock count\" when we exceed this multiple of errorFactor */\n \tconst double resetMargin = 1.5;\n \n-\t// Add 200us to the exposure time error to allow for line quantisation.\n+\t/* Add 200us to the exposure time error to allow for line quantisation. */\n \tDuration exposureError = lastDeviceStatus_.shutterSpeed * errorFactor + 200us;\n \tdouble gainError = lastDeviceStatus_.analogueGain * errorFactor;\n \tDuration targetError = lastTargetExposure_ * errorFactor;\n \n-\t// Note that we don't know the exposure/gain limits of the sensor, so\n-\t// the values we keep requesting may be unachievable. For this reason\n-\t// we only insist that we're close to values in the past few frames.\n+\t/*\n+\t * Note that we don't know the exposure/gain limits of the sensor, so\n+\t * the values we keep requesting may be unachievable. 
For this reason\n+\t * we only insist that we're close to values in the past few frames.\n+\t */\n \tif (deviceStatus.shutterSpeed > lastDeviceStatus_.shutterSpeed - exposureError &&\n \t    deviceStatus.shutterSpeed < lastDeviceStatus_.shutterSpeed + exposureError &&\n \t    deviceStatus.analogueGain > lastDeviceStatus_.analogueGain - gainError &&\n@@ -430,7 +450,7 @@ static void copyString(std::string const &s, char *d, size_t size)\n \n void Agc::housekeepConfig()\n {\n-\t// First fetch all the up-to-date settings, so no one else has to do it.\n+\t/* First fetch all the up-to-date settings, so no one else has to do it. */\n \tstatus_.ev = ev_;\n \tstatus_.fixedShutter = clipShutter(fixedShutter_);\n \tstatus_.fixedAnalogueGain = fixedAnalogueGain_;\n@@ -438,8 +458,10 @@ void Agc::housekeepConfig()\n \tLOG(RPiAgc, Debug) << \"ev \" << status_.ev << \" fixedShutter \"\n \t\t\t   << status_.fixedShutter << \" fixedAnalogueGain \"\n \t\t\t   << status_.fixedAnalogueGain;\n-\t// Make sure the \"mode\" pointers point to the up-to-date things, if\n-\t// they've changed.\n+\t/*\n+\t * Make sure the \"mode\" pointers point to the up-to-date things, if\n+\t * they've changed.\n+\t */\n \tif (strcmp(meteringModeName_.c_str(), status_.meteringMode)) {\n \t\tauto it = config_.meteringModes.find(meteringModeName_);\n \t\tif (it == config_.meteringModes.end())\n@@ -491,7 +513,7 @@ void Agc::fetchCurrentExposure(Metadata *imageMetadata)\n \n void Agc::fetchAwbStatus(Metadata *imageMetadata)\n {\n-\tawb_.gainR = 1.0; // in case not found in metadata\n+\tawb_.gainR = 1.0; /* in case not found in metadata */\n \tawb_.gainG = 1.0;\n \tawb_.gainB = 1.0;\n \tif (imageMetadata->get(\"awb.status\", awb_) != 0)\n@@ -502,8 +524,10 @@ static double computeInitialY(bcm2835_isp_stats *stats, AwbStatus const &awb,\n \t\t\t      double weights[], double gain)\n {\n \tbcm2835_isp_stats_region *regions = stats->agc_stats;\n-\t// Note how the calculation below means that equal weights give 
you\n-\t// \"average\" metering (i.e. all pixels equally important).\n+\t/*\n+\t * Note how the calculation below means that equal weights give you\n+\t * \"average\" metering (i.e. all pixels equally important).\n+\t */\n \tdouble rSum = 0, gSum = 0, bSum = 0, pixelSum = 0;\n \tfor (int i = 0; i < AGC_STATS_SIZE; i++) {\n \t\tdouble counted = regions[i].counted;\n@@ -525,11 +549,13 @@ static double computeInitialY(bcm2835_isp_stats *stats, AwbStatus const &awb,\n \treturn ySum / pixelSum / (1 << PIPELINE_BITS);\n }\n \n-// We handle extra gain through EV by adjusting our Y targets. However, you\n-// simply can't monitor histograms once they get very close to (or beyond!)\n-// saturation, so we clamp the Y targets to this value. It does mean that EV\n-// increases don't necessarily do quite what you might expect in certain\n-// (contrived) cases.\n+/*\n+ * We handle extra gain through EV by adjusting our Y targets. However, you\n+ * simply can't monitor histograms once they get very close to (or beyond!)\n+ * saturation, so we clamp the Y targets to this value. It does mean that EV\n+ * increases don't necessarily do quite what you might expect in certain\n+ * (contrived) cases.\n+ */\n \n #define EV_GAIN_Y_TARGET_LIMIT 0.9\n \n@@ -546,18 +572,22 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,\n \t\t      double &gain, double &targetY)\n {\n \tstruct LuxStatus lux = {};\n-\tlux.lux = 400; // default lux level to 400 in case no metadata found\n+\tlux.lux = 400; /* default lux level to 400 in case no metadata found */\n \tif (imageMetadata->get(\"lux.status\", lux) != 0)\n \t\tLOG(RPiAgc, Warning) << \"Agc: no lux level found\";\n \tHistogram h(statistics->hist[0].g_hist, NUM_HISTOGRAM_BINS);\n \tdouble evGain = status_.ev * config_.baseEv;\n-\t// The initial gain and target_Y come from some of the regions. 
After\n-\t// that we consider the histogram constraints.\n+\t/*\n+\t * The initial gain and target_Y come from some of the regions. After\n+\t * that we consider the histogram constraints.\n+\t */\n \ttargetY = config_.yTarget.eval(config_.yTarget.domain().clip(lux.lux));\n \ttargetY = std::min(EV_GAIN_Y_TARGET_LIMIT, targetY * evGain);\n \n-\t// Do this calculation a few times as brightness increase can be\n-\t// non-linear when there are saturated regions.\n+\t/*\n+\t * Do this calculation a few times as brightness increase can be\n+\t * non-linear when there are saturated regions.\n+\t */\n \tgain = 1.0;\n \tfor (int i = 0; i < 8; i++) {\n \t\tdouble initialY = computeInitialY(statistics, awb_, meteringMode_->weights, gain);\n@@ -565,7 +595,7 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,\n \t\tgain *= extraGain;\n \t\tLOG(RPiAgc, Debug) << \"Initial Y \" << initialY << \" target \" << targetY\n \t\t\t\t   << \" gives gain \" << gain;\n-\t\tif (extraGain < 1.01) // close enough\n+\t\tif (extraGain < 1.01) /* close enough */\n \t\t\tbreak;\n \t}\n \n@@ -592,20 +622,23 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,\n void Agc::computeTargetExposure(double gain)\n {\n \tif (status_.fixedShutter && status_.fixedAnalogueGain) {\n-\t\t// When ag and shutter are both fixed, we need to drive the\n-\t\t// total exposure so that we end up with a digital gain of at least\n-\t\t// 1/min_colour_gain. Otherwise we'd desaturate channels causing\n-\t\t// white to go cyan or magenta.\n+\t\t/*\n+\t\t * When ag and shutter are both fixed, we need to drive the\n+\t\t * total exposure so that we end up with a digital gain of at least\n+\t\t * 1/min_colour_gain. 
Otherwise we'd desaturate channels causing\n+\t\t * white to go cyan or magenta.\n+\t\t */\n \t\tdouble minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });\n \t\tASSERT(minColourGain != 0.0);\n \t\ttarget_.totalExposure =\n \t\t\tstatus_.fixedShutter * status_.fixedAnalogueGain / minColourGain;\n \t} else {\n-\t\t// The statistics reflect the image without digital gain, so the final\n-\t\t// total exposure we're aiming for is:\n+\t\t/*\n+\t\t * The statistics reflect the image without digital gain, so the final\n+\t\t * total exposure we're aiming for is:\n+\t\t */\n \t\ttarget_.totalExposure = current_.totalExposureNoDG * gain;\n-\t\t// The final target exposure is also limited to what the exposure\n-\t\t// mode allows.\n+\t\t/* The final target exposure is also limited to what the exposure mode allows. */\n \t\tDuration maxShutter = status_.fixedShutter\n \t\t\t\t\t      ? status_.fixedShutter\n \t\t\t\t\t      : exposureMode_->shutter.back();\n@@ -625,17 +658,21 @@ bool Agc::applyDigitalGain(double gain, double targetY)\n \tdouble minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });\n \tASSERT(minColourGain != 0.0);\n \tdouble dg = 1.0 / minColourGain;\n-\t// I think this pipeline subtracts black level and rescales before we\n-\t// get the stats, so no need to worry about it.\n+\t/*\n+\t * I think this pipeline subtracts black level and rescales before we\n+\t * get the stats, so no need to worry about it.\n+\t */\n \tLOG(RPiAgc, Debug) << \"after AWB, target dg \" << dg << \" gain \" << gain\n \t\t\t   << \" target_Y \" << targetY;\n-\t// Finally, if we're trying to reduce exposure but the target_Y is\n-\t// \"close\" to 1.0, then the gain computed for that constraint will be\n-\t// only slightly less than one, because the measured Y can never be\n-\t// larger than 1.0. 
When this happens, demand a large digital gain so\n-\t// that the exposure can be reduced, de-saturating the image much more\n-\t// quickly (and we then approach the correct value more quickly from\n-\t// below).\n+\t/*\n+\t * Finally, if we're trying to reduce exposure but the target_Y is\n+\t * \"close\" to 1.0, then the gain computed for that constraint will be\n+\t * only slightly less than one, because the measured Y can never be\n+\t * larger than 1.0. When this happens, demand a large digital gain so\n+\t * that the exposure can be reduced, de-saturating the image much more\n+\t * quickly (and we then approach the correct value more quickly from\n+\t * below).\n+\t */\n \tbool desaturate = targetY > config_.fastReduceThreshold &&\n \t\t\t  gain < sqrt(targetY);\n \tif (desaturate)\n@@ -649,8 +686,10 @@ bool Agc::applyDigitalGain(double gain, double targetY)\n void Agc::filterExposure(bool desaturate)\n {\n \tdouble speed = config_.speed;\n-\t// AGC adapts instantly if both shutter and gain are directly specified\n-\t// or we're in the startup phase.\n+\t/*\n+\t * AGC adapts instantly if both shutter and gain are directly specified\n+\t * or we're in the startup phase.\n+\t */\n \tif ((status_.fixedShutter && status_.fixedAnalogueGain) ||\n \t    frameCount_ <= config_.startupFrames)\n \t\tspeed = 1.0;\n@@ -658,15 +697,19 @@ void Agc::filterExposure(bool desaturate)\n \t\tfiltered_.totalExposure = target_.totalExposure;\n \t\tfiltered_.totalExposureNoDG = target_.totalExposureNoDG;\n \t} else {\n-\t\t// If close to the result go faster, to save making so many\n-\t\t// micro-adjustments on the way. (Make this customisable?)\n+\t\t/*\n+\t\t * If close to the result go faster, to save making so many\n+\t\t * micro-adjustments on the way. 
(Make this customisable?)\n+\t\t */\n \t\tif (filtered_.totalExposure < 1.2 * target_.totalExposure &&\n \t\t    filtered_.totalExposure > 0.8 * target_.totalExposure)\n \t\t\tspeed = sqrt(speed);\n \t\tfiltered_.totalExposure = speed * target_.totalExposure +\n \t\t\t\t\t  filtered_.totalExposure * (1.0 - speed);\n-\t\t// When desaturing, take a big jump down in exposure_no_dg,\n-\t\t// which we'll hide with digital gain.\n+\t\t/*\n+\t\t * When desaturating, take a big jump down in exposure_no_dg,\n+\t\t * which we'll hide with digital gain.\n+\t\t */\n \t\tif (desaturate)\n \t\t\tfiltered_.totalExposureNoDG =\n \t\t\t\ttarget_.totalExposureNoDG;\n@@ -675,9 +718,11 @@ void Agc::filterExposure(bool desaturate)\n \t\t\t\tspeed * target_.totalExposureNoDG +\n \t\t\t\tfiltered_.totalExposureNoDG * (1.0 - speed);\n \t}\n-\t// We can't let the no_dg exposure deviate too far below the\n-\t// total exposure, as there might not be enough digital gain available\n-\t// in the ISP to hide it (which will cause nasty oscillation).\n+\t/*\n+\t * We can't let the no_dg exposure deviate too far below the\n+\t * total exposure, as there might not be enough digital gain available\n+\t * in the ISP to hide it (which will cause nasty oscillation).\n+\t */\n \tif (filtered_.totalExposureNoDG <\n \t    filtered_.totalExposure * config_.fastReduceThreshold)\n \t\tfiltered_.totalExposureNoDG = filtered_.totalExposure * config_.fastReduceThreshold;\n@@ -687,9 +732,11 @@ void Agc::filterExposure(bool desaturate)\n \n void Agc::divideUpExposure()\n {\n-\t// Sending the fixed shutter/gain cases through the same code may seem\n-\t// unnecessary, but it will make more sense when extend this to cover\n-\t// variable aperture.\n+\t/*\n+\t * Sending the fixed shutter/gain cases through the same code may seem\n+\t * unnecessary, but it will make more sense when we extend this to cover\n+\t * variable aperture.\n+\t */\n \tDuration exposureValue = filtered_.totalExposureNoDG;\n \tDuration shutterTime;\n 
\tdouble analogueGain;\n@@ -721,18 +768,22 @@ void Agc::divideUpExposure()\n \t}\n \tLOG(RPiAgc, Debug) << \"Divided up shutter and gain are \" << shutterTime << \" and \"\n \t\t\t   << analogueGain;\n-\t// Finally adjust shutter time for flicker avoidance (require both\n-\t// shutter and gain not to be fixed).\n+\t/*\n+\t * Finally adjust shutter time for flicker avoidance (require both\n+\t * shutter and gain not to be fixed).\n+\t */\n \tif (!status_.fixedShutter && !status_.fixedAnalogueGain &&\n \t    status_.flickerPeriod) {\n \t\tint flickerPeriods = shutterTime / status_.flickerPeriod;\n \t\tif (flickerPeriods) {\n \t\t\tDuration newShutterTime = flickerPeriods * status_.flickerPeriod;\n \t\t\tanalogueGain *= shutterTime / newShutterTime;\n-\t\t\t// We should still not allow the ag to go over the\n-\t\t\t// largest value in the exposure mode. Note that this\n-\t\t\t// may force more of the total exposure into the digital\n-\t\t\t// gain as a side-effect.\n+\t\t\t/*\n+\t\t\t * We should still not allow the ag to go over the\n+\t\t\t * largest value in the exposure mode. Note that this\n+\t\t\t * may force more of the total exposure into the digital\n+\t\t\t * gain as a side-effect.\n+\t\t\t */\n \t\t\tanalogueGain = std::min(analogueGain, exposureMode_->gain.back());\n \t\t\tshutterTime = newShutterTime;\n \t\t}\n@@ -749,8 +800,10 @@ void Agc::writeAndFinish(Metadata *imageMetadata, bool desaturate)\n \tstatus_.targetExposureValue = desaturate ? 
0s : target_.totalExposureNoDG;\n \tstatus_.shutterTime = filtered_.shutter;\n \tstatus_.analogueGain = filtered_.analogueGain;\n-\t// Write to metadata as well, in case anyone wants to update the camera\n-\t// immediately.\n+\t/*\n+\t * Write to metadata as well, in case anyone wants to update the camera\n+\t * immediately.\n+\t */\n \timageMetadata->set(\"agc.status\", status_);\n \tLOG(RPiAgc, Debug) << \"Output written, total exposure requested is \"\n \t\t\t   << filtered_.totalExposure;\n@@ -765,7 +818,7 @@ Duration Agc::clipShutter(Duration shutter)\n \treturn shutter;\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. */\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Agc(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/agc.hpp b/src/ipa/raspberrypi/controller/rpi/agc.hpp\nindex 4ed7293bce97..c2d68b60f15e 100644\n--- a/src/ipa/raspberrypi/controller/rpi/agc.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/agc.hpp\n@@ -15,10 +15,12 @@\n #include \"../agc_status.h\"\n #include \"../pwl.hpp\"\n \n-// This is our implementation of AGC.\n+/* This is our implementation of AGC. */\n \n-// This is the number actually set up by the firmware, not the maximum possible\n-// number (which is 16).\n+/*\n+ * This is the number actually set up by the firmware, not the maximum possible\n+ * number (which is 16).\n+ */\n \n #define AGC_STATS_SIZE 15\n \n@@ -73,7 +75,7 @@ public:\n \tAgc(Controller *controller);\n \tchar const *name() const override;\n \tvoid read(boost::property_tree::ptree const &params) override;\n-\t// AGC handles \"pausing\" for itself.\n+\t/* AGC handles \"pausing\" for itself. 
*/\n \tbool isPaused() const override;\n \tvoid pause() override;\n \tvoid resume() override;\n@@ -115,17 +117,17 @@ private:\n \t\tlibcamera::utils::Duration shutter;\n \t\tdouble analogueGain;\n \t\tlibcamera::utils::Duration totalExposure;\n-\t\tlibcamera::utils::Duration totalExposureNoDG; // without digital gain\n+\t\tlibcamera::utils::Duration totalExposureNoDG; /* without digital gain */\n \t};\n-\tExposureValues current_;  // values for the current frame\n-\tExposureValues target_;   // calculate the values we want here\n-\tExposureValues filtered_; // these values are filtered towards target\n+\tExposureValues current_;  /* values for the current frame */\n+\tExposureValues target_;   /* calculate the values we want here */\n+\tExposureValues filtered_; /* these values are filtered towards target */\n \tAgcStatus status_;\n \tint lockCount_;\n \tDeviceStatus lastDeviceStatus_;\n \tlibcamera::utils::Duration lastTargetExposure_;\n-\tdouble lastSensitivity_; // sensitivity of the previous camera mode\n-\t// Below here the \"settings\" that applications can change.\n+\tdouble lastSensitivity_; /* sensitivity of the previous camera mode */\n+\t/* Below here the \"settings\" that applications can change. */\n \tstd::string meteringModeName_;\n \tstd::string exposureModeName_;\n \tstd::string constraintModeName_;\n@@ -136,4 +138,4 @@ private:\n \tdouble fixedAnalogueGain_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/alsc.cpp b/src/ipa/raspberrypi/controller/rpi/alsc.cpp\nindex 4929abc5b360..c9e1b9dc9f7d 100644\n--- a/src/ipa/raspberrypi/controller/rpi/alsc.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/alsc.cpp\n@@ -14,7 +14,7 @@\n #include \"../awb_status.h\"\n #include \"alsc.hpp\"\n \n-// Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm.\n+/* Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm. 
*/\n \n using namespace RPiController;\n using namespace libcamera;\n@@ -68,7 +68,7 @@ static void generateLut(double *lut, boost::property_tree::ptree const &params)\n \t\t\tdouble r2 = (dx * dx + dy * dy) / R2;\n \t\t\tlut[num++] =\n \t\t\t\t(f1 * r2 + f2) * (f1 * r2 + f2) /\n-\t\t\t\t(f2 * f2); // this reproduces the cos^4 rule\n+\t\t\t\t(f2 * f2); /* this reproduces the cos^4 rule */\n \t\t}\n \t}\n }\n@@ -171,7 +171,7 @@ void Alsc::initialise()\n \tframeCount2_ = frameCount_ = framePhase_ = 0;\n \tfirstTime_ = true;\n \tct_ = config_.defaultCt;\n-\t// The lambdas are initialised in the SwitchMode.\n+\t/* The lambdas are initialised in the SwitchMode. */\n }\n \n void Alsc::waitForAysncThread()\n@@ -188,8 +188,10 @@ void Alsc::waitForAysncThread()\n \n static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)\n {\n-\t// Return true if the modes crop from the sensor significantly differently,\n-\t// or if the user transform has changed.\n+\t/*\n+\t * Return true if the modes crop from the sensor significantly differently,\n+\t * or if the user transform has changed.\n+\t */\n \tif (cm0.transform != cm1.transform)\n \t\treturn true;\n \tint leftDiff = abs(cm0.cropX - cm1.cropX);\n@@ -198,9 +200,11 @@ static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)\n \t\t\t     cm1.cropX - cm1.scaleX * cm1.width);\n \tint bottomDiff = fabs(cm0.cropY + cm0.scaleY * cm0.height -\n \t\t\t      cm1.cropY - cm1.scaleY * cm1.height);\n-\t// These thresholds are a rather arbitrary amount chosen to trigger\n-\t// when carrying on with the previously calculated tables might be\n-\t// worse than regenerating them (but without the adaptive algorithm).\n+\t/*\n+\t * These thresholds are a rather arbitrary amount chosen to trigger\n+\t * when carrying on with the previously calculated tables might be\n+\t * worse than regenerating them (but without the adaptive algorithm).\n+\t */\n \tint thresholdX = cm0.sensorWidth >> 4;\n \tint thresholdY = 
cm0.sensorHeight >> 4;\n \treturn leftDiff > thresholdX || rightDiff > thresholdX ||\n@@ -210,28 +214,34 @@ static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)\n void Alsc::switchMode(CameraMode const &cameraMode,\n \t\t      [[maybe_unused]] Metadata *metadata)\n {\n-\t// We're going to start over with the tables if there's any \"significant\"\n-\t// change.\n+\t/*\n+\t * We're going to start over with the tables if there's any \"significant\"\n+\t * change.\n+\t */\n \tbool resetTables = firstTime_ || compareModes(cameraMode_, cameraMode);\n \n-\t// Believe the colour temperature from the AWB, if there is one.\n+\t/* Believe the colour temperature from the AWB, if there is one. */\n \tct_ = getCt(metadata, ct_);\n \n-\t// Ensure the other thread isn't running while we do this.\n+\t/* Ensure the other thread isn't running while we do this. */\n \twaitForAysncThread();\n \n \tcameraMode_ = cameraMode;\n \n-\t// We must resample the luminance table like we do the others, but it's\n-\t// fixed so we can simply do it up front here.\n+\t/*\n+\t * We must resample the luminance table like we do the others, but it's\n+\t * fixed so we can simply do it up front here.\n+\t */\n \tresampleCalTable(config_.luminanceLut, cameraMode_, luminanceTable_);\n \n \tif (resetTables) {\n-\t\t// Upon every \"table reset\", arrange for something sensible to be\n-\t\t// generated. Construct the tables for the previous recorded colour\n-\t\t// temperature. In order to start over from scratch we initialise\n-\t\t// the lambdas, but the rest of this code then echoes the code in\n-\t\t// doAlsc, without the adaptive algorithm.\n+\t\t/*\n+\t\t * Upon every \"table reset\", arrange for something sensible to be\n+\t\t * generated. Construct the tables for the previous recorded colour\n+\t\t * temperature. 
In order to start over from scratch we initialise\n+\t\t * the lambdas, but the rest of this code then echoes the code in\n+\t\t * doAlsc, without the adaptive algorithm.\n+\t\t */\n \t\tfor (int i = 0; i < XY; i++)\n \t\t\tlambdaR_[i] = lambdaB_[i] = 1.0;\n \t\tdouble calTableR[XY], calTableB[XY], calTableTmp[XY];\n@@ -244,7 +254,7 @@ void Alsc::switchMode(CameraMode const &cameraMode,\n \t\taddLuminanceToTables(syncResults_, asyncLambdaR_, 1.0, asyncLambdaB_,\n \t\t\t\t     luminanceTable_, config_.luminanceStrength);\n \t\tmemcpy(prevSyncResults_, syncResults_, sizeof(prevSyncResults_));\n-\t\tframePhase_ = config_.framePeriod; // run the algo again asap\n+\t\tframePhase_ = config_.framePeriod; /* run the algo again asap */\n \t\tfirstTime_ = false;\n \t}\n }\n@@ -260,7 +270,7 @@ void Alsc::fetchAsyncResults()\n double getCt(Metadata *metadata, double defaultCt)\n {\n \tAwbStatus awbStatus;\n-\tawbStatus.temperatureK = defaultCt; // in case nothing found\n+\tawbStatus.temperatureK = defaultCt; /* in case nothing found */\n \tif (metadata->get(\"awb.status\", awbStatus) != 0)\n \t\tLOG(RPiAlsc, Debug) << \"no AWB results found, using \"\n \t\t\t\t    << awbStatus.temperatureK;\n@@ -282,18 +292,22 @@ static void copyStats(bcm2835_isp_stats_region regions[XY], StatisticsPtr &stats\n \t\tregions[i].g_sum = inputRegions[i].g_sum / gTable[i];\n \t\tregions[i].b_sum = inputRegions[i].b_sum / bTable[i];\n \t\tregions[i].counted = inputRegions[i].counted;\n-\t\t// (don't care about the uncounted value)\n+\t\t/* (don't care about the uncounted value) */\n \t}\n }\n \n void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata)\n {\n \tLOG(RPiAlsc, Debug) << \"Starting ALSC calculation\";\n-\t// Get the current colour temperature. It's all we need from the\n-\t// metadata. Default to the last CT value (which could be the default).\n+\t/*\n+\t * Get the current colour temperature. It's all we need from the\n+\t * metadata. 
Default to the last CT value (which could be the default).\n+\t */\n \tct_ = getCt(imageMetadata, ct_);\n-\t// We have to copy the statistics here, dividing out our best guess of\n-\t// the LSC table that the pipeline applied to them.\n+\t/*\n+\t * We have to copy the statistics here, dividing out our best guess of\n+\t * the LSC table that the pipeline applied to them.\n+\t */\n \tAlscStatus alscStatus;\n \tif (imageMetadata->get(\"alsc.status\", alscStatus) != 0) {\n \t\tLOG(RPiAlsc, Warning)\n@@ -317,8 +331,10 @@ void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata)\n \n void Alsc::prepare(Metadata *imageMetadata)\n {\n-\t// Count frames since we started, and since we last poked the async\n-\t// thread.\n+\t/*\n+\t * Count frames since we started, and since we last poked the async\n+\t * thread.\n+\t */\n \tif (frameCount_ < (int)config_.startupFrames)\n \t\tframeCount_++;\n \tdouble speed = frameCount_ < (int)config_.startupFrames\n@@ -331,12 +347,12 @@ void Alsc::prepare(Metadata *imageMetadata)\n \t\tif (asyncStarted_ && asyncFinished_)\n \t\t\tfetchAsyncResults();\n \t}\n-\t// Apply IIR filter to results and program into the pipeline.\n+\t/* Apply IIR filter to results and program into the pipeline. */\n \tdouble *ptr = (double *)syncResults_,\n \t       *pptr = (double *)prevSyncResults_;\n \tfor (unsigned int i = 0; i < sizeof(syncResults_) / sizeof(double); i++)\n \t\tpptr[i] = speed * ptr[i] + (1.0 - speed) * pptr[i];\n-\t// Put output values into status metadata.\n+\t/* Put output values into status metadata. 
*/\n \tAlscStatus status;\n \tmemcpy(status.r, prevSyncResults_[0], sizeof(status.r));\n \tmemcpy(status.g, prevSyncResults_[1], sizeof(status.g));\n@@ -346,8 +362,10 @@ void Alsc::prepare(Metadata *imageMetadata)\n \n void Alsc::process(StatisticsPtr &stats, Metadata *imageMetadata)\n {\n-\t// Count frames since we started, and since we last poked the async\n-\t// thread.\n+\t/*\n+\t * Count frames since we started, and since we last poked the async\n+\t * thread.\n+\t */\n \tif (framePhase_ < (int)config_.framePeriod)\n \t\tframePhase_++;\n \tif (frameCount2_ < (int)config_.startupFrames)\n@@ -415,8 +433,10 @@ void getCalTable(double ct, std::vector<AlscCalibration> const &calibrations,\n void resampleCalTable(double const calTableIn[XY],\n \t\t      CameraMode const &cameraMode, double calTableOut[XY])\n {\n-\t// Precalculate and cache the x sampling locations and phases to save\n-\t// recomputing them on every row.\n+\t/*\n+\t * Precalculate and cache the x sampling locations and phases to save\n+\t * recomputing them on every row.\n+\t */\n \tint xLo[X], xHi[X];\n \tdouble xf[X];\n \tdouble scaleX = cameraMode.sensorWidth /\n@@ -434,7 +454,7 @@ void resampleCalTable(double const calTableIn[XY],\n \t\t\txHi[i] = X - 1 - xHi[i];\n \t\t}\n \t}\n-\t// Now march over the output table generating the new values.\n+\t/* Now march over the output table generating the new values. */\n \tdouble scaleY = cameraMode.sensorHeight /\n \t\t\t(cameraMode.height * cameraMode.scaleY);\n \tdouble yOff = cameraMode.cropY / (double)cameraMode.sensorHeight;\n@@ -461,7 +481,7 @@ void resampleCalTable(double const calTableIn[XY],\n \t}\n }\n \n-// Calculate chrominance statistics (R/G and B/G) for each region.\n+/* Calculate chrominance statistics (R/G and B/G) for each region. 
*/\n static_assert(XY == AWB_REGIONS, \"ALSC/AWB statistics region mismatch\");\n static void calculateCrCb(bcm2835_isp_stats_region *awbRegion, double cr[XY],\n \t\t\t  double cb[XY], uint32_t minCount, uint16_t minG)\n@@ -512,8 +532,10 @@ void compensateLambdasForCal(double const calTable[XY],\n \tprintf(\"]\\n\");\n }\n \n-// Compute weight out of 1.0 which reflects how similar we wish to make the\n-// colours of these two regions.\n+/*\n+ * Compute weight out of 1.0 which reflects how similar we wish to make the\n+ * colours of these two regions.\n+ */\n static double computeWeight(double Ci, double Cj, double sigma)\n {\n \tif (Ci == InsufficientData || Cj == InsufficientData)\n@@ -522,11 +544,11 @@ static double computeWeight(double Ci, double Cj, double sigma)\n \treturn exp(-diff * diff / 2);\n }\n \n-// Compute all weights.\n+/* Compute all weights. */\n static void computeW(double const C[XY], double sigma, double W[XY][4])\n {\n \tfor (int i = 0; i < XY; i++) {\n-\t\t// Start with neighbour above and go clockwise.\n+\t\t/* Start with neighbour above and go clockwise. */\n \t\tW[i][0] = i >= X ? computeWeight(C[i], C[i - X], sigma) : 0;\n \t\tW[i][1] = i % X < X - 1 ? computeWeight(C[i], C[i + 1], sigma) : 0;\n \t\tW[i][2] = i < XY - X ? computeWeight(C[i], C[i + X], sigma) : 0;\n@@ -534,17 +556,19 @@ static void computeW(double const C[XY], double sigma, double W[XY][4])\n \t}\n }\n \n-// Compute M, the large but sparse matrix such that M * lambdas = 0.\n+/* Compute M, the large but sparse matrix such that M * lambdas = 0. 
*/\n static void constructM(double const C[XY], double const W[XY][4],\n \t\t       double M[XY][4])\n {\n \tdouble epsilon = 0.001;\n \tfor (int i = 0; i < XY; i++) {\n-\t\t// Note how, if C[i] == INSUFFICIENT_DATA, the weights will all\n-\t\t// be zero so the equation is still set up correctly.\n+\t\t/*\n+\t\t * Note how, if C[i] == INSUFFICIENT_DATA, the weights will all\n+\t\t * be zero so the equation is still set up correctly.\n+\t\t */\n \t\tint m = !!(i >= X) + !!(i % X < X - 1) + !!(i < XY - X) +\n-\t\t\t!!(i % X); // total number of neighbours\n-\t\t// we'll divide the diagonal out straight away\n+\t\t\t!!(i % X); /* total number of neighbours */\n+\t\t/* we'll divide the diagonal out straight away */\n \t\tdouble diagonal = (epsilon + W[i][0] + W[i][1] + W[i][2] + W[i][3]) * C[i];\n \t\tM[i][0] = i >= X ? (W[i][0] * C[i - X] + epsilon / m * C[i]) / diagonal : 0;\n \t\tM[i][1] = i % X < X - 1 ? (W[i][1] * C[i + 1] + epsilon / m * C[i]) / diagonal : 0;\n@@ -553,9 +577,11 @@ static void constructM(double const C[XY], double const W[XY][4],\n \t}\n }\n \n-// In the compute_lambda_ functions, note that the matrix coefficients for the\n-// left/right neighbours are zero down the left/right edges, so we don't need\n-// need to test the i value to exclude them.\n+/*\n+ * In the compute_lambda_ functions, note that the matrix coefficients for the\n+ * left/right neighbours are zero down the left/right edges, so we don't\n+ * need to test the i value to exclude them.\n+ */\n static double computeLambdaBottom(int i, double const M[XY][4],\n \t\t\t\t  double lambda[XY])\n {\n@@ -585,7 +611,7 @@ static double computeLambdaTopEnd(int i, double const M[XY][4],\n \treturn M[i][0] * lambda[i - X] + M[i][3] * lambda[i - 1];\n }\n \n-// Gauss-Seidel iteration with over-relaxation.\n+/* Gauss-Seidel iteration with over-relaxation. 
*/\n static double gaussSeidel2Sor(double const M[XY][4], double omega,\n \t\t\t      double lambda[XY], double lambdaBound)\n {\n@@ -610,8 +636,10 @@ static double gaussSeidel2Sor(double const M[XY][4], double omega,\n \t}\n \tlambda[i] = computeLambdaTopEnd(i, M, lambda);\n \tlambda[i] = std::clamp(lambda[i], min, max);\n-\t// Also solve the system from bottom to top, to help spread the updates\n-\t// better.\n+\t/*\n+\t * Also solve the system from bottom to top, to help spread the updates\n+\t * better.\n+\t */\n \tlambda[i] = computeLambdaTopEnd(i, M, lambda);\n \tlambda[i] = std::clamp(lambda[i], min, max);\n \tfor (i = XY - 2; i >= XY - X; i--) {\n@@ -637,7 +665,7 @@ static double gaussSeidel2Sor(double const M[XY][4], double omega,\n \treturn maxDiff;\n }\n \n-// Normalise the values so that the smallest value is 1.\n+/* Normalise the values so that the smallest value is 1. */\n static void normalise(double *ptr, size_t n)\n {\n \tdouble minval = ptr[0];\n@@ -647,7 +675,7 @@ static void normalise(double *ptr, size_t n)\n \t\tptr[i] /= minval;\n }\n \n-// Rescale the values so that the average value is 1.\n+/* Rescale the values so that the average value is 1. 
*/\n static void reaverage(Span<double> data)\n {\n \tdouble sum = std::accumulate(data.begin(), data.end(), 0.0);\n@@ -670,15 +698,17 @@ static void runMatrixIterations(double const C[XY], double lambda[XY],\n \t\t\t\t<< \"Stop after \" << i + 1 << \" iterations\";\n \t\t\tbreak;\n \t\t}\n-\t\t// this happens very occasionally (so make a note), though\n-\t\t// doesn't seem to matter\n+\t\t/*\n+\t\t * this happens very occasionally (so make a note), though\n+\t\t * doesn't seem to matter\n+\t\t */\n \t\tif (maxDiff > lastMaxDiff)\n \t\t\tLOG(RPiAlsc, Debug)\n \t\t\t\t<< \"Iteration \" << i << \": max_diff gone up \"\n \t\t\t\t<< lastMaxDiff << \" to \" << maxDiff;\n \t\tlastMaxDiff = maxDiff;\n \t}\n-\t// We're going to normalise the lambdas so the total average is 1.\n+\t/* We're going to normalise the lambdas so the total average is 1. */\n \treaverage({ lambda, XY });\n }\n \n@@ -712,41 +742,49 @@ void addLuminanceToTables(double results[3][Y][X], double const lambdaR[XY],\n void Alsc::doAlsc()\n {\n \tdouble cr[XY], cb[XY], wr[XY][4], wb[XY][4], calTableR[XY], calTableB[XY], calTableTmp[XY];\n-\t// Calculate our R/B (\"Cr\"/\"Cb\") colour statistics, and assess which are\n-\t// usable.\n+\t/*\n+\t * Calculate our R/B (\"Cr\"/\"Cb\") colour statistics, and assess which are\n+\t * usable.\n+\t */\n \tcalculateCrCb(statistics_, cr, cb, config_.minCount, config_.minG);\n-\t// Fetch the new calibrations (if any) for this CT. Resample them in\n-\t// case the camera mode is not full-frame.\n+\t/*\n+\t * Fetch the new calibrations (if any) for this CT. 
Resample them in\n+\t * case the camera mode is not full-frame.\n+\t */\n \tgetCalTable(ct_, config_.calibrationsCr, calTableTmp);\n \tresampleCalTable(calTableTmp, cameraMode_, calTableR);\n \tgetCalTable(ct_, config_.calibrationsCb, calTableTmp);\n \tresampleCalTable(calTableTmp, cameraMode_, calTableB);\n-\t// You could print out the cal tables for this image here, if you're\n-\t// tuning the algorithm...\n-\t// Apply any calibration to the statistics, so the adaptive algorithm\n-\t// makes only the extra adjustments.\n+\t/*\n+\t * You could print out the cal tables for this image here, if you're\n+\t * tuning the algorithm...\n+\t * Apply any calibration to the statistics, so the adaptive algorithm\n+\t * makes only the extra adjustments.\n+\t */\n \tapplyCalTable(calTableR, cr);\n \tapplyCalTable(calTableB, cb);\n-\t// Compute weights between zones.\n+\t/* Compute weights between zones. */\n \tcomputeW(cr, config_.sigmaCr, wr);\n \tcomputeW(cb, config_.sigmaCb, wb);\n-\t// Run Gauss-Seidel iterations over the resulting matrix, for R and B.\n+\t/* Run Gauss-Seidel iterations over the resulting matrix, for R and B. */\n \trunMatrixIterations(cr, lambdaR_, wr, config_.omega, config_.nIter,\n \t\t\t    config_.threshold, config_.lambdaBound);\n \trunMatrixIterations(cb, lambdaB_, wb, config_.omega, config_.nIter,\n \t\t\t    config_.threshold, config_.lambdaBound);\n-\t// Fold the calibrated gains into our final lambda values. (Note that on\n-\t// the next run, we re-start with the lambda values that don't have the\n-\t// calibration gains included.)\n+\t/*\n+\t * Fold the calibrated gains into our final lambda values. 
(Note that on\n+\t * the next run, we re-start with the lambda values that don't have the\n+\t * calibration gains included.)\n+\t */\n \tcompensateLambdasForCal(calTableR, lambdaR_, asyncLambdaR_);\n \tcompensateLambdasForCal(calTableB, lambdaB_, asyncLambdaB_);\n-\t// Fold in the luminance table at the appropriate strength.\n+\t/* Fold in the luminance table at the appropriate strength. */\n \taddLuminanceToTables(asyncResults_, asyncLambdaR_, 1.0,\n \t\t\t     asyncLambdaB_, luminanceTable_,\n \t\t\t     config_.luminanceStrength);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. */\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Alsc(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/alsc.hpp b/src/ipa/raspberrypi/controller/rpi/alsc.hpp\nindex 7a0949d1ccc5..3ffc175d78b6 100644\n--- a/src/ipa/raspberrypi/controller/rpi/alsc.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/alsc.hpp\n@@ -15,7 +15,7 @@\n \n namespace RPiController {\n \n-// Algorithm to generate automagic LSC (Lens Shading Correction) tables.\n+/* Algorithm to generate automagic LSC (Lens Shading Correction) tables. 
*/\n \n struct AlscCalibration {\n \tdouble ct;\n@@ -23,11 +23,11 @@ struct AlscCalibration {\n };\n \n struct AlscConfig {\n-\t// Only repeat the ALSC calculation every \"this many\" frames\n+\t/* Only repeat the ALSC calculation every \"this many\" frames */\n \tuint16_t framePeriod;\n-\t// number of initial frames for which speed taken as 1.0 (maximum)\n+\t/* number of initial frames for which speed taken as 1.0 (maximum) */\n \tuint16_t startupFrames;\n-\t// IIR filter speed applied to algorithm results\n+\t/* IIR filter speed applied to algorithm results */\n \tdouble speed;\n \tdouble sigmaCr;\n \tdouble sigmaCb;\n@@ -39,9 +39,9 @@ struct AlscConfig {\n \tdouble luminanceStrength;\n \tstd::vector<AlscCalibration> calibrationsCr;\n \tstd::vector<AlscCalibration> calibrationsCb;\n-\tdouble defaultCt; // colour temperature if no metadata found\n-\tdouble threshold; // iteration termination threshold\n-\tdouble lambdaBound; // upper/lower bound for lambda from a value of 1\n+\tdouble defaultCt; /* colour temperature if no metadata found */\n+\tdouble threshold; /* iteration termination threshold */\n+\tdouble lambdaBound; /* upper/lower bound for lambda from a value of 1 */\n };\n \n class Alsc : public Algorithm\n@@ -57,41 +57,45 @@ public:\n \tvoid process(StatisticsPtr &stats, Metadata *imageMetadata) override;\n \n private:\n-\t// configuration is read-only, and available to both threads\n+\t/* configuration is read-only, and available to both threads */\n \tAlscConfig config_;\n \tbool firstTime_;\n \tCameraMode cameraMode_;\n \tdouble luminanceTable_[ALSC_CELLS_X * ALSC_CELLS_Y];\n \tstd::thread asyncThread_;\n-\tvoid asyncFunc(); // asynchronous thread function\n+\tvoid asyncFunc(); /* asynchronous thread function */\n \tstd::mutex mutex_;\n-\t// condvar for async thread to wait on\n+\t/* condvar for async thread to wait on */\n \tstd::condition_variable asyncSignal_;\n-\t// condvar for synchronous thread to wait on\n+\t/* condvar for synchronous thread to 
wait on */\n \tstd::condition_variable syncSignal_;\n-\t// for sync thread to check  if async thread finished (requires mutex)\n+\t/* for sync thread to check if async thread finished (requires mutex) */\n \tbool asyncFinished_;\n-\t// for async thread to check if it's been told to run (requires mutex)\n+\t/* for async thread to check if it's been told to run (requires mutex) */\n \tbool asyncStart_;\n-\t// for async thread to check if it's been told to quit (requires mutex)\n+\t/* for async thread to check if it's been told to quit (requires mutex) */\n \tbool asyncAbort_;\n \n-\t// The following are only for the synchronous thread to use:\n-\t// for sync thread to note its has asked async thread to run\n+\t/*\n+\t * The following are only for the synchronous thread to use:\n+\t * for sync thread to note it has asked async thread to run\n+\t */\n \tbool asyncStarted_;\n-\t// counts up to framePeriod before restarting the async thread\n+\t/* counts up to framePeriod before restarting the async thread */\n \tint framePhase_;\n-\t// counts up to startupFrames\n+\t/* counts up to startupFrames */\n \tint frameCount_;\n-\t// counts up to startupFrames for Process function\n+\t/* counts up to startupFrames for Process function */\n \tint frameCount2_;\n \tdouble syncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X];\n \tdouble prevSyncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X];\n \tvoid waitForAysncThread();\n-\t// The following are for the asynchronous thread to use, though the main\n-\t// thread can set/reset them if the async thread is known to be idle:\n+\t/*\n+\t * The following are for the asynchronous thread to use, though the main\n+\t * thread can set/reset them if the async thread is known to be idle:\n+\t */\n \tvoid restartAsync(StatisticsPtr &stats, Metadata *imageMetadata);\n-\t// copy out the results from the async thread so that it can be restarted\n+\t/* copy out the results from the async thread so that it can be restarted */\n \tvoid fetchAsyncResults();\n 
\tdouble ct_;\n \tbcm2835_isp_stats_region statistics_[ALSC_CELLS_Y * ALSC_CELLS_X];\n@@ -103,4 +107,4 @@ private:\n \tdouble lambdaB_[ALSC_CELLS_X * ALSC_CELLS_Y];\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/awb.cpp b/src/ipa/raspberrypi/controller/rpi/awb.cpp\nindex 74449c8c7591..d6f79f3a8e14 100644\n--- a/src/ipa/raspberrypi/controller/rpi/awb.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/awb.cpp\n@@ -21,8 +21,10 @@ LOG_DEFINE_CATEGORY(RPiAwb)\n #define AWB_STATS_SIZE_X DEFAULT_AWB_REGIONS_X\n #define AWB_STATS_SIZE_Y DEFAULT_AWB_REGIONS_Y\n \n-// todo - the locking in this algorithm needs some tidying up as has been done\n-// elsewhere (ALSC and AGC).\n+/*\n+ * todo - the locking in this algorithm needs some tidying up as has been done\n+ * elsewhere (ALSC and AGC).\n+ */\n \n void AwbMode::read(boost::property_tree::ptree const &params)\n {\n@@ -107,11 +109,11 @@ void AwbConfig::read(boost::property_tree::ptree const &params)\n \t\t\tbayes = false;\n \t\t}\n \t}\n-\tfast = params.get<int>(\"fast\", bayes); // default to fast for Bayesian, otherwise slow\n+\tfast = params.get<int>(\"fast\", bayes); /* default to fast for Bayesian, otherwise slow */\n \twhitepointR = params.get<double>(\"whitepoint_r\", 0.0);\n \twhitepointB = params.get<double>(\"whitepoint_b\", 0.0);\n \tif (bayes == false)\n-\t\tsensitivityR = sensitivityB = 1.0; // nor do sensitivities make any sense\n+\t\tsensitivityR = sensitivityB = 1.0; /* nor do sensitivities make any sense */\n }\n \n Awb::Awb(Controller *controller)\n@@ -147,16 +149,18 @@ void Awb::read(boost::property_tree::ptree const &params)\n void Awb::initialise()\n {\n \tframeCount_ = framePhase_ = 0;\n-\t// Put something sane into the status that we are filtering towards,\n-\t// just in case the first few frames don't have anything meaningful in\n-\t// them.\n+\t/*\n+\t * Put something sane into the status that we are filtering towards,\n+\t * just 
in case the first few frames don't have anything meaningful in\n+\t * them.\n+\t */\n \tif (!config_.ctR.empty() && !config_.ctB.empty()) {\n \t\tsyncResults_.temperatureK = config_.ctR.domain().clip(4000);\n \t\tsyncResults_.gainR = 1.0 / config_.ctR.eval(syncResults_.temperatureK);\n \t\tsyncResults_.gainG = 1.0;\n \t\tsyncResults_.gainB = 1.0 / config_.ctB.eval(syncResults_.temperatureK);\n \t} else {\n-\t\t// random values just to stop the world blowing up\n+\t\t/* random values just to stop the world blowing up */\n \t\tsyncResults_.temperatureK = 4500;\n \t\tsyncResults_.gainR = syncResults_.gainG = syncResults_.gainB = 1.0;\n \t}\n@@ -171,7 +175,7 @@ bool Awb::isPaused() const\n \n void Awb::pause()\n {\n-\t// \"Pause\" by fixing everything to the most recent values.\n+\t/* \"Pause\" by fixing everything to the most recent values. */\n \tmanualR_ = syncResults_.gainR = prevSyncResults_.gainR;\n \tmanualB_ = syncResults_.gainB = prevSyncResults_.gainB;\n \tsyncResults_.gainG = prevSyncResults_.gainG;\n@@ -186,8 +190,10 @@ void Awb::resume()\n \n unsigned int Awb::getConvergenceFrames() const\n {\n-\t// If not in auto mode, there is no convergence\n-\t// to happen, so no need to drop any frames - return zero.\n+\t/*\n+\t * If not in auto mode, there is no convergence\n+\t * to happen, so no need to drop any frames - return zero.\n+\t */\n \tif (!isAutoEnabled())\n \t\treturn 0;\n \telse\n@@ -201,11 +207,13 @@ void Awb::setMode(std::string const &modeName)\n \n void Awb::setManualGains(double manualR, double manualB)\n {\n-\t// If any of these are 0.0, we swich back to auto.\n+\t/* If any of these are 0.0, we swich back to auto. 
*/\n \tmanualR_ = manualR;\n \tmanualB_ = manualB;\n-\t// If not in auto mode, set these values into the sync_results which\n-\t// means that Prepare() will adopt them immediately.\n+\t/*\n+\t * If not in auto mode, set these values into the sync_results which\n+\t * means that Prepare() will adopt them immediately.\n+\t */\n \tif (!isAutoEnabled()) {\n \t\tsyncResults_.gainR = prevSyncResults_.gainR = manualR_;\n \t\tsyncResults_.gainG = prevSyncResults_.gainG = 1.0;\n@@ -216,8 +224,10 @@ void Awb::setManualGains(double manualR, double manualB)\n void Awb::switchMode([[maybe_unused]] CameraMode const &cameraMode,\n \t\t     Metadata *metadata)\n {\n-\t// On the first mode switch we'll have no meaningful colour\n-\t// temperature, so try to dead reckon one if in manual mode.\n+\t/*\n+\t * On the first mode switch we'll have no meaningful colour\n+\t * temperature, so try to dead reckon one if in manual mode.\n+\t */\n \tif (!isAutoEnabled() && firstSwitchMode_ && config_.bayes) {\n \t\tPwl ctRInverse = config_.ctR.inverse();\n \t\tPwl ctBInverse = config_.ctB.inverse();\n@@ -226,7 +236,7 @@ void Awb::switchMode([[maybe_unused]] CameraMode const &cameraMode,\n \t\tprevSyncResults_.temperatureK = (ctR + ctB) / 2;\n \t\tsyncResults_.temperatureK = prevSyncResults_.temperatureK;\n \t}\n-\t// Let other algorithms know the current white balance values.\n+\t/* Let other algorithms know the current white balance values. 
*/\n \tmetadata->set(\"awb.status\", prevSyncResults_);\n \tfirstSwitchMode_ = false;\n }\n@@ -241,8 +251,10 @@ void Awb::fetchAsyncResults()\n \tLOG(RPiAwb, Debug) << \"Fetch AWB results\";\n \tasyncFinished_ = false;\n \tasyncStarted_ = false;\n-\t// It's possible manual gains could be set even while the async\n-\t// thread was running, so only copy the results if still in auto mode.\n+\t/*\n+\t * It's possible manual gains could be set even while the async\n+\t * thread was running, so only copy the results if still in auto mode.\n+\t */\n \tif (isAutoEnabled())\n \t\tsyncResults_ = asyncResults_;\n }\n@@ -250,9 +262,9 @@ void Awb::fetchAsyncResults()\n void Awb::restartAsync(StatisticsPtr &stats, double lux)\n {\n \tLOG(RPiAwb, Debug) << \"Starting AWB calculation\";\n-\t// this makes a new reference which belongs to the asynchronous thread\n+\t/* this makes a new reference which belongs to the asynchronous thread */\n \tstatistics_ = stats;\n-\t// store the mode as it could technically change\n+\t/* store the mode as it could technically change */\n \tauto m = config_.modes.find(modeName_);\n \tmode_ = m != config_.modes.end()\n \t\t\t? &m->second\n@@ -284,7 +296,7 @@ void Awb::prepare(Metadata *imageMetadata)\n \t\tif (asyncStarted_ && asyncFinished_)\n \t\t\tfetchAsyncResults();\n \t}\n-\t// Finally apply IIR filter to results and put into metadata.\n+\t/* Finally apply IIR filter to results and put into metadata. */\n \tmemcpy(prevSyncResults_.mode, syncResults_.mode,\n \t       sizeof(prevSyncResults_.mode));\n \tprevSyncResults_.temperatureK = speed * syncResults_.temperatureK +\n@@ -304,17 +316,17 @@ void Awb::prepare(Metadata *imageMetadata)\n \n void Awb::process(StatisticsPtr &stats, Metadata *imageMetadata)\n {\n-\t// Count frames since we last poked the async thread.\n+\t/* Count frames since we last poked the async thread. 
*/\n \tif (framePhase_ < (int)config_.framePeriod)\n \t\tframePhase_++;\n \tLOG(RPiAwb, Debug) << \"frame_phase \" << framePhase_;\n-\t// We do not restart the async thread if we're not in auto mode.\n+\t/* We do not restart the async thread if we're not in auto mode. */\n \tif (isAutoEnabled() &&\n \t    (framePhase_ >= (int)config_.framePeriod ||\n \t     frameCount_ < (int)config_.startupFrames)) {\n-\t\t// Update any settings and any image metadata that we need.\n+\t\t/* Update any settings and any image metadata that we need. */\n \t\tstruct LuxStatus luxStatus = {};\n-\t\tluxStatus.lux = 400; // in case no metadata\n+\t\tluxStatus.lux = 400; /* in case no metadata */\n \t\tif (imageMetadata->get(\"lux.status\", luxStatus) != 0)\n \t\t\tLOG(RPiAwb, Debug) << \"No lux metadata found\";\n \t\tLOG(RPiAwb, Debug) << \"Awb lux value is \" << luxStatus.lux;\n@@ -366,15 +378,21 @@ static void generateStats(std::vector<Awb::RGB> &zones,\n void Awb::prepareStats()\n {\n \tzones_.clear();\n-\t// LSC has already been applied to the stats in this pipeline, so stop\n-\t// any LSC compensation.  We also ignore config_.fast in this version.\n+\t/*\n+\t * LSC has already been applied to the stats in this pipeline, so stop\n+\t * any LSC compensation.  
We also ignore config_.fast in this version.\n+\t */\n \tgenerateStats(zones_, statistics_->awb_stats, config_.minPixels,\n \t\t      config_.minG);\n-\t// we're done with these; we may as well relinquish our hold on the\n-\t// pointer.\n+\t/*\n+\t * we're done with these; we may as well relinquish our hold on the\n+\t * pointer.\n+\t */\n \tstatistics_.reset();\n-\t// apply sensitivities, so values appear to come from our \"canonical\"\n-\t// sensor.\n+\t/*\n+\t * apply sensitivities, so values appear to come from our \"canonical\"\n+\t * sensor.\n+\t */\n \tfor (auto &zone : zones_) {\n \t\tzone.R *= config_.sensitivityR;\n \t\tzone.B *= config_.sensitivityB;\n@@ -383,14 +401,16 @@ void Awb::prepareStats()\n \n double Awb::computeDelta2Sum(double gainR, double gainB)\n {\n-\t// Compute the sum of the squared colour error (non-greyness) as it\n-\t// appears in the log likelihood equation.\n+\t/*\n+\t * Compute the sum of the squared colour error (non-greyness) as it\n+\t * appears in the log likelihood equation.\n+\t */\n \tdouble delta2Sum = 0;\n \tfor (auto &z : zones_) {\n \t\tdouble deltaR = gainR * z.R - 1 - config_.whitepointR;\n \t\tdouble deltaB = gainB * z.B - 1 - config_.whitepointB;\n \t\tdouble delta2 = deltaR * deltaR + deltaB * deltaB;\n-\t\t//LOG(RPiAwb, Debug) << \"delta_r \" << delta_r << \" delta_b \" << delta_b << \" delta2 \" << delta2;\n+\t\t/*LOG(RPiAwb, Debug) << \"delta_r \" << delta_r << \" delta_b \" << delta_b << \" delta2 \" << delta2; */\n \t\tdelta2 = std::min(delta2, config_.deltaLimit);\n \t\tdelta2Sum += delta2;\n \t}\n@@ -399,15 +419,17 @@ double Awb::computeDelta2Sum(double gainR, double gainB)\n \n Pwl Awb::interpolatePrior()\n {\n-\t// Interpolate the prior log likelihood function for our current lux\n-\t// value.\n+\t/*\n+\t * Interpolate the prior log likelihood function for our current lux\n+\t * value.\n+\t */\n \tif (lux_ <= config_.priors.front().lux)\n \t\treturn config_.priors.front().prior;\n \telse if (lux_ >= 
config_.priors.back().lux)\n \t\treturn config_.priors.back().prior;\n \telse {\n \t\tint idx = 0;\n-\t\t// find which two we lie between\n+\t\t/* find which two we lie between */\n \t\twhile (config_.priors[idx + 1].lux < lux_)\n \t\t\tidx++;\n \t\tdouble lux0 = config_.priors[idx].lux,\n@@ -424,8 +446,10 @@ Pwl Awb::interpolatePrior()\n static double interpolateQuadatric(Pwl::Point const &a, Pwl::Point const &b,\n \t\t\t\t   Pwl::Point const &c)\n {\n-\t// Given 3 points on a curve, find the extremum of the function in that\n-\t// interval by fitting a quadratic.\n+\t/*\n+\t * Given 3 points on a curve, find the extremum of the function in that\n+\t * interval by fitting a quadratic.\n+\t */\n \tconst double eps = 1e-3;\n \tPwl::Point ca = c - a, ba = b - a;\n \tdouble denominator = 2 * (ba.y * ca.x - ca.y * ba.x);\n@@ -434,17 +458,17 @@ static double interpolateQuadatric(Pwl::Point const &a, Pwl::Point const &b,\n \t\tdouble result = numerator / denominator + a.x;\n \t\treturn std::max(a.x, std::min(c.x, result));\n \t}\n-\t// has degenerated to straight line segment\n+\t/* has degenerated to straight line segment */\n \treturn a.y < c.y - eps ? a.x : (c.y < a.y - eps ? c.x : b.x);\n }\n \n double Awb::coarseSearch(Pwl const &prior)\n {\n-\tpoints_.clear(); // assume doesn't deallocate memory\n+\tpoints_.clear(); /* assume doesn't deallocate memory */\n \tsize_t bestPoint = 0;\n \tdouble t = mode_->ctLo;\n \tint spanR = 0, spanB = 0;\n-\t// Step down the CT curve evaluating log likelihood.\n+\t/* Step down the CT curve evaluating log likelihood. 
*/\n \twhile (true) {\n \t\tdouble r = config_.ctR.eval(t, &spanR);\n \t\tdouble b = config_.ctB.eval(t, &spanB);\n@@ -462,13 +486,15 @@ double Awb::coarseSearch(Pwl const &prior)\n \t\t\tbestPoint = points_.size() - 1;\n \t\tif (t == mode_->ctHi)\n \t\t\tbreak;\n-\t\t// for even steps along the r/b curve scale them by the current t\n+\t\t/* for even steps along the r/b curve scale them by the current t */\n \t\tt = std::min(t + t / 10 * config_.coarseStep, mode_->ctHi);\n \t}\n \tt = points_[bestPoint].x;\n \tLOG(RPiAwb, Debug) << \"Coarse search found CT \" << t;\n-\t// We have the best point of the search, but refine it with a quadratic\n-\t// interpolation around its neighbours.\n+\t/*\n+\t * We have the best point of the search, but refine it with a quadratic\n+\t * interpolation around its neighbours.\n+\t */\n \tif (points_.size() > 2) {\n \t\tunsigned long bp = std::min(bestPoint, points_.size() - 2);\n \t\tbestPoint = std::max(1UL, bp);\n@@ -496,17 +522,21 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)\n \tPwl::Point transverse(bDiff, -rDiff);\n \tif (transverse.len2() < 1e-6)\n \t\treturn;\n-\t// unit vector orthogonal to the b vs. r function (pointing outwards\n-\t// with r and b increasing)\n+\t/*\n+\t * unit vector orthogonal to the b vs. r function (pointing outwards\n+\t * with r and b increasing)\n+\t */\n \ttransverse = transverse / transverse.len();\n \tdouble bestLogLikelihood = 0, bestT = 0, bestR = 0, bestB = 0;\n \tdouble transverseRange = config_.transverseNeg + config_.transversePos;\n \tconst int maxNumDeltas = 12;\n-\t// a transverse step approximately every 0.01 r/b units\n+\t/* a transverse step approximately every 0.01 r/b units */\n \tint numDeltas = floor(transverseRange * 100 + 0.5) + 1;\n \tnumDeltas = numDeltas < 3 ? 3 : (numDeltas > maxNumDeltas ? maxNumDeltas : numDeltas);\n-\t// Step down CT curve. March a bit further if the transverse range is\n-\t// large.\n+\t/*\n+\t * Step down CT curve. 
March a bit further if the transverse range is\n+\t * large.\n+\t */\n \tnsteps += numDeltas;\n \tfor (int i = -nsteps; i <= nsteps; i++) {\n \t\tdouble tTest = t + i * step;\n@@ -514,10 +544,10 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)\n \t\t\tprior.eval(prior.domain().clip(tTest));\n \t\tdouble rCurve = config_.ctR.eval(tTest, &spanR);\n \t\tdouble bCurve = config_.ctB.eval(tTest, &spanB);\n-\t\t// x will be distance off the curve, y the log likelihood there\n+\t\t/* x will be distance off the curve, y the log likelihood there */\n \t\tPwl::Point points[maxNumDeltas];\n \t\tint bestPoint = 0;\n-\t\t// Take some measurements transversely *off* the CT curve.\n+\t\t/* Take some measurements transversely *off* the CT curve. */\n \t\tfor (int j = 0; j < numDeltas; j++) {\n \t\t\tpoints[j].x = -config_.transverseNeg +\n \t\t\t\t      (transverseRange * j) / (numDeltas - 1);\n@@ -533,8 +563,10 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)\n \t\t\tif (points[j].y < points[bestPoint].y)\n \t\t\t\tbestPoint = j;\n \t\t}\n-\t\t// We have NUM_DELTAS points transversely across the CT curve,\n-\t\t// now let's do a quadratic interpolation for the best result.\n+\t\t/*\n+\t\t * We have NUM_DELTAS points transversely across the CT curve,\n+\t\t * now let's do a quadratic interpolation for the best result.\n+\t\t */\n \t\tbestPoint = std::max(1, std::min(bestPoint, numDeltas - 2));\n \t\tPwl::Point rbTest = Pwl::Point(rCurve, bCurve) +\n \t\t\t\t\ttransverse * interpolateQuadatric(points[bestPoint - 1],\n@@ -560,12 +592,16 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)\n \n void Awb::awbBayes()\n {\n-\t// May as well divide out G to save computeDelta2Sum from doing it over\n-\t// and over.\n+\t/*\n+\t * May as well divide out G to save computeDelta2Sum from doing it over\n+\t * and over.\n+\t */\n \tfor (auto &z : zones_)\n \t\tz.R = z.R / (z.G + 1), z.B = z.B / (z.G + 1);\n-\t// Get the 
current prior, and scale according to how many zones are\n-\t// valid... not entirely sure about this.\n+\t/*\n+\t * Get the current prior, and scale according to how many zones are\n+\t * valid... not entirely sure about this.\n+\t */\n \tPwl prior = interpolatePrior();\n \tprior *= zones_.size() / (double)(AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y);\n \tprior.map([](double x, double y) {\n@@ -577,19 +613,23 @@ void Awb::awbBayes()\n \tLOG(RPiAwb, Debug)\n \t\t<< \"After coarse search: r \" << r << \" b \" << b << \" (gains r \"\n \t\t<< 1 / r << \" b \" << 1 / b << \")\";\n-\t// Not entirely sure how to handle the fine search yet. Mostly the\n-\t// estimated CT is already good enough, but the fine search allows us to\n-\t// wander transverely off the CT curve. Under some illuminants, where\n-\t// there may be more or less green light, this may prove beneficial,\n-\t// though I probably need more real datasets before deciding exactly how\n-\t// this should be controlled and tuned.\n+\t/*\n+\t * Not entirely sure how to handle the fine search yet. Mostly the\n+\t * estimated CT is already good enough, but the fine search allows us to\n+\t * wander transverely off the CT curve. Under some illuminants, where\n+\t * there may be more or less green light, this may prove beneficial,\n+\t * though I probably need more real datasets before deciding exactly how\n+\t * this should be controlled and tuned.\n+\t */\n \tfineSearch(t, r, b, prior);\n \tLOG(RPiAwb, Debug)\n \t\t<< \"After fine search: r \" << r << \" b \" << b << \" (gains r \"\n \t\t<< 1 / r << \" b \" << 1 / b << \")\";\n-\t// Write results out for the main thread to pick up. Remember to adjust\n-\t// the gains from the ones that the \"canonical sensor\" would require to\n-\t// the ones needed by *this* sensor.\n+\t/*\n+\t * Write results out for the main thread to pick up. 
Remember to adjust\n+\t * the gains from the ones that the \"canonical sensor\" would require to\n+\t * the ones needed by *this* sensor.\n+\t */\n \tasyncResults_.temperatureK = t;\n \tasyncResults_.gainR = 1.0 / r * config_.sensitivityR;\n \tasyncResults_.gainG = 1.0;\n@@ -599,10 +639,12 @@ void Awb::awbBayes()\n void Awb::awbGrey()\n {\n \tLOG(RPiAwb, Debug) << \"Grey world AWB\";\n-\t// Make a separate list of the derivatives for each of red and blue, so\n-\t// that we can sort them to exclude the extreme gains.  We could\n-\t// consider some variations, such as normalising all the zones first, or\n-\t// doing an L2 average etc.\n+\t/*\n+\t * Make a separate list of the derivatives for each of red and blue, so\n+\t * that we can sort them to exclude the extreme gains.  We could\n+\t * consider some variations, such as normalising all the zones first, or\n+\t * doing an L2 average etc.\n+\t */\n \tstd::vector<RGB> &derivsR(zones_);\n \tstd::vector<RGB> derivsB(derivsR);\n \tstd::sort(derivsR.begin(), derivsR.end(),\n@@ -613,7 +655,7 @@ void Awb::awbGrey()\n \t\t  [](RGB const &a, RGB const &b) {\n \t\t\t  return a.G * b.B < b.G * a.B;\n \t\t  });\n-\t// Average the middle half of the values.\n+\t/* Average the middle half of the values. */\n \tint discard = derivsR.size() / 4;\n \tRGB sumR(0, 0, 0), sumB(0, 0, 0);\n \tfor (auto ri = derivsR.begin() + discard,\n@@ -622,7 +664,7 @@ void Awb::awbGrey()\n \t\tsumR += *ri, sumB += *bi;\n \tdouble gainR = sumR.G / (sumR.R + 1),\n \t       gainB = sumB.G / (sumB.B + 1);\n-\tasyncResults_.temperatureK = 4500; // don't know what it is\n+\tasyncResults_.temperatureK = 4500; /* don't know what it is */\n \tasyncResults_.gainR = gainR;\n \tasyncResults_.gainG = 1.0;\n \tasyncResults_.gainB = gainB;\n@@ -645,7 +687,7 @@ void Awb::doAwb()\n \t}\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Awb(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/awb.hpp b/src/ipa/raspberrypi/controller/rpi/awb.hpp\nindex 91251d6be2da..597f3182da44 100644\n--- a/src/ipa/raspberrypi/controller/rpi/awb.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/awb.hpp\n@@ -16,63 +16,73 @@\n \n namespace RPiController {\n \n-// Control algorithm to perform AWB calculations.\n+/* Control algorithm to perform AWB calculations. */\n \n struct AwbMode {\n \tvoid read(boost::property_tree::ptree const &params);\n-\tdouble ctLo; // low CT value for search\n-\tdouble ctHi; // high CT value for search\n+\tdouble ctLo; /* low CT value for search */\n+\tdouble ctHi; /* high CT value for search */\n };\n \n struct AwbPrior {\n \tvoid read(boost::property_tree::ptree const &params);\n-\tdouble lux; // lux level\n-\tPwl prior; // maps CT to prior log likelihood for this lux level\n+\tdouble lux; /* lux level */\n+\tPwl prior; /* maps CT to prior log likelihood for this lux level */\n };\n \n struct AwbConfig {\n \tAwbConfig() : defaultMode(nullptr) {}\n \tvoid read(boost::property_tree::ptree const &params);\n-\t// Only repeat the AWB calculation every \"this many\" frames\n+\t/* Only repeat the AWB calculation every \"this many\" frames */\n \tuint16_t framePeriod;\n-\t// number of initial frames for which speed taken as 1.0 (maximum)\n+\t/* number of initial frames for which speed taken as 1.0 (maximum) */\n \tuint16_t startupFrames;\n-\tunsigned int convergenceFrames; // approx number of frames to converge\n-\tdouble speed; // IIR filter speed applied to algorithm results\n-\tbool fast; // \"fast\" mode uses a 16x16 rather than 32x32 grid\n-\tPwl ctR; // function maps CT to r (= R/G)\n-\tPwl ctB; // function maps CT to b (= B/G)\n-\t// table of illuminant priors at different lux levels\n+\tunsigned int convergenceFrames; /* approx number of frames to converge */\n+\tdouble speed; /* IIR filter speed 
applied to algorithm results */\n+\tbool fast; /* \"fast\" mode uses a 16x16 rather than 32x32 grid */\n+\tPwl ctR; /* function maps CT to r (= R/G) */\n+\tPwl ctB; /* function maps CT to b (= B/G) */\n+\t/* table of illuminant priors at different lux levels */\n \tstd::vector<AwbPrior> priors;\n-\t// AWB \"modes\" (determines the search range)\n+\t/* AWB \"modes\" (determines the search range) */\n \tstd::map<std::string, AwbMode> modes;\n-\tAwbMode *defaultMode; // mode used if no mode selected\n-\t// minimum proportion of pixels counted within AWB region for it to be\n-\t// \"useful\"\n+\tAwbMode *defaultMode; /* mode used if no mode selected */\n+\t/*\n+\t * minimum proportion of pixels counted within AWB region for it to be\n+\t * \"useful\"\n+\t */\n \tdouble minPixels;\n-\t// minimum G value of those pixels, to be regarded a \"useful\"\n+\t/* minimum G value of those pixels, to be regarded a \"useful\" */\n \tuint16_t minG;\n-\t// number of AWB regions that must be \"useful\" in order to do the AWB\n-\t// calculation\n+\t/*\n+\t * number of AWB regions that must be \"useful\" in order to do the AWB\n+\t * calculation\n+\t */\n \tuint32_t minRegions;\n-\t// clamp on colour error term (so as not to penalise non-grey excessively)\n+\t/* clamp on colour error term (so as not to penalise non-grey excessively) */\n \tdouble deltaLimit;\n-\t// step size control in coarse search\n+\t/* step size control in coarse search */\n \tdouble coarseStep;\n-\t// how far to wander off CT curve towards \"more purple\"\n+\t/* how far to wander off CT curve towards \"more purple\" */\n \tdouble transversePos;\n-\t// how far to wander off CT curve towards \"more green\"\n+\t/* how far to wander off CT curve towards \"more green\" */\n \tdouble transverseNeg;\n-\t// red sensitivity ratio (set to canonical sensor's R/G divided by this\n-\t// sensor's R/G)\n+\t/*\n+\t * red sensitivity ratio (set to canonical sensor's R/G divided by this\n+\t * sensor's R/G)\n+\t 
*/\n \tdouble sensitivityR;\n-\t// blue sensitivity ratio (set to canonical sensor's B/G divided by this\n-\t// sensor's B/G)\n+\t/*\n+\t * blue sensitivity ratio (set to canonical sensor's B/G divided by this\n+\t * sensor's B/G)\n+\t */\n \tdouble sensitivityB;\n-\t// The whitepoint (which we normally \"aim\" for) can be moved.\n+\t/* The whitepoint (which we normally \"aim\" for) can be moved. */\n \tdouble whitepointR;\n \tdouble whitepointB;\n-\tbool bayes; // use Bayesian algorithm\n+\tbool bayes; /* use Bayesian algorithm */\n };\n \n class Awb : public AwbAlgorithm\n@@ -83,7 +93,7 @@ public:\n \tchar const *name() const override;\n \tvoid initialise() override;\n \tvoid read(boost::property_tree::ptree const &params) override;\n-\t// AWB handles \"pausing\" for itself.\n+\t/* AWB handles \"pausing\" for itself. */\n \tbool isPaused() const override;\n \tvoid pause() override;\n \tvoid resume() override;\n@@ -108,35 +118,39 @@ public:\n \n private:\n \tbool isAutoEnabled() const;\n-\t// configuration is read-only, and available to both threads\n+\t/* configuration is read-only, and available to both threads */\n \tAwbConfig config_;\n \tstd::thread asyncThread_;\n-\tvoid asyncFunc(); // asynchronous thread function\n+\tvoid asyncFunc(); /* asynchronous thread function */\n \tstd::mutex mutex_;\n-\t// condvar for async thread to wait on\n+\t/* condvar for async thread to wait on */\n \tstd::condition_variable asyncSignal_;\n-\t// condvar for synchronous thread to wait on\n+\t/* condvar for synchronous thread to wait on */\n \tstd::condition_variable syncSignal_;\n-\t// for sync thread to check  if async thread finished (requires mutex)\n+\t/* for sync thread to check  if async thread finished (requires mutex) */\n \tbool asyncFinished_;\n-\t// for async thread to check if it's been told to run (requires mutex)\n+\t/* for async thread to check if it's been told to run (requires mutex) */\n \tbool asyncStart_;\n-\t// for async thread to check if it's been told 
to quit (requires mutex)\n+\t/* for async thread to check if it's been told to quit (requires mutex) */\n \tbool asyncAbort_;\n \n-\t// The following are only for the synchronous thread to use:\n-\t// for sync thread to note its has asked async thread to run\n+\t/*\n+\t * The following are only for the synchronous thread to use:\n+\t * for sync thread to note its has asked async thread to run\n+\t */\n \tbool asyncStarted_;\n-\t// counts up to framePeriod before restarting the async thread\n+\t/* counts up to framePeriod before restarting the async thread */\n \tint framePhase_;\n-\tint frameCount_; // counts up to startup_frames\n+\tint frameCount_; /* counts up to startup_frames */\n \tAwbStatus syncResults_;\n \tAwbStatus prevSyncResults_;\n \tstd::string modeName_;\n-\t// The following are for the asynchronous thread to use, though the main\n-\t// thread can set/reset them if the async thread is known to be idle:\n+\t/*\n+\t * The following are for the asynchronous thread to use, though the main\n+\t * thread can set/reset them if the async thread is known to be idle:\n+\t */\n \tvoid restartAsync(StatisticsPtr &stats, double lux);\n-\t// copy out the results from the async thread so that it can be restarted\n+\t/* copy out the results from the async thread so that it can be restarted */\n \tvoid fetchAsyncResults();\n \tStatisticsPtr statistics_;\n \tAwbMode *mode_;\n@@ -152,11 +166,11 @@ private:\n \tvoid fineSearch(double &t, double &r, double &b, Pwl const &prior);\n \tstd::vector<RGB> zones_;\n \tstd::vector<Pwl::Point> points_;\n-\t// manual r setting\n+\t/* manual r setting */\n \tdouble manualR_;\n-\t// manual b setting\n+\t/* manual b setting */\n \tdouble manualB_;\n-\tbool firstSwitchMode_; // is this the first call to SwitchMode?\n+\tbool firstSwitchMode_; /* is this the first call to SwitchMode? 
*/\n };\n \n static inline Awb::RGB operator+(Awb::RGB const &a, Awb::RGB const &b)\n@@ -176,4 +190,4 @@ static inline Awb::RGB operator*(Awb::RGB const &rgb, double d)\n \treturn d * rgb;\n }\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/black_level.cpp b/src/ipa/raspberrypi/controller/rpi/black_level.cpp\nindex 695b3129dd93..88fe4538d18d 100644\n--- a/src/ipa/raspberrypi/controller/rpi/black_level.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/black_level.cpp\n@@ -34,7 +34,7 @@ char const *BlackLevel::name() const\n void BlackLevel::read(boost::property_tree::ptree const &params)\n {\n \tuint16_t blackLevel = params.get<uint16_t>(\n-\t\t\"black_level\", 4096); // 64 in 10 bits scaled to 16 bits\n+\t\t\"black_level\", 4096); /* 64 in 10 bits scaled to 16 bits */\n \tblackLevelR_ = params.get<uint16_t>(\"black_level_r\", blackLevel);\n \tblackLevelG_ = params.get<uint16_t>(\"black_level_g\", blackLevel);\n \tblackLevelB_ = params.get<uint16_t>(\"black_level_b\", blackLevel);\n@@ -46,8 +46,10 @@ void BlackLevel::read(boost::property_tree::ptree const &params)\n \n void BlackLevel::prepare(Metadata *imageMetadata)\n {\n-\t// Possibly we should think about doing this in a switch_mode or\n-\t// something?\n+\t/*\n+\t * Possibly we should think about doing this in a switch_mode or\n+\t * something?\n+\t */\n \tstruct BlackLevelStatus status;\n \tstatus.black_level_r = blackLevelR_;\n \tstatus.black_level_g = blackLevelG_;\n@@ -55,7 +57,7 @@ void BlackLevel::prepare(Metadata *imageMetadata)\n \timageMetadata->set(\"black_level.status\", status);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn new BlackLevel(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/black_level.hpp b/src/ipa/raspberrypi/controller/rpi/black_level.hpp\nindex 0d74f6a4c49b..f01c55151288 100644\n--- a/src/ipa/raspberrypi/controller/rpi/black_level.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/black_level.hpp\n@@ -9,7 +9,7 @@\n #include \"../algorithm.hpp\"\n #include \"../black_level_status.h\"\n \n-// This is our implementation of the \"black level algorithm\".\n+/* This is our implementation of the \"black level algorithm\". */\n \n namespace RPiController {\n \n@@ -27,4 +27,4 @@ private:\n \tdouble blackLevelB_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/ccm.cpp b/src/ipa/raspberrypi/controller/rpi/ccm.cpp\nindex 24d8e5bd1fd8..9ad63b6e20d9 100644\n--- a/src/ipa/raspberrypi/controller/rpi/ccm.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/ccm.cpp\n@@ -19,11 +19,13 @@ using namespace libcamera;\n \n LOG_DEFINE_CATEGORY(RPiCcm)\n \n-// This algorithm selects a CCM (Colour Correction Matrix) according to the\n-// colour temperature estimated by AWB (interpolating between known matricies as\n-// necessary). Additionally the amount of colour saturation can be controlled\n-// both according to the current estimated lux level and according to a\n-// saturation setting that is exposed to applications.\n+/*\n+ * This algorithm selects a CCM (Colour Correction Matrix) according to the\n+ * colour temperature estimated by AWB (interpolating between known matricies as\n+ * necessary). 
Additionally the amount of colour saturation can be controlled\n+ * both according to the current estimated lux level and according to a\n+ * saturation setting that is exposed to applications.\n+ */\n \n #define NAME \"rpi.ccm\"\n \n@@ -125,11 +127,11 @@ void Ccm::prepare(Metadata *imageMetadata)\n {\n \tbool awbOk = false, luxOk = false;\n \tstruct AwbStatus awb = {};\n-\tawb.temperatureK = 4000; // in case no metadata\n+\tawb.temperatureK = 4000; /* in case no metadata */\n \tstruct LuxStatus lux = {};\n-\tlux.lux = 400; // in case no metadata\n+\tlux.lux = 400; /* in case no metadata */\n \t{\n-\t\t// grab mutex just once to get everything\n+\t\t/* grab mutex just once to get everything */\n \t\tstd::lock_guard<Metadata> lock(*imageMetadata);\n \t\tawbOk = getLocked(imageMetadata, \"awb.status\", awb);\n \t\tluxOk = getLocked(imageMetadata, \"lux.status\", lux);\n@@ -162,7 +164,7 @@ void Ccm::prepare(Metadata *imageMetadata)\n \timageMetadata->set(\"ccm.status\", ccmStatus);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. */\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Ccm(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/ccm.hpp b/src/ipa/raspberrypi/controller/rpi/ccm.hpp\nindex 4c4807b8a942..7622044ce49c 100644\n--- a/src/ipa/raspberrypi/controller/rpi/ccm.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/ccm.hpp\n@@ -13,7 +13,7 @@\n \n namespace RPiController {\n \n-// Algorithm to calculate colour matrix. Should be placed after AWB.\n+/* Algorithm to calculate colour matrix. Should be placed after AWB. 
*/\n \n struct Matrix {\n \tMatrix(double m0, double m1, double m2, double m3, double m4, double m5,\n@@ -72,4 +72,4 @@ private:\n \tdouble saturation_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/contrast.cpp b/src/ipa/raspberrypi/controller/rpi/contrast.cpp\nindex 169837576678..f11c834a0192 100644\n--- a/src/ipa/raspberrypi/controller/rpi/contrast.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/contrast.cpp\n@@ -18,11 +18,13 @@ using namespace libcamera;\n \n LOG_DEFINE_CATEGORY(RPiContrast)\n \n-// This is a very simple control algorithm which simply retrieves the results of\n-// AGC and AWB via their \"status\" metadata, and applies digital gain to the\n-// colour channels in accordance with those instructions. We take care never to\n-// apply less than unity gains, as that would cause fully saturated pixels to go\n-// off-white.\n+/*\n+ * This is a very simple control algorithm which simply retrieves the results of\n+ * AGC and AWB via their \"status\" metadata, and applies digital gain to the\n+ * colour channels in accordance with those instructions. 
We take care never to\n+ * apply less than unity gains, as that would cause fully saturated pixels to go\n+ * off-white.\n+ */\n \n #define NAME \"rpi.contrast\"\n \n@@ -38,15 +40,15 @@ char const *Contrast::name() const\n \n void Contrast::read(boost::property_tree::ptree const &params)\n {\n-\t// enable adaptive enhancement by default\n+\t/* enable adaptive enhancement by default */\n \tconfig_.ceEnable = params.get<int>(\"ce_enable\", 1);\n-\t// the point near the bottom of the histogram to move\n+\t/* the point near the bottom of the histogram to move */\n \tconfig_.loHistogram = params.get<double>(\"lo_histogram\", 0.01);\n-\t// where in the range to try and move it to\n+\t/* where in the range to try and move it to */\n \tconfig_.loLevel = params.get<double>(\"lo_level\", 0.015);\n-\t// but don't move by more than this\n+\t/* but don't move by more than this */\n \tconfig_.loMax = params.get<double>(\"lo_max\", 500);\n-\t// equivalent values for the top of the histogram...\n+\t/* equivalent values for the top of the histogram... 
*/\n \tconfig_.hiHistogram = params.get<double>(\"hi_histogram\", 0.95);\n \tconfig_.hiLevel = params.get<double>(\"hi_level\", 0.95);\n \tconfig_.hiMax = params.get<double>(\"hi_max\", 2000);\n@@ -81,8 +83,10 @@ static void fillInStatus(ContrastStatus &status, double brightness,\n \n void Contrast::initialise()\n {\n-\t// Fill in some default values as Prepare will run before Process gets\n-\t// called.\n+\t/*\n+\t * Fill in some default values as Prepare will run before Process gets\n+\t * called.\n+\t */\n \tfillInStatus(status_, brightness_, contrast_, config_.gammaCurve);\n }\n \n@@ -97,8 +101,10 @@ Pwl computeStretchCurve(Histogram const &histogram,\n {\n \tPwl enhance;\n \tenhance.append(0, 0);\n-\t// If the start of the histogram is rather empty, try to pull it down a\n-\t// bit.\n+\t/*\n+\t * If the start of the histogram is rather empty, try to pull it down a\n+\t * bit.\n+\t */\n \tdouble histLo = histogram.quantile(config.loHistogram) *\n \t\t\t(65536 / NUM_HISTOGRAM_BINS);\n \tdouble levelLo = config.loLevel * 65536;\n@@ -109,13 +115,17 @@ Pwl computeStretchCurve(Histogram const &histogram,\n \tLOG(RPiContrast, Debug)\n \t\t<< \"Final values \" << histLo << \" -> \" << levelLo;\n \tenhance.append(histLo, levelLo);\n-\t// Keep the mid-point (median) in the same place, though, to limit the\n-\t// apparent amount of global brightness shift.\n+\t/*\n+\t * Keep the mid-point (median) in the same place, though, to limit the\n+\t * apparent amount of global brightness shift.\n+\t */\n \tdouble mid = histogram.quantile(0.5) * (65536 / NUM_HISTOGRAM_BINS);\n \tenhance.append(mid, mid);\n \n-\t// If the top to the histogram is empty, try to pull the pixel values\n-\t// there up.\n+\t/*\n+\t * If the top to the histogram is empty, try to pull the pixel values\n+\t * there up.\n+\t */\n \tdouble histHi = histogram.quantile(config.hiHistogram) *\n \t\t\t(65536 / NUM_HISTOGRAM_BINS);\n \tdouble levelHi = config.hiLevel * 65536;\n@@ -149,22 +159,30 @@ void 
Contrast::process(StatisticsPtr &stats,\n \t\t       [[maybe_unused]] Metadata *imageMetadata)\n {\n \tHistogram histogram(stats->hist[0].g_hist, NUM_HISTOGRAM_BINS);\n-\t// We look at the histogram and adjust the gamma curve in the following\n-\t// ways: 1. Adjust the gamma curve so as to pull the start of the\n-\t// histogram down, and possibly push the end up.\n+\t/*\n+\t * We look at the histogram and adjust the gamma curve in the following\n+\t * ways: 1. Adjust the gamma curve so as to pull the start of the\n+\t * histogram down, and possibly push the end up.\n+\t */\n \tPwl gammaCurve = config_.gammaCurve;\n \tif (config_.ceEnable) {\n \t\tif (config_.loMax != 0 || config_.hiMax != 0)\n \t\t\tgammaCurve = computeStretchCurve(histogram, config_).compose(gammaCurve);\n-\t\t// We could apply other adjustments (e.g. partial equalisation)\n-\t\t// based on the histogram...?\n+\t\t/*\n+\t\t * We could apply other adjustments (e.g. partial equalisation)\n+\t\t * based on the histogram...?\n+\t\t */\n \t}\n-\t// 2. Finally apply any manually selected brightness/contrast\n-\t// adjustment.\n+\t/*\n+\t * 2. Finally apply any manually selected brightness/contrast\n+\t * adjustment.\n+\t */\n \tif (brightness_ != 0 || contrast_ != 1.0)\n \t\tgammaCurve = applyManualContrast(gammaCurve, brightness_, contrast_);\n-\t// And fill in the status for output. Use more points towards the bottom\n-\t// of the curve.\n+\t/*\n+\t * And fill in the status for output. Use more points towards the bottom\n+\t * of the curve.\n+\t */\n \tContrastStatus status;\n \tfillInStatus(status, brightness_, contrast_, gammaCurve);\n \t{\n@@ -173,7 +191,7 @@ void Contrast::process(StatisticsPtr &stats,\n \t}\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Contrast(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/contrast.hpp b/src/ipa/raspberrypi/controller/rpi/contrast.hpp\nindex 5a6d530f63fd..4793dedc10ff 100644\n--- a/src/ipa/raspberrypi/controller/rpi/contrast.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/contrast.hpp\n@@ -13,8 +13,10 @@\n \n namespace RPiController {\n \n-// Back End algorithm to appaly correct digital gain. Should be placed after\n-// Back End AWB.\n+/*\n+ * Back End algorithm to appaly correct digital gain. Should be placed after\n+ * Back End AWB.\n+ */\n \n struct ContrastConfig {\n \tbool ceEnable;\n@@ -47,4 +49,4 @@ private:\n \tstd::mutex mutex_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/dpc.cpp b/src/ipa/raspberrypi/controller/rpi/dpc.cpp\nindex 42154cf300b8..68ba5e3e37bb 100644\n--- a/src/ipa/raspberrypi/controller/rpi/dpc.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/dpc.cpp\n@@ -14,8 +14,10 @@ using namespace libcamera;\n \n LOG_DEFINE_CATEGORY(RPiDpc)\n \n-// We use the lux status so that we can apply stronger settings in darkness (if\n-// necessary).\n+/*\n+ * We use the lux status so that we can apply stronger settings in darkness (if\n+ * necessary).\n+ */\n \n #define NAME \"rpi.dpc\"\n \n@@ -39,13 +41,13 @@ void Dpc::read(boost::property_tree::ptree const &params)\n void Dpc::prepare(Metadata *imageMetadata)\n {\n \tDpcStatus dpcStatus = {};\n-\t// Should we vary this with lux level or analogue gain? TBD.\n+\t/* Should we vary this with lux level or analogue gain? TBD. */\n \tdpcStatus.strength = config_.strength;\n \tLOG(RPiDpc, Debug) << \"strength \" << dpcStatus.strength;\n \timageMetadata->set(\"dpc.status\", dpcStatus);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Dpc(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/dpc.hpp b/src/ipa/raspberrypi/controller/rpi/dpc.hpp\nindex 039310cc8d05..048fa2b8405e 100644\n--- a/src/ipa/raspberrypi/controller/rpi/dpc.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/dpc.hpp\n@@ -11,7 +11,7 @@\n \n namespace RPiController {\n \n-// Back End algorithm to apply appropriate GEQ settings.\n+/* Back End algorithm to apply appropriate GEQ settings. */\n \n struct DpcConfig {\n \tint strength;\n@@ -29,4 +29,4 @@ private:\n \tDpcConfig config_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/geq.cpp b/src/ipa/raspberrypi/controller/rpi/geq.cpp\nindex 0da5efdf3d3d..14f226cf989c 100644\n--- a/src/ipa/raspberrypi/controller/rpi/geq.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/geq.cpp\n@@ -18,8 +18,10 @@ using namespace libcamera;\n \n LOG_DEFINE_CATEGORY(RPiGeq)\n \n-// We use the lux status so that we can apply stronger settings in darkness (if\n-// necessary).\n+/*\n+ * We use the lux status so that we can apply stronger settings in darkness (if\n+ * necessary).\n+ */\n \n #define NAME \"rpi.geq\"\n \n@@ -50,7 +52,7 @@ void Geq::prepare(Metadata *imageMetadata)\n \tif (imageMetadata->get(\"lux.status\", luxStatus))\n \t\tLOG(RPiGeq, Warning) << \"no lux data found\";\n \tDeviceStatus deviceStatus;\n-\tdeviceStatus.analogueGain = 1.0; // in case not found\n+\tdeviceStatus.analogueGain = 1.0; /* in case not found */\n \tif (imageMetadata->get(\"device.status\", deviceStatus))\n \t\tLOG(RPiGeq, Warning)\n \t\t\t<< \"no device metadata - use analogue gain of 1x\";\n@@ -71,7 +73,7 @@ void Geq::prepare(Metadata *imageMetadata)\n \timageMetadata->set(\"geq.status\", geqStatus);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Geq(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/geq.hpp b/src/ipa/raspberrypi/controller/rpi/geq.hpp\nindex bdbc55b2e2d9..5ea424fc768d 100644\n--- a/src/ipa/raspberrypi/controller/rpi/geq.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/geq.hpp\n@@ -11,12 +11,12 @@\n \n namespace RPiController {\n \n-// Back End algorithm to apply appropriate GEQ settings.\n+/* Back End algorithm to apply appropriate GEQ settings. */\n \n struct GeqConfig {\n \tuint16_t offset;\n \tdouble slope;\n-\tPwl strength; // lux to strength factor\n+\tPwl strength; /* lux to strength factor */\n };\n \n class Geq : public Algorithm\n@@ -31,4 +31,4 @@ private:\n \tGeqConfig config_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/lux.cpp b/src/ipa/raspberrypi/controller/rpi/lux.cpp\nindex 10654fbba94a..7f86f17470d8 100644\n--- a/src/ipa/raspberrypi/controller/rpi/lux.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/lux.cpp\n@@ -25,8 +25,10 @@ LOG_DEFINE_CATEGORY(RPiLux)\n Lux::Lux(Controller *controller)\n \t: Algorithm(controller)\n {\n-\t// Put in some defaults as there will be no meaningful values until\n-\t// Process has run.\n+\t/*\n+\t * Put in some defaults as there will be no meaningful values until\n+\t * Process has run.\n+\t */\n \tstatus_.aperture = 1.0;\n \tstatus_.lux = 400;\n }\n@@ -71,7 +73,7 @@ void Lux::process(StatisticsPtr &stats, Metadata *imageMetadata)\n \t\t\t\t    sizeof(stats->hist[0].g_hist[0]);\n \t\tfor (int i = 0; i < numBins; i++)\n \t\t\tsum += bin[i] * (uint64_t)i, num += bin[i];\n-\t\t// add .5 to reflect the mid-points of bins\n+\t\t/* add .5 to reflect the mid-points of bins */\n \t\tdouble currentY = sum / (double)num + .5;\n \t\tdouble gainRatio = referenceGain_ / currentGain;\n \t\tdouble shutterSpeedRatio =\n@@ -89,14 +91,16 @@ void Lux::process(StatisticsPtr &stats, Metadata 
*imageMetadata)\n \t\t\tstd::unique_lock<std::mutex> lock(mutex_);\n \t\t\tstatus_ = status;\n \t\t}\n-\t\t// Overwrite the metadata here as well, so that downstream\n-\t\t// algorithms get the latest value.\n+\t\t/*\n+\t\t * Overwrite the metadata here as well, so that downstream\n+\t\t * algorithms get the latest value.\n+\t\t */\n \t\timageMetadata->set(\"lux.status\", status);\n \t} else\n \t\tLOG(RPiLux, Warning) << \": no device metadata\";\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. */\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Lux(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/lux.hpp b/src/ipa/raspberrypi/controller/rpi/lux.hpp\nindex 98cfd0ac8bd0..7cf189363c06 100644\n--- a/src/ipa/raspberrypi/controller/rpi/lux.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/lux.hpp\n@@ -13,7 +13,7 @@\n #include \"../lux_status.h\"\n #include \"../algorithm.hpp\"\n \n-// This is our implementation of the \"lux control algorithm\".\n+/* This is our implementation of the \"lux control algorithm\". 
*/\n \n namespace RPiController {\n \n@@ -28,16 +28,18 @@ public:\n \tvoid setCurrentAperture(double aperture);\n \n private:\n-\t// These values define the conditions of the reference image, against\n-\t// which we compare the new image.\n+\t/*\n+\t * These values define the conditions of the reference image, against\n+\t * which we compare the new image.\n+\t */\n \tlibcamera::utils::Duration referenceshutterSpeed_;\n \tdouble referenceGain_;\n-\tdouble referenceAperture_; // units of 1/f\n-\tdouble referenceY_; // out of 65536\n+\tdouble referenceAperture_; /* units of 1/f */\n+\tdouble referenceY_; /* out of 65536 */\n \tdouble referenceLux_;\n \tdouble currentAperture_;\n \tLuxStatus status_;\n \tstd::mutex mutex_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/noise.cpp b/src/ipa/raspberrypi/controller/rpi/noise.cpp\nindex d6e4df4192f2..8117ce3608ed 100644\n--- a/src/ipa/raspberrypi/controller/rpi/noise.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/noise.cpp\n@@ -34,8 +34,10 @@ char const *Noise::name() const\n void Noise::switchMode(CameraMode const &cameraMode,\n \t\t       [[maybe_unused]] Metadata *metadata)\n {\n-\t// For example, we would expect a 2x2 binned mode to have a \"noise\n-\t// factor\" of sqrt(2x2) = 2. (can't be less than one, right?)\n+\t/*\n+\t * For example, we would expect a 2x2 binned mode to have a \"noise\n+\t * factor\" of sqrt(2x2) = 2. 
(can't be less than one, right?)\n+\t */\n \tmodeFactor_ = std::max(1.0, cameraMode.noiseFactor);\n }\n \n@@ -48,14 +50,16 @@ void Noise::read(boost::property_tree::ptree const &params)\n void Noise::prepare(Metadata *imageMetadata)\n {\n \tstruct DeviceStatus deviceStatus;\n-\tdeviceStatus.analogueGain = 1.0; // keep compiler calm\n+\tdeviceStatus.analogueGain = 1.0; /* keep compiler calm */\n \tif (imageMetadata->get(\"device.status\", deviceStatus) == 0) {\n-\t\t// There is a slight question as to exactly how the noise\n-\t\t// profile, specifically the constant part of it, scales. For\n-\t\t// now we assume it all scales the same, and we'll revisit this\n-\t\t// if it proves substantially wrong.  NOTE: we may also want to\n-\t\t// make some adjustments based on the camera mode (such as\n-\t\t// binning), if we knew how to discover it...\n+\t\t/*\n+\t\t * There is a slight question as to exactly how the noise\n+\t\t * profile, specifically the constant part of it, scales. For\n+\t\t * now we assume it all scales the same, and we'll revisit this\n+\t\t * if it proves substantially wrong.  NOTE: we may also want to\n+\t\t * make some adjustments based on the camera mode (such as\n+\t\t * binning), if we knew how to discover it...\n+\t\t */\n \t\tdouble factor = sqrt(deviceStatus.analogueGain) / modeFactor_;\n \t\tstruct NoiseStatus status;\n \t\tstatus.noise_constant = referenceConstant_ * factor;\n@@ -68,7 +72,7 @@ void Noise::prepare(Metadata *imageMetadata)\n \t\tLOG(RPiNoise, Warning) << \" no metadata\";\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn new Noise(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/noise.hpp b/src/ipa/raspberrypi/controller/rpi/noise.hpp\nindex ed6ffe910e27..56a4707b5ef2 100644\n--- a/src/ipa/raspberrypi/controller/rpi/noise.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/noise.hpp\n@@ -9,7 +9,7 @@\n #include \"../algorithm.hpp\"\n #include \"../noise_status.h\"\n \n-// This is our implementation of the \"noise algorithm\".\n+/* This is our implementation of the \"noise algorithm\". */\n \n namespace RPiController {\n \n@@ -23,10 +23,10 @@ public:\n \tvoid prepare(Metadata *imageMetadata) override;\n \n private:\n-\t// the noise profile for analogue gain of 1.0\n+\t/* the noise profile for analogue gain of 1.0 */\n \tdouble referenceConstant_;\n \tdouble referenceSlope_;\n \tdouble modeFactor_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sdn.cpp b/src/ipa/raspberrypi/controller/rpi/sdn.cpp\nindex 8707b6d9cd9e..6459b90fb9d4 100644\n--- a/src/ipa/raspberrypi/controller/rpi/sdn.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/sdn.cpp\n@@ -17,8 +17,10 @@ using namespace libcamera;\n \n LOG_DEFINE_CATEGORY(RPiSdn)\n \n-// Calculate settings for the spatial denoise block using the noise profile in\n-// the image metadata.\n+/*\n+ * Calculate settings for the spatial denoise block using the noise profile in\n+ * the image metadata.\n+ */\n \n #define NAME \"rpi.sdn\"\n \n@@ -45,7 +47,7 @@ void Sdn::initialise()\n void Sdn::prepare(Metadata *imageMetadata)\n {\n \tstruct NoiseStatus noiseStatus = {};\n-\tnoiseStatus.noise_slope = 3.0; // in case no metadata\n+\tnoiseStatus.noise_slope = 3.0; /* in case no metadata */\n \tif (imageMetadata->get(\"noise.status\", noiseStatus) != 0)\n \t\tLOG(RPiSdn, Warning) << \"no noise profile found\";\n \tLOG(RPiSdn, Debug)\n@@ -65,11 +67,11 @@ void Sdn::prepare(Metadata *imageMetadata)\n \n void 
Sdn::setMode(DenoiseMode mode)\n {\n-\t// We only distinguish between off and all other modes.\n+\t/* We only distinguish between off and all other modes. */\n \tmode_ = mode;\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. */\n static Algorithm *create(Controller *controller)\n {\n \treturn (Algorithm *)new Sdn(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sdn.hpp b/src/ipa/raspberrypi/controller/rpi/sdn.hpp\nindex d9b18f296635..8b6e3db1a548 100644\n--- a/src/ipa/raspberrypi/controller/rpi/sdn.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/sdn.hpp\n@@ -11,7 +11,7 @@\n \n namespace RPiController {\n \n-// Algorithm to calculate correct spatial denoise (SDN) settings.\n+/* Algorithm to calculate correct spatial denoise (SDN) settings. */\n \n class Sdn : public DenoiseAlgorithm\n {\n@@ -29,4 +29,4 @@ private:\n \tDenoiseMode mode_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.cpp b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp\nindex 775ed0fd2c46..b8f5b1005ac6 100644\n--- a/src/ipa/raspberrypi/controller/rpi/sharpen.cpp\n+++ b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp\n@@ -33,7 +33,7 @@ char const *Sharpen::name() const\n void Sharpen::switchMode(CameraMode const &cameraMode,\n \t\t\t [[maybe_unused]] Metadata *metadata)\n {\n-\t// can't be less than one, right?\n+\t/* can't be less than one, right? */\n \tmodeFactor_ = std::max(1.0, cameraMode.noiseFactor);\n }\n \n@@ -50,24 +50,30 @@ void Sharpen::read(boost::property_tree::ptree const &params)\n \n void Sharpen::setStrength(double strength)\n {\n-\t// Note that this function is how an application sets the overall\n-\t// sharpening \"strength\". We call this the \"user strength\" field\n-\t// as there already is a strength_ field - being an internal gain\n-\t// parameter that gets passed to the ISP control code. 
Negative\n-\t// values are not allowed - coerce them to zero (no sharpening).\n+\t/*\n+\t * Note that this function is how an application sets the overall\n+\t * sharpening \"strength\". We call this the \"user strength\" field\n+\t * as there already is a strength_ field - being an internal gain\n+\t * parameter that gets passed to the ISP control code. Negative\n+\t * values are not allowed - coerce them to zero (no sharpening).\n+\t */\n \tuserStrength_ = std::max(0.0, strength);\n }\n \n void Sharpen::prepare(Metadata *imageMetadata)\n {\n-\t// The user_strength_ affects the algorithm's internal gain directly, but\n-\t// we adjust the limit and threshold less aggressively. Using a sqrt\n-\t// function is an arbitrary but gentle way of accomplishing this.\n+\t/*\n+\t * The user_strength_ affects the algorithm's internal gain directly, but\n+\t * we adjust the limit and threshold less aggressively. Using a sqrt\n+\t * function is an arbitrary but gentle way of accomplishing this.\n+\t */\n \tdouble userStrengthSqrt = sqrt(userStrength_);\n \tstruct SharpenStatus status;\n-\t// Binned modes seem to need the sharpening toned down with this\n-\t// pipeline, thus we use the mode_factor here. Also avoid\n-\t// divide-by-zero with the userStrengthSqrt.\n+\t/*\n+\t * Binned modes seem to need the sharpening toned down with this\n+\t * pipeline, thus we use the mode_factor here. Also avoid\n+\t * divide-by-zero with the userStrengthSqrt.\n+\t */\n \tstatus.threshold = threshold_ * modeFactor_ /\n \t\t\t   std::max(0.01, userStrengthSqrt);\n \tstatus.strength = strength_ / modeFactor_ * userStrength_;\n@@ -77,7 +83,7 @@ void Sharpen::prepare(Metadata *imageMetadata)\n \timageMetadata->set(\"sharpen.status\", status);\n }\n \n-// Register algorithm with the system.\n+/* Register algorithm with the system. 
*/\n static Algorithm *create(Controller *controller)\n {\n \treturn new Sharpen(controller);\ndiff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.hpp b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp\nindex ced917f3c42b..18c45fd4e2a7 100644\n--- a/src/ipa/raspberrypi/controller/rpi/sharpen.hpp\n+++ b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp\n@@ -9,7 +9,7 @@\n #include \"../sharpen_algorithm.hpp\"\n #include \"../sharpen_status.h\"\n \n-// This is our implementation of the \"sharpen algorithm\".\n+/* This is our implementation of the \"sharpen algorithm\". */\n \n namespace RPiController {\n \n@@ -31,4 +31,4 @@ private:\n \tdouble userStrength_;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/sharpen_algorithm.hpp b/src/ipa/raspberrypi/controller/sharpen_algorithm.hpp\nindex 888f4569c56a..22cc6090f8fc 100644\n--- a/src/ipa/raspberrypi/controller/sharpen_algorithm.hpp\n+++ b/src/ipa/raspberrypi/controller/sharpen_algorithm.hpp\n@@ -14,8 +14,8 @@ class SharpenAlgorithm : public Algorithm\n {\n public:\n \tSharpenAlgorithm(Controller *controller) : Algorithm(controller) {}\n-\t// A sharpness control algorithm must provide the following:\n+\t/* A sharpness control algorithm must provide the following: */\n \tvirtual void setStrength(double strength) = 0;\n };\n \n-} // namespace RPiController\n+} /* namespace RPiController */\ndiff --git a/src/ipa/raspberrypi/controller/sharpen_status.h b/src/ipa/raspberrypi/controller/sharpen_status.h\nindex 2b0490742fba..5ea21ab23f91 100644\n--- a/src/ipa/raspberrypi/controller/sharpen_status.h\n+++ b/src/ipa/raspberrypi/controller/sharpen_status.h\n@@ -6,20 +6,20 @@\n  */\n #pragma once\n \n-// The \"sharpen\" algorithm stores the strength to use.\n+/* The \"sharpen\" algorithm stores the strength to use. */\n \n #ifdef __cplusplus\n extern \"C\" {\n #endif\n \n struct SharpenStatus {\n-\t// controls the smallest level of detail (or noise!) 
that sharpening will pick up\n+\t/* controls the smallest level of detail (or noise!) that sharpening will pick up */\n \tdouble threshold;\n-\t// the rate at which the sharpening response ramps once above the threshold\n+\t/* the rate at which the sharpening response ramps once above the threshold */\n \tdouble strength;\n-\t// upper limit of the allowed sharpening response\n+\t/* upper limit of the allowed sharpening response */\n \tdouble limit;\n-\t// The sharpening strength requested by the user or application.\n+\t/* The sharpening strength requested by the user or application. */\n \tdouble userStrength;\n };\n \ndiff --git a/src/ipa/raspberrypi/md_parser.hpp b/src/ipa/raspberrypi/md_parser.hpp\nindex e505108a7adc..a05ab800b9ae 100644\n--- a/src/ipa/raspberrypi/md_parser.hpp\n+++ b/src/ipa/raspberrypi/md_parser.hpp\n@@ -152,4 +152,4 @@ private:\n \tOffsetMap offsets_;\n };\n \n-} // namespace RPi\n+} /* namespace RPi */\n","prefixes":["libcamera-devel","11/15"]}