From patchwork Mon May 4 09:28:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Pinchart X-Patchwork-Id: 3683 Return-Path: Received: from perceval.ideasonboard.com (perceval.ideasonboard.com [IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647]) by lancelot.ideasonboard.com (Postfix) with ESMTPS id 51944603F2 for ; Mon, 4 May 2020 11:28:38 +0200 (CEST) Authentication-Results: lancelot.ideasonboard.com; dkim=pass (1024-bit key; unprotected) header.d=ideasonboard.com header.i=@ideasonboard.com header.b="Bw5XBvq8"; dkim-atps=neutral Received: from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi [81.175.216.236]) by perceval.ideasonboard.com (Postfix) with ESMTPSA id C0EFE4F7; Mon, 4 May 2020 11:28:37 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com; s=mail; t=1588584518; bh=9JbbUN6yVJD7mX2oEUwHA7gtmEuMM1xMW4Yk7+fqwm4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Bw5XBvq8xfdA3qnch0sUxAsfFOK5sYIR4Lkl6eyxFWUfGePKskR7t6CeviyLbivpv eUUS23B/Y1s1F6p3OuGrrzp0vk8/MzxcMgWuOAqsGFLVN2Zfd9R7lVIwfrIHbAMkwe Xo2O21WSkT1RRu5vBxLJSdGg5N0czugzz5DuZwJk= From: Laurent Pinchart To: libcamera-devel@lists.libcamera.org Date: Mon, 4 May 2020 12:28:24 +0300 Message-Id: <20200504092829.10099-2-laurent.pinchart@ideasonboard.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> References: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> MIME-Version: 1.0 Subject: [libcamera-devel] [PATCH 1/6] LICENSES: Add BSD-2-Clause license X-BeenThere: libcamera-devel@lists.libcamera.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 04 May 2020 09:28:38 -0000 In preparation for new code covered by the BSD-2-Clause license, add it to the LICENSES directory. Signed-off-by: Laurent Pinchart Reviewed-by: Kieran Bingham --- LICENSES/BSD-2-Clause.txt | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) create mode 100644 LICENSES/BSD-2-Clause.txt diff --git a/LICENSES/BSD-2-Clause.txt b/LICENSES/BSD-2-Clause.txt new file mode 100644 index 000000000000..2d2bab1127b0 --- /dev/null +++ b/LICENSES/BSD-2-Clause.txt @@ -0,0 +1,22 @@ +Copyright (c) . All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, +are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, +this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, +this list of conditions and the following disclaimer in the documentation +and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. From patchwork Mon May 4 09:28:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Laurent Pinchart X-Patchwork-Id: 3684 Return-Path: Received: from perceval.ideasonboard.com (perceval.ideasonboard.com [IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647]) by lancelot.ideasonboard.com (Postfix) with ESMTPS id CD4F4603F2 for ; Mon, 4 May 2020 11:28:38 +0200 (CEST) Authentication-Results: lancelot.ideasonboard.com; dkim=pass (1024-bit key; unprotected) header.d=ideasonboard.com header.i=@ideasonboard.com header.b="TfxnZQHe"; dkim-atps=neutral Received: from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi [81.175.216.236]) by perceval.ideasonboard.com (Postfix) with ESMTPSA id 45F92304; Mon, 4 May 2020 11:28:38 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com; s=mail; t=1588584518; bh=XkjIhHo56orF+/5XMA1wWg8nWC6d8Am2JMyNUzywyzw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TfxnZQHeDmlDaPnlFrhxnGPBUL+czbvTGMA8N9tx5o39/iogMaumTrKksmwVvRdoT Gr+jtxCXNA55dcU6kTAtJ5zVfmGsbxbiFCbG5Yao0xK6x561gMgYzMrc41giL1Eonu JJpSnnUIClyd3kkB2jVkbjp3DNVqKLjRara0tGAw= From: Laurent Pinchart To: libcamera-devel@lists.libcamera.org Date: Mon, 4 May 2020 12:28:25 +0300 Message-Id: <20200504092829.10099-3-laurent.pinchart@ideasonboard.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> References: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> MIME-Version: 1.0 Subject: [libcamera-devel] [PATCH 2/6] include: uapi: Add header definitions for BCM2835 Unicam and ISP blocks X-BeenThere: libcamera-devel@lists.libcamera.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 04 May 2020 09:28:39 -0000 From: Naushir Patuck This commit adds the headers and definitions required for the bcm2835_isp and bcm2835_unicam kernel modules. The headers come from patches recently posted to the linux-media@vger.kernel.org mailing list to add the Unicam and ISP peripherals drivers to the kernel. Signed-off-by: Naushir Patuck Signed-off-by: Laurent Pinchart --- include/linux/bcm2835-isp.h | 320 ++++++++++++++++++++++++++++++++ include/linux/v4l2-controls.h | 4 + include/linux/vc_sm_cma_ioctl.h | 135 ++++++++++++++ include/linux/videodev2.h | 2 + 4 files changed, 461 insertions(+) create mode 100644 include/linux/bcm2835-isp.h create mode 100644 include/linux/vc_sm_cma_ioctl.h diff --git a/include/linux/bcm2835-isp.h b/include/linux/bcm2835-isp.h new file mode 100644 index 000000000000..e7afc367fd76 --- /dev/null +++ b/include/linux/bcm2835-isp.h @@ -0,0 +1,320 @@ +/* SPDX-License-Identifier: ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* + * bcm2835-isp.h + * + * BCM2835 ISP driver - user space header file. + * + * Copyright © 2019-2020 Raspberry Pi (Trading) Ltd. 
+ * + * Author: Naushir Patuck (naush@raspberrypi.com) + * + */ + +#ifndef __BCM2835_ISP_H_ +#define __BCM2835_ISP_H_ + +#include + +#define V4L2_CID_USER_BCM2835_ISP_CC_MATRIX \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0001) +#define V4L2_CID_USER_BCM2835_ISP_LENS_SHADING \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0002) +#define V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0003) +#define V4L2_CID_USER_BCM2835_ISP_GEQ \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0004) +#define V4L2_CID_USER_BCM2835_ISP_GAMMA \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0005) +#define V4L2_CID_USER_BCM2835_ISP_DENOISE \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0006) +#define V4L2_CID_USER_BCM2835_ISP_SHARPEN \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0007) +#define V4L2_CID_USER_BCM2835_ISP_DPC \ + (V4L2_CID_USER_BCM2835_ISP_BASE + 0x0008) + +/* + * All structs below are directly mapped onto the equivalent structs in + * drivers/staging/vc04_services/vchiq-mmal/mmal-parameters.h + * for convenience. + */ + +/** + * struct bcm2835_isp_rational - Rational value type. + * + * @num: Numerator. + * @den: Denominator. + */ +struct bcm2835_isp_rational { + __s32 num; + __s32 den; +}; + +/** + * struct bcm2835_isp_ccm - Colour correction matrix. + * + * @ccm: 3x3 correction matrix coefficients. + * @offsets: 1x3 correction offsets. + */ +struct bcm2835_isp_ccm { + struct bcm2835_isp_rational ccm[3][3]; + __s32 offsets[3]; +}; + +/** + * struct bcm2835_isp_custom_ccm - Custom CCM applied with the + * V4L2_CID_USER_BCM2835_ISP_CC_MATRIX ctrl. + * + * @enabled: Enable custom CCM. + * @ccm: Custom CCM coefficients and offsets. + */ +struct bcm2835_isp_custom_ccm { + __u32 enabled; + struct bcm2835_isp_ccm ccm; +}; + +/** + * enum bcm2835_isp_gain_format - format of the gains in the lens shading + * tables used with the + * V4L2_CID_USER_BCM2835_ISP_LENS_SHADING ctrl. + * + * @GAIN_FORMAT_U0P8_1: Gains are u0.8 format, starting at 1.0 + * @GAIN_FORMAT_U1P7_0: Gains are u1.7 format, starting at 0.0 + * @GAIN_FORMAT_U1P7_1: Gains are u1.7 format, starting at 1.0 + * @GAIN_FORMAT_U2P6_0: Gains are u2.6 format, starting at 0.0 + * @GAIN_FORMAT_U2P6_1: Gains are u2.6 format, starting at 1.0 + * @GAIN_FORMAT_U3P5_0: Gains are u3.5 format, starting at 0.0 + * @GAIN_FORMAT_U3P5_1: Gains are u3.5 format, starting at 1.0 + * @GAIN_FORMAT_U4P10: Gains are u4.10 format, starting at 0.0 + */ +enum bcm2835_isp_gain_format { + GAIN_FORMAT_U0P8_1 = 0, + GAIN_FORMAT_U1P7_0 = 1, + GAIN_FORMAT_U1P7_1 = 2, + GAIN_FORMAT_U2P6_0 = 3, + GAIN_FORMAT_U2P6_1 = 4, + GAIN_FORMAT_U3P5_0 = 5, + GAIN_FORMAT_U3P5_1 = 6, + GAIN_FORMAT_U4P10 = 7, +}; + +/** + * struct bcm2835_isp_lens_shading - Lens shading tables supplied with the + * V4L2_CID_USER_BCM2835_ISP_LENS_SHADING + * ctrl. + * + * @enabled: Enable lens shading. + * @grid_cell_size: Size of grid cells in samples (16, 32, 64, 128 or 256). + * @grid_width: Width of lens shading tables in grid cells. + * @grid_stride: Row to row distance (in grid cells) between grid cells + * in the same horizontal location. + * @grid_height: Height of lens shading tables in grid cells. + * @mem_handle_table: Memory handle to the tables. + * @ref_transform: Reference transform - unsupported, please pass zero. + * @corner_sampled: Whether the gains are sampled at the corner points + * of the grid cells or in the cell centres. + * @gain_format: Format of the gains (see enum &bcm2835_isp_gain_format). 
+ */ +struct bcm2835_isp_lens_shading { + __u32 enabled; + __u32 grid_cell_size; + __u32 grid_width; + __u32 grid_stride; + __u32 grid_height; + __u32 mem_handle_table; + __u32 ref_transform; + __u32 corner_sampled; + __u32 gain_format; +}; + +/** + * struct bcm2835_isp_black_level - Sensor black level set with the + * V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL ctrl. + * + * @enabled: Enable black level. + * @black_level_r: Black level for red channel. + * @black_level_g: Black level for green channels. + * @black_level_b: Black level for blue channel. + */ +struct bcm2835_isp_black_level { + __u32 enabled; + __u16 black_level_r; + __u16 black_level_g; + __u16 black_level_b; + __u8 pad_[2]; /* Unused */ +}; + +/** + * struct bcm2835_isp_geq - Green equalisation parameters set with the + * V4L2_CID_USER_BCM2835_ISP_GEQ ctrl. + * + * @enabled: Enable green equalisation. + * @offset: Fixed offset of the green equalisation threshold. + * @slope: Slope of the green equalisation threshold. + */ +struct bcm2835_isp_geq { + __u32 enabled; + __u32 offset; + struct bcm2835_isp_rational slope; +}; + +#define BCM2835_NUM_GAMMA_PTS 33 + +/** + * struct bcm2835_isp_gamma - Gamma parameters set with the + * V4L2_CID_USER_BCM2835_ISP_GAMMA ctrl. + * + * @enabled: Enable gamma adjustment. + * @X: X values of the points defining the gamma curve. + * Values should be scaled to 16 bits. + * @Y: Y values of the points defining the gamma curve. + * Values should be scaled to 16 bits. + */ +struct bcm2835_isp_gamma { + __u32 enabled; + __u16 x[BCM2835_NUM_GAMMA_PTS]; + __u16 y[BCM2835_NUM_GAMMA_PTS]; +}; + +/** + * struct bcm2835_isp_denoise - Denoise parameters set with the + * V4L2_CID_USER_BCM2835_ISP_DENOISE ctrl. + * + * @enabled: Enable denoise. + * @constant: Fixed offset of the noise threshold. + * @slope: Slope of the noise threshold. + * @strength: Denoise strength between 0.0 (off) and 1.0 (maximum). + */ +struct bcm2835_isp_denoise { + __u32 enabled; + __u32 constant; + struct bcm2835_isp_rational slope; + struct bcm2835_isp_rational strength; +}; + +/** + * struct bcm2835_isp_sharpen - Sharpen parameters set with the + * V4L2_CID_USER_BCM2835_ISP_SHARPEN ctrl. + * + * @enabled: Enable sharpening. + * @threshold: Threshold at which to start sharpening pixels. + * @strength: Strength with which pixel sharpening increases. + * @limit: Limit to the amount of sharpening applied. + */ +struct bcm2835_isp_sharpen { + __u32 enabled; + struct bcm2835_isp_rational threshold; + struct bcm2835_isp_rational strength; + struct bcm2835_isp_rational limit; +}; + +/** + * enum bcm2835_isp_dpc_mode - defective pixel correction (DPC) strength. + * + * @DPC_MODE_OFF: No DPC. + * @DPC_MODE_NORMAL: Normal DPC. + * @DPC_MODE_STRONG: Strong DPC. + */ +enum bcm2835_isp_dpc_mode { + DPC_MODE_OFF = 0, + DPC_MODE_NORMAL = 1, + DPC_MODE_STRONG = 2, +}; + +/** + * struct bcm2835_isp_dpc - Defective pixel correction (DPC) parameters set + * with the V4L2_CID_USER_BCM2835_ISP_DPC ctrl. + * + * @enabled: Enable DPC. + * @strength: DPC strength (see enum &bcm2835_isp_dpc_mode). + */ +struct bcm2835_isp_dpc { + __u32 enabled; + __u32 strength; +}; + +/* + * ISP statistics structures. + * + * The bcm2835_isp_stats structure is generated at the output of the + * statistics node. Note that this does not directly map onto the statistics + * output of the ISP HW. Instead, the MMAL firmware code maps the HW statistics + * to the bcm2835_isp_stats structure. 
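All of the control payloads above are plain structs handed to the driver as V4L2 compound extended controls. As a rough illustration only (not part of this patch; the exact video node and control semantics are assumptions), user space could set the black level along these lines:

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/bcm2835-isp.h>

/*
 * Illustrative only: set the sensor black level on the bcm2835 ISP.
 * 'fd' is assumed to be an open handle on the ISP video node that
 * exposes the V4L2_CID_USER_BCM2835_ISP_* controls.
 */
static int set_black_level(int fd, unsigned int level)
{
	struct bcm2835_isp_black_level bl = {};
	struct v4l2_ext_control ctrl = {};
	struct v4l2_ext_controls ctrls = {};

	bl.enabled = 1;
	bl.black_level_r = level;
	bl.black_level_g = level;
	bl.black_level_b = level;

	ctrl.id = V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL;
	ctrl.size = sizeof(bl);
	ctrl.ptr = &bl;

	ctrls.which = V4L2_CTRL_WHICH_CUR_VAL;
	ctrls.count = 1;
	ctrls.controls = &ctrl;

	return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}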
+ */ +#define DEFAULT_AWB_REGIONS_X 16 +#define DEFAULT_AWB_REGIONS_Y 12 + +#define NUM_HISTOGRAMS 2 +#define NUM_HISTOGRAM_BINS 128 +#define AWB_REGIONS (DEFAULT_AWB_REGIONS_X * DEFAULT_AWB_REGIONS_Y) +#define FLOATING_REGIONS 16 +#define AGC_REGIONS 16 +#define FOCUS_REGIONS 12 + +/** + * struct bcm2835_isp_stats_hist - Histogram statistics + * + * @r_hist: Red channel histogram. + * @g_hist: Combined green channel histogram. + * @b_hist: Blue channel histogram. + */ +struct bcm2835_isp_stats_hist { + __u32 r_hist[NUM_HISTOGRAM_BINS]; + __u32 g_hist[NUM_HISTOGRAM_BINS]; + __u32 b_hist[NUM_HISTOGRAM_BINS]; +}; + +/** + * struct bcm2835_isp_stats_region - Region sums. + * + * @counted: The number of 2x2 bayer tiles accumulated. + * @notcounted: The number of 2x2 bayer tiles not accumulated. + * @r_sum: Total sum of counted pixels in the red channel for a region. + * @g_sum: Total sum of counted pixels in the green channel for a region. + * @b_sum: Total sum of counted pixels in the blue channel for a region. + */ +struct bcm2835_isp_stats_region { + __u32 counted; + __u32 notcounted; + __u64 r_sum; + __u64 g_sum; + __u64 b_sum; +}; + +/** + * struct bcm2835_isp_stats_focus - Focus statistics. + * + * @contrast_val: Focus measure - accumulated output of the focus filter. + * In the first dimension, index [0] counts pixels below a + * preset threshold, and index [1] counts pixels above the + * threshold. In the second dimension, index [0] uses the + * first predefined filter, and index [1] uses the second + * predefined filter. + * @contrast_val_num: The number of counted pixels in the above accumulation. + */ +struct bcm2835_isp_stats_focus { + __u64 contrast_val[2][2]; + __u32 contrast_val_num[2][2]; +}; + +/** + * struct bcm2835_isp_stats - ISP statistics. + * + * @version: Version of the bcm2835_isp_stats structure. + * @size: Size of the bcm2835_isp_stats structure. + * @hist: Histogram statistics for the entire image. + * @awb_stats: Statistics for the regions defined for AWB calculations. + * @floating_stats: Statistics for arbitrarily placed (floating) regions. + * @agc_stats: Statistics for the regions defined for AGC calculations. + * @focus_stats: Focus filter statistics for the focus regions. + */ +struct bcm2835_isp_stats { + __u32 version; + __u32 size; + struct bcm2835_isp_stats_hist hist[NUM_HISTOGRAMS]; + struct bcm2835_isp_stats_region awb_stats[AWB_REGIONS]; + struct bcm2835_isp_stats_region floating_stats[FLOATING_REGIONS]; + struct bcm2835_isp_stats_region agc_stats[AGC_REGIONS]; + struct bcm2835_isp_stats_focus focus_stats[FOCUS_REGIONS]; +}; + +#endif /* __BCM2835_ISP_H_ */ diff --git a/include/linux/v4l2-controls.h b/include/linux/v4l2-controls.h index 1213961c35cb..171351aee6fc 100644 --- a/include/linux/v4l2-controls.h +++ b/include/linux/v4l2-controls.h @@ -192,6 +192,10 @@ enum v4l2_colorfx { * We reserve 16 controls for this driver. */ #define V4L2_CID_USER_IMX_BASE (V4L2_CID_USER_BASE + 0x10b0) +/* The base for the bcm2835-isp driver controls. + * We reserve 16 controls for this driver. */ +#define V4L2_CID_USER_BCM2835_ISP_BASE (V4L2_CID_USER_BASE + 0x10c0) + /* MPEG-class control IDs */ /* The MPEG controls are applicable to all codec controls * and the 'MPEG' part of the define is historical */ diff --git a/include/linux/vc_sm_cma_ioctl.h b/include/linux/vc_sm_cma_ioctl.h new file mode 100644 index 000000000000..21b8758ea03f --- /dev/null +++ b/include/linux/vc_sm_cma_ioctl.h @@ -0,0 +1,135 @@ +/* +Copyright (c) 2012, Broadcom Europe Ltd +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + * Neither the name of the copyright holder nor the + names of its contributors may be used to endorse or promote products + derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +*/ + +/* + * Copyright 2019 Raspberry Pi (Trading) Ltd. All rights reserved. + * + * Based on vmcs_sm_ioctl.h Copyright Broadcom Corporation. + */ + +#ifndef __VC_SM_CMA_IOCTL_H +#define __VC_SM_CMA_IOCTL_H + +/* ---- Include Files ---------------------------------------------------- */ + +#include /* Needed for standard types */ + +#include + +/* ---- Constants and Types ---------------------------------------------- */ + +#define VC_SM_CMA_RESOURCE_NAME 32 +#define VC_SM_CMA_RESOURCE_NAME_DEFAULT "sm-host-resource" + +/* Type define used to create unique IOCTL number */ +#define VC_SM_CMA_MAGIC_TYPE 'J' + +/* IOCTL commands on /dev/vc-sm-cma */ +enum vc_sm_cma_cmd_e { + VC_SM_CMA_CMD_ALLOC = 0x5A, /* Start at 0x5A arbitrarily */ + + VC_SM_CMA_CMD_IMPORT_DMABUF, + + VC_SM_CMA_CMD_CLEAN_INVALID2, + + VC_SM_CMA_CMD_LAST /* Do not delete */ +}; + +/* Cache type supported, conveniently matches the user space definition in + * user-vcsm.h. + */ +enum vc_sm_cma_cache_e { + VC_SM_CMA_CACHE_NONE, + VC_SM_CMA_CACHE_HOST, + VC_SM_CMA_CACHE_VC, + VC_SM_CMA_CACHE_BOTH, +}; + +/* IOCTL Data structures */ +struct vc_sm_cma_ioctl_alloc { + /* user -> kernel */ + __u32 size; + __u32 num; + __u32 cached; /* enum vc_sm_cma_cache_e */ + __u32 pad; + __u8 name[VC_SM_CMA_RESOURCE_NAME]; + + /* kernel -> user */ + __s32 handle; + __u32 vc_handle; + __u64 dma_addr; +}; + +struct vc_sm_cma_ioctl_import_dmabuf { + /* user -> kernel */ + __s32 dmabuf_fd; + __u32 cached; /* enum vc_sm_cma_cache_e */ + __u8 name[VC_SM_CMA_RESOURCE_NAME]; + + /* kernel -> user */ + __s32 handle; + __u32 vc_handle; + __u32 size; + __u32 pad; + __u64 dma_addr; +}; + +/* + * Cache functions to be set to struct vc_sm_cma_ioctl_clean_invalid2 + * invalidate_mode. 
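For reference, allocation through this interface looks roughly as follows. This is a sketch only, not part of the patch: the "ls_grid" name is taken from the lens shading allocation made later in this series, while the handle/vc_handle semantics beyond what the header states, and keeping the device node open for the lifetime of the allocation, are assumptions.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vc_sm_cma_ioctl.h>

/* Illustrative only: allocate one uncached VideoCore-shared buffer. */
static int vcsm_alloc_sketch(unsigned int size, struct vc_sm_cma_ioctl_alloc *out)
{
	int fd = open("/dev/vc-sm-cma", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return -1;

	struct vc_sm_cma_ioctl_alloc alloc = {};
	alloc.size = size;
	alloc.num = 1;
	alloc.cached = VC_SM_CMA_CACHE_NONE;
	strncpy(reinterpret_cast<char *>(alloc.name), "ls_grid",
		VC_SM_CMA_RESOURCE_NAME - 1);

	if (ioctl(fd, VC_SM_CMA_IOCTL_MEM_ALLOC, &alloc) < 0) {
		close(fd);
		return -1;
	}

	/*
	 * alloc.handle identifies the allocation on the host side and
	 * alloc.vc_handle is what the VideoCore firmware expects, e.g.
	 * as the lens shading mem_handle_table. The fd is deliberately
	 * kept open; closing it is assumed to tear the mapping down.
	 */
	*out = alloc;
	return fd;
}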
+ */ +#define VC_SM_CACHE_OP_NOP 0x00 +#define VC_SM_CACHE_OP_INV 0x01 +#define VC_SM_CACHE_OP_CLEAN 0x02 +#define VC_SM_CACHE_OP_FLUSH 0x03 + +struct vc_sm_cma_ioctl_clean_invalid2 { + __u32 op_count; + __u32 pad; + struct vc_sm_cma_ioctl_clean_invalid_block { + __u32 invalidate_mode; + __u32 block_count; + void * /*__user */start_address; + __u32 block_size; + __u32 inter_block_stride; + } s[0]; +}; + +/* IOCTL numbers */ +#define VC_SM_CMA_IOCTL_MEM_ALLOC\ + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_ALLOC,\ + struct vc_sm_cma_ioctl_alloc) + +#define VC_SM_CMA_IOCTL_MEM_IMPORT_DMABUF\ + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_IMPORT_DMABUF,\ + struct vc_sm_cma_ioctl_import_dmabuf) + +#define VC_SM_CMA_IOCTL_MEM_CLEAN_INVALID2\ + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_CLEAN_INVALID2,\ + struct vc_sm_cma_ioctl_clean_invalid2) + +#endif /* __VC_SM_CMA_IOCTL_H */ diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h index ab40b3272ed2..dde27de9b112 100644 --- a/include/linux/videodev2.h +++ b/include/linux/videodev2.h @@ -749,6 +749,8 @@ struct v4l2_pix_format { #define V4L2_META_FMT_VSP1_HGT v4l2_fourcc('V', 'S', 'P', 'T') /* R-Car VSP1 2-D Histogram */ #define V4L2_META_FMT_UVC v4l2_fourcc('U', 'V', 'C', 'H') /* UVC Payload Header metadata */ #define V4L2_META_FMT_D4XX v4l2_fourcc('D', '4', 'X', 'X') /* D4XX Payload Header metadata */ +#define V4L2_META_FMT_SENSOR_DATA v4l2_fourcc('S', 'E', 'N', 'S') /* Sensor Ancillary metadata */ +#define V4L2_META_FMT_BCM2835_ISP_STATS v4l2_fourcc('B', 'S', 'T', 'A') /* BCM2835 ISP image statistics output */ /* Vendor specific - used for RK_ISP1 camera sub-system */ #define V4L2_META_FMT_RK_ISP1_PARAMS v4l2_fourcc('R', 'K', '1', 'P') /* Rockchip ISP1 params */ From patchwork Mon May 4 09:28:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Pinchart X-Patchwork-Id: 3685 Return-Path: Received: from perceval.ideasonboard.com (perceval.ideasonboard.com [213.167.242.64]) by lancelot.ideasonboard.com (Postfix) with ESMTPS id 7E937616AB for ; Mon, 4 May 2020 11:28:39 +0200 (CEST) Authentication-Results: lancelot.ideasonboard.com; dkim=pass (1024-bit key; unprotected) header.d=ideasonboard.com header.i=@ideasonboard.com header.b="wfaxEw5r"; dkim-atps=neutral Received: from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi [81.175.216.236]) by perceval.ideasonboard.com (Postfix) with ESMTPSA id C2A934F7; Mon, 4 May 2020 11:28:38 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com; s=mail; t=1588584519; bh=1sGhb+ryCPRBlU04AOjbfK9zzKuNeaHv8ZGhueas5jI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=wfaxEw5rkD9hp2qb8U6AkGN4YJ2KMmjfqxjOvYUBJEB2QS0ZOU2uE2kw1GGxlgx/v bveuGGxVZmbR4t+3vQJa8/jw9YZ62JJY2fpZW8byJUxhsXa+L4nslqHqMMuwsHLPrw oO03YLk8PQKwjIXtRK/O+AmrwW4Y56QwFej8+ZHM= From: Laurent Pinchart To: libcamera-devel@lists.libcamera.org Date: Mon, 4 May 2020 12:28:26 +0300 Message-Id: <20200504092829.10099-4-laurent.pinchart@ideasonboard.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> References: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> MIME-Version: 1.0 Subject: [libcamera-devel] [PATCH 3/6] libcamera: pipeline: Raspberry Pi pipeline handler X-BeenThere: libcamera-devel@lists.libcamera.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 04 
May 2020 09:28:39 -0000 From: Naushir Patuck Initial implementation of the Raspberry Pi (BCM2835) ISP pipeline handler. All code is licensed under the BSD-2-Clause terms. Copyright (c) 2019-2020 Raspberry Pi Trading Ltd. Signed-off-by: Naushir Patuck Signed-off-by: Laurent Pinchart Reviewed-by: Kieran Bingham --- include/ipa/raspberrypi.h | 58 + .../pipeline/raspberrypi/meson.build | 3 + .../pipeline/raspberrypi/raspberrypi.cpp | 1598 +++++++++++++++++ .../pipeline/raspberrypi/staggered_ctrl.h | 236 +++ src/libcamera/pipeline/raspberrypi/vcsm.h | 144 ++ 5 files changed, 2039 insertions(+) create mode 100644 include/ipa/raspberrypi.h create mode 100644 src/libcamera/pipeline/raspberrypi/meson.build create mode 100644 src/libcamera/pipeline/raspberrypi/raspberrypi.cpp create mode 100644 src/libcamera/pipeline/raspberrypi/staggered_ctrl.h create mode 100644 src/libcamera/pipeline/raspberrypi/vcsm.h diff --git a/include/ipa/raspberrypi.h b/include/ipa/raspberrypi.h new file mode 100644 index 000000000000..3df56e8a1306 --- /dev/null +++ b/include/ipa/raspberrypi.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: LGPL-2.1-or-later */ +/* + * Copyright (C) 2019-2020, Raspberry Pi Ltd. + * + * raspberrypi.h - Image Processing Algorithm interface for Raspberry Pi + */ +#ifndef __LIBCAMERA_IPA_INTERFACE_RASPBERRYPI_H__ +#define __LIBCAMERA_IPA_INTERFACE_RASPBERRYPI_H__ + +#include +#include + +enum RPiOperations { + RPI_IPA_ACTION_V4L2_SET_STAGGERED = 1, + RPI_IPA_ACTION_V4L2_SET_ISP, + RPI_IPA_ACTION_STATS_METADATA_COMPLETE, + RPI_IPA_ACTION_RUN_ISP, + RPI_IPA_ACTION_RUN_ISP_AND_DROP_FRAME, + RPI_IPA_ACTION_SET_SENSOR_CONFIG, + RPI_IPA_ACTION_EMBEDDED_COMPLETE, + RPI_IPA_EVENT_SIGNAL_STAT_READY, + RPI_IPA_EVENT_SIGNAL_ISP_PREPARE, + RPI_IPA_EVENT_QUEUE_REQUEST, + RPI_IPA_EVENT_LS_TABLE_ALLOCATION, +}; + +enum RPiIpaMask { + ID = 0x0ffff, + STATS = 0x10000, + EMBEDDED_DATA = 0x20000, + BAYER_DATA = 0x40000 +}; + +/* Size of the LS grid allocation. 
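The RPiIpaMask values above let the pipeline handler and the IPA refer to shared buffers with a single integer: the per-stream buffer cookie sits in the low 16 bits (ID) and the upper bits tag which stream the buffer belongs to. A small standalone illustration (not part of this patch):

#include <cassert>
#include <cstdint>

enum RPiIpaMask {
	ID            = 0x0ffff,
	STATS         = 0x10000,
	EMBEDDED_DATA = 0x20000,
	BAYER_DATA    = 0x40000
};

int main()
{
	/* Pipeline handler side: tag the stats buffer with cookie 3 for the IPA. */
	uint32_t id = RPiIpaMask::STATS | 3;

	/* IPA side: recover the stream tag and the per-stream cookie. */
	assert(id & RPiIpaMask::STATS);
	assert((id & RPiIpaMask::ID) == 3);

	return 0;
}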
*/ +#define MAX_LS_GRID_SIZE (32 << 10) + +namespace libcamera { + +/* List of controls handled by the Raspberry Pi IPA */ +static const ControlInfoMap RPiControls = { + { &controls::AeEnable, ControlInfo(false, true) }, + { &controls::ExposureTime, ControlInfo(0, 999999) }, + { &controls::AnalogueGain, ControlInfo(1.0f, 32.0f) }, + { &controls::AeMeteringMode, ControlInfo(0, static_cast(controls::MeteringModeMax)) }, + { &controls::AeConstraintMode, ControlInfo(0, static_cast(controls::ConstraintModeMax)) }, + { &controls::AeExposureMode, ControlInfo(0, static_cast(controls::ExposureModeMax)) }, + { &controls::ExposureValue, ControlInfo(0.0f, 16.0f) }, + { &controls::AwbEnable, ControlInfo(false, true) }, + { &controls::ColourGains, ControlInfo(0.0f, 32.0f) }, + { &controls::AwbMode, ControlInfo(0, static_cast(controls::AwbModeMax)) }, + { &controls::Brightness, ControlInfo(-1.0f, 1.0f) }, + { &controls::Contrast, ControlInfo(0.0f, 32.0f) }, + { &controls::Saturation, ControlInfo(0.0f, 32.0f) }, +}; + +} /* namespace libcamera */ + +#endif /* __LIBCAMERA_IPA_INTERFACE_RASPBERRYPI_H__ */ diff --git a/src/libcamera/pipeline/raspberrypi/meson.build b/src/libcamera/pipeline/raspberrypi/meson.build new file mode 100644 index 000000000000..737857977831 --- /dev/null +++ b/src/libcamera/pipeline/raspberrypi/meson.build @@ -0,0 +1,3 @@ +libcamera_sources += files([ + 'raspberrypi.cpp' +]) diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp new file mode 100644 index 000000000000..1685081997e5 --- /dev/null +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp @@ -0,0 +1,1598 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019-2020, Raspberry Pi (Trading) Ltd. + * + * raspberrypi.cpp - Pipeline handler for Raspberry Pi devices + */ +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "camera_sensor.h" +#include "device_enumerator.h" +#include "ipa_manager.h" +#include "media_device.h" +#include "pipeline_handler.h" +#include "staggered_ctrl.h" +#include "utils.h" +#include "v4l2_controls.h" +#include "v4l2_videodevice.h" +#include "vcsm.h" + +namespace libcamera { + +LOG_DEFINE_CATEGORY(RPI) + +using V4L2PixFmtMap = std::map>; + +namespace { + +bool isRaw(PixelFormat &pixFmt) +{ + /* + * The isRaw test might be redundant right now the pipeline handler only + * supports RAW sensors. Leave it in for now, just as a sanity check. + */ + const PixelFormatInfo &info = PixelFormatInfo::info(pixFmt); + if (!info.isValid()) + return false; + + return info.colourEncoding == PixelFormatInfo::ColourEncodingRAW; +} + +double scoreFormat(double desired, double actual) +{ + double score = desired - actual; + /* Smaller desired dimensions are preferred. */ + if (score < 0.0) + score = (-score) / 8; + /* Penalise non-exact matches. */ + if (actual != desired) + score *= 2; + + return score; +} + +V4L2DeviceFormat findBestMode(V4L2PixFmtMap &formatsMap, const Size &req) +{ + double bestScore = 9e9, score; + V4L2DeviceFormat bestMode = {}; + +#define PENALTY_AR 1500.0 +#define PENALTY_8BIT 2000.0 +#define PENALTY_10BIT 1000.0 +#define PENALTY_12BIT 0.0 +#define PENALTY_UNPACKED 500.0 + + /* Calculate the closest/best mode from the user requested size. 
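To make the scoring concrete, here is a standalone example (not part of the patch) that reproduces scoreFormat() from above and prints the combined width/height score of a 1920x1080 request against three candidate sizes. The aspect-ratio, bit-depth and packing penalties added in findBestMode() are left out; the point is only that larger-than-requested modes cost far less than smaller ones, and an exact match wins outright.

#include <iostream>

/* Same scoring helper as above, reproduced for a standalone example. */
static double scoreFormat(double desired, double actual)
{
	double score = desired - actual;
	/* Modes larger than requested are penalised lightly... */
	if (score < 0.0)
		score = (-score) / 8;
	/* ...and any inexact match is penalised further. */
	if (actual != desired)
		score *= 2;
	return score;
}

int main()
{
	/* Request 1920x1080 against modes 1920x1080, 3280x2464 and 1640x922. */
	std::cout << scoreFormat(1920, 1920) + scoreFormat(1080, 1080) << '\n'; /* 0 */
	std::cout << scoreFormat(1920, 3280) + scoreFormat(1080, 2464) << '\n'; /* 686 */
	std::cout << scoreFormat(1920, 1640) + scoreFormat(1080, 922) << '\n';  /* 876 */
	return 0;
}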
*/ + for (const auto &iter : formatsMap) { + V4L2PixelFormat v4l2Format = iter.first; + PixelFormat pixelFormat = v4l2Format.toPixelFormat(); + const PixelFormatInfo &info = PixelFormatInfo::info(pixelFormat); + + for (const SizeRange &sz : iter.second) { + double modeWidth = sz.contains(req) ? req.width : sz.max.width; + double modeHeight = sz.contains(req) ? req.height : sz.max.height; + double reqAr = static_cast(req.width) / req.height; + double modeAr = modeWidth / modeHeight; + + /* Score the dimensions for closeness. */ + score = scoreFormat(req.width, modeWidth); + score += scoreFormat(req.height, modeHeight); + score += PENALTY_AR * scoreFormat(reqAr, modeAr); + + /* Add any penalties... this is not an exact science! */ + if (!info.packed) + score += PENALTY_UNPACKED; + + if (info.bitsPerPixel == 12) + score += PENALTY_12BIT; + else if (info.bitsPerPixel == 10) + score += PENALTY_10BIT; + else if (info.bitsPerPixel == 8) + score += PENALTY_8BIT; + + if (score <= bestScore) { + bestScore = score; + bestMode.fourcc = v4l2Format; + bestMode.size = Size(modeWidth, modeHeight); + } + + LOG(RPI, Info) << "Mode: " << modeWidth << "x" << modeHeight + << " fmt " << v4l2Format.toString() + << " Score: " << score + << " (best " << bestScore << ")"; + } + } + + return bestMode; +} + +} /* namespace */ + +/* + * Device stream abstraction for either an internal or external stream. + * Used for both Unicam and the ISP. + */ +class RPiStream : public Stream +{ +public: + RPiStream() + { + } + + RPiStream(const char *name, MediaEntity *dev, bool importOnly = false) + : external_(false), importOnly_(importOnly), name_(name), + dev_(std::make_unique(dev)) + { + } + + V4L2VideoDevice *dev() const + { + return dev_.get(); + } + + void setExternal(bool external) + { + external_ = external; + } + + bool isExternal() const + { + /* + * Import streams cannot be external. + * + * RAW capture is a special case where we simply copy the RAW + * buffer out of the request. All other buffer handling happens + * as if the stream is internal. + */ + return external_ && !importOnly_; + } + + bool isImporter() const + { + return importOnly_; + } + + void reset() + { + external_ = false; + internalBuffers_.clear(); + } + + std::string name() const + { + return name_; + } + + void setExternalBuffers(std::vector> *buffers) + { + externalBuffers_ = buffers; + } + + const std::vector> *getBuffers() const + { + return external_ ? externalBuffers_ : &internalBuffers_; + } + + void releaseBuffers() + { + dev_->releaseBuffers(); + if (!external_ && !importOnly_) + internalBuffers_.clear(); + } + + int importBuffers(unsigned int count) + { + return dev_->importBuffers(count); + } + + int allocateBuffers(unsigned int count) + { + return dev_->allocateBuffers(count, &internalBuffers_); + } + + int queueBuffers() + { + if (external_) + return 0; + + for (auto &b : internalBuffers_) { + int ret = dev_->queueBuffer(b.get()); + if (ret) { + LOG(RPI, Error) << "Failed to queue buffers for " + << name_; + return ret; + } + } + + return 0; + } + + bool findFrameBuffer(FrameBuffer *buffer) const + { + auto start = external_ ? externalBuffers_->begin() : internalBuffers_.begin(); + auto end = external_ ? externalBuffers_->end() : internalBuffers_.end(); + + if (importOnly_) + return false; + + if (std::find_if(start, end, + [buffer](std::unique_ptr const &ref) { return ref.get() == buffer; }) != end) + return true; + + return false; + } + +private: + /* + * Indicates that this stream is active externally, i.e. 
the buffers + * are provided by the application. + */ + bool external_; + /* Indicates that this stream only imports buffers, e.g. ISP input. */ + bool importOnly_; + /* Stream name identifier. */ + std::string name_; + /* The actual device stream. */ + std::unique_ptr dev_; + /* Internally allocated framebuffers associated with this device stream. */ + std::vector> internalBuffers_; + /* Externally allocated framebuffers associated with this device stream. */ + std::vector> *externalBuffers_; +}; + +/* + * The following class is just a convenient (and typesafe) array of device + * streams indexed with an enum class. + */ +enum class Unicam : unsigned int { Image, Embedded }; +enum class Isp : unsigned int { Input, Output0, Output1, Stats }; + +template +class RPiDevice : public std::array +{ +private: + constexpr auto index(E e) const noexcept + { + return static_cast>(e); + } +public: + RPiStream &operator[](E e) + { + return std::array::operator[](index(e)); + } + const RPiStream &operator[](E e) const + { + return std::array::operator[](index(e)); + } +}; + +class RPiCameraData : public CameraData +{ +public: + RPiCameraData(PipelineHandler *pipe) + : CameraData(pipe), sensor_(nullptr), lsTable_(nullptr), + state_(State::Stopped), dropFrame_(false), ispOutputCount_(0) + { + } + + ~RPiCameraData() + { + /* + * Free the LS table if we have allocated one. Another + * allocation will occur in applyLS() with the appropriate + * size. + */ + if (lsTable_) { + vcsm_.free(lsTable_); + lsTable_ = nullptr; + } + + /* Stop the IPA proxy thread. */ + ipa_->stop(); + } + + void frameStarted(uint32_t sequence); + + int loadIPA(); + void queueFrameAction(unsigned int frame, const IPAOperationData &action); + + /* bufferComplete signal handlers. */ + void unicamBufferDequeue(FrameBuffer *buffer); + void ispInputDequeue(FrameBuffer *buffer); + void ispOutputDequeue(FrameBuffer *buffer); + + void clearIncompleteRequests(); + void handleStreamBuffer(FrameBuffer *buffer, const RPiStream *stream); + void handleState(); + + CameraSensor *sensor_; + /* Array of Unicam and ISP device streams and associated buffers/streams. */ + RPiDevice unicam_; + RPiDevice isp_; + /* The vector below is just for convenience when iterating over all streams. */ + std::vector streams_; + /* Buffers passed to the IPA. */ + std::vector ipaBuffers_; + + /* VCSM allocation helper. */ + RPi::Vcsm vcsm_; + void *lsTable_; + + RPi::StaggeredCtrl staggeredCtrl_; + uint32_t expectedSequence_; + bool sensorMetadata_; + + /* + * All the functions in this class are called from a single calling + * thread. So, we do not need to have any mutex to protect access to any + * of the variables below. 
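The RPiDevice wrapper defined earlier in this file is nothing more than a std::array whose operator[] takes the enum class rather than a bare index, so a stream cannot be looked up with an arbitrary integer by mistake. A minimal standalone sketch of the same pattern, with illustrative names only:

#include <array>
#include <cstddef>
#include <string>
#include <type_traits>

enum class Isp : unsigned int { Input, Output0, Output1, Stats };

template<typename T, typename E, std::size_t N>
class EnumIndexedArray : public std::array<T, N>
{
public:
	T &operator[](E e)
	{
		return std::array<T, N>::operator[](
			static_cast<std::underlying_type_t<E>>(e));
	}
};

int main()
{
	EnumIndexedArray<std::string, Isp, 4> names;

	names[Isp::Stats] = "ISP Stats";	/* names[3] would not compile */
	return names[Isp::Stats] == "ISP Stats" ? 0 : 1;
}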
+ */ + enum class State { Stopped, Idle, Busy, IpaComplete }; + State state_; + std::queue bayerQueue_; + std::queue embeddedQueue_; + std::deque requestQueue_; + +private: + void checkRequestCompleted(); + void tryRunPipeline(); + void tryFlushQueues(); + FrameBuffer *updateQueue(std::queue &q, uint64_t timestamp, V4L2VideoDevice *dev); + + bool dropFrame_; + int ispOutputCount_; +}; + +class RPiCameraConfiguration : public CameraConfiguration +{ +public: + RPiCameraConfiguration(const RPiCameraData *data); + + Status validate() override; + +private: + const RPiCameraData *data_; +}; + +class PipelineHandlerRPi : public PipelineHandler +{ +public: + PipelineHandlerRPi(CameraManager *manager); + ~PipelineHandlerRPi(); + + CameraConfiguration *generateConfiguration(Camera *camera, const StreamRoles &roles) override; + int configure(Camera *camera, CameraConfiguration *config) override; + + int exportFrameBuffers(Camera *camera, Stream *stream, + std::vector> *buffers) override; + + int start(Camera *camera) override; + void stop(Camera *camera) override; + + int queueRequestDevice(Camera *camera, Request *request) override; + + bool match(DeviceEnumerator *enumerator) override; + +private: + RPiCameraData *cameraData(const Camera *camera) + { + return static_cast(PipelineHandler::cameraData(camera)); + } + + int configureIPA(Camera *camera); + + int queueAllBuffers(Camera *camera); + int prepareBuffers(Camera *camera); + void freeBuffers(Camera *camera); + + std::shared_ptr unicam_; + std::shared_ptr isp_; +}; + +RPiCameraConfiguration::RPiCameraConfiguration(const RPiCameraData *data) + : CameraConfiguration(), data_(data) +{ +} + +CameraConfiguration::Status RPiCameraConfiguration::validate() +{ + Status status = Valid; + + if (config_.empty()) + return Invalid; + + unsigned int rawCount = 0, outCount = 0, count = 0, maxIndex = 0; + std::pair outSize[2]; + Size maxSize = {}; + for (StreamConfiguration &cfg : config_) { + if (isRaw(cfg.pixelFormat)) { + /* + * Calculate the best sensor mode we can use based on + * the user request. + */ + V4L2PixFmtMap fmts = data_->unicam_[Unicam::Image].dev()->formats(); + V4L2DeviceFormat sensorFormat = findBestMode(fmts, cfg.size); + PixelFormat sensorPixFormat = sensorFormat.fourcc.toPixelFormat(); + if (cfg.size != sensorFormat.size || + cfg.pixelFormat != sensorPixFormat) { + cfg.size = sensorFormat.size; + cfg.pixelFormat = sensorPixFormat; + status = Adjusted; + } + rawCount++; + } else { + outSize[outCount] = std::make_pair(count, cfg.size); + /* Record the largest resolution for fixups later. */ + if (maxSize < cfg.size) { + maxSize = cfg.size; + maxIndex = outCount; + } + outCount++; + } + + count++; + + /* Can only output 1 RAW stream, or 2 YUV/RGB streams. */ + if (rawCount > 1 || outCount > 2) { + LOG(RPI, Error) << "Invalid number of streams requested"; + return Invalid; + } + } + + /* + * Now do any fixups needed. For the two ISP outputs, one stream must be + * equal or smaller than the other in all dimensions. + */ + for (unsigned int i = 0; i < outCount; i++) { + outSize[i].second.width = std::min(outSize[i].second.width, + maxSize.width); + outSize[i].second.height = std::min(outSize[i].second.height, + maxSize.height); + + if (config_.at(outSize[i].first).size != outSize[i].second) { + config_.at(outSize[i].first).size = outSize[i].second; + status = Adjusted; + } + + /* + * Also validate the correct pixel formats here. + * Note that Output0 and Output1 support a different + * set of formats. 
+ * + * Output 0 must be for the largest resolution. We will + * have that fixed up in the code above. + * + */ + PixelFormat &cfgPixFmt = config_.at(outSize[i].first).pixelFormat; + V4L2PixFmtMap fmts; + + if (i == maxIndex) + fmts = data_->isp_[Isp::Output0].dev()->formats(); + else + fmts = data_->isp_[Isp::Output1].dev()->formats(); + + if (fmts.find(V4L2PixelFormat::fromPixelFormat(cfgPixFmt, false)) == fmts.end()) { + /* If we cannot find a native format, use a default one. */ + cfgPixFmt = PixelFormat(DRM_FORMAT_NV12); + status = Adjusted; + } + } + + return status; +} + +PipelineHandlerRPi::PipelineHandlerRPi(CameraManager *manager) + : PipelineHandler(manager), unicam_(nullptr), isp_(nullptr) +{ +} + +PipelineHandlerRPi::~PipelineHandlerRPi() +{ + if (unicam_) + unicam_->release(); + + if (isp_) + isp_->release(); +} + +CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera, + const StreamRoles &roles) +{ + RPiCameraData *data = cameraData(camera); + CameraConfiguration *config = new RPiCameraConfiguration(data); + V4L2DeviceFormat sensorFormat; + V4L2PixFmtMap fmts; + + if (roles.empty()) + return config; + + for (const StreamRole role : roles) { + StreamConfiguration cfg{}; + + switch (role) { + case StreamRole::StillCaptureRaw: + cfg.size = data->sensor_->resolution(); + fmts = data->unicam_[Unicam::Image].dev()->formats(); + sensorFormat = findBestMode(fmts, cfg.size); + cfg.pixelFormat = sensorFormat.fourcc.toPixelFormat(); + ASSERT(cfg.pixelFormat.isValid()); + cfg.bufferCount = 1; + break; + + case StreamRole::StillCapture: + cfg.pixelFormat = PixelFormat(DRM_FORMAT_NV12); + /* Return the largest sensor resolution. */ + cfg.size = data->sensor_->resolution(); + cfg.bufferCount = 1; + break; + + case StreamRole::VideoRecording: + cfg.pixelFormat = PixelFormat(DRM_FORMAT_NV12); + cfg.size = { 1920, 1080 }; + cfg.bufferCount = 4; + break; + + case StreamRole::Viewfinder: + cfg.pixelFormat = PixelFormat(DRM_FORMAT_ARGB8888); + cfg.size = { 800, 600 }; + cfg.bufferCount = 4; + break; + + default: + LOG(RPI, Error) << "Requested stream role not supported: " + << role; + break; + } + + config->addConfiguration(cfg); + } + + config->validate(); + + return config; +} + +int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config) +{ + RPiCameraData *data = cameraData(camera); + int ret; + + /* Start by resetting the Unicam and ISP stream states. */ + for (auto const stream : data->streams_) + stream->reset(); + + Size maxSize = {}, sensorSize = {}; + unsigned int maxIndex = 0; + bool rawStream = false; + + /* + * Look for the RAW stream (if given) size as well as the largest + * ISP output size. + */ + for (unsigned i = 0; i < config->size(); i++) { + StreamConfiguration &cfg = config->at(i); + + if (isRaw(cfg.pixelFormat)) { + /* + * If we have been given a RAW stream, use that size + * for setting up the sensor. + */ + sensorSize = cfg.size; + rawStream = true; + } else { + if (cfg.size > maxSize) { + maxSize = config->at(i).size; + maxIndex = i; + } + } + } + + /* First calculate the best sensor mode we can use based on the user request. */ + V4L2PixFmtMap fmts = data->unicam_[Unicam::Image].dev()->formats(); + V4L2DeviceFormat sensorFormat = findBestMode(fmts, rawStream ? sensorSize : maxSize); + + /* + * Unicam image output format. 
The ISP input format gets set at + * start, just in case we have swapped bayer orders due to flips + */ + ret = data->unicam_[Unicam::Image].dev()->setFormat(&sensorFormat); + if (ret) + return ret; + + LOG(RPI, Info) << "Sensor: " << camera->name() + << " - Selected mode: " << sensorFormat.toString(); + + /* + * This format may be reset on start() if the bayer order has changed + * because of flips in the sensor. + */ + ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat); + + /* + * See which streams are requested, and route the user + * StreamConfiguration appropriately. + */ + V4L2DeviceFormat format = {}; + for (unsigned i = 0; i < config->size(); i++) { + StreamConfiguration &cfg = config->at(i); + + if (isRaw(cfg.pixelFormat)) { + cfg.setStream(&data->isp_[Isp::Input]); + cfg.stride = sensorFormat.planes[0].bpl; + data->isp_[Isp::Input].setExternal(true); + continue; + } + + if (i == maxIndex) { + /* ISP main output format. */ + V4L2VideoDevice *dev = data->isp_[Isp::Output0].dev(); + V4L2PixelFormat fourcc = dev->toV4L2PixelFormat(cfg.pixelFormat); + format.size = cfg.size; + format.fourcc = fourcc; + + ret = dev->setFormat(&format); + if (ret) + return -EINVAL; + + if (format.size != cfg.size || format.fourcc != fourcc) { + LOG(RPI, Error) + << "Failed to set format on ISP capture0 device: " + << format.toString(); + return -EINVAL; + } + + cfg.setStream(&data->isp_[Isp::Output0]); + cfg.stride = format.planes[0].bpl; + data->isp_[Isp::Output0].setExternal(true); + } + + /* + * ISP second output format. This fallthrough means that if a + * second output stream has not been configured, we simply use + * the Output0 configuration. + */ + V4L2VideoDevice *dev = data->isp_[Isp::Output1].dev(); + format.fourcc = dev->toV4L2PixelFormat(cfg.pixelFormat); + format.size = cfg.size; + + ret = dev->setFormat(&format); + if (ret) { + LOG(RPI, Error) + << "Failed to set format on ISP capture1 device: " + << format.toString(); + return ret; + } + /* + * If we have not yet provided a stream for this config, it + * means this is to be routed from Output1. + */ + if (!cfg.stream()) { + cfg.setStream(&data->isp_[Isp::Output1]); + cfg.stride = format.planes[0].bpl; + data->isp_[Isp::Output1].setExternal(true); + } + } + + /* ISP statistics output format. */ + format = {}; + format.fourcc = V4L2PixelFormat(V4L2_META_FMT_BCM2835_ISP_STATS); + ret = data->isp_[Isp::Stats].dev()->setFormat(&format); + if (ret) { + LOG(RPI, Error) << "Failed to set format on ISP stats stream: " + << format.toString(); + return ret; + } + + /* Unicam embedded data output format. */ + format = {}; + format.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA); + LOG(RPI, Debug) << "Setting embedded data format."; + ret = data->unicam_[Unicam::Embedded].dev()->setFormat(&format); + if (ret) { + LOG(RPI, Error) << "Failed to set format on Unicam embedded: " + << format.toString(); + return ret; + } + + /* Adjust aspect ratio by providing crops on the input image. 
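The crop computed just below can be illustrated with concrete numbers; the following standalone sketch (not part of the patch) mirrors the same arithmetic for a 3280x2464 (4:3) sensor mode feeding a 1920x1080 (16:9) output:

#include <cstdio>

struct CropRect {
	unsigned int x, y, width, height;
};

/* Centre-crop a sensorW x sensorH frame to the aspect ratio of the
 * largest requested output outW x outH (same arithmetic as below). */
static CropRect centreCrop(unsigned int sensorW, unsigned int sensorH,
			   unsigned int outW, unsigned int outH)
{
	CropRect crop = { 0, 0, sensorW, sensorH };
	long long ar = 1LL * outH * sensorW - 1LL * outW * sensorH;

	if (ar > 0)
		crop.width = outW * sensorH / outH;
	else if (ar < 0)
		crop.height = outH * sensorW / outW;

	/* Keep even dimensions and centre the crop on the sensor. */
	crop.width &= ~1;
	crop.height &= ~1;
	crop.x = (sensorW - crop.width) / 2;
	crop.y = (sensorH - crop.height) / 2;

	return crop;
}

int main()
{
	CropRect c = centreCrop(3280, 2464, 1920, 1080);

	/* Prints "3280x1844 at (0,310)": full width, letterboxed height. */
	std::printf("%ux%u at (%u,%u)\n", c.width, c.height, c.x, c.y);
	return 0;
}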
*/ + Rectangle crop = { + .x = 0, + .y = 0, + .width = sensorFormat.size.width, + .height = sensorFormat.size.height + }; + + int ar = maxSize.height * sensorFormat.size.width - maxSize.width * sensorFormat.size.height; + if (ar > 0) + crop.width = maxSize.width * sensorFormat.size.height / maxSize.height; + else if (ar < 0) + crop.height = maxSize.height * sensorFormat.size.width / maxSize.width; + + crop.width &= ~1; + crop.height &= ~1; + + crop.x = (sensorFormat.size.width - crop.width) >> 1; + crop.y = (sensorFormat.size.height - crop.height) >> 1; + data->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &crop); + + ret = configureIPA(camera); + if (ret) + LOG(RPI, Error) << "Failed to configure the IPA: " << ret; + + return ret; +} + +int PipelineHandlerRPi::exportFrameBuffers(Camera *camera, Stream *stream, + std::vector> *buffers) +{ + RPiStream *s = static_cast(stream); + unsigned int count = stream->configuration().bufferCount; + int ret = s->dev()->exportBuffers(count, buffers); + + s->setExternalBuffers(buffers); + + return ret; +} + +int PipelineHandlerRPi::start(Camera *camera) +{ + RPiCameraData *data = cameraData(camera); + ControlList controls(data->sensor_->controls()); + int ret; + + /* Allocate buffers for internal pipeline usage. */ + ret = prepareBuffers(camera); + if (ret) { + LOG(RPI, Error) << "Failed to allocate buffers"; + return ret; + } + + ret = queueAllBuffers(camera); + if (ret) { + LOG(RPI, Error) << "Failed to queue buffers"; + return ret; + } + + /* + * IPA configure may have changed the sensor flips - hence the bayer + * order. Get the sensor format and set the ISP input now. + */ + V4L2DeviceFormat sensorFormat; + data->unicam_[Unicam::Image].dev()->getFormat(&sensorFormat); + ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat); + if (ret) + return ret; + + /* Enable SOF event generation. */ + data->unicam_[Unicam::Image].dev()->setFrameStartEnabled(true); + + /* + * Write the last set of gain and exposure values to the camera before + * starting. First check that the staggered ctrl has been initialised + * by the IPA action. + */ + ASSERT(data->staggeredCtrl_); + data->staggeredCtrl_.reset(); + data->staggeredCtrl_.write(); + data->expectedSequence_ = 0; + + data->state_ = RPiCameraData::State::Idle; + + /* Start all streams. */ + for (auto const stream : data->streams_) { + ret = stream->dev()->streamOn(); + if (ret) { + stop(camera); + return ret; + } + } + + return 0; +} + +void PipelineHandlerRPi::stop(Camera *camera) +{ + RPiCameraData *data = cameraData(camera); + + data->state_ = RPiCameraData::State::Stopped; + + /* Disable SOF event generation. */ + data->unicam_[Unicam::Image].dev()->setFrameStartEnabled(false); + + /* This also stops the streams. */ + data->clearIncompleteRequests(); + /* The default std::queue constructor is explicit with gcc 5 and 6. */ + data->bayerQueue_ = std::queue{}; + data->embeddedQueue_ = std::queue{}; + + freeBuffers(camera); +} + +int PipelineHandlerRPi::queueRequestDevice(Camera *camera, Request *request) +{ + RPiCameraData *data = cameraData(camera); + + if (data->state_ == RPiCameraData::State::Stopped) + return -EINVAL; + + /* Ensure all external streams have associated buffers! */ + for (auto &stream : data->isp_) { + if (!stream.isExternal()) + continue; + + if (!request->findBuffer(&stream)) { + LOG(RPI, Error) << "Attempt to queue request with invalid stream."; + return -ENOENT; + } + } + + /* Push the request to the back of the queue. 
*/ + data->requestQueue_.push_back(request); + data->handleState(); + + return 0; +} + +bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator) +{ + DeviceMatch unicam("unicam"); + DeviceMatch isp("bcm2835-isp"); + + unicam.add("unicam-embedded"); + unicam.add("unicam-image"); + + isp.add("bcm2835-isp0-output0"); /* Input */ + isp.add("bcm2835-isp0-capture1"); /* Output 0 */ + isp.add("bcm2835-isp0-capture2"); /* Output 1 */ + isp.add("bcm2835-isp0-capture3"); /* Stats */ + + unicam_ = enumerator->search(unicam); + if (!unicam_) + return false; + + isp_ = enumerator->search(isp); + if (!isp_) + return false; + + unicam_->acquire(); + isp_->acquire(); + + std::unique_ptr data = std::make_unique(this); + + /* Locate and open the unicam video streams. */ + data->unicam_[Unicam::Embedded] = RPiStream("Unicam Embedded", unicam_->getEntityByName("unicam-embedded")); + data->unicam_[Unicam::Image] = RPiStream("Unicam Image", unicam_->getEntityByName("unicam-image")); + + /* Tag the ISP input stream as an import stream. */ + data->isp_[Isp::Input] = RPiStream("ISP Input", isp_->getEntityByName("bcm2835-isp0-output0"), true); + data->isp_[Isp::Output0] = RPiStream("ISP Output0", isp_->getEntityByName("bcm2835-isp0-capture1")); + data->isp_[Isp::Output1] = RPiStream("ISP Output1", isp_->getEntityByName("bcm2835-isp0-capture2")); + data->isp_[Isp::Stats] = RPiStream("ISP Stats", isp_->getEntityByName("bcm2835-isp0-capture3")); + + /* This is just for convenience so that we can easily iterate over all streams. */ + for (auto &stream : data->unicam_) + data->streams_.push_back(&stream); + for (auto &stream : data->isp_) + data->streams_.push_back(&stream); + + /* Open all Unicam and ISP streams. */ + for (auto const stream : data->streams_) { + if (stream->dev()->open()) + return false; + } + + /* Wire up all the buffer connections. */ + data->unicam_[Unicam::Image].dev()->frameStart.connect(data.get(), &RPiCameraData::frameStarted); + data->unicam_[Unicam::Image].dev()->bufferReady.connect(data.get(), &RPiCameraData::unicamBufferDequeue); + data->unicam_[Unicam::Embedded].dev()->bufferReady.connect(data.get(), &RPiCameraData::unicamBufferDequeue); + data->isp_[Isp::Input].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispInputDequeue); + data->isp_[Isp::Output0].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue); + data->isp_[Isp::Output1].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue); + data->isp_[Isp::Stats].dev()->bufferReady.connect(data.get(), &RPiCameraData::ispOutputDequeue); + + /* Identify the sensor. */ + for (MediaEntity *entity : unicam_->entities()) { + if (entity->function() == MEDIA_ENT_F_CAM_SENSOR) { + data->sensor_ = new CameraSensor(entity); + break; + } + } + + if (!data->sensor_) + return false; + + if (data->sensor_->init()) + return false; + + if (data->loadIPA()) { + LOG(RPI, Error) << "Failed to load a suitable IPA library"; + return false; + } + + /* Register the controls that the Raspberry Pi IPA can handle. */ + data->controlInfo_ = RPiControls; + /* Initialize the camera properties. */ + data->properties_ = data->sensor_->properties(); + + /* + * List the available output streams. + * Currently cannot do Unicam streams! + */ + std::set streams; + streams.insert(&data->isp_[Isp::Input]); + streams.insert(&data->isp_[Isp::Output0]); + streams.insert(&data->isp_[Isp::Output1]); + streams.insert(&data->isp_[Isp::Stats]); + + /* Create and register the camera. 
*/ + std::shared_ptr camera = Camera::create(this, data->sensor_->model(), streams); + registerCamera(std::move(camera), std::move(data)); + + return true; +} + +int PipelineHandlerRPi::configureIPA(Camera *camera) +{ + std::map streamConfig; + std::map entityControls; + RPiCameraData *data = cameraData(camera); + + /* Get the device format to pass to the IPA. */ + V4L2DeviceFormat sensorFormat; + data->unicam_[Unicam::Image].dev()->getFormat(&sensorFormat); + /* Inform IPA of stream configuration and sensor controls. */ + int i = 0; + for (auto const &stream : data->isp_) { + if (stream.isExternal()) { + streamConfig[i] = { + .pixelFormat = stream.configuration().pixelFormat, + .size = stream.configuration().size + }; + } + } + entityControls.emplace(0, data->unicam_[Unicam::Image].dev()->controls()); + entityControls.emplace(1, data->isp_[Isp::Input].dev()->controls()); + + /* Allocate the lens shading table via vcsm and pass to the IPA. */ + if (!data->lsTable_) { + data->lsTable_ = data->vcsm_.alloc("ls_grid", MAX_LS_GRID_SIZE); + uintptr_t ptr = reinterpret_cast(data->lsTable_); + + if (!data->lsTable_) + return -ENOMEM; + + /* + * The vcsm allocation will always be in the memory region + * < 32-bits to allow Videocore to access the memory. + */ + IPAOperationData op; + op.operation = RPI_IPA_EVENT_LS_TABLE_ALLOCATION; + op.data = { static_cast(ptr & 0xffffffff), + data->vcsm_.getVCHandle(data->lsTable_) }; + data->ipa_->processEvent(op); + } + + CameraSensorInfo sensorInfo = {}; + int ret = data->sensor_->sensorInfo(&sensorInfo); + if (ret) { + LOG(RPI, Error) << "Failed to retrieve camera sensor info"; + return ret; + } + + /* Ready the IPA - it must know about the sensor resolution. */ + data->ipa_->configure(sensorInfo, streamConfig, entityControls); + + return 0; +} + +int PipelineHandlerRPi::queueAllBuffers(Camera *camera) +{ + RPiCameraData *data = cameraData(camera); + int ret; + + for (auto const stream : data->streams_) { + ret = stream->queueBuffers(); + if (ret < 0) + return ret; + } + + return 0; +} + +int PipelineHandlerRPi::prepareBuffers(Camera *camera) +{ + RPiCameraData *data = cameraData(camera); + int count, ret; + + /* + * Decide how many internal buffers to allocate. For now, simply + * look at how many external buffers will be provided. + * Will need to improve this logic. + */ + unsigned int maxBuffers = 0; + for (const Stream *s : camera->streams()) + if (static_cast(s)->isExternal()) + maxBuffers = std::max(maxBuffers, s->configuration().bufferCount); + + for (auto const stream : data->streams_) { + if (stream->isExternal() || stream->isImporter()) { + /* + * If a stream is marked as external reserve memory to + * prepare to import as many buffers are requested in + * the stream configuration. + * + * If a stream is an internal stream with importer + * role, reserve as many buffers as possible. + */ + unsigned int count = stream->isExternal() + ? stream->configuration().bufferCount + : maxBuffers; + ret = stream->importBuffers(count); + if (ret < 0) + return ret; + } else { + /* + * If the stream is an internal exporter allocate and + * export as many buffers as possible to its internal + * pool. + */ + ret = stream->allocateBuffers(maxBuffers); + if (ret < 0) { + freeBuffers(camera); + return ret; + } + } + } + + /* + * Add cookies to the ISP Input buffers so that we can link them with + * the IPA and RPI_IPA_EVENT_SIGNAL_ISP_PREPARE event. 
+ */ + count = 0; + for (auto const &b : *data->unicam_[Unicam::Image].getBuffers()) { + b->setCookie(count++); + } + + /* + * Add cookies to the stats and embedded data buffers and link them with + * the IPA. + */ + count = 0; + for (auto const &b : *data->isp_[Isp::Stats].getBuffers()) { + b->setCookie(count++); + data->ipaBuffers_.push_back({ .id = RPiIpaMask::STATS | b->cookie(), + .planes = b->planes() }); + } + + count = 0; + for (auto const &b : *data->unicam_[Unicam::Embedded].getBuffers()) { + b->setCookie(count++); + data->ipaBuffers_.push_back({ .id = RPiIpaMask::EMBEDDED_DATA | b->cookie(), + .planes = b->planes() }); + } + + data->ipa_->mapBuffers(data->ipaBuffers_); + + return 0; +} + +void PipelineHandlerRPi::freeBuffers(Camera *camera) +{ + RPiCameraData *data = cameraData(camera); + + std::vector ids; + for (IPABuffer &ipabuf : data->ipaBuffers_) + ids.push_back(ipabuf.id); + + data->ipa_->unmapBuffers(ids); + data->ipaBuffers_.clear(); + + for (auto const stream : data->streams_) + stream->releaseBuffers(); +} + +void RPiCameraData::frameStarted(uint32_t sequence) +{ + LOG(RPI, Debug) << "frame start " << sequence; + + /* Write any controls for the next frame as soon as we can. */ + staggeredCtrl_.write(); +} + +int RPiCameraData::loadIPA() +{ + ipa_ = IPAManager::instance()->createIPA(pipe_, 1, 1); + if (!ipa_) + return -ENOENT; + + ipa_->queueFrameAction.connect(this, &RPiCameraData::queueFrameAction); + + IPASettings settings{ + .configurationFile = ipa_->configurationFile(sensor_->model() + ".json") + }; + + ipa_->init(settings); + + /* + * Startup the IPA thread now. Without this call, none of the IPA API + * functions will run. + * + * It only gets stopped in the class destructor. + */ + return ipa_->start(); +} + +void RPiCameraData::queueFrameAction(unsigned int frame, const IPAOperationData &action) +{ + /* + * The following actions can be handled when the pipeline handler is in + * a stopped state. + */ + switch (action.operation) { + case RPI_IPA_ACTION_V4L2_SET_STAGGERED: { + ControlList controls = action.controls[0]; + if (!staggeredCtrl_.set(controls)) + LOG(RPI, Error) << "V4L2 staggered set failed"; + goto done; + } + + case RPI_IPA_ACTION_SET_SENSOR_CONFIG: { + /* + * Setup our staggered control writer with the sensor default + * gain and exposure delays. + */ + if (!staggeredCtrl_) { + staggeredCtrl_.init(unicam_[Unicam::Image].dev(), + { { V4L2_CID_ANALOGUE_GAIN, action.data[0] }, + { V4L2_CID_EXPOSURE, action.data[1] } }); + sensorMetadata_ = action.data[2]; + } + + /* Set the sensor orientation here as well. */ + ControlList controls = action.controls[0]; + unicam_[Unicam::Image].dev()->setControls(&controls); + goto done; + } + + case RPI_IPA_ACTION_V4L2_SET_ISP: { + ControlList controls = action.controls[0]; + isp_[Isp::Input].dev()->setControls(&controls); + goto done; + } + } + + if (state_ == State::Stopped) + goto done; + + /* + * The following actions must not be handled when the pipeline handler + * is in a stopped state. 
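The staggered control writes handled above exist because sensor exposure and gain updates only take effect some frames after they are written; RPi::StaggeredCtrl (declared in staggered_ctrl.h) queues the values, writes them at frame start, and can report which value applied to a given frame. The following is a loose concept sketch only, assuming a fixed per-control delay; the real class is considerably more involved.

#include <cstdint>
#include <map>
#include <vector>

struct DelayedControl {
	unsigned int delay;		/* frames between write and effect */
	std::vector<int32_t> written;	/* value written at each frame start */
};

class StaggeredSketch
{
public:
	void init(uint32_t id, unsigned int delay, int32_t initial)
	{
		ctrls_[id] = { delay, { initial } };
	}

	/* Record the value written to the sensor at this frame start. */
	void write(uint32_t id, int32_t value)
	{
		ctrls_[id].written.push_back(value);
	}

	/* Value assumed to be in effect while frame 'frame' was exposed. */
	int32_t effective(uint32_t id, unsigned int frame) const
	{
		const DelayedControl &c = ctrls_.at(id);
		unsigned int idx = frame >= c.delay ? frame - c.delay : 0;

		if (idx >= c.written.size())
			idx = c.written.size() - 1;

		return c.written[idx];
	}

private:
	std::map<uint32_t, DelayedControl> ctrls_;
};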
+ */ + switch (action.operation) { + case RPI_IPA_ACTION_STATS_METADATA_COMPLETE: { + unsigned int bufferId = action.data[0]; + FrameBuffer *buffer = isp_[Isp::Stats].getBuffers()->at(bufferId).get(); + + handleStreamBuffer(buffer, &isp_[Isp::Stats]); + /* Fill the Request metadata buffer with what the IPA has provided */ + requestQueue_.front()->metadata() = std::move(action.controls[0]); + state_ = State::IpaComplete; + break; + } + + case RPI_IPA_ACTION_EMBEDDED_COMPLETE: { + unsigned int bufferId = action.data[0]; + FrameBuffer *buffer = unicam_[Unicam::Embedded].getBuffers()->at(bufferId).get(); + handleStreamBuffer(buffer, &unicam_[Unicam::Embedded]); + break; + } + + case RPI_IPA_ACTION_RUN_ISP_AND_DROP_FRAME: + case RPI_IPA_ACTION_RUN_ISP: { + unsigned int bufferId = action.data[0]; + FrameBuffer *buffer = unicam_[Unicam::Image].getBuffers()->at(bufferId).get(); + + LOG(RPI, Debug) << "Input re-queue to ISP, buffer id " << buffer->cookie() + << ", timestamp: " << buffer->metadata().timestamp; + + isp_[Isp::Input].dev()->queueBuffer(buffer); + dropFrame_ = (action.operation == RPI_IPA_ACTION_RUN_ISP_AND_DROP_FRAME) ? true : false; + ispOutputCount_ = 0; + break; + } + + default: + LOG(RPI, Error) << "Unknown action " << action.operation; + break; + } + +done: + handleState(); +} + +void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer) +{ + const RPiStream *stream = nullptr; + + if (state_ == State::Stopped) + return; + + for (RPiStream const &s : unicam_) { + if (s.findFrameBuffer(buffer)) { + stream = &s; + break; + } + } + + /* The buffer must belong to one of our streams. */ + ASSERT(stream); + + LOG(RPI, Debug) << "Stream " << stream->name() << " buffer dequeue" + << ", buffer id " << buffer->cookie() + << ", timestamp: " << buffer->metadata().timestamp; + + if (stream == &unicam_[Unicam::Image]) { + bayerQueue_.push(buffer); + } else { + embeddedQueue_.push(buffer); + + std::unordered_map ctrl; + int offset = buffer->metadata().sequence - expectedSequence_; + staggeredCtrl_.get(ctrl, offset); + + expectedSequence_ = buffer->metadata().sequence + 1; + + /* + * Sensor metadata is unavailable, so put the expected ctrl + * values (accounting for the staggered delays) into the empty + * metadata buffer. + */ + if (!sensorMetadata_) { + const FrameBuffer &fb = buffer->planes(); + uint32_t *mem = static_cast(::mmap(NULL, fb.planes()[0].length, + PROT_READ | PROT_WRITE, + MAP_SHARED, + fb.planes()[0].fd.fd(), 0)); + mem[0] = ctrl[V4L2_CID_EXPOSURE]; + mem[1] = ctrl[V4L2_CID_ANALOGUE_GAIN]; + munmap(mem, fb.planes()[0].length); + } + } + + handleState(); +} + +void RPiCameraData::ispInputDequeue(FrameBuffer *buffer) +{ + if (state_ == State::Stopped) + return; + + handleStreamBuffer(buffer, &unicam_[Unicam::Image]); + handleState(); +} + +void RPiCameraData::ispOutputDequeue(FrameBuffer *buffer) +{ + const RPiStream *stream = nullptr; + + if (state_ == State::Stopped) + return; + + for (RPiStream const &s : isp_) { + if (s.findFrameBuffer(buffer)) { + stream = &s; + break; + } + } + + /* The buffer must belong to one of our ISP output streams. */ + ASSERT(stream); + + LOG(RPI, Debug) << "Stream " << stream->name() << " buffer complete" + << ", buffer id " << buffer->cookie() + << ", timestamp: " << buffer->metadata().timestamp; + + handleStreamBuffer(buffer, stream); + + /* + * Increment the number of ISP outputs generated. + * This is needed to track dropped frames. + */ + ispOutputCount_++; + + /* If this is a stats output, hand it to the IPA now. 
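The IPA runs its statistics-based algorithms on the buffer and replies with RPI_IPA_ACTION_STATS_METADATA_COMPLETE once it can be returned and the request metadata filled in.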
*/ + if (stream == &isp_[Isp::Stats]) { + IPAOperationData op; + op.operation = RPI_IPA_EVENT_SIGNAL_STAT_READY; + op.data = { RPiIpaMask::STATS | buffer->cookie() }; + ipa_->processEvent(op); + } + + handleState(); +} + +void RPiCameraData::clearIncompleteRequests() +{ + /* + * Queue up any buffers passed in the request. + * This is needed because streamOff() will then mark the buffers as + * cancelled. + */ + for (auto const request : requestQueue_) { + for (auto const stream : streams_) { + if (stream->isExternal()) + stream->dev()->queueBuffer(request->findBuffer(stream)); + } + } + + /* Stop all streams. */ + for (auto const stream : streams_) + stream->dev()->streamOff(); + + /* + * All outstanding requests (and associated buffers) must be returned + * back to the pipeline. The buffers would have been marked as + * cancelled by the call to streamOff() earlier. + */ + while (!requestQueue_.empty()) { + Request *request = requestQueue_.front(); + /* + * A request could be partially complete, + * i.e. we have returned some buffers, but still waiting + * for others or waiting for metadata. + */ + for (auto const stream : streams_) { + if (!stream->isExternal()) + continue; + + FrameBuffer *buffer = request->findBuffer(stream); + /* + * Has the buffer already been handed back to the + * request? If not, do so now. + */ + if (buffer->request()) + pipe_->completeBuffer(camera_, request, buffer); + } + + pipe_->completeRequest(camera_, request); + requestQueue_.pop_front(); + } +} + +void RPiCameraData::handleStreamBuffer(FrameBuffer *buffer, const RPiStream *stream) +{ + if (stream->isExternal()) { + if (!dropFrame_) { + Request *request = buffer->request(); + pipe_->completeBuffer(camera_, request, buffer); + } + } else { + /* Special handling for RAW buffer Requests. + * + * The ISP input stream is alway an import stream, but if the + * current Request has been made for a buffer on the stream, + * simply memcpy to the Request buffer and requeue back to the + * device. + */ + if (stream == &unicam_[Unicam::Image] && !dropFrame_) { + const Stream *rawStream = static_cast(&isp_[Isp::Input]); + Request *request = requestQueue_.front(); + FrameBuffer *raw = request->findBuffer(const_cast(rawStream)); + if (raw) { + raw->copyFrom(buffer); + pipe_->completeBuffer(camera_, request, raw); + } + } + + /* Simply requeue the buffer. */ + stream->dev()->queueBuffer(buffer); + } +} + +void RPiCameraData::handleState() +{ + switch (state_) { + case State::Stopped: + case State::Busy: + break; + + case State::IpaComplete: + /* If the request is completed, we will switch to Idle state. */ + checkRequestCompleted(); + /* + * No break here, we want to try running the pipeline again. + * The fallthrough clause below suppresses compiler warnings. + */ + /* Fall through */ + + case State::Idle: + tryRunPipeline(); + tryFlushQueues(); + break; + } +} + +void RPiCameraData::checkRequestCompleted() +{ + bool requestCompleted = false; + /* + * If we are dropping this frame, do not touch the request, simply + * change the state to IDLE when ready. + */ + if (!dropFrame_) { + Request *request = requestQueue_.front(); + if (request->hasPendingBuffers()) + return; + + /* Must wait for metadata to be filled in before completing. */ + if (state_ != State::IpaComplete) + return; + + pipe_->completeRequest(camera_, request); + requestQueue_.pop_front(); + requestCompleted = true; + } + + /* + * Make sure we have three outputs completed in the case of a dropped + * frame. 
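+ * ispOutputCount_ is incremented once per buffer in ispOutputDequeue(),
+ * so this effectively waits for every ISP output, statistics included,
+ * before the dropped frame is retired.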
+ */ + if (state_ == State::IpaComplete && + ((ispOutputCount_ == 3 && dropFrame_) || requestCompleted)) { + state_ = State::Idle; + if (dropFrame_) + LOG(RPI, Info) << "Dropping frame at the request of the IPA"; + } +} + +void RPiCameraData::tryRunPipeline() +{ + FrameBuffer *bayerBuffer, *embeddedBuffer; + IPAOperationData op; + + /* If any of our request or buffer queues are empty, we cannot proceed. */ + if (state_ != State::Idle || requestQueue_.empty() || + bayerQueue_.empty() || embeddedQueue_.empty()) + return; + + /* Start with the front of the bayer buffer queue. */ + bayerBuffer = bayerQueue_.front(); + + /* + * Find the embedded data buffer with a matching timestamp to pass to + * the IPA. Any embedded buffers with a timestamp lower than the + * current bayer buffer will be removed and re-queued to the driver. + */ + embeddedBuffer = updateQueue(embeddedQueue_, bayerBuffer->metadata().timestamp, + unicam_[Unicam::Embedded].dev()); + + if (!embeddedBuffer) { + LOG(RPI, Debug) << "Could not find matching embedded buffer"; + + /* + * Look the other way, try to match a bayer buffer with the + * first embedded buffer in the queue. This will also do some + * housekeeping on the bayer image queue - clear out any + * buffers that are older than the first buffer in the embedded + * queue. + * + * But first check if the embedded queue has emptied out. + */ + if (embeddedQueue_.empty()) + return; + + embeddedBuffer = embeddedQueue_.front(); + bayerBuffer = updateQueue(bayerQueue_, embeddedBuffer->metadata().timestamp, + unicam_[Unicam::Image].dev()); + + if (!bayerBuffer) { + LOG(RPI, Debug) << "Could not find matching bayer buffer - ending."; + return; + } + } + + /* + * Take the first request from the queue and action the IPA. + * Unicam buffers for the request have already been queued as they come + * in. + */ + Request *request = requestQueue_.front(); + + /* + * Process all the user controls by the IPA. Once this is complete, we + * queue the ISP output buffer listed in the request to start the HW + * pipeline. + */ + op.operation = RPI_IPA_EVENT_QUEUE_REQUEST; + op.controls = { request->controls() }; + ipa_->processEvent(op); + + /* Queue up any ISP buffers passed into the request. */ + for (auto &stream : isp_) { + if (stream.isExternal()) + stream.dev()->queueBuffer(request->findBuffer(&stream)); + } + + /* Ready to use the buffers, pop them off the queue. */ + bayerQueue_.pop(); + embeddedQueue_.pop(); + + /* Set our state to say the pipeline is active. */ + state_ = State::Busy; + + LOG(RPI, Debug) << "Signalling RPI_IPA_EVENT_SIGNAL_ISP_PREPARE:" + << " Bayer buffer id: " << bayerBuffer->cookie() + << " Embedded buffer id: " << embeddedBuffer->cookie(); + + op.operation = RPI_IPA_EVENT_SIGNAL_ISP_PREPARE; + op.data = { RPiIpaMask::EMBEDDED_DATA | embeddedBuffer->cookie(), + RPiIpaMask::BAYER_DATA | bayerBuffer->cookie() }; + ipa_->processEvent(op); +} + +void RPiCameraData::tryFlushQueues() +{ + /* + * It is possible for us to end up in a situation where all available + * Unicam buffers have been dequeued but do not match. This can happen + * when the system is heavily loaded and we get out of lock-step with + * the two channels. + * + * In such cases, the best thing to do is the re-queue all the buffers + * and give a chance for the hardware to return to lock-step. We do + * have to drop all interim frames. 
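+ * The check below only fires when every buffer of both Unicam streams
+ * is sitting in our internal queues, i.e. the driver has nothing left
+ * to deliver and no match could ever be found without a flush.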
+ */ + if (unicam_[Unicam::Image].getBuffers()->size() == bayerQueue_.size() && + unicam_[Unicam::Embedded].getBuffers()->size() == embeddedQueue_.size()) { + LOG(RPI, Warning) << "Flushing all buffer queues!"; + + while (!bayerQueue_.empty()) { + unicam_[Unicam::Image].dev()->queueBuffer(bayerQueue_.front()); + bayerQueue_.pop(); + } + + while (!embeddedQueue_.empty()) { + unicam_[Unicam::Embedded].dev()->queueBuffer(embeddedQueue_.front()); + embeddedQueue_.pop(); + } + } +} + +FrameBuffer *RPiCameraData::updateQueue(std::queue &q, uint64_t timestamp, + V4L2VideoDevice *dev) +{ + while (!q.empty()) { + FrameBuffer *b = q.front(); + if (b->metadata().timestamp < timestamp) { + q.pop(); + dev->queueBuffer(b); + LOG(RPI, Error) << "Dropping input frame!"; + } else if (b->metadata().timestamp == timestamp) { + /* The calling function will pop the item from the queue. */ + return b; + } else { + break; /* Only higher timestamps from here. */ + } + } + + return nullptr; +} + +REGISTER_PIPELINE_HANDLER(PipelineHandlerRPi); + +} /* namespace libcamera */ diff --git a/src/libcamera/pipeline/raspberrypi/staggered_ctrl.h b/src/libcamera/pipeline/raspberrypi/staggered_ctrl.h new file mode 100644 index 000000000000..0403c087c686 --- /dev/null +++ b/src/libcamera/pipeline/raspberrypi/staggered_ctrl.h @@ -0,0 +1,236 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2020, Raspberry Pi (Trading) Ltd. + * + * staggered_ctrl.h - Helper for writing staggered ctrls to a V4L2 device. + */ +#pragma once + +#include +#include +#include +#include + +#include +#include "log.h" +#include "utils.h" +#include "v4l2_videodevice.h" + +/* For logging... */ +using libcamera::LogCategory; +using libcamera::LogDebug; +using libcamera::LogInfo; +using libcamera::utils::hex; + +LOG_DEFINE_CATEGORY(RPI_S_W); + +namespace RPi { + +class StaggeredCtrl +{ +public: + StaggeredCtrl() + : init_(false), setCount_(0), getCount_(0), maxDelay_(0) + { + } + + ~StaggeredCtrl() + { + } + + operator bool() const + { + return init_; + } + + void init(libcamera::V4L2VideoDevice *dev, + std::initializer_list> delayList) + { + std::lock_guard lock(lock_); + + dev_ = dev; + delay_ = delayList; + ctrl_.clear(); + + /* Find the largest delay across all controls. */ + maxDelay_ = 0; + for (auto const &p : delay_) { + LOG(RPI_S_W, Info) << "Init ctrl " + << hex(p.first) << " with delay " + << static_cast(p.second); + maxDelay_ = std::max(maxDelay_, p.second); + } + + init_ = true; + } + + void reset() + { + std::lock_guard lock(lock_); + + int lastSetCount = std::max(0, setCount_ - 1); + std::unordered_map lastVal; + + /* Reset the counters. */ + setCount_ = getCount_ = 0; + + /* Look for the last set values. */ + for (auto const &c : ctrl_) + lastVal[c.first] = c.second[lastSetCount].value; + + /* Apply the last set values as the next to be applied. */ + ctrl_.clear(); + for (auto &c : lastVal) + ctrl_[c.first][setCount_] = CtrlInfo(c.second); + } + + bool set(uint32_t ctrl, int32_t value) + { + std::lock_guard lock(lock_); + + /* Can we find this ctrl as one that is registered? */ + if (delay_.find(ctrl) == delay_.end()) + return false; + + ctrl_[ctrl][setCount_].value = value; + ctrl_[ctrl][setCount_].updated = true; + + return true; + } + + bool set(std::initializer_list> ctrlList) + { + std::lock_guard lock(lock_); + + for (auto const &p : ctrlList) { + /* Can we find this ctrl? 
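Only controls registered with a delay in init() are accepted; anything else is rejected so the caller can spot the mismatch.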
*/ + if (delay_.find(p.first) == delay_.end()) + return false; + + ctrl_[p.first][setCount_] = CtrlInfo(p.second); + } + + return true; + } + + bool set(libcamera::ControlList &controls) + { + std::lock_guard lock(lock_); + + for (auto const &p : controls) { + /* Can we find this ctrl? */ + if (delay_.find(p.first) == delay_.end()) + return false; + + ctrl_[p.first][setCount_] = CtrlInfo(p.second.get()); + LOG(RPI_S_W, Debug) << "Setting ctrl " + << hex(p.first) << " to " + << ctrl_[p.first][setCount_].value + << " at index " + << setCount_; + } + + return true; + } + + int write() + { + std::lock_guard lock(lock_); + libcamera::ControlList controls(dev_->controls()); + + for (auto &p : ctrl_) { + int delayDiff = maxDelay_ - delay_[p.first]; + int index = std::max(0, setCount_ - delayDiff); + + if (p.second[index].updated) { + /* We need to write this value out. */ + controls.set(p.first, p.second[index].value); + p.second[index].updated = false; + LOG(RPI_S_W, Debug) << "Writing ctrl " + << hex(p.first) << " to " + << p.second[index].value + << " at index " + << index; + } + } + + nextFrame(); + return dev_->setControls(&controls); + } + + void get(std::unordered_map &ctrl, uint8_t offset = 0) + { + std::lock_guard lock(lock_); + + /* Account for the offset to reset the getCounter. */ + getCount_ += offset + 1; + + ctrl.clear(); + for (auto &p : ctrl_) { + int index = std::max(0, getCount_ - maxDelay_); + ctrl[p.first] = p.second[index].value; + LOG(RPI_S_W, Debug) << "Getting ctrl " + << hex(p.first) << " to " + << p.second[index].value + << " at index " + << index; + } + } + +private: + void nextFrame() + { + /* Advance the control history to the next frame */ + int prevCount = setCount_; + setCount_++; + + LOG(RPI_S_W, Debug) << "Next frame, set index is " << setCount_; + + for (auto &p : ctrl_) { + p.second[setCount_].value = p.second[prevCount].value; + p.second[setCount_].updated = false; + } + } + + /* listSize must be a power of 2. */ + static constexpr int listSize = (1 << 4); + struct CtrlInfo { + CtrlInfo() + : value(0), updated(false) + { + } + + CtrlInfo(int32_t value_) + : value(value_), updated(true) + { + } + + int32_t value; + bool updated; + }; + + class CircularArray : public std::array + { + public: + CtrlInfo &operator[](int index) + { + return std::array::operator[](index & (listSize - 1)); + } + + const CtrlInfo &operator[](int index) const + { + return std::array::operator[](index & (listSize - 1)); + } + }; + + bool init_; + uint32_t setCount_; + uint32_t getCount_; + uint8_t maxDelay_; + libcamera::V4L2VideoDevice *dev_; + std::unordered_map delay_; + std::unordered_map ctrl_; + std::mutex lock_; +}; + +} /* namespace RPi */ diff --git a/src/libcamera/pipeline/raspberrypi/vcsm.h b/src/libcamera/pipeline/raspberrypi/vcsm.h new file mode 100644 index 000000000000..fdce0050c26b --- /dev/null +++ b/src/libcamera/pipeline/raspberrypi/vcsm.h @@ -0,0 +1,144 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * vcsm.h - Helper class for vcsm allocations. + */ +#pragma once + +#include +#include + +#include +#include +#include +#include +#include + +namespace RPi { + +#define VCSM_CMA_DEVICE_NAME "/dev/vcsm-cma" + +class Vcsm +{ +public: + Vcsm() + { + vcsmHandle_ = ::open(VCSM_CMA_DEVICE_NAME, O_RDWR, 0); + if (vcsmHandle_ == -1) { + std::cerr << "Could not open vcsm device: " + << VCSM_CMA_DEVICE_NAME; + } + } + + ~Vcsm() + { + /* Free all existing allocations. 
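remove() unmaps each user mapping and closes the underlying dmabuf handle, returning an iterator to the next entry.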
*/ + auto it = allocMap_.begin(); + while (it != allocMap_.end()) + it = remove(it->first); + + if (vcsmHandle_) + ::close(vcsmHandle_); + } + + void *alloc(const char *name, unsigned int size, + vc_sm_cma_cache_e cache = VC_SM_CMA_CACHE_NONE) + { + unsigned int pageSize = getpagesize(); + void *user_ptr; + int ret; + + /* Ask for page aligned allocation. */ + size = (size + pageSize - 1) & ~(pageSize - 1); + + struct vc_sm_cma_ioctl_alloc alloc; + memset(&alloc, 0, sizeof(alloc)); + alloc.size = size; + alloc.num = 1; + alloc.cached = cache; + alloc.handle = 0; + if (name != NULL) + memcpy(alloc.name, name, 32); + + ret = ::ioctl(vcsmHandle_, VC_SM_CMA_IOCTL_MEM_ALLOC, &alloc); + + if (ret < 0 || alloc.handle < 0) { + std::cerr << "vcsm allocation failure for " + << name << std::endl; + return nullptr; + } + + /* Map the buffer into user space. */ + user_ptr = ::mmap(0, alloc.size, PROT_READ | PROT_WRITE, + MAP_SHARED, alloc.handle, 0); + + if (user_ptr == MAP_FAILED) { + std::cerr << "vcsm mmap failure for " << name << std::endl; + ::close(alloc.handle); + return nullptr; + } + + std::lock_guard lock(lock_); + allocMap_.emplace(user_ptr, AllocInfo(alloc.handle, + alloc.size, alloc.vc_handle)); + + return user_ptr; + } + + void free(void *user_ptr) + { + std::lock_guard lock(lock_); + remove(user_ptr); + } + + unsigned int getVCHandle(void *user_ptr) + { + std::lock_guard lock(lock_); + auto it = allocMap_.find(user_ptr); + if (it != allocMap_.end()) + return it->second.vcHandle; + + return 0; + } + +private: + struct AllocInfo { + AllocInfo(int handle_, int size_, int vcHandle_) + : handle(handle_), size(size_), vcHandle(vcHandle_) + { + } + + int handle; + int size; + uint32_t vcHandle; + }; + + /* Map of all allocations that have been requested. */ + using AllocMap = std::map; + + AllocMap::iterator remove(void *user_ptr) + { + auto it = allocMap_.find(user_ptr); + if (it != allocMap_.end()) { + int handle = it->second.handle; + int size = it->second.size; + ::munmap(user_ptr, size); + ::close(handle); + /* + * Remove the allocation from the map. This returns + * an iterator to the next element. + */ + it = allocMap_.erase(it); + } + + /* Returns an iterator to the next element. 
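If the user pointer was not found, this is simply the allocMap_.end() iterator returned by find() above.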
*/ + return it; + } + + AllocMap allocMap_; + int vcsmHandle_; + std::mutex lock_; +}; + +} /* namespace RPi */ From patchwork Mon May 4 09:28:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Pinchart X-Patchwork-Id: 3687 Return-Path: Received: from perceval.ideasonboard.com (perceval.ideasonboard.com [IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647]) by lancelot.ideasonboard.com (Postfix) with ESMTPS id D324C616BB for ; Mon, 4 May 2020 11:28:40 +0200 (CEST) Authentication-Results: lancelot.ideasonboard.com; dkim=pass (1024-bit key; unprotected) header.d=ideasonboard.com header.i=@ideasonboard.com header.b="fcSWUhc4"; dkim-atps=neutral Received: from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi [81.175.216.236]) by perceval.ideasonboard.com (Postfix) with ESMTPSA id 6A853304; Mon, 4 May 2020 11:28:39 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com; s=mail; t=1588584520; bh=9lE8fvH7mu8XmMc8Nfp6vXrPoxD6Py9EUt5z7urq6vM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fcSWUhc4iH3IqQAwObbCJ671R/2mv5ZqO7T14UY5k2M4vmNa2qeEIENKHB56Yxffh PGNZaZLsytAskIJgaNzPl1UKSSLUJmLhx/A1szZGZEyvglk9i2nL+63AVCSkZleHph 8XVxj7BZJzu2KbYJmumHhdJru50xsREzYjXhP4UQ= From: Laurent Pinchart To: libcamera-devel@lists.libcamera.org Date: Mon, 4 May 2020 12:28:27 +0300 Message-Id: <20200504092829.10099-5-laurent.pinchart@ideasonboard.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> References: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> MIME-Version: 1.0 X-Mailman-Approved-At: Mon, 04 May 2020 11:29:25 +0200 Subject: [libcamera-devel] [PATCH 4/6] libcamera: ipa: Raspberry Pi IPA X-BeenThere: libcamera-devel@lists.libcamera.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 04 May 2020 09:28:42 -0000 From: Naushir Patuck Initial implementation of the Raspberry Pi (BCM2835) libcamera IPA and associated libraries. All code is licensed under the BSD-2-Clause terms. Copyright (c) 2019-2020 Raspberry Pi Trading Ltd. 
Signed-off-by: Naushir Patuck Signed-off-by: Laurent Pinchart --- src/ipa/raspberrypi/README.md | 23 + src/ipa/raspberrypi/cam_helper.cpp | 119 ++ src/ipa/raspberrypi/cam_helper.hpp | 102 ++ src/ipa/raspberrypi/cam_helper_imx219.cpp | 180 +++ src/ipa/raspberrypi/cam_helper_imx477.cpp | 162 +++ src/ipa/raspberrypi/cam_helper_ov5647.cpp | 89 ++ .../raspberrypi/controller/agc_algorithm.hpp | 28 + src/ipa/raspberrypi/controller/agc_status.h | 39 + src/ipa/raspberrypi/controller/algorithm.cpp | 47 + src/ipa/raspberrypi/controller/algorithm.hpp | 62 + src/ipa/raspberrypi/controller/alsc_status.h | 27 + .../raspberrypi/controller/awb_algorithm.hpp | 22 + src/ipa/raspberrypi/controller/awb_status.h | 26 + .../controller/black_level_status.h | 23 + src/ipa/raspberrypi/controller/camera_mode.h | 40 + .../raspberrypi/controller/ccm_algorithm.hpp | 21 + src/ipa/raspberrypi/controller/ccm_status.h | 22 + .../controller/contrast_algorithm.hpp | 22 + .../raspberrypi/controller/contrast_status.h | 31 + src/ipa/raspberrypi/controller/controller.cpp | 109 ++ src/ipa/raspberrypi/controller/controller.hpp | 54 + .../raspberrypi/controller/device_status.h | 30 + src/ipa/raspberrypi/controller/dpc_status.h | 21 + src/ipa/raspberrypi/controller/geq_status.h | 22 + src/ipa/raspberrypi/controller/histogram.cpp | 64 + src/ipa/raspberrypi/controller/histogram.hpp | 44 + src/ipa/raspberrypi/controller/logging.hpp | 30 + src/ipa/raspberrypi/controller/lux_status.h | 29 + src/ipa/raspberrypi/controller/metadata.hpp | 77 ++ src/ipa/raspberrypi/controller/noise_status.h | 22 + src/ipa/raspberrypi/controller/pwl.cpp | 216 ++++ src/ipa/raspberrypi/controller/pwl.hpp | 109 ++ src/ipa/raspberrypi/controller/rpi/agc.cpp | 642 ++++++++++ src/ipa/raspberrypi/controller/rpi/agc.hpp | 123 ++ src/ipa/raspberrypi/controller/rpi/alsc.cpp | 705 +++++++++++ src/ipa/raspberrypi/controller/rpi/alsc.hpp | 104 ++ src/ipa/raspberrypi/controller/rpi/awb.cpp | 608 +++++++++ src/ipa/raspberrypi/controller/rpi/awb.hpp | 178 +++ .../controller/rpi/black_level.cpp | 56 + .../controller/rpi/black_level.hpp | 30 + src/ipa/raspberrypi/controller/rpi/ccm.cpp | 163 +++ src/ipa/raspberrypi/controller/rpi/ccm.hpp | 76 ++ .../raspberrypi/controller/rpi/contrast.cpp | 176 +++ .../raspberrypi/controller/rpi/contrast.hpp | 51 + src/ipa/raspberrypi/controller/rpi/dpc.cpp | 49 + src/ipa/raspberrypi/controller/rpi/dpc.hpp | 32 + src/ipa/raspberrypi/controller/rpi/geq.cpp | 75 ++ src/ipa/raspberrypi/controller/rpi/geq.hpp | 34 + src/ipa/raspberrypi/controller/rpi/lux.cpp | 104 ++ src/ipa/raspberrypi/controller/rpi/lux.hpp | 42 + src/ipa/raspberrypi/controller/rpi/noise.cpp | 71 ++ src/ipa/raspberrypi/controller/rpi/noise.hpp | 32 + src/ipa/raspberrypi/controller/rpi/sdn.cpp | 63 + src/ipa/raspberrypi/controller/rpi/sdn.hpp | 29 + .../raspberrypi/controller/rpi/sharpen.cpp | 60 + .../raspberrypi/controller/rpi/sharpen.hpp | 32 + src/ipa/raspberrypi/controller/sdn_status.h | 23 + .../raspberrypi/controller/sharpen_status.h | 26 + src/ipa/raspberrypi/data/imx219.json | 401 ++++++ src/ipa/raspberrypi/data/imx477.json | 416 +++++++ src/ipa/raspberrypi/data/meson.build | 9 + src/ipa/raspberrypi/data/ov5647.json | 398 ++++++ src/ipa/raspberrypi/data/uncalibrated.json | 82 ++ src/ipa/raspberrypi/md_parser.cpp | 101 ++ src/ipa/raspberrypi/md_parser.hpp | 123 ++ src/ipa/raspberrypi/md_parser_rpi.cpp | 37 + src/ipa/raspberrypi/md_parser_rpi.hpp | 32 + src/ipa/raspberrypi/meson.build | 59 + src/ipa/raspberrypi/raspberrypi.cpp | 1081 +++++++++++++++++ 69 files changed, 
8235 insertions(+) create mode 100644 src/ipa/raspberrypi/README.md create mode 100644 src/ipa/raspberrypi/cam_helper.cpp create mode 100644 src/ipa/raspberrypi/cam_helper.hpp create mode 100644 src/ipa/raspberrypi/cam_helper_imx219.cpp create mode 100644 src/ipa/raspberrypi/cam_helper_imx477.cpp create mode 100644 src/ipa/raspberrypi/cam_helper_ov5647.cpp create mode 100644 src/ipa/raspberrypi/controller/agc_algorithm.hpp create mode 100644 src/ipa/raspberrypi/controller/agc_status.h create mode 100644 src/ipa/raspberrypi/controller/algorithm.cpp create mode 100644 src/ipa/raspberrypi/controller/algorithm.hpp create mode 100644 src/ipa/raspberrypi/controller/alsc_status.h create mode 100644 src/ipa/raspberrypi/controller/awb_algorithm.hpp create mode 100644 src/ipa/raspberrypi/controller/awb_status.h create mode 100644 src/ipa/raspberrypi/controller/black_level_status.h create mode 100644 src/ipa/raspberrypi/controller/camera_mode.h create mode 100644 src/ipa/raspberrypi/controller/ccm_algorithm.hpp create mode 100644 src/ipa/raspberrypi/controller/ccm_status.h create mode 100644 src/ipa/raspberrypi/controller/contrast_algorithm.hpp create mode 100644 src/ipa/raspberrypi/controller/contrast_status.h create mode 100644 src/ipa/raspberrypi/controller/controller.cpp create mode 100644 src/ipa/raspberrypi/controller/controller.hpp create mode 100644 src/ipa/raspberrypi/controller/device_status.h create mode 100644 src/ipa/raspberrypi/controller/dpc_status.h create mode 100644 src/ipa/raspberrypi/controller/geq_status.h create mode 100644 src/ipa/raspberrypi/controller/histogram.cpp create mode 100644 src/ipa/raspberrypi/controller/histogram.hpp create mode 100644 src/ipa/raspberrypi/controller/logging.hpp create mode 100644 src/ipa/raspberrypi/controller/lux_status.h create mode 100644 src/ipa/raspberrypi/controller/metadata.hpp create mode 100644 src/ipa/raspberrypi/controller/noise_status.h create mode 100644 src/ipa/raspberrypi/controller/pwl.cpp create mode 100644 src/ipa/raspberrypi/controller/pwl.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/agc.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/agc.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/alsc.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/alsc.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/awb.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/awb.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/black_level.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/black_level.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/ccm.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/ccm.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/contrast.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/contrast.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/dpc.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/dpc.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/geq.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/geq.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/lux.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/lux.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/noise.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/noise.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/sdn.cpp create mode 100644 src/ipa/raspberrypi/controller/rpi/sdn.hpp create mode 100644 src/ipa/raspberrypi/controller/rpi/sharpen.cpp create mode 100644 
src/ipa/raspberrypi/controller/rpi/sharpen.hpp create mode 100644 src/ipa/raspberrypi/controller/sdn_status.h create mode 100644 src/ipa/raspberrypi/controller/sharpen_status.h create mode 100644 src/ipa/raspberrypi/data/imx219.json create mode 100644 src/ipa/raspberrypi/data/imx477.json create mode 100644 src/ipa/raspberrypi/data/meson.build create mode 100644 src/ipa/raspberrypi/data/ov5647.json create mode 100644 src/ipa/raspberrypi/data/uncalibrated.json create mode 100644 src/ipa/raspberrypi/md_parser.cpp create mode 100644 src/ipa/raspberrypi/md_parser.hpp create mode 100644 src/ipa/raspberrypi/md_parser_rpi.cpp create mode 100644 src/ipa/raspberrypi/md_parser_rpi.hpp create mode 100644 src/ipa/raspberrypi/meson.build create mode 100644 src/ipa/raspberrypi/raspberrypi.cpp diff --git a/src/ipa/raspberrypi/README.md b/src/ipa/raspberrypi/README.md new file mode 100644 index 000000000000..68bdff12fbeb --- /dev/null +++ b/src/ipa/raspberrypi/README.md @@ -0,0 +1,23 @@ +# _libcamera_ for the Raspberry Pi + +Raspberry Pi provides a fully featured pipeline handler and control algorithms +(IPAs, or "Image Processing Algorithms") to work with _libcamera_. Support is +included for all existing Raspberry Pi camera modules. + +_libcamera_ for the Raspberry Pi allows users to: + +1. Use their existing Raspberry Pi cameras. +1. Change the tuning of the image processing for their Raspberry Pi cameras. +1. Alter or amend the control algorithms (such as AGC/AEC, AWB or any others) + that control the sensor and ISP. +1. Implement their own custom control algorithms. +1. Supply new tunings and/or algorithms for completely new sensors. + +## How to install and run _libcamera_ on the Raspberry Pi + +Please follow the instructions [here](https://github.com/raspberrypi/documentation/tree/master/linux/software/libcamera/README.md). + +## Documentation + +Full documentation for the _Raspberry Pi Camera Algorithm and Tuning Guide_ can +be found [here](https://github.com/raspberrypi/documentation/tree/master/linux/software/libcamera/rpi_SOFT_libcamera_1p0.pdf). diff --git a/src/ipa/raspberrypi/cam_helper.cpp b/src/ipa/raspberrypi/cam_helper.cpp new file mode 100644 index 000000000000..167508f7bf38 --- /dev/null +++ b/src/ipa/raspberrypi/cam_helper.cpp @@ -0,0 +1,119 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * cam_helper.cpp - helper information for different sensors + */ + +#include + +#include +#include +#include + +#include "cam_helper.hpp" +#include "md_parser.hpp" + +#include "v4l2_videodevice.h" + +using namespace RPi; + +static std::map cam_helpers; + +CamHelper *CamHelper::Create(std::string const &cam_name) +{ + /* + * CamHelpers get registered by static RegisterCamHelper + * initialisers. + */ + for (auto &p : cam_helpers) { + if (cam_name.find(p.first) != std::string::npos) + return p.second(); + } + + return NULL; +} + +CamHelper::CamHelper(MdParser *parser) + : parser_(parser), initialized_(false) +{ +} + +CamHelper::~CamHelper() +{ + delete parser_; +} + +uint32_t CamHelper::ExposureLines(double exposure_us) const +{ + assert(initialized_); + return exposure_us * 1000.0 / mode_.line_length; +} + +double CamHelper::Exposure(uint32_t exposure_lines) const +{ + assert(initialized_); + return exposure_lines * mode_.line_length / 1000.0; +} + +void CamHelper::SetCameraMode(const CameraMode &mode) +{ + mode_ = mode; + parser_->SetBitsPerPixel(mode.bitdepth); + parser_->SetLineLengthBytes(0); /* We use SetBufferSize. 
*/ + initialized_ = true; +} + +void CamHelper::GetDelays(int &exposure_delay, int &gain_delay) const +{ + /* + * These values are correct for many sensors. Other sensors will + * need to over-ride this method. + */ + exposure_delay = 2; + gain_delay = 1; +} + +bool CamHelper::SensorEmbeddedDataPresent() const +{ + return false; +} + +unsigned int CamHelper::HideFramesStartup() const +{ + /* + * By default, hide 6 frames completely at start-up while AGC etc. sort + * themselves out (converge). + */ + return 6; +} + +unsigned int CamHelper::HideFramesModeSwitch() const +{ + /* After a mode switch, many sensors return valid frames immediately. */ + return 0; +} + +unsigned int CamHelper::MistrustFramesStartup() const +{ + /* Many sensors return a single bad frame on start-up. */ + return 1; +} + +unsigned int CamHelper::MistrustFramesModeSwitch() const +{ + /* Many sensors return valid metadata immediately. */ + return 0; +} + +CamTransform CamHelper::GetOrientation() const +{ + /* Most sensors will be mounted the "right" way up? */ + return CamTransform_IDENTITY; +} + +RegisterCamHelper::RegisterCamHelper(char const *cam_name, + CamHelperCreateFunc create_func) +{ + cam_helpers[std::string(cam_name)] = create_func; +} diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp new file mode 100644 index 000000000000..0c8aa29a4cff --- /dev/null +++ b/src/ipa/raspberrypi/cam_helper.hpp @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * cam_helper.hpp - helper class providing camera information + */ +#pragma once + +#include + +#include "camera_mode.h" +#include "md_parser.hpp" + +#include "v4l2_videodevice.h" + +namespace RPi { + +// The CamHelper class provides a number of facilities that anyone trying +// trying to drive a camera will need to know, but which are not provided by +// by the standard driver framework. Specifically, it provides: +// +// A "CameraMode" structure to describe extra information about the chosen +// mode of the driver. For example, how it is cropped from the full sensor +// area, how it is scaled, whether pixels are averaged compared to the full +// resolution. +// +// The ability to convert between number of lines of exposure and actual +// exposure time, and to convert between the sensor's gain codes and actual +// gains. +// +// A method to return the number of frames of delay between updating exposure +// and analogue gain and the changes taking effect. For many sensors these +// take the values 2 and 1 respectively, but sensors that are different will +// need to over-ride the default method provided. +// +// A method to query if the sensor outputs embedded data that can be parsed. +// +// A parser to parse the metadata buffers provided by some sensors (for +// example, the imx219 does; the ov5647 doesn't). This allows us to know for +// sure the exposure and gain of the frame we're looking at. CamHelper +// provides methods for converting analogue gains to and from the sensor's +// native gain codes. +// +// Finally, a set of methods that determine how to handle the vagaries of +// different camera modules on start-up or when switching modes. Some +// modules may produce one or more frames that are not yet correctly exposed, +// or where the metadata may be suspect. We have the following methods: +// HideFramesStartup(): Tell the pipeline handler not to return this many +// frames at start-up. 
This can also be used to hide initial frames +// while the AGC and other algorithms are sorting themselves out. +// HideFramesModeSwitch(): Tell the pipeline handler not to return this +// many frames after a mode switch (other than start-up). Some sensors +// may produce innvalid frames after a mode switch; others may not. +// MistrustFramesStartup(): At start-up a sensor may return frames for +// which we should not run any control algorithms (for example, metadata +// may be invalid). +// MistrustFramesModeSwitch(): The number of frames, after a mode switch +// (other than start-up), for which control algorithms should not run +// (for example, metadata may be unreliable). + +// Bitfield to represent the default orientation of the camera. +typedef int CamTransform; +static constexpr CamTransform CamTransform_IDENTITY = 0; +static constexpr CamTransform CamTransform_HFLIP = 1; +static constexpr CamTransform CamTransform_VFLIP = 2; + +class CamHelper +{ +public: + static CamHelper *Create(std::string const &cam_name); + CamHelper(MdParser *parser); + virtual ~CamHelper(); + void SetCameraMode(const CameraMode &mode); + MdParser &Parser() const { return *parser_; } + uint32_t ExposureLines(double exposure_us) const; + double Exposure(uint32_t exposure_lines) const; // in us + virtual uint32_t GainCode(double gain) const = 0; + virtual double Gain(uint32_t gain_code) const = 0; + virtual void GetDelays(int &exposure_delay, int &gain_delay) const; + virtual bool SensorEmbeddedDataPresent() const; + virtual unsigned int HideFramesStartup() const; + virtual unsigned int HideFramesModeSwitch() const; + virtual unsigned int MistrustFramesStartup() const; + virtual unsigned int MistrustFramesModeSwitch() const; + virtual CamTransform GetOrientation() const; +protected: + MdParser *parser_; + CameraMode mode_; + bool initialized_; +}; + +// This is for registering camera helpers with the system, so that the +// CamHelper::Create function picks them up automatically. + +typedef CamHelper *(*CamHelperCreateFunc)(); +struct RegisterCamHelper +{ + RegisterCamHelper(char const *cam_name, + CamHelperCreateFunc create_func); +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp new file mode 100644 index 000000000000..35c6597c2016 --- /dev/null +++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp @@ -0,0 +1,180 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * cam_helper_imx219.cpp - camera helper for imx219 sensor + */ + +#include +#include +#include +#include + +/* + * We have observed the imx219 embedded data stream randomly return junk + * reister values. Do not rely on embedded data until this has been resolved. + */ +#define ENABLE_EMBEDDED_DATA 0 + +#include "cam_helper.hpp" +#if ENABLE_EMBEDDED_DATA +#include "md_parser.hpp" +#else +#include "md_parser_rpi.hpp" +#endif + +using namespace RPi; + +/* Metadata parser implementation specific to Sony IMX219 sensors. */ + +class MdParserImx219 : public MdParserSmia +{ +public: + MdParserImx219(); + Status Parse(void *data) override; + Status GetExposureLines(unsigned int &lines) override; + Status GetGainCode(unsigned int &gain_code) override; +private: + /* Offset of the register's value in the metadata block. */ + int reg_offsets_[3]; + /* Value of the register, once read from the metadata block. 
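The three slots correspond, in register address order, to the gain register and the two exposure registers (see GAIN_INDEX, EXPHI_INDEX and EXPLO_INDEX below).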
*/ + int reg_values_[3]; +}; + +class CamHelperImx219 : public CamHelper +{ +public: + CamHelperImx219(); + uint32_t GainCode(double gain) const override; + double Gain(uint32_t gain_code) const override; + unsigned int MistrustFramesModeSwitch() const override; + bool SensorEmbeddedDataPresent() const override; + CamTransform GetOrientation() const override; +}; + +CamHelperImx219::CamHelperImx219() +#if ENABLE_EMBEDDED_DATA + : CamHelper(new MdParserImx219()) +#else + : CamHelper(new MdParserRPi()) +#endif +{ +} + +uint32_t CamHelperImx219::GainCode(double gain) const +{ + return (uint32_t)(256 - 256 / gain); +} + +double CamHelperImx219::Gain(uint32_t gain_code) const +{ + return 256.0 / (256 - gain_code); +} + +unsigned int CamHelperImx219::MistrustFramesModeSwitch() const +{ + /* + * For reasons unknown, we do occasionally get a bogus metadata frame + * at a mode switch (though not at start-up). Possibly warrants some + * investigation, though not a big deal. + */ + return 1; +} + +bool CamHelperImx219::SensorEmbeddedDataPresent() const +{ + return ENABLE_EMBEDDED_DATA; +} + +CamTransform CamHelperImx219::GetOrientation() const +{ + /* Camera is "upside down" on this board. */ + return CamTransform_HFLIP | CamTransform_VFLIP; +} + +static CamHelper *Create() +{ + return new CamHelperImx219(); +} + +static RegisterCamHelper reg("imx219", &Create); + +/* + * We care about one gain register and a pair of exposure registers. Their I2C + * addresses from the Sony IMX219 datasheet: + */ +#define GAIN_REG 0x157 +#define EXPHI_REG 0x15A +#define EXPLO_REG 0x15B + +/* + * Index of each into the reg_offsets and reg_values arrays. Must be in + * register address order. + */ +#define GAIN_INDEX 0 +#define EXPHI_INDEX 1 +#define EXPLO_INDEX 2 + +MdParserImx219::MdParserImx219() +{ + reg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = -1; +} + +MdParser::Status MdParserImx219::Parse(void *data) +{ + bool try_again = false; + + if (reset_) { + /* + * Search again through the metadata for the gain and exposure + * registers. + */ + assert(bits_per_pixel_); + assert(num_lines_ || buffer_size_bytes_); + /* Need to be ordered */ + uint32_t regs[3] = { GAIN_REG, EXPHI_REG, EXPLO_REG }; + reg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = -1; + int ret = static_cast(findRegs(static_cast(data), + regs, reg_offsets_, 3)); + /* + * > 0 means "worked partially but parse again next time", + * < 0 means "hard error". + */ + if (ret > 0) + try_again = true; + else if (ret < 0) + return ERROR; + } + + for (int i = 0; i < 3; i++) { + if (reg_offsets_[i] == -1) + continue; + + reg_values_[i] = static_cast(data)[reg_offsets_[i]]; + } + + /* Re-parse next time if we were unhappy in some way. 
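"Unhappy" means findRegs() reported partial success (a return value greater than zero), so the register offsets will be searched for again on the next frame.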
*/ + reset_ = try_again; + + return OK; +} + +MdParser::Status MdParserImx219::GetExposureLines(unsigned int &lines) +{ + if (reg_offsets_[EXPHI_INDEX] == -1 || reg_offsets_[EXPLO_INDEX] == -1) + return NOTFOUND; + + lines = reg_values_[EXPHI_INDEX] * 256 + reg_values_[EXPLO_INDEX]; + + return OK; +} + +MdParser::Status MdParserImx219::GetGainCode(unsigned int &gain_code) +{ + if (reg_offsets_[GAIN_INDEX] == -1) + return NOTFOUND; + + gain_code = reg_values_[GAIN_INDEX]; + + return OK; +} diff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp new file mode 100644 index 000000000000..695444567bb2 --- /dev/null +++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp @@ -0,0 +1,162 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2020, Raspberry Pi (Trading) Limited + * + * cam_helper_imx477.cpp - camera helper for imx477 sensor + */ + +#include +#include +#include +#include + +#include "cam_helper.hpp" +#include "md_parser.hpp" + +using namespace RPi; + +/* Metadata parser implementation specific to Sony IMX477 sensors. */ + +class MdParserImx477 : public MdParserSmia +{ +public: + MdParserImx477(); + Status Parse(void *data) override; + Status GetExposureLines(unsigned int &lines) override; + Status GetGainCode(unsigned int &gain_code) override; +private: + /* Offset of the register's value in the metadata block. */ + int reg_offsets_[4]; + /* Value of the register, once read from the metadata block. */ + int reg_values_[4]; +}; + +class CamHelperImx477 : public CamHelper +{ +public: + CamHelperImx477(); + uint32_t GainCode(double gain) const override; + double Gain(uint32_t gain_code) const override; + bool SensorEmbeddedDataPresent() const override; + CamTransform GetOrientation() const override; +}; + +CamHelperImx477::CamHelperImx477() + : CamHelper(new MdParserImx477()) +{ +} + +uint32_t CamHelperImx477::GainCode(double gain) const +{ + return static_cast(1024 - 1024 / gain); +} + +double CamHelperImx477::Gain(uint32_t gain_code) const +{ + return 1024.0 / (1024 - gain_code); +} + +bool CamHelperImx477::SensorEmbeddedDataPresent() const +{ + return true; +} + +CamTransform CamHelperImx477::GetOrientation() const +{ + /* Camera is "upside down" on this board. */ + return CamTransform_HFLIP | CamTransform_VFLIP; +} + +static CamHelper *Create() +{ + return new CamHelperImx477(); +} + +static RegisterCamHelper reg("imx477", &Create); + +/* + * We care about two gain registers and a pair of exposure registers. Their + * I2C addresses from the Sony IMX477 datasheet: + */ +#define EXPHI_REG 0x0202 +#define EXPLO_REG 0x0203 +#define GAINHI_REG 0x0204 +#define GAINLO_REG 0x0205 + +/* + * Index of each into the reg_offsets and reg_values arrays. Must be in register + * address order. + */ +#define EXPHI_INDEX 0 +#define EXPLO_INDEX 1 +#define GAINHI_INDEX 2 +#define GAINLO_INDEX 3 + +MdParserImx477::MdParserImx477() +{ + reg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = reg_offsets_[3] = -1; +} + +MdParser::Status MdParserImx477::Parse(void *data) +{ + bool try_again = false; + + if (reset_) { + /* + * Search again through the metadata for the gain and exposure + * registers. 
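+ * Unlike the IMX219, the IMX477 splits the analogue gain across a high
+ * and a low byte (GAINHI/GAINLO), so four registers are tracked here.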
+ */ + assert(bits_per_pixel_); + assert(num_lines_ || buffer_size_bytes_); + /* Need to be ordered */ + uint32_t regs[4] = { + EXPHI_REG, + EXPLO_REG, + GAINHI_REG, + GAINLO_REG + }; + reg_offsets_[0] = reg_offsets_[1] = reg_offsets_[2] = reg_offsets_[3] = -1; + int ret = static_cast(findRegs(static_cast(data), + regs, reg_offsets_, 4)); + /* + * > 0 means "worked partially but parse again next time", + * < 0 means "hard error". + */ + if (ret > 0) + try_again = true; + else if (ret < 0) + return ERROR; + } + + for (int i = 0; i < 4; i++) { + if (reg_offsets_[i] == -1) + continue; + + reg_values_[i] = static_cast(data)[reg_offsets_[i]]; + } + + /* Re-parse next time if we were unhappy in some way. */ + reset_ = try_again; + + return OK; +} + +MdParser::Status MdParserImx477::GetExposureLines(unsigned int &lines) +{ + if (reg_offsets_[EXPHI_INDEX] == -1 || reg_offsets_[EXPLO_INDEX] == -1) + return NOTFOUND; + + lines = reg_values_[EXPHI_INDEX] * 256 + reg_values_[EXPLO_INDEX]; + + return OK; +} + +MdParser::Status MdParserImx477::GetGainCode(unsigned int &gain_code) +{ + if (reg_offsets_[GAINHI_INDEX] == -1 || reg_offsets_[GAINLO_INDEX] == -1) + return NOTFOUND; + + gain_code = reg_values_[GAINHI_INDEX] * 256 + reg_values_[GAINLO_INDEX]; + + return OK; +} diff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp new file mode 100644 index 000000000000..3dbcb164451f --- /dev/null +++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * cam_helper_ov5647.cpp - camera information for ov5647 sensor + */ + +#include + +#include "cam_helper.hpp" +#include "md_parser_rpi.hpp" + +using namespace RPi; + +class CamHelperOv5647 : public CamHelper +{ +public: + CamHelperOv5647(); + uint32_t GainCode(double gain) const override; + double Gain(uint32_t gain_code) const override; + void GetDelays(int &exposure_delay, int &gain_delay) const override; + unsigned int HideFramesModeSwitch() const override; + unsigned int MistrustFramesStartup() const override; + unsigned int MistrustFramesModeSwitch() const override; +}; + +/* + * OV5647 doesn't output metadata, so we have to use the "unicam parser" which + * works by counting frames. + */ + +CamHelperOv5647::CamHelperOv5647() + : CamHelper(new MdParserRPi()) +{ +} + +uint32_t CamHelperOv5647::GainCode(double gain) const +{ + return static_cast(gain * 16.0); +} + +double CamHelperOv5647::Gain(uint32_t gain_code) const +{ + return static_cast(gain_code) / 16.0; +} + +void CamHelperOv5647::GetDelays(int &exposure_delay, int &gain_delay) const +{ + /* + * We run this sensor in a mode where the gain delay is bumped up to + * 2. It seems to be the only way to make the delays "predictable". + */ + exposure_delay = 2; + gain_delay = 2; +} + +unsigned int CamHelperOv5647::HideFramesModeSwitch() const +{ + /* + * After a mode switch, we get a couple of under-exposed frames which + * we don't want shown. + */ + return 2; +} + +unsigned int CamHelperOv5647::MistrustFramesStartup() const +{ + /* + * First couple of frames are under-exposed and are no good for control + * algos. + */ + return 2; +} + +unsigned int CamHelperOv5647::MistrustFramesModeSwitch() const +{ + /* + * First couple of frames are under-exposed even after a simple + * mode switch, and are no good for control algos. 
+ */ + return 2; +} + +static CamHelper *Create() +{ + return new CamHelperOv5647(); +} + +static RegisterCamHelper reg("ov5647", &Create); diff --git a/src/ipa/raspberrypi/controller/agc_algorithm.hpp b/src/ipa/raspberrypi/controller/agc_algorithm.hpp new file mode 100644 index 000000000000..f29bb3ac941c --- /dev/null +++ b/src/ipa/raspberrypi/controller/agc_algorithm.hpp @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * agc_algorithm.hpp - AGC/AEC control algorithm interface + */ +#pragma once + +#include "algorithm.hpp" + +namespace RPi { + +class AgcAlgorithm : public Algorithm +{ +public: + AgcAlgorithm(Controller *controller) : Algorithm(controller) {} + // An AGC algorithm must provide the following: + virtual void SetEv(double ev) = 0; + virtual void SetFlickerPeriod(double flicker_period) = 0; + virtual void SetFixedShutter(double fixed_shutter) = 0; // microseconds + virtual void SetFixedAnalogueGain(double fixed_analogue_gain) = 0; + virtual void SetMeteringMode(std::string const &metering_mode_name) = 0; + virtual void SetExposureMode(std::string const &exposure_mode_name) = 0; + virtual void + SetConstraintMode(std::string const &contraint_mode_name) = 0; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/agc_status.h b/src/ipa/raspberrypi/controller/agc_status.h new file mode 100644 index 000000000000..10381c90a313 --- /dev/null +++ b/src/ipa/raspberrypi/controller/agc_status.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * agc_status.h - AGC/AEC control algorithm status + */ +#pragma once + +// The AGC algorithm should post the following structure into the image's +// "agc.status" metadata. + +#ifdef __cplusplus +extern "C" { +#endif + +// Note: total_exposure_value will be reported as zero until the algorithm has +// seen statistics and calculated meaningful values. The contents should be +// ignored until then. 
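+//
+// Illustration only (not part of this interface): a consumer holding the
+// image metadata could read the structure back roughly as follows,
+// assuming the Metadata::Get() helper from metadata.hpp returns 0 on
+// success:
+//
+//   struct AgcStatus agc_status;
+//   if (image_metadata->Get("agc.status", agc_status) == 0 &&
+//       agc_status.total_exposure_value != 0.0)
+//       apply(agc_status.shutter_time, agc_status.analogue_gain);
+//
+// where apply() stands in for whatever the caller does with the values.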
+ +struct AgcStatus { + double total_exposure_value; // value for all exposure and gain for this image + double target_exposure_value; // (unfiltered) target total exposure AGC is aiming for + double shutter_time; + double analogue_gain; + char exposure_mode[32]; + char constraint_mode[32]; + char metering_mode[32]; + double ev; + double flicker_period; + int floating_region_enable; + double fixed_shutter; + double fixed_analogue_gain; + double digital_gain; + int locked; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/algorithm.cpp b/src/ipa/raspberrypi/controller/algorithm.cpp new file mode 100644 index 000000000000..9bd3df8615f8 --- /dev/null +++ b/src/ipa/raspberrypi/controller/algorithm.cpp @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * algorithm.cpp - ISP control algorithms + */ + +#include "algorithm.hpp" + +using namespace RPi; + +void Algorithm::Read(boost::property_tree::ptree const ¶ms) +{ + (void)params; +} + +void Algorithm::Initialise() {} + +void Algorithm::SwitchMode(CameraMode const &camera_mode) +{ + (void)camera_mode; +} + +void Algorithm::Prepare(Metadata *image_metadata) +{ + (void)image_metadata; +} + +void Algorithm::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + (void)stats; + (void)image_metadata; +} + +// For registering algorithms with the system: + +static std::map algorithms; +std::map const &RPi::GetAlgorithms() +{ + return algorithms; +} + +RegisterAlgorithm::RegisterAlgorithm(char const *name, + AlgoCreateFunc create_func) +{ + algorithms[std::string(name)] = create_func; +} diff --git a/src/ipa/raspberrypi/controller/algorithm.hpp b/src/ipa/raspberrypi/controller/algorithm.hpp new file mode 100644 index 000000000000..b82c184168ce --- /dev/null +++ b/src/ipa/raspberrypi/controller/algorithm.hpp @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * algorithm.hpp - ISP control algorithm interface + */ +#pragma once + +// All algorithms should be derived from this class and made available to the +// Controller. + +#include +#include +#include +#include + +#include "logging.hpp" +#include "controller.hpp" + +#include + +namespace RPi { + +// This defines the basic interface for all control algorithms. + +class Algorithm +{ +public: + Algorithm(Controller *controller) + : controller_(controller), paused_(false) + { + } + virtual ~Algorithm() {} + virtual char const *Name() const = 0; + virtual bool IsPaused() const { return paused_; } + virtual void Pause() { paused_ = true; } + virtual void Resume() { paused_ = false; } + virtual void Read(boost::property_tree::ptree const ¶ms); + virtual void Initialise(); + virtual void SwitchMode(CameraMode const &camera_mode); + virtual void Prepare(Metadata *image_metadata); + virtual void Process(StatisticsPtr &stats, Metadata *image_metadata); + Metadata &GetGlobalMetadata() const + { + return controller_->GetGlobalMetadata(); + } + +private: + Controller *controller_; + std::atomic paused_; +}; + +// This code is for automatic registration of Front End algorithms with the +// system. 
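+//
+// For example, an algorithm would normally register itself from its own
+// source file with a sketch along these lines (the class and name below
+// are made up for illustration):
+//
+//   static Algorithm *Create(Controller *controller)
+//   {
+//       return new MyAlgorithm(controller);
+//   }
+//   static RegisterAlgorithm reg("my_algorithm", &Create);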
+ +typedef Algorithm *(*AlgoCreateFunc)(Controller *controller); +struct RegisterAlgorithm { + RegisterAlgorithm(char const *name, AlgoCreateFunc create_func); +}; +std::map const &GetAlgorithms(); + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/alsc_status.h b/src/ipa/raspberrypi/controller/alsc_status.h new file mode 100644 index 000000000000..d3f579715594 --- /dev/null +++ b/src/ipa/raspberrypi/controller/alsc_status.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * alsc_status.h - ALSC (auto lens shading correction) control algorithm status + */ +#pragma once + +// The ALSC algorithm should post the following structure into the image's +// "alsc.status" metadata. + +#ifdef __cplusplus +extern "C" { +#endif + +#define ALSC_CELLS_X 16 +#define ALSC_CELLS_Y 12 + +struct AlscStatus { + double r[ALSC_CELLS_Y][ALSC_CELLS_X]; + double g[ALSC_CELLS_Y][ALSC_CELLS_X]; + double b[ALSC_CELLS_Y][ALSC_CELLS_X]; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/awb_algorithm.hpp b/src/ipa/raspberrypi/controller/awb_algorithm.hpp new file mode 100644 index 000000000000..22508ddd75a1 --- /dev/null +++ b/src/ipa/raspberrypi/controller/awb_algorithm.hpp @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * awb_algorithm.hpp - AWB control algorithm interface + */ +#pragma once + +#include "algorithm.hpp" + +namespace RPi { + +class AwbAlgorithm : public Algorithm +{ +public: + AwbAlgorithm(Controller *controller) : Algorithm(controller) {} + // An AWB algorithm must provide the following: + virtual void SetMode(std::string const &mode_name) = 0; + virtual void SetManualGains(double manual_r, double manual_b) = 0; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/awb_status.h b/src/ipa/raspberrypi/controller/awb_status.h new file mode 100644 index 000000000000..46d7c842299a --- /dev/null +++ b/src/ipa/raspberrypi/controller/awb_status.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * awb_status.h - AWB control algorithm status + */ +#pragma once + +// The AWB algorithm places its results into both the image and global metadata, +// under the tag "awb.status". + +#ifdef __cplusplus +extern "C" { +#endif + +struct AwbStatus { + char mode[32]; + double temperature_K; + double gain_r; + double gain_g; + double gain_b; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/black_level_status.h b/src/ipa/raspberrypi/controller/black_level_status.h new file mode 100644 index 000000000000..d085f64b27fe --- /dev/null +++ b/src/ipa/raspberrypi/controller/black_level_status.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * black_level_status.h - black level control algorithm status + */ +#pragma once + +// The "black level" algorithm stores the black levels to use. 
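+//
+// The levels are expressed on a 16-bit scale regardless of the sensor bit
+// depth; for example (assuming a simple shift by the bit-depth difference)
+// a stored value of 4096 corresponds to a black level of 256 on a 12-bit
+// sensor.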
+ +#ifdef __cplusplus +extern "C" { +#endif + +struct BlackLevelStatus { + uint16_t black_level_r; // out of 16 bits + uint16_t black_level_g; + uint16_t black_level_b; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/camera_mode.h b/src/ipa/raspberrypi/controller/camera_mode.h new file mode 100644 index 000000000000..875bab3161af --- /dev/null +++ b/src/ipa/raspberrypi/controller/camera_mode.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019-2020, Raspberry Pi (Trading) Limited + * + * camera_mode.h - description of a particular operating mode of a sensor + */ +#pragma once + +// Description of a "camera mode", holding enough information for control +// algorithms to adapt their behaviour to the different modes of the camera, +// including binning, scaling, cropping etc. + +#ifdef __cplusplus +extern "C" { +#endif + +#define CAMERA_MODE_NAME_LEN 32 + +struct CameraMode { + // bit depth of the raw camera output + uint32_t bitdepth; + // size in pixels of frames in this mode + uint16_t width, height; + // size of full resolution uncropped frame ("sensor frame") + uint16_t sensor_width, sensor_height; + // binning factor (1 = no binning, 2 = 2-pixel binning etc.) + uint8_t bin_x, bin_y; + // location of top left pixel in the sensor frame + uint16_t crop_x, crop_y; + // scaling factor (so if uncropped, width*scale_x is sensor_width) + double scale_x, scale_y; + // scaling of the noise compared to the native sensor mode + double noise_factor; + // line time in nanoseconds + double line_length; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/ccm_algorithm.hpp b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp new file mode 100644 index 000000000000..21806cb0ddf9 --- /dev/null +++ b/src/ipa/raspberrypi/controller/ccm_algorithm.hpp @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * ccm_algorithm.hpp - CCM (colour correction matrix) control algorithm interface + */ +#pragma once + +#include "algorithm.hpp" + +namespace RPi { + +class CcmAlgorithm : public Algorithm +{ +public: + CcmAlgorithm(Controller *controller) : Algorithm(controller) {} + // A CCM algorithm must provide the following: + virtual void SetSaturation(double saturation) = 0; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/ccm_status.h b/src/ipa/raspberrypi/controller/ccm_status.h new file mode 100644 index 000000000000..7e41dd1ff3c0 --- /dev/null +++ b/src/ipa/raspberrypi/controller/ccm_status.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * ccm_status.h - CCM (colour correction matrix) control algorithm status + */ +#pragma once + +// The "ccm" algorithm generates an appropriate colour matrix. 
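
Before moving on to the colour-matrix status, a worked reading of the CameraMode fields above may help. A full-field-of-view 2x2 binned mode on a hypothetical 3280x2464 sensor would be described roughly as follows (all numbers illustrative only):

struct CameraMode mode = {};
mode.bitdepth = 10;                      // raw Bayer output depth
mode.width = 1640;  mode.height = 1232;  // frame size in this mode
mode.sensor_width = 3280;                // full uncropped sensor frame
mode.sensor_height = 2464;
mode.bin_x = mode.bin_y = 2;             // 2x2 binning
mode.crop_x = mode.crop_y = 0;           // uncropped
mode.scale_x = mode.scale_y = 2.0;       // width * scale_x == sensor_width

A centred, unbinned 1920x1080 crop of the same sensor would instead have bin and scale factors of 1, with crop_x = (3280 - 1920) / 2 = 680 and crop_y = (2464 - 1080) / 2 = 692.
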
+ +#ifdef __cplusplus +extern "C" { +#endif + +struct CcmStatus { + double matrix[9]; + double saturation; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/contrast_algorithm.hpp b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp new file mode 100644 index 000000000000..9780322b28bc --- /dev/null +++ b/src/ipa/raspberrypi/controller/contrast_algorithm.hpp @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * contrast_algorithm.hpp - contrast (gamma) control algorithm interface + */ +#pragma once + +#include "algorithm.hpp" + +namespace RPi { + +class ContrastAlgorithm : public Algorithm +{ +public: + ContrastAlgorithm(Controller *controller) : Algorithm(controller) {} + // A contrast algorithm must provide the following: + virtual void SetBrightness(double brightness) = 0; + virtual void SetContrast(double contrast) = 0; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/contrast_status.h b/src/ipa/raspberrypi/controller/contrast_status.h new file mode 100644 index 000000000000..d7edd4e9990d --- /dev/null +++ b/src/ipa/raspberrypi/controller/contrast_status.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * contrast_status.h - contrast (gamma) control algorithm status + */ +#pragma once + +// The "contrast" algorithm creates a gamma curve, optionally doing a little bit +// of contrast stretching based on the AGC histogram. + +#ifdef __cplusplus +extern "C" { +#endif + +#define CONTRAST_NUM_POINTS 33 + +struct ContrastPoint { + uint16_t x; + uint16_t y; +}; + +struct ContrastStatus { + struct ContrastPoint points[CONTRAST_NUM_POINTS]; + double brightness; + double contrast; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/controller.cpp b/src/ipa/raspberrypi/controller/controller.cpp new file mode 100644 index 000000000000..20dd4c787d14 --- /dev/null +++ b/src/ipa/raspberrypi/controller/controller.cpp @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * controller.cpp - ISP controller + */ + +#include "algorithm.hpp" +#include "controller.hpp" + +#include +#include + +using namespace RPi; + +Controller::Controller() + : switch_mode_called_(false) {} + +Controller::Controller(char const *json_filename) + : switch_mode_called_(false) +{ + Read(json_filename); + Initialise(); +} + +Controller::~Controller() {} + +void Controller::Read(char const *filename) +{ + RPI_LOG("Controller starting"); + boost::property_tree::ptree root; + boost::property_tree::read_json(filename, root); + for (auto const &key_and_value : root) { + Algorithm *algo = CreateAlgorithm(key_and_value.first.c_str()); + if (algo) { + algo->Read(key_and_value.second); + algorithms_.push_back(AlgorithmPtr(algo)); + } else + RPI_LOG("WARNING: No algorithm found for \"" + << key_and_value.first << "\""); + } + RPI_LOG("Controller finished"); +} + +Algorithm *Controller::CreateAlgorithm(char const *name) +{ + auto it = GetAlgorithms().find(std::string(name)); + return it != GetAlgorithms().end() ? 
(*it->second)(this) : nullptr; +} + +void Controller::Initialise() +{ + RPI_LOG("Controller starting"); + for (auto &algo : algorithms_) + algo->Initialise(); + RPI_LOG("Controller finished"); +} + +void Controller::SwitchMode(CameraMode const &camera_mode) +{ + RPI_LOG("Controller starting"); + for (auto &algo : algorithms_) + algo->SwitchMode(camera_mode); + switch_mode_called_ = true; + RPI_LOG("Controller finished"); +} + +void Controller::Prepare(Metadata *image_metadata) +{ + RPI_LOG("Controller::Prepare starting"); + assert(switch_mode_called_); + for (auto &algo : algorithms_) + if (!algo->IsPaused()) + algo->Prepare(image_metadata); + RPI_LOG("Controller::Prepare finished"); +} + +void Controller::Process(StatisticsPtr stats, Metadata *image_metadata) +{ + RPI_LOG("Controller::Process starting"); + assert(switch_mode_called_); + for (auto &algo : algorithms_) + if (!algo->IsPaused()) + algo->Process(stats, image_metadata); + RPI_LOG("Controller::Process finished"); +} + +Metadata &Controller::GetGlobalMetadata() +{ + return global_metadata_; +} + +Algorithm *Controller::GetAlgorithm(std::string const &name) const +{ + // The passed name must be the entire algorithm name, or must match the + // last part of it with a period (.) just before. + size_t name_len = name.length(); + for (auto &algo : algorithms_) { + char const *algo_name = algo->Name(); + size_t algo_name_len = strlen(algo_name); + if (algo_name_len >= name_len && + strcasecmp(name.c_str(), + algo_name + algo_name_len - name_len) == 0 && + (name_len == algo_name_len || + algo_name[algo_name_len - name_len - 1] == '.')) + return algo.get(); + } + return nullptr; +} diff --git a/src/ipa/raspberrypi/controller/controller.hpp b/src/ipa/raspberrypi/controller/controller.hpp new file mode 100644 index 000000000000..d6853866d33c --- /dev/null +++ b/src/ipa/raspberrypi/controller/controller.hpp @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * controller.hpp - ISP controller interface + */ +#pragma once + +// The Controller is simply a container for a collecting together a number of +// "control algorithms" (such as AWB etc.) and for running them all in a +// convenient manner. + +#include +#include + +#include + +#include "camera_mode.h" +#include "device_status.h" +#include "metadata.hpp" + +namespace RPi { + +class Algorithm; +typedef std::unique_ptr AlgorithmPtr; +typedef std::shared_ptr StatisticsPtr; + +// The Controller holds a pointer to some global_metadata, which is how +// different controllers and control algorithms within them can exchange +// information. The Prepare method returns a pointer to metadata for this +// specific image, and which should be passed on to the Process method. 
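
The comment above, together with controller.cpp, suggests the intended driving pattern: construct a Controller from a JSON tuning file whose top-level keys are algorithm names ("rpi.agc", "rpi.alsc" and so on), tell it about the sensor mode once, then for every frame create a Metadata object, post the sensor's DeviceStatus into it, and run Prepare followed by Process with that frame's statistics. A minimal sketch; the file name, the numeric values and the grab_stats() helper are illustrative, not part of the patch:

#include "controller.hpp"
#include "agc_status.h"

using namespace RPi;

void run_frames(StatisticsPtr (*grab_stats)())
{
        Controller controller("raspberrypi.json"); // Read() then Initialise()

        CameraMode mode = {};
        /* fill in the mode as in the camera_mode.h example above */
        controller.SwitchMode(mode);               // must happen before Prepare()/Process()

        for (int frame = 0; frame < 10; frame++) {
                Metadata metadata;

                // Report what the sensor actually did for this frame.
                DeviceStatus device_status = {};
                device_status.shutter_speed = 10000.0; // microseconds
                device_status.analogue_gain = 1.0;
                metadata.Set("device.status", device_status);

                controller.Prepare(&metadata);
                controller.Process(grab_stats(), &metadata);

                // Read back what AGC has asked for and program it into the sensor.
                AgcStatus agc_status;
                if (metadata.Get("agc.status", agc_status) == 0) {
                        /* apply agc_status.shutter_time and agc_status.analogue_gain */
                }
        }
}
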
+ +class Controller +{ +public: + Controller(); + Controller(char const *json_filename); + ~Controller(); + Algorithm *CreateAlgorithm(char const *name); + void Read(char const *filename); + void Initialise(); + void SwitchMode(CameraMode const &camera_mode); + void Prepare(Metadata *image_metadata); + void Process(StatisticsPtr stats, Metadata *image_metadata); + Metadata &GetGlobalMetadata(); + Algorithm *GetAlgorithm(std::string const &name) const; + +protected: + Metadata global_metadata_; + std::vector algorithms_; + bool switch_mode_called_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/device_status.h b/src/ipa/raspberrypi/controller/device_status.h new file mode 100644 index 000000000000..aa08608b5d40 --- /dev/null +++ b/src/ipa/raspberrypi/controller/device_status.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * device_status.h - device (image sensor) status + */ +#pragma once + +// Definition of "device metadata" which stores things like shutter time and +// analogue gain that downstream control algorithms will want to know. + +#ifdef __cplusplus +extern "C" { +#endif + +struct DeviceStatus { + // time shutter is open, in microseconds + double shutter_speed; + double analogue_gain; + // 1.0/distance-in-metres, or 0 if unknown + double lens_position; + // 1/f so that brightness quadruples when this doubles, or 0 if unknown + double aperture; + // proportional to brightness with 0 = no flash, 1 = maximum flash + double flash_intensity; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/dpc_status.h b/src/ipa/raspberrypi/controller/dpc_status.h new file mode 100644 index 000000000000..a3ec2762573b --- /dev/null +++ b/src/ipa/raspberrypi/controller/dpc_status.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * dpc_status.h - DPC (defective pixel correction) control algorithm status + */ +#pragma once + +// The "DPC" algorithm sets defective pixel correction strength. 
+ +#ifdef __cplusplus +extern "C" { +#endif + +struct DpcStatus { + int strength; // 0 = "off", 1 = "normal", 2 = "strong" +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/geq_status.h b/src/ipa/raspberrypi/controller/geq_status.h new file mode 100644 index 000000000000..07fd5f0347ef --- /dev/null +++ b/src/ipa/raspberrypi/controller/geq_status.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * geq_status.h - GEQ (green equalisation) control algorithm status + */ +#pragma once + +// The "GEQ" algorithm calculates the green equalisation thresholds + +#ifdef __cplusplus +extern "C" { +#endif + +struct GeqStatus { + uint16_t offset; + double slope; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/histogram.cpp b/src/ipa/raspberrypi/controller/histogram.cpp new file mode 100644 index 000000000000..103d3f606a76 --- /dev/null +++ b/src/ipa/raspberrypi/controller/histogram.cpp @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * histogram.cpp - histogram calculations + */ +#include +#include + +#include "histogram.hpp" + +using namespace RPi; + +uint64_t Histogram::CumulativeFreq(double bin) const +{ + if (bin <= 0) + return 0; + else if (bin >= Bins()) + return Total(); + int b = (int)bin; + return cumulative_[b] + + (bin - b) * (cumulative_[b + 1] - cumulative_[b]); +} + +double Histogram::Quantile(double q, int first, int last) const +{ + if (first == -1) + first = 0; + if (last == -1) + last = cumulative_.size() - 2; + assert(first <= last); + uint64_t items = q * Total(); + while (first < last) // binary search to find the right bin + { + int middle = (first + last) / 2; + if (cumulative_[middle + 1] > items) + last = middle; // between first and middle + else + first = middle + 1; // after middle + } + assert(items >= cumulative_[first] && items <= cumulative_[last + 1]); + double frac = cumulative_[first + 1] == cumulative_[first] ? 0 + : (double)(items - cumulative_[first]) / + (cumulative_[first + 1] - cumulative_[first]); + return first + frac; +} + +double Histogram::InterQuantileMean(double q_lo, double q_hi) const +{ + assert(q_hi > q_lo); + double p_lo = Quantile(q_lo); + double p_hi = Quantile(q_hi, (int)p_lo); + double sum_bin_freq = 0, cumul_freq = 0; + for (double p_next = floor(p_lo) + 1.0; p_next <= ceil(p_hi); + p_lo = p_next, p_next += 1.0) { + int bin = floor(p_lo); + double freq = (cumulative_[bin + 1] - cumulative_[bin]) * + (std::min(p_next, p_hi) - p_lo); + sum_bin_freq += bin * freq; + cumul_freq += freq; + } + // add 0.5 to give an average for bin mid-points + return sum_bin_freq / cumul_freq + 0.5; +} diff --git a/src/ipa/raspberrypi/controller/histogram.hpp b/src/ipa/raspberrypi/controller/histogram.hpp new file mode 100644 index 000000000000..06fc3aa7b9e8 --- /dev/null +++ b/src/ipa/raspberrypi/controller/histogram.hpp @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * histogram.hpp - histogram calculation interface + */ +#pragma once + +#include +#include +#include + +// A simple histogram class, for use in particular to find "quantiles" and +// averages between "quantiles". 
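
To make the quantile interface below concrete, a small worked example; the counts are made up, but the results shown are what the class computes (AGC builds its Histogram from the ISP's green-channel histogram in the same way):

uint32_t counts[4] = { 0, 2, 4, 2 };            // illustrative 4-bin histogram
Histogram h(counts, 4);
// h.Bins() == 4, h.Total() == 8
double median = h.Quantile(0.5);                // 2.5: half the samples lie below the middle of bin 2
double mid_mean = h.InterQuantileMean(0.25, 0.75); // 2.5: average bin of the central half of the samples
double below_two = h.CumulativeFreq(2.0);       // 2: only bin 1's two samples lie below bin 2
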
+ +namespace RPi { + +class Histogram +{ +public: + template Histogram(T *histogram, int num) + { + assert(num); + cumulative_.reserve(num + 1); + cumulative_.push_back(0); + for (int i = 0; i < num; i++) + cumulative_.push_back(cumulative_.back() + + histogram[i]); + } + uint32_t Bins() const { return cumulative_.size() - 1; } + uint64_t Total() const { return cumulative_[cumulative_.size() - 1]; } + // Cumulative frequency up to a (fractional) point in a bin. + uint64_t CumulativeFreq(double bin) const; + // Return the (fractional) bin of the point q (0 <= q <= 1) through the + // histogram. Optionally provide limits to help. + double Quantile(double q, int first = -1, int last = -1) const; + // Return the average histogram bin value between the two quantiles. + double InterQuantileMean(double q_lo, double q_hi) const; + +private: + std::vector cumulative_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/logging.hpp b/src/ipa/raspberrypi/controller/logging.hpp new file mode 100644 index 000000000000..f0d306b6e7f4 --- /dev/null +++ b/src/ipa/raspberrypi/controller/logging.hpp @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019-2020, Raspberry Pi (Trading) Limited + * + * logging.hpp - logging macros + */ +#pragma once + +#include + +#ifndef RPI_LOGGING_ENABLE +#define RPI_LOGGING_ENABLE 0 +#endif + +#ifndef RPI_WARNING_ENABLE +#define RPI_WARNING_ENABLE 1 +#endif + +#define RPI_LOG(stuff) \ + do { \ + if (RPI_LOGGING_ENABLE) \ + std::cout << __FUNCTION__ << ": " << stuff << "\n"; \ + } while (0) + +#define RPI_WARN(stuff) \ + do { \ + if (RPI_WARNING_ENABLE) \ + std::cout << __FUNCTION__ << " ***WARNING*** " \ + << stuff << "\n"; \ + } while (0) diff --git a/src/ipa/raspberrypi/controller/lux_status.h b/src/ipa/raspberrypi/controller/lux_status.h new file mode 100644 index 000000000000..8ccfd933829b --- /dev/null +++ b/src/ipa/raspberrypi/controller/lux_status.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * lux_status.h - Lux control algorithm status + */ +#pragma once + +// The "lux" algorithm looks at the (AGC) histogram statistics of the frame and +// estimates the current lux level of the scene. It does this by a simple ratio +// calculation comparing to a reference image that was taken in known conditions +// with known statistics and a properly measured lux level. There is a slight +// problem with aperture, in that it may be variable without the system knowing +// or being aware of it. In this case an external application may set a +// "current_aperture" value if it wishes, which would be used in place of the +// (presumably meaningless) value in the image metadata. + +#ifdef __cplusplus +extern "C" { +#endif + +struct LuxStatus { + double lux; + double aperture; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/metadata.hpp b/src/ipa/raspberrypi/controller/metadata.hpp new file mode 100644 index 000000000000..1d7624a0911e --- /dev/null +++ b/src/ipa/raspberrypi/controller/metadata.hpp @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * metadata.hpp - general metadata class + */ +#pragma once + +// A simple class for carrying arbitrary metadata, for example about an image. 
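
Two access patterns in the Metadata class defined just below are worth spelling out: Get/Set copy values under the internal lock, while GetLocked/SetLocked give in-place access and rely on the caller holding the object's own lock. A minimal sketch using the DeviceStatus tag from this patch (the values are illustrative):

#include <mutex>

#include "device_status.h"
#include "metadata.hpp"

void metadata_example()
{
        RPi::Metadata metadata;

        // Copying access: internally locked, Get returns 0 on success.
        DeviceStatus device_status = {};
        device_status.shutter_speed = 10000.0; // microseconds
        device_status.analogue_gain = 2.0;
        metadata.Set("device.status", device_status);

        DeviceStatus copy;
        if (metadata.Get("device.status", copy) == 0) {
                /* copy now holds the values stored above; the stored and
                   requested types must match exactly or boost::any_cast throws */
        }

        // In-place access: take the metadata's own lock for the duration.
        {
                std::lock_guard<RPi::Metadata> lock(metadata);
                DeviceStatus *ds = metadata.GetLocked<DeviceStatus>("device.status");
                if (ds)
                        ds->analogue_gain *= 2.0; // modify without copying
        }
}
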
+ +#include +#include +#include +#include + +#include + +namespace RPi { + +class Metadata +{ +public: + template void Set(std::string const &tag, T const &value) + { + std::lock_guard lock(mutex_); + data_[tag] = value; + } + template int Get(std::string const &tag, T &value) const + { + std::lock_guard lock(mutex_); + auto it = data_.find(tag); + if (it == data_.end()) + return -1; + value = boost::any_cast(it->second); + return 0; + } + void Clear() + { + std::lock_guard lock(mutex_); + data_.clear(); + } + Metadata &operator=(Metadata const &other) + { + std::lock_guard lock(mutex_); + std::lock_guard other_lock(other.mutex_); + data_ = other.data_; + return *this; + } + template T *GetLocked(std::string const &tag) + { + // This allows in-place access to the Metadata contents, + // for which you should be holding the lock. + auto it = data_.find(tag); + if (it == data_.end()) + return nullptr; + return boost::any_cast(&it->second); + } + template + void SetLocked(std::string const &tag, T const &value) + { + // Use this only if you're holding the lock yourself. + data_[tag] = value; + } + // Note: use of (lowercase) lock and unlock means you can create scoped + // locks with the standard lock classes. + // e.g. std::lock_guard lock(metadata) + void lock() { mutex_.lock(); } + void unlock() { mutex_.unlock(); } + +private: + mutable std::mutex mutex_; + std::map data_; +}; + +typedef std::shared_ptr MetadataPtr; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/noise_status.h b/src/ipa/raspberrypi/controller/noise_status.h new file mode 100644 index 000000000000..8439a40213aa --- /dev/null +++ b/src/ipa/raspberrypi/controller/noise_status.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * noise_status.h - Noise control algorithm status + */ +#pragma once + +// The "noise" algorithm stores an estimate of the noise profile for this image. 
+ +#ifdef __cplusplus +extern "C" { +#endif + +struct NoiseStatus { + double noise_constant; + double noise_slope; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/pwl.cpp b/src/ipa/raspberrypi/controller/pwl.cpp new file mode 100644 index 000000000000..7e11d8f3c3db --- /dev/null +++ b/src/ipa/raspberrypi/controller/pwl.cpp @@ -0,0 +1,216 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * pwl.cpp - piecewise linear functions + */ + +#include +#include + +#include "pwl.hpp" + +using namespace RPi; + +void Pwl::Read(boost::property_tree::ptree const ¶ms) +{ + for (auto it = params.begin(); it != params.end(); it++) { + double x = it->second.get_value(); + assert(it == params.begin() || x > points_.back().x); + it++; + double y = it->second.get_value(); + points_.push_back(Point(x, y)); + } + assert(points_.size() >= 2); +} + +void Pwl::Append(double x, double y, const double eps) +{ + if (points_.empty() || points_.back().x + eps < x) + points_.push_back(Point(x, y)); +} + +void Pwl::Prepend(double x, double y, const double eps) +{ + if (points_.empty() || points_.front().x - eps > x) + points_.insert(points_.begin(), Point(x, y)); +} + +Pwl::Interval Pwl::Domain() const +{ + return Interval(points_[0].x, points_[points_.size() - 1].x); +} + +Pwl::Interval Pwl::Range() const +{ + double lo = points_[0].y, hi = lo; + for (auto &p : points_) + lo = std::min(lo, p.y), hi = std::max(hi, p.y); + return Interval(lo, hi); +} + +bool Pwl::Empty() const +{ + return points_.empty(); +} + +double Pwl::Eval(double x, int *span_ptr, bool update_span) const +{ + int span = findSpan(x, span_ptr && *span_ptr != -1 + ? *span_ptr + : points_.size() / 2 - 1); + if (span_ptr && update_span) + *span_ptr = span; + return points_[span].y + + (x - points_[span].x) * (points_[span + 1].y - points_[span].y) / + (points_[span + 1].x - points_[span].x); +} + +int Pwl::findSpan(double x, int span) const +{ + // Pwls are generally small, so linear search may well be faster than + // binary, though could review this if large PWls start turning up. 
+ int last_span = points_.size() - 2; + // some algorithms may call us with span pointing directly at the last + // control point + span = std::max(0, std::min(last_span, span)); + while (span < last_span && x >= points_[span + 1].x) + span++; + while (span && x < points_[span].x) + span--; + return span; +} + +Pwl::PerpType Pwl::Invert(Point const &xy, Point &perp, int &span, + const double eps) const +{ + assert(span >= -1); + bool prev_off_end = false; + for (span = span + 1; span < (int)points_.size() - 1; span++) { + Point span_vec = points_[span + 1] - points_[span]; + double t = ((xy - points_[span]) % span_vec) / span_vec.Len2(); + if (t < -eps) // off the start of this span + { + if (span == 0) { + perp = points_[span]; + return PerpType::Start; + } else if (prev_off_end) { + perp = points_[span]; + return PerpType::Vertex; + } + } else if (t > 1 + eps) // off the end of this span + { + if (span == (int)points_.size() - 2) { + perp = points_[span + 1]; + return PerpType::End; + } + prev_off_end = true; + } else // a true perpendicular + { + perp = points_[span] + span_vec * t; + return PerpType::Perpendicular; + } + } + return PerpType::None; +} + +Pwl Pwl::Compose(Pwl const &other, const double eps) const +{ + double this_x = points_[0].x, this_y = points_[0].y; + int this_span = 0, other_span = other.findSpan(this_y, 0); + Pwl result({ { this_x, other.Eval(this_y, &other_span, false) } }); + while (this_span != (int)points_.size() - 1) { + double dx = points_[this_span + 1].x - points_[this_span].x, + dy = points_[this_span + 1].y - points_[this_span].y; + if (abs(dy) > eps && + other_span + 1 < (int)other.points_.size() && + points_[this_span + 1].y >= + other.points_[other_span + 1].x + eps) { + // next control point in result will be where this + // function's y reaches the next span in other + this_x = points_[this_span].x + + (other.points_[other_span + 1].x - + points_[this_span].y) * dx / dy; + this_y = other.points_[++other_span].x; + } else if (abs(dy) > eps && other_span > 0 && + points_[this_span + 1].y <= + other.points_[other_span - 1].x - eps) { + // next control point in result will be where this + // function's y reaches the previous span in other + this_x = points_[this_span].x + + (other.points_[other_span + 1].x - + points_[this_span].y) * dx / dy; + this_y = other.points_[--other_span].x; + } else { + // we stay in the same span in other + this_span++; + this_x = points_[this_span].x, + this_y = points_[this_span].y; + } + result.Append(this_x, other.Eval(this_y, &other_span, false), + eps); + } + return result; +} + +void Pwl::Map(std::function f) const +{ + for (auto &pt : points_) + f(pt.x, pt.y); +} + +void Pwl::Map2(Pwl const &pwl0, Pwl const &pwl1, + std::function f) +{ + int span0 = 0, span1 = 0; + double x = std::min(pwl0.points_[0].x, pwl1.points_[0].x); + f(x, pwl0.Eval(x, &span0, false), pwl1.Eval(x, &span1, false)); + while (span0 < (int)pwl0.points_.size() - 1 || + span1 < (int)pwl1.points_.size() - 1) { + if (span0 == (int)pwl0.points_.size() - 1) + x = pwl1.points_[++span1].x; + else if (span1 == (int)pwl1.points_.size() - 1) + x = pwl0.points_[++span0].x; + else if (pwl0.points_[span0 + 1].x > pwl1.points_[span1 + 1].x) + x = pwl1.points_[++span1].x; + else + x = pwl0.points_[++span0].x; + f(x, pwl0.Eval(x, &span0, false), pwl1.Eval(x, &span1, false)); + } +} + +Pwl Pwl::Combine(Pwl const &pwl0, Pwl const &pwl1, + std::function f, + const double eps) +{ + Pwl result; + Map2(pwl0, pwl1, [&](double x, double y0, double y1) { + result.Append(x, 
f(x, y0, y1), eps); + }); + return result; +} + +void Pwl::MatchDomain(Interval const &domain, bool clip, const double eps) +{ + int span = 0; + Prepend(domain.start, Eval(clip ? points_[0].x : domain.start, &span), + eps); + span = points_.size() - 2; + Append(domain.end, Eval(clip ? points_.back().x : domain.end, &span), + eps); +} + +Pwl &Pwl::operator*=(double d) +{ + for (auto &pt : points_) + pt.y *= d; + return *this; +} + +void Pwl::Debug(FILE *fp) const +{ + fprintf(fp, "Pwl {\n"); + for (auto &p : points_) + fprintf(fp, "\t(%g, %g)\n", p.x, p.y); + fprintf(fp, "}\n"); +} diff --git a/src/ipa/raspberrypi/controller/pwl.hpp b/src/ipa/raspberrypi/controller/pwl.hpp new file mode 100644 index 000000000000..bd7c76683103 --- /dev/null +++ b/src/ipa/raspberrypi/controller/pwl.hpp @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * pwl.hpp - piecewise linear functions interface + */ +#pragma once + +#include +#include + +#include + +namespace RPi { + +class Pwl +{ +public: + struct Interval { + Interval(double _start, double _end) : start(_start), end(_end) + { + } + double start, end; + bool Contains(double value) + { + return value >= start && value <= end; + } + double Clip(double value) + { + return value < start ? start + : (value > end ? end : value); + } + double Len() const { return end - start; } + }; + struct Point { + Point() : x(0), y(0) {} + Point(double _x, double _y) : x(_x), y(_y) {} + double x, y; + Point operator-(Point const &p) const + { + return Point(x - p.x, y - p.y); + } + Point operator+(Point const &p) const + { + return Point(x + p.x, y + p.y); + } + double operator%(Point const &p) const + { + return x * p.x + y * p.y; + } + Point operator*(double f) const { return Point(x * f, y * f); } + Point operator/(double f) const { return Point(x / f, y / f); } + double Len2() const { return x * x + y * y; } + double Len() const { return sqrt(Len2()); } + }; + Pwl() {} + Pwl(std::vector const &points) : points_(points) {} + void Read(boost::property_tree::ptree const ¶ms); + void Append(double x, double y, const double eps = 1e-6); + void Prepend(double x, double y, const double eps = 1e-6); + Interval Domain() const; + Interval Range() const; + bool Empty() const; + // Evaluate Pwl, optionally supplying an initial guess for the + // "span". The "span" may be optionally be updated. If you want to know + // the "span" value but don't have an initial guess you can set it to + // -1. + double Eval(double x, int *span_ptr = nullptr, + bool update_span = true) const; + // Find perpendicular closest to xy, starting from span+1 so you can + // call it repeatedly to check for multiple closest points (set span to + // -1 on the first call). Also returns "pseudo" perpendiculars; see + // PerpType enum. + enum class PerpType { + None, // no perpendicular found + Start, // start of Pwl is closest point + End, // end of Pwl is closest point + Vertex, // vertex of Pwl is closest point + Perpendicular // true perpendicular found + }; + PerpType Invert(Point const &xy, Point &perp, int &span, + const double eps = 1e-6) const; + // Compose two Pwls together, doing "this" first and "other" after. + Pwl Compose(Pwl const &other, const double eps = 1e-6) const; + // Apply function to (x,y) values at every control point. + void Map(std::function f) const; + // Apply function to (x, y0, y1) values wherever either Pwl has a + // control point. 
+ static void Map2(Pwl const &pwl0, Pwl const &pwl1, + std::function f); + // Combine two Pwls, meaning we create a new Pwl where the y values are + // given by running f wherever either has a knot. + static Pwl + Combine(Pwl const &pwl0, Pwl const &pwl1, + std::function f, + const double eps = 1e-6); + // Make "this" match (at least) the given domain. Any extension my be + // clipped or linear. + void MatchDomain(Interval const &domain, bool clip = true, + const double eps = 1e-6); + Pwl &operator*=(double d); + void Debug(FILE *fp = stdout) const; + +private: + int findSpan(double x, int span) const; + std::vector points_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/agc.cpp b/src/ipa/raspberrypi/controller/rpi/agc.cpp new file mode 100644 index 000000000000..a4742872e3f9 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/agc.cpp @@ -0,0 +1,642 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * agc.cpp - AGC/AEC control algorithm + */ + +#include + +#include "linux/bcm2835-isp.h" + +#include "../awb_status.h" +#include "../device_status.h" +#include "../histogram.hpp" +#include "../logging.hpp" +#include "../lux_status.h" +#include "../metadata.hpp" + +#include "agc.hpp" + +using namespace RPi; + +#define NAME "rpi.agc" + +#define PIPELINE_BITS 13 // seems to be a 13-bit pipeline + +void AgcMeteringMode::Read(boost::property_tree::ptree const ¶ms) +{ + int num = 0; + for (auto &p : params.get_child("weights")) { + if (num == AGC_STATS_SIZE) + throw std::runtime_error("AgcConfig: too many weights"); + weights[num++] = p.second.get_value(); + } + if (num != AGC_STATS_SIZE) + throw std::runtime_error("AgcConfig: insufficient weights"); +} + +static std::string +read_metering_modes(std::map &metering_modes, + boost::property_tree::ptree const ¶ms) +{ + std::string first; + for (auto &p : params) { + AgcMeteringMode metering_mode; + metering_mode.Read(p.second); + metering_modes[p.first] = std::move(metering_mode); + if (first.empty()) + first = p.first; + } + return first; +} + +static int read_double_list(std::vector &list, + boost::property_tree::ptree const ¶ms) +{ + for (auto &p : params) + list.push_back(p.second.get_value()); + return list.size(); +} + +void AgcExposureMode::Read(boost::property_tree::ptree const ¶ms) +{ + int num_shutters = + read_double_list(shutter, params.get_child("shutter")); + int num_ags = read_double_list(gain, params.get_child("gain")); + if (num_shutters < 2 || num_ags < 2) + throw std::runtime_error( + "AgcConfig: must have at least two entries in exposure profile"); + if (num_shutters != num_ags) + throw std::runtime_error( + "AgcConfig: expect same number of exposure and gain entries in exposure profile"); +} + +static std::string +read_exposure_modes(std::map &exposure_modes, + boost::property_tree::ptree const ¶ms) +{ + std::string first; + for (auto &p : params) { + AgcExposureMode exposure_mode; + exposure_mode.Read(p.second); + exposure_modes[p.first] = std::move(exposure_mode); + if (first.empty()) + first = p.first; + } + return first; +} + +void AgcConstraint::Read(boost::property_tree::ptree const ¶ms) +{ + std::string bound_string = params.get("bound", ""); + transform(bound_string.begin(), bound_string.end(), + bound_string.begin(), ::toupper); + if (bound_string != "UPPER" && bound_string != "LOWER") + throw std::runtime_error( + "AGC constraint type should be UPPER or LOWER"); + bound = bound_string == "UPPER" ? 
Bound::UPPER : Bound::LOWER; + q_lo = params.get("q_lo"); + q_hi = params.get("q_hi"); + Y_target.Read(params.get_child("y_target")); +} + +static AgcConstraintMode +read_constraint_mode(boost::property_tree::ptree const ¶ms) +{ + AgcConstraintMode mode; + for (auto &p : params) { + AgcConstraint constraint; + constraint.Read(p.second); + mode.push_back(std::move(constraint)); + } + return mode; +} + +static std::string read_constraint_modes( + std::map &constraint_modes, + boost::property_tree::ptree const ¶ms) +{ + std::string first; + for (auto &p : params) { + constraint_modes[p.first] = read_constraint_mode(p.second); + if (first.empty()) + first = p.first; + } + return first; +} + +void AgcConfig::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG("AgcConfig"); + default_metering_mode = read_metering_modes( + metering_modes, params.get_child("metering_modes")); + default_exposure_mode = read_exposure_modes( + exposure_modes, params.get_child("exposure_modes")); + default_constraint_mode = read_constraint_modes( + constraint_modes, params.get_child("constraint_modes")); + Y_target.Read(params.get_child("y_target")); + speed = params.get("speed", 0.2); + startup_frames = params.get("startup_frames", 10); + fast_reduce_threshold = + params.get("fast_reduce_threshold", 0.4); + base_ev = params.get("base_ev", 1.0); +} + +Agc::Agc(Controller *controller) + : AgcAlgorithm(controller), metering_mode_(nullptr), + exposure_mode_(nullptr), constraint_mode_(nullptr), + frame_count_(0), lock_count_(0) +{ + ev_ = status_.ev = 1.0; + flicker_period_ = status_.flicker_period = 0.0; + fixed_shutter_ = status_.fixed_shutter = 0; + fixed_analogue_gain_ = status_.fixed_analogue_gain = 0.0; + // set to zero initially, so we can tell it's not been calculated + status_.total_exposure_value = 0.0; + status_.target_exposure_value = 0.0; + status_.locked = false; + output_status_ = status_; +} + +char const *Agc::Name() const +{ + return NAME; +} + +void Agc::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG("Agc"); + config_.Read(params); + // Set the config's defaults (which are the first ones it read) as our + // current modes, until someone changes them. 
(they're all known to + // exist at this point) + metering_mode_name_ = config_.default_metering_mode; + metering_mode_ = &config_.metering_modes[metering_mode_name_]; + exposure_mode_name_ = config_.default_exposure_mode; + exposure_mode_ = &config_.exposure_modes[exposure_mode_name_]; + constraint_mode_name_ = config_.default_constraint_mode; + constraint_mode_ = &config_.constraint_modes[constraint_mode_name_]; +} + +void Agc::SetEv(double ev) +{ + std::unique_lock lock(settings_mutex_); + ev_ = ev; +} + +void Agc::SetFlickerPeriod(double flicker_period) +{ + std::unique_lock lock(settings_mutex_); + flicker_period_ = flicker_period; +} + +void Agc::SetFixedShutter(double fixed_shutter) +{ + std::unique_lock lock(settings_mutex_); + fixed_shutter_ = fixed_shutter; +} + +void Agc::SetFixedAnalogueGain(double fixed_analogue_gain) +{ + std::unique_lock lock(settings_mutex_); + fixed_analogue_gain_ = fixed_analogue_gain; +} + +void Agc::SetMeteringMode(std::string const &metering_mode_name) +{ + std::unique_lock lock(settings_mutex_); + metering_mode_name_ = metering_mode_name; +} + +void Agc::SetExposureMode(std::string const &exposure_mode_name) +{ + std::unique_lock lock(settings_mutex_); + exposure_mode_name_ = exposure_mode_name; +} + +void Agc::SetConstraintMode(std::string const &constraint_mode_name) +{ + std::unique_lock lock(settings_mutex_); + constraint_mode_name_ = constraint_mode_name; +} + +void Agc::Prepare(Metadata *image_metadata) +{ + AgcStatus status; + { + std::unique_lock lock(output_mutex_); + status = output_status_; + } + int lock_count = lock_count_; + lock_count_ = 0; + status.digital_gain = 1.0; + if (status_.total_exposure_value) { + // Process has run, so we have meaningful values. + DeviceStatus device_status; + if (image_metadata->Get("device.status", device_status) == 0) { + double actual_exposure = device_status.shutter_speed * + device_status.analogue_gain; + if (actual_exposure) { + status.digital_gain = + status_.total_exposure_value / + actual_exposure; + RPI_LOG("Want total exposure " << status_.total_exposure_value); + // Never ask for a gain < 1.0, and also impose + // some upper limit. Make it customisable? + status.digital_gain = std::max( + 1.0, + std::min(status.digital_gain, 4.0)); + RPI_LOG("Actual exposure " << actual_exposure); + RPI_LOG("Use digital_gain " << status.digital_gain); + RPI_LOG("Effective exposure " << actual_exposure * status.digital_gain); + // Decide whether AEC/AGC has converged. + // Insist AGC is steady for MAX_LOCK_COUNT + // frames before we say we are "locked". + // (The hard-coded constants may need to + // become customisable.) + if (status.target_exposure_value) { +#define MAX_LOCK_COUNT 3 + double err = 0.10 * status.target_exposure_value + 200; + if (actual_exposure < + status.target_exposure_value + err + && actual_exposure > + status.target_exposure_value - err) + lock_count_ = + std::min(lock_count + 1, + MAX_LOCK_COUNT); + else if (actual_exposure < + status.target_exposure_value + + 1.5 * err && + actual_exposure > + status.target_exposure_value + - 1.5 * err) + lock_count_ = lock_count; + RPI_LOG("Lock count: " << lock_count_); + } + } + } else + RPI_LOG(Name() << ": no device metadata"); + status.locked = lock_count_ >= MAX_LOCK_COUNT; + //printf("%s\n", status.locked ? 
"+++++++++" : "-"); + image_metadata->Set("agc.status", status); + } +} + +void Agc::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + frame_count_++; + // First a little bit of housekeeping, fetching up-to-date settings and + // configuration, that kind of thing. + housekeepConfig(); + // Get the current exposure values for the frame that's just arrived. + fetchCurrentExposure(image_metadata); + // Compute the total gain we require relative to the current exposure. + double gain, target_Y; + computeGain(stats.get(), image_metadata, gain, target_Y); + // Now compute the target (final) exposure which we think we want. + computeTargetExposure(gain); + // Some of the exposure has to be applied as digital gain, so work out + // what that is. This function also tells us whether it's decided to + // "desaturate" the image more quickly. + bool desaturate = applyDigitalGain(image_metadata, gain, target_Y); + // The results have to be filtered so as not to change too rapidly. + filterExposure(desaturate); + // The last thing is to divvy up the exposure value into a shutter time + // and analogue_gain, according to the current exposure mode. + divvyupExposure(); + // Finally advertise what we've done. + writeAndFinish(image_metadata, desaturate); +} + +static void copy_string(std::string const &s, char *d, size_t size) +{ + size_t length = s.copy(d, size - 1); + d[length] = '\0'; +} + +void Agc::housekeepConfig() +{ + // First fetch all the up-to-date settings, so no one else has to do it. + std::string new_exposure_mode_name, new_constraint_mode_name, + new_metering_mode_name; + { + std::unique_lock lock(settings_mutex_); + new_metering_mode_name = metering_mode_name_; + new_exposure_mode_name = exposure_mode_name_; + new_constraint_mode_name = constraint_mode_name_; + status_.ev = ev_; + status_.fixed_shutter = fixed_shutter_; + status_.fixed_analogue_gain = fixed_analogue_gain_; + status_.flicker_period = flicker_period_; + } + RPI_LOG("ev " << status_.ev << " fixed_shutter " + << status_.fixed_shutter << " fixed_analogue_gain " + << status_.fixed_analogue_gain); + // Make sure the "mode" pointers point to the up-to-date things, if + // they've changed. 
+ if (strcmp(new_metering_mode_name.c_str(), status_.metering_mode)) { + auto it = config_.metering_modes.find(new_metering_mode_name); + if (it == config_.metering_modes.end()) + throw std::runtime_error("Agc: no metering mode " + + new_metering_mode_name); + metering_mode_ = &it->second; + copy_string(new_metering_mode_name, status_.metering_mode, + sizeof(status_.metering_mode)); + } + if (strcmp(new_exposure_mode_name.c_str(), status_.exposure_mode)) { + auto it = config_.exposure_modes.find(new_exposure_mode_name); + if (it == config_.exposure_modes.end()) + throw std::runtime_error("Agc: no exposure profile " + + new_exposure_mode_name); + exposure_mode_ = &it->second; + copy_string(new_exposure_mode_name, status_.exposure_mode, + sizeof(status_.exposure_mode)); + } + if (strcmp(new_constraint_mode_name.c_str(), status_.constraint_mode)) { + auto it = + config_.constraint_modes.find(new_constraint_mode_name); + if (it == config_.constraint_modes.end()) + throw std::runtime_error("Agc: no constraint list " + + new_constraint_mode_name); + constraint_mode_ = &it->second; + copy_string(new_constraint_mode_name, status_.constraint_mode, + sizeof(status_.constraint_mode)); + } + RPI_LOG("exposure_mode " + << new_exposure_mode_name << " constraint_mode " + << new_constraint_mode_name << " metering_mode " + << new_metering_mode_name); +} + +void Agc::fetchCurrentExposure(Metadata *image_metadata) +{ + std::unique_lock lock(*image_metadata); + DeviceStatus *device_status = + image_metadata->GetLocked("device.status"); + if (!device_status) + throw std::runtime_error("Agc: no device metadata"); + current_.shutter = device_status->shutter_speed; + current_.analogue_gain = device_status->analogue_gain; + AgcStatus *agc_status = + image_metadata->GetLocked("agc.status"); + current_.total_exposure = agc_status ? agc_status->total_exposure_value : 0; + current_.total_exposure_no_dg = current_.shutter * current_.analogue_gain; +} + +static double compute_initial_Y(bcm2835_isp_stats *stats, Metadata *image_metadata, + double weights[]) +{ + bcm2835_isp_stats_region *regions = stats->agc_stats; + struct AwbStatus awb; + awb.gain_r = awb.gain_g = awb.gain_b = 1.0; // in case no metadata + if (image_metadata->Get("awb.status", awb) != 0) + RPI_WARN("Agc: no AWB status found"); + double Y_sum = 0, weight_sum = 0; + for (int i = 0; i < AGC_STATS_SIZE; i++) { + if (regions[i].counted == 0) + continue; + weight_sum += weights[i]; + double Y = regions[i].r_sum * awb.gain_r * .299 + + regions[i].g_sum * awb.gain_g * .587 + + regions[i].b_sum * awb.gain_b * .114; + Y /= regions[i].counted; + Y_sum += Y * weights[i]; + } + return Y_sum / weight_sum / (1 << PIPELINE_BITS); +} + +// We handle extra gain through EV by adjusting our Y targets. However, you +// simply can't monitor histograms once they get very close to (or beyond!) +// saturation, so we clamp the Y targets to this value. It does mean that EV +// increases don't necessarily do quite what you might expect in certain +// (contrived) cases. 
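
To put numbers on this: with a nominal metered target of, say, 0.16, an ev_gain of 2 raises the request to 0.32, but an ev_gain of 8 would ask for 1.28, which is meaningless because the measured Y can never exceed 1.0; the limit defined just below therefore holds the target at 0.9, and EV increases beyond that point stop having any further effect on the requested exposure.
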
+ +#define EV_GAIN_Y_TARGET_LIMIT 0.9 + +static double constraint_compute_gain(AgcConstraint &c, Histogram &h, + double lux, double ev_gain, + double &target_Y) +{ + target_Y = c.Y_target.Eval(c.Y_target.Domain().Clip(lux)); + target_Y = std::min(EV_GAIN_Y_TARGET_LIMIT, target_Y * ev_gain); + double iqm = h.InterQuantileMean(c.q_lo, c.q_hi); + return (target_Y * NUM_HISTOGRAM_BINS) / iqm; +} + +void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *image_metadata, + double &gain, double &target_Y) +{ + struct LuxStatus lux = {}; + lux.lux = 400; // default lux level to 400 in case no metadata found + if (image_metadata->Get("lux.status", lux) != 0) + RPI_WARN("Agc: no lux level found"); + Histogram h(statistics->hist[0].g_hist, NUM_HISTOGRAM_BINS); + double ev_gain = status_.ev * config_.base_ev; + // The initial gain and target_Y come from some of the regions. After + // that we consider the histogram constraints. + target_Y = + config_.Y_target.Eval(config_.Y_target.Domain().Clip(lux.lux)); + target_Y = std::min(EV_GAIN_Y_TARGET_LIMIT, target_Y * ev_gain); + double initial_Y = compute_initial_Y(statistics, image_metadata, + metering_mode_->weights); + gain = std::min(10.0, target_Y / (initial_Y + .001)); + RPI_LOG("Initially Y " << initial_Y << " target " << target_Y + << " gives gain " << gain); + for (auto &c : *constraint_mode_) { + double new_target_Y; + double new_gain = + constraint_compute_gain(c, h, lux.lux, ev_gain, + new_target_Y); + RPI_LOG("Constraint has target_Y " + << new_target_Y << " giving gain " << new_gain); + if (c.bound == AgcConstraint::Bound::LOWER && + new_gain > gain) { + RPI_LOG("Lower bound constraint adopted"); + gain = new_gain, target_Y = new_target_Y; + } else if (c.bound == AgcConstraint::Bound::UPPER && + new_gain < gain) { + RPI_LOG("Upper bound constraint adopted"); + gain = new_gain, target_Y = new_target_Y; + } + } + RPI_LOG("Final gain " << gain << " (target_Y " << target_Y << " ev " + << status_.ev << " base_ev " << config_.base_ev + << ")"); +} + +void Agc::computeTargetExposure(double gain) +{ + // The statistics reflect the image without digital gain, so the final + // total exposure we're aiming for is: + target_.total_exposure = current_.total_exposure_no_dg * gain; + // The final target exposure is also limited to what the exposure + // mode allows. + double max_total_exposure = + (status_.fixed_shutter != 0.0 + ? status_.fixed_shutter + : exposure_mode_->shutter.back()) * + (status_.fixed_analogue_gain != 0.0 + ? status_.fixed_analogue_gain + : exposure_mode_->gain.back()); + target_.total_exposure = std::min(target_.total_exposure, + max_total_exposure); + RPI_LOG("Target total_exposure " << target_.total_exposure); +} + +bool Agc::applyDigitalGain(Metadata *image_metadata, double gain, + double target_Y) +{ + double dg = 1.0; + // I think this pipeline subtracts black level and rescales before we + // get the stats, so no need to worry about it. + struct AwbStatus awb; + if (image_metadata->Get("awb.status", awb) == 0) { + double min_gain = std::min(awb.gain_r, + std::min(awb.gain_g, awb.gain_b)); + dg *= std::max(1.0, 1.0 / min_gain); + } else + RPI_WARN("Agc: no AWB status found"); + RPI_LOG("after AWB, target dg " << dg << " gain " << gain + << " target_Y " << target_Y); + // Finally, if we're trying to reduce exposure but the target_Y is + // "close" to 1.0, then the gain computed for that constraint will be + // only slightly less than one, because the measured Y can never be + // larger than 1.0. 
When this happens, demand a large digital gain so + // that the exposure can be reduced, de-saturating the image much more + // quickly (and we then approach the correct value more quickly from + // below). + bool desaturate = target_Y > config_.fast_reduce_threshold && + gain < sqrt(target_Y); + if (desaturate) + dg /= config_.fast_reduce_threshold; + RPI_LOG("Digital gain " << dg << " desaturate? " << desaturate); + target_.total_exposure_no_dg = target_.total_exposure / dg; + RPI_LOG("Target total_exposure_no_dg " << target_.total_exposure_no_dg); + return desaturate; +} + +void Agc::filterExposure(bool desaturate) +{ + double speed = frame_count_ <= config_.startup_frames ? 1.0 : config_.speed; + if (filtered_.total_exposure == 0.0) { + filtered_.total_exposure = target_.total_exposure; + filtered_.total_exposure_no_dg = target_.total_exposure_no_dg; + } else { + // If close to the result go faster, to save making so many + // micro-adjustments on the way. (Make this customisable?) + if (filtered_.total_exposure < 1.2 * target_.total_exposure && + filtered_.total_exposure > 0.8 * target_.total_exposure) + speed = sqrt(speed); + filtered_.total_exposure = speed * target_.total_exposure + + filtered_.total_exposure * (1.0 - speed); + // When desaturing, take a big jump down in exposure_no_dg, + // which we'll hide with digital gain. + if (desaturate) + filtered_.total_exposure_no_dg = + target_.total_exposure_no_dg; + else + filtered_.total_exposure_no_dg = + speed * target_.total_exposure_no_dg + + filtered_.total_exposure_no_dg * (1.0 - speed); + } + // We can't let the no_dg exposure deviate too far below the + // total exposure, as there might not be enough digital gain available + // in the ISP to hide it (which will cause nasty oscillation). + if (filtered_.total_exposure_no_dg < + filtered_.total_exposure * config_.fast_reduce_threshold) + filtered_.total_exposure_no_dg = filtered_.total_exposure * + config_.fast_reduce_threshold; + RPI_LOG("After filtering, total_exposure " << filtered_.total_exposure << + " no dg " << filtered_.total_exposure_no_dg); +} + +void Agc::divvyupExposure() +{ + // Sending the fixed shutter/gain cases through the same code may seem + // unnecessary, but it will make more sense when extend this to cover + // variable aperture. + double exposure_value = filtered_.total_exposure_no_dg; + double shutter_time, analogue_gain; + shutter_time = status_.fixed_shutter != 0.0 + ? status_.fixed_shutter + : exposure_mode_->shutter[0]; + analogue_gain = status_.fixed_analogue_gain != 0.0 + ? status_.fixed_analogue_gain + : exposure_mode_->gain[0]; + if (shutter_time * analogue_gain < exposure_value) { + for (unsigned int stage = 1; + stage < exposure_mode_->gain.size(); stage++) { + if (status_.fixed_shutter == 0.0) { + if (exposure_mode_->shutter[stage] * + analogue_gain >= + exposure_value) { + shutter_time = + exposure_value / analogue_gain; + break; + } + shutter_time = exposure_mode_->shutter[stage]; + } + if (status_.fixed_analogue_gain == 0.0) { + if (exposure_mode_->gain[stage] * + shutter_time >= + exposure_value) { + analogue_gain = + exposure_value / shutter_time; + break; + } + analogue_gain = exposure_mode_->gain[stage]; + } + } + } + RPI_LOG("Divided up shutter and gain are " << shutter_time << " and " + << analogue_gain); + // Finally adjust shutter time for flicker avoidance (require both + // shutter and gain not to be fixed). 
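
To make the flicker adjustment below concrete: with an illustrative 10000 us flicker period (50 Hz mains lighting flickers at 100 Hz) and a divided-up shutter of 23456 us at analogue gain 2.0, the shutter is rounded down to two whole flicker periods (20000 us) and the gain scaled up by 23456 / 20000 to about 2.35, leaving the total exposure unchanged; the gain is then still capped at the exposure mode's largest value, which may push some of the exposure into digital gain.
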
+ if (status_.fixed_shutter == 0.0 && + status_.fixed_analogue_gain == 0.0 && + status_.flicker_period != 0.0) { + int flicker_periods = shutter_time / status_.flicker_period; + if (flicker_periods > 0) { + double new_shutter_time = flicker_periods * status_.flicker_period; + analogue_gain *= shutter_time / new_shutter_time; + // We should still not allow the ag to go over the + // largest value in the exposure mode. Note that this + // may force more of the total exposure into the digital + // gain as a side-effect. + analogue_gain = std::min(analogue_gain, + exposure_mode_->gain.back()); + shutter_time = new_shutter_time; + } + RPI_LOG("After flicker avoidance, shutter " + << shutter_time << " gain " << analogue_gain); + } + filtered_.shutter = shutter_time; + filtered_.analogue_gain = analogue_gain; +} + +void Agc::writeAndFinish(Metadata *image_metadata, bool desaturate) +{ + status_.total_exposure_value = filtered_.total_exposure; + status_.target_exposure_value = desaturate ? 0 : target_.total_exposure_no_dg; + status_.shutter_time = filtered_.shutter; + status_.analogue_gain = filtered_.analogue_gain; + { + std::unique_lock lock(output_mutex_); + output_status_ = status_; + } + // Write to metadata as well, in case anyone wants to update the camera + // immediately. + image_metadata->Set("agc.status", status_); + RPI_LOG("Output written, total exposure requested is " + << filtered_.total_exposure); + RPI_LOG("Camera exposure update: shutter time " << filtered_.shutter << + " analogue gain " << filtered_.analogue_gain); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Agc(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/agc.hpp b/src/ipa/raspberrypi/controller/rpi/agc.hpp new file mode 100644 index 000000000000..dbcefba65d31 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/agc.hpp @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * agc.hpp - AGC/AEC control algorithm + */ +#pragma once + +#include +#include + +#include "../agc_algorithm.hpp" +#include "../agc_status.h" +#include "../pwl.hpp" + +// This is our implementation of AGC. + +// This is the number actually set up by the firmware, not the maximum possible +// number (which is 16). 
+ +#define AGC_STATS_SIZE 15 + +namespace RPi { + +struct AgcMeteringMode { + double weights[AGC_STATS_SIZE]; + void Read(boost::property_tree::ptree const ¶ms); +}; + +struct AgcExposureMode { + std::vector shutter; + std::vector gain; + void Read(boost::property_tree::ptree const ¶ms); +}; + +struct AgcConstraint { + enum class Bound { LOWER = 0, UPPER = 1 }; + Bound bound; + double q_lo; + double q_hi; + Pwl Y_target; + void Read(boost::property_tree::ptree const ¶ms); +}; + +typedef std::vector AgcConstraintMode; + +struct AgcConfig { + void Read(boost::property_tree::ptree const ¶ms); + std::map metering_modes; + std::map exposure_modes; + std::map constraint_modes; + Pwl Y_target; + double speed; + uint16_t startup_frames; + double max_change; + double min_change; + double fast_reduce_threshold; + double speed_up_threshold; + std::string default_metering_mode; + std::string default_exposure_mode; + std::string default_constraint_mode; + double base_ev; +}; + +class Agc : public AgcAlgorithm +{ +public: + Agc(Controller *controller); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void SetEv(double ev) override; + void SetFlickerPeriod(double flicker_period) override; + void SetFixedShutter(double fixed_shutter) override; // microseconds + void SetFixedAnalogueGain(double fixed_analogue_gain) override; + void SetMeteringMode(std::string const &metering_mode_name) override; + void SetExposureMode(std::string const &exposure_mode_name) override; + void SetConstraintMode(std::string const &contraint_mode_name) override; + void Prepare(Metadata *image_metadata) override; + void Process(StatisticsPtr &stats, Metadata *image_metadata) override; + +private: + AgcConfig config_; + void housekeepConfig(); + void fetchCurrentExposure(Metadata *image_metadata); + void computeGain(bcm2835_isp_stats *statistics, Metadata *image_metadata, + double &gain, double &target_Y); + void computeTargetExposure(double gain); + bool applyDigitalGain(Metadata *image_metadata, double gain, + double target_Y); + void filterExposure(bool desaturate); + void divvyupExposure(); + void writeAndFinish(Metadata *image_metadata, bool desaturate); + AgcMeteringMode *metering_mode_; + AgcExposureMode *exposure_mode_; + AgcConstraintMode *constraint_mode_; + uint64_t frame_count_; + struct ExposureValues { + ExposureValues() : shutter(0), analogue_gain(0), + total_exposure(0), total_exposure_no_dg(0) {} + double shutter; + double analogue_gain; + double total_exposure; + double total_exposure_no_dg; // without digital gain + }; + ExposureValues current_; // values for the current frame + ExposureValues target_; // calculate the values we want here + ExposureValues filtered_; // these values are filtered towards target + AgcStatus status_; // to "latch" settings so they can't change + AgcStatus output_status_; // the status we will write out + std::mutex output_mutex_; + int lock_count_; + // Below here the "settings" that applications can change. 
+ std::mutex settings_mutex_; + std::string metering_mode_name_; + std::string exposure_mode_name_; + std::string constraint_mode_name_; + double ev_; + double flicker_period_; + double fixed_shutter_; + double fixed_analogue_gain_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/alsc.cpp b/src/ipa/raspberrypi/controller/rpi/alsc.cpp new file mode 100644 index 000000000000..821a0ca34a2c --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/alsc.cpp @@ -0,0 +1,705 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * alsc.cpp - ALSC (auto lens shading correction) control algorithm + */ +#include + +#include "../awb_status.h" +#include "alsc.hpp" + +// Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm. + +using namespace RPi; + +#define NAME "rpi.alsc" + +static const int X = ALSC_CELLS_X; +static const int Y = ALSC_CELLS_Y; +static const int XY = X * Y; +static const double INSUFFICIENT_DATA = -1.0; + +Alsc::Alsc(Controller *controller) + : Algorithm(controller) +{ + async_abort_ = async_start_ = async_started_ = async_finished_ = false; + async_thread_ = std::thread(std::bind(&Alsc::asyncFunc, this)); +} + +Alsc::~Alsc() +{ + { + std::lock_guard lock(mutex_); + async_abort_ = true; + async_signal_.notify_one(); + } + async_thread_.join(); +} + +char const *Alsc::Name() const +{ + return NAME; +} + +static void generate_lut(double *lut, boost::property_tree::ptree const ¶ms) +{ + double cstrength = params.get("corner_strength", 2.0); + if (cstrength <= 1.0) + throw std::runtime_error("Alsc: corner_strength must be > 1.0"); + double asymmetry = params.get("asymmetry", 1.0); + if (asymmetry < 0) + throw std::runtime_error("Alsc: asymmetry must be >= 0"); + double f1 = cstrength - 1, f2 = 1 + sqrt(cstrength); + double R2 = X * Y / 4 * (1 + asymmetry * asymmetry); + int num = 0; + for (int y = 0; y < Y; y++) { + for (int x = 0; x < X; x++) { + double dy = y - Y / 2 + 0.5, + dx = (x - X / 2 + 0.5) * asymmetry; + double r2 = (dx * dx + dy * dy) / R2; + lut[num++] = + (f1 * r2 + f2) * (f1 * r2 + f2) / + (f2 * f2); // this reproduces the cos^4 rule + } + } +} + +static void read_lut(double *lut, boost::property_tree::ptree const ¶ms) +{ + int num = 0; + const int max_num = XY; + for (auto &p : params) { + if (num == max_num) + throw std::runtime_error( + "Alsc: too many entries in LSC table"); + lut[num++] = p.second.get_value(); + } + if (num < max_num) + throw std::runtime_error("Alsc: too few entries in LSC table"); +} + +static void read_calibrations(std::vector &calibrations, + boost::property_tree::ptree const ¶ms, + std::string const &name) +{ + if (params.get_child_optional(name)) { + double last_ct = 0; + for (auto &p : params.get_child(name)) { + double ct = p.second.get("ct"); + if (ct <= last_ct) + throw std::runtime_error( + "Alsc: entries in " + name + + " must be in increasing ct order"); + AlscCalibration calibration; + calibration.ct = last_ct = ct; + boost::property_tree::ptree const &table = + p.second.get_child("table"); + int num = 0; + for (auto it = table.begin(); it != table.end(); it++) { + if (num == XY) + throw std::runtime_error( + "Alsc: too many values for ct " + + std::to_string(ct) + " in " + + name); + calibration.table[num++] = + it->second.get_value(); + } + if (num != XY) + throw std::runtime_error( + "Alsc: too few values for ct " + + std::to_string(ct) + " in " + name); + calibrations.push_back(calibration); + RPI_LOG("Read " << name << " calibration for ct " + << 
ct); + } + } +} + +void Alsc::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG("Alsc"); + config_.frame_period = params.get("frame_period", 12); + config_.startup_frames = params.get("startup_frames", 10); + config_.speed = params.get("speed", 0.05); + double sigma = params.get("sigma", 0.01); + config_.sigma_Cr = params.get("sigma_Cr", sigma); + config_.sigma_Cb = params.get("sigma_Cb", sigma); + config_.min_count = params.get("min_count", 10.0); + config_.min_G = params.get("min_G", 50); + config_.omega = params.get("omega", 1.3); + config_.n_iter = params.get("n_iter", X + Y); + config_.luminance_strength = + params.get("luminance_strength", 1.0); + for (int i = 0; i < XY; i++) + config_.luminance_lut[i] = 1.0; + if (params.get_child_optional("corner_strength")) + generate_lut(config_.luminance_lut, params); + else if (params.get_child_optional("luminance_lut")) + read_lut(config_.luminance_lut, + params.get_child("luminance_lut")); + else + RPI_WARN("Alsc: no luminance table - assume unity everywhere"); + read_calibrations(config_.calibrations_Cr, params, "calibrations_Cr"); + read_calibrations(config_.calibrations_Cb, params, "calibrations_Cb"); + config_.default_ct = params.get("default_ct", 4500.0); + config_.threshold = params.get("threshold", 1e-3); +} + +static void get_cal_table(double ct, + std::vector const &calibrations, + double cal_table[XY]); +static void resample_cal_table(double const cal_table_in[XY], + CameraMode const &camera_mode, + double cal_table_out[XY]); +static void compensate_lambdas_for_cal(double const cal_table[XY], + double const old_lambdas[XY], + double new_lambdas[XY]); +static void add_luminance_to_tables(double results[3][Y][X], + double const lambda_r[XY], double lambda_g, + double const lambda_b[XY], + double const luminance_lut[XY], + double luminance_strength); + +void Alsc::Initialise() +{ + RPI_LOG("Alsc"); + frame_count2_ = frame_count_ = frame_phase_ = 0; + first_time_ = true; + // Initialise the lambdas. Each call to Process then restarts from the + // previous results. Also initialise the previous frame tables to the + // same harmless values. + for (int i = 0; i < XY; i++) + lambda_r_[i] = lambda_b_[i] = 1.0; +} + +void Alsc::SwitchMode(CameraMode const &camera_mode) +{ + // There's a bit of a question what we should do if the "crop" of the + // camera mode has changed. Any calculation currently in flight would + // not be useful to the new mode, so arguably we should abort it, and + // generate a new table (like the "first_time" code already here). When + // the crop doesn't change, we can presumably just leave things + // alone. For now, I think we'll just wait and see. When the crop does + // change, any effects should be transient, and if they're not transient + // enough, we'll revisit the question then. + camera_mode_ = camera_mode; + if (first_time_) { + // On the first time, arrange for something sensible in the + // initial tables. Construct the tables for some default colour + // temperature. This echoes the code in doAlsc, without the + // adaptive algorithm. 
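+		// (4000K is an arbitrary but reasonable default here; no AWB
+		// estimate is available yet at this point.)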
+ double cal_table_r[XY], cal_table_b[XY], cal_table_tmp[XY]; + get_cal_table(4000, config_.calibrations_Cr, cal_table_tmp); + resample_cal_table(cal_table_tmp, camera_mode_, cal_table_r); + get_cal_table(4000, config_.calibrations_Cb, cal_table_tmp); + resample_cal_table(cal_table_tmp, camera_mode_, cal_table_b); + compensate_lambdas_for_cal(cal_table_r, lambda_r_, + async_lambda_r_); + compensate_lambdas_for_cal(cal_table_b, lambda_b_, + async_lambda_b_); + add_luminance_to_tables(sync_results_, async_lambda_r_, 1.0, + async_lambda_b_, config_.luminance_lut, + config_.luminance_strength); + memcpy(prev_sync_results_, sync_results_, + sizeof(prev_sync_results_)); + first_time_ = false; + } +} + +void Alsc::fetchAsyncResults() +{ + RPI_LOG("Fetch ALSC results"); + async_finished_ = false; + async_started_ = false; + memcpy(sync_results_, async_results_, sizeof(sync_results_)); +} + +static double get_ct(Metadata *metadata, double default_ct) +{ + AwbStatus awb_status; + awb_status.temperature_K = default_ct; // in case nothing found + if (metadata->Get("awb.status", awb_status) != 0) + RPI_WARN("Alsc: no AWB results found, using " + << awb_status.temperature_K); + else + RPI_LOG("Alsc: AWB results found, using " + << awb_status.temperature_K); + return awb_status.temperature_K; +} + +static void copy_stats(bcm2835_isp_stats_region regions[XY], StatisticsPtr &stats, + AlscStatus const &status) +{ + bcm2835_isp_stats_region *input_regions = stats->awb_stats; + double *r_table = (double *)status.r; + double *g_table = (double *)status.g; + double *b_table = (double *)status.b; + for (int i = 0; i < XY; i++) { + regions[i].r_sum = input_regions[i].r_sum / r_table[i]; + regions[i].g_sum = input_regions[i].g_sum / g_table[i]; + regions[i].b_sum = input_regions[i].b_sum / b_table[i]; + regions[i].counted = input_regions[i].counted; + // (don't care about the uncounted value) + } +} + +void Alsc::restartAsync(StatisticsPtr &stats, Metadata *image_metadata) +{ + RPI_LOG("Starting ALSC thread"); + // Get the current colour temperature. It's all we need from the + // metadata. + ct_ = get_ct(image_metadata, config_.default_ct); + // We have to copy the statistics here, dividing out our best guess of + // the LSC table that the pipeline applied to them. + AlscStatus alsc_status; + if (image_metadata->Get("alsc.status", alsc_status) != 0) { + RPI_WARN("No ALSC status found for applied gains!"); + for (int y = 0; y < Y; y++) + for (int x = 0; x < X; x++) { + alsc_status.r[y][x] = 1.0; + alsc_status.g[y][x] = 1.0; + alsc_status.b[y][x] = 1.0; + } + } + copy_stats(statistics_, stats, alsc_status); + frame_phase_ = 0; + // copy the camera mode so it won't change during the calculations + async_camera_mode_ = camera_mode_; + async_start_ = true; + async_started_ = true; + async_signal_.notify_one(); +} + +void Alsc::Prepare(Metadata *image_metadata) +{ + // Count frames since we started, and since we last poked the async + // thread. + if (frame_count_ < (int)config_.startup_frames) + frame_count_++; + double speed = frame_count_ < (int)config_.startup_frames + ? 1.0 + : config_.speed; + RPI_LOG("Alsc: frame_count " << frame_count_ << " speed " << speed); + { + std::unique_lock lock(mutex_); + if (async_started_ && async_finished_) { + RPI_LOG("ALSC thread finished"); + fetchAsyncResults(); + } + } + // Apply IIR filter to results and program into the pipeline. 
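+	// Each cell is an exponential moving average of the asynchronous
+	// results: prev = speed * new + (1 - speed) * prev. With the default
+	// speed of 0.05 the table therefore converges on a new solution with
+	// a time constant of roughly 20 frames (during startup, speed is 1.0
+	// and the new table is taken immediately).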
+ double *ptr = (double *)sync_results_, + *pptr = (double *)prev_sync_results_; + for (unsigned int i = 0; + i < sizeof(sync_results_) / sizeof(double); i++) + pptr[i] = speed * ptr[i] + (1.0 - speed) * pptr[i]; + // Put output values into status metadata. + AlscStatus status; + memcpy(status.r, prev_sync_results_[0], sizeof(status.r)); + memcpy(status.g, prev_sync_results_[1], sizeof(status.g)); + memcpy(status.b, prev_sync_results_[2], sizeof(status.b)); + image_metadata->Set("alsc.status", status); +} + +void Alsc::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + // Count frames since we started, and since we last poked the async + // thread. + if (frame_phase_ < (int)config_.frame_period) + frame_phase_++; + if (frame_count2_ < (int)config_.startup_frames) + frame_count2_++; + RPI_LOG("Alsc: frame_phase " << frame_phase_); + if (frame_phase_ >= (int)config_.frame_period || + frame_count2_ < (int)config_.startup_frames) { + std::unique_lock lock(mutex_); + if (async_started_ == false) { + RPI_LOG("ALSC thread starting"); + restartAsync(stats, image_metadata); + } + } +} + +void Alsc::asyncFunc() +{ + while (true) { + { + std::unique_lock lock(mutex_); + async_signal_.wait(lock, [&] { + return async_start_ || async_abort_; + }); + async_start_ = false; + if (async_abort_) + break; + } + doAlsc(); + { + std::lock_guard lock(mutex_); + async_finished_ = true; + sync_signal_.notify_one(); + } + } +} + +void get_cal_table(double ct, std::vector const &calibrations, + double cal_table[XY]) +{ + if (calibrations.empty()) { + for (int i = 0; i < XY; i++) + cal_table[i] = 1.0; + RPI_LOG("Alsc: no calibrations found"); + } else if (ct <= calibrations.front().ct) { + memcpy(cal_table, calibrations.front().table, + XY * sizeof(double)); + RPI_LOG("Alsc: using calibration for " + << calibrations.front().ct); + } else if (ct >= calibrations.back().ct) { + memcpy(cal_table, calibrations.back().table, + XY * sizeof(double)); + RPI_LOG("Alsc: using calibration for " + << calibrations.front().ct); + } else { + int idx = 0; + while (ct > calibrations[idx + 1].ct) + idx++; + double ct0 = calibrations[idx].ct, + ct1 = calibrations[idx + 1].ct; + RPI_LOG("Alsc: ct is " << ct << ", interpolating between " + << ct0 << " and " << ct1); + for (int i = 0; i < XY; i++) + cal_table[i] = + (calibrations[idx].table[i] * (ct1 - ct) + + calibrations[idx + 1].table[i] * (ct - ct0)) / + (ct1 - ct0); + } +} + +void resample_cal_table(double const cal_table_in[XY], + CameraMode const &camera_mode, double cal_table_out[XY]) +{ + // Precalculate and cache the x sampling locations and phases to save + // recomputing them on every row. + int x_lo[X], x_hi[X]; + double xf[X]; + double scale_x = camera_mode.sensor_width / + (camera_mode.width * camera_mode.scale_x); + double x_off = camera_mode.crop_x / (double)camera_mode.sensor_width; + double x = .5 / scale_x + x_off * X - .5; + double x_inc = 1 / scale_x; + for (int i = 0; i < X; i++, x += x_inc) { + x_lo[i] = floor(x); + xf[i] = x - x_lo[i]; + x_hi[i] = std::min(x_lo[i] + 1, X - 1); + x_lo[i] = std::max(x_lo[i], 0); + } + // Now march over the output table generating the new values. 
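+	// Each output cell is a bilinear interpolation of the input table:
+	// interpolate horizontally along the rows above and below, then blend
+	// those two values according to the vertical phase yf.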
+ double scale_y = camera_mode.sensor_height / + (camera_mode.height * camera_mode.scale_y); + double y_off = camera_mode.crop_y / (double)camera_mode.sensor_height; + double y = .5 / scale_y + y_off * Y - .5; + double y_inc = 1 / scale_y; + for (int j = 0; j < Y; j++, y += y_inc) { + int y_lo = floor(y); + double yf = y - y_lo; + int y_hi = std::min(y_lo + 1, Y - 1); + y_lo = std::max(y_lo, 0); + double const *row_above = cal_table_in + X * y_lo; + double const *row_below = cal_table_in + X * y_hi; + for (int i = 0; i < X; i++) { + double above = row_above[x_lo[i]] * (1 - xf[i]) + + row_above[x_hi[i]] * xf[i]; + double below = row_below[x_lo[i]] * (1 - xf[i]) + + row_below[x_hi[i]] * xf[i]; + *(cal_table_out++) = above * (1 - yf) + below * yf; + } + } +} + +// Calculate chrominance statistics (R/G and B/G) for each region. +static_assert(XY == AWB_REGIONS, "ALSC/AWB statistics region mismatch"); +static void calculate_Cr_Cb(bcm2835_isp_stats_region *awb_region, double Cr[XY], + double Cb[XY], uint32_t min_count, uint16_t min_G) +{ + for (int i = 0; i < XY; i++) { + bcm2835_isp_stats_region &zone = awb_region[i]; + if (zone.counted <= min_count || + zone.g_sum / zone.counted <= min_G) { + Cr[i] = Cb[i] = INSUFFICIENT_DATA; + continue; + } + Cr[i] = zone.r_sum / (double)zone.g_sum; + Cb[i] = zone.b_sum / (double)zone.g_sum; + } +} + +static void apply_cal_table(double const cal_table[XY], double C[XY]) +{ + for (int i = 0; i < XY; i++) + if (C[i] != INSUFFICIENT_DATA) + C[i] *= cal_table[i]; +} + +void compensate_lambdas_for_cal(double const cal_table[XY], + double const old_lambdas[XY], + double new_lambdas[XY]) +{ + double min_new_lambda = std::numeric_limits::max(); + for (int i = 0; i < XY; i++) { + new_lambdas[i] = old_lambdas[i] * cal_table[i]; + min_new_lambda = std::min(min_new_lambda, new_lambdas[i]); + } + for (int i = 0; i < XY; i++) + new_lambdas[i] /= min_new_lambda; +} + +static void print_cal_table(double const C[XY]) +{ + printf("table: [\n"); + for (int j = 0; j < Y; j++) { + for (int i = 0; i < X; i++) { + printf("%5.3f", 1.0 / C[j * X + i]); + if (i != X - 1 || j != Y - 1) + printf(","); + } + printf("\n"); + } + printf("]\n"); +} + +// Compute weight out of 1.0 which reflects how similar we wish to make the +// colours of these two regions. +static double compute_weight(double C_i, double C_j, double sigma) +{ + if (C_i == INSUFFICIENT_DATA || C_j == INSUFFICIENT_DATA) + return 0; + double diff = (C_i - C_j) / sigma; + return exp(-diff * diff / 2); +} + +// Compute all weights. +static void compute_W(double const C[XY], double sigma, double W[XY][4]) +{ + for (int i = 0; i < XY; i++) { + // Start with neighbour above and go clockwise. + W[i][0] = i >= X ? compute_weight(C[i], C[i - X], sigma) : 0; + W[i][1] = i % X < X - 1 ? compute_weight(C[i], C[i + 1], sigma) + : 0; + W[i][2] = + i < XY - X ? compute_weight(C[i], C[i + X], sigma) : 0; + W[i][3] = i % X ? compute_weight(C[i], C[i - 1], sigma) : 0; + } +} + +// Compute M, the large but sparse matrix such that M * lambdas = 0. +static void construct_M(double const C[XY], double const W[XY][4], + double M[XY][4]) +{ + double epsilon = 0.001; + for (int i = 0; i < XY; i++) { + // Note how, if C[i] == INSUFFICIENT_DATA, the weights will all + // be zero so the equation is still set up correctly. 
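+		// Each row is divided through by its diagonal term, so the
+		// Gauss-Seidel update below reduces to
+		// lambda[i] = sum_k M[i][k] * lambda[neighbour_k].
+		// The small epsilon keeps the system well behaved for cells
+		// with little or no usable data.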
+ int m = !!(i >= X) + !!(i % X < X - 1) + !!(i < XY - X) + + !!(i % X); // total number of neighbours + // we'll divide the diagonal out straight away + double diagonal = + (epsilon + W[i][0] + W[i][1] + W[i][2] + W[i][3]) * + C[i]; + M[i][0] = i >= X ? (W[i][0] * C[i - X] + epsilon / m * C[i]) / + diagonal + : 0; + M[i][1] = i % X < X - 1 + ? (W[i][1] * C[i + 1] + epsilon / m * C[i]) / + diagonal + : 0; + M[i][2] = i < XY - X + ? (W[i][2] * C[i + X] + epsilon / m * C[i]) / + diagonal + : 0; + M[i][3] = i % X ? (W[i][3] * C[i - 1] + epsilon / m * C[i]) / + diagonal + : 0; + } +} + +// In the compute_lambda_ functions, note that the matrix coefficients for the +// left/right neighbours are zero down the left/right edges, so we don't need +// need to test the i value to exclude them. +static double compute_lambda_bottom(int i, double const M[XY][4], + double lambda[XY]) +{ + return M[i][1] * lambda[i + 1] + M[i][2] * lambda[i + X] + + M[i][3] * lambda[i - 1]; +} +static double compute_lambda_bottom_start(int i, double const M[XY][4], + double lambda[XY]) +{ + return M[i][1] * lambda[i + 1] + M[i][2] * lambda[i + X]; +} +static double compute_lambda_interior(int i, double const M[XY][4], + double lambda[XY]) +{ + return M[i][0] * lambda[i - X] + M[i][1] * lambda[i + 1] + + M[i][2] * lambda[i + X] + M[i][3] * lambda[i - 1]; +} +static double compute_lambda_top(int i, double const M[XY][4], + double lambda[XY]) +{ + return M[i][0] * lambda[i - X] + M[i][1] * lambda[i + 1] + + M[i][3] * lambda[i - 1]; +} +static double compute_lambda_top_end(int i, double const M[XY][4], + double lambda[XY]) +{ + return M[i][0] * lambda[i - X] + M[i][3] * lambda[i - 1]; +} + +// Gauss-Seidel iteration with over-relaxation. +static double gauss_seidel2_SOR(double const M[XY][4], double omega, + double lambda[XY]) +{ + double old_lambda[XY]; + for (int i = 0; i < XY; i++) + old_lambda[i] = lambda[i]; + int i; + lambda[0] = compute_lambda_bottom_start(0, M, lambda); + for (i = 1; i < X; i++) + lambda[i] = compute_lambda_bottom(i, M, lambda); + for (; i < XY - X; i++) + lambda[i] = compute_lambda_interior(i, M, lambda); + for (; i < XY - 1; i++) + lambda[i] = compute_lambda_top(i, M, lambda); + lambda[i] = compute_lambda_top_end(i, M, lambda); + // Also solve the system from bottom to top, to help spread the updates + // better. + lambda[i] = compute_lambda_top_end(i, M, lambda); + for (i = XY - 2; i >= XY - X; i--) + lambda[i] = compute_lambda_top(i, M, lambda); + for (; i >= X; i--) + lambda[i] = compute_lambda_interior(i, M, lambda); + for (; i >= 1; i--) + lambda[i] = compute_lambda_bottom(i, M, lambda); + lambda[0] = compute_lambda_bottom_start(0, M, lambda); + double max_diff = 0; + for (int i = 0; i < XY; i++) { + lambda[i] = old_lambda[i] + (lambda[i] - old_lambda[i]) * omega; + if (fabs(lambda[i] - old_lambda[i]) > fabs(max_diff)) + max_diff = lambda[i] - old_lambda[i]; + } + return max_diff; +} + +// Normalise the values so that the smallest value is 1. 
+static void normalise(double *ptr, size_t n) +{ + double minval = ptr[0]; + for (size_t i = 1; i < n; i++) + minval = std::min(minval, ptr[i]); + for (size_t i = 0; i < n; i++) + ptr[i] /= minval; +} + +static void run_matrix_iterations(double const C[XY], double lambda[XY], + double const W[XY][4], double omega, + int n_iter, double threshold) +{ + double M[XY][4]; + construct_M(C, W, M); + double last_max_diff = std::numeric_limits::max(); + for (int i = 0; i < n_iter; i++) { + double max_diff = fabs(gauss_seidel2_SOR(M, omega, lambda)); + if (max_diff < threshold) { + RPI_LOG("Stop after " << i + 1 << " iterations"); + break; + } + // this happens very occasionally (so make a note), though + // doesn't seem to matter + if (max_diff > last_max_diff) + RPI_LOG("Iteration " << i << ": max_diff gone up " + << last_max_diff << " to " + << max_diff); + last_max_diff = max_diff; + } + // We're going to normalise the lambdas so the smallest is 1. Not sure + // this is really necessary as they get renormalised later, but I + // suppose it does stop these quantities from wandering off... + normalise(lambda, XY); +} + +static void add_luminance_rb(double result[XY], double const lambda[XY], + double const luminance_lut[XY], + double luminance_strength) +{ + for (int i = 0; i < XY; i++) + result[i] = lambda[i] * + ((luminance_lut[i] - 1) * luminance_strength + 1); +} + +static void add_luminance_g(double result[XY], double lambda, + double const luminance_lut[XY], + double luminance_strength) +{ + for (int i = 0; i < XY; i++) + result[i] = lambda * + ((luminance_lut[i] - 1) * luminance_strength + 1); +} + +void add_luminance_to_tables(double results[3][Y][X], double const lambda_r[XY], + double lambda_g, double const lambda_b[XY], + double const luminance_lut[XY], + double luminance_strength) +{ + add_luminance_rb((double *)results[0], lambda_r, luminance_lut, + luminance_strength); + add_luminance_g((double *)results[1], lambda_g, luminance_lut, + luminance_strength); + add_luminance_rb((double *)results[2], lambda_b, luminance_lut, + luminance_strength); + normalise((double *)results, 3 * XY); +} + +void Alsc::doAlsc() +{ + double Cr[XY], Cb[XY], Wr[XY][4], Wb[XY][4], cal_table_r[XY], + cal_table_b[XY], cal_table_tmp[XY]; + // Calculate our R/B ("Cr"/"Cb") colour statistics, and assess which are + // usable. + calculate_Cr_Cb(statistics_, Cr, Cb, config_.min_count, config_.min_G); + // Fetch the new calibrations (if any) for this CT. Resample them in + // case the camera mode is not full-frame. + get_cal_table(ct_, config_.calibrations_Cr, cal_table_tmp); + resample_cal_table(cal_table_tmp, async_camera_mode_, cal_table_r); + get_cal_table(ct_, config_.calibrations_Cb, cal_table_tmp); + resample_cal_table(cal_table_tmp, async_camera_mode_, cal_table_b); + // You could print out the cal tables for this image here, if you're + // tuning the algorithm... + (void)print_cal_table; + // Apply any calibration to the statistics, so the adaptive algorithm + // makes only the extra adjustments. + apply_cal_table(cal_table_r, Cr); + apply_cal_table(cal_table_b, Cb); + // Compute weights between zones. + compute_W(Cr, config_.sigma_Cr, Wr); + compute_W(Cb, config_.sigma_Cb, Wb); + // Run Gauss-Seidel iterations over the resulting matrix, for R and B. + run_matrix_iterations(Cr, lambda_r_, Wr, config_.omega, config_.n_iter, + config_.threshold); + run_matrix_iterations(Cb, lambda_b_, Wb, config_.omega, config_.n_iter, + config_.threshold); + // Fold the calibrated gains into our final lambda values. 
(Note that on + // the next run, we re-start with the lambda values that don't have the + // calibration gains included.) + compensate_lambdas_for_cal(cal_table_r, lambda_r_, async_lambda_r_); + compensate_lambdas_for_cal(cal_table_b, lambda_b_, async_lambda_b_); + // Fold in the luminance table at the appropriate strength. + add_luminance_to_tables(async_results_, async_lambda_r_, 1.0, + async_lambda_b_, config_.luminance_lut, + config_.luminance_strength); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Alsc(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/alsc.hpp b/src/ipa/raspberrypi/controller/rpi/alsc.hpp new file mode 100644 index 000000000000..c8ed3d2190b5 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/alsc.hpp @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * alsc.hpp - ALSC (auto lens shading correction) control algorithm + */ +#pragma once + +#include +#include +#include + +#include "../algorithm.hpp" +#include "../alsc_status.h" + +namespace RPi { + +// Algorithm to generate automagic LSC (Lens Shading Correction) tables. + +struct AlscCalibration { + double ct; + double table[ALSC_CELLS_X * ALSC_CELLS_Y]; +}; + +struct AlscConfig { + // Only repeat the ALSC calculation every "this many" frames + uint16_t frame_period; + // number of initial frames for which speed taken as 1.0 (maximum) + uint16_t startup_frames; + // IIR filter speed applied to algorithm results + double speed; + double sigma_Cr; + double sigma_Cb; + double min_count; + uint16_t min_G; + double omega; + uint32_t n_iter; + double luminance_lut[ALSC_CELLS_X * ALSC_CELLS_Y]; + double luminance_strength; + std::vector calibrations_Cr; + std::vector calibrations_Cb; + double default_ct; // colour temperature if no metadata found + double threshold; // iteration termination threshold +}; + +class Alsc : public Algorithm +{ +public: + Alsc(Controller *controller = NULL); + ~Alsc(); + char const *Name() const override; + void Initialise() override; + void SwitchMode(CameraMode const &camera_mode) override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + void Process(StatisticsPtr &stats, Metadata *image_metadata) override; + +private: + // configuration is read-only, and available to both threads + AlscConfig config_; + bool first_time_; + std::atomic camera_mode_; + std::thread async_thread_; + void asyncFunc(); // asynchronous thread function + std::mutex mutex_; + CameraMode async_camera_mode_; + // condvar for async thread to wait on + std::condition_variable async_signal_; + // condvar for synchronous thread to wait on + std::condition_variable sync_signal_; + // for sync thread to check if async thread finished (requires mutex) + bool async_finished_; + // for async thread to check if it's been told to run (requires mutex) + bool async_start_; + // for async thread to check if it's been told to quit (requires mutex) + bool async_abort_; + + // The following are only for the synchronous thread to use: + // for sync thread to note its has asked async thread to run + bool async_started_; + // counts up to frame_period before restarting the async thread + int frame_phase_; + // counts up to startup_frames + int frame_count_; + // counts up to startup_frames for Process method + int frame_count2_; + double 
sync_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X]; + double prev_sync_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X]; + // The following are for the asynchronous thread to use, though the main + // thread can set/reset them if the async thread is known to be idle: + void restartAsync(StatisticsPtr &stats, Metadata *image_metadata); + // copy out the results from the async thread so that it can be restarted + void fetchAsyncResults(); + double ct_; + bcm2835_isp_stats_region statistics_[ALSC_CELLS_Y * ALSC_CELLS_X]; + double async_results_[3][ALSC_CELLS_Y][ALSC_CELLS_X]; + double async_lambda_r_[ALSC_CELLS_X * ALSC_CELLS_Y]; + double async_lambda_b_[ALSC_CELLS_X * ALSC_CELLS_Y]; + void doAlsc(); + double lambda_r_[ALSC_CELLS_X * ALSC_CELLS_Y]; + double lambda_b_[ALSC_CELLS_X * ALSC_CELLS_Y]; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/awb.cpp b/src/ipa/raspberrypi/controller/rpi/awb.cpp new file mode 100644 index 000000000000..a58fa11df863 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/awb.cpp @@ -0,0 +1,608 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * awb.cpp - AWB control algorithm + */ + +#include "../logging.hpp" +#include "../lux_status.h" + +#include "awb.hpp" + +using namespace RPi; + +#define NAME "rpi.awb" + +#define AWB_STATS_SIZE_X DEFAULT_AWB_REGIONS_X +#define AWB_STATS_SIZE_Y DEFAULT_AWB_REGIONS_Y + +const double Awb::RGB::INVALID = -1.0; + +void AwbMode::Read(boost::property_tree::ptree const ¶ms) +{ + ct_lo = params.get("lo"); + ct_hi = params.get("hi"); +} + +void AwbPrior::Read(boost::property_tree::ptree const ¶ms) +{ + lux = params.get("lux"); + prior.Read(params.get_child("prior")); +} + +static void read_ct_curve(Pwl &ct_r, Pwl &ct_b, + boost::property_tree::ptree const ¶ms) +{ + int num = 0; + for (auto it = params.begin(); it != params.end(); it++) { + double ct = it->second.get_value(); + assert(it == params.begin() || ct != ct_r.Domain().end); + if (++it == params.end()) + throw std::runtime_error( + "AwbConfig: incomplete CT curve entry"); + ct_r.Append(ct, it->second.get_value()); + if (++it == params.end()) + throw std::runtime_error( + "AwbConfig: incomplete CT curve entry"); + ct_b.Append(ct, it->second.get_value()); + num++; + } + if (num < 2) + throw std::runtime_error( + "AwbConfig: insufficient points in CT curve"); +} + +void AwbConfig::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG("AwbConfig"); + bayes = params.get("bayes", 1); + frame_period = params.get("frame_period", 10); + startup_frames = params.get("startup_frames", 10); + speed = params.get("speed", 0.05); + if (params.get_child_optional("ct_curve")) + read_ct_curve(ct_r, ct_b, params.get_child("ct_curve")); + if (params.get_child_optional("priors")) { + for (auto &p : params.get_child("priors")) { + AwbPrior prior; + prior.Read(p.second); + if (!priors.empty() && prior.lux <= priors.back().lux) + throw std::runtime_error( + "AwbConfig: Prior must be ordered in increasing lux value"); + priors.push_back(prior); + } + if (priors.empty()) + throw std::runtime_error( + "AwbConfig: no AWB priors configured"); + } + if (params.get_child_optional("modes")) { + for (auto &p : params.get_child("modes")) { + modes[p.first].Read(p.second); + if (default_mode == nullptr) + default_mode = &modes[p.first]; + } + if (default_mode == nullptr) + throw std::runtime_error( + "AwbConfig: no AWB modes configured"); + } + min_pixels = params.get("min_pixels", 16.0); + min_G = params.get("min_G", 32); + 
min_regions = params.get("min_regions", 10); + delta_limit = params.get("delta_limit", 0.2); + coarse_step = params.get("coarse_step", 0.2); + transverse_pos = params.get("transverse_pos", 0.01); + transverse_neg = params.get("transverse_neg", 0.01); + if (transverse_pos <= 0 || transverse_neg <= 0) + throw std::runtime_error( + "AwbConfig: transverse_pos/neg must be > 0"); + sensitivity_r = params.get("sensitivity_r", 1.0); + sensitivity_b = params.get("sensitivity_b", 1.0); + if (bayes) { + if (ct_r.Empty() || ct_b.Empty() || priors.empty() || + default_mode == nullptr) { + RPI_WARN( + "Bayesian AWB mis-configured - switch to Grey method"); + bayes = false; + } + } + fast = params.get( + "fast", bayes); // default to fast for Bayesian, otherwise slow + whitepoint_r = params.get("whitepoint_r", 0.0); + whitepoint_b = params.get("whitepoint_b", 0.0); + if (bayes == false) + sensitivity_r = sensitivity_b = + 1.0; // nor do sensitivities make any sense +} + +Awb::Awb(Controller *controller) + : AwbAlgorithm(controller) +{ + async_abort_ = async_start_ = async_started_ = async_finished_ = false; + mode_ = nullptr; + manual_r_ = manual_b_ = 0.0; + async_thread_ = std::thread(std::bind(&Awb::asyncFunc, this)); +} + +Awb::~Awb() +{ + { + std::lock_guard lock(mutex_); + async_abort_ = true; + async_signal_.notify_one(); + } + async_thread_.join(); +} + +char const *Awb::Name() const +{ + return NAME; +} + +void Awb::Read(boost::property_tree::ptree const ¶ms) +{ + config_.Read(params); +} + +void Awb::Initialise() +{ + frame_count2_ = frame_count_ = frame_phase_ = 0; + // Put something sane into the status that we are filtering towards, + // just in case the first few frames don't have anything meaningful in + // them. + if (!config_.ct_r.Empty() && !config_.ct_b.Empty()) { + sync_results_.temperature_K = config_.ct_r.Domain().Clip(4000); + sync_results_.gain_r = + 1.0 / config_.ct_r.Eval(sync_results_.temperature_K); + sync_results_.gain_g = 1.0; + sync_results_.gain_b = + 1.0 / config_.ct_b.Eval(sync_results_.temperature_K); + } else { + // random values just to stop the world blowing up + sync_results_.temperature_K = 4500; + sync_results_.gain_r = sync_results_.gain_g = + sync_results_.gain_b = 1.0; + } + prev_sync_results_ = sync_results_; +} + +void Awb::SetMode(std::string const &mode_name) +{ + std::unique_lock lock(settings_mutex_); + mode_name_ = mode_name; +} + +void Awb::SetManualGains(double manual_r, double manual_b) +{ + std::unique_lock lock(settings_mutex_); + // If any of these are 0.0, we swich back to auto. + manual_r_ = manual_r; + manual_b_ = manual_b; +} + +void Awb::fetchAsyncResults() +{ + RPI_LOG("Fetch AWB results"); + async_finished_ = false; + async_started_ = false; + sync_results_ = async_results_; +} + +void Awb::restartAsync(StatisticsPtr &stats, std::string const &mode_name, + double lux) +{ + RPI_LOG("Starting AWB thread"); + // this makes a new reference which belongs to the asynchronous thread + statistics_ = stats; + // store the mode as it could technically change + auto m = config_.modes.find(mode_name); + mode_ = m != config_.modes.end() + ? &m->second + : (mode_ == nullptr ? 
config_.default_mode : mode_); + lux_ = lux; + frame_phase_ = 0; + async_start_ = true; + async_started_ = true; + size_t len = mode_name.copy(async_results_.mode, + sizeof(async_results_.mode) - 1); + async_results_.mode[len] = '\0'; + async_signal_.notify_one(); +} + +void Awb::Prepare(Metadata *image_metadata) +{ + if (frame_count_ < (int)config_.startup_frames) + frame_count_++; + double speed = frame_count_ < (int)config_.startup_frames + ? 1.0 + : config_.speed; + RPI_LOG("Awb: frame_count " << frame_count_ << " speed " << speed); + { + std::unique_lock lock(mutex_); + if (async_started_ && async_finished_) { + RPI_LOG("AWB thread finished"); + fetchAsyncResults(); + } + } + // Finally apply IIR filter to results and put into metadata. + memcpy(prev_sync_results_.mode, sync_results_.mode, + sizeof(prev_sync_results_.mode)); + prev_sync_results_.temperature_K = + speed * sync_results_.temperature_K + + (1.0 - speed) * prev_sync_results_.temperature_K; + prev_sync_results_.gain_r = speed * sync_results_.gain_r + + (1.0 - speed) * prev_sync_results_.gain_r; + prev_sync_results_.gain_g = speed * sync_results_.gain_g + + (1.0 - speed) * prev_sync_results_.gain_g; + prev_sync_results_.gain_b = speed * sync_results_.gain_b + + (1.0 - speed) * prev_sync_results_.gain_b; + image_metadata->Set("awb.status", prev_sync_results_); + RPI_LOG("Using AWB gains r " << prev_sync_results_.gain_r << " g " + << prev_sync_results_.gain_g << " b " + << prev_sync_results_.gain_b); +} + +void Awb::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + // Count frames since we last poked the async thread. + if (frame_phase_ < (int)config_.frame_period) + frame_phase_++; + if (frame_count2_ < (int)config_.startup_frames) + frame_count2_++; + RPI_LOG("Awb: frame_phase " << frame_phase_); + if (frame_phase_ >= (int)config_.frame_period || + frame_count2_ < (int)config_.startup_frames) { + // Update any settings and any image metadata that we need. + std::string mode_name; + { + std::unique_lock lock(settings_mutex_); + mode_name = mode_name_; + } + struct LuxStatus lux_status = {}; + lux_status.lux = 400; // in case no metadata + if (image_metadata->Get("lux.status", lux_status) != 0) + RPI_LOG("No lux metadata found"); + RPI_LOG("Awb lux value is " << lux_status.lux); + + std::unique_lock lock(mutex_); + if (async_started_ == false) { + RPI_LOG("AWB thread starting"); + restartAsync(stats, mode_name, lux_status.lux); + } + } +} + +void Awb::asyncFunc() +{ + while (true) { + { + std::unique_lock lock(mutex_); + async_signal_.wait(lock, [&] { + return async_start_ || async_abort_; + }); + async_start_ = false; + if (async_abort_) + break; + } + doAwb(); + { + std::lock_guard lock(mutex_); + async_finished_ = true; + sync_signal_.notify_one(); + } + } +} + +static void generate_stats(std::vector &zones, + bcm2835_isp_stats_region *stats, double min_pixels, + double min_G) +{ + for (int i = 0; i < AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y; i++) { + Awb::RGB zone; // this is "invalid", unless R gets overwritten later + double counted = stats[i].counted; + if (counted >= min_pixels) { + zone.G = stats[i].g_sum / counted; + if (zone.G >= min_G) { + zone.R = stats[i].r_sum / counted; + zone.B = stats[i].b_sum / counted; + } + } + zones.push_back(zone); + } +} + +void Awb::prepareStats() +{ + zones_.clear(); + // LSC has already been applied to the stats in this pipeline, so stop + // any LSC compensation. We also ignore config_.fast in this version. 
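+	// generate_stats() only fills in a zone's R and B values when it has
+	// at least min_pixels counted pixels and enough green signal;
+	// otherwise the zone keeps its "invalid" sentinel values.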
+ generate_stats(zones_, statistics_->awb_stats, config_.min_pixels, + config_.min_G); + // we're done with these; we may as well relinquish our hold on the + // pointer. + statistics_.reset(); + // apply sensitivities, so values appear to come from our "canonical" + // sensor. + for (auto &zone : zones_) + zone.R *= config_.sensitivity_r, + zone.B *= config_.sensitivity_b; +} + +double Awb::computeDelta2Sum(double gain_r, double gain_b) +{ + // Compute the sum of the squared colour error (non-greyness) as it + // appears in the log likelihood equation. + double delta2_sum = 0; + for (auto &z : zones_) { + double delta_r = gain_r * z.R - 1 - config_.whitepoint_r; + double delta_b = gain_b * z.B - 1 - config_.whitepoint_b; + double delta2 = delta_r * delta_r + delta_b * delta_b; + //RPI_LOG("delta_r " << delta_r << " delta_b " << delta_b << " delta2 " << delta2); + delta2 = std::min(delta2, config_.delta_limit); + delta2_sum += delta2; + } + return delta2_sum; +} + +Pwl Awb::interpolatePrior() +{ + // Interpolate the prior log likelihood function for our current lux + // value. + if (lux_ <= config_.priors.front().lux) + return config_.priors.front().prior; + else if (lux_ >= config_.priors.back().lux) + return config_.priors.back().prior; + else { + int idx = 0; + // find which two we lie between + while (config_.priors[idx + 1].lux < lux_) + idx++; + double lux0 = config_.priors[idx].lux, + lux1 = config_.priors[idx + 1].lux; + return Pwl::Combine(config_.priors[idx].prior, + config_.priors[idx + 1].prior, + [&](double /*x*/, double y0, double y1) { + return y0 + (y1 - y0) * + (lux_ - lux0) / (lux1 - lux0); + }); + } +} + +static double interpolate_quadatric(Pwl::Point const &A, Pwl::Point const &B, + Pwl::Point const &C) +{ + // Given 3 points on a curve, find the extremum of the function in that + // interval by fitting a quadratic. + const double eps = 1e-3; + Pwl::Point CA = C - A, BA = B - A; + double denominator = 2 * (BA.y * CA.x - CA.y * BA.x); + if (abs(denominator) > eps) { + double numerator = BA.y * CA.x * CA.x - CA.y * BA.x * BA.x; + double result = numerator / denominator + A.x; + return std::max(A.x, std::min(C.x, result)); + } + // has degenerated to straight line segment + return A.y < C.y - eps ? A.x : (C.y < A.y - eps ? C.x : B.x); +} + +double Awb::coarseSearch(Pwl const &prior) +{ + points_.clear(); // assume doesn't deallocate memory + size_t best_point = 0; + double t = mode_->ct_lo; + int span_r = 0, span_b = 0; + // Step down the CT curve evaluating log likelihood. 
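+	// The search walks from ct_lo up to ct_hi in steps proportional to
+	// the current CT (t += t / 10 * coarse_step), keeping the CT with the
+	// lowest score, where the score is the colour error (delta2_sum)
+	// minus the prior log likelihood.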
+ while (true) { + double r = config_.ct_r.Eval(t, &span_r); + double b = config_.ct_b.Eval(t, &span_b); + double gain_r = 1 / r, gain_b = 1 / b; + double delta2_sum = computeDelta2Sum(gain_r, gain_b); + double prior_log_likelihood = + prior.Eval(prior.Domain().Clip(t)); + double final_log_likelihood = delta2_sum - prior_log_likelihood; + RPI_LOG("t: " << t << " gain_r " << gain_r << " gain_b " + << gain_b << " delta2_sum " << delta2_sum + << " prior " << prior_log_likelihood << " final " + << final_log_likelihood); + points_.push_back(Pwl::Point(t, final_log_likelihood)); + if (points_.back().y < points_[best_point].y) + best_point = points_.size() - 1; + if (t == mode_->ct_hi) + break; + // for even steps along the r/b curve scale them by the current t + t = std::min(t + t / 10 * config_.coarse_step, + mode_->ct_hi); + } + t = points_[best_point].x; + RPI_LOG("Coarse search found CT " << t); + // We have the best point of the search, but refine it with a quadratic + // interpolation around its neighbours. + if (points_.size() > 2) { + unsigned long bp = std::min(best_point, points_.size() - 2); + best_point = std::max(1UL, bp); + t = interpolate_quadatric(points_[best_point - 1], + points_[best_point], + points_[best_point + 1]); + RPI_LOG("After quadratic refinement, coarse search has CT " + << t); + } + return t; +} + +void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior) +{ + int span_r, span_b; + config_.ct_r.Eval(t, &span_r); + config_.ct_b.Eval(t, &span_b); + double step = t / 10 * config_.coarse_step * 0.1; + int nsteps = 5; + double r_diff = config_.ct_r.Eval(t + nsteps * step, &span_r) - + config_.ct_r.Eval(t - nsteps * step, &span_r); + double b_diff = config_.ct_b.Eval(t + nsteps * step, &span_b) - + config_.ct_b.Eval(t - nsteps * step, &span_b); + Pwl::Point transverse(b_diff, -r_diff); + if (transverse.Len2() < 1e-6) + return; + // unit vector orthogonal to the b vs. r function (pointing outwards + // with r and b increasing) + transverse = transverse / transverse.Len(); + double best_log_likelihood = 0, best_t = 0, best_r = 0, best_b = 0; + double transverse_range = + config_.transverse_neg + config_.transverse_pos; + const int MAX_NUM_DELTAS = 12; + // a transverse step approximately every 0.01 r/b units + int num_deltas = floor(transverse_range * 100 + 0.5) + 1; + num_deltas = num_deltas < 3 ? 3 : + (num_deltas > MAX_NUM_DELTAS ? MAX_NUM_DELTAS : num_deltas); + // Step down CT curve. March a bit further if the transverse range is + // large. + nsteps += num_deltas; + for (int i = -nsteps; i <= nsteps; i++) { + double t_test = t + i * step; + double prior_log_likelihood = + prior.Eval(prior.Domain().Clip(t_test)); + double r_curve = config_.ct_r.Eval(t_test, &span_r); + double b_curve = config_.ct_b.Eval(t_test, &span_b); + // x will be distance off the curve, y the log likelihood there + Pwl::Point points[MAX_NUM_DELTAS]; + int best_point = 0; + // Take some measurements transversely *off* the CT curve. 
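+		// points[j].x runs from -transverse_neg to +transverse_pos;
+		// each sample displaces (r, b) along the unit normal to the
+		// CT curve and records the resulting score in points[j].y.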
+ for (int j = 0; j < num_deltas; j++) { + points[j].x = -config_.transverse_neg + + (transverse_range * j) / (num_deltas - 1); + Pwl::Point rb_test = Pwl::Point(r_curve, b_curve) + + transverse * points[j].x; + double r_test = rb_test.x, b_test = rb_test.y; + double gain_r = 1 / r_test, gain_b = 1 / b_test; + double delta2_sum = computeDelta2Sum(gain_r, gain_b); + points[j].y = delta2_sum - prior_log_likelihood; + RPI_LOG("At t " << t_test << " r " << r_test << " b " + << b_test << ": " << points[j].y); + if (points[j].y < points[best_point].y) + best_point = j; + } + // We have NUM_DELTAS points transversely across the CT curve, + // now let's do a quadratic interpolation for the best result. + best_point = std::max(1, std::min(best_point, num_deltas - 2)); + Pwl::Point rb_test = + Pwl::Point(r_curve, b_curve) + + transverse * + interpolate_quadatric(points[best_point - 1], + points[best_point], + points[best_point + 1]); + double r_test = rb_test.x, b_test = rb_test.y; + double gain_r = 1 / r_test, gain_b = 1 / b_test; + double delta2_sum = computeDelta2Sum(gain_r, gain_b); + double final_log_likelihood = delta2_sum - prior_log_likelihood; + RPI_LOG("Finally " + << t_test << " r " << r_test << " b " << b_test << ": " + << final_log_likelihood + << (final_log_likelihood < best_log_likelihood ? " BEST" + : "")); + if (best_t == 0 || final_log_likelihood < best_log_likelihood) + best_log_likelihood = final_log_likelihood, + best_t = t_test, best_r = r_test, best_b = b_test; + } + t = best_t, r = best_r, b = best_b; + RPI_LOG("Fine search found t " << t << " r " << r << " b " << b); +} + +void Awb::awbBayes() +{ + // May as well divide out G to save computeDelta2Sum from doing it over + // and over. + for (auto &z : zones_) + z.R = z.R / (z.G + 1), z.B = z.B / (z.G + 1); + // Get the current prior, and scale according to how many zones are + // valid... not entirely sure about this. + Pwl prior = interpolatePrior(); + prior *= zones_.size() / (double)(AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y); + prior.Map([](double x, double y) { + RPI_LOG("(" << x << "," << y << ")"); + }); + double t = coarseSearch(prior); + double r = config_.ct_r.Eval(t); + double b = config_.ct_b.Eval(t); + RPI_LOG("After coarse search: r " << r << " b " << b << " (gains r " + << 1 / r << " b " << 1 / b << ")"); + // Not entirely sure how to handle the fine search yet. Mostly the + // estimated CT is already good enough, but the fine search allows us to + // wander transverely off the CT curve. Under some illuminants, where + // there may be more or less green light, this may prove beneficial, + // though I probably need more real datasets before deciding exactly how + // this should be controlled and tuned. + fineSearch(t, r, b, prior); + RPI_LOG("After fine search: r " << r << " b " << b << " (gains r " + << 1 / r << " b " << 1 / b << ")"); + // Write results out for the main thread to pick up. Remember to adjust + // the gains from the ones that the "canonical sensor" would require to + // the ones needed by *this* sensor. + async_results_.temperature_K = t; + async_results_.gain_r = 1.0 / r * config_.sensitivity_r; + async_results_.gain_g = 1.0; + async_results_.gain_b = 1.0 / b * config_.sensitivity_b; +} + +void Awb::awbGrey() +{ + RPI_LOG("Grey world AWB"); + // Make a separate list of the derivatives for each of red and blue, so + // that we can sort them to exclude the extreme gains. We could + // consider some variations, such as normalising all the zones first, or + // doing an L2 average etc. 
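+	// In other words: sort the zones by their R/G ratio (and separately
+	// by B/G), discard the top and bottom quarter, and derive the gains
+	// from the sums over the remaining middle half, i.e. a trimmed
+	// grey-world estimate.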
+ std::vector &derivs_R(zones_); + std::vector derivs_B(derivs_R); + std::sort(derivs_R.begin(), derivs_R.end(), + [](RGB const &a, RGB const &b) { + return a.G * b.R < b.G * a.R; + }); + std::sort(derivs_B.begin(), derivs_B.end(), + [](RGB const &a, RGB const &b) { + return a.G * b.B < b.G * a.B; + }); + // Average the middle half of the values. + int discard = derivs_R.size() / 4; + RGB sum_R(0, 0, 0), sum_B(0, 0, 0); + for (auto ri = derivs_R.begin() + discard, + bi = derivs_B.begin() + discard; + ri != derivs_R.end() - discard; ri++, bi++) + sum_R += *ri, sum_B += *bi; + double gain_r = sum_R.G / (sum_R.R + 1), + gain_b = sum_B.G / (sum_B.B + 1); + async_results_.temperature_K = 4500; // don't know what it is + async_results_.gain_r = gain_r; + async_results_.gain_g = 1.0; + async_results_.gain_b = gain_b; +} + +void Awb::doAwb() +{ + if (manual_r_ != 0.0 && manual_b_ != 0.0) { + async_results_.temperature_K = 4500; // don't know what it is + async_results_.gain_r = manual_r_; + async_results_.gain_g = 1.0; + async_results_.gain_b = manual_b_; + RPI_LOG("Using manual white balance: gain_r " + << async_results_.gain_r << " gain_b " + << async_results_.gain_b); + } else { + prepareStats(); + RPI_LOG("Valid zones: " << zones_.size()); + if (zones_.size() > config_.min_regions) { + if (config_.bayes) + awbBayes(); + else + awbGrey(); + RPI_LOG("CT found is " + << async_results_.temperature_K + << " with gains r " << async_results_.gain_r + << " and b " << async_results_.gain_b); + } + } +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Awb(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/awb.hpp b/src/ipa/raspberrypi/controller/rpi/awb.hpp new file mode 100644 index 000000000000..369252523200 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/awb.hpp @@ -0,0 +1,178 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * awb.hpp - AWB control algorithm + */ +#pragma once + +#include +#include +#include + +#include "../awb_algorithm.hpp" +#include "../pwl.hpp" +#include "../awb_status.h" + +namespace RPi { + +// Control algorithm to perform AWB calculations. 
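+// As with ALSC, the actual search runs in a separate thread; Prepare()
+// just filters the most recent results and publishes them as "awb.status".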
+ +struct AwbMode { + void Read(boost::property_tree::ptree const ¶ms); + double ct_lo; // low CT value for search + double ct_hi; // high CT value for search +}; + +struct AwbPrior { + void Read(boost::property_tree::ptree const ¶ms); + double lux; // lux level + Pwl prior; // maps CT to prior log likelihood for this lux level +}; + +struct AwbConfig { + AwbConfig() : default_mode(nullptr) {} + void Read(boost::property_tree::ptree const ¶ms); + // Only repeat the AWB calculation every "this many" frames + uint16_t frame_period; + // number of initial frames for which speed taken as 1.0 (maximum) + uint16_t startup_frames; + double speed; // IIR filter speed applied to algorithm results + bool fast; // "fast" mode uses a 16x16 rather than 32x32 grid + Pwl ct_r; // function maps CT to r (= R/G) + Pwl ct_b; // function maps CT to b (= B/G) + // table of illuminant priors at different lux levels + std::vector priors; + // AWB "modes" (determines the search range) + std::map modes; + AwbMode *default_mode; // mode used if no mode selected + // minimum proportion of pixels counted within AWB region for it to be + // "useful" + double min_pixels; + // minimum G value of those pixels, to be regarded a "useful" + uint16_t min_G; + // number of AWB regions that must be "useful" in order to do the AWB + // calculation + uint32_t min_regions; + // clamp on colour error term (so as not to penalise non-grey excessively) + double delta_limit; + // step size control in coarse search + double coarse_step; + // how far to wander off CT curve towards "more purple" + double transverse_pos; + // how far to wander off CT curve towards "more green" + double transverse_neg; + // red sensitivity ratio (set to canonical sensor's R/G divided by this + // sensor's R/G) + double sensitivity_r; + // blue sensitivity ratio (set to canonical sensor's B/G divided by this + // sensor's B/G) + double sensitivity_b; + // The whitepoint (which we normally "aim" for) can be moved. 
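+	// (a positive value shifts the target away from pure grey in the
+	// corresponding r or b direction)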
+ double whitepoint_r; + double whitepoint_b; + bool bayes; // use Bayesian algorithm +}; + +class Awb : public AwbAlgorithm +{ +public: + Awb(Controller *controller = NULL); + ~Awb(); + char const *Name() const override; + void Initialise() override; + void Read(boost::property_tree::ptree const ¶ms) override; + void SetMode(std::string const &name) override; + void SetManualGains(double manual_r, double manual_b) override; + void Prepare(Metadata *image_metadata) override; + void Process(StatisticsPtr &stats, Metadata *image_metadata) override; + struct RGB { + RGB(double _R = INVALID, double _G = INVALID, + double _B = INVALID) + : R(_R), G(_G), B(_B) + { + } + double R, G, B; + static const double INVALID; + bool Valid() const { return G != INVALID; } + bool Invalid() const { return G == INVALID; } + RGB &operator+=(RGB const &other) + { + R += other.R, G += other.G, B += other.B; + return *this; + } + RGB Square() const { return RGB(R * R, G * G, B * B); } + }; + +private: + // configuration is read-only, and available to both threads + AwbConfig config_; + std::thread async_thread_; + void asyncFunc(); // asynchronous thread function + std::mutex mutex_; + // condvar for async thread to wait on + std::condition_variable async_signal_; + // condvar for synchronous thread to wait on + std::condition_variable sync_signal_; + // for sync thread to check if async thread finished (requires mutex) + bool async_finished_; + // for async thread to check if it's been told to run (requires mutex) + bool async_start_; + // for async thread to check if it's been told to quit (requires mutex) + bool async_abort_; + + // The following are only for the synchronous thread to use: + // for sync thread to note its has asked async thread to run + bool async_started_; + // counts up to frame_period before restarting the async thread + int frame_phase_; + int frame_count_; // counts up to startup_frames + int frame_count2_; // counts up to startup_frames for Process method + AwbStatus sync_results_; + AwbStatus prev_sync_results_; + std::string mode_name_; + std::mutex settings_mutex_; + // The following are for the asynchronous thread to use, though the main + // thread can set/reset them if the async thread is known to be idle: + void restartAsync(StatisticsPtr &stats, std::string const &mode_name, + double lux); + // copy out the results from the async thread so that it can be restarted + void fetchAsyncResults(); + StatisticsPtr statistics_; + AwbMode *mode_; + double lux_; + AwbStatus async_results_; + void doAwb(); + void awbBayes(); + void awbGrey(); + void prepareStats(); + double computeDelta2Sum(double gain_r, double gain_b); + Pwl interpolatePrior(); + double coarseSearch(Pwl const &prior); + void fineSearch(double &t, double &r, double &b, Pwl const &prior); + std::vector zones_; + std::vector points_; + // manual r setting + double manual_r_; + // manual b setting + double manual_b_; +}; + +static inline Awb::RGB operator+(Awb::RGB const &a, Awb::RGB const &b) +{ + return Awb::RGB(a.R + b.R, a.G + b.G, a.B + b.B); +} +static inline Awb::RGB operator-(Awb::RGB const &a, Awb::RGB const &b) +{ + return Awb::RGB(a.R - b.R, a.G - b.G, a.B - b.B); +} +static inline Awb::RGB operator*(double d, Awb::RGB const &rgb) +{ + return Awb::RGB(d * rgb.R, d * rgb.G, d * rgb.B); +} +static inline Awb::RGB operator*(Awb::RGB const &rgb, double d) +{ + return d * rgb; +} + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/black_level.cpp b/src/ipa/raspberrypi/controller/rpi/black_level.cpp new 
file mode 100644 index 000000000000..59c9f5a62379 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/black_level.cpp @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * black_level.cpp - black level control algorithm + */ + +#include +#include + +#include "../black_level_status.h" +#include "../logging.hpp" + +#include "black_level.hpp" + +using namespace RPi; + +#define NAME "rpi.black_level" + +BlackLevel::BlackLevel(Controller *controller) + : Algorithm(controller) +{ +} + +char const *BlackLevel::Name() const +{ + return NAME; +} + +void BlackLevel::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG(Name()); + uint16_t black_level = params.get( + "black_level", 4096); // 64 in 10 bits scaled to 16 bits + black_level_r_ = params.get("black_level_r", black_level); + black_level_g_ = params.get("black_level_g", black_level); + black_level_b_ = params.get("black_level_b", black_level); +} + +void BlackLevel::Prepare(Metadata *image_metadata) +{ + // Possibly we should think about doing this in a switch_mode or + // something? + struct BlackLevelStatus status; + status.black_level_r = black_level_r_; + status.black_level_g = black_level_g_; + status.black_level_b = black_level_b_; + image_metadata->Set("black_level.status", status); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return new BlackLevel(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/black_level.hpp b/src/ipa/raspberrypi/controller/rpi/black_level.hpp new file mode 100644 index 000000000000..5d74c6da038f --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/black_level.hpp @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * black_level.hpp - black level control algorithm + */ +#pragma once + +#include "../algorithm.hpp" +#include "../black_level_status.h" + +// This is our implementation of the "black level algorithm". + +namespace RPi { + +class BlackLevel : public Algorithm +{ +public: + BlackLevel(Controller *controller); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + +private: + double black_level_r_; + double black_level_g_; + double black_level_b_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/ccm.cpp b/src/ipa/raspberrypi/controller/rpi/ccm.cpp new file mode 100644 index 000000000000..327cb71ce42a --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/ccm.cpp @@ -0,0 +1,163 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * ccm.cpp - CCM (colour correction matrix) control algorithm + */ + +#include "../awb_status.h" +#include "../ccm_status.h" +#include "../logging.hpp" +#include "../lux_status.h" +#include "../metadata.hpp" + +#include "ccm.hpp" + +using namespace RPi; + +// This algorithm selects a CCM (Colour Correction Matrix) according to the +// colour temperature estimated by AWB (interpolating between known matricies as +// necessary). Additionally the amount of colour saturation can be controlled +// both according to the current estimated lux level and according to a +// saturation setting that is exposed to applications. 
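+// Interpolation is linear in colour temperature: for a CT lying between two
+// calibrated entries the matrix used is
+//   lambda * ccms[i].ccm + (1 - lambda) * ccms[i - 1].ccm
+// with lambda = (ct - ccms[i - 1].ct) / (ccms[i].ct - ccms[i - 1].ct).
+// Saturation is applied by transforming the matrix into a YCbCr-like space
+// and scaling the two chroma rows (see apply_saturation() below).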
+ +#define NAME "rpi.ccm" + +Matrix::Matrix() +{ + memset(m, 0, sizeof(m)); +} +Matrix::Matrix(double m0, double m1, double m2, double m3, double m4, double m5, + double m6, double m7, double m8) +{ + m[0][0] = m0, m[0][1] = m1, m[0][2] = m2, m[1][0] = m3, m[1][1] = m4, + m[1][2] = m5, m[2][0] = m6, m[2][1] = m7, m[2][2] = m8; +} +void Matrix::Read(boost::property_tree::ptree const ¶ms) +{ + double *ptr = (double *)m; + int n = 0; + for (auto it = params.begin(); it != params.end(); it++) { + if (n++ == 9) + throw std::runtime_error("Ccm: too many values in CCM"); + *ptr++ = it->second.get_value(); + } + if (n < 9) + throw std::runtime_error("Ccm: too few values in CCM"); +} + +Ccm::Ccm(Controller *controller) + : CcmAlgorithm(controller), saturation_(1.0) {} + +char const *Ccm::Name() const +{ + return NAME; +} + +void Ccm::Read(boost::property_tree::ptree const ¶ms) +{ + if (params.get_child_optional("saturation")) + config_.saturation.Read(params.get_child("saturation")); + for (auto &p : params.get_child("ccms")) { + CtCcm ct_ccm; + ct_ccm.ct = p.second.get("ct"); + ct_ccm.ccm.Read(p.second.get_child("ccm")); + if (!config_.ccms.empty() && + ct_ccm.ct <= config_.ccms.back().ct) + throw std::runtime_error( + "Ccm: CCM not in increasing colour temperature order"); + config_.ccms.push_back(std::move(ct_ccm)); + } + if (config_.ccms.empty()) + throw std::runtime_error("Ccm: no CCMs specified"); +} + +void Ccm::SetSaturation(double saturation) +{ + saturation_ = saturation; +} + +void Ccm::Initialise() {} + +template +static bool get_locked(Metadata *metadata, std::string const &tag, T &value) +{ + T *ptr = metadata->GetLocked(tag); + if (ptr == nullptr) + return false; + value = *ptr; + return true; +} + +Matrix calculate_ccm(std::vector const &ccms, double ct) +{ + if (ct <= ccms.front().ct) + return ccms.front().ccm; + else if (ct >= ccms.back().ct) + return ccms.back().ccm; + else { + int i = 0; + for (; ct > ccms[i].ct; i++) + ; + double lambda = + (ct - ccms[i - 1].ct) / (ccms[i].ct - ccms[i - 1].ct); + return lambda * ccms[i].ccm + (1.0 - lambda) * ccms[i - 1].ccm; + } +} + +Matrix apply_saturation(Matrix const &ccm, double saturation) +{ + Matrix RGB2Y(0.299, 0.587, 0.114, -0.169, -0.331, 0.500, 0.500, -0.419, + -0.081); + Matrix Y2RGB(1.000, 0.000, 1.402, 1.000, -0.345, -0.714, 1.000, 1.771, + 0.000); + Matrix S(1, 0, 0, 0, saturation, 0, 0, 0, saturation); + return Y2RGB * S * RGB2Y * ccm; +} + +void Ccm::Prepare(Metadata *image_metadata) +{ + bool awb_ok = false, lux_ok = false; + struct AwbStatus awb = {}; + awb.temperature_K = 4000; // in case no metadata + struct LuxStatus lux = {}; + lux.lux = 400; // in case no metadata + { + // grab mutex just once to get everything + std::lock_guard lock(*image_metadata); + awb_ok = get_locked(image_metadata, "awb.status", awb); + lux_ok = get_locked(image_metadata, "lux.status", lux); + } + if (!awb_ok) + RPI_WARN("Ccm: no colour temperature found"); + if (!lux_ok) + RPI_WARN("Ccm: no lux value found"); + Matrix ccm = calculate_ccm(config_.ccms, awb.temperature_K); + double saturation = saturation_; + struct CcmStatus ccm_status; + ccm_status.saturation = saturation; + if (!config_.saturation.Empty()) + saturation *= config_.saturation.Eval( + config_.saturation.Domain().Clip(lux.lux)); + ccm = apply_saturation(ccm, saturation); + for (int j = 0; j < 3; j++) + for (int i = 0; i < 3; i++) + ccm_status.matrix[j * 3 + i] = + std::max(-8.0, std::min(7.9999, ccm.m[j][i])); + RPI_LOG("CCM: colour temperature " << awb.temperature_K << "K"); + 
RPI_LOG("CCM: " << ccm_status.matrix[0] << " " << ccm_status.matrix[1] + << " " << ccm_status.matrix[2] << " " + << ccm_status.matrix[3] << " " << ccm_status.matrix[4] + << " " << ccm_status.matrix[5] << " " + << ccm_status.matrix[6] << " " << ccm_status.matrix[7] + << " " << ccm_status.matrix[8]); + image_metadata->Set("ccm.status", ccm_status); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Ccm(controller); + ; +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/ccm.hpp b/src/ipa/raspberrypi/controller/rpi/ccm.hpp new file mode 100644 index 000000000000..f6f4dee1bb8f --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/ccm.hpp @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * ccm.hpp - CCM (colour correction matrix) control algorithm + */ +#pragma once + +#include +#include + +#include "../ccm_algorithm.hpp" +#include "../pwl.hpp" + +namespace RPi { + +// Algorithm to calculate colour matrix. Should be placed after AWB. + +struct Matrix { + Matrix(double m0, double m1, double m2, double m3, double m4, double m5, + double m6, double m7, double m8); + Matrix(); + double m[3][3]; + void Read(boost::property_tree::ptree const ¶ms); +}; +static inline Matrix operator*(double d, Matrix const &m) +{ + return Matrix(m.m[0][0] * d, m.m[0][1] * d, m.m[0][2] * d, + m.m[1][0] * d, m.m[1][1] * d, m.m[1][2] * d, + m.m[2][0] * d, m.m[2][1] * d, m.m[2][2] * d); +} +static inline Matrix operator*(Matrix const &m1, Matrix const &m2) +{ + Matrix m; + for (int i = 0; i < 3; i++) + for (int j = 0; j < 3; j++) + m.m[i][j] = m1.m[i][0] * m2.m[0][j] + + m1.m[i][1] * m2.m[1][j] + + m1.m[i][2] * m2.m[2][j]; + return m; +} +static inline Matrix operator+(Matrix const &m1, Matrix const &m2) +{ + Matrix m; + for (int i = 0; i < 3; i++) + for (int j = 0; j < 3; j++) + m.m[i][j] = m1.m[i][j] + m2.m[i][j]; + return m; +} + +struct CtCcm { + double ct; + Matrix ccm; +}; + +struct CcmConfig { + std::vector ccms; + Pwl saturation; +}; + +class Ccm : public CcmAlgorithm +{ +public: + Ccm(Controller *controller = NULL); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void SetSaturation(double saturation) override; + void Initialise() override; + void Prepare(Metadata *image_metadata) override; + +private: + CcmConfig config_; + std::atomic saturation_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/contrast.cpp b/src/ipa/raspberrypi/controller/rpi/contrast.cpp new file mode 100644 index 000000000000..e4967990c577 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/contrast.cpp @@ -0,0 +1,176 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * contrast.cpp - contrast (gamma) control algorithm + */ +#include + +#include "../contrast_status.h" +#include "../histogram.hpp" + +#include "contrast.hpp" + +using namespace RPi; + +// This is a very simple control algorithm which simply retrieves the results of +// AGC and AWB via their "status" metadata, and applies digital gain to the +// colour channels in accordance with those instructions. We take care never to +// apply less than unity gains, as that would cause fully saturated pixels to go +// off-white. 
+ +#define NAME "rpi.contrast" + +Contrast::Contrast(Controller *controller) + : ContrastAlgorithm(controller), brightness_(0.0), contrast_(1.0) +{ +} + +char const *Contrast::Name() const +{ + return NAME; +} + +void Contrast::Read(boost::property_tree::ptree const ¶ms) +{ + // enable adaptive enhancement by default + config_.ce_enable = params.get("ce_enable", 1); + // the point near the bottom of the histogram to move + config_.lo_histogram = params.get("lo_histogram", 0.01); + // where in the range to try and move it to + config_.lo_level = params.get("lo_level", 0.015); + // but don't move by more than this + config_.lo_max = params.get("lo_max", 500); + // equivalent values for the top of the histogram... + config_.hi_histogram = params.get("hi_histogram", 0.95); + config_.hi_level = params.get("hi_level", 0.95); + config_.hi_max = params.get("hi_max", 2000); + config_.gamma_curve.Read(params.get_child("gamma_curve")); +} + +void Contrast::SetBrightness(double brightness) +{ + brightness_ = brightness; +} + +void Contrast::SetContrast(double contrast) +{ + contrast_ = contrast; +} + +static void fill_in_status(ContrastStatus &status, double brightness, + double contrast, Pwl &gamma_curve) +{ + status.brightness = brightness; + status.contrast = contrast; + for (int i = 0; i < CONTRAST_NUM_POINTS - 1; i++) { + int x = i < 16 ? i * 1024 + : (i < 24 ? (i - 16) * 2048 + 16384 + : (i - 24) * 4096 + 32768); + status.points[i].x = x; + status.points[i].y = std::min(65535.0, gamma_curve.Eval(x)); + } + status.points[CONTRAST_NUM_POINTS - 1].x = 65535; + status.points[CONTRAST_NUM_POINTS - 1].y = 65535; +} + +void Contrast::Initialise() +{ + // Fill in some default values as Prepare will run before Process gets + // called. + fill_in_status(status_, brightness_, contrast_, config_.gamma_curve); +} + +void Contrast::Prepare(Metadata *image_metadata) +{ + std::unique_lock lock(mutex_); + image_metadata->Set("contrast.status", status_); +} + +Pwl compute_stretch_curve(Histogram const &histogram, + ContrastConfig const &config) +{ + Pwl enhance; + enhance.Append(0, 0); + // If the start of the histogram is rather empty, try to pull it down a + // bit. + double hist_lo = histogram.Quantile(config.lo_histogram) * + (65536 / NUM_HISTOGRAM_BINS); + double level_lo = config.lo_level * 65536; + RPI_LOG("Move histogram point " << hist_lo << " to " << level_lo); + hist_lo = std::max( + level_lo, + std::min(65535.0, std::min(hist_lo, level_lo + config.lo_max))); + RPI_LOG("Final values " << hist_lo << " -> " << level_lo); + enhance.Append(hist_lo, level_lo); + // Keep the mid-point (median) in the same place, though, to limit the + // apparent amount of global brightness shift. + double mid = histogram.Quantile(0.5) * (65536 / NUM_HISTOGRAM_BINS); + enhance.Append(mid, mid); + + // If the top to the histogram is empty, try to pull the pixel values + // there up. 
+ double hist_hi = histogram.Quantile(config.hi_histogram) * + (65536 / NUM_HISTOGRAM_BINS); + double level_hi = config.hi_level * 65536; + RPI_LOG("Move histogram point " << hist_hi << " to " << level_hi); + hist_hi = std::min( + level_hi, + std::max(0.0, std::max(hist_hi, level_hi - config.hi_max))); + RPI_LOG("Final values " << hist_hi << " -> " << level_hi); + enhance.Append(hist_hi, level_hi); + enhance.Append(65535, 65535); + return enhance; +} + +Pwl apply_manual_contrast(Pwl const &gamma_curve, double brightness, + double contrast) +{ + Pwl new_gamma_curve; + RPI_LOG("Manual brightness " << brightness << " contrast " << contrast); + gamma_curve.Map([&](double x, double y) { + new_gamma_curve.Append( + x, std::max(0.0, std::min(65535.0, + (y - 32768) * contrast + + 32768 + brightness))); + }); + return new_gamma_curve; +} + +void Contrast::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + (void)image_metadata; + double brightness = brightness_, contrast = contrast_; + Histogram histogram(stats->hist[0].g_hist, NUM_HISTOGRAM_BINS); + // We look at the histogram and adjust the gamma curve in the following + // ways: 1. Adjust the gamma curve so as to pull the start of the + // histogram down, and possibly push the end up. + Pwl gamma_curve = config_.gamma_curve; + if (config_.ce_enable) { + if (config_.lo_max != 0 || config_.hi_max != 0) + gamma_curve = compute_stretch_curve(histogram, config_) + .Compose(gamma_curve); + // We could apply other adjustments (e.g. partial equalisation) + // based on the histogram...? + } + // 2. Finally apply any manually selected brightness/contrast + // adjustment. + if (brightness != 0 || contrast != 1.0) + gamma_curve = apply_manual_contrast(gamma_curve, brightness, + contrast); + // And fill in the status for output. Use more points towards the bottom + // of the curve. + ContrastStatus status; + fill_in_status(status, brightness, contrast, gamma_curve); + { + std::unique_lock lock(mutex_); + status_ = status; + } +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Contrast(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/contrast.hpp b/src/ipa/raspberrypi/controller/rpi/contrast.hpp new file mode 100644 index 000000000000..2e38a762feff --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/contrast.hpp @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * contrast.hpp - contrast (gamma) control algorithm + */ +#pragma once + +#include +#include + +#include "../contrast_algorithm.hpp" +#include "../pwl.hpp" + +namespace RPi { + +// Back End algorithm to appaly correct digital gain. Should be placed after +// Back End AWB. 
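
For reference, the manual adjustment performed by apply_manual_contrast() above is an affine remap about the 16-bit mid-point; a standalone sketch, not part of the patch:

	#include <algorithm>

	static double manual_contrast(double y, double brightness, double contrast)
	{
		/* Same expression as apply_manual_contrast() in contrast.cpp. */
		return std::max(0.0, std::min(65535.0,
					      (y - 32768) * contrast + 32768 + brightness));
	}

	/* e.g. manual_contrast(40000, 0, 1.2) is roughly 41446: values above the
	 * mid-point move further up, values below move further down. */
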
+ +struct ContrastConfig { + bool ce_enable; + double lo_histogram; + double lo_level; + double lo_max; + double hi_histogram; + double hi_level; + double hi_max; + Pwl gamma_curve; +}; + +class Contrast : public ContrastAlgorithm +{ +public: + Contrast(Controller *controller = NULL); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void SetBrightness(double brightness) override; + void SetContrast(double contrast) override; + void Initialise() override; + void Prepare(Metadata *image_metadata) override; + void Process(StatisticsPtr &stats, Metadata *image_metadata) override; + +private: + ContrastConfig config_; + std::atomic brightness_; + std::atomic contrast_; + ContrastStatus status_; + std::mutex mutex_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/dpc.cpp b/src/ipa/raspberrypi/controller/rpi/dpc.cpp new file mode 100644 index 000000000000..d31fae977367 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/dpc.cpp @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * dpc.cpp - DPC (defective pixel correction) control algorithm + */ + +#include "../logging.hpp" +#include "dpc.hpp" + +using namespace RPi; + +// We use the lux status so that we can apply stronger settings in darkness (if +// necessary). + +#define NAME "rpi.dpc" + +Dpc::Dpc(Controller *controller) + : Algorithm(controller) +{ +} + +char const *Dpc::Name() const +{ + return NAME; +} + +void Dpc::Read(boost::property_tree::ptree const ¶ms) +{ + config_.strength = params.get("strength", 1); + if (config_.strength < 0 || config_.strength > 2) + throw std::runtime_error("Dpc: bad strength value"); +} + +void Dpc::Prepare(Metadata *image_metadata) +{ + DpcStatus dpc_status = {}; + // Should we vary this with lux level or analogue gain? TBD. + dpc_status.strength = config_.strength; + RPI_LOG("Dpc: strength " << dpc_status.strength); + image_metadata->Set("dpc.status", dpc_status); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Dpc(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/dpc.hpp b/src/ipa/raspberrypi/controller/rpi/dpc.hpp new file mode 100644 index 000000000000..9fb72867a1f0 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/dpc.hpp @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * dpc.hpp - DPC (defective pixel correction) control algorithm + */ +#pragma once + +#include "../algorithm.hpp" +#include "../dpc_status.h" + +namespace RPi { + +// Back End algorithm to apply appropriate GEQ settings. 
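
All of these algorithms pull their tuning from a boost::property_tree parsed from the sensor JSON file. A minimal standalone sketch of the lookup Dpc::Read() performs, with a hypothetical inline JSON fragment and the template argument written out:

	#include <sstream>
	#include <stdexcept>

	#include <boost/property_tree/json_parser.hpp>
	#include <boost/property_tree/ptree.hpp>

	int main()
	{
		/* Hypothetical fragment; real values come from the sensor JSON
		 * files added later in this patch. */
		std::stringstream json("{ \"strength\": 1 }");
		boost::property_tree::ptree params;
		boost::property_tree::read_json(json, params);

		/* Same lookup as Dpc::Read(): default 1, only 0-2 accepted. */
		int strength = params.get<int>("strength", 1);
		if (strength < 0 || strength > 2)
			throw std::runtime_error("Dpc: bad strength value");
		return 0;
	}
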
+ +struct DpcConfig { + int strength; +}; + +class Dpc : public Algorithm +{ +public: + Dpc(Controller *controller); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + +private: + DpcConfig config_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/geq.cpp b/src/ipa/raspberrypi/controller/rpi/geq.cpp new file mode 100644 index 000000000000..ee0cb95d2389 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/geq.cpp @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * geq.cpp - GEQ (green equalisation) control algorithm + */ + +#include "../device_status.h" +#include "../logging.hpp" +#include "../lux_status.h" +#include "../pwl.hpp" + +#include "geq.hpp" + +using namespace RPi; + +// We use the lux status so that we can apply stronger settings in darkness (if +// necessary). + +#define NAME "rpi.geq" + +Geq::Geq(Controller *controller) + : Algorithm(controller) +{ +} + +char const *Geq::Name() const +{ + return NAME; +} + +void Geq::Read(boost::property_tree::ptree const ¶ms) +{ + config_.offset = params.get("offset", 0); + config_.slope = params.get("slope", 0.0); + if (config_.slope < 0.0 || config_.slope >= 1.0) + throw std::runtime_error("Geq: bad slope value"); + if (params.get_child_optional("strength")) + config_.strength.Read(params.get_child("strength")); +} + +void Geq::Prepare(Metadata *image_metadata) +{ + LuxStatus lux_status = {}; + lux_status.lux = 400; + if (image_metadata->Get("lux.status", lux_status)) + RPI_WARN("Geq: no lux data found"); + DeviceStatus device_status = {}; + device_status.analogue_gain = 1.0; // in case not found + if (image_metadata->Get("device.status", device_status)) + RPI_WARN("Geq: no device metadata - use analogue gain of 1x"); + GeqStatus geq_status = {}; + double strength = + config_.strength.Empty() + ? 1.0 + : config_.strength.Eval(config_.strength.Domain().Clip( + lux_status.lux)); + strength *= device_status.analogue_gain; + double offset = config_.offset * strength; + double slope = config_.slope * strength; + geq_status.offset = std::min(65535.0, std::max(0.0, offset)); + geq_status.slope = std::min(.99999, std::max(0.0, slope)); + RPI_LOG("Geq: offset " << geq_status.offset << " slope " + << geq_status.slope << " (analogue gain " + << device_status.analogue_gain << " lux " + << lux_status.lux << ")"); + image_metadata->Set("geq.status", geq_status); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Geq(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/geq.hpp b/src/ipa/raspberrypi/controller/rpi/geq.hpp new file mode 100644 index 000000000000..7d4bd38d5070 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/geq.hpp @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * geq.hpp - GEQ (green equalisation) control algorithm + */ +#pragma once + +#include "../algorithm.hpp" +#include "../geq_status.h" + +namespace RPi { + +// Back End algorithm to apply appropriate GEQ settings. 
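
As a worked example of the scaling in Geq::Prepare() above, not part of the patch and using the imx219 tuning values that appear later in the series:

	#include <algorithm>

	void geq_example()
	{
		double offset = 204, slope = 0.01633; /* "rpi.geq" values from imx219.json */
		double strength = 1.0;                /* no "strength" curve, so 1.0 */
		double analogue_gain = 2.0;           /* assumed device.status value */

		strength *= analogue_gain;
		double geq_offset = std::min(65535.0, std::max(0.0, offset * strength)); /* 408 */
		double geq_slope = std::min(.99999, std::max(0.0, slope * strength));    /* ~0.0327 */
		(void)geq_offset;
		(void)geq_slope;
	}
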
+ +struct GeqConfig { + uint16_t offset; + double slope; + Pwl strength; // lux to strength factor +}; + +class Geq : public Algorithm +{ +public: + Geq(Controller *controller); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + +private: + GeqConfig config_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/lux.cpp b/src/ipa/raspberrypi/controller/rpi/lux.cpp new file mode 100644 index 000000000000..154db153b119 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/lux.cpp @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * lux.cpp - Lux control algorithm + */ +#include + +#include "linux/bcm2835-isp.h" + +#include "../device_status.h" +#include "../logging.hpp" + +#include "lux.hpp" + +using namespace RPi; + +#define NAME "rpi.lux" + +Lux::Lux(Controller *controller) + : Algorithm(controller) +{ + // Put in some defaults as there will be no meaningful values until + // Process has run. + status_.aperture = 1.0; + status_.lux = 400; +} + +char const *Lux::Name() const +{ + return NAME; +} + +void Lux::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG(Name()); + reference_shutter_speed_ = + params.get("reference_shutter_speed"); + reference_gain_ = params.get("reference_gain"); + reference_aperture_ = params.get("reference_aperture", 1.0); + reference_Y_ = params.get("reference_Y"); + reference_lux_ = params.get("reference_lux"); + current_aperture_ = reference_aperture_; +} + +void Lux::Prepare(Metadata *image_metadata) +{ + std::unique_lock lock(mutex_); + image_metadata->Set("lux.status", status_); +} + +void Lux::Process(StatisticsPtr &stats, Metadata *image_metadata) +{ + // set some initial values to shut the compiler up + DeviceStatus device_status = + { .shutter_speed = 1.0, + .analogue_gain = 1.0, + .lens_position = 0.0, + .aperture = 0.0, + .flash_intensity = 0.0 }; + if (image_metadata->Get("device.status", device_status) == 0) { + double current_gain = device_status.analogue_gain; + double current_shutter_speed = device_status.shutter_speed; + double current_aperture = device_status.aperture; + if (current_aperture == 0) + current_aperture = current_aperture_; + uint64_t sum = 0; + uint32_t num = 0; + uint32_t *bin = stats->hist[0].g_hist; + const int num_bins = sizeof(stats->hist[0].g_hist) / + sizeof(stats->hist[0].g_hist[0]); + for (int i = 0; i < num_bins; i++) + sum += bin[i] * (uint64_t)i, num += bin[i]; + // add .5 to reflect the mid-points of bins + double current_Y = sum / (double)num + .5; + double gain_ratio = reference_gain_ / current_gain; + double shutter_speed_ratio = + reference_shutter_speed_ / current_shutter_speed; + double aperture_ratio = reference_aperture_ / current_aperture; + double Y_ratio = current_Y * (65536 / num_bins) / reference_Y_; + double estimated_lux = shutter_speed_ratio * gain_ratio * + aperture_ratio * aperture_ratio * + Y_ratio * reference_lux_; + LuxStatus status; + status.lux = estimated_lux; + status.aperture = current_aperture; + RPI_LOG(Name() << ": estimated lux " << estimated_lux); + { + std::unique_lock lock(mutex_); + status_ = status; + } + // Overwrite the metadata here as well, so that downstream + // algorithms get the latest value. + image_metadata->Set("lux.status", status); + } else + RPI_WARN(Name() << ": no device metadata"); +} + +// Register algorithm with the system. 
+static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Lux(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/lux.hpp b/src/ipa/raspberrypi/controller/rpi/lux.hpp new file mode 100644 index 000000000000..eb9354091452 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/lux.hpp @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * lux.hpp - Lux control algorithm + */ +#pragma once + +#include +#include + +#include "../lux_status.h" +#include "../algorithm.hpp" + +// This is our implementation of the "lux control algorithm". + +namespace RPi { + +class Lux : public Algorithm +{ +public: + Lux(Controller *controller); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + void Process(StatisticsPtr &stats, Metadata *image_metadata) override; + void SetCurrentAperture(double aperture); + +private: + // These values define the conditions of the reference image, against + // which we compare the new image. + double reference_shutter_speed_; // in micro-seconds + double reference_gain_; + double reference_aperture_; // units of 1/f + double reference_Y_; // out of 65536 + double reference_lux_; + std::atomic current_aperture_; + LuxStatus status_; + std::mutex mutex_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/noise.cpp b/src/ipa/raspberrypi/controller/rpi/noise.cpp new file mode 100644 index 000000000000..2209d791315e --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/noise.cpp @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * noise.cpp - Noise control algorithm + */ + +#include + +#include "../device_status.h" +#include "../logging.hpp" +#include "../noise_status.h" + +#include "noise.hpp" + +using namespace RPi; + +#define NAME "rpi.noise" + +Noise::Noise(Controller *controller) + : Algorithm(controller), mode_factor_(1.0) +{ +} + +char const *Noise::Name() const +{ + return NAME; +} + +void Noise::SwitchMode(CameraMode const &camera_mode) +{ + // For example, we would expect a 2x2 binned mode to have a "noise + // factor" of sqrt(2x2) = 2. (can't be less than one, right?) + mode_factor_ = std::max(1.0, camera_mode.noise_factor); +} + +void Noise::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG(Name()); + reference_constant_ = params.get("reference_constant"); + reference_slope_ = params.get("reference_slope"); +} + +void Noise::Prepare(Metadata *image_metadata) +{ + struct DeviceStatus device_status; + device_status.analogue_gain = 1.0; // keep compiler calm + if (image_metadata->Get("device.status", device_status) == 0) { + // There is a slight question as to exactly how the noise + // profile, specifically the constant part of it, scales. For + // now we assume it all scales the same, and we'll revisit this + // if it proves substantially wrong. NOTE: we may also want to + // make some adjustments based on the camera mode (such as + // binning), if we knew how to discover it... 
+ double factor = sqrt(device_status.analogue_gain) / mode_factor_; + struct NoiseStatus status; + status.noise_constant = reference_constant_ * factor; + status.noise_slope = reference_slope_ * factor; + image_metadata->Set("noise.status", status); + RPI_LOG(Name() << ": constant " << status.noise_constant + << " slope " << status.noise_slope); + } else + RPI_WARN(Name() << " no metadata"); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return new Noise(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/noise.hpp b/src/ipa/raspberrypi/controller/rpi/noise.hpp new file mode 100644 index 000000000000..51d46a3dad09 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/noise.hpp @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * noise.hpp - Noise control algorithm + */ +#pragma once + +#include "../algorithm.hpp" +#include "../noise_status.h" + +// This is our implementation of the "noise algorithm". + +namespace RPi { + +class Noise : public Algorithm +{ +public: + Noise(Controller *controller); + char const *Name() const override; + void SwitchMode(CameraMode const &camera_mode) override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + +private: + // the noise profile for analogue gain of 1.0 + double reference_constant_; + double reference_slope_; + std::atomic mode_factor_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/sdn.cpp b/src/ipa/raspberrypi/controller/rpi/sdn.cpp new file mode 100644 index 000000000000..28d9d983da0b --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/sdn.cpp @@ -0,0 +1,63 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sdn.cpp - SDN (spatial denoise) control algorithm + */ + +#include "../noise_status.h" +#include "../sdn_status.h" + +#include "sdn.hpp" + +using namespace RPi; + +// Calculate settings for the spatial denoise block using the noise profile in +// the image metadata. + +#define NAME "rpi.sdn" + +Sdn::Sdn(Controller *controller) + : Algorithm(controller) +{ +} + +char const *Sdn::Name() const +{ + return NAME; +} + +void Sdn::Read(boost::property_tree::ptree const ¶ms) +{ + deviation_ = params.get("deviation", 3.2); + strength_ = params.get("strength", 0.75); +} + +void Sdn::Initialise() {} + +void Sdn::Prepare(Metadata *image_metadata) +{ + struct NoiseStatus noise_status = {}; + noise_status.noise_slope = 3.0; // in case no metadata + if (image_metadata->Get("noise.status", noise_status) != 0) + RPI_WARN("Sdn: no noise profile found"); + RPI_LOG("Noise profile: constant " << noise_status.noise_constant + << " slope " + << noise_status.noise_slope); + struct SdnStatus status; + status.noise_constant = noise_status.noise_constant * deviation_; + status.noise_slope = noise_status.noise_slope * deviation_; + status.strength = strength_; + image_metadata->Set("sdn.status", status); + RPI_LOG("Sdn: programmed constant " << status.noise_constant + << " slope " << status.noise_slope + << " strength " + << status.strength); +} + +// Register algorithm with the system. 
+static Algorithm *Create(Controller *controller) +{ + return (Algorithm *)new Sdn(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/sdn.hpp b/src/ipa/raspberrypi/controller/rpi/sdn.hpp new file mode 100644 index 000000000000..d48aab7eaf95 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/sdn.hpp @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sdn.hpp - SDN (spatial denoise) control algorithm + */ +#pragma once + +#include "../algorithm.hpp" + +namespace RPi { + +// Algorithm to calculate correct spatial denoise (SDN) settings. + +class Sdn : public Algorithm +{ +public: + Sdn(Controller *controller = NULL); + char const *Name() const override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Initialise() override; + void Prepare(Metadata *image_metadata) override; + +private: + double deviation_; + double strength_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.cpp b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp new file mode 100644 index 000000000000..1f07bb626500 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/sharpen.cpp @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sharpen.cpp - sharpening control algorithm + */ + +#include + +#include "../logging.hpp" +#include "../sharpen_status.h" + +#include "sharpen.hpp" + +using namespace RPi; + +#define NAME "rpi.sharpen" + +Sharpen::Sharpen(Controller *controller) + : Algorithm(controller) +{ +} + +char const *Sharpen::Name() const +{ + return NAME; +} + +void Sharpen::SwitchMode(CameraMode const &camera_mode) +{ + // can't be less than one, right? + mode_factor_ = std::max(1.0, camera_mode.noise_factor); +} + +void Sharpen::Read(boost::property_tree::ptree const ¶ms) +{ + RPI_LOG(Name()); + threshold_ = params.get("threshold", 1.0); + strength_ = params.get("strength", 1.0); + limit_ = params.get("limit", 1.0); +} + +void Sharpen::Prepare(Metadata *image_metadata) +{ + double mode_factor = mode_factor_; + struct SharpenStatus status; + // Binned modes seem to need the sharpening toned down with this + // pipeline. + status.threshold = threshold_ * mode_factor; + status.strength = strength_ / mode_factor; + status.limit = limit_ / mode_factor; + image_metadata->Set("sharpen.status", status); +} + +// Register algorithm with the system. +static Algorithm *Create(Controller *controller) +{ + return new Sharpen(controller); +} +static RegisterAlgorithm reg(NAME, &Create); diff --git a/src/ipa/raspberrypi/controller/rpi/sharpen.hpp b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp new file mode 100644 index 000000000000..3b0d6801d281 --- /dev/null +++ b/src/ipa/raspberrypi/controller/rpi/sharpen.hpp @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sharpen.hpp - sharpening control algorithm + */ +#pragma once + +#include "../algorithm.hpp" +#include "../sharpen_status.h" + +// This is our implementation of the "sharpen algorithm". 
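
A quick illustration of the mode factor handling in sharpen.cpp above, with assumed numbers and not part of the patch: for a 2x2 binned mode the noise factor is expected to be sqrt(2x2) = 2, so sharpening becomes both harder to trigger and weaker.

	void sharpen_mode_example()
	{
		double mode_factor = 2.0;              /* assumed 2x2 binned mode */
		double threshold = 1.0 * mode_factor;  /* 2.0: harder to trigger */
		double strength = 1.0 / mode_factor;   /* 0.5: gentler response */
		double limit = 1.0 / mode_factor;      /* 0.5: lower ceiling */
		(void)threshold; (void)strength; (void)limit;
	}
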
+ +namespace RPi { + +class Sharpen : public Algorithm +{ +public: + Sharpen(Controller *controller); + char const *Name() const override; + void SwitchMode(CameraMode const &camera_mode) override; + void Read(boost::property_tree::ptree const ¶ms) override; + void Prepare(Metadata *image_metadata) override; + +private: + double threshold_; + double strength_; + double limit_; + std::atomic mode_factor_; +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/controller/sdn_status.h b/src/ipa/raspberrypi/controller/sdn_status.h new file mode 100644 index 000000000000..871e0b62af1f --- /dev/null +++ b/src/ipa/raspberrypi/controller/sdn_status.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sdn_status.h - SDN (spatial denoise) control algorithm status + */ +#pragma once + +// This stores the parameters required for Spatial Denoise (SDN). + +#ifdef __cplusplus +extern "C" { +#endif + +struct SdnStatus { + double noise_constant; + double noise_slope; + double strength; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/controller/sharpen_status.h b/src/ipa/raspberrypi/controller/sharpen_status.h new file mode 100644 index 000000000000..6de80f407c74 --- /dev/null +++ b/src/ipa/raspberrypi/controller/sharpen_status.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * sharpen_status.h - Sharpen control algorithm status + */ +#pragma once + +// The "sharpen" algorithm stores the strength to use. + +#ifdef __cplusplus +extern "C" { +#endif + +struct SharpenStatus { + // controls the smallest level of detail (or noise!) that sharpening will pick up + double threshold; + // the rate at which the sharpening response ramps once above the threshold + double strength; + // upper limit of the allowed sharpening response + double limit; +}; + +#ifdef __cplusplus +} +#endif diff --git a/src/ipa/raspberrypi/data/imx219.json b/src/ipa/raspberrypi/data/imx219.json new file mode 100644 index 000000000000..ce7ff36faf09 --- /dev/null +++ b/src/ipa/raspberrypi/data/imx219.json @@ -0,0 +1,401 @@ +{ + "rpi.black_level": + { + "black_level": 4096 + }, + "rpi.dpc": + { + + }, + "rpi.lux": + { + "reference_shutter_speed": 27685, + "reference_gain": 1.0, + "reference_aperture": 1.0, + "reference_lux": 998, + "reference_Y": 12744 + }, + "rpi.noise": + { + "reference_constant": 0, + "reference_slope": 3.67 + }, + "rpi.geq": + { + "offset": 204, + "slope": 0.01633 + }, + "rpi.sdn": + { + + }, + "rpi.awb": + { + "priors": + [ + { + "lux": 0, "prior": + [ + 2000, 1.0, 3000, 0.0, 13000, 0.0 + ] + }, + { + "lux": 800, "prior": + [ + 2000, 0.0, 6000, 2.0, 13000, 2.0 + ] + }, + { + "lux": 1500, "prior": + [ + 2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0 + ] + } + ], + "modes": + { + "auto": + { + "lo": 2500, + "hi": 8000 + }, + "incandescent": + { + "lo": 2500, + "hi": 3000 + }, + "tungsten": + { + "lo": 3000, + "hi": 3500 + }, + "fluorescent": + { + "lo": 4000, + "hi": 4700 + }, + "indoor": + { + "lo": 3000, + "hi": 5000 + }, + "daylight": + { + "lo": 5500, + "hi": 6500 + }, + "cloudy": + { + "lo": 7000, + "hi": 8600 + } + }, + "bayes": 1, + "ct_curve": + [ + 2498.0, 0.9309, 0.3599, 2911.0, 0.8682, 0.4283, 2919.0, 0.8358, 0.4621, 3627.0, 0.7646, 0.5327, 4600.0, 0.6079, 0.6721, 5716.0, + 0.5712, 0.7017, 8575.0, 0.4331, 0.8037 + ], + "sensitivity_r": 1.05, + "sensitivity_b": 1.05, + "transverse_pos": 0.04791, + "transverse_neg": 
0.04881 + }, + "rpi.agc": + { + "metering_modes": + { + "centre-weighted": + { + "weights": + [ + 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 + ] + }, + "spot": + { + "weights": + [ + 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 + ] + }, + "matrix": + { + "weights": + [ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 + ] + } + }, + "exposure_modes": + { + "normal": + { + "shutter": + [ + 100, 10000, 30000, 60000, 120000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + }, + "sport": + { + "shutter": + [ + 100, 5000, 10000, 20000, 120000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + } + }, + "constraint_modes": + { + "normal": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.5, 1000, 0.5 + ] + } + ], + "highlight": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.5, 1000, 0.5 + ] + }, + { + "bound": "UPPER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.8, 1000, 0.8 + ] + } + ], + "shadows": + [ + { + "bound": "LOWER", "q_lo": 0.0, "q_hi": 0.5, "y_target": + [ + 0, 0.17, 1000, 0.17 + ] + } + ] + }, + "y_target": + [ + 0, 0.16, 1000, 0.165, 10000, 0.17 + ] + }, + "rpi.alsc": + { + "omega": 1.3, + "n_iter": 100, + "luminance_strength": 0.7, + "calibrations_Cr": + [ + { + "ct": 3000, "table": + [ + 1.487, 1.481, 1.481, 1.445, 1.389, 1.327, 1.307, 1.307, 1.307, 1.309, 1.341, 1.405, 1.458, 1.494, 1.494, 1.497, + 1.491, 1.481, 1.448, 1.397, 1.331, 1.275, 1.243, 1.229, 1.229, 1.249, 1.287, 1.349, 1.409, 1.463, 1.494, 1.497, + 1.491, 1.469, 1.405, 1.331, 1.275, 1.217, 1.183, 1.172, 1.172, 1.191, 1.231, 1.287, 1.349, 1.424, 1.484, 1.499, + 1.487, 1.444, 1.363, 1.283, 1.217, 1.183, 1.148, 1.138, 1.138, 1.159, 1.191, 1.231, 1.302, 1.385, 1.461, 1.492, + 1.481, 1.423, 1.334, 1.253, 1.189, 1.148, 1.135, 1.119, 1.123, 1.137, 1.159, 1.203, 1.272, 1.358, 1.442, 1.488, + 1.479, 1.413, 1.321, 1.236, 1.176, 1.139, 1.118, 1.114, 1.116, 1.123, 1.149, 1.192, 1.258, 1.344, 1.432, 1.487, + 1.479, 1.413, 1.321, 1.236, 1.176, 1.139, 1.116, 1.114, 1.115, 1.123, 1.149, 1.192, 1.258, 1.344, 1.432, 1.487, + 1.479, 1.425, 1.336, 1.251, 1.189, 1.149, 1.136, 1.118, 1.121, 1.138, 1.158, 1.206, 1.275, 1.358, 1.443, 1.488, + 1.488, 1.448, 1.368, 1.285, 1.219, 1.189, 1.149, 1.139, 1.139, 1.158, 1.195, 1.235, 1.307, 1.387, 1.462, 1.493, + 1.496, 1.475, 1.411, 1.337, 1.284, 1.219, 1.189, 1.176, 1.176, 1.195, 1.235, 1.296, 1.356, 1.429, 1.487, 1.501, + 1.495, 1.489, 1.458, 1.407, 1.337, 1.287, 1.253, 1.239, 1.239, 1.259, 1.296, 1.356, 1.419, 1.472, 1.499, 1.499, + 1.494, 1.489, 1.489, 1.453, 1.398, 1.336, 1.317, 1.317, 1.317, 1.321, 1.351, 1.416, 1.467, 1.501, 1.501, 1.499 + ] + }, + { + "ct": 3850, "table": + [ + 1.694, 1.688, 1.688, 1.649, 1.588, 1.518, 1.495, 1.495, 1.495, 1.497, 1.532, 1.602, 1.659, 1.698, 1.698, 1.703, + 1.698, 1.688, 1.653, 1.597, 1.525, 1.464, 1.429, 1.413, 1.413, 1.437, 1.476, 1.542, 1.606, 1.665, 1.698, 1.703, + 1.697, 1.673, 1.605, 1.525, 1.464, 1.401, 1.369, 1.354, 1.354, 1.377, 1.417, 1.476, 1.542, 1.623, 1.687, 1.705, + 1.692, 1.646, 1.561, 1.472, 1.401, 1.368, 1.337, 1.323, 1.324, 1.348, 1.377, 1.417, 1.492, 1.583, 1.661, 1.697, + 1.686, 1.625, 1.528, 1.439, 1.372, 1.337, 1.321, 1.311, 1.316, 1.324, 1.348, 1.389, 1.461, 1.553, 1.642, 1.694, + 1.684, 1.613, 1.514, 1.423, 1.359, 1.328, 1.311, 1.306, 1.306, 1.316, 1.339, 1.378, 1.446, 1.541, 1.633, 1.693, + 1.684, 1.613, 1.514, 1.423, 1.359, 1.328, 1.311, 1.305, 1.305, 1.316, 1.339, 1.378, 1.446, 1.541, 1.633, 1.693, + 1.685, 1.624, 1.529, 1.438, 1.372, 1.336, 1.324, 1.309, 1.314, 1.323, 
1.348, 1.392, 1.462, 1.555, 1.646, 1.694, + 1.692, 1.648, 1.561, 1.473, 1.403, 1.372, 1.336, 1.324, 1.324, 1.348, 1.378, 1.423, 1.495, 1.585, 1.667, 1.701, + 1.701, 1.677, 1.608, 1.527, 1.471, 1.403, 1.375, 1.359, 1.359, 1.378, 1.423, 1.488, 1.549, 1.631, 1.694, 1.709, + 1.702, 1.694, 1.656, 1.601, 1.527, 1.473, 1.441, 1.424, 1.424, 1.443, 1.488, 1.549, 1.621, 1.678, 1.706, 1.707, + 1.699, 1.694, 1.694, 1.654, 1.593, 1.525, 1.508, 1.508, 1.508, 1.509, 1.546, 1.614, 1.674, 1.708, 1.708, 1.707 + ] + }, + { + "ct": 6000, "table": + [ + 2.179, 2.176, 2.176, 2.125, 2.048, 1.975, 1.955, 1.954, 1.954, 1.956, 1.993, 2.071, 2.141, 2.184, 2.185, 2.188, + 2.189, 2.176, 2.128, 2.063, 1.973, 1.908, 1.872, 1.856, 1.856, 1.876, 1.922, 1.999, 2.081, 2.144, 2.184, 2.192, + 2.187, 2.152, 2.068, 1.973, 1.907, 1.831, 1.797, 1.786, 1.786, 1.804, 1.853, 1.922, 1.999, 2.089, 2.166, 2.191, + 2.173, 2.117, 2.013, 1.908, 1.831, 1.791, 1.755, 1.749, 1.749, 1.767, 1.804, 1.853, 1.939, 2.041, 2.135, 2.181, + 2.166, 2.089, 1.975, 1.869, 1.792, 1.755, 1.741, 1.731, 1.734, 1.749, 1.767, 1.818, 1.903, 2.005, 2.111, 2.173, + 2.165, 2.074, 1.956, 1.849, 1.777, 1.742, 1.729, 1.725, 1.729, 1.734, 1.758, 1.804, 1.884, 1.991, 2.099, 2.172, + 2.165, 2.074, 1.956, 1.849, 1.777, 1.742, 1.727, 1.724, 1.725, 1.734, 1.758, 1.804, 1.884, 1.991, 2.099, 2.172, + 2.166, 2.085, 1.975, 1.869, 1.791, 1.755, 1.741, 1.729, 1.733, 1.749, 1.769, 1.819, 1.904, 2.009, 2.114, 2.174, + 2.174, 2.118, 2.015, 1.913, 1.831, 1.791, 1.755, 1.749, 1.749, 1.769, 1.811, 1.855, 1.943, 2.047, 2.139, 2.183, + 2.187, 2.151, 2.072, 1.979, 1.911, 1.831, 1.801, 1.791, 1.791, 1.811, 1.855, 1.933, 2.006, 2.101, 2.173, 2.197, + 2.189, 2.178, 2.132, 2.069, 1.979, 1.913, 1.879, 1.867, 1.867, 1.891, 1.933, 2.006, 2.091, 2.156, 2.195, 2.197, + 2.181, 2.179, 2.178, 2.131, 2.057, 1.981, 1.965, 1.965, 1.965, 1.969, 1.999, 2.083, 2.153, 2.197, 2.197, 2.196 + ] + } + ], + "calibrations_Cb": + [ + { + "ct": 3000, "table": + [ + 1.967, 1.961, 1.955, 1.953, 1.954, 1.957, 1.961, 1.963, 1.963, 1.961, 1.959, 1.957, 1.954, 1.951, 1.951, 1.955, + 1.961, 1.959, 1.957, 1.956, 1.962, 1.967, 1.975, 1.979, 1.979, 1.975, 1.971, 1.967, 1.957, 1.952, 1.951, 1.951, + 1.959, 1.959, 1.959, 1.966, 1.976, 1.989, 1.999, 2.004, 2.003, 1.997, 1.991, 1.981, 1.967, 1.956, 1.951, 1.951, + 1.959, 1.962, 1.967, 1.978, 1.993, 2.009, 2.021, 2.028, 2.026, 2.021, 2.011, 1.995, 1.981, 1.964, 1.953, 1.951, + 1.961, 1.965, 1.977, 1.993, 2.009, 2.023, 2.041, 2.047, 2.047, 2.037, 2.024, 2.011, 1.995, 1.975, 1.958, 1.953, + 1.963, 1.968, 1.981, 2.001, 2.019, 2.039, 2.046, 2.052, 2.052, 2.051, 2.035, 2.021, 2.001, 1.978, 1.959, 1.955, + 1.961, 1.966, 1.981, 2.001, 2.019, 2.038, 2.043, 2.051, 2.052, 2.042, 2.034, 2.019, 2.001, 1.978, 1.959, 1.954, + 1.957, 1.961, 1.972, 1.989, 2.003, 2.021, 2.038, 2.039, 2.039, 2.034, 2.019, 2.004, 1.988, 1.971, 1.954, 1.949, + 1.952, 1.953, 1.959, 1.972, 1.989, 2.003, 2.016, 2.019, 2.019, 2.014, 2.003, 1.988, 1.971, 1.955, 1.948, 1.947, + 1.949, 1.948, 1.949, 1.957, 1.971, 1.978, 1.991, 1.994, 1.994, 1.989, 1.979, 1.967, 1.954, 1.946, 1.947, 1.947, + 1.949, 1.946, 1.944, 1.946, 1.949, 1.954, 1.962, 1.967, 1.967, 1.963, 1.956, 1.948, 1.943, 1.943, 1.946, 1.949, + 1.951, 1.946, 1.944, 1.942, 1.943, 1.943, 1.947, 1.948, 1.949, 1.947, 1.945, 1.941, 1.938, 1.939, 1.948, 1.952 + ] + }, + { + "ct": 3850, "table": + [ + 1.726, 1.724, 1.722, 1.723, 1.731, 1.735, 1.743, 1.746, 1.746, 1.741, 1.735, 1.729, 1.725, 1.721, 1.721, 1.721, + 1.724, 1.723, 1.723, 1.727, 1.735, 1.744, 1.749, 1.756, 
1.756, 1.749, 1.744, 1.735, 1.727, 1.719, 1.719, 1.719, + 1.723, 1.723, 1.724, 1.735, 1.746, 1.759, 1.767, 1.775, 1.775, 1.766, 1.758, 1.746, 1.735, 1.723, 1.718, 1.716, + 1.723, 1.725, 1.732, 1.746, 1.759, 1.775, 1.782, 1.792, 1.792, 1.782, 1.772, 1.759, 1.745, 1.729, 1.718, 1.716, + 1.725, 1.729, 1.738, 1.756, 1.775, 1.785, 1.796, 1.803, 1.804, 1.794, 1.783, 1.772, 1.757, 1.736, 1.722, 1.718, + 1.728, 1.731, 1.741, 1.759, 1.781, 1.795, 1.803, 1.806, 1.808, 1.805, 1.791, 1.779, 1.762, 1.739, 1.722, 1.721, + 1.727, 1.731, 1.741, 1.759, 1.781, 1.791, 1.799, 1.804, 1.806, 1.801, 1.791, 1.779, 1.762, 1.739, 1.722, 1.717, + 1.722, 1.724, 1.733, 1.751, 1.768, 1.781, 1.791, 1.796, 1.799, 1.791, 1.781, 1.766, 1.754, 1.731, 1.717, 1.714, + 1.718, 1.718, 1.724, 1.737, 1.752, 1.768, 1.776, 1.782, 1.784, 1.781, 1.766, 1.754, 1.737, 1.724, 1.713, 1.709, + 1.716, 1.715, 1.716, 1.725, 1.737, 1.749, 1.756, 1.763, 1.764, 1.762, 1.749, 1.737, 1.724, 1.717, 1.709, 1.708, + 1.715, 1.714, 1.712, 1.715, 1.722, 1.729, 1.736, 1.741, 1.742, 1.739, 1.731, 1.723, 1.717, 1.712, 1.711, 1.709, + 1.716, 1.714, 1.711, 1.712, 1.715, 1.719, 1.723, 1.728, 1.731, 1.729, 1.723, 1.718, 1.711, 1.711, 1.713, 1.713 + ] + }, + { + "ct": 6000, "table": + [ + 1.374, 1.372, 1.373, 1.374, 1.375, 1.378, 1.378, 1.381, 1.382, 1.382, 1.378, 1.373, 1.372, 1.369, 1.365, 1.365, + 1.371, 1.371, 1.372, 1.374, 1.378, 1.381, 1.384, 1.386, 1.388, 1.387, 1.384, 1.377, 1.372, 1.368, 1.364, 1.362, + 1.369, 1.371, 1.372, 1.377, 1.383, 1.391, 1.394, 1.396, 1.397, 1.395, 1.391, 1.382, 1.374, 1.369, 1.362, 1.361, + 1.369, 1.371, 1.375, 1.383, 1.391, 1.399, 1.402, 1.404, 1.405, 1.403, 1.398, 1.391, 1.379, 1.371, 1.363, 1.361, + 1.371, 1.373, 1.378, 1.388, 1.399, 1.407, 1.411, 1.413, 1.413, 1.411, 1.405, 1.397, 1.385, 1.374, 1.366, 1.362, + 1.371, 1.374, 1.379, 1.389, 1.405, 1.411, 1.414, 1.414, 1.415, 1.415, 1.411, 1.401, 1.388, 1.376, 1.367, 1.363, + 1.371, 1.373, 1.379, 1.389, 1.405, 1.408, 1.413, 1.414, 1.414, 1.413, 1.409, 1.401, 1.388, 1.376, 1.367, 1.362, + 1.366, 1.369, 1.374, 1.384, 1.396, 1.404, 1.407, 1.408, 1.408, 1.408, 1.401, 1.395, 1.382, 1.371, 1.363, 1.359, + 1.364, 1.365, 1.368, 1.375, 1.386, 1.396, 1.399, 1.401, 1.399, 1.399, 1.395, 1.385, 1.374, 1.365, 1.359, 1.357, + 1.361, 1.363, 1.365, 1.368, 1.377, 1.384, 1.388, 1.391, 1.391, 1.388, 1.385, 1.375, 1.366, 1.361, 1.358, 1.356, + 1.361, 1.362, 1.362, 1.364, 1.367, 1.373, 1.376, 1.377, 1.377, 1.375, 1.373, 1.366, 1.362, 1.358, 1.358, 1.358, + 1.361, 1.362, 1.362, 1.362, 1.363, 1.367, 1.369, 1.368, 1.367, 1.367, 1.367, 1.364, 1.358, 1.357, 1.358, 1.359 + ] + } + ], + "luminance_lut": + [ + 2.716, 2.568, 2.299, 2.065, 1.845, 1.693, 1.605, 1.597, 1.596, 1.634, 1.738, 1.914, 2.145, 2.394, 2.719, 2.901, + 2.593, 2.357, 2.093, 1.876, 1.672, 1.528, 1.438, 1.393, 1.394, 1.459, 1.569, 1.731, 1.948, 2.169, 2.481, 2.756, + 2.439, 2.197, 1.922, 1.691, 1.521, 1.365, 1.266, 1.222, 1.224, 1.286, 1.395, 1.573, 1.747, 1.988, 2.299, 2.563, + 2.363, 2.081, 1.797, 1.563, 1.376, 1.244, 1.152, 1.099, 1.101, 1.158, 1.276, 1.421, 1.607, 1.851, 2.163, 2.455, + 2.342, 2.003, 1.715, 1.477, 1.282, 1.152, 1.074, 1.033, 1.035, 1.083, 1.163, 1.319, 1.516, 1.759, 2.064, 2.398, + 2.342, 1.985, 1.691, 1.446, 1.249, 1.111, 1.034, 1.004, 1.004, 1.028, 1.114, 1.274, 1.472, 1.716, 2.019, 2.389, + 2.342, 1.991, 1.691, 1.446, 1.249, 1.112, 1.034, 1.011, 1.005, 1.035, 1.114, 1.274, 1.472, 1.716, 2.019, 2.389, + 2.365, 2.052, 1.751, 1.499, 1.299, 1.171, 1.089, 1.039, 1.042, 1.084, 1.162, 1.312, 1.516, 1.761, 2.059, 2.393, + 
2.434, 2.159, 1.856, 1.601, 1.403, 1.278, 1.166, 1.114, 1.114, 1.162, 1.266, 1.402, 1.608, 1.847, 2.146, 2.435, + 2.554, 2.306, 2.002, 1.748, 1.563, 1.396, 1.299, 1.247, 1.243, 1.279, 1.386, 1.551, 1.746, 1.977, 2.272, 2.518, + 2.756, 2.493, 2.195, 1.947, 1.739, 1.574, 1.481, 1.429, 1.421, 1.457, 1.559, 1.704, 1.929, 2.159, 2.442, 2.681, + 2.935, 2.739, 2.411, 2.151, 1.922, 1.749, 1.663, 1.628, 1.625, 1.635, 1.716, 1.872, 2.113, 2.368, 2.663, 2.824 + ], + "sigma": 0.00381, + "sigma_Cb": 0.00216 + }, + "rpi.contrast": + { + "ce_enable": 1, + "gamma_curve": + [ + 0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193, + 8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168, + 16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796, + 32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476, + 65535, 65535 + ] + }, + "rpi.ccm": + { + "ccms": + [ + { + "ct": 2498, "ccm": + [ + 1.58731, -0.18011, -0.40721, -0.60639, 2.03422, -0.42782, -0.19612, -1.69203, 2.88815 + ] + }, + { + "ct": 2811, "ccm": + [ + 1.61593, -0.33164, -0.28429, -0.55048, 1.97779, -0.42731, -0.12042, -1.42847, 2.54889 + ] + }, + { + "ct": 2911, "ccm": + [ + 1.62771, -0.41282, -0.21489, -0.57991, 2.04176, -0.46186, -0.07613, -1.13359, 2.20972 + ] + }, + { + "ct": 2919, "ccm": + [ + 1.62661, -0.37736, -0.24925, -0.52519, 1.95233, -0.42714, -0.10842, -1.34929, 2.45771 + ] + }, + { + "ct": 3627, "ccm": + [ + 1.70385, -0.57231, -0.13154, -0.47763, 1.85998, -0.38235, -0.07467, -0.82678, 1.90145 + ] + }, + { + "ct": 4600, "ccm": + [ + 1.68486, -0.61085, -0.07402, -0.41927, 2.04016, -0.62089, -0.08633, -0.67672, 1.76305 + ] + }, + { + "ct": 5716, "ccm": + [ + 1.80439, -0.73699, -0.06739, -0.36073, 1.83327, -0.47255, -0.08378, -0.56403, 1.64781 + ] + }, + { + "ct": 8575, "ccm": + [ + 1.89357, -0.76427, -0.12931, -0.27399, 2.15605, -0.88206, -0.12035, -0.68256, 1.80292 + ] + } + ] + }, + "rpi.sharpen": + { + + }, + "rpi.dpc": + { + + } +} diff --git a/src/ipa/raspberrypi/data/imx477.json b/src/ipa/raspberrypi/data/imx477.json new file mode 100644 index 000000000000..dce5234f7209 --- /dev/null +++ b/src/ipa/raspberrypi/data/imx477.json @@ -0,0 +1,416 @@ +{ + "rpi.black_level": + { + "black_level": 4096 + }, + "rpi.dpc": + { + + }, + "rpi.lux": + { + "reference_shutter_speed": 27242, + "reference_gain": 1.0, + "reference_aperture": 1.0, + "reference_lux": 830, + "reference_Y": 17755 + }, + "rpi.noise": + { + "reference_constant": 0, + "reference_slope": 2.767 + }, + "rpi.geq": + { + "offset": 204, + "slope": 0.01078 + }, + "rpi.sdn": + { + + }, + "rpi.awb": + { + "priors": + [ + { + "lux": 0, "prior": + [ + 2000, 1.0, 3000, 0.0, 13000, 0.0 + ] + }, + { + "lux": 800, "prior": + [ + 2000, 0.0, 6000, 2.0, 13000, 2.0 + ] + }, + { + "lux": 1500, "prior": + [ + 2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0 + ] + } + ], + "modes": + { + "auto": + { + "lo": 2500, + "hi": 8000 + }, + "incandescent": + { + "lo": 2500, + "hi": 3000 + }, + "tungsten": + { + "lo": 3000, + "hi": 3500 + }, + "fluorescent": + { + "lo": 4000, + "hi": 4700 + }, + "indoor": + { + "lo": 3000, + "hi": 5000 + }, + "daylight": + { + "lo": 5500, + "hi": 6500 + }, + "cloudy": + { + "lo": 7000, + "hi": 8600 + } + }, + "bayes": 1, + "ct_curve": + [ + 2360.0, 0.6009, 0.3093, 2870.0, 0.5047, 0.3936, 2970.0, 0.4782, 0.4221, 3700.0, 0.4212, 0.4923, 3870.0, 0.4037, 
0.5166, 4000.0, + 0.3965, 0.5271, 4400.0, 0.3703, 0.5666, 4715.0, 0.3411, 0.6147, 5920.0, 0.3108, 0.6687, 9050.0, 0.2524, 0.7856 + ], + "sensitivity_r": 1.05, + "sensitivity_b": 1.05, + "transverse_pos": 0.0238, + "transverse_neg": 0.04429 + }, + "rpi.agc": + { + "metering_modes": + { + "centre-weighted": + { + "weights": + [ + 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 + ] + }, + "spot": + { + "weights": + [ + 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 + ] + }, + "matrix": + { + "weights": + [ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 + ] + } + }, + "exposure_modes": + { + "normal": + { + "shutter": + [ + 100, 10000, 30000, 60000, 120000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + }, + "sport": + { + "shutter": + [ + 100, 5000, 10000, 20000, 120000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + } + }, + "constraint_modes": + { + "normal": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.3, 1000, 0.3 + ] + } + ], + "highlight": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.3, 1000, 0.3 + ] + }, + { + "bound": "UPPER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.8, 1000, 0.8 + ] + } + ], + "shadows": + [ + { + "bound": "LOWER", "q_lo": 0.0, "q_hi": 0.5, "y_target": + [ + 0, 0.17, 1000, 0.17 + ] + } + ] + }, + "y_target": + [ + 0, 0.16, 1000, 0.165, 10000, 0.17 + ] + }, + "rpi.alsc": + { + "omega": 1.3, + "n_iter": 100, + "luminance_strength": 0.5, + "calibrations_Cr": + [ + { + "ct": 2960, "table": + [ + 2.088, 2.086, 2.082, 2.081, 2.077, 2.071, 2.068, 2.068, 2.072, 2.073, 2.075, 2.078, 2.084, 2.092, 2.095, 2.098, + 2.086, 2.084, 2.079, 2.078, 2.075, 2.068, 2.064, 2.063, 2.068, 2.071, 2.072, 2.075, 2.081, 2.089, 2.092, 2.094, + 2.083, 2.081, 2.077, 2.072, 2.069, 2.062, 2.059, 2.059, 2.063, 2.067, 2.069, 2.072, 2.079, 2.088, 2.089, 2.089, + 2.081, 2.077, 2.072, 2.068, 2.065, 2.058, 2.055, 2.054, 2.057, 2.062, 2.066, 2.069, 2.077, 2.084, 2.086, 2.086, + 2.078, 2.075, 2.069, 2.065, 2.061, 2.055, 2.052, 2.049, 2.051, 2.056, 2.062, 2.065, 2.072, 2.079, 2.081, 2.079, + 2.079, 2.075, 2.069, 2.064, 2.061, 2.053, 2.049, 2.046, 2.049, 2.051, 2.057, 2.062, 2.069, 2.075, 2.077, 2.075, + 2.082, 2.079, 2.072, 2.065, 2.061, 2.054, 2.049, 2.047, 2.049, 2.051, 2.056, 2.061, 2.066, 2.073, 2.073, 2.069, + 2.086, 2.082, 2.075, 2.068, 2.062, 2.054, 2.051, 2.049, 2.051, 2.052, 2.056, 2.061, 2.066, 2.073, 2.073, 2.072, + 2.088, 2.086, 2.079, 2.074, 2.066, 2.057, 2.051, 2.051, 2.054, 2.055, 2.056, 2.061, 2.067, 2.072, 2.073, 2.072, + 2.091, 2.087, 2.079, 2.075, 2.068, 2.057, 2.052, 2.052, 2.056, 2.055, 2.055, 2.059, 2.066, 2.072, 2.072, 2.072, + 2.093, 2.088, 2.081, 2.077, 2.069, 2.059, 2.054, 2.054, 2.057, 2.056, 2.056, 2.058, 2.066, 2.072, 2.073, 2.073, + 2.095, 2.091, 2.084, 2.078, 2.075, 2.067, 2.057, 2.057, 2.059, 2.059, 2.058, 2.059, 2.068, 2.073, 2.075, 2.078 + ] + }, + { + "ct": 4850, "table": + [ + 2.973, 2.968, 2.956, 2.943, 2.941, 2.932, 2.923, 2.921, 2.924, 2.929, 2.931, 2.939, 2.953, 2.965, 2.966, 2.976, + 2.969, 2.962, 2.951, 2.941, 2.934, 2.928, 2.919, 2.918, 2.919, 2.923, 2.927, 2.933, 2.945, 2.957, 2.962, 2.962, + 2.964, 2.956, 2.944, 2.932, 2.929, 2.924, 2.915, 2.914, 2.915, 2.919, 2.924, 2.928, 2.941, 2.952, 2.958, 2.959, + 2.957, 2.951, 2.939, 2.928, 2.924, 2.919, 2.913, 2.911, 2.911, 2.915, 2.919, 2.925, 2.936, 2.947, 2.952, 2.953, + 2.954, 2.947, 2.935, 2.924, 2.919, 2.915, 2.908, 2.906, 2.906, 2.907, 2.914, 2.921, 2.932, 2.941, 2.943, 2.942, + 2.953, 2.946, 2.932, 2.921, 2.916, 2.911, 2.904, 2.902, 2.901, 
2.904, 2.909, 2.919, 2.926, 2.937, 2.939, 2.939, + 2.953, 2.947, 2.932, 2.918, 2.915, 2.909, 2.903, 2.901, 2.901, 2.906, 2.911, 2.918, 2.924, 2.936, 2.936, 2.932, + 2.956, 2.948, 2.934, 2.919, 2.916, 2.908, 2.903, 2.901, 2.902, 2.907, 2.909, 2.917, 2.926, 2.936, 2.939, 2.939, + 2.957, 2.951, 2.936, 2.923, 2.917, 2.907, 2.904, 2.901, 2.902, 2.908, 2.911, 2.919, 2.929, 2.939, 2.942, 2.942, + 2.961, 2.951, 2.936, 2.922, 2.918, 2.906, 2.904, 2.901, 2.901, 2.907, 2.911, 2.921, 2.931, 2.941, 2.942, 2.944, + 2.964, 2.954, 2.936, 2.924, 2.918, 2.909, 2.905, 2.905, 2.905, 2.907, 2.912, 2.923, 2.933, 2.942, 2.944, 2.944, + 2.964, 2.958, 2.943, 2.927, 2.921, 2.914, 2.909, 2.907, 2.907, 2.912, 2.916, 2.928, 2.936, 2.944, 2.947, 2.952 + ] + }, + { + "ct": 5930, "table": + [ + 3.312, 3.308, 3.301, 3.294, 3.288, 3.277, 3.268, 3.261, 3.259, 3.261, 3.267, 3.273, 3.285, 3.301, 3.303, 3.312, + 3.308, 3.304, 3.294, 3.291, 3.283, 3.271, 3.263, 3.259, 3.257, 3.258, 3.261, 3.268, 3.278, 3.293, 3.299, 3.299, + 3.302, 3.296, 3.288, 3.282, 3.276, 3.267, 3.259, 3.254, 3.252, 3.253, 3.256, 3.261, 3.273, 3.289, 3.292, 3.292, + 3.296, 3.289, 3.282, 3.276, 3.269, 3.263, 3.256, 3.251, 3.248, 3.249, 3.251, 3.257, 3.268, 3.279, 3.284, 3.284, + 3.292, 3.285, 3.279, 3.271, 3.264, 3.257, 3.249, 3.243, 3.241, 3.241, 3.246, 3.252, 3.261, 3.274, 3.275, 3.273, + 3.291, 3.285, 3.276, 3.268, 3.259, 3.251, 3.242, 3.239, 3.236, 3.238, 3.244, 3.248, 3.258, 3.268, 3.269, 3.265, + 3.294, 3.288, 3.275, 3.266, 3.257, 3.248, 3.239, 3.238, 3.237, 3.238, 3.243, 3.246, 3.255, 3.264, 3.264, 3.257, + 3.297, 3.293, 3.279, 3.268, 3.258, 3.249, 3.238, 3.237, 3.239, 3.239, 3.243, 3.245, 3.255, 3.264, 3.264, 3.263, + 3.301, 3.295, 3.281, 3.271, 3.259, 3.248, 3.237, 3.237, 3.239, 3.241, 3.243, 3.246, 3.257, 3.265, 3.266, 3.264, + 3.306, 3.295, 3.279, 3.271, 3.261, 3.247, 3.235, 3.234, 3.239, 3.239, 3.243, 3.247, 3.258, 3.265, 3.265, 3.264, + 3.308, 3.297, 3.279, 3.272, 3.261, 3.249, 3.239, 3.239, 3.241, 3.243, 3.245, 3.248, 3.261, 3.265, 3.266, 3.265, + 3.309, 3.301, 3.286, 3.276, 3.267, 3.256, 3.246, 3.242, 3.244, 3.244, 3.249, 3.253, 3.263, 3.267, 3.271, 3.274 + ] + } + ], + "calibrations_Cb": + [ + { + "ct": 2960, "table": + [ + 2.133, 2.134, 2.139, 2.143, 2.148, 2.155, 2.158, 2.158, 2.158, 2.161, 2.161, 2.162, 2.159, 2.156, 2.152, 2.151, + 2.132, 2.133, 2.135, 2.142, 2.147, 2.153, 2.158, 2.158, 2.158, 2.158, 2.159, 2.159, 2.157, 2.154, 2.151, 2.148, + 2.133, 2.133, 2.135, 2.142, 2.149, 2.154, 2.158, 2.158, 2.157, 2.156, 2.158, 2.157, 2.155, 2.153, 2.148, 2.146, + 2.133, 2.133, 2.138, 2.145, 2.149, 2.154, 2.158, 2.159, 2.158, 2.155, 2.157, 2.156, 2.153, 2.149, 2.146, 2.144, + 2.133, 2.134, 2.139, 2.146, 2.149, 2.154, 2.158, 2.159, 2.159, 2.156, 2.154, 2.154, 2.149, 2.145, 2.143, 2.139, + 2.135, 2.135, 2.139, 2.146, 2.151, 2.155, 2.158, 2.159, 2.158, 2.156, 2.153, 2.151, 2.146, 2.143, 2.139, 2.136, + 2.135, 2.135, 2.138, 2.145, 2.151, 2.154, 2.157, 2.158, 2.157, 2.156, 2.153, 2.151, 2.147, 2.143, 2.141, 2.137, + 2.135, 2.134, 2.135, 2.141, 2.149, 2.154, 2.157, 2.157, 2.157, 2.157, 2.157, 2.153, 2.149, 2.146, 2.142, 2.139, + 2.132, 2.133, 2.135, 2.139, 2.148, 2.153, 2.158, 2.159, 2.159, 2.161, 2.161, 2.157, 2.154, 2.149, 2.144, 2.141, + 2.132, 2.133, 2.135, 2.141, 2.149, 2.155, 2.161, 2.161, 2.162, 2.162, 2.163, 2.159, 2.154, 2.149, 2.144, 2.138, + 2.136, 2.136, 2.137, 2.143, 2.149, 2.156, 2.162, 2.163, 2.162, 2.163, 2.164, 2.161, 2.157, 2.152, 2.146, 2.138, + 2.137, 2.137, 2.141, 2.147, 2.152, 2.157, 2.162, 2.162, 2.159, 2.161, 2.162, 2.162, 2.157, 
2.152, 2.148, 2.148 + ] + }, + { + "ct": 4850, "table": + [ + 1.463, 1.464, 1.471, 1.478, 1.479, 1.483, 1.484, 1.486, 1.486, 1.484, 1.483, 1.481, 1.478, 1.475, 1.471, 1.468, + 1.463, 1.463, 1.468, 1.476, 1.479, 1.482, 1.484, 1.487, 1.486, 1.484, 1.483, 1.482, 1.478, 1.473, 1.469, 1.468, + 1.463, 1.464, 1.468, 1.476, 1.479, 1.483, 1.484, 1.486, 1.486, 1.485, 1.484, 1.482, 1.477, 1.473, 1.469, 1.468, + 1.463, 1.464, 1.469, 1.477, 1.481, 1.483, 1.485, 1.487, 1.487, 1.485, 1.485, 1.482, 1.478, 1.474, 1.469, 1.468, + 1.465, 1.465, 1.471, 1.478, 1.481, 1.484, 1.486, 1.488, 1.488, 1.487, 1.485, 1.482, 1.477, 1.472, 1.468, 1.467, + 1.465, 1.466, 1.472, 1.479, 1.482, 1.485, 1.486, 1.488, 1.488, 1.486, 1.484, 1.479, 1.475, 1.472, 1.468, 1.466, + 1.466, 1.466, 1.472, 1.478, 1.482, 1.484, 1.485, 1.488, 1.487, 1.485, 1.483, 1.479, 1.475, 1.472, 1.469, 1.468, + 1.465, 1.466, 1.469, 1.476, 1.481, 1.485, 1.485, 1.486, 1.486, 1.485, 1.483, 1.479, 1.477, 1.474, 1.471, 1.469, + 1.464, 1.465, 1.469, 1.476, 1.481, 1.484, 1.485, 1.487, 1.487, 1.486, 1.485, 1.481, 1.478, 1.475, 1.471, 1.469, + 1.463, 1.464, 1.469, 1.477, 1.481, 1.485, 1.485, 1.488, 1.488, 1.487, 1.486, 1.481, 1.478, 1.475, 1.471, 1.468, + 1.464, 1.465, 1.471, 1.478, 1.482, 1.486, 1.486, 1.488, 1.488, 1.487, 1.486, 1.481, 1.478, 1.475, 1.472, 1.468, + 1.465, 1.466, 1.472, 1.481, 1.483, 1.487, 1.487, 1.488, 1.488, 1.486, 1.485, 1.481, 1.479, 1.476, 1.473, 1.472 + ] + }, + { + "ct": 5930, "table": + [ + 1.443, 1.444, 1.448, 1.453, 1.459, 1.463, 1.465, 1.467, 1.469, 1.469, 1.467, 1.466, 1.462, 1.457, 1.454, 1.451, + 1.443, 1.444, 1.445, 1.451, 1.459, 1.463, 1.465, 1.467, 1.469, 1.469, 1.467, 1.465, 1.461, 1.456, 1.452, 1.451, + 1.444, 1.444, 1.445, 1.451, 1.459, 1.463, 1.466, 1.468, 1.469, 1.469, 1.467, 1.465, 1.461, 1.456, 1.452, 1.449, + 1.444, 1.444, 1.447, 1.452, 1.459, 1.464, 1.467, 1.469, 1.471, 1.469, 1.467, 1.466, 1.461, 1.456, 1.452, 1.449, + 1.444, 1.445, 1.448, 1.452, 1.459, 1.465, 1.469, 1.471, 1.471, 1.471, 1.468, 1.465, 1.461, 1.455, 1.451, 1.449, + 1.445, 1.446, 1.449, 1.453, 1.461, 1.466, 1.469, 1.471, 1.472, 1.469, 1.467, 1.465, 1.459, 1.455, 1.451, 1.447, + 1.446, 1.446, 1.449, 1.453, 1.461, 1.466, 1.469, 1.469, 1.469, 1.469, 1.467, 1.465, 1.459, 1.455, 1.452, 1.449, + 1.446, 1.446, 1.447, 1.451, 1.459, 1.466, 1.469, 1.469, 1.469, 1.469, 1.467, 1.465, 1.461, 1.457, 1.454, 1.451, + 1.444, 1.444, 1.447, 1.451, 1.459, 1.466, 1.469, 1.469, 1.471, 1.471, 1.468, 1.466, 1.462, 1.458, 1.454, 1.452, + 1.444, 1.444, 1.448, 1.453, 1.459, 1.466, 1.469, 1.471, 1.472, 1.472, 1.468, 1.466, 1.462, 1.458, 1.454, 1.449, + 1.446, 1.447, 1.449, 1.454, 1.461, 1.466, 1.471, 1.471, 1.471, 1.471, 1.468, 1.466, 1.462, 1.459, 1.455, 1.449, + 1.447, 1.447, 1.452, 1.457, 1.462, 1.468, 1.472, 1.472, 1.471, 1.471, 1.468, 1.466, 1.462, 1.459, 1.456, 1.455 + ] + } + ], + "luminance_lut": + [ + 1.548, 1.499, 1.387, 1.289, 1.223, 1.183, 1.164, 1.154, 1.153, 1.169, 1.211, 1.265, 1.345, 1.448, 1.581, 1.619, + 1.513, 1.412, 1.307, 1.228, 1.169, 1.129, 1.105, 1.098, 1.103, 1.127, 1.157, 1.209, 1.272, 1.361, 1.481, 1.583, + 1.449, 1.365, 1.257, 1.175, 1.124, 1.085, 1.062, 1.054, 1.059, 1.079, 1.113, 1.151, 1.211, 1.293, 1.407, 1.488, + 1.424, 1.324, 1.222, 1.139, 1.089, 1.056, 1.034, 1.031, 1.034, 1.049, 1.075, 1.115, 1.164, 1.241, 1.351, 1.446, + 1.412, 1.297, 1.203, 1.119, 1.069, 1.039, 1.021, 1.016, 1.022, 1.032, 1.052, 1.086, 1.135, 1.212, 1.321, 1.439, + 1.406, 1.287, 1.195, 1.115, 1.059, 1.028, 1.014, 1.012, 1.015, 1.026, 1.041, 1.074, 1.125, 1.201, 1.302, 1.425, 
+ 1.406, 1.294, 1.205, 1.126, 1.062, 1.031, 1.013, 1.009, 1.011, 1.019, 1.042, 1.079, 1.129, 1.203, 1.302, 1.435, + 1.415, 1.318, 1.229, 1.146, 1.076, 1.039, 1.019, 1.014, 1.017, 1.031, 1.053, 1.093, 1.144, 1.219, 1.314, 1.436, + 1.435, 1.348, 1.246, 1.164, 1.094, 1.059, 1.036, 1.032, 1.037, 1.049, 1.072, 1.114, 1.167, 1.257, 1.343, 1.462, + 1.471, 1.385, 1.278, 1.189, 1.124, 1.084, 1.064, 1.061, 1.069, 1.078, 1.101, 1.146, 1.207, 1.298, 1.415, 1.496, + 1.522, 1.436, 1.323, 1.228, 1.169, 1.118, 1.101, 1.094, 1.099, 1.113, 1.146, 1.194, 1.265, 1.353, 1.474, 1.571, + 1.578, 1.506, 1.378, 1.281, 1.211, 1.156, 1.135, 1.134, 1.139, 1.158, 1.194, 1.251, 1.327, 1.427, 1.559, 1.611 + ], + "sigma": 0.00121, + "sigma_Cb": 0.00115 + }, + "rpi.contrast": + { + "ce_enable": 1, + "gamma_curve": + [ + 0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193, + 8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168, + 16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796, + 32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476, + 65535, 65535 + ] + }, + "rpi.ccm": + { + "ccms": + [ + { + "ct": 2360, "ccm": + [ + 1.66078, -0.23588, -0.42491, -0.47456, 1.82763, -0.35307, -0.00545, -1.44729, 2.45273 + ] + }, + { + "ct": 2870, "ccm": + [ + 1.78373, -0.55344, -0.23029, -0.39951, 1.69701, -0.29751, 0.01986, -1.06525, 2.04539 + ] + }, + { + "ct": 2970, "ccm": + [ + 1.73511, -0.56973, -0.16537, -0.36338, 1.69878, -0.33539, -0.02354, -0.76813, 1.79168 + ] + }, + { + "ct": 3000, "ccm": + [ + 2.06374, -0.92218, -0.14156, -0.41721, 1.69289, -0.27568, -0.00554, -0.92741, 1.93295 + ] + }, + { + "ct": 3700, "ccm": + [ + 2.13792, -1.08136, -0.05655, -0.34739, 1.58989, -0.24249, -0.00349, -0.76789, 1.77138 + ] + }, + { + "ct": 3870, "ccm": + [ + 1.83834, -0.70528, -0.13307, -0.30499, 1.60523, -0.30024, -0.05701, -0.58313, 1.64014 + ] + }, + { + "ct": 4000, "ccm": + [ + 2.15741, -1.10295, -0.05447, -0.34631, 1.61158, -0.26528, -0.02723, -0.70288, 1.73011 + ] + }, + { + "ct": 4400, "ccm": + [ + 2.05729, -0.95007, -0.10723, -0.41712, 1.78606, -0.36894, -0.11899, -0.55727, 1.67626 + ] + }, + + { + "ct": 4715, "ccm": + [ + 1.90255, -0.77478, -0.12777, -0.31338, 1.88197, -0.56858, -0.06001, -0.61785, 1.67786 + ] + }, + { + "ct": 5920, "ccm": + [ + 1.98691, -0.84671, -0.14019, -0.26581, 1.70615, -0.44035, -0.09532, -0.47332, 1.56864 + ] + }, + { + "ct": 9050, "ccm": + [ + 2.09255, -0.76541, -0.32714, -0.28973, 2.27462, -0.98489, -0.17299, -0.61275, 1.78574 + ] + } + ] + }, + "rpi.sharpen": + { + + } +} diff --git a/src/ipa/raspberrypi/data/meson.build b/src/ipa/raspberrypi/data/meson.build new file mode 100644 index 000000000000..6ff27745ff80 --- /dev/null +++ b/src/ipa/raspberrypi/data/meson.build @@ -0,0 +1,9 @@ +conf_files = files([ + 'imx219.json', + 'imx477.json', + 'ov5647.json', + 'uncalibrated.json', +]) + +install_data(conf_files, + install_dir : join_paths(ipa_data_dir, 'raspberrypi')) diff --git a/src/ipa/raspberrypi/data/ov5647.json b/src/ipa/raspberrypi/data/ov5647.json new file mode 100644 index 000000000000..a2469059a2e8 --- /dev/null +++ b/src/ipa/raspberrypi/data/ov5647.json @@ -0,0 +1,398 @@ +{ + "rpi.black_level": + { + "black_level": 1024 + }, + "rpi.dpc": + { + + }, + "rpi.lux": + { + "reference_shutter_speed": 21663, + "reference_gain": 1.0, + "reference_aperture": 1.0, + "reference_lux": 987, + 
"reference_Y": 8961 + }, + "rpi.noise": + { + "reference_constant": 0, + "reference_slope": 4.25 + }, + "rpi.geq": + { + "offset": 401, + "slope": 0.05619 + }, + "rpi.sdn": + { + + }, + "rpi.awb": + { + "priors": + [ + { + "lux": 0, "prior": + [ + 2000, 1.0, 3000, 0.0, 13000, 0.0 + ] + }, + { + "lux": 800, "prior": + [ + 2000, 0.0, 6000, 2.0, 13000, 2.0 + ] + }, + { + "lux": 1500, "prior": + [ + 2000, 0.0, 4000, 1.0, 6000, 6.0, 6500, 7.0, 7000, 1.0, 13000, 1.0 + ] + } + ], + "modes": + { + "auto": + { + "lo": 2500, + "hi": 8000 + }, + "incandescent": + { + "lo": 2500, + "hi": 3000 + }, + "tungsten": + { + "lo": 3000, + "hi": 3500 + }, + "fluorescent": + { + "lo": 4000, + "hi": 4700 + }, + "indoor": + { + "lo": 3000, + "hi": 5000 + }, + "daylight": + { + "lo": 5500, + "hi": 6500 + }, + "cloudy": + { + "lo": 7000, + "hi": 8600 + } + }, + "bayes": 1, + "ct_curve": + [ + 2500.0, 1.0289, 0.4503, 2803.0, 0.9428, 0.5108, 2914.0, 0.9406, 0.5127, 3605.0, 0.8261, 0.6249, 4540.0, 0.7331, 0.7533, 5699.0, + 0.6715, 0.8627, 8625.0, 0.6081, 1.0012 + ], + "sensitivity_r": 1.05, + "sensitivity_b": 1.05, + "transverse_pos": 0.0321, + "transverse_neg": 0.04313 + }, + "rpi.agc": + { + "metering_modes": + { + "centre-weighted": + { + "weights": + [ + 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 + ] + }, + "spot": + { + "weights": + [ + 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 + ] + }, + "matrix": + { + "weights": + [ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 + ] + } + }, + "exposure_modes": + { + "normal": + { + "shutter": + [ + 100, 10000, 30000, 30000, 30000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + }, + "sport": + { + "shutter": + [ + 100, 5000, 10000, 20000, 30000 + ], + "gain": + [ + 1.0, 2.0, 4.0, 6.0, 6.0 + ] + } + }, + "constraint_modes": + { + "normal": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.5, 1000, 0.5 + ] + } + ], + "highlight": + [ + { + "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.5, 1000, 0.5 + ] + }, + { + "bound": "UPPER", "q_lo": 0.98, "q_hi": 1.0, "y_target": + [ + 0, 0.8, 1000, 0.8 + ] + } + ], + "shadows": + [ + { + "bound": "LOWER", "q_lo": 0.0, "q_hi": 0.5, "y_target": + [ + 0, 0.17, 1000, 0.17 + ] + } + ] + }, + "y_target": + [ + 0, 0.16, 1000, 0.165, 10000, 0.17 + ], + "base_ev": 1.25 + }, + "rpi.alsc": + { + "omega": 1.3, + "n_iter": 100, + "luminance_strength": 0.5, + "calibrations_Cr": + [ + { + "ct": 3000, "table": + [ + 1.105, 1.103, 1.093, 1.083, 1.071, 1.065, 1.065, 1.065, 1.066, 1.069, 1.072, 1.077, 1.084, 1.089, 1.093, 1.093, + 1.103, 1.096, 1.084, 1.072, 1.059, 1.051, 1.047, 1.047, 1.051, 1.053, 1.059, 1.067, 1.075, 1.082, 1.085, 1.086, + 1.096, 1.084, 1.072, 1.059, 1.051, 1.045, 1.039, 1.038, 1.039, 1.045, 1.049, 1.057, 1.063, 1.072, 1.081, 1.082, + 1.092, 1.075, 1.061, 1.052, 1.045, 1.039, 1.036, 1.035, 1.035, 1.039, 1.044, 1.049, 1.056, 1.063, 1.072, 1.081, + 1.092, 1.073, 1.058, 1.048, 1.043, 1.038, 1.035, 1.033, 1.033, 1.035, 1.039, 1.044, 1.051, 1.057, 1.069, 1.078, + 1.091, 1.068, 1.054, 1.045, 1.041, 1.038, 1.035, 1.032, 1.032, 1.032, 1.036, 1.041, 1.045, 1.055, 1.069, 1.078, + 1.091, 1.068, 1.052, 1.043, 1.041, 1.038, 1.035, 1.032, 1.031, 1.032, 1.034, 1.036, 1.043, 1.055, 1.069, 1.078, + 1.092, 1.068, 1.052, 1.047, 1.042, 1.041, 1.038, 1.035, 1.032, 1.032, 1.035, 1.039, 1.043, 1.055, 1.071, 1.079, + 1.092, 1.073, 1.057, 1.051, 1.047, 1.047, 1.044, 1.041, 1.038, 1.038, 1.039, 1.043, 1.051, 1.059, 1.076, 1.083, + 1.092, 1.081, 1.068, 1.058, 1.056, 1.056, 1.053, 1.052, 1.049, 1.048, 1.048, 
1.051, 1.059, 1.066, 1.083, 1.085, + 1.091, 1.087, 1.081, 1.068, 1.065, 1.064, 1.062, 1.062, 1.061, 1.056, 1.056, 1.056, 1.064, 1.069, 1.084, 1.089, + 1.091, 1.089, 1.085, 1.079, 1.069, 1.068, 1.067, 1.067, 1.067, 1.063, 1.061, 1.063, 1.068, 1.069, 1.081, 1.092 + ] + }, + { + "ct": 5000, "table": + [ + 1.486, 1.484, 1.468, 1.449, 1.427, 1.403, 1.399, 1.399, 1.399, 1.404, 1.413, 1.433, 1.454, 1.473, 1.482, 1.488, + 1.484, 1.472, 1.454, 1.431, 1.405, 1.381, 1.365, 1.365, 1.367, 1.373, 1.392, 1.411, 1.438, 1.458, 1.476, 1.481, + 1.476, 1.458, 1.433, 1.405, 1.381, 1.361, 1.339, 1.334, 1.334, 1.346, 1.362, 1.391, 1.411, 1.438, 1.462, 1.474, + 1.471, 1.443, 1.417, 1.388, 1.361, 1.339, 1.321, 1.313, 1.313, 1.327, 1.346, 1.362, 1.391, 1.422, 1.453, 1.473, + 1.469, 1.439, 1.408, 1.377, 1.349, 1.321, 1.312, 1.299, 1.299, 1.311, 1.327, 1.348, 1.378, 1.415, 1.446, 1.468, + 1.468, 1.434, 1.402, 1.371, 1.341, 1.316, 1.299, 1.296, 1.295, 1.299, 1.314, 1.338, 1.371, 1.408, 1.441, 1.466, + 1.468, 1.434, 1.401, 1.371, 1.341, 1.316, 1.301, 1.296, 1.295, 1.297, 1.314, 1.338, 1.369, 1.408, 1.441, 1.465, + 1.469, 1.436, 1.401, 1.374, 1.348, 1.332, 1.315, 1.301, 1.301, 1.313, 1.324, 1.342, 1.372, 1.409, 1.442, 1.465, + 1.471, 1.444, 1.413, 1.388, 1.371, 1.348, 1.332, 1.323, 1.323, 1.324, 1.342, 1.362, 1.386, 1.418, 1.449, 1.467, + 1.473, 1.454, 1.431, 1.407, 1.388, 1.371, 1.359, 1.352, 1.351, 1.351, 1.362, 1.383, 1.404, 1.433, 1.462, 1.472, + 1.474, 1.461, 1.447, 1.424, 1.407, 1.394, 1.385, 1.381, 1.379, 1.381, 1.383, 1.401, 1.419, 1.444, 1.466, 1.481, + 1.474, 1.464, 1.455, 1.442, 1.421, 1.408, 1.403, 1.403, 1.403, 1.399, 1.402, 1.415, 1.432, 1.446, 1.467, 1.483 + ] + }, + { + "ct": 6500, "table": + [ + 1.567, 1.565, 1.555, 1.541, 1.525, 1.518, 1.518, 1.518, 1.521, 1.527, 1.532, 1.541, 1.551, 1.559, 1.567, 1.569, + 1.565, 1.557, 1.542, 1.527, 1.519, 1.515, 1.511, 1.516, 1.519, 1.524, 1.528, 1.533, 1.542, 1.553, 1.559, 1.562, + 1.561, 1.546, 1.532, 1.521, 1.518, 1.515, 1.511, 1.516, 1.519, 1.524, 1.528, 1.529, 1.533, 1.542, 1.554, 1.559, + 1.561, 1.539, 1.526, 1.524, 1.521, 1.521, 1.522, 1.524, 1.525, 1.531, 1.529, 1.529, 1.531, 1.538, 1.549, 1.558, + 1.559, 1.538, 1.526, 1.525, 1.524, 1.528, 1.534, 1.536, 1.536, 1.536, 1.532, 1.529, 1.531, 1.537, 1.548, 1.556, + 1.561, 1.537, 1.525, 1.524, 1.526, 1.532, 1.537, 1.539, 1.538, 1.537, 1.532, 1.529, 1.529, 1.537, 1.546, 1.556, + 1.561, 1.536, 1.524, 1.522, 1.525, 1.532, 1.538, 1.538, 1.537, 1.533, 1.528, 1.526, 1.527, 1.536, 1.546, 1.555, + 1.561, 1.537, 1.522, 1.521, 1.524, 1.531, 1.536, 1.537, 1.534, 1.529, 1.526, 1.522, 1.523, 1.534, 1.547, 1.555, + 1.561, 1.538, 1.524, 1.522, 1.526, 1.531, 1.535, 1.535, 1.534, 1.527, 1.524, 1.522, 1.522, 1.535, 1.549, 1.556, + 1.558, 1.543, 1.532, 1.526, 1.526, 1.529, 1.534, 1.535, 1.533, 1.526, 1.523, 1.522, 1.524, 1.537, 1.552, 1.557, + 1.555, 1.546, 1.541, 1.528, 1.527, 1.528, 1.531, 1.533, 1.531, 1.527, 1.522, 1.522, 1.526, 1.536, 1.552, 1.561, + 1.555, 1.547, 1.542, 1.538, 1.526, 1.526, 1.529, 1.531, 1.529, 1.528, 1.519, 1.519, 1.527, 1.531, 1.543, 1.561 + ] + } + ], + "calibrations_Cb": + [ + { + "ct": 3000, "table": + [ + 1.684, 1.688, 1.691, 1.697, 1.709, 1.722, 1.735, 1.745, 1.747, 1.745, 1.731, 1.719, 1.709, 1.705, 1.699, 1.699, + 1.684, 1.689, 1.694, 1.708, 1.721, 1.735, 1.747, 1.762, 1.762, 1.758, 1.745, 1.727, 1.716, 1.707, 1.701, 1.699, + 1.684, 1.691, 1.704, 1.719, 1.734, 1.755, 1.772, 1.786, 1.789, 1.788, 1.762, 1.745, 1.724, 1.709, 1.702, 1.698, + 1.682, 1.694, 1.709, 1.729, 1.755, 1.773, 1.798, 1.815, 1.817, 
1.808, 1.788, 1.762, 1.733, 1.714, 1.704, 1.699, + 1.682, 1.693, 1.713, 1.742, 1.772, 1.798, 1.815, 1.829, 1.831, 1.821, 1.807, 1.773, 1.742, 1.716, 1.703, 1.699, + 1.681, 1.693, 1.713, 1.742, 1.772, 1.799, 1.828, 1.839, 1.839, 1.828, 1.807, 1.774, 1.742, 1.715, 1.699, 1.695, + 1.679, 1.691, 1.712, 1.739, 1.771, 1.798, 1.825, 1.829, 1.831, 1.818, 1.801, 1.774, 1.738, 1.712, 1.695, 1.691, + 1.676, 1.685, 1.703, 1.727, 1.761, 1.784, 1.801, 1.817, 1.817, 1.801, 1.779, 1.761, 1.729, 1.706, 1.691, 1.684, + 1.669, 1.678, 1.692, 1.714, 1.741, 1.764, 1.784, 1.795, 1.795, 1.779, 1.761, 1.738, 1.713, 1.696, 1.683, 1.679, + 1.664, 1.671, 1.679, 1.693, 1.716, 1.741, 1.762, 1.769, 1.769, 1.753, 1.738, 1.713, 1.701, 1.687, 1.681, 1.676, + 1.661, 1.664, 1.671, 1.679, 1.693, 1.714, 1.732, 1.739, 1.739, 1.729, 1.708, 1.701, 1.685, 1.679, 1.676, 1.677, + 1.659, 1.661, 1.664, 1.671, 1.679, 1.693, 1.712, 1.714, 1.714, 1.708, 1.701, 1.687, 1.679, 1.672, 1.673, 1.677 + ] + }, + { + "ct": 5000, "table": + [ + 1.177, 1.183, 1.187, 1.191, 1.197, 1.206, 1.213, 1.215, 1.215, 1.215, 1.211, 1.204, 1.196, 1.191, 1.183, 1.182, + 1.179, 1.185, 1.191, 1.196, 1.206, 1.217, 1.224, 1.229, 1.229, 1.226, 1.221, 1.212, 1.202, 1.195, 1.188, 1.182, + 1.183, 1.191, 1.196, 1.206, 1.217, 1.229, 1.239, 1.245, 1.245, 1.245, 1.233, 1.221, 1.212, 1.199, 1.193, 1.187, + 1.183, 1.192, 1.201, 1.212, 1.229, 1.241, 1.252, 1.259, 1.259, 1.257, 1.245, 1.233, 1.217, 1.201, 1.194, 1.192, + 1.183, 1.192, 1.202, 1.219, 1.238, 1.252, 1.261, 1.269, 1.268, 1.261, 1.257, 1.241, 1.223, 1.204, 1.194, 1.191, + 1.182, 1.192, 1.202, 1.219, 1.239, 1.255, 1.266, 1.271, 1.271, 1.265, 1.258, 1.242, 1.223, 1.205, 1.192, 1.191, + 1.181, 1.189, 1.199, 1.218, 1.239, 1.254, 1.262, 1.268, 1.268, 1.258, 1.253, 1.241, 1.221, 1.204, 1.191, 1.187, + 1.179, 1.184, 1.193, 1.211, 1.232, 1.243, 1.254, 1.257, 1.256, 1.253, 1.242, 1.232, 1.216, 1.199, 1.187, 1.183, + 1.174, 1.179, 1.187, 1.202, 1.218, 1.232, 1.243, 1.246, 1.246, 1.239, 1.232, 1.218, 1.207, 1.191, 1.183, 1.179, + 1.169, 1.175, 1.181, 1.189, 1.202, 1.218, 1.229, 1.232, 1.232, 1.224, 1.218, 1.207, 1.199, 1.185, 1.181, 1.174, + 1.164, 1.168, 1.175, 1.179, 1.189, 1.201, 1.209, 1.213, 1.213, 1.209, 1.201, 1.198, 1.186, 1.181, 1.174, 1.173, + 1.161, 1.166, 1.171, 1.175, 1.179, 1.189, 1.197, 1.198, 1.198, 1.197, 1.196, 1.186, 1.182, 1.175, 1.173, 1.173 + ] + }, + { + "ct": 6500, "table": + [ + 1.166, 1.171, 1.173, 1.178, 1.187, 1.193, 1.201, 1.205, 1.205, 1.205, 1.199, 1.191, 1.184, 1.179, 1.174, 1.171, + 1.166, 1.172, 1.176, 1.184, 1.195, 1.202, 1.209, 1.216, 1.216, 1.213, 1.208, 1.201, 1.189, 1.182, 1.176, 1.171, + 1.166, 1.173, 1.183, 1.195, 1.202, 1.214, 1.221, 1.228, 1.229, 1.228, 1.221, 1.209, 1.201, 1.186, 1.179, 1.174, + 1.165, 1.174, 1.187, 1.201, 1.214, 1.223, 1.235, 1.241, 1.242, 1.241, 1.229, 1.221, 1.205, 1.188, 1.181, 1.177, + 1.165, 1.174, 1.189, 1.207, 1.223, 1.235, 1.242, 1.253, 1.252, 1.245, 1.241, 1.228, 1.211, 1.189, 1.181, 1.178, + 1.164, 1.173, 1.189, 1.207, 1.224, 1.238, 1.249, 1.255, 1.255, 1.249, 1.242, 1.228, 1.211, 1.191, 1.179, 1.176, + 1.163, 1.172, 1.187, 1.207, 1.223, 1.237, 1.245, 1.253, 1.252, 1.243, 1.237, 1.228, 1.207, 1.188, 1.176, 1.173, + 1.159, 1.167, 1.179, 1.199, 1.217, 1.227, 1.237, 1.241, 1.241, 1.237, 1.228, 1.217, 1.201, 1.184, 1.174, 1.169, + 1.156, 1.164, 1.172, 1.189, 1.205, 1.217, 1.226, 1.229, 1.229, 1.222, 1.217, 1.204, 1.192, 1.177, 1.171, 1.166, + 1.154, 1.159, 1.166, 1.177, 1.189, 1.205, 1.213, 1.216, 1.216, 1.209, 1.204, 1.192, 1.183, 1.172, 1.168, 1.162, + 
1.152, 1.155, 1.161, 1.166, 1.177, 1.188, 1.195, 1.198, 1.199, 1.196, 1.187, 1.183, 1.173, 1.168, 1.163, 1.162, + 1.151, 1.154, 1.158, 1.162, 1.168, 1.177, 1.183, 1.184, 1.184, 1.184, 1.182, 1.172, 1.168, 1.165, 1.162, 1.161 + ] + } + ], + "luminance_lut": + [ + 2.236, 2.111, 1.912, 1.741, 1.579, 1.451, 1.379, 1.349, 1.349, 1.361, 1.411, 1.505, 1.644, 1.816, 2.034, 2.159, + 2.139, 1.994, 1.796, 1.625, 1.467, 1.361, 1.285, 1.248, 1.239, 1.265, 1.321, 1.408, 1.536, 1.703, 1.903, 2.087, + 2.047, 1.898, 1.694, 1.511, 1.373, 1.254, 1.186, 1.152, 1.142, 1.166, 1.226, 1.309, 1.441, 1.598, 1.799, 1.978, + 1.999, 1.824, 1.615, 1.429, 1.281, 1.179, 1.113, 1.077, 1.071, 1.096, 1.153, 1.239, 1.357, 1.525, 1.726, 1.915, + 1.976, 1.773, 1.563, 1.374, 1.222, 1.119, 1.064, 1.032, 1.031, 1.049, 1.099, 1.188, 1.309, 1.478, 1.681, 1.893, + 1.973, 1.756, 1.542, 1.351, 1.196, 1.088, 1.028, 1.011, 1.004, 1.029, 1.077, 1.169, 1.295, 1.459, 1.663, 1.891, + 1.973, 1.761, 1.541, 1.349, 1.193, 1.087, 1.031, 1.006, 1.006, 1.023, 1.075, 1.169, 1.298, 1.463, 1.667, 1.891, + 1.982, 1.789, 1.568, 1.373, 1.213, 1.111, 1.051, 1.029, 1.024, 1.053, 1.106, 1.199, 1.329, 1.495, 1.692, 1.903, + 2.015, 1.838, 1.621, 1.426, 1.268, 1.159, 1.101, 1.066, 1.068, 1.099, 1.166, 1.259, 1.387, 1.553, 1.751, 1.937, + 2.076, 1.911, 1.692, 1.507, 1.346, 1.236, 1.169, 1.136, 1.139, 1.174, 1.242, 1.349, 1.475, 1.641, 1.833, 2.004, + 2.193, 2.011, 1.798, 1.604, 1.444, 1.339, 1.265, 1.235, 1.237, 1.273, 1.351, 1.461, 1.598, 1.758, 1.956, 2.125, + 2.263, 2.154, 1.916, 1.711, 1.549, 1.432, 1.372, 1.356, 1.356, 1.383, 1.455, 1.578, 1.726, 1.914, 2.119, 2.211 + ], + "sigma": 0.006, + "sigma_Cb": 0.00208 + }, + "rpi.contrast": + { + "ce_enable": 1, + "gamma_curve": + [ + 0, 0, 1024, 5040, 2048, 9338, 3072, 12356, 4096, 15312, 5120, 18051, 6144, 20790, 7168, 23193, + 8192, 25744, 9216, 27942, 10240, 30035, 11264, 32005, 12288, 33975, 13312, 35815, 14336, 37600, 15360, 39168, + 16384, 40642, 18432, 43379, 20480, 45749, 22528, 47753, 24576, 49621, 26624, 51253, 28672, 52698, 30720, 53796, + 32768, 54876, 36864, 57012, 40960, 58656, 45056, 59954, 49152, 61183, 53248, 62355, 57344, 63419, 61440, 64476, + 65535, 65535 + ] + }, + "rpi.ccm": + { + "ccms": + [ + { + "ct": 2500, "ccm": + [ + 1.70741, -0.05307, -0.65433, -0.62822, 1.68836, -0.06014, -0.04452, -1.87628, 2.92079 + ] + }, + { + "ct": 2803, "ccm": + [ + 1.74383, -0.18731, -0.55652, -0.56491, 1.67772, -0.11281, -0.01522, -1.60635, 2.62157 + ] + }, + { + "ct": 2912, "ccm": + [ + 1.75215, -0.22221, -0.52995, -0.54568, 1.63522, -0.08954, 0.02633, -1.56997, 2.54364 + ] + }, + { + "ct": 2914, "ccm": + [ + 1.72423, -0.28939, -0.43484, -0.55188, 1.62925, -0.07737, 0.01959, -1.28661, 2.26702 + ] + }, + { + "ct": 3605, "ccm": + [ + 1.80381, -0.43646, -0.36735, -0.46505, 1.56814, -0.10309, 0.00929, -1.00424, 1.99495 + ] + }, + { + "ct": 4540, "ccm": + [ + 1.85263, -0.46545, -0.38719, -0.44136, 1.68443, -0.24307, 0.04108, -0.85599, 1.81491 + ] + }, + { + "ct": 5699, "ccm": + [ + 1.98595, -0.63542, -0.35054, -0.34623, 1.54146, -0.19522, 0.00411, -0.70936, 1.70525 + ] + }, + { + "ct": 8625, "ccm": + [ + 2.21637, -0.56663, -0.64974, -0.41133, 1.96625, -0.55492, -0.02307, -0.83529, 1.85837 + ] + } + ] + }, + "rpi.sharpen": + { + + } +} diff --git a/src/ipa/raspberrypi/data/uncalibrated.json b/src/ipa/raspberrypi/data/uncalibrated.json new file mode 100644 index 000000000000..16a01e940b42 --- /dev/null +++ b/src/ipa/raspberrypi/data/uncalibrated.json @@ -0,0 +1,82 @@ +{ + "rpi.black_level": + { + "black_level": 
4096 + }, + "rpi.awb": + { + "use_derivatives": 0, + "bayes": 0 + }, + "rpi.agc": + { + "metering_modes": + { + "centre-weighted": { + "weights": [4, 4, 4, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0] + } + }, + "exposure_modes": + { + "normal": + { + "shutter": [ 100, 15000, 30000, 60000, 120000 ], + "gain": [ 1.0, 2.0, 3.0, 4.0, 6.0 ] + } + }, + "constraint_modes": + { + "normal": + [ + { "bound": "LOWER", "q_lo": 0.98, "q_hi": 1.0, "y_target": [ 0, 0.4, 1000, 0.4 ] } + ] + }, + "y_target": [ 0, 0.16, 1000, 0.165, 10000, 0.17 ] + }, + "rpi.ccm": + { + "ccms": + [ + { "ct": 4000, "ccm": [ 2.0, -1.0, 0.0, -0.5, 2.0, -0.5, 0, -1.0, 2.0 ] } + ] + }, + "rpi.contrast": + { + "ce_enable": 0, + "gamma_curve": [ + 0, 0, + 1024, 5040, + 2048, 9338, + 3072, 12356, + 4096, 15312, + 5120, 18051, + 6144, 20790, + 7168, 23193, + 8192, 25744, + 9216, 27942, + 10240, 30035, + 11264, 32005, + 12288, 33975, + 13312, 35815, + 14336, 37600, + 15360, 39168, + 16384, 40642, + 18432, 43379, + 20480, 45749, + 22528, 47753, + 24576, 49621, + 26624, 51253, + 28672, 52698, + 30720, 53796, + 32768, 54876, + 36864, 57012, + 40960, 58656, + 45056, 59954, + 49152, 61183, + 53248, 62355, + 57344, 63419, + 61440, 64476, + 65535, 65535 + ] + } +} diff --git a/src/ipa/raspberrypi/md_parser.cpp b/src/ipa/raspberrypi/md_parser.cpp new file mode 100644 index 000000000000..ca809aa266e4 --- /dev/null +++ b/src/ipa/raspberrypi/md_parser.cpp @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * md_parser.cpp - image sensor metadata parsers + */ + +#include +#include +#include + +#include "md_parser.hpp" + +using namespace RPi; + +// This function goes through the embedded data to find the offsets (not +// values!), in the data block, where the values of the given registers can +// subsequently be found. + +// Embedded data tag bytes, from Sony IMX219 datasheet but general to all SMIA +// sensors, I think. + +#define LINE_START 0x0a +#define LINE_END_TAG 0x07 +#define REG_HI_BITS 0xaa +#define REG_LOW_BITS 0xa5 +#define REG_VALUE 0x5a +#define REG_SKIP 0x55 + +MdParserSmia::ParseStatus MdParserSmia::findRegs(unsigned char *data, + uint32_t regs[], int offsets[], + unsigned int num_regs) +{ + assert(num_regs > 0); + if (data[0] != LINE_START) + return NO_LINE_START; + + unsigned int current_offset = 1; // after the LINE_START + unsigned int current_line_start = 0, current_line = 0; + unsigned int reg_num = 0, first_reg = 0; + ParseStatus retcode = PARSE_OK; + while (1) { + int tag = data[current_offset++]; + if ((bits_per_pixel_ == 10 && + (current_offset + 1 - current_line_start) % 5 == 0) || + (bits_per_pixel_ == 12 && + (current_offset + 1 - current_line_start) % 3 == 0)) { + if (data[current_offset++] != REG_SKIP) + return BAD_DUMMY; + } + int data_byte = data[current_offset++]; + //printf("Offset %u, tag 0x%02x data_byte 0x%02x\n", current_offset-1, tag, data_byte); + if (tag == LINE_END_TAG) { + if (data_byte != LINE_END_TAG) + return BAD_LINE_END; + if (num_lines_ && ++current_line == num_lines_) + return MISSING_REGS; + if (line_length_bytes_) { + current_offset = + current_line_start + line_length_bytes_; + // Require whole line to be in the buffer (if buffer size set). 
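+			// If honouring the configured line length would take us
+			// past the end of the buffer, report MISSING_REGS (a
+			// non-fatal status, see md_parser.hpp) rather than a
+			// hard error.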
+ if (buffer_size_bytes_ && + current_offset + line_length_bytes_ > + buffer_size_bytes_) + return MISSING_REGS; + if (data[current_offset] != LINE_START) + return NO_LINE_START; + } else { + // allow a zero line length to mean "hunt for the next line" + while (data[current_offset] != LINE_START && + current_offset < buffer_size_bytes_) + current_offset++; + if (current_offset == buffer_size_bytes_) + return NO_LINE_START; + } + // inc current_offset to after LINE_START + current_line_start = + current_offset++; + } else { + if (tag == REG_HI_BITS) + reg_num = (reg_num & 0xff) | (data_byte << 8); + else if (tag == REG_LOW_BITS) + reg_num = (reg_num & 0xff00) | data_byte; + else if (tag == REG_SKIP) + reg_num++; + else if (tag == REG_VALUE) { + while (reg_num >= + // assumes registers are in order... + regs[first_reg]) { + if (reg_num == regs[first_reg]) + offsets[first_reg] = + current_offset - 1; + if (++first_reg == num_regs) + return retcode; + } + reg_num++; + } else + return ILLEGAL_TAG; + } + } +} diff --git a/src/ipa/raspberrypi/md_parser.hpp b/src/ipa/raspberrypi/md_parser.hpp new file mode 100644 index 000000000000..70d054b2c013 --- /dev/null +++ b/src/ipa/raspberrypi/md_parser.hpp @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * md_parser.hpp - image sensor metadata parser interface + */ +#pragma once + +#include + +/* Camera metadata parser class. Usage as shown below. + +Setup: + +Usually the metadata parser will be made as part of the CamHelper class so +application code doesn't have to worry which to kind to instantiate. But for +the sake of example let's suppose we're parsing imx219 metadata. + +MdParser *parser = new MdParserImx219(); // for example +parser->SetBitsPerPixel(bpp); +parser->SetLineLengthBytes(pitch); +parser->SetNumLines(2); + +Note 1: if you don't know how many lines there are, you can use SetBufferSize +instead to limit the total buffer size. + +Note 2: if you don't know the line length, you can leave the line length unset +(or set to zero) and the parser will hunt for the line start instead. In this +case SetBufferSize *must* be used so that the parser won't run off the end of +the buffer. + +Then on every frame: + +if (parser->Parse(data) != MdParser::OK) + much badness; +unsigned int exposure_lines, gain_code +if (parser->GetExposureLines(exposure_lines) != MdParser::OK) + exposure was not found; +if (parser->GetGainCode(parser, gain_code) != MdParser::OK) + gain code was not found; + +(Note that the CamHelper class converts to/from exposure lines and time, +and gain_code / actual gain.) + +If you suspect your embedded data may have changed its layout, change any line +lengths, number of lines, bits per pixel etc. that are different, and +then: + +parser->Reset(); + +before calling Parse again. */ + +namespace RPi { + +// Abstract base class from which other metadata parsers are derived. 
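+// Call the Set*() methods below before Parse(). If the embedded data layout
+// changes (bits per pixel, line length, number of lines), update them and
+// call Reset() before parsing again, as described in the usage notes above.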
+ +class MdParser +{ +public: + // Parser status codes: + // OK - success + // NOTFOUND - value such as exposure or gain was not found + // ERROR - all other errors + enum Status { + OK = 0, + NOTFOUND = 1, + ERROR = 2 + }; + MdParser() : reset_(true) {} + virtual ~MdParser() {} + void Reset() { reset_ = true; } + void SetBitsPerPixel(int bpp) { bits_per_pixel_ = bpp; } + void SetNumLines(unsigned int num_lines) { num_lines_ = num_lines; } + void SetLineLengthBytes(unsigned int num_bytes) + { + line_length_bytes_ = num_bytes; + } + void SetBufferSize(unsigned int num_bytes) + { + buffer_size_bytes_ = num_bytes; + } + virtual Status Parse(void *data) = 0; + virtual Status GetExposureLines(unsigned int &lines) = 0; + virtual Status GetGainCode(unsigned int &gain_code) = 0; + +protected: + bool reset_; + int bits_per_pixel_; + unsigned int num_lines_; + unsigned int line_length_bytes_; + unsigned int buffer_size_bytes_; +}; + +// This isn't a full implementation of a metadata parser for SMIA sensors, +// however, it does provide the findRegs method which will prove useful and make +// it easier to implement parsers for other SMIA-like sensors (see +// md_parser_imx219.cpp for an example). + +class MdParserSmia : public MdParser +{ +public: + MdParserSmia() : MdParser() {} + +protected: + // Note that error codes > 0 are regarded as non-fatal; codes < 0 + // indicate a bad data buffer. Status codes are: + // PARSE_OK - found all registers, much happiness + // MISSING_REGS - some registers found; should this be a hard error? + // The remaining codes are all hard errors. + enum ParseStatus { + PARSE_OK = 0, + MISSING_REGS = 1, + NO_LINE_START = -1, + ILLEGAL_TAG = -2, + BAD_DUMMY = -3, + BAD_LINE_END = -4, + BAD_PADDING = -5 + }; + ParseStatus findRegs(unsigned char *data, uint32_t regs[], + int offsets[], unsigned int num_regs); +}; + +} // namespace RPi diff --git a/src/ipa/raspberrypi/md_parser_rpi.cpp b/src/ipa/raspberrypi/md_parser_rpi.cpp new file mode 100644 index 000000000000..a42b28f775b4 --- /dev/null +++ b/src/ipa/raspberrypi/md_parser_rpi.cpp @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2020, Raspberry Pi (Trading) Limited + * + * md_parser_rpi.cpp - Metadata parser for generic Raspberry Pi metadata + */ + +#include + +#include "md_parser_rpi.hpp" + +using namespace RPi; + +MdParserRPi::MdParserRPi() +{ +} + +MdParser::Status MdParserRPi::Parse(void *data) +{ + if (buffer_size_bytes_ < sizeof(rpiMetadata)) + return ERROR; + + memcpy(&metadata, data, sizeof(rpiMetadata)); + return OK; +} + +MdParser::Status MdParserRPi::GetExposureLines(unsigned int &lines) +{ + lines = metadata.exposure; + return OK; +} + +MdParser::Status MdParserRPi::GetGainCode(unsigned int &gain_code) +{ + gain_code = metadata.gain; + return OK; +} diff --git a/src/ipa/raspberrypi/md_parser_rpi.hpp b/src/ipa/raspberrypi/md_parser_rpi.hpp new file mode 100644 index 000000000000..1fa334f45bd1 --- /dev/null +++ b/src/ipa/raspberrypi/md_parser_rpi.hpp @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019, Raspberry Pi (Trading) Limited + * + * md_parser_rpi.hpp - Raspberry Pi metadata parser interface + */ +#pragma once + +#include "md_parser.hpp" + +namespace RPi { + +class MdParserRPi : public MdParser +{ +public: + MdParserRPi(); + Status Parse(void *data) override; + Status GetExposureLines(unsigned int &lines) override; + Status GetGainCode(unsigned int &gain_code) override; + +private: + // This must be the same struct that is filled into 
the metadata buffer + // in the pipeline handler. + struct rpiMetadata + { + uint32_t exposure; + uint32_t gain; + }; + rpiMetadata metadata; +}; + +} diff --git a/src/ipa/raspberrypi/meson.build b/src/ipa/raspberrypi/meson.build new file mode 100644 index 000000000000..2dece3a468e8 --- /dev/null +++ b/src/ipa/raspberrypi/meson.build @@ -0,0 +1,59 @@ +ipa_name = 'ipa_rpi' + +rpi_ipa_deps = [ + libcamera_dep, + dependency('boost'), + libatomic, +] + +rpi_ipa_includes = [ + ipa_includes, + libipa_includes, + include_directories('controller') +] + +rpi_ipa_sources = files([ + 'raspberrypi.cpp', + 'md_parser.cpp', + 'md_parser_rpi.cpp', + 'cam_helper.cpp', + 'cam_helper_ov5647.cpp', + 'cam_helper_imx219.cpp', + 'cam_helper_imx477.cpp', + 'controller/controller.cpp', + 'controller/histogram.cpp', + 'controller/algorithm.cpp', + 'controller/rpi/alsc.cpp', + 'controller/rpi/awb.cpp', + 'controller/rpi/sharpen.cpp', + 'controller/rpi/black_level.cpp', + 'controller/rpi/geq.cpp', + 'controller/rpi/noise.cpp', + 'controller/rpi/lux.cpp', + 'controller/rpi/agc.cpp', + 'controller/rpi/dpc.cpp', + 'controller/rpi/ccm.cpp', + 'controller/rpi/contrast.cpp', + 'controller/rpi/sdn.cpp', + 'controller/pwl.cpp', +]) + +mod = shared_module(ipa_name, + rpi_ipa_sources, + name_prefix : '', + include_directories : rpi_ipa_includes, + dependencies : rpi_ipa_deps, + link_with : libipa, + install : true, + install_dir : ipa_install_dir) + +if ipa_sign_module + custom_target(ipa_name + '.so.sign', + input : mod, + output : ipa_name + '.so.sign', + command : [ ipa_sign, ipa_priv_key, '@INPUT@', '@OUTPUT@' ], + install : false, + build_by_default : true) +endif + +subdir('data') diff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp new file mode 100644 index 000000000000..80dc77a3b1ae --- /dev/null +++ b/src/ipa/raspberrypi/raspberrypi.cpp @@ -0,0 +1,1081 @@ +/* SPDX-License-Identifier: BSD-2-Clause */ +/* + * Copyright (C) 2019-2020, Raspberry Pi (Trading) Ltd. + * + * rpi.cpp - Raspberry Pi Image Processing Algorithms + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "agc_algorithm.hpp" +#include "agc_status.h" +#include "alsc_status.h" +#include "awb_algorithm.hpp" +#include "awb_status.h" +#include "black_level_status.h" +#include "cam_helper.hpp" +#include "ccm_algorithm.hpp" +#include "ccm_status.h" +#include "contrast_algorithm.hpp" +#include "contrast_status.h" +#include "controller.hpp" +#include "dpc_status.h" +#include "geq_status.h" +#include "lux_status.h" +#include "metadata.hpp" +#include "noise_status.h" +#include "sdn_status.h" +#include "sharpen_status.h" + +#include "camera_sensor.h" +#include "log.h" +#include "utils.h" + +namespace libcamera { + +/* Configure the sensor with these values initially. 
*/ +#define DEFAULT_ANALOGUE_GAIN 1.0 +#define DEFAULT_EXPOSURE_TIME 20000 + +LOG_DEFINE_CATEGORY(IPARPI) + +class IPARPi : public IPAInterface +{ +public: + IPARPi() + : lastMode_({}), controller_(), controllerInit_(false), + frame_count_(0), check_count_(0), hide_count_(0), + mistrust_count_(0), lsTableHandle_(0), lsTable_(nullptr) + { + } + + ~IPARPi() + { + } + + int init(const IPASettings &settings) override; + int start() override { return 0; } + void stop() override {} + + void configure(const CameraSensorInfo &sensorInfo, + const std::map &streamConfig, + const std::map &entityControls) override; + void mapBuffers(const std::vector &buffers) override; + void unmapBuffers(const std::vector &ids) override; + void processEvent(const IPAOperationData &event) override; + +private: + void setMode(const CameraSensorInfo &sensorInfo); + void queueRequest(const ControlList &controls); + void returnEmbeddedBuffer(unsigned int bufferId); + void prepareISP(unsigned int bufferId); + void reportMetadata(); + bool parseEmbeddedData(unsigned int bufferId, struct DeviceStatus &deviceStatus); + void processStats(unsigned int bufferId); + void applyAGC(const struct AgcStatus *agcStatus); + void applyAWB(const struct AwbStatus *awbStatus, ControlList &ctrls); + void applyDG(const struct AgcStatus *dgStatus, ControlList &ctrls); + void applyCCM(const struct CcmStatus *ccmStatus, ControlList &ctrls); + void applyBlackLevel(const struct BlackLevelStatus *blackLevelStatus, ControlList &ctrls); + void applyGamma(const struct ContrastStatus *contrastStatus, ControlList &ctrls); + void applyGEQ(const struct GeqStatus *geqStatus, ControlList &ctrls); + void applyDenoise(const struct SdnStatus *denoiseStatus, ControlList &ctrls); + void applySharpen(const struct SharpenStatus *sharpenStatus, ControlList &ctrls); + void applyDPC(const struct DpcStatus *dpcStatus, ControlList &ctrls); + void applyLS(const struct AlscStatus *lsStatus, ControlList &ctrls); + void resampleTable(uint16_t dest[], double const src[12][16], int dest_w, int dest_h); + + std::map buffers_; + std::map buffersMemory_; + + ControlInfoMap unicam_ctrls_; + ControlInfoMap isp_ctrls_; + ControlList libcameraMetadata_; + + /* IPA configuration. */ + std::string tuningFile_; + + /* Camera sensor params. */ + CameraMode mode_; + CameraMode lastMode_; + + /* Raspberry Pi controller specific defines. */ + std::unique_ptr helper_; + RPi::Controller controller_; + bool controllerInit_; + RPi::Metadata rpiMetadata_; + + /* + * We count frames to decide if the frame must be hidden (e.g. from + * display) or mistrusted (i.e. not given to the control algos). + */ + uint64_t frame_count_; + /* For checking the sequencing of Prepare/Process calls. */ + uint64_t check_count_; + /* How many frames the pipeline handler should hide, or "drop". */ + unsigned int hide_count_; + /* How many frames we should avoid running control algos on. */ + unsigned int mistrust_count_; + /* LS table allocation passed in from the pipeline handler. 
*/ + uint32_t lsTableHandle_; + void *lsTable_; +}; + +int IPARPi::init(const IPASettings &settings) +{ + tuningFile_ = settings.configurationFile; + return 0; +} + +void IPARPi::setMode(const CameraSensorInfo &sensorInfo) +{ + mode_.bitdepth = sensorInfo.bitsPerPixel; + mode_.width = sensorInfo.outputSize.width; + mode_.height = sensorInfo.outputSize.height; + mode_.sensor_width = sensorInfo.activeAreaSize.width; + mode_.sensor_height = sensorInfo.activeAreaSize.height; + mode_.crop_x = sensorInfo.analogCrop.x; + mode_.crop_y = sensorInfo.analogCrop.y; + + /* + * Calculate scaling parameters. The scale_[xy] factors are determined + * by the ratio between the crop rectangle size and the output size. + */ + mode_.scale_x = sensorInfo.analogCrop.width / sensorInfo.outputSize.width; + mode_.scale_y = sensorInfo.analogCrop.height / sensorInfo.outputSize.height; + + /* + * We're not told by the pipeline handler how scaling is split between + * binning and digital scaling. For now, as a heuristic, assume that + * downscaling up to 2 is achieved through binning, and that any + * additional scaling is achieved through digital scaling. + * + * \todo Get the pipeline handle to provide the full data + */ + mode_.bin_y = std::min(2, static_cast(mode_.scale_x)); + mode_.bin_y = std::min(2, static_cast(mode_.scale_y)); + + /* The noise factor is the square root of the total binning factor. */ + mode_.noise_factor = sqrt(mode_.bin_x * mode_.bin_y); + + /* + * Calculate the line length in nanoseconds as the ratio between + * the line length in pixels and the pixel rate. + */ + mode_.line_length = 1e9 * sensorInfo.lineLength / sensorInfo.pixelRate; +} + +void IPARPi::configure(const CameraSensorInfo &sensorInfo, + const std::map &streamConfig, + const std::map &entityControls) +{ + if (entityControls.empty()) + return; + + unicam_ctrls_ = entityControls.at(0); + isp_ctrls_ = entityControls.at(1); + /* Setup a metadata ControlList to output metadata. */ + libcameraMetadata_ = ControlList(controls::controls); + + /* + * Load the "helper" for this sensor. This tells us all the device specific stuff + * that the kernel driver doesn't. We only do this the first time; we don't need + * to re-parse the metadata after a simple mode-switch for no reason. + */ + std::string cameraName(sensorInfo.model); + if (!helper_) { + helper_ = std::unique_ptr(RPi::CamHelper::Create(cameraName)); + /* + * Pass out the sensor config to the pipeline handler in order + * to setup the staggered writer class. + */ + int gainDelay, exposureDelay, sensorMetadata; + helper_->GetDelays(exposureDelay, gainDelay); + sensorMetadata = helper_->SensorEmbeddedDataPresent(); + RPi::CamTransform orientation = helper_->GetOrientation(); + + IPAOperationData op; + op.operation = RPI_IPA_ACTION_SET_SENSOR_CONFIG; + op.data.push_back(gainDelay); + op.data.push_back(exposureDelay); + op.data.push_back(sensorMetadata); + + ControlList ctrls(unicam_ctrls_); + ctrls.set(V4L2_CID_HFLIP, (int32_t) !!(orientation & RPi::CamTransform_HFLIP)); + ctrls.set(V4L2_CID_VFLIP, (int32_t) !!(orientation & RPi::CamTransform_VFLIP)); + op.controls.push_back(ctrls); + + queueFrameAction.emit(0, op); + } + + /* Re-assemble camera mode using the sensor info. */ + setMode(sensorInfo); + + /* Pass the camera mode to the CamHelper to setup algorithms. 
*/ + helper_->SetCameraMode(mode_); + + /* + * Initialise frame counts, and decide how many frames must be hidden or + *"mistrusted", which depends on whether this is a startup from cold, + * or merely a mode switch in a running system. + */ + frame_count_ = 0; + check_count_ = 0; + if (controllerInit_) { + hide_count_ = helper_->HideFramesModeSwitch(); + mistrust_count_ = helper_->MistrustFramesModeSwitch(); + } else { + hide_count_ = helper_->HideFramesStartup(); + mistrust_count_ = helper_->MistrustFramesStartup(); + } + + if (!controllerInit_) { + /* Load the tuning file for this sensor. */ + controller_.Read(tuningFile_.c_str()); + controller_.Initialise(); + controllerInit_ = true; + + /* Calculate initial values for gain and exposure. */ + int32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN); + int32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME); + + ControlList ctrls(unicam_ctrls_); + ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code); + ctrls.set(V4L2_CID_EXPOSURE, exposure_lines); + + IPAOperationData op; + op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED; + op.controls.push_back(ctrls); + queueFrameAction.emit(0, op); + } + + controller_.SwitchMode(mode_); + + lastMode_ = mode_; +} + +void IPARPi::mapBuffers(const std::vector &buffers) +{ + for (const IPABuffer &buffer : buffers) { + auto elem = buffers_.emplace(std::piecewise_construct, + std::forward_as_tuple(buffer.id), + std::forward_as_tuple(buffer.planes)); + const FrameBuffer &fb = elem.first->second; + + buffersMemory_[buffer.id] = mmap(NULL, fb.planes()[0].length, PROT_READ | PROT_WRITE, + MAP_SHARED, fb.planes()[0].fd.fd(), 0); + + if (buffersMemory_[buffer.id] == MAP_FAILED) { + int ret = -errno; + LOG(IPARPI, Fatal) << "Failed to mmap buffer: " << strerror(-ret); + } + } +} + +void IPARPi::unmapBuffers(const std::vector &ids) +{ + for (unsigned int id : ids) { + const auto fb = buffers_.find(id); + if (fb == buffers_.end()) + continue; + + munmap(buffersMemory_[id], fb->second.planes()[0].length); + buffersMemory_.erase(id); + buffers_.erase(id); + } +} + +void IPARPi::processEvent(const IPAOperationData &event) +{ + switch (event.operation) { + case RPI_IPA_EVENT_SIGNAL_STAT_READY: { + unsigned int bufferId = event.data[0]; + + if (++check_count_ != frame_count_) /* assert here? */ + LOG(IPARPI, Error) << "WARNING: Prepare/Process mismatch!!!"; + if (frame_count_ > mistrust_count_) + processStats(bufferId); + + IPAOperationData op; + op.operation = RPI_IPA_ACTION_STATS_METADATA_COMPLETE; + op.data = { bufferId & RPiIpaMask::ID }; + op.controls = { libcameraMetadata_ }; + queueFrameAction.emit(0, op); + break; + } + + case RPI_IPA_EVENT_SIGNAL_ISP_PREPARE: { + unsigned int embeddedbufferId = event.data[0]; + unsigned int bayerbufferId = event.data[1]; + + /* + * At start-up, or after a mode-switch, we may want to + * avoid running the control algos for a few frames in case + * they are "unreliable". + */ + prepareISP(embeddedbufferId); + reportMetadata(); + + /* Ready to push the input buffer into the ISP. 
*/ + IPAOperationData op; + if (++frame_count_ > hide_count_) + op.operation = RPI_IPA_ACTION_RUN_ISP; + else + op.operation = RPI_IPA_ACTION_RUN_ISP_AND_DROP_FRAME; + op.data = { bayerbufferId & RPiIpaMask::ID }; + queueFrameAction.emit(0, op); + break; + } + + case RPI_IPA_EVENT_QUEUE_REQUEST: { + queueRequest(event.controls[0]); + break; + } + + case RPI_IPA_EVENT_LS_TABLE_ALLOCATION: { + lsTable_ = reinterpret_cast(event.data[0]); + lsTableHandle_ = event.data[1]; + break; + } + + default: + LOG(IPARPI, Error) << "Unknown event " << event.operation; + break; + } +} + +void IPARPi::reportMetadata() +{ + std::unique_lock lock(rpiMetadata_); + + /* + * Certain information about the current frame and how it will be + * processed can be extracted and placed into the libcamera metadata + * buffer, where an application could query it. + */ + + DeviceStatus *deviceStatus = rpiMetadata_.GetLocked("device.status"); + if (deviceStatus) { + libcameraMetadata_.set(controls::ExposureTime, deviceStatus->shutter_speed); + libcameraMetadata_.set(controls::AnalogueGain, deviceStatus->analogue_gain); + } + + AgcStatus *agcStatus = rpiMetadata_.GetLocked("agc.status"); + if (agcStatus) + libcameraMetadata_.set(controls::AeLocked, agcStatus->locked); + + LuxStatus *luxStatus = rpiMetadata_.GetLocked("lux.status"); + if (luxStatus) + libcameraMetadata_.set(controls::Lux, luxStatus->lux); + + AwbStatus *awbStatus = rpiMetadata_.GetLocked("awb.status"); + if (awbStatus) { + libcameraMetadata_.set(controls::ColourGains, { static_cast(awbStatus->gain_r), + static_cast(awbStatus->gain_b) }); + libcameraMetadata_.set(controls::ColourTemperature, awbStatus->temperature_K); + } + + BlackLevelStatus *blackLevelStatus = rpiMetadata_.GetLocked("black_level.status"); + if (blackLevelStatus) + libcameraMetadata_.set(controls::SensorBlackLevels, + { static_cast(blackLevelStatus->black_level_r), + static_cast(blackLevelStatus->black_level_g), + static_cast(blackLevelStatus->black_level_g), + static_cast(blackLevelStatus->black_level_b) }); +} + +/* + * Converting between enums (used in the libcamera API) and the names that + * we use to identify different modes. Unfortunately, the conversion tables + * must be kept up-to-date by hand. + */ + +static const std::map MeteringModeTable = { + { controls::MeteringCentreWeighted, "centre-weighted" }, + { controls::MeteringSpot, "spot" }, + { controls::MeteringMatrix, "matrix" }, + { controls::MeteringCustom, "custom" }, +}; + +static const std::map ConstraintModeTable = { + { controls::ConstraintNormal, "normal" }, + { controls::ConstraintHighlight, "highlight" }, + { controls::ConstraintCustom, "custom" }, +}; + +static const std::map ExposureModeTable = { + { controls::ExposureNormal, "normal" }, + { controls::ExposureShort, "short" }, + { controls::ExposureLong, "long" }, + { controls::ExposureCustom, "custom" }, +}; + +static const std::map AwbModeTable = { + { controls::AwbAuto, "normal" }, + { controls::AwbIncandescent, "incandescent" }, + { controls::AwbTungsten, "tungsten" }, + { controls::AwbFluorescent, "fluorescent" }, + { controls::AwbIndoor, "indoor" }, + { controls::AwbDaylight, "daylight" }, + { controls::AwbCustom, "custom" }, +}; + +void IPARPi::queueRequest(const ControlList &controls) +{ + /* Clear the return metadata buffer. 
*/ + libcameraMetadata_.clear(); + + for (auto const &ctrl : controls) { + LOG(IPARPI, Info) << "Request ctrl: " + << controls::controls.at(ctrl.first)->name() + << " = " << ctrl.second.toString(); + + switch (ctrl.first) { + case controls::AE_ENABLE: { + RPi::Algorithm *agc = controller_.GetAlgorithm("agc"); + ASSERT(agc); + if (ctrl.second.get() == false) + agc->Pause(); + else + agc->Resume(); + + libcameraMetadata_.set(controls::AeEnable, ctrl.second.get()); + break; + } + + case controls::EXPOSURE_TIME: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + /* This expects units of micro-seconds. */ + agc->SetFixedShutter(ctrl.second.get()); + /* For the manual values to take effect, AGC must be unpaused. */ + if (agc->IsPaused()) + agc->Resume(); + + libcameraMetadata_.set(controls::ExposureTime, ctrl.second.get()); + break; + } + + case controls::ANALOGUE_GAIN: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + agc->SetFixedAnalogueGain(ctrl.second.get()); + /* For the manual values to take effect, AGC must be unpaused. */ + if (agc->IsPaused()) + agc->Resume(); + + libcameraMetadata_.set(controls::AnalogueGain, + ctrl.second.get()); + break; + } + + case controls::AE_METERING_MODE: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + + int32_t idx = ctrl.second.get(); + if (MeteringModeTable.count(idx)) { + agc->SetMeteringMode(MeteringModeTable.at(idx)); + libcameraMetadata_.set(controls::AeMeteringMode, idx); + } else { + LOG(IPARPI, Error) << "Metering mode " << idx + << " not recognised"; + } + break; + } + + case controls::AE_CONSTRAINT_MODE: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + + int32_t idx = ctrl.second.get(); + if (ConstraintModeTable.count(idx)) { + agc->SetConstraintMode(ConstraintModeTable.at(idx)); + libcameraMetadata_.set(controls::AeConstraintMode, idx); + } else { + LOG(IPARPI, Error) << "Constraint mode " << idx + << " not recognised"; + } + break; + } + + case controls::AE_EXPOSURE_MODE: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + + int32_t idx = ctrl.second.get(); + if (ExposureModeTable.count(idx)) { + agc->SetExposureMode(ExposureModeTable.at(idx)); + libcameraMetadata_.set(controls::AeExposureMode, idx); + } else { + LOG(IPARPI, Error) << "Exposure mode " << idx + << " not recognised"; + } + break; + } + + case controls::EXPOSURE_VALUE: { + RPi::AgcAlgorithm *agc = dynamic_cast( + controller_.GetAlgorithm("agc")); + ASSERT(agc); + + /* + * The SetEv() method takes in a direct exposure multiplier. 
+ * So convert to 2^EV + */ + double ev = pow(2.0, ctrl.second.get()); + agc->SetEv(ev); + libcameraMetadata_.set(controls::ExposureValue, + ctrl.second.get()); + break; + } + + case controls::AWB_ENABLE: { + RPi::Algorithm *awb = controller_.GetAlgorithm("awb"); + ASSERT(awb); + + if (ctrl.second.get() == false) + awb->Pause(); + else + awb->Resume(); + + libcameraMetadata_.set(controls::AwbEnable, + ctrl.second.get()); + break; + } + + case controls::AWB_MODE: { + RPi::AwbAlgorithm *awb = dynamic_cast( + controller_.GetAlgorithm("awb")); + ASSERT(awb); + + int32_t idx = ctrl.second.get(); + if (AwbModeTable.count(idx)) { + awb->SetMode(AwbModeTable.at(idx)); + libcameraMetadata_.set(controls::AwbMode, idx); + } else { + LOG(IPARPI, Error) << "AWB mode " << idx + << " not recognised"; + } + break; + } + + case controls::COLOUR_GAINS: { + auto gains = ctrl.second.get>(); + RPi::AwbAlgorithm *awb = dynamic_cast( + controller_.GetAlgorithm("awb")); + ASSERT(awb); + + awb->SetManualGains(gains[0], gains[1]); + if (gains[0] != 0.0f && gains[1] != 0.0f) + /* A gain of 0.0f will switch back to auto mode. */ + libcameraMetadata_.set(controls::ColourGains, + { gains[0], gains[1] }); + break; + } + + case controls::BRIGHTNESS: { + RPi::ContrastAlgorithm *contrast = dynamic_cast( + controller_.GetAlgorithm("contrast")); + ASSERT(contrast); + + contrast->SetBrightness(ctrl.second.get() * 65536); + libcameraMetadata_.set(controls::Brightness, + ctrl.second.get()); + break; + } + + case controls::CONTRAST: { + RPi::ContrastAlgorithm *contrast = dynamic_cast( + controller_.GetAlgorithm("contrast")); + ASSERT(contrast); + + contrast->SetContrast(ctrl.second.get()); + libcameraMetadata_.set(controls::Contrast, + ctrl.second.get()); + break; + } + + case controls::SATURATION: { + RPi::CcmAlgorithm *ccm = dynamic_cast( + controller_.GetAlgorithm("ccm")); + ASSERT(ccm); + + ccm->SetSaturation(ctrl.second.get()); + libcameraMetadata_.set(controls::Saturation, + ctrl.second.get()); + break; + } + + default: + LOG(IPARPI, Warning) + << "Ctrl " << controls::controls.at(ctrl.first)->name() + << " is not handled."; + break; + } + } +} + +void IPARPi::returnEmbeddedBuffer(unsigned int bufferId) +{ + IPAOperationData op; + op.operation = RPI_IPA_ACTION_EMBEDDED_COMPLETE; + op.data = { bufferId & RPiIpaMask::ID }; + queueFrameAction.emit(0, op); +} + +void IPARPi::prepareISP(unsigned int bufferId) +{ + struct DeviceStatus deviceStatus = {}; + bool success = parseEmbeddedData(bufferId, deviceStatus); + + /* Done with embedded data now, return to pipeline handler asap. */ + returnEmbeddedBuffer(bufferId); + + if (success) { + ControlList ctrls(isp_ctrls_); + + rpiMetadata_.Clear(); + rpiMetadata_.Set("device.status", deviceStatus); + controller_.Prepare(&rpiMetadata_); + + /* Lock the metadata buffer to avoid constant locks/unlocks. 
*/ + std::unique_lock lock(rpiMetadata_); + + AwbStatus *awbStatus = rpiMetadata_.GetLocked("awb.status"); + if (awbStatus) + applyAWB(awbStatus, ctrls); + + CcmStatus *ccmStatus = rpiMetadata_.GetLocked("ccm.status"); + if (ccmStatus) + applyCCM(ccmStatus, ctrls); + + AgcStatus *dgStatus = rpiMetadata_.GetLocked("agc.status"); + if (dgStatus) + applyDG(dgStatus, ctrls); + + AlscStatus *lsStatus = rpiMetadata_.GetLocked("alsc.status"); + if (lsStatus) + applyLS(lsStatus, ctrls); + + ContrastStatus *contrastStatus = rpiMetadata_.GetLocked("contrast.status"); + if (contrastStatus) + applyGamma(contrastStatus, ctrls); + + BlackLevelStatus *blackLevelStatus = rpiMetadata_.GetLocked("black_level.status"); + if (blackLevelStatus) + applyBlackLevel(blackLevelStatus, ctrls); + + GeqStatus *geqStatus = rpiMetadata_.GetLocked("geq.status"); + if (geqStatus) + applyGEQ(geqStatus, ctrls); + + SdnStatus *denoiseStatus = rpiMetadata_.GetLocked("sdn.status"); + if (denoiseStatus) + applyDenoise(denoiseStatus, ctrls); + + SharpenStatus *sharpenStatus = rpiMetadata_.GetLocked("sharpen.status"); + if (sharpenStatus) + applySharpen(sharpenStatus, ctrls); + + DpcStatus *dpcStatus = rpiMetadata_.GetLocked("dpc.status"); + if (dpcStatus) + applyDPC(dpcStatus, ctrls); + + if (!ctrls.empty()) { + IPAOperationData op; + op.operation = RPI_IPA_ACTION_V4L2_SET_ISP; + op.controls.push_back(ctrls); + queueFrameAction.emit(0, op); + } + } +} + +bool IPARPi::parseEmbeddedData(unsigned int bufferId, struct DeviceStatus &deviceStatus) +{ + auto it = buffersMemory_.find(bufferId); + if (it == buffersMemory_.end()) { + LOG(IPARPI, Error) << "Could not find embedded buffer!"; + return false; + } + + int size = buffers_.find(bufferId)->second.planes()[0].length; + helper_->Parser().SetBufferSize(size); + RPi::MdParser::Status status = helper_->Parser().Parse(it->second); + if (status != RPi::MdParser::Status::OK) { + LOG(IPARPI, Error) << "Embedded Buffer parsing failed, error " << status; + } else { + uint32_t exposure_lines, gain_code; + if (helper_->Parser().GetExposureLines(exposure_lines) != RPi::MdParser::Status::OK) { + LOG(IPARPI, Error) << "Exposure time failed"; + return false; + } + + deviceStatus.shutter_speed = helper_->Exposure(exposure_lines); + if (helper_->Parser().GetGainCode(gain_code) != RPi::MdParser::Status::OK) { + LOG(IPARPI, Error) << "Gain failed"; + return false; + } + + deviceStatus.analogue_gain = helper_->Gain(gain_code); + LOG(IPARPI, Debug) << "Metadata - Exposure : " + << deviceStatus.shutter_speed << " Gain : " + << deviceStatus.analogue_gain; + } + + return true; +} + +void IPARPi::processStats(unsigned int bufferId) +{ + auto it = buffersMemory_.find(bufferId); + if (it == buffersMemory_.end()) { + LOG(IPARPI, Error) << "Could not find stats buffer!"; + return; + } + + bcm2835_isp_stats *stats = static_cast(it->second); + RPi::StatisticsPtr statistics = std::make_shared(*stats); + controller_.Process(statistics, &rpiMetadata_); + + struct AgcStatus agcStatus; + if (rpiMetadata_.Get("agc.status", agcStatus) == 0) + applyAGC(&agcStatus); +} + +void IPARPi::applyAWB(const struct AwbStatus *awbStatus, ControlList &ctrls) +{ + const auto gainR = isp_ctrls_.find(V4L2_CID_RED_BALANCE); + if (gainR == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find red gain control"; + return; + } + + const auto gainB = isp_ctrls_.find(V4L2_CID_BLUE_BALANCE); + if (gainB == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find blue gain control"; + return; + } + + LOG(IPARPI, Debug) << "Applying WB R: " 
<< awbStatus->gain_r << " B: " + << awbStatus->gain_b; + + ctrls.set(V4L2_CID_RED_BALANCE, + static_cast(awbStatus->gain_r * 1000)); + ctrls.set(V4L2_CID_BLUE_BALANCE, + static_cast(awbStatus->gain_b * 1000)); +} + +void IPARPi::applyAGC(const struct AgcStatus *agcStatus) +{ + IPAOperationData op; + op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED; + + int32_t gain_code = helper_->GainCode(agcStatus->analogue_gain); + int32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time); + + if (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find analogue gain control"; + return; + } + + if (unicam_ctrls_.find(V4L2_CID_EXPOSURE) == unicam_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find exposure control"; + return; + } + + LOG(IPARPI, Debug) << "Applying AGC Exposure: " << agcStatus->shutter_time + << " (Shutter lines: " << exposure_lines << ") Gain: " + << agcStatus->analogue_gain << " (Gain Code: " + << gain_code << ")"; + + ControlList ctrls(unicam_ctrls_); + ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code); + ctrls.set(V4L2_CID_EXPOSURE, exposure_lines); + op.controls.push_back(ctrls); + queueFrameAction.emit(0, op); +} + +void IPARPi::applyDG(const struct AgcStatus *dgStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_DIGITAL_GAIN) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find digital gain control"; + return; + } + + ctrls.set(V4L2_CID_DIGITAL_GAIN, + static_cast(dgStatus->digital_gain * 1000)); +} + +void IPARPi::applyCCM(const struct CcmStatus *ccmStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_CC_MATRIX) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find CCM control"; + return; + } + + bcm2835_isp_custom_ccm ccm; + for (int i = 0; i < 9; i++) { + ccm.ccm.ccm[i / 3][i % 3].den = 1000; + ccm.ccm.ccm[i / 3][i % 3].num = 1000 * ccmStatus->matrix[i]; + } + + ccm.enabled = 1; + ccm.ccm.offsets[0] = ccm.ccm.offsets[1] = ccm.ccm.offsets[2] = 0; + + ControlValue c(Span{ reinterpret_cast(&ccm), + sizeof(ccm) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_CC_MATRIX, c); +} + +void IPARPi::applyGamma(const struct ContrastStatus *contrastStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_GAMMA) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find Gamma control"; + return; + } + + struct bcm2835_isp_gamma gamma; + gamma.enabled = 1; + for (int i = 0; i < CONTRAST_NUM_POINTS; i++) { + gamma.x[i] = contrastStatus->points[i].x; + gamma.y[i] = contrastStatus->points[i].y; + } + + ControlValue c(Span{ reinterpret_cast(&gamma), + sizeof(gamma) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_GAMMA, c); +} + +void IPARPi::applyBlackLevel(const struct BlackLevelStatus *blackLevelStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find black level control"; + return; + } + + bcm2835_isp_black_level blackLevel; + blackLevel.enabled = 1; + blackLevel.black_level_r = blackLevelStatus->black_level_r; + blackLevel.black_level_g = blackLevelStatus->black_level_g; + blackLevel.black_level_b = blackLevelStatus->black_level_b; + + ControlValue c(Span{ reinterpret_cast(&blackLevel), + sizeof(blackLevel) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_BLACK_LEVEL, c); +} + +void IPARPi::applyGEQ(const struct GeqStatus *geqStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_GEQ) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find geq control"; + return; + } + 
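+	/*
+	 * Fill in the green equalisation parameters. The slope is passed to
+	 * the ISP as a num/den fixed-point fraction scaled by 1000.
+	 */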
+ bcm2835_isp_geq geq; + geq.enabled = 1; + geq.offset = geqStatus->offset; + geq.slope.den = 1000; + geq.slope.num = 1000 * geqStatus->slope; + + ControlValue c(Span{ reinterpret_cast(&geq), + sizeof(geq) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_GEQ, c); +} + +void IPARPi::applyDenoise(const struct SdnStatus *denoiseStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_DENOISE) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find denoise control"; + return; + } + + bcm2835_isp_denoise denoise; + denoise.enabled = 1; + denoise.constant = denoiseStatus->noise_constant; + denoise.slope.num = 1000 * denoiseStatus->noise_slope; + denoise.slope.den = 1000; + denoise.strength.num = 1000 * denoiseStatus->strength; + denoise.strength.den = 1000; + + ControlValue c(Span{ reinterpret_cast(&denoise), + sizeof(denoise) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_DENOISE, c); +} + +void IPARPi::applySharpen(const struct SharpenStatus *sharpenStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_SHARPEN) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find sharpen control"; + return; + } + + bcm2835_isp_sharpen sharpen; + sharpen.enabled = 1; + sharpen.threshold.num = 1000 * sharpenStatus->threshold; + sharpen.threshold.den = 1000; + sharpen.strength.num = 1000 * sharpenStatus->strength; + sharpen.strength.den = 1000; + sharpen.limit.num = 1000 * sharpenStatus->limit; + sharpen.limit.den = 1000; + + ControlValue c(Span{ reinterpret_cast(&sharpen), + sizeof(sharpen) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_SHARPEN, c); +} + +void IPARPi::applyDPC(const struct DpcStatus *dpcStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_DPC) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find DPC control"; + return; + } + + bcm2835_isp_dpc dpc; + dpc.enabled = 1; + dpc.strength = dpcStatus->strength; + + ControlValue c(Span{ reinterpret_cast(&dpc), + sizeof(dpc) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_DPC, c); +} + +void IPARPi::applyLS(const struct AlscStatus *lsStatus, ControlList &ctrls) +{ + if (isp_ctrls_.find(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING) == isp_ctrls_.end()) { + LOG(IPARPI, Error) << "Can't find LS control"; + return; + } + + /* + * Program lens shading tables into pipeline. + * Choose smallest cell size that won't exceed 63x48 cells. + */ + const int cell_sizes[] = { 16, 32, 64, 128, 256 }; + unsigned int num_cells = ARRAY_SIZE(cell_sizes); + unsigned int i, w, h, cell_size; + for (i = 0; i < num_cells; i++) { + cell_size = cell_sizes[i]; + w = (mode_.width + cell_size - 1) / cell_size; + h = (mode_.height + cell_size - 1) / cell_size; + if (w < 64 && h <= 48) + break; + } + + if (i == num_cells) { + LOG(IPARPI, Error) << "Cannot find cell size"; + return; + } + + /* We're going to supply corner sampled tables, 16 bit samples. 
*/ + w++, h++; + bcm2835_isp_lens_shading ls = { + .enabled = 1, + .grid_cell_size = cell_size, + .grid_width = w, + .grid_stride = w, + .grid_height = h, + .mem_handle_table = lsTableHandle_, + .ref_transform = 0, + .corner_sampled = 1, + .gain_format = GAIN_FORMAT_U4P10 + }; + + if (!lsTable_ || w * h * 4 * sizeof(uint16_t) > MAX_LS_GRID_SIZE) { + LOG(IPARPI, Error) << "Do not have a correctly allocate lens shading table!"; + return; + } + + if (lsStatus) { + /* Format will be u4.10 */ + uint16_t *grid = static_cast(lsTable_); + + resampleTable(grid, lsStatus->r, w, h); + resampleTable(grid + w * h, lsStatus->g, w, h); + std::memcpy(grid + 2 * w * h, grid + w * h, w * h * sizeof(uint16_t)); + resampleTable(grid + 3 * w * h, lsStatus->b, w, h); + } + + ControlValue c(Span{ reinterpret_cast(&ls), + sizeof(ls) }); + ctrls.set(V4L2_CID_USER_BCM2835_ISP_LENS_SHADING, c); +} + +/* Resamples a 16x12 table with central sampling to dest_w x dest_h with corner sampling. */ +void IPARPi::resampleTable(uint16_t dest[], double const src[12][16], int dest_w, int dest_h) +{ + /* + * Precalculate and cache the x sampling locations and phases to + * save recomputing them on every row. + */ + assert(dest_w > 1 && dest_h > 1 && dest_w <= 64); + int x_lo[64], x_hi[64]; + double xf[64]; + double x = -0.5, x_inc = 16.0 / (dest_w - 1); + for (int i = 0; i < dest_w; i++, x += x_inc) { + x_lo[i] = floor(x); + xf[i] = x - x_lo[i]; + x_hi[i] = x_lo[i] < 15 ? x_lo[i] + 1 : 15; + x_lo[i] = x_lo[i] > 0 ? x_lo[i] : 0; + } + + /* Now march over the output table generating the new values. */ + double y = -0.5, y_inc = 12.0 / (dest_h - 1); + for (int j = 0; j < dest_h; j++, y += y_inc) { + int y_lo = floor(y); + double yf = y - y_lo; + int y_hi = y_lo < 11 ? y_lo + 1 : 11; + y_lo = y_lo > 0 ? y_lo : 0; + double const *row_above = src[y_lo]; + double const *row_below = src[y_hi]; + for (int i = 0; i < dest_w; i++) { + double above = row_above[x_lo[i]] * (1 - xf[i]) + row_above[x_hi[i]] * xf[i]; + double below = row_below[x_lo[i]] * (1 - xf[i]) + row_below[x_hi[i]] * xf[i]; + int result = floor(1024 * (above * (1 - yf) + below * yf) + .5); + *(dest++) = result > 16383 ? 
16383 : result; /* want u4.10 */ + } + } +} + +/* + * External IPA module interface + */ + +extern "C" { +const struct IPAModuleInfo ipaModuleInfo = { + IPA_MODULE_API_VERSION, + 1, + "PipelineHandlerRPi", + "raspberrypi", +}; + +struct ipa_context *ipaCreate() +{ + return new IPAInterfaceWrapper(std::make_unique()); +} + +}; /* extern "C" */ + +} /* namespace libcamera */ From patchwork Mon May 4 09:28:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Laurent Pinchart X-Patchwork-Id: 3686 Return-Path: Received: from perceval.ideasonboard.com (perceval.ideasonboard.com [IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647]) by lancelot.ideasonboard.com (Postfix) with ESMTPS id 6B52661758 for ; Mon, 4 May 2020 11:28:42 +0200 (CEST) Authentication-Results: lancelot.ideasonboard.com; dkim=pass (1024-bit key; unprotected) header.d=ideasonboard.com header.i=@ideasonboard.com header.b="O7Pb9coR"; dkim-atps=neutral Received: from pendragon.bb.dnainternet.fi (81-175-216-236.bb.dnainternet.fi [81.175.216.236]) by perceval.ideasonboard.com (Postfix) with ESMTPSA id B340511FC; Mon, 4 May 2020 11:28:41 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com; s=mail; t=1588584522; bh=eoOqXeesVEjAUZEjf1NoKwaZxGzmUN1vJebIbcb4bVw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=O7Pb9coRMDPciSMcTRZ4Xg4jKKh9VdznASIyWn4QefiAMtk1Rz4rsveR+Usf18bMZ 9LxLgeCMzp1o2Vmd66BC17fXPsXUYSMkbhQv8+dvk+VozIYCo/i+SrftZScEBAF1si RSGhwuwj+NZifAp17yFayj7KUH5kFo6uGoxkeUMk= From: Laurent Pinchart To: libcamera-devel@lists.libcamera.org Date: Mon, 4 May 2020 12:28:29 +0300 Message-Id: <20200504092829.10099-7-laurent.pinchart@ideasonboard.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> References: <20200504092829.10099-1-laurent.pinchart@ideasonboard.com> MIME-Version: 1.0 Subject: [libcamera-devel] [PATCH 6/6] libcamera: raspberrypi: Add components to meson build X-BeenThere: libcamera-devel@lists.libcamera.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 04 May 2020 09:28:42 -0000 From: Naushir Patuck Add the Raspberry Pi pipeline handler and IPA as targets in the meson build system. Signed-off-by: Naushir Patuck Signed-off-by: Laurent Pinchart --- meson_options.txt | 2 +- src/ipa/meson.build | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/meson_options.txt b/meson_options.txt index 6464df837cc3..2c7ab07bff91 100644 --- a/meson_options.txt +++ b/meson_options.txt @@ -14,7 +14,7 @@ option('gstreamer', option('pipelines', type : 'array', - choices : ['ipu3', 'rkisp1', 'uvcvideo', 'vimc'], + choices : ['ipu3', 'raspberrypi', 'rkisp1', 'uvcvideo', 'vimc'], description : 'Select which pipeline handlers to include') option('test', diff --git a/src/ipa/meson.build b/src/ipa/meson.build index 56e65eaa7426..e0f6e14afd3e 100644 --- a/src/ipa/meson.build +++ b/src/ipa/meson.build @@ -18,7 +18,7 @@ subdir('libipa') ipa_sign = find_program('ipa-sign.sh') -ipas = ['rkisp1', 'vimc'] +ipas = ['raspberrypi', 'rkisp1', 'vimc'] foreach pipeline : get_option('pipelines') if ipas.contains(pipeline)