[{"id":4894,"web_url":"https://patchwork.libcamera.org/comment/4894/","msgid":"<20200524231608.GB6006@pendragon.ideasonboard.com>","date":"2020-05-24T23:16:08","subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Naush,\n\nThank you for the patch.\n\nOn Wed, May 13, 2020 at 10:11:19AM +0100, Naushir Patuck wrote:\n> Add support for setting V4L2_CID_VBLANK appropriately when setting\n> V4L2_CID_EXPOSURE. This will allow adaptive framerates during\n> viewfinder use cases (e.g. when the exposure time goes above 33ms, we\n> can reduce the framerate to lower than 30fps).\n> \n> The minimum and maximum frame durations are provided via libcamera\n> controls, and will prioritise exposure time limits over any AGC request.\n\nGeneral question about the AGC and frame duration selection behaviour.\nWhen the scene illumination drops and the applications allows the frame\nrate to be lowered, is it best to adapt the frame duration continuously,\nor jump from, let's say, 30fps to 15fps and stay at 15fps ? 
I could\nimagine adaptative frame durations to be annoying to handle for codecs\nor displays, but I probably lack a good view of the details.\n\n> V4L2_CID_VBLANK is controlled through the staggered writer, just like\n> the exposure and gain controls.\n> \n> Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> ---\n>  include/ipa/raspberrypi.h                     |  1 +\n>  src/ipa/raspberrypi/cam_helper.cpp            | 38 ++++++++++++++-\n>  src/ipa/raspberrypi/cam_helper.hpp            | 15 +++++-\n>  src/ipa/raspberrypi/cam_helper_imx219.cpp     | 11 ++++-\n>  src/ipa/raspberrypi/cam_helper_imx477.cpp     | 11 ++++-\n>  src/ipa/raspberrypi/cam_helper_ov5647.cpp     | 11 ++++-\n>  src/ipa/raspberrypi/raspberrypi.cpp           | 48 ++++++++++++++++---\n>  .../pipeline/raspberrypi/raspberrypi.cpp      |  2 +\n>  8 files changed, 124 insertions(+), 13 deletions(-)\n> \n> diff --git a/include/ipa/raspberrypi.h b/include/ipa/raspberrypi.h\n> index c109469e..74954db2 100644\n> --- a/include/ipa/raspberrypi.h\n> +++ b/include/ipa/raspberrypi.h\n> @@ -51,6 +51,7 @@ static const ControlInfoMap RPiControls = {\n>  \t{ &controls::Brightness, ControlInfo(-1.0f, 1.0f) },\n>  \t{ &controls::Contrast, ControlInfo(0.0f, 32.0f) },\n>  \t{ &controls::Saturation, ControlInfo(0.0f, 32.0f) },\n> +\t{ &controls::FrameDurations, ControlInfo(1.0e3f, 1.0e9f) }\n>  };\n>  \n>  } /* namespace libcamera */\n> diff --git a/src/ipa/raspberrypi/cam_helper.cpp b/src/ipa/raspberrypi/cam_helper.cpp\n> index 7f05d2c6..06732241 100644\n> --- a/src/ipa/raspberrypi/cam_helper.cpp\n> +++ b/src/ipa/raspberrypi/cam_helper.cpp\n> @@ -34,8 +34,10 @@ CamHelper *CamHelper::Create(std::string const &cam_name)\n>  \treturn nullptr;\n>  }\n>  \n> -CamHelper::CamHelper(MdParser *parser)\n> -\t: parser_(parser), initialized_(false)\n> +CamHelper::CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> +\t\t     unsigned int frameIntegrationDiff)\n> +\t: parser_(parser), initialized_(false), 
maxFrameLength_(maxFrameLength),\n> +\t  frameIntegrationDiff_(frameIntegrationDiff)\n>  {\n>  }\n>  \n> @@ -56,6 +58,38 @@ double CamHelper::Exposure(uint32_t exposure_lines) const\n>  \treturn exposure_lines * mode_.line_length / 1000.0;\n>  }\n>  \n> +uint32_t CamHelper::GetVBlanking(double &exposure, double minFrameDuration,\n> +\t\t\t\t double maxFrameDuration) const\n> +{\n> +\tuint32_t frameLengthMin, frameLengthMax, vblank;\n> +\tuint32_t exposureLines = ExposureLines(exposure);\n> +\n> +\tassert(initialized_);\n> +\t/*\n> +\t * Clip frame length by the frame duration range and the maximum allowable\n> +\t * value in the sensor, given by maxFrameLength_.\n> +\t */\n> +\tframeLengthMin = std::min<uint32_t>(1e3 * minFrameDuration / mode_.line_length,\n> +\t\t\t\t\t    maxFrameLength_);\n> +\tframeLengthMax = std::min<uint32_t>(1e3 * maxFrameDuration / mode_.line_length,\n> +\t\t\t\t\t    maxFrameLength_);\n\nNot something to be addressed as part of this series, but as we divide\nand multiply by 1000 in various places, should we specify all duration\ncontrols in nanoseconds instead of microseconds ?\n\n> +\t/*\n> +\t * Limit the exposure to the maximum frame duration requested, and\n> +\t * re-calculate if it has been clipped.\n> +\t */\n> +\texposureLines = std::min(frameLengthMax - frameIntegrationDiff_,\n> +\t\t\t\t exposureLines);\n> +\texposure = Exposure(exposureLines);\n> +\n> +\t/* Limit the vblank to the range allowed by the frame length limits. */\n> +\tvblank = std::max<uint32_t>(mode_.height, exposureLines) -\n> +\t\t mode_.height + frameIntegrationDiff_;\n\nIf the exposure time is shorter than the frame height, do we need the\nframeIntegrationDiff_ margin ? 
Assuming a sensor that would have a\nminimum vblank of 0 and a frameIntegrationDiff_ of 40, if height was\n1080 and exposureLines 500, there should be no need to set vblank to 40,\nright ?\n\n> +\tvblank = std::max(vblank, frameLengthMin - mode_.height);\n> +\tvblank = std::min(vblank, frameLengthMax - mode_.height);\n\nYou can use\n\n\tvblank = utils::clamp(vblank, frameLengthMin - mode_.height,\n\t\t\t      frameLengthMax - mode_.height);\n\n> +\n> +\treturn vblank;\n> +}\n> +\n>  void CamHelper::SetCameraMode(const CameraMode &mode)\n>  {\n>  \tmode_ = mode;\n> diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\n> index 0c8aa29a..fdcdbddb 100644\n> --- a/src/ipa/raspberrypi/cam_helper.hpp\n> +++ b/src/ipa/raspberrypi/cam_helper.hpp\n> @@ -68,12 +68,15 @@ class CamHelper\n>  {\n>  public:\n>  \tstatic CamHelper *Create(std::string const &cam_name);\n> -\tCamHelper(MdParser *parser);\n> +\tCamHelper(MdParser *parser, unsigned int maxFrameLength,\n> +\t\t  unsigned int frameIntegrationDiff);\n>  \tvirtual ~CamHelper();\n>  \tvoid SetCameraMode(const CameraMode &mode);\n>  \tMdParser &Parser() const { return *parser_; }\n>  \tuint32_t ExposureLines(double exposure_us) const;\n>  \tdouble Exposure(uint32_t exposure_lines) const; // in us\n> +\tvirtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,\n> +\t\t\t\t      double maxFrameDuration) const;\n\nDoes this function need to be virtual ?\n\n>  \tvirtual uint32_t GainCode(double gain) const = 0;\n>  \tvirtual double Gain(uint32_t gain_code) const = 0;\n>  \tvirtual void GetDelays(int &exposure_delay, int &gain_delay) const;\n> @@ -83,10 +86,20 @@ public:\n>  \tvirtual unsigned int MistrustFramesStartup() const;\n>  \tvirtual unsigned int MistrustFramesModeSwitch() const;\n>  \tvirtual CamTransform GetOrientation() const;\n> +\n>  protected:\n>  \tMdParser *parser_;\n>  \tCameraMode mode_;\n> +\n> +private:\n>  \tbool initialized_;\n> +\t/* Largest possible frame 
length, in units of lines. */\n> +\tunsigned int maxFrameLength_;\n> +\t/*\n> +\t * Smallest difference between the frame length and integration time,\n> +\t * in units of lines.\n> +\t */\n> +\tunsigned int frameIntegrationDiff_;\n>  };\n>  \n>  // This is for registering camera helpers with the system, so that the\n> diff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> index 35c6597c..fd95a31a 100644\n> --- a/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> +++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> @@ -50,13 +50,22 @@ public:\n>  \tunsigned int MistrustFramesModeSwitch() const override;\n>  \tbool SensorEmbeddedDataPresent() const override;\n>  \tCamTransform GetOrientation() const override;\n> +\n> +private:\n> +\t/*\n> +\t * Smallest difference between the frame length and integration time,\n> +\t * in units of lines.\n> +\t */\n> +\tstatic constexpr int frameIntegrationDiff = 4;\n> +\t/* Largest possible frame length, in units of lines. 
*/\n> +\tstatic constexpr int maxFrameLength = 0xffff;\n>  };\n>  \n>  CamHelperImx219::CamHelperImx219()\n>  #if ENABLE_EMBEDDED_DATA\n>  \t: CamHelper(new MdParserImx219())\n\nShouldn't this be updated too ?\n\n>  #else\n> -\t: CamHelper(new MdParserRPi())\n> +\t: CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n>  #endif\n>  {\n>  }\n> diff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> index 69544456..4a1cab76 100644\n> --- a/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> +++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> @@ -39,10 +39,19 @@ public:\n>  \tdouble Gain(uint32_t gain_code) const override;\n>  \tbool SensorEmbeddedDataPresent() const override;\n>  \tCamTransform GetOrientation() const override;\n> +\n> +private:\n> +\t/*\n> +\t * Smallest difference between the frame length and integration time,\n> +\t * in units of lines.\n> +\t */\n> +\tstatic constexpr int frameIntegrationDiff = 22;\n> +\t/* Largest possible frame length, in units of lines. */\n> +\tstatic constexpr int maxFrameLength = 0xffdc;\n>  };\n>  \n>  CamHelperImx477::CamHelperImx477()\n> -\t: CamHelper(new MdParserImx477())\n> +\t: CamHelper(new MdParserImx477(), maxFrameLength, frameIntegrationDiff)\n>  {\n>  }\n>  \n> diff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> index 3dbcb164..d814fa90 100644\n> --- a/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> +++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> @@ -22,6 +22,15 @@ public:\n>  \tunsigned int HideFramesModeSwitch() const override;\n>  \tunsigned int MistrustFramesStartup() const override;\n>  \tunsigned int MistrustFramesModeSwitch() const override;\n> +\n> +private:\n> +\t/*\n> +\t * Smallest difference between the frame length and integration time,\n> +\t * in units of lines.\n> +\t */\n> +\tstatic constexpr int frameIntegrationDiff = 4;\n> +\t/* Largest possible frame length, in units of lines. 
*/\n> +\tstatic constexpr int maxFrameLength = 0xffff;\n>  };\n>  \n>  /*\n> @@ -30,7 +39,7 @@ public:\n>   */\n>  \n>  CamHelperOv5647::CamHelperOv5647()\n> -\t: CamHelper(new MdParserRPi())\n> +\t: CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n>  {\n>  }\n>  \n> diff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp\n> index 3bcc0815..1af5e74a 100644\n> --- a/src/ipa/raspberrypi/raspberrypi.cpp\n> +++ b/src/ipa/raspberrypi/raspberrypi.cpp\n> @@ -53,6 +53,8 @@ namespace libcamera {\n>  /* Configure the sensor with these values initially. */\n>  #define DEFAULT_ANALOGUE_GAIN 1.0\n>  #define DEFAULT_EXPOSURE_TIME 20000\n> +#define DEFAULT_MIN_FRAME_DURATION (1e6 / 30.0)\n> +#define DEFAULT_MAX_FRAME_DURATION (1e6 / 0.01)\n>  \n>  LOG_DEFINE_CATEGORY(IPARPI)\n>  \n> @@ -136,6 +138,9 @@ private:\n>  \t/* LS table allocation passed in from the pipeline handler. */\n>  \tuint32_t lsTableHandle_;\n>  \tvoid *lsTable_;\n> +\n> +\t/* Frame duration (1/fps) given in microseconds. */\n> +\tdouble minFrameDuration_, maxFrameDuration_;\n\nShouldn't these be float, or uint32_t depending on the outcome of the\ndiscussion for patch 1/3 ?\n\n>  };\n>  \n>  int IPARPi::init(const IPASettings &settings)\n> @@ -252,13 +257,20 @@ void IPARPi::configure(const CameraSensorInfo &sensorInfo,\n>  \t\tcontroller_.Initialise();\n>  \t\tcontrollerInit_ = true;\n>  \n> -\t\t/* Calculate initial values for gain and exposure. 
*/\n> +\t\t/* Calculate initial values for gain, vblank, and exposure */\n> +\t\tminFrameDuration_ = DEFAULT_MIN_FRAME_DURATION;\n> +\t\tmaxFrameDuration_ = DEFAULT_MAX_FRAME_DURATION;\n> +\n> +\t\tdouble exposure = DEFAULT_EXPOSURE_TIME;\n> +\t\tint32_t vblank = helper_->GetVBlanking(exposure, minFrameDuration_,\n> +\t\t\t\t\t\t       maxFrameDuration_);\n> +\t\tint32_t exposure_lines = helper_->ExposureLines(exposure);\n>  \t\tint32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN);\n> -\t\tint32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME);\n>  \n>  \t\tControlList ctrls(unicam_ctrls_);\n> -\t\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> +\t\tctrls.set(V4L2_CID_VBLANK, vblank);\n>  \t\tctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> +\t\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n>  \n>  \t\tIPAOperationData op;\n>  \t\top.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> @@ -630,6 +642,17 @@ void IPARPi::queueRequest(const ControlList &controls)\n>  \t\t\tbreak;\n>  \t\t}\n>  \n> +\t\tcase controls::FRAME_DURATIONS: {\n> +\t\t\tauto frameDurations = ctrl.second.get<Span<const float>>();\n> +\n> +\t\t\t/* This will be applied once AGC recalculations occur. */\n> +\t\t\tminFrameDuration_ = frameDurations[0];\n> +\t\t\tmaxFrameDuration_ = frameDurations[1];\n\nShould the values be adjusted based on the sensor capabilities ?\n\n> +\t\t\tlibcameraMetadata_.set(controls::FrameDurations,\n> +\t\t\t\t\t       { frameDurations[0], frameDurations[1] });\n\nHmmm... From an application point of view, wouldn't it be more useful to\nknow what frame duration was actually used for the current frame ? You\ndon't have to implement this as part of this series, but I could imagine\na FrameDuration metadata control being useful for that purpose.\nFrameDurations would be set by applications, and FrameDuration would be\nreported through metadata. 
(On a side note, would FrameDurationLimits be\na better name than FrameDuration then ?)\n\n> +\t\t\tbreak;\n> +\t\t}\n> +\n>  \t\tdefault:\n>  \t\t\tLOG(IPARPI, Warning)\n>  \t\t\t\t<< \"Ctrl \" << controls::controls.at(ctrl.first)->name()\n> @@ -795,7 +818,12 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n>  \top.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n>  \n>  \tint32_t gain_code = helper_->GainCode(agcStatus->analogue_gain);\n> -\tint32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time);\n> +\n> +\t/* GetVBlanking might clip exposure time to the fps limits. */\n> +\tdouble exposure = agcStatus->shutter_time;\n> +\tint32_t vblanking = helper_->GetVBlanking(exposure, minFrameDuration_,\n> +\t\t\t\t\t\t  maxFrameDuration_);\n\nI had completely missed, when reviewing the GetVBlanking()\nimplementation above, that the first parameter was a reference and was\nused to return a value. We tend in libcamera to use const references for\ninput parameters, and pointers for output or in/out parameters to make\nthat clearer. It's not ideal though, as a pointer can be null, and using\nreferences when a parameter should not be null is nicer. 
No need to\nchange the code here, but maybe a comment on top of GetVBlanking() would\nbe useful to point out that the exposure time can be adjusted.\n\n> +\tint32_t exposure_lines = helper_->ExposureLines(exposure);\n>  \n>  \tif (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) {\n>  \t\tLOG(IPARPI, Error) << \"Can't find analogue gain control\";\n> @@ -807,14 +835,20 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n>  \t\treturn;\n>  \t}\n>  \n> -\tLOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << agcStatus->shutter_time\n> -\t\t\t   << \" (Shutter lines: \" << exposure_lines << \") Gain: \"\n> +\tLOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << exposure\n> +\t\t\t   << \" (Shutter lines: \" << exposure_lines << \", AGC requested \"\n> +\t\t\t   << agcStatus->shutter_time << \") Gain: \"\n>  \t\t\t   << agcStatus->analogue_gain << \" (Gain Code: \"\n>  \t\t\t   << gain_code << \")\";\n\nCould the AGC algorithm get confused if it requests a certain exposure,\nand a smaller value is set in the sensor ? Will it keep trying to push\nthe exposure up to increase the luminance of the scene without\nnoticing it's pointless ?\n\n>  \n>  \tControlList ctrls(unicam_ctrls_);\n> -\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> +\t/*\n> +\t * VBLANK must be set before EXPOSURE as the former will adjust the\n> +\t * limits of the latter control.\n> +\t */\n\nThis will be nasty on the V4L2 side. We write multiple controls with a\nsingle VIDIOC_S_EXT_CTRLS call, and the V4L2 control framework will\nfirst clamp all control values to the limits before setting any control.\nThe limits for the exposure time will thus not be adjusted before its\nvalue is clamped :-( I'm not sure how to best solve this. Ideally the\nproblem should be fixed in the control framework, but I doubt that will\nhappen. We need to at least ask Hans to see what options we have (or\nrather, what options we don't have). 
One easy hack may be to report a\nlarge range for the exposure time limits in the camera sensor drivers,\nand adjust the value in .s_ctrl() instead of relying on the control\nframework to do that for us.\n\n> +\tctrls.set(V4L2_CID_VBLANK, vblanking);\n>  \tctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> +\tctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n>  \top.controls.push_back(ctrls);\n>  \tqueueFrameAction.emit(0, op);\n>  }\n> diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> index 21a1d7f7..948290e2 100644\n> --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> @@ -1159,7 +1159,9 @@ void RPiCameraData::queueFrameAction(unsigned int frame, const IPAOperationData\n>  \t\tif (!staggeredCtrl_) {\n>  \t\t\tstaggeredCtrl_.init(unicam_[Unicam::Image].dev(),\n>  \t\t\t\t\t    { { V4L2_CID_ANALOGUE_GAIN, action.data[0] },\n> +\t\t\t\t\t      { V4L2_CID_VBLANK, action.data[1] },\n>  \t\t\t\t\t      { V4L2_CID_EXPOSURE, action.data[1] } });\n\nSo the initial value of the exposure time and the vertical blanking are\nalways identical ?\n\n> +\n>  \t\t\tsensorMetadata_ = action.data[2];\n>  \t\t}\n>","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id B0816603D0\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 25 May 2020 01:16:21 +0200 (CEST)","from pendragon.ideasonboard.com (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id EB7F224D;\n\tMon, 25 May 2020 01:16:20 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"eciZnKJt\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; 
c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1590362181;\n\tbh=rXAQmVbBi3RzOKRRerKRJ6KVpjh+jy7OAIfbiMkF8oA=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=eciZnKJt8SyQmLogsSNzrgg6qkjo7QhNbg6Hc5uVpQmZgMmmx4W9fZcPIXjJtsAji\n\tLFrfdPLTlIL27gjo4Y4gWDzUmoE8fTPn3Vi4uAZE6Ff9ReK1b5MCUgMLcmcT57R7jP\n\t1TKtN2h4gKJAplSm3123Ns2k+TDqizhgPZ/zkywA=","Date":"Mon, 25 May 2020 02:16:08 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Naushir Patuck <naush@raspberrypi.com>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20200524231608.GB6006@pendragon.ideasonboard.com>","References":"<20200513091120.1653-1-naush@raspberrypi.com>\n\t<20200513091120.1653-3-naush@raspberrypi.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","In-Reply-To":"<20200513091120.1653-3-naush@raspberrypi.com>","Subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Sun, 24 May 2020 23:16:22 -0000"}},{"id":4906,"web_url":"https://patchwork.libcamera.org/comment/4906/","msgid":"<CAEmqJPrX6HpNCtO+0Wyrj=rNAogujaVQHcG=u0fTOZpiucYdfw@mail.gmail.com>","date":"2020-05-26T09:29:12","subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor 
vblanking","submitter":{"id":34,"url":"https://patchwork.libcamera.org/api/people/34/","name":"Naushir Patuck","email":"naush@raspberrypi.com"},"content":"Hi Laurent,\n\nThank you for the comments.\n\nOn Mon, 25 May 2020 at 00:16, Laurent Pinchart\n<laurent.pinchart@ideasonboard.com> wrote:\n>\n> Hi Naush,\n>\n> Thank you for the patch.\n>\n> On Wed, May 13, 2020 at 10:11:19AM +0100, Naushir Patuck wrote:\n> > Add support for setting V4L2_CID_VBLANK appropriately when setting\n> > V4L2_CID_EXPOSURE. This will allow adaptive framerates during\n> > viewfinder use cases (e.g. when the exposure time goes above 33ms, we\n> > can reduce the framerate to lower than 30fps).\n> >\n> > The minimum and maximum frame durations are provided via libcamera\n> > controls, and will prioritise exposure time limits over any AGC request.\n>\n> General question about the AGC and frame duration selection behaviour.\n> When the scene illumination drops and the applications allows the frame\n> rate to be lowered, is it best to adapt the frame duration continuously,\n> or jump from, let's say, 30fps to 15fps and stay at 15fps ? I could\n> imagine adaptative frame durations to be annoying to handle for codecs\n> or displays, but I probably lack a good view of the details.\n\nTraditionally, we had always allowed the fps to gradually drop with\nexposure during viewfinder use cases.  This would provide a better\nuser experience (in my opinion) over a step change to the fps).  It\nalso ensures we run with the maximum possible framerate whenever we\ncan.  However, during video encoding, we have always left the fps at a\nfixed rate, thereby limiting the maximum exposure time allowed.  
I\nguess it is essentially up to the AGC algorithm to decide how it wants\nto handle it depending on the use case.\n\n> > V4L2_CID_VBLANK is controlled through the staggered writer, just like\n> > the exposure and gain controls.\n> >\n> > Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> > ---\n> >  include/ipa/raspberrypi.h                     |  1 +\n> >  src/ipa/raspberrypi/cam_helper.cpp            | 38 ++++++++++++++-\n> >  src/ipa/raspberrypi/cam_helper.hpp            | 15 +++++-\n> >  src/ipa/raspberrypi/cam_helper_imx219.cpp     | 11 ++++-\n> >  src/ipa/raspberrypi/cam_helper_imx477.cpp     | 11 ++++-\n> >  src/ipa/raspberrypi/cam_helper_ov5647.cpp     | 11 ++++-\n> >  src/ipa/raspberrypi/raspberrypi.cpp           | 48 ++++++++++++++++---\n> >  .../pipeline/raspberrypi/raspberrypi.cpp      |  2 +\n> >  8 files changed, 124 insertions(+), 13 deletions(-)\n> >\n> > diff --git a/include/ipa/raspberrypi.h b/include/ipa/raspberrypi.h\n> > index c109469e..74954db2 100644\n> > --- a/include/ipa/raspberrypi.h\n> > +++ b/include/ipa/raspberrypi.h\n> > @@ -51,6 +51,7 @@ static const ControlInfoMap RPiControls = {\n> >       { &controls::Brightness, ControlInfo(-1.0f, 1.0f) },\n> >       { &controls::Contrast, ControlInfo(0.0f, 32.0f) },\n> >       { &controls::Saturation, ControlInfo(0.0f, 32.0f) },\n> > +     { &controls::FrameDurations, ControlInfo(1.0e3f, 1.0e9f) }\n> >  };\n> >\n> >  } /* namespace libcamera */\n> > diff --git a/src/ipa/raspberrypi/cam_helper.cpp b/src/ipa/raspberrypi/cam_helper.cpp\n> > index 7f05d2c6..06732241 100644\n> > --- a/src/ipa/raspberrypi/cam_helper.cpp\n> > +++ b/src/ipa/raspberrypi/cam_helper.cpp\n> > @@ -34,8 +34,10 @@ CamHelper *CamHelper::Create(std::string const &cam_name)\n> >       return nullptr;\n> >  }\n> >\n> > -CamHelper::CamHelper(MdParser *parser)\n> > -     : parser_(parser), initialized_(false)\n> > +CamHelper::CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > +                  unsigned int 
frameIntegrationDiff)\n> > +     : parser_(parser), initialized_(false), maxFrameLength_(maxFrameLength),\n> > +       frameIntegrationDiff_(frameIntegrationDiff)\n> >  {\n> >  }\n> >\n> > @@ -56,6 +58,38 @@ double CamHelper::Exposure(uint32_t exposure_lines) const\n> >       return exposure_lines * mode_.line_length / 1000.0;\n> >  }\n> >\n> > +uint32_t CamHelper::GetVBlanking(double &exposure, double minFrameDuration,\n> > +                              double maxFrameDuration) const\n> > +{\n> > +     uint32_t frameLengthMin, frameLengthMax, vblank;\n> > +     uint32_t exposureLines = ExposureLines(exposure);\n> > +\n> > +     assert(initialized_);\n> > +     /*\n> > +      * Clip frame length by the frame duration range and the maximum allowable\n> > +      * value in the sensor, given by maxFrameLength_.\n> > +      */\n> > +     frameLengthMin = std::min<uint32_t>(1e3 * minFrameDuration / mode_.line_length,\n> > +                                         maxFrameLength_);\n> > +     frameLengthMax = std::min<uint32_t>(1e3 * maxFrameDuration / mode_.line_length,\n> > +                                         maxFrameLength_);\n>\n> Not something to be addressed as part of this series, but as we divide\n> and multiply by 1000 in various places, should we specify all duration\n> controls in nanoseconds instead of microseconds ?\n\nGood question :)  I think microseconds is more \"user friendly\", but I\ndo see the point that we convert to nanoseconds everywhere.  My\nsuggestion would be to leave it as microseconds for the user-facing\ncontrols.\n\n>\n> > +     /*\n> > +      * Limit the exposure to the maximum frame duration requested, and\n> > +      * re-calculate if it has been clipped.\n> > +      */\n> > +     exposureLines = std::min(frameLengthMax - frameIntegrationDiff_,\n> > +                              exposureLines);\n> > +     exposure = Exposure(exposureLines);\n> > +\n> > +     /* Limit the vblank to the range allowed by the frame length limits. 
*/\n> > +     vblank = std::max<uint32_t>(mode_.height, exposureLines) -\n> > +              mode_.height + frameIntegrationDiff_;\n>\n> If the exposure time is shorter than the frame height, do we need the\n> frameIntegrationDiff_ margin ? Assuming a sensor that would have a\n> minimum vblank of 0 and a frameIntegrationDiff_ of 40, if height was\n> 1080 and exposureLines 500, there should be no need to set vblank to 40,\n> right ?\n\nYes, you are correct.  Will fix this calculation.\n\n>\n> > +     vblank = std::max(vblank, frameLengthMin - mode_.height);\n> > +     vblank = std::min(vblank, frameLengthMax - mode_.height);\n>\n> You can use\n>\n>         vblank = utils::clamp(vblank, frameLengthMin - mode_.height,\n>                               frameLengthMax - mode_.height);\n\nAck.\n\n> > +\n> > +     return vblank;\n> > +}\n> > +\n> >  void CamHelper::SetCameraMode(const CameraMode &mode)\n> >  {\n> >       mode_ = mode;\n> > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\n> > index 0c8aa29a..fdcdbddb 100644\n> > --- a/src/ipa/raspberrypi/cam_helper.hpp\n> > +++ b/src/ipa/raspberrypi/cam_helper.hpp\n> > @@ -68,12 +68,15 @@ class CamHelper\n> >  {\n> >  public:\n> >       static CamHelper *Create(std::string const &cam_name);\n> > -     CamHelper(MdParser *parser);\n> > +     CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > +               unsigned int frameIntegrationDiff);\n> >       virtual ~CamHelper();\n> >       void SetCameraMode(const CameraMode &mode);\n> >       MdParser &Parser() const { return *parser_; }\n> >       uint32_t ExposureLines(double exposure_us) const;\n> >       double Exposure(uint32_t exposure_lines) const; // in us\n> > +     virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,\n> > +                                   double maxFrameDuration) const;\n>\n> Does this function need to be virtual ?\n\nI'm not sure :)\n\nI wanted to be flexible enough that if a 
sensor wanted to do something\na bit different from the norm, it could override the base\nimplementation.  I don't have a particular example in mind though.\n\n>\n> >       virtual uint32_t GainCode(double gain) const = 0;\n> >       virtual double Gain(uint32_t gain_code) const = 0;\n> >       virtual void GetDelays(int &exposure_delay, int &gain_delay) const;\n> > @@ -83,10 +86,20 @@ public:\n> >       virtual unsigned int MistrustFramesStartup() const;\n> >       virtual unsigned int MistrustFramesModeSwitch() const;\n> >       virtual CamTransform GetOrientation() const;\n> > +\n> >  protected:\n> >       MdParser *parser_;\n> >       CameraMode mode_;\n> > +\n> > +private:\n> >       bool initialized_;\n> > +     /* Largest possible frame length, in units of lines. */\n> > +     unsigned int maxFrameLength_;\n> > +     /*\n> > +      * Smallest difference between the frame length and integration time,\n> > +      * in units of lines.\n> > +      */\n> > +     unsigned int frameIntegrationDiff_;\n> >  };\n> >\n> >  // This is for registering camera helpers with the system, so that the\n> > diff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > index 35c6597c..fd95a31a 100644\n> > --- a/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > +++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > @@ -50,13 +50,22 @@ public:\n> >       unsigned int MistrustFramesModeSwitch() const override;\n> >       bool SensorEmbeddedDataPresent() const override;\n> >       CamTransform GetOrientation() const override;\n> > +\n> > +private:\n> > +     /*\n> > +      * Smallest difference between the frame length and integration time,\n> > +      * in units of lines.\n> > +      */\n> > +     static constexpr int frameIntegrationDiff = 4;\n> > +     /* Largest possible frame length, in units of lines. 
*/\n> > +     static constexpr int maxFrameLength = 0xffff;\n> >  };\n> >\n> >  CamHelperImx219::CamHelperImx219()\n> >  #if ENABLE_EMBEDDED_DATA\n> >       : CamHelper(new MdParserImx219())\n>\n> Shouldn't this be updated too ?\n\nAck.\n\n>\n> >  #else\n> > -     : CamHelper(new MdParserRPi())\n> > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> >  #endif\n> >  {\n> >  }\n> > diff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > index 69544456..4a1cab76 100644\n> > --- a/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > +++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > @@ -39,10 +39,19 @@ public:\n> >       double Gain(uint32_t gain_code) const override;\n> >       bool SensorEmbeddedDataPresent() const override;\n> >       CamTransform GetOrientation() const override;\n> > +\n> > +private:\n> > +     /*\n> > +      * Smallest difference between the frame length and integration time,\n> > +      * in units of lines.\n> > +      */\n> > +     static constexpr int frameIntegrationDiff = 22;\n> > +     /* Largest possible frame length, in units of lines. 
*/\n> > +     static constexpr int maxFrameLength = 0xffdc;\n> >  };\n> >\n> >  CamHelperImx477::CamHelperImx477()\n> > -     : CamHelper(new MdParserImx477())\n> > +     : CamHelper(new MdParserImx477(), maxFrameLength, frameIntegrationDiff)\n> >  {\n> >  }\n> >\n> > diff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > index 3dbcb164..d814fa90 100644\n> > --- a/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > +++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > @@ -22,6 +22,15 @@ public:\n> >       unsigned int HideFramesModeSwitch() const override;\n> >       unsigned int MistrustFramesStartup() const override;\n> >       unsigned int MistrustFramesModeSwitch() const override;\n> > +\n> > +private:\n> > +     /*\n> > +      * Smallest difference between the frame length and integration time,\n> > +      * in units of lines.\n> > +      */\n> > +     static constexpr int frameIntegrationDiff = 4;\n> > +     /* Largest possible frame length, in units of lines. */\n> > +     static constexpr int maxFrameLength = 0xffff;\n> >  };\n> >\n> >  /*\n> > @@ -30,7 +39,7 @@ public:\n> >   */\n> >\n> >  CamHelperOv5647::CamHelperOv5647()\n> > -     : CamHelper(new MdParserRPi())\n> > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> >  {\n> >  }\n> >\n> > diff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp\n> > index 3bcc0815..1af5e74a 100644\n> > --- a/src/ipa/raspberrypi/raspberrypi.cpp\n> > +++ b/src/ipa/raspberrypi/raspberrypi.cpp\n> > @@ -53,6 +53,8 @@ namespace libcamera {\n> >  /* Configure the sensor with these values initially. 
*/\n> >  #define DEFAULT_ANALOGUE_GAIN 1.0\n> >  #define DEFAULT_EXPOSURE_TIME 20000\n> > +#define DEFAULT_MIN_FRAME_DURATION (1e6 / 30.0)\n> > +#define DEFAULT_MAX_FRAME_DURATION (1e6 / 0.01)\n> >\n> >  LOG_DEFINE_CATEGORY(IPARPI)\n> >\n> > @@ -136,6 +138,9 @@ private:\n> >       /* LS table allocation passed in from the pipeline handler. */\n> >       uint32_t lsTableHandle_;\n> >       void *lsTable_;\n> > +\n> > +     /* Frame duration (1/fps) given in microseconds. */\n> > +     double minFrameDuration_, maxFrameDuration_;\n>\n> Shouldn't these be float, or uint32_t depending on the outcome of the\n> discussion for patch 1/3 ?\n\nAck.\n\n>\n> >  };\n> >\n> >  int IPARPi::init(const IPASettings &settings)\n> > @@ -252,13 +257,20 @@ void IPARPi::configure(const CameraSensorInfo &sensorInfo,\n> >               controller_.Initialise();\n> >               controllerInit_ = true;\n> >\n> > -             /* Calculate initial values for gain and exposure. */\n> > +             /* Calculate initial values for gain, vblank, and exposure */\n> > +             minFrameDuration_ = DEFAULT_MIN_FRAME_DURATION;\n> > +             maxFrameDuration_ = DEFAULT_MAX_FRAME_DURATION;\n> > +\n> > +             double exposure = DEFAULT_EXPOSURE_TIME;\n> > +             int32_t vblank = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > +                                                    maxFrameDuration_);\n> > +             int32_t exposure_lines = helper_->ExposureLines(exposure);\n> >               int32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN);\n> > -             int32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME);\n> >\n> >               ControlList ctrls(unicam_ctrls_);\n> > -             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > +             ctrls.set(V4L2_CID_VBLANK, vblank);\n> >               ctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> > +             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> >\n> >               
IPAOperationData op;\n> >               op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> > @@ -630,6 +642,17 @@ void IPARPi::queueRequest(const ControlList &controls)\n> >                       break;\n> >               }\n> >\n> > +             case controls::FRAME_DURATIONS: {\n> > +                     auto frameDurations = ctrl.second.get<Span<const float>>();\n> > +\n> > +                     /* This will be applied once AGC recalculations occur. */\n> > +                     minFrameDuration_ = frameDurations[0];\n> > +                     maxFrameDuration_ = frameDurations[1];\n>\n> Should the values be adjusted based on the sensor capabilities ?\n\nThey do, but separately in the CamHelper::GetVBlanking() call.  Unless\nI have misunderstood the comment?\n\n>\n> > +                     libcameraMetadata_.set(controls::FrameDurations,\n> > +                                            { frameDurations[0], frameDurations[1] });\n>\n> Hmmm... From an application point of view, wouldn't it be more useful to\n> know what frame duration was actually used for the current frame ? You\n> don't have to implement this as part of this series, but I could imagine\n> a FrameDuration metadata control being useful for that purpose.\n> FrameDurations would be set by applications, and FrameDuration would be\n> reported through metadata. (On a side note, would FrameDurationLimits be\n> a better name than FrameDuration then ?)\n\nYes, good point.  
I can update this to reflect the actual frame durations used.\nHappy to rename FrameDuration -> FrameDurationLimits.\n\n>\n> > +                     break;\n> > +             }\n> > +\n> >               default:\n> >                       LOG(IPARPI, Warning)\n> >                               << \"Ctrl \" << controls::controls.at(ctrl.first)->name()\n> > @@ -795,7 +818,12 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> >       op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> >\n> >       int32_t gain_code = helper_->GainCode(agcStatus->analogue_gain);\n> > -     int32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time);\n> > +\n> > +     /* GetVBlanking might clip exposure time to the fps limits. */\n> > +     double exposure = agcStatus->shutter_time;\n> > +     int32_t vblanking = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > +                                               maxFrameDuration_);\n>\n> I had completely missed, when reviewing the GetVBlanking()\n> implementation above, that the first parameter was a reference and was\n> used to return a value. We tend in libcamera to use const references for\n> input parameters, and pointers for output or in/out parameters to make\n> that clearer. It's not ideal though, as a pointer can be null, and using\n> references when a parameter should not be null is nicer. 
No need to\n> change the code here, but maybe a comment on top of GetVBlanking() would\n> be useful to point out that the exposure time can be adjusted.\n\nAck.\n\n>\n> > +     int32_t exposure_lines = helper_->ExposureLines(exposure);\n> >\n> >       if (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) {\n> >               LOG(IPARPI, Error) << \"Can't find analogue gain control\";\n> > @@ -807,14 +835,20 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> >               return;\n> >       }\n> >\n> > -     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << agcStatus->shutter_time\n> > -                        << \" (Shutter lines: \" << exposure_lines << \") Gain: \"\n> > +     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << exposure\n> > +                        << \" (Shutter lines: \" << exposure_lines << \", AGC requested \"\n> > +                        << agcStatus->shutter_time << \") Gain: \"\n> >                          << agcStatus->analogue_gain << \" (Gain Code: \"\n> >                          << gain_code << \")\";\n>\n> Could the AGC algorithm get confused if it requests a certain exposure,\n> and a smaller value is set in the sensor ? Will it keep trying to push\n> the exposure up to increase the luminance of the scene without\n> noticing it's pointless ?\n\nYes, it can, and yes it will :)\nHowever, there is no harm done.  As long as the AGC algorithm gets\ngiven the actual exposure time / gain used for the frame - rather than\nwhat it requested - it will fill up the rest of the exposure with\ndigital gain to get the expected brightness.  This is why it is\nimportant for CamHelper::GetVBlanking() to return the exposure\nthat would actually be used.\n\nDavid and I did discuss passing the framerate limits into the AGC\nalgorithm, but we felt it added complexity for no real gain in\nfunctionality.  The AGC algorithm would not behave any differently in\neither case.  
What could help a user who wanted a very specific\nbehavior with framerate limits would be to set up a suitable exposure\nprofile (division of shutter speed and gain) in the tuning file.\n\n>\n> >\n> >       ControlList ctrls(unicam_ctrls_);\n> > -     ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > +     /*\n> > +      * VBLANK must be set before EXPOSURE as the former will adjust the\n> > +      * limits of the latter control.\n> > +      */\n>\n> This will be nasty on the V4L2 side. We write multiple controls with a\n> single VIDIOC_S_EXT_CTRLS call, and the V4L2 control framework will\n> first clamp all control values to the limits before setting any control.\n> The limits for the exposure time will thus not be adjusted before its\n> value is clamped :-( I'm not sure how to best solve this. Ideally the\n> problem should be fixed in the control framework, but I doubt that will\n> happen. We need to at least ask Hans to see what options we have (or\n> rather, what options we don't have). One easy hack may be to report a\n> large range for the exposure time limits in the camera sensor drivers,\n> and adjust the value in .s_ctrl() instead of relying on the control\n> framework to do that for us.\n\nAck.  This needs a bit more thinking.\n\nIf I were to separate out the VBLANK set ctrl from the EXPOSURE set\nctrl would that help here?  
Note that I cannot easily do that right\nnow as the staggered writer only has an interface to set a single list\nof ctrls.\n\nI suppose the current behavior means that we get 1 frame of exposure\nthat is possibly wrong or clipped before the vblank has updated the\nctrl limits in the sensor driver.\n\n> > +     ctrls.set(V4L2_CID_VBLANK, vblanking);\n> >       ctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> > +     ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> >       op.controls.push_back(ctrls);\n> >       queueFrameAction.emit(0, op);\n> >  }\n> > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > index 21a1d7f7..948290e2 100644\n> > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > @@ -1159,7 +1159,9 @@ void RPiCameraData::queueFrameAction(unsigned int frame, const IPAOperationData\n> >               if (!staggeredCtrl_) {\n> >                       staggeredCtrl_.init(unicam_[Unicam::Image].dev(),\n> >                                           { { V4L2_CID_ANALOGUE_GAIN, action.data[0] },\n> > +                                           { V4L2_CID_VBLANK, action.data[1] },\n> >                                             { V4L2_CID_EXPOSURE, action.data[1] } });\n>\n> So the initial value of the exposure time and the vertical blanking are\n> always identical ?\n\nThis is setting up the delay in applying the ctrl for the staggered\nwriter.  
So I would expect EXPOSURE and VBLANK to have the same delay\nin the sensor.\n\n>\n> > +\n> >                       sensorMetadata_ = action.data[2];\n> >               }\n> >\n>\n> --\n> Regards,\n>\n> Laurent Pinchart\n\nThanks,\nNaush","headers":{"Return-Path":"<naush@raspberrypi.com>","Received":["from mail-lf1-x144.google.com (mail-lf1-x144.google.com\n\t[IPv6:2a00:1450:4864:20::144])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 9954D603D1\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue, 26 May 2020 11:29:29 +0200 (CEST)","by mail-lf1-x144.google.com with SMTP id c12so11896886lfc.10\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue, 26 May 2020 02:29:29 -0700 (PDT)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected) header.d=raspberrypi.com\n\theader.i=@raspberrypi.com\n\theader.b=\"a0hG95qv\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=raspberrypi.com; s=google;\n\th=mime-version:references:in-reply-to:from:date:message-id:subject:to\n\t:cc; bh=VHpvsmQLeTh5iTxCrN0qeAZ1ZUPvXLBrRdZ6BOGIQjQ=;\n\tb=a0hG95qvbJG2Vs0yRY9KQpokumWzFSA5h5nrANG8HHKyZvHTfVA5JcQPXTh9Gci8t+\n\t5ZLtl6FAvrMH6qoBVb5aGMnBkmSMRPqgSSJ3y5PPDIEE1XvbYDG71w5ip2/FPK4tUscE\n\tj0bmb4pD56jN9d8+KqCzd1si4USQBmWZ3k1y1ALcxEhmf14NSass2xiTXa6W2yg7dTLI\n\tW3BNNkrw6LEcT0yNccRakDYMtvcT1FRBC/zIy+mk7I8IRwv6t6KhzLkQXxb+4LMtLY8i\n\tdDVj6iCAXXsQs7dZsI+CurXEZJVw6aGCfarC7N8iIpKTZsHsrKK70LinYtzh6HYaBw4S\n\tGUnQ==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; 
s=20161025;\n\th=x-gm-message-state:mime-version:references:in-reply-to:from:date\n\t:message-id:subject:to:cc;\n\tbh=VHpvsmQLeTh5iTxCrN0qeAZ1ZUPvXLBrRdZ6BOGIQjQ=;\n\tb=IKgNpZoZPguDHVygxWvUnCyUpidhbXRQ4TQx3PGMggGwTb9WbWuqAcWML/wrRolm0Y\n\tJLqhmVUuHO+TmFv68NiZ5JJ3LGexlpL69ZbClIi1yGqubuLZbuv4H5Qurl4dObJrA6s9\n\tL8YhMdf6vAGiuvvEb/xk7RFjz3xl6QqwJ43Xa1LItvodCukF2d8BCls6iwNJJrkIfAhh\n\t9MCs/nhDqHWbpb2SB2oMcsgqmKdTdsKjqNJl3hPsbqmBkRJtU2XeJDlBLcs+mX7cS2QW\n\tbtDdMYNYTxmh6jHUCefs84Q+sxj7xr4suydlzPGgNQw+2k/3XnZo6REsSJDOeyPlG2V+\n\ttI2g==","X-Gm-Message-State":"AOAM531+vyDtHXwI8iSXZnw8CH+DPUAk82X6/dUq8Mluh59p6iziYbMq\n\tfx5qUCHPgcuNrblTCze7J7/6M3qIac0eXKJDXjc4+Q==","X-Google-Smtp-Source":"ABdhPJxB/qDXD/OurohUbXbyKmCAWp/Xhc+rlofNYPHDcljv7vVFQjRpf8HkDTL6xaB8fGxVikXpr6lsiveSKUQz/bs=","X-Received":"by 2002:a19:c8cb:: with SMTP id y194mr70402lff.89.1590485368525; \n\tTue, 26 May 2020 02:29:28 -0700 (PDT)","MIME-Version":"1.0","References":"<20200513091120.1653-1-naush@raspberrypi.com>\n\t<20200513091120.1653-3-naush@raspberrypi.com>\n\t<20200524231608.GB6006@pendragon.ideasonboard.com>","In-Reply-To":"<20200524231608.GB6006@pendragon.ideasonboard.com>","From":"Naushir Patuck <naush@raspberrypi.com>","Date":"Tue, 26 May 2020 10:29:12 +0100","Message-ID":"<CAEmqJPrX6HpNCtO+0Wyrj=rNAogujaVQHcG=u0fTOZpiucYdfw@mail.gmail.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Content-Type":"text/plain; charset=\"UTF-8\"","Subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor 
vblanking","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Tue, 26 May 2020 09:29:29 -0000"}},{"id":4918,"web_url":"https://patchwork.libcamera.org/comment/4918/","msgid":"<20200528013003.GC4670@pendragon.ideasonboard.com>","date":"2020-05-28T01:30:03","subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Naush,\n\nOn Tue, May 26, 2020 at 10:29:12AM +0100, Naushir Patuck wrote:\n> On Mon, 25 May 2020 at 00:16, Laurent Pinchart wrote:\n> > On Wed, May 13, 2020 at 10:11:19AM +0100, Naushir Patuck wrote:\n> > > Add support for setting V4L2_CID_VBLANK appropriately when setting\n> > > V4L2_CID_EXPOSURE. This will allow adaptive framerates during\n> > > viewfinder use cases (e.g. 
when the exposure time goes above 33ms, we\n> > > can reduce the framerate to lower than 30fps).\n> > >\n> > > The minimum and maximum frame durations are provided via libcamera\n> > > controls, and will prioritise exposure time limits over any AGC request.\n> >\n> > General question about the AGC and frame duration selection behaviour.\n> > When the scene illumination drops and the application allows the frame\n> > rate to be lowered, is it best to adapt the frame duration continuously,\n> > or jump from, let's say, 30fps to 15fps and stay at 15fps ? I could\n> > imagine adaptive frame durations to be annoying to handle for codecs\n> > or displays, but I probably lack a good view of the details.\n> \n> Traditionally, we had always allowed the fps to gradually drop with\n> exposure during viewfinder use cases.  This would provide a better\n> user experience (in my opinion) over a step change to the fps.  It\n> also ensures we run with the maximum possible framerate whenever we\n> can.  However, during video encoding, we have always left the fps at a\n> fixed rate, thereby limiting the maximum exposure time allowed.  I\n> guess it is essentially up to the AGC algorithm to decide how it wants\n> to handle it depending on the use case.\n\nThanks for the information. Would you expect applications to specify a\nframe duration interval in the viewfinder use case and a fixed frame\nduration (with min == max) in the video recording use case, or would you\nhandle that internally in the IPA ? I suppose an application may want\nthe IPA to pick either 15fps or 30fps depending on the scene\nillumination and stick to that ? In that case we would need to provide a\nhint to tell whether the frame rate should be adapted continuously or\nnot. 
Or we could do so on the application side by specifying a 15-30 fps\ninterval, capturing a few frames until AEGC locks, reading the frame\nduration selected by the IPA, setting the frame duration to a fixed\nvalue and proceeding with the video recording. It's essentially a\nquestion of who would be responsible for implementing this (application\nor IPA), and if we decide to put the burden on the application, we could\nalso provide a higher-level video recording helper to implement that\nsequence. What do you think would be best ?\n\n> > > V4L2_CID_VBLANK is controlled through the staggered writer, just like\n> > > the exposure and gain controls.\n> > >\n> > > Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> > > ---\n> > >  include/ipa/raspberrypi.h                     |  1 +\n> > >  src/ipa/raspberrypi/cam_helper.cpp            | 38 ++++++++++++++-\n> > >  src/ipa/raspberrypi/cam_helper.hpp            | 15 +++++-\n> > >  src/ipa/raspberrypi/cam_helper_imx219.cpp     | 11 ++++-\n> > >  src/ipa/raspberrypi/cam_helper_imx477.cpp     | 11 ++++-\n> > >  src/ipa/raspberrypi/cam_helper_ov5647.cpp     | 11 ++++-\n> > >  src/ipa/raspberrypi/raspberrypi.cpp           | 48 ++++++++++++++++---\n> > >  .../pipeline/raspberrypi/raspberrypi.cpp      |  2 +\n> > >  8 files changed, 124 insertions(+), 13 deletions(-)\n> > >\n> > > diff --git a/include/ipa/raspberrypi.h b/include/ipa/raspberrypi.h\n> > > index c109469e..74954db2 100644\n> > > --- a/include/ipa/raspberrypi.h\n> > > +++ b/include/ipa/raspberrypi.h\n> > > @@ -51,6 +51,7 @@ static const ControlInfoMap RPiControls = {\n> > >       { &controls::Brightness, ControlInfo(-1.0f, 1.0f) },\n> > >       { &controls::Contrast, ControlInfo(0.0f, 32.0f) },\n> > >       { &controls::Saturation, ControlInfo(0.0f, 32.0f) },\n> > > +     { &controls::FrameDurations, ControlInfo(1.0e3f, 1.0e9f) }\n> > >  };\n> > >\n> > >  } /* namespace libcamera */\n> > > diff --git a/src/ipa/raspberrypi/cam_helper.cpp 
b/src/ipa/raspberrypi/cam_helper.cpp\n> > > index 7f05d2c6..06732241 100644\n> > > --- a/src/ipa/raspberrypi/cam_helper.cpp\n> > > +++ b/src/ipa/raspberrypi/cam_helper.cpp\n> > > @@ -34,8 +34,10 @@ CamHelper *CamHelper::Create(std::string const &cam_name)\n> > >       return nullptr;\n> > >  }\n> > >\n> > > -CamHelper::CamHelper(MdParser *parser)\n> > > -     : parser_(parser), initialized_(false)\n> > > +CamHelper::CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > > +                  unsigned int frameIntegrationDiff)\n> > > +     : parser_(parser), initialized_(false), maxFrameLength_(maxFrameLength),\n> > > +       frameIntegrationDiff_(frameIntegrationDiff)\n> > >  {\n> > >  }\n> > >\n> > > @@ -56,6 +58,38 @@ double CamHelper::Exposure(uint32_t exposure_lines) const\n> > >       return exposure_lines * mode_.line_length / 1000.0;\n> > >  }\n> > >\n> > > +uint32_t CamHelper::GetVBlanking(double &exposure, double minFrameDuration,\n> > > +                              double maxFrameDuration) const\n> > > +{\n> > > +     uint32_t frameLengthMin, frameLengthMax, vblank;\n> > > +     uint32_t exposureLines = ExposureLines(exposure);\n> > > +\n> > > +     assert(initialized_);\n> > > +     /*\n> > > +      * Clip frame length by the frame duration range and the maximum allowable\n> > > +      * value in the sensor, given by maxFrameLength_.\n> > > +      */\n> > > +     frameLengthMin = std::min<uint32_t>(1e3 * minFrameDuration / mode_.line_length,\n> > > +                                         maxFrameLength_);\n> > > +     frameLengthMax = std::min<uint32_t>(1e3 * maxFrameDuration / mode_.line_length,\n> > > +                                         maxFrameLength_);\n> >\n> > Not something to be addressed as part of this series, but as we divide\n> > and multiply by 1000 in various places, should we specify all duration\n> > controls in nanoseconds instead of microseconds ?\n> \n> Good question :)  I think microseconds is more \"user friendly\", 
but I\n> do see the point that we convert to nanoseconds everywhere.  My\n> suggestion would be to leave it as microseconds for the user-facing\n> controls.\n\nLet's do so for now. I may reconsider that later though if we need more\nprecision than 1µs (timestamps are where I expect precision to be\nneeded, and we express them in ns already, maybe we won't need as much\nprecision in the durations) or if we end up having controls in both\nunits and we decide to make the API more consistent.\n\n> > > +     /*\n> > > +      * Limit the exposure to the maximum frame duration requested, and\n> > > +      * re-calculate if it has been clipped.\n> > > +      */\n> > > +     exposureLines = std::min(frameLengthMax - frameIntegrationDiff_,\n> > > +                              exposureLines);\n> > > +     exposure = Exposure(exposureLines);\n> > > +\n> > > +     /* Limit the vblank to the range allowed by the frame length limits. */\n> > > +     vblank = std::max<uint32_t>(mode_.height, exposureLines) -\n> > > +              mode_.height + frameIntegrationDiff_;\n> >\n> > If the exposure time is shorter than the frame height, do we need the\n> > frameIntegrationDiff_ margin ? Assuming a sensor that would have a\n> > minimum vblank of 0 and a frameIntegrationDiff_ of 40, if height was\n> > 1080 and exposureLines 500, there should be no need to set vblank to 40,\n> > right ?\n> \n> Yes, you are correct.  
Will fix this calculation.\n> \n> >\n> > > +     vblank = std::max(vblank, frameLengthMin - mode_.height);\n> > > +     vblank = std::min(vblank, frameLengthMax - mode_.height);\n> >\n> > You can use\n> >\n> >         vblank = utils::clamp(vblank, frameLengthMin - mode_.height,\n> >                               frameLengthMax - mode_.height);\n> \n> Ack.\n> \n> > > +\n> > > +     return vblank;\n> > > +}\n> > > +\n> > >  void CamHelper::SetCameraMode(const CameraMode &mode)\n> > >  {\n> > >       mode_ = mode;\n> > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\n> > > index 0c8aa29a..fdcdbddb 100644\n> > > --- a/src/ipa/raspberrypi/cam_helper.hpp\n> > > +++ b/src/ipa/raspberrypi/cam_helper.hpp\n> > > @@ -68,12 +68,15 @@ class CamHelper\n> > >  {\n> > >  public:\n> > >       static CamHelper *Create(std::string const &cam_name);\n> > > -     CamHelper(MdParser *parser);\n> > > +     CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > > +               unsigned int frameIntegrationDiff);\n> > >       virtual ~CamHelper();\n> > >       void SetCameraMode(const CameraMode &mode);\n> > >       MdParser &Parser() const { return *parser_; }\n> > >       uint32_t ExposureLines(double exposure_us) const;\n> > >       double Exposure(uint32_t exposure_lines) const; // in us\n> > > +     virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,\n> > > +                                   double maxFrameDuration) const;\n> >\n> > Does this function need to be virtual ?\n> \n> I'm not sure :)\n> \n> I wanted to be flexible enough that if a sensor wanted to do something\n> a bit different from the norm, it could override the base\n> implementation.  
I don't have a particular example in mind though.\n\nWe could always make it virtual at that point then, this is an internal\nAPI.\n\n> > >       virtual uint32_t GainCode(double gain) const = 0;\n> > >       virtual double Gain(uint32_t gain_code) const = 0;\n> > >       virtual void GetDelays(int &exposure_delay, int &gain_delay) const;\n> > > @@ -83,10 +86,20 @@ public:\n> > >       virtual unsigned int MistrustFramesStartup() const;\n> > >       virtual unsigned int MistrustFramesModeSwitch() const;\n> > >       virtual CamTransform GetOrientation() const;\n> > > +\n> > >  protected:\n> > >       MdParser *parser_;\n> > >       CameraMode mode_;\n> > > +\n> > > +private:\n> > >       bool initialized_;\n> > > +     /* Largest possible frame length, in units of lines. */\n> > > +     unsigned int maxFrameLength_;\n> > > +     /*\n> > > +      * Smallest difference between the frame length and integration time,\n> > > +      * in units of lines.\n> > > +      */\n> > > +     unsigned int frameIntegrationDiff_;\n> > >  };\n> > >\n> > >  // This is for registering camera helpers with the system, so that the\n> > > diff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > index 35c6597c..fd95a31a 100644\n> > > --- a/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > +++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > @@ -50,13 +50,22 @@ public:\n> > >       unsigned int MistrustFramesModeSwitch() const override;\n> > >       bool SensorEmbeddedDataPresent() const override;\n> > >       CamTransform GetOrientation() const override;\n> > > +\n> > > +private:\n> > > +     /*\n> > > +      * Smallest difference between the frame length and integration time,\n> > > +      * in units of lines.\n> > > +      */\n> > > +     static constexpr int frameIntegrationDiff = 4;\n> > > +     /* Largest possible frame length, in units of lines. 
*/\n> > > +     static constexpr int maxFrameLength = 0xffff;\n> > >  };\n> > >\n> > >  CamHelperImx219::CamHelperImx219()\n> > >  #if ENABLE_EMBEDDED_DATA\n> > >       : CamHelper(new MdParserImx219())\n> >\n> > Shouldn't this be updated too ?\n> \n> Ack.\n> \n> > >  #else\n> > > -     : CamHelper(new MdParserRPi())\n> > > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> > >  #endif\n> > >  {\n> > >  }\n> > > diff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > index 69544456..4a1cab76 100644\n> > > --- a/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > +++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > @@ -39,10 +39,19 @@ public:\n> > >       double Gain(uint32_t gain_code) const override;\n> > >       bool SensorEmbeddedDataPresent() const override;\n> > >       CamTransform GetOrientation() const override;\n> > > +\n> > > +private:\n> > > +     /*\n> > > +      * Smallest difference between the frame length and integration time,\n> > > +      * in units of lines.\n> > > +      */\n> > > +     static constexpr int frameIntegrationDiff = 22;\n> > > +     /* Largest possible frame length, in units of lines. 
*/\n> > > +     static constexpr int maxFrameLength = 0xffdc;\n> > >  };\n> > >\n> > >  CamHelperImx477::CamHelperImx477()\n> > > -     : CamHelper(new MdParserImx477())\n> > > +     : CamHelper(new MdParserImx477(), maxFrameLength, frameIntegrationDiff)\n> > >  {\n> > >  }\n> > >\n> > > diff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > index 3dbcb164..d814fa90 100644\n> > > --- a/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > +++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > @@ -22,6 +22,15 @@ public:\n> > >       unsigned int HideFramesModeSwitch() const override;\n> > >       unsigned int MistrustFramesStartup() const override;\n> > >       unsigned int MistrustFramesModeSwitch() const override;\n> > > +\n> > > +private:\n> > > +     /*\n> > > +      * Smallest difference between the frame length and integration time,\n> > > +      * in units of lines.\n> > > +      */\n> > > +     static constexpr int frameIntegrationDiff = 4;\n> > > +     /* Largest possible frame length, in units of lines. */\n> > > +     static constexpr int maxFrameLength = 0xffff;\n> > >  };\n> > >\n> > >  /*\n> > > @@ -30,7 +39,7 @@ public:\n> > >   */\n> > >\n> > >  CamHelperOv5647::CamHelperOv5647()\n> > > -     : CamHelper(new MdParserRPi())\n> > > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> > >  {\n> > >  }\n> > >\n> > > diff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp\n> > > index 3bcc0815..1af5e74a 100644\n> > > --- a/src/ipa/raspberrypi/raspberrypi.cpp\n> > > +++ b/src/ipa/raspberrypi/raspberrypi.cpp\n> > > @@ -53,6 +53,8 @@ namespace libcamera {\n> > >  /* Configure the sensor with these values initially. 
*/\n> > >  #define DEFAULT_ANALOGUE_GAIN 1.0\n> > >  #define DEFAULT_EXPOSURE_TIME 20000\n> > > +#define DEFAULT_MIN_FRAME_DURATION (1e6 / 30.0)\n> > > +#define DEFAULT_MAX_FRAME_DURATION (1e6 / 0.01)\n> > >\n> > >  LOG_DEFINE_CATEGORY(IPARPI)\n> > >\n> > > @@ -136,6 +138,9 @@ private:\n> > >       /* LS table allocation passed in from the pipeline handler. */\n> > >       uint32_t lsTableHandle_;\n> > >       void *lsTable_;\n> > > +\n> > > +     /* Frame duration (1/fps) given in microseconds. */\n> > > +     double minFrameDuration_, maxFrameDuration_;\n> >\n> > Shouldn't these be float, or uint32_t depending on the outcome of the\n> > discussion for patch 1/3 ?\n> \n> Ack.\n> \n> > >  };\n> > >\n> > >  int IPARPi::init(const IPASettings &settings)\n> > > @@ -252,13 +257,20 @@ void IPARPi::configure(const CameraSensorInfo &sensorInfo,\n> > >               controller_.Initialise();\n> > >               controllerInit_ = true;\n> > >\n> > > -             /* Calculate initial values for gain and exposure. 
*/\n> > > +             /* Calculate initial values for gain, vblank, and exposure */\n> > > +             minFrameDuration_ = DEFAULT_MIN_FRAME_DURATION;\n> > > +             maxFrameDuration_ = DEFAULT_MAX_FRAME_DURATION;\n> > > +\n> > > +             double exposure = DEFAULT_EXPOSURE_TIME;\n> > > +             int32_t vblank = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > > +                                                    maxFrameDuration_);\n> > > +             int32_t exposure_lines = helper_->ExposureLines(exposure);\n> > >               int32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN);\n> > > -             int32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME);\n> > >\n> > >               ControlList ctrls(unicam_ctrls_);\n> > > -             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > > +             ctrls.set(V4L2_CID_VBLANK, vblank);\n> > >               ctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> > > +             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > >\n> > >               IPAOperationData op;\n> > >               op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> > > @@ -630,6 +642,17 @@ void IPARPi::queueRequest(const ControlList &controls)\n> > >                       break;\n> > >               }\n> > >\n> > > +             case controls::FRAME_DURATIONS: {\n> > > +                     auto frameDurations = ctrl.second.get<Span<const float>>();\n> > > +\n> > > +                     /* This will be applied once AGC recalculations occur. */\n> > > +                     minFrameDuration_ = frameDurations[0];\n> > > +                     maxFrameDuration_ = frameDurations[1];\n> >\n> > Should the values be adjusted based on the sensor capabilities ?\n> \n> They do, but separately in the CamHelper::GetVBlanking() call.  
Unless\n> I have misunderstood the comment?\n\nI think I was just confused, this seems fine.\n\n> > > +                     libcameraMetadata_.set(controls::FrameDurations,\n> > > +                                            { frameDurations[0], frameDurations[1] });\n> >\n> > Hmmm... From an application point of view, wouldn't it be more useful to\n> > know what frame duration was actually used for the current frame ? You\n> > don't have to implement this as part of this series, but I could imagine\n> > a FrameDuration metadata control being useful for that purpose.\n> > FrameDurations would be set by applications, and FrameDuration would be\n> > reported through metadata. (On a side note, would FrameDurationLimits be\n> > a better name than FrameDuration then ?)\n> \n> Yes, good point.  I can update this to reflect the actual frame durations used.\n> Happy to rename FrameDuration -> FrameDurationLimits.\n> \n> > > +                     break;\n> > > +             }\n> > > +\n> > >               default:\n> > >                       LOG(IPARPI, Warning)\n> > >                               << \"Ctrl \" << controls::controls.at(ctrl.first)->name()\n> > > @@ -795,7 +818,12 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> > >       op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> > >\n> > >       int32_t gain_code = helper_->GainCode(agcStatus->analogue_gain);\n> > > -     int32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time);\n> > > +\n> > > +     /* GetVBlanking might clip exposure time to the fps limits. */\n> > > +     double exposure = agcStatus->shutter_time;\n> > > +     int32_t vblanking = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > > +                                               maxFrameDuration_);\n> >\n> > I had completely missed, when reviewing the GetVBlanking()\n> > implementation above, that the first parameter was a reference and was\n> > used to return a value. 
We tend in libcamera to use const references for\n> > input parameters, and pointers for output or in/out parameters to make\n> > that clearer. It's not ideal though, as a pointer can be null, and using\n> > references when a parameter should not be null is nicer. No need to\n> > change the code here, but maybe a comment on top of GetVBlanking() would\n> > be useful to point out that the exposure time can be adjusted.\n> \n> Ack.\n> \n> > > +     int32_t exposure_lines = helper_->ExposureLines(exposure);\n> > >\n> > >       if (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) {\n> > >               LOG(IPARPI, Error) << \"Can't find analogue gain control\";\n> > > @@ -807,14 +835,20 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> > >               return;\n> > >       }\n> > >\n> > > -     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << agcStatus->shutter_time\n> > > -                        << \" (Shutter lines: \" << exposure_lines << \") Gain: \"\n> > > +     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << exposure\n> > > +                        << \" (Shutter lines: \" << exposure_lines << \", AGC requested \"\n> > > +                        << agcStatus->shutter_time << \") Gain: \"\n> > >                          << agcStatus->analogue_gain << \" (Gain Code: \"\n> > >                          << gain_code << \")\";\n> >\n> > Could the AGC algorithm get confused if it requests a certain exposure,\n> > and a smaller value is set in the sensor ? Will it keep trying to push\n> > the exposure up to increase the luminance of the scene without\n> > noticing it's pointless ?\n> \n> Yes, it can, and yet it will :)\n> However, there is no harm done.  As long as the AGC algorithm gets\n> given the actual exposure time / gain used for the frame - rather than\n> what it requested - it will fill up the rest of the exposure with\n> digital gain to get the expected brightness.  
This is why it is\n> important for CamHelper::GetVBlanking() to return the exposure\n> that would actually be used.\n> \n> David and I did discuss passing in the framerate limits into the AGC\n> algorithm, but we felt it added complexity for no real gain in\n> functionality.  The AGC algorithm would not behave any differently in\n> either case.  What could help a user who wanted a very specific\n> behavior with framerate limits would be to set up a suitable exposure\n> profile (division of shutter speed and gain) in the tuning file.\n\nOK.\n\n> > >\n> > >       ControlList ctrls(unicam_ctrls_);\n> > > -     ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > > +     /*\n> > > +      * VBLANK must be set before EXPOSURE as the former will adjust the\n> > > +      * limits of the latter control.\n> > > +      */\n> >\n> > This will be nasty on the V4L2 side. We write multiple controls with a\n> > single VIDIOC_S_EXT_CTRLS call, and the V4L2 control framework will\n> > first clamp all control values to the limits before setting any control.\n> > The limits for the exposure time will thus not be adjusted before its\n> > value is clamped :-( I'm not sure how to best solve this. Ideally the\n> > problem should be fixed in the control framework, but I doubt that will\n> > happen. We need to at least ask Hans to see what options we have (or\n> > rather, what options we don't have). One easy hack may be to report a\n> > large range for the exposure time limits in the camera sensor drivers,\n> > and adjust the value in .s_ctrl() instead of relying on the control\n> > framework to do that for us.\n> \n> Ack.  This needs a bit more thinking.\n> \n> If I were to separate out the VBLANK set ctrl from the EXPOSURE set\n> ctrl would that help here?  
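The clamp-then-apply behaviour described above (all values in one batched set call are clamped against the limits as they stand before any control is written) can be illustrated with a toy model. This is NOT real V4L2 code; the `SensorModel` type and every number in it are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

/*
 * Toy model of the control framework behaviour: in a single batched set
 * call, every value is clamped against the *current* limits before any
 * control is applied, so a VBLANK update in the same call cannot widen
 * the EXPOSURE range in time for this frame.
 */
struct SensorModel {
	uint32_t height = 1080; /* active lines */
	uint32_t vblank = 40;   /* current vertical blanking, in lines */
	uint32_t exposure = 0;  /* current exposure, in lines */

	/* The exposure limit depends on the current blanking. */
	uint32_t exposureMax() const { return height + vblank - 4; }
};

/* One batched "S_EXT_CTRLS"-style call: clamp first, then apply. */
uint32_t setBatched(SensorModel &s, uint32_t newVblank, uint32_t reqExposure)
{
	/* Clamp phase: uses the pre-update exposure limit. */
	uint32_t clamped = std::min(reqExposure, s.exposureMax());

	/* Apply phase: vblank changes, but exposure was already clamped. */
	s.vblank = newVblank;
	s.exposure = clamped;
	return s.exposure;
}
```

With the defaults above the exposure limit is 1116 lines; requesting vblank 500 and exposure 1500 in one call still yields 1116 for that frame, and only a second, identical call gets the full 1500 lines — the one frame of wrong or clipped exposure discussed in this thread.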
Note that I cannot easily do that right\n> now as the staggered writer only has an interface to set a single list\n> of ctrls.\n> \n> I suppose the current behavior means that we get 1 frame of exposure\n> that is possibly wrong or clipped before the vblank has updated the\n> ctrl limits in the sensor driver.\n\nThe more I think about it, the more I believe it's a deficiency of the\nV4L2 API, and we should work around it by reporting exposure time limits\nthat don't depend on the blanking, and clamping the value in .s_ctrl().\nWe would otherwise need much more complexity on the userspace side, and\nit's not worth it.\n\n> > > +     ctrls.set(V4L2_CID_VBLANK, vblanking);\n> > >       ctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> > > +     ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > >       op.controls.push_back(ctrls);\n> > >       queueFrameAction.emit(0, op);\n> > >  }\n> > > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > > index 21a1d7f7..948290e2 100644\n> > > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp\n> > > @@ -1159,7 +1159,9 @@ void RPiCameraData::queueFrameAction(unsigned int frame, const IPAOperationData\n> > >               if (!staggeredCtrl_) {\n> > >                       staggeredCtrl_.init(unicam_[Unicam::Image].dev(),\n> > >                                           { { V4L2_CID_ANALOGUE_GAIN, action.data[0] },\n> > > +                                           { V4L2_CID_VBLANK, action.data[1] },\n> > >                                             { V4L2_CID_EXPOSURE, action.data[1] } });\n> >\n> > So the initial value of the exposure time and the vertical blanking are\n> > always identical ?\n> \n> This is setting up the delay in applying the ctrl for the staggered\n> writer.  
So I would expect EXPOSURE and VBLANK to have the same delay\n> in the sensor.\n\nOops, I have misunderstood this, my bad.\n\n> > > +\n> > >                       sensorMetadata_ = action.data[2];\n> > >               }\n> > >","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id CC10261012\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 28 May 2020 03:30:17 +0200 (CEST)","from pendragon.ideasonboard.com (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 395AF2A3;\n\tThu, 28 May 2020 03:30:17 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"ErcKVMS3\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1590629417;\n\tbh=A3jipI69sPxCx57QNjtCjKeIoiM4vNr+XaxQznbJUSE=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=ErcKVMS3x4xQiorgSgL60AesduHi7qktOFF6mop547GQOw/USJBJuyeogTpP/u9cR\n\t36H8pIs4Pn/lIVglDHmsdYw3amqdqvQowC1WPKJz6QLyqCXlNLKlJRDT3wOcd/SGa1\n\tSee85IJdBYnnhvyuqyfi/OkFGuQK7fHwpJg1WTWg=","Date":"Thu, 28 May 2020 04:30:03 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Naushir Patuck <naush@raspberrypi.com>","Cc":"libcamera-devel@lists.libcamera.org","Message-ID":"<20200528013003.GC4670@pendragon.ideasonboard.com>","References":"<20200513091120.1653-1-naush@raspberrypi.com>\n\t<20200513091120.1653-3-naush@raspberrypi.com>\n\t<20200524231608.GB6006@pendragon.ideasonboard.com>\n\t<CAEmqJPrX6HpNCtO+0Wyrj=rNAogujaVQHcG=u0fTOZpiucYdfw@mail.gmail.com>","MIME-Version":"1.0","Content-Type":"text/plain; 
charset=utf-8","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<CAEmqJPrX6HpNCtO+0Wyrj=rNAogujaVQHcG=u0fTOZpiucYdfw@mail.gmail.com>","Subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Thu, 28 May 2020 01:30:18 -0000"}},{"id":4921,"web_url":"https://patchwork.libcamera.org/comment/4921/","msgid":"<CAEmqJPoyrfc7kcGANpmmbM_PX54NAVVYGpNOKnp_Odr86260yQ@mail.gmail.com>","date":"2020-05-28T09:08:05","subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","submitter":{"id":34,"url":"https://patchwork.libcamera.org/api/people/34/","name":"Naushir Patuck","email":"naush@raspberrypi.com"},"content":"Hi Laurent,\n\nOn Thu, 28 May 2020 at 02:30, Laurent Pinchart\n<laurent.pinchart@ideasonboard.com> wrote:\n>\n> Hi Naush,\n>\n> On Tue, May 26, 2020 at 10:29:12AM +0100, Naushir Patuck wrote:\n> > On Mon, 25 May 2020 at 00:16, Laurent Pinchart wrote:\n> > > On Wed, May 13, 2020 at 10:11:19AM +0100, Naushir Patuck wrote:\n> > > > Add support for setting V4L2_CID_VBLANK appropriately when setting\n> > > > V4L2_CID_EXPOSURE. This will allow adaptive framerates during\n> > > > viewfinder use cases (e.g. 
when the exposure time goes above 33ms, we\n> > > > can reduce the framerate to lower than 30fps).\n> > > >\n> > > > The minimum and maximum frame durations are provided via libcamera\n> > > > controls, and will prioritise exposure time limits over any AGC request.\n> > >\n> > > General question about the AGC and frame duration selection behaviour.\n> > > When the scene illumination drops and the application allows the frame\n> > > rate to be lowered, is it best to adapt the frame duration continuously,\n> > > or jump from, let's say, 30fps to 15fps and stay at 15fps ? I could\n> > > imagine adaptive frame durations to be annoying to handle for codecs\n> > > or displays, but I probably lack a good view of the details.\n> >\n> > Traditionally, we had always allowed the fps to gradually drop with\n> > exposure during viewfinder use cases.  This would provide a better\n> > user experience (in my opinion) over a step change to the fps.  It\n> > also ensures we run with the maximum possible framerate whenever we\n> > can.  However, during video encoding, we have always left the fps at a\n> > fixed rate, thereby limiting the maximum exposure time allowed.  I\n> > guess it is essentially up to the AGC algorithm to decide how it wants\n> > to handle it depending on the use case.\n>\n> Thanks for the information. Would you expect applications to specify a\n> frame duration interval in the viewfinder use case and a fixed frame\n> duration (with min == max) in the video recording use case, or would you\n> handle that internally in the IPA ? I suppose an application may want\n> the IPA to pick either 15fps or 30fps depending on the scene\n> illumination and stick to that ? In that case we would need to provide a\n> hint to tell whether the frame rate should be adapted continuously or\n> not. 
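The two policies being discussed (continuous adaptation of the frame duration versus a step change between fixed rates) can be sketched as follows. This is purely illustrative, not libcamera code; the function names and the 30/15 fps steps are assumptions for the example:

```cpp
#include <algorithm>
#include <cassert>

/*
 * Two frame duration policies. Durations are in microseconds; the
 * stepped policy snaps between two fixed rates (30 fps and 15 fps
 * here) instead of tracking the exposure continuously.
 */
double continuousFrameDuration(double exposureUs, double minDurationUs)
{
	/* Stretch the frame just enough to fit the exposure. */
	return std::max(minDurationUs, exposureUs);
}

double steppedFrameDuration(double exposureUs)
{
	const double duration30 = 1e6 / 30.0; /* ~33.3 ms */
	const double duration15 = 1e6 / 15.0; /* ~66.7 ms */

	/* Jump to the slower fixed rate only once 30 fps can't fit it. */
	return exposureUs <= duration30 ? duration30 : duration15;
}
```

The continuous policy always runs at the highest frame rate the exposure permits; the stepped policy trades that for a frame duration that codecs and displays can treat as constant.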
Or we could do so on the application side by specifying a 15-30 fps\n> interval, capturing a few frames until AEGC locks, reading the frame\n> duration selected by the IPA, setting the frame duration to a fixed\n> value and proceeding with the video recording. It's essentially a\n> question of who would be responsible for implementing this (application\n> or IPA), and if we decide to put the burden on the application, we could\n> also provide a higher-level video recording helper to implement that\n> sequence. What do you think would be best ?\n\nI would expect the application to make the decision as to what frame\ndurations it wants based on the intended use cases.  In my opinion, we\nshould minimise the burden on the application side for these types of\ncontrols and let the IPA handle it.  So, having hints passed into the\nIPA for either gradual fps changes or step changes does seem like a\ngood solution to me.\n\n>\n> > > > V4L2_CID_VBLANK is controlled through the staggered writer, just like\n> > > > the exposure and gain controls.\n> > > >\n> > > > Signed-off-by: Naushir Patuck <naush@raspberrypi.com>\n> > > > ---\n> > > >  include/ipa/raspberrypi.h                     |  1 +\n> > > >  src/ipa/raspberrypi/cam_helper.cpp            | 38 ++++++++++++++-\n> > > >  src/ipa/raspberrypi/cam_helper.hpp            | 15 +++++-\n> > > >  src/ipa/raspberrypi/cam_helper_imx219.cpp     | 11 ++++-\n> > > >  src/ipa/raspberrypi/cam_helper_imx477.cpp     | 11 ++++-\n> > > >  src/ipa/raspberrypi/cam_helper_ov5647.cpp     | 11 ++++-\n> > > >  src/ipa/raspberrypi/raspberrypi.cpp           | 48 ++++++++++++++++---\n> > > >  .../pipeline/raspberrypi/raspberrypi.cpp      |  2 +\n> > > >  8 files changed, 124 insertions(+), 13 deletions(-)\n> > > >\n> > > > diff --git a/include/ipa/raspberrypi.h b/include/ipa/raspberrypi.h\n> > > > index c109469e..74954db2 100644\n> > > > --- a/include/ipa/raspberrypi.h\n> > > > +++ b/include/ipa/raspberrypi.h\n> > > > @@ -51,6 +51,7 @@ static const 
ControlInfoMap RPiControls = {\n> > > >       { &controls::Brightness, ControlInfo(-1.0f, 1.0f) },\n> > > >       { &controls::Contrast, ControlInfo(0.0f, 32.0f) },\n> > > >       { &controls::Saturation, ControlInfo(0.0f, 32.0f) },\n> > > > +     { &controls::FrameDurations, ControlInfo(1.0e3f, 1.0e9f) }\n> > > >  };\n> > > >\n> > > >  } /* namespace libcamera */\n> > > > diff --git a/src/ipa/raspberrypi/cam_helper.cpp b/src/ipa/raspberrypi/cam_helper.cpp\n> > > > index 7f05d2c6..06732241 100644\n> > > > --- a/src/ipa/raspberrypi/cam_helper.cpp\n> > > > +++ b/src/ipa/raspberrypi/cam_helper.cpp\n> > > > @@ -34,8 +34,10 @@ CamHelper *CamHelper::Create(std::string const &cam_name)\n> > > >       return nullptr;\n> > > >  }\n> > > >\n> > > > -CamHelper::CamHelper(MdParser *parser)\n> > > > -     : parser_(parser), initialized_(false)\n> > > > +CamHelper::CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > > > +                  unsigned int frameIntegrationDiff)\n> > > > +     : parser_(parser), initialized_(false), maxFrameLength_(maxFrameLength),\n> > > > +       frameIntegrationDiff_(frameIntegrationDiff)\n> > > >  {\n> > > >  }\n> > > >\n> > > > @@ -56,6 +58,38 @@ double CamHelper::Exposure(uint32_t exposure_lines) const\n> > > >       return exposure_lines * mode_.line_length / 1000.0;\n> > > >  }\n> > > >\n> > > > +uint32_t CamHelper::GetVBlanking(double &exposure, double minFrameDuration,\n> > > > +                              double maxFrameDuration) const\n> > > > +{\n> > > > +     uint32_t frameLengthMin, frameLengthMax, vblank;\n> > > > +     uint32_t exposureLines = ExposureLines(exposure);\n> > > > +\n> > > > +     assert(initialized_);\n> > > > +     /*\n> > > > +      * Clip frame length by the frame duration range and the maximum allowable\n> > > > +      * value in the sensor, given by maxFrameLength_.\n> > > > +      */\n> > > > +     frameLengthMin = std::min<uint32_t>(1e3 * minFrameDuration / mode_.line_length,\n> > > > +                 
                        maxFrameLength_);\n> > > > +     frameLengthMax = std::min<uint32_t>(1e3 * maxFrameDuration / mode_.line_length,\n> > > > +                                         maxFrameLength_);\n> > >\n> > > Not something to be addressed as part of this series, but as we divide\n> > > and multiply by 1000 in various places, should we specify all duration\n> > > controls in nanoseconds instead of microseconds ?\n> >\n> > Good question :)  I think microseconds is more \"user friendly\", but I\n> > do see the point that we convert to nanoseconds everywhere.  My\n> > suggestion would be to leave it as microseconds for the user-facing\n> > controls.\n>\n> Let's do so for now. I may reconsider that later though if we need more\n> precision than 1µs (timestamps are where I expect precision to be\n> needed, and we express them in ns already, maybe we won't need as much\n> precision in the durations) or if we end up having controls in both\n> units and we decide to make the API more consistent.\n\nAgreed.\n\n>\n> > > > +     /*\n> > > > +      * Limit the exposure to the maximum frame duration requested, and\n> > > > +      * re-calculate if it has been clipped.\n> > > > +      */\n> > > > +     exposureLines = std::min(frameLengthMax - frameIntegrationDiff_,\n> > > > +                              exposureLines);\n> > > > +     exposure = Exposure(exposureLines);\n> > > > +\n> > > > +     /* Limit the vblank to the range allowed by the frame length limits. */\n> > > > +     vblank = std::max<uint32_t>(mode_.height, exposureLines) -\n> > > > +              mode_.height + frameIntegrationDiff_;\n> > >\n> > > If the exposure time is shorter than the frame height, do we need the\n> > > frameIntegrationDiff_ margin ? Assuming a sensor that would have a\n> > > minimum vblank of 0 and a frameIntegrationDiff_ of 40, if height was\n> > > 1080 and exposureLines 500, there should be no need to set vblank to 40,\n> > > right ?\n> >\n> > Yes, you are correct.  
Will fix this calculation.\n> >\n> > >\n> > > > +     vblank = std::max(vblank, frameLengthMin - mode_.height);\n> > > > +     vblank = std::min(vblank, frameLengthMax - mode_.height);\n> > >\n> > > You can use\n> > >\n> > >         vblank = utils::clamp(vblank, frameLengthMin - mode_.height,\n> > >                               frameLengthMax - mode_.height);\n> >\n> > Ack.\n> >\n> > > > +\n> > > > +     return vblank;\n> > > > +}\n> > > > +\n> > > >  void CamHelper::SetCameraMode(const CameraMode &mode)\n> > > >  {\n> > > >       mode_ = mode;\n> > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp\n> > > > index 0c8aa29a..fdcdbddb 100644\n> > > > --- a/src/ipa/raspberrypi/cam_helper.hpp\n> > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp\n> > > > @@ -68,12 +68,15 @@ class CamHelper\n> > > >  {\n> > > >  public:\n> > > >       static CamHelper *Create(std::string const &cam_name);\n> > > > -     CamHelper(MdParser *parser);\n> > > > +     CamHelper(MdParser *parser, unsigned int maxFrameLength,\n> > > > +               unsigned int frameIntegrationDiff);\n> > > >       virtual ~CamHelper();\n> > > >       void SetCameraMode(const CameraMode &mode);\n> > > >       MdParser &Parser() const { return *parser_; }\n> > > >       uint32_t ExposureLines(double exposure_us) const;\n> > > >       double Exposure(uint32_t exposure_lines) const; // in us\n> > > > +     virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,\n> > > > +                                   double maxFrameDuration) const;\n> > >\n> > > Does this function need to be virtual ?\n> >\n> > I'm not sure :)\n> >\n> > I wanted to be flexible enough that if a sensor wanted to do something\n> > a bit different from the norm, it could override the base\n> > implementation.  
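Folding in the two review comments above (a single clamp instead of the min/max pair, and no frameIntegrationDiff margin when the exposure already fits within the frame), the vblank computation could look roughly like this standalone sketch. The `Mode` struct, the microsecond line-length model and all constants are illustrative, not the actual CamHelper implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

/* Minimal stand-in for the camera mode used by the helper. */
struct Mode {
	uint32_t height;     /* active lines */
	double lineLengthUs; /* line duration, in microseconds */
};

/*
 * Compute the vertical blanking for the requested exposure and frame
 * duration range. Note that exposureUs is an in/out parameter: it is
 * clipped to what the frame length limits allow and reported back.
 */
uint32_t getVBlanking(const Mode &mode, double &exposureUs,
		      double minFrameDurationUs, double maxFrameDurationUs,
		      uint32_t maxFrameLength, uint32_t frameIntegrationDiff)
{
	/* Frame length limits from the duration range and the sensor. */
	uint32_t frameLengthMin = std::min<uint32_t>(
		minFrameDurationUs / mode.lineLengthUs, maxFrameLength);
	uint32_t frameLengthMax = std::min<uint32_t>(
		maxFrameDurationUs / mode.lineLengthUs, maxFrameLength);

	/* Clip the exposure to the longest frame and report it back. */
	uint32_t exposureLines = std::min<uint32_t>(
		exposureUs / mode.lineLengthUs,
		frameLengthMax - frameIntegrationDiff);
	exposureUs = exposureLines * mode.lineLengthUs;

	/* Only a long exposure forces blanking beyond the minimum. */
	uint32_t vblank = 0;
	if (exposureLines + frameIntegrationDiff > mode.height)
		vblank = exposureLines + frameIntegrationDiff - mode.height;

	return std::clamp(vblank, frameLengthMin - mode.height,
			  frameLengthMax - mode.height);
}
```

A short exposure thus leaves vblank at the minimum implied by the shortest allowed frame, while a long exposure both stretches the blanking and is itself clipped to the longest allowed frame.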
I don't have a particular example in mind though.\n>\n> We could always make it virtual at that point then, this is an internal\n> API.\n\nAck.\n\n>\n> > > >       virtual uint32_t GainCode(double gain) const = 0;\n> > > >       virtual double Gain(uint32_t gain_code) const = 0;\n> > > >       virtual void GetDelays(int &exposure_delay, int &gain_delay) const;\n> > > > @@ -83,10 +86,20 @@ public:\n> > > >       virtual unsigned int MistrustFramesStartup() const;\n> > > >       virtual unsigned int MistrustFramesModeSwitch() const;\n> > > >       virtual CamTransform GetOrientation() const;\n> > > > +\n> > > >  protected:\n> > > >       MdParser *parser_;\n> > > >       CameraMode mode_;\n> > > > +\n> > > > +private:\n> > > >       bool initialized_;\n> > > > +     /* Largest possible frame length, in units of lines. */\n> > > > +     unsigned int maxFrameLength_;\n> > > > +     /*\n> > > > +      * Smallest difference between the frame length and integration time,\n> > > > +      * in units of lines.\n> > > > +      */\n> > > > +     unsigned int frameIntegrationDiff_;\n> > > >  };\n> > > >\n> > > >  // This is for registering camera helpers with the system, so that the\n> > > > diff --git a/src/ipa/raspberrypi/cam_helper_imx219.cpp b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > > index 35c6597c..fd95a31a 100644\n> > > > --- a/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > > +++ b/src/ipa/raspberrypi/cam_helper_imx219.cpp\n> > > > @@ -50,13 +50,22 @@ public:\n> > > >       unsigned int MistrustFramesModeSwitch() const override;\n> > > >       bool SensorEmbeddedDataPresent() const override;\n> > > >       CamTransform GetOrientation() const override;\n> > > > +\n> > > > +private:\n> > > > +     /*\n> > > > +      * Smallest difference between the frame length and integration time,\n> > > > +      * in units of lines.\n> > > > +      */\n> > > > +     static constexpr int frameIntegrationDiff = 4;\n> > > > +     /* Largest possible frame length, in units of 
lines. */\n> > > > +     static constexpr int maxFrameLength = 0xffff;\n> > > >  };\n> > > >\n> > > >  CamHelperImx219::CamHelperImx219()\n> > > >  #if ENABLE_EMBEDDED_DATA\n> > > >       : CamHelper(new MdParserImx219())\n> > >\n> > > Shouldn't this be updated too ?\n> >\n> > Ack.\n> >\n> > > >  #else\n> > > > -     : CamHelper(new MdParserRPi())\n> > > > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> > > >  #endif\n> > > >  {\n> > > >  }\n> > > > diff --git a/src/ipa/raspberrypi/cam_helper_imx477.cpp b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > > index 69544456..4a1cab76 100644\n> > > > --- a/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > > +++ b/src/ipa/raspberrypi/cam_helper_imx477.cpp\n> > > > @@ -39,10 +39,19 @@ public:\n> > > >       double Gain(uint32_t gain_code) const override;\n> > > >       bool SensorEmbeddedDataPresent() const override;\n> > > >       CamTransform GetOrientation() const override;\n> > > > +\n> > > > +private:\n> > > > +     /*\n> > > > +      * Smallest difference between the frame length and integration time,\n> > > > +      * in units of lines.\n> > > > +      */\n> > > > +     static constexpr int frameIntegrationDiff = 22;\n> > > > +     /* Largest possible frame length, in units of lines. 
*/\n> > > > +     static constexpr int maxFrameLength = 0xffdc;\n> > > >  };\n> > > >\n> > > >  CamHelperImx477::CamHelperImx477()\n> > > > -     : CamHelper(new MdParserImx477())\n> > > > +     : CamHelper(new MdParserImx477(), maxFrameLength, frameIntegrationDiff)\n> > > >  {\n> > > >  }\n> > > >\n> > > > diff --git a/src/ipa/raspberrypi/cam_helper_ov5647.cpp b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > > index 3dbcb164..d814fa90 100644\n> > > > --- a/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > > +++ b/src/ipa/raspberrypi/cam_helper_ov5647.cpp\n> > > > @@ -22,6 +22,15 @@ public:\n> > > >       unsigned int HideFramesModeSwitch() const override;\n> > > >       unsigned int MistrustFramesStartup() const override;\n> > > >       unsigned int MistrustFramesModeSwitch() const override;\n> > > > +\n> > > > +private:\n> > > > +     /*\n> > > > +      * Smallest difference between the frame length and integration time,\n> > > > +      * in units of lines.\n> > > > +      */\n> > > > +     static constexpr int frameIntegrationDiff = 4;\n> > > > +     /* Largest possible frame length, in units of lines. */\n> > > > +     static constexpr int maxFrameLength = 0xffff;\n> > > >  };\n> > > >\n> > > >  /*\n> > > > @@ -30,7 +39,7 @@ public:\n> > > >   */\n> > > >\n> > > >  CamHelperOv5647::CamHelperOv5647()\n> > > > -     : CamHelper(new MdParserRPi())\n> > > > +     : CamHelper(new MdParserRPi(), maxFrameLength, frameIntegrationDiff)\n> > > >  {\n> > > >  }\n> > > >\n> > > > diff --git a/src/ipa/raspberrypi/raspberrypi.cpp b/src/ipa/raspberrypi/raspberrypi.cpp\n> > > > index 3bcc0815..1af5e74a 100644\n> > > > --- a/src/ipa/raspberrypi/raspberrypi.cpp\n> > > > +++ b/src/ipa/raspberrypi/raspberrypi.cpp\n> > > > @@ -53,6 +53,8 @@ namespace libcamera {\n> > > >  /* Configure the sensor with these values initially. 
*/\n> > > >  #define DEFAULT_ANALOGUE_GAIN 1.0\n> > > >  #define DEFAULT_EXPOSURE_TIME 20000\n> > > > +#define DEFAULT_MIN_FRAME_DURATION (1e6 / 30.0)\n> > > > +#define DEFAULT_MAX_FRAME_DURATION (1e6 / 0.01)\n> > > >\n> > > >  LOG_DEFINE_CATEGORY(IPARPI)\n> > > >\n> > > > @@ -136,6 +138,9 @@ private:\n> > > >       /* LS table allocation passed in from the pipeline handler. */\n> > > >       uint32_t lsTableHandle_;\n> > > >       void *lsTable_;\n> > > > +\n> > > > +     /* Frame duration (1/fps) given in microseconds. */\n> > > > +     double minFrameDuration_, maxFrameDuration_;\n> > >\n> > > Shouldn't these be float, or uint32_t depending on the outcome of the\n> > > discussion for patch 1/3 ?\n> >\n> > Ack.\n> >\n> > > >  };\n> > > >\n> > > >  int IPARPi::init(const IPASettings &settings)\n> > > > @@ -252,13 +257,20 @@ void IPARPi::configure(const CameraSensorInfo &sensorInfo,\n> > > >               controller_.Initialise();\n> > > >               controllerInit_ = true;\n> > > >\n> > > > -             /* Calculate initial values for gain and exposure. 
*/\n> > > > +             /* Calculate initial values for gain, vblank, and exposure */\n> > > > +             minFrameDuration_ = DEFAULT_MIN_FRAME_DURATION;\n> > > > +             maxFrameDuration_ = DEFAULT_MAX_FRAME_DURATION;\n> > > > +\n> > > > +             double exposure = DEFAULT_EXPOSURE_TIME;\n> > > > +             int32_t vblank = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > > > +                                                    maxFrameDuration_);\n> > > > +             int32_t exposure_lines = helper_->ExposureLines(exposure);\n> > > >               int32_t gain_code = helper_->GainCode(DEFAULT_ANALOGUE_GAIN);\n> > > > -             int32_t exposure_lines = helper_->ExposureLines(DEFAULT_EXPOSURE_TIME);\n> > > >\n> > > >               ControlList ctrls(unicam_ctrls_);\n> > > > -             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > > > +             ctrls.set(V4L2_CID_VBLANK, vblank);\n> > > >               ctrls.set(V4L2_CID_EXPOSURE, exposure_lines);\n> > > > +             ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > > >\n> > > >               IPAOperationData op;\n> > > >               op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> > > > @@ -630,6 +642,17 @@ void IPARPi::queueRequest(const ControlList &controls)\n> > > >                       break;\n> > > >               }\n> > > >\n> > > > +             case controls::FRAME_DURATIONS: {\n> > > > +                     auto frameDurations = ctrl.second.get<Span<const float>>();\n> > > > +\n> > > > +                     /* This will be applied once AGC recalculations occur. */\n> > > > +                     minFrameDuration_ = frameDurations[0];\n> > > > +                     maxFrameDuration_ = frameDurations[1];\n> > >\n> > > Should the values be adjusted based on the sensor capabilities ?\n> >\n> > They do, but separately in the CamHelper::GetVBlanking() call.  
Unless\n> > I have misunderstood the comment?\n>\n> I think I was just confused, this seems fine.\n>\n> > > > +                     libcameraMetadata_.set(controls::FrameDurations,\n> > > > +                                            { frameDurations[0], frameDurations[1] });\n> > >\n> > > Hmmm... From an application point of view, wouldn't it be more useful to\n> > > know what frame duration was actually used for the current frame ? You\n> > > don't have to implement this as part of this series, but I could imagine\n> > > a FrameDuration metadata control being useful for that purpose.\n> > > FrameDurations would be set by applications, and FrameDuration would be\n> > > reported through metadata. (On a side note, would FrameDurationLimits be\n> > > a better name than FrameDuration then ?)\n> >\n> > Yes, good point.  I can update this to reflect the actual frame durations used.\n> > Happy to rename FrameDuration -> FrameDurationLimits.\n> >\n> > > > +                     break;\n> > > > +             }\n> > > > +\n> > > >               default:\n> > > >                       LOG(IPARPI, Warning)\n> > > >                               << \"Ctrl \" << controls::controls.at(ctrl.first)->name()\n> > > > @@ -795,7 +818,12 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> > > >       op.operation = RPI_IPA_ACTION_V4L2_SET_STAGGERED;\n> > > >\n> > > >       int32_t gain_code = helper_->GainCode(agcStatus->analogue_gain);\n> > > > -     int32_t exposure_lines = helper_->ExposureLines(agcStatus->shutter_time);\n> > > > +\n> > > > +     /* GetVBlanking might clip exposure time to the fps limits. 
*/\n> > > > +     double exposure = agcStatus->shutter_time;\n> > > > +     int32_t vblanking = helper_->GetVBlanking(exposure, minFrameDuration_,\n> > > > +                                               maxFrameDuration_);\n> > >\n> > > I had completely missed, when reviewing the GetVBlanking()\n> > > implementation above, that the first parameter was a reference and was\n> > > used to return a value. We tend in libcamera to use const references for\n> > > input parameters, and pointers for output or in/out parameters to make\n> > > that clearer. It's not ideal though, as a pointer can be null, and using\n> > > references when a parameter should not be null is nicer. No need to\n> > > change the code here, but maybe a comment on top of GetVBlanking() would\n> > > be useful to point out that the exposure time can be adjusted.\n> >\n> > Ack.\n> >\n> > > > +     int32_t exposure_lines = helper_->ExposureLines(exposure);\n> > > >\n> > > >       if (unicam_ctrls_.find(V4L2_CID_ANALOGUE_GAIN) == unicam_ctrls_.end()) {\n> > > >               LOG(IPARPI, Error) << \"Can't find analogue gain control\";\n> > > > @@ -807,14 +835,20 @@ void IPARPi::applyAGC(const struct AgcStatus *agcStatus)\n> > > >               return;\n> > > >       }\n> > > >\n> > > > -     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << agcStatus->shutter_time\n> > > > -                        << \" (Shutter lines: \" << exposure_lines << \") Gain: \"\n> > > > +     LOG(IPARPI, Debug) << \"Applying AGC Exposure: \" << exposure\n> > > > +                        << \" (Shutter lines: \" << exposure_lines << \", AGC requested \"\n> > > > +                        << agcStatus->shutter_time << \") Gain: \"\n> > > >                          << agcStatus->analogue_gain << \" (Gain Code: \"\n> > > >                          << gain_code << \")\";\n> > >\n> > > Could the AGC algorithm get confused if it requests a certain exposure,\n> > > and a smaller value is set in the sensor ? 
Will it keep trying to push\n> > > the exposure up to increase the luminance of the scene without\n> > > noticing it's pointless ?\n> >\n> > Yes, it can, and yet it will :)\n> > However, there is no harm done.  As long as the AGC algorithm gets\n> > given the actual exposure time / gain used for the frame - rather than\n> > what it requested - it will fill up the rest of the exposure with\n> > digital gain to get the expected brightness.  This is why it is\n> > important for CamHelper::GetVBlanking() to return the exposure\n> > that would actually be used.\n> >\n> > David and I did discuss passing in the framerate limits into the AGC\n> > algorithm, but we felt it added complexity for no real gain in\n> > functionality.  The AGC algorithm would not behave any differently in\n> > either case.  What could help a user who wanted a very specific\n> > behavior with framerate limits would be to set up a suitable exposure\n> > profile (division of shutter speed and gain) in the tuning file.\n>\n> OK.\n>\n> > > >\n> > > >       ControlList ctrls(unicam_ctrls_);\n> > > > -     ctrls.set(V4L2_CID_ANALOGUE_GAIN, gain_code);\n> > > > +     /*\n> > > > +      * VBLANK must be set before EXPOSURE as the former will adjust the\n> > > > +      * limits of the latter control.\n> > > > +      */\n> > >\n> > > This will be nasty on the V4L2 side. We write multiple controls with a\n> > > single VIDIOC_S_EXT_CTRLS call, and the V4L2 control framework will\n> > > first clamp all control values to the limits before setting any control.\n> > > The limits for the exposure time will thus not be adjusted before its\n> > > value is clamped :-( I'm not sure how to best solve this. Ideally the\n> > > problem should be fixed in the control framework, but I doubt that will\n> > > happen. We need to at least ask Hans to see what options we have (or\n> > > rather, what options we don't have). 
One easy hack may be to report a\n> > > large range for the exposure time limits in the camera sensor drivers,\n> > > and adjust the value in .s_ctrl() instead of relying on the control\n> > > framework to do that for us.\n> >\n> > Ack.  This needs a bit more thinking.\n> >\n> > If I were to separate out the VBLANK set ctrl from the EXPOSURE set\n> > ctrl, would that help here?  Note that I cannot easily do that right\n> > now as the staggered writer only has an interface to set a single list\n> > of ctrls.\n> >\n> > I suppose the current behavior means that we get 1 frame of exposure\n> > that is possibly wrong or clipped before the vblank has updated the\n> > ctrl limits in the sensor driver.\n>\n> The more I think about it, the more I believe it's a deficiency of the\n> V4L2 API, and we should work around it by reporting exposure time limits\n> that don't depend on the blanking, and clamping the value in .s_ctrl().\n> We would otherwise need much more complexity on the userspace side, and\n> it's not worth it.\n\nAck, do you think it's probably best kept as a separate thread of work?\n\n>
V4L2_CID_ANALOGUE_GAIN, action.data[0] },\n> > > > +                                           { V4L2_CID_VBLANK, action.data[1] },\n> > > >                                             { V4L2_CID_EXPOSURE, action.data[1] } });\n> > >\n> > > So the initial value of the exposure time and the vertical blanking are\n> > > always identical ?\n> >\n> > This is setting up the delay in applying the ctrl for the staggered\n> > writer.  So I would expect EXPOSURE and VBLANK to have the same delay\n> > in the sensor.\n>\n> Oops, I have misunderstood this, my bad.\n>\n> > > > +\n> > > >                       sensorMetadata_ = action.data[2];\n> > > >               }\n> > > >\n>\n> --\n> Regards,\n>\n> Laurent Pinchart\n\nRegards,\nNaush","headers":{"Return-Path":"<naush@raspberrypi.com>","Received":["from mail-lf1-x144.google.com (mail-lf1-x144.google.com\n\t[IPv6:2a00:1450:4864:20::144])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 099F2603CF\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 28 May 2020 11:08:23 +0200 (CEST)","by mail-lf1-x144.google.com with SMTP id u16so14390421lfl.8\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 28 May 2020 02:08:22 -0700 (PDT)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (2048-bit key; \n\tunprotected) header.d=raspberrypi.com\n\theader.i=@raspberrypi.com\n\theader.b=\"JDg/U2sV\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=raspberrypi.com; 
s=google;\n\th=mime-version:references:in-reply-to:from:date:message-id:subject:to\n\t:cc:content-transfer-encoding;\n\tbh=YmRwWEXq+skWs3HZEa2Yj/IjDbjIvy+euoaD09+ARLE=;\n\tb=JDg/U2sVOPjGSIu/0yR04MtHe4VOcssRs5i+DAQaIxyi4QeJA0LrJv3Ucn90IMW5BB\n\t7a6c9bi2QrW11tUNRuJXsljQX65leLcezitdK1vBsEKqpTeZ1FEieHCKVya+qvR2Mzt6\n\tyX8FzWgK8t9Bmr9t25IyQyyqZM4TDRmDzkO/9Zu97irngw3M3tFPQGEo4uq6SPeaP5kF\n\tBEYXB8/3TKMcRolOcipTmA4dQmWAdOa7Px37GLw7WY+qiJZOkPZcCmjSJg2UtNe8y8kO\n\tCgJuCYTWWFWoryCuYKaJbn/Cq8WFL1bbmBlxt7sUlL1Ol2rc/dMnBVALMXZ45fATLzWC\n\t6iQQ==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; s=20161025;\n\th=x-gm-message-state:mime-version:references:in-reply-to:from:date\n\t:message-id:subject:to:cc:content-transfer-encoding;\n\tbh=YmRwWEXq+skWs3HZEa2Yj/IjDbjIvy+euoaD09+ARLE=;\n\tb=XFeLDTmGGE0PGV/zYaMt8LR/2xLYMsNNMCuELJnerInAL6tbjpGBPwA5MP0Tbj9h24\n\tcraTH72OqFyMNtVS9wRTLCnvDKdpQPD5H7D9n6Q1t7nO8W2SCqQGH/Od6esm50+0+urX\n\tjEDlZoyneALoXSU6FuwIYVpK4zcJHAeDJGRVQyFZ9Fw5f3okeTzodMsaiHJLuqU2Q2hD\n\tCIedUVpbW1zZcxF4/xj3W5lDi0dYFU8DcH7Di2UecrHNs9Hub7UMlLPSB73PZGNFbcwC\n\tHQA57xbt3ZMs88lMNeNz/BQi6KUCo2Wgw/UldPh//BoLeKtL2wEAZ88ykRKaDtWRD9No\n\tmLeQ==","X-Gm-Message-State":"AOAM530EsvdD7BYD9xaDCeoeojp+z3W96CII5xrRaZJFFUDEXfm2NAEk\n\tDEp2oEPi1OHAwi2czuZU8UjHOeFKARAooOkjoiWi2g==","X-Google-Smtp-Source":"ABdhPJyKl0lJW153fHUjyxmHDje4b+19Hg5ou5FIKRhxEcKWbqyRFHTrK8vgbmCkBGg9JqaNM36Qdk7nityi/CsjSP8=","X-Received":"by 2002:a19:4f46:: with SMTP id a6mr1170084lfk.107.1590656901709;\n\tThu, 28 May 2020 02:08:21 -0700 (PDT)","MIME-Version":"1.0","References":"<20200513091120.1653-1-naush@raspberrypi.com>\n\t<20200513091120.1653-3-naush@raspberrypi.com>\n\t<20200524231608.GB6006@pendragon.ideasonboard.com>\n\t<CAEmqJPrX6HpNCtO+0Wyrj=rNAogujaVQHcG=u0fTOZpiucYdfw@mail.gmail.com>\n\t<20200528013003.GC4670@pendragon.ideasonboard.com>","In-Reply-To":"<20200528013003.GC4670@pendragon.ideasonboard.com>","From":"Naushir Patuck <naush@raspberrypi.com>","Date":"Thu, 28 May 
2020 10:08:05 +0100","Message-ID":"<CAEmqJPoyrfc7kcGANpmmbM_PX54NAVVYGpNOKnp_Odr86260yQ@mail.gmail.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","Cc":"libcamera-devel@lists.libcamera.org","Content-Type":"text/plain; charset=\"UTF-8\"","Content-Transfer-Encoding":"quoted-printable","Subject":"Re: [libcamera-devel] [PATCH v2 2/3] libcamera: raspberrypi: Add\n\tcontrol of sensor vblanking","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Thu, 28 May 2020 09:08:23 -0000"}}]