[v2,00/10] libipa: agc: Calculate exposure limits

Message ID 20251028-exposure-limits-v2-0-a8b5a318323e@ideasonboard.com

Jacopo Mondi Oct. 28, 2025, 9:31 a.m. UTC
By definition, the exposure time that can be programmed on a sensor is
limited by the frame duration and whenever the frame duration is
updated, the exposure time limits should be updated as well.

This is not the case for IPAs using the AgcMeanLuminance algorithm from
libipa, where the exposure limits are computed at configure() time and
never updated.

This has two implications:
1) The AGC algorithms will always operate with an exposure time bound to
   the frame duration programmed at startup time
2) The Camera::controls() limits are not updated to reflect the new
   constraints

This series addresses issue 1) by changing the AgcMeanLuminance class
and the associated helpers to regulate the exposure time based on the
currently programmed maximum frame duration, using a sensor-specific
margin that is now reported by the CameraHelper class.
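The relationship can be sketched as follows (the names below are illustrative and do not match libcamera's actual API):

```cpp
#include <algorithm>

/*
 * Illustrative sketch only: the names below do not match libcamera's
 * actual API. The usable exposure range is bounded above by the currently
 * programmed maximum frame duration minus the sensor-specific margin.
 */
struct ExposureLimits {
	double minUs;
	double maxUs;
};

ExposureLimits exposureLimits(double sensorMinUs, double sensorMaxUs,
			      double maxFrameDurationUs, double marginUs)
{
	double maxUs = std::min(sensorMaxUs, maxFrameDurationUs - marginUs);
	return { sensorMinUs, std::max(sensorMinUs, maxUs) };
}
```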

As the AgcMeanLuminance algorithm tries to push the exposure time up
to the maximum allowed frame duration, initializing the algorithm
with the sensor's maximum capabilities results in a very low frame rate,
unless the user specifies a FrameDurationLimits range in which the
algorithm should operate. To avoid slowing the frame rate down too much,
a "default" frame rate is computed at algorithm configuration time, where
the canonical 30, 15 and 10 FPS values are tested and selected in order
of preference.
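The selection of the default frame rate can be sketched like this (illustrative code, not the series' actual implementation):

```cpp
#include <array>

/*
 * Illustrative sketch, assuming durations in microseconds: try the
 * canonical frame rates in order of preference and pick the first one
 * whose frame duration the sensor can actually achieve.
 */
double defaultFrameDurationUs(double minDurationUs, double maxDurationUs)
{
	const std::array<double, 3> rates = { 30.0, 15.0, 10.0 };

	for (double fps : rates) {
		double durationUs = 1e6 / fps;
		if (durationUs >= minDurationUs && durationUs <= maxDurationUs)
			return durationUs;
	}

	/* None of the canonical rates fits: run as fast as the sensor can. */
	return minDurationUs;
}
```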

The last patch is not for inclusion as Mali support needs to be
finalized upstream first, but shows how it is possible to handle
frame duration limits and exposure with the new model (and I needed that
patch for testing on Mali anyway).

Tested with camshark on Mali-C55, and on RkISP1 with a test script that
sets a longer frame duration (66 ms), validating that the exposure time
is increased to match the new limits.

RkISP1Agc agc.cpp:644 Divided up exposure time, analogue gain, quantization gain and digital gain are 33251.27us, 10.6667, 1.00018 and 6.49639
61.545331 (47.58 fps) cam0-stream0 seq: 000050 bytesused: 2073600/1036800
RkISP1Agc agc.cpp:644 Divided up exposure time, analogue gain, quantization gain and digital gain are 66578.16us, 10.6667, 1.00018 and 3.24451
61.566241 (47.82 fps) cam0-stream0 seq: 000051 bytesused: 2073600/1036800

Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
---
Changes in v2:
- Rework of v1 that moves the exposure limits computation to the
  AgcMeanLuminance exposure helpers

---
Jacopo Mondi (9):
      ipa: camera_sensor_helper: Introduce exposureMargin()
      ipa: ipu3: Move CameraHelper to context
      ipa: libipa: agc: Make Agc::configure() set limits
      ipa: libipa: agc: Configure with frame duration
      ipa: libipa: agc: Initialize exposure with frame duration
      ipa: libipa: agc: Set exposure limits with frame duration
      ipa: libipa: agc: Initialize a sensible frame duration
      ipa: libipa: agc: Calculate frame duration in helper
      [DNI] ipa: mali: Handle FrameDurationLimits

Kieran Bingham (1):
      ipa: mali-c55: Move CameraHelper to context

 src/ipa/ipu3/algorithms/agc.cpp         |  32 +++++++---
 src/ipa/ipu3/ipa_context.cpp            |   3 +
 src/ipa/ipu3/ipa_context.h              |   3 +
 src/ipa/ipu3/ipu3.cpp                   |  22 +++----
 src/ipa/libipa/agc_mean_luminance.cpp   |  73 ++++++++++++++++++---
 src/ipa/libipa/agc_mean_luminance.h     |  18 +++++-
 src/ipa/libipa/camera_sensor_helper.cpp |  72 +++++++++++++++++++++
 src/ipa/libipa/camera_sensor_helper.h   |   2 +
 src/ipa/libipa/exposure_mode_helper.cpp | 109 +++++++++++++++++++++++++++-----
 src/ipa/libipa/exposure_mode_helper.h   |  16 ++++-
 src/ipa/mali-c55/algorithms/agc.cpp     |  76 +++++++++++++++++++---
 src/ipa/mali-c55/ipa_context.cpp        |   6 ++
 src/ipa/mali-c55/ipa_context.h          |  12 ++++
 src/ipa/mali-c55/mali-c55.cpp           |  44 ++++++-------
 src/ipa/rkisp1/algorithms/agc.cpp       |  42 ++++++------
 15 files changed, 430 insertions(+), 100 deletions(-)
---
base-commit: 36f9cdcdb4a3c69e10b7da8a3de8354421cdcb56
change-id: 20251017-exposure-limits-5c48252fe548

Best regards,

Comments

Hans de Goede Oct. 28, 2025, 9:53 a.m. UTC | #1
Hi Jacopo,

On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> By definition, the exposure time that can be programmed on a sensor is
> limited by the frame duration and whenever the frame duration is
> updated, the exposure time limits should be updated as well.
> 
> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> libipa, where the exposure limits are computed at configure() time and
> never updated.
> 
> This has two implications:
> 1) The AGC algorithms will always operate with an exposure time bound to
>    the frame duration programmed at startup time
> 2) The Camera::controls() limits are not updated to reflect the new
>    constraints
> 
> This series addresses issue 1) by changing the AgcMeanLuminance class
> and the associated helpers to regulate the exposure time base on the
> currently programmed max frame duration using a sensor-specific margin
> that is now reported by the CameraHelper class.

I have a question about the above paragraph. The sensor-specific margin
is already known by the kernel's sensor driver and properly written
sensor drivers will update the exposure range reported by the query-ctrl
ioctl after changing the vblank control.

So I think it would be better to rely on the kernel for this info
and refresh the cached exposure-control limits when changing the vblank
control, rather than duplicating this info in the CameraHelper class?

One could even calculate the exposure margin by calculating the configured
frame time (in a unit of lines) and then subtracting the max exposure
reported by the sensor from the frame-time-in-lines.
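As a rough sketch of that calculation, in line units (illustrative names, not an actual libcamera helper):

```cpp
/*
 * Illustrative sketch: the margin is what remains of the configured
 * frame length (active height + VBLANK, in lines) once the maximum
 * exposure reported by the sensor is subtracted.
 */
int exposureMarginLines(int activeHeight, int vblankLines, int maxExposureLines)
{
	int frameLengthLines = activeHeight + vblankLines;
	return frameLengthLines - maxExposureLines;
}
```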

> As the AgcMeanLuminance algorithm tries to push the exposure time up
> to the maximum allowed frame duration, initializing the algorithm
> with the sensor's maximum capabilities results in a very low frame rate,
> unless the user specifies a FrameDurationLimits range in which the
> algorithm should operate. To avoid slowing the frame rate down too much
> compute a "default" frame rate at algorithm configuration time, were
> the canonical 30, 15 and 10 FPS values are tested and selected in order
> of preference.

I must admit I'm not familiar with the AgcMeanLuminance algorithm.
Does this algorithm also set / used to set the vblank control to
the maximum allowed frame duration? Or does it dynamically adjust
vblank when it needs more exposure "range" ?

I've been thinking about adding support to dynamically increase
vblank when more exposure time is needed in the softISP AGC code,
but it would probably be better to first convert that over to
the shared AGC helpers. I know that Kieran is working on this.

At least some UVC cameras have an auto-exposure priority (or is it mode?)
v4l2 control which can be set to "aperture priority" to make them
automatically increase the frame duration if necessary because of low
light conditions. I think we want something similar for libcamera's
AGC.

Regards,

Hans

Robert Mader Oct. 28, 2025, 11:26 a.m. UTC | #2
Just gave it a go on my PinePhone Pro (rkisp1/rk3399) and can confirm it 
still fixes low framerates for me. So for the common and rkisp1 bits:

Tested-by: Robert Mader <robert.mader@collabora.com>

Jacopo Mondi Oct. 28, 2025, 11:42 a.m. UTC | #3
Hi Hans

On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> Hi Jacopo,
>
> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > By definition, the exposure time that can be programmed on a sensor is
> > limited by the frame duration and whenever the frame duration is
> > updated, the exposure time limits should be updated as well.
> >
> > This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > libipa, where the exposure limits are computed at configure() time and
> > never updated.
> >
> > This has two implications:
> > 1) The AGC algorithms will always operate with an exposure time bound to
> >    the frame duration programmed at startup time
> > 2) The Camera::controls() limits are not updated to reflect the new
> >    constraints
> >
> > This series addresses issue 1) by changing the AgcMeanLuminance class
> > and the associated helpers to regulate the exposure time base on the
> > currently programmed max frame duration using a sensor-specific margin
> > that is now reported by the CameraHelper class.
>
> I have a question about the above paragraph. The sensor-specific margin
> is already known by the kernel's sensor driver and properly written
> sensor drivers will update the exposure range reported by the query-ctrl
> ioctl after changing the vblank control.
>
> So I think it would be better to rely on the kernel for this info
> and refresh the cached exposure-control limits when changing the vblank
> control rather then duplicating this info in the CameraHelper class ?

We discussed this with Niklas in v1
https://patchwork.libcamera.org/patch/24695/#36372

My main concern (apart from one more ioctl per frame) is that relying on
the kernel to update the control limits will introduce a delay of a
variable number of frames that corresponds to the pipeline depth plus
the control delays.

Whenever, while processing statistics for frame X, a new vblank value is
calculated to accommodate the newly computed exposure value (within the
FrameDurationLimits), we push the new value to DelayedControls, which
caches it in a queue. Whenever a new frame is started, the queue of
controls is popped and the controls get applied to the device. They
will take effect a few frames later (while the kernel interface
updates the control limits immediately, if I'm not mistaken).
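A minimal model of that queueing behaviour (this is not libcamera's actual DelayedControls implementation, just an illustration of the delay):

```cpp
#include <deque>

/*
 * Minimal model of the queueing described above: a value pushed while
 * processing frame X only takes effect 'delay' frames later, because
 * 'delay' previously queued values are applied first.
 */
class DelayQueue {
public:
	DelayQueue(int delay, int initial)
		: queue_(delay, initial) {}

	/* Queue the value just computed by the IPA. */
	void push(int value) { queue_.push_back(value); }

	/* Value that actually takes effect on the frame being started. */
	int pop()
	{
		int value = queue_.front();
		queue_.pop_front();
		return value;
	}

private:
	std::deque<int> queue_;
};
```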

The IPA operates "in the future": once it has consumed request X
(which will be realized on the device at frame X + depth), it shall
calculate new values for frame X+1. This means that once the IPA
state is updated to new limits, all calculations from that point on
should use the new state, and we can't rely on the device state for this.

We currently do not properly track Request ids/frame sequences, and we
kind of ignore the pipeline depth as we keep consuming requests as
soon as they arrive, but it's not unreasonable to assume that the
overall delay, once we properly track the depth and compute parameters
early enough, can be around 6 or 7 frames.

>
> One could even calculate the exposure-margin by calculating the configured
> frame-time (in a unit of lines) and then substracting the max-exposure
> reported by the sensor from the frame-time-in-lines.

Stefan suggested that too.

When you configure(), you take the sensor's exposure time and the frame
length, and the difference between the two is the margin. However, there
is no indication anywhere that a sensor driver has to initialize its
controls in this way, and considering how subtly different sensor
drivers are, I wouldn't bet on them being consistent.

>
> > As the AgcMeanLuminance algorithm tries to push the exposure time up
> > to the maximum allowed frame duration, initializing the algorithm
> > with the sensor's maximum capabilities results in a very low frame rate,
> > unless the user specifies a FrameDurationLimits range in which the
> > algorithm should operate. To avoid slowing the frame rate down too much
> > compute a "default" frame rate at algorithm configuration time, were
> > the canonical 30, 15 and 10 FPS values are tested and selected in order
> > of preference.
>
> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> Does this algorithm also set / used to set the vblank control to
> the maximum allowed frame duration? Or does it dynamically adjust
> vblank when it needs more exposure "range" ?

It didn't, as it didn't calculate the frame duration.

The only IPA that uses AgcMeanLuminance and handles VBLANK is the
RkISP1 one, and as you can see in 09/10, the way it does that is to take
the exposure time calculated by AgcMeanLuminance::calculateNewEV()
and use it as the desired frame duration. This is wrong, as it breaks
the assumption that we have a safe margin. It only doesn't break because
the driver will clamp exposure once VBLANK is applied, but I presume this
also causes a misalignment between what the algorithm thinks the
exposure time is and what is actually applied to the device (and I
presume this is what Stefan reported as causing oscillations when I
naively proposed to ignore the margin and let the driver adjust
exposure).

This is the reason for 09/10: AgcMeanLuminance now calculates the
frame duration as well, using the exposure time it has computed plus
the margin, and sends it back to the IPA, which transforms it into a
proper vblank value to be eventually set on the device.
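Roughly, that conversion looks like this (illustrative sketch, not the exact code from the patch):

```cpp
#include <algorithm>

/*
 * Illustrative sketch: convert the frame duration computed by
 * AgcMeanLuminance into a VBLANK value by expressing the duration in
 * sensor lines first, then subtracting the active height.
 */
int vblankForDuration(double frameDurationUs, double lineDurationUs,
		      int activeHeight, int minVblank)
{
	int frameLengthLines = static_cast<int>(frameDurationUs / lineDurationUs);
	return std::max(frameLengthLines - activeHeight, minVblank);
}
```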

>
> I've been thinking about adding support to dynamically increase
> vblank when more exposure time is needed in the softISP AGC code,
> but it would probably be better to first convert that over to
> the shared AGC helpers. I know that Kieran is working on this.

Yes, I think that's a desirable goal.

My whole effort in this series has been about moving as much of the
logic as possible to the shared algorithm, so that all IPAs using it
get a consistent behaviour for free.

Also, patch 10/10 does for Mali what you want to do for Simple, so it
could serve as a reference.

>
> At least some UVC cameras have an auto-exposure priority (or is mode?)
> v4l2 control which can be set to "aperture priority" to make them
> automatically increase frameduration if necessary because of low
> light conditions. I think we want something similar for libcamera's
> AGC.

Yes, that's why we reworked the AGC-related controls to have a
separate ExposureTimeMode and AnalogueGainMode compared to the single
"big button" AeEnable.

To implement priorities, applications can fix one of the two and let
the algorithm adjust the other.

The exposure time will in any case always be clamped to the
user-selected FrameDurationLimits[], so that has to be properly
adjusted too.

Thanks
  j
Hans de Goede Oct. 28, 2025, 2:48 p.m. UTC | #4
Hi Jacopo,

On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> Hi Hans
> 
> On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
>> Hi Jacopo,
>>
>> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
>>> By definition, the exposure time that can be programmed on a sensor is
>>> limited by the frame duration and whenever the frame duration is
>>> updated, the exposure time limits should be updated as well.
>>>
>>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
>>> libipa, where the exposure limits are computed at configure() time and
>>> never updated.
>>>
>>> This has two implications:
>>> 1) The AGC algorithms will always operate with an exposure time bound to
>>>    the frame duration programmed at startup time
>>> 2) The Camera::controls() limits are not updated to reflect the new
>>>    constraints
>>>
>>> This series addresses issue 1) by changing the AgcMeanLuminance class
>>> and the associated helpers to regulate the exposure time base on the
>>> currently programmed max frame duration using a sensor-specific margin
>>> that is now reported by the CameraHelper class.
>>
>> I have a question about the above paragraph. The sensor-specific margin
>> is already known by the kernel's sensor driver and properly written
>> sensor drivers will update the exposure range reported by the query-ctrl
>> ioctl after changing the vblank control.
>>
>> So I think it would be better to rely on the kernel for this info
>> and refresh the cached exposure-control limits when changing the vblank
>> control rather then duplicating this info in the CameraHelper class ?
> 
> We discussed this with Niklas in v1
> https://patchwork.libcamera.org/patch/24695/#36372
> 
> My main concern (one more ioctl per frame apart) is that relying on
> the kernel to update the control limits will introduce a delay of a
> variable number of frames that correponds to the pipeline depth plus
> the control delays.
> 
> Whenever processing statistics for frame X and a new vblank value is
> calculated to accommodate the newly computed exposure value (within
> the FrameDurationLimits) we push the new value to DelayedControls
> which caches it in a queue. Whenever a new frame is started the
> queue of controls is popped and controls get applied to the device. They
> will take effect a few frames later (while the kernel interface
> updates the control limits immediately if I'm not mistaken).

Right, I forgot about the delayed-controls stuff; not being able
to get the new exposure limits from the kernel then makes sense.

Question: is there code somewhere to guarantee that when delayed
controls changes vblank it applies the vblank before the exposure?

IIRC libcamera uses set-ext-ctrls to set them all at once, but
most (all?) sensor drivers do not implement this, so the kernel
just applies them one-by-one internally. And if vblank gets increased
only after exposure, then for one frame exposure will still get
clamped based on the old vblank.

This problem actually sounds like a more likely cause for Stefan's
oscillation issue you mention below.
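A toy model of that ordering problem (illustrative only, not a real driver):

```cpp
#include <algorithm>

/*
 * Toy model of a sensor driver that clamps EXPOSURE against the current
 * frame length (illustrative only). Applying the controls one by one in
 * the wrong order clamps exposure against the stale, smaller frame length.
 */
struct Sensor {
	int height;  /* active lines */
	int vblank;  /* blanking lines */
	int margin;  /* readout margin, in lines */
	int exposure = 0;

	void setVblank(int lines) { vblank = lines; }

	void setExposure(int lines)
	{
		/* The driver clamps against the currently set frame length. */
		exposure = std::min(lines, height + vblank - margin);
	}
};
```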

>> One could even calculate the exposure-margin by calculating the configured
>> frame-time (in a unit of lines) and then substracting the max-exposure
>> reported by the sensor from the frame-time-in-lines.
> 
> Stefan suggested that too.
> 
> When you configure() you take the sensor's exposure time and the frame
> length and the diff between the two is the margin. However there is no
> indication anywhere that the sensor driver has to initialize controls
> in this way and considering how slightly different sensor drivers are
> I wouldn't be on them being consistent.

Hmm, I would say that we need to fix the sensor drivers then. But since
we need the CameraHelper for the analogue-gain bits, I guess that having
to add the exposure-vs-frame-length margin is not a big problem.

>>> As the AgcMeanLuminance algorithm tries to push the exposure time up
>>> to the maximum allowed frame duration, initializing the algorithm
>>> with the sensor's maximum capabilities results in a very low frame rate,
>>> unless the user specifies a FrameDurationLimits range in which the
>>> algorithm should operate. To avoid slowing the frame rate down too much
>>> compute a "default" frame rate at algorithm configuration time, were
>>> the canonical 30, 15 and 10 FPS values are tested and selected in order
>>> of preference.
>>
>> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
>> Does this algorithm also set / used to set the vblank control to
>> the maximum allowed frame duration? Or does it dynamically adjust
>> vblank when it needs more exposure "range" ?
> 
> It didn't, as it didn't calculate the frame duration.
> 
> The only IPA that use AgcMeanLuminance and handles VBLANK is the
> RkISP1 and as you can see in 09/10 the way it does that is to take
> the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> and use it as the desired frame duration. This is wrong as it breaks
> the assumption we have a safe margin. It only doesn't break as the
> driver will clamp exposure once VBLANK is applied, but I presume this
> also causes a mis-alignment between what the algorithm think the
> exposure time is and what is actually applied to the device (and I
> presume this is what Stefan has reported causing oscillations when I
> naively proposed to ignore the margin and let the driver adjust
> exposure).

See above: applying exposure before an increased vblank will lead
to a much bigger clamp, which is a more likely cause of oscillation.

> This is the reason for 09/10: the AgcMeanLuminance now calculates the
> duration as well using the (exposure time it has computed + margin) and
> sends it back to the IPA that transforms it in a proper vblank value
> to be eventually set on the device.
> 
>>
>> I've been thinking about adding support to dynamically increase
>> vblank when more exposure time is needed in the softISP AGC code,
>> but it would probably be better to first convert that over to
>> the shared AGC helpers. I know that Kieran is working on this.
> 
> Yes, I think that's a desirable goal.
> 
> My whole effort in this series has been around moving as much as
> possible of the logic to the shared algorithm so that all IPA using it
> will get a consistent behaviour from "free".

Ack.

Thank you for taking the time to answer all my questions.

Regards,

Hans
Naushir Patuck Oct. 28, 2025, 2:58 p.m. UTC | #5
Hi Hans,

On Tue, 28 Oct 2025 at 14:49, Hans de Goede <hansg@kernel.org> wrote:
>
> Hi Jacopo,
>
> On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > Hi Hans
> >
> > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> >> Hi Jacopo,
> >>
> >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> >>> By definition, the exposure time that can be programmed on a sensor is
> >>> limited by the frame duration and whenever the frame duration is
> >>> updated, the exposure time limits should be updated as well.
> >>>
> >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> >>> libipa, where the exposure limits are computed at configure() time and
> >>> never updated.
> >>>
> >>> This has two implications:
> >>> 1) The AGC algorithms will always operate with an exposure time bound to
> >>>    the frame duration programmed at startup time
> >>> 2) The Camera::controls() limits are not updated to reflect the new
> >>>    constraints
> >>>
> >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> >>> and the associated helpers to regulate the exposure time base on the
> >>> currently programmed max frame duration using a sensor-specific margin
> >>> that is now reported by the CameraHelper class.
> >>
> >> I have a question about the above paragraph. The sensor-specific margin
> >> is already known by the kernel's sensor driver and properly written
> >> sensor drivers will update the exposure range reported by the query-ctrl
> >> ioctl after changing the vblank control.
> >>
> >> So I think it would be better to rely on the kernel for this info
> >> and refresh the cached exposure-control limits when changing the vblank
> >> control rather than duplicating this info in the CameraHelper class?
> >
> > We discussed this with Niklas in v1
> > https://patchwork.libcamera.org/patch/24695/#36372
> >
> > My main concern (one more ioctl per frame apart) is that relying on
> > the kernel to update the control limits will introduce a delay of a
> > variable number of frames that corresponds to the pipeline depth plus
> > the control delays.
> >
> > Whenever processing statistics for frame X and a new vblank value is
> > calculated to accommodate the newly computed exposure value (within
> > the FrameDurationLimits) we push the new value to DelayedControls
> > which caches it in a queue. Whenever a new frame is started the
> > queue of controls is popped and controls get applied to the device. They
> > will take effect a few frames later (while the kernel interface
> > updates the control limits immediately if I'm not mistaken).
>
> Right I forgot about the delayed-controls stuff, not being able
> to get the new exposure limits from the kernel then makes sense.
>
> Question: is there code somewhere to guarantee that when delayed
> controls changes vblank it applies the vblank before the exposure?
>
> IIRC libcamera uses set-ext-ctrls to set them all at once, but
> most (all?) sensor drivers do not implement this, so the kernel
> just applies them 1-by-1 internally. And if vblank gets increased
> only after exposure then for one frame exposure will still get
> clamped based on the old vblank.
>
> This problem actually sounds like a more likely cause for Stefan's
> oscillation issue you mention below.

When constructing the DelayedControls object, we mark the VBLANK
control as "priority write". This ensures that VBLANK is written
ahead of any other controls (in particular EXPOSURE) for this very
reason.

Regards,
Naush


>
> >> One could even calculate the exposure-margin by calculating the configured
> >> frame-time (in a unit of lines) and then subtracting the max-exposure
> >> reported by the sensor from the frame-time-in-lines.
> >
> > Stefan suggested that too.
> >
> > When you configure() you take the sensor's exposure time and the frame
> > length and the diff between the two is the margin. However there is no
> > indication anywhere that the sensor driver has to initialize controls
> > in this way and considering how slightly different sensor drivers are
> > I wouldn't bet on them being consistent.
>
> Hmm, I would say that we need to fix the sensor drivers then. But since
> we need the camerahelper for the analog-gain bits I guess that having
> to add the exposure vs framelength margin is not a big problem.
>
> >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> >>> to the maximum allowed frame duration, initializing the algorithm
> >>> with the sensor's maximum capabilities results in a very low frame rate,
> >>> unless the user specifies a FrameDurationLimits range in which the
> >>> algorithm should operate. To avoid slowing the frame rate down too much
> >>> compute a "default" frame rate at algorithm configuration time, where
> >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> >>> of preference.
> >>
> >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> >> Does this algorithm also set / used to set the vblank control to
> >> the maximum allowed frame duration? Or does it dynamically adjust
> >> vblank when it needs more exposure "range" ?
> >
> > It didn't, as it didn't calculate the frame duration.
> >
> > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > RkISP1 and as you can see in 09/10 the way it does that is to take
> > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > and use it as the desired frame duration. This is wrong as it breaks
> > the assumption we have a safe margin. It only doesn't break as the
> > driver will clamp exposure once VBLANK is applied, but I presume this
> > also causes a mis-alignment between what the algorithm thinks the
> > exposure time is and what is actually applied to the device (and I
> > presume this is what Stefan has reported causing oscillations when I
> > naively proposed to ignore the margin and let the driver adjust
> > exposure).
>
> See above, applying exposure before an increased vblank will lead
> to a much bigger clamp which is a more likely cause of oscillation.
>
> > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > duration as well using the (exposure time it has computed + margin) and
> > sends it back to the IPA that transforms it in a proper vblank value
> > to be eventually set on the device.
> >
> >>
> >> I've been thinking about adding support to dynamically increase
> >> vblank when more exposure time is needed in the softISP AGC code,
> >> but it would probably be better to first convert that over to
> >> the shared AGC helpers. I know that Kieran is working on this.
> >
> > Yes, I think that's a desirable goal.
> >
> > My whole effort in this series has been around moving as much as
> > possible of the logic to the shared algorithm so that all IPAs using it
> > will get a consistent behaviour for "free".
>
> Ack.
>
> Thank you for taking the time to answer all my questions.
>
> Regards,
>
> Hans
>
>
Jacopo Mondi Oct. 28, 2025, 3:27 p.m. UTC | #6
Hi Hans

On Tue, Oct 28, 2025 at 03:48:58PM +0100, Hans de Goede wrote:
> Hi Jacopo,
>
> On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > Hi Hans
> >
> > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> >> Hi Jacopo,
> >>
> >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> >>> By definition, the exposure time that can be programmed on a sensor is
> >>> limited by the frame duration and whenever the frame duration is
> >>> updated, the exposure time limits should be updated as well.
> >>>
> >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> >>> libipa, where the exposure limits are computed at configure() time and
> >>> never updated.
> >>>
> >>> This has two implications:
> >>> 1) The AGC algorithms will always operate with an exposure time bound to
> >>>    the frame duration programmed at startup time
> >>> 2) The Camera::controls() limits are not updated to reflect the new
> >>>    constraints
> >>>
> >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> >>> and the associated helpers to regulate the exposure time based on the
> >>> currently programmed max frame duration using a sensor-specific margin
> >>> that is now reported by the CameraHelper class.
> >>
> >> I have a question about the above paragraph. The sensor-specific margin
> >> is already known by the kernel's sensor driver and properly written
> >> sensor drivers will update the exposure range reported by the query-ctrl
> >> ioctl after changing the vblank control.
> >>
> >> So I think it would be better to rely on the kernel for this info
> >> and refresh the cached exposure-control limits when changing the vblank
> >> control rather than duplicating this info in the CameraHelper class?
> >
> > We discussed this with Niklas in v1
> > https://patchwork.libcamera.org/patch/24695/#36372
> >
> > My main concern (one more ioctl per frame apart) is that relying on
> > the kernel to update the control limits will introduce a delay of a
> > variable number of frames that corresponds to the pipeline depth plus
> > the control delays.
> >
> > Whenever processing statistics for frame X and a new vblank value is
> > calculated to accommodate the newly computed exposure value (within
> > the FrameDurationLimits) we push the new value to DelayedControls
> > which caches it in a queue. Whenever a new frame is started the
> > queue of controls is popped and controls get applied to the device. They
> > will take effect a few frames later (while the kernel interface
> > updates the control limits immediately if I'm not mistaken).
>
> Right I forgot about the delayed-controls stuff, not being able
> to get the new exposure limits from the kernel then makes sense.
>
> Question: is there code somewhere to guarantee that when delayed
> controls changes vblank it applies the vblank before the exposure?

Yes, DelayedControls has a "priority" parameter to make sure VBLANK
is updated before EXPOSURE:
https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/delayed_controls.cpp#n50
https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/delayed_controls.cpp#n232

>
> IIRC libcamera uses set-ext-ctrls to set them all at once, but
> most (all?) sensor drivers do not implement this, so the kernel
> just applies them 1-by-1 internally. And if vblank gets increased
> only after exposure then for one frame exposure will still get
> clamped based on the old vblank.
>
> This problem actually sounds like a more likely cause for Stefan's
> oscillation issue you mention below.

You're right; thankfully RPi handled this when they first implemented
DelayedControls.

>
> >> One could even calculate the exposure-margin by calculating the configured
> >> frame-time (in a unit of lines) and then subtracting the max-exposure
> >> reported by the sensor from the frame-time-in-lines.
> >
> > Stefan suggested that too.
> >
> > When you configure() you take the sensor's exposure time and the frame
> > length and the diff between the two is the margin. However there is no
> > indication anywhere that the sensor driver has to initialize controls
> > in this way and considering how slightly different sensor drivers are
> > I wouldn't bet on them being consistent.
>
> Hmm, I would say that we need to fix the sensor drivers then. But since
> we need the camerahelper for the analog-gain bits I guess that having
> to add the exposure vs framelength margin is not a big problem.
>
> >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> >>> to the maximum allowed frame duration, initializing the algorithm
> >>> with the sensor's maximum capabilities results in a very low frame rate,
> >>> unless the user specifies a FrameDurationLimits range in which the
> >>> algorithm should operate. To avoid slowing the frame rate down too much
> >>> compute a "default" frame rate at algorithm configuration time, where
> >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> >>> of preference.
> >>
> >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> >> Does this algorithm also set / used to set the vblank control to
> >> the maximum allowed frame duration? Or does it dynamically adjust
> >> vblank when it needs more exposure "range" ?
> >
> > It didn't, as it didn't calculate the frame duration.
> >
> > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > RkISP1 and as you can see in 09/10 the way it does that is to take
> > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > and use it as the desired frame duration. This is wrong as it breaks
> > the assumption we have a safe margin. It only doesn't break as the
> > driver will clamp exposure once VBLANK is applied, but I presume this
> > also causes a mis-alignment between what the algorithm thinks the
> > exposure time is and what is actually applied to the device (and I
> > presume this is what Stefan has reported causing oscillations when I
> > naively proposed to ignore the margin and let the driver adjust
> > exposure).
>
> See above, applying exposure before an increased vblank will lead
> to a much bigger clamp which is a more likely cause of oscillation.
>
> > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > duration as well using the (exposure time it has computed + margin) and
> > sends it back to the IPA that transforms it in a proper vblank value
> > to be eventually set on the device.
> >
> >>
> >> I've been thinking about adding support to dynamically increase
> >> vblank when more exposure time is needed in the softISP AGC code,
> >> but it would probably be better to first convert that over to
> >> the shared AGC helpers. I know that Kieran is working on this.
> >
> > Yes, I think that's a desirable goal.
> >
> > My whole effort in this series has been around moving as much as
> > possible of the logic to the shared algorithm so that all IPAs using it
> > will get a consistent behaviour for "free".
>
> Ack.
>
> Thank you for taking the time to answer all my questions.
>

No worries at all, this is very helpful, as some of my understanding of
the issue had to be validated, and that's exactly what reviews are for ;)

Thanks
   j

> Regards,
>
> Hans
>
>
Niklas Söderlund Oct. 28, 2025, 3:38 p.m. UTC | #7
Hello,

On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> Hi Jacopo,
> 
> On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > Hi Hans
> > 
> > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> >> Hi Jacopo,
> >>
> >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> >>> By definition, the exposure time that can be programmed on a sensor is
> >>> limited by the frame duration and whenever the frame duration is
> >>> updated, the exposure time limits should be updated as well.
> >>>
> >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> >>> libipa, where the exposure limits are computed at configure() time and
> >>> never updated.
> >>>
> >>> This has two implications:
> >>> 1) The AGC algorithms will always operate with an exposure time bound to
> >>>    the frame duration programmed at startup time
> >>> 2) The Camera::controls() limits are not updated to reflect the new
> >>>    constraints
> >>>
> >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> >>> and the associated helpers to regulate the exposure time based on the
> >>> currently programmed max frame duration using a sensor-specific margin
> >>> that is now reported by the CameraHelper class.
> >>
> >> I have a question about the above paragraph. The sensor-specific margin
> >> is already known by the kernel's sensor driver and properly written
> >> sensor drivers will update the exposure range reported by the query-ctrl
> >> ioctl after changing the vblank control.
> >>
> >> So I think it would be better to rely on the kernel for this info
> >> and refresh the cached exposure-control limits when changing the vblank
> >> control rather than duplicating this info in the CameraHelper class?
> > 
> > We discussed this with Niklas in v1
> > https://patchwork.libcamera.org/patch/24695/#36372
> > 
> > My main concern (one more ioctl per frame apart) is that relying on
> > the kernel to update the control limits will introduce a delay of a
> > variable number of frames that corresponds to the pipeline depth plus
> > the control delays.
> > 
> > Whenever processing statistics for frame X and a new vblank value is
> > calculated to accommodate the newly computed exposure value (within
> > the FrameDurationLimits) we push the new value to DelayedControls
> > which caches it in a queue. Whenever a new frame is started the
> > queue of controls is popped and controls get applied to the device. They
> > will take effect a few frames later (while the kernel interface
> > updates the control limits immediately if I'm not mistaken).
> 
> Right I forgot about the delayed-controls stuff, not being able
> to get the new exposure limits from the kernel then makes sense.

I too understand why it's easier not to consult the kernel for the new 
limits. I have no strong feeling about whether the correct solution is 
to do this limit calculation in the kernel or in user-space. What I do 
feel somewhat strongly about is that if we do the limit calculation in 
user-space we should never consult the kernel for the limits.

This just mixes two ways of doing the same thing and is very confusing 
when reading the code. So if doing this in user-space is the way forward 
I think we should drop the reading of min and max of V4L2_CID_EXPOSURE 
in IPARkISP1::configure() and instead use this method there too.

I also feel, a little less strongly as it's a big work item, that it's 
really wrong to duplicate the calculation both in kernel and user-space.

As this IMHO exposes a weakness in the V4L2 API we should think about 
how we want the V4L2_CID_EXPOSURE control to report min/max values. Do 
we really want it to report its min/max values based on the current 
V4L2_CID_VBLANK setting, when it's too expensive to query them with an 
IOCTL for each frame?

Can we simplify the control to always report the min and max values the 
sensor can do, and just reject, at set time, any value that does not 
conform to the current VBLANK setting?

If so, we can stop doing stuff in sensor drivers such as in IMX219 to 
allow for this control dependency in each driver.

  static int imx219_set_ctrl(struct v4l2_ctrl *ctrl)
  {
    ...

    if (ctrl->id == V4L2_CID_VBLANK) {
    	int exposure_max, exposure_def;

    	/* Update max exposure while meeting expected vblanking */
    	exposure_max = format->height + ctrl->val - 4;
    	exposure_def = (exposure_max < IMX219_EXPOSURE_DEFAULT) ?
    		exposure_max : IMX219_EXPOSURE_DEFAULT;
    	__v4l2_ctrl_modify_range(imx219->exposure,
    				 imx219->exposure->minimum,
    				 exposure_max, imx219->exposure->step,
    				 exposure_def);
    }

    ...
  }

>
> Question: is there code somewhere to guarantee that when delayed
> controls changes vblank it applies the vblank before the exposure?
> 
> IIRC libcamera uses set-ext-ctrls to set them all at once, but
> most (all?) sensor drivers do not implement this, so the kernel
> just applies them 1-by-1 internally. And if vblank gets increased
> only after exposure then for one frame exposure will still get
> clamped based on the old vblank.
> 
> This problem actually sounds like a more likely cause for Stefan's
> oscillation issue you mention below.
> 
> >> One could even calculate the exposure-margin by calculating the configured
> > >> frame-time (in a unit of lines) and then subtracting the max-exposure
> >> reported by the sensor from the frame-time-in-lines.
> > 
> > Stefan suggested that too.
> > 
> > When you configure() you take the sensor's exposure time and the frame
> > length and the diff between the two is the margin. However there is no
> > indication anywhere that the sensor driver has to initialize controls
> > in this way and considering how slightly different sensor drivers are
> > I wouldn't bet on them being consistent.
> 
> Hmm, I would say that we need to fix the sensor drivers then. But since
> we need the camerahelper for the analog-gain bits I guess that having
> to add the exposure vs framelength margin is not a big problem.
> 
> >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> >>> to the maximum allowed frame duration, initializing the algorithm
> >>> with the sensor's maximum capabilities results in a very low frame rate,
> >>> unless the user specifies a FrameDurationLimits range in which the
> >>> algorithm should operate. To avoid slowing the frame rate down too much
> > >>> compute a "default" frame rate at algorithm configuration time, where
> >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> >>> of preference.
> >>
> >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> >> Does this algorithm also set / used to set the vblank control to
> >> the maximum allowed frame duration? Or does it dynamically adjust
> >> vblank when it needs more exposure "range" ?
> > 
> > It didn't, as it didn't calculate the frame duration.
> > 
> > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > RkISP1 and as you can see in 09/10 the way it does that is to take
> > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > and use it as the desired frame duration. This is wrong as it breaks
> > the assumption we have a safe margin. It only doesn't break as the
> > driver will clamp exposure once VBLANK is applied, but I presume this
> > also causes a mis-alignment between what the algorithm thinks the
> > exposure time is and what is actually applied to the device (and I
> > presume this is what Stefan has reported causing oscillations when I
> > naively proposed to ignore the margin and let the driver adjust
> > exposure).
> 
> See above, applying exposure before an increased vblank will lead
> to a much bigger clamp which is a more likely cause of oscillation.
> 
> > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > duration as well using the (exposure time it has computed + margin) and
> > sends it back to the IPA that transforms it in a proper vblank value
> > to be eventually set on the device.
> > 
> >>
> >> I've been thinking about adding support to dynamically increase
> >> vblank when more exposure time is needed in the softISP AGC code,
> >> but it would probably be better to first convert that over to
> >> the shared AGC helpers. I know that Kieran is working on this.
> > 
> > Yes, I think that's a desirable goal.
> > 
> > My whole effort in this series has been around moving as much as
> > possible of the logic to the shared algorithm so that all IPAs using it
> > will get a consistent behaviour for "free".
> 
> Ack.
> 
> Thank you for taking the time to answer all my questions.
> 
> Regards,
> 
> Hans
> 
>
Jacopo Mondi Nov. 2, 2025, 1:36 p.m. UTC | #8
Hi Niklas,

On Tue, Oct 28, 2025 at 04:38:42PM +0100, Niklas Söderlund wrote:
> Hello,
>
> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> > Hi Jacopo,
> >
> > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > Hi Hans
> > >
> > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > >> Hi Jacopo,
> > >>
> > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > >>> By definition, the exposure time that can be programmed on a sensor is
> > >>> limited by the frame duration and whenever the frame duration is
> > >>> updated, the exposure time limits should be updated as well.
> > >>>
> > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > >>> libipa, where the exposure limits are computed at configure() time and
> > >>> never updated.
> > >>>
> > >>> This has two implications:
> > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > >>>    the frame duration programmed at startup time
> > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > >>>    constraints
> > >>>
> > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > >>> and the associated helpers to regulate the exposure time based on the
> > >>> currently programmed max frame duration using a sensor-specific margin
> > >>> that is now reported by the CameraHelper class.
> > >>
> > >> I have a question about the above paragraph. The sensor-specific margin
> > >> is already known by the kernel's sensor driver and properly written
> > >> sensor drivers will update the exposure range reported by the query-ctrl
> > >> ioctl after changing the vblank control.
> > >>
> > >> So I think it would be better to rely on the kernel for this info
> > >> and refresh the cached exposure-control limits when changing the vblank
> > >> control rather than duplicating this info in the CameraHelper class?
> > >
> > > We discussed this with Niklas in v1
> > > https://patchwork.libcamera.org/patch/24695/#36372
> > >
> > > My main concern (one more ioctl per frame apart) is that relying on
> > > the kernel to update the control limits will introduce a delay of a
> > > variable number of frames that corresponds to the pipeline depth plus
> > > the control delays.
> > >
> > > Whenever processing statistics for frame X and a new vblank value is
> > > calculated to accommodate the newly computed exposure value (within
> > > the FrameDurationLimits) we push the new value to DelayedControls
> > > which caches it in a queue. Whenever a new frame is started the
> > > queue of controls is popped and controls get applied to the device. They
> > > will take effect a few frames later (while the kernel interface
> > > updates the control limits immediately if I'm not mistaken).
> >
> > Right I forgot about the delayed-controls stuff, not being able
> > to get the new exposure limits from the kernel then makes sense.
>
> I too understand why it's easier not to consult the kernel for the new
> limits. I have no strong feeling about whether the correct solution is
> to do this limit calculation in the kernel or in user-space. What I do
> feel somewhat strongly about is that if we do the limit calculation in
> user-space we should never consult the kernel for the limits.

Currently the EXPOSURE control limits from the kernel interface are
only queried when initializing the IPA. To be honest this doesn't
bother me that much, but you already know that :)

But I agree that if we move the max exposure computation to be based
on the frame duration, we could even ignore the EXPOSURE limits
completely, provided we get a way to know the minimum exposure value.
I checked a few drivers and this doesn't always correspond to the
exposure margin which I already added to the CameraSensorHelper.

I could add a CameraSensorHelper::exposureMin(), but to be honest this
seems like quite some churn when we could easily get the min exposure
from the v4l2 control. Do you have alternative proposals?

I think I could improve the setLimit() interface proposed here, as
maxExposure is only used to signal when a fixed exposure should be
used.

I could pass the min exposure to Agc::configure() and only update the
frame duration in setLimit() with an optional "fixed exposure"
parameter in case a fixed manual exposure is desired...

>
> This just mixes two ways of doing the same thing and is very confusing
> when reading the code. So if doing this in user-space is the way forward
> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> in IPARkISP1::configure() and instead use this method there too.
>
> I also feel, a little less strongly as it's a big work item, that it's
> really wrong to duplicate the calculation both in kernel and user-space.

With that I don't agree, for the reasons already explained. This is not
only about the kernel interface: it's the IPA's internal calculation
that needs to know the current exposure limit (in real time) to
regulate itself and to know how to split the desired exposure between
shutter time and gains. It could even be considered an internal IPA
parameter.

>
> As this IMHO exposes a weakness in the V4L2 API we should think about
> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> we really want it to report its min/max values based on the current
> V4L2_CID_VBLANK setting, when it's too expensive to query them with an
> IOCTL for each frame?
>
> Can we simplify the control to always report the min and max values the
> sensor can do, and just reject, at set time, any value that does not
> conform to the current VBLANK setting?

Won't the v4l2 control framework need a min and a max to adjust the
value passed to the driver?

>
> If so, we can stop doing stuff in sensor drivers such as in IMX219 to
> allow for this control dependency in each driver.
>
>   static int imx219_set_ctrl(struct v4l2_ctrl *ctrl)
>   {
>     ...
>
>     if (ctrl->id == V4L2_CID_VBLANK) {
>     	int exposure_max, exposure_def;
>
>     	/* Update max exposure while meeting expected vblanking */
>     	exposure_max = format->height + ctrl->val - 4;
>     	exposure_def = (exposure_max < IMX219_EXPOSURE_DEFAULT) ?
>     		exposure_max : IMX219_EXPOSURE_DEFAULT;
>     	__v4l2_ctrl_modify_range(imx219->exposure,
>     				 imx219->exposure->minimum,
>     				 exposure_max, imx219->exposure->step,
>     				 exposure_def);
>     }
>
>     ...
>   }
>
> >
> > Question: is there code somewhere to guarantee that when delayed
> > controls changes vblank it applies the vblank before the exposure?
> >
> > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > most (all?) sensor drivers do not implement this, so the kernel
> > just applies them 1-by-1 internally. And if vblank gets increased
> > only after exposure then for one frame exposure will still get
> > clamped based on the old vblank.
> >
> > This problem actually sounds like a more likely cause for Stefan's
> > oscillation issue you mention below.
> >
> > >> One could even calculate the exposure-margin by calculating the configured
> > > >> frame-time (in a unit of lines) and then subtracting the max-exposure
> > >> reported by the sensor from the frame-time-in-lines.
> > >
> > > Stefan suggested that too.
> > >
> > > When you configure() you take the sensor's exposure time and the frame
> > > length and the diff between the two is the margin. However there is no
> > > indication anywhere that the sensor driver has to initialize controls
> > > in this way and considering how slightly different sensor drivers are
> > > I wouldn't bet on them being consistent.
> >
> > Hmm, I would say that we need to fix the sensor drivers then. But since
> > we need the camerahelper for the analog-gain bits I guess that having
> > to add the exposure vs framelength margin is not a big problem.
> >
> > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > >>> to the maximum allowed frame duration, initializing the algorithm
> > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > >>> unless the user specifies a FrameDurationLimits range in which the
> > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > > >>> compute a "default" frame rate at algorithm configuration time, where
> > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > >>> of preference.
> > >>
> > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > >> Does this algorithm also set / used to set the vblank control to
> > >> the maximum allowed frame duration? Or does it dynamically adjust
> > >> vblank when it needs more exposure "range" ?
> > >
> > > It didn't, as it didn't calculate the frame duration.
> > >
> > > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > and use it as the desired frame duration. This is wrong as it breaks
> > > the assumption we have a safe margin. It only doesn't break as the
> > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > also causes a mis-alignment between what the algorithm thinks the
> > > exposure time is and what is actually applied to the device (and I
> > > presume this is what Stefan has reported causing oscillations when I
> > > naively proposed to ignore the margin and let the driver adjust
> > > exposure).
> >
> > See above, applying exposure before an increased vblank will lead
> > to a much bigger clamp which is a more likely cause of oscillation.
> >
> > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > duration as well using the (exposure time it has computed + margin) and
> > > sends it back to the IPA that transforms it in a proper vblank value
> > > to be eventually set on the device.
> > >
> > >>
> > >> I've been thinking about adding support to dynamically increase
> > >> vblank when more exposure time is needed in the softISP AGC code,
> > >> but it would probably be better to first convert that over to
> > >> the shared AGC helpers. I know that Kieran is working on this.
> > >
> > > Yes, I think that's a desirable goal.
> > >
> > > My whole effort in this series has been around moving as much as
> > > possible of the logic to the shared algorithm so that all IPA using it
> > > will get a consistent behaviour from "free".
> >
> > Ack.
> >
> > Thank you for taking the time to answer all my questions.
> >
> > Regards,
> >
> > Hans
> >
> >
>
> --
> Kind Regards,
> Niklas Söderlund
Niklas Söderlund Nov. 2, 2025, 4:04 p.m. UTC | #9
Hi Jacopo,

On 2025-11-02 14:36:36 +0100, Jacopo Mondi wrote:
> Hi Niklas,
> 
> On Tue, Oct 28, 2025 at 04:38:42PM +0100, Niklas Söderlund wrote:
> > Hello,
> >
> > On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> > > Hi Jacopo,
> > >
> > > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > > Hi Hans
> > > >
> > > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > > >> Hi Jacopo,
> > > >>
> > > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > > >>> By definition, the exposure time that can be programmed on a sensor is
> > > >>> limited by the frame duration and whenever the frame duration is
> > > >>> updated, the exposure time limits should be updated as well.
> > > >>>
> > > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > > >>> libipa, where the exposure limits are computed at configure() time and
> > > >>> never updated.
> > > >>>
> > > >>> This has two implications:
> > > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > > >>>    the frame duration programmed at startup time
> > > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > > >>>    constraints
> > > >>>
> > > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > > >>> and the associated helpers to regulate the exposure time base on the
> > > >>> currently programmed max frame duration using a sensor-specific margin
> > > >>> that is now reported by the CameraHelper class.
> > > >>
> > > >> I have a question about the above paragraph. The sensor-specific margin
> > > >> is already known by the kernel's sensor driver and properly written
> > > >> sensor drivers will update the exposure range reported by the query-ctrl
> > > >> ioctl after changing the vblank control.
> > > >>
> > > >> So I think it would be better to rely on the kernel for this info
> > > >> and refresh the cached exposure-control limits when changing the vblank
> > > >> control rather then duplicating this info in the CameraHelper class ?
> > > >
> > > > We discussed this with Niklas in v1
> > > > https://patchwork.libcamera.org/patch/24695/#36372
> > > >
> > > > My main concern (one more ioctl per frame apart) is that relying on
> > > > the kernel to update the control limits will introduce a delay of a
> > > > variable number of frames that correponds to the pipeline depth plus
> > > > the control delays.
> > > >
> > > > Whenever processing statistics for frame X and a new vblank value is
> > > > calculated to accommodate the newly computed exposure value (within
> > > > the FrameDurationLimits) we push the new value to DelayedControls
> > > > which caches it in a queue. Whenever a new frame is started the
> > > > queue of controls is popped and controls get applied to the device. They
> > > > will take effect a few frames later (while the kernel interface
> > > > updates the control limits immediately if I'm not mistaken).
> > >
> > > Right I forgot about the delayed-controls stuff, not being able
> > > to get the new exposure limits from the kernel then makes sense.
> >
> > I too understand why it's easier to not consult the kernel for the new
> > limits. I have no real strong feeling that the correct solution is to
> > have this limit calculation done in the kernel or user-space. What I do
> > feel somewhat strongly about is if we do the limit calculation in
> > user-space we should never consult the kernel for the limits.
> 
> Currently the EXPOSURE control limits from the kernel interface are
> only queried when initializing the IPA. To be honest this doesn't
> bother me that much, but you already know that :)
> 
> But I agree that if we move the max exposure computation to be based
> on the frame duration, we could even ignore the EXPOSURE limits
> completely, provided we get a way to know the minimum exposure value.
> I checked a few drivers and this doesn't always correspond to the
> exposure margin which I already added to the CameraSensorHelper.
> 
> I could add a CameraSensorHelper::exposureMin(), to be honest this
> seems quite some churn when we could easily get the min exposure from
> the v4l2-control. Do you have alternative proposals ?

No, and I think that highlights my concern.

If we can't be sure we can calculate the exposure limits 100% in
user-space for all sensors, with their special cases etc., based on the
VBLANK setting, and need to check the kernel sometimes, then I think
it's dangerous to do so. What if the next issue is that the min exposure
time also depends on VBLANK for another sensor, for example?

But if we can be sure we can do the limit calculation in user-space, I
think we should. But then we should do so all the time, to prepare for
deprecating the V4L2_CID_EXPOSURE control.

I will not rage on about this anymore, or I will try not to ;-) When I
see the same thing done twice in two different locations, the results
not correlated, and even worse IMHO sometimes done using method A and
sometimes using method B, my head says danger, danger, this will come
back and bite me!

The current code does not do the right thing either, and is currently
biting me. I agree that something that will potentially bite me in the
future but gives better images now is better than fighting the V4L2 API
for years before this can be fixed properly.
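For reference, the kind of limit calculation being debated can be
sketched as follows. This is a rough illustration only, not the actual
libipa API: the function name maxExposureFromDuration(), the use of
line-count units and the margin handling are all assumptions for the
sake of the example.

```cpp
#include <algorithm>
#include <cstdint>

/*
 * Illustrative sketch: derive the maximum exposure (in lines) from the
 * currently programmed frame length, using a sensor-specific margin as
 * discussed in this series. Names and units are hypothetical.
 */
uint32_t maxExposureFromDuration(uint32_t frameLengthLines,
				 uint32_t marginLines,
				 uint32_t sensorMaxLines)
{
	/* Exposure may never exceed the frame length minus the margin. */
	uint32_t limit = frameLengthLines > marginLines
		       ? frameLengthLines - marginLines
		       : 0;

	/* Nor may it exceed what the sensor's register range allows. */
	return std::min(limit, sensorMaxLines);
}
```

The open question in the thread is precisely whether such a user-space
formula can cover all sensors, or whether the kernel-reported limits
must remain the source of truth.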

> 
> I think I could improve the here proposed setLimit() interface, as
> maxExposure is only used to signal when a fixed exposure should be
> used.
> 
> I could pass the min exposure to Agc::configure() and only update the
> frame duration in setLimit() with an optional "fixed exposure"
> parameter in case a fixed manual exposure is desired...
> 
> >
> > This just mixes two ways of doing the same thing and is very confusing
> > when reading the code. So if doing this in user-space is the way forward
> > I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> > in IPARkISP1::configure() and instead use this method there too.
> >
> > I also feel, a little less strongly as it's a big work item, that it's
> > really wrong to duplicate the calculation both in kernel and user-space.
> 
> That I don't agree for the reasons explained already. This is not only
> about the kernel interface, it's the IPA internal calculation that
> need to know what the current exposure limit (in real-time) to
> regulate itself and know how to split the desired exposure between
> shutter time and gains. It could even be considered an internal IPA
> parameter..
> 
> >
> > As this IMHO exposes a weakness in the V4L2 API we should think about
> > how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> > we really want it to report its min/max values based on the current
> > V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> > for each frame.
> >
> > Can we simplify the control to report the min and max values the sensor
> > can do provided the VBLANK setting allows it, and just reject any value
> > at set time that do not conform to the current VBLANK setting?
> 
> Won't the v4l2 control framework would need a min and a max to adjust the
> value passed to the driver ?
> 
> >
> > If so we can stop doing  stuff in sensor drivers such as in IMX219 to
> > allow for this control dependency in each driver.
> >
> >   static int imx219_set_ctrl(struct v4l2_ctrl *ctrl)
> >   {
> >     ...
> >
> >     if (ctrl->id == V4L2_CID_VBLANK) {
> >     	int exposure_max, exposure_def;
> >
> >     	/* Update max exposure while meeting expected vblanking */
> >     	exposure_max = format->height + ctrl->val - 4;
> >     	exposure_def = (exposure_max < IMX219_EXPOSURE_DEFAULT) ?
> >     		exposure_max : IMX219_EXPOSURE_DEFAULT;
> >     	__v4l2_ctrl_modify_range(imx219->exposure,
> >     				 imx219->exposure->minimum,
> >     				 exposure_max, imx219->exposure->step,
> >     				 exposure_def);
> >     }
> >
> >     ...
> >   }
> >
> > >
> > > Question: is there code somewhere to guarantee that when delayed
> > > controls changes vblank it applies the vblank before the exposure?
> > >
> > > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > > most (all?) sensor drivers do not implement this, so the kernel
> > > just applies them 1-by-1 internally. And if vblank gets increased
> > > only after exposure then for one frame exposure will still get
> > > clamped based on the old vblank.
> > >
> > > This problem actually sounds like a more likely cause for Stefan's
> > > oscillation issue you mention below.
> > >
> > > >> One could even calculate the exposure-margin by calculating the configured
> > > >> frame-time (in a unit of lines) and then substracting the max-exposure
> > > >> reported by the sensor from the frame-time-in-lines.
> > > >
> > > > Stefan suggested that too.
> > > >
> > > > When you configure() you take the sensor's exposure time and the frame
> > > > length and the diff between the two is the margin. However there is no
> > > > indication anywhere that the sensor driver has to initialize controls
> > > > in this way and considering how slightly different sensor drivers are
> > > > I wouldn't be on them being consistent.
> > >
> > > Hmm, I would say that we need to fix the sensor drivers then. But since
> > > we need the camerahelper for the analog-gain bits I guess that having
> > > to add the exposure vs framelength margin is not a big problem.
> > >
> > > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > > >>> to the maximum allowed frame duration, initializing the algorithm
> > > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > > >>> unless the user specifies a FrameDurationLimits range in which the
> > > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > > >>> compute a "default" frame rate at algorithm configuration time, were
> > > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > > >>> of preference.
> > > >>
> > > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > > >> Does this algorithm also set / used to set the vblank control to
> > > >> the maximum allowed frame duration? Or does it dynamically adjust
> > > >> vblank when it needs more exposure "range" ?
> > > >
> > > > It didn't, as it didn't calculate the frame duration.
> > > >
> > > > The only IPA that use AgcMeanLuminance and handles VBLANK is the
> > > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > > and use it as the desired frame duration. This is wrong as it breaks
> > > > the assumption we have a safe margin. It only doesn't break as the
> > > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > > also causes a mis-alignment between what the algorithm think the
> > > > exposure time is and what is actually applied to the device (and I
> > > > presume this is what Stefan has reported causing oscillations when I
> > > > naively proposed to ignore the margin and let the driver adjust
> > > > exposure).
> > >
> > > See above, applying exposure before an increased vblank will lead
> > > to a much bigger clamp which is a more likely cause of oscillation.
> > >
> > > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > > duration as well using the (exposure time it has computed + margin) and
> > > > sends it back to the IPA that transforms it in a proper vblank value
> > > > to be eventually set on the device.
> > > >
> > > >>
> > > >> I've been thinking about adding support to dynamically increase
> > > >> vblank when more exposure time is needed in the softISP AGC code,
> > > >> but it would probably be better to first convert that over to
> > > >> the shared AGC helpers. I know that Kieran is working on this.
> > > >
> > > > Yes, I think that's a desirable goal.
> > > >
> > > > My whole effort in this series has been around moving as much as
> > > > possible of the logic to the shared algorithm so that all IPA using it
> > > > will get a consistent behaviour from "free".
> > >
> > > Ack.
> > >
> > > Thank you for taking the time to answer all my questions.
> > >
> > > Regards,
> > >
> > > Hans
> > >
> > >
> >
> > --
> > Kind Regards,
> > Niklas Söderlund
Niklas Söderlund Nov. 2, 2025, 4:42 p.m. UTC | #10
Hello (again) Jacopo,

On 2025-11-02 17:04:22 +0100, Niklas Söderlund wrote:
> Hi Jacopo,
> 
> On 2025-11-02 14:36:36 +0100, Jacopo Mondi wrote:
> > Hi Niklas,
> > 
> > On Tue, Oct 28, 2025 at 04:38:42PM +0100, Niklas Söderlund wrote:
> > > Hello,
> > >
> > > On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> > > > Hi Jacopo,
> > > >
> > > > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > > > Hi Hans
> > > > >
> > > > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > > > >> Hi Jacopo,
> > > > >>
> > > > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > > > >>> By definition, the exposure time that can be programmed on a sensor is
> > > > >>> limited by the frame duration and whenever the frame duration is
> > > > >>> updated, the exposure time limits should be updated as well.
> > > > >>>
> > > > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > > > >>> libipa, where the exposure limits are computed at configure() time and
> > > > >>> never updated.
> > > > >>>
> > > > >>> This has two implications:
> > > > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > > > >>>    the frame duration programmed at startup time
> > > > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > > > >>>    constraints
> > > > >>>
> > > > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > > > >>> and the associated helpers to regulate the exposure time base on the
> > > > >>> currently programmed max frame duration using a sensor-specific margin
> > > > >>> that is now reported by the CameraHelper class.
> > > > >>
> > > > >> I have a question about the above paragraph. The sensor-specific margin
> > > > >> is already known by the kernel's sensor driver and properly written
> > > > >> sensor drivers will update the exposure range reported by the query-ctrl
> > > > >> ioctl after changing the vblank control.
> > > > >>
> > > > >> So I think it would be better to rely on the kernel for this info
> > > > >> and refresh the cached exposure-control limits when changing the vblank
> > > > >> control rather then duplicating this info in the CameraHelper class ?
> > > > >
> > > > > We discussed this with Niklas in v1
> > > > > https://patchwork.libcamera.org/patch/24695/#36372
> > > > >
> > > > > My main concern (one more ioctl per frame apart) is that relying on
> > > > > the kernel to update the control limits will introduce a delay of a
> > > > > variable number of frames that correponds to the pipeline depth plus
> > > > > the control delays.
> > > > >
> > > > > Whenever processing statistics for frame X and a new vblank value is
> > > > > calculated to accommodate the newly computed exposure value (within
> > > > > the FrameDurationLimits) we push the new value to DelayedControls
> > > > > which caches it in a queue. Whenever a new frame is started the
> > > > > queue of controls is popped and controls get applied to the device. They
> > > > > will take effect a few frames later (while the kernel interface
> > > > > updates the control limits immediately if I'm not mistaken).
> > > >
> > > > Right I forgot about the delayed-controls stuff, not being able
> > > > to get the new exposure limits from the kernel then makes sense.
> > >
> > > I too understand why it's easier to not consult the kernel for the new
> > > limits. I have no real strong feeling that the correct solution is to
> > > have this limit calculation done in the kernel or user-space. What I do
> > > feel somewhat strongly about is if we do the limit calculation in
> > > user-space we should never consult the kernel for the limits.
> > 
> > Currently the EXPOSURE control limits from the kernel interface are
> > only queried when initializing the IPA. To be honest this doesn't
> > bother me that much, but you already know that :)
> > 
> > But I agree that if we move the max exposure computation to be based
> > on the frame duration, we could even ignore the EXPOSURE limits
> > completely, provided we get a way to know the minimum exposure value.
> > I checked a few drivers and this doesn't always correspond to the
> > exposure margin which I already added to the CameraSensorHelper.
> > 
> > I could add a CameraSensorHelper::exposureMin(), to be honest this
> > seems quite some churn when we could easily get the min exposure from
> > the v4l2-control. Do you have alternative proposals ?
> 
> No, and I think that highlights my concern.
> 
> If we can't be sure we can to 100% calculate the exposure limits in 
> user-space for all sensors, with their special cases etc, based on the 
> VBLANK setting and need to check the kernel sometimes. Then I think it's 
> dangerous to do so. What if the next issue us that the min exposure time 
> also depends on VBLANK for another sensor for example?
> 
> But if we can be sure we can do the limit calculation in user-space I 
> think we should. But then we should do so all the time to prepare for 
> deprecating the V4L2_CID_EXPOSURE control.
> 
> I will not rage on about this anymore, or I will try not too ;-) When I 
> see the same thing done twice in two different locations, the results 
> not correlated, and even worse IMHO sometimes done using method A and 
> sometimes using method B my head says danger, danger, this will come 
> back and bite me!
> 
> The current code do not do the right thing either, and is currently 
> biting me. I agree too something that will potentially bite me in the 
> future and gives better images now is better then fighting V4L2 API for 
> years before this can be fixed properly.

I said I would try ;-)

One (half-ugly) idea, if we do want to use the kernel to calculate the
values, would be to do a "range finding run" at IPA init time in the
pipeline handler. The pipeline would run over the sensor's VBLANK
setting range, recording the min and max EXPOSURE values, and build a
lookup table it then gives to the IPA.

The IPA then uses this table internally to map a VBLANK value to an
EXPOSURE min/max limit. This could likely be done in a sensor helper
class.
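A sketch of what such a range-finding run could look like, purely for
illustration: queryExposureRange() here is a stand-in for setting VBLANK
and querying the EXPOSURE control limits back from the kernel, and the
"height + vblank - 4" rule mimics an IMX219-style driver rather than any
real probing code.

```cpp
#include <cstdint>
#include <map>
#include <utility>

using ExposureRange = std::pair<uint32_t, uint32_t>;

/*
 * Stand-in for the kernel query: mimics a driver that reports
 * max exposure = height + vblank - 4. A real implementation would
 * apply VBLANK and query the EXPOSURE control limits instead.
 */
static ExposureRange queryExposureRange(uint32_t height, uint32_t vblank)
{
	return { 4, height + vblank - 4 };
}

/*
 * Sketch of the proposed init-time "range finding run": walk the
 * sensor's VBLANK range once, recording the exposure limits reported
 * for each value, so the IPA can later map a VBLANK value to exposure
 * limits without per-frame ioctls.
 */
std::map<uint32_t, ExposureRange>
buildExposureTable(uint32_t height, uint32_t vblankMin,
		   uint32_t vblankMax, uint32_t step)
{
	std::map<uint32_t, ExposureRange> table;

	for (uint32_t v = vblankMin; v <= vblankMax; v += step)
		table[v] = queryExposureRange(height, v);

	return table;
}
```

The obvious cost is the extra time spent at init probing the whole
VBLANK range, which a coarse step and interpolation could mitigate.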

> 
> > 
> > I think I could improve the here proposed setLimit() interface, as
> > maxExposure is only used to signal when a fixed exposure should be
> > used.
> > 
> > I could pass the min exposure to Agc::configure() and only update the
> > frame duration in setLimit() with an optional "fixed exposure"
> > parameter in case a fixed manual exposure is desired...
> > 
> > >
> > > This just mixes two ways of doing the same thing and is very confusing
> > > when reading the code. So if doing this in user-space is the way forward
> > > I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> > > in IPARkISP1::configure() and instead use this method there too.
> > >
> > > I also feel, a little less strongly as it's a big work item, that it's
> > > really wrong to duplicate the calculation both in kernel and user-space.
> > 
> > That I don't agree for the reasons explained already. This is not only
> > about the kernel interface, it's the IPA internal calculation that
> > need to know what the current exposure limit (in real-time) to
> > regulate itself and know how to split the desired exposure between
> > shutter time and gains. It could even be considered an internal IPA
> > parameter..
> > 
> > >
> > > As this IMHO exposes a weakness in the V4L2 API we should think about
> > > how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> > > we really want it to report its min/max values based on the current
> > > V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> > > for each frame.
> > >
> > > Can we simplify the control to report the min and max values the sensor
> > > can do provided the VBLANK setting allows it, and just reject any value
> > > at set time that do not conform to the current VBLANK setting?
> > 
> > Won't the v4l2 control framework would need a min and a max to adjust the
> > value passed to the driver ?
> > 
> > >
> > > If so we can stop doing  stuff in sensor drivers such as in IMX219 to
> > > allow for this control dependency in each driver.
> > >
> > >   static int imx219_set_ctrl(struct v4l2_ctrl *ctrl)
> > >   {
> > >     ...
> > >
> > >     if (ctrl->id == V4L2_CID_VBLANK) {
> > >     	int exposure_max, exposure_def;
> > >
> > >     	/* Update max exposure while meeting expected vblanking */
> > >     	exposure_max = format->height + ctrl->val - 4;
> > >     	exposure_def = (exposure_max < IMX219_EXPOSURE_DEFAULT) ?
> > >     		exposure_max : IMX219_EXPOSURE_DEFAULT;
> > >     	__v4l2_ctrl_modify_range(imx219->exposure,
> > >     				 imx219->exposure->minimum,
> > >     				 exposure_max, imx219->exposure->step,
> > >     				 exposure_def);
> > >     }
> > >
> > >     ...
> > >   }
> > >
> > > >
> > > > Question: is there code somewhere to guarantee that when delayed
> > > > controls changes vblank it applies the vblank before the exposure?
> > > >
> > > > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > > > most (all?) sensor drivers do not implement this, so the kernel
> > > > just applies them 1-by-1 internally. And if vblank gets increased
> > > > only after exposure then for one frame exposure will still get
> > > > clamped based on the old vblank.
> > > >
> > > > This problem actually sounds like a more likely cause for Stefan's
> > > > oscillation issue you mention below.
> > > >
> > > > >> One could even calculate the exposure-margin by calculating the configured
> > > > >> frame-time (in a unit of lines) and then substracting the max-exposure
> > > > >> reported by the sensor from the frame-time-in-lines.
> > > > >
> > > > > Stefan suggested that too.
> > > > >
> > > > > When you configure() you take the sensor's exposure time and the frame
> > > > > length and the diff between the two is the margin. However there is no
> > > > > indication anywhere that the sensor driver has to initialize controls
> > > > > in this way and considering how slightly different sensor drivers are
> > > > > I wouldn't be on them being consistent.
> > > >
> > > > Hmm, I would say that we need to fix the sensor drivers then. But since
> > > > we need the camerahelper for the analog-gain bits I guess that having
> > > > to add the exposure vs framelength margin is not a big problem.
> > > >
> > > > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > > > >>> to the maximum allowed frame duration, initializing the algorithm
> > > > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > > > >>> unless the user specifies a FrameDurationLimits range in which the
> > > > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > > > >>> compute a "default" frame rate at algorithm configuration time, were
> > > > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > > > >>> of preference.
> > > > >>
> > > > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > > > >> Does this algorithm also set / used to set the vblank control to
> > > > >> the maximum allowed frame duration? Or does it dynamically adjust
> > > > >> vblank when it needs more exposure "range" ?
> > > > >
> > > > > It didn't, as it didn't calculate the frame duration.
> > > > >
> > > > > The only IPA that use AgcMeanLuminance and handles VBLANK is the
> > > > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > > > and use it as the desired frame duration. This is wrong as it breaks
> > > > > the assumption we have a safe margin. It only doesn't break as the
> > > > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > > > also causes a mis-alignment between what the algorithm think the
> > > > > exposure time is and what is actually applied to the device (and I
> > > > > presume this is what Stefan has reported causing oscillations when I
> > > > > naively proposed to ignore the margin and let the driver adjust
> > > > > exposure).
> > > >
> > > > See above, applying exposure before an increased vblank will lead
> > > > to a much bigger clamp which is a more likely cause of oscillation.
> > > >
> > > > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > > > duration as well using the (exposure time it has computed + margin) and
> > > > > sends it back to the IPA that transforms it in a proper vblank value
> > > > > to be eventually set on the device.
> > > > >
> > > > >>
> > > > >> I've been thinking about adding support to dynamically increase
> > > > >> vblank when more exposure time is needed in the softISP AGC code,
> > > > >> but it would probably be better to first convert that over to
> > > > >> the shared AGC helpers. I know that Kieran is working on this.
> > > > >
> > > > > Yes, I think that's a desirable goal.
> > > > >
> > > > > My whole effort in this series has been around moving as much as
> > > > > possible of the logic to the shared algorithm so that all IPA using it
> > > > > will get a consistent behaviour from "free".
> > > >
> > > > Ack.
> > > >
> > > > Thank you for taking the time to answer all my questions.
> > > >
> > > > Regards,
> > > >
> > > > Hans
> > > >
> > > >
> > >
> > > --
> > > Kind Regards,
> > > Niklas Söderlund
> 
> -- 
> Kind Regards,
> Niklas Söderlund
Laurent Pinchart Nov. 2, 2025, 5:31 p.m. UTC | #11
On Tue, Oct 28, 2025 at 02:58:15PM +0000, Naushir Patuck wrote:
> On Tue, 28 Oct 2025 at 14:49, Hans de Goede wrote:
> > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > >>> By definition, the exposure time that can be programmed on a sensor is
> > >>> limited by the frame duration and whenever the frame duration is
> > >>> updated, the exposure time limits should be updated as well.
> > >>>
> > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > >>> libipa, where the exposure limits are computed at configure() time and
> > >>> never updated.
> > >>>
> > >>> This has two implications:
> > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > >>>    the frame duration programmed at startup time
> > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > >>>    constraints
> > >>>
> > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > >>> and the associated helpers to regulate the exposure time base on the
> > >>> currently programmed max frame duration using a sensor-specific margin
> > >>> that is now reported by the CameraHelper class.
> > >>
> > >> I have a question about the above paragraph. The sensor-specific margin
> > >> is already known by the kernel's sensor driver and properly written
> > >> sensor drivers will update the exposure range reported by the query-ctrl
> > >> ioctl after changing the vblank control.
> > >>
> > >> So I think it would be better to rely on the kernel for this info
> > >> and refresh the cached exposure-control limits when changing the vblank
> > >> control rather then duplicating this info in the CameraHelper class ?
> > >
> > > We discussed this with Niklas in v1
> > > https://patchwork.libcamera.org/patch/24695/#36372
> > >
> > > My main concern (one more ioctl per frame apart) is that relying on
> > > the kernel to update the control limits will introduce a delay of a
> > > variable number of frames that correponds to the pipeline depth plus
> > > the control delays.
> > >
> > > Whenever processing statistics for frame X and a new vblank value is
> > > calculated to accommodate the newly computed exposure value (within
> > > the FrameDurationLimits) we push the new value to DelayedControls
> > > which caches it in a queue. Whenever a new frame is started the
> > > queue of controls is popped and controls get applied to the device. They
> > > will take effect a few frames later (while the kernel interface
> > > updates the control limits immediately if I'm not mistaken).
> >
> > Right I forgot about the delayed-controls stuff, not being able
> > to get the new exposure limits from the kernel then makes sense.
> >
> > Question: is there code somewhere to guarantee that when delayed
> > controls changes vblank it applies the vblank before the exposure?
> >
> > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > most (all?) sensor drivers do not implement this, so the kernel
> > just applies them 1-by-1 internally. And if vblank gets increased
> > only after exposure then for one frame exposure will still get
> > clamped based on the old vblank.
> >
> > This problem actually sounds like a more likely cause for Stefan's
> > oscillation issue you mention below.
> 
> When constructing the DelayedControls object, we mark the VBLANK
> control as "priority write".

Jacopo, have you noticed we don't do so in the rkisp1 pipeline handler?
It seems to be a bug.

> This ensures that VBLANK is written
> ahead of any (and particularly EXPOSURE) controls for this very
> reason.

It's a workaround for a control framework limitation. The framework
first validates all control values against current boundaries, and then
proceeds to changing controls one by one. Fixing that in the kernel is
likely quite difficult. We could do with a better API for controls in
V4L2 (both userspace API and in-kernel API), but it would be a
substantial amount of work.
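The clamp this works around can be shown with a toy model. This is
purely illustrative, not libcamera or kernel code: ToySensor and its
numbers are invented, with the exposure limit depending on vblank as in
IMX219-style drivers.

```cpp
#include <algorithm>
#include <cstdint>

/*
 * Toy model of a sensor whose exposure limit depends on vblank
 * (maxExposure = height + vblank - margin). Values are illustrative.
 */
struct ToySensor {
	uint32_t height = 1080;
	uint32_t margin = 4;
	uint32_t vblank = 100;
	uint32_t exposure = 0;

	void setVblank(uint32_t v) { vblank = v; }

	/* The driver clamps exposure against the *current* vblank. */
	void setExposure(uint32_t e)
	{
		exposure = std::min(e, height + vblank - margin);
	}
};
```

Writing EXPOSURE before an enlarged VBLANK clamps it against the old,
smaller limit for that frame; writing VBLANK first, as the "priority
write" ordering ensures, lets the requested exposure through.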

> > >> One could even calculate the exposure-margin by calculating the configured
> > >> frame-time (in a unit of lines) and then substracting the max-exposure
> > >> reported by the sensor from the frame-time-in-lines.
> > >
> > > Stefan suggested that too.
> > >
> > > When you configure() you take the sensor's exposure time and the frame
> > > length and the diff between the two is the margin. However there is no
> > > indication anywhere that the sensor driver has to initialize controls
> > > in this way and considering how slightly different sensor drivers are
> > > I wouldn't be on them being consistent.
> >
> > Hmm, I would say that we need to fix the sensor drivers then. But since
> > we need the camerahelper for the analog-gain bits I guess that having
> > to add the exposure vs framelength margin is not a big problem.
> >
> > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > >>> to the maximum allowed frame duration, initializing the algorithm
> > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > >>> unless the user specifies a FrameDurationLimits range in which the
> > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > >>> compute a "default" frame rate at algorithm configuration time, were
> > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > >>> of preference.
> > >>
> > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > >> Does this algorithm also set / used to set the vblank control to
> > >> the maximum allowed frame duration? Or does it dynamically adjust
> > >> vblank when it needs more exposure "range" ?
> > >
> > > It didn't, as it didn't calculate the frame duration.
> > >
> > > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > and use it as the desired frame duration. This is wrong as it breaks
> > > the assumption we have a safe margin. It only doesn't break as the
> > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > also causes a mis-alignment between what the algorithm thinks the
> > > exposure time is and what is actually applied to the device (and I
> > > presume this is what Stefan has reported causing oscillations when I
> > > naively proposed to ignore the margin and let the driver adjust
> > > exposure).
> >
> > See above, applying exposure before an increased vblank will lead
> > to a much bigger clamp which is a more likely cause of oscillation.
> >
> > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > duration as well using the (exposure time it has computed + margin) and
> > > sends it back to the IPA that transforms it in a proper vblank value
> > > to be eventually set on the device.
> > >
> > >>
> > >> I've been thinking about adding support to dynamically increase
> > >> vblank when more exposure time is needed in the softISP AGC code,
> > >> but it would probably be better to first convert that over to
> > >> the shared AGC helpers. I know that Kieran is working on this.
> > >
> > > Yes, I think that's a desirable goal.
> > >
> > > My whole effort in this series has been around moving as much as
> > > possible of the logic to the shared algorithm so that all IPAs using it
> > > will get a consistent behaviour for free.
> >
> > Ack.
> >
> > Thank you for taking the time to answer all my questions.
Jacopo Mondi Nov. 2, 2025, 6:05 p.m. UTC | #12
Hi Laurent

On Sun, Nov 02, 2025 at 07:31:57PM +0200, Laurent Pinchart wrote:
> On Tue, Oct 28, 2025 at 02:58:15PM +0000, Naushir Patuck wrote:
> > On Tue, 28 Oct 2025 at 14:49, Hans de Goede wrote:
> > > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > > >>> By definition, the exposure time that can be programmed on a sensor is
> > > >>> limited by the frame duration and whenever the frame duration is
> > > >>> updated, the exposure time limits should be updated as well.
> > > >>>
> > > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > > >>> libipa, where the exposure limits are computed at configure() time and
> > > >>> never updated.
> > > >>>
> > > >>> This has two implications:
> > > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > > >>>    the frame duration programmed at startup time
> > > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > > >>>    constraints
> > > >>>
> > > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > > >>> and the associated helpers to regulate the exposure time based on the
> > > >>> currently programmed max frame duration using a sensor-specific margin
> > > >>> that is now reported by the CameraHelper class.
> > > >>
> > > >> I have a question about the above paragraph. The sensor-specific margin
> > > >> is already known by the kernel's sensor driver and properly written
> > > >> sensor drivers will update the exposure range reported by the query-ctrl
> > > >> ioctl after changing the vblank control.
> > > >>
> > > >> So I think it would be better to rely on the kernel for this info
> > > >> and refresh the cached exposure-control limits when changing the vblank
> > >> control rather than duplicating this info in the CameraHelper class?
> > > >
> > > > We discussed this with Niklas in v1
> > > > https://patchwork.libcamera.org/patch/24695/#36372
> > > >
> > > > My main concern (apart from one more ioctl per frame) is that relying on
> > > > the kernel to update the control limits will introduce a delay of a
> > > > variable number of frames that corresponds to the pipeline depth plus
> > > > the control delays.
> > > >
> > > > Whenever processing statistics for frame X and a new vblank value is
> > > > calculated to accommodate the newly computed exposure value (within
> > > > the FrameDurationLimits) we push the new value to DelayedControls
> > > > which caches it in a queue. Whenever a new frame is started the
> > > > queue of controls is popped and controls get applied to the device. They
> > > > will take effect a few frames later (while the kernel interface
> > > > updates the control limits immediately if I'm not mistaken).
> > >
> > > Right I forgot about the delayed-controls stuff, not being able
> > > to get the new exposure limits from the kernel then makes sense.
> > >
> > > Question: is there code somewhere to guarantee that when delayed
> > > controls changes vblank it applies the vblank before the exposure?
> > >
> > > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > > most (all?) sensor drivers do not implement this, so the kernel
> > > just applies them 1-by-1 internally. And if vblank gets increased
> > > only after exposure then for one frame exposure will still get
> > > clamped based on the old vblank.
> > >
> > > This problem actually sounds like a more likely cause for Stefan's
> > > oscillation issue you mention below.
> >
> > When constructing the DelayedControls object, we mark the VBLANK
> > control as "priority write".
>
> Jacopo, have you noticed we don't do so in the rkisp1 pipeline handler ?
> It seems to be a bug.
>

Not at all!

I can quickly send a patch, but I wonder how this went unnoticed till
now.
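
For reference, the ordering guarantee the flag gives us is simple to
model (control names illustrative; in libcamera the flag lives in
DelayedControls::ControlParams::priorityWrite, if I'm not mistaken):

```python
def write_order(controls):
    """Return control IDs in the order they would be written to the
    device: priority controls first, queue order preserved otherwise."""
    priority = [cid for cid, prio in controls if prio]
    normal = [cid for cid, prio in controls if not prio]
    return priority + normal

# VBLANK flagged as a priority write lands before EXPOSURE, so the new
# exposure limits are in place before EXPOSURE is applied.
queued = [("EXPOSURE", False), ("ANALOGUE_GAIN", False), ("VBLANK", True)]
assert write_order(queued) == ["VBLANK", "EXPOSURE", "ANALOGUE_GAIN"]
```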

> > This ensures that VBLANK is written
> > ahead of any (and particularly EXPOSURE) controls for this very
> > reason.
>
> It's a workaround for a control framework limitation. The framework
> first validates all control values against current boundaries, and then
> proceeds to changing controls one by one. Fixing that in the kernel is
> likely quite difficult. We could do with a better API for controls in
> V4L2 (both userspace API and in-kernel API), but it would be a
> substantial amount of work.
>
> > > >> One could even calculate the exposure-margin by calculating the configured
> > > >> frame-time (in a unit of lines) and then subtracting the max-exposure
> > > >> reported by the sensor from the frame-time-in-lines.
> > > >
> > > > Stefan suggested that too.
> > > >
> > > > When you configure() you take the sensor's exposure time and the frame
> > > > length and the diff between the two is the margin. However there is no
> > > > indication anywhere that the sensor driver has to initialize controls
> > > > in this way and considering how slightly different sensor drivers are
> > > > I wouldn't bet on them being consistent.
> > >
> > > Hmm, I would say that we need to fix the sensor drivers then. But since
> > > we need the camerahelper for the analog-gain bits I guess that having
> > > to add the exposure vs framelength margin is not a big problem.
> > >
> > > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > > >>> to the maximum allowed frame duration, initializing the algorithm
> > > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > > >>> unless the user specifies a FrameDurationLimits range in which the
> > > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > > >>> compute a "default" frame rate at algorithm configuration time, where
> > > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > > >>> of preference.
> > > >>
> > > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > > >> Does this algorithm also set / used to set the vblank control to
> > > >> the maximum allowed frame duration? Or does it dynamically adjust
> > > >> vblank when it needs more exposure "range" ?
> > > >
> > > > It didn't, as it didn't calculate the frame duration.
> > > >
> > > > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > > and use it as the desired frame duration. This is wrong as it breaks
> > > > the assumption we have a safe margin. It only doesn't break as the
> > > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > > also causes a mis-alignment between what the algorithm thinks the
> > > > exposure time is and what is actually applied to the device (and I
> > > > presume this is what Stefan has reported causing oscillations when I
> > > > naively proposed to ignore the margin and let the driver adjust
> > > > exposure).
> > >
> > > See above, applying exposure before an increased vblank will lead
> > > to a much bigger clamp which is a more likely cause of oscillation.
> > >
> > > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > > duration as well using the (exposure time it has computed + margin) and
> > > > sends it back to the IPA that transforms it in a proper vblank value
> > > > to be eventually set on the device.
> > > >
> > > >>
> > > >> I've been thinking about adding support to dynamically increase
> > > >> vblank when more exposure time is needed in the softISP AGC code,
> > > >> but it would probably be better to first convert that over to
> > > >> the shared AGC helpers. I know that Kieran is working on this.
> > > >
> > > > Yes, I think that's a desirable goal.
> > > >
> > > > My whole effort in this series has been around moving as much as
> > > > possible of the logic to the shared algorithm so that all IPAs using it
> > > > will get a consistent behaviour for free.
> > >
> > > Ack.
> > >
> > > Thank you for taking the time to answer all my questions.
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart Nov. 2, 2025, 7:22 p.m. UTC | #13
On Sun, Nov 02, 2025 at 05:42:23PM +0100, Niklas Söderlund wrote:
> On 2025-11-02 17:04:22 +0100, Niklas Söderlund wrote:
> > On 2025-11-02 14:36:36 +0100, Jacopo Mondi wrote:
> > > On Tue, Oct 28, 2025 at 04:38:42PM +0100, Niklas Söderlund wrote:
> > > > On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> > > > > On 28-Oct-25 12:42 PM, Jacopo Mondi wrote:
> > > > > > On Tue, Oct 28, 2025 at 10:53:15AM +0100, Hans de Goede wrote:
> > > > > >> On 28-Oct-25 10:31 AM, Jacopo Mondi wrote:
> > > > > >>> By definition, the exposure time that can be programmed on a sensor is
> > > > > >>> limited by the frame duration and whenever the frame duration is
> > > > > >>> updated, the exposure time limits should be updated as well.
> > > > > >>>
> > > > > >>> This is not the case for IPAs using the AgcMeanLuminance algorithm from
> > > > > >>> libipa, where the exposure limits are computed at configure() time and
> > > > > >>> never updated.
> > > > > >>>
> > > > > >>> This has two implications:
> > > > > >>> 1) The AGC algorithms will always operate with an exposure time bound to
> > > > > >>>    the frame duration programmed at startup time
> > > > > >>> 2) The Camera::controls() limits are not updated to reflect the new
> > > > > >>>    constraints
> > > > > >>>
> > > > > >>> This series addresses issue 1) by changing the AgcMeanLuminance class
> > > > > >>> and the associated helpers to regulate the exposure time based on the
> > > > > >>> currently programmed max frame duration using a sensor-specific margin
> > > > > >>> that is now reported by the CameraHelper class.
> > > > > >>
> > > > > >> I have a question about the above paragraph. The sensor-specific margin
> > > > > >> is already known by the kernel's sensor driver and properly written
> > > > > >> sensor drivers will update the exposure range reported by the query-ctrl
> > > > > >> ioctl after changing the vblank control.
> > > > > >>
> > > > > >> So I think it would be better to rely on the kernel for this info
> > > > > >> and refresh the cached exposure-control limits when changing the vblank
> > > > > >> control rather than duplicating this info in the CameraHelper class?
> > > > > >
> > > > > > We discussed this with Niklas in v1
> > > > > > https://patchwork.libcamera.org/patch/24695/#36372
> > > > > >
> > > > > > My main concern (apart from one more ioctl per frame) is that relying on
> > > > > > the kernel to update the control limits will introduce a delay of a
> > > > > > variable number of frames that corresponds to the pipeline depth plus
> > > > > > the control delays.
> > > > > >
> > > > > > Whenever processing statistics for frame X and a new vblank value is
> > > > > > calculated to accommodate the newly computed exposure value (within
> > > > > > the FrameDurationLimits) we push the new value to DelayedControls
> > > > > > which caches it in a queue. Whenever a new frame is started the
> > > > > > queue of controls is popped and controls get applied to the device. They
> > > > > > will take effect a few frames later (while the kernel interface
> > > > > > updates the control limits immediately if I'm not mistaken).
> > > > >
> > > > > Right I forgot about the delayed-controls stuff, not being able
> > > > > to get the new exposure limits from the kernel then makes sense.
> > > >
> > > > I too understand why it's easier to not consult the kernel for the new
> > > > limits. I have no real strong feeling that the correct solution is to
> > > > have this limit calculation done in the kernel or user-space. What I do
> > > > feel somewhat strongly about is if we do the limit calculation in
> > > > user-space we should never consult the kernel for the limits.
> > > 
> > > Currently the EXPOSURE control limits from the kernel interface are
> > > only queried when initializing the IPA. To be honest this doesn't
> > > bother me that much, but you already know that :)
> > > 
> > > But I agree that if we move the max exposure computation to be based
> > > on the frame duration, we could even ignore the EXPOSURE limits
> > > completely, provided we get a way to know the minimum exposure value.
> > > I checked a few drivers and this doesn't always correspond to the
> > > exposure margin which I already added to the CameraSensorHelper.
> > > 
> > > I could add a CameraSensorHelper::exposureMin(), but to be honest this
> > > seems like quite some churn when we could easily get the min exposure
> > > from the v4l2 control. Do you have alternative proposals?
> > 
> > No, and I think that highlights my concern.
> > 
> > If we can't be sure we can calculate the exposure limits 100% in 
> > user-space for all sensors, with their special cases etc., based on the 
> > VBLANK setting, and need to check the kernel sometimes, then I think 
> > it's dangerous to do so. What if the next issue is that the min exposure 
> > time also depends on VBLANK for another sensor, for example?
> > 
> > But if we can be sure we can do the limit calculation in user-space I 
> > think we should. But then we should do so all the time to prepare for 
> > deprecating the V4L2_CID_EXPOSURE control.

We still need a control to set the exposure time, don't we ?

> > I will not rage on about this anymore, or I will try not to ;-) When I 
> > see the same thing done twice in two different locations, the results 
> > not correlated, and even worse IMHO sometimes done using method A and 
> > sometimes using method B my head says danger, danger, this will come 
> > back and bite me!
> > 
> > The current code does not do the right thing either, and is currently 
> > biting me. I agree that something that will potentially bite me in the 
> > future but gives better images now is better than fighting the V4L2 API 
> > for years before this can be fixed properly.
> 
> I said I would try ;-)
> 
> One (half ugly) idea, if we do want to use the kernel to calculate the 
> values, would be to do a "range finding run" at IPA init time in the 
> pipeline handler. The pipeline would sweep over the sensor's VBLANK 
> setting range, recording the min and max EXPOSURE values, and build a 
> lookup table it then gives to the IPA.
> 
> The IPA then uses this table internally to map a VBLANK value to an 
> EXPOSURE min/max limit. This could likely be done in a sensor helper 
> class.

I agree with part of your arguments above. Duplicating the same code in
multiple places is usually a sign that something is going wrong. That could
be the case here too. Taking a step back, it's important to understand
why we perform the same calculations in kernel drivers and in libcamera:

- In the kernel, the boundary calculations are used to ensure invalid
  register values will be adjusted. This is more for safety than
  correctness, even if most sensors will probably recover easily from
  invalid exposure time values.

- In libcamera, the calculations are meant for correctness of
  algorithms. We need to perform exposure time calculations every frame,
  and going through one (or more) round-trip to the kernel will impact
  performance.
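
To make the second point concrete, the per-frame calculation we want to
keep in userspace is roughly the following (a sketch with made-up names
and numbers, not the actual libipa interface):

```python
def max_exposure_us(max_frame_duration_us, line_length_us, margin_lines):
    """Exposure ceiling implied by the currently programmed max frame
    duration, minus the sensor-specific margin reported by the camera
    helper. No kernel round-trip is needed once the margin is known."""
    frame_lines = max_frame_duration_us / line_length_us
    return (frame_lines - margin_lines) * line_length_us

# A ~33.3 ms frame at 30 us per line with a 4-line margin.
assert round(max_exposure_us(33300, 30.0, 4)) == 33180
```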

There's a case to be made for having kernel drivers report the
parameters that libcamera needs. We're doing so today in a way that splits data
between the kernel and userspace, and uses all kinds of heuristics. This
is the result of a UAPI that doesn't really match the needs of
userspace. I'm not blaming the developers of the V4L2 control API, it's
old and was designed for different types of devices and different
purposes than what we're dealing with now, but it's nonetheless a bad
match.

I believe we can improve the UAPI to better suit our needs. That will be
lots of work, and we can't wait for that to be complete before fixing
the issue that this patch series addresses. Not to mention that we'll
need a very motivated champion to make this happen :-)

> > > I think I could improve the setLimit() interface proposed here, as
> > > maxExposure is only used to signal when a fixed exposure should be
> > > used.
> > > 
> > > I could pass the min exposure to Agc::configure() and only update the
> > > frame duration in setLimit() with an optional "fixed exposure"
> > > parameter in case a fixed manual exposure is desired...
> > > 
> > > >
> > > > This just mixes two ways of doing the same thing and is very confusing
> > > > when reading the code. So if doing this in user-space is the way forward
> > > > I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> > > > in IPARkISP1::configure() and instead use this method there too.
> > > >
> > > > I also feel, a little less strongly as it's a big work item, that it's
> > > > really wrong to duplicate the calculation both in kernel and user-space.
> > > 
> > > That I don't agree with, for the reasons already explained. This is
> > > not only about the kernel interface: it's the IPA's internal
> > > calculation that needs to know the current exposure limit (in real
> > > time) to regulate itself and know how to split the desired exposure
> > > between shutter time and gains. It could even be considered an
> > > internal IPA parameter...
> > > 
> > > >
> > > > As this IMHO exposes a weakness in the V4L2 API we should think about
> > > > how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> > > > we really want it to report its min/max values based on the current
> > > > V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> > > > for each frame?
> > > >
> > > > Can we simplify the control to report the min and max values the sensor
> > > > can do provided the VBLANK setting allows it, and just reject any value
> > > > at set time that does not conform to the current VBLANK setting?
> > > 
> > > Won't the v4l2 control framework need a min and a max to adjust the
> > > value passed to the driver?
> > > 
> > > >
> > > > If so we can stop doing stuff in sensor drivers, such as in IMX219, to
> > > > allow for this control dependency in each driver.
> > > >
> > > >   static int imx219_set_ctrl(struct v4l2_ctrl *ctrl)
> > > >   {
> > > >     ...
> > > >
> > > >     if (ctrl->id == V4L2_CID_VBLANK) {
> > > >     	int exposure_max, exposure_def;
> > > >
> > > >     	/* Update max exposure while meeting expected vblanking */
> > > >     	exposure_max = format->height + ctrl->val - 4;
> > > >     	exposure_def = (exposure_max < IMX219_EXPOSURE_DEFAULT) ?
> > > >     		exposure_max : IMX219_EXPOSURE_DEFAULT;
> > > >     	__v4l2_ctrl_modify_range(imx219->exposure,
> > > >     				 imx219->exposure->minimum,
> > > >     				 exposure_max, imx219->exposure->step,
> > > >     				 exposure_def);
> > > >     }
> > > >
> > > >     ...
> > > >   }
> > > >
> > > > >
> > > > > Question: is there code somewhere to guarantee that when delayed
> > > > > controls changes vblank it applies the vblank before the exposure?
> > > > >
> > > > > IIRC libcamera uses set-ext-ctrls to set them all at once, but
> > > > > most (all?) sensor drivers do not implement this, so the kernel
> > > > > just applies them 1-by-1 internally. And if vblank gets increased
> > > > > only after exposure then for one frame exposure will still get
> > > > > clamped based on the old vblank.
> > > > >
> > > > > This problem actually sounds like a more likely cause for Stefan's
> > > > > oscillation issue you mention below.
> > > > >
> > > > > >> One could even calculate the exposure-margin by calculating the configured
> > > > > >> frame-time (in a unit of lines) and then subtracting the max-exposure
> > > > > >> reported by the sensor from the frame-time-in-lines.
> > > > > >
> > > > > > Stefan suggested that too.
> > > > > >
> > > > > > When you configure() you take the sensor's exposure time and the frame
> > > > > > length and the diff between the two is the margin. However there is no
> > > > > > indication anywhere that the sensor driver has to initialize controls
> > > > > > in this way and considering how slightly different sensor drivers are
> > > > > > I wouldn't bet on them being consistent.
> > > > >
> > > > > Hmm, I would say that we need to fix the sensor drivers then. But since
> > > > > we need the camerahelper for the analog-gain bits I guess that having
> > > > > to add the exposure vs framelength margin is not a big problem.
> > > > >
> > > > > >>> As the AgcMeanLuminance algorithm tries to push the exposure time up
> > > > > >>> to the maximum allowed frame duration, initializing the algorithm
> > > > > >>> with the sensor's maximum capabilities results in a very low frame rate,
> > > > > >>> unless the user specifies a FrameDurationLimits range in which the
> > > > > >>> algorithm should operate. To avoid slowing the frame rate down too much
> > > > > >>> compute a "default" frame rate at algorithm configuration time, where
> > > > > >>> the canonical 30, 15 and 10 FPS values are tested and selected in order
> > > > > >>> of preference.
> > > > > >>
> > > > > >> I must admit I'm not familiar with the AgcMeanLuminance algorithm.
> > > > > >> Does this algorithm also set / used to set the vblank control to
> > > > > >> the maximum allowed frame duration? Or does it dynamically adjust
> > > > > >> vblank when it needs more exposure "range" ?
> > > > > >
> > > > > > It didn't, as it didn't calculate the frame duration.
> > > > > >
> > > > > > The only IPA that uses AgcMeanLuminance and handles VBLANK is the
> > > > > > RkISP1 and as you can see in 09/10 the way it does that is to take
> > > > > > the exposure time calculated by AgcMeanLuminance::calculateNewEV()
> > > > > > and use it as the desired frame duration. This is wrong as it breaks
> > > > > > the assumption we have a safe margin. It only doesn't break as the
> > > > > > driver will clamp exposure once VBLANK is applied, but I presume this
> > > > > > also causes a mis-alignment between what the algorithm thinks the
> > > > > > exposure time is and what is actually applied to the device (and I
> > > > > > presume this is what Stefan has reported causing oscillations when I
> > > > > > naively proposed to ignore the margin and let the driver adjust
> > > > > > exposure).
> > > > >
> > > > > See above, applying exposure before an increased vblank will lead
> > > > > to a much bigger clamp which is a more likely cause of oscillation.
> > > > >
> > > > > > This is the reason for 09/10: the AgcMeanLuminance now calculates the
> > > > > > duration as well using the (exposure time it has computed + margin) and
> > > > > > sends it back to the IPA that transforms it in a proper vblank value
> > > > > > to be eventually set on the device.
> > > > > >
> > > > > >>
> > > > > >> I've been thinking about adding support to dynamically increase
> > > > > >> vblank when more exposure time is needed in the softISP AGC code,
> > > > > >> but it would probably be better to first convert that over to
> > > > > >> the shared AGC helpers. I know that Kieran is working on this.
> > > > > >
> > > > > > Yes, I think that's a desirable goal.
> > > > > >
> > > > > > My whole effort in this series has been around moving as much as
> > > > > > possible of the logic to the shared algorithm so that all IPAs using it
> > > > > > will get a consistent behaviour for free.
> > > > >
> > > > > Ack.
> > > > >
> > > > > Thank you for taking the time to answer all my questions.
Hans de Goede Nov. 3, 2025, 9:01 a.m. UTC | #14
Hi,

On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:

...

>> Right I forgot about the delayed-controls stuff, not being able
>> to get the new exposure limits from the kernel then makes sense.
> 
> I too understand why it's easier to not consult the kernel for the new 
> limits. I have no real strong feeling that the correct solution is to 
> have this limit calculation done in the kernel or user-space. What I do 
> feel somewhat strongly about is if we do the limit calculation in 
> user-space we should never consult the kernel for the limits.
> 
> This just mixes two ways of doing the same thing and is very confusing 
> when reading the code. So if doing this in user-space is the way forward 
> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE 
> in IPARkISP1::configure() and instead use this method there too.
> 
> I also feel, a little less strongly as it's a big work item, that it's 
> really wrong to duplicate the calculation both in kernel and user-space.
> 
> As this IMHO exposes a weakness in the V4L2 API we should think about 
> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do 
> we really want it to report its min/max values based on the current 
> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL 
> for each frame?
> 
> Can we simplify the control to report the min and max values the sensor 
> can do provided the VBLANK setting allows it, and just reject any value 
> at set time that does not conform to the current VBLANK setting?

One complicating factor here is that some sensors will do vblank stretching
when you ask for a higher exposure. IOW some sensors behave as if a bigger
vblank is set when setting exposure > (frame-length - margin), to still
give you the desired exposure.

ATM I think that for all new sensor drivers we try to avoid this auto
vblank increase by clamping the exposure range.

If we don't clamp the exposure range, then we will get inconsistent
behavior between sensors.

We could clamp the value only when written and keep the advertised
range the same, but that feels wrong. Arguably allowing vblank
stretch would be nice: normally the IPA would keep the exposure
within the frame-length - margin range, but it could then do higher
exposures without needing to reprogram vblank, but only on some
sensors ...
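
To spell out the two behaviours with hypothetical numbers (in lines,
not modelled on any specific sensor):

```python
HEIGHT, MARGIN, VBLANK = 1080, 4, 100
FRAME_LINES = HEIGHT + VBLANK  # programmed frame length

def clamped(exposure):
    """Newer drivers: exposure is clamped, the frame length is kept."""
    return min(exposure, FRAME_LINES - MARGIN), FRAME_LINES

def stretched(exposure):
    """Vblank-stretching sensors: the exposure is honoured and the
    frame is silently extended to exposure + margin."""
    frame = max(FRAME_LINES, exposure + MARGIN)
    return exposure, frame

# Request more exposure than the 1180-line frame allows:
assert clamped(1500) == (1176, 1180)     # exposure cut, frame rate kept
assert stretched(1500) == (1500, 1504)   # exposure kept, frame rate drops
```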

Regards,

Hans
Jacopo Mondi Nov. 4, 2025, 3:53 p.m. UTC | #15
Hi Hans

On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> Hi,
>
> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> > On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
>
> ...
>
> >> Right I forgot about the delayed-controls stuff, not being able
> >> to get the new exposure limits from the kernel then makes sense.
> >
> > I too understand why it's easier to not consult the kernel for the new
> > limits. I have no real strong feeling that the correct solution is to
> > have this limit calculation done in the kernel or user-space. What I do
> > feel somewhat strongly about is if we do the limit calculation in
> > user-space we should never consult the kernel for the limits.
> >
> > This just mixes two ways of doing the same thing and is very confusing
> > when reading the code. So if doing this in user-space is the way forward
> > I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> > in IPARkISP1::configure() and instead use this method there too.
> >
> > I also feel, a little less strongly as it's a big work item, that it's
> > really wrong to duplicate the calculation both in kernel and user-space.
> >
> > As this IMHO exposes a weakness in the V4L2 API we should think about
> > how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> > we really want it to report its min/max values based on the current
> > V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> > for each frame?
> >
> > Can we simplify the control to report the min and max values the sensor
> > can do provided the VBLANK setting allows it, and just reject any value
> > at set time that does not conform to the current VBLANK setting?
>
> One complicating factor here is that some sensors will do vblank stretching
> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> vblank is set when setting exposure > (frame-length - margin) to still
> give you the desired exposure.

Interesting. Do you have any pointer to mainline drivers that do that ?

>
> ATM I think that for all new sensor drivers we try to avoid this auto
> vblank increase by clamping the exposure range.
>
> If we don't clamp the exposure range, then we will get inconsistent
> behavior between sensors.
>
> We could only clamp the value when written and keep the advertised
> range the same though, but that feels wrong. Arguably allowing vblank
> stretch would be nice to do, then normally the IPA will keep
> the exposure within the frame-length - exposure range, but it could
> then do higher exposures without needed to reprogram vblank, but
> only one some sensors ...
>
> Regards,
>
> Hans
>
>
Hans de Goede Nov. 4, 2025, 5:23 p.m. UTC | #16
Hi Jacopo,

On 4-Nov-25 16:53, Jacopo Mondi wrote:
> Hi Hans
> 
> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
>> Hi,
>>
>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
>>
>> ...
>>
>>>> Right I forgot about the delayed-controls stuff, not being able
>>>> to get the new exposure limits from the kernel then makes sense.
>>>
>>> I too understand why it's easier to not consult the kernel for the new
>>> limits. I have no real strong feeling that the correct solution is to
>>> have this limit calculation done in the kernel or user-space. What I do
>>> feel somewhat strongly about is if we do the limit calculation in
>>> user-space we should never consult the kernel for the limits.
>>>
>>> This just mixes two ways of doing the same thing and is very confusing
>>> when reading the code. So if doing this in user-space is the way forward
>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
>>> in IPARkISP1::configure() and instead use this method there too.
>>>
>>> I also feel, a little less strongly as it's a big work item, that it's
>>> really wrong to duplicate the calculation both in kernel and user-space.
>>>
>>> As this IMHO exposes a weakness in the V4L2 API we should think about
>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
>>> we really want it to report its min/max values based on the current
>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
>>> for each frame.
>>>
>>> Can we simplify the control to report the min and max values the sensor
>>> can do provided the VBLANK setting allows it, and just reject any value
>>> at set time that do not conform to the current VBLANK setting?
>>
>> One complicating factor here is that some sensors will do vblank stretching
>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
>> vblank is set when setting exposure > (frame-length + margin) to still
>> give you the desired exposure.
> 
> Interesting. Do you have any pointer to mainline drivers that do that ?

I'm not talking about the driver stretching the vblank; I'm talking
about the hw (the sensor) stretching the vblank. I think that in newer
drivers we stop this from happening by clamping the exposure.

I'm 100% certain that some hw does this as I've seen this happen.
I can do some digging if you want some specific sensor model examples
where the hw is doing this.

Regards,

Hans
Naushir Patuck Nov. 5, 2025, 10:50 a.m. UTC | #17
Hi,

On Tue, 4 Nov 2025 at 17:23, Hans de Goede <hansg@kernel.org> wrote:
>
> Hi Jacopo,
>
> On 4-Nov-25 16:53, Jacopo Mondi wrote:
> > Hi Hans
> >
> > On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> >> Hi,
> >>
> >> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> >>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> >>
> >> ...
> >>
> >>>> Right I forgot about the delayed-controls stuff, not being able
> >>>> to get the new exposure limits from the kernel then makes sense.
> >>>
> >>> I too understand why it's easier to not consult the kernel for the new
> >>> limits. I have no real strong feeling that the correct solution is to
> >>> have this limit calculation done in the kernel or user-space. What I do
> >>> feel somewhat strongly about is if we do the limit calculation in
> >>> user-space we should never consult the kernel for the limits.
> >>>
> >>> This just mixes two ways of doing the same thing and is very confusing
> >>> when reading the code. So if doing this in user-space is the way forward
> >>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> >>> in IPARkISP1::configure() and instead use this method there too.
> >>>
> >>> I also feel, a little less strongly as it's a big work item, that it's
> >>> really wrong to duplicate the calculation both in kernel and user-space.
> >>>
> >>> As this IMHO exposes a weakness in the V4L2 API we should think about
> >>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> >>> we really want it to report its min/max values based on the current
> >>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> >>> for each frame.
> >>>
> >>> Can we simplify the control to report the min and max values the sensor
> >>> can do provided the VBLANK setting allows it, and just reject any value
> >>> at set time that do not conform to the current VBLANK setting?
> >>
> >> One complicating factor here is that some sensors will do vblank stretching
> >> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> >> vblank is set when setting exposure > (frame-length + margin) to still
> >> give you the desired exposure.
> >
> > Interesting. Do you have any pointer to mainline drivers that do that ?
>
> I'm not talking about the driver stretching the vblank I'm talking
> about the hw (the sensor) stretching the vblank. I think that in newer
> drivers we stop this from happening by clamping the exposure.
>
> I'm 100% certain that some hw does this as I've seen this happen.
> I can do some digging if you want some specific sensor model examples
> where the hw is doing this.

We have worked with sensors (e.g. IMX477 and IMX708) that support
auto-increasing the frame length / VBLANK if the exposure time
stretches past the current frame length. However, we have turned this
feature off in the sensor and handle it manually, like we do for all
other sensors.

Naush


>
> Regards,
>
> Hans
>
>
>
Jacopo Mondi Nov. 6, 2025, 8:29 a.m. UTC | #18
Hi Hans

On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
> Hi Jacopo,
>
> On 4-Nov-25 16:53, Jacopo Mondi wrote:
> > Hi Hans
> >
> > On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> >> Hi,
> >>
> >> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> >>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> >>
> >> ...
> >>
> >>>> Right I forgot about the delayed-controls stuff, not being able
> >>>> to get the new exposure limits from the kernel then makes sense.
> >>>
> >>> I too understand why it's easier to not consult the kernel for the new
> >>> limits. I have no real strong feeling that the correct solution is to
> >>> have this limit calculation done in the kernel or user-space. What I do
> >>> feel somewhat strongly about is if we do the limit calculation in
> >>> user-space we should never consult the kernel for the limits.
> >>>
> >>> This just mixes two ways of doing the same thing and is very confusing
> >>> when reading the code. So if doing this in user-space is the way forward
> >>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> >>> in IPARkISP1::configure() and instead use this method there too.
> >>>
> >>> I also feel, a little less strongly as it's a big work item, that it's
> >>> really wrong to duplicate the calculation both in kernel and user-space.
> >>>
> >>> As this IMHO exposes a weakness in the V4L2 API we should think about
> >>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> >>> we really want it to report its min/max values based on the current
> >>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> >>> for each frame.
> >>>
> >>> Can we simplify the control to report the min and max values the sensor
> >>> can do provided the VBLANK setting allows it, and just reject any value
> >>> at set time that do not conform to the current VBLANK setting?
> >>
> >> One complicating factor here is that some sensors will do vblank stretching
> >> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> >> vblank is set when setting exposure > (frame-length + margin) to still
> >> give you the desired exposure.
> >
> > Interesting. Do you have any pointer to mainline drivers that do that ?
>
> I'm not talking about the driver stretching the vblank I'm talking
> about the hw (the sensor) stretching the vblank. I think that in newer
> drivers we stop this from happening by clamping the exposure.
>
> I'm 100% certain that some hw does this as I've seen this happen.
> I can do some digging if you want some specific sensor model examples
> where the hw is doing this.

I think the fact that some sensors support this feature is not an
issue if we operate with algorithms designed to always clamp the
shutter time to the frame duration. Hence I would be more interested
in knowing which usage model you would consider best for the whole
libcamera/libipa handling of exposure/frame duration.

Currently, the design is to have the maximum achievable shutter time
always constrained by the frame duration. Users should take care of
explicitly slowing down the camera frame rate in order to achieve a
larger shutter time.

I think it is justified by the fact that the desired exposure is
realized by the combined effect of the shutter time and gains, and
hence limiting the shutter time won't prevent us from reaching the
desired exposure levels.
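
For illustration, that clamp-then-gain model could be sketched as
below. The helper and names are hypothetical, not the actual
AgcMeanLuminance code, and the margin is assumed to be the
sensor-specific value this series reports through CameraHelper:

```cpp
#include <algorithm>
#include <cstdint>

/*
 * Hypothetical sketch: the shutter time is clamped to the frame duration
 * (minus a sensor-specific margin), and the remaining exposure is realized
 * through gain. All durations are in microseconds.
 */
struct ExposureSplit {
	int64_t shutterUs;
	double gain;
};

ExposureSplit splitExposure(int64_t totalExposureUs, int64_t frameDurationUs,
			    int64_t marginUs)
{
	/* The largest shutter time the current frame duration allows. */
	int64_t maxShutterUs = frameDurationUs - marginUs;
	int64_t shutterUs = std::min(totalExposureUs, maxShutterUs);

	/* Whatever the shutter can't provide is made up with gain. */
	double gain = static_cast<double>(totalExposureUs) / shutterUs;

	return { shutterUs, gain };
}
```

With a 33 ms frame duration and a 1 ms margin, a requested 64 ms total
exposure yields a 32 ms shutter and a gain of 2.0; reaching a longer
shutter time requires first raising the frame duration.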

Alternative approaches where the exposure time drives the frame
duration could be implemented with helpers, similar to the already
discussed "aperture priority" vs "shutter priority" modes.

What we have to make sure is that the controls we define to drive the
AGC allow us to create such helpers easily.

Do you think we're on the same page here?

Thanks
  j

>
> Regards,
>
> Hans
>
>
>
Hans de Goede Nov. 7, 2025, 4:41 p.m. UTC | #19
Hi Jacopo,

On 6-Nov-25 9:29 AM, Jacopo Mondi wrote:
> Hi Hans
> 
> On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
>> Hi Jacopo,
>>
>> On 4-Nov-25 16:53, Jacopo Mondi wrote:
>>> Hi Hans
>>>
>>> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
>>>> Hi,
>>>>
>>>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
>>>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
>>>>
>>>> ...
>>>>
>>>>>> Right I forgot about the delayed-controls stuff, not being able
>>>>>> to get the new exposure limits from the kernel then makes sense.
>>>>>
>>>>> I too understand why it's easier to not consult the kernel for the new
>>>>> limits. I have no real strong feeling that the correct solution is to
>>>>> have this limit calculation done in the kernel or user-space. What I do
>>>>> feel somewhat strongly about is if we do the limit calculation in
>>>>> user-space we should never consult the kernel for the limits.
>>>>>
>>>>> This just mixes two ways of doing the same thing and is very confusing
>>>>> when reading the code. So if doing this in user-space is the way forward
>>>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
>>>>> in IPARkISP1::configure() and instead use this method there too.
>>>>>
>>>>> I also feel, a little less strongly as it's a big work item, that it's
>>>>> really wrong to duplicate the calculation both in kernel and user-space.
>>>>>
>>>>> As this IMHO exposes a weakness in the V4L2 API we should think about
>>>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
>>>>> we really want it to report its min/max values based on the current
>>>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
>>>>> for each frame.
>>>>>
>>>>> Can we simplify the control to report the min and max values the sensor
>>>>> can do provided the VBLANK setting allows it, and just reject any value
>>>>> at set time that do not conform to the current VBLANK setting?
>>>>
>>>> One complicating factor here is that some sensors will do vblank stretching
>>>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
>>>> vblank is set when setting exposure > (frame-length + margin) to still
>>>> give you the desired exposure.
>>>
>>> Interesting. Do you have any pointer to mainline drivers that do that ?
>>
>> I'm not talking about the driver stretching the vblank I'm talking
>> about the hw (the sensor) stretching the vblank. I think that in newer
>> drivers we stop this from happening by clamping the exposure.
>>
>> I'm 100% certain that some hw does this as I've seen this happen.
>> I can do some digging if you want some specific sensor model examples
>> where the hw is doing this.
> 
> I think the fact some sensor supporting this feature not an issue if
> we operate with algorithms designed to always clamp the shutter time
> to the frame duration. Hence I would be more interested in knowing
> which usage model you would consider best for the whole
> libcamera/libipa handling of exposure/frame duration.
> 
> Currently, the design is to have the maximum achievable shutter time
> to be always constrained by the frame duration. Users should take
> care of explicitly slowing down the camera frame rate in order to
> achieve a larger shutter time.
> 
> I think it is justified by the face that the desired exposure is
> realized by the combined effect of the shutter time and gains, and
> hence limiting the shutter time won't imply not being able to reach
> the desired exposure levels.

Right, except that in low light you may need to boost the gain
to a level where you get significant noise and at some point
the gain is mostly just amplifying noise (discussion continued
below).

> Alternative approaches where the exposure time drives the frame
> duration could be implemented with helpers, similar to the already
> discussed "aperture priority" vs "shutter priority" modes.

Right, I do believe that we really need these modes, at least
for laptops. Unlike the RPi cameras, which have pretty nice large
lenses, laptop designs are optimized for small bezels and thus
small lenses, which means not a whole lot of light; this really
starts to bite in low light conditions.

And I've seen both Intel's proprietary IPU6 stack as well as
many UVC webcams lower the framerate to compensate (somewhat).

I'm fine with defaulting to never letting the need for higher
exposure lower the framerate, but as you say I do believe we need
a control to allow the AGC to lower the framerate to get larger
exposure times. And I think those controls should likely even
default to on for devices with small lenses, but first we need to
implement such an exposure-mode control to begin with :)

> What we have to make sure is the controls we define to driver the AGC
> allows us to create such helpers easily.
> 
> Do you think we're on the same page here ?

Yes I believe that we are on the same page here.

Regards,

Hans
Naushir Patuck Nov. 10, 2025, 9:05 a.m. UTC | #20
Hi Hans,

On Fri, 7 Nov 2025 at 16:41, Hans de Goede
<johannes.goede@oss.qualcomm.com> wrote:
>
> Hi Jacopo,
>
> On 6-Nov-25 9:29 AM, Jacopo Mondi wrote:
> > Hi Hans
> >
> > On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
> >> Hi Jacopo,
> >>
> >> On 4-Nov-25 16:53, Jacopo Mondi wrote:
> >>> Hi Hans
> >>>
> >>> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> >>>> Hi,
> >>>>
> >>>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> >>>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> >>>>
> >>>> ...
> >>>>
> >>>>>> Right I forgot about the delayed-controls stuff, not being able
> >>>>>> to get the new exposure limits from the kernel then makes sense.
> >>>>>
> >>>>> I too understand why it's easier to not consult the kernel for the new
> >>>>> limits. I have no real strong feeling that the correct solution is to
> >>>>> have this limit calculation done in the kernel or user-space. What I do
> >>>>> feel somewhat strongly about is if we do the limit calculation in
> >>>>> user-space we should never consult the kernel for the limits.
> >>>>>
> >>>>> This just mixes two ways of doing the same thing and is very confusing
> >>>>> when reading the code. So if doing this in user-space is the way forward
> >>>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> >>>>> in IPARkISP1::configure() and instead use this method there too.
> >>>>>
> >>>>> I also feel, a little less strongly as it's a big work item, that it's
> >>>>> really wrong to duplicate the calculation both in kernel and user-space.
> >>>>>
> >>>>> As this IMHO exposes a weakness in the V4L2 API we should think about
> >>>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> >>>>> we really want it to report its min/max values based on the current
> >>>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> >>>>> for each frame.
> >>>>>
> >>>>> Can we simplify the control to report the min and max values the sensor
> >>>>> can do provided the VBLANK setting allows it, and just reject any value
> >>>>> at set time that do not conform to the current VBLANK setting?
> >>>>
> >>>> One complicating factor here is that some sensors will do vblank stretching
> >>>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> >>>> vblank is set when setting exposure > (frame-length + margin) to still
> >>>> give you the desired exposure.
> >>>
> >>> Interesting. Do you have any pointer to mainline drivers that do that ?
> >>
> >> I'm not talking about the driver stretching the vblank I'm talking
> >> about the hw (the sensor) stretching the vblank. I think that in newer
> >> drivers we stop this from happening by clamping the exposure.
> >>
> >> I'm 100% certain that some hw does this as I've seen this happen.
> >> I can do some digging if you want some specific sensor model examples
> >> where the hw is doing this.
> >
> > I think the fact some sensor supporting this feature not an issue if
> > we operate with algorithms designed to always clamp the shutter time
> > to the frame duration. Hence I would be more interested in knowing
> > which usage model you would consider best for the whole
> > libcamera/libipa handling of exposure/frame duration.
> >
> > Currently, the design is to have the maximum achievable shutter time
> > to be always constrained by the frame duration. Users should take
> > care of explicitly slowing down the camera frame rate in order to
> > achieve a larger shutter time.
> >
> > I think it is justified by the face that the desired exposure is
> > realized by the combined effect of the shutter time and gains, and
> > hence limiting the shutter time won't imply not being able to reach
> > the desired exposure levels.
>
> Right, except that in low light you may need to boost the gain
> to a level where you get significant noise and at some point
> the gain is mostly just amplifying noise (discussion continued
> below).
>
> > Alternative approaches where the exposure time drives the frame
> > duration could be implemented with helpers, similar to the already
> > discussed "aperture priority" vs "shutter priority" modes.
>
> Right, I do believe that we really need these modes, at least
> for laptops. Unlike the RPI cameras which have pretty nice large
> lenses. The laptop designs are optimized for small bezels and
> thus small lenses which means not a whole lot of light, which
> really starts to bite at low light conditions.
>
> And I've seen both Intel's proprietary IPU6 stack as well as
> many UVC webcams lower the framerate to compensate (somewhat).
>
> I'm fine with defaulting to never allowing the need for
> higher exposure to lower the framerate, but as you say I do
> believe we need a control to allow the AGC lowering the framerate
> to get larger exposure times. And I think those controls should
> likely even default to on devices with small lenses, but first
> we need to have / implement  such an exposure-mode control to begin
> with :)

I believe the current FrameDurationLimits will allow you to achieve
everything you need here.  From the control definition:

  - FrameDurationLimits:
      type: int64_t
      direction: inout
      description: |
        The minimum and maximum (in that order) frame duration, expressed in
        microseconds.

        When provided by applications, the control specifies the sensor frame
        duration interval the pipeline has to use. This limits the largest
        exposure time the sensor can use. For example, if a maximum frame
        duration of 33ms is requested (corresponding to 30 frames per second),
        the sensor will not be able to raise the exposure time above 33ms.
        A fixed frame duration is achieved by setting the minimum and maximum
        values to be the same. Setting both values to 0 reverts to using the
        camera defaults.

        The maximum frame duration provides the absolute limit to the exposure
        time computed by the AE algorithm and it overrides any exposure mode
        setting specified with controls::AeExposureMode. Similarly, when a
        manual exposure time is set through controls::ExposureTime, it also
        gets clipped to the limits set by this control. When reported in
        metadata, the control expresses the minimum and maximum frame durations
        used after being clipped to the sensor provided frame duration limits.

So this control takes a min and max value for the IPA to use.  If you
specify, say, min = (1/30)s and max = (1/15)s, this should allow the
AGC to increase the shutter time up to 66ms if the scene conditions
require it.  This is the mechanism RPi uses to let the AEC increase
the exposure without ramping up the analogue gain, if required.
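
As a concrete sketch of that usage, an fps range maps onto the
microsecond pair the control expects as follows. The control is real;
the conversion helper below is just an illustration, not part of the
libcamera API:

```cpp
#include <array>
#include <cstdint>

/*
 * FrameDurationLimits takes { min, max } frame durations in microseconds.
 * A { maxFps, minFps } frame-rate range maps to it as below: the highest
 * frame rate gives the shortest frame duration and vice versa.
 */
std::array<int64_t, 2> frameDurationLimits(double maxFps, double minFps)
{
	return {
		static_cast<int64_t>(1e6 / maxFps), /* shortest frame */
		static_cast<int64_t>(1e6 / minFps), /* longest frame */
	};
}
```

frameDurationLimits(30.0, 15.0) yields { 33333, 66666 }, i.e. the AGC
may stretch the shutter time up to roughly 66 ms when the scene needs
it.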

Regards,
Naush

>
> > What we have to make sure is the controls we define to driver the AGC
> > allows us to create such helpers easily.
> >
> > Do you think we're on the same page here ?
>
> Yes I believe that we are on the same page here.
>
> Regards,
>
> Hans
>
>
>
>
Hans de Goede Nov. 10, 2025, 9:40 a.m. UTC | #21
Hi Naush,

On 10-Nov-25 10:05, Naushir Patuck wrote:
> Hi Hans,
> 
> On Fri, 7 Nov 2025 at 16:41, Hans de Goede
> <johannes.goede@oss.qualcomm.com> wrote:
>>
>> Hi Jacopo,
>>
>> On 6-Nov-25 9:29 AM, Jacopo Mondi wrote:
>>> Hi Hans
>>>
>>> On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
>>>> Hi Jacopo,
>>>>
>>>> On 4-Nov-25 16:53, Jacopo Mondi wrote:
>>>>> Hi Hans
>>>>>
>>>>> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
>>>>>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
>>>>>>
>>>>>> ...
>>>>>>
>>>>>>>> Right I forgot about the delayed-controls stuff, not being able
>>>>>>>> to get the new exposure limits from the kernel then makes sense.
>>>>>>>
>>>>>>> I too understand why it's easier to not consult the kernel for the new
>>>>>>> limits. I have no real strong feeling that the correct solution is to
>>>>>>> have this limit calculation done in the kernel or user-space. What I do
>>>>>>> feel somewhat strongly about is if we do the limit calculation in
>>>>>>> user-space we should never consult the kernel for the limits.
>>>>>>>
>>>>>>> This just mixes two ways of doing the same thing and is very confusing
>>>>>>> when reading the code. So if doing this in user-space is the way forward
>>>>>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
>>>>>>> in IPARkISP1::configure() and instead use this method there too.
>>>>>>>
>>>>>>> I also feel, a little less strongly as it's a big work item, that it's
>>>>>>> really wrong to duplicate the calculation both in kernel and user-space.
>>>>>>>
>>>>>>> As this IMHO exposes a weakness in the V4L2 API we should think about
>>>>>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
>>>>>>> we really want it to report its min/max values based on the current
>>>>>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
>>>>>>> for each frame.
>>>>>>>
>>>>>>> Can we simplify the control to report the min and max values the sensor
>>>>>>> can do provided the VBLANK setting allows it, and just reject any value
>>>>>>> at set time that do not conform to the current VBLANK setting?
>>>>>>
>>>>>> One complicating factor here is that some sensors will do vblank stretching
>>>>>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
>>>>>> vblank is set when setting exposure > (frame-length + margin) to still
>>>>>> give you the desired exposure.
>>>>>
>>>>> Interesting. Do you have any pointer to mainline drivers that do that ?
>>>>
>>>> I'm not talking about the driver stretching the vblank I'm talking
>>>> about the hw (the sensor) stretching the vblank. I think that in newer
>>>> drivers we stop this from happening by clamping the exposure.
>>>>
>>>> I'm 100% certain that some hw does this as I've seen this happen.
>>>> I can do some digging if you want some specific sensor model examples
>>>> where the hw is doing this.
>>>
>>> I think the fact some sensor supporting this feature not an issue if
>>> we operate with algorithms designed to always clamp the shutter time
>>> to the frame duration. Hence I would be more interested in knowing
>>> which usage model you would consider best for the whole
>>> libcamera/libipa handling of exposure/frame duration.
>>>
>>> Currently, the design is to have the maximum achievable shutter time
>>> to be always constrained by the frame duration. Users should take
>>> care of explicitly slowing down the camera frame rate in order to
>>> achieve a larger shutter time.
>>>
>>> I think it is justified by the face that the desired exposure is
>>> realized by the combined effect of the shutter time and gains, and
>>> hence limiting the shutter time won't imply not being able to reach
>>> the desired exposure levels.
>>
>> Right, except that in low light you may need to boost the gain
>> to a level where you get significant noise and at some point
>> the gain is mostly just amplifying noise (discussion continued
>> below).
>>
>>> Alternative approaches where the exposure time drives the frame
>>> duration could be implemented with helpers, similar to the already
>>> discussed "aperture priority" vs "shutter priority" modes.
>>
>> Right, I do believe that we really need these modes, at least
>> for laptops. Unlike the RPI cameras which have pretty nice large
>> lenses. The laptop designs are optimized for small bezels and
>> thus small lenses which means not a whole lot of light, which
>> really starts to bite at low light conditions.
>>
>> And I've seen both Intel's proprietary IPU6 stack as well as
>> many UVC webcams lower the framerate to compensate (somewhat).
>>
>> I'm fine with defaulting to never allowing the need for
>> higher exposure to lower the framerate, but as you say I do
>> believe we need a control to allow the AGC lowering the framerate
>> to get larger exposure times. And I think those controls should
>> likely even default to on devices with small lenses, but first
>> we need to have / implement  such an exposure-mode control to begin
>> with :)
> 
> I believe the current FrameDurationLimits will allow you to achieve
> everything you need here.  From the control definition:
> 
>   - FrameDurationLimits:
>       type: int64_t
>       direction: inout
>       description: |
>         The minimum and maximum (in that order) frame duration, expressed in
>         microseconds.
> 
>         When provided by applications, the control specifies the sensor frame
>         duration interval the pipeline has to use. This limits the largest
>         exposure time the sensor can use. For example, if a maximum frame
>         duration of 33ms is requested (corresponding to 30 frames per second),
>         the sensor will not be able to raise the exposure time above 33ms.
>         A fixed frame duration is achieved by setting the minimum and maximum
>         values to be the same. Setting both values to 0 reverts to using the
>         camera defaults.
> 
>         The maximum frame duration provides the absolute limit to the exposure
>         time computed by the AE algorithm and it overrides any exposure mode
>         setting specified with controls::AeExposureMode. Similarly, when a
>         manual exposure time is set through controls::ExposureTime, it also
>         gets clipped to the limits set by this control. When reported in
>         metadata, the control expresses the minimum and maximum frame durations
>         used after being clipped to the sensor provided frame duration limits.
> 
> So this control takes a min and max value to use by the IPA.  If you
> specify, say, min = (1/30)s and max = (1/15)s, this should limit the
> AGC to increase the shutter time upto 66ms if the scene conditions
> require.  This is the mechanism RPi uses to allow AEC to increase the
> exposure without ramping up the analogue gain, if required.

Right, this works if you are in full control of the applications and thus
of the framerate range the application requests.

But when using e.g. browsers for video conferencing they will just ask
for a fixed framerate. I'm not even sure if the getUserMedia web APIs
used by webrtc in-browser apps support a framerate range. So we may want to
have another mechanism to allow lowering the framerate in low
light conditions, which on UVC cams at least is typically done through
an aperture priority exposure mode.

But I see that this would go against the description of the current
FrameDurationLimits control. So maybe we need to have pipewire always
request an fps range of (desired-fps / 2) - desired-fps. Even then
it would still be nice to have a control to tell the AGC whether it
should optimize for keeping the fps as high as possible (only lowering
the fps if it would otherwise need a gain of, say, above 4.0) or
whether it should try to keep the gain as low as possible and
immediately start lowering the fps if necessary.
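
The (desired-fps / 2) - desired-fps idea amounts to the mapping below.
This is only a sketch of the proposal; no such helper exists in
pipewire today:

```cpp
#include <array>
#include <cstdint>

/*
 * Map a single desired frame rate to the { min, max } frame-duration pair
 * (in microseconds) suggested above: run at the desired fps when there is
 * enough light, but let the AGC drop to half that rate if needed.
 */
std::array<int64_t, 2> relaxedDurationLimits(double desiredFps)
{
	int64_t desiredUs = static_cast<int64_t>(1e6 / desiredFps);

	/* Half the frame rate means double the frame duration. */
	return { desiredUs, 2 * desiredUs };
}
```

For a 30 fps request this gives { 33333, 66666 }, the same range as
the explicit 30-15 fps example above.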

Regards,

Hans
Naushir Patuck Nov. 10, 2025, 9:57 a.m. UTC | #22
Hi Hans,

On Mon, 10 Nov 2025 at 09:40, Hans de Goede
<johannes.goede@oss.qualcomm.com> wrote:
>
> Hi Naush,
>
> On 10-Nov-25 10:05, Naushir Patuck wrote:
> > Hi Hans,
> >
> > On Fri, 7 Nov 2025 at 16:41, Hans de Goede
> > <johannes.goede@oss.qualcomm.com> wrote:
> >>
> >> Hi Jacopo,
> >>
> >> On 6-Nov-25 9:29 AM, Jacopo Mondi wrote:
> >>> Hi Hans
> >>>
> >>> On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
> >>>> Hi Jacopo,
> >>>>
> >>>> On 4-Nov-25 16:53, Jacopo Mondi wrote:
> >>>>> Hi Hans
> >>>>>
> >>>>> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> >>>>>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> >>>>>>
> >>>>>> ...
> >>>>>>
> >>>>>>>> Right I forgot about the delayed-controls stuff, not being able
> >>>>>>>> to get the new exposure limits from the kernel then makes sense.
> >>>>>>>
> >>>>>>> I too understand why it's easier to not consult the kernel for the new
> >>>>>>> limits. I have no real strong feeling that the correct solution is to
> >>>>>>> have this limit calculation done in the kernel or user-space. What I do
> >>>>>>> feel somewhat strongly about is if we do the limit calculation in
> >>>>>>> user-space we should never consult the kernel for the limits.
> >>>>>>>
> >>>>>>> This just mixes two ways of doing the same thing and is very confusing
> >>>>>>> when reading the code. So if doing this in user-space is the way forward
> >>>>>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> >>>>>>> in IPARkISP1::configure() and instead use this method there too.
> >>>>>>>
> >>>>>>> I also feel, a little less strongly as it's a big work item, that it's
> >>>>>>> really wrong to duplicate the calculation both in kernel and user-space.
> >>>>>>>
> >>>>>>> As this IMHO exposes a weakness in the V4L2 API we should think about
> >>>>>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> >>>>>>> we really want it to report its min/max values based on the current
> >>>>>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> >>>>>>> for each frame.
> >>>>>>>
> >>>>>>> Can we simplify the control to report the min and max values the sensor
> >>>>>>> can do provided the VBLANK setting allows it, and just reject any value
> >>>>>>> at set time that do not conform to the current VBLANK setting?
> >>>>>>
> >>>>>> One complicating factor here is that some sensors will do vblank stretching
> >>>>>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> >>>>>> vblank is set when setting exposure > (frame-length + margin) to still
> >>>>>> give you the desired exposure.
> >>>>>
> >>>>> Interesting. Do you have any pointer to mainline drivers that do that ?
> >>>>
> >>>> I'm not talking about the driver stretching the vblank I'm talking
> >>>> about the hw (the sensor) stretching the vblank. I think that in newer
> >>>> drivers we stop this from happening by clamping the exposure.
> >>>>
> >>>> I'm 100% certain that some hw does this as I've seen this happen.
> >>>> I can do some digging if you want some specific sensor model examples
> >>>> where the hw is doing this.
> >>>
> >>> I think the fact some sensor supporting this feature not an issue if
> >>> we operate with algorithms designed to always clamp the shutter time
> >>> to the frame duration. Hence I would be more interested in knowing
> >>> which usage model you would consider best for the whole
> >>> libcamera/libipa handling of exposure/frame duration.
> >>>
> >>> Currently, the design is to have the maximum achievable shutter time
> >>> to be always constrained by the frame duration. Users should take
> >>> care of explicitly slowing down the camera frame rate in order to
> >>> achieve a larger shutter time.
> >>>
> >>> I think it is justified by the face that the desired exposure is
> >>> realized by the combined effect of the shutter time and gains, and
> >>> hence limiting the shutter time doesn't prevent reaching the desired
> >>> exposure levels.
> >>
> >> Right, except that in low light you may need to boost the gain
> >> to a level where you get significant noise and at some point
> >> the gain is mostly just amplifying noise (discussion continued
> >> below).
> >>
> >>> Alternative approaches where the exposure time drives the frame
> >>> duration could be implemented with helpers, similar to the already
> >>> discussed "aperture priority" vs "shutter priority" modes.
> >>
> >> Right, I do believe that we really need these modes, at least
> >> for laptops. Unlike the RPI cameras which have pretty nice large
> >> lenses. The laptop designs are optimized for small bezels and
> >> thus small lenses which means not a whole lot of light, which
> >> really starts to bite at low light conditions.
> >>
> >> And I've seen both Intel's proprietary IPU6 stack as well as
> >> many UVC webcams lower the framerate to compensate (somewhat).
> >>
> >> I'm fine with defaulting to never allowing the need for
> >> higher exposure to lower the framerate, but as you say I do
> >> believe we need a control to allow the AGC to lower the framerate
> >> to get larger exposure times. And I think those controls should
> >> likely even default to on devices with small lenses, but first
> >> we need to have / implement such an exposure-mode control to begin
> >> with :)
> >
> > I believe the current FrameDurationLimits will allow you to achieve
> > everything you need here.  From the control definition:
> >
> >   - FrameDurationLimits:
> >       type: int64_t
> >       direction: inout
> >       description: |
> >         The minimum and maximum (in that order) frame duration, expressed in
> >         microseconds.
> >
> >         When provided by applications, the control specifies the sensor frame
> >         duration interval the pipeline has to use. This limits the largest
> >         exposure time the sensor can use. For example, if a maximum frame
> >         duration of 33ms is requested (corresponding to 30 frames per second),
> >         the sensor will not be able to raise the exposure time above 33ms.
> >         A fixed frame duration is achieved by setting the minimum and maximum
> >         values to be the same. Setting both values to 0 reverts to using the
> >         camera defaults.
> >
> >         The maximum frame duration provides the absolute limit to the exposure
> >         time computed by the AE algorithm and it overrides any exposure mode
> >         setting specified with controls::AeExposureMode. Similarly, when a
> >         manual exposure time is set through controls::ExposureTime, it also
> >         gets clipped to the limits set by this control. When reported in
> >         metadata, the control expresses the minimum and maximum frame durations
> >         used after being clipped to the sensor provided frame duration limits.
> >
> > So this control takes a min and max value to use by the IPA.  If you
> > specify, say, min = (1/30)s and max = (1/15)s, this should limit the
> > AGC to increase the shutter time up to 66ms if the scene conditions
> > require.  This is the mechanism RPi uses to allow AEC to increase the
> > exposure without ramping up the analogue gain, if required.
>
> Right, this works if you are in full control of the applications and thus
> of the framerate range the application requests.
>
> But when using e.g. browsers for video conferencing they will just ask
> for a fixed framerate. I'm not even sure if the getUserMedia web APIs
> used by webrtc in-browser apps support a framerate range. So we may want to
> have another mechanism to allow lowering the framerate in low
> light conditions, which on UVC cams at least is typically done through
> an aperture priority exposure mode.

If the application/pipewire has set a fixed framerate to use, I don't
think the IPA should do anything other than run at this framerate,
regardless of lighting conditions.  But perhaps this is not what you
mean here?
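[Editorial note: the mechanism described above, where a FrameDurationLimits range of (1/30 s, 1/15 s) lets the AGC stretch the exposure, amounts to clamping the AGC's requested exposure time to the currently allowed maximum frame duration. A minimal sketch follows; the helper name and the 100 us readout margin are illustrative assumptions, not libcamera API:]

```python
def clamp_exposure(requested_us, frame_duration_limits_us, margin_us=100):
    """Clamp an AGC-requested exposure time to the maximum frame
    duration, minus an assumed sensor-specific readout margin."""
    _, max_frame_duration = frame_duration_limits_us
    return min(requested_us, max_frame_duration - margin_us)

# FrameDurationLimits of (1/30 s, 1/15 s), expressed in microseconds.
limits = (33333, 66666)
# Bright scene: the requested exposure fits and passes through.
bright = clamp_exposure(10000, limits)   # 10000
# Dark scene: the request is capped near the 66 ms frame duration,
# and the remaining brightness deficit has to come from gain.
dark = clamp_exposure(80000, limits)     # 66566
```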

>
> But I see from the description of the current FrameDurationLimits control that
> this would go against that description. So maybe we need to have pipewire
> always request a fps range of (desired-fps / 2) - desired-fps. Even then
> it would still be nice to have a control to tell the AGC if it should
> optimize for getting as much fps as possible (only lowering the fps
> if it would otherwise need a gain of, say, above 4.0) or if it should try to
> keep the gain as low as possible and immediately start lowering fps
> if necessary.

This makes sense, and the way we achieve this is through the "exposure
profiles" tuning parameters.  These define how the AGC algorithm splits
the required exposure into the shutter time and gain provided to the
sensor.  I believe libipa has exposure profile handling as well.
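[Editorial note: the exposure-profile idea can be sketched as a staged split, where each stage caps the shutter time and gain absorbs the remainder until the next stage is needed. This is only an illustration of the concept; the stage values below are made up and this is not the actual RPi or libipa tuning format:]

```python
def split_exposure(target_us, stages):
    """Split a total required exposure (shutter * gain, expressed in
    gain-weighted microseconds) across (max_shutter_us, max_gain)
    stages, preferring shutter and raising gain only as needed."""
    for max_shutter, max_gain in stages:
        if target_us <= max_shutter * max_gain:
            # Fill the shutter first, then make up the rest with gain.
            shutter = min(max_shutter, target_us)
            gain = max(1.0, target_us / shutter)
            return shutter, gain
    # Saturated: run the last stage flat out.
    max_shutter, max_gain = stages[-1]
    return max_shutter, max_gain

# Illustrative stages: up to 10 ms shutter at gain <= 2, then up to
# 33 ms shutter at gain <= 8.
stages = [(10000, 2.0), (33000, 8.0)]
shutter, gain = split_exposure(5000, stages)    # (5000, 1.0)
shutter, gain = split_exposure(40000, stages)   # shutter maxed at 33000
```

Changing the stage boundaries is what lets a tuning favour either high fps (hold the shutter short, spend gain early) or low noise (stretch the shutter first, keep gain near 1.0).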

Regards,
Naush


>
> Regards,
>
> Hans
>
David Plowman Nov. 10, 2025, 10:20 a.m. UTC | #23
Hi everyone

On Mon, 10 Nov 2025 at 09:58, Naushir Patuck <naush@raspberrypi.com> wrote:
>
> Hi Hans,
>
> On Mon, 10 Nov 2025 at 09:40, Hans de Goede
> <johannes.goede@oss.qualcomm.com> wrote:
> >
> > Hi Naush,
> >
> > On 10-Nov-25 10:05, Naushir Patuck wrote:
> > > Hi Hans,
> > >
> > > On Fri, 7 Nov 2025 at 16:41, Hans de Goede
> > > <johannes.goede@oss.qualcomm.com> wrote:
> > >>
> > >> Hi Jacopo,
> > >>
> > >> On 6-Nov-25 9:29 AM, Jacopo Mondi wrote:
> > >>> Hi Hans
> > >>>
> > >>> On Tue, Nov 04, 2025 at 06:23:23PM +0100, Hans de Goede wrote:
> > >>>> Hi Jacopo,
> > >>>>
> > >>>> On 4-Nov-25 16:53, Jacopo Mondi wrote:
> > >>>>> Hi Hans
> > >>>>>
> > >>>>> On Mon, Nov 03, 2025 at 10:01:15AM +0100, Hans de Goede wrote:
> > >>>>>> Hi,
> > >>>>>>
> > >>>>>> On 28-Oct-25 4:38 PM, Niklas Söderlund wrote:
> > >>>>>>> On 2025-10-28 15:48:58 +0100, Hans de Goede wrote:
> > >>>>>>
> > >>>>>> ...
> > >>>>>>
> > >>>>>>>> Right I forgot about the delayed-controls stuff, not being able
> > >>>>>>>> to get the new exposure limits from the kernel then makes sense.
> > >>>>>>>
> > >>>>>>> I too understand why it's easier to not consult the kernel for the new
> > >>>>>>> limits. I have no real strong feeling that the correct solution is to
> > >>>>>>> have this limit calculation done in the kernel or user-space. What I do
> > >>>>>>> feel somewhat strongly about is if we do the limit calculation in
> > >>>>>>> user-space we should never consult the kernel for the limits.
> > >>>>>>>
> > >>>>>>> This just mixes two ways of doing the same thing and is very confusing
> > >>>>>>> when reading the code. So if doing this in user-space is the way forward
> > >>>>>>> I think we should drop the reading of min and max of V4L2_CID_EXPOSURE
> > >>>>>>> in IPARkISP1::configure() and instead use this method there too.
> > >>>>>>>
> > >>>>>>> I also feel, a little less strongly as it's a big work item, that it's
> > >>>>>>> really wrong to duplicate the calculation both in kernel and user-space.
> > >>>>>>>
> > >>>>>>> As this IMHO exposes a weakness in the V4L2 API we should think about
> > >>>>>>> how we want the V4L2_CID_EXPOSURE control to report min/max values. Do
> > >>>>>>> we really want it to report its min/max values based on the current
> > >>>>>>> V4L2_CID_VBLANK setting as it's too expensive to query it with an IOCTL
> > >>>>>>> for each frame.
> > >>>>>>>
> > >>>>>>> Can we simplify the control to report the min and max values the sensor
> > >>>>>>> can do provided the VBLANK setting allows it, and just reject any value
> > >>>>>>> at set time that do not conform to the current VBLANK setting?
> > >>>>>>
> > >>>>>> One complicating factor here is that some sensors will do vblank stretching
> > >>>>>> when you ask for a higher exposure. IOW some sensors behave as if a bigger
> > >>>>>> vblank is set when setting exposure > (frame-length + margin) to still
> > >>>>>> give you the desired exposure.
> > >>>>>
> > >>>>> Interesting. Do you have any pointer to mainline drivers that do that ?
> > >>>>
> > >>>> I'm not talking about the driver stretching the vblank I'm talking
> > >>>> about the hw (the sensor) stretching the vblank. I think that in newer
> > >>>> drivers we stop this from happening by clamping the exposure.
> > >>>>
> > >>>> I'm 100% certain that some hw does this as I've seen this happen.
> > >>>> I can do some digging if you want some specific sensor model examples
> > >>>> where the hw is doing this.
> > >>>
> > >>> I think the fact that some sensors support this feature is not an issue if
> > >>> we operate with algorithms designed to always clamp the shutter time
> > >>> to the frame duration. Hence I would be more interested in knowing
> > >>> which usage model you would consider best for the whole
> > >>> libcamera/libipa handling of exposure/frame duration.
> > >>>
> > >>> Currently, the design is to have the maximum achievable shutter time
> > >>> to be always constrained by the frame duration. Users should take
> > >>> care of explicitly slowing down the camera frame rate in order to
> > >>> achieve a larger shutter time.
> > >>>
> > >>> I think it is justified by the fact that the desired exposure is
> > >>> realized by the combined effect of the shutter time and gains, and
> > >>> hence limiting the shutter time doesn't prevent reaching the desired
> > >>> exposure levels.
> > >>
> > >> Right, except that in low light you may need to boost the gain
> > >> to a level where you get significant noise and at some point
> > >> the gain is mostly just amplifying noise (discussion continued
> > >> below).
> > >>
> > >>> Alternative approaches where the exposure time drives the frame
> > >>> duration could be implemented with helpers, similar to the already
> > >>> discussed "aperture priority" vs "shutter priority" modes.
> > >>
> > >> Right, I do believe that we really need these modes, at least
> > >> for laptops. Unlike the RPI cameras which have pretty nice large
> > >> lenses. The laptop designs are optimized for small bezels and
> > >> thus small lenses which means not a whole lot of light, which
> > >> really starts to bite at low light conditions.
> > >>
> > >> And I've seen both Intel's proprietary IPU6 stack as well as
> > >> many UVC webcams lower the framerate to compensate (somewhat).
> > >>
> > >> I'm fine with defaulting to never allowing the need for
> > >> higher exposure to lower the framerate, but as you say I do
> > >> believe we need a control to allow the AGC to lower the framerate
> > >> to get larger exposure times. And I think those controls should
> > >> likely even default to on devices with small lenses, but first
> > >> we need to have / implement such an exposure-mode control to begin
> > >> with :)
> > >
> > > I believe the current FrameDurationLimits will allow you to achieve
> > > everything you need here.  From the control definition:
> > >
> > >   - FrameDurationLimits:
> > >       type: int64_t
> > >       direction: inout
> > >       description: |
> > >         The minimum and maximum (in that order) frame duration, expressed in
> > >         microseconds.
> > >
> > >         When provided by applications, the control specifies the sensor frame
> > >         duration interval the pipeline has to use. This limits the largest
> > >         exposure time the sensor can use. For example, if a maximum frame
> > >         duration of 33ms is requested (corresponding to 30 frames per second),
> > >         the sensor will not be able to raise the exposure time above 33ms.
> > >         A fixed frame duration is achieved by setting the minimum and maximum
> > >         values to be the same. Setting both values to 0 reverts to using the
> > >         camera defaults.
> > >
> > >         The maximum frame duration provides the absolute limit to the exposure
> > >         time computed by the AE algorithm and it overrides any exposure mode
> > >         setting specified with controls::AeExposureMode. Similarly, when a
> > >         manual exposure time is set through controls::ExposureTime, it also
> > >         gets clipped to the limits set by this control. When reported in
> > >         metadata, the control expresses the minimum and maximum frame durations
> > >         used after being clipped to the sensor provided frame duration limits.
> > >
> > > So this control takes a min and max value to use by the IPA.  If you
> > > specify, say, min = (1/30)s and max = (1/15)s, this should limit the
> > > AGC to increase the shutter time up to 66ms if the scene conditions
> > > require.  This is the mechanism RPi uses to allow AEC to increase the
> > > exposure without ramping up the analogue gain, if required.
> >
> > Right, this works if you are in full control of the applications and thus
> > of the framerate range the application requests.
> >
> > But when using e.g. browsers for video conferencing they will just ask
> > for a fixed framerate. I'm not even sure if the getUserMedia web APIs
> > used by webrtc in-browser apps support a framerate range. So we may want to
> > have another mechanism to allow lowering the framerate in low
> > light conditions, which on UVC cams at least is typically done through
> > an aperture priority exposure mode.
>
> If the application/pipewire has set a fixed framerate to use, I don't
> think the IPA should do anything other than run at this framerate,
> regardless of lighting conditions.  But perhaps this is not what you
> mean here?
>
> >
> > But I see from the description of the current FrameDurationLimits control that
> > this would go against that description. So maybe we need to have pipewire
> > always request a fps range of (desired-fps / 2) - desired-fps. Even then
> > it would still be nice to have a control to tell the AGC if it should
> > optimize for getting as much fps as possible (only lowering the fps
> > if it would otherwise need a gain of, say, above 4.0) or if it should try to
> > keep the gain as low as possible and immediately start lowering fps
> > if necessary.
>
> This makes sense, and the way we achieve this is through the "exposure
> profiles" tuning parameters.  These define how the AGC algorithm splits
> the required exposure into the shutter time and gain provided to the
> sensor.  I believe libipa has exposure profile handling as well.

Personally, I've always been of the opinion that there's a bigger issue here.

For example, a very common use case might be video conferencing. But
on a Raspberry Pi, when using particular sensors, we might have strong
recommendations on how the camera system should be configured. It
might include "use this AGC mode", "activate HDR", "use a higher
default EV" and so on. How is Pipewire to know about this? Either it
has some quite sophisticated tables of parameters and settings, which
feels a bit undesirable to me. Or it can simply tell libcamera a
"high-level" use case, such as "I am doing a video conference", and
then libcamera (or our pipeline handler underneath) deals with it.

What do people think?

David

>
> Regards,
> Naush
>
>
> >
> > Regards,
> >
> > Hans
> >