[libcamera-devel,v3,1/3] ipa: raspberrypi: Make CamHelper exposure methods virtual

Message ID 20210414102955.9503-2-david.plowman@raspberrypi.com
State Superseded
Series
  • Raspberry Pi: handle sensors more flexibly

Commit Message

David Plowman April 14, 2021, 10:29 a.m. UTC
This allows derived classes to override them if they have any special
behaviours to implement. For instance if a particular camera mode
produces a different signal level to other modes, you might choose to
address that in the gain or exposure methods.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
---
 src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Kieran Bingham April 14, 2021, 10:49 a.m. UTC | #1
Hi David,

On 14/04/2021 11:29, David Plowman wrote:
> This allows derived classes to override them if they have any special
> behaviours to implement. For instance if a particular camera mode
> produces a different signal level to other modes, you might choose to
> address that in the gain or exposure methods.
> 
> Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> Reviewed-by: Naushir Patuck <naush@raspberrypi.com>

This isn't actually used yet, is it?
But I don't see any harm in it, given that's the point of the cam helper
interface.

Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

> ---
>  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> index 1b2d6eec..4053a870 100644
> --- a/src/ipa/raspberrypi/cam_helper.hpp
> +++ b/src/ipa/raspberrypi/cam_helper.hpp
> @@ -66,8 +66,8 @@ public:
>  	virtual ~CamHelper();
>  	void SetCameraMode(const CameraMode &mode);
>  	MdParser &Parser() const { return *parser_; }
> -	uint32_t ExposureLines(double exposure_us) const;
> -	double Exposure(uint32_t exposure_lines) const; // in us
> +	virtual uint32_t ExposureLines(double exposure_us) const;
> +	virtual double Exposure(uint32_t exposure_lines) const; // in us
>  	virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
>  				      double maxFrameDuration) const;
>  	virtual uint32_t GainCode(double gain) const = 0;
>
Laurent Pinchart April 16, 2021, 9:56 p.m. UTC | #2
Hi David,

Thank you for the patch.

On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> This allows derived classes to override them if they have any special
> behaviours to implement. For instance if a particular camera mode
> produces a different signal level to other modes, you might choose to
> address that in the gain or exposure methods.

For my information, would you be able to give an example of how this
would be used ? What kind of particular camera mode would produce a
different signal level, is this related to binning, or something else ?

> Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> Reviewed-by: Naushir Patuck <naush@raspberrypi.com>

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> ---
>  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> index 1b2d6eec..4053a870 100644
> --- a/src/ipa/raspberrypi/cam_helper.hpp
> +++ b/src/ipa/raspberrypi/cam_helper.hpp
> @@ -66,8 +66,8 @@ public:
>  	virtual ~CamHelper();
>  	void SetCameraMode(const CameraMode &mode);
>  	MdParser &Parser() const { return *parser_; }
> -	uint32_t ExposureLines(double exposure_us) const;
> -	double Exposure(uint32_t exposure_lines) const; // in us
> +	virtual uint32_t ExposureLines(double exposure_us) const;
> +	virtual double Exposure(uint32_t exposure_lines) const; // in us
>  	virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
>  				      double maxFrameDuration) const;
>  	virtual uint32_t GainCode(double gain) const = 0;
David Plowman April 17, 2021, 8:19 a.m. UTC | #3
Hi Laurent

Thanks for the comments and questions!

On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hi David,
>
> Thank you for the patch.
>
> On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > This allows derived classes to override them if they have any special
> > behaviours to implement. For instance if a particular camera mode
> > produces a different signal level to other modes, you might choose to
> > address that in the gain or exposure methods.
>
> For my information, would you be able to give an example of how this
> would be used ? What kind of particular camera mode would produce a
> different signal level, is this related to binning, or something else ?

Yes, so the sensor I'm looking at produces twice the signal level in
the binning modes as it does in the full resolution mode. I think
this difference would be awkward for applications, which means we have
to hide it somewhere.

One of my earlier emails talked a little bit about hiding this in the
AGC. You'd have to know a "base gain" for every mode (so 1 for full
res, 2 for binned), and then have a conversion step where the public
exposure/gain numbers (which are the same for all modes) are
translated into and out of the real exposure/gain values that the AGC
and IPA deal with. But I'm really not sure I want to go there,
at least right now!

I think this is a pragmatic alternative. You can already hide the
difference by changing the CamHelper's Gain and GainCode functions, to
apply the magic extra factor of 2 when necessary. This change here
makes it possible to apply that factor in the exposure instead - you
could imagine someone wanting the lowest noise possible and accepting
that the real exposure will be double (I wouldn't want to double
exposures without people explicitly making that choice). And finally,
this change doesn't make it any more difficult to do the compensation
in the AGC at a later date, should we wish.

In the CamHelper for my new sensor, I even make it easy to mix the
factor of 2 across both gain and exposure, so you could, for example,
have sqrt(2) in each.
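As a rough illustration of that split (the class names, line time and
gain model below are made up for the sketch, not the real CamHelper
code), a derived helper could hide sqrt(2) of the factor in the
exposure methods, with the other sqrt(2) going into Gain/GainCode:

```cpp
#include <cstdint>

// Minimal stand-in for the two conversions being made virtual; the
// 30us line time is illustrative only.
class CamHelper {
public:
	virtual ~CamHelper() = default;
	virtual uint32_t ExposureLines(double exposureUs) const
	{
		return static_cast<uint32_t>(exposureUs / lineLengthUs_);
	}
	virtual double Exposure(uint32_t exposureLines) const // in us
	{
		return exposureLines * lineLengthUs_;
	}
protected:
	double lineLengthUs_ = 30.0;
};

// Hypothetical helper for a sensor whose binned modes deliver twice
// the signal: absorb sqrt(2) of the factor of 2 in exposure.
class CamHelperBinned : public CamHelper {
public:
	uint32_t ExposureLines(double exposureUs) const override
	{
		// Program the sensor with sqrt(2) less exposure than requested.
		return CamHelper::ExposureLines(exposureUs / kSqrt2);
	}
	double Exposure(uint32_t exposureLines) const override
	{
		// Report the "public" exposure, sqrt(2) above the real one.
		return CamHelper::Exposure(exposureLines) * kSqrt2;
	}
private:
	static constexpr double kSqrt2 = 1.4142135623730951;
};
```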

That got rather long... I hope it was understandable!

David

>
> > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
>
> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>
> > ---
> >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > index 1b2d6eec..4053a870 100644
> > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > @@ -66,8 +66,8 @@ public:
> >       virtual ~CamHelper();
> >       void SetCameraMode(const CameraMode &mode);
> >       MdParser &Parser() const { return *parser_; }
> > -     uint32_t ExposureLines(double exposure_us) const;
> > -     double Exposure(uint32_t exposure_lines) const; // in us
> > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> >                                     double maxFrameDuration) const;
> >       virtual uint32_t GainCode(double gain) const = 0;
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart April 27, 2021, 6:20 a.m. UTC | #4
Hi David,

Sorry for the late reply, catching up with reviews.

On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > This allows derived classes to override them if they have any special
> > > behaviours to implement. For instance if a particular camera mode
> > > produces a different signal level to other modes, you might choose to
> > > address that in the gain or exposure methods.
> >
> > For my information, would you be able to give an example of how this
> > would be used ? What kind of particular camera mode would produce a
> > different signal level, is this related to binning, or something else ?
> 
> Yes, so the sensor I'm looking at produces twice the signal level in
> the binning modes that it does in the full resolution mode. I think
> this difference would be awkward for applications which means we have
> to hide it somewhere.
> 
> One of my earlier emails talked a little bit about hiding this in the
> AGC. You'd have to know a "base gain" for every mode (so 1 for full
> res, 2 for binned), and then have some process where we have public
> exposure/gain numbers (which are the same for all modes) and there's a
> process of conversion into and out of the AGC and IPA which deal with
> real exposure/gain values. But I'm really not sure I want to go there,
> at least right now!
> 
> I think this is a pragmatic alternative. You can already hide the
> difference by changing the CamHelper's Gain and GainCode functions, to
> apply the magic extra factor of 2 when necessary. This change here
> makes it possible to apply that factor in the exposure instead - you
> could imagine someone wanting the lowest noise possible and accepting
> that the real exposure will be double (I wouldn't want to double
> exposures without people explicitly making that choice). And finally,
> this change doesn't make it any more difficult to do the compensation
> in the AGC at a later date, should we wish.
> 
> In the CamHelper for my new sensor, I even make it easy to mix the
> factor of 2 across both gain and exposure, so you could, for example,
> have sqrt(2) in each.
> 
> That got rather long... I hope it was understandable!

Thanks for the information. There may be something that I don't get
though, as I don't see why the exposure needs to be affected. The
exposure time has a big influence on motion blur, so it seems to me that
we shouldn't cheat here. The gain seems to be a better place to hide
this sensor feature. I wonder if we should consider moving from gain to
sensor sensitivity.

> > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> >
> > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> >
> > > ---
> > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > index 1b2d6eec..4053a870 100644
> > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > @@ -66,8 +66,8 @@ public:
> > >       virtual ~CamHelper();
> > >       void SetCameraMode(const CameraMode &mode);
> > >       MdParser &Parser() const { return *parser_; }
> > > -     uint32_t ExposureLines(double exposure_us) const;
> > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > >                                     double maxFrameDuration) const;
> > >       virtual uint32_t GainCode(double gain) const = 0;
David Plowman April 27, 2021, 8:22 a.m. UTC | #5
Hi Laurent

Thanks for the comments.

On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hi David,
>
> Sorry for the late reply, catching up with reviews.
>
> On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > > This allows derived classes to override them if they have any special
> > > > behaviours to implement. For instance if a particular camera mode
> > > > produces a different signal level to other modes, you might choose to
> > > > address that in the gain or exposure methods.
> > >
> > > For my information, would you be able to give an example of how this
> > > would be used ? What kind of particular camera mode would produce a
> > > different signal level, is this related to binning, or something else ?
> >
> > Yes, so the sensor I'm looking at produces twice the signal level in
> > the binning modes that it does in the full resolution mode. I think
> > this difference would be awkward for applications which means we have
> > to hide it somewhere.
> >
> > One of my earlier emails talked a little bit about hiding this in the
> > AGC. You'd have to know a "base gain" for every mode (so 1 for full
> > res, 2 for binned), and then have some process where we have public
> > exposure/gain numbers (which are the same for all modes) and there's a
> > process of conversion into and out of the AGC and IPA which deal with
> > real exposure/gain values. But I'm really not sure I want to go there,
> > at least right now!
> >
> > I think this is a pragmatic alternative. You can already hide the
> > difference by changing the CamHelper's Gain and GainCode functions, to
> > apply the magic extra factor of 2 when necessary. This change here
> > makes it possible to apply that factor in the exposure instead - you
> > could imagine someone wanting the lowest noise possible and accepting
> > that the real exposure will be double (I wouldn't want to double
> > exposures without people explicitly making that choice). And finally,
> > this change doesn't make it any more difficult to do the compensation
> > in the AGC at a later date, should we wish.
> >
> > In the CamHelper for my new sensor, I even make it easy to mix the
> > factor of 2 across both gain and exposure, so you could, for example,
> > have sqrt(2) in each.
> >
> > That got rather long... I hope it was understandable!
>
> Thanks for the information. There may be something that I don't get
> though, as I don't see why the exposure needs to be affected. The
> exposure time has a big influence on motion blur, so it seems to me that
> we shouldn't cheat here. The gain seems to be a better place to hide
> this sensor feature. I wonder if we should consider moving from gain to
> sensor sensitivity.

You're right, exposure doesn't *need* to be affected, and indeed my
default position is to handle the difference using gain instead. The
intention is that people are *able* to keep the gain to a minimum if
they want to, at the cost of changing the exposure - but this has to
be a deliberate choice.

I agree the "sensitivity" is a good idea, and indeed a much better
term than I used above, which was "base gain".

Note that sensitivity is a per-mode thing here. I think it's a good
solution but has some consequences:

- If an application wants to change camera mode and set the exposure,
they're going to have to deal with per-mode sensitivities and adapt
their numbers accordingly. Do we like this? I'm always very nervous of
complicating an application's life!

- You could try and handle per-mode sensitivities either outside the
AGC (effectively that's what this allows) or you could handle it
within the AGC. Whenever the camera mode changes you'd have to adjust
the state variables according to the ratio of old_sensitivity /
new_sensitivity. You'd probably have to use a different exposure
profile, possibly it needs to be set by the application - or could we
do something more automatic? The implementation would also depend on
whether we want applications to have to pay attention to the "per-mode
sensitivities" or not.

So overall, I'm in several minds about going this "per-mode
sensitivity" route and the work it requires on the AGC, as well as the
implications for applications. In the meantime, making the exposure
methods virtual allows these sensors to work and doesn't make a later
step towards per-mode sensitivities any more difficult. You would
simply do that work and then delete your virtual exposure methods.
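For what it's worth, the in-AGC alternative described above (adjusting
the state by the sensitivity ratio whenever the mode changes) might
look something like this; the struct and names are hypothetical, not
the real AGC state:

```cpp
// Hypothetical AGC state; "total exposure" is gain x exposure time.
struct AgcStatus {
	double totalExposureUs;
	double gain;
};

// On a mode switch, scale the state by old_sensitivity / new_sensitivity
// so the delivered brightness is unchanged despite the sensor's
// different response in the new mode.
void adjustForModeSwitch(AgcStatus &status, double oldSensitivity,
			 double newSensitivity)
{
	double ratio = oldSensitivity / newSensitivity;
	status.gain *= ratio;
	status.totalExposureUs *= ratio;
}
```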

Possibly I've repeated myself rather a lot there - but maybe I've
explained it better this time? :)

Thanks
David

>
> > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> > >
> > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > >
> > > > ---
> > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > > index 1b2d6eec..4053a870 100644
> > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > @@ -66,8 +66,8 @@ public:
> > > >       virtual ~CamHelper();
> > > >       void SetCameraMode(const CameraMode &mode);
> > > >       MdParser &Parser() const { return *parser_; }
> > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > > >                                     double maxFrameDuration) const;
> > > >       virtual uint32_t GainCode(double gain) const = 0;
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart April 28, 2021, 12:16 p.m. UTC | #6
Hi David,

On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
> On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
> > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > > > This allows derived classes to override them if they have any special
> > > > > behaviours to implement. For instance if a particular camera mode
> > > > > produces a different signal level to other modes, you might choose to
> > > > > address that in the gain or exposure methods.
> > > >
> > > > For my information, would you be able to give an example of how this
> > > > would be used ? What kind of particular camera mode would produce a
> > > > different signal level, is this related to binning, or something else ?
> > >
> > > Yes, so the sensor I'm looking at produces twice the signal level in
> > > the binning modes that it does in the full resolution mode. I think
> > > this difference would be awkward for applications which means we have
> > > to hide it somewhere.
> > >
> > > One of my earlier emails talked a little bit about hiding this in the
> > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
> > > res, 2 for binned), and then have some process where we have public
> > > exposure/gain numbers (which are the same for all modes) and there's a
> > > process of conversion into and out of the AGC and IPA which deal with
> > > real exposure/gain values. But I'm really not sure I want to go there,
> > > at least right now!
> > >
> > > I think this is a pragmatic alternative. You can already hide the
> > > difference by changing the CamHelper's Gain and GainCode functions, to
> > > apply the magic extra factor of 2 when necessary. This change here
> > > makes it possible to apply that factor in the exposure instead - you
> > > could imagine someone wanting the lowest noise possible and accepting
> > > that the real exposure will be double (I wouldn't want to double
> > > exposures without people explicitly making that choice). And finally,
> > > this change doesn't make it any more difficult to do the compensation
> > > in the AGC at a later date, should we wish.
> > >
> > > In the CamHelper for my new sensor, I even make it easy to mix the
> > > factor of 2 across both gain and exposure, so you could, for example,
> > > have sqrt(2) in each.
> > >
> > > That got rather long... I hope it was understandable!
> >
> > Thanks for the information. There may be something that I don't get
> > though, as I don't see why the exposure needs to be affected. The
> > exposure time has a big influence on motion blur, so it seems to me that
> > we shouldn't cheat here. The gain seems to be a better place to hide
> > this sensor feature. I wonder if we should consider moving from gain to
> > sensor sensitivity.
> 
> You're right, exposure doesn't *need* to be affected, and indeed my
> default position is to handle the difference using gain instead. The
> intention is that people are *able* to keep the gain to a minimum if
> they want to, at the cost of changing the exposure - but this has to
> be a deliberate choice.
>
> I agree the "sensitivity" is a good idea, and indeed a much better
> term than I used above, which was "base gain".
> 
> Note that sensitivity is a per-mode thing here. I think it's a good
> solution but has some consequences:
> 
> - If an application wants to change camera mode and set the exposure
> they're going to have to deal with per-mode sensitivities and adapt
> their numbers accordingly. Do we like this? I'm always very nervous of
> complicating an application's life!
>
> - You could try and handle per-mode sensitivities either outside the
> AGC (effectively that's what this allows) or you could handle it
> within the AGC. Whenever the camera mode changes you'd have to adjust
> the state variables according to the ratio of old_sensitivity /
> new_sensitivity. You'd probably have to use a different exposure
> profile, possibly it needs to be set by the application - or could we
> do something more automatic? The implementation would also depend on
> whether we want applications to have to pay attention to the "per-mode
> sensitivities" or not.
> 
> So overall, I'm in several minds about going this "per-mode
> sensitivity" route and the work it requires on the AGC, as well as the
> implications for applications. In the meantime, making the exposure
> methods virtual allows these sensors to work and doesn't make a later
> step towards per-mode sensitivities any more difficult. You would
> simply do that work and then delete your virtual exposure methods.

I'm sorry, but I still don't see why it requires making the below
functions virtual. They convert between exposure expressed in lines and
exposure expressed as a time, why would that need to be sensor-specific
? I guess I need to see an actual use case first.
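For context, the base-class conversions amount to plain arithmetic on
the mode's line duration, along these lines (a simplified sketch with
nearest-line rounding assumed, not the exact libcamera code):

```cpp
#include <cstdint>

// Exposure in lines vs exposure as a time, related by the mode's line
// duration in microseconds.
static uint32_t exposureLines(double exposureUs, double lineLengthUs)
{
	// Round to the nearest whole line.
	return static_cast<uint32_t>(exposureUs / lineLengthUs + 0.5);
}

static double exposure(uint32_t lines, double lineLengthUs) // in us
{
	return lines * lineLengthUs;
}
```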

> Possibly I've repeated myself rather a lot there - but maybe I've
> explained it better this time? :)
> 
> > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> > > >
> > > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > >
> > > > > ---
> > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > index 1b2d6eec..4053a870 100644
> > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > @@ -66,8 +66,8 @@ public:
> > > > >       virtual ~CamHelper();
> > > > >       void SetCameraMode(const CameraMode &mode);
> > > > >       MdParser &Parser() const { return *parser_; }
> > > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > > > >                                     double maxFrameDuration) const;
> > > > >       virtual uint32_t GainCode(double gain) const = 0;
Naushir Patuck April 28, 2021, 12:35 p.m. UTC | #7
Hi Laurent,

On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart <
laurent.pinchart@ideasonboard.com> wrote:

> Hi David,
>
> On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
> > On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
> > > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> > > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > > > > This allows derived classes to override them if they have any
> special
> > > > > > behaviours to implement. For instance if a particular camera mode
> > > > > > produces a different signal level to other modes, you might
> choose to
> > > > > > address that in the gain or exposure methods.
> > > > >
> > > > > For my information, would you be able to give an example of how
> this
> > > > > would be used ? What kind of particular camera mode would produce a
> > > > > different signal level, is this related to binning, or something
> else ?
> > > >
> > > > Yes, so the sensor I'm looking at produces twice the signal level in
> > > > the binning modes that it does in the full resolution mode. I think
> > > > this difference would be awkward for applications which means we have
> > > > to hide it somewhere.
> > > >
> > > > One of my earlier emails talked a little bit about hiding this in the
> > > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
> > > > res, 2 for binned), and then have some process where we have public
> > > > exposure/gain numbers (which are the same for all modes) and there's
> a
> > > > process of conversion into and out of the AGC and IPA which deal with
> > > > real exposure/gain values. But I'm really not sure I want to go
> there,
> > > > at least right now!
> > > >
> > > > I think this is a pragmatic alternative. You can already hide the
> > > > difference by changing the CamHelper's Gain and GainCode functions,
> to
> > > > apply the magic extra factor of 2 when necessary. This change here
> > > > makes it possible to apply that factor in the exposure instead - you
> > > > could imagine someone wanting the lowest noise possible and accepting
> > > > that the real exposure will be double (I wouldn't want to double
> > > > exposures without people explicitly making that choice). And finally,
> > > > this change doesn't make it any more difficult to do the compensation
> > > > in the AGC at a later date, should we wish.
> > > >
> > > > In the CamHelper for my new sensor, I even make it easy to mix the
> > > > factor of 2 across both gain and exposure, so you could, for example,
> > > > have sqrt(2) in each.
> > > >
> > > > That got rather long... I hope it was understandable!
> > >
> > > Thanks for the information. There may be something that I don't get
> > > though, as I don't see why the exposure needs to be affected. The
> > > exposure time has a big influence on motion blur, so it seems to me
> that
> > > we shouldn't cheat here. The gain seems to be a better place to hide
> > > this sensor feature. I wonder if we should consider moving from gain to
> > > sensor sensitivity.
> >
> > You're right, exposure doesn't *need* to be affected, and indeed my
> > default position is to handle the difference using gain instead. The
> > intention is that people are *able* to keep the gain to a minimum if
> > they want to, at the cost of changing the exposure - but this has to
> > be a deliberate choice.
> >
> > I agree the "sensitivity" is a good idea, and indeed a much better
> > term than I used above, which was "base gain".
> >
> > Note that sensitivity is a per-mode thing here. I think it's a good
> > solution but has some consequences:
> >
> > - If an application wants to change camera mode and set the exposure
> > they're going to have to deal with per-mode sensitivities and adapt
> > their numbers accordingly. Do we like this? I'm always very nervous of
> > complicating an application's life!
> >
> > - You could try and handle per-mode sensitivities either outside the
> > AGC (effectively that's what this allows) or you could handle it
> > within the AGC. Whenever the camera mode changes you'd have to adjust
> > the state variables according to the ratio of old_sensitivity /
> > new_sensitivity. You'd probably have to use a different exposure
> > profile, possibly it needs to be set by the application - or could we
> > do something more automatic? The implementation would also depend on
> > whether we want applications to have to pay attention to the "per-mode
> > sensitivities" or not.
> >
> > So overall, I'm in several minds about going this "per-mode
> > sensitivity" route and the work it requires on the AGC, as well as the
> > implications for applications. In the meantime, making the exposure
> > methods virtual allows these sensors to work and doesn't make a later
> > step towards per-mode sensitivities any more difficult. You would
> > simply do that work and then delete your virtual exposure methods.
>
> I'm sorry, but I still don't see why it requires making the below
> functions virtual. They convert between exposure expressed in lines and
> exposure expressed as a time, why would that need to be sensor-specific
> ? I guess I need to see an actual use case first.
>

This is entirely tangential to David's usage, but I also need these
methods to be virtual so I can override them for the imx477 ultra long
exposure modes.  In those modes, we have some additional scaling to
apply to the standard equation.
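A hypothetical shape for that kind of override (the class name, line
time and scale factor here are invented for illustration, not the real
imx477 numbers):

```cpp
#include <cstdint>

class CamHelper {
public:
	virtual ~CamHelper() = default;
	virtual double Exposure(uint32_t exposureLines) const // in us
	{
		return exposureLines * lineLengthUs_;
	}
protected:
	double lineLengthUs_ = 30.0; // illustrative
};

// Long exposure modes apply extra scaling on top of the standard
// line-time equation; the factor of 16 is made up.
class CamHelperLongExposure : public CamHelper {
public:
	double Exposure(uint32_t exposureLines) const override
	{
		return CamHelper::Exposure(exposureLines) * longExposureScale_;
	}
private:
	double longExposureScale_ = 16.0;
};
```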

Regards,
Naush


>
> > Possibly I've repeated myself rather a lot there - but maybe I've
> > explained it better this time? :)
> >
> > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> > > > >
> > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > > >
> > > > > > ---
> > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp
> b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > index 1b2d6eec..4053a870 100644
> > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > @@ -66,8 +66,8 @@ public:
> > > > > >       virtual ~CamHelper();
> > > > > >       void SetCameraMode(const CameraMode &mode);
> > > > > >       MdParser &Parser() const { return *parser_; }
> > > > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; //
> in us
> > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double
> minFrameDuration,
> > > > > >                                     double maxFrameDuration)
> const;
> > > > > >       virtual uint32_t GainCode(double gain) const = 0;
>
> --
> Regards,
>
> Laurent Pinchart
> _______________________________________________
> libcamera-devel mailing list
> libcamera-devel@lists.libcamera.org
> https://lists.libcamera.org/listinfo/libcamera-devel
>
Laurent Pinchart April 28, 2021, 1:32 p.m. UTC | #8
Hi Naush,

On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
> On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
> > On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
> > > On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
> > > > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> > > > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > > > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > > > > > This allows derived classes to override them if they have any special
> > > > > > > behaviours to implement. For instance if a particular camera mode
> > > > > > > produces a different signal level to other modes, you might choose to
> > > > > > > address that in the gain or exposure methods.
> > > > > >
> > > > > > For my information, would you be able to give an example of how this
> > > > > > would be used ? What kind of particular camera mode would produce a
> > > > > > different signal level, is this related to binning, or something else ?
> > > > >
> > > > > Yes, so the sensor I'm looking at produces twice the signal level in
> > > > > the binning modes that it does in the full resolution mode. I think
> > > > > this difference would be awkward for applications which means we have
> > > > > to hide it somewhere.
> > > > >
> > > > > One of my earlier emails talked a little bit about hiding this in the
> > > > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
> > > > > res, 2 for binned), and then have some process where we have public
> > > > > exposure/gain numbers (which are the same for all modes) and there's a
> > > > > process of conversion into and out of the AGC and IPA which deal with
> > > > > real exposure/gain values. But I'm really not sure I want to go there,
> > > > > at least right now!
> > > > >
> > > > > I think this is a pragmatic alternative. You can already hide the
> > > > > difference by changing the CamHelper's Gain and GainCode functions, to
> > > > > apply the magic extra factor of 2 when necessary. This change here
> > > > > makes it possible to apply that factor in the exposure instead - you
> > > > > could imagine someone wanting the lowest noise possible and accepting
> > > > > that the real exposure will be double (I wouldn't want to double
> > > > > exposures without people explicitly making that choice). And finally,
> > > > > this change doesn't make it any more difficult to do the compensation
> > > > > in the AGC at a later date, should we wish.
> > > > >
> > > > > In the CamHelper for my new sensor, I even make it easy to mix the
> > > > > factor of 2 across both gain and exposure, so you could, for example,
> > > > > have sqrt(2) in each.
> > > > >
> > > > > That got rather long... I hope it was understandable!
> > > >
> > > > Thanks for the information. There may be something that I don't get
> > > > though, as I don't see why the exposure needs to be affected. The
> > > > exposure time has a big influence on motion blur, so it seems to me that
> > > > we shouldn't cheat here. The gain seems to be a better place to hide
> > > > this sensor feature. I wonder if we should consider moving from gain to
> > > > sensor sensitivity.
> > >
> > > You're right, exposure doesn't *need* to be affected, and indeed my
> > > default position is to handle the difference using gain instead. The
> > > intention is that people are *able* to keep the gain to a minimum if
> > > they want to, at the cost of changing the exposure - but this has to
> > > be a deliberate choice.
> > >
> > > I agree the "sensitivity" is a good idea, and indeed a much better
> > > term than I used above, which was "base gain".
> > >
> > > Note that sensitivity is a per-mode thing here. I think it's a good
> > > solution but has some consequences:
> > >
> > > - If an application wants to change camera mode and set the exposure
> > > they're going to have to deal with per-mode sensitivities and adapt
> > > their numbers accordingly. Do we like this? I'm always very nervous of
> > > complicating an application's life!
> > >
> > > - You could try and handle per-mode sensitivities either outside the
> > > AGC (effectively that's what this allows) or you could handle it
> > > within the AGC. Whenever the camera mode changes you'd have to adjust
> > > the state variables according to the ratio of old_sensitivity /
> > > new_sensitivity. You'd probably have to use a different exposure
> > > profile, possibly it needs to be set by the application - or could we
> > > do something more automatic? The implementation would also depend on
> > > whether we want applications to have to pay attention to the "per-mode
> > > sensitivities" or not.
> > >
> > > So overall, I'm in several minds about going this "per-mode
> > > sensitivity" route and the work it requires on the AGC, as well as the
> > > implications for applications. In the meantime, making the exposure
> > > methods virtual allows these sensors to work and doesn't make a later
> > > step towards per-mode sensitivities any more difficult. You would
> > > simply do that work and then delete your virtual exposure methods.
> >
> > I'm sorry, but I still don't see why it requires making the below
> > functions virtual. They convert between exposure expressed in lines and
> > exposure expressed as a time, why would that need to be sensor-specific
> > ? I guess I need to see an actual use case first.
> 
> This is entirely tangential to David's usage, but I also need this method to
> be virtual so I can override this for imx477 ultra long exposure modes.  In
> this mode, we have some additional scaling to apply to the standard
> equation.

I assume that in that case the exposure_lines argument, and the return
value of ExposureLines(), won't be expressed as a number of lines, right
?

Given that sensors also often have a fine exposure time expressed in
pixels, in addition to the coarse exposure time expressed in lines,
could we create an Exposure structure that could model exposure settings
(coarse in lines, fine in pixels, plus other parameters for long
exposures) in a way that wouldn't be sensor-specific ?

> > > Possibly I've repeated myself rather a lot there - but maybe I've
> > > explained it better this time? :)
> > >
> > > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> > > > > >
> > > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > > > >
> > > > > > > ---
> > > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > index 1b2d6eec..4053a870 100644
> > > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > @@ -66,8 +66,8 @@ public:
> > > > > > >       virtual ~CamHelper();
> > > > > > >       void SetCameraMode(const CameraMode &mode);
> > > > > > >       MdParser &Parser() const { return *parser_; }
> > > > > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > > > > > >                                     double maxFrameDuration) const;
> > > > > > >       virtual uint32_t GainCode(double gain) const = 0;
Naushir Patuck April 28, 2021, 1:40 p.m. UTC | #9
Hi Laurent,


On Wed, 28 Apr 2021 at 14:33, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Hi Naush,
>
> On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
> > On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
> > > [snip]
> > > I'm sorry, but I still don't see why it requires making the below
> > > functions virtual. They convert between exposure expressed in lines and
> > > exposure expressed as a time, why would that need to be sensor-specific
> > > ? I guess I need to see an actual use case first.
> >
> > This is entirely tangential to David's usage, but I also need this
> > method to be virtual so I can override this for imx477 ultra long
> > exposure modes. In this mode, we have some additional scaling to
> > apply to the standard equation.
>
> I assume that in that case the exposure_lines argument, and the return
> value of ExposureLines(), won't be expressed as a number of lines, right
> ?
>

I have not worked through the details yet, but yes, this is possibly going
to be the case with the expected implementation.


>
> Given that sensors also often have a fine exposure time expressed in
> pixels, in addition to the coarse exposure time expressed in lines,
> could we create an Exposure structure that could model exposure settings
> (coarse in lines, fine in pixels, plus other parameters for long
> exposures) in a way that wouldn't be sensor-specific ?
>

Perhaps that might be a way to go that could be generalised for all sensors.
I do still wonder about global shutter sensors though.  I've not worked with
many of them, but the ones I have used expressed exposure in terms entirely
unrelated to HTS/VTS. Maybe that's something to worry about when we
come across it :-)


>
> > > > [snip]
>
> --
> Regards,
>
> Laurent Pinchart
>
Laurent Pinchart April 28, 2021, 2:05 p.m. UTC | #10
Hi Naush,

On Wed, Apr 28, 2021 at 02:40:26PM +0100, Naushir Patuck wrote:
> On Wed, 28 Apr 2021 at 14:33, Laurent Pinchart wrote:
> > On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
> > > On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
> > > > [snip]
> > > >
> > > > I'm sorry, but I still don't see why it requires making the below
> > > > functions virtual. They convert between exposure expressed in lines and
> > > > exposure expressed as a time, why would that need to be sensor-specific
> > > > ? I guess I need to see an actual use case first.
> > >
> > > This is entirely tangential to David's usage, but I also need this method to
> > > be virtual so I can override this for imx477 ultra long exposure modes. In
> > > this mode, we have some additional scaling to apply to the standard
> > > equation.
> >
> > I assume that in that case the exposure_lines argument, and the return
> > value of ExposureLines(), won't be expressed as a number of lines, right
> > ?
> 
> I have not worked through the details yet, but yes, this is possibly going
> to be the case with the expected implementation.
> 
> > Given that sensors also often have a fine exposure time expressed in
> > pixels, in addition to the coarse exposure time expressed in lines,
> > could we create an Exposure structure that could model exposure settings
> > (coarse in lines, fine in pixels, plus other parameters for long
> > exposures) in a way that wouldn't be sensor-specific ?
> 
> Perhaps that might be a way to go that could be generalised for all sensors.

Would you be able to check if this could be achieved for the different
sensors that you need to support ?

> I do still wonder about global shutter sensors though.  I've not worked with
> many of them, but the ones I have expressed exposure in terms entirely
> unrelated to HTS/VTS, but maybe something to worry about when we
> come across it :-)

Agreed.

> > > > > [snip]
Naushir Patuck April 28, 2021, 2:12 p.m. UTC | #11
On Wed, 28 Apr 2021 at 15:05, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Hi Naush,
>
> On Wed, Apr 28, 2021 at 02:40:26PM +0100, Naushir Patuck wrote:
> > [snip]
> > Perhaps that might be a way to go that could be generalised for all
> > sensors.
>
> Would you be able to check if this could be achieved for the different
> sensors that you need to support ?
>

Yes, I'll have a look and see what we can come up with.


>
> > I do still wonder about global shutter sensors though.  I've not worked with
> > many of them, but the ones I have expressed exposure in terms entirely
> > unrelated to HTS/VTS, but maybe something to worry about when we
> > come across it :-)
>
> Agreed.
>
> > > > > > Possibly I've repeated myself rather a lot there - but maybe I've
> > > > > > explained it better this time? :)
> > > > > >
> > > > > > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > > > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> > > > > > > > >
> > > > > > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > > > > > > > >
> > > > > > > > > > ---
> > > > > > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > > > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > > > > index 1b2d6eec..4053a870 100644
> > > > > > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > > > > @@ -66,8 +66,8 @@ public:
> > > > > > > > > >       virtual ~CamHelper();
> > > > > > > > > >       void SetCameraMode(const CameraMode &mode);
> > > > > > > > > >       MdParser &Parser() const { return *parser_; }
> > > > > > > > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > > > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > > > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > > > > > > > > >                                     double maxFrameDuration) const;
> > > > > > > > > >       virtual uint32_t GainCode(double gain) const = 0;
>
> --
> Regards,
>
> Laurent Pinchart
>
David Plowman May 4, 2021, 9:15 a.m. UTC | #12
Hi again

Thanks for all the various opinions. If I may wade in again... :)

Like Naush, I have some more requirements for what the Exposure
functions need to do. The sensor I'm currently dealing with has the
constraint - in some readout modes only - that the number of exposure
lines has to be a multiple of 2. It's one of those "quad Bayer" or
"tetracell" sensors, so you can imagine where this is coming from. You
might speculate that so-called "nonacell" sensors might want multiples
of 3, though I've never had my hands on one of those!
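
For concreteness, the multiple-of-2 constraint could be absorbed in an overridden conversion along these lines. This is only a hedged sketch: the function shape, the rounding policy and the line time are assumptions for illustration, not the actual CamHelper code.

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: convert an exposure time to lines, then round down to the
// step the readout mode requires (2 for the quad Bayer modes discussed,
// perhaps 3 for a hypothetical "nonacell" sensor).
uint32_t exposureLinesWithStep(double exposureUs, double lineLengthUs,
			       uint32_t step)
{
	uint32_t lines = static_cast<uint32_t>(exposureUs / lineLengthUs + 0.5);
	lines -= lines % step; /* enforce the per-mode multiple */
	return lines;
}
```

Rounding down (rather than up) keeps the programmed exposure no longer than requested; either choice would work as long as it is applied consistently before the value reaches the delayed controls.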

I suppose we could argue that the ControlInfos need to record and
advertise the "step" that V4L2 parameters expose - we clearly need to
fix up the exposure correctly before we pass it into the delayed
control. Or at least, it must have the true value when we read it out.

Perhaps the problem with making this function virtual is that we've
never been totally clear what it's for. Maybe it really is supposed to
do nothing but a division, though at that point, shouldn't we remove
it and just do the division? Perhaps its purpose is to hide away
peculiarities of particular sensors (and modes) on the grounds that
they're going to be difficult to predict in advance for all future
devices; that probably sums up how I've always viewed it.

I already treat the GainCode function rather like this. For example,
where different modes have different minimum gains, or even different
sensitivities, I soak that up in the virtual GainCode method, so it's
not "give me the code for this gain", it's "give me the code I need to
program for this gain, even if it's not really this exact gain". Here
I'm just wanting to give developers the *option* to get minimum gain
images if they're prepared deliberately to alter the exposure. I won't
argue that some solution involving sensitivities and per-mode
sensitivities is not better, but we'd need to decide how that
interacts with the AEC/AGC, with applications and so on. I'm feeling
like that might be another one of those rather long discussions...
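
As an illustration of that "give me the code I need to program" view, a per-mode factor might be folded into the gain-code conversion roughly like this. The linear code = gain * 16 model, the minimum gain of 1 and the factor of 2 are assumptions made for the sketch, not any real sensor's register layout.

```cpp
#include <cassert>
#include <cstdint>

// Sketch with an assumed linear gain model (code = gain * 16). The
// modeSensitivity parameter soaks up a per-mode signal level difference,
// e.g. 2.0 for a binned mode producing twice the signal of full-res.
uint32_t gainCodeForMode(double gain, double modeSensitivity)
{
	double effective = gain / modeSensitivity;
	if (effective < 1.0)
		effective = 1.0; /* clamp to an assumed minimum gain of 1 */
	return static_cast<uint32_t>(effective * 16.0);
}
```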

Thanks!
David

On Wed, 28 Apr 2021 at 15:12, Naushir Patuck <naush@raspberrypi.com> wrote:
>
>
>
> On Wed, 28 Apr 2021 at 15:05, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
>>
>> Hi Naush,
>>
>> On Wed, Apr 28, 2021 at 02:40:26PM +0100, Naushir Patuck wrote:
>> > On Wed, 28 Apr 2021 at 14:33, Laurent Pinchart wrote:
>> > > On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
>> > > > On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
>> > > > > On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
>> > > > > > On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
>> > > > > > > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
>> > > > > > > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
>> > > > > > > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
>> > > > > > > > > > This allows derived classes to override them if they have any special
>> > > > > > > > > > behaviours to implement. For instance if a particular camera mode
>> > > > > > > > > > produces a different signal level to other modes, you might choose to
>> > > > > > > > > > address that in the gain or exposure methods.
>> > > > > > > > >
>> > > > > > > > > For my information, would you be able to give an example of how this
>> > > > > > > > > would be used ? What kind of particular camera mode would produce a
>> > > > > > > > > different signal level, is this related to binning, or something else ?
>> > > > > > > >
>> > > > > > > > Yes, so the sensor I'm looking at produces twice the signal level in
>> > > > > > > > the binning modes that it does in the full resolution mode. I think
>> > > > > > > > this difference would be awkward for applications which means we have
>> > > > > > > > to hide it somewhere.
>> > > > > > > >
>> > > > > > > > One of my earlier emails talked a little bit about hiding this in the
>> > > > > > > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
>> > > > > > > > res, 2 for binned), and then have some process where we have public
>> > > > > > > > exposure/gain numbers (which are the same for all modes) and there's a
>> > > > > > > > process of conversion into and out of the AGC and IPA which deal with
>> > > > > > > > real exposure/gain values. But I'm really not sure I want to go there,
>> > > > > > > > at least right now!
>> > > > > > > >
>> > > > > > > > I think this is a pragmatic alternative. You can already hide the
>> > > > > > > > difference by changing the CamHelper's Gain and GainCode functions, to
>> > > > > > > > apply the magic extra factor of 2 when necessary. This change here
>> > > > > > > > makes it possible to apply that factor in the exposure instead - you
>> > > > > > > > could imagine someone wanting the lowest noise possible and accepting
>> > > > > > > > that the real exposure will be double (I wouldn't want to double
>> > > > > > > > exposures without people explicitly making that choice). And finally,
>> > > > > > > > this change doesn't make it any more difficult to do the compensation
>> > > > > > > > in the AGC at a later date, should we wish.
>> > > > > > > >
>> > > > > > > > In the CamHelper for my new sensor, I even make it easy to mix the
>> > > > > > > > factor of 2 across both gain and exposure, so you could, for example,
>> > > > > > > > have sqrt(2) in each.
>> > > > > > > >
>> > > > > > > > That got rather long... I hope it was understandable!
>> > > > > > >
>> > > > > > > Thanks for the information. There may be something that I don't get
>> > > > > > > though, as I don't see why the exposure needs to be affected. The
>> > > > > > > exposure time has a big influence on motion blur, so it seems to me that
>> > > > > > > we shouldn't cheat here. The gain seems to be a better place to hide
>> > > > > > > this sensor feature. I wonder if we should consider moving from gain to
>> > > > > > > sensor sensitivity.
>> > > > > >
>> > > > > > You're right, exposure doesn't *need* to be affected, and indeed my
>> > > > > > default position is to handle the difference using gain instead. The
>> > > > > > intention is that people are *able* to keep the gain to a minimum if
>> > > > > > they want to, at the cost of changing the exposure - but this has to
>> > > > > > be a deliberate choice.
>> > > > > >
>> > > > > > I agree the "sensitivity" is a good idea, and indeed a much better
>> > > > > > term than I used above, which was "base gain".
>> > > > > >
>> > > > > > Note that sensitivity is a per-mode thing here. I think it's a good
>> > > > > > solution but has some consequences:
>> > > > > >
>> > > > > > - If an application wants to change camera mode and set the exposure
>> > > > > > they're going to have to deal with per-mode sensitivities and adapt
>> > > > > > their numbers accordingly. Do we like this? I'm always very nervous of
>> > > > > > complicating an application's life!
>> > > > > >
>> > > > > > - You could try and handle per-mode sensitivities either outside the
>> > > > > > AGC (effectively that's what this allows) or you could handle it
>> > > > > > within the AGC. Whenever the camera mode changes you'd have to adjust
>> > > > > > the state variables according to the ratio of old_sensitivity /
>> > > > > > new_sensitivity. You'd probably have to use a different exposure
>> > > > > > profile, possibly it needs to be set by the application - or could we
>> > > > > > do something more automatic? The implementation would also depend on
>> > > > > > whether we want applications to have to pay attention to the "per-mode
>> > > > > > sensitivities" or not.
>> > > > > >
>> > > > > > So overall, I'm in several minds about going this "per-mode
>> > > > > > sensitivity" route and the work it requires on the AGC, as well as the
>> > > > > > implications for applications. In the meantime, making the exposure
>> > > > > > methods virtual allows these sensors to work and doesn't make a later
>> > > > > > step towards per-mode sensitivities any more difficult. You would
>> > > > > > simply do that work and then delete your virtual exposure methods.
>> > > > >
>> > > > > I'm sorry, but I still don't see why it requires making the below
>> > > > > functions virtual. They convert between exposure expressed in lines and
>> > > > > exposure expressed as a time, why would that need to be sensor-specific
>> > > > > ? I guess I need to see an actual use case first.
>> > > >
>> > > > This is entirely tangential to David's usage, but I also need this method to
>> > > > be virtual so I can override this for imx477 ultra long exposure modes. In
>> > > > this mode, we have some additional scaling to apply to the standard
>> > > > equation.
>> > >
>> > > I assume that in that case the exposure_lines argument, and the return
>> > > value of ExposureLines(), won't be expressed as a number of lines, right
>> > > ?
>> >
> >> > I have not worked through the details yet, but yes, this is possibly going
>> > to be the case with the expected implementation.
>> >
>> > > Given that sensors also often have a fine exposure time expressed in
>> > > pixels, in addition to the coarse exposure time expressed in lines,
>> > > could we create an Exposure structure that could model exposure settings
>> > > (coarse in lines, fine in pixels, plus other parameters for long
>> > > exposures) in a way that wouldn't be sensor-specific ?
>> >
>> > Perhaps that might be a way to go that could be generalised for all sensors.
>>
>> Would you be able to check if this could be achieved for the different
>> sensors that you need to support ?
>
>
> Yes, I'll have a look and see what we can come up with.
>
>>
>>
>> > I do still wonder about global shutter sensors though.  I've not worked with
>> > many of them, but the ones I have expressed exposure in terms entirely
>> > unrelated to HTS/VTS, but maybe something to worry about when we
>> > come across it :-)
>>
>> Agreed.
>>
>> > > > > > Possibly I've repeated myself rather a lot there - but maybe I've
>> > > > > > explained it better this time? :)
>> > > > > >
>> > > > > > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
>> > > > > > > > > > Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
>> > > > > > > > >
>> > > > > > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>> > > > > > > > >
>> > > > > > > > > > ---
>> > > > > > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
>> > > > > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
>> > > > > > > > > >
>> > > > > > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > index 1b2d6eec..4053a870 100644
>> > > > > > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > @@ -66,8 +66,8 @@ public:
>> > > > > > > > > >       virtual ~CamHelper();
>> > > > > > > > > >       void SetCameraMode(const CameraMode &mode);
>> > > > > > > > > >       MdParser &Parser() const { return *parser_; }
>> > > > > > > > > > -     uint32_t ExposureLines(double exposure_us) const;
>> > > > > > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
>> > > > > > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
>> > > > > > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
>> > > > > > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
>> > > > > > > > > >                                     double maxFrameDuration) const;
>> > > > > > > > > >       virtual uint32_t GainCode(double gain) const = 0;
>>
>> --
>> Regards,
>>
>> Laurent Pinchart
Laurent Pinchart May 4, 2021, 12:34 p.m. UTC | #13
Hi David,

Thank you for the patch.

On Tue, May 04, 2021 at 10:15:26AM +0100, David Plowman wrote:
> Hi again
> 
> Thanks for all the various opinions. If I may wade in again... :)
> 
> Like Naush, I have some more requirements for what the Exposure
> functions need to do. The sensor I'm currently dealing with has the
> constraint - in some readout modes only - that the number of exposure
> lines has to be a multiple of 2. It's one of those "quad Bayer" or
> "tetracell" sensors, so you can imagine where this is coming from. You
> might speculate that so-called "nonacell" sensors might want multiples
> of 3, though I've never had my hands on one of those!
> 
> I suppose we could argue that the ControlInfos need to record and
> advertise the "step" that V4L2 parameters expose - we clearly need to
> fix up the exposure correctly before we pass it into the delayed
> control. Or at least, it must have the true value when we read it out.

There's also information we could hardcode in userspace. Generally
speaking, querying information automatically is nice, but it can become
tricky when the information isn't static, as we've seen with the limits
for the vertical blanking control. I'm not opposed to storing
information in libcamera, but that should preferably be done as data, in
the sensor database, instead of virtual functions. I'm not fond of
functions that adjust parameters in an unspecified way, as they may work
"magically" in some use cases, but fail in other ones. I prefer giving
enough information to the caller to figure out the needed adjustments, with
the option of having a standard helper performing the most common
adjustments in a documented way.
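
The "data, not behaviour" alternative being suggested might look roughly like this. The struct and field names are invented for illustration and do not reflect the actual sensor database format.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical static per-mode data. Keeping constraints as plain data
// lets the caller see and apply them itself, instead of having them
// applied "magically" inside a virtual function.
struct ModeConstraints {
	uint32_t exposureLineStep; /* exposure lines must be a multiple of this */
	double sensitivity;        /* signal level relative to the full-res mode */
};

const std::map<std::string, ModeConstraints> modeDatabase = {
	{ "full_res", { 1, 1.0 } },
	{ "binned_2x2", { 2, 2.0 } },
};
```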

> Perhaps the problem with making this function virtual is that we've
> never been totally clear what it's for.

I think that matches the reason why I dislike virtual functions that
I've just explained :-)

> Maybe it really is supposed to
> do nothing but a division, though at that point, shouldn't we remove
> it and just do the division? Perhaps its purpose is to hide away
> peculiarities of particular sensors (and modes) on the grounds that
> they're going to be difficult to predict in advance for all future
> devices; that probably sums up how I've always viewed it.
> 
> I already treat the GainCode function rather like this. For example,
> where different modes have different minimum gains, or even different
> sensitivities, I soak that up in the virtual GainCode method, so it's
> not "give me the code for this gain", it's "give me the code I need to
> program for this gain, even if it's not really this exact gain".

That's the part that bothers me. If we can document the function to
specify exactly what types of adjustments it can make, then I'm OK; the
caller can decide whether that's fine, or if its use cases require
something different. It's the black box aspect that I don't like.

> Here
> I'm just wanting to give developers the *option* to get minimum gain
> images if they're prepared deliberately to alter the exposure. I won't
> argue that some solution involving sensitivities and per-mode
> sensitivities is not better, but we'd need to decide how that
> interacts with the AEC/AGC, with applications and so on. I'm feeling
> like that might be another one of those rather long discussions...

It's a needed discussion though, so maybe we should bite the bullet
instead of shoving it under the carpet :-) 

> On Wed, 28 Apr 2021 at 15:12, Naushir Patuck wrote:
> > On Wed, 28 Apr 2021 at 15:05, Laurent Pinchart wrote:
> >> On Wed, Apr 28, 2021 at 02:40:26PM +0100, Naushir Patuck wrote:
> >>> On Wed, 28 Apr 2021 at 14:33, Laurent Pinchart wrote:
> >>>> On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
> >>>>> On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
> >>>>>> On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
> >>>>>>> On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
> >>>>>>>> On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> >>>>>>>>> On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> >>>>>>>>>> On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> >>>>>>>>>>> This allows derived classes to override them if they have any special
> >>>>>>>>>>> behaviours to implement. For instance if a particular camera mode
> >>>>>>>>>>> produces a different signal level to other modes, you might choose to
> >>>>>>>>>>> address that in the gain or exposure methods.
> >>>>>>>>>>
> >>>>>>>>>> For my information, would you be able to give an example of how this
> >>>>>>>>>> would be used ? What kind of particular camera mode would produce a
> >>>>>>>>>> different signal level, is this related to binning, or something else ?
> >>>>>>>>>
> >>>>>>>>> Yes, so the sensor I'm looking at produces twice the signal level in
> >>>>>>>>> the binning modes that it does in the full resolution mode. I think
> >>>>>>>>> this difference would be awkward for applications which means we have
> >>>>>>>>> to hide it somewhere.
> >>>>>>>>>
> >>>>>>>>> One of my earlier emails talked a little bit about hiding this in the
> >>>>>>>>> AGC. You'd have to know a "base gain" for every mode (so 1 for full
> >>>>>>>>> res, 2 for binned), and then have some process where we have public
> >>>>>>>>> exposure/gain numbers (which are the same for all modes) and there's a
> >>>>>>>>> process of conversion into and out of the AGC and IPA which deal with
> >>>>>>>>> real exposure/gain values. But I'm really not sure I want to go there,
> >>>>>>>>> at least right now!
> >>>>>>>>>
> >>>>>>>>> I think this is a pragmatic alternative. You can already hide the
> >>>>>>>>> difference by changing the CamHelper's Gain and GainCode functions, to
> >>>>>>>>> apply the magic extra factor of 2 when necessary. This change here
> >>>>>>>>> makes it possible to apply that factor in the exposure instead - you
> >>>>>>>>> could imagine someone wanting the lowest noise possible and accepting
> >>>>>>>>> that the real exposure will be double (I wouldn't want to double
> >>>>>>>>> exposures without people explicitly making that choice). And finally,
> >>>>>>>>> this change doesn't make it any more difficult to do the compensation
> >>>>>>>>> in the AGC at a later date, should we wish.
> >>>>>>>>>
> >>>>>>>>> In the CamHelper for my new sensor, I even make it easy to mix the
> >>>>>>>>> factor of 2 across both gain and exposure, so you could, for example,
> >>>>>>>>> have sqrt(2) in each.
> >>>>>>>>>
> >>>>>>>>> That got rather long... I hope it was understandable!
> >>>>>>>>
> >>>>>>>> Thanks for the information. There may be something that I don't get
> >>>>>>>> though, as I don't see why the exposure needs to be affected. The
> >>>>>>>> exposure time has a big influence on motion blur, so it seems to me that
> >>>>>>>> we shouldn't cheat here. The gain seems to be a better place to hide
> >>>>>>>> this sensor feature. I wonder if we should consider moving from gain to
> >>>>>>>> sensor sensitivity.
> >>>>>>>
> >>>>>>> You're right, exposure doesn't *need* to be affected, and indeed my
> >>>>>>> default position is to handle the difference using gain instead. The
> >>>>>>> intention is that people are *able* to keep the gain to a minimum if
> >>>>>>> they want to, at the cost of changing the exposure - but this has to
> >>>>>>> be a deliberate choice.
> >>>>>>>
> >>>>>>> I agree the "sensitivity" is a good idea, and indeed a much better
> >>>>>>> term than I used above, which was "base gain".
> >>>>>>>
> >>>>>>> Note that sensitivity is a per-mode thing here. I think it's a good
> >>>>>>> solution but has some consequences:
> >>>>>>>
> >>>>>>> - If an application wants to change camera mode and set the exposure
> >>>>>>> they're going to have to deal with per-mode sensitivities and adapt
> >>>>>>> their numbers accordingly. Do we like this? I'm always very nervous of
> >>>>>>> complicating an application's life!
> >>>>>>>
> >>>>>>> - You could try and handle per-mode sensitivities either outside the
> >>>>>>> AGC (effectively that's what this allows) or you could handle it
> >>>>>>> within the AGC. Whenever the camera mode changes you'd have to adjust
> >>>>>>> the state variables according to the ratio of old_sensitivity /
> >>>>>>> new_sensitivity. You'd probably have to use a different exposure
> >>>>>>> profile, possibly it needs to be set by the application - or could we
> >>>>>>> do something more automatic? The implementation would also depend on
> >>>>>>> whether we want applications to have to pay attention to the "per-mode
> >>>>>>> sensitivities" or not.
> >>>>>>>
> >>>>>>> So overall, I'm in several minds about going this "per-mode
> >>>>>>> sensitivity" route and the work it requires on the AGC, as well as the
> >>>>>>> implications for applications. In the meantime, making the exposure
> >>>>>>> methods virtual allows these sensors to work and doesn't make a later
> >>>>>>> step towards per-mode sensitivities any more difficult. You would
> >>>>>>> simply do that work and then delete your virtual exposure methods.
> >>>>>>
> >>>>>> I'm sorry, but I still don't see why it requires making the below
> >>>>>> functions virtual. They convert between exposure expressed in lines and
> >>>>>> exposure expressed as a time, why would that need to be sensor-specific
> >>>>>> ? I guess I need to see an actual use case first.
> >>>>>
> >>>>> This is entirely tangential to David's usage, but I also need this method to
> >>>>> be virtual so I can override this for imx477 ultra long exposure modes. In
> >>>>> this mode, we have some additional scaling to apply to the standard
> >>>>> equation.
> >>>>
> >>>> I assume that in that case the exposure_lines argument, and the return
> >>>> value of ExposureLines(), won't be expressed as a number of lines, right
> >>>> ?
> >>>
> >>> I have not worked through the details yet, but yes, this is possibly going
> >>> to be the case with the expected implementation.
> >>>
> >>>> Given that sensors also often have a fine exposure time expressed in
> >>>> pixels, in addition to the coarse exposure time expressed in lines,
> >>>> could we create an Exposure structure that could model exposure settings
> >>>> (coarse in lines, fine in pixels, plus other parameters for long
> >>>> exposures) in a way that wouldn't be sensor-specific ?
> >>>
> >>> Perhaps that might be a way to go that could be generalised for all sensors.
> >>
> >> Would you be able to check if this could be achieved for the different
> >> sensors that you need to support ?
> >
> > Yes, I'll have a look and see what we can come up with.
> >
> >>> I do still wonder about global shutter sensors though.  I've not worked with
> >>> many of them, but the ones I have expressed exposure in terms entirely
> >>> unrelated to HTS/VTS, but maybe something to worry about when we
> >>> come across it :-)
> >>
> >> Agreed.
> >>
> >>>>>>> Possibly I've repeated myself rather a lot there - but maybe I've
> >>>>>>> explained it better this time? :)
> >>>>>>>
> >>>>>>>>>>> Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> >>>>>>>>>>> Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
> >>>>>>>>>>
> >>>>>>>>>> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> >>>>>>>>>>
> >>>>>>>>>>> ---
> >>>>>>>>>>>  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> >>>>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>>>>>>>>
> >>>>>>>>>>> diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> >>>>>>>>>>> index 1b2d6eec..4053a870 100644
> >>>>>>>>>>> --- a/src/ipa/raspberrypi/cam_helper.hpp
> >>>>>>>>>>> +++ b/src/ipa/raspberrypi/cam_helper.hpp
> >>>>>>>>>>> @@ -66,8 +66,8 @@ public:
> >>>>>>>>>>>       virtual ~CamHelper();
> >>>>>>>>>>>       void SetCameraMode(const CameraMode &mode);
> >>>>>>>>>>>       MdParser &Parser() const { return *parser_; }
> >>>>>>>>>>> -     uint32_t ExposureLines(double exposure_us) const;
> >>>>>>>>>>> -     double Exposure(uint32_t exposure_lines) const; // in us
> >>>>>>>>>>> +     virtual uint32_t ExposureLines(double exposure_us) const;
> >>>>>>>>>>> +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> >>>>>>>>>>>       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> >>>>>>>>>>>                                     double maxFrameDuration) const;
> >>>>>>>>>>>       virtual uint32_t GainCode(double gain) const = 0;

Patch
diff mbox series

diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
index 1b2d6eec..4053a870 100644
--- a/src/ipa/raspberrypi/cam_helper.hpp
+++ b/src/ipa/raspberrypi/cam_helper.hpp
@@ -66,8 +66,8 @@  public:
 	virtual ~CamHelper();
 	void SetCameraMode(const CameraMode &mode);
 	MdParser &Parser() const { return *parser_; }
-	uint32_t ExposureLines(double exposure_us) const;
-	double Exposure(uint32_t exposure_lines) const; // in us
+	virtual uint32_t ExposureLines(double exposure_us) const;
+	virtual double Exposure(uint32_t exposure_lines) const; // in us
 	virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
 				      double maxFrameDuration) const;
 	virtual uint32_t GainCode(double gain) const = 0;
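
With the two declarations above made virtual, a derived helper can override them. A minimal self-contained sketch of the mechanics follows; the base class here is a stand-in, not the real CamHelper, and the sensor behaviour (even-line constraint, line time) is invented for illustration.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in base class, just enough to show the override mechanics.
class CamHelperBase {
public:
	virtual ~CamHelperBase() = default;
	virtual uint32_t ExposureLines(double exposure_us) const
	{
		return static_cast<uint32_t>(exposure_us / lineLengthUs_);
	}
	virtual double Exposure(uint32_t exposure_lines) const // in us
	{
		return exposure_lines * lineLengthUs_;
	}

protected:
	double lineLengthUs_ = 10.0; /* assumed line time for the sketch */
};

// Hypothetical sensor whose readout mode needs an even number of
// exposure lines, as for the quad Bayer case discussed in the thread.
class CamHelperQuad : public CamHelperBase {
public:
	uint32_t ExposureLines(double exposure_us) const override
	{
		uint32_t lines = CamHelperBase::ExposureLines(exposure_us);
		return lines - lines % 2; /* round down to a multiple of 2 */
	}
};
```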