[libcamera-devel,1/1] libcamera: controls: Add StartupFrame control

Message ID 20230531125016.5540-2-david.plowman@raspberrypi.com
State New
Series
  • StartupFrame metadata

Commit Message

David Plowman May 31, 2023, 12:50 p.m. UTC
This control is passed back in a frame as metadata to indicate whether
the camera system is still in a startup phase, and the application is
advised to avoid using the frame.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
---
 src/libcamera/control_ids.yaml | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
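
As a rough sketch of how an application might consume this metadata once
the control is merged as controls::StartupFrame (the completion handler
and re-queue policy below are illustrative, not part of the libcamera API):

#include <libcamera/control_ids.h>
#include <libcamera/controls.h>
#include <libcamera/request.h>

using namespace libcamera;

void requestComplete(Request *request)
{
	const ControlList &metadata = request->metadata();

	/* Treat an absent control the same as false: a normal frame. */
	bool startup = metadata.get(controls::StartupFrame).value_or(false);
	if (startup) {
		/* Recycle the request without displaying or encoding it. */
		request->reuse(Request::ReuseBuffers);
		/* camera->queueRequest(request); */
		return;
	}

	/* Normal frame: display/encode as usual. */
}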

Comments

Naushir Patuck May 31, 2023, 12:57 p.m. UTC | #1
Hi David,

Thank you for this patch.  Indeed removing the drop frame logic from the
pipeline handler would simplify things!

On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel
<libcamera-devel@lists.libcamera.org> wrote:
>
> This control is passed back in a frame as metadata to indicate whether
> the camera system is still in a startup phase, and the application is
> advised to avoid using the frame.
>
> Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> ---
>  src/libcamera/control_ids.yaml | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> index adea5f90..4742d907 100644
> --- a/src/libcamera/control_ids.yaml
> +++ b/src/libcamera/control_ids.yaml
> @@ -694,6 +694,21 @@ controls:
>              Continuous AF is paused. No further state changes or lens movements
>              will occur until the AfPauseResume control is sent.
>
> +  - StartupFrame:
> +      type: bool
> +      description: |
> +        The value true indicates that the camera system is still in a startup
> +        phase where the output images may not be reliable, or that certain of
> +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> +        still be changing quite rapidly.
> +
> +        Applications are advised to avoid using these frames. Mostly, they will
> +        occur when the camera system starts for the first time, although,
> +        depending on the sensor and the implementation, they could occur at
> +        other times.
> +
> +        The value false indicates that this is a normal frame.

Just throwing it out there, but would it be useful if this control was an
integer with the count of startup frames left to handle? A value of 0, or the
absence of the control would indicate this is a "valid" frame.
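
For illustration, the integer variant could be consumed as below
(StartupFramesLeft is a hypothetical control name; an absent value is
mapped to 0, matching the suggestion above):

#include <cstdint>
#include <optional>

/* Sketch only: 0 startup frames left, or no control at all, marks a
 * usable frame. */
bool frameUsable(const std::optional<int32_t> &startupFramesLeft)
{
	return startupFramesLeft.value_or(0) == 0;
}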

Regards,
Naush

> +
>    # ----------------------------------------------------------------------------
>    # Draft controls section
>
> --
> 2.30.2
>
Laurent Pinchart June 5, 2023, 8:17 a.m. UTC | #2
Hello,

On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> Hi David,
> 
> Thank you for this patch.  Indeed removing the drop frame logic from the
> pipeline handler would simplify things!

That's a change I would welcome :-)

> On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> >
> > This control is passed back in a frame as metadata to indicate whether
> > the camera system is still in a startup phase, and the application is
> > advised to avoid using the frame.
> >
> > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > ---
> >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> >
> > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > index adea5f90..4742d907 100644
> > --- a/src/libcamera/control_ids.yaml
> > +++ b/src/libcamera/control_ids.yaml
> > @@ -694,6 +694,21 @@ controls:
> >              Continuous AF is paused. No further state changes or lens movements
> >              will occur until the AfPauseResume control is sent.
> >
> > +  - StartupFrame:
> > +      type: bool
> > +      description: |
> > +        The value true indicates that the camera system is still in a startup
> > +        phase where the output images may not be reliable, or that certain of
> > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > +        still be changing quite rapidly.

I think we need to decide in a bit more detail what constitutes a
"startup frame" and what doesn't, or we will have all kinds of
inconsistent behaviour.

I read it that we have multiple criteria on which we base the decision:

- output images not being "reliable"
- 3A algorithms convergence
- "possibly others"

The second criterion is fairly clear, but I'm thinking we should possibly
exclude it from the startup frames and report convergence of the 3A
algorithms instead.

The first and third criteria are quite vague. If I recall correctly, the
first one includes bad frames from the sensor (as in completely
corrupted frames, such as frames that are all black or made of random
data). Those are completely unusable for applications, is there a value
in still making them available instead of dropping them correctly ?

What are the other reasons a frame would be a "startup frame" ?

> > +
> > +        Applications are advised to avoid using these frames. Mostly, they will
> > +        occur when the camera system starts for the first time, although,
> > +        depending on the sensor and the implementation, they could occur at
> > +        other times.
> > +
> > +        The value false indicates that this is a normal frame.
> 
> Just throwing it out there, but would it be useful if this control was an
> integer with the count of startup frames left to handle? A value of 0, or the
> absence of the control would indicate this is a "valid" frame.
> 
> > +
> >    # ----------------------------------------------------------------------------
> >    # Draft controls section
> >
Naushir Patuck June 5, 2023, 9:32 a.m. UTC | #3
Hi Laurent,

David is away this week so his reply will be delayed.

On Mon, 5 Jun 2023 at 09:17, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hello,
>
> On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> > Hi David,
> >
> > Thank you for this patch.  Indeed removing the drop frame logic from the
> > pipeline handler would simplify things!
>
> That's a change I would welcome :-)
>
> > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > >
> > > This control is passed back in a frame as metadata to indicate whether
> > > the camera system is still in a startup phase, and the application is
> > > advised to avoid using the frame.
> > >
> > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > ---
> > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > >  1 file changed, 15 insertions(+)
> > >
> > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > index adea5f90..4742d907 100644
> > > --- a/src/libcamera/control_ids.yaml
> > > +++ b/src/libcamera/control_ids.yaml
> > > @@ -694,6 +694,21 @@ controls:
> > >              Continuous AF is paused. No further state changes or lens movements
> > >              will occur until the AfPauseResume control is sent.
> > >
> > > +  - StartupFrame:
> > > +      type: bool
> > > +      description: |
> > > +        The value true indicates that the camera system is still in a startup
> > > +        phase where the output images may not be reliable, or that certain of
> > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > +        still be changing quite rapidly.
>
> I think we need to decide in a bit more detail what constitutes a
> "startup frame" and what doesn't, or we will have all kinds of
> inconsistent behaviour.
>
> I read it that we have multiple criteria on which we base the decision:
>
> - output images not being "reliable"
> - 3A algorithms convergence
> - "possibly others"
>
> The second criterion is fairly clear, but I'm thinking we should possibly
> exclude it from the startup frames and report convergence of the 3A
> algorithms instead.

The 3A "startup" convergence is included as a criteria here because during
startup, we drive the AE/AWB/ALSC as aggressively (i.e. no filtering) as
possible to achieve convergence as fast as we possibly can.  This means the
image output can oscillate quite badly - hence applications should avoid
displaying or consuming them.

I feel that this startup phase needs to be treated differently compared to a
normal "converging" phase where the algorithms are filtering the outputs and the
transitions are smooth, and conversely the application can (and probably should)
display/consume these frames.
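
A minimal sketch of the distinction, assuming an IIR-style filter with
illustrative speed values (this is not the actual Raspberry Pi IPA code):

/*
 * During startup the speed is 1.0 (no filtering), so the algorithm jumps
 * straight to the measured target and may overshoot; afterwards an
 * assumed speed of 0.2 smooths the transitions.
 */
double filterStep(double current, double target, bool startup)
{
	const double speed = startup ? 1.0 : 0.2;
	return current + speed * (target - current);
}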

>
> The first and third criteria are quite vague. If I recall correctly, the
> first one includes bad frames from the sensor (as in completely
> corrupted frames, such as frames that are all black or made of random
> data). Those are completely unusable for applications, is there a value
> in still making them available instead of dropping them correctly ?

This is what we do now (plus dropping the startup frames that may be good from
the sensor but still in the 3A startup phase), but I think a key goal of this
change is to be consistent (i.e. let the application handle all frames always), and
also to avoid some framebuffer allocations to help with startup
convergence performance.

Regards,
Naush

>
> What are the other reasons a frame would be a "startup frame" ?
>
> > > +
> > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > +        occur when the camera system starts for the first time, although,
> > > +        depending on the sensor and the implementation, they could occur at
> > > +        other times.
> > > +
> > > +        The value false indicates that this is a normal frame.
> >
> > Just throwing it out there, but would it be useful if this control was an
> > integer with the count of startup frames left to handle? A value of 0, or the
> > absence of the control would indicate this is a "valid" frame.
> >
> > > +
> > >    # ----------------------------------------------------------------------------
> > >    # Draft controls section
> > >
>
> --
> Regards,
>
> Laurent Pinchart
David Plowman June 12, 2023, 9:43 a.m. UTC | #4
Hi again

Thanks for the discussion on this! Mostly I probably want to reiterate
what Naush has said.

On Mon, 5 Jun 2023 at 10:32, Naushir Patuck <naush@raspberrypi.com> wrote:
>
> Hi Laurent,
>
> David is away this week so his reply will be delayed.
>
> On Mon, 5 Jun 2023 at 09:17, Laurent Pinchart
> <laurent.pinchart@ideasonboard.com> wrote:
> >
> > Hello,
> >
> > On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> > > Hi David,
> > >
> > > Thank you for this patch.  Indeed removing the drop frame logic from the
> > > pipeline handler would simplify things!
> >
> > That's a change I would welcome :-)
> >
> > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > >
> > > > This control is passed back in a frame as metadata to indicate whether
> > > > the camera system is still in a startup phase, and the application is
> > > > advised to avoid using the frame.
> > > >
> > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > ---
> > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > >  1 file changed, 15 insertions(+)
> > > >
> > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > index adea5f90..4742d907 100644
> > > > --- a/src/libcamera/control_ids.yaml
> > > > +++ b/src/libcamera/control_ids.yaml
> > > > @@ -694,6 +694,21 @@ controls:
> > > >              Continuous AF is paused. No further state changes or lens movements
> > > >              will occur until the AfPauseResume control is sent.
> > > >
> > > > +  - StartupFrame:
> > > > +      type: bool
> > > > +      description: |
> > > > +        The value true indicates that the camera system is still in a startup
> > > > +        phase where the output images may not be reliable, or that certain of
> > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > +        still be changing quite rapidly.
> >
> > I think we need to decide in a bit more detail what constitutes a
> > "startup frame" and what doesn't, or we will have all kinds of
> > inconsistent behaviour.
> >
> > I read it that we have multiple criteria on which we base the decision:
> >
> > - output images not being "reliable"
> > - 3A algorithms convergence
> > - "possibly others"
> >
> > The second criterion is fairly clear, but I'm thinking we should possibly
> > exclude it from the startup frames and report convergence of the 3A
> > algorithms instead.
>
> The 3A "startup" convergence is included as a criterion here because during
> startup, we drive the AE/AWB/ALSC as aggressively (i.e. no filtering) as
> possible to achieve convergence as fast as we possibly can.  This means the
> image output can oscillate quite badly - hence applications should avoid
> displaying or consuming them.
>
> I feel that this startup phase needs to be treated differently compared to a
> normal "converging" phase where the algorithms are filtering the outputs and the
> transitions are smooth, and conversely the application can (and probably should)
> display/consume these frames.

Basically, yes. The startup phase is different. For some algorithms we
indicate "converged" already, but that doesn't mean you shouldn't
display "unconverged" frames - of course we do. But during startup we
can expect algorithms possibly to overshoot and the recommendation is
simply not to use them. I suppose it doesn't mean an application
couldn't do something different - but we want to make the recommended
behaviour easy.

>
> >
> > The first and third criteria are quite vague. If I recall correctly, the
> > first one includes bad frames from the sensor (as in completely
> > corrupted frames, such as frames that are all black or made of random
> > data). Those are completely unusable for applications, is there a value
> > in still making them available instead of dropping them correctly ?
>
> This is what we do now (plus dropping the startup frames that may be good from
> the sensor but still in the 3A startup phase), but I think a key goal of this
> change is to be consistent (i.e. let the application handle all frames always), and
> also to avoid some framebuffer allocations to help with startup
> convergence performance.
>
> Regards,
> Naush
>
> >
> > What are the other reasons a frame would be a "startup frame" ?

There is a genuine discussion to have here, I think. There are some
very clear reasons for telling an application not to use a frame - if
it's complete garbage, for example. And there are some more "advisory"
ones, which is what we can have for several frames. Categorising the
"usability" of a frame is certainly an idea, though I don't know if we
want to do that without a clear reason.

But it's also very hard to predict the behaviour of all pipeline
handlers in this respect. Perhaps someone has a pipeline handler where
the first frame after every mode switch comes out badly, perhaps it
needs to consume a frame first for its IPAs to sort themselves out.
But maybe another PH doesn't. So the idea is to have a portable way to
indicate this kind of thing so that applications don't have to start
guessing the behaviour underneath.

Note that these "behaviours" can be quite complex. For us, it's
completely different if you request fixed exposure and gain, and
colour gains. But only slightly different, IIRC, if you don't give the
colour gains. Forcing this kind of stuff into applications,
particularly ones that expect to use different PHs, feels quite
undesirable.

Don't know if I've really added anything, but I hope it makes a bit of sense!

Thanks
David

> >
> > > > +
> > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > +        occur when the camera system starts for the first time, although,
> > > > +        depending on the sensor and the implementation, they could occur at
> > > > +        other times.
> > > > +
> > > > +        The value false indicates that this is a normal frame.
> > >
> > > Just throwing it out there, but would it be useful if this control was an
> > > integer with the count of startup frames left to handle? A value of 0, or the
> > > absence of the control would indicate this is a "valid" frame.
> > >
> > > > +
> > > >    # ----------------------------------------------------------------------------
> > > >    # Draft controls section
> > > >
> >
> > --
> > Regards,
> >
> > Laurent Pinchart
Kieran Bingham June 12, 2023, 10:33 p.m. UTC | #5
Quoting David Plowman via libcamera-devel (2023-06-12 10:43:52)
> Hi again
> 
> Thanks for the discussion on this! Mostly I probably want to reiterate
> what Naush has said.
> 
> On Mon, 5 Jun 2023 at 10:32, Naushir Patuck <naush@raspberrypi.com> wrote:
> >
> > Hi Laurent,
> >
> > David is away this week so his reply will be delayed.
> >
> > On Mon, 5 Jun 2023 at 09:17, Laurent Pinchart
> > <laurent.pinchart@ideasonboard.com> wrote:
> > >
> > > Hello,
> > >
> > > On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> > > > Hi David,
> > > >
> > > > Thank you for this patch.  Indeed removing the drop frame logic from the
> > > > pipeline handler would simplify things!
> > >
> > > That's a change I would welcome :-)
> > >
> > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > >
> > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > the camera system is still in a startup phase, and the application is
> > > > > advised to avoid using the frame.
> > > > >
> > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > ---
> > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > >  1 file changed, 15 insertions(+)
> > > > >
> > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > index adea5f90..4742d907 100644
> > > > > --- a/src/libcamera/control_ids.yaml
> > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > @@ -694,6 +694,21 @@ controls:
> > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > >              will occur until the AfPauseResume control is sent.
> > > > >
> > > > > +  - StartupFrame:
> > > > > +      type: bool
> > > > > +      description: |
> > > > > +        The value true indicates that the camera system is still in a startup
> > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > +        still be changing quite rapidly.
> > >
> > > I think we need to decide in a bit more detail what constitutes a
> > > "startup frame" and what doesn't, or we will have all kinds of
> > > inconsistent behaviour.
> > >
> > > I read it that we have multiple criteria on which we base the decision:
> > >
> > > - output images not being "reliable"
> > > - 3A algorithms convergence
> > > - "possibly others"
> > >
> > > The second criterion is fairly clear, but I'm thinking we should possibly
> > > exclude it from the startup frames and report convergence of the 3A
> > > algorithms instead.
> >
> > The 3A "startup" convergence is included as a criterion here because during
> > startup, we drive the AE/AWB/ALSC as aggressively (i.e. no filtering) as
> > possible to achieve convergence as fast as we possibly can.  This means the
> > image output can oscillate quite badly - hence applications should avoid
> > displaying or consuming them.
> >
> > I feel that this startup phase needs to be treated differently compared to a
> > normal "converging" phase where the algorithms are filtering the outputs and the
> > transitions are smooth, and conversely the application can (and probably should)
> > display/consume these frames.
> 
> Basically, yes. The startup phase is different. For some algorithms we
> indicate "converged" already, but that doesn't mean you shouldn't
> display "unconverged" frames - of course we do. But during startup we
> can expect algorithms possibly to overshoot and the recommendation is
> simply not to use them. I suppose it doesn't mean an application
> couldn't do something different - but we want to make the recommended
> behaviour easy.

I'm pleased to see this metadata introduction. I already see visible
'flashes' on the RkISP pipeline handler in my tests as the frames aren't
dropped there, nor are they reported to the applications as being
not-yet-ready for consumption.

I know in the RPi pipeline handler it's well supported to capture a
single frame with pre-set manual controls. I guess we would expect
everything to be reported as converged in that instance, so it wouldn't
get marked as a StartupFrame? (Just trying to see if I can find odd
corner cases, but I expect this would already be fine).



> 
> >
> > >
> > > The first and third criteria are quite vague. If I recall correctly, the
> > > first one includes bad frames from the sensor (as in completely
> > > corrupted frames, such as frames that are all black or made of random
> > > data). Those are completely unusable for applications, is there a value
> > > in still making them available instead of dropping them correctly ?
> >
> > This is what we do now (plus dropping the startup frames that may be good from
> > the sensor but still in the 3A startup phase), but I think a key goal of this
> > change is to be consistent (i.e. let the application handle all frames always), and
> > also to avoid some framebuffer allocations to help with startup
> > convergence performance.
> >
> > Regards,
> > Naush
> >
> > >
> > > What are the other reasons a frame would be a "startup frame" ?
> 
> There is a genuine discussion to have here, I think. There are some
> very clear reasons for telling an application not to use a frame - if
> it's complete garbage, for example. And there are some more "advisory"
> ones, which is what we can have for several frames. Categorising the
> "usability" of a frame is certainly an idea, though I don't know if we
> want to do that without a clear reason.

I expect minimising the latency to the first 'usable' frame to be a
priority on occasions here - so if applications can know some quantitative
level of detail about 'how' usable the frame is, that could potentially
reduce the number of frames an application might discard in some use
cases.

I'm not sure how easy 'quantifying' the usability of the frame will be
though.



> 
> But it's also very hard to predict the behaviour of all pipeline
> handlers in this respect. Perhaps someone has a pipeline handler where
> the first frame after every mode switch comes out badly, perhaps it
> needs to consume a frame first for its IPAs to sort themselves out.
> But maybe another PH doesn't. So the idea is to have a portable way to
> indicate this kind of thing so that applications don't have to start
> guessing the behaviour underneath.
> 
> Note that these "behaviours" can be quite complex. For us, it's
> completely different if you request fixed exposure and gain, and
> colour gains. But only slightly different, IIRC, if you don't give the
> colour gains. Forcing this kind of stuff into applications,
> particularly ones that expect to use different PHs, feels quite
> undesirable.
> 
> Don't know if I've really added anything, but I hope it makes a bit of sense!
> 
> Thanks
> David
> 
> > >
> > > > > +
> > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > +        occur when the camera system starts for the first time, although,
> > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > +        other times.
> > > > > +
> > > > > +        The value false indicates that this is a normal frame.

Presumably an application can also 'assume' that a lack of this metadata
would also indicate 'false' ...

> > > >
> > > > Just throwing it out there, but would it be useful if this control was an
> > > > integer with the count of startup frames left to handle? A value of 0, or the
> > > > absence of the control would indicate this is a "valid" frame.

Ah yes - that's what I mean ;-)



> > > >
> > > > > +
> > > > >    # ----------------------------------------------------------------------------
> > > > >    # Draft controls section
> > > > >
> > >
> > > --
> > > Regards,
> > >
> > > Laurent Pinchart
Naushir Patuck June 13, 2023, 8:19 a.m. UTC | #6
Hi Kieran,

On Mon, 12 Jun 2023 at 23:33, Kieran Bingham
<kieran.bingham@ideasonboard.com> wrote:
>
> Quoting David Plowman via libcamera-devel (2023-06-12 10:43:52)
> > Hi again
> >
> > Thanks for the discussion on this! Mostly I probably want to reiterate
> > what Naush has said.
> >
> > On Mon, 5 Jun 2023 at 10:32, Naushir Patuck <naush@raspberrypi.com> wrote:
> > >
> > > Hi Laurent,
> > >
> > > David is away this week so his reply will be delayed.
> > >
> > > On Mon, 5 Jun 2023 at 09:17, Laurent Pinchart
> > > <laurent.pinchart@ideasonboard.com> wrote:
> > > >
> > > > Hello,
> > > >
> > > > On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> > > > > Hi David,
> > > > >
> > > > > Thank you for this patch.  Indeed removing the drop frame logic from the
> > > > > pipeline handler would simplify things!
> > > >
> > > > That's a change I would welcome :-)
> > > >
> > > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > > >
> > > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > > the camera system is still in a startup phase, and the application is
> > > > > > advised to avoid using the frame.
> > > > > >
> > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > ---
> > > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > > >  1 file changed, 15 insertions(+)
> > > > > >
> > > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > > index adea5f90..4742d907 100644
> > > > > > --- a/src/libcamera/control_ids.yaml
> > > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > > @@ -694,6 +694,21 @@ controls:
> > > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > > >              will occur until the AfPauseResume control is sent.
> > > > > >
> > > > > > +  - StartupFrame:
> > > > > > +      type: bool
> > > > > > +      description: |
> > > > > > +        The value true indicates that the camera system is still in a startup
> > > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > > +        still be changing quite rapidly.
> > > >
> > > > I think we need to decide in a bit more detail what constitutes a
> > > > "startup frame" and what doesn't, or we will have all kinds of
> > > > inconsistent behaviour.
> > > >
> > > > I read it that we have multiple criteria on which we base the decision:
> > > >
> > > > - output images not being "reliable"
> > > > - 3A algorithms convergence
> > > > - "possibly others"
> > > >
> > > > The second criterion is fairly clear, but I'm thinking we should possibly
> > > > exclude it from the startup frames and report convergence of the 3A
> > > > algorithms instead.
> > >
> > > The 3A "startup" convergence is included as a criterion here because during
> > > startup, we drive the AE/AWB/ALSC as aggressively (i.e. no filtering) as
> > > possible to achieve convergence as fast as we possibly can.  This means the
> > > image output can oscillate quite badly - hence applications should avoid
> > > displaying or consuming them.
> > >
> > > I feel that this startup phase needs to be treated differently compared to a
> > > normal "converging" phase where the algorithms are filtering the outputs and the
> > > transitions are smooth, and conversely the application can (and probably should)
> > > display/consume these frames.
> >
> > Basically, yes. The startup phase is different. For some algorithms we
> > indicate "converged" already, but that doesn't mean you shouldn't
> > display "unconverged" frames - of course we do. But during startup we
> > can expect algorithms possibly to overshoot and the recommendation is
> > simply not to use them. I suppose it doesn't mean an application
> > couldn't do something different - but we want to make the recommended
> > behaviour easy.
>
> I'm pleased to see this metadata introduction. I already see visible
> 'flashes' on the RkISP pipeline handler in my tests as the frames aren't
> dropped there, nor are they reported to the applications as being
> not-yet-ready for consumption.
>
> I know in the RPi pipeline handler it's well supported to capture a
> single frame with pre-set manual controls. I guess we would expect
> everything to be reported as converged in that instance, so it wouldn't
> get marked as a StartupFrame? (Just trying to see if I can find odd
> corner cases, but I expect this would already be fine).

If the user requests fully manual controls (i.e. shutter speed, analogue gain,
manual red/blue gains), then we don't signal any drop frames for algorithm
convergence.  However, we would still signal drop frames if the sensor is
known to produce N garbage frames on startup.
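
That policy could be summarised roughly as follows (an illustrative
sketch with hypothetical names, not the actual RPi pipeline handler code):

/*
 * Fully manual control skips the convergence frames, but frames the
 * sensor is known to corrupt on startup are always flagged.
 */
unsigned int startupFrameCount(bool manualShutter, bool manualGain,
			       bool manualColourGains,
			       unsigned int sensorGarbageFrames,
			       unsigned int convergenceFrames)
{
	bool fullyManual = manualShutter && manualGain && manualColourGains;
	return sensorGarbageFrames + (fullyManual ? 0 : convergenceFrames);
}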

Regards,
Naush


>
>
>
> >
> > >
> > > >
> > > > The first and third criteria are quite vague. If I recall correctly, the
> > > > first one includes bad frames from the sensor (as in completely
> > > > corrupted frames, such as frames that are all black or made of random
> > > > data). Those are completely unusable for applications, is there a value
> > > > in still making them available instead of dropping them correctly ?
> > >
> > > This is what we do now (plus dropping the startup frames that may be good from
> > > the sensor but still in the 3A startup phase), but I think a key goal of this
> > > change is to be consistent (i.e. let the application handle all frames always), and
> > > also to avoid some framebuffer allocations to help with startup
> > > convergence performance.
> > >
> > > Regards,
> > > Naush
> > >
> > > >
> > > > What are the other reasons a frame would be a "startup frame" ?
> >
> > There is a genuine discussion to have here, I think. There are some
> > very clear reasons for telling an application not to use a frame - if
> > it's complete garbage, for example. And there are some more "advisory"
> > ones, which is what we can have for several frames. Categorising the
> > "usability" of a frame is certainly an idea, though I don't know if we
> > want to do that without a clear reason.
>
> I expect minimising the latency to the first 'usable' frame to be a
> priority on occasions here - so if applications can know some quantitative
> level of detail about 'how' usable the frame is, that could potentially
> reduce the number of frames an application might discard in some use
> cases.
>
> I'm not sure how easy 'quantifying' the usability of the frame will be
> though.
>
>
>
> >
> > But it's also very hard to predict the behaviour of all pipeline
> > handlers in this respect. Perhaps someone has a pipeline handler where
> > the first frame after every mode switch comes out badly, perhaps it
> > needs to consume a frame first for its IPAs to sort themselves out.
> > But maybe another PH doesn't. So the idea is to have a portable way to
> > indicate this kind of thing so that applications don't have to start
> > guessing the behaviour underneath.
> >
> > Note that these "behaviours" can be quite complex. For us, it's
> > completely different if you request fixed exposure and gain, and
> > colour gains. But only slightly different, IIRC, if you don't give the
> > colour gains. Forcing this kind of stuff into applications,
> > particularly ones that expect to use different PHs, feels quite
> > undesirable.
> >
> > Don't know if I've really added anything, but I hope it makes a bit of sense!
> >
> > Thanks
> > David
> >
> > > >
> > > > > > +
> > > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > > +        occur when the camera system starts for the first time, although,
> > > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > > +        other times.
> > > > > > +
> > > > > > +        The value false indicates that this is a normal frame.
>
> Presumably an application can also 'assume' that a lack of this metadata
> would also indicate 'false' ...
>
> > > > >
> > > > > Just throwing it out there, but would it be useful if this control was an
> > > > > integer with the count of startup frames left to handle? A value of 0, or the
> > > > > absence of the control would indicate this is a "valid" frame.
>
> Ah yes - that's what I mean ;-)
>
>
>
> > > > >
> > > > > > +
> > > > > >    # ----------------------------------------------------------------------------
> > > > > >    # Draft controls section
> > > > > >
> > > >
> > > > --
> > > > Regards,
> > > >
> > > > Laurent Pinchart
Laurent Pinchart July 3, 2023, 1:41 p.m. UTC | #7
Hello,

On Tue, Jun 13, 2023 at 09:19:25AM +0100, Naushir Patuck via libcamera-devel wrote:
> On Mon, 12 Jun 2023 at 23:33, Kieran Bingham wrote:
> > Quoting David Plowman via libcamera-devel (2023-06-12 10:43:52)
> > > On Mon, 5 Jun 2023 at 10:32, Naushir Patuck wrote:
> > > > On Mon, 5 Jun 2023 at 09:17, Laurent Pinchart wrote:
> > > > > On Wed, May 31, 2023 at 01:57:16PM +0100, Naushir Patuck via libcamera-devel wrote:
> > > > > > Hi David,
> > > > > >
> > > > > > Thank you for this patch.  Indeed removing the drop frame logic from the
> > > > > > pipeline handler would simplify things!
> > > > >
> > > > > That's a change I would welcome :-)
> > > > >
> > > > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > > > >
> > > > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > > > the camera system is still in a startup phase, and the application is
> > > > > > > advised to avoid using the frame.
> > > > > > >
> > > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > > ---
> > > > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > > > >  1 file changed, 15 insertions(+)
> > > > > > >
> > > > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > > > index adea5f90..4742d907 100644
> > > > > > > --- a/src/libcamera/control_ids.yaml
> > > > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > > > @@ -694,6 +694,21 @@ controls:
> > > > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > > > >              will occur until the AfPauseResume control is sent.
> > > > > > >
> > > > > > > +  - StartupFrame:
> > > > > > > +      type: bool
> > > > > > > +      description: |
> > > > > > > +        The value true indicates that the camera system is still in a startup
> > > > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > > > +        still be changing quite rapidly.
> > > > >
> > > > > I think we need to decide in a bit more detail what constitutes a
> > > > > "startup frame" and what doesn't, or we will have all kinds of
> > > > > inconsistent behaviour.
> > > > >
> > > > > I read it that we have multiple criteria on which we base the decision:
> > > > >
> > > > > - output images not being "reliable"
> > > > > - 3A algorithms convergence
> > > > > - "possibly others"
> > > > >
> > > > > The second criterion is fairly clear, but I'm thinking we should possibly
> > > > > exclude it from the startup frames and report convergence of the 3A
> > > > > algorithms instead.
> > > >
> > > > The 3A "startup" convergence is included as a criterion here because during
> > > > startup, we drive the AE/AWB/ALSC as aggressively (i.e. no filtering) as
> > > > possible to achieve convergence as fast as we possibly can.  This means the
> > > > image output can oscillate quite badly - hence applications should avoid
> > > > displaying or consuming them.
> > > >
> > > > I feel that this startup phase needs to be treated differently compared to a
> > > > normal "converging" phase where the algorithms are filtering the outputs and the
> > > > transitions are smooth, and conversely the application can (and probably should)
> > > > display/consume these frames.
> > >
> > > Basically, yes. The startup phase is different. For some algorithms we
> > > indicate "converged" already, but that doesn't mean you shouldn't
> > > display "unconverged" frames - of course we do.

I don't get this.

> > > But during startup we
> > > can expect algorithms possibly to overshoot and the recommendation is
> > > simply not to use them.

That I agree with.

> > > I suppose it doesn't mean an application
> > > couldn't do something different - but we want to make the recommended
> > > behaviour easy.

Applications can easily differentiate the startup phase from the rest of
the camera operation, as startup occurs, by definition, at startup :-)
We could thus tell applications that they should ignore unconverged
frames immediately after starting the camera, but not later.

What I can't tell at the moment is whether the algorithm convergence is
the right criterion at start time. I can imagine that the convergence
phase could be split into a short initial part with large oscillations,
and a second part with smoother transitions until full convergence is
reached. Now that I've written this I of course expect someone to tell
me that they absolutely need to differentiate between the two, but is it
needed for real ?

> > I'm pleased to see this metadata introduction. I already see visible
> > 'flashes' on the RkISP pipeline handler in my tests as the frames aren't
> > dropped there, nor are they reported to the applications as being
> > not-yet-ready for consumption.
> >
> > I know in the RPi pipeline handler it's well supported to capture a
> > single frame with pre-set manual controls. I guess we would expect
> > everything to be reported as converged in that instance, so it wouldn't
> > get marked as a StartupFrame? (Just trying to see if I can find odd
> > corner cases, but I expect this would already be fine).
> 
> If the user requests fully manual controls (i.e. shutter speed, analogue gain,
> manual red/blue gains), then we don't signal any drop frames for algorithm
> convergence.  However, we would still signal drop frames if the sensor is
> known to produce N garbage frames on startup.
> 
> > > > > The first and third criteria are quite vague. If I recall correctly, the
> > > > > first one includes bad frames from the sensor (as in completely
> > > > > corrupted frames, such as frames that are all black or made of random
> > > > > data). Those are completely unusable for applications, is there a value
> > > > > in still making them available instead of dropping them correctly ?
> > > >
> > > > This is what we do now (plus dropping the startup frames that may be good from
> > > > the sensor but still in the 3A startup phase), but I think a key goal of this
> > > > change is to be consistent (i.e. let the application handle all frames always),

If the sensor produces frames that can't be used in any circumstance, I
really see no value in exposing them to applications. From an
application point of view this wouldn't be an inconsistent behaviour,
those frames would simply not exist.

> > > > and
> > > > also to avoid some framebuffer allocations to help with startup
> > > > convergence performance.

I'm not sure to get this.

> > > > > What are the other reasons a frame would be a "startup frame" ?
> > >
> > > There is a genuine discussion to have here, I think. There are some
> > > very clear reasons for telling an application not to use a frame - if
> > > it's complete garbage, for example. And there are some more "advisory"
> > > ones, which is what we can have for several frames. Categorising the
> > > "usability" of a frame is certainly an idea, though I don't know if we
> > > want to do that without a clear reason.
> >
> > I expect minimising the latency to the first 'usable' frame to be a
> > priority on occasions here - so if applications can know some quantitative
> > level of detail about 'how' usable the frame is, that could potentially
> > reduce the number of frames an application might discard in some use
> > cases.
> >
> > I'm not sure how easy 'quantifying' the usability of the frame will be
> > though.

I can't imagine a way to do so at the moment, but I'd be happy to hear
proposals :-)

> > > But it's also very hard to predict the behaviour of all pipeline
> > > handlers in this respect. Perhaps someone has a pipeline handler where
> > > the first frame after every mode switch comes out badly,

We need to discuss how "mode switch" and "startup" interact. From the
point of view of the application, the libcamera API doesn't expose a
"mode switch" concept. We can stop, reconfigure and restart the camera,
and the documented behaviour doesn't differ from the first start. I know
how IPA modules can take advantage of information from the previous
capture session to accelerate algorithm convergence, and they should do
so as an internal optimization, but I don't think I want to expose this
concept explicitly. In particular, I don't like the "when the camera
system starts for the first time" in the documentation.

> > > perhaps it
> > > needs to consume a frame first for its IPAs to sort themselves out.
> > > But maybe another PH doesn't. So the idea is to have a portable way to
> > > indicate this kind of thing so that applications don't have to start
> > > guessing the behaviour underneath.
> > >
> > > Note that these "behaviours" can be quite complex. For us, it's
> > > completely different if you request fixed exposure and gain, and
> > > colour gains. But only slightly different, IIRC, if you don't give the
> > > colour gains. Forcing this kind of stuff into applications,
> > > particularly ones that expect to use different PHs, feels quite
> > > undesirable.

I'm not following here either, I don't see what you're forcing onto
applications here.

> > > Don't know if I've really added anything, but I hope it makes a bit of sense!
> > >
> > > > > > > +
> > > > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > > > +        occur when the camera system starts for the first time, although,
> > > > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > > > +        other times.
> > > > > > > +
> > > > > > > +        The value false indicates that this is a normal frame.
> >
> > Presumably an application can also 'assume' that a lack of this metadata
> > would also indicate 'false' ...
> >
> > > > > >
> > > > > > Just throwing it out there, but would it be useful if this control was an
> > > > > > integer with the count of startup frames left to handle? A value of 0, or the
> > > > > > absence of the control would indicate this is a "valid" frame.
> >
> > Ah yes - that's what I mean ;-)
> >
> > > > > >
> > > > > > > +
> > > > > > >    # ----------------------------------------------------------------------------
> > > > > > >    # Draft controls section
> > > > > > >
Naushir Patuck July 11, 2023, 10:23 a.m. UTC | #8
Hi all,

On a semi-related topic, we talked offline about improving the drop frame
support by queuing a request buffer multiple times to avoid the need for
allocating internal buffers.  I've tried this out and here are my findings.

Firstly, to handle queuing a single buffer multiple times, I need to increase
the number of cache slots in V4L2BufferCache().  Perhaps
V4L2VideoDevice::importBuffers()
should be updated to not take in a count parameter and we just allocate slots
for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
long-standing \todo in the RPi code to choose an appropriate value, and the
maximum number is really the only appropriate value I can think of.
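
A sketch of that suggestion, assuming V4L2VideoDevice::importBuffers()
keeps its current count parameter and that VIDEO_MAX_FRAME (32 at the
time of writing) remains the V4L2 limit:

#include <linux/videodev2.h> /* for VIDEO_MAX_FRAME */

#include "libcamera/internal/v4l2_videodevice.h"

/* Always size the buffer cache for the V4L2 maximum, so the same buffer
 * can be requeued freely. */
int importMaxBuffers(libcamera::V4L2VideoDevice *dev)
{
	return dev->importBuffers(VIDEO_MAX_FRAME);
}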

Once I got this working, unfortunately I realised this method would never
actually work correctly in the common scenario where the application configures
and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
N times for N dropped frames.  However this buffer is also imported into the ISP
for processing and stats generation, all while it is also being filled by Unicam
for the next sensor frame.  This makes the stats entirely unusable.

So in the end we either have to allocate additional buffers for drop frames
(like we do right now), or we implement something like this series where the
application is responsible for dropping/ignoring these frames.

Regards,
Naush





On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel
<libcamera-devel@lists.libcamera.org> wrote:
>
> This control is passed back in a frame as metadata to indicate whether
> the camera system is still in a startup phase, and the application is
> advised to avoid using the frame.
>
> Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> ---
>  src/libcamera/control_ids.yaml | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> index adea5f90..4742d907 100644
> --- a/src/libcamera/control_ids.yaml
> +++ b/src/libcamera/control_ids.yaml
> @@ -694,6 +694,21 @@ controls:
>              Continuous AF is paused. No further state changes or lens movements
>              will occur until the AfPauseResume control is sent.
>
> +  - StartupFrame:
> +      type: bool
> +      description: |
> +        The value true indicates that the camera system is still in a startup
> +        phase where the output images may not be reliable, or that certain of
> +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> +        still be changing quite rapidly.
> +
> +        Applications are advised to avoid using these frames. Mostly, they will
> +        occur when the camera system starts for the first time, although,
> +        depending on the sensor and the implementation, they could occur at
> +        other times.
> +
> +        The value false indicates that this is a normal frame.
> +
>    # ----------------------------------------------------------------------------
>    # Draft controls section
>
> --
> 2.30.2
>
Kieran Bingham July 11, 2023, 10:31 a.m. UTC | #9
Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> Hi all,
> 
> On a semi-related topic, we talked offline about improving the drop frame
> support by queuing a request buffer multiple times to avoid the need for
> allocating internal buffers.  I've tried this out and here are my findings.
> 
> Firstly, to handle queuing a single buffer multiple times, I need to increase
> the number of cache slots in V4L2BufferCache().  Perhaps
> V4L2VideoDevice::importBuffers()
> should be updated to not take in a count parameter and we just allocate slots
> for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> long-standing \todo in the RPi code to choose an appropriate value, and the
> maximum number is really the only appropriate value I can think of.

I still think allocating the maximum here in the v4l2 components is
appropriate as they are 'cheap' ...



> Once I got this working, unfortunately I realised this method would never
> actually work correctly in the common scenario where the application configures
> and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> N times for N dropped frames.  However this buffer is also imported into the ISP
> for processing and stats generation, all while it is also being filled by Unicam
> for the next sensor frame.  This makes the stats entirely unusable.


Aha that's a shame. I thought the restrictions were going to be in the
kernel side, so at least it's interesting to know that we /can/ queue
the same buffer multiple times (with a distinct v4l2_buffer id) and get
somewhat of the expected behaviour....

> 
> So in the end we either have to allocate additional buffers for drop frames
> (like we do right now), or we implement something like this series where the
> application is responsible for dropping/ignoring these frames.

Of course if the expected behaviour doesn't suit the use case ... then
...

This may all be different for pipelines with an inline ISP though ...
so the research is still useful.

--
Kieran

> 
> Regards,
> Naush
> 
> 
> 
> 
> 
> On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel
> <libcamera-devel@lists.libcamera.org> wrote:
> >
> > This control is passed back in a frame as metadata to indicate whether
> > the camera system is still in a startup phase, and the application is
> > advised to avoid using the frame.
> >
> > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > ---
> >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> >
> > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > index adea5f90..4742d907 100644
> > --- a/src/libcamera/control_ids.yaml
> > +++ b/src/libcamera/control_ids.yaml
> > @@ -694,6 +694,21 @@ controls:
> >              Continuous AF is paused. No further state changes or lens movements
> >              will occur until the AfPauseResume control is sent.
> >
> > +  - StartupFrame:
> > +      type: bool
> > +      description: |
> > +        The value true indicates that the camera system is still in a startup
> > +        phase where the output images may not be reliable, or that certain of
> > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > +        still be changing quite rapidly.
> > +
> > +        Applications are advised to avoid using these frames. Mostly, they will
> > +        occur when the camera system starts for the first time, although,
> > +        depending on the sensor and the implementation, they could occur at
> > +        other times.
> > +
> > +        The value false indicates that this is a normal frame.
> > +
> >    # ----------------------------------------------------------------------------
> >    # Draft controls section
> >
> > --
> > 2.30.2
> >
Naushir Patuck July 11, 2023, 2:58 p.m. UTC | #10
On Tue, 11 Jul 2023 at 11:31, Kieran Bingham
<kieran.bingham@ideasonboard.com> wrote:
>
> Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > Hi all,
> >
> > On a semi-related topic, we talked offline about improving the drop frame
> > support by queuing a request buffer multiple times to avoid the need for
> > allocating internal buffers.  I've tried this out and here are my findings.
> >
> > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > the number of cache slots in V4L2BufferCache().  Perhaps
> > V4L2VideoDevice::importBuffers()
> > should be updated to not take in a count parameter and we just allocate slots
> > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > long-standing \todo in the RPi code to choose an appropriate value, and the
> > maximum number is really the only appropriate value I can think of.
>
> I still think allocating the maximum here in the v4l2 components is
> appropriate as they are 'cheap' ...

Agree, I think 32 is the limit according to V4L2.

>
>
>
> > Once I got this working, unfortunately I realised this method would never
> > actually work correctly in the common scenario where the application configures
> > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > N times for N dropped frames.  However this buffer is also imported into the ISP
> > for processing and stats generation, all while it is also being filled by Unicam
> > for the next sensor frame.  This makes the stats entirely unusable.
>
>
> Aha that's a shame. I thought the restrictions were going to be in the
> kernel side, so at least it's interesting to know that we /can/ queue
> the same buffer multiple times (with a distinct v4l2_buffer id) and get
> somewhat of the expected behaviour....
>
> >
> > So in the end we either have to allocate additional buffers for drop frames
> > (like we do right now), or we implement something like this series where the
> > application is responsible for dropping/ignoring these frames.
>
> Of course if the expected behaviour doesn't suit the use case ... then
> ...
>
> This may all be different for pipelines with an inline ISP though ...
> so the research is still useful.

So the question is... should we continue with this series as a possible
improvement if the pipeline handler wants to support this control?

>
> --
> Kieran
>
> >
> > Regards,
> > Naush
> >
> >
> >
> >
> >
> > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel
> > <libcamera-devel@lists.libcamera.org> wrote:
> > >
> > > This control is passed back in a frame as metadata to indicate whether
> > > the camera system is still in a startup phase, and the application is
> > > advised to avoid using the frame.
> > >
> > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > ---
> > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > >  1 file changed, 15 insertions(+)
> > >
> > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > index adea5f90..4742d907 100644
> > > --- a/src/libcamera/control_ids.yaml
> > > +++ b/src/libcamera/control_ids.yaml
> > > @@ -694,6 +694,21 @@ controls:
> > >              Continuous AF is paused. No further state changes or lens movements
> > >              will occur until the AfPauseResume control is sent.
> > >
> > > +  - StartupFrame:
> > > +      type: bool
> > > +      description: |
> > > +        The value true indicates that the camera system is still in a startup
> > > +        phase where the output images may not be reliable, or that certain of
> > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > +        still be changing quite rapidly.
> > > +
> > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > +        occur when the camera system starts for the first time, although,
> > > +        depending on the sensor and the implementation, they could occur at
> > > +        other times.
> > > +
> > > +        The value false indicates that this is a normal frame.
> > > +
> > >    # ----------------------------------------------------------------------------
> > >    # Draft controls section
> > >
> > > --
> > > 2.30.2
> > >
David Plowman July 31, 2023, 2:35 p.m. UTC | #11
Hi again everyone!

On Tue, 11 Jul 2023 at 15:59, Naushir Patuck <naush@raspberrypi.com> wrote:
>
> On Tue, 11 Jul 2023 at 11:31, Kieran Bingham
> <kieran.bingham@ideasonboard.com> wrote:
> >
> > Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > > Hi all,
> > >
> > > On a semi-related topic, we talked offline about improving the drop frame
> > > support by queuing a request buffer multiple times to avoid the need for
> > > allocating internal buffers.  I've tried this out and here are my findings.
> > >
> > > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > > the number of cache slots in V4L2BufferCache().  Perhaps
> > > V4L2VideoDevice::importBuffers()
> > > should be updated to not take in a count parameter and we just allocate slots
> > > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > > long-standing \todo in the RPi code to choose an appropriate value, and the
> > > maximum number is really the only appropriate value I can think of.
> >
> > I still think allocating the maximum here in the v4l2 components is
> > appropriate as they are 'cheap' ...
>
> Agree, I think 32 is the limit according to V4L2.
>
> >
> >
> >
> > > Once I got this working, unfortunately I realised this method would never
> > > actually work correctly in the common scenario where the application configures
> > > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > > N times for N dropped frames.  However this buffer is also imported into the ISP
> > > for processing and stats generation, all while it is also being filled by Unicam
> > > for the next sensor frame.  This makes the stats entirely unusable.
> >
> >
> > Aha that's a shame. I thought the restrictions were going to be in the
> > kernel side, so at least it's interesting to know that we /can/ queue
> > the same buffer multiple times (with a distinct v4l2_buffer id) and get
> > somewhat of the expected behaviour....
> >
> > >
> > > So in the end we either have to allocate additional buffers for drop frames
> > > (like we do right now), or we implement something like this series where the
> > > application is responsible for dropping/ignoring these frames.
> >
> > Of course if the expected behaviour doesn't suit the use case ... then
> > ...
> >
> > This may all be different for pipelines with an inline ISP though ...
> > so the research is still useful.
>
> So the question is... should we continue with this series as a possible
> improvement if the pipeline handler wants to support this control?

So yes, I would indeed like to revisit this question. I still think
it's useful for the original reasons, and the new use-case I'd like to
bring into the mix now is HDR.

One HDR method combines a number of different exposures to create the
output. Now this isn't so relevant for the Pi seeing as we have no
hardware for combining images, but you could imagine it being
important to platforms more widely. The problem comes when this HDR
mode is engaged... what do we do while we wait for all those different
exposures to come through? Because until we have one of each, we can't
really produce the image that the application is expecting.

The idea would be that requests cycle round as normal, but come with
metadata saying "I'm not ready yet" while HDR is "starting up". Note
that this could of course happen at any time now, not just when the
camera starts (or mode switches).

I still like the idea of generic "I'm not ready" metadata because
applications won't want to understand all the different ways in which
things might not be ready. Though supplementary information that
details what we're still waiting for might be helpful. Thoughts...?
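
To make the application side concrete, here's a rough sketch of the
sort of handling I have in mind (assuming the control lands as
controls::StartupFrame as in this patch; the requeue policy and the
camera handle are illustrative only, not a final API):

    #include <libcamera/libcamera.h>

    /* The application's camera handle, set up elsewhere. */
    extern std::shared_ptr<libcamera::Camera> camera;

    static void requestComplete(libcamera::Request *request)
    {
        using namespace libcamera;

        if (request->status() == Request::RequestCancelled)
            return;

        /* Treat a missing control as a normal frame. */
        bool startup =
            request->metadata().get(controls::StartupFrame).value_or(false);

        if (startup) {
            /* "Not ready yet": skip the frame and recycle the request. */
            request->reuse(Request::ReuseBuffers);
            camera->queueRequest(request);
            return;
        }

        /* ... process the frame as normal ... */
    }

The point being that this one generic check covers startup, HDR
ramp-up, or anything else the pipeline handler considers "not ready".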

Thanks!
David

>
> >
> > --
> > Kieran
> >
> > >
> > > Regards,
> > > Naush
> > >
> > >
> > >
> > >
> > >
> > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel
> > > <libcamera-devel@lists.libcamera.org> wrote:
> > > >
> > > > This control is passed back in a frame as metadata to indicate whether
> > > > the camera system is still in a startup phase, and the application is
> > > > advised to avoid using the frame.
> > > >
> > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > ---
> > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > >  1 file changed, 15 insertions(+)
> > > >
> > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > index adea5f90..4742d907 100644
> > > > --- a/src/libcamera/control_ids.yaml
> > > > +++ b/src/libcamera/control_ids.yaml
> > > > @@ -694,6 +694,21 @@ controls:
> > > >              Continuous AF is paused. No further state changes or lens movements
> > > >              will occur until the AfPauseResume control is sent.
> > > >
> > > > +  - StartupFrame:
> > > > +      type: bool
> > > > +      description: |
> > > > +        The value true indicates that the camera system is still in a startup
> > > > +        phase where the output images may not be reliable, or that certain of
> > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > +        still be changing quite rapidly.
> > > > +
> > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > +        occur when the camera system starts for the first time, although,
> > > > +        depending on the sensor and the implementation, they could occur at
> > > > +        other times.
> > > > +
> > > > +        The value false indicates that this is a normal frame.
> > > > +
> > > >    # ----------------------------------------------------------------------------
> > > >    # Draft controls section
> > > >
> > > > --
> > > > 2.30.2
> > > >
Laurent Pinchart Aug. 8, 2023, 8:14 a.m. UTC | #12
Hello,

On Mon, Jul 31, 2023 at 03:35:36PM +0100, David Plowman via libcamera-devel wrote:
> On Tue, 11 Jul 2023 at 15:59, Naushir Patuck wrote:
> > On Tue, 11 Jul 2023 at 11:31, Kieran Bingham wrote:
> > > Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > > > Hi all,
> > > >
> > > > On a semi-related topic, we talked offline about improving the drop frame
> > > > support by queuing a request buffer multiple times to avoid the need for
> > > > allocating internal buffers.  I've tried this out and here are my findings.
> > > >
> > > > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > > > the number of cache slots in V4L2BufferCache().  Perhaps
> > > > V4L2VideoDevice::importBuffers()
> > > > should be updated to not take in a count parameter and we just allocate slots
> > > > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > > > long-standing \todo in the RPi code to choose an appropriate value, and the
> > > > maximum number is really the only appropriate value I can think of.
> > >
> > > I still think allocating the maximum here in the v4l2 components is
> > > appropriate as they are 'cheap' ...
> >
> > Agree, I think 32 is the limit according to V4L2.

There's one "small" drawback though. Allocating buffers is indeed cheap
(for DMABUF), but once a V4L2 buffer has been queued with dmabuf
objects, those objects will stay referenced until the V4L2 buffer is
freed. On systems where the user keeps allocating and freeing buffers,
this means that we will hold on to 32 buffers until the Camera is
reconfigured or released.

This is an issue with V4L2, and we're already affected by it. Increasing
the number of buffers will make it worse in some use cases, but doesn't
significantly change the nature of the problem. The proposed new V4L2
DELETE_BUFS ioctl may help solving this. In the meantime, I think we can
increase the number of buffers despite the issue, but I would limit the
increase to a smaller value.
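
Purely as an illustration of what I mean by a smaller value, the
pipeline handler could cap the slot count at a modest headroom over
what the configuration needs (bufferCount and video_ stand in for the
pipeline handler's own state, and kSlotHeadroom is an invented number):

    #include <algorithm>

    /* Illustrative only: 32 is the V4L2 VIDEO_MAX_FRAME limit
     * mentioned above; a small headroom avoids pinning 32 dmabufs. */
    constexpr unsigned int kSlotHeadroom = 4;
    unsigned int slots = std::min(bufferCount + kSlotHeadroom, 32U);
    int ret = video_->importBuffers(slots);
    if (ret < 0)
        return ret;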

> > > > Once I got this working, unfortunately I realised this method would never
> > > > actually work correctly in the common scenario where the application configures

Should I quote Henri from An American Tail? :-)

> > > > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > > > N times for N dropped frames.  However this buffer is also imported into the ISP
> > > > for processing and stats generation, all while it is also being filled by Unicam
> > > > for the next sensor frame.  This makes the stats entirely unusable.
> > >
> > > Aha that's a shame. I thought the restrictions were going to be in the
> > > kernel side, so at least it's interesting to know that we /can/ queue
> > > the same buffer multiple times (with a distinct v4l2_buffer id) and get
> > > somewhat of the expected behaviour....
> > >
> > > > So in the end we either have to allocate additional buffers for drop frames
> > > > (like we do right now), or we implement something like this series where the
> > > > application is responsible for dropping/ignoring these frames.

There may be something that escapes me, but I think you're trying to
bury the idea too fast.

Looking at the problem from an API point of view, ignoring the internal
implementation in the pipeline handler for a moment, your proposal is to
add a mechanism to tell applications that they should ignore the
contents of a request and resubmit it. If this is possible to do for
applications without any negative side effects such as the one you've
described above, then I don't see why it would be impossible for
the pipeline handler to do the same before the request reaches the
application.

Addressing the exact issue you're facing, it seems that the problem is
caused by using the raw buffer from the first request only. Under normal
operation conditions, the pipeline handler will need at least two raw
buffers, otherwise frames will be dropped. It's the application's
responsibility to queue enough requests for proper operation if it wants
to avoid frame drops. I think you can thus use raw buffers supplied in
the first two requests.
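
A rough sketch of that borrowing, with error handling, locking and the
multi-stream bookkeeping all omitted (none of this is real pipeline
handler code; PipelineHandlerExample, unicam_, pendingRequests_ and
startupFramesRemaining_ are invented names):

    #include <queue>

    std::queue<libcamera::Request *> pendingRequests_; /* held-back requests */
    unsigned int startupFramesRemaining_;              /* startup frames left */

    /* While startupFramesRemaining_ > 0, raw buffers completed by the
     * sensor are requeued for another startup cycle instead of being
     * returned; the owning requests are held back and completed in
     * submission order once real frames start flowing. */
    void PipelineHandlerExample::rawBufferReady(libcamera::FrameBuffer *buffer)
    {
        if (startupFramesRemaining_) {
            startupFramesRemaining_--;
            unicam_->queueBuffer(buffer);
            return;
        }

        libcamera::Request *request = pendingRequests_.front();
        pendingRequests_.pop();

        completeBuffer(request, buffer);
        completeRequest(request);
    }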

> > > Of course if the expected behaviour doesn't suit the use case ... then
> > > ...
> > >
> > > > This may all be different for pipelines with an inline ISP though ...
> > > so the research is still useful.
> >
> > So the question is... should we continue with this series as a possible
> > improvement if the pipeline handler wants to support this control?
> 
> So yes, I would indeed like to revisit this question. I still think
> it's useful for the original reasons, and the new use-case I'd like to
> bring into the mix now is HDR.
> 
> One HDR method combines a number of different exposures to create the
> output. Now this isn't so relevant for the Pi seeing as we have no
> hardware for combining images, but you could imagine it being
> important to platforms more widely. The problem comes when this HDR
> mode is engaged... what do we do while we wait for all those different
> exposures to come through? Because until we have one of each, we can't
> really produce the image that the application is expecting.
> 
> The idea would be that requests cycle round as normal, but come with
> metadata saying "I'm not ready yet" while HDR is "starting up". Note
> that this could of course happen at any time now, not just when the
> camera starts (or mode switches).
> 
> I still like the idea of generic "I'm not ready" metadata because
> applications won't want to understand all the different ways in which
> things might not be ready. Though supplementary information that
> details what we're still waiting for might be helpful. Thoughts...?

For the HDR case, would this supplementary information be conveyed in
the form of an HDR-specific control in the request metadata ?

If an application decides to enable HDR in the middle of a capture
session, is it unreasonable to expect the application to understand this
particular reason why frames are not "ready" ?

> > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > >
> > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > the camera system is still in a startup phase, and the application is
> > > > > advised to avoid using the frame.
> > > > >
> > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > ---
> > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > >  1 file changed, 15 insertions(+)
> > > > >
> > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > index adea5f90..4742d907 100644
> > > > > --- a/src/libcamera/control_ids.yaml
> > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > @@ -694,6 +694,21 @@ controls:
> > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > >              will occur until the AfPauseResume control is sent.
> > > > >
> > > > > +  - StartupFrame:
> > > > > +      type: bool
> > > > > +      description: |
> > > > > +        The value true indicates that the camera system is still in a startup
> > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > +        still be changing quite rapidly.
> > > > > +
> > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > +        occur when the camera system starts for the first time, although,
> > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > +        other times.
> > > > > +
> > > > > +        The value false indicates that this is a normal frame.
> > > > > +
> > > > >    # ----------------------------------------------------------------------------
> > > > >    # Draft controls section
> > > > >
David Plowman Aug. 9, 2023, 10:53 a.m. UTC | #13
Hi Laurent

Thanks for the comments!

On Tue, 8 Aug 2023 at 09:14, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hello,
>
> On Mon, Jul 31, 2023 at 03:35:36PM +0100, David Plowman via libcamera-devel wrote:
> > On Tue, 11 Jul 2023 at 15:59, Naushir Patuck wrote:
> > > On Tue, 11 Jul 2023 at 11:31, Kieran Bingham wrote:
> > > > Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > > > > Hi all,
> > > > >
> > > > > On a semi-related topic, we talked offline about improving the drop frame
> > > > > support by queuing a request buffer multiple times to avoid the need for
> > > > > allocating internal buffers.  I've tried this out and here are my findings.
> > > > >
> > > > > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > > > > the number of cache slots in V4L2BufferCache().  Perhaps
> > > > > V4L2VideoDevice::importBuffers()
> > > > > should be updated to not take in a count parameter and we just allocate slots
> > > > > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > > > > long-standing \todo in the RPi code to choose an appropriate value, and the
> > > > > maximum number is really the only appropriate value I can think of.
> > > >
> > > > I still think allocating the maximum here in the v4l2 components is
> > > > appropriate as they are 'cheap' ...
> > >
> > > Agree, I think 32 is the limit according to V4L2.
>
> There's one "small" drawback though. Allocating buffers is indeed cheap
> (for DMABUF), but once a V4L2 buffer has been queued with dmabuf
> objects, those objects will stay referenced until the V4L2 buffer is
> freed. On systems where the user keeps allocating and freeing buffers,
> this means that we will hold on to 32 buffers until the Camera is
> reconfigured or released.
>
> This is an issue with V4L2, and we're already affected by it. Increasing
> the number of buffers will make it worse in some use cases, but doesn't
> significantly change the nature of the problem. The proposed new V4L2
> DELETE_BUFS ioctl may help solving this. In the meantime, I think we can
> increase the number of buffers despite the issue, but I would limit the
> increase to a smaller value.
>
> > > > > Once I got this working, unfortunately I realised this method would never
> > > > > actually work correctly in the common scenario where the application configures
>
> Should I quote Henri from An American Tail? :-)
>
> > > > > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > > > > N times for N dropped frames.  However this buffer is also imported into the ISP
> > > > > for processing and stats generation, all while it is also being filled by Unicam
> > > > > for the next sensor frame.  This makes the stats entirely unusable.
> > > >
> > > > Aha that's a shame. I thought the restrictions were going to be in the
> > > > kernel side, so at least it's interesting to know that we /can/ queue
> > > > the same buffer multiple times (with a distinct v4l2_buffer id) and get
> > > > somewhat of the expected behaviour....
> > > >
> > > > > So in the end we either have to allocate additional buffers for drop frames
> > > > > (like we do right now), or we implement something like this series where the
> > > > > application is responsible for dropping/ignoring these frames.
>
> There may be something that escapes me, but I think you're trying to
> bury the idea too fast.
>
> Looking at the problem from an API point of view, ignoring the internal
> implementation in the pipeline handler for a moment, your proposal is to
> add a mechanism to tell applications that they should ignore the
> contents of a request and resubmit it. If this is possible to do for
> applications without any negative side effects such as the one you've
> described above, then I don't see why it would be impossible for
> the pipeline handler to do the same before the request reaches the
> application.
>
> Addressing the exact issue you're facing, it seems that the problem is
> caused by using the raw buffer from the first request only. Under normal
> operation conditions, the pipeline handler will need at least two raw
> buffers, otherwise frames will be dropped. It's the application's
> responsibility to queue enough requests for proper operation if it wants
> to avoid frame drops. I think you can thus use raw buffers supplied in
> the first two requests.
>
> > > > Of course if the expected behaviour doesn't suit the use case ... then
> > > > ...
> > > >
> > > > This may all be different for pipelines with an inline ISP though ...
> > > > so the research is still useful.
> > >
> > > So the question is... should we continue with this series as a possible
> > > improvement if the pipeline handler wants to support this control?
> >
> > So yes, I would indeed like to revisit this question. I still think
> > it's useful for the original reasons, and the new use-case I'd like to
> > bring into the mix now is HDR.
> >
> > One HDR method combines a number of different exposures to create the
> > output. Now this isn't so relevant for the Pi seeing as we have no
> > hardware for combining images, but you could imagine it being
> > important to platforms more widely. The problem comes when this HDR
> > mode is engaged... what do we do while we wait for all those different
> > exposures to come through? Because until we have one of each, we can't
> > really produce the image that the application is expecting.
> >
> > The idea would be that requests cycle round as normal, but come with
> > metadata saying "I'm not ready yet" while HDR is "starting up". Note
> > that this could of course happen at any time now, not just when the
> > camera starts (or mode switches).
> >
> > I still like the idea of generic "I'm not ready" metadata because
> > applications won't want to understand all the different ways in which
> > things might not be ready. Though supplementary information that
> > details what we're still waiting for might be helpful. Thoughts...?
>
> For the HDR case, would this supplementary information be conveyed in
> the form of an HDR-specific control in the request metadata ?
>
> If an application decides to enable HDR in the middle of a capture
> session, is it unreasonable to expect the application to understand this
> particular reason why frames are not "ready" ?

Just to answer this last question, what I really want is to be
nice to our application developers. For instance:

1. There will be more features like this in future. Frame
stacking, night modes, various things. Do we want to force
applications to have to check all this stuff explicitly?
(A bit like "validating" a configuration - you get a
handy "something got adjusted" indication.)

2. As new features get added, it would be good to minimise the
amount of pain we cause to existing applications. A simple "not
ready" means application developers will have fewer code paths
that need to be found and edited. Vendors might be able to add
new features to their processing without breaking existing
applications.

A couple of notes to this:

* I guess we could handle this by "swallowing" frames and not
  letting them out into the world. This obviously refers back to
  the discussion further up, which needs to be resolved.

* I certainly think it's good if further information is available
  for those who are interested; this is mostly about making
  libcamera easier to use when folks aren't.

David

>
> > > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > > >
> > > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > > the camera system is still in a startup phase, and the application is
> > > > > > advised to avoid using the frame.
> > > > > >
> > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > ---
> > > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > > >  1 file changed, 15 insertions(+)
> > > > > >
> > > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > > index adea5f90..4742d907 100644
> > > > > > --- a/src/libcamera/control_ids.yaml
> > > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > > @@ -694,6 +694,21 @@ controls:
> > > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > > >              will occur until the AfPauseResume control is sent.
> > > > > >
> > > > > > +  - StartupFrame:
> > > > > > +      type: bool
> > > > > > +      description: |
> > > > > > +        The value true indicates that the camera system is still in a startup
> > > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > > +        still be changing quite rapidly.
> > > > > > +
> > > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > > +        occur when the camera system starts for the first time, although,
> > > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > > +        other times.
> > > > > > +
> > > > > > +        The value false indicates that this is a normal frame.
> > > > > > +
> > > > > >    # ----------------------------------------------------------------------------
> > > > > >    # Draft controls section
> > > > > >
>
> --
> Regards,
>
> Laurent Pinchart
Naushir Patuck Aug. 14, 2023, 12:49 p.m. UTC | #14
Hi Laurent,

On Tue, 8 Aug 2023 at 09:14, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hello,
>
> On Mon, Jul 31, 2023 at 03:35:36PM +0100, David Plowman via libcamera-devel wrote:
> > On Tue, 11 Jul 2023 at 15:59, Naushir Patuck wrote:
> > > On Tue, 11 Jul 2023 at 11:31, Kieran Bingham wrote:
> > > > Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > > > > Hi all,
> > > > >
> > > > > On a semi-related topic, we talked offline about improving the drop frame
> > > > > support by queuing a request buffer multiple times to avoid the need for
> > > > > allocating internal buffers.  I've tried this out and here are my findings.
> > > > >
> > > > > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > > > > the number of cache slots in V4L2BufferCache().  Perhaps
> > > > > V4L2VideoDevice::importBuffers()
> > > > > should be updated to not take in a count parameter and we just allocate slots
> > > > > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > > > > long-standing \todo in the RPi code to choose an appropriate value, and the
> > > > > maximum number is really the only appropriate value I can think of.
> > > >
> > > > I still think allocating the maximum here in the v4l2 components is
> > > > appropriate as they are 'cheap' ...
> > >
> > > Agree, I think 32 is the limit according to V4L2.
>
> There's one "small" drawback though. Allocating buffers is indeed cheap
> (for DMABUF), but once a V4L2 buffer has been queued with dmabuf
> objects, those objects will stay referenced until the V4L2 buffer is
> freed. On systems where the user keeps allocating and freeing buffers,
> this means that we will hold on to 32 buffers until the Camera is
> reconfigured or released.
>
> This is an issue with V4L2, and we're already affected by it. Increasing
> the number of buffers will make it worse in some use cases, but doesn't
> significantly change the nature of the problem. The proposed new V4L2
> DELETE_BUFS ioctl may help solving this. In the meantime, I think we can
> increase the number of buffers despite the issue, but I would limit the
> increase to a smaller value.
>
> > > > > Once I got this working, unfortunately I realised this method would never
> > > > > actually work correctly in the common scenario where the application configures
>
> Should I quote Henri from An American Tail? :-)
>
> > > > > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > > > > N times for N dropped frames.  However this buffer is also imported into the ISP
> > > > > for processing and stats generation, all while it is also being filled by Unicam
> > > > > for the next sensor frame.  This makes the stats entirely unusable.
> > > >
> > > > Aha that's a shame. I thought the restrictions were going to be in the
> > > > kernel side, so at least it's interesting to know that we /can/ queue
> > > > the same buffer multiple times (with a distinct v4l2_buffer id) and get
> > > > somewhat of the expected behaviour....
> > > >
> > > > > So in the end we either have to allocate additional buffers for drop frames
> > > > > (like we do right now), or we implement something like this series where the
> > > > > application is responsible for dropping/ignoring these frames.
>
> There may be something that escapes me, but I think you're trying to
> bury the idea too fast.
>
> Looking at the problem from an API point of view, ignoring the internal
> implementation in the pipeline handler for a moment, your proposal is to
> add a mechanism to tell applications that they should ignore the
> contents of a request and resubmit it. If this is possible to do for
> applications without any negative side effects such as the one you've
> described above, then I don't see why it would be impossible for
> the pipeline handler to do the same before the request reaches the
> application.
>
> Addressing the exact issue you're facing, it seems that the problem is
> caused by using the raw buffer from the first request only. Under normal
> operation conditions, the pipeline handler will need at least two raw
> buffers, otherwise frames will be dropped. It's the application's
> responsibility to queue enough requests for proper operation if it wants
> to avoid frame drops. I think you can thus use raw buffers supplied in
> the first two requests.
>

It's certainly possible for the pipeline handler to do this kind of thing,
assuming the application has queued an appropriate number of buffers. But it
feels quite cumbersome because I'm "borrowing" buffers associated with requests,
and this needs careful management of buffer ordering.  If/when this association
is removed, this can become quite a bit simpler.
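
To illustrate the bookkeeping, the handler ends up having to remember
which request each borrowed buffer came from, so it can be handed back
to the right request, in the right order, once startup ends (names
invented, everything else omitted):

    #include <map>

    /* Extra state the borrowing forces on us: map each borrowed raw
     * buffer back to its owning request. */
    std::map<libcamera::FrameBuffer *, libcamera::Request *> borrowedBuffers_;

    void borrowRawBuffer(libcamera::Request *request,
                         libcamera::FrameBuffer *raw)
    {
        borrowedBuffers_[raw] = request;
        unicam_->queueBuffer(raw);
    }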

> > > > Of course if the expected behaviour doesn't suit the use case ... then
> > > > ...
> > > >
> > > > This may all be different for pipelines with an inline ISP though ...
> > > > so the research is still useful.
> > >
> > > So the question is... should we continue with this series as a possible
> > > improvement if the pipeline handler wants to support this control?
> >
> > So yes, I would indeed like to revisit this question. I still think
> > it's useful for the original reasons, and the new use-case I'd like to
> > bring into the mix now is HDR.
> >
> > One HDR method combines a number of different exposures to create the
> > output. Now this isn't so relevant for the Pi seeing as we have no
> > hardware for combining images, but you could imagine it being
> > important to platforms more widely. The problem comes when this HDR
> > mode is engaged... what do we do while we wait for all those different
> > exposures to come through? Because until we have one of each, we can't
> > really produce the image that the application is expecting.
> >
> > The idea would be that requests cycle round as normal, but come with
> > metadata saying "I'm not ready yet" while HDR is "starting up". Note
> > that this could of course happen at any time now, not just when the
> > camera starts (or mode switches).
> >
> > I still like the idea of generic "I'm not ready" metadata because
> > applications won't want to understand all the different ways in which
> > things might not be ready. Though supplementary information that
> > details what we're still waiting for might be helpful. Thoughts...?
>
> For the HDR case, would this supplementary information be conveyed in
> the form of an HDR-specific control in the request metadata ?
>
> If an application decides to enable HDR in the middle of a capture
> session, is it unreasonable to expect the application to understand this
> particular reason why frames are not "ready" ?
>
> > > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > > >
> > > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > > the camera system is still in a startup phase, and the application is
> > > > > > advised to avoid using the frame.
> > > > > >
> > > > > > Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
> > > > > > ---
> > > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > > >  1 file changed, 15 insertions(+)
> > > > > >
> > > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > > index adea5f90..4742d907 100644
> > > > > > --- a/src/libcamera/control_ids.yaml
> > > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > > @@ -694,6 +694,21 @@ controls:
> > > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > > >              will occur until the AfPauseResume control is sent.
> > > > > >
> > > > > > +  - StartupFrame:
> > > > > > +      type: bool
> > > > > > +      description: |
> > > > > > +        The value true indicates that the camera system is still in a startup
> > > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > > +        still be changing quite rapidly.
> > > > > > +
> > > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > > +        occur when the camera system starts for the first time, although,
> > > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > > +        other times.
> > > > > > +
> > > > > > +        The value false indicates that this is a normal frame.
> > > > > > +
> > > > > >    # ----------------------------------------------------------------------------
> > > > > >    # Draft controls section
> > > > > >
>
> --
> Regards,
>
> Laurent Pinchart

Patch
diff mbox series

diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
index adea5f90..4742d907 100644
--- a/src/libcamera/control_ids.yaml
+++ b/src/libcamera/control_ids.yaml
@@ -694,6 +694,21 @@  controls:
             Continuous AF is paused. No further state changes or lens movements
             will occur until the AfPauseResume control is sent.
 
+  - StartupFrame:
+      type: bool
+      description: |
+        The value true indicates that the camera system is still in a startup
+        phase where the output images may not be reliable, or that certain of
+        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
+        still be changing quite rapidly.
+
+        Applications are advised to avoid using these frames. Mostly, they will
+        occur when the camera system starts for the first time, although,
+        depending on the sensor and the implementation, they could occur at
+        other times.
+
+        The value false indicates that this is a normal frame.
+
   # ----------------------------------------------------------------------------
   # Draft controls section