[0/8] libcamera: Add swstats_cpu::processFrame() and atomisp pipeline handler

Message ID 20241103152205.29219-1-hdegoede@redhat.com

Message

Hans de Goede Nov. 3, 2024, 3:21 p.m. UTC
Hi All,

Here is a patch series adding a new pipeline handler for the atomisp.

This series includes the patches which I posted earlier as:
"[RFC 0/4] libcamera: swstats_cpu: Add processFrame() method"
as patches 1, 2, 4 and 5. I have added the Reviewed-by tags received on the
RFC posting and I've fixed the lack of doxygen documentation.

I have pushed this to the libcamera-softisp repository as atomisp-v1 and it
has passed CI without any issues, see:
https://gitlab.freedesktop.org/camera/libcamera-softisp/-/commit/111b26f3b1c353888ebab3268a632f8827d83f2d/pipelines?ref=atomisp-v1

Regards,

Hans


Hans de Goede (8):
  libcamera: swstats_cpu: Update statsProcessFn() / processLine0()
    documentation
  libcamera: swstats_cpu: Drop patternSize_ documentation
  libcamera: swstats_cpu: Move header to libcamera/internal/software_isp
  libcamera: software_isp: Move benchmark code to its own class
  libcamera: swstats_cpu: Add processFrame() method
  libcamera: swstats_cpu: Add support for YUV420
  libcamera: ipa_manager: createIPA: Allow passing an IPA name to match
  libcamera: Add new atomisp pipeline handler

 include/libcamera/internal/ipa_manager.h      |  11 +-
 include/libcamera/internal/ipa_module.h       |   2 +-
 .../internal/software_isp/benchmark.h         |  36 ++
 .../internal/software_isp/meson.build         |   2 +
 .../internal}/software_isp/swstats_cpu.h      |  18 +
 meson.build                                   |   1 +
 meson_options.txt                             |   1 +
 src/ipa/simple/data/uncalibrated_atomisp.yaml |   7 +
 src/libcamera/ipa_manager.cpp                 |   7 +-
 src/libcamera/ipa_module.cpp                  |   6 +-
 src/libcamera/pipeline/atomisp/atomisp.cpp    | 584 ++++++++++++++++++
 src/libcamera/pipeline/atomisp/meson.build    |   5 +
 src/libcamera/software_isp/benchmark.cpp      |  93 +++
 src/libcamera/software_isp/debayer_cpu.cpp    |  32 +-
 src/libcamera/software_isp/debayer_cpu.h      |   9 +-
 src/libcamera/software_isp/meson.build        |   3 +-
 src/libcamera/software_isp/swstats_cpu.cpp    | 178 +++++-
 17 files changed, 926 insertions(+), 69 deletions(-)
 create mode 100644 include/libcamera/internal/software_isp/benchmark.h
 rename {src/libcamera => include/libcamera/internal}/software_isp/swstats_cpu.h (79%)
 create mode 100644 src/ipa/simple/data/uncalibrated_atomisp.yaml
 create mode 100644 src/libcamera/pipeline/atomisp/atomisp.cpp
 create mode 100644 src/libcamera/pipeline/atomisp/meson.build
 create mode 100644 src/libcamera/software_isp/benchmark.cpp

Comments

Laurent Pinchart Nov. 4, 2024, 11:53 p.m. UTC | #1
Hi Hans,

(CC'ing Sakari)

Thank you for the patch.

A few high-level questions first.

On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
> Add a basic atomisp pipeline handler which supports configuring
> the pipeline, capturing frames and selecting front/back sensor.
> 
> The atomisp ISP needs some extra lines/columns when debayering and also
> has some maximum resolution limitations; this causes the available output
> resolutions to differ from the sensor resolutions.
> 
> The atomisp driver's Android heritage means that it mostly works as a
> non-media-controller-centric v4l2 device, primarily controlled through
> its /dev/video# node.

Could that be fixed on the kernel side (assuming someone would be able
to do the work of course) ?

> The driver takes care of setting up the pipeline itself, propagating
> try / set fmt calls down from its single /dev/video# node to the
> selected sensor, taking the necessary padding, etc. into account.
> 
> Therefore, things like getting the list of supported formats / sizes and
> setFmt() calls are all done on the /dev/video# node instead of on subdevs;
> this avoids having to duplicate the padding, etc. logic in the pipeline
> handler.
> 
> Since the statistics buffers which we get from the ISP2 are not documented

Could the stats format be reverse-engineered ? Or alternatively, could
Intel provide documentation (waving at Sakari) ?

> this uses the swstats_cpu and simple-IPA from the swisp. At the moment only
> aec/agc is supported.
> 
> awb support will be added in a follow-up patch.
> 
> Signed-off-by: Hans de Goede <hdegoede@redhat.com>

[snip]
Hans de Goede Nov. 6, 2024, 1:25 p.m. UTC | #2
Hi Laurent,

On 5-Nov-24 12:53 AM, Laurent Pinchart wrote:
> Hi Hans,
> 
> (CC'ing Sakari)
> 
> Thank you for the patch.
> 
> A few high-level questions first.
> 
> On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
>> Add a basic atomisp pipeline handler which supports configuring
>> the pipeline, capturing frames and selecting front/back sensor.
>>
>> The atomisp ISP needs some extra lines/columns when debayering and also
>> has some maximum resolution limitations; this causes the available output
>> resolutions to differ from the sensor resolutions.
>>
>> The atomisp driver's Android heritage means that it mostly works as a
>> non-media-controller-centric v4l2 device, primarily controlled through
>> its /dev/video# node.
> 
> Could that be fixed on the kernel side (assuming someone would be able
> to do the work of course) ?

Yes, note that the current kernel driver already uses the media-controller
and has separate subdevs for the ISP, CSI receivers, sensors and VCM,
see e.g. the 2 attached pngs for 2 different setups (generated by dot).

And the atomisp pipeline handler e.g. already configures mc-links to
select which sensor to use.

So we are already part way there. The thing which currently is not
very mc-centric is that a single set_fmt call is made on /dev/video#
after setting the mc-links and then that configures the fmts
on all the subdevs taking the special resolution-padding requirements
of the ISP into account.
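The effect of this resolution padding on the available output sizes can be
sketched with a small model. The pad and limit values below are hypothetical
placeholders for illustration only, not numbers taken from the atomisp driver:

```python
# Rough model of why the atomisp output resolutions differ from the
# sensor resolutions: the ISP consumes extra columns/lines when
# debayering and has a maximum output size. All constants below are
# hypothetical placeholders, not values from the atomisp driver.

PAD_COLUMNS = 16           # assumed extra columns needed for debayering
PAD_LINES = 16             # assumed extra lines needed for debayering
MAX_OUTPUT = (2592, 1944)  # assumed maximum ISP output size

def max_output_for_sensor_mode(sensor_width, sensor_height):
    """Largest output the ISP could produce from a given sensor mode."""
    return (min(sensor_width - PAD_COLUMNS, MAX_OUTPUT[0]),
            min(sensor_height - PAD_LINES, MAX_OUTPUT[1]))

# With these placeholder numbers, a 1936x1096 sensor mode would yield
# at most a 1920x1080 output.
```

This is the logic that, in the real driver, runs once inside the kernel's
set_fmt handling rather than being duplicated in the pipeline handler.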

Currently the atomisp kernel code already allocates and initializes
a bunch of ISP contexts at this set_fmt call time (rather than
at request-buffers time) and more importantly it selects which
pipeline program (since the ISP is not fixed function) to run on
the ISP at this time. Changing that is very much non-trivial.

I guess we could keep allocating those at that time and have
a flag (ioctl / v4l2-ctrl?) to skip the propagating of the fmts
to the subdevs and instead having the pipeline handler set
the subdev fmts itself, but I do not see much added value in that
atm.

>> The driver takes care of setting up the pipeline itself, propagating
>> try / set fmt calls down from its single /dev/video# node to the
>> selected sensor, taking the necessary padding, etc. into account.
>>
>> Therefore, things like getting the list of supported formats / sizes and
>> setFmt() calls are all done on the /dev/video# node instead of on subdevs;
>> this avoids having to duplicate the padding, etc. logic in the pipeline
>> handler.
>>
>> Since the statistics buffers which we get from the ISP2 are not documented
> 
> Could the stats format be reverse-engineered ? Or alternatively, could
> Intel provide documentation (waving at Sakari) ?

I have asked Sakari about this already, but with these kind of things
it is going to take a while to get an official yes / no answer.

>> this uses the swstats_cpu and simple-IPA from the swisp. At the moment only
>> aec/agc is supported.
>>
>> awb support will be added in a follow-up patch.

Regards,

Hans
Laurent Pinchart Nov. 6, 2024, 1:40 p.m. UTC | #3
Hi Hans,

On Wed, Nov 06, 2024 at 02:25:31PM +0100, Hans de Goede wrote:
> On 5-Nov-24 12:53 AM, Laurent Pinchart wrote:
> > Hi Hans,
> > 
> > (CC'ing Sakari)
> > 
> > Thank you for the patch.
> > 
> > A few high-level questions first.
> > 
> > On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
> >> Add a basic atomisp pipeline handler which supports configuring
> >> the pipeline, capturing frames and selecting front/back sensor.
> >>
> >> The atomisp ISP needs some extra lines/columns when debayering and also
> >> has some maximum resolution limitations; this causes the available output
> >> resolutions to differ from the sensor resolutions.
> >>
> >> The atomisp driver's Android heritage means that it mostly works as a
> >> non-media-controller-centric v4l2 device, primarily controlled through
> >> its /dev/video# node.
> > 
> > Could that be fixed on the kernel side (assuming someone would be able
> > to do the work of course) ?
> 
> Yes, note that the current kernel driver already uses the media-controller
> and has separate subdevs for the ISP, CSI receivers, sensors and VCM,
> see e.g. the 2 attached pngs for 2 different setups (generated by dot).
> 
> And the atomisp pipeline handler e.g. already configures mc-links to
> select which sensor to use.
> 
> So we are already part way there.

Ah nice :-)

> The thing which currently is not
> very mc-centric is that a single set_fmt call is made on /dev/video#
> after setting the mc-links and then that configures the fmts
> on all the subdevs taking the special resolution-padding requirements
> of the ISP into account.
> 
> Currently the atomisp kernel code already allocates and initializes
> a bunch of ISP contexts at this set_fmt call time (rather than
> at request-buffers time) and more importantly it selects which
> pipeline program (since the ISP is not fixed function) to run on
> the ISP at this time. Changing that is very much non-trivial.

I see there's quite a bit of untangling that would need to be done
indeed.

Speaking of this, how do you plan to handle side-by-side development in
libcamera and in the driver ? I don't see how we could ensure backward
compatibility in any clean way on either side, would it be fine to tell
users they will always have to use the latest version on both sides ?

> I guess we could keep allocating those at that time and have
> a flag (ioctl / v4l2-ctrl?) to skip the propagation of the fmts
> to the subdevs and instead have the pipeline handler set
> the subdev fmts itself, but I do not see much added value in that
> atm.

By itself it doesn't add a lot of value indeed, but it would still
prepare for the future.

Another thing that would need to be looked at is replacing the ISP
parameters ioctl API with a parameters buffer. That will be useful to
set the white balance gains.

> >> The driver takes care of setting up the pipeline itself, propagating
> >> try / set fmt calls down from its single /dev/video# node to the
> >> selected sensor, taking the necessary padding, etc. into account.
> >>
> >> Therefore, things like getting the list of supported formats / sizes and
> >> setFmt() calls are all done on the /dev/video# node instead of on subdevs;
> >> this avoids having to duplicate the padding, etc. logic in the pipeline
> >> handler.
> >>
> >> Since the statistics buffers which we get from the ISP2 are not documented
> > 
> > Could the stats format be reverse-engineered ? Or alternatively, could
> > Intel provide documentation (waving at Sakari) ?
> 
> I have asked Sakari about this already, but with these kind of things
> it is going to take a while to get an official yes / no answer.
> 
> >> this uses the swstats_cpu and simple-IPA from the swisp. At the moment only
> >> aec/agc is supported.
> >>
> >> awb support will be added in a follow-up patch.
Hans de Goede Nov. 6, 2024, 2:17 p.m. UTC | #4
Hi Laurent,

On 6-Nov-24 2:40 PM, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Wed, Nov 06, 2024 at 02:25:31PM +0100, Hans de Goede wrote:
>> On 5-Nov-24 12:53 AM, Laurent Pinchart wrote:
>>> Hi Hans,
>>>
>>> (CC'ing Sakari)
>>>
>>> Thank you for the patch.
>>>
>>> A few high-level questions first.
>>>
>>> On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
>>>> Add a basic atomisp pipeline handler which supports configuring
>>>> the pipeline, capturing frames and selecting front/back sensor.
>>>>
>>>> The atomisp ISP needs some extra lines/columns when debayering and also
>>>> has some maximum resolution limitations; this causes the available output
>>>> resolutions to differ from the sensor resolutions.
>>>>
>>>> The atomisp driver's Android heritage means that it mostly works as a
>>>> non-media-controller-centric v4l2 device, primarily controlled through
>>>> its /dev/video# node.
>>>
>>> Could that be fixed on the kernel side (assuming someone would be able
>>> to do the work of course) ?
>>
>> Yes, note that the current kernel driver already uses the media-controller
>> and has separate subdevs for the ISP, CSI receivers, sensors and VCM,
>> see e.g. the 2 attached pngs for 2 different setups (generated by dot).
>>
>> And the atomisp pipeline handler e.g. already configures mc-links to
>> select which sensor to use.
>>
>> So we are already part way there.
> 
> Ah nice :-)
> 
>> The thing which currently is not
>> very mc-centric is that a single set_fmt call is made on /dev/video#
>> after setting the mc-links and then that configures the fmts
>> on all the subdevs taking the special resolution-padding requirements
>> of the ISP into account.
>>
>> Currently the atomisp kernel code already allocates and initializes
>> a bunch of ISP contexts at this set_fmt call time (rather than
>> at request-buffers time) and more importantly it selects which
>> pipeline program (since the ISP is not fixed function) to run on
>> the ISP at this time. Changing that is very much non-trivial.
> 
> I see there's quite a bit of untangling that would need to be done
> indeed.
> 
> Speaking of this, how do you plan to handle side-by-side development in
> libcamera and in the driver ?

That is a good question; my answer is: "carefully".

I hope to be able to make time to get any review comments on
the atomisp pipeline handler addressed and to post new versions
regularly, so that this can get merged in a reasonable time frame.

Then I would like to enable this for Fedora 42 (code complete
deadline 18 Feb 2025, final freeze April 1st 2025) and once enabled
there some form of compatibility will need to be kept.

As mentioned in the other thread, IMHO it is important to start shipping
this to end users in a somewhat usable form to hopefully build
a community around this and get more contributors.

So let's say that we start with swstats and then manage to switch
to ISP 3A stats; then I will likely keep the swstats support around
behind some flag for testing / comparison, at least for a while.

And I would e.g. make the pipeline handler detect an older kernel
driver and auto switch to the swstats in that case for say approx.
6 months (so keep swstats support around at least that long).
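The auto-switch idea above amounts to a small selection policy; a sketch
(the probe flag is a hypothetical stand-in for whatever detection mechanism
the kernel ends up exposing, e.g. the presence of a new stats video node):

```python
# Sketch of the proposed compatibility scheme: prefer the kernel's ISP
# statistics when a (future) stats interface is detected, otherwise
# fall back to CPU statistics (swstats). The arguments are hypothetical
# stand-ins; nothing here reflects an existing libcamera or kernel API.

def select_stats_backend(kernel_has_stats_node, force_swstats=False):
    """Pick the statistics source for the pipeline handler."""
    if force_swstats:
        return "swstats"    # testing / comparison flag kept around
    if kernel_has_stats_node:
        return "isp-stats"  # newer kernel: use the ISP 3A stats
    return "swstats"        # older kernel: auto fallback
```

The point is that neither side needs a hard version pin: an older kernel
simply lands in the fallback branch.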

So basically my idea would be to not pin ourselves to providing
a stable ABI / compatibility in either direction (kernel > libcamera,
libcamera > kernel) forever. But I also don't want things to break
if the 2 are not upgraded at exactly the same time.

Likewise if the kernel gets a new /dev/video node for stats buffers
and that is not used by userspace then it should behave as before
and allocate stats buffers itself and just cycle through those
as it does now, since I don't think the ISP can work without them.

> I don't see how we could ensure backward
> compatibility in any clean way on either side, would it be fine to tell
> users they will always have to use the latest version on both sides ?

See above. My compromise would be no long term ABI guarantees
(because staging driver) but to add in some leeway by not breaking
things if they get a bit out of sync.

>> I guess we could keep allocating those at that time and have
>> a flag (ioctl / v4l2-ctrl?) to skip the propagation of the fmts
>> to the subdevs and instead have the pipeline handler set
>> the subdev fmts itself, but I do not see much added value in that
>> atm.
> 
> By itself it doesn't add a lot of value indeed, but it would still
> prepare for the future.
> 
> Another thing that would need to be looked at is replacing the ISP
> parameters ioctl API with a parameters buffer. That will be useful to
> set the white balance gains.

Interesting. I have not had time to look into this yet. But I think that
currently there is some custom ioctl which passes params including the
white balance gains.

I definitely do not plan to use any of the still existing (a lot have
been removed already) custom atomisp IOCTLs. I was actually thinking
about having v4l2-controls on the ISP subdev for the white balance
gains. But if other ISPs are using parameter buffers for this then
that sounds good.

Do they go through another /dev/video# node, or ...? Are there any
docs / example code for this?

It would be good to replace the custom ioctl used for the atomisp
params which contain the gains with a parameter buffer mechanism.
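For illustration, a parameter buffer in the spirit of what other ISPs
(e.g. rkisp1) queue on a dedicated video node could carry the white balance
gains as a packed binary blob. The layout, field order, and fixed-point
format below are entirely invented; nothing here reflects a real atomisp ABI:

```python
# Hypothetical sketch of a parameter-buffer payload carrying white
# balance gains, modeled loosely on the params-buffer approach other
# ISPs use instead of custom ioctls. The Q8 fixed-point format and the
# 4-gain little-endian layout are invented for illustration.
import struct

AWB_GAIN_SHIFT = 8  # assumed Q8 fixed-point format for the gains

def pack_awb_params(gain_r, gain_gr, gain_gb, gain_b):
    """Pack four AWB gains into a little-endian binary blob."""
    to_fp = lambda g: int(round(g * (1 << AWB_GAIN_SHIFT)))
    return struct.pack("<4I", to_fp(gain_r), to_fp(gain_gr),
                       to_fp(gain_gb), to_fp(gain_b))

def unpack_awb_params(blob):
    """Inverse of pack_awb_params(), back to float gains."""
    return tuple(v / (1 << AWB_GAIN_SHIFT)
                 for v in struct.unpack("<4I", blob))
```

The IPA would fill such a buffer per-frame and the pipeline handler would
queue it to the driver, replacing the per-feature custom ioctls.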

Note that AFAICT the atomisp has multiple parameter buffer types
(currently separate ioctls), e.g. separate buffers to pass parameters
related to special optional features like digital image stabilization.

Regards,

Hans