[RFC,v1,0/3] libcamera: pipeline: virtual: Add raw Bayer frame support

Message ID 20260501105137.439519-1-maxbretschneider@protonmail.com

Max Bretschneider May 1, 2026, 10:52 a.m. UTC
The virtual pipeline handler currently only produces NV12 output, either
from test patterns or JPEG files. This means that SoftISP, which
processes raw Bayer data, cannot be tested without physical camera
hardware. A first step to achieving this would be to support raw streams
in the virtual pipeline handler. This series introduces a RawFrameGenerator
that reads binary raw Bayer files from disk and serves them through the virtual
pipeline. The changes are opt-in through the YAML configuration: if
virtual cameras are configured with raw_frames, the new code path is
chosen, while the existing test_patterns and frames paths are unaffected.
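For illustration, the opt-in configuration could look roughly like the
sketch below. This is only my guess at a plausible shape: the raw_frames
sub-keys (path, format, width, height) are illustrative, not the actual
schema from the patches.

```yaml
# Hypothetical sketch only -- sub-key names are illustrative.
"Virtual0":
  test_pattern: "bars"        # existing path, unaffected
"Virtual1":
  raw_frames:                 # new opt-in raw Bayer path
    path: "/path/to/raw/frames/"
    format: "SRGGB10"
    width: 1920
    height: 1080
```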

The SoftISP motivation behind this series is complemented by David
Plowman's series introducing the CameraSensorMemory class
(https://patchwork.libcamera.org/patch/25468/), since SoftISP needs an
actual CameraSensor for correct construction. The idea is that, in the
future, the two can be used together to serve raw frames through the
virtual pipeline and process them with SoftISP, for example for CI tests
or more easily reproducible benchmarks.

Testing: I used `dd` to create zero-filled raw frames to test format
negotiation, buffer sizing, general frame flow, config parsing (both
valid and invalid input, e.g. rejecting malformed configurations) and
bit depth handling. I wrote a small bash script to essentially automate
this for all bit depths n in {8, 10, 12, 14, 16} and the four Bayer
orders, in accordance with `src/libcamera/formats.yaml`. I then used
numpy to test the generateFrame() functionality: I generated a frame
with known content, captured it through the pipeline to disk with the
cam application, and read the output back to verify that all pixels
match the expected values.
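The numpy round-trip check can be sketched roughly as below. The file
names, frame size and 16-bit little-native layout are assumptions for
illustration; the real script drives the virtual pipeline with `cam` in
the middle step, which this sketch only simulates with a file copy.

```python
import numpy as np

width, height = 64, 48

# Generate a raw Bayer frame with known content (16-bit container,
# values masked to 12 bits, as an unpacked 12-bit Bayer format would store).
expected = (np.arange(width * height, dtype=np.uint16) & 0x0FFF)
expected = expected.reshape(height, width)
expected.tofile("input.raw")  # fed to the pipeline's raw_frames directory

# ... capture through the pipeline to disk with `cam`, producing output.raw ...
# Simulated here as a pass-through copy for the sketch:
np.fromfile("input.raw", dtype=np.uint16).tofile("output.raw")

# Read the captured frame back and verify every pixel matches.
captured = np.fromfile("output.raw", dtype=np.uint16).reshape(height, width)
assert np.array_equal(captured, expected), "pixel mismatch"
```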

I noticed that the pipeline handler components currently have no
dedicated test coverage. If I missed existing tests, or if there is a
preferred approach for adding additional testing, I'd be open to any
suggestions. For now I can also share my scripts if that's wanted.

I ran `git clang-format` as well as `checkstyle.py`. I've run the
Meson tests both with and without the changes and get the same results:
Ok:                 51
Expected Fail:      1
Fail:               1
Unexpected Pass:    0
Skipped:            29
Timeout:            0

Open questions:
- The sensor properties set in match() are currently hardcoded, and I've
  left them with a \todo. Especially since we've also discussed adding
  controls such as Exposure and AnalogueGain in "[RFC] Decouple
  SoftwareISP from CameraSensor for virtual pipeline testing": should I
  move these directly to the YAML as well, or proceed differently?
- I've written a lambda to derive the pixel format, but at this point it
  has gotten quite large. Maybe this could also be a 2D lookup table or
  something else that is cleaner. On that lambda: I believe all cases
  are covered, but the default case is 16; I wasn't sure whether 16
  should also be made an explicit case, with something more sensible
  chosen for the default.
- The file collection logic is now duplicated between the raw_frames and
  the frames paths. Should I add a helper here to deduplicate it?
- Currently every frame is read into framesDatas_ immediately at
  startup. Depending on the number and the size of the raw files, this
  could take up quite a lot of memory. For the use cases I thought of
  (e.g. CI) this would be fine, since the number of files would probably
  be comparatively small there. Still I wanted to bring it up.
- We weren't sure if the SPDX license header is fine like this. For the
  moment I've just used my intern company email address; is this fine as
  is?
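On the pixel-format lambda above: one way to picture the 2D lookup
alternative is a table keyed by (Bayer order, bit depth). Sketched here
in Python for brevity (the actual code is C++, and the function name is
mine); the format names follow the unpacked Bayer formats in
`src/libcamera/formats.yaml`.

```python
# (Bayer order, bit depth) -> libcamera pixel format name, e.g. SRGGB10.
BAYER_ORDERS = ["BGGR", "GBRG", "GRBG", "RGGB"]
BIT_DEPTHS = [8, 10, 12, 14, 16]

FORMAT_TABLE = {
    (order, depth): f"S{order}{depth}"
    for order in BAYER_ORDERS
    for depth in BIT_DEPTHS
}

def raw_format(order: str, depth: int) -> str:
    # Explicit lookup: an unknown combination raises KeyError instead of
    # silently falling back to 16, sidestepping the default-case question.
    return FORMAT_TABLE[(order, depth)]
```

For example, raw_format("RGGB", 10) yields "SRGGB10".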

Documentation:
I've added documentation according to coding-style.rst; if my comments
are not sufficient, I'd of course be happy to add more.

AI Disclosure:
I've used DeepWiki to familiarize myself with the libcamera repository
and to point me at reference implementations and patterns for
orientation.

Thank you all for your time!

Max Bretschneider (3):
  libcamera: pipeline: virtual: Add RawFrameGenerator
  libcamera: pipeline: virtual: Add raw_frames config parser support
  libcamera: pipeline: virtual: Support StreamRole::Raw and Bayer
    formats

 .../pipeline/virtual/config_parser.cpp        | 100 ++++++++++++-
 src/libcamera/pipeline/virtual/meson.build    |   1 +
 .../pipeline/virtual/raw_frame_generator.cpp  | 132 ++++++++++++++++++
 .../pipeline/virtual/raw_frame_generator.h    |  47 +++++++
 src/libcamera/pipeline/virtual/virtual.cpp    | 128 ++++++++++++++---
 src/libcamera/pipeline/virtual/virtual.h      |   3 +-
 6 files changed, 387 insertions(+), 24 deletions(-)
 create mode 100644 src/libcamera/pipeline/virtual/raw_frame_generator.cpp
 create mode 100644 src/libcamera/pipeline/virtual/raw_frame_generator.h

--
2.43.0