[libcamera-devel,4/4] libcamera: rpi: Handle SensorConfiguration

Message ID 20230724123907.29086-5-jacopo.mondi@ideasonboard.com
State Accepted
Series
  • libcamera: Introduce SensorConfiguration

Commit Message

Jacopo Mondi July 24, 2023, 12:39 p.m. UTC
Handle the SensorConfiguration provided by the application in the
pipeline validate() and configure() call chains.

During validation, first make sure SensorConfiguration is valid, then
handle it to compute the sensor format.

For the VC4 platform, where the RAW stream follows the sensor's
configuration, adjust the RAW stream configuration to match the sensor
configuration.

Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
---
 .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
 .../pipeline/rpi/common/pipeline_base.h       |  4 +-
 src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
 3 files changed, 83 insertions(+), 15 deletions(-)
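
For illustration, a minimal application-side sketch of what this series
enables. The SensorConfiguration fields (bitDepth, outputSize) and the
sensorConfig member follow the usage visible in the diff below; the
surrounding libcamera calls are assumptions for the sketch, not part of
this patch:

        std::unique_ptr<CameraConfiguration> config =
                camera->generateConfiguration({ StreamRole::Raw,
                                                StreamRole::Viewfinder });

        /* Ask for a fixed 10-bit, 1080p sensor readout. */
        SensorConfiguration sensorConfig;
        sensorConfig.bitDepth = 10;
        sensorConfig.outputSize = Size(1920, 1080);
        config->sensorConfig = sensorConfig;

        /*
         * validate() rejects bitDepth/size combinations the sensor cannot
         * produce exactly; on VC4 the RAW stream is adjusted to match.
         */
        if (config->validate() == CameraConfiguration::Invalid)
                return -EINVAL;

        camera->configure(config.get());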

Comments

Naushir Patuck July 26, 2023, 9:52 a.m. UTC | #1
Hi Jacopo,

Thank you for your work.

On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
<libcamera-devel@lists.libcamera.org> wrote:
>
> Handle the SensorConfiguration provided by the application in the
> pipeline validate() and configure() call chains.
>
> During validation, first make sure SensorConfiguration is valid, then
> handle it to compute the sensor format.
>
> For the VC4 platform where the RAW stream follows the sensor's
> configuration adjust the RAW stream configuration to match the sensor
> configuration.
>
> Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> ---
>  .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
>  .../pipeline/rpi/common/pipeline_base.h       |  4 +-
>  src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
>  3 files changed, 83 insertions(+), 15 deletions(-)
>
> diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> index e0fbeec37fe9..574e003402df 100644
> --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
>         if (config_.empty())
>                 return Invalid;
>
> +       if (!sensorConfig.valid()) {
> +               LOG(RPI, Error) << "Invalid sensor configuration request";
> +               return Invalid;
> +       }
> +
>         status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
>
>         /*
> @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
>         std::sort(outStreams.begin(), outStreams.end(),
>                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
>
> -       /* Compute the sensor configuration. */
> -       unsigned int bitDepth = defaultRawBitDepth;
> -       if (!rawStreams.empty()) {
> +       /* Compute the sensor's format then do any platform specific fixups. */
> +       unsigned int bitDepth;
> +       Size sensorSize;
> +
> +       if (sensorConfig) {
> +               /* Use the application provided sensor configuration. */
> +               bitDepth = sensorConfig.bitDepth;
> +               sensorSize = sensorConfig.outputSize;
> +       } else if (!rawStreams.empty()) {
> +               /* Use the RAW stream format and size. */
>                 BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
>                 bitDepth = bayerFormat.bitDepth;
> +               sensorSize = rawStreams[0].cfg->size;
> +       } else {
> +               bitDepth = defaultRawBitDepth;
> +               sensorSize = outStreams[0].cfg->size;
>         }
>
> -       sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> -                                                                : rawStreams[0].cfg->size,
> -                                             bitDepth);
> +       sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);

We don't necessarily need to do this in the case where sensorConfig is present,
right?  But I suppose it's a good way to validate the values in sensorConfig,
so there's no harm in keeping it.

> +
> +       /*
> +        * If a sensor configuration has been requested, it should apply
> +        * without modifications.
> +        */
> +       if (sensorConfig) {
> +               BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> +
> +               if (bayer.bitDepth != sensorConfig.bitDepth ||
> +                   sensorFormat_.size != sensorConfig.outputSize) {
>
> -       /* Do any platform specific fixups. */
> -       status = data_->platformValidate(rawStreams, outStreams);
> +                       LOG(RPI, Error) << "Invalid sensor configuration: "
> +                                       << "bitDepth/size mismatch";
> +                       return Invalid;
> +               }
> +       }
> +
> +       status = data_->platformValidate(this, rawStreams, outStreams);

Minor nit: we lost the comment above this statement.


>         if (status == Invalid)
>                 return Invalid;
>
> @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
>         std::sort(ispStreams.begin(), ispStreams.end(),
>                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
>
> -       /* Apply the format on the sensor with any cached transform. */
> +       /*
> +        * Apply the format on the sensor with any cached transform.
> +        *
> +        * If the application has provided a sensor configuration apply it
> +        * instead of just applying a format.
> +        */
>         const RPiCameraConfiguration *rpiConfig =
>                                 static_cast<const RPiCameraConfiguration *>(config);
> -       V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> +       V4L2SubdeviceFormat sensorFormat;
>
> -       ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> +       if (rpiConfig->sensorConfig) {
> +               ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> +                                                       rpiConfig->combinedTransform_,
> +                                                       &sensorFormat);
> +       } else {
> +               sensorFormat = rpiConfig->sensorFormat_;
> +               ret = data->sensor_->setFormat(&sensorFormat,
> +                                              rpiConfig->combinedTransform_);
> +       }
>         if (ret)
>                 return ret;
>
> diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> index a139c98a5a2b..0a795f4d2689 100644
> --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> @@ -42,6 +42,7 @@ namespace RPi {
>  /* Map of mbus codes to supported sizes reported by the sensor. */
>  using SensorFormats = std::map<unsigned int, std::vector<Size>>;
>
> +class RPiCameraConfiguration;
>  class CameraData : public Camera::Private
>  {
>  public:
> @@ -72,7 +73,8 @@ public:
>                 V4L2VideoDevice *dev;
>         };
>
> -       virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> +       virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> +                                                            std::vector<StreamParams> &rawStreams,
>                                                              std::vector<StreamParams> &outStreams) const = 0;
>         virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
>                                       std::optional<BayerFormat::Packing> packing,
> diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> index 018cf4881d0e..bf864d4174b2 100644
> --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> @@ -65,7 +65,8 @@ public:
>         {
>         }
>
> -       CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> +       CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> +                                                    std::vector<StreamParams> &rawStreams,
>                                                      std::vector<StreamParams> &outStreams) const override;
>
>         int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
>         return 0;
>  }
>
> -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> +                                                           std::vector<StreamParams> &rawStreams,
>                                                             std::vector<StreamParams> &outStreams) const
>  {
>         CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
>                 return CameraConfiguration::Status::Invalid;
>         }
>
> -       if (!rawStreams.empty())
> +       if (!rawStreams.empty()) {
>                 rawStreams[0].dev = unicam_[Unicam::Image].dev();
>
> +               /* Adjust the RAW stream to match the requested sensor config. */
> +               if (rpiConfig->sensorConfig) {
> +                       StreamConfiguration *rawStream = rawStreams[0].cfg;
> +                       BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> +
> +                       /* Handle flips to make sure to match the RAW stream format. */
> +                       if (flipsAlterBayerOrder_)
> +                               rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> +                       PixelFormat rawFormat = rawBayer.toPixelFormat();
> +
> +                       if (rawStream->pixelFormat != rawFormat ||
> +                           rawStream->size != rpiConfig->sensorConfig.outputSize) {
> +                               rawStream->pixelFormat = rawFormat;
> +                               rawStream->size = rpiConfig->sensorConfig.outputSize;
> +
> +                               status = CameraConfiguration::Adjusted;
> +                       }
> +               }
> +       }
> +

I wonder if we can make some simplifications here?  The handling of Bayer
orders and transforms already happens in RPiCameraConfiguration::validate()
(done there to avoid having derived pipeline handlers repeat the same
construct). Could it simply be removed from here without any loss of
functionality?

If the answer is yes, can I suggest another change: instead of passing
*rpiConfig into this function, we could pass in sensorFormat_?  This ought to
be equivalent, I think. In fact, a completely unrelated change that I'm working
on passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &), so this
may be ideal :)
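
To illustrate the suggestion, a hypothetical sketch of the alternative
signature (not what this patch applies; based on the declarations in the
diff above):

        virtual CameraConfiguration::Status
        platformValidate(const V4L2SubdeviceFormat &sensorFormat,
                         std::vector<StreamParams> &rawStreams,
                         std::vector<StreamParams> &outStreams) const = 0;

        /* The call site in RPiCameraConfiguration::validate() would become: */
        status = data_->platformValidate(sensorFormat_, rawStreams, outStreams);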

Regards,
Naush

>         /*
>          * For the two ISP outputs, one stream must be equal or smaller than the
>          * other in all dimensions.
> @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
>         for (unsigned int i = 0; i < outStreams.size(); i++) {
>                 Size size;
>
> +               /* \todo Warn if upscaling: reduces image quality. */
> +
>                 size.width = std::min(outStreams[i].cfg->size.width,
>                                       outStreams[0].cfg->size.width);
>                 size.height = std::min(outStreams[i].cfg->size.height,
> --
> 2.40.1
>
Jacopo Mondi July 27, 2023, 8:13 a.m. UTC | #2
Hi Naush

On Wed, Jul 26, 2023 at 10:52:59AM +0100, Naushir Patuck wrote:
> Hi Jacopo,
>
> Thank you for your work.
>
> On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
> <libcamera-devel@lists.libcamera.org> wrote:
> >
> > Handle the SensorConfiguration provided by the application in the
> > pipeline validate() and configure() call chains.
> >
> > During validation, first make sure SensorConfiguration is valid, then
> > handle it to compute the sensor format.
> >
> > For the VC4 platform where the RAW stream follows the sensor's
> > configuration adjust the RAW stream configuration to match the sensor
> > configuration.
> >
> > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> > ---
> >  .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
> >  .../pipeline/rpi/common/pipeline_base.h       |  4 +-
> >  src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
> >  3 files changed, 83 insertions(+), 15 deletions(-)
> >
> > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > index e0fbeec37fe9..574e003402df 100644
> > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> >         if (config_.empty())
> >                 return Invalid;
> >
> > +       if (!sensorConfig.valid()) {
> > +               LOG(RPI, Error) << "Invalid sensor configuration request";
> > +               return Invalid;
> > +       }
> > +
> >         status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
> >
> >         /*
> > @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> >         std::sort(outStreams.begin(), outStreams.end(),
> >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> >
> > -       /* Compute the sensor configuration. */
> > -       unsigned int bitDepth = defaultRawBitDepth;
> > -       if (!rawStreams.empty()) {
> > +       /* Compute the sensor's format then do any platform specific fixups. */
> > +       unsigned int bitDepth;
> > +       Size sensorSize;
> > +
> > +       if (sensorConfig) {
> > +               /* Use the application provided sensor configuration. */
> > +               bitDepth = sensorConfig.bitDepth;
> > +               sensorSize = sensorConfig.outputSize;
> > +       } else if (!rawStreams.empty()) {
> > +               /* Use the RAW stream format and size. */
> >                 BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
> >                 bitDepth = bayerFormat.bitDepth;
> > +               sensorSize = rawStreams[0].cfg->size;
> > +       } else {
> > +               bitDepth = defaultRawBitDepth;
> > +               sensorSize = outStreams[0].cfg->size;
> >         }
> >
> > -       sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> > -                                                                : rawStreams[0].cfg->size,
> > -                                             bitDepth);
> > +       sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
>
> We don't necessarily need to do this in the case where sensorConfig is present
> right?  But I suppose it's a good way to do a validation on the values in
> sensorConfig, so no harm in keeping it.
>
> > +
> > +       /*
> > +        * If a sensor configuration has been requested, it should apply
> > +        * without modifications.
> > +        */
> > +       if (sensorConfig) {
> > +               BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> > +
> > +               if (bayer.bitDepth != sensorConfig.bitDepth ||
> > +                   sensorFormat_.size != sensorConfig.outputSize) {
> >
> > -       /* Do any platform specific fixups. */
> > -       status = data_->platformValidate(rawStreams, outStreams);
> > +                       LOG(RPI, Error) << "Invalid sensor configuration: "
> > +                                       << "bitDepth/size mismatch";
> > +                       return Invalid;
> > +               }
> > +       }
> > +
> > +       status = data_->platformValidate(this, rawStreams, outStreams);
>
> Minor nit, we lost the comment above this statement.
>
>
> >         if (status == Invalid)
> >                 return Invalid;
> >
> > @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
> >         std::sort(ispStreams.begin(), ispStreams.end(),
> >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> >
> > -       /* Apply the format on the sensor with any cached transform. */
> > +       /*
> > +        * Apply the format on the sensor with any cached transform.
> > +        *
> > +        * If the application has provided a sensor configuration apply it
> > +        * instead of just applying a format.
> > +        */
> >         const RPiCameraConfiguration *rpiConfig =
> >                                 static_cast<const RPiCameraConfiguration *>(config);
> > -       V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> > +       V4L2SubdeviceFormat sensorFormat;
> >
> > -       ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> > +       if (rpiConfig->sensorConfig) {
> > +               ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> > +                                                       rpiConfig->combinedTransform_,
> > +                                                       &sensorFormat);
> > +       } else {
> > +               sensorFormat = rpiConfig->sensorFormat_;
> > +               ret = data->sensor_->setFormat(&sensorFormat,
> > +                                              rpiConfig->combinedTransform_);
> > +       }
> >         if (ret)
> >                 return ret;
> >
> > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > index a139c98a5a2b..0a795f4d2689 100644
> > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > @@ -42,6 +42,7 @@ namespace RPi {
> >  /* Map of mbus codes to supported sizes reported by the sensor. */
> >  using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> >
> > +class RPiCameraConfiguration;
> >  class CameraData : public Camera::Private
> >  {
> >  public:
> > @@ -72,7 +73,8 @@ public:
> >                 V4L2VideoDevice *dev;
> >         };
> >
> > -       virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > +       virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> > +                                                            std::vector<StreamParams> &rawStreams,
> >                                                              std::vector<StreamParams> &outStreams) const = 0;
> >         virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
> >                                       std::optional<BayerFormat::Packing> packing,
> > diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > index 018cf4881d0e..bf864d4174b2 100644
> > --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > @@ -65,7 +65,8 @@ public:
> >         {
> >         }
> >
> > -       CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > +       CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > +                                                    std::vector<StreamParams> &rawStreams,
> >                                                      std::vector<StreamParams> &outStreams) const override;
> >
> >         int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> > @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
> >         return 0;
> >  }
> >
> > -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> > +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > +                                                           std::vector<StreamParams> &rawStreams,
> >                                                             std::vector<StreamParams> &outStreams) const
> >  {
> >         CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> > @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> >                 return CameraConfiguration::Status::Invalid;
> >         }
> >
> > -       if (!rawStreams.empty())
> > +       if (!rawStreams.empty()) {
> >                 rawStreams[0].dev = unicam_[Unicam::Image].dev();
> >
> > +               /* Adjust the RAW stream to match the requested sensor config. */
> > +               if (rpiConfig->sensorConfig) {
> > +                       StreamConfiguration *rawStream = rawStreams[0].cfg;
> > +                       BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > +
> > +                       /* Handle flips to make sure to match the RAW stream format. */
> > +                       if (flipsAlterBayerOrder_)
> > +                               rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > +                       PixelFormat rawFormat = rawBayer.toPixelFormat();
> > +
> > +                       if (rawStream->pixelFormat != rawFormat ||
> > +                           rawStream->size != rpiConfig->sensorConfig.outputSize) {
> > +                               rawStream->pixelFormat = rawFormat;
> > +                               rawStream->size = rpiConfig->sensorConfig.outputSize;
> > +
> > +                               status = CameraConfiguration::Adjusted;
> > +                       }
> > +               }
> > +       }
> > +
>
> I wonder if we can make some simplifications here?  The block of code that
> handles bayer orders and transforms already happens in
> RPiCameraConfiguration::validate()
> (done there to avoid derived pipeline handlers from repeating the same
> construct). Could it be simply removed from here without any loss of
> functionality?

Are you sure we're actually doing the same thing?

As I understand it, the logic implemented here makes sure that the RAW
StreamConfiguration matches the requested SensorConfiguration without
actually testing it against the CSI-2 receiver video device.

In RPiCameraConfiguration::validate(), on the other hand, we take the RAW
StreamConfiguration regardless of the SensorConfig, create a
V4L2DeviceFormat from it (using the media bus code from the sensor
format), try to apply it to the CSI-2 receiver video device, and adjust
the RAW StreamConfiguration according to the result of applying the
format to the video device.

When it comes to the Bayer order handling, are you referring to:

        /* Handle flips to make sure to match the RAW stream format. */
        if (flipsAlterBayerOrder_)
                rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);

Is that your actual concern?

If we don't do it here, do we risk marking a correct configuration from
the user as Adjusted?

Let's assume your sensor is natively SRGGB10 and you have H/VFLIP
enabled, so that the actual output format is BGGR10.

You receive a CameraConfiguration with a RAW stream with

        { PixelFormat::SBGGR10, 1080p }

And a SensorConfiguration = { 10, 1080p }

In RPiCameraConfiguration::validate() we compute the 'best' sensor
format with CameraData::findBestFormat(). At this point (and here I'm
going from memory so please confirm) flips are not activated, so you
get back a SubdeviceFormat with the native SRGGB10 ordering.

Then we enter platformValidate() with

        sensorFormat_ = { MEDIA_BUS_SRGGB10, 1080p }

and we get to
        BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);

Let's assume we don't do the above if (flipsAlterBayerOrder_) check;
we'll then end up with

        if (rawStream->pixelFormat != rawFormat)

and then we Adjust the RAW stream to PixelFormat::SRGGB10.

Then we're back in RPiCameraConfiguration::validate(). We enter the
"Further fixup on RAW streams" loop. We apply SRGGB10 to the CSI-2
receiver, but here we notice that flipsAlterBayerOrder_ is set. So we go
and adjust the RAW stream configuration back to SBGGR10 and mark it as
Adjusted again, even though it is what the application actually asked
for!

Is my understanding correct here?
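
Spelling the same scenario out as code, a sketch using only the
BayerFormat helpers from the diff (values assumed as in the example
above: native SRGGB10, H/VFLIP requested, flipsAlterBayerOrder_ set):

        /* sensorFormat_.mbus_code == MEDIA_BUS_FMT_SRGGB10_1X10 (native order). */
        BayerFormat rawBayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);

        /* Without the transform we'd compare against SRGGB10 and mark Adjusted. */
        rawBayer = rawBayer.transform(combinedTransform_); /* H/VFLIP: RGGB -> BGGR */
        PixelFormat rawFormat = rawBayer.toPixelFormat();  /* SBGGR10 */

        /* rawFormat now equals the requested SBGGR10, so nothing is Adjusted. */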

>
> If the answer is yes, can I suggest another change - instead of passing in
> *rpiConfig to this function, we can pass in sensorFormat_?  This ought to be
> equivalent I think? In fact, a completely unrelated change that I'm working on
> passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &), so this may
> be ideal :)
>
> Regards,
> Naush
>
> >         /*
> >          * For the two ISP outputs, one stream must be equal or smaller than the
> >          * other in all dimensions.
> > @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> >         for (unsigned int i = 0; i < outStreams.size(); i++) {
> >                 Size size;
> >
> > +               /* \todo Warn if upscaling: reduces image quality. */
> > +
> >                 size.width = std::min(outStreams[i].cfg->size.width,
> >                                       outStreams[0].cfg->size.width);
> >                 size.height = std::min(outStreams[i].cfg->size.height,
> > --
> > 2.40.1
> >
Naushir Patuck July 27, 2023, 2:31 p.m. UTC | #3
Hi Jacopo,

On Thu, 27 Jul 2023 at 09:13, Jacopo Mondi
<jacopo.mondi@ideasonboard.com> wrote:
>
> Hi Naush
>
> On Wed, Jul 26, 2023 at 10:52:59AM +0100, Naushir Patuck wrote:
> > Hi Jacopo,
> >
> > Thank you for your work.
> >
> > On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
> > <libcamera-devel@lists.libcamera.org> wrote:
> > >
> > > Handle the SensorConfiguration provided by the application in the
> > > pipeline validate() and configure() call chains.
> > >
> > > During validation, first make sure SensorConfiguration is valid, then
> > > handle it to compute the sensor format.
> > >
> > > For the VC4 platform where the RAW stream follows the sensor's
> > > configuration adjust the RAW stream configuration to match the sensor
> > > configuration.
> > >
> > > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> > > ---
> > >  .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
> > >  .../pipeline/rpi/common/pipeline_base.h       |  4 +-
> > >  src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
> > >  3 files changed, 83 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > index e0fbeec37fe9..574e003402df 100644
> > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > >         if (config_.empty())
> > >                 return Invalid;
> > >
> > > +       if (!sensorConfig.valid()) {
> > > +               LOG(RPI, Error) << "Invalid sensor configuration request";
> > > +               return Invalid;
> > > +       }
> > > +
> > >         status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
> > >
> > >         /*
> > > @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > >         std::sort(outStreams.begin(), outStreams.end(),
> > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > >
> > > -       /* Compute the sensor configuration. */
> > > -       unsigned int bitDepth = defaultRawBitDepth;
> > > -       if (!rawStreams.empty()) {
> > > +       /* Compute the sensor's format then do any platform specific fixups. */
> > > +       unsigned int bitDepth;
> > > +       Size sensorSize;
> > > +
> > > +       if (sensorConfig) {
> > > +               /* Use the application provided sensor configuration. */
> > > +               bitDepth = sensorConfig.bitDepth;
> > > +               sensorSize = sensorConfig.outputSize;
> > > +       } else if (!rawStreams.empty()) {
> > > +               /* Use the RAW stream format and size. */
> > >                 BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
> > >                 bitDepth = bayerFormat.bitDepth;
> > > +               sensorSize = rawStreams[0].cfg->size;
> > > +       } else {
> > > +               bitDepth = defaultRawBitDepth;
> > > +               sensorSize = outStreams[0].cfg->size;
> > >         }
> > >
> > > -       sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> > > -                                                                : rawStreams[0].cfg->size,
> > > -                                             bitDepth);
> > > +       sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
> >
> > We don't necessarily need to do this in the case where sensorConfig is present
> > right?  But I suppose it's a good way to do a validation on the values in
> > sensorConfig, so no harm in keeping it.
> >
> > > +
> > > +       /*
> > > +        * If a sensor configuration has been requested, it should apply
> > > +        * without modifications.
> > > +        */
> > > +       if (sensorConfig) {
> > > +               BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> > > +
> > > +               if (bayer.bitDepth != sensorConfig.bitDepth ||
> > > +                   sensorFormat_.size != sensorConfig.outputSize) {
> > >
> > > -       /* Do any platform specific fixups. */
> > > -       status = data_->platformValidate(rawStreams, outStreams);
> > > +                       LOG(RPI, Error) << "Invalid sensor configuration: "
> > > +                                       << "bitDepth/size mismatch";
> > > +                       return Invalid;
> > > +               }
> > > +       }
> > > +
> > > +       status = data_->platformValidate(this, rawStreams, outStreams);
> >
> > Minor nit, we lost the comment above this statement.
> >
> >
> > >         if (status == Invalid)
> > >                 return Invalid;
> > >
> > > @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
> > >         std::sort(ispStreams.begin(), ispStreams.end(),
> > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > >
> > > -       /* Apply the format on the sensor with any cached transform. */
> > > +       /*
> > > +        * Apply the format on the sensor with any cached transform.
> > > +        *
> > > +        * If the application has provided a sensor configuration apply it
> > > +        * instead of just applying a format.
> > > +        */
> > >         const RPiCameraConfiguration *rpiConfig =
> > >                                 static_cast<const RPiCameraConfiguration *>(config);
> > > -       V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> > > +       V4L2SubdeviceFormat sensorFormat;
> > >
> > > -       ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> > > +       if (rpiConfig->sensorConfig) {
> > > +               ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> > > +                                                       rpiConfig->combinedTransform_,
> > > +                                                       &sensorFormat);
> > > +       } else {
> > > +               sensorFormat = rpiConfig->sensorFormat_;
> > > +               ret = data->sensor_->setFormat(&sensorFormat,
> > > +                                              rpiConfig->combinedTransform_);
> > > +       }
> > >         if (ret)
> > >                 return ret;
> > >
> > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > index a139c98a5a2b..0a795f4d2689 100644
> > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > @@ -42,6 +42,7 @@ namespace RPi {
> > >  /* Map of mbus codes to supported sizes reported by the sensor. */
> > >  using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> > >
> > > +class RPiCameraConfiguration;
> > >  class CameraData : public Camera::Private
> > >  {
> > >  public:
> > > @@ -72,7 +73,8 @@ public:
> > >                 V4L2VideoDevice *dev;
> > >         };
> > >
> > > -       virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > +       virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> > > +                                                            std::vector<StreamParams> &rawStreams,
> > >                                                              std::vector<StreamParams> &outStreams) const = 0;
> > >         virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
> > >                                       std::optional<BayerFormat::Packing> packing,
> > > diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > index 018cf4881d0e..bf864d4174b2 100644
> > > --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > @@ -65,7 +65,8 @@ public:
> > >         {
> > >         }
> > >
> > > -       CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > +       CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > +                                                    std::vector<StreamParams> &rawStreams,
> > >                                                      std::vector<StreamParams> &outStreams) const override;
> > >
> > >         int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> > > @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
> > >         return 0;
> > >  }
> > >
> > > -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> > > +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > +                                                           std::vector<StreamParams> &rawStreams,
> > >                                                             std::vector<StreamParams> &outStreams) const
> > >  {
> > >         CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> > > @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > >                 return CameraConfiguration::Status::Invalid;
> > >         }
> > >
> > > -       if (!rawStreams.empty())
> > > +       if (!rawStreams.empty()) {
> > >                 rawStreams[0].dev = unicam_[Unicam::Image].dev();
> > >
> > > +               /* Adjust the RAW stream to match the requested sensor config. */
> > > +               if (rpiConfig->sensorConfig) {
> > > +                       StreamConfiguration *rawStream = rawStreams[0].cfg;
> > > +                       BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > > +
> > > +                       /* Handle flips to make sure to match the RAW stream format. */
> > > +                       if (flipsAlterBayerOrder_)
> > > +                               rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > > +                       PixelFormat rawFormat = rawBayer.toPixelFormat();
> > > +
> > > +                       if (rawStream->pixelFormat != rawFormat ||
> > > +                           rawStream->size != rpiConfig->sensorConfig.outputSize) {
> > > +                               rawStream->pixelFormat = rawFormat;
> > > +                               rawStream->size = rpiConfig->sensorConfig.outputSize;
> > > +
> > > +                               status = CameraConfiguration::Adjusted;
> > > +                       }
> > > +               }
> > > +       }
> > > +
> >
> > I wonder if we can make some simplifications here?  The block of code that
> > handles bayer orders and transforms already happens in
> > RPiCameraConfiguration::validate()
> > (done there to avoid derived pipeline handlers from repeating the same
> > construct). Could it be simply removed from here without any loss of
> > functionality?
>
> Are you sure we're actually doing the same thing ?
>
> As I understand it, the logic implemented here makes sure that the RAW
> StreamConfiguration matches the requested SensorConfiguration without
> actually testing it against the CSI-2 receiver video device.
>
> While in RPiCameraConfiguration::validate() we take the RAW
> StreamConfiguration, regardless of the SensorConfig, and create a
> V4L2DeviceFormat from it (using the media bus code from the sensor
> format) and try to apply it to the CSI-2 receiver video device,
> adjusting the RAW StreamConfiguration according to the result of
> applying the format to the video device.

Sorry, I should have been clearer - I was only referring to the Bayer order
handling below, not the whole block of code.

>
> When it comes to the bayer order handling are you referring to:
>
>         /* Handle flips to make sure to match the RAW stream format. */
>         if (flipsAlterBayerOrder_)
>                 rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
>
> If that's your actual concern ?

Yes, this is the bit that I'm wondering if we can remove.

>
> If we don't do it here, do we risk to set to Adjusted a correct
> configuration from user ?
>
> Let's assume your sensor is natively SRGGB10 and you have H/VFLIP
> enabled, so that the actual output format is BGGR10.
>
> You receive a CameraConfiguration with a RAW stream with
>
>         { PixelFormat::SBGGR10, 1080p }
>
> And a SensorConfiguration = { 10, 1080p }
>
> In RPiCameraConfiguration::validate() we compute the 'best' sensor
> format with CameraData::findBestFormat(). At this point (and here I'm
> going from memory so please confirm) flips are not activated, so you
> get back a SubdeviceFormat with the native SRGGB10 ordering.

With you so far.

>
> Then we enter platformValidate() with
>
>         sensorFormat_ = { MEDIA_BUS_SRGGB10, 1080p }
>
> and we get to
>         BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
>
> let's assume we don't do the above check for if
> (flipsAlterBayerOrder_) and we'll end up with
>
>         if (rawStream->pixelFormat != rawFormat)
>
> and then we Adjust the RAW stream to PixelFormat::SRGGB10.
>
> Then we're back to RPiCameraConfiguration::validate(). We enter the
> "Further fixup on RAW streams" loop. We apply SRGGB10 to the CSI-2
> receiver but here we notice that flipsAlterBayerOrder_. So we go and
> adjust back the RAW stream configuration to SBGGR10 and we mark it
> again to Adjusted, while it is what the application has actually asked
> for!
>
> Is my understanding correct here ?

Yes, I agree with your sequencing and final outcome here.

I guess my thinking here was that in Vc4CameraData::platformValidate we would
never touch cfg->pixelFormat: no if (flipsAlterBayerOrder_) clause, and no
if (rawStream->pixelFormat != rawFormat) test.

From your example above, in the "Further fixup on RAW streams" loop, rawFormat
would be BGGR10; after the if (data_->flipsAlterBayerOrder_) we remain at
BGGR10, so this would not signal Adjusted.  I think that's what would happen,
but my head hurts too much thinking about it!

Maybe the simplification I'm thinking of will not work for all circumstances,
and your change is more complete and explicit.  But I wonder, then: should we
remove the if (data_->flipsAlterBayerOrder_) test from the "Further fixup on
RAW streams" loop instead?
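
Concretely, the simplification would amount to something like this in
Vc4CameraData::platformValidate() (a sketch of the proposal only, not
code from this patch):

        if (!rawStreams.empty()) {
                rawStreams[0].dev = unicam_[Unicam::Image].dev();

                /* Only adjust the size; leave pixelFormat for validate(). */
                if (rpiConfig->sensorConfig &&
                    rawStreams[0].cfg->size != rpiConfig->sensorConfig.outputSize) {
                        rawStreams[0].cfg->size = rpiConfig->sensorConfig.outputSize;
                        status = CameraConfiguration::Adjusted;
                }
        }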

Regards,
Naush

>
> >
> > If the answer is yes, can I suggest another change - instead of passing in
> > *rpiConfig to this function, we can pass in sensorFormat_?  This ought to be
> > equivalent I think? In fact, a completely unrelated change that I'm working on
> > passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &), so this may
> > be ideal :)
> >
> > Regards,
> > Naush
> >
> > >         /*
> > >          * For the two ISP outputs, one stream must be equal or smaller than the
> > >          * other in all dimensions.
> > > @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > >         for (unsigned int i = 0; i < outStreams.size(); i++) {
> > >                 Size size;
> > >
> > > +               /* \todo Warn if upscaling: reduces image quality. */
> > > +
> > >                 size.width = std::min(outStreams[i].cfg->size.width,
> > >                                       outStreams[0].cfg->size.width);
> > >                 size.height = std::min(outStreams[i].cfg->size.height,
> > > --
> > > 2.40.1
> > >
Jacopo Mondi July 31, 2023, 8:36 a.m. UTC | #4
Hi Naush

On Thu, Jul 27, 2023 at 03:31:20PM +0100, Naushir Patuck wrote:
> Hi Jacopo,
>
> On Thu, 27 Jul 2023 at 09:13, Jacopo Mondi
> <jacopo.mondi@ideasonboard.com> wrote:
> >
> > Hi Naush
> >
> > On Wed, Jul 26, 2023 at 10:52:59AM +0100, Naushir Patuck wrote:
> > > Hi Jacopo,
> > >
> > > Thank you for your work.
> > >
> > > On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
> > > <libcamera-devel@lists.libcamera.org> wrote:
> > > >
> > > > Handle the SensorConfiguration provided by the application in the
> > > > pipeline validate() and configure() call chains.
> > > >
> > > > During validation, first make sure SensorConfiguration is valid, then
> > > > handle it to compute the sensor format.
> > > >
> > > > For the VC4 platform where the RAW stream follows the sensor's
> > > > configuration adjust the RAW stream configuration to match the sensor
> > > > configuration.
> > > >
> > > > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> > > > ---
> > > >  .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
> > > >  .../pipeline/rpi/common/pipeline_base.h       |  4 +-
> > > >  src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
> > > >  3 files changed, 83 insertions(+), 15 deletions(-)
> > > >
> > > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > index e0fbeec37fe9..574e003402df 100644
> > > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > >         if (config_.empty())
> > > >                 return Invalid;
> > > >
> > > > +       if (!sensorConfig.valid()) {
> > > > +               LOG(RPI, Error) << "Invalid sensor configuration request";
> > > > +               return Invalid;
> > > > +       }
> > > > +
> > > >         status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
> > > >
> > > >         /*
> > > > @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > >         std::sort(outStreams.begin(), outStreams.end(),
> > > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > > >
> > > > -       /* Compute the sensor configuration. */
> > > > -       unsigned int bitDepth = defaultRawBitDepth;
> > > > -       if (!rawStreams.empty()) {
> > > > +       /* Compute the sensor's format then do any platform specific fixups. */
> > > > +       unsigned int bitDepth;
> > > > +       Size sensorSize;
> > > > +
> > > > +       if (sensorConfig) {
> > > > +               /* Use the application provided sensor configuration. */
> > > > +               bitDepth = sensorConfig.bitDepth;
> > > > +               sensorSize = sensorConfig.outputSize;
> > > > +       } else if (!rawStreams.empty()) {
> > > > +               /* Use the RAW stream format and size. */
> > > >                 BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
> > > >                 bitDepth = bayerFormat.bitDepth;
> > > > +               sensorSize = rawStreams[0].cfg->size;
> > > > +       } else {
> > > > +               bitDepth = defaultRawBitDepth;
> > > > +               sensorSize = outStreams[0].cfg->size;
> > > >         }
> > > >
> > > > -       sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> > > > -                                                                : rawStreams[0].cfg->size,
> > > > -                                             bitDepth);
> > > > +       sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
> > >
> > > We don't necessarily need to do this in the case where sensorConfig is present
> > > right?  But I suppose it's a good way to do a validation on the values in
> > > sensorConfig, so no harm in keeping it.
> > >
> > > > +
> > > > +       /*
> > > > +        * If a sensor configuration has been requested, it should apply
> > > > +        * without modifications.
> > > > +        */
> > > > +       if (sensorConfig) {
> > > > +               BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> > > > +
> > > > +               if (bayer.bitDepth != sensorConfig.bitDepth ||
> > > > +                   sensorFormat_.size != sensorConfig.outputSize) {
> > > >
> > > > -       /* Do any platform specific fixups. */
> > > > -       status = data_->platformValidate(rawStreams, outStreams);
> > > > +                       LOG(RPI, Error) << "Invalid sensor configuration: "
> > > > +                                       << "bitDepth/size mismatch";
> > > > +                       return Invalid;
> > > > +               }
> > > > +       }
> > > > +
> > > > +       status = data_->platformValidate(this, rawStreams, outStreams);
> > >
> > > Minor nit, we lost the comment above this statement.
> > >
> > >
> > > >         if (status == Invalid)
> > > >                 return Invalid;
> > > >
> > > > @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
> > > >         std::sort(ispStreams.begin(), ispStreams.end(),
> > > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > > >
> > > > -       /* Apply the format on the sensor with any cached transform. */
> > > > +       /*
> > > > +        * Apply the format on the sensor with any cached transform.
> > > > +        *
> > > > +        * If the application has provided a sensor configuration apply it
> > > > +        * instead of just applying a format.
> > > > +        */
> > > >         const RPiCameraConfiguration *rpiConfig =
> > > >                                 static_cast<const RPiCameraConfiguration *>(config);
> > > > -       V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> > > > +       V4L2SubdeviceFormat sensorFormat;
> > > >
> > > > -       ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> > > > +       if (rpiConfig->sensorConfig) {
> > > > +               ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> > > > +                                                       rpiConfig->combinedTransform_,
> > > > +                                                       &sensorFormat);
> > > > +       } else {
> > > > +               sensorFormat = rpiConfig->sensorFormat_;
> > > > +               ret = data->sensor_->setFormat(&sensorFormat,
> > > > +                                              rpiConfig->combinedTransform_);
> > > > +       }
> > > >         if (ret)
> > > >                 return ret;
> > > >
> > > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > index a139c98a5a2b..0a795f4d2689 100644
> > > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > @@ -42,6 +42,7 @@ namespace RPi {
> > > >  /* Map of mbus codes to supported sizes reported by the sensor. */
> > > >  using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> > > >
> > > > +class RPiCameraConfiguration;
> > > >  class CameraData : public Camera::Private
> > > >  {
> > > >  public:
> > > > @@ -72,7 +73,8 @@ public:
> > > >                 V4L2VideoDevice *dev;
> > > >         };
> > > >
> > > > -       virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > > +       virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> > > > +                                                            std::vector<StreamParams> &rawStreams,
> > > >                                                              std::vector<StreamParams> &outStreams) const = 0;
> > > >         virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
> > > >                                       std::optional<BayerFormat::Packing> packing,
> > > > diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > index 018cf4881d0e..bf864d4174b2 100644
> > > > --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > @@ -65,7 +65,8 @@ public:
> > > >         {
> > > >         }
> > > >
> > > > -       CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > > +       CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > > +                                                    std::vector<StreamParams> &rawStreams,
> > > >                                                      std::vector<StreamParams> &outStreams) const override;
> > > >
> > > >         int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> > > > @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
> > > >         return 0;
> > > >  }
> > > >
> > > > -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> > > > +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > > +                                                           std::vector<StreamParams> &rawStreams,
> > > >                                                             std::vector<StreamParams> &outStreams) const
> > > >  {
> > > >         CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> > > > @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > >                 return CameraConfiguration::Status::Invalid;
> > > >         }
> > > >
> > > > -       if (!rawStreams.empty())
> > > > +       if (!rawStreams.empty()) {
> > > >                 rawStreams[0].dev = unicam_[Unicam::Image].dev();
> > > >
> > > > +               /* Adjust the RAW stream to match the requested sensor config. */
> > > > +               if (rpiConfig->sensorConfig) {
> > > > +                       StreamConfiguration *rawStream = rawStreams[0].cfg;
> > > > +                       BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > > > +
> > > > +                       /* Handle flips to make sure to match the RAW stream format. */
> > > > +                       if (flipsAlterBayerOrder_)
> > > > +                               rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > > > +                       PixelFormat rawFormat = rawBayer.toPixelFormat();
> > > > +
> > > > +                       if (rawStream->pixelFormat != rawFormat ||
> > > > +                           rawStream->size != rpiConfig->sensorConfig.outputSize) {
> > > > +                               rawStream->pixelFormat = rawFormat;
> > > > +                               rawStream->size = rpiConfig->sensorConfig.outputSize;
> > > > +
> > > > +                               status = CameraConfiguration::Adjusted;
> > > > +                       }
> > > > +               }
> > > > +       }
> > > > +
> > >
> > > I wonder if we can make some simplifications here?  The block of code that
> > > handles bayer orders and transforms already happens in
> > > RPiCameraConfiguration::validate()
> > > (done there to avoid derived pipeline handlers from repeating the same
> > > construct). Could it be simply removed from here without any loss of
> > > functionality?
> >
> > Are you sure we're actually doing the same thing ?
> >
> > As I understand it, the logic implemented here makes sure that the RAW
> > StreamConfiguration matches the requested SensorConfiguration without
> > actually testing it against the CSI-2 receiver video device.
> >
> > While in RPiCameraConfiguration::validate() we take the RAW
> > StreamConfiguration, regardless of the SensorConfig, and create a
> > V4L2DeviceFormat from it (using the media bus code from the sensor
> > format) and try to apply it to the CSI-2 receiver video device,
> > adjusting the RAW StreamConfiguration according to the result of
> > applying the format to the video device.
>
> Sorry, I should have been clearer - I was only referring to the Bayer order
> handling below, not the whole block of code.
>
> >
> > When it comes to the bayer order handling are you referring to:
> >
> >         /* Handle flips to make sure to match the RAW stream format. */
> >         if (flipsAlterBayerOrder_)
> >                 rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> >
> > If that's your actual concern ?
>
> Yes, this is the bit that I'm wondering if we can remove.
>
> >
> > If we don't do it here, do we risk to set to Adjusted a correct
> > configuration from user ?
> >
> > Let's assume your sensor is natively SRGGB10 and you have H/VFLIP
> > enabled, so that the actual output format is BGGR10.
> >
> > You receive a CameraConfiguration with a RAW stream with
> >
> >         { PixelFormat::SBGGR10, 1080p }
> >
> > And a SensorConfiguration = { 10, 1080p }
> >
> > In RPiCameraConfiguration::validate() we compute the 'best' sensor
> > format with CameraData::findBestFormat(). At this point (and here I'm
> > going from memory so please confirm) flips are not activated, so you
> > get back a SubdeviceFormat with the native SRGGB10 ordering.
>
> With you so far.
>
> >
> > Then we enter platformValidate() with
> >
> >         sensorFormat_ = { MEDIA_BUS_SRGGB10, 1080p }
> >
> > and we get to
> >         BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> >
> > let's assume we don't do the above check for if
> > (flipsAlterBayerOrder_) and we'll end up with
> >
> >         if (rawStream->pixelFormat != rawFormat)
> >
> > and then we Adjust the RAW stream to PixelFormat::SRGGB10.
> >
> > Then we're back to RPiCameraConfiguration::validate(). We enter the
> > "Further fixup on RAW streams" loop. We apply SRGGB10 to the CSI-2
> > receiver but here we notice that flipsAlterBayerOrder_. So we go and
> > adjust back the RAW stream configuration to SBGGR10 and we mark it
> > again to Adjusted, while it is what the application has actually asked
> > for!
> >
> > Is my understanding correct here ?
>
> Yes, I agree with your sequencing and final outcome here.
>
> I guess my thinking here was that in Vc4CameraData::platformValidate we would
> not ever touch cfg->pixelFormat. So no if (flipsAlterBayerOrder_) clause, and
> no if (rawStream->pixelFormat != rawFormat) test.
>
> From your example above, in "Further fixup on RAW streams" loop, rawFormat
> would be BGGR10, then after the if (data_->flipsAlterBayerOrder_), we remain at
> BGGR10 and this would not signal Adjusted.  I think that's what would happen,
> but my head hurts too much thinking about it!
>
> Maybe the simplification I'm thinking of will not work for all circumstances,
> and your change is more complete and explicit.  But I wonder then, should we
> remove the if (data_->flipsAlterBayerOrder_) test in the "Further fixup on RAW
> streams" loop instead?

With your pending patch "[PATCH] pipeline: rpi: Don't call
toV4L2DeviceFormat() from validate()" that removes

        rawFormat = PipelineHandlerBase::toV4L2DeviceFormat(raw.dev, subdevFormat, packing);

in favour of

        rawFormat.fourcc = raw.dev->toV4L2PixelFormat(cfg.pixelFormat);

we now use the pixel format for the raw stream as computed in
platformValidate(). You're right: one can presume there is no need to
re-check the Bayer ordering in validate(), as it has already been
corrected in platformValidate().

But be aware that the StreamConfiguration PixelFormat only gets
changed when there is a SensorConfig, not in all cases, so I'm afraid
we need to keep the:

        if (data->flipsAlterBayerOrder_)

test in the "Further fixup" loop in validate().

Do you agree?
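
For reference, here is a rough sketch of the shape that check takes in
the "Further fixup" loop, using the names from this thread. Treat it as
an illustration of the idea, not the actual pipeline_base.cpp code:

        /*
         * Without a SensorConfiguration the RAW StreamConfiguration is not
         * rewritten in platformValidate(), so the Bayer order derived from
         * the sensor format must still be corrected for any flips before
         * comparing it against the requested RAW pixel format.
         */
        BayerFormat rawBayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
        if (data_->flipsAlterBayerOrder_)
                rawBayer = rawBayer.transform(combinedTransform_);

        PixelFormat rawFormat = rawBayer.toPixelFormat();
        if (raw.cfg->pixelFormat != rawFormat) {
                raw.cfg->pixelFormat = rawFormat;
                status = Adjusted;
        }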

>
> Regards,
> Naush
>
> >
> > >
> > > If the answer is yes, can I suggest another change - instead of passing in
> > > *rpiConfig to this function, we can pass in sensorFormat_?  This ought to be
> > > equivalent I think? In fact, a completely unrelated change that I'm working on
> > > passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &), so this may
> > > be ideal :)
> > >
> > > Regards,
> > > Naush
> > >
> > > >         /*
> > > >          * For the two ISP outputs, one stream must be equal or smaller than the
> > > >          * other in all dimensions.
> > > > @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > >         for (unsigned int i = 0; i < outStreams.size(); i++) {
> > > >                 Size size;
> > > >
> > > > +               /* \todo Warn if upscaling: reduces image quality. */
> > > > +
> > > >                 size.width = std::min(outStreams[i].cfg->size.width,
> > > >                                       outStreams[0].cfg->size.width);
> > > >                 size.height = std::min(outStreams[i].cfg->size.height,
> > > > --
> > > > 2.40.1
> > > >
Naushir Patuck July 31, 2023, 9:42 a.m. UTC | #5
Hi Jacopo,


On Mon, 31 Jul 2023 at 09:37, Jacopo Mondi
<jacopo.mondi@ideasonboard.com> wrote:
>
> Hi Naush
>
> On Thu, Jul 27, 2023 at 03:31:20PM +0100, Naushir Patuck wrote:
> > Hi Jacopo,
> >
> > On Thu, 27 Jul 2023 at 09:13, Jacopo Mondi
> > <jacopo.mondi@ideasonboard.com> wrote:
> > >
> > > Hi Naush
> > >
> > > On Wed, Jul 26, 2023 at 10:52:59AM +0100, Naushir Patuck wrote:
> > > > Hi Jacopo,
> > > >
> > > > Thank you for your work.
> > > >
> > > > On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
> > > > <libcamera-devel@lists.libcamera.org> wrote:
> > > > >
> > > > > Handle the SensorConfiguration provided by the application in the
> > > > > pipeline validate() and configure() call chains.
> > > > >
> > > > > During validation, first make sure SensorConfiguration is valid, then
> > > > > handle it to compute the sensor format.
> > > > >
> > > > > For the VC4 platform where the RAW stream follows the sensor's
> > > > > configuration adjust the RAW stream configuration to match the sensor
> > > > > configuration.
> > > > >
> > > > > Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
> > > > > ---
> > > > >  .../pipeline/rpi/common/pipeline_base.cpp     | 64 +++++++++++++++----
> > > > >  .../pipeline/rpi/common/pipeline_base.h       |  4 +-
> > > > >  src/libcamera/pipeline/rpi/vc4/vc4.cpp        | 30 ++++++++-
> > > > >  3 files changed, 83 insertions(+), 15 deletions(-)
> > > > >
> > > > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > > index e0fbeec37fe9..574e003402df 100644
> > > > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > > > @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > > >         if (config_.empty())
> > > > >                 return Invalid;
> > > > >
> > > > > +       if (!sensorConfig.valid()) {
> > > > > +               LOG(RPI, Error) << "Invalid sensor configuration request";
> > > > > +               return Invalid;
> > > > > +       }
> > > > > +
> > > > >         status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
> > > > >
> > > > >         /*
> > > > > @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > > >         std::sort(outStreams.begin(), outStreams.end(),
> > > > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > > > >
> > > > > -       /* Compute the sensor configuration. */
> > > > > -       unsigned int bitDepth = defaultRawBitDepth;
> > > > > -       if (!rawStreams.empty()) {
> > > > > +       /* Compute the sensor's format then do any platform specific fixups. */
> > > > > +       unsigned int bitDepth;
> > > > > +       Size sensorSize;
> > > > > +
> > > > > +       if (sensorConfig) {
> > > > > +               /* Use the application provided sensor configuration. */
> > > > > +               bitDepth = sensorConfig.bitDepth;
> > > > > +               sensorSize = sensorConfig.outputSize;
> > > > > +       } else if (!rawStreams.empty()) {
> > > > > +               /* Use the RAW stream format and size. */
> > > > >                 BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
> > > > >                 bitDepth = bayerFormat.bitDepth;
> > > > > +               sensorSize = rawStreams[0].cfg->size;
> > > > > +       } else {
> > > > > +               bitDepth = defaultRawBitDepth;
> > > > > +               sensorSize = outStreams[0].cfg->size;
> > > > >         }
> > > > >
> > > > > -       sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> > > > > -                                                                : rawStreams[0].cfg->size,
> > > > > -                                             bitDepth);
> > > > > +       sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
> > > >
> > > > We don't necessarily need to do this in the case where sensorConfig is present
> > > > right?  But I suppose it's a good way to do a validation on the values in
> > > > sensorConfig, so no harm in keeping it.
> > > >
> > > > > +
> > > > > +       /*
> > > > > +        * If a sensor configuration has been requested, it should apply
> > > > > +        * without modifications.
> > > > > +        */
> > > > > +       if (sensorConfig) {
> > > > > +               BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> > > > > +
> > > > > +               if (bayer.bitDepth != sensorConfig.bitDepth ||
> > > > > +                   sensorFormat_.size != sensorConfig.outputSize) {
> > > > >
> > > > > -       /* Do any platform specific fixups. */
> > > > > -       status = data_->platformValidate(rawStreams, outStreams);
> > > > > +                       LOG(RPI, Error) << "Invalid sensor configuration: "
> > > > > +                                       << "bitDepth/size mismatch";
> > > > > +                       return Invalid;
> > > > > +               }
> > > > > +       }
> > > > > +
> > > > > +       status = data_->platformValidate(this, rawStreams, outStreams);
> > > >
> > > > Minor nit, we lost the comment above this statement.
> > > >
> > > >
> > > > >         if (status == Invalid)
> > > > >                 return Invalid;
> > > > >
> > > > > @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
> > > > >         std::sort(ispStreams.begin(), ispStreams.end(),
> > > > >                   [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > > > >
> > > > > -       /* Apply the format on the sensor with any cached transform. */
> > > > > +       /*
> > > > > +        * Apply the format on the sensor with any cached transform.
> > > > > +        *
> > > > > +        * If the application has provided a sensor configuration apply it
> > > > > +        * instead of just applying a format.
> > > > > +        */
> > > > >         const RPiCameraConfiguration *rpiConfig =
> > > > >                                 static_cast<const RPiCameraConfiguration *>(config);
> > > > > -       V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> > > > > +       V4L2SubdeviceFormat sensorFormat;
> > > > >
> > > > > -       ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> > > > > +       if (rpiConfig->sensorConfig) {
> > > > > +               ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> > > > > +                                                       rpiConfig->combinedTransform_,
> > > > > +                                                       &sensorFormat);
> > > > > +       } else {
> > > > > +               sensorFormat = rpiConfig->sensorFormat_;
> > > > > +               ret = data->sensor_->setFormat(&sensorFormat,
> > > > > +                                              rpiConfig->combinedTransform_);
> > > > > +       }
> > > > >         if (ret)
> > > > >                 return ret;
> > > > >
> > > > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > > index a139c98a5a2b..0a795f4d2689 100644
> > > > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > > > @@ -42,6 +42,7 @@ namespace RPi {
> > > > >  /* Map of mbus codes to supported sizes reported by the sensor. */
> > > > >  using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> > > > >
> > > > > +class RPiCameraConfiguration;
> > > > >  class CameraData : public Camera::Private
> > > > >  {
> > > > >  public:
> > > > > @@ -72,7 +73,8 @@ public:
> > > > >                 V4L2VideoDevice *dev;
> > > > >         };
> > > > >
> > > > > -       virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > > > +       virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> > > > > +                                                            std::vector<StreamParams> &rawStreams,
> > > > >                                                              std::vector<StreamParams> &outStreams) const = 0;
> > > > >         virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
> > > > >                                       std::optional<BayerFormat::Packing> packing,
> > > > > diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > > index 018cf4881d0e..bf864d4174b2 100644
> > > > > --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > > +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > > > @@ -65,7 +65,8 @@ public:
> > > > >         {
> > > > >         }
> > > > >
> > > > > -       CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > > > +       CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > > > +                                                    std::vector<StreamParams> &rawStreams,
> > > > >                                                      std::vector<StreamParams> &outStreams) const override;
> > > > >
> > > > >         int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> > > > > @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
> > > > >         return 0;
> > > > >  }
> > > > >
> > > > > -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> > > > > +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > > > +                                                           std::vector<StreamParams> &rawStreams,
> > > > >                                                             std::vector<StreamParams> &outStreams) const
> > > > >  {
> > > > >         CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> > > > > @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > > >                 return CameraConfiguration::Status::Invalid;
> > > > >         }
> > > > >
> > > > > -       if (!rawStreams.empty())
> > > > > +       if (!rawStreams.empty()) {
> > > > >                 rawStreams[0].dev = unicam_[Unicam::Image].dev();
> > > > >
> > > > > +               /* Adjust the RAW stream to match the requested sensor config. */
> > > > > +               if (rpiConfig->sensorConfig) {
> > > > > +                       StreamConfiguration *rawStream = rawStreams[0].cfg;
> > > > > +                       BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > > > > +
> > > > > +                       /* Handle flips to make sure to match the RAW stream format. */
> > > > > +                       if (flipsAlterBayerOrder_)
> > > > > +                               rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > > > > +                       PixelFormat rawFormat = rawBayer.toPixelFormat();
> > > > > +
> > > > > +                       if (rawStream->pixelFormat != rawFormat ||
> > > > > +                           rawStream->size != rpiConfig->sensorConfig.outputSize) {
> > > > > +                               rawStream->pixelFormat = rawFormat;
> > > > > +                               rawStream->size = rpiConfig->sensorConfig.outputSize;
> > > > > +
> > > > > +                               status = CameraConfiguration::Adjusted;
> > > > > +                       }
> > > > > +               }
> > > > > +       }
> > > > > +
> > > >
> > > > I wonder if we can make some simplifications here?  The block of code that
> > > > handles bayer orders and transforms already happens in
> > > > RPiCameraConfiguration::validate()
> > > > (done there to avoid derived pipeline handlers from repeating the same
> > > > construct). Could it be simply removed from here without any loss of
> > > > functionality?
> > >
> > > Are you sure we're actually doing the same thing ?
> > >
> > > As I understand it, the logic implemented here makes sure that the RAW
> > > StreamConfiguration matches the requested SensorConfiguration without
> > > actually testing it against the CSI-2 receiver video device.
> > >
> > > While in RPiCameraConfiguration::validate() we take the RAW
> > > StreamConfiguration, regardless of the SensorConfig, and create a
> > > V4L2DeviceFormat from it (using the media bus code from the sensor
> > > format) and try to apply it to the CSI-2 receiver video device,
> > > adjusting the RAW StreamConfiguration according to the result of
> > > applying the format to the video device.
> >
> > Sorry, I should have been clearer - I was only referring to the Bayer order
> > handling below, not the whole block of code.
> >
> > >
> > > When it comes to the Bayer order handling, are you referring to:
> > >
> > >         /* Handle flips to make sure to match the RAW stream format. */
> > >         if (flipsAlterBayerOrder_)
> > >                 rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > >
> > > Is that your actual concern?
> >
> > Yes, this is the bit that I'm wondering if we can remove.
> >
> > >
> > > If we don't do it here, do we risk marking a correct user configuration
> > > as Adjusted?
> > >
> > > Let's assume your sensor is natively SRGGB10 and you have H/VFLIP
> > > enabled, so that the actual output format is BGGR10.
> > >
> > > You receive a CameraConfiguration with a RAW stream with
> > >
> > >         { PixelFormat::SBGGR10, 1080p }
> > >
> > > And a SensorConfiguration = { 10, 1080p }
> > >
> > > In RPiCameraConfiguration::validate() we compute the 'best' sensor
> > > format with CameraData::findBestFormat(). At this point (and here I'm
> > > going from memory so please confirm) flips are not activated, so you
> > > get back a SubdeviceFormat with the native SRGGB10 ordering.
> >
> > With you so far.
> >
> > >
> > > Then we enter platformValidate() with
> > >
> > >         sensorFormat_ = { MEDIA_BUS_SRGGB10, 1080p }
> > >
> > > and we get to
> > >         BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > >
> > > let's assume we don't do the if (flipsAlterBayerOrder_) check above;
> > > we'll then end up with
> > >
> > >         if (rawStream->pixelFormat != rawFormat)
> > >
> > > and then we Adjust the RAW stream to PixelFormat::SRGGB10.
> > >
> > > Then we're back in RPiCameraConfiguration::validate(). We enter the
> > > "Further fixup on RAW streams" loop. We apply SRGGB10 to the CSI-2
> > > receiver, but here we notice that flipsAlterBayerOrder_ is set, so we
> > > adjust the RAW stream configuration back to SBGGR10 and mark it as
> > > Adjusted again, even though it is exactly what the application asked
> > > for!
> > >
> > > Is my understanding correct here ?
> >
> > Yes, I agree with your sequencing and final outcome here.
> >
> > I guess my thinking here was that in Vc4CameraData::platformValidate we would
> > not ever touch cfg->pixelFormat. So no if (flipsAlterBayerOrder_) clause, and
> > no if (rawStream->pixelFormat != rawFormat) test.
> >
> > From your example above, in the "Further fixup on RAW streams" loop,
> > rawFormat would be BGGR10; after the if (data_->flipsAlterBayerOrder_) it
> > remains BGGR10, and this would not signal Adjusted. I think that's what
> > would happen, but my head hurts too much thinking about it!
> >
> > Maybe the simplification I'm thinking of will not work for all circumstances,
> > and your change is more complete and explicit.  But I wonder then, should we
> > remove the if (data_->flipsAlterBayerOrder_) test in the "Further fixup on RAW
> > streams" loop instead?
>
> With your pending patch "[PATCH] pipeline: rpi: Don't call
> toV4L2DeviceFormat() from validate()" that removes
>
>         rawFormat = PipelineHandlerBase::toV4L2DeviceFormat(raw.dev, subdevFormat, packing);
>
> in favour of
>
>         rawFormat.fourcc = raw.dev->toV4L2PixelFormat(cfg.pixelFormat);
>
> we now use the pixel format for the raw stream as computed in
> platformValidate(). You're right: one can presume there is no need to
> re-check the Bayer ordering in validate(), as it has already been
> corrected in platformValidate().
>
> But be aware that the StreamConfiguration PixelFormat only gets
> changed when there is a SensorConfig, not in all cases, so I'm afraid
> we need to keep the:
>
>         if (data->flipsAlterBayerOrder_)
>
> test in the "Further fixup" loop in validate().
>
> Do you agree?

Yes, I think I agree with this!
There are too many changes on the go touching the exact same lines of code :-)
We may have to wait until they are all merged before noticing any
subtle breakages.

Regards,
Naush


>
> >
> > Regards,
> > Naush
> >
> > >
> > > >
> > > > If the answer is yes, can I suggest another change - instead of passing in
> > > > *rpiConfig to this function, we can pass in sensorFormat_?  This ought to be
> > > > equivalent I think? In fact, a completely unrelated change that I'm working on
> > > > passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &), so this may
> > > > be ideal :)
> > > >
> > > > Regards,
> > > > Naush
> > > >
> > > > >         /*
> > > > >          * For the two ISP outputs, one stream must be equal or smaller than the
> > > > >          * other in all dimensions.
> > > > > @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > > >         for (unsigned int i = 0; i < outStreams.size(); i++) {
> > > > >                 Size size;
> > > > >
> > > > > +               /* \todo Warn if upscaling: reduces image quality. */
> > > > > +
> > > > >                 size.width = std::min(outStreams[i].cfg->size.width,
> > > > >                                       outStreams[0].cfg->size.width);
> > > > >                 size.height = std::min(outStreams[i].cfg->size.height,
> > > > > --
> > > > > 2.40.1
> > > > >
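
For context, a hypothetical application-side sketch of the path this
patch handles. The sensorConfig member and the SensorConfiguration
fields come from earlier patches in this series (not shown on this
page), so treat the exact names and types as assumptions:

        /* Request a RAW-capable configuration from the camera. */
        std::unique_ptr<CameraConfiguration> config =
                camera->generateConfiguration({ StreamRole::Raw });

        /* Ask for a 10-bit, 1920x1080 sensor output. */
        SensorConfiguration sensorConfig;
        sensorConfig.bitDepth = 10;
        sensorConfig.outputSize = { 1920, 1080 };
        config->sensorConfig = sensorConfig;

        /* validate() returns Invalid if the sensor cannot satisfy it. */
        if (config->validate() == CameraConfiguration::Invalid)
                return -EINVAL;

        camera->configure(config.get());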

Patch
diff mbox series

diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
index e0fbeec37fe9..574e003402df 100644
--- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
+++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
@@ -180,6 +180,11 @@  CameraConfiguration::Status RPiCameraConfiguration::validate()
 	if (config_.empty())
 		return Invalid;
 
+	if (!sensorConfig.valid()) {
+		LOG(RPI, Error) << "Invalid sensor configuration request";
+		return Invalid;
+	}
+
 	status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
 
 	/*
@@ -207,19 +212,43 @@  CameraConfiguration::Status RPiCameraConfiguration::validate()
 	std::sort(outStreams.begin(), outStreams.end(),
 		  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
 
-	/* Compute the sensor configuration. */
-	unsigned int bitDepth = defaultRawBitDepth;
-	if (!rawStreams.empty()) {
+	/* Compute the sensor's format then do any platform specific fixups. */
+	unsigned int bitDepth;
+	Size sensorSize;
+
+	if (sensorConfig) {
+		/* Use the application provided sensor configuration. */
+		bitDepth = sensorConfig.bitDepth;
+		sensorSize = sensorConfig.outputSize;
+	} else if (!rawStreams.empty()) {
+		/* Use the RAW stream format and size. */
 		BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
 		bitDepth = bayerFormat.bitDepth;
+		sensorSize = rawStreams[0].cfg->size;
+	} else {
+		bitDepth = defaultRawBitDepth;
+		sensorSize = outStreams[0].cfg->size;
 	}
 
-	sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
-								 : rawStreams[0].cfg->size,
-					      bitDepth);
+	sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
+
+	/*
+	 * If a sensor configuration has been requested, it should apply
+	 * without modifications.
+	 */
+	if (sensorConfig) {
+		BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
+
+		if (bayer.bitDepth != sensorConfig.bitDepth ||
+		    sensorFormat_.size != sensorConfig.outputSize) {
 
-	/* Do any platform specific fixups. */
-	status = data_->platformValidate(rawStreams, outStreams);
+			LOG(RPI, Error) << "Invalid sensor configuration: "
+					<< "bitDepth/size mismatch";
+			return Invalid;
+		}
+	}
+
+	status = data_->platformValidate(this, rawStreams, outStreams);
 	if (status == Invalid)
 		return Invalid;
 
@@ -466,12 +495,25 @@  int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
 	std::sort(ispStreams.begin(), ispStreams.end(),
 		  [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
 
-	/* Apply the format on the sensor with any cached transform. */
+	/*
+	 * Apply the format on the sensor with any cached transform.
+	 *
+	 * If the application has provided a sensor configuration apply it
+	 * instead of just applying a format.
+	 */
 	const RPiCameraConfiguration *rpiConfig =
 				static_cast<const RPiCameraConfiguration *>(config);
-	V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
+	V4L2SubdeviceFormat sensorFormat;
 
-	ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
+	if (rpiConfig->sensorConfig) {
+		ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
+							rpiConfig->combinedTransform_,
+							&sensorFormat);
+	} else {
+		sensorFormat = rpiConfig->sensorFormat_;
+		ret = data->sensor_->setFormat(&sensorFormat,
+					       rpiConfig->combinedTransform_);
+	}
 	if (ret)
 		return ret;
 
diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
index a139c98a5a2b..0a795f4d2689 100644
--- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
+++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
@@ -42,6 +42,7 @@  namespace RPi {
 /* Map of mbus codes to supported sizes reported by the sensor. */
 using SensorFormats = std::map<unsigned int, std::vector<Size>>;
 
+class RPiCameraConfiguration;
 class CameraData : public Camera::Private
 {
 public:
@@ -72,7 +73,8 @@  public:
 		V4L2VideoDevice *dev;
 	};
 
-	virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
+	virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
+							     std::vector<StreamParams> &rawStreams,
 							     std::vector<StreamParams> &outStreams) const = 0;
 	virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
 				      std::optional<BayerFormat::Packing> packing,
diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
index 018cf4881d0e..bf864d4174b2 100644
--- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
+++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
@@ -65,7 +65,8 @@  public:
 	{
 	}
 
-	CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
+	CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
+						     std::vector<StreamParams> &rawStreams,
 						     std::vector<StreamParams> &outStreams) const override;
 
 	int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
@@ -394,7 +395,8 @@  int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
 	return 0;
 }
 
-CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
+CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
+							    std::vector<StreamParams> &rawStreams,
 							    std::vector<StreamParams> &outStreams) const
 {
 	CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
@@ -405,9 +407,29 @@  CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
 		return CameraConfiguration::Status::Invalid;
 	}
 
-	if (!rawStreams.empty())
+	if (!rawStreams.empty()) {
 		rawStreams[0].dev = unicam_[Unicam::Image].dev();
 
+		/* Adjust the RAW stream to match the requested sensor config. */
+		if (rpiConfig->sensorConfig) {
+			StreamConfiguration *rawStream = rawStreams[0].cfg;
+			BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
+
+			/* Handle flips to make sure to match the RAW stream format. */
+			if (flipsAlterBayerOrder_)
+				rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
+			PixelFormat rawFormat = rawBayer.toPixelFormat();
+
+			if (rawStream->pixelFormat != rawFormat ||
+			    rawStream->size != rpiConfig->sensorConfig.outputSize) {
+				rawStream->pixelFormat = rawFormat;
+				rawStream->size = rpiConfig->sensorConfig.outputSize;
+
+				status = CameraConfiguration::Adjusted;
+			}
+		}
+	}
+
 	/*
 	 * For the two ISP outputs, one stream must be equal or smaller than the
 	 * other in all dimensions.
@@ -417,6 +439,8 @@  CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
 	for (unsigned int i = 0; i < outStreams.size(); i++) {
 		Size size;
 
+		/* \todo Warn if upscaling: reduces image quality. */
+
 		size.width = std::min(outStreams[i].cfg->size.width,
 				      outStreams[0].cfg->size.width);
 		size.height = std::min(outStreams[i].cfg->size.height,