[libcamera-devel,v2,05/12] android: camera_device: Generate RAW resolutions

Message ID 20200902152236.69770-6-jacopo@jmondi.org
State Accepted
Series
  • android: camera_device: Fix JPEG/RAW sizes

Commit Message

Jacopo Mondi Sept. 2, 2020, 3:22 p.m. UTC
The resolutions supported for the RAW formats cannot be tested against
a list of known sizes like the processed ones. This is mainly due to the
fact that RAW streams are produced by capturing frames at the CSI-2
receiver output, so their sizes correspond to the sensor's native sizes.

In order to obtain the RAW frame sizes, generate a temporary
CameraConfiguration for the StillCaptureRaw role and inspect the
map of StreamFormats returned by the pipeline handler.

Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
---
 src/android/camera_device.cpp | 21 +++++++++++++++++----
 src/android/camera_device.h   |  2 ++
 2 files changed, 19 insertions(+), 4 deletions(-)

Comments

Laurent Pinchart Sept. 5, 2020, 6:07 p.m. UTC | #1
Hi Jacopo,

Thank you for the patch.

On Wed, Sep 02, 2020 at 05:22:29PM +0200, Jacopo Mondi wrote:
> The resolutions supported for the RAW formats cannot be tested against
> a list of known sizes like the processed ones. This is mainly due to the
> fact that RAW streams are produced by capturing frames at the CSI-2
> receiver output, so their sizes correspond to the sensor's native sizes.

This isn't always the case though. Most sensors can bin and skip, so
half of the native size is usually a valid resolution. There's also a
dependency between raw and processed sizes. If a binned/skipped raw size
is desired, the available processed sizes will be reduced. And the other
way around, when capturing a lower resolution processed frame, the
pipeline handler may decide to bin/skip on the sensor to reduce the
bandwidth and increase the possible frame rates.

This doesn't need to be taken into account in this patch, but I'd like
to make sure you've considered the issue in the design, to avoid
cornering ourselves in a dead end.

> In order to obtain the RAW frame sizes, generate a temporary
> CameraConfiguration for the StillCaptureRaw role and inspect the
> map of StreamFormats returned by the pipeline handler.
> 
> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> ---
>  src/android/camera_device.cpp | 21 +++++++++++++++++----
>  src/android/camera_device.h   |  2 ++
>  2 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
> index 58b2ad27c5e2..b110bfe43ece 100644
> --- a/src/android/camera_device.cpp
> +++ b/src/android/camera_device.cpp
> @@ -314,6 +314,17 @@ std::vector<Size> CameraDevice::filterYUVResolutions(CameraConfiguration *camera
>  	return supportedResolutions;
>  }
>  
> +std::vector<Size> CameraDevice::filterRawResolutions(const libcamera::PixelFormat &pixelFormat)
> +{
> +	std::unique_ptr<CameraConfiguration> cameraConfig =
> +		camera_->generateConfiguration({ StillCaptureRaw });
> +	StreamConfiguration &cfg = cameraConfig->at(0);
> +	const StreamFormats &formats = cfg.formats();
> +	std::vector<Size> supportedResolutions = formats.sizes(pixelFormat);
> +
> +	return supportedResolutions;
> +}
> +
>  /*
>   * Initialize the format conversion map to translate from Android format
>   * identifier to libcamera pixel formats and fill in the list of supported
> @@ -466,13 +477,15 @@ int CameraDevice::initializeStreamConfigurations()
>  				<< camera3Format.name << " to: "
>  				<< mappedFormat.toString();
>  
> +		std::vector<Size> resolutions;
>  		const PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);
>  		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW)
> -			continue;
> +			resolutions = filterRawResolutions(mappedFormat);
> +		else
> +			resolutions = filterYUVResolutions(cameraConfig.get(),
> +							   mappedFormat,
> +							   cameraResolutions);

Usage of cameraConfig here isn't very nice, but that's something to be
addressed later.

>  
> -		std::vector<Size> resolutions = filterYUVResolutions(cameraConfig.get(),
> -								     mappedFormat,
> -								     cameraResolutions);
>  		for (const Size &res : resolutions) {
>  			streamConfigurations_.push_back({ res, androidFormat });
>  
> diff --git a/src/android/camera_device.h b/src/android/camera_device.h
> index 359a163ebab9..dc0ee664d443 100644
> --- a/src/android/camera_device.h
> +++ b/src/android/camera_device.h
> @@ -97,6 +97,8 @@ private:
>  	filterYUVResolutions(libcamera::CameraConfiguration *cameraConfig,
>  			     const libcamera::PixelFormat &pixelFormat,
>  			     const std::vector<libcamera::Size> &resolutions);
> +	std::vector<libcamera::Size>
> +	filterRawResolutions(const libcamera::PixelFormat &pixelFormat);
>  
>  	std::tuple<uint32_t, uint32_t> calculateStaticMetadataSize();
>  	libcamera::FrameBuffer *createFrameBuffer(const buffer_handle_t camera3buffer);
Jacopo Mondi Sept. 7, 2020, 8:41 a.m. UTC | #2
Hi Laurent,

On Sat, Sep 05, 2020 at 09:07:49PM +0300, Laurent Pinchart wrote:
> Hi Jacopo,
>
> Thank you for the patch.
>
> On Wed, Sep 02, 2020 at 05:22:29PM +0200, Jacopo Mondi wrote:
> > The resolutions supported for the RAW formats cannot be tested against
> > a list of known sizes like the processed ones. This is mainly due to the
> > fact that RAW streams are produced by capturing frames at the CSI-2
> > receiver output, so their sizes correspond to the sensor's native sizes.
>
> This isn't always the case though. Most sensors can bin and skip, so
> half of the native size is usually a valid resolution. There's also a

It's "native sizes", not "native size" :)
I meant all the sizes produced natively by the sensor.

Then indeed we can have CSI-2 receivers capable of cropping or even
scaling the sensor-produced frames, but it's up to the pipeline
handler to report this.

> dependency between raw and processed sizes. If a binned/skipped raw size
> is desired, the available processed sizes will be reduced. And the other
> way around, when capturing a lower resolution processed frame, the
> pipeline handler may decide to bin/skip on the sensor to reduce the
> bandwidth and increase the possible frame rates.
>
> This doesn't need to be taken into account in this patch, but I'd like
> to make sure you've considered the issue in the design, to avoid
> cornering ourselves in a dead end.

I don't see many ways around this other than reporting all the available
per-stream sizes in isolation. We cannot produce lists of all valid
stream combinations, because Android's stream configuration map
reports sizes per format, not valid combinations of configurations.

For example:
StreamConfiguration
android.scaler.availableStreamConfigurations = {
        { raw, 2092, 1944, 0},
        { raw, 1988, 1112, 0},
        { raw, 724, 512, 0},
        { YUV, 2056, 1920, 0},
        { YUV, 1920, 1080, 0},
        { YUV, 720, 480, 0},
};

How would you express here "if you ask for a 724x512 frame, you can't
have 2056x1920 YUV" if not by refusing that combination at stream
configuration time? And if the pipeline handler is capable of doing
so, it could even be capable of scaling down the 2092x1944 raw frame
to 724x512 after it has been used to produce the 2056x1920 YUV frame.

I guess there's no easy way out other than failing at configuration time.

>
> > In order to obtain the RAW frame sizes, generate a temporary
> > CameraConfiguration for the StillCaptureRaw role and inspect the
> > map of StreamFormats returned by the pipeline handler.
> >
> > Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > ---
> >  src/android/camera_device.cpp | 21 +++++++++++++++++----
> >  src/android/camera_device.h   |  2 ++
> >  2 files changed, 19 insertions(+), 4 deletions(-)
> >
> > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
> > index 58b2ad27c5e2..b110bfe43ece 100644
> > --- a/src/android/camera_device.cpp
> > +++ b/src/android/camera_device.cpp
> > @@ -314,6 +314,17 @@ std::vector<Size> CameraDevice::filterYUVResolutions(CameraConfiguration *camera
> >  	return supportedResolutions;
> >  }
> >
> > +std::vector<Size> CameraDevice::filterRawResolutions(const libcamera::PixelFormat &pixelFormat)
> > +{
> > +	std::unique_ptr<CameraConfiguration> cameraConfig =
> > +		camera_->generateConfiguration({ StillCaptureRaw });
> > +	StreamConfiguration &cfg = cameraConfig->at(0);
> > +	const StreamFormats &formats = cfg.formats();
> > +	std::vector<Size> supportedResolutions = formats.sizes(pixelFormat);
> > +
> > +	return supportedResolutions;
> > +}
> > +
> >  /*
> >   * Initialize the format conversion map to translate from Android format
> >   * identifier to libcamera pixel formats and fill in the list of supported
> > @@ -466,13 +477,15 @@ int CameraDevice::initializeStreamConfigurations()
> >  				<< camera3Format.name << " to: "
> >  				<< mappedFormat.toString();
> >
> > +		std::vector<Size> resolutions;
> >  		const PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);
> >  		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW)
> > -			continue;
> > +			resolutions = filterRawResolutions(mappedFormat);
> > +		else
> > +			resolutions = filterYUVResolutions(cameraConfig.get(),
> > +							   mappedFormat,
> > +							   cameraResolutions);
>
> Usage of cameraConfig here isn't very nice, but that's something to be
> addressed later.
>

I don't particularly like it either. It bends the ownership rules a bit
and feels a bit like a hack. Not a big deal though, is it?

> >
> > -		std::vector<Size> resolutions = filterYUVResolutions(cameraConfig.get(),
> > -								     mappedFormat,
> > -								     cameraResolutions);
> >  		for (const Size &res : resolutions) {
> >  			streamConfigurations_.push_back({ res, androidFormat });
> >
> > diff --git a/src/android/camera_device.h b/src/android/camera_device.h
> > index 359a163ebab9..dc0ee664d443 100644
> > --- a/src/android/camera_device.h
> > +++ b/src/android/camera_device.h
> > @@ -97,6 +97,8 @@ private:
> >  	filterYUVResolutions(libcamera::CameraConfiguration *cameraConfig,
> >  			     const libcamera::PixelFormat &pixelFormat,
> >  			     const std::vector<libcamera::Size> &resolutions);
> > +	std::vector<libcamera::Size>
> > +	filterRawResolutions(const libcamera::PixelFormat &pixelFormat);
> >
> >  	std::tuple<uint32_t, uint32_t> calculateStaticMetadataSize();
> >  	libcamera::FrameBuffer *createFrameBuffer(const buffer_handle_t camera3buffer);
>
> --
> Regards,
>
> Laurent Pinchart
Laurent Pinchart Sept. 7, 2020, 1:21 p.m. UTC | #3
Hi Jacopo,

On Mon, Sep 07, 2020 at 10:41:41AM +0200, Jacopo Mondi wrote:
> On Sat, Sep 05, 2020 at 09:07:49PM +0300, Laurent Pinchart wrote:
> > On Wed, Sep 02, 2020 at 05:22:29PM +0200, Jacopo Mondi wrote:
> > > The resolutions supported for the RAW formats cannot be tested against
> > > a list of known sizes like the processed ones. This is mainly due to the
> > > fact that RAW streams are produced by capturing frames at the CSI-2
> > > receiver output, so their sizes correspond to the sensor's native sizes.
> >
> > This isn't always the case though. Most sensors can bin and skip, so
> > half of the native size is usually a valid resolution. There's also a
> 
> It's "native sizes", not "native size" :)
> I meant all the sizes produced natively by the sensor.
> 
> Then indeed we can have CSI-2 receivers capable of cropping or even
> scaling the sensor-produced frames, but it's up to the pipeline
> handler to report this.
> 
> > dependency between raw and processed sizes. If a binned/skipped raw size
> > is desired, the available processed sizes will be reduced. And the other
> > way around, when capturing a lower resolution processed frame, the
> > pipeline handler may decide to bin/skip on the sensor to reduce the
> > bandwidth and increase the possible frame rates.
> >
> > This doesn't need to be taken into account in this patch, but I'd like
> > to make sure you've considered the issue in the design, to avoid
> > cornering ourselves in a dead end.
> 
> I don't see many ways around this other than reporting all the available
> per-stream sizes in isolation. We cannot produce lists of all valid
> stream combinations, because Android's stream configuration map
> reports sizes per format, not valid combinations of configurations.
> 
> For example:
> StreamConfiguration
> android.scaler.availableStreamConfigurations = {
>         { raw, 2092, 1944, 0},
>         { raw, 1988, 1112, 0},
>         { raw, 724, 512, 0},
>         { YUV, 2056, 1920, 0},
>         { YUV, 1920, 1080, 0},
>         { YUV, 720, 480, 0},
> };
> 
> How would you express here "if you ask for a 724x512 frame, you can't
> have 2056x1920 YUV" if not by refusing that combination at stream
> configuration time? And if the pipeline handler is capable of doing
> so, it could even be capable of scaling down the 2092x1944 raw frame
> to 724x512 after it has been used to produce the 2056x1920 YUV frame.

No idea yet, but Android requires us to report a static list of
configurations, and I don't think we're allowed to fail if any
combination of raw and yuv from the static list is requested. We'll have
to figure out a good way to ensure that. Ideas are welcome.

> I guess there's no easy way out other than failing at configuration time.
> 
> > > In order to obtain the RAW frame size generate a temporary
> > > CameraConfiguration for the Role::StillCaptureRAW role and inspect the
> > > map of StreamFormats returned by the pipeline handler.
> > >
> > > Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
> > > Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
> > > ---
> > >  src/android/camera_device.cpp | 21 +++++++++++++++++----
> > >  src/android/camera_device.h   |  2 ++
> > >  2 files changed, 19 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
> > > index 58b2ad27c5e2..b110bfe43ece 100644
> > > --- a/src/android/camera_device.cpp
> > > +++ b/src/android/camera_device.cpp
> > > @@ -314,6 +314,17 @@ std::vector<Size> CameraDevice::filterYUVResolutions(CameraConfiguration *camera
> > >  	return supportedResolutions;
> > >  }
> > >
> > > +std::vector<Size> CameraDevice::filterRawResolutions(const libcamera::PixelFormat &pixelFormat)
> > > +{
> > > +	std::unique_ptr<CameraConfiguration> cameraConfig =
> > > +		camera_->generateConfiguration({ StillCaptureRaw });
> > > +	StreamConfiguration &cfg = cameraConfig->at(0);
> > > +	const StreamFormats &formats = cfg.formats();
> > > +	std::vector<Size> supportedResolutions = formats.sizes(pixelFormat);
> > > +
> > > +	return supportedResolutions;
> > > +}
> > > +
> > >  /*
> > >   * Initialize the format conversion map to translate from Android format
> > >   * identifier to libcamera pixel formats and fill in the list of supported
> > > @@ -466,13 +477,15 @@ int CameraDevice::initializeStreamConfigurations()
> > >  				<< camera3Format.name << " to: "
> > >  				<< mappedFormat.toString();
> > >
> > > +		std::vector<Size> resolutions;
> > >  		const PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);
> > >  		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW)
> > > -			continue;
> > > +			resolutions = filterRawResolutions(mappedFormat);
> > > +		else
> > > +			resolutions = filterYUVResolutions(cameraConfig.get(),
> > > +							   mappedFormat,
> > > +							   cameraResolutions);
> >
> > Usage of cameraConfig here isn't very nice, but that's something to be
> > addressed later.
> 
> I don't particularly like it either. It bends the ownership rules a bit
> and feels a bit like a hack. Not a big deal though, is it?
> 
> > >
> > > -		std::vector<Size> resolutions = filterYUVResolutions(cameraConfig.get(),
> > > -								     mappedFormat,
> > > -								     cameraResolutions);
> > >  		for (const Size &res : resolutions) {
> > >  			streamConfigurations_.push_back({ res, androidFormat });
> > >
> > > diff --git a/src/android/camera_device.h b/src/android/camera_device.h
> > > index 359a163ebab9..dc0ee664d443 100644
> > > --- a/src/android/camera_device.h
> > > +++ b/src/android/camera_device.h
> > > @@ -97,6 +97,8 @@ private:
> > >  	filterYUVResolutions(libcamera::CameraConfiguration *cameraConfig,
> > >  			     const libcamera::PixelFormat &pixelFormat,
> > >  			     const std::vector<libcamera::Size> &resolutions);
> > > +	std::vector<libcamera::Size>
> > > +	filterRawResolutions(const libcamera::PixelFormat &pixelFormat);
> > >
> > >  	std::tuple<uint32_t, uint32_t> calculateStaticMetadataSize();
> > >  	libcamera::FrameBuffer *createFrameBuffer(const buffer_handle_t camera3buffer);

Patch

diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
index 58b2ad27c5e2..b110bfe43ece 100644
--- a/src/android/camera_device.cpp
+++ b/src/android/camera_device.cpp
@@ -314,6 +314,17 @@  std::vector<Size> CameraDevice::filterYUVResolutions(CameraConfiguration *camera
 	return supportedResolutions;
 }
 
+std::vector<Size> CameraDevice::filterRawResolutions(const libcamera::PixelFormat &pixelFormat)
+{
+	std::unique_ptr<CameraConfiguration> cameraConfig =
+		camera_->generateConfiguration({ StillCaptureRaw });
+	StreamConfiguration &cfg = cameraConfig->at(0);
+	const StreamFormats &formats = cfg.formats();
+	std::vector<Size> supportedResolutions = formats.sizes(pixelFormat);
+
+	return supportedResolutions;
+}
+
 /*
  * Initialize the format conversion map to translate from Android format
  * identifier to libcamera pixel formats and fill in the list of supported
@@ -466,13 +477,15 @@  int CameraDevice::initializeStreamConfigurations()
 				<< camera3Format.name << " to: "
 				<< mappedFormat.toString();
 
+		std::vector<Size> resolutions;
 		const PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);
 		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW)
-			continue;
+			resolutions = filterRawResolutions(mappedFormat);
+		else
+			resolutions = filterYUVResolutions(cameraConfig.get(),
+							   mappedFormat,
+							   cameraResolutions);
 
-		std::vector<Size> resolutions = filterYUVResolutions(cameraConfig.get(),
-								     mappedFormat,
-								     cameraResolutions);
 		for (const Size &res : resolutions) {
 			streamConfigurations_.push_back({ res, androidFormat });
 
diff --git a/src/android/camera_device.h b/src/android/camera_device.h
index 359a163ebab9..dc0ee664d443 100644
--- a/src/android/camera_device.h
+++ b/src/android/camera_device.h
@@ -97,6 +97,8 @@  private:
 	filterYUVResolutions(libcamera::CameraConfiguration *cameraConfig,
 			     const libcamera::PixelFormat &pixelFormat,
 			     const std::vector<libcamera::Size> &resolutions);
+	std::vector<libcamera::Size>
+	filterRawResolutions(const libcamera::PixelFormat &pixelFormat);
 
 	std::tuple<uint32_t, uint32_t> calculateStaticMetadataSize();
 	libcamera::FrameBuffer *createFrameBuffer(const buffer_handle_t camera3buffer);