[libcamera-devel,RFC,6/7] android: yuv: add YUYV -> NV12 conversion

Message ID 20230915-libyuv-convert-v1-6-1e5bcf68adac@baylibre.com
State New
Series
  • android: add YUYV->NV12 conversion via libyuv

Commit Message

Mattijs Korpershoek Sept. 15, 2023, 7:57 a.m. UTC
On am62x platforms, the receiver driver (j721e-csi2rx) only
supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.

The receiver and the sensor (ov5640) hardware are both capable of
YUV420, however:

* we are not aware of OV5640 being tested with YUV420 formats on any
  vendor tree.
* NV12 has different line lines (even lines are twice as long as odd
  lines) Different line-sized DMA transfers have not been tested on
  the TI CSI-RX SHIM IP.

On the other hand, the graphics allocator (gralloc) cannot allocate
YUV422 buffers directly. It mainly allocated NV12 buffers when
userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.

Because of the above, we need a pixel conversion from YUYV to NV12,
which is handled in the yuv processor via libyuv.

Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
---
 src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
 1 file changed, 68 insertions(+), 23 deletions(-)
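
For reference, the YUYV to NV12 conversion described above reduces to a single libyuv call; a rough sketch (the wrapper and its parameter names are illustrative only, strides and buffer setup are simplified):

	#include <stdint.h>

	#include <libyuv/convert.h>

	/* Convert one packed YUYV frame into the Y and CbCr planes of an NV12 buffer. */
	int convertYuyvToNv12(const uint8_t *src, int srcStride,
			      uint8_t *dstY, int dstStrideY,
			      uint8_t *dstUV, int dstStrideUV,
			      int width, int height)
	{
		/* Returns 0 on success, which is what the post processor checks. */
		return libyuv::YUY2ToNV12(src, srcStride,
					  dstY, dstStrideY,
					  dstUV, dstStrideUV,
					  width, height);
	}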

Comments

Laurent Pinchart Sept. 24, 2023, 9:34 p.m. UTC | #1
Hi Mattijs,

Thank you for the patch.

On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> On am62x platforms, the receiver driver (j721e-csi2rx) only
> supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> 
> The receiver and the sensor (ov5640) hardware are both capable of
> YUV420, however:
> 
> * we are not aware of OV5640 being tested with YUV420 formats on any
>   vendor tree.

Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
Sensors only output packed formats (there could be exceptions, but they
would be very rare), it is the DMA engine on the host side that would
convert them to planar formats. That would be the DMA engine handled by
the k3-udma driver. It's fairly complicated, and I don't know if it
would have the ability to output planar formats. Tomi (on CC) may have
more information.
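
To make the packed vs. semi-planar distinction concrete, a rough sketch of the buffer sizes involved (function names are illustrative, strides are ignored):

	#include <cstddef>

	/* Packed YUYV: a single plane, 2 bytes per pixel, every line the same length. */
	std::size_t yuyvPlaneSize(unsigned int width, unsigned int height)
	{
		return std::size_t(width) * 2 * height;
	}

	/*
	 * Semi-planar NV12: a full-resolution Y plane followed by a
	 * half-height plane of interleaved CbCr samples.
	 */
	std::size_t nv12FrameSize(unsigned int width, unsigned int height)
	{
		return std::size_t(width) * height
		     + std::size_t(width) * (height / 2);
	}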

> * NV12 has different line lines (even lines are twice as long as odd

s/lines/lengths/

>   lines) Different line-sized DMA transfers have not been tested on

s/)/)./

>   the TI CSI-RX SHIM IP.

As mentioned just above, the sensor will not output planar formats, so
the issue is about the 

> On the other hand, the graphics allocator (gralloc) cannot allocate
> YUV422 buffers directly. It mainly allocated NV12 buffers when
> userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.

The DSS seems to support YUYV, is there a way to fix the TI gralloc to
support it ?

> Because of the above, we need a pixel conversion from YUYV to NV12,
> which is handled in the yuv processor via libyuv.
> 
> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> ---
>  src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
>  1 file changed, 68 insertions(+), 23 deletions(-)
> 
> diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> index 734bb85b7351..b680b833dafb 100644
> --- a/src/android/yuv/post_processor_yuv.cpp
> +++ b/src/android/yuv/post_processor_yuv.cpp
> @@ -7,6 +7,9 @@
>  
>  #include "post_processor_yuv.h"
>  
> +#include <utility>
> +
> +#include <libyuv/convert.h>
>  #include <libyuv/scale.h>
>  
>  #include <libcamera/base/log.h>
> @@ -22,14 +25,42 @@ using namespace libcamera;
>  
>  LOG_DEFINE_CATEGORY(YUV)
>  
> +namespace {
> +
> +/**
> + * \var supportedConversions
> + * \brief list of supported output pixel formats for an input pixel format
> + */
> +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> +	{ formats::YUYV, { formats::NV12 } },
> +};
> +
> +} /* namespace */
> +
>  int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>  				const StreamConfiguration &outCfg)
>  {
>  	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> -				<< " (from " << inCfg.pixelFormat
> -				<< " to " << outCfg.pixelFormat << ")";
> -		return -EINVAL;
> +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> +		if (it == supportedConversions.end()) {
> +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;

			LOG(YUV, Error)
				<< "Unsupported source format "
				<< inCfg.pixelFormat;

> +			return -EINVAL;
> +		}
> +
> +		std::vector<PixelFormat> outFormats = it->second;

No need for a copy.
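Binding a const reference would avoid it, e.g.:

		const std::vector<PixelFormat> &outFormats = it->second;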

> +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);

		const auto &match = std::find(outFormats.begin(), outFormats.end(),
					      outCfg.pixelFormat);

> +		if (match == outFormats.end()) {
> +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> +					<< " (from " << inCfg.pixelFormat
> +					<< " to " << outCfg.pixelFormat << ")";

			LOG(YUV, Error)
				<< "Requested pixel format conversion is not supported"
				<< " (from " << inCfg.pixelFormat
				<< " to " << outCfg.pixelFormat << ")";

> +			return -EINVAL;
> +		}
> +	} else {
> +		if (inCfg.pixelFormat != formats::NV12) {
> +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> +					<< " (only NV12 is supported for scaling)";

			LOG(YUV, Error)
				<< "Unsupported format " << inCfg.pixelFormat
				<< " (only NV12 is supported for scaling)";

> +			return -EINVAL;
> +		}
>  	}
>  
>  	if (inCfg.size < outCfg.size) {
> @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>  		return -EINVAL;
>  	}
>  
> -	if (inCfg.pixelFormat != formats::NV12) {
> -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> -				<< " (only NV12 is supported)";
> -		return -EINVAL;
> -	}
> -
>  	calculateLengths(inCfg, outCfg);
>  	return 0;
>  }
> @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
>  		return;
>  	}
>  
> -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> -				    sourceStride_[0],
> -				    sourceMapped.planes()[1].data(),
> -				    sourceStride_[1],
> -				    sourceSize_.width, sourceSize_.height,
> -				    destination->plane(0).data(),
> -				    destinationStride_[0],
> -				    destination->plane(1).data(),
> -				    destinationStride_[1],
> -				    destinationSize_.width,
> -				    destinationSize_.height,
> -				    libyuv::FilterMode::kFilterBilinear);
> +	int ret = 0;
> +	switch (sourceFormat_) {
> +	case formats::NV12:
> +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> +					sourceStride_[0],
> +					sourceMapped.planes()[1].data(),
> +					sourceStride_[1],
> +					sourceSize_.width, sourceSize_.height,
> +					destination->plane(0).data(),
> +					destinationStride_[0],
> +					destination->plane(1).data(),
> +					destinationStride_[1],
> +					destinationSize_.width,
> +					destinationSize_.height,
> +					libyuv::FilterMode::kFilterBilinear);
> +		break;
> +	case formats::YUYV:
> +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> +					 sourceStride_[0],
> +					 destination->plane(0).data(),
> +					 destinationStride_[0],
> +					 destination->plane(1).data(),
> +					 destinationStride_[1],
> +					 destinationSize_.width,
> +					 destinationSize_.height);
> +		break;
> +	default:
> +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> +		break;

This can't happen, can it ?
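If it indeed can't, the default case could become a simple assertion rather than an error path, e.g. (assuming libcamera's ASSERT() is acceptable here):

	default:
		ASSERT(false);
		break;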

> +	}
> +
>  	if (ret) {
> -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;

s/Libyuv/libyuv/

Although I would write "YUV post-processing failure: ".
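i.e. something along the lines of:

		LOG(YUV, Error) << "YUV post-processing failure: " << ret;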

>  		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
>  		return;
>  	}
>
Tomi Valkeinen Sept. 25, 2023, 10:52 a.m. UTC | #2
On 25/09/2023 00:34, Laurent Pinchart wrote:
> Hi Mattijs,
> 
> Thank you for the patch.
> 
> On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
>> On am62x platforms, the receiver driver (j721e-csi2rx) only
>> supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
>>
>> The receiver and the sensor (ov5640) hardware are both capable of
>> YUV420, however:
>>
>> * we are not aware of OV5640 being tested with YUV420 formats on any
>>    vendor tree.
> 
> Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> Sensors only output packed formats (there could be exceptions, but they
> would be very rare), it is the DMA engine on the host side that would
> convert them to planar formats. That would be the DMA engine handled by
> the k3-udma driver. It's fairly complicated, and I don't know if it
> would have the ability to output planar formats. Tomi (on CC) may have
> more information.

No, I don't know if the CSI-2 RX or the DMA engine support packed to 
planar conversions. Maybe Jai knows.

If the point is to show the frames on the screen, it sounds to me that 
the YUV422 formats would work the best.

  Tomi

>> * NV12 has different line lines (even lines are twice as long as odd
> 
> s/lines/lengths/
> 
>>    lines) Different line-sized DMA transfers have not been tested on
> 
> s/)/)./
> 
>>    the TI CSI-RX SHIM IP.
> 
> As mentioned just above, the sensor will not output planar formats, so
> the issue is about the
> 
>> On the other hand, the graphics allocator (gralloc) cannot allocate
>> YUV422 buffers directly. It mainly allocated NV12 buffers when
>> userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> 
> The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> support it ?
> 
>> Because of the above, we need a pixel conversion from YUYV to NV12,
>> which is handled in the yuv processor via libyuv.
>>
>> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
>> ---
>>   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
>>   1 file changed, 68 insertions(+), 23 deletions(-)
>>
>> diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
>> index 734bb85b7351..b680b833dafb 100644
>> --- a/src/android/yuv/post_processor_yuv.cpp
>> +++ b/src/android/yuv/post_processor_yuv.cpp
>> @@ -7,6 +7,9 @@
>>   
>>   #include "post_processor_yuv.h"
>>   
>> +#include <utility>
>> +
>> +#include <libyuv/convert.h>
>>   #include <libyuv/scale.h>
>>   
>>   #include <libcamera/base/log.h>
>> @@ -22,14 +25,42 @@ using namespace libcamera;
>>   
>>   LOG_DEFINE_CATEGORY(YUV)
>>   
>> +namespace {
>> +
>> +/**
>> + * \var supportedConversions
>> + * \brief list of supported output pixel formats for an input pixel format
>> + */
>> +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
>> +	{ formats::YUYV, { formats::NV12 } },
>> +};
>> +
>> +} /* namespace */
>> +
>>   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>>   				const StreamConfiguration &outCfg)
>>   {
>>   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
>> -		LOG(YUV, Error) << "Pixel format conversion is not supported"
>> -				<< " (from " << inCfg.pixelFormat
>> -				<< " to " << outCfg.pixelFormat << ")";
>> -		return -EINVAL;
>> +		const auto it = supportedConversions.find(inCfg.pixelFormat);
>> +		if (it == supportedConversions.end()) {
>> +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> 
> 			LOG(YUV, Error)
> 				<< "Unsupported source format "
> 				<< inCfg.pixelFormat;
> 
>> +			return -EINVAL;
>> +		}
>> +
>> +		std::vector<PixelFormat> outFormats = it->second;
> 
> No need for a copy.
> 
>> +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> 
> 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> 					      outCfg.pixelFormat);
> 
>> +		if (match == outFormats.end()) {
>> +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
>> +					<< " (from " << inCfg.pixelFormat
>> +					<< " to " << outCfg.pixelFormat << ")";
> 
> 			LOG(YUV, Error)
> 				<< "Requested pixel format conversion is not supported"
> 				<< " (from " << inCfg.pixelFormat
> 				<< " to " << outCfg.pixelFormat << ")";
> 
>> +			return -EINVAL;
>> +		}
>> +	} else {
>> +		if (inCfg.pixelFormat != formats::NV12) {
>> +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
>> +					<< " (only NV12 is supported for scaling)";
> 
> 			LOG(YUV, Error)
> 				<< "Unsupported format " << inCfg.pixelFormat
> 				<< " (only NV12 is supported for scaling)";
> 
>> +			return -EINVAL;
>> +		}
>>   	}
>>   
>>   	if (inCfg.size < outCfg.size) {
>> @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>>   		return -EINVAL;
>>   	}
>>   
>> -	if (inCfg.pixelFormat != formats::NV12) {
>> -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
>> -				<< " (only NV12 is supported)";
>> -		return -EINVAL;
>> -	}
>> -
>>   	calculateLengths(inCfg, outCfg);
>>   	return 0;
>>   }
>> @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
>>   		return;
>>   	}
>>   
>> -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
>> -				    sourceStride_[0],
>> -				    sourceMapped.planes()[1].data(),
>> -				    sourceStride_[1],
>> -				    sourceSize_.width, sourceSize_.height,
>> -				    destination->plane(0).data(),
>> -				    destinationStride_[0],
>> -				    destination->plane(1).data(),
>> -				    destinationStride_[1],
>> -				    destinationSize_.width,
>> -				    destinationSize_.height,
>> -				    libyuv::FilterMode::kFilterBilinear);
>> +	int ret = 0;
>> +	switch (sourceFormat_) {
>> +	case formats::NV12:
>> +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
>> +					sourceStride_[0],
>> +					sourceMapped.planes()[1].data(),
>> +					sourceStride_[1],
>> +					sourceSize_.width, sourceSize_.height,
>> +					destination->plane(0).data(),
>> +					destinationStride_[0],
>> +					destination->plane(1).data(),
>> +					destinationStride_[1],
>> +					destinationSize_.width,
>> +					destinationSize_.height,
>> +					libyuv::FilterMode::kFilterBilinear);
>> +		break;
>> +	case formats::YUYV:
>> +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
>> +					 sourceStride_[0],
>> +					 destination->plane(0).data(),
>> +					 destinationStride_[0],
>> +					 destination->plane(1).data(),
>> +					 destinationStride_[1],
>> +					 destinationSize_.width,
>> +					 destinationSize_.height);
>> +		break;
>> +	default:
>> +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
>> +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
>> +		break;
> 
> This can't happen, can it ?
> 
>> +	}
>> +
>>   	if (ret) {
>> -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
>> +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> 
> s/Libyuv/libyuv/
> 
> Although I would write "YUV post-processing failure: ".
> 
>>   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
>>   		return;
>>   	}
>>
>
Laurent Pinchart Sept. 25, 2023, 10:56 a.m. UTC | #3
On Mon, Sep 25, 2023 at 01:52:40PM +0300, Tomi Valkeinen wrote:
> On 25/09/2023 00:34, Laurent Pinchart wrote:
> > Hi Mattijs,
> > 
> > Thank you for the patch.
> > 
> > On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> >> On am62x platforms, the receiver driver (j721e-csi2rx) only
> >> supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> >>
> >> The receiver and the sensor (ov5640) hardware are both capable of
> >> YUV420, however:
> >>
> >> * we are not aware of OV5640 being tested with YUV420 formats on any
> >>    vendor tree.
> > 
> > Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> > Sensors only output packed formats (there could be exceptions, but they
> > would be very rare), it is the DMA engine on the host side that would
> > convert them to planar formats. That would be the DMA engine handled by
> > the k3-udma driver. It's fairly complicated, and I don't know if it
> > would have the ability to output planar formats. Tomi (on CC) may have
> > more information.
> 
> No, I don't know if the CSI-2 RX or the DMA engine support packed to 
> planar conversions. Maybe Jai knows.
> 
> If the point is to show the frames on the screen, it sounds to me that 
> the YUV422 formats would work the best.

Do you mean packed YUV 4:2:2 (a.k.a. YUYV & co), semi-planar YUV
4:2:2 (a.k.a. NV16 and NV61) or planar YUV 4:2:2 ? :-)

> >> * NV12 has different line lines (even lines are twice as long as odd
> > 
> > s/lines/lengths/
> > 
> >>    lines) Different line-sized DMA transfers have not been tested on
> > 
> > s/)/)./
> > 
> >>    the TI CSI-RX SHIM IP.
> > 
> > As mentioned just above, the sensor will not output planar formats, so
> > the issue is about the
> > 
> >> On the other hand, the graphics allocator (gralloc) cannot allocate
> >> YUV422 buffers directly. It mainly allocated NV12 buffers when
> >> userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> > 
> > The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> > support it ?
> > 
> >> Because of the above, we need a pixel conversion from YUYV to NV12,
> >> which is handled in the yuv processor via libyuv.
> >>
> >> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> >> ---
> >>   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
> >>   1 file changed, 68 insertions(+), 23 deletions(-)
> >>
> >> diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> >> index 734bb85b7351..b680b833dafb 100644
> >> --- a/src/android/yuv/post_processor_yuv.cpp
> >> +++ b/src/android/yuv/post_processor_yuv.cpp
> >> @@ -7,6 +7,9 @@
> >>   
> >>   #include "post_processor_yuv.h"
> >>   
> >> +#include <utility>
> >> +
> >> +#include <libyuv/convert.h>
> >>   #include <libyuv/scale.h>
> >>   
> >>   #include <libcamera/base/log.h>
> >> @@ -22,14 +25,42 @@ using namespace libcamera;
> >>   
> >>   LOG_DEFINE_CATEGORY(YUV)
> >>   
> >> +namespace {
> >> +
> >> +/**
> >> + * \var supportedConversions
> >> + * \brief list of supported output pixel formats for an input pixel format
> >> + */
> >> +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> >> +	{ formats::YUYV, { formats::NV12 } },
> >> +};
> >> +
> >> +} /* namespace */
> >> +
> >>   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> >>   				const StreamConfiguration &outCfg)
> >>   {
> >>   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> >> -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> >> -				<< " (from " << inCfg.pixelFormat
> >> -				<< " to " << outCfg.pixelFormat << ")";
> >> -		return -EINVAL;
> >> +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> >> +		if (it == supportedConversions.end()) {
> >> +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> > 
> > 			LOG(YUV, Error)
> > 				<< "Unsupported source format "
> > 				<< inCfg.pixelFormat;
> > 
> >> +			return -EINVAL;
> >> +		}
> >> +
> >> +		std::vector<PixelFormat> outFormats = it->second;
> > 
> > No need for a copy.
> > 
> >> +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> > 
> > 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> > 					      outCfg.pixelFormat);
> > 
> >> +		if (match == outFormats.end()) {
> >> +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> >> +					<< " (from " << inCfg.pixelFormat
> >> +					<< " to " << outCfg.pixelFormat << ")";
> > 
> > 			LOG(YUV, Error)
> > 				<< "Requested pixel format conversion is not supported"
> > 				<< " (from " << inCfg.pixelFormat
> > 				<< " to " << outCfg.pixelFormat << ")";
> > 
> >> +			return -EINVAL;
> >> +		}
> >> +	} else {
> >> +		if (inCfg.pixelFormat != formats::NV12) {
> >> +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> >> +					<< " (only NV12 is supported for scaling)";
> > 
> > 			LOG(YUV, Error)
> > 				<< "Unsupported format " << inCfg.pixelFormat
> > 				<< " (only NV12 is supported for scaling)";
> > 
> >> +			return -EINVAL;
> >> +		}
> >>   	}
> >>   
> >>   	if (inCfg.size < outCfg.size) {
> >> @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> >>   		return -EINVAL;
> >>   	}
> >>   
> >> -	if (inCfg.pixelFormat != formats::NV12) {
> >> -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> >> -				<< " (only NV12 is supported)";
> >> -		return -EINVAL;
> >> -	}
> >> -
> >>   	calculateLengths(inCfg, outCfg);
> >>   	return 0;
> >>   }
> >> @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
> >>   		return;
> >>   	}
> >>   
> >> -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> >> -				    sourceStride_[0],
> >> -				    sourceMapped.planes()[1].data(),
> >> -				    sourceStride_[1],
> >> -				    sourceSize_.width, sourceSize_.height,
> >> -				    destination->plane(0).data(),
> >> -				    destinationStride_[0],
> >> -				    destination->plane(1).data(),
> >> -				    destinationStride_[1],
> >> -				    destinationSize_.width,
> >> -				    destinationSize_.height,
> >> -				    libyuv::FilterMode::kFilterBilinear);
> >> +	int ret = 0;
> >> +	switch (sourceFormat_) {
> >> +	case formats::NV12:
> >> +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> >> +					sourceStride_[0],
> >> +					sourceMapped.planes()[1].data(),
> >> +					sourceStride_[1],
> >> +					sourceSize_.width, sourceSize_.height,
> >> +					destination->plane(0).data(),
> >> +					destinationStride_[0],
> >> +					destination->plane(1).data(),
> >> +					destinationStride_[1],
> >> +					destinationSize_.width,
> >> +					destinationSize_.height,
> >> +					libyuv::FilterMode::kFilterBilinear);
> >> +		break;
> >> +	case formats::YUYV:
> >> +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> >> +					 sourceStride_[0],
> >> +					 destination->plane(0).data(),
> >> +					 destinationStride_[0],
> >> +					 destination->plane(1).data(),
> >> +					 destinationStride_[1],
> >> +					 destinationSize_.width,
> >> +					 destinationSize_.height);
> >> +		break;
> >> +	default:
> >> +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> >> +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> >> +		break;
> > 
> > This can't happen, can it ?
> > 
> >> +	}
> >> +
> >>   	if (ret) {
> >> -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> >> +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> > 
> > s/Libyuv/libyuv/
> > 
> > Although I would write "YUV post-processing failure: ".
> > 
> >>   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> >>   		return;
> >>   	}
> >>
Tomi Valkeinen Sept. 25, 2023, 10:59 a.m. UTC | #4
On 25/09/2023 13:56, Laurent Pinchart wrote:
> On Mon, Sep 25, 2023 at 01:52:40PM +0300, Tomi Valkeinen wrote:
>> On 25/09/2023 00:34, Laurent Pinchart wrote:
>>> Hi Mattijs,
>>>
>>> Thank you for the patch.
>>>
>>> On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
>>>> On am62x platforms, the receiver driver (j721e-csi2rx) only
>>>> supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
>>>>
>>>> The receiver and the sensor (ov5640) hardware are both capable of
>>>> YUV420, however:
>>>>
>>>> * we are not aware of OV5640 being tested with YUV420 formats on any
>>>>     vendor tree.
>>>
>>> Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
>>> Sensors only output packed formats (there could be exceptions, but they
>>> would be very rare), it is the DMA engine on the host side that would
>>> convert them to planar formats. That would be the DMA engine handled by
>>> the k3-udma driver. It's fairly complicated, and I don't know if it
>>> would have the ability to output planar formats. Tomi (on CC) may have
>>> more information.
>>
>> No, I don't know if the CSI-2 RX or the DMA engine support packed to
>> planar conversions. Maybe Jai knows.
>>
>> If the point is to show the frames on the screen, it sounds to me that
>> the YUV422 formats would work the best.
> 
> Do you mean packed YUV 4:2:2 (a.k.a. YUYV & co), semi-planar YUV
> 4:2:2 (a.k.a. NV16 and NV61) or planar YUV 4:2:2 ? :-)

Packed. That's the only option the cameras will send, isn't it? But
I could rephrase: any format that is supported both by the DSS and the 
sensor =).

  Tomi

>>>> * NV12 has different line lines (even lines are twice as long as odd
>>>
>>> s/lines/lengths/
>>>
>>>>     lines) Different line-sized DMA transfers have not been tested on
>>>
>>> s/)/)./
>>>
>>>>     the TI CSI-RX SHIM IP.
>>>
>>> As mentioned just above, the sensor will not output planar formats, so
>>> the issue is about the
>>>
>>>> On the other hand, the graphics allocator (gralloc) cannot allocate
>>>> YUV422 buffers directly. It mainly allocated NV12 buffers when
>>>> userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
>>>
>>> The DSS seems to support YUYV, is there a way to fix the TI gralloc to
>>> support it ?
>>>
>>>> Because of the above, we need a pixel conversion from YUYV to NV12,
>>>> which is handled in the yuv processor via libyuv.
>>>>
>>>> Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
>>>> ---
>>>>    src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
>>>>    1 file changed, 68 insertions(+), 23 deletions(-)
>>>>
>>>> diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
>>>> index 734bb85b7351..b680b833dafb 100644
>>>> --- a/src/android/yuv/post_processor_yuv.cpp
>>>> +++ b/src/android/yuv/post_processor_yuv.cpp
>>>> @@ -7,6 +7,9 @@
>>>>    
>>>>    #include "post_processor_yuv.h"
>>>>    
>>>> +#include <utility>
>>>> +
>>>> +#include <libyuv/convert.h>
>>>>    #include <libyuv/scale.h>
>>>>    
>>>>    #include <libcamera/base/log.h>
>>>> @@ -22,14 +25,42 @@ using namespace libcamera;
>>>>    
>>>>    LOG_DEFINE_CATEGORY(YUV)
>>>>    
>>>> +namespace {
>>>> +
>>>> +/**
>>>> + * \var supportedConversions
>>>> + * \brief list of supported output pixel formats for an input pixel format
>>>> + */
>>>> +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
>>>> +	{ formats::YUYV, { formats::NV12 } },
>>>> +};
>>>> +
>>>> +} /* namespace */
>>>> +
>>>>    int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>>>>    				const StreamConfiguration &outCfg)
>>>>    {
>>>>    	if (inCfg.pixelFormat != outCfg.pixelFormat) {
>>>> -		LOG(YUV, Error) << "Pixel format conversion is not supported"
>>>> -				<< " (from " << inCfg.pixelFormat
>>>> -				<< " to " << outCfg.pixelFormat << ")";
>>>> -		return -EINVAL;
>>>> +		const auto it = supportedConversions.find(inCfg.pixelFormat);
>>>> +		if (it == supportedConversions.end()) {
>>>> +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
>>>
>>> 			LOG(YUV, Error)
>>> 				<< "Unsupported source format "
>>> 				<< inCfg.pixelFormat;
>>>
>>>> +			return -EINVAL;
>>>> +		}
>>>> +
>>>> +		std::vector<PixelFormat> outFormats = it->second;
>>>
>>> No need for a copy.
>>>
>>>> +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
>>>
>>> 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
>>> 					      outCfg.pixelFormat);
>>>
>>>> +		if (match == outFormats.end()) {
>>>> +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
>>>> +					<< " (from " << inCfg.pixelFormat
>>>> +					<< " to " << outCfg.pixelFormat << ")";
>>>
>>> 			LOG(YUV, Error)
>>> 				<< "Requested pixel format conversion is not supported"
>>> 				<< " (from " << inCfg.pixelFormat
>>> 				<< " to " << outCfg.pixelFormat << ")";
>>>
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	} else {
>>>> +		if (inCfg.pixelFormat != formats::NV12) {
>>>> +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
>>>> +					<< " (only NV12 is supported for scaling)";
>>>
>>> 			LOG(YUV, Error)
>>> 				<< "Unsupported format " << inCfg.pixelFormat
>>> 				<< " (only NV12 is supported for scaling)";
>>>
>>>> +			return -EINVAL;
>>>> +		}
>>>>    	}
>>>>    
>>>>    	if (inCfg.size < outCfg.size) {
>>>> @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
>>>>    		return -EINVAL;
>>>>    	}
>>>>    
>>>> -	if (inCfg.pixelFormat != formats::NV12) {
>>>> -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
>>>> -				<< " (only NV12 is supported)";
>>>> -		return -EINVAL;
>>>> -	}
>>>> -
>>>>    	calculateLengths(inCfg, outCfg);
>>>>    	return 0;
>>>>    }
>>>> @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
>>>>    		return;
>>>>    	}
>>>>    
>>>> -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
>>>> -				    sourceStride_[0],
>>>> -				    sourceMapped.planes()[1].data(),
>>>> -				    sourceStride_[1],
>>>> -				    sourceSize_.width, sourceSize_.height,
>>>> -				    destination->plane(0).data(),
>>>> -				    destinationStride_[0],
>>>> -				    destination->plane(1).data(),
>>>> -				    destinationStride_[1],
>>>> -				    destinationSize_.width,
>>>> -				    destinationSize_.height,
>>>> -				    libyuv::FilterMode::kFilterBilinear);
>>>> +	int ret = 0;
>>>> +	switch (sourceFormat_) {
>>>> +	case formats::NV12:
>>>> +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
>>>> +					sourceStride_[0],
>>>> +					sourceMapped.planes()[1].data(),
>>>> +					sourceStride_[1],
>>>> +					sourceSize_.width, sourceSize_.height,
>>>> +					destination->plane(0).data(),
>>>> +					destinationStride_[0],
>>>> +					destination->plane(1).data(),
>>>> +					destinationStride_[1],
>>>> +					destinationSize_.width,
>>>> +					destinationSize_.height,
>>>> +					libyuv::FilterMode::kFilterBilinear);
>>>> +		break;
>>>> +	case formats::YUYV:
>>>> +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
>>>> +					 sourceStride_[0],
>>>> +					 destination->plane(0).data(),
>>>> +					 destinationStride_[0],
>>>> +					 destination->plane(1).data(),
>>>> +					 destinationStride_[1],
>>>> +					 destinationSize_.width,
>>>> +					 destinationSize_.height);
>>>> +		break;
>>>> +	default:
>>>> +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
>>>> +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
>>>> +		break;
>>>
>>> This can't happen, can it ?
>>>
>>>> +	}
>>>> +
>>>>    	if (ret) {
>>>> -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
>>>> +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
>>>
>>> s/Libyuv/libyuv/
>>>
>>> Although I would write "YUV post-processing failure: ".
>>>
>>>>    		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
>>>>    		return;
>>>>    	}
>>>>
>
Jai Luthra Sept. 25, 2023, 11:47 a.m. UTC | #5
Hi,

On Sep 25, 2023 at 13:52:40 +0300, Tomi Valkeinen wrote:
> On 25/09/2023 00:34, Laurent Pinchart wrote:
> > Hi Mattijs,
> > 
> > Thank you for the patch.
> > 
> > On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> > > On am62x platforms, the receiver driver (j721e-csi2rx) only
> > > supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> > > 
> > > The receiver and the sensor (ov5640) hardware are both capable of
> > > YUV420, however:
> > > 
> > > * we are not aware of OV5640 being tested with YUV420 formats on any
> > >    vendor tree.
> > 
> > Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> > Sensors only output packed formats (there could be exceptions, but they
> > would be very rare), it is the DMA engine on the host side that would
> > convert them to planar formats. That would be the DMA engine handled by
> > the k3-udma driver. It's fairly complicated, and I don't know if it
> > would have the ability to output planar formats. Tomi (on CC) may have
> > more information.
> 
> No, I don't know if the CSI-2 RX or the DMA engine support packed to planar
> conversions. Maybe Jai knows.
> 
> If the point is to show the frames on the screen, it sounds to me that the
> YUV422 formats would work the best.

Unfortunately the DMA hardware does not support pixel-level interleaving 
that would be required to convert packed YUV422/420 formats to planar or 
semi-planar formats.

But yes the display supports packed UYVY formats, we even use it for 
camera to display pipelines on linux. The limitation on android is from 
the gralloc implementation.

> 
>  Tomi
> 
> > > * NV12 has different line lines (even lines are twice as long as odd
> > 
> > s/lines/lengths/
> > 
> > >    lines) Different line-sized DMA transfers have not been tested on
> > 
> > s/)/)./
> > 
> > >    the TI CSI-RX SHIM IP.
> > 
> > As mentioned just above, the sensor will not output planar formats, so
> > the issue is about the
> > 
> > > On the other hand, the graphics allocator (gralloc) cannot allocate
> > > YUV422 buffers directly. It mainly allocated NV12 buffers when
> > > userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> > 
> > The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> > support it ?
> > 
> > > Because of the above, we need a pixel conversion from YUYV to NV12,
> > > which is handled in the yuv processor via libyuv.
> > > 
> > > Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> > > ---
> > >   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
> > >   1 file changed, 68 insertions(+), 23 deletions(-)
> > > 
> > > diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> > > index 734bb85b7351..b680b833dafb 100644
> > > --- a/src/android/yuv/post_processor_yuv.cpp
> > > +++ b/src/android/yuv/post_processor_yuv.cpp
> > > @@ -7,6 +7,9 @@
> > >   #include "post_processor_yuv.h"
> > > +#include <utility>
> > > +
> > > +#include <libyuv/convert.h>
> > >   #include <libyuv/scale.h>
> > >   #include <libcamera/base/log.h>
> > > @@ -22,14 +25,42 @@ using namespace libcamera;
> > >   LOG_DEFINE_CATEGORY(YUV)
> > > +namespace {
> > > +
> > > +/**
> > > + * \var supportedConversions
> > > + * \brief list of supported output pixel formats for an input pixel format
> > > + */
> > > +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> > > +	{ formats::YUYV, { formats::NV12 } },
> > > +};
> > > +
> > > +} /* namespace */
> > > +
> > >   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > >   				const StreamConfiguration &outCfg)
> > >   {
> > >   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> > > -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> > > -				<< " (from " << inCfg.pixelFormat
> > > -				<< " to " << outCfg.pixelFormat << ")";
> > > -		return -EINVAL;
> > > +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> > > +		if (it == supportedConversions.end()) {
> > > +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> > 
> > 			LOG(YUV, Error)
> > 				<< "Unsupported source format "
> > 				<< inCfg.pixelFormat;
> > 
> > > +			return -EINVAL;
> > > +		}
> > > +
> > > +		std::vector<PixelFormat> outFormats = it->second;
> > 
> > No need for a copy.
> > 
> > > +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> > 
> > 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> > 					      outCfg.pixelFormat);
> > 
> > > +		if (match == outFormats.end()) {
> > > +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> > > +					<< " (from " << inCfg.pixelFormat
> > > +					<< " to " << outCfg.pixelFormat << ")";
> > 
> > 			LOG(YUV, Error)
> > 				<< "Requested pixel format conversion is not supported"
> > 				<< " (from " << inCfg.pixelFormat
> > 				<< " to " << outCfg.pixelFormat << ")";
> > 
> > > +			return -EINVAL;
> > > +		}
> > > +	} else {
> > > +		if (inCfg.pixelFormat != formats::NV12) {
> > > +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > +					<< " (only NV12 is supported for scaling)";
> > 
> > 			LOG(YUV, Error)
> > 				<< "Unsupported format " << inCfg.pixelFormat
> > 				<< " (only NV12 is supported for scaling)";
> > 
> > > +			return -EINVAL;
> > > +		}
> > >   	}
> > >   	if (inCfg.size < outCfg.size) {
> > > @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > >   		return -EINVAL;
> > >   	}
> > > -	if (inCfg.pixelFormat != formats::NV12) {
> > > -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > -				<< " (only NV12 is supported)";
> > > -		return -EINVAL;
> > > -	}
> > > -
> > >   	calculateLengths(inCfg, outCfg);
> > >   	return 0;
> > >   }
> > > @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
> > >   		return;
> > >   	}
> > > -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > -				    sourceStride_[0],
> > > -				    sourceMapped.planes()[1].data(),
> > > -				    sourceStride_[1],
> > > -				    sourceSize_.width, sourceSize_.height,
> > > -				    destination->plane(0).data(),
> > > -				    destinationStride_[0],
> > > -				    destination->plane(1).data(),
> > > -				    destinationStride_[1],
> > > -				    destinationSize_.width,
> > > -				    destinationSize_.height,
> > > -				    libyuv::FilterMode::kFilterBilinear);
> > > +	int ret = 0;
> > > +	switch (sourceFormat_) {
> > > +	case formats::NV12:
> > > +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > +					sourceStride_[0],
> > > +					sourceMapped.planes()[1].data(),
> > > +					sourceStride_[1],
> > > +					sourceSize_.width, sourceSize_.height,
> > > +					destination->plane(0).data(),
> > > +					destinationStride_[0],
> > > +					destination->plane(1).data(),
> > > +					destinationStride_[1],
> > > +					destinationSize_.width,
> > > +					destinationSize_.height,
> > > +					libyuv::FilterMode::kFilterBilinear);
> > > +		break;
> > > +	case formats::YUYV:
> > > +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> > > +					 sourceStride_[0],
> > > +					 destination->plane(0).data(),
> > > +					 destinationStride_[0],
> > > +					 destination->plane(1).data(),
> > > +					 destinationStride_[1],
> > > +					 destinationSize_.width,
> > > +					 destinationSize_.height);
> > > +		break;
> > > +	default:
> > > +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> > > +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > +		break;
> > 
> > This can't happen, can it ?
> > 
> > > +	}
> > > +
> > >   	if (ret) {
> > > -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> > > +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> > 
> > s/Libyuv/libyuv/
> > 
> > Although I would write "YUV post-processing failure: ".
> > 
> > >   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > >   		return;
> > >   	}
> > > 
> > 
>
Laurent Pinchart Sept. 25, 2023, 11:55 a.m. UTC | #6
Hi Jai,

On Mon, Sep 25, 2023 at 05:17:59PM +0530, Jai Luthra wrote:
> On Sep 25, 2023 at 13:52:40 +0300, Tomi Valkeinen wrote:
> > On 25/09/2023 00:34, Laurent Pinchart wrote:
> > > On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> > > > On am62x platforms, the receiver driver (j721e-csi2rx) only
> > > > supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> > > > 
> > > > The receiver and the sensor (ov5640) hardware are both capable of
> > > > YUV420, however:
> > > > 
> > > > * we are not aware of OV5640 being tested with YUV420 formats on any
> > > >    vendor tree.
> > > 
> > > Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> > > Sensors only output packed formats (there could be exceptions, but they
> > > would be very rare), it is the DMA engine on the host side that would
> > > convert them to planar formats. That would be the DMA engine handled by
> > > the k3-udma driver. It's fairly complicated, and I don't know if it
> > > would have the ability to output planar formats. Tomi (on CC) may have
> > > more information.
> > 
> > No, I don't know if the CSI-2 RX or the DMA engine support packed to planar
> > conversions. Maybe Jai knows.
> > 
> > If the point is to show the frames on the screen, it sounds to me that the
> > YUV422 formats would work the best.
> 
> Unfortunately the DMA hardware does not support pixel-level interleaving 
> that would be required to convert packed YUV422/420 formats to planar or 
> semi-planar formats.
> 
> But yes the display supports packed UYVY formats, we even use it for 
> camera to display pipelines on linux. The limitation on android is from 
> the gralloc implementation.

Thank you for the confirmation. Now for the annoying question: Could
that be fixed in gralloc ?

> > > > * NV12 has different line lines (even lines are twice as long as odd
> > > 
> > > s/lines/lengths/
> > > 
> > > >    lines) Different line-sized DMA transfers have not been tested on
> > > 
> > > s/)/)./
> > > 
> > > >    the TI CSI-RX SHIM IP.
> > > 
> > > As mentioned just above, the sensor will not output planar formats, so
> > > the issue is about the
> > > 
> > > > On the other hand, the graphics allocator (gralloc) cannot allocate
> > > > YUV422 buffers directly. It mainly allocated NV12 buffers when
> > > > userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> > > 
> > > The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> > > support it ?
> > > 
> > > > Because of the above, we need a pixel conversion from YUYV to NV12,
> > > > which is handled in the yuv processor via libyuv.
> > > > 
> > > > Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> > > > ---
> > > >   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
> > > >   1 file changed, 68 insertions(+), 23 deletions(-)
> > > > 
> > > > diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> > > > index 734bb85b7351..b680b833dafb 100644
> > > > --- a/src/android/yuv/post_processor_yuv.cpp
> > > > +++ b/src/android/yuv/post_processor_yuv.cpp
> > > > @@ -7,6 +7,9 @@
> > > >   #include "post_processor_yuv.h"
> > > > +#include <utility>
> > > > +
> > > > +#include <libyuv/convert.h>
> > > >   #include <libyuv/scale.h>
> > > >   #include <libcamera/base/log.h>
> > > > @@ -22,14 +25,42 @@ using namespace libcamera;
> > > >   LOG_DEFINE_CATEGORY(YUV)
> > > > +namespace {
> > > > +
> > > > +/**
> > > > + * \var supportedConversions
> > > > + * \brief list of supported output pixel formats for an input pixel format
> > > > + */
> > > > +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> > > > +	{ formats::YUYV, { formats::NV12 } },
> > > > +};
> > > > +
> > > > +} /* namespace */
> > > > +
> > > >   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > >   				const StreamConfiguration &outCfg)
> > > >   {
> > > >   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> > > > -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> > > > -				<< " (from " << inCfg.pixelFormat
> > > > -				<< " to " << outCfg.pixelFormat << ")";
> > > > -		return -EINVAL;
> > > > +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> > > > +		if (it == supportedConversions.end()) {
> > > > +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> > > 
> > > 			LOG(YUV, Error)
> > > 				<< "Unsupported source format "
> > > 				<< inCfg.pixelFormat;
> > > 
> > > > +			return -EINVAL;
> > > > +		}
> > > > +
> > > > +		std::vector<PixelFormat> outFormats = it->second;
> > > 
> > > No need for a copy.
> > > 
> > > > +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> > > 
> > > 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> > > 					      outCfg.pixelFormat);
> > > 
> > > > +		if (match == outFormats.end()) {
> > > > +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> > > > +					<< " (from " << inCfg.pixelFormat
> > > > +					<< " to " << outCfg.pixelFormat << ")";
> > > 
> > > 			LOG(YUV, Error)
> > > 				<< "Requested pixel format conversion is not supported"
> > > 				<< " (from " << inCfg.pixelFormat
> > > 				<< " to " << outCfg.pixelFormat << ")";
> > > 
> > > > +			return -EINVAL;
> > > > +		}
> > > > +	} else {
> > > > +		if (inCfg.pixelFormat != formats::NV12) {
> > > > +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > +					<< " (only NV12 is supported for scaling)";
> > > 
> > > 			LOG(YUV, Error)
> > > 				<< "Unsupported format " << inCfg.pixelFormat
> > > 				<< " (only NV12 is supported for scaling)";
> > > 
> > > > +			return -EINVAL;
> > > > +		}
> > > >   	}
> > > >   	if (inCfg.size < outCfg.size) {
> > > > @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > >   		return -EINVAL;
> > > >   	}
> > > > -	if (inCfg.pixelFormat != formats::NV12) {
> > > > -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > -				<< " (only NV12 is supported)";
> > > > -		return -EINVAL;
> > > > -	}
> > > > -
> > > >   	calculateLengths(inCfg, outCfg);
> > > >   	return 0;
> > > >   }
> > > > @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
> > > >   		return;
> > > >   	}
> > > > -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > -				    sourceStride_[0],
> > > > -				    sourceMapped.planes()[1].data(),
> > > > -				    sourceStride_[1],
> > > > -				    sourceSize_.width, sourceSize_.height,
> > > > -				    destination->plane(0).data(),
> > > > -				    destinationStride_[0],
> > > > -				    destination->plane(1).data(),
> > > > -				    destinationStride_[1],
> > > > -				    destinationSize_.width,
> > > > -				    destinationSize_.height,
> > > > -				    libyuv::FilterMode::kFilterBilinear);
> > > > +	int ret = 0;
> > > > +	switch (sourceFormat_) {
> > > > +	case formats::NV12:
> > > > +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > +					sourceStride_[0],
> > > > +					sourceMapped.planes()[1].data(),
> > > > +					sourceStride_[1],
> > > > +					sourceSize_.width, sourceSize_.height,
> > > > +					destination->plane(0).data(),
> > > > +					destinationStride_[0],
> > > > +					destination->plane(1).data(),
> > > > +					destinationStride_[1],
> > > > +					destinationSize_.width,
> > > > +					destinationSize_.height,
> > > > +					libyuv::FilterMode::kFilterBilinear);
> > > > +		break;
> > > > +	case formats::YUYV:
> > > > +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> > > > +					 sourceStride_[0],
> > > > +					 destination->plane(0).data(),
> > > > +					 destinationStride_[0],
> > > > +					 destination->plane(1).data(),
> > > > +					 destinationStride_[1],
> > > > +					 destinationSize_.width,
> > > > +					 destinationSize_.height);
> > > > +		break;
> > > > +	default:
> > > > +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> > > > +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > > +		break;
> > > 
> > > This can't happen, can it ?
> > > 
> > > > +	}
> > > > +
> > > >   	if (ret) {
> > > > -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> > > > +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> > > 
> > > s/Libyuv/libyuv/
> > > 
> > > Although I would write "YUV post-processing failure: ".
> > > 
> > > >   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > >   		return;
> > > >   	}
Jai Luthra Sept. 25, 2023, 12:23 p.m. UTC | #7
Hi Laurent, Mattijs,

On Sep 25, 2023 at 14:55:48 +0300, Laurent Pinchart wrote:
> Hi Jai,
> 
> On Mon, Sep 25, 2023 at 05:17:59PM +0530, Jai Luthra wrote:
> > On Sep 25, 2023 at 13:52:40 +0300, Tomi Valkeinen wrote:
> > > On 25/09/2023 00:34, Laurent Pinchart wrote:
> > > > On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> > > > > On am62x platforms, the receiver driver (j721e-csi2rx) only
> > > > > supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> > > > > 
> > > > > The receiver and the sensor (ov5640) hardware are both capable of
> > > > > YUV420, however:
> > > > > 
> > > > > * we are not aware of OV5640 being tested with YUV420 formats on any
> > > > >    vendor tree.
> > > > 
> > > > Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> > > > Sensors only output packed formats (there could be exceptions, but they
> > > > would be very rare), it is the DMA engine on the host side that would
> > > > convert them to planar formats. That would be the DMA engine handled by
> > > > the k3-udma driver. It's fairly complicated, and I don't know if it
> > > > would have the ability to output planar formats. Tomi (on CC) may have
> > > > more information.
> > > 
> > > No, I don't know if the CSI-2 RX or the DMA engine support packed to planar
> > > conversions. Maybe Jai knows.
> > > 
> > > If the point is to show the frames on the screen, it sounds to me that the
> > > YUV422 formats would work the best.
> > 
> > Unfortunately the DMA hardware does not support pixel-level interleaving 
> > that would be required to convert packed YUV422/420 formats to planar or 
> > semi-planar formats.
> > 
> > But yes the display supports packed UYVY formats, we even use it for 
> > camera to display pipelines on linux. The limitation on android is from 
> > the gralloc implementation.
> 
> Thank you for the confirmation. Now for the annoying question: Could
> that be fixed in gralloc ?
> 

Fixing gralloc could be possible (we may have to ask the GPU vendor, I am 
not sure). But IIRC when we were discussing the options the concern was 
that just fixing the gralloc would not be enough, as there are other 
places in the Android framework where things are hardcoded to require NV12.

I don't understand the Android/HAL stack well enough, maybe Mattijs can share 
what else was needed other than changes to gralloc.

> > > > > * NV12 has different line lines (even lines are twice as long as odd
> > > > 
> > > > s/lines/lengths/
> > > > 
> > > > >    lines) Different line-sized DMA transfers have not been tested on
> > > > 
> > > > s/)/)./
> > > > 
> > > > >    the TI CSI-RX SHIM IP.
> > > > 
> > > > As mentioned just above, the sensor will not output planar formats, so
> > > > the issue is about the
> > > > 
> > > > > On the other hand, the graphics allocator (gralloc) cannot allocate
> > > > > YUV422 buffers directly. It mainly allocated NV12 buffers when
> > > > > userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> > > > 
> > > > The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> > > > support it ?
> > > > 
> > > > > Because of the above, we need a pixel conversion from YUYV to NV12,
> > > > > which is handled in the yuv processor via libyuv.
> > > > > 
> > > > > Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> > > > > ---
> > > > >   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
> > > > >   1 file changed, 68 insertions(+), 23 deletions(-)
> > > > > 
> > > > > diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> > > > > index 734bb85b7351..b680b833dafb 100644
> > > > > --- a/src/android/yuv/post_processor_yuv.cpp
> > > > > +++ b/src/android/yuv/post_processor_yuv.cpp
> > > > > @@ -7,6 +7,9 @@
> > > > >   #include "post_processor_yuv.h"
> > > > > +#include <utility>
> > > > > +
> > > > > +#include <libyuv/convert.h>
> > > > >   #include <libyuv/scale.h>
> > > > >   #include <libcamera/base/log.h>
> > > > > @@ -22,14 +25,42 @@ using namespace libcamera;
> > > > >   LOG_DEFINE_CATEGORY(YUV)
> > > > > +namespace {
> > > > > +
> > > > > +/**
> > > > > + * \var supportedConversions
> > > > > + * \brief list of supported output pixel formats for an input pixel format
> > > > > + */
> > > > > +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> > > > > +	{ formats::YUYV, { formats::NV12 } },
> > > > > +};
> > > > > +
> > > > > +} /* namespace */
> > > > > +
> > > > >   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > > >   				const StreamConfiguration &outCfg)
> > > > >   {
> > > > >   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> > > > > -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> > > > > -				<< " (from " << inCfg.pixelFormat
> > > > > -				<< " to " << outCfg.pixelFormat << ")";
> > > > > -		return -EINVAL;
> > > > > +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> > > > > +		if (it == supportedConversions.end()) {
> > > > > +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> > > > 
> > > > 			LOG(YUV, Error)
> > > > 				<< "Unsupported source format "
> > > > 				<< inCfg.pixelFormat;
> > > > 
> > > > > +			return -EINVAL;
> > > > > +		}
> > > > > +
> > > > > +		std::vector<PixelFormat> outFormats = it->second;
> > > > 
> > > > No need for a copy.
> > > > 
> > > > > +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> > > > 
> > > > 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> > > > 					      outCfg.pixelFormat);
> > > > 
> > > > > +		if (match == outFormats.end()) {
> > > > > +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> > > > > +					<< " (from " << inCfg.pixelFormat
> > > > > +					<< " to " << outCfg.pixelFormat << ")";
> > > > 
> > > > 			LOG(YUV, Error)
> > > > 				<< "Requested pixel format conversion is not supported"
> > > > 				<< " (from " << inCfg.pixelFormat
> > > > 				<< " to " << outCfg.pixelFormat << ")";
> > > > 
> > > > > +			return -EINVAL;
> > > > > +		}
> > > > > +	} else {
> > > > > +		if (inCfg.pixelFormat != formats::NV12) {
> > > > > +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > > +					<< " (only NV12 is supported for scaling)";
> > > > 
> > > > 			LOG(YUV, Error)
> > > > 				<< "Unsupported format " << inCfg.pixelFormat
> > > > 				<< " (only NV12 is supported for scaling)";
> > > > 
> > > > > +			return -EINVAL;
> > > > > +		}
> > > > >   	}
> > > > >   	if (inCfg.size < outCfg.size) {
> > > > > @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > > >   		return -EINVAL;
> > > > >   	}
> > > > > -	if (inCfg.pixelFormat != formats::NV12) {
> > > > > -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > > -				<< " (only NV12 is supported)";
> > > > > -		return -EINVAL;
> > > > > -	}
> > > > > -
> > > > >   	calculateLengths(inCfg, outCfg);
> > > > >   	return 0;
> > > > >   }
> > > > > @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
> > > > >   		return;
> > > > >   	}
> > > > > -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > > -				    sourceStride_[0],
> > > > > -				    sourceMapped.planes()[1].data(),
> > > > > -				    sourceStride_[1],
> > > > > -				    sourceSize_.width, sourceSize_.height,
> > > > > -				    destination->plane(0).data(),
> > > > > -				    destinationStride_[0],
> > > > > -				    destination->plane(1).data(),
> > > > > -				    destinationStride_[1],
> > > > > -				    destinationSize_.width,
> > > > > -				    destinationSize_.height,
> > > > > -				    libyuv::FilterMode::kFilterBilinear);
> > > > > +	int ret = 0;
> > > > > +	switch (sourceFormat_) {
> > > > > +	case formats::NV12:
> > > > > +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > > +					sourceStride_[0],
> > > > > +					sourceMapped.planes()[1].data(),
> > > > > +					sourceStride_[1],
> > > > > +					sourceSize_.width, sourceSize_.height,
> > > > > +					destination->plane(0).data(),
> > > > > +					destinationStride_[0],
> > > > > +					destination->plane(1).data(),
> > > > > +					destinationStride_[1],
> > > > > +					destinationSize_.width,
> > > > > +					destinationSize_.height,
> > > > > +					libyuv::FilterMode::kFilterBilinear);
> > > > > +		break;
> > > > > +	case formats::YUYV:
> > > > > +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> > > > > +					 sourceStride_[0],
> > > > > +					 destination->plane(0).data(),
> > > > > +					 destinationStride_[0],
> > > > > +					 destination->plane(1).data(),
> > > > > +					 destinationStride_[1],
> > > > > +					 destinationSize_.width,
> > > > > +					 destinationSize_.height);
> > > > > +		break;
> > > > > +	default:
> > > > > +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> > > > > +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > > > +		break;
> > > > 
> > > > This can't happen, can it ?
> > > > 
> > > > > +	}
> > > > > +
> > > > >   	if (ret) {
> > > > > -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> > > > > +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> > > > 
> > > > s/Libyuv/libyuv/
> > > > 
> > > > Although I would write "YUV post-processing failure: ".
> > > > 
> > > > >   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > > >   		return;
> > > > >   	}
> 
> -- 
> Regards,
> 
> Laurent Pinchart
Laurent Pinchart Sept. 25, 2023, 12:43 p.m. UTC | #8
On Mon, Sep 25, 2023 at 05:53:03PM +0530, Jai Luthra wrote:
> On Sep 25, 2023 at 14:55:48 +0300, Laurent Pinchart wrote:
> > On Mon, Sep 25, 2023 at 05:17:59PM +0530, Jai Luthra wrote:
> > > On Sep 25, 2023 at 13:52:40 +0300, Tomi Valkeinen wrote:
> > > > On 25/09/2023 00:34, Laurent Pinchart wrote:
> > > > > On Fri, Sep 15, 2023 at 09:57:30AM +0200, Mattijs Korpershoek via libcamera-devel wrote:
> > > > > > On am62x platforms, the receiver driver (j721e-csi2rx) only
> > > > > > supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
> > > > > > 
> > > > > > The receiver and the sensor (ov5640) hardware are both capable of
> > > > > > YUV420, however:
> > > > > > 
> > > > > > * we are not aware of OV5640 being tested with YUV420 formats on any
> > > > > >    vendor tree.
> > > > > 
> > > > > Are you talking about packed YUV 4:2:0 here, or planar YUV 4:2:0 ?
> > > > > Sensors only output packed formats (there could be exceptions, but they
> > > > > would be very rare), it is the DMA engine on the host side that would
> > > > > convert them to planar formats. That would be the DMA engine handled by
> > > > > the k3-udma driver. It's fairly complicated, and I don't know if it
> > > > > would have the ability to output planar formats. Tomi (on CC) may have
> > > > > more information.
> > > > 
> > > > No, I don't know if the CSI-2 RX or the DMA engine support packed to planar
> > > > conversions. Maybe Jai knows.
> > > > 
> > > > If the point is to show the frames on the screen, it sounds to me that the
> > > > YUV422 formats would work the best.
> > > 
> > > Unfortunately the DMA hardware does not support pixel-level interleaving 
> > > that would be required to convert packed YUV422/420 formats to planar or 
> > > semi-planar formats.
> > > 
> > > But yes, the display supports packed UYVY formats; we even use it for 
> > > camera-to-display pipelines on Linux. The limitation on Android comes from 
> > > the gralloc implementation.
> > 
> > Thank you for the confirmation. Now for the annoying question: Could
> > that be fixed in gralloc ?
> 
> Fixing gralloc could be possible (we may have to ask the GPU vendor, I am 
> not sure). But IIRC, when we were discussing the options, the concern was 
> that just fixing gralloc would not be enough, as there are other places in 
> the Android framework where things are hardcoded to require NV12.

Do you know where those places are ? My understanding is that the
mapping from HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED to an actual pixel
format isn't hardcoded in Android, but is left for the platform to
define. This means, in practice, that all HALs need to agree on this.
There is no runtime API to query the information; it gets hardcoded in
the HALs.
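
In practice that agreement ends up as a hardcoded mapping in each component. A
minimal sketch of what the camera HAL side of that convention typically looks
like (the helper name and the chosen backing format are illustrative
assumptions, not taken from the TI stack):

	#include <system/graphics.h>

	/*
	 * Sketch only: Android leaves the backing of
	 * HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED to the platform, so every
	 * HAL that touches these buffers (camera, gralloc, codecs, display)
	 * hardcodes the same answer. Switching the platform to a different
	 * backing format means updating all of them consistently.
	 */
	static int resolveHalPixelFormat(int requested)
	{
		if (requested == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED)
			return HAL_PIXEL_FORMAT_YCbCr_420_888; /* NV12-backed on this platform */

		return requested;
	}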

> I don't understand the Android/HAL stack well enough; maybe Mattijs can share 
> what else was needed other than changes to gralloc.
> 
> > > > > > * NV12 has different line lines (even lines are twice as long as odd
> > > > > 
> > > > > s/lines/lengths/
> > > > > 
> > > > > >    lines) Different line-sized DMA transfers have not been tested on
> > > > > 
> > > > > s/)/)./
> > > > > 
> > > > > >    the TI CSI-RX SHIM IP.
> > > > > 
> > > > > As mentioned just above, the sensor will not output planar formats, so
> > > > > the issue is about the
> > > > > 
> > > > > > On the other hand, the graphics allocator (gralloc) cannot allocate
> > > > > > YUV422 buffers directly. It mainly allocated NV12 buffers when
> > > > > > userspace requests HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
> > > > > 
> > > > > The DSS seems to support YUYV, is there a way to fix the TI gralloc to
> > > > > support it ?
> > > > > 
> > > > > > Because of the above, we need a pixel conversion from YUYV to NV12,
> > > > > > which is handled in the yuv processor via libyuv.
> > > > > > 
> > > > > > Signed-off-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
> > > > > > ---
> > > > > >   src/android/yuv/post_processor_yuv.cpp | 91 +++++++++++++++++++++++++---------
> > > > > >   1 file changed, 68 insertions(+), 23 deletions(-)
> > > > > > 
> > > > > > diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> > > > > > index 734bb85b7351..b680b833dafb 100644
> > > > > > --- a/src/android/yuv/post_processor_yuv.cpp
> > > > > > +++ b/src/android/yuv/post_processor_yuv.cpp
> > > > > > @@ -7,6 +7,9 @@
> > > > > >   #include "post_processor_yuv.h"
> > > > > > +#include <utility>
> > > > > > +
> > > > > > +#include <libyuv/convert.h>
> > > > > >   #include <libyuv/scale.h>
> > > > > >   #include <libcamera/base/log.h>
> > > > > > @@ -22,14 +25,42 @@ using namespace libcamera;
> > > > > >   LOG_DEFINE_CATEGORY(YUV)
> > > > > > +namespace {
> > > > > > +
> > > > > > +/**
> > > > > > + * \var supportedConversions
> > > > > > + * \brief list of supported output pixel formats for an input pixel format
> > > > > > + */
> > > > > > +const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
> > > > > > +	{ formats::YUYV, { formats::NV12 } },
> > > > > > +};
> > > > > > +
> > > > > > +} /* namespace */
> > > > > > +
> > > > > >   int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > > > >   				const StreamConfiguration &outCfg)
> > > > > >   {
> > > > > >   	if (inCfg.pixelFormat != outCfg.pixelFormat) {
> > > > > > -		LOG(YUV, Error) << "Pixel format conversion is not supported"
> > > > > > -				<< " (from " << inCfg.pixelFormat
> > > > > > -				<< " to " << outCfg.pixelFormat << ")";
> > > > > > -		return -EINVAL;
> > > > > > +		const auto it = supportedConversions.find(inCfg.pixelFormat);
> > > > > > +		if (it == supportedConversions.end()) {
> > > > > > +			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
> > > > > 
> > > > > 			LOG(YUV, Error)
> > > > > 				<< "Unsupported source format "
> > > > > 				<< inCfg.pixelFormat;
> > > > > 
> > > > > > +			return -EINVAL;
> > > > > > +		}
> > > > > > +
> > > > > > +		std::vector<PixelFormat> outFormats = it->second;
> > > > > 
> > > > > No need for a copy.
> > > > > 
> > > > > > +		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
> > > > > 
> > > > > 		const auto &match = std::find(outFormats.begin(), outFormats.end(),
> > > > > 					      outCfg.pixelFormat);
> > > > > 
> > > > > > +		if (match == outFormats.end()) {
> > > > > > +			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
> > > > > > +					<< " (from " << inCfg.pixelFormat
> > > > > > +					<< " to " << outCfg.pixelFormat << ")";
> > > > > 
> > > > > 			LOG(YUV, Error)
> > > > > 				<< "Requested pixel format conversion is not supported"
> > > > > 				<< " (from " << inCfg.pixelFormat
> > > > > 				<< " to " << outCfg.pixelFormat << ")";
> > > > > 
> > > > > > +			return -EINVAL;
> > > > > > +		}
> > > > > > +	} else {
> > > > > > +		if (inCfg.pixelFormat != formats::NV12) {
> > > > > > +			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > > > +					<< " (only NV12 is supported for scaling)";
> > > > > 
> > > > > 			LOG(YUV, Error)
> > > > > 				<< "Unsupported format " << inCfg.pixelFormat
> > > > > 				<< " (only NV12 is supported for scaling)";
> > > > > 
> > > > > > +			return -EINVAL;
> > > > > > +		}
> > > > > >   	}
> > > > > >   	if (inCfg.size < outCfg.size) {
> > > > > > @@ -39,12 +70,6 @@ int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > > > >   		return -EINVAL;
> > > > > >   	}
> > > > > > -	if (inCfg.pixelFormat != formats::NV12) {
> > > > > > -		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > > > -				<< " (only NV12 is supported)";
> > > > > > -		return -EINVAL;
> > > > > > -	}
> > > > > > -
> > > > > >   	calculateLengths(inCfg, outCfg);
> > > > > >   	return 0;
> > > > > >   }
> > > > > > @@ -66,20 +91,40 @@ void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
> > > > > >   		return;
> > > > > >   	}
> > > > > > -	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > > > -				    sourceStride_[0],
> > > > > > -				    sourceMapped.planes()[1].data(),
> > > > > > -				    sourceStride_[1],
> > > > > > -				    sourceSize_.width, sourceSize_.height,
> > > > > > -				    destination->plane(0).data(),
> > > > > > -				    destinationStride_[0],
> > > > > > -				    destination->plane(1).data(),
> > > > > > -				    destinationStride_[1],
> > > > > > -				    destinationSize_.width,
> > > > > > -				    destinationSize_.height,
> > > > > > -				    libyuv::FilterMode::kFilterBilinear);
> > > > > > +	int ret = 0;
> > > > > > +	switch (sourceFormat_) {
> > > > > > +	case formats::NV12:
> > > > > > +		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
> > > > > > +					sourceStride_[0],
> > > > > > +					sourceMapped.planes()[1].data(),
> > > > > > +					sourceStride_[1],
> > > > > > +					sourceSize_.width, sourceSize_.height,
> > > > > > +					destination->plane(0).data(),
> > > > > > +					destinationStride_[0],
> > > > > > +					destination->plane(1).data(),
> > > > > > +					destinationStride_[1],
> > > > > > +					destinationSize_.width,
> > > > > > +					destinationSize_.height,
> > > > > > +					libyuv::FilterMode::kFilterBilinear);
> > > > > > +		break;
> > > > > > +	case formats::YUYV:
> > > > > > +		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
> > > > > > +					 sourceStride_[0],
> > > > > > +					 destination->plane(0).data(),
> > > > > > +					 destinationStride_[0],
> > > > > > +					 destination->plane(1).data(),
> > > > > > +					 destinationStride_[1],
> > > > > > +					 destinationSize_.width,
> > > > > > +					 destinationSize_.height);
> > > > > > +		break;
> > > > > > +	default:
> > > > > > +		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
> > > > > > +		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > > > > +		break;
> > > > > 
> > > > > This can't happen, can it ?
> > > > > 
> > > > > > +	}
> > > > > > +
> > > > > >   	if (ret) {
> > > > > > -		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> > > > > > +		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
> > > > > 
> > > > > s/Libyuv/libyuv/
> > > > > 
> > > > > Although I would write "YUV post-processing failure: ".
> > > > > 
> > > > > >   		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
> > > > > >   		return;
> > > > > >   	}
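
Folding the review comments above into configure() (a reference instead of a
copy of the format vector, wrapped std::find() arguments, re-flowed log
messages) would leave the lookup looking roughly as follows. This is a sketch
of the expected rework, not the posted patch:

		const auto it = supportedConversions.find(inCfg.pixelFormat);
		if (it == supportedConversions.end()) {
			LOG(YUV, Error)
				<< "Unsupported source format "
				<< inCfg.pixelFormat;
			return -EINVAL;
		}

		/* Reference the map entry directly, no copy needed. */
		const std::vector<PixelFormat> &outFormats = it->second;
		const auto &match = std::find(outFormats.begin(), outFormats.end(),
					      outCfg.pixelFormat);
		if (match == outFormats.end()) {
			LOG(YUV, Error)
				<< "Requested pixel format conversion is not supported"
				<< " (from " << inCfg.pixelFormat
				<< " to " << outCfg.pixelFormat << ")";
			return -EINVAL;
		}

On the process() side, the default case of the switch could be reduced to an
ASSERT() or dropped, since configure() already rejects any source format other
than NV12 and YUYV, and the error message would become "YUV post-processing
failure: " as suggested.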

Patch
diff mbox series

diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
index 734bb85b7351..b680b833dafb 100644
--- a/src/android/yuv/post_processor_yuv.cpp
+++ b/src/android/yuv/post_processor_yuv.cpp
@@ -7,6 +7,9 @@ 
 
 #include "post_processor_yuv.h"
 
+#include <utility>
+
+#include <libyuv/convert.h>
 #include <libyuv/scale.h>
 
 #include <libcamera/base/log.h>
@@ -22,14 +25,42 @@  using namespace libcamera;
 
 LOG_DEFINE_CATEGORY(YUV)
 
+namespace {
+
+/**
+ * \var supportedConversions
+ * \brief list of supported output pixel formats for an input pixel format
+ */
+const std::map<PixelFormat, const std::vector<PixelFormat>> supportedConversions = {
+	{ formats::YUYV, { formats::NV12 } },
+};
+
+} /* namespace */
+
 int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
 				const StreamConfiguration &outCfg)
 {
 	if (inCfg.pixelFormat != outCfg.pixelFormat) {
-		LOG(YUV, Error) << "Pixel format conversion is not supported"
-				<< " (from " << inCfg.pixelFormat
-				<< " to " << outCfg.pixelFormat << ")";
-		return -EINVAL;
+		const auto it = supportedConversions.find(inCfg.pixelFormat);
+		if (it == supportedConversions.end()) {
+			LOG(YUV, Error) << "Unsupported source format " << inCfg.pixelFormat;
+			return -EINVAL;
+		}
+
+		std::vector<PixelFormat> outFormats = it->second;
+		const auto &match = std::find(outFormats.begin(), outFormats.end(), outCfg.pixelFormat);
+		if (match == outFormats.end()) {
+			LOG(YUV, Error) << "Requested pixel format conversion is not supported"
+					<< " (from " << inCfg.pixelFormat
+					<< " to " << outCfg.pixelFormat << ")";
+			return -EINVAL;
+		}
+	} else {
+		if (inCfg.pixelFormat != formats::NV12) {
+			LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
+					<< " (only NV12 is supported for scaling)";
+			return -EINVAL;
+		}
 	}
 
 	if (inCfg.size < outCfg.size) {
@@ -39,12 +70,6 @@  int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
 		return -EINVAL;
 	}
 
-	if (inCfg.pixelFormat != formats::NV12) {
-		LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
-				<< " (only NV12 is supported)";
-		return -EINVAL;
-	}
-
 	calculateLengths(inCfg, outCfg);
 	return 0;
 }
@@ -66,20 +91,40 @@  void PostProcessorYuv::process(Camera3RequestDescriptor::StreamBuffer *streamBuf
 		return;
 	}
 
-	int ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
-				    sourceStride_[0],
-				    sourceMapped.planes()[1].data(),
-				    sourceStride_[1],
-				    sourceSize_.width, sourceSize_.height,
-				    destination->plane(0).data(),
-				    destinationStride_[0],
-				    destination->plane(1).data(),
-				    destinationStride_[1],
-				    destinationSize_.width,
-				    destinationSize_.height,
-				    libyuv::FilterMode::kFilterBilinear);
+	int ret = 0;
+	switch (sourceFormat_) {
+	case formats::NV12:
+		ret = libyuv::NV12Scale(sourceMapped.planes()[0].data(),
+					sourceStride_[0],
+					sourceMapped.planes()[1].data(),
+					sourceStride_[1],
+					sourceSize_.width, sourceSize_.height,
+					destination->plane(0).data(),
+					destinationStride_[0],
+					destination->plane(1).data(),
+					destinationStride_[1],
+					destinationSize_.width,
+					destinationSize_.height,
+					libyuv::FilterMode::kFilterBilinear);
+		break;
+	case formats::YUYV:
+		ret = libyuv::YUY2ToNV12(sourceMapped.planes()[0].data(),
+					 sourceStride_[0],
+					 destination->plane(0).data(),
+					 destinationStride_[0],
+					 destination->plane(1).data(),
+					 destinationStride_[1],
+					 destinationSize_.width,
+					 destinationSize_.height);
+		break;
+	default:
+		LOG(YUV, Error) << "Unsupported source format " << sourceFormat_;
+		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
+		break;
+	}
+
 	if (ret) {
-		LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
+		LOG(YUV, Error) << "Libyuv operation failure: " << ret;
 		processComplete.emit(streamBuffer, PostProcessor::Status::Error);
 		return;
 	}
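
For completeness, the stride bookkeeping that the two libyuv calls rely on can
be sketched as below. calculateLengths() is not part of this diff, so these
helpers only illustrate the tightly packed case (no line padding); they are an
assumption, not the actual implementation, which works from the
StreamConfiguration and may see padded strides:

	#include <array>

	struct PlaneLayout {
		unsigned int stride;	/* bytes per line */
		unsigned int size;	/* bytes for the whole plane */
	};

	/* YUYV: a single packed plane, 2 bytes per pixel on every line. */
	constexpr PlaneLayout yuyvLayout(unsigned int width, unsigned int height)
	{
		return { width * 2, width * 2 * height };
	}

	/* NV12: a full-resolution Y plane plus a half-height interleaved UV plane. */
	constexpr std::array<PlaneLayout, 2> nv12Layout(unsigned int width, unsigned int height)
	{
		return { PlaneLayout{ width, width * height },
			 PlaneLayout{ width, width * height / 2 } };
	}

For the same width, every YUYV line is twice as long as an NV12 Y plane line,
which is why the packed-to-semi-planar conversion cannot be expressed as a
simple per-line copy and is instead done in software here via
libyuv::YUY2ToNV12().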