From patchwork Fri Sep 15 07:57:31 2023
X-Patchwork-Submitter: Mattijs Korpershoek
X-Patchwork-Id: 19013
Date: Fri, 15 Sep 2023 09:57:31 +0200
Message-Id: <20230915-libyuv-convert-v1-7-1e5bcf68adac@baylibre.com>
References: <20230915-libyuv-convert-v1-0-1e5bcf68adac@baylibre.com>
In-Reply-To: <20230915-libyuv-convert-v1-0-1e5bcf68adac@baylibre.com>
To: libcamera-devel@lists.libcamera.org
Cc: Guillaume La Roque
From: Mattijs Korpershoek
Subject: [libcamera-devel] [PATCH RFC 7/7] WIP: android: add YUYV->NV12 format conversion via libyuv
X-Mailer: b4 0.12.4-dev-6aa5d

For some platforms, it's possible that the gralloc implementation and
the CSI receiver cannot agree on a pixel format. When that happens,
there is usually an m2m converter in the pipeline which handles the
pixel format conversion.

On platforms without pixel format converters, such as the AM62x, we
need to do software conversion.

The AM62x platform:
* uses a CSI receiver (j721e-csi2rx) that only supports packed YUV422
  formats such as YUYV, YVYU, UYVY and VYUY.
* has a gralloc implementation that only supports semi-planar YUV420
  formats such as NV12.

Implement YUYV->NV12 format conversion using libyuv. This is mainly
done by transforming the first stream from Type::Direct into
Type::Internal so that it goes through the post-processor loop.

```
The WIP: part is mainly around computeYUYVSize():

Since gralloc and j721e-csi2rx are incompatible, we need a way to get
gralloc to allocate an NV12 buffer with the buffer length the kernel
requests for YUYV. In other words, we should make sure that the first
plane of the allocated NV12 buffer is long enough to fit a YUYV image.

According to [1], NV12 has 8 bits (one byte) per component, and the
first plane is the Y component. So a 1920x1080 image in NV12 has
plane[0].length = 1920*1080 = 2073600.

According to [2], YUYV stores 2 pixels per 32-bit container, which
gives us 16 bits (2 bytes) per pixel. So a 1920x1080 image in YUYV has
plane[0].length = 1920*1080*2 = 4147200.

So apply a *2 factor to make the kernel believe it's receiving a YUYV
buffer.

Note: this also means that we are wasting NV12's plane[1] buffer with
each allocation.

[1] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html
[2] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html
```
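To make the size arithmetic above concrete, here is a small standalone
sketch (an editor's illustration, not part of the patch) showing that
doubling the width requested from gralloc makes NV12's plane[0] exactly
as long as the YUYV plane the kernel expects to fill:

```
#include <cstdio>

/* Editor's illustration of the commit message arithmetic, not patch code. */
int main()
{
	const unsigned int width = 1920;
	const unsigned int height = 1080;

	/* NV12: plane[0] holds one byte of Y per pixel. */
	const unsigned int nv12Plane0 = width * height;     /* 2073600 */

	/* YUYV: packed, 2 bytes per pixel, single plane. */
	const unsigned int yuyvPlane0 = width * height * 2; /* 4147200 */

	/* NV12 allocated with a doubled width, as computeYUYVSize() does. */
	const unsigned int hackedPlane0 = (width * 2) * height;

	printf("NV12 plane[0]: %u bytes\n", nv12Plane0);
	printf("YUYV plane[0]: %u bytes\n", yuyvPlane0);
	printf("NV12 plane[0] with doubled width: %u bytes\n", hackedPlane0);

	return hackedPlane0 == yuyvPlane0 ? 0 : 1;
}
```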
Signed-off-by: Mattijs Korpershoek
---
 src/android/camera_capabilities.cpp | 90 ++++++++++++++++++++++++++++++++++++-
 src/android/camera_capabilities.h   |  4 ++
 src/android/camera_device.cpp       |  6 ++-
 src/android/camera_stream.cpp       | 54 +++++++++++++++++++++-
 src/android/camera_stream.h         |  5 +++
 5 files changed, 154 insertions(+), 5 deletions(-)

diff --git a/src/android/camera_capabilities.cpp b/src/android/camera_capabilities.cpp
index 1bfeaea4b121..e2e0f7409e94 100644
--- a/src/android/camera_capabilities.cpp
+++ b/src/android/camera_capabilities.cpp
@@ -124,6 +124,16 @@ const std::map<int, const Camera3Format> camera3FormatsMap = {
 	},
 };
 
+/**
+ * \var yuvConversions
+ * \brief list of supported pixel formats for an input pixel format
+ *
+ * \todo This should be retrieved statically from yuv/post_processor_yuv instead
+ */
+const std::map<PixelFormat, std::vector<PixelFormat>> yuvConversions = {
+	{ formats::YUYV, { formats::NV12 } },
+};
+
 const std::map hwLevelStrings = {
 	{ ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED, "LIMITED" },
@@ -582,8 +592,10 @@ int CameraCapabilities::initializeStreamConfigurations()
 			LOG(HAL, Debug) << "Testing " << pixelFormat;
 
 			/*
-			 * The stream configuration size can be adjusted,
-			 * not the pixel format.
+			 * The stream configuration size can be adjusted.
+			 * The pixel format might be converted via libyuv.
+			 * Conversion check is done in another loop after
+			 * testing native supported formats.
 			 *
 			 * \todo This could be simplified once all pipeline
 			 * handlers will report the StreamFormats list of
@@ -603,7 +615,46 @@ int CameraCapabilities::initializeStreamConfigurations()
 			/* If the format is not mandatory, skip it. */
 			if (!camera3Format.mandatory)
 				continue;
+		}
+
+		/*
+		 * Test if we can map the format via a software conversion.
+		 * This means that the converter can produce an "output" that is
+		 * compatible with the format defined in Android.
+		 */
+		bool needConversion = false;
+		for (const PixelFormat &pixelFormat : libcameraFormats) {
+			LOG(HAL, Debug) << "Testing " << pixelFormat << " using conversion";
+
+			/* \todo move this into a separate function */
+			for (const auto &[inputFormat, outputFormats] : yuvConversions) {
+				/* check if the converter can produce pixelFormat */
+				auto it = std::find(outputFormats.begin(), outputFormats.end(), pixelFormat);
+				if (it == outputFormats.end())
+					continue;
+
+				/*
+				 * The converter can produce output pixelFormat, see if we can configure
+				 * the camera with the associated input pixelFormat.
+				 */
+				cfg.pixelFormat = inputFormat;
+				CameraConfiguration::Status status = cameraConfig->validate();
+
+				if (status != CameraConfiguration::Invalid && cfg.pixelFormat == inputFormat) {
+					mappedFormat = inputFormat;
+					conversionMap_[androidFormat] = std::make_pair(inputFormat, *it);
+					needConversion = true;
+					break;
+				}
+			}
+
+			/* We found a valid conversion format, so bail out */
+			if (mappedFormat.isValid())
+				break;
+		}
+
+		if (!mappedFormat.isValid()) {
 
 			LOG(HAL, Error)
 				<< "Failed to map mandatory Android format "
 				<< camera3Format.name << " ("
@@ -619,6 +670,11 @@ int CameraCapabilities::initializeStreamConfigurations()
 		LOG(HAL, Debug) << "Mapped Android format "
 				<< camera3Format.name << " to "
 				<< mappedFormat;
+		if (needConversion) {
+			LOG(HAL, Debug) << mappedFormat
+					<< " will be converted into "
+					<< conversionMap_[androidFormat].second;
+		}
 
 		std::vector<Size> resolutions;
 		const PixelFormatInfo &info = PixelFormatInfo::info(mappedFormat);
@@ -1457,6 +1513,36 @@ PixelFormat CameraCapabilities::toPixelFormat(int format) const
 	return it->second;
 }
 
+/*
+ * Check if we need to do software conversion via a post-processor
+ * for an Android format code
+ */
+bool CameraCapabilities::needConversion(int format) const
+{
+	auto it = conversionMap_.find(format);
+	if (it == conversionMap_.end()) {
+		LOG(HAL, Error) << "Requested format " << utils::hex(format)
+				<< " not supported for conversion";
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Returns a conversion (input,output) pair for a given Android format code
+ */
+std::pair<PixelFormat, PixelFormat> CameraCapabilities::conversionFormats(int format) const
+{
+	auto it = conversionMap_.find(format);
+	if (it == conversionMap_.end()) {
+		LOG(HAL, Error) << "Requested format " << utils::hex(format)
+				<< " not supported for conversion";
+	}
+
+	return it->second;
+}
+
 std::unique_ptr<CameraMetadata> CameraCapabilities::requestTemplateManual() const
 {
 	if (!capabilities_.count(ANDROID_REQUEST_AVAILABLE_CAPABILITIES_MANUAL_SENSOR)) {
diff --git a/src/android/camera_capabilities.h b/src/android/camera_capabilities.h
index 6f66f221d33f..c3e6b48ab91d 100644
--- a/src/android/camera_capabilities.h
+++ b/src/android/camera_capabilities.h
@@ -30,6 +30,9 @@ public:
 
 	CameraMetadata *staticMetadata() const { return staticMetadata_.get(); }
 	libcamera::PixelFormat toPixelFormat(int format) const;
+	bool needConversion(int format) const;
+	std::pair<libcamera::PixelFormat, libcamera::PixelFormat>
+	conversionFormats(int format) const;
 	unsigned int maxJpegBufferSize() const { return maxJpegBufferSize_; }
 
 	std::unique_ptr<CameraMetadata> requestTemplateManual() const;
@@ -77,6 +80,7 @@ private:
 
 	std::vector<Camera3StreamConfiguration> streamConfigurations_;
 	std::map<int, libcamera::PixelFormat> formatsMap_;
+	std::map<int, std::pair<libcamera::PixelFormat, libcamera::PixelFormat>> conversionMap_;
 	std::unique_ptr<CameraMetadata> staticMetadata_;
 	unsigned int maxJpegBufferSize_;
 
diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
index d34bae715a47..842cbb06d345 100644
--- a/src/android/camera_device.cpp
+++ b/src/android/camera_device.cpp
@@ -635,8 +635,12 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)
 			continue;
 		}
 
+		CameraStream::Type type = CameraStream::Type::Direct;
+		if (capabilities_.needConversion(stream->format))
+			type = CameraStream::Type::Internal;
+
 		Camera3StreamConfig streamConfig;
-		streamConfig.streams = { { stream, CameraStream::Type::Direct } };
+		streamConfig.streams = { { stream, type } };
 		streamConfig.config.size = size;
 		streamConfig.config.pixelFormat = format;
 		streamConfigs.push_back(std::move(streamConfig));
diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp
index 4fd05dda5ed3..961ee40017f1 100644
--- a/src/android/camera_stream.cpp
+++ b/src/android/camera_stream.cpp
@@ -95,6 +95,7 @@ int CameraStream::configure()
 
 		switch (outFormat) {
 		case formats::NV12:
+		case formats::YUYV:
 			postProcessor_ = std::make_unique<PostProcessorYuv>();
 			break;
 
@@ -107,6 +108,16 @@ int CameraStream::configure()
 			return -EINVAL;
 		}
 
+		needConversion_ =
+			cameraDevice_->capabilities()->needConversion(camera3Stream_->format);
+
+		if (needConversion_) {
+			auto conv = cameraDevice_->capabilities()->conversionFormats(camera3Stream_->format);
+			LOG(HAL, Debug) << "Configuring the post processor to convert "
+					<< conv.first << " -> " << conv.second;
+			output.pixelFormat = conv.second;
+		}
+
 		int ret = postProcessor_->configure(input, output);
 		if (ret)
 			return ret;
@@ -183,7 +194,12 @@ int CameraStream::process(Camera3RequestDescriptor::StreamBuffer *streamBuffer)
 		streamBuffer->fence.reset();
 	}
 
-	const StreamConfiguration &output = configuration();
+	StreamConfiguration output = configuration();
+	if (needConversion_) {
+		output.pixelFormat =
+			cameraDevice_->capabilities()->conversionFormats(camera3Stream_->format).second;
+	}
+
 	streamBuffer->dstBuffer = std::make_unique<CameraBuffer>(
 		*streamBuffer->camera3Buffer, output.pixelFormat, output.size,
 		PROT_READ | PROT_WRITE);
@@ -205,6 +221,39 @@ void CameraStream::flush()
 	worker_->flush();
 }
 
+Size CameraStream::computeYUYVSize(const Size &nv12Size)
+{
+	/*
+	 * On am62x platforms, the receiver driver (j721e-csi2rx) only
+	 * supports packed YUV422 formats such as YUYV, YVYU, UYVY and VYUY.
+	 *
+	 * However, the gralloc implementation is only capable of semiplanar
+	 * YUV420 such as NV12.
+	 *
+	 * To trick the kernel into believing it's receiving a YUYV buffer, we adjust the
+	 * size we request to gralloc so that plane(0) of the NV12 buffer is long enough to
+	 * match the length of a YUYV plane.
+	 *
+	 * for NV12, one pixel is encoded on 1.5 bytes, but plane 0 has 1 byte per pixel.
+	 * for YUYV, one pixel is encoded on 2 bytes.
+	 *
+	 * So apply a *2 factor.
+	 *
+	 * See:
+	 * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-packed-yuv.html
+	 * https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-yuv-planar.html
+	 */
+	constexpr unsigned int YUYVfactor = 2;
+
+	unsigned int width = nv12Size.width;
+	unsigned int height = nv12Size.height;
+
+	if (needConversion_)
+		width = width * YUYVfactor;
+
+	return Size{ width, height };
+}
+
 FrameBuffer *CameraStream::getBuffer()
 {
 	if (!allocator_)
@@ -222,8 +271,9 @@ FrameBuffer *CameraStream::getBuffer()
 	 * \todo Store a reference to the format of the source stream
 	 * instead of hardcoding.
 	 */
+	const Size hackedSize = computeYUYVSize(configuration().size);
 	auto frameBuffer = allocator_->allocate(HAL_PIXEL_FORMAT_YCBCR_420_888,
-					       configuration().size,
+					       hackedSize,
 					       camera3Stream_->usage);
 	allocatedBuffers_.push_back(std::move(frameBuffer));
 	buffers_.emplace_back(allocatedBuffers_.back().get());
diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h
index 4c5078b2c26d..52a5606399c5 100644
--- a/src/android/camera_stream.h
+++ b/src/android/camera_stream.h
@@ -128,10 +128,13 @@ public:
 
 	int configure();
 	int process(Camera3RequestDescriptor::StreamBuffer *streamBuffer);
+	libcamera::Size computeYUYVSize(const libcamera::Size &nv12Size);
 	libcamera::FrameBuffer *getBuffer();
 	void putBuffer(libcamera::FrameBuffer *buffer);
 	void flush();
 
+	bool needConversion() const { return needConversion_; }
+
 private:
 	class PostProcessorWorker : public libcamera::Thread
 	{
@@ -184,4 +187,6 @@ private:
 
 	std::unique_ptr<PostProcessor> postProcessor_;
 	std::unique_ptr<PostProcessorWorker> worker_;
+
+	bool needConversion_;
 };
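
For context, the per-frame pixel conversion that the YUV post-processor
performs for such a stream boils down to a single libyuv call. The sketch
below is an editor's illustration only, not code from this series: it
assumes libyuv's YUY2ToNV12() helper and tightly packed source/destination
buffers, whereas the in-tree PostProcessorYuv operates on mapped
CameraBuffer planes.

```
#include <libyuv.h>

#include <cstdint>

/*
 * Minimal sketch: convert one tightly packed YUYV frame to NV12 using
 * libyuv's YUY2ToNV12() helper (assumed available in the libyuv build).
 */
int convertYuyvToNv12(const uint8_t *srcYuyv, uint8_t *dstY, uint8_t *dstUV,
		      int width, int height)
{
	return libyuv::YUY2ToNV12(srcYuyv, width * 2, /* 2 bytes per YUYV pixel */
				  dstY, width,        /* NV12 Y plane stride */
				  dstUV, width,       /* NV12 UV plane stride */
				  width, height);
}
```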