Message ID | 20220106094323.1819801-2-hiroh@chromium.org
---|---
State | Accepted
Hi Hiro, On Thu, Jan 06, 2022 at 06:43:23PM +0900, Hirokazu Honda wrote: > An Android HAL client may request multiple identical streams. It is > redundant that a native camera device produces a separate stream for > each of the identical requests. Configure the camera with a single > stream in that case. The other identical HAL streams will be produced by > the YUV post-processor. > > The Android HAL client can provide capture requests that are resolved as > they are produced by YUV post-processor. Then a buffer to be filled a > camera is not given. So the HAL adaptation layer looks up buffer > information to be passed to a camera and allocate the buffer by using > PlatformBufferAllocator. > > Signed-off-by: Hirokazu Honda <hiroh@chromium.org> > --- > src/android/camera_device.cpp | 84 ++++++++++++++++++++++++++++++++++- > src/android/camera_request.h | 2 + > src/android/camera_stream.cpp | 12 ++++- > src/android/camera_stream.h | 6 ++- > 4 files changed, 100 insertions(+), 4 deletions(-) > > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp > index 83825736..53533064 100644 > --- a/src/android/camera_device.cpp > +++ b/src/android/camera_device.cpp > @@ -9,6 +9,7 @@ > > #include <algorithm> > #include <fstream> > +#include <set> > #include <sys/mman.h> > #include <unistd.h> > #include <vector> > @@ -604,6 +605,35 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > continue; > } > > + /* > + * While gralloc usage flags are supposed to report usage > + * patterns to select a suitable buffer allocation strategy, in > + * practice they're also used to make other decisions, such as > + * selecting the actual format for the IMPLEMENTATION_DEFINED > + * HAL pixel format. To avoid issues, we thus have to set the > + * GRALLOC_USAGE_HW_CAMERA_WRITE flag unconditionally, even for > + * streams that will be produced in software. 
> + */ > + stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; You can now remove /* This stream will be produced by hardware. */ stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; at the end of the for loop > + > + /* > + * If a CameraStream with the same size and format of the > + * current stream has already been requested, associate the two. > + */ > + auto iter = std::find_if( > + streamConfigs.begin(), streamConfigs.end(), > + [size, format](const Camera3StreamConfig &streamConfig) { Should [size, format] be captured by reference ? > + return streamConfig.config.size == size && > + streamConfig.config.pixelFormat == format; > + }); > + if (iter != streamConfigs.end()) { > + /* Add usage to copy the buffer in streams[0] to stream. */ > + iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN; > + stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN; > + iter->streams.push_back({ stream, CameraStream::Type::Mapped }); > + continue; > + } > + > Camera3StreamConfig streamConfig; > streamConfig.streams = { { stream, CameraStream::Type::Direct } }; > streamConfig.config.size = size; > @@ -681,10 +711,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > for (const auto &streamConfig : streamConfigs) { > config->addConfiguration(streamConfig.config); > > + CameraStream *directStream = nullptr; > for (auto &stream : streamConfig.streams) { > + CameraStream *sourceStream = nullptr; > + > + /* > + * Sets the source stream for Mapped type streams to the > + * same Direct type stream. The Direct type stream is > + * put earlier than Mapped type streams in the current > + * implementation. This might be broken in the future. > + * > + * \todo Remove this order assumption. 
> + */ > + if (stream.type == CameraStream::Type::Mapped) { > + ASSERT(directStream); > + sourceStream = directStream; > + } > + > streams_.emplace_back(this, config.get(), stream.type, > - stream.stream, config->size() - 1); > + stream.stream, > + sourceStream, > + config->size() - 1); > stream.stream->priv = static_cast<void *>(&streams_.back()); > + if (!directStream && > + stream.type == CameraStream::Type::Direct) { > + directStream = &streams_.back(); > + } Am I mistaken or streams of type Mapped will always have a Direct stream in streams[0] ? You seem to have the same assumption about ordering. Can this be simplified as: CameraStream *sourceStream = nullptr; for (auto &stream : streamConfig.streams) { streams_.emplace_back(this, config.get(), stream.type, stream.stream, sourceStream, config->size() - 1); stream.stream->priv = static_cast<void *>(&streams_.back()); if (stream.type == CameraStream::Type::Direct) sourceStream = &streams_.back(); } Streams of type Mapped will always follow a Direct. Internal streams have no Mapped associated. This should give you sourceStream == nullptr for Internal and Direct, which I think it's what you want. > } > } > > @@ -917,6 +969,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > << " with " << descriptor->buffers_.size() << " streams"; > > + std::set<CameraStream *> requiredStreams; > + std::set<CameraStream *> providedStreams; I understand the logic, it took me a while but now I like it. I wonder if we could do better and exploit the fact std::set<> stores objects uniquely. Also, the introduction of putBuffers_ seems like a temporary solution, we should aim to re-use the same mechanism as for Internal buffers to avoid the additional complexity at streamProcessingComplete(). 
We have been adding ad-hoc solutions for each new issue we faced, and the complexity of keeping it all together has increased enough already. I would rather iterate a few more times on the list of requested streams, which is of limited length, in order to 1) Collect all the (unique) CameraStream that have to be queued to the request 2) Handle Direct and Internal which have a CameraStream associated and no sourceStream 3) Handle Mapped and add sourceStream to the Request if needed The hunk I get looks like the following @@ -917,9 +954,37 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() << " with " << descriptor->buffers_.size() << " streams"; + /* + * Collect the CameraStream associated to each requested capture stream. + * As requestedStreams is an std::set<>, no duplication can happen. + */ + std::set<CameraStream *> requestedStreams; + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { + CameraStream *cameraStream = buffer.stream; + + switch (cameraStream->type()) { + case CameraStream::Type::Mapped: + requestedStreams.insert(cameraStream->sourceStream()); + break; + + case CameraStream::Type::Direct: + case CameraStream::Type::Internal: + requestedStreams.insert(cameraStream); + break; + } + } + + /* + * Process all the Direct and Internal streams, for which the CameraStream + * they refer to is the one that points to the right libcamera::Stream. + * + * Streams of type Mapped will be handled later. 
+ */ for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { CameraStream *cameraStream = buffer.stream; camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); + FrameBuffer *frameBuffer = nullptr; + UniqueFD acquireFence; std::stringstream ss; ss << i << " - (" << camera3Stream->width << "x" @@ -928,25 +993,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques << "(" << cameraStream->configuration().size.toString() << ")[" << cameraStream->configuration().pixelFormat.toString() << "]"; - /* - * Inspect the camera stream type, create buffers opportunely - * and add them to the Request if required. - */ - FrameBuffer *frameBuffer = nullptr; - UniqueFD acquireFence; - - MutexLocker lock(descriptor->streamsProcessMutex_); - switch (cameraStream->type()) { case CameraStream::Type::Mapped: - /* - * Mapped streams don't need buffers added to the - * Request. - */ - LOG(HAL, Debug) << ss.str() << " (mapped)"; - - descriptor->pendingStreamsToProcess_.insert( - { cameraStream, &buffer }); continue; case CameraStream::Type::Direct: @@ -987,9 +1035,52 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques return -ENOMEM; } + MutexLocker lock(descriptor->streamsProcessMutex_); auto fence = std::make_unique<Fence>(std::move(acquireFence)); descriptor->request_->addBuffer(cameraStream->stream(), frameBuffer, std::move(fence)); + + requestedStreams.erase(cameraStream); + } + + /* + * Now handle the mapped streams. If no buffer has been addded for them + * as their corresponding direct source stream has not been requested, + * add it here. 
+ */ + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { + CameraStream *cameraStream = buffer.stream; + camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); + CameraStream *sourceStream = cameraStream->sourceStream(); + FrameBuffer *frameBuffer = nullptr; + + if (cameraStream->type() != CameraStream::Type::Mapped) + continue; + + std::stringstream ss; + ss << i << " - (" << camera3Stream->width << "x" + << camera3Stream->height << ")" + << "[" << utils::hex(camera3Stream->format) << "] -> " + << "(" << cameraStream->configuration().size.toString() << ")[" + << cameraStream->configuration().pixelFormat.toString() << "]"; + LOG(HAL, Debug) << ss.str() << " (mapped)"; + + descriptor->pendingStreamsToProcess_.insert({ cameraStream, &buffer }); + + /* + * Make sure the CameraStream this stream is mapped on has been + * added to the request. + */ + if (requestedStreams.find(sourceStream) == requestedStreams.end()) + continue; + + /* If that's not the case, we need to do so here. */ + frameBuffer = sourceStream->getBuffer(); + buffer.internalBuffer = frameBuffer; + + MutexLocker lock(descriptor->streamsProcessMutex_); + descriptor->request_->addBuffer(sourceStream->stream(), + frameBuffer, nullptr); } I have only compiled this, but if you know a CTS test which can be run in isolation and exercise this use case I would be happy to test it. This way you might drop putBuffers_ if I'm not mistaken. 
Thanks j > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > CameraStream *cameraStream = buffer.stream; > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > @@ -947,6 +1001,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > descriptor->pendingStreamsToProcess_.insert( > { cameraStream, &buffer }); > + ASSERT(cameraStream->sourceStream()); > + requiredStreams.insert(cameraStream->sourceStream()); > continue; > > case CameraStream::Type::Direct: > @@ -990,8 +1046,32 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > descriptor->request_->addBuffer(cameraStream->stream(), > frameBuffer, std::move(fence)); > + providedStreams.insert(cameraStream); > } > > + for (CameraStream *requiredStream : requiredStreams) { > + if (providedStreams.find(requiredStream) != providedStreams.end()) > + continue; > + > + /* \todo Can we Map stream to Internal type stream? */ > + ASSERT(requiredStream->type() == CameraStream::Type::Direct); > + > + FrameBuffer *frameBuffer = requiredStream->getBuffer(); > + if (!frameBuffer) { > + LOG(HAL, Error) << "Failed to create frame buffer"; > + return -ENOMEM; > + } > + > + /* > + * addBuffer() lets a buffer for requiredStream be output by > + * camera. Records frameBuffer with requiredStream to call > + * putBuffer() after post-processing is complete. > + */ > + descriptor->request_->addBuffer(requiredStream->stream(), > + frameBuffer, nullptr); > + MutexLocker lock(descriptor->streamsProcessMutex_); > + descriptor->putBuffers_.push_back({ requiredStream, frameBuffer }); > + } > /* > * Translate controls from Android to libcamera and queue the request > * to the camera. 
> @@ -1251,6 +1331,8 @@ void CameraDevice::streamProcessingComplete(Camera3RequestDescriptor::StreamBuff > request->pendingStreamsToProcess_.erase(streamBuffer->stream); > if (!request->pendingStreamsToProcess_.empty()) > return; > + for (auto [cameraStream, buffer] : request->putBuffers_) > + cameraStream->putBuffer(buffer); > } > > completeDescriptor(streamBuffer->request); > diff --git a/src/android/camera_request.h b/src/android/camera_request.h > index 37b6ae32..7a41c6d9 100644 > --- a/src/android/camera_request.h > +++ b/src/android/camera_request.h > @@ -59,6 +59,8 @@ public: > /* Keeps track of streams requiring post-processing. */ > std::map<CameraStream *, StreamBuffer *> pendingStreamsToProcess_ > LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > + std::vector<std::pair<CameraStream *, libcamera::FrameBuffer *>> putBuffers_ > + LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > libcamera::Mutex streamsProcessMutex_; > > Camera3RequestDescriptor(libcamera::Camera *camera, > diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp > index c2157450..634cf319 100644 > --- a/src/android/camera_stream.cpp > +++ b/src/android/camera_stream.cpp > @@ -52,9 +52,11 @@ LOG_DECLARE_CATEGORY(HAL) > > CameraStream::CameraStream(CameraDevice *const cameraDevice, > CameraConfiguration *config, Type type, > - camera3_stream_t *camera3Stream, unsigned int index) > + camera3_stream_t *camera3Stream, > + CameraStream *const sourceStream, unsigned int index) > : cameraDevice_(cameraDevice), config_(config), type_(type), > - camera3Stream_(camera3Stream), index_(index) > + camera3Stream_(camera3Stream), sourceStream_(sourceStream), > + index_(index) > { > } > > @@ -206,6 +208,12 @@ void CameraStream::flush() > > FrameBuffer *CameraStream::getBuffer() > { > + if (type_ == Type::Direct && !allocator_) { > + allocator_ = std::make_unique<PlatformFrameBufferAllocator>(cameraDevice_); > + ASSERT(!mutex_); > + mutex_ = std::make_unique<Mutex>(); > + } > + > if 
(!allocator_) > return nullptr; > > diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h > index f9504462..4c5078b2 100644 > --- a/src/android/camera_stream.h > +++ b/src/android/camera_stream.h > @@ -114,7 +114,9 @@ public: > }; > CameraStream(CameraDevice *const cameraDevice, > libcamera::CameraConfiguration *config, Type type, > - camera3_stream_t *camera3Stream, unsigned int index); > + camera3_stream_t *camera3Stream, > + CameraStream *const sourceStream, > + unsigned int index); > CameraStream(CameraStream &&other); > ~CameraStream(); > > @@ -122,6 +124,7 @@ public: > camera3_stream_t *camera3Stream() const { return camera3Stream_; } > const libcamera::StreamConfiguration &configuration() const; > libcamera::Stream *stream() const; > + CameraStream *sourceStream() const { return sourceStream_; } > > int configure(); > int process(Camera3RequestDescriptor::StreamBuffer *streamBuffer); > @@ -167,6 +170,7 @@ private: > const libcamera::CameraConfiguration *config_; > const Type type_; > camera3_stream_t *camera3Stream_; > + CameraStream *const sourceStream_; > const unsigned int index_; > > std::unique_ptr<PlatformFrameBufferAllocator> allocator_; > -- > 2.34.1.448.ga2b2bfdf31-goog
Hi Jacopo, thank you for reviewing. On Thu, Jan 6, 2022 at 9:35 PM Jacopo Mondi <jacopo@jmondi.org> wrote: > > Hi Hiro, > > On Thu, Jan 06, 2022 at 06:43:23PM +0900, Hirokazu Honda wrote: > > An Android HAL client may request multiple identical streams. It is > > redundant that a native camera device produces a separate stream for > > each of the identical requests. Configure the camera with a single > > stream in that case. The other identical HAL streams will be produced by > > the YUV post-processor. > > > > The Android HAL client can provide capture requests that are resolved as > > they are produced by YUV post-processor. Then a buffer to be filled a > > camera is not given. So the HAL adaptation layer looks up buffer > > information to be passed to a camera and allocate the buffer by using > > PlatformBufferAllocator. > > > > Signed-off-by: Hirokazu Honda <hiroh@chromium.org> > > --- > > src/android/camera_device.cpp | 84 ++++++++++++++++++++++++++++++++++- > > src/android/camera_request.h | 2 + > > src/android/camera_stream.cpp | 12 ++++- > > src/android/camera_stream.h | 6 ++- > > 4 files changed, 100 insertions(+), 4 deletions(-) > > > > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp > > index 83825736..53533064 100644 > > --- a/src/android/camera_device.cpp > > +++ b/src/android/camera_device.cpp > > @@ -9,6 +9,7 @@ > > > > #include <algorithm> > > #include <fstream> > > +#include <set> > > #include <sys/mman.h> > > #include <unistd.h> > > #include <vector> > > @@ -604,6 +605,35 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > continue; > > } > > > > + /* > > + * While gralloc usage flags are supposed to report usage > > + * patterns to select a suitable buffer allocation strategy, in > > + * practice they're also used to make other decisions, such as > > + * selecting the actual format for the IMPLEMENTATION_DEFINED > > + * HAL pixel format. 
To avoid issues, we thus have to set the > > + * GRALLOC_USAGE_HW_CAMERA_WRITE flag unconditionally, even for > > + * streams that will be produced in software. > > + */ > > + stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > You can now remove > > /* This stream will be produced by hardware. */ > stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > at the end of the for loop > > > + > > + /* > > + * If a CameraStream with the same size and format of the > > + * current stream has already been requested, associate the two. > > + */ > > + auto iter = std::find_if( > > + streamConfigs.begin(), streamConfigs.end(), > > + [size, format](const Camera3StreamConfig &streamConfig) { > > Should [size, format] be captured by reference ? > > > + return streamConfig.config.size == size && > > + streamConfig.config.pixelFormat == format; > > + }); > > + if (iter != streamConfigs.end()) { > > + /* Add usage to copy the buffer in streams[0] to stream. */ > > + iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN; > > + stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN; > > + iter->streams.push_back({ stream, CameraStream::Type::Mapped }); > > + continue; > > + } > > + > > Camera3StreamConfig streamConfig; > > streamConfig.streams = { { stream, CameraStream::Type::Direct } }; > > streamConfig.config.size = size; > > @@ -681,10 +711,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > for (const auto &streamConfig : streamConfigs) { > > config->addConfiguration(streamConfig.config); > > > > + CameraStream *directStream = nullptr; > > for (auto &stream : streamConfig.streams) { > > + CameraStream *sourceStream = nullptr; > > + > > + /* > > + * Sets the source stream for Mapped type streams to the > > + * same Direct type stream. The Direct type stream is > > + * put earlier than Mapped type streams in the current > > + * implementation. This might be broken in the future. > > + * > > + * \todo Remove this order assumption. 
> > + */ > > + if (stream.type == CameraStream::Type::Mapped) { > > + ASSERT(directStream); > > + sourceStream = directStream; > > + } > > + > > streams_.emplace_back(this, config.get(), stream.type, > > - stream.stream, config->size() - 1); > > + stream.stream, > > + sourceStream, > > + config->size() - 1); > > stream.stream->priv = static_cast<void *>(&streams_.back()); > > + if (!directStream && > > + stream.type == CameraStream::Type::Direct) { > > + directStream = &streams_.back(); > > + } > > > Am I mistaken or streams of type Mapped will always have a Direct > stream in streams[0] ? You seem to have the same assumption about > ordering. Can this be simplified as: > > CameraStream *sourceStream = nullptr; > for (auto &stream : streamConfig.streams) { > > streams_.emplace_back(this, config.get(), stream.type, > stream.stream, > sourceStream, > config->size() - 1); > stream.stream->priv = static_cast<void *>(&streams_.back()); > > if (stream.type == CameraStream::Type::Direct) > sourceStream = &streams_.back(); > } > > Streams of type Mapped will always follow a Direct. > Internal streams have no Mapped associated. > > This should give you sourceStream == nullptr for Internal and Direct, > which I think it's what you want. > > > } > > } > > > > @@ -917,6 +969,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > > << " with " << descriptor->buffers_.size() << " streams"; > > > > + std::set<CameraStream *> requiredStreams; > > + std::set<CameraStream *> providedStreams; > > I understand the logic, it took me a while but now I like it. I wonder > if we could do better and exploit the fact std::set<> stores objects > uniquely. Also, the introduction of putBuffers_ seems like a temporary > solution, we should aim to re-use the same mechanism as for Internal > buffers to avoid the additional complexity at > streamProcessingComplete(). 
We have been adding ad-hoc solutions for > each new issue we faced, and the complexity of keeping it all together > has increased enough already. > > I would rather iterate a few more times on the list of requested > streams, which is of limited lenght, in order to > > 1) Collect all the (unique) CameraStream that have to be queued to the > request > 2) Handle Direct and Internal which have a CameraStream associated and > no sourceStream > 3) Handle Mapped and add sourceStream to the Request if needed > > The hunk I get looks like the following > > @@ -917,9 +954,37 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > << " with " << descriptor->buffers_.size() << " streams"; > > + /* > + * Collect the CameraStream associated to each requested capture stream. > + * Being requestedStreams an std::set<> no duplications can happen. > + */ > + std::set<CameraStream *> requestedStreams; > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > + CameraStream *cameraStream = buffer.stream; > + > + switch (cameraStream->type()) { > + case CameraStream::Type::Mapped: > + requestedStreams.insert(cameraStream->sourceStream()); > + break; > + > + case CameraStream::Type::Direct: > + case CameraStream::Type::Internal: > + requestedStreams.insert(cameraStream); > + break; > + } > + } > + > + /* > + * Process all the Direct and Internal streams, for which the CameraStream > + * they refer to is the one that points to the right libcamera::Stream. > + * > + * Streams of type Mapped will be handled later. 
> + */ > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > CameraStream *cameraStream = buffer.stream; > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > + FrameBuffer *frameBuffer = nullptr; > + UniqueFD acquireFence; > > std::stringstream ss; > ss << i << " - (" << camera3Stream->width << "x" > @@ -928,25 +993,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > << "(" << cameraStream->configuration().size.toString() << ")[" > << cameraStream->configuration().pixelFormat.toString() << "]"; > > - /* > - * Inspect the camera stream type, create buffers opportunely > - * and add them to the Request if required. > - */ > - FrameBuffer *frameBuffer = nullptr; > - UniqueFD acquireFence; > - > - MutexLocker lock(descriptor->streamsProcessMutex_); > - > switch (cameraStream->type()) { > case CameraStream::Type::Mapped: > - /* > - * Mapped streams don't need buffers added to the > - * Request. > - */ > - LOG(HAL, Debug) << ss.str() << " (mapped)"; > - > - descriptor->pendingStreamsToProcess_.insert( > - { cameraStream, &buffer }); > continue; > > case CameraStream::Type::Direct: > @@ -987,9 +1035,52 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > return -ENOMEM; > } > > + MutexLocker lock(descriptor->streamsProcessMutex_); > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > descriptor->request_->addBuffer(cameraStream->stream(), > frameBuffer, std::move(fence)); > + > + requestedStreams.erase(cameraStream); > + } > + > + /* > + * Now handle the mapped streams. If no buffer has been addded for them > + * as their corresponding direct source stream has not been requested, > + * add it here. 
> + */ > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > + CameraStream *cameraStream = buffer.stream; > + camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > + CameraStream *sourceStream = cameraStream->sourceStream(); > + FrameBuffer *frameBuffer = nullptr; > + > + if (cameraStream->type() != CameraStream::Type::Mapped) > + continue; > + > + std::stringstream ss; > + ss << i << " - (" << camera3Stream->width << "x" > + << camera3Stream->height << ")" > + << "[" << utils::hex(camera3Stream->format) << "] -> " > + << "(" << cameraStream->configuration().size.toString() << ")[" > + << cameraStream->configuration().pixelFormat.toString() << "]"; > + LOG(HAL, Debug) << ss.str() << " (mapped)"; > + > + descriptor->pendingStreamsToProcess_.insert({ cameraStream, &buffer }); > + > + /* > + * Make sure the CameraStream this stream is mapped on has been > + * added to the request. > + */ > + if (requestedStreams.find(sourceStream) == requestedStreams.end()) > + continue; > + > + /* If that's not the case, we need to do so here. */ > + frameBuffer = sourceStream->getBuffer(); > + buffer.internalBuffer = frameBuffer; > + > + MutexLocker lock(descriptor->streamsProcessMutex_); > + descriptor->request_->addBuffer(sourceStream->stream(), > + frameBuffer, nullptr); > } > > I have only compiled this, but if you know a CTS test which can be run > in isolation and exercize this use case I would be happy to test it. > > This way you might drop putBuffers_ if I'm not mistaken. Thanks. What a nice suggestion! putBuffer() is invoked against buffer.stream. buffer.stream is Mapped stream, though FrameBuffer to be returned (i.e. putBuffer) is Direct Stream. So I have to introduce putBuffers to pair DirectStream and FrameBuffer. This problem is not resolved in your implementation if I understand your code correctly. 
-Hiro > > Thanks > j > > > > > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > CameraStream *cameraStream = buffer.stream; > > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > @@ -947,6 +1001,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > > descriptor->pendingStreamsToProcess_.insert( > > { cameraStream, &buffer }); > > + ASSERT(cameraStream->sourceStream()); > > + requiredStreams.insert(cameraStream->sourceStream()); > > continue; > > > > case CameraStream::Type::Direct: > > @@ -990,8 +1046,32 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > > descriptor->request_->addBuffer(cameraStream->stream(), > > frameBuffer, std::move(fence)); > > + providedStreams.insert(cameraStream); > > } > > > > + for (CameraStream *requiredStream : requiredStreams) { > > + if (providedStreams.find(requiredStream) != providedStreams.end()) > > + continue; > > + > > + /* \todo Can we Map stream to Internal type stream? */ > > + ASSERT(requiredStream->type() == CameraStream::Type::Direct); > > + > > + FrameBuffer *frameBuffer = requiredStream->getBuffer(); > > + if (!frameBuffer) { > > + LOG(HAL, Error) << "Failed to create frame buffer"; > > + return -ENOMEM; > > + } > > + > > + /* > > + * addBuffer() lets a buffer for requiredStream be output by > > + * camera. Records frameBuffer with requiredStream to call > > + * putBuffer() after post-processing is complete. > > + */ > > + descriptor->request_->addBuffer(requiredStream->stream(), > > + frameBuffer, nullptr); > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > + descriptor->putBuffers_.push_back({ requiredStream, frameBuffer }); > > + } > > /* > > * Translate controls from Android to libcamera and queue the request > > * to the camera. 
> > @@ -1251,6 +1331,8 @@ void CameraDevice::streamProcessingComplete(Camera3RequestDescriptor::StreamBuff > > request->pendingStreamsToProcess_.erase(streamBuffer->stream); > > if (!request->pendingStreamsToProcess_.empty()) > > return; > > + for (auto [cameraStream, buffer] : request->putBuffers_) > > + cameraStream->putBuffer(buffer); > > } > > > > completeDescriptor(streamBuffer->request); > > diff --git a/src/android/camera_request.h b/src/android/camera_request.h > > index 37b6ae32..7a41c6d9 100644 > > --- a/src/android/camera_request.h > > +++ b/src/android/camera_request.h > > @@ -59,6 +59,8 @@ public: > > /* Keeps track of streams requiring post-processing. */ > > std::map<CameraStream *, StreamBuffer *> pendingStreamsToProcess_ > > LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > > + std::vector<std::pair<CameraStream *, libcamera::FrameBuffer *>> putBuffers_ > > + LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > > libcamera::Mutex streamsProcessMutex_; > > > > Camera3RequestDescriptor(libcamera::Camera *camera, > > diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp > > index c2157450..634cf319 100644 > > --- a/src/android/camera_stream.cpp > > +++ b/src/android/camera_stream.cpp > > @@ -52,9 +52,11 @@ LOG_DECLARE_CATEGORY(HAL) > > > > CameraStream::CameraStream(CameraDevice *const cameraDevice, > > CameraConfiguration *config, Type type, > > - camera3_stream_t *camera3Stream, unsigned int index) > > + camera3_stream_t *camera3Stream, > > + CameraStream *const sourceStream, unsigned int index) > > : cameraDevice_(cameraDevice), config_(config), type_(type), > > - camera3Stream_(camera3Stream), index_(index) > > + camera3Stream_(camera3Stream), sourceStream_(sourceStream), > > + index_(index) > > { > > } > > > > @@ -206,6 +208,12 @@ void CameraStream::flush() > > > > FrameBuffer *CameraStream::getBuffer() > > { > > + if (type_ == Type::Direct && !allocator_) { > > + allocator_ = 
std::make_unique<PlatformFrameBufferAllocator>(cameraDevice_); > > + ASSERT(!mutex_); > > + mutex_ = std::make_unique<Mutex>(); > > + } > > + > > if (!allocator_) > > return nullptr; > > > > diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h > > index f9504462..4c5078b2 100644 > > --- a/src/android/camera_stream.h > > +++ b/src/android/camera_stream.h > > @@ -114,7 +114,9 @@ public: > > }; > > CameraStream(CameraDevice *const cameraDevice, > > libcamera::CameraConfiguration *config, Type type, > > - camera3_stream_t *camera3Stream, unsigned int index); > > + camera3_stream_t *camera3Stream, > > + CameraStream *const sourceStream, > > + unsigned int index); > > CameraStream(CameraStream &&other); > > ~CameraStream(); > > > > @@ -122,6 +124,7 @@ public: > > camera3_stream_t *camera3Stream() const { return camera3Stream_; } > > const libcamera::StreamConfiguration &configuration() const; > > libcamera::Stream *stream() const; > > + CameraStream *sourceStream() const { return sourceStream_; } > > > > int configure(); > > int process(Camera3RequestDescriptor::StreamBuffer *streamBuffer); > > @@ -167,6 +170,7 @@ private: > > const libcamera::CameraConfiguration *config_; > > const Type type_; > > camera3_stream_t *camera3Stream_; > > + CameraStream *const sourceStream_; > > const unsigned int index_; > > > > std::unique_ptr<PlatformFrameBufferAllocator> allocator_; > > -- > > 2.34.1.448.ga2b2bfdf31-goog
Hi Hiro, On Fri, Jan 07, 2022 at 02:46:00PM +0900, Hirokazu Honda wrote: > Hi Jacopo, thank you for reviewing. > > On Thu, Jan 6, 2022 at 9:35 PM Jacopo Mondi <jacopo@jmondi.org> wrote: > > > > Hi Hiro, > > > > On Thu, Jan 06, 2022 at 06:43:23PM +0900, Hirokazu Honda wrote: > > > An Android HAL client may request multiple identical streams. It is > > > redundant that a native camera device produces a separate stream for > > > each of the identical requests. Configure the camera with a single > > > stream in that case. The other identical HAL streams will be produced by > > > the YUV post-processor. > > > > > > The Android HAL client can provide capture requests that are resolved as > > > they are produced by YUV post-processor. Then a buffer to be filled a > > > camera is not given. So the HAL adaptation layer looks up buffer > > > information to be passed to a camera and allocate the buffer by using > > > PlatformBufferAllocator. > > > > > > Signed-off-by: Hirokazu Honda <hiroh@chromium.org> > > > --- > > > src/android/camera_device.cpp | 84 ++++++++++++++++++++++++++++++++++- > > > src/android/camera_request.h | 2 + > > > src/android/camera_stream.cpp | 12 ++++- > > > src/android/camera_stream.h | 6 ++- > > > 4 files changed, 100 insertions(+), 4 deletions(-) > > > > > > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp > > > index 83825736..53533064 100644 > > > --- a/src/android/camera_device.cpp > > > +++ b/src/android/camera_device.cpp > > > @@ -9,6 +9,7 @@ > > > > > > #include <algorithm> > > > #include <fstream> > > > +#include <set> > > > #include <sys/mman.h> > > > #include <unistd.h> > > > #include <vector> > > > @@ -604,6 +605,35 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > > continue; > > > } > > > > > > + /* > > > + * While gralloc usage flags are supposed to report usage > > > + * patterns to select a suitable buffer allocation strategy, in > > > + * practice they're also used to make 
other decisions, such as > > > + * selecting the actual format for the IMPLEMENTATION_DEFINED > > > + * HAL pixel format. To avoid issues, we thus have to set the > > > + * GRALLOC_USAGE_HW_CAMERA_WRITE flag unconditionally, even for > > > + * streams that will be produced in software. > > > + */ > > > + stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > > > You can now remove > > > > /* This stream will be produced by hardware. */ > > stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > > > at the end of the for loop > > > > > + > > > + /* > > > + * If a CameraStream with the same size and format of the > > > + * current stream has already been requested, associate the two. > > > + */ > > > + auto iter = std::find_if( > > > + streamConfigs.begin(), streamConfigs.end(), > > > + [size, format](const Camera3StreamConfig &streamConfig) { > > > > Should [size, format] be captured by reference ? > > > > > + return streamConfig.config.size == size && > > > + streamConfig.config.pixelFormat == format; > > > + }); > > > + if (iter != streamConfigs.end()) { > > > + /* Add usage to copy the buffer in streams[0] to stream. 
*/ > > > + iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN; > > > + stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN; > > > + iter->streams.push_back({ stream, CameraStream::Type::Mapped }); > > > + continue; > > > + } > > > + > > > Camera3StreamConfig streamConfig; > > > streamConfig.streams = { { stream, CameraStream::Type::Direct } }; > > > streamConfig.config.size = size; > > > @@ -681,10 +711,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > > for (const auto &streamConfig : streamConfigs) { > > > config->addConfiguration(streamConfig.config); > > > > > > + CameraStream *directStream = nullptr; > > > for (auto &stream : streamConfig.streams) { > > > + CameraStream *sourceStream = nullptr; > > > + > > > + /* > > > + * Sets the source stream for Mapped type streams to the > > > + * same Direct type stream. The Direct type stream is > > > + * put earlier than Mapped type streams in the current > > > + * implementation. This might be broken in the future. > > > + * > > > + * \todo Remove this order assumption. > > > + */ > > > + if (stream.type == CameraStream::Type::Mapped) { > > > + ASSERT(directStream); > > > + sourceStream = directStream; > > > + } > > > + > > > streams_.emplace_back(this, config.get(), stream.type, > > > - stream.stream, config->size() - 1); > > > + stream.stream, > > > + sourceStream, > > > + config->size() - 1); > > > stream.stream->priv = static_cast<void *>(&streams_.back()); > > > + if (!directStream && > > > + stream.type == CameraStream::Type::Direct) { > > > + directStream = &streams_.back(); > > > + } > > > > > > Am I mistaken or streams of type Mapped will always have a Direct > > stream in streams[0] ? You seem to have the same assumption about > > ordering. 
Can this be simplified as: > > > > CameraStream *sourceStream = nullptr; > > for (auto &stream : streamConfig.streams) { > > > > streams_.emplace_back(this, config.get(), stream.type, > > stream.stream, > > sourceStream, > > config->size() - 1); > > stream.stream->priv = static_cast<void *>(&streams_.back()); > > > > if (stream.type == CameraStream::Type::Direct) > > sourceStream = &streams_.back(); > > } > > > > Streams of type Mapped will always follow a Direct. > > Internal streams have no Mapped associated. > > > > This should give you sourceStream == nullptr for Internal and Direct, > > which I think is what you want. > > > > > } > > > } > > > > > > @@ -917,6 +969,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > > > << " with " << descriptor->buffers_.size() << " streams"; > > > > > > + std::set<CameraStream *> requiredStreams; > > > + std::set<CameraStream *> providedStreams; > > > > I understand the logic; it took me a while, but now I like it. I wonder > > if we could do better and exploit the fact that std::set<> stores objects > > uniquely. Also, the introduction of putBuffers_ seems like a temporary > > solution; we should aim to re-use the same mechanism as for Internal > > buffers to avoid the additional complexity at > > streamProcessingComplete(). We have been adding ad-hoc solutions for > > each new issue we faced, and the complexity of keeping it all together > > has increased enough already. 
> > > > I would rather iterate a few more times on the list of requested > > streams, which is of limited length, in order to > > > > 1) Collect all the (unique) CameraStream that have to be queued to the > > request > > 2) Handle Direct and Internal which have a CameraStream associated and > > no sourceStream > > 3) Handle Mapped and add sourceStream to the Request if needed > > > > The hunk I get looks like the following > > > > @@ -917,9 +954,37 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > > << " with " << descriptor->buffers_.size() << " streams"; > > > > + /* > > + * Collect the CameraStream associated with each requested capture stream. > > + * requestedStreams being an std::set<>, no duplication can happen. > > + */ > > + std::set<CameraStream *> requestedStreams; > > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > + CameraStream *cameraStream = buffer.stream; > > + > > + switch (cameraStream->type()) { > > + case CameraStream::Type::Mapped: > > + requestedStreams.insert(cameraStream->sourceStream()); > > + break; > > + > > + case CameraStream::Type::Direct: > > + case CameraStream::Type::Internal: > > + requestedStreams.insert(cameraStream); > > + break; > > + } > > + } > > + > > + /* > > + * Process all the Direct and Internal streams, for which the CameraStream > > + * they refer to is the one that points to the right libcamera::Stream. > > + * > > + * Streams of type Mapped will be handled later. 
> > + */ > > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > CameraStream *cameraStream = buffer.stream; > > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > + FrameBuffer *frameBuffer = nullptr; > > + UniqueFD acquireFence; > > > > std::stringstream ss; > > ss << i << " - (" << camera3Stream->width << "x" > > @@ -928,25 +993,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > << "(" << cameraStream->configuration().size.toString() << ")[" > > << cameraStream->configuration().pixelFormat.toString() << "]"; > > > > - /* > > - * Inspect the camera stream type, create buffers opportunely > > - * and add them to the Request if required. > > - */ > > - FrameBuffer *frameBuffer = nullptr; > > - UniqueFD acquireFence; > > - > > - MutexLocker lock(descriptor->streamsProcessMutex_); > > - > > switch (cameraStream->type()) { > > case CameraStream::Type::Mapped: > > - /* > > - * Mapped streams don't need buffers added to the > > - * Request. > > - */ > > - LOG(HAL, Debug) << ss.str() << " (mapped)"; > > - > > - descriptor->pendingStreamsToProcess_.insert( > > - { cameraStream, &buffer }); > > continue; > > > > case CameraStream::Type::Direct: > > @@ -987,9 +1035,52 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > return -ENOMEM; > > } > > > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > > descriptor->request_->addBuffer(cameraStream->stream(), > > frameBuffer, std::move(fence)); > > + > > + requestedStreams.erase(cameraStream); > > + } > > + > > + /* > > + * Now handle the mapped streams. If no buffer has been added for them > > + * because their corresponding direct source stream has not been requested, > > + * add one here. 
> > + */ > > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > + CameraStream *cameraStream = buffer.stream; > > + camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > + CameraStream *sourceStream = cameraStream->sourceStream(); > > + FrameBuffer *frameBuffer = nullptr; > > + > > + if (cameraStream->type() != CameraStream::Type::Mapped) > > + continue; > > + > > + std::stringstream ss; > > + ss << i << " - (" << camera3Stream->width << "x" > > + << camera3Stream->height << ")" > > + << "[" << utils::hex(camera3Stream->format) << "] -> " > > + << "(" << cameraStream->configuration().size.toString() << ")[" > > + << cameraStream->configuration().pixelFormat.toString() << "]"; > > + LOG(HAL, Debug) << ss.str() << " (mapped)"; > > + > > + descriptor->pendingStreamsToProcess_.insert({ cameraStream, &buffer }); > > + > > + /* > > + * Make sure the CameraStream this stream is mapped on has been > > + * added to the request. > > + */ > > + if (requestedStreams.find(sourceStream) == requestedStreams.end()) > > + continue; > > + > > + /* If that's not the case, we need to do so here. */ > > + frameBuffer = sourceStream->getBuffer(); > > + buffer.internalBuffer = frameBuffer; > > + > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > + descriptor->request_->addBuffer(sourceStream->stream(), > > + frameBuffer, nullptr); > > } > > > > I have only compiled this, but if you know a CTS test which can be run > > in isolation and exercise this use case I would be happy to test it. > > > > This way you might drop putBuffers_ if I'm not mistaken. > > Thanks. What a nice suggestion! > putBuffer() is invoked against buffer.stream. > buffer.stream is a Mapped stream, though the FrameBuffer to be returned > (i.e. putBuffer()'d) belongs to the Direct stream. > So I have to introduce putBuffers_ to pair the Direct stream and the FrameBuffer. > This problem is not resolved in your implementation if I understand > your code correctly. I see. 
If I got you right, the problem is that the temporary framebuffer for Mapped is get() from the sourceStream: frameBuffer = sourceStream->getBuffer(); But then put() from the actual stream if we reuse internalBuffer like I've proposed: stream->putBuffer(buffer->internalBuffer); I now wonder why there's a strict need to get() the buffer from the sourceStream and not from the CameraStream the Mapped stream refers to. With your introduction of the nice PlatformBufferAllocator we shouldn't have restrictions about which CameraStream allocates the temporary buffer, should we ? Actually at the end of processCaptureRequest() (looking at my hunk above) we have: /* If that's not the case, we need to do so here. */ frameBuffer = sourceStream->getBuffer(); buffer.internalBuffer = frameBuffer; MutexLocker lock(descriptor->streamsProcessMutex_); descriptor->request_->addBuffer(sourceStream->stream(), frameBuffer, nullptr); But I feel like we can replace sourceStream with the current CameraStream: /* If that's not the case, we need to do so here. */ frameBuffer = cameraStream->getBuffer(); buffer.internalBuffer = frameBuffer; MutexLocker lock(descriptor->streamsProcessMutex_); descriptor->request_->addBuffer(cameraStream->stream(), frameBuffer, nullptr); This is because: 1) The above about PlatformBufferAllocator: we can allocate buffers from every CameraStream 2) cameraStream->stream() == sourceStream->stream() as CameraStream::stream() returns the libcamera::Stream associated with the libcamera::StreamConfiguration, which is the same for both cameraStream and sourceStream In this way sourceStream is only used to determine whether a new buffer has to be allocated, when the request only contains mapped streams. Does it make sense ? It's quite hard to debug this just looking at the code though, and I might be wasting your time as I've still not tested all the above but I'm asking you to disprove it by again looking at the code. 
Is there a test you know of which I can run to verify all the above ? Thanks j > > -Hiro > > > > Thanks > > j > > > > > > > > > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > > CameraStream *cameraStream = buffer.stream; > > > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > > @@ -947,6 +1001,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > > > > descriptor->pendingStreamsToProcess_.insert( > > > { cameraStream, &buffer }); > > > + ASSERT(cameraStream->sourceStream()); > > > + requiredStreams.insert(cameraStream->sourceStream()); > > > continue; > > > > > > case CameraStream::Type::Direct: > > > @@ -990,8 +1046,32 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > > > descriptor->request_->addBuffer(cameraStream->stream(), > > > frameBuffer, std::move(fence)); > > > + providedStreams.insert(cameraStream); > > > } > > > > > > + for (CameraStream *requiredStream : requiredStreams) { > > > + if (providedStreams.find(requiredStream) != providedStreams.end()) > > > + continue; > > > + > > > + /* \todo Can we Map stream to Internal type stream? */ > > > + ASSERT(requiredStream->type() == CameraStream::Type::Direct); > > > + > > > + FrameBuffer *frameBuffer = requiredStream->getBuffer(); > > > + if (!frameBuffer) { > > > + LOG(HAL, Error) << "Failed to create frame buffer"; > > > + return -ENOMEM; > > > + } > > > + > > > + /* > > > + * addBuffer() lets a buffer for requiredStream be output by > > > + * camera. Records frameBuffer with requiredStream to call > > > + * putBuffer() after post-processing is complete. 
> > > + */ > > > + descriptor->request_->addBuffer(requiredStream->stream(), > > > + frameBuffer, nullptr); > > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > > + descriptor->putBuffers_.push_back({ requiredStream, frameBuffer }); > > > + } > > > /* > > > * Translate controls from Android to libcamera and queue the request > > > * to the camera. > > > @@ -1251,6 +1331,8 @@ void CameraDevice::streamProcessingComplete(Camera3RequestDescriptor::StreamBuff > > > request->pendingStreamsToProcess_.erase(streamBuffer->stream); > > > if (!request->pendingStreamsToProcess_.empty()) > > > return; > > > + for (auto [cameraStream, buffer] : request->putBuffers_) > > > + cameraStream->putBuffer(buffer); > > > } > > > > > > completeDescriptor(streamBuffer->request); > > > diff --git a/src/android/camera_request.h b/src/android/camera_request.h > > > index 37b6ae32..7a41c6d9 100644 > > > --- a/src/android/camera_request.h > > > +++ b/src/android/camera_request.h > > > @@ -59,6 +59,8 @@ public: > > > /* Keeps track of streams requiring post-processing. 
*/ > > > std::map<CameraStream *, StreamBuffer *> pendingStreamsToProcess_ > > > LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > > > + std::vector<std::pair<CameraStream *, libcamera::FrameBuffer *>> putBuffers_ > > > + LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_); > > > libcamera::Mutex streamsProcessMutex_; > > > > > > Camera3RequestDescriptor(libcamera::Camera *camera, > > > diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp > > > index c2157450..634cf319 100644 > > > --- a/src/android/camera_stream.cpp > > > +++ b/src/android/camera_stream.cpp > > > @@ -52,9 +52,11 @@ LOG_DECLARE_CATEGORY(HAL) > > > > > > CameraStream::CameraStream(CameraDevice *const cameraDevice, > > > CameraConfiguration *config, Type type, > > > - camera3_stream_t *camera3Stream, unsigned int index) > > > + camera3_stream_t *camera3Stream, > > > + CameraStream *const sourceStream, unsigned int index) > > > : cameraDevice_(cameraDevice), config_(config), type_(type), > > > - camera3Stream_(camera3Stream), index_(index) > > > + camera3Stream_(camera3Stream), sourceStream_(sourceStream), > > > + index_(index) > > > { > > > } > > > > > > @@ -206,6 +208,12 @@ void CameraStream::flush() > > > > > > FrameBuffer *CameraStream::getBuffer() > > > { > > > + if (type_ == Type::Direct && !allocator_) { > > > + allocator_ = std::make_unique<PlatformFrameBufferAllocator>(cameraDevice_); > > > + ASSERT(!mutex_); > > > + mutex_ = std::make_unique<Mutex>(); > > > + } > > > + > > > if (!allocator_) > > > return nullptr; > > > > > > diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h > > > index f9504462..4c5078b2 100644 > > > --- a/src/android/camera_stream.h > > > +++ b/src/android/camera_stream.h > > > @@ -114,7 +114,9 @@ public: > > > }; > > > CameraStream(CameraDevice *const cameraDevice, > > > libcamera::CameraConfiguration *config, Type type, > > > - camera3_stream_t *camera3Stream, unsigned int index); > > > + camera3_stream_t *camera3Stream, > > > + 
CameraStream *const sourceStream, > > > + unsigned int index); > > > CameraStream(CameraStream &&other); > > > ~CameraStream(); > > > > > > @@ -122,6 +124,7 @@ public: > > > camera3_stream_t *camera3Stream() const { return camera3Stream_; } > > > const libcamera::StreamConfiguration &configuration() const; > > > libcamera::Stream *stream() const; > > > + CameraStream *sourceStream() const { return sourceStream_; } > > > > > > int configure(); > > > int process(Camera3RequestDescriptor::StreamBuffer *streamBuffer); > > > @@ -167,6 +170,7 @@ private: > > > const libcamera::CameraConfiguration *config_; > > > const Type type_; > > > camera3_stream_t *camera3Stream_; > > > + CameraStream *const sourceStream_; > > > const unsigned int index_; > > > > > > std::unique_ptr<PlatformFrameBufferAllocator> allocator_; > > > -- > > > 2.34.1.448.ga2b2bfdf31-goog
Hi Jacopo, On Fri, Jan 7, 2022 at 11:21 PM Jacopo Mondi <jacopo@jmondi.org> wrote: > > Hi Hiro, > > On Fri, Jan 07, 2022 at 02:46:00PM +0900, Hirokazu Honda wrote: > > Hi Jacopo, thank you for reviewing. > > > > On Thu, Jan 6, 2022 at 9:35 PM Jacopo Mondi <jacopo@jmondi.org> wrote: > > > > > > Hi Hiro, > > > > > > On Thu, Jan 06, 2022 at 06:43:23PM +0900, Hirokazu Honda wrote: > > > > An Android HAL client may request multiple identical streams. It is > > > > redundant that a native camera device produces a separate stream for > > > > each of the identical requests. Configure the camera with a single > > > > stream in that case. The other identical HAL streams will be produced by > > > > the YUV post-processor. > > > > > > > > The Android HAL client can provide capture requests that are resolved > > > > entirely by the YUV post-processor. In that case no buffer to be filled > > > > by the camera is given, so the HAL adaptation layer looks up the buffer > > > > information to be passed to the camera and allocates the buffer using > > > > PlatformBufferAllocator. 
> > > > > > > > Signed-off-by: Hirokazu Honda <hiroh@chromium.org> > > > > --- > > > > src/android/camera_device.cpp | 84 ++++++++++++++++++++++++++++++++++- > > > > src/android/camera_request.h | 2 + > > > > src/android/camera_stream.cpp | 12 ++++- > > > > src/android/camera_stream.h | 6 ++- > > > > 4 files changed, 100 insertions(+), 4 deletions(-) > > > > > > > > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp > > > > index 83825736..53533064 100644 > > > > --- a/src/android/camera_device.cpp > > > > +++ b/src/android/camera_device.cpp > > > > @@ -9,6 +9,7 @@ > > > > > > > > #include <algorithm> > > > > #include <fstream> > > > > +#include <set> > > > > #include <sys/mman.h> > > > > #include <unistd.h> > > > > #include <vector> > > > > @@ -604,6 +605,35 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > > > continue; > > > > } > > > > > > > > + /* > > > > + * While gralloc usage flags are supposed to report usage > > > > + * patterns to select a suitable buffer allocation strategy, in > > > > + * practice they're also used to make other decisions, such as > > > > + * selecting the actual format for the IMPLEMENTATION_DEFINED > > > > + * HAL pixel format. To avoid issues, we thus have to set the > > > > + * GRALLOC_USAGE_HW_CAMERA_WRITE flag unconditionally, even for > > > > + * streams that will be produced in software. > > > > + */ > > > > + stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > > > > > You can now remove > > > > > > /* This stream will be produced by hardware. */ > > > stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE; > > > > > > at the end of the for loop > > > > > > > + > > > > + /* > > > > + * If a CameraStream with the same size and format of the > > > > + * current stream has already been requested, associate the two. 
> > > > + */ > > > > + auto iter = std::find_if( > > > > + streamConfigs.begin(), streamConfigs.end(), > > > > + [size, format](const Camera3StreamConfig &streamConfig) { > > > > > > Should [size, format] be captured by reference ? > > > > > > > + return streamConfig.config.size == size && > > > > + streamConfig.config.pixelFormat == format; > > > > + }); > > > > + if (iter != streamConfigs.end()) { > > > > + /* Add usage to copy the buffer in streams[0] to stream. */ > > > > + iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN; > > > > + stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN; > > > > + iter->streams.push_back({ stream, CameraStream::Type::Mapped }); > > > > + continue; > > > > + } > > > > + > > > > Camera3StreamConfig streamConfig; > > > > streamConfig.streams = { { stream, CameraStream::Type::Direct } }; > > > > streamConfig.config.size = size; > > > > @@ -681,10 +711,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list) > > > > for (const auto &streamConfig : streamConfigs) { > > > > config->addConfiguration(streamConfig.config); > > > > > > > > + CameraStream *directStream = nullptr; > > > > for (auto &stream : streamConfig.streams) { > > > > + CameraStream *sourceStream = nullptr; > > > > + > > > > + /* > > > > + * Sets the source stream for Mapped type streams to the > > > > + * same Direct type stream. The Direct type stream is > > > > + * put earlier than Mapped type streams in the current > > > > + * implementation. This might be broken in the future. > > > > + * > > > > + * \todo Remove this order assumption. 
> > > > + */ > > > > + if (stream.type == CameraStream::Type::Mapped) { > > > > + ASSERT(directStream); > > > > + sourceStream = directStream; > > > > + } > > > > + > > > > streams_.emplace_back(this, config.get(), stream.type, > > > > - stream.stream, config->size() - 1); > > > > + stream.stream, > > > > + sourceStream, > > > > + config->size() - 1); > > > > stream.stream->priv = static_cast<void *>(&streams_.back()); > > > > + if (!directStream && > > > > + stream.type == CameraStream::Type::Direct) { > > > > + directStream = &streams_.back(); > > > > + } > > > > > > > > > Am I mistaken or streams of type Mapped will always have a Direct > > > stream in streams[0] ? You seem to have the same assumption about > > > ordering. Can this be simplified as: > > > > > > CameraStream *sourceStream = nullptr; > > > for (auto &stream : streamConfig.streams) { > > > > > > streams_.emplace_back(this, config.get(), stream.type, > > > stream.stream, > > > sourceStream, > > > config->size() - 1); > > > stream.stream->priv = static_cast<void *>(&streams_.back()); > > > > > > if (stream.type == CameraStream::Type::Direct) > > > sourceStream = &streams_.back(); > > > } > > > > > > Streams of type Mapped will always follow a Direct. > > > Internal streams have no Mapped associated. > > > > > > This should give you sourceStream == nullptr for Internal and Direct, > > > which I think it's what you want. > > > > > > > } > > > > } > > > > > > > > @@ -917,6 +969,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > > > > << " with " << descriptor->buffers_.size() << " streams"; > > > > > > > > + std::set<CameraStream *> requiredStreams; > > > > + std::set<CameraStream *> providedStreams; > > > > > > I understand the logic, it took me a while but now I like it. I wonder > > > if we could do better and exploit the fact std::set<> stores objects > > > uniquely. 
Also, the introduction of putBuffers_ seems like a temporary > > > solution; we should aim to re-use the same mechanism as for Internal > > > buffers to avoid the additional complexity at > > > streamProcessingComplete(). We have been adding ad-hoc solutions for > > > each new issue we faced, and the complexity of keeping it all together > > > has increased enough already. > > > > > > I would rather iterate a few more times on the list of requested > > > streams, which is of limited length, in order to > > > > > > 1) Collect all the (unique) CameraStream that have to be queued to the > > > request > > > 2) Handle Direct and Internal which have a CameraStream associated and > > > no sourceStream > > > 3) Handle Mapped and add sourceStream to the Request if needed > > > > > > The hunk I get looks like the following > > > > > > @@ -917,9 +954,37 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie() > > > << " with " << descriptor->buffers_.size() << " streams"; > > > > > > + /* > > > + * Collect the CameraStream associated with each requested capture stream. > > > + * requestedStreams being an std::set<>, no duplication can happen. > > > + */ > > > + std::set<CameraStream *> requestedStreams; > > > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > > + CameraStream *cameraStream = buffer.stream; > > > + > > > + switch (cameraStream->type()) { > > > + case CameraStream::Type::Mapped: > > > + requestedStreams.insert(cameraStream->sourceStream()); > > > + break; > > > + > > > + case CameraStream::Type::Direct: > > > + case CameraStream::Type::Internal: > > > + requestedStreams.insert(cameraStream); > > > + break; > > > + } > > > + } > > > + > > > + /* > > > + * Process all the Direct and Internal streams, for which the CameraStream > > > + * they refer to is the one that points to the right libcamera::Stream. 
> > > + * > > > + * Streams of type Mapped will be handled later. > > > + */ > > > for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > > CameraStream *cameraStream = buffer.stream; > > > camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > > + FrameBuffer *frameBuffer = nullptr; > > > + UniqueFD acquireFence; > > > > > > std::stringstream ss; > > > ss << i << " - (" << camera3Stream->width << "x" > > > @@ -928,25 +993,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > << "(" << cameraStream->configuration().size.toString() << ")[" > > > << cameraStream->configuration().pixelFormat.toString() << "]"; > > > > > > - /* > > > - * Inspect the camera stream type, create buffers opportunely > > > - * and add them to the Request if required. > > > - */ > > > - FrameBuffer *frameBuffer = nullptr; > > > - UniqueFD acquireFence; > > > - > > > - MutexLocker lock(descriptor->streamsProcessMutex_); > > > - > > > switch (cameraStream->type()) { > > > case CameraStream::Type::Mapped: > > > - /* > > > - * Mapped streams don't need buffers added to the > > > - * Request. > > > - */ > > > - LOG(HAL, Debug) << ss.str() << " (mapped)"; > > > - > > > - descriptor->pendingStreamsToProcess_.insert( > > > - { cameraStream, &buffer }); > > > continue; > > > > > > case CameraStream::Type::Direct: > > > @@ -987,9 +1035,52 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques > > > return -ENOMEM; > > > } > > > > > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > > auto fence = std::make_unique<Fence>(std::move(acquireFence)); > > > descriptor->request_->addBuffer(cameraStream->stream(), > > > frameBuffer, std::move(fence)); > > > + > > > + requestedStreams.erase(cameraStream); > > > + } > > > + > > > + /* > > > + * Now handle the mapped streams. 
If no buffer has been added for them > > > + * because their corresponding direct source stream has not been requested, > > > + * add one here. > > > + */ > > > + for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) { > > > + CameraStream *cameraStream = buffer.stream; > > > + camera3_stream_t *camera3Stream = cameraStream->camera3Stream(); > > > + CameraStream *sourceStream = cameraStream->sourceStream(); > > > + FrameBuffer *frameBuffer = nullptr; > > > + > > > + if (cameraStream->type() != CameraStream::Type::Mapped) > > > + continue; > > > + > > > + std::stringstream ss; > > > + ss << i << " - (" << camera3Stream->width << "x" > > > + << camera3Stream->height << ")" > > > + << "[" << utils::hex(camera3Stream->format) << "] -> " > > > + << "(" << cameraStream->configuration().size.toString() << ")[" > > > + << cameraStream->configuration().pixelFormat.toString() << "]"; > > > + LOG(HAL, Debug) << ss.str() << " (mapped)"; > > > + > > > + descriptor->pendingStreamsToProcess_.insert({ cameraStream, &buffer }); > > > + > > > + /* > > > + * Make sure the CameraStream this stream is mapped on has been > > > + * added to the request. > > > + */ > > > + if (requestedStreams.find(sourceStream) == requestedStreams.end()) > > > + continue; > > > + > > > + /* If that's not the case, we need to do so here. */ > > > + frameBuffer = sourceStream->getBuffer(); > > > + buffer.internalBuffer = frameBuffer; > > > + > > > + MutexLocker lock(descriptor->streamsProcessMutex_); > > > + descriptor->request_->addBuffer(sourceStream->stream(), > > > + frameBuffer, nullptr); > > > } > > > > > > I have only compiled this, but if you know a CTS test which can be run > > > in isolation and exercise this use case I would be happy to test it. > > > > > > This way you might drop putBuffers_ if I'm not mistaken. > > > > Thanks. What a nice suggestion! > > putBuffer() is invoked against buffer.stream. > > buffer.stream is a Mapped stream, though the FrameBuffer to be returned > > (i.e. 
putBuffer()'d) belongs to the Direct stream. > > So I have to introduce putBuffers_ to pair the Direct stream and the FrameBuffer. > > This problem is not resolved in your implementation if I understand > > your code correctly. > > I see. If I got you right, the problem is that the temporary > framebuffer for Mapped is get() from the sourceStream: > > frameBuffer = sourceStream->getBuffer(); > > But then put() from the actual stream if we reuse internalBuffer like > I've proposed: > > stream->putBuffer(buffer->internalBuffer); > > I now wonder why there's a strict need to get() the buffer from the > sourceStream and not from the CameraStream the Mapped stream refers to. > With your introduction of the nice PlatformBufferAllocator we > shouldn't have restrictions about which CameraStream allocates the > temporary buffer, should we ? > > Actually at the end of processCaptureRequest() (looking at my hunk > above) we have: > > /* If that's not the case, we need to do so here. */ > frameBuffer = sourceStream->getBuffer(); > buffer.internalBuffer = frameBuffer; > > MutexLocker lock(descriptor->streamsProcessMutex_); > descriptor->request_->addBuffer(sourceStream->stream(), > frameBuffer, nullptr); > > But I feel like we can replace sourceStream with the current CameraStream: > > /* If that's not the case, we need to do so here. 
*/ > frameBuffer = cameraStream->getBuffer(); > buffer.internalBuffer = frameBuffer; > > MutexLocker lock(descriptor->streamsProcessMutex_); > descriptor->request_->addBuffer(cameraStream->stream(), > frameBuffer, nullptr); > > This is because: > 1) The above about PlatformBufferAllocator: we can allocate buffers > from every CameraStream > > 2) cameraStream->stream() == sourceStream->stream() as > CameraStream::stream() returns the libcamera::Stream associated with > the libcamera::StreamConfiguration, which is the same for both > cameraStream and sourceStream > > In this way sourceStream is only used to determine whether a new buffer > has to be allocated, when the request only contains mapped streams. > > Does it make sense ? > That sounds right! > It's quite hard to debug this just looking at the code though, and I > might be wasting your time as I've still not tested all the above but > I'm asking you to disprove it by again looking at the code. Is there a > test you know of which I can run to verify all the above ? > Sorry for not responding quickly. I saw you uploaded the patch series. Thanks so much. 
Regards,
-Hiro

> Thanks
>    j
diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
index 83825736..53533064 100644
--- a/src/android/camera_device.cpp
+++ b/src/android/camera_device.cpp
@@ -9,6 +9,7 @@

 #include <algorithm>
 #include <fstream>
+#include <set>
 #include <sys/mman.h>
 #include <unistd.h>
 #include <vector>
@@ -604,6 +605,35 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)
 			continue;
 		}

+		/*
+		 * While gralloc usage flags are supposed to report usage
+		 * patterns to select a suitable buffer allocation strategy, in
+		 * practice they're also used to make other decisions, such as
+		 * selecting the actual format for the IMPLEMENTATION_DEFINED
+		 * HAL pixel format. To avoid issues, we thus have to set the
+		 * GRALLOC_USAGE_HW_CAMERA_WRITE flag unconditionally, even for
+		 * streams that will be produced in software.
+		 */
+		stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE;
+
+		/*
+		 * If a CameraStream with the same size and format of the
+		 * current stream has already been requested, associate the two.
+		 */
+		auto iter = std::find_if(
+			streamConfigs.begin(), streamConfigs.end(),
+			[size, format](const Camera3StreamConfig &streamConfig) {
+				return streamConfig.config.size == size &&
+				       streamConfig.config.pixelFormat == format;
+			});
+		if (iter != streamConfigs.end()) {
+			/* Add usage to copy the buffer in streams[0] to stream. */
+			iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN;
+			stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN;
+			iter->streams.push_back({ stream, CameraStream::Type::Mapped });
+			continue;
+		}
+
 		Camera3StreamConfig streamConfig;
 		streamConfig.streams = { { stream, CameraStream::Type::Direct } };
 		streamConfig.config.size = size;
@@ -681,10 +711,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)
 	for (const auto &streamConfig : streamConfigs) {
 		config->addConfiguration(streamConfig.config);

+		CameraStream *directStream = nullptr;
 		for (auto &stream : streamConfig.streams) {
+			CameraStream *sourceStream = nullptr;
+
+			/*
+			 * Sets the source stream for Mapped type streams to the
+			 * same Direct type stream. The Direct type stream is
+			 * put earlier than Mapped type streams in the current
+			 * implementation. This might be broken in the future.
+			 *
+			 * \todo Remove this order assumption.
+			 */
+			if (stream.type == CameraStream::Type::Mapped) {
+				ASSERT(directStream);
+				sourceStream = directStream;
+			}
+
 			streams_.emplace_back(this, config.get(), stream.type,
-					      stream.stream, config->size() - 1);
+					      stream.stream,
+					      sourceStream,
+					      config->size() - 1);
 			stream.stream->priv = static_cast<void *>(&streams_.back());
+
+			if (!directStream &&
+			    stream.type == CameraStream::Type::Direct) {
+				directStream = &streams_.back();
+			}
 		}
 	}

@@ -917,6 +969,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
 	LOG(HAL, Debug) << "Queueing request " << descriptor->request_->cookie()
 			<< " with " << descriptor->buffers_.size() << " streams";

+	std::set<CameraStream *> requiredStreams;
+	std::set<CameraStream *> providedStreams;
 	for (const auto &[i, buffer] : utils::enumerate(descriptor->buffers_)) {
 		CameraStream *cameraStream = buffer.stream;
 		camera3_stream_t *camera3Stream = cameraStream->camera3Stream();
@@ -947,6 +1001,8 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
 			descriptor->pendingStreamsToProcess_.insert(
 				{ cameraStream, &buffer });
+			ASSERT(cameraStream->sourceStream());
+			requiredStreams.insert(cameraStream->sourceStream());
 			continue;

 		case CameraStream::Type::Direct:
@@ -990,8 +1046,32 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
 		auto fence = std::make_unique<Fence>(std::move(acquireFence));
 		descriptor->request_->addBuffer(cameraStream->stream(),
 						frameBuffer, std::move(fence));
+		providedStreams.insert(cameraStream);
 	}

+	for (CameraStream *requiredStream : requiredStreams) {
+		if (providedStreams.find(requiredStream) != providedStreams.end())
+			continue;
+
+		/* \todo Can we Map stream to Internal type stream? */
+		ASSERT(requiredStream->type() == CameraStream::Type::Direct);
+
+		FrameBuffer *frameBuffer = requiredStream->getBuffer();
+		if (!frameBuffer) {
+			LOG(HAL, Error) << "Failed to create frame buffer";
+			return -ENOMEM;
+		}
+
+		/*
+		 * addBuffer() lets a buffer for requiredStream be output by
+		 * camera. Records frameBuffer with requiredStream to call
+		 * putBuffer() after post-processing is complete.
+		 */
+		descriptor->request_->addBuffer(requiredStream->stream(),
+						frameBuffer, nullptr);
+		MutexLocker lock(descriptor->streamsProcessMutex_);
+		descriptor->putBuffers_.push_back({ requiredStream, frameBuffer });
+	}

 	/*
 	 * Translate controls from Android to libcamera and queue the request
 	 * to the camera.
@@ -1251,6 +1331,8 @@ void CameraDevice::streamProcessingComplete(Camera3RequestDescriptor::StreamBuff
 		request->pendingStreamsToProcess_.erase(streamBuffer->stream);
 		if (!request->pendingStreamsToProcess_.empty())
 			return;
+		for (auto [cameraStream, buffer] : request->putBuffers_)
+			cameraStream->putBuffer(buffer);
 	}

 	completeDescriptor(streamBuffer->request);
diff --git a/src/android/camera_request.h b/src/android/camera_request.h
index 37b6ae32..7a41c6d9 100644
--- a/src/android/camera_request.h
+++ b/src/android/camera_request.h
@@ -59,6 +59,8 @@ public:
 	/* Keeps track of streams requiring post-processing. */
 	std::map<CameraStream *, StreamBuffer *> pendingStreamsToProcess_
 		LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_);
+	std::vector<std::pair<CameraStream *, libcamera::FrameBuffer *>> putBuffers_
+		LIBCAMERA_TSA_GUARDED_BY(streamsProcessMutex_);
 	libcamera::Mutex streamsProcessMutex_;

 	Camera3RequestDescriptor(libcamera::Camera *camera,
diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp
index c2157450..634cf319 100644
--- a/src/android/camera_stream.cpp
+++ b/src/android/camera_stream.cpp
@@ -52,9 +52,11 @@ LOG_DECLARE_CATEGORY(HAL)

 CameraStream::CameraStream(CameraDevice *const cameraDevice,
 			   CameraConfiguration *config, Type type,
-			   camera3_stream_t *camera3Stream, unsigned int index)
+			   camera3_stream_t *camera3Stream,
+			   CameraStream *const sourceStream, unsigned int index)
 	: cameraDevice_(cameraDevice), config_(config), type_(type),
-	  camera3Stream_(camera3Stream), index_(index)
+	  camera3Stream_(camera3Stream), sourceStream_(sourceStream),
+	  index_(index)
 {
 }

@@ -206,6 +208,12 @@ void CameraStream::flush()

 FrameBuffer *CameraStream::getBuffer()
 {
+	if (type_ == Type::Direct && !allocator_) {
+		allocator_ = std::make_unique<PlatformFrameBufferAllocator>(cameraDevice_);
+		ASSERT(!mutex_);
+		mutex_ = std::make_unique<Mutex>();
+	}
+
 	if (!allocator_)
 		return nullptr;

diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h
index f9504462..4c5078b2 100644
--- a/src/android/camera_stream.h
+++ b/src/android/camera_stream.h
@@ -114,7 +114,9 @@ public:
 	};

 	CameraStream(CameraDevice *const cameraDevice,
 		     libcamera::CameraConfiguration *config, Type type,
-		     camera3_stream_t *camera3Stream, unsigned int index);
+		     camera3_stream_t *camera3Stream,
+		     CameraStream *const sourceStream,
+		     unsigned int index);
 	CameraStream(CameraStream &&other);
 	~CameraStream();

@@ -122,6 +124,7 @@ public:
 	camera3_stream_t *camera3Stream() const { return camera3Stream_; }
 	const libcamera::StreamConfiguration &configuration() const;
 	libcamera::Stream *stream() const;
+	CameraStream *sourceStream() const { return sourceStream_; }

 	int configure();
 	int process(Camera3RequestDescriptor::StreamBuffer *streamBuffer);
@@ -167,6 +170,7 @@ private:
 	const libcamera::CameraConfiguration *config_;
 	const Type type_;
 	camera3_stream_t *camera3Stream_;
+	CameraStream *const sourceStream_;
 	const unsigned int index_;

 	std::unique_ptr<PlatformFrameBufferAllocator> allocator_;
An Android HAL client may request multiple identical streams. It is
redundant for a native camera device to produce a separate stream for
each of the identical requests. Configure the camera with a single
stream in that case. The other identical HAL streams will be produced
by the YUV post-processor.

The Android HAL client can also provide capture requests whose buffers
are all produced by the YUV post-processor. In that case, no buffer to
be filled by the camera is given. The HAL adaptation layer therefore
looks up the buffer information to be passed to the camera and
allocates the buffer using PlatformFrameBufferAllocator.

Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
---
 src/android/camera_device.cpp | 84 ++++++++++++++++++++++++++++++++++-
 src/android/camera_request.h  |  2 +
 src/android/camera_stream.cpp | 12 ++++-
 src/android/camera_stream.h   |  6 ++-
 4 files changed, 100 insertions(+), 4 deletions(-)

--
2.34.1.448.ga2b2bfdf31-goog