[RFC,v1,2/2] libcamera: pipeline: virtual: Move image generation to separate thread
diff mbox series

Message ID 20250811094926.1308259-3-barnabas.pocze@ideasonboard.com
State Superseded
Headers show
Series
  • libcamera: pipeline: virtual: Move image generation to separate thread
Related show

Commit Message

Barnabás Pőcze Aug. 11, 2025, 9:49 a.m. UTC
Currently the virtual pipeline generates the images synchronously. This is not
ideal because it blocks the camera manager's internal thread, and because its
behaviour is different from other existing pipeline handlers, all of which
complete requests asynchronously.

So move the image generation to a separate thread by deriving `VirtualCameraData`
from `Thread`, as well as `Object` and using the existing asynchronous signal
and method call mechanism.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
---
 src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
 src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
 2 files changed, 58 insertions(+), 30 deletions(-)

Comments

Laurent Pinchart Aug. 11, 2025, 5:05 p.m. UTC | #1
Hi Barnabás,

Thank you for the patch.

On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
> Currently the virtual pipeline generates the images synchronously. This is not
> ideal because it blocks the camera manager's internal thread, and because its
> behaviour is different from other existing pipeline handlers, all of which
> complete requests asynchronously.
> 
> So move the image generation to a separate thread by deriving `VirtualCameraData`
> from `Thread`, as well as `Object` and using the existing asynchronous signal
> and method call mechanism.
> 
> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
> ---
>  src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
>  src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
>  2 files changed, 58 insertions(+), 30 deletions(-)
> 
> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
> index 049ebcba5..2ad32ec0a 100644
> --- a/src/libcamera/pipeline/virtual/virtual.cpp
> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>  
>  	/* \todo Support multiple streams and pass multi_stream_test */
>  	streamConfigs_.resize(kMaxStream);
> +
> +	moveToThread(this);
> +}
> +
> +void VirtualCameraData::queueRequest(Request *request)
> +{
> +	for (auto const &[stream, buffer] : request->buffers()) {
> +		bool found = false;
> +		/* map buffer and fill test patterns */
> +		for (auto &streamConfig : streamConfigs_) {
> +			if (stream == &streamConfig.stream) {
> +				FrameMetadata &fmd = buffer->_d()->metadata();
> +
> +				fmd.status = FrameMetadata::Status::FrameSuccess;
> +				fmd.sequence = streamConfig.seq++;
> +				fmd.timestamp = currentTimestamp();
> +
> +				for (const auto [i, p] : utils::enumerate(buffer->planes()))
> +					fmd.planes()[i].bytesused = p.length;
> +
> +				found = true;
> +
> +				if (streamConfig.frameGenerator->generateFrame(
> +					    stream->configuration().size, buffer))
> +					fmd.status = FrameMetadata::Status::FrameError;
> +
> +				bufferCompleted.emit(request, buffer);
> +				break;
> +			}
> +		}
> +		ASSERT(found);
> +	}
>  }
>  
>  VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
>  	for (auto &s : data->streamConfigs_)
>  		s.seq = 0;
>  
> +	data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
> +		if (completeBuffer(request, buffer))
> +			completeRequest(request);
> +	});

I'm not a big fan of lambdas in such cases, I find they hinder
readability compared to member functions. The the PipelineHandlerVirtual
class is not exposed outside of the pipeline handler, adding a private
member function shouldn't be a big issue.

> +
> +	data->start();
> +
>  	return 0;
>  }
>  
> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
>  {
> +	VirtualCameraData *data = cameraData(camera);
> +
> +	/* Flush all work. */
> +	data->invokeMethod([] { }, ConnectionTypeBlocking);

Does this mean the Thread class API is lacking ? In particular, I don't
see a similar call in the SoftwareIsp::stop() function. Is it needed
here because we rely solely on the queue of messages to keep track the
requests ? If so, I think we should keep track of the queued requests in
the VirtualCameraData class, and cancel all requests not processed after
the thread exits. This will make stopping the camera faster, and will
also mimick better the behaviour of hardware cameras.

I know it's a bit more work, hopefully it's just freshening up the yak
and not fully shaving it :-)

> +	data->exit();
> +	data->wait();
> +
> +	/* Process queued `bufferCompleted` signal emissions. */
> +	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
> +	data->bufferCompleted.disconnect(this);
>  }
>  
>  int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>  	VirtualCameraData *data = cameraData(camera);
>  	const auto timestamp = currentTimestamp();
>  
> -	for (auto const &[stream, buffer] : request->buffers()) {
> -		bool found = false;
> -		/* map buffer and fill test patterns */
> -		for (auto &streamConfig : data->streamConfigs_) {
> -			if (stream == &streamConfig.stream) {
> -				FrameMetadata &fmd = buffer->_d()->metadata();
> -
> -				fmd.status = FrameMetadata::Status::FrameSuccess;
> -				fmd.sequence = streamConfig.seq++;
> -				fmd.timestamp = timestamp;
> -
> -				for (const auto [i, p] : utils::enumerate(buffer->planes()))
> -					fmd.planes()[i].bytesused = p.length;
> -
> -				found = true;
> -
> -				if (streamConfig.frameGenerator->generateFrame(
> -					    stream->configuration().size, buffer))
> -					fmd.status = FrameMetadata::Status::FrameError;
> -
> -				completeBuffer(request, buffer);
> -				break;
> -			}
> -		}
> -		ASSERT(found);
> -	}
> -
>  	request->metadata().set(controls::SensorTimestamp, timestamp);
> -	completeRequest(request);
> +	data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
>  
>  	return 0;
>  }
> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
> index 683cb82b4..0832fd80c 100644
> --- a/src/libcamera/pipeline/virtual/virtual.h
> +++ b/src/libcamera/pipeline/virtual/virtual.h
> @@ -11,6 +11,7 @@
>  #include <variant>
>  #include <vector>
>  
> +#include <libcamera/base/thread.h>

You should also include object.h.

>  #include <libcamera/geometry.h>
>  #include <libcamera/stream.h>
>  
> @@ -25,7 +26,9 @@ namespace libcamera {
>  
>  using VirtualFrame = std::variant<TestPattern, ImageFrames>;
>  
> -class VirtualCameraData : public Camera::Private
> +class VirtualCameraData : public Camera::Private,
> +			  public Thread,
> +			  public Object
>  {
>  public:
>  	const static unsigned int kMaxStream = 3;
> @@ -54,9 +57,12 @@ public:
>  
>  	~VirtualCameraData() = default;
>  
> +	void queueRequest(Request *request);
> +
>  	Configuration config_;
>  
>  	std::vector<StreamConfig> streamConfigs_;
> +	Signal<Request *, FrameBuffer *> bufferCompleted;
>  };
>  
>  } /* namespace libcamera */
Barnabás Pőcze Aug. 11, 2025, 5:51 p.m. UTC | #2
Hi

2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
> Hi Barnabás,
> 
> Thank you for the patch.
> 
> On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
>> Currently the virtual pipeline generates the images synchronously. This is not
>> ideal because it blocks the camera manager's internal thread, and because its
>> behaviour is different from other existing pipeline handlers, all of which
>> complete requests asynchronously.
>>
>> So move the image generation to a separate thread by deriving `VirtualCameraData`
>> from `Thread`, as well as `Object` and using the existing asynchronous signal
>> and method call mechanism.
>>
>> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
>> ---
>>   src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
>>   src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
>>   2 files changed, 58 insertions(+), 30 deletions(-)
>>
>> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
>> index 049ebcba5..2ad32ec0a 100644
>> --- a/src/libcamera/pipeline/virtual/virtual.cpp
>> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
>> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>>   
>>   	/* \todo Support multiple streams and pass multi_stream_test */
>>   	streamConfigs_.resize(kMaxStream);
>> +
>> +	moveToThread(this);
>> +}
>> +
>> +void VirtualCameraData::queueRequest(Request *request)
>> +{
>> +	for (auto const &[stream, buffer] : request->buffers()) {
>> +		bool found = false;
>> +		/* map buffer and fill test patterns */
>> +		for (auto &streamConfig : streamConfigs_) {
>> +			if (stream == &streamConfig.stream) {
>> +				FrameMetadata &fmd = buffer->_d()->metadata();
>> +
>> +				fmd.status = FrameMetadata::Status::FrameSuccess;
>> +				fmd.sequence = streamConfig.seq++;
>> +				fmd.timestamp = currentTimestamp();
>> +
>> +				for (const auto [i, p] : utils::enumerate(buffer->planes()))
>> +					fmd.planes()[i].bytesused = p.length;
>> +
>> +				found = true;
>> +
>> +				if (streamConfig.frameGenerator->generateFrame(
>> +					    stream->configuration().size, buffer))
>> +					fmd.status = FrameMetadata::Status::FrameError;
>> +
>> +				bufferCompleted.emit(request, buffer);
>> +				break;
>> +			}
>> +		}
>> +		ASSERT(found);
>> +	}
>>   }
>>   
>>   VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
>> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
>>   	for (auto &s : data->streamConfigs_)
>>   		s.seq = 0;
>>   
>> +	data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
>> +		if (completeBuffer(request, buffer))
>> +			completeRequest(request);
>> +	});
> 
> I'm not a big fan of lambdas in such cases, I find they hinder
> readability compared to member functions. The the PipelineHandlerVirtual
> class is not exposed outside of the pipeline handler, adding a private
> member function shouldn't be a big issue.

Interesting, I have the exact opposite view. I find that member functions
usually involve more work and they don't allow for the same level of "scoping"
as lambdas.


> 
>> +
>> +	data->start();
>> +
>>   	return 0;
>>   }
>>   
>> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
>> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
>>   {
>> +	VirtualCameraData *data = cameraData(camera);
>> +
>> +	/* Flush all work. */
>> +	data->invokeMethod([] { }, ConnectionTypeBlocking);
> 
> Does this mean the Thread class API is lacking ? In particular, I don't
> see a similar call in the SoftwareIsp::stop() function. Is it needed
> here because we rely solely on the queue of messages to keep track the
> requests ? If so, I think we should keep track of the queued requests in
> the VirtualCameraData class, and cancel all requests not processed after

It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.

There is a need for flushing/cancelling asynchronous method invocations queued on
the worker thread(s), otherwise they will be executed when the thread is restarted
(or am I missing something that prevents this?). So we need a `cancelMessages()`
function in `Thread` then, it seems?


> the thread exits. This will make stopping the camera faster, and will
> also mimick better the behaviour of hardware cameras.
> 
> I know it's a bit more work, hopefully it's just freshening up the yak
> and not fully shaving it :-)
> 
>> +	data->exit();
>> +	data->wait();
>> +
>> +	/* Process queued `bufferCompleted` signal emissions. */
>> +	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
>> +	data->bufferCompleted.disconnect(this);
>>   }
>>   
>>   int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>   	VirtualCameraData *data = cameraData(camera);
>>   	const auto timestamp = currentTimestamp();
>>   
>> -	for (auto const &[stream, buffer] : request->buffers()) {
>> -		bool found = false;
>> -		/* map buffer and fill test patterns */
>> -		for (auto &streamConfig : data->streamConfigs_) {
>> -			if (stream == &streamConfig.stream) {
>> -				FrameMetadata &fmd = buffer->_d()->metadata();
>> -
>> -				fmd.status = FrameMetadata::Status::FrameSuccess;
>> -				fmd.sequence = streamConfig.seq++;
>> -				fmd.timestamp = timestamp;
>> -
>> -				for (const auto [i, p] : utils::enumerate(buffer->planes()))
>> -					fmd.planes()[i].bytesused = p.length;
>> -
>> -				found = true;
>> -
>> -				if (streamConfig.frameGenerator->generateFrame(
>> -					    stream->configuration().size, buffer))
>> -					fmd.status = FrameMetadata::Status::FrameError;
>> -
>> -				completeBuffer(request, buffer);
>> -				break;
>> -			}
>> -		}
>> -		ASSERT(found);
>> -	}
>> -
>>   	request->metadata().set(controls::SensorTimestamp, timestamp);
>> -	completeRequest(request);
>> +	data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
>>   
>>   	return 0;
>>   }
>> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
>> index 683cb82b4..0832fd80c 100644
>> --- a/src/libcamera/pipeline/virtual/virtual.h
>> +++ b/src/libcamera/pipeline/virtual/virtual.h
>> @@ -11,6 +11,7 @@
>>   #include <variant>
>>   #include <vector>
>>   
>> +#include <libcamera/base/thread.h>
> 
> You should also include object.h.
> 
>>   #include <libcamera/geometry.h>
>>   #include <libcamera/stream.h>
>>   
>> @@ -25,7 +26,9 @@ namespace libcamera {
>>   
>>   using VirtualFrame = std::variant<TestPattern, ImageFrames>;
>>   
>> -class VirtualCameraData : public Camera::Private
>> +class VirtualCameraData : public Camera::Private,
>> +			  public Thread,
>> +			  public Object
>>   {
>>   public:
>>   	const static unsigned int kMaxStream = 3;
>> @@ -54,9 +57,12 @@ public:
>>   
>>   	~VirtualCameraData() = default;
>>   
>> +	void queueRequest(Request *request);
>> +
>>   	Configuration config_;
>>   
>>   	std::vector<StreamConfig> streamConfigs_;
>> +	Signal<Request *, FrameBuffer *> bufferCompleted;
>>   };
>>   
>>   } /* namespace libcamera */
>
Laurent Pinchart Aug. 11, 2025, 10:47 p.m. UTC | #3
Hi Barnabás,

On Mon, Aug 11, 2025 at 07:51:21PM +0200, Barnabás Pőcze wrote:
> 2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
> > On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
> >> Currently the virtual pipeline generates the images synchronously. This is not
> >> ideal because it blocks the camera manager's internal thread, and because its
> >> behaviour is different from other existing pipeline handlers, all of which
> >> complete requests asynchronously.
> >>
> >> So move the image generation to a separate thread by deriving `VirtualCameraData`
> >> from `Thread`, as well as `Object` and using the existing asynchronous signal
> >> and method call mechanism.
> >>
> >> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
> >> ---
> >>   src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
> >>   src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
> >>   2 files changed, 58 insertions(+), 30 deletions(-)
> >>
> >> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
> >> index 049ebcba5..2ad32ec0a 100644
> >> --- a/src/libcamera/pipeline/virtual/virtual.cpp
> >> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
> >> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
> >>   
> >>   	/* \todo Support multiple streams and pass multi_stream_test */
> >>   	streamConfigs_.resize(kMaxStream);
> >> +
> >> +	moveToThread(this);
> >> +}
> >> +
> >> +void VirtualCameraData::queueRequest(Request *request)
> >> +{
> >> +	for (auto const &[stream, buffer] : request->buffers()) {
> >> +		bool found = false;
> >> +		/* map buffer and fill test patterns */
> >> +		for (auto &streamConfig : streamConfigs_) {
> >> +			if (stream == &streamConfig.stream) {
> >> +				FrameMetadata &fmd = buffer->_d()->metadata();
> >> +
> >> +				fmd.status = FrameMetadata::Status::FrameSuccess;
> >> +				fmd.sequence = streamConfig.seq++;
> >> +				fmd.timestamp = currentTimestamp();
> >> +
> >> +				for (const auto [i, p] : utils::enumerate(buffer->planes()))
> >> +					fmd.planes()[i].bytesused = p.length;
> >> +
> >> +				found = true;
> >> +
> >> +				if (streamConfig.frameGenerator->generateFrame(
> >> +					    stream->configuration().size, buffer))
> >> +					fmd.status = FrameMetadata::Status::FrameError;
> >> +
> >> +				bufferCompleted.emit(request, buffer);
> >> +				break;
> >> +			}
> >> +		}
> >> +		ASSERT(found);
> >> +	}
> >>   }
> >>   
> >>   VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
> >> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
> >>   	for (auto &s : data->streamConfigs_)
> >>   		s.seq = 0;
> >>   
> >> +	data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
> >> +		if (completeBuffer(request, buffer))
> >> +			completeRequest(request);
> >> +	});
> > 
> > I'm not a big fan of lambdas in such cases, I find they hinder
> > readability compared to member functions. The the PipelineHandlerVirtual
> > class is not exposed outside of the pipeline handler, adding a private
> > member function shouldn't be a big issue.
> 
> Interesting, I have the exact opposite view. I find that member functions
> usually involve more work and they don't allow for the same level of "scoping"
> as lambdas.

For me it's on a case-by-case basis. I really like lambda as adapters
between two APIs. That's particularly useful when there's a small
impedance mismatch between signals and slots. On the other hand, when
the slot needs to implement a more complex logic that belongs to the
receiver of the signal, I think member functions are better.

> >> +
> >> +	data->start();
> >> +
> >>   	return 0;
> >>   }
> >>   
> >> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
> >> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
> >>   {
> >> +	VirtualCameraData *data = cameraData(camera);
> >> +
> >> +	/* Flush all work. */
> >> +	data->invokeMethod([] { }, ConnectionTypeBlocking);
> > 
> > Does this mean the Thread class API is lacking ? In particular, I don't
> > see a similar call in the SoftwareIsp::stop() function. Is it needed
> > here because we rely solely on the queue of messages to keep track the
> > requests ? If so, I think we should keep track of the queued requests in
> > the VirtualCameraData class, and cancel all requests not processed after
> 
> It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.
> 
> There is a need for flushing/cancelling asynchronous method invocations queued on
> the worker thread(s), otherwise they will be executed when the thread is restarted
> (or am I missing something that prevents this?). So we need a `cancelMessages()`
> function in `Thread` then, it seems?

We have Thread::removeMessages() that could possibly be useful, but it
has to be called from the thread being stopped.

We use threads in two places: IPA modules, and software ISP. For IPA
modules I think we're safe as the stop function calls

	proxy_.invokeMethod(&ThreadProxy::stop, ConnectionTypeBlocking);

before stopping the thread. The software ISP, on the other hand, doesn't
have such a blocking call. I wonder if it could be affected by the
problem you describe. I remember we've fixed a similar issue in the
past. Digging in the history, the following commit is related:

commit 86ffaf936dc663afd48bd3e276134fa720210bd6
Author: Milan Zamazal <mzamazal@redhat.com>
Date:   Tue Feb 25 16:06:11 2025 +0100

    libcamera: software_isp: Dispatch messages on stop

This is however not the same issue, that commit addresses messages sent
by the thread, like you do below.

So yes, it looks like both the software ISP and the virtual pipeline
handler will need to use removeMessages(), or something similar. It's
getting a bit too late to think about how the right API should look
like.

> > the thread exits. This will make stopping the camera faster, and will
> > also mimick better the behaviour of hardware cameras.
> > 
> > I know it's a bit more work, hopefully it's just freshening up the yak
> > and not fully shaving it :-)
> > 
> >> +	data->exit();
> >> +	data->wait();
> >> +
> >> +	/* Process queued `bufferCompleted` signal emissions. */
> >> +	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
> >> +	data->bufferCompleted.disconnect(this);
> >>   }
> >>   
> >>   int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
> >> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
> >>   	VirtualCameraData *data = cameraData(camera);
> >>   	const auto timestamp = currentTimestamp();
> >>   
> >> -	for (auto const &[stream, buffer] : request->buffers()) {
> >> -		bool found = false;
> >> -		/* map buffer and fill test patterns */
> >> -		for (auto &streamConfig : data->streamConfigs_) {
> >> -			if (stream == &streamConfig.stream) {
> >> -				FrameMetadata &fmd = buffer->_d()->metadata();
> >> -
> >> -				fmd.status = FrameMetadata::Status::FrameSuccess;
> >> -				fmd.sequence = streamConfig.seq++;
> >> -				fmd.timestamp = timestamp;
> >> -
> >> -				for (const auto [i, p] : utils::enumerate(buffer->planes()))
> >> -					fmd.planes()[i].bytesused = p.length;
> >> -
> >> -				found = true;
> >> -
> >> -				if (streamConfig.frameGenerator->generateFrame(
> >> -					    stream->configuration().size, buffer))
> >> -					fmd.status = FrameMetadata::Status::FrameError;
> >> -
> >> -				completeBuffer(request, buffer);
> >> -				break;
> >> -			}
> >> -		}
> >> -		ASSERT(found);
> >> -	}
> >> -
> >>   	request->metadata().set(controls::SensorTimestamp, timestamp);
> >> -	completeRequest(request);
> >> +	data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
> >>   
> >>   	return 0;
> >>   }
> >> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
> >> index 683cb82b4..0832fd80c 100644
> >> --- a/src/libcamera/pipeline/virtual/virtual.h
> >> +++ b/src/libcamera/pipeline/virtual/virtual.h
> >> @@ -11,6 +11,7 @@
> >>   #include <variant>
> >>   #include <vector>
> >>   
> >> +#include <libcamera/base/thread.h>
> > 
> > You should also include object.h.
> > 
> >>   #include <libcamera/geometry.h>
> >>   #include <libcamera/stream.h>
> >>   
> >> @@ -25,7 +26,9 @@ namespace libcamera {
> >>   
> >>   using VirtualFrame = std::variant<TestPattern, ImageFrames>;
> >>   
> >> -class VirtualCameraData : public Camera::Private
> >> +class VirtualCameraData : public Camera::Private,
> >> +			  public Thread,
> >> +			  public Object
> >>   {
> >>   public:
> >>   	const static unsigned int kMaxStream = 3;
> >> @@ -54,9 +57,12 @@ public:
> >>   
> >>   	~VirtualCameraData() = default;
> >>   
> >> +	void queueRequest(Request *request);
> >> +
> >>   	Configuration config_;
> >>   
> >>   	std::vector<StreamConfig> streamConfigs_;
> >> +	Signal<Request *, FrameBuffer *> bufferCompleted;
> >>   };
> >>   
> >>   } /* namespace libcamera */
Barnabás Pőcze Aug. 12, 2025, 7:16 a.m. UTC | #4
Hi

2025. 08. 12. 0:47 keltezéssel, Laurent Pinchart írta:
> Hi Barnabás,
> 
> On Mon, Aug 11, 2025 at 07:51:21PM +0200, Barnabás Pőcze wrote:
>> 2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
>>> On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
>>>> Currently the virtual pipeline generates the images synchronously. This is not
>>>> ideal because it blocks the camera manager's internal thread, and because its
>>>> behaviour is different from other existing pipeline handlers, all of which
>>>> complete requests asynchronously.
>>>>
>>>> So move the image generation to a separate thread by deriving `VirtualCameraData`
>>>> from `Thread`, as well as `Object` and using the existing asynchronous signal
>>>> and method call mechanism.
>>>>
>>>> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
>>>> ---
>>>>    src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
>>>>    src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
>>>>    2 files changed, 58 insertions(+), 30 deletions(-)
>>>>
>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
>>>> index 049ebcba5..2ad32ec0a 100644
>>>> --- a/src/libcamera/pipeline/virtual/virtual.cpp
>>>> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
>>>> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>>>>    
>>>>    	/* \todo Support multiple streams and pass multi_stream_test */
>>>>    	streamConfigs_.resize(kMaxStream);
>>>> +
>>>> +	moveToThread(this);
>>>> +}
>>>> +
>>>> +void VirtualCameraData::queueRequest(Request *request)
>>>> +{
>>>> +	for (auto const &[stream, buffer] : request->buffers()) {
>>>> +		bool found = false;
>>>> +		/* map buffer and fill test patterns */
>>>> +		for (auto &streamConfig : streamConfigs_) {
>>>> +			if (stream == &streamConfig.stream) {
>>>> +				FrameMetadata &fmd = buffer->_d()->metadata();
>>>> +
>>>> +				fmd.status = FrameMetadata::Status::FrameSuccess;
>>>> +				fmd.sequence = streamConfig.seq++;
>>>> +				fmd.timestamp = currentTimestamp();
>>>> +
>>>> +				for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>> +					fmd.planes()[i].bytesused = p.length;
>>>> +
>>>> +				found = true;
>>>> +
>>>> +				if (streamConfig.frameGenerator->generateFrame(
>>>> +					    stream->configuration().size, buffer))
>>>> +					fmd.status = FrameMetadata::Status::FrameError;
>>>> +
>>>> +				bufferCompleted.emit(request, buffer);
>>>> +				break;
>>>> +			}
>>>> +		}
>>>> +		ASSERT(found);
>>>> +	}
>>>>    }
>>>>    
>>>>    VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
>>>> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
>>>>    	for (auto &s : data->streamConfigs_)
>>>>    		s.seq = 0;
>>>>    
>>>> +	data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
>>>> +		if (completeBuffer(request, buffer))
>>>> +			completeRequest(request);
>>>> +	});
>>>
>>> I'm not a big fan of lambdas in such cases, I find they hinder
>>> readability compared to member functions. The the PipelineHandlerVirtual
>>> class is not exposed outside of the pipeline handler, adding a private
>>> member function shouldn't be a big issue.
>>
>> Interesting, I have the exact opposite view. I find that member functions
>> usually involve more work and they don't allow for the same level of "scoping"
>> as lambdas.
> 
> For me it's on a case-by-case basis. I really like lambda as adapters
> between two APIs. That's particularly useful when there's a small
> impedance mismatch between signals and slots. On the other hand, when
> the slot needs to implement a more complex logic that belongs to the
> receiver of the signal, I think member functions are better.
> 
>>>> +
>>>> +	data->start();
>>>> +
>>>>    	return 0;
>>>>    }
>>>>    
>>>> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
>>>> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
>>>>    {
>>>> +	VirtualCameraData *data = cameraData(camera);
>>>> +
>>>> +	/* Flush all work. */
>>>> +	data->invokeMethod([] { }, ConnectionTypeBlocking);
>>>
>>> Does this mean the Thread class API is lacking ? In particular, I don't
>>> see a similar call in the SoftwareIsp::stop() function. Is it needed
>>> here because we rely solely on the queue of messages to keep track the
>>> requests ? If so, I think we should keep track of the queued requests in
>>> the VirtualCameraData class, and cancel all requests not processed after
>>
>> It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.
>>
>> There is a need for flushing/cancelling asynchronous method invocations queued on
>> the worker thread(s), otherwise they will be executed when the thread is restarted
>> (or am I missing something that prevents this?). So we need a `cancelMessages()`
>> function in `Thread` then, it seems?
> 
> We have Thread::removeMessages() that could possibly be useful, but it
> has to be called from the thread being stopped.

Looking at the implementation I think I would have assumed that it can be
called from a different thread since it uses locks and everything.
I suppose the current issue is that it is `private`.


> 
> We use threads in two places: IPA modules, and software ISP. For IPA
> modules I think we're safe as the stop function calls
> 
> 	proxy_.invokeMethod(&ThreadProxy::stop, ConnectionTypeBlocking);
> 
> before stopping the thread. The software ISP, on the other hand, doesn't

Yes, that is equivalent to flushing all the invoke messages, which is what
this version proposes, but "stop()" here is a no-op, hence the empty lambda.


> have such a blocking call. I wonder if it could be affected by the
> problem you describe. I remember we've fixed a similar issue in the
> past. Digging in the history, the following commit is related:

I briefly looked at it yesterday, and my assumption is yes. I think the `SoftwareIsp`
class exhibits the same pattern as the virtual pipeline handler. If we consider
e.g. `SoftwareIsp::process()`, there is an async call:

   debayer_->invokeMethod(&DebayerCpu::process,
                          ConnectionTypeQueued, frame, input, output, debayerParams_);

but then `SoftwareIsp::stop()`:

   	ispWorkerThread_.exit();
	ispWorkerThread_.wait();

	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);

	ipa_->stop();

	for (auto buffer : queuedOutputBuffers_) {
		FrameMetadata &metadata = buffer->_d()->metadata();
		metadata.status = FrameMetadata::FrameCancelled;
		outputBufferReady.emit(buffer);
	}
	queuedOutputBuffers_.clear();

	for (auto buffer : queuedInputBuffers_) {
		FrameMetadata &metadata = buffer->_d()->metadata();
		metadata.status = FrameMetadata::FrameCancelled;
		inputBufferReady.emit(buffer);
	}
	queuedInputBuffers_.clear();

does not seem to cancel/flush the invoke messages on `ispWorkerThread_`, to which
the `*debayer_` object is bound. So it seems to me that such messages might
remain in the queue of `ispWorkerThread_`. But I will check this later.


> 
> commit 86ffaf936dc663afd48bd3e276134fa720210bd6
> Author: Milan Zamazal <mzamazal@redhat.com>
> Date:   Tue Feb 25 16:06:11 2025 +0100
> 
>      libcamera: software_isp: Dispatch messages on stop
> 
> This is however not the same issue, that commit addresses messages sent
> by the thread, like you do below.
> 
> So yes, it looks like both the software ISP and the virtual pipeline
> handler will need to use removeMessages(), or something similar. It's
> getting a bit too late to think about how the right API should look
> like.

My first idea was to have `cancelMessages(Message::Type, Object * = nullptr)`
similarly to `dispatchMessages()`.



> 
>>> the thread exits. This will make stopping the camera faster, and will
>>> also mimick better the behaviour of hardware cameras.
>>>
>>> I know it's a bit more work, hopefully it's just freshening up the yak
>>> and not fully shaving it :-)
>>>
>>>> +	data->exit();
>>>> +	data->wait();
>>>> +
>>>> +	/* Process queued `bufferCompleted` signal emissions. */
>>>> +	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
>>>> +	data->bufferCompleted.disconnect(this);
>>>>    }
>>>>    
>>>>    int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>>    	VirtualCameraData *data = cameraData(camera);
>>>>    	const auto timestamp = currentTimestamp();
>>>>    
>>>> -	for (auto const &[stream, buffer] : request->buffers()) {
>>>> -		bool found = false;
>>>> -		/* map buffer and fill test patterns */
>>>> -		for (auto &streamConfig : data->streamConfigs_) {
>>>> -			if (stream == &streamConfig.stream) {
>>>> -				FrameMetadata &fmd = buffer->_d()->metadata();
>>>> -
>>>> -				fmd.status = FrameMetadata::Status::FrameSuccess;
>>>> -				fmd.sequence = streamConfig.seq++;
>>>> -				fmd.timestamp = timestamp;
>>>> -
>>>> -				for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>> -					fmd.planes()[i].bytesused = p.length;
>>>> -
>>>> -				found = true;
>>>> -
>>>> -				if (streamConfig.frameGenerator->generateFrame(
>>>> -					    stream->configuration().size, buffer))
>>>> -					fmd.status = FrameMetadata::Status::FrameError;
>>>> -
>>>> -				completeBuffer(request, buffer);
>>>> -				break;
>>>> -			}
>>>> -		}
>>>> -		ASSERT(found);
>>>> -	}
>>>> -
>>>>    	request->metadata().set(controls::SensorTimestamp, timestamp);
>>>> -	completeRequest(request);
>>>> +	data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
>>>>    
>>>>    	return 0;
>>>>    }
>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
>>>> index 683cb82b4..0832fd80c 100644
>>>> --- a/src/libcamera/pipeline/virtual/virtual.h
>>>> +++ b/src/libcamera/pipeline/virtual/virtual.h
>>>> @@ -11,6 +11,7 @@
>>>>    #include <variant>
>>>>    #include <vector>
>>>>    
>>>> +#include <libcamera/base/thread.h>
>>>
>>> You should also include object.h.
>>>
>>>>    #include <libcamera/geometry.h>
>>>>    #include <libcamera/stream.h>
>>>>    
>>>> @@ -25,7 +26,9 @@ namespace libcamera {
>>>>    
>>>>    using VirtualFrame = std::variant<TestPattern, ImageFrames>;
>>>>    
>>>> -class VirtualCameraData : public Camera::Private
>>>> +class VirtualCameraData : public Camera::Private,
>>>> +			  public Thread,
>>>> +			  public Object
>>>>    {
>>>>    public:
>>>>    	const static unsigned int kMaxStream = 3;
>>>> @@ -54,9 +57,12 @@ public:
>>>>    
>>>>    	~VirtualCameraData() = default;
>>>>    
>>>> +	void queueRequest(Request *request);
>>>> +
>>>>    	Configuration config_;
>>>>    
>>>>    	std::vector<StreamConfig> streamConfigs_;
>>>> +	Signal<Request *, FrameBuffer *> bufferCompleted;
>>>>    };
>>>>    
>>>>    } /* namespace libcamera */
>
Barnabás Pőcze Aug. 12, 2025, 1:17 p.m. UTC | #5
2025. 08. 12. 9:16 keltezéssel, Barnabás Pőcze írta:
> Hi
> 
> 2025. 08. 12. 0:47 keltezéssel, Laurent Pinchart írta:
>> Hi Barnabás,
>>
>> On Mon, Aug 11, 2025 at 07:51:21PM +0200, Barnabás Pőcze wrote:
>>> 2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
>>>> On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
>>>>> Currently the virtual pipeline generates the images synchronously. This is not
>>>>> ideal because it blocks the camera manager's internal thread, and because its
>>>>> behaviour is different from other existing pipeline handlers, all of which
>>>>> complete requests asynchronously.
>>>>>
>>>>> So move the image generation to a separate thread by deriving `VirtualCameraData`
>>>>> from `Thread`, as well as `Object` and using the existing asynchronous signal
>>>>> and method call mechanism.
>>>>>
>>>>> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
>>>>> ---
>>>>>    src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
>>>>>    src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
>>>>>    2 files changed, 58 insertions(+), 30 deletions(-)
>>>>>
>>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
>>>>> index 049ebcba5..2ad32ec0a 100644
>>>>> --- a/src/libcamera/pipeline/virtual/virtual.cpp
>>>>> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
>>>>> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>>>>>        /* \todo Support multiple streams and pass multi_stream_test */
>>>>>        streamConfigs_.resize(kMaxStream);
>>>>> +
>>>>> +    moveToThread(this);
>>>>> +}
>>>>> +
>>>>> +void VirtualCameraData::queueRequest(Request *request)
>>>>> +{
>>>>> +    for (auto const &[stream, buffer] : request->buffers()) {
>>>>> +        bool found = false;
>>>>> +        /* map buffer and fill test patterns */
>>>>> +        for (auto &streamConfig : streamConfigs_) {
>>>>> +            if (stream == &streamConfig.stream) {
>>>>> +                FrameMetadata &fmd = buffer->_d()->metadata();
>>>>> +
>>>>> +                fmd.status = FrameMetadata::Status::FrameSuccess;
>>>>> +                fmd.sequence = streamConfig.seq++;
>>>>> +                fmd.timestamp = currentTimestamp();
>>>>> +
>>>>> +                for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>>> +                    fmd.planes()[i].bytesused = p.length;
>>>>> +
>>>>> +                found = true;
>>>>> +
>>>>> +                if (streamConfig.frameGenerator->generateFrame(
>>>>> +                        stream->configuration().size, buffer))
>>>>> +                    fmd.status = FrameMetadata::Status::FrameError;
>>>>> +
>>>>> +                bufferCompleted.emit(request, buffer);
>>>>> +                break;
>>>>> +            }
>>>>> +        }
>>>>> +        ASSERT(found);
>>>>> +    }
>>>>>    }
>>>>>    VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
>>>>> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
>>>>>        for (auto &s : data->streamConfigs_)
>>>>>            s.seq = 0;
>>>>> +    data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
>>>>> +        if (completeBuffer(request, buffer))
>>>>> +            completeRequest(request);
>>>>> +    });
>>>>
>>>> I'm not a big fan of lambdas in such cases, I find they hinder
>>>> readability compared to member functions. The the PipelineHandlerVirtual
>>>> class is not exposed outside of the pipeline handler, adding a private
>>>> member function shouldn't be a big issue.
>>>
>>> Interesting, I have the exact opposite view. I find that member functions
>>> usually involve more work and they don't allow for the same level of "scoping"
>>> as lambdas.
>>
>> For me it's on a case-by-case basis. I really like lambda as adapters
>> between two APIs. That's particularly useful when there's a small
>> impedance mismatch between signals and slots. On the other hand, when
>> the slot needs to implement a more complex logic that belongs to the
>> receiver of the signal, I think member functions are better.
>>
>>>>> +
>>>>> +    data->start();
>>>>> +
>>>>>        return 0;
>>>>>    }
>>>>> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
>>>>> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
>>>>>    {
>>>>> +    VirtualCameraData *data = cameraData(camera);
>>>>> +
>>>>> +    /* Flush all work. */
>>>>> +    data->invokeMethod([] { }, ConnectionTypeBlocking);
>>>>
>>>> Does this mean the Thread class API is lacking ? In particular, I don't
>>>> see a similar call in the SoftwareIsp::stop() function. Is it needed
>>>> here because we rely solely on the queue of messages to keep track the
>>>> requests ? If so, I think we should keep track of the queued requests in
>>>> the VirtualCameraData class, and cancel all requests not processed after
>>>
>>> It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.
>>>
>>> There is a need for flushing/cancelling asynchronous method invocations queued on
>>> the worker thread(s), otherwise they will be executed when the thread is restarted
>>> (or am I missing something that prevents this?). So we need a `cancelMessages()`
>>> function in `Thread` then, it seems?
>>
>> We have Thread::removeMessages() that could possibly be useful, but it
>> has to be called from the thread being stopped.
> 
> Looking at the implementation I think I would have assumed that it can be
> called from a different thread since it uses locks and everything.
> I suppose the current issue is that it is `private`.
> 
> 
>>
>> We use threads in two places: IPA modules, and software ISP. For IPA
>> modules I think we're safe as the stop function calls
>>
>>     proxy_.invokeMethod(&ThreadProxy::stop, ConnectionTypeBlocking);
>>
>> before stopping the thread. The software ISP, on the other hand, doesn't
> 
> Yes, that is equivalent to flushing all the invoke messages, which is what
> this version proposes, but "stop()" here is a no-op, hence the empty lambda.
> 
> 
>> have such a blocking call. I wonder if it could be affected by the
>> problem you describe. I remember we've fixed a similar issue in the
>> past. Digging in the history, the following commit is related:
> 
> I briefly looked at it yesterday, and my assumption is yes. I think the `SoftwareIsp`
> class exhibits the same pattern as the virtual pipeline handler. If we consider
> e.g. `SoftwareIsp::process()`, there is an async call:
> 
>    debayer_->invokeMethod(&DebayerCpu::process,
>                           ConnectionTypeQueued, frame, input, output, debayerParams_);
> 
> but then `SoftwareIsp::stop()`:
> 
>        ispWorkerThread_.exit();
>      ispWorkerThread_.wait();
> 
>      Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
> 
>      ipa_->stop();
> 
>      for (auto buffer : queuedOutputBuffers_) {
>          FrameMetadata &metadata = buffer->_d()->metadata();
>          metadata.status = FrameMetadata::FrameCancelled;
>          outputBufferReady.emit(buffer);
>      }
>      queuedOutputBuffers_.clear();
> 
>      for (auto buffer : queuedInputBuffers_) {
>          FrameMetadata &metadata = buffer->_d()->metadata();
>          metadata.status = FrameMetadata::FrameCancelled;
>          inputBufferReady.emit(buffer);
>      }
>      queuedInputBuffers_.clear();
> 
> does not seem to cancel/flush the invoke messages on `ispWorkerThread_`, to which
> the `*debayer_` object is bound. So it seems to me that such messages might
> remain in the queue of `ispWorkerThread_`. But I will check this later.

Unfortunately it took more time than expected, but with judicious use of `scheduler-locking`
and breakpoints, I was able to trigger exactly that scenario. Some invoke messages
remain in the thread's message queue after `SoftwareIsp::stop()` returns. The window
of opportunity seems quite small, but definitely possible.

It seems to me that the shutdown procedure is not robust enough. For example, in this
state there is an ubsan warning:

   ../src/libcamera/request.cpp:107:25: runtime error: load of value 3200171710, which is not a valid value for type 'Status'

when it completes buffers in `SimpleCameraData::imageBufferReady()`:

   const RequestOutputs &outputs = conversionQueue_.front();
   for (auto &[stream, buf] : outputs.outputs)
     pipe->completeBuffer(outputs.request, buf);

as the buffer metadata remains uninitialized:

(gdb) p buffer->metadata()
$32 = (const libcamera::FrameMetadata &) @0x7cefedc0ff68: {
   status = 3200171710,
   sequence = 3200171710,
   timestamp = 13744632839234567870,
   planes_ = std::__debug::vector of length 1, capacity 1 = {{
       bytesused = 0
     }}
}

and it also does not complete all requests:

387		ASSERT(data->queuedRequests_.empty());
(gdb) p data->queuedRequests_
$35 = std::__debug::list = {
   [0] = 0x7cafedbe0e00,
   [1] = 0x7cafedbe0f60,
   [2] = 0x7cafedbe10c0,
   [3] = 0x7cafedbe1220
}
(gdb) n
[4:37:02.698049276] [7158] FATAL default pipeline_handler.cpp:387 assertion "data->queuedRequests_.empty()" failed in stop()


After the assertion reaches __pthread_kill_implementation():


(gdb) p ((SimpleCameraData *)data)->swIsp_->ispWorkerThread_->data_->messages_.list_
$40 = std::__debug::list = {
   [0] = std::unique_ptr<libcamera::Message> = {
     get() = 0x7c5fedc06540
   },
   [1] = std::unique_ptr<libcamera::Message> = {
     get() = 0x7c5fedc06600
   },
   [2] = std::unique_ptr<libcamera::Message> = {
     get() = 0x7c5fedc066c0
   }
}
(gdb) p *((libcamera::Message*)0x7c5fedc06540)
$46 = {
   _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
   type_ = libcamera::Message::InvokeMessage,
   receiver_ = 0x7e2fedbe1d68,
   static nextUserType_ = std::atomic<unsigned int> = { 1000 }
}
(gdb) p *((libcamera::Message*)0x7c5fedc06600)
$47 = {
   _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
   type_ = libcamera::Message::InvokeMessage,
   receiver_ = 0x7e2fedbe1d68,
   static nextUserType_ = std::atomic<unsigned int> = { 1000 }
}
(gdb) p *((libcamera::Message*)0x7c5fedc066c0)
$48 = {
   _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
   type_ = libcamera::Message::InvokeMessage,
   receiver_ = 0x7e2fedbe1d68,
   static nextUserType_ = std::atomic<unsigned int> = { 1000 }
}

Note that the `receiver_` is the same:

(gdb) p *((libcamera::Message*)0x7c5fedc066c0)->receiver_
$49 = {
   _vptr.Object = 0x7ffff4bdb3f8 <vtable for libcamera::DebayerCpu+96>,
   parent_ = 0x0,
   children_ = std::__debug::vector of length 0, capacity 0,
   thread_ = 0x7e2fedbe0280,
   signals_ = empty std::__debug::list,
   pendingMessages_ = 3
}


So should that camera ever be restarted, those messages would be dispatched,
reusing old requests and arguments.

I think there are multiple issues here: first, the one we have suspected, that
the `ispWorkerThread_` is not appropriately flushed/cancelled; and second, the
shutdown procedure does not seem robust enough to me if it runs into an assertion
in this state.


> 
> 
>>
>> commit 86ffaf936dc663afd48bd3e276134fa720210bd6
>> Author: Milan Zamazal <mzamazal@redhat.com>
>> Date:   Tue Feb 25 16:06:11 2025 +0100
>>
>>      libcamera: software_isp: Dispatch messages on stop
>>
>> This is however not the same issue, that commit addresses messages sent
>> by the thread, like you do below.
>>
>> So yes, it looks like both the software ISP and the virtual pipeline
>> handler will need to use removeMessages(), or something similar. It's
>> getting a bit too late to think about how the right API should look
>> like.
> 
> My first idea was to have `cancelMessages(Message::Type, Object * = nullptr)`
> similarly to `dispatchMessages()`.
> 
> 
> 
>>
>>>> the thread exits. This will make stopping the camera faster, and will
>>>> also mimick better the behaviour of hardware cameras.
>>>>
>>>> I know it's a bit more work, hopefully it's just freshening up the yak
>>>> and not fully shaving it :-)
>>>>
>>>>> +    data->exit();
>>>>> +    data->wait();
>>>>> +
>>>>> +    /* Process queued `bufferCompleted` signal emissions. */
>>>>> +    Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
>>>>> +    data->bufferCompleted.disconnect(this);
>>>>>    }
>>>>>    int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>>> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>>>        VirtualCameraData *data = cameraData(camera);
>>>>>        const auto timestamp = currentTimestamp();
>>>>> -    for (auto const &[stream, buffer] : request->buffers()) {
>>>>> -        bool found = false;
>>>>> -        /* map buffer and fill test patterns */
>>>>> -        for (auto &streamConfig : data->streamConfigs_) {
>>>>> -            if (stream == &streamConfig.stream) {
>>>>> -                FrameMetadata &fmd = buffer->_d()->metadata();
>>>>> -
>>>>> -                fmd.status = FrameMetadata::Status::FrameSuccess;
>>>>> -                fmd.sequence = streamConfig.seq++;
>>>>> -                fmd.timestamp = timestamp;
>>>>> -
>>>>> -                for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>>> -                    fmd.planes()[i].bytesused = p.length;
>>>>> -
>>>>> -                found = true;
>>>>> -
>>>>> -                if (streamConfig.frameGenerator->generateFrame(
>>>>> -                        stream->configuration().size, buffer))
>>>>> -                    fmd.status = FrameMetadata::Status::FrameError;
>>>>> -
>>>>> -                completeBuffer(request, buffer);
>>>>> -                break;
>>>>> -            }
>>>>> -        }
>>>>> -        ASSERT(found);
>>>>> -    }
>>>>> -
>>>>>        request->metadata().set(controls::SensorTimestamp, timestamp);
>>>>> -    completeRequest(request);
>>>>> +    data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
>>>>>        return 0;
>>>>>    }
>>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
>>>>> index 683cb82b4..0832fd80c 100644
>>>>> --- a/src/libcamera/pipeline/virtual/virtual.h
>>>>> +++ b/src/libcamera/pipeline/virtual/virtual.h
>>>>> @@ -11,6 +11,7 @@
>>>>>    #include <variant>
>>>>>    #include <vector>
>>>>> +#include <libcamera/base/thread.h>
>>>>
>>>> You should also include object.h.
>>>>
>>>>>    #include <libcamera/geometry.h>
>>>>>    #include <libcamera/stream.h>
>>>>> @@ -25,7 +26,9 @@ namespace libcamera {
>>>>>    using VirtualFrame = std::variant<TestPattern, ImageFrames>;
>>>>> -class VirtualCameraData : public Camera::Private
>>>>> +class VirtualCameraData : public Camera::Private,
>>>>> +              public Thread,
>>>>> +              public Object
>>>>>    {
>>>>>    public:
>>>>>        const static unsigned int kMaxStream = 3;
>>>>> @@ -54,9 +57,12 @@ public:
>>>>>        ~VirtualCameraData() = default;
>>>>> +    void queueRequest(Request *request);
>>>>> +
>>>>>        Configuration config_;
>>>>>        std::vector<StreamConfig> streamConfigs_;
>>>>> +    Signal<Request *, FrameBuffer *> bufferCompleted;
>>>>>    };
>>>>>    } /* namespace libcamera */
>>
>
Laurent Pinchart Aug. 12, 2025, 2:55 p.m. UTC | #6
On Tue, Aug 12, 2025 at 03:17:43PM +0200, Barnabás Pőcze wrote:
> 2025. 08. 12. 9:16 keltezéssel, Barnabás Pőcze írta:
> > 2025. 08. 12. 0:47 keltezéssel, Laurent Pinchart írta:
> >> On Mon, Aug 11, 2025 at 07:51:21PM +0200, Barnabás Pőcze wrote:
> >>> 2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
> >>>> On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
> >>>>> Currently the virtual pipeline generates the images synchronously. This is not
> >>>>> ideal because it blocks the camera manager's internal thread, and because its
> >>>>> behaviour is different from other existing pipeline handlers, all of which
> >>>>> complete requests asynchronously.
> >>>>>
> >>>>> So move the image generation to a separate thread by deriving `VirtualCameraData`
> >>>>> from `Thread`, as well as `Object` and using the existing asynchronous signal
> >>>>> and method call mechanism.
> >>>>>
> >>>>> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
> >>>>> ---
> >>>>>    src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
> >>>>>    src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
> >>>>>    2 files changed, 58 insertions(+), 30 deletions(-)
> >>>>>
> >>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
> >>>>> index 049ebcba5..2ad32ec0a 100644
> >>>>> --- a/src/libcamera/pipeline/virtual/virtual.cpp
> >>>>> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
> >>>>> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
> >>>>>        /* \todo Support multiple streams and pass multi_stream_test */
> >>>>>        streamConfigs_.resize(kMaxStream);
> >>>>> +
> >>>>> +    moveToThread(this);
> >>>>> +}
> >>>>> +
> >>>>> +void VirtualCameraData::queueRequest(Request *request)
> >>>>> +{
> >>>>> +    for (auto const &[stream, buffer] : request->buffers()) {
> >>>>> +        bool found = false;
> >>>>> +        /* map buffer and fill test patterns */
> >>>>> +        for (auto &streamConfig : streamConfigs_) {
> >>>>> +            if (stream == &streamConfig.stream) {
> >>>>> +                FrameMetadata &fmd = buffer->_d()->metadata();
> >>>>> +
> >>>>> +                fmd.status = FrameMetadata::Status::FrameSuccess;
> >>>>> +                fmd.sequence = streamConfig.seq++;
> >>>>> +                fmd.timestamp = currentTimestamp();
> >>>>> +
> >>>>> +                for (const auto [i, p] : utils::enumerate(buffer->planes()))
> >>>>> +                    fmd.planes()[i].bytesused = p.length;
> >>>>> +
> >>>>> +                found = true;
> >>>>> +
> >>>>> +                if (streamConfig.frameGenerator->generateFrame(
> >>>>> +                        stream->configuration().size, buffer))
> >>>>> +                    fmd.status = FrameMetadata::Status::FrameError;
> >>>>> +
> >>>>> +                bufferCompleted.emit(request, buffer);
> >>>>> +                break;
> >>>>> +            }
> >>>>> +        }
> >>>>> +        ASSERT(found);
> >>>>> +    }
> >>>>>    }
> >>>>>    VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
> >>>>> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
> >>>>>        for (auto &s : data->streamConfigs_)
> >>>>>            s.seq = 0;
> >>>>> +    data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
> >>>>> +        if (completeBuffer(request, buffer))
> >>>>> +            completeRequest(request);
> >>>>> +    });
> >>>>
> >>>> I'm not a big fan of lambdas in such cases, I find they hinder
> >>>> readability compared to member functions. The the PipelineHandlerVirtual
> >>>> class is not exposed outside of the pipeline handler, adding a private
> >>>> member function shouldn't be a big issue.
> >>>
> >>> Interesting, I have the exact opposite view. I find that member functions
> >>> usually involve more work and they don't allow for the same level of "scoping"
> >>> as lambdas.
> >>
> >> For me it's on a case-by-case basis. I really like lambda as adapters
> >> between two APIs. That's particularly useful when there's a small
> >> impedance mismatch between signals and slots. On the other hand, when
> >> the slot needs to implement a more complex logic that belongs to the
> >> receiver of the signal, I think member functions are better.
> >>
> >>>>> +
> >>>>> +    data->start();
> >>>>> +
> >>>>>        return 0;
> >>>>>    }
> >>>>> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
> >>>>> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
> >>>>>    {
> >>>>> +    VirtualCameraData *data = cameraData(camera);
> >>>>> +
> >>>>> +    /* Flush all work. */
> >>>>> +    data->invokeMethod([] { }, ConnectionTypeBlocking);
> >>>>
> >>>> Does this mean the Thread class API is lacking ? In particular, I don't
> >>>> see a similar call in the SoftwareIsp::stop() function. Is it needed
> >>>> here because we rely solely on the queue of messages to keep track the
> >>>> requests ? If so, I think we should keep track of the queued requests in
> >>>> the VirtualCameraData class, and cancel all requests not processed after
> >>>
> >>> It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.
> >>>
> >>> There is a need for flushing/cancelling asynchronous method invocations queued on
> >>> the worker thread(s), otherwise they will be executed when the thread is restarted
> >>> (or am I missing something that prevents this?). So we need a `cancelMessages()`
> >>> function in `Thread` then, it seems?
> >>
> >> We have Thread::removeMessages() that could possibly be useful, but it
> >> has to be called from the thread being stopped.
> > 
> > Looking at the implementation I think I would have assumed that it can be
> > called from a different thread since it uses locks and everything.
> > I suppose the current issue is that it is `private`.

A quick look confirms that. The devil is of course in the details, but I
think you could try to just make that function public. Maybe we should
also add a message type argument.

Another option would be to automatically remove events when stopping the
thread. Such a hardcoded policy may be too inflexible though.

> >>
> >> We use threads in two places: IPA modules, and software ISP. For IPA
> >> modules I think we're safe as the stop function calls
> >>
> >>     proxy_.invokeMethod(&ThreadProxy::stop, ConnectionTypeBlocking);
> >>
> >> before stopping the thread. The software ISP, on the other hand, doesn't
> > 
> > Yes, that is equivalent to flushing all the invoke messages, which is what
> > this version proposes, but "stop()" here is a no-op, hence the empty lambda.
> > 
> >> have such a blocking call. I wonder if it could be affected by the
> >> problem you describe. I remember we've fixed a similar issue in the
> >> past. Digging in the history, the following commit is related:
> > 
> > I briefly looked at it yesterday, and my assumption is yes. I think the `SoftwareIsp`
> > class exhibits the same pattern as the virtual pipeline handler. If we consider
> > e.g. `SoftwareIsp::process()`, there is an async call:
> > 
> >    debayer_->invokeMethod(&DebayerCpu::process,
> >                           ConnectionTypeQueued, frame, input, output, debayerParams_);
> > 
> > but then `SoftwareIsp::stop()`:
> > 
> >        ispWorkerThread_.exit();
> >      ispWorkerThread_.wait();
> > 
> >      Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
> > 
> >      ipa_->stop();
> > 
> >      for (auto buffer : queuedOutputBuffers_) {
> >          FrameMetadata &metadata = buffer->_d()->metadata();
> >          metadata.status = FrameMetadata::FrameCancelled;
> >          outputBufferReady.emit(buffer);
> >      }
> >      queuedOutputBuffers_.clear();
> > 
> >      for (auto buffer : queuedInputBuffers_) {
> >          FrameMetadata &metadata = buffer->_d()->metadata();
> >          metadata.status = FrameMetadata::FrameCancelled;
> >          inputBufferReady.emit(buffer);
> >      }
> >      queuedInputBuffers_.clear();
> > 
> > does not seem to cancel/flush the invoke messages on `ispWorkerThread_`, to which
> > the `*debayer_` object is bound. So it seems to me that such messages might
> > remain in the queue of `ispWorkerThread_`. But I will check this later.
> 
> Unfortunately it took more time than expected, but with judicious use of `scheduler-locking`
> and breakpoints, I was able to trigger exactly that scenario. Some invoke messages
> remain in the thread's message queue after `SoftwareIsp::stop()` returns. The window
> of opportunity seems quite small, but definitely possible.

Thank you for confirming it.

> It seems to me that the shutdown procedure is not robust enough. For example, in this
> state there is an ubsan warning:
> 
>    ../src/libcamera/request.cpp:107:25: runtime error: load of value 3200171710, which is not a valid value for type 'Status'
> 
> when it completes buffers in `SimpleCameraData::imageBufferReady()`:
> 
>    const RequestOutputs &outputs = conversionQueue_.front();
>    for (auto &[stream, buf] : outputs.outputs)
>      pipe->completeBuffer(outputs.request, buf);
> 
> as the buffer metadata remains uninitialized:
> 
> (gdb) p buffer->metadata()
> $32 = (const libcamera::FrameMetadata &) @0x7cefedc0ff68: {
>    status = 3200171710,
>    sequence = 3200171710,
>    timestamp = 13744632839234567870,
>    planes_ = std::__debug::vector of length 1, capacity 1 = {{
>        bytesused = 0
>      }}
> }
> 
> and it also does not complete all requests:
> 
> 387		ASSERT(data->queuedRequests_.empty());
> (gdb) p data->queuedRequests_
> $35 = std::__debug::list = {
>    [0] = 0x7cafedbe0e00,
>    [1] = 0x7cafedbe0f60,
>    [2] = 0x7cafedbe10c0,
>    [3] = 0x7cafedbe1220
> }
> (gdb) n
> [4:37:02.698049276] [7158] FATAL default pipeline_handler.cpp:387 assertion "data->queuedRequests_.empty()" failed in stop()
> 
> 
> After the assertion reaches __pthread_kill_implementation():
> 
> 
> (gdb) p ((SimpleCameraData *)data)->swIsp_->ispWorkerThread_->data_->messages_.list_
> $40 = std::__debug::list = {
>    [0] = std::unique_ptr<libcamera::Message> = {
>      get() = 0x7c5fedc06540
>    },
>    [1] = std::unique_ptr<libcamera::Message> = {
>      get() = 0x7c5fedc06600
>    },
>    [2] = std::unique_ptr<libcamera::Message> = {
>      get() = 0x7c5fedc066c0
>    }
> }
> (gdb) p *((libcamera::Message*)0x7c5fedc06540)
> $46 = {
>    _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>    type_ = libcamera::Message::InvokeMessage,
>    receiver_ = 0x7e2fedbe1d68,
>    static nextUserType_ = std::atomic<unsigned int> = { 1000 }
> }
> (gdb) p *((libcamera::Message*)0x7c5fedc06600)
> $47 = {
>    _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>    type_ = libcamera::Message::InvokeMessage,
>    receiver_ = 0x7e2fedbe1d68,
>    static nextUserType_ = std::atomic<unsigned int> = { 1000 }
> }
> (gdb) p *((libcamera::Message*)0x7c5fedc066c0)
> $48 = {
>    _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>    type_ = libcamera::Message::InvokeMessage,
>    receiver_ = 0x7e2fedbe1d68,
>    static nextUserType_ = std::atomic<unsigned int> = { 1000 }
> }
> 
> Note that the `receiver_` is the same:
> 
> (gdb) p *((libcamera::Message*)0x7c5fedc066c0)->receiver_
> $49 = {
>    _vptr.Object = 0x7ffff4bdb3f8 <vtable for libcamera::DebayerCpu+96>,
>    parent_ = 0x0,
>    children_ = std::__debug::vector of length 0, capacity 0,
>    thread_ = 0x7e2fedbe0280,
>    signals_ = empty std::__debug::list,
>    pendingMessages_ = 3
> }
> 
> So should that camera ever be restarted, those messages would be dispatched,
> reusing old requests and arguments.
> 
> I think there are multiple issues here: first, the one we have suspected, that
> the `ispWorkerThread_` is not appropriately flushed/cancelled; and second, the
> shutdown procedure does not seem robust enough to me if it runs into an assertion
> in this state.

Wait, does the assertion kill the thread but not the process ?

In any case, there should be no assertion failure. The existing failure
comes from the failure to flush the messages to the thread. If we fix
that, are there still other problems ?

> >> commit 86ffaf936dc663afd48bd3e276134fa720210bd6
> >> Author: Milan Zamazal <mzamazal@redhat.com>
> >> Date:   Tue Feb 25 16:06:11 2025 +0100
> >>
> >>      libcamera: software_isp: Dispatch messages on stop
> >>
> >> This is however not the same issue, that commit addresses messages sent
> >> by the thread, like you do below.
> >>
> >> So yes, it looks like both the software ISP and the virtual pipeline
> >> handler will need to use removeMessages(), or something similar. It's
> >> getting a bit too late to think about how the right API should look
> >> like.
> > 
> > My first idea was to have `cancelMessages(Message::Type, Object * = nullptr)`
> > similarly to `dispatchMessages()`.
> > 
> >>>> the thread exits. This will make stopping the camera faster, and will
> >>>> also mimick better the behaviour of hardware cameras.
> >>>>
> >>>> I know it's a bit more work, hopefully it's just freshening up the yak
> >>>> and not fully shaving it :-)
> >>>>
> >>>>> +    data->exit();
> >>>>> +    data->wait();
> >>>>> +
> >>>>> +    /* Process queued `bufferCompleted` signal emissions. */
> >>>>> +    Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
> >>>>> +    data->bufferCompleted.disconnect(this);
> >>>>>    }
> >>>>>    int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
> >>>>> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
> >>>>>        VirtualCameraData *data = cameraData(camera);
> >>>>>        const auto timestamp = currentTimestamp();
> >>>>> -    for (auto const &[stream, buffer] : request->buffers()) {
> >>>>> -        bool found = false;
> >>>>> -        /* map buffer and fill test patterns */
> >>>>> -        for (auto &streamConfig : data->streamConfigs_) {
> >>>>> -            if (stream == &streamConfig.stream) {
> >>>>> -                FrameMetadata &fmd = buffer->_d()->metadata();
> >>>>> -
> >>>>> -                fmd.status = FrameMetadata::Status::FrameSuccess;
> >>>>> -                fmd.sequence = streamConfig.seq++;
> >>>>> -                fmd.timestamp = timestamp;
> >>>>> -
> >>>>> -                for (const auto [i, p] : utils::enumerate(buffer->planes()))
> >>>>> -                    fmd.planes()[i].bytesused = p.length;
> >>>>> -
> >>>>> -                found = true;
> >>>>> -
> >>>>> -                if (streamConfig.frameGenerator->generateFrame(
> >>>>> -                        stream->configuration().size, buffer))
> >>>>> -                    fmd.status = FrameMetadata::Status::FrameError;
> >>>>> -
> >>>>> -                completeBuffer(request, buffer);
> >>>>> -                break;
> >>>>> -            }
> >>>>> -        }
> >>>>> -        ASSERT(found);
> >>>>> -    }
> >>>>> -
> >>>>>        request->metadata().set(controls::SensorTimestamp, timestamp);
> >>>>> -    completeRequest(request);
> >>>>> +    data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
> >>>>>        return 0;
> >>>>>    }
> >>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
> >>>>> index 683cb82b4..0832fd80c 100644
> >>>>> --- a/src/libcamera/pipeline/virtual/virtual.h
> >>>>> +++ b/src/libcamera/pipeline/virtual/virtual.h
> >>>>> @@ -11,6 +11,7 @@
> >>>>>    #include <variant>
> >>>>>    #include <vector>
> >>>>> +#include <libcamera/base/thread.h>
> >>>>
> >>>> You should also include object.h.
> >>>>
> >>>>>    #include <libcamera/geometry.h>
> >>>>>    #include <libcamera/stream.h>
> >>>>> @@ -25,7 +26,9 @@ namespace libcamera {
> >>>>>    using VirtualFrame = std::variant<TestPattern, ImageFrames>;
> >>>>> -class VirtualCameraData : public Camera::Private
> >>>>> +class VirtualCameraData : public Camera::Private,
> >>>>> +              public Thread,
> >>>>> +              public Object
> >>>>>    {
> >>>>>    public:
> >>>>>        const static unsigned int kMaxStream = 3;
> >>>>> @@ -54,9 +57,12 @@ public:
> >>>>>        ~VirtualCameraData() = default;
> >>>>> +    void queueRequest(Request *request);
> >>>>> +
> >>>>>        Configuration config_;
> >>>>>        std::vector<StreamConfig> streamConfigs_;
> >>>>> +    Signal<Request *, FrameBuffer *> bufferCompleted;
> >>>>>    };
> >>>>>    } /* namespace libcamera */
Barnabás Pőcze Aug. 12, 2025, 4:08 p.m. UTC | #7
2025. 08. 12. 16:55 keltezéssel, Laurent Pinchart írta:
> On Tue, Aug 12, 2025 at 03:17:43PM +0200, Barnabás Pőcze wrote:
>> 2025. 08. 12. 9:16 keltezéssel, Barnabás Pőcze írta:
>>> 2025. 08. 12. 0:47 keltezéssel, Laurent Pinchart írta:
>>>> On Mon, Aug 11, 2025 at 07:51:21PM +0200, Barnabás Pőcze wrote:
>>>>> 2025. 08. 11. 19:05 keltezéssel, Laurent Pinchart írta:
>>>>>> On Mon, Aug 11, 2025 at 11:49:26AM +0200, Barnabás Pőcze wrote:
>>>>>>> Currently the virtual pipeline generates the images synchronously. This is not
>>>>>>> ideal because it blocks the camera manager's internal thread, and because its
>>>>>>> behaviour is different from other existing pipeline handlers, all of which
>>>>>>> complete requests asynchronously.
>>>>>>>
>>>>>>> So move the image generation to a separate thread by deriving `VirtualCameraData`
>>>>>>> from `Thread`, as well as `Object` and using the existing asynchronous signal
>>>>>>> and method call mechanism.
>>>>>>>
>>>>>>> Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
>>>>>>> ---
>>>>>>>     src/libcamera/pipeline/virtual/virtual.cpp | 80 ++++++++++++++--------
>>>>>>>     src/libcamera/pipeline/virtual/virtual.h   |  8 ++-
>>>>>>>     2 files changed, 58 insertions(+), 30 deletions(-)
>>>>>>>
>>>>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
>>>>>>> index 049ebcba5..2ad32ec0a 100644
>>>>>>> --- a/src/libcamera/pipeline/virtual/virtual.cpp
>>>>>>> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
>>>>>>> @@ -129,6 +129,38 @@ VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>>>>>>>         /* \todo Support multiple streams and pass multi_stream_test */
>>>>>>>         streamConfigs_.resize(kMaxStream);
>>>>>>> +
>>>>>>> +    moveToThread(this);
>>>>>>> +}
>>>>>>> +
>>>>>>> +void VirtualCameraData::queueRequest(Request *request)
>>>>>>> +{
>>>>>>> +    for (auto const &[stream, buffer] : request->buffers()) {
>>>>>>> +        bool found = false;
>>>>>>> +        /* map buffer and fill test patterns */
>>>>>>> +        for (auto &streamConfig : streamConfigs_) {
>>>>>>> +            if (stream == &streamConfig.stream) {
>>>>>>> +                FrameMetadata &fmd = buffer->_d()->metadata();
>>>>>>> +
>>>>>>> +                fmd.status = FrameMetadata::Status::FrameSuccess;
>>>>>>> +                fmd.sequence = streamConfig.seq++;
>>>>>>> +                fmd.timestamp = currentTimestamp();
>>>>>>> +
>>>>>>> +                for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>>>>> +                    fmd.planes()[i].bytesused = p.length;
>>>>>>> +
>>>>>>> +                found = true;
>>>>>>> +
>>>>>>> +                if (streamConfig.frameGenerator->generateFrame(
>>>>>>> +                        stream->configuration().size, buffer))
>>>>>>> +                    fmd.status = FrameMetadata::Status::FrameError;
>>>>>>> +
>>>>>>> +                bufferCompleted.emit(request, buffer);
>>>>>>> +                break;
>>>>>>> +            }
>>>>>>> +        }
>>>>>>> +        ASSERT(found);
>>>>>>> +    }
>>>>>>>     }
>>>>>>>     VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
>>>>>>> @@ -291,11 +323,28 @@ int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
>>>>>>>         for (auto &s : data->streamConfigs_)
>>>>>>>             s.seq = 0;
>>>>>>> +    data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
>>>>>>> +        if (completeBuffer(request, buffer))
>>>>>>> +            completeRequest(request);
>>>>>>> +    });
>>>>>>
>>>>>> I'm not a big fan of lambdas in such cases, I find they hinder
>>>>>> readability compared to member functions. The the PipelineHandlerVirtual
>>>>>> class is not exposed outside of the pipeline handler, adding a private
>>>>>> member function shouldn't be a big issue.
>>>>>
>>>>> Interesting, I have the exact opposite view. I find that member functions
>>>>> usually involve more work and they don't allow for the same level of "scoping"
>>>>> as lambdas.
>>>>
>>>> For me it's on a case-by-case basis. I really like lambda as adapters
>>>> between two APIs. That's particularly useful when there's a small
>>>> impedance mismatch between signals and slots. On the other hand, when
>>>> the slot needs to implement a more complex logic that belongs to the
>>>> receiver of the signal, I think member functions are better.
>>>>
>>>>>>> +
>>>>>>> +    data->start();
>>>>>>> +
>>>>>>>         return 0;
>>>>>>>     }
>>>>>>> -void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
>>>>>>> +void PipelineHandlerVirtual::stopDevice(Camera *camera)
>>>>>>>     {
>>>>>>> +    VirtualCameraData *data = cameraData(camera);
>>>>>>> +
>>>>>>> +    /* Flush all work. */
>>>>>>> +    data->invokeMethod([] { }, ConnectionTypeBlocking);
>>>>>>
>>>>>> Does this mean the Thread class API is lacking ? In particular, I don't
>>>>>> see a similar call in the SoftwareIsp::stop() function. Is it needed
>>>>>> here because we rely solely on the queue of messages to keep track the
>>>>>> requests ? If so, I think we should keep track of the queued requests in
>>>>>> the VirtualCameraData class, and cancel all requests not processed after
>>>>>
>>>>> It's already tracked in `Camera::Private::queuedRequests_`, so that part seems easy.
>>>>>
>>>>> There is a need for flushing/cancelling asynchronous method invocations queued on
>>>>> the worker thread(s), otherwise they will be executed when the thread is restarted
>>>>> (or am I missing something that prevents this?). So we need a `cancelMessages()`
>>>>> function in `Thread` then, it seems?
>>>>
>>>> We have Thread::removeMessages() that could possibly be useful, but it
>>>> has to be called from the thread being stopped.
>>>
>>> Looking at the implementation I think I would have assumed that it can be
>>> called from a different thread since it uses locks and everything.
>>> I suppose the current issue is that it is `private`.
> 
> A quick look confirms that. The devil is of course in the details, but I
> think you could try to just make that function public. Maybe we should
> also add a message type argument.
> 
> Another option would be to automatically remove events when stopping the
> thread. Such a hardcoded policy may be too inflexible though.
> 
>>>>
>>>> We use threads in two places: IPA modules, and software ISP. For IPA
>>>> modules I think we're safe as the stop function calls
>>>>
>>>>      proxy_.invokeMethod(&ThreadProxy::stop, ConnectionTypeBlocking);
>>>>
>>>> before stopping the thread. The software ISP, on the other hand, doesn't
>>>
>>> Yes, that is equivalent to flushing all the invoke messages, which is what
>>> this version proposes, but "stop()" here is a no-op, hence the empty lambda.
>>>
>>>> have such a blocking call. I wonder if it could be affected by the
>>>> problem you describe. I remember we've fixed a similar issue in the
>>>> past. Digging in the history, the following commit is related:
>>>
>>> I briefly looked at it yesterday, and my assumption is yes. I think the `SoftwareIsp`
>>> class exhibits the same pattern as the virtual pipeline handler. If we consider
>>> e.g. `SoftwareIsp::process()`, there is an async call:
>>>
>>>     debayer_->invokeMethod(&DebayerCpu::process,
>>>                            ConnectionTypeQueued, frame, input, output, debayerParams_);
>>>
>>> but then `SoftwareIsp::stop()`:
>>>
>>>         ispWorkerThread_.exit();
>>>       ispWorkerThread_.wait();
>>>
>>>       Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
>>>
>>>       ipa_->stop();
>>>
>>>       for (auto buffer : queuedOutputBuffers_) {
>>>           FrameMetadata &metadata = buffer->_d()->metadata();
>>>           metadata.status = FrameMetadata::FrameCancelled;
>>>           outputBufferReady.emit(buffer);
>>>       }
>>>       queuedOutputBuffers_.clear();
>>>
>>>       for (auto buffer : queuedInputBuffers_) {
>>>           FrameMetadata &metadata = buffer->_d()->metadata();
>>>           metadata.status = FrameMetadata::FrameCancelled;
>>>           inputBufferReady.emit(buffer);
>>>       }
>>>       queuedInputBuffers_.clear();
>>>
>>> does not seem to cancel/flush the invoke messages on `ispWorkerThread_`, to which
>>> the `*debayer_` object is bound. So it seems to me that such messages might
>>> remain in the queue of `ispWorkerThread_`. But I will check this later.
>>
>> Unfortunately it took more time than expected, but with judicious use of `scheduler-locking`
>> and breakpoints, I was able to trigger exactly that scenario. Some invoke messages
>> remain in the thread's message queue after `SoftwareIsp::stop()` returns. The window
>> of opportunity seems quite small, but definitely possible.
> 
> Thank you for confirming it.
> 
>> It seems to me that the shutdown procedure is not robust enough. For example, in this
>> state there is an ubsan warning:
>>
>>     ../src/libcamera/request.cpp:107:25: runtime error: load of value 3200171710, which is not a valid value for type 'Status'
>>
>> when it completes buffers in `SimpleCameraData::imageBufferReady()`:
>>
>>     const RequestOutputs &outputs = conversionQueue_.front();
>>     for (auto &[stream, buf] : outputs.outputs)
>>       pipe->completeBuffer(outputs.request, buf);
>>
>> as the buffer metadata remains uninitialized:
>>
>> (gdb) p buffer->metadata()
>> $32 = (const libcamera::FrameMetadata &) @0x7cefedc0ff68: {
>>     status = 3200171710,
>>     sequence = 3200171710,
>>     timestamp = 13744632839234567870,
>>     planes_ = std::__debug::vector of length 1, capacity 1 = {{
>>         bytesused = 0
>>       }}
>> }
>>
>> and it also does not complete all requests:
>>
>> 387		ASSERT(data->queuedRequests_.empty());
>> (gdb) p data->queuedRequests_
>> $35 = std::__debug::list = {
>>     [0] = 0x7cafedbe0e00,
>>     [1] = 0x7cafedbe0f60,
>>     [2] = 0x7cafedbe10c0,
>>     [3] = 0x7cafedbe1220
>> }
>> (gdb) n
>> [4:37:02.698049276] [7158] FATAL default pipeline_handler.cpp:387 assertion "data->queuedRequests_.empty()" failed in stop()
>>
>>
>> After the assertion reaches __pthread_kill_implementation():
>>
>>
>> (gdb) p ((SimpleCameraData *)data)->swIsp_->ispWorkerThread_->data_->messages_.list_
>> $40 = std::__debug::list = {
>>     [0] = std::unique_ptr<libcamera::Message> = {
>>       get() = 0x7c5fedc06540
>>     },
>>     [1] = std::unique_ptr<libcamera::Message> = {
>>       get() = 0x7c5fedc06600
>>     },
>>     [2] = std::unique_ptr<libcamera::Message> = {
>>       get() = 0x7c5fedc066c0
>>     }
>> }
>> (gdb) p *((libcamera::Message*)0x7c5fedc06540)
>> $46 = {
>>     _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>>     type_ = libcamera::Message::InvokeMessage,
>>     receiver_ = 0x7e2fedbe1d68,
>>     static nextUserType_ = std::atomic<unsigned int> = { 1000 }
>> }
>> (gdb) p *((libcamera::Message*)0x7c5fedc06600)
>> $47 = {
>>     _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>>     type_ = libcamera::Message::InvokeMessage,
>>     receiver_ = 0x7e2fedbe1d68,
>>     static nextUserType_ = std::atomic<unsigned int> = { 1000 }
>> }
>> (gdb) p *((libcamera::Message*)0x7c5fedc066c0)
>> $48 = {
>>     _vptr.Message = 0x7ffff06f8500 <vtable for libcamera::InvokeMessage+16>,
>>     type_ = libcamera::Message::InvokeMessage,
>>     receiver_ = 0x7e2fedbe1d68,
>>     static nextUserType_ = std::atomic<unsigned int> = { 1000 }
>> }
>>
>> Note that the `receiver_` is the same:
>>
>> (gdb) p *((libcamera::Message*)0x7c5fedc066c0)->receiver_
>> $49 = {
>>     _vptr.Object = 0x7ffff4bdb3f8 <vtable for libcamera::DebayerCpu+96>,
>>     parent_ = 0x0,
>>     children_ = std::__debug::vector of length 0, capacity 0,
>>     thread_ = 0x7e2fedbe0280,
>>     signals_ = empty std::__debug::list,
>>     pendingMessages_ = 3
>> }
>>
>> So should that camera ever be restarted, those messages would be dispatched,
>> reusing old requests and arguments.
>>
>> I think there are multiple issues here: first, the one we have suspected, that
>> the `ispWorkerThread_` is not appropriately flushed/cancelled; and second, the
>> shutdown procedure does not seem robust enough to me if it runs into an assertion
>> in this state.
> 
> Wait, does the assertion kill the thread but not the process ?

gdb breaks before the process is stopped, the above has been recovered
from that state.


> 
> In any case, there should be no assertion failure. The existing failure
> comes from the failure to flush the messages to the thread. If we fix
> that, are there still other problems ?

If the messages are just cancelled, this will still fail the same way at the
assertion. For the virtual pipeline handler either flushing or cancellation
works well. I am not too familiar with the simple pipeline handler, so I can't
say what should be done here, there is also the IPA that might complicate things.
My impression is that the shutdown logic could (should?) be improved.


> 
>>>> commit 86ffaf936dc663afd48bd3e276134fa720210bd6
>>>> Author: Milan Zamazal <mzamazal@redhat.com>
>>>> Date:   Tue Feb 25 16:06:11 2025 +0100
>>>>
>>>>       libcamera: software_isp: Dispatch messages on stop
>>>>
>>>> This is however not the same issue, that commit addresses messages sent
>>>> by the thread, like you do below.
>>>>
>>>> So yes, it looks like both the software ISP and the virtual pipeline
>>>> handler will need to use removeMessages(), or something similar. It's
>>>> getting a bit too late to think about how the right API should look
>>>> like.
>>>
>>> My first idea was to have `cancelMessages(Message::Type, Object * = nullptr)`
>>> similarly to `dispatchMessages()`.
>>>
>>>>>> the thread exits. This will make stopping the camera faster, and will
>>>>>> also mimick better the behaviour of hardware cameras.
>>>>>>
>>>>>> I know it's a bit more work, hopefully it's just freshening up the yak
>>>>>> and not fully shaving it :-)
>>>>>>
>>>>>>> +    data->exit();
>>>>>>> +    data->wait();
>>>>>>> +
>>>>>>> +    /* Process queued `bufferCompleted` signal emissions. */
>>>>>>> +    Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
>>>>>>> +    data->bufferCompleted.disconnect(this);
>>>>>>>     }
>>>>>>>     int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>>>>> @@ -304,35 +353,8 @@ int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
>>>>>>>         VirtualCameraData *data = cameraData(camera);
>>>>>>>         const auto timestamp = currentTimestamp();
>>>>>>> -    for (auto const &[stream, buffer] : request->buffers()) {
>>>>>>> -        bool found = false;
>>>>>>> -        /* map buffer and fill test patterns */
>>>>>>> -        for (auto &streamConfig : data->streamConfigs_) {
>>>>>>> -            if (stream == &streamConfig.stream) {
>>>>>>> -                FrameMetadata &fmd = buffer->_d()->metadata();
>>>>>>> -
>>>>>>> -                fmd.status = FrameMetadata::Status::FrameSuccess;
>>>>>>> -                fmd.sequence = streamConfig.seq++;
>>>>>>> -                fmd.timestamp = timestamp;
>>>>>>> -
>>>>>>> -                for (const auto [i, p] : utils::enumerate(buffer->planes()))
>>>>>>> -                    fmd.planes()[i].bytesused = p.length;
>>>>>>> -
>>>>>>> -                found = true;
>>>>>>> -
>>>>>>> -                if (streamConfig.frameGenerator->generateFrame(
>>>>>>> -                        stream->configuration().size, buffer))
>>>>>>> -                    fmd.status = FrameMetadata::Status::FrameError;
>>>>>>> -
>>>>>>> -                completeBuffer(request, buffer);
>>>>>>> -                break;
>>>>>>> -            }
>>>>>>> -        }
>>>>>>> -        ASSERT(found);
>>>>>>> -    }
>>>>>>> -
>>>>>>>         request->metadata().set(controls::SensorTimestamp, timestamp);
>>>>>>> -    completeRequest(request);
>>>>>>> +    data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
>>>>>>>         return 0;
>>>>>>>     }
>>>>>>> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
>>>>>>> index 683cb82b4..0832fd80c 100644
>>>>>>> --- a/src/libcamera/pipeline/virtual/virtual.h
>>>>>>> +++ b/src/libcamera/pipeline/virtual/virtual.h
>>>>>>> @@ -11,6 +11,7 @@
>>>>>>>     #include <variant>
>>>>>>>     #include <vector>
>>>>>>> +#include <libcamera/base/thread.h>
>>>>>>
>>>>>> You should also include object.h.
>>>>>>
>>>>>>>     #include <libcamera/geometry.h>
>>>>>>>     #include <libcamera/stream.h>
>>>>>>> @@ -25,7 +26,9 @@ namespace libcamera {
>>>>>>>     using VirtualFrame = std::variant<TestPattern, ImageFrames>;
>>>>>>> -class VirtualCameraData : public Camera::Private
>>>>>>> +class VirtualCameraData : public Camera::Private,
>>>>>>> +              public Thread,
>>>>>>> +              public Object
>>>>>>>     {
>>>>>>>     public:
>>>>>>>         const static unsigned int kMaxStream = 3;
>>>>>>> @@ -54,9 +57,12 @@ public:
>>>>>>>         ~VirtualCameraData() = default;
>>>>>>> +    void queueRequest(Request *request);
>>>>>>> +
>>>>>>>         Configuration config_;
>>>>>>>         std::vector<StreamConfig> streamConfigs_;
>>>>>>> +    Signal<Request *, FrameBuffer *> bufferCompleted;
>>>>>>>     };
>>>>>>>     } /* namespace libcamera */
> 
> --
> Regards,
> 
> Laurent Pinchart

Patch
diff mbox series

diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
index 049ebcba5..2ad32ec0a 100644
--- a/src/libcamera/pipeline/virtual/virtual.cpp
+++ b/src/libcamera/pipeline/virtual/virtual.cpp
@@ -129,6 +129,38 @@  VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
 
 	/* \todo Support multiple streams and pass multi_stream_test */
 	streamConfigs_.resize(kMaxStream);
+
+	moveToThread(this);
+}
+
+void VirtualCameraData::queueRequest(Request *request)
+{
+	for (auto const &[stream, buffer] : request->buffers()) {
+		bool found = false;
+		/* map buffer and fill test patterns */
+		for (auto &streamConfig : streamConfigs_) {
+			if (stream == &streamConfig.stream) {
+				FrameMetadata &fmd = buffer->_d()->metadata();
+
+				fmd.status = FrameMetadata::Status::FrameSuccess;
+				fmd.sequence = streamConfig.seq++;
+				fmd.timestamp = currentTimestamp();
+
+				for (const auto [i, p] : utils::enumerate(buffer->planes()))
+					fmd.planes()[i].bytesused = p.length;
+
+				found = true;
+
+				if (streamConfig.frameGenerator->generateFrame(
+					    stream->configuration().size, buffer))
+					fmd.status = FrameMetadata::Status::FrameError;
+
+				bufferCompleted.emit(request, buffer);
+				break;
+			}
+		}
+		ASSERT(found);
+	}
 }
 
 VirtualCameraConfiguration::VirtualCameraConfiguration(VirtualCameraData *data)
@@ -291,11 +323,28 @@  int PipelineHandlerVirtual::start([[maybe_unused]] Camera *camera,
 	for (auto &s : data->streamConfigs_)
 		s.seq = 0;
 
+	data->bufferCompleted.connect(this, [&](Request *request, FrameBuffer *buffer) {
+		if (completeBuffer(request, buffer))
+			completeRequest(request);
+	});
+
+	data->start();
+
 	return 0;
 }
 
-void PipelineHandlerVirtual::stopDevice([[maybe_unused]] Camera *camera)
+void PipelineHandlerVirtual::stopDevice(Camera *camera)
 {
+	VirtualCameraData *data = cameraData(camera);
+
+	/* Flush all work. */
+	data->invokeMethod([] { }, ConnectionTypeBlocking);
+	data->exit();
+	data->wait();
+
+	/* Process queued `bufferCompleted` signal emissions. */
+	Thread::current()->dispatchMessages(Message::Type::InvokeMessage, this);
+	data->bufferCompleted.disconnect(this);
 }
 
 int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
@@ -304,35 +353,8 @@  int PipelineHandlerVirtual::queueRequestDevice([[maybe_unused]] Camera *camera,
 	VirtualCameraData *data = cameraData(camera);
 	const auto timestamp = currentTimestamp();
 
-	for (auto const &[stream, buffer] : request->buffers()) {
-		bool found = false;
-		/* map buffer and fill test patterns */
-		for (auto &streamConfig : data->streamConfigs_) {
-			if (stream == &streamConfig.stream) {
-				FrameMetadata &fmd = buffer->_d()->metadata();
-
-				fmd.status = FrameMetadata::Status::FrameSuccess;
-				fmd.sequence = streamConfig.seq++;
-				fmd.timestamp = timestamp;
-
-				for (const auto [i, p] : utils::enumerate(buffer->planes()))
-					fmd.planes()[i].bytesused = p.length;
-
-				found = true;
-
-				if (streamConfig.frameGenerator->generateFrame(
-					    stream->configuration().size, buffer))
-					fmd.status = FrameMetadata::Status::FrameError;
-
-				completeBuffer(request, buffer);
-				break;
-			}
-		}
-		ASSERT(found);
-	}
-
 	request->metadata().set(controls::SensorTimestamp, timestamp);
-	completeRequest(request);
+	data->invokeMethod(&VirtualCameraData::queueRequest, ConnectionTypeQueued, request);
 
 	return 0;
 }
diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
index 683cb82b4..0832fd80c 100644
--- a/src/libcamera/pipeline/virtual/virtual.h
+++ b/src/libcamera/pipeline/virtual/virtual.h
@@ -11,6 +11,7 @@ 
 #include <variant>
 #include <vector>
 
+#include <libcamera/base/thread.h>
 #include <libcamera/geometry.h>
 #include <libcamera/stream.h>
 
@@ -25,7 +26,9 @@  namespace libcamera {
 
 using VirtualFrame = std::variant<TestPattern, ImageFrames>;
 
-class VirtualCameraData : public Camera::Private
+class VirtualCameraData : public Camera::Private,
+			  public Thread,
+			  public Object
 {
 public:
 	const static unsigned int kMaxStream = 3;
@@ -54,9 +57,12 @@  public:
 
 	~VirtualCameraData() = default;
 
+	void queueRequest(Request *request);
+
 	Configuration config_;
 
 	std::vector<StreamConfig> streamConfigs_;
+	Signal<Request *, FrameBuffer *> bufferCompleted;
 };
 
 } /* namespace libcamera */