[libcamera-devel] libcamera: pipeline: stm32: add pipeline handler for stm32

Message ID 20190408124322.6869-1-benjamin.gaignard@st.com
State Superseded
Series
  • [libcamera-devel] libcamera: pipeline: stm32: add pipeline handler for stm32

Commit Message

Benjamin GAIGNARD April 8, 2019, 12:43 p.m. UTC
Provide a pipeline handler for the stm32 driver.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
---
 src/libcamera/pipeline/meson.build       |   2 +
 src/libcamera/pipeline/stm32/meson.build |   3 +
 src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
 3 files changed, 210 insertions(+)
 create mode 100644 src/libcamera/pipeline/stm32/meson.build
 create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp

Comments

Kieran Bingham April 9, 2019, 9:02 a.m. UTC | #1
Hi Benjamin,

On 08/04/2019 13:43, Benjamin Gaignard wrote:
> Provide a pipeline handler for the stm32 driver.

Excellent :-) Thank you for your contribution.

Have you been able to test this and successfully list your camera or
capture frames?

To list successfully registered cameras on the device:
	build/src/cam/cam -l

I couldn't see any media-controller support in the stm32-dcmi driver...
Perhaps I'm missing something.

> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
> ---
>  src/libcamera/pipeline/meson.build       |   2 +
>  src/libcamera/pipeline/stm32/meson.build |   3 +
>  src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
>  3 files changed, 210 insertions(+)
>  create mode 100644 src/libcamera/pipeline/stm32/meson.build
>  create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
> 
> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
> index 40bb264..08d6e1c 100644
> --- a/src/libcamera/pipeline/meson.build
> +++ b/src/libcamera/pipeline/meson.build
> @@ -4,3 +4,5 @@ libcamera_sources += files([
>  ])
>  
>  subdir('ipu3')
> +
> +subdir('stm32')
> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
> new file mode 100644
> index 0000000..cb6f16b
> --- /dev/null
> +++ b/src/libcamera/pipeline/stm32/meson.build
> @@ -0,0 +1,3 @@
> +libcamera_sources += files([
> +    'stm32.cpp',
> +])

Currently the stm32.cpp seems like a fairly simple implementation, which
fits in its own file, much like pipeline/uvcvideo.cpp.

Do you foresee needing to break the STM32 handler into multiple files
later, or support other components on the STM32?

I think it's fine to have stm32 as a subdir either way, though.

I also wonder, if it is just a 'simple' device, whether we should move
the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
subclassed, simply adding the device matching or providing a table of
supported match strings.


What hardware is available on the STM32 for processing frames?
Do you have a scaler/resizer, or any further ISP functionality?

It would be interesting to see the output of 'media-ctl -p' for your
platform.
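For reference, with media-controller support the DCMI graph would
presumably be a single video node fed by the sensor subdevice; a
'media-ctl -p' dump might look roughly like this (entity names, numbers
and formats are hypothetical, for illustration only):

```
- entity 1: stm32_dcmi (1 pad, 1 link)
            type Node subtype V4L flags 1
            device node name /dev/video0
        pad0: Sink
                <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]

- entity 5: ov5640 0-003c (1 pad, 1 link)
            type V4L2 subdev subtype Sensor flags 0
            device node name /dev/v4l-subdev0
        pad0: Source
                -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
```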


> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
> new file mode 100644
> index 0000000..301fdfc
> --- /dev/null
> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
> @@ -0,0 +1,205 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * stm32.cpp - Pipeline handler for stm32 devices
> + */
> +
> +#include <libcamera/camera.h>
> +#include <libcamera/request.h>
> +#include <libcamera/stream.h>
> +
> +#include "device_enumerator.h"
> +#include "log.h"
> +#include "media_device.h"
> +#include "pipeline_handler.h"
> +#include "utils.h"
> +#include "v4l2_device.h"
> +
> +namespace libcamera {
> +
> +LOG_DEFINE_CATEGORY(STM32)
> +
> +class PipelineHandlerSTM32 : public PipelineHandler {
> +public:
> +  PipelineHandlerSTM32(CameraManager *manager);
> +  ~PipelineHandlerSTM32();

Our coding style uses a tab to indent.

When you have committed your code, you can run ./utils/checkstyle.py to
check the most recent commit (or you can specify a range suitable for git).

Ideally this requires clang-format to be installed, although astyle is
also supported to some degree.

> +
> +  std::map<Stream *, StreamConfiguration>
> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
> +  int configureStreams(
> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
> +
> +  int allocateBuffers(Camera *camera, Stream *stream) override;
> +  int freeBuffers(Camera *camera, Stream *stream) override;
> +
> +  int start(Camera *camera) override;
> +  void stop(Camera *camera) override;
> +
> +  int queueRequest(Camera *camera, Request *request) override;
> +
> +  bool match(DeviceEnumerator *enumerator);
> +
> +private:
> +  class STM32CameraData : public CameraData {
> +  public:
> +    STM32CameraData(PipelineHandler *pipe)
> +        : CameraData(pipe), video_(nullptr) {}
> +
> +    ~STM32CameraData() { delete video_; }
> +
> +    void bufferReady(Buffer *buffer);
> +
> +    V4L2Device *video_;
> +    Stream stream_;
> +  };
> +
> +  STM32CameraData *cameraData(const Camera *camera) {
> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
> +  }
> +
> +  std::shared_ptr<MediaDevice> media_;
> +};
> +
> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
> +    : PipelineHandler(manager), media_(nullptr) {}
> +
> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
> +  if (media_)
> +    media_->release();
> +}
> +
> +std::map<Stream *, StreamConfiguration>
> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
> +                                          std::set<Stream *> &streams) {
> +  STM32CameraData *data = cameraData(camera);
> +
> +  std::map<Stream *, StreamConfiguration> configs;
> +  StreamConfiguration config{};
> +
> +  LOG(STM32, Debug) << "Retrieving default format";
> +  config.width = 640;
> +  config.height = 480;
> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
> +  config.bufferCount = 4;
> +
> +  configs[&data->stream_] = config;
> +
> +  return configs;
> +}
> +
> +int PipelineHandlerSTM32::configureStreams(
> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
> +  STM32CameraData *data = cameraData(camera);
> +  StreamConfiguration *cfg = &config[&data->stream_];
> +  int ret;
> +
> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
> +                    << "x" << cfg->height;
> +
> +  V4L2DeviceFormat format = {};
> +  format.width = cfg->width;
> +  format.height = cfg->height;
> +  format.fourcc = cfg->pixelFormat;
> +
> +  ret = data->video_->setFormat(&format);
> +  if (ret)
> +    return ret;
> +
> +  if (format.width != cfg->width || format.height != cfg->height ||
> +      format.fourcc != cfg->pixelFormat)
> +    return -EINVAL;
> +
> +  return 0;
> +}
> +
> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
> +  STM32CameraData *data = cameraData(camera);
> +  const StreamConfiguration &cfg = stream->configuration();
> +
> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
> +
> +  return data->video_->exportBuffers(&stream->bufferPool());
> +}
> +
> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
> +  STM32CameraData *data = cameraData(camera);
> +  return data->video_->releaseBuffers();
> +}
> +
> +int PipelineHandlerSTM32::start(Camera *camera) {
> +  STM32CameraData *data = cameraData(camera);
> +  return data->video_->streamOn();
> +}
> +
> +void PipelineHandlerSTM32::stop(Camera *camera) {
> +  STM32CameraData *data = cameraData(camera);
> +  data->video_->streamOff();
> +  PipelineHandler::stop(camera);
> +}
> +
> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
> +  STM32CameraData *data = cameraData(camera);
> +  Buffer *buffer = request->findBuffer(&data->stream_);
> +  if (!buffer) {
> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
> +
> +    return -ENOENT;
> +  }
> +
> +  int ret = data->video_->queueBuffer(buffer);
> +  if (ret < 0)
> +    return ret;
> +
> +  PipelineHandler::queueRequest(camera, request);
> +
> +  return 0;
> +}
> +
> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
> +  DeviceMatch dm("stm32-dcmi");

I see that the stm32-dcmi driver calls down to a subdevice. Is there
ever a need to interact with this subdevice directly?

Or are all controls handled through the DCMI driver?

I think it looks like everything simply goes through the stm32-dcmi
V4L2Device interface.

That said, I can't see if there is any media-controller device in the
DCMI driver. Our DeviceMatch currently only looks at media-controller
enabled devices.

> +
> +  media_ = enumerator->search(dm);
> +  if (!media_)
> +    return false;
> +
> +  media_->acquire();
> +
> +  std::unique_ptr<STM32CameraData> data =
> +      utils::make_unique<STM32CameraData>(this);
> +
> +  /* Locate and open the default video node. */
> +  for (MediaEntity *entity : media_->entities()) {
> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
> +      data->video_ = new V4L2Device(entity);
> +      break;
> +    }
> +  }


I think the MEDIA_ENT_FL_DEFAULT flag is currently quite specific to the
uvcvideo (and omap3isp) pipelines, so I'm not sure whether this will
correctly match your device.


> +
> +  if (!data->video_) {
> +    LOG(STM32, Error) << "Could not find a default video device";
> +    return false;
> +  }
> +
> +  if (data->video_->open())
> +    return false;
> +
> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
> +
> +  /* Create and register the camera. */
> +  std::set<Stream *> streams{&data->stream_};
> +  std::shared_ptr<Camera> camera =
> +      Camera::create(this, media_->model(), streams);
> +  registerCamera(std::move(camera), std::move(data));
> +
> +  return true;
> +}
> +
> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
> +  Request *request = queuedRequests_.front();
> +
> +  pipe_->completeBuffer(camera_, request, buffer);
> +  pipe_->completeRequest(camera_, request);
> +}
> +
> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
> +
> +} /* namespace libcamera */
> 

Thank you for joining the libcamera project! If there is anything more
we can do to help you, please let us know.
Benjamin GAIGNARD April 9, 2019, 9:38 a.m. UTC | #2
On 4/9/19 11:02 AM, Kieran Bingham wrote:
> Hi Benjamin,
>
> On 08/04/2019 13:43, Benjamin Gaignard wrote:
>> Provide a pipeline handler for the stm32 driver.
> Excellent :-) Thank you for your contribution.
>
> Have you been able to test this and successfully list your camera or
> capture frames?
Yes, I have been able to list and capture frames with cam.
>
> To list successfully registered cameras on the device:
> 	build/src/cam/cam -l
>
> I couldn't see any media-controller support in the stm32-dcmi driver...
> Perhaps I'm missing something.

Hugues sent the patches for media-controller support a week ago:

https://lkml.org/lkml/2019/4/1/298

I have tested libcamera with those patches.

Benjamin

Laurent Pinchart April 15, 2019, 11:14 p.m. UTC | #3
Hi Benjamin,

On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
> On 4/9/19 11:02 AM, Kieran Bingham wrote:
> > On 08/04/2019 13:43, Benjamin Gaignard wrote:
> >> Provide a pipeline handler for the stm32 driver.
> > 
> > Excellent :-) Thank you for your contribution.
> >
> > Have you been able to test this and successfully list your camera or
> > capture frames?
> 
> Yes I have been able to list and capture frames with cam
> 
> >
> > To list successfully registered cameras on the device:
> > 	build/src/cam/cam -l
> >
> > I couldn't see any media-controller support in the stm32-dcmi driver...
> > Perhaps I'm missing something.
> 
> Hugues has send the patches for media controller support a week ago.
> 
> https://lkml.org/lkml/2019/4/1/298
> 
> I have tested libcamera with those patches.

I think you may have missed the other comments from Kieran further down.

Benjamin GAIGNARD April 16, 2019, 8:39 a.m. UTC | #4
On 4/16/19 1:14 AM, Laurent Pinchart wrote:
> Hi Benjamin,
>
> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
>> On 4/9/19 11:02 AM, Kieran Bingham wrote:
>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
>>>> Provide a pipeline handler for the stm32 driver.
>>> Excellent :-) Thank you for your contribution.
>>>
>>> Have you been able to test this and successfully list your camera or
>>> capture frames?
>> Yes I have been able to list and capture frames with cam
>>
>>> To list successfully registered cameras on the device:
>>> 	build/src/cam/cam -l
>>>
>>> I couldn't see any media-controller support in the stm32-dcmi driver...
>>> Perhaps I'm missing something.
>> Hugues has sent the patches for media-controller support a week ago.
>>
>> https://lkml.org/lkml/2019/4/1/298
>>
>> I have tested libcamera with those patches.
> I think you may have missed the other comments from Kieran further down.
Yes, I have completely missed them.
>
>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
>>>> ---
>>>>    src/libcamera/pipeline/meson.build       |   2 +
>>>>    src/libcamera/pipeline/stm32/meson.build |   3 +
>>>>    src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
>>>>    3 files changed, 210 insertions(+)
>>>>    create mode 100644 src/libcamera/pipeline/stm32/meson.build
>>>>    create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
>>>>
>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
>>>> index 40bb264..08d6e1c 100644
>>>> --- a/src/libcamera/pipeline/meson.build
>>>> +++ b/src/libcamera/pipeline/meson.build
>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
>>>>    ])
>>>>    
>>>>    subdir('ipu3')
>>>> +
>>>> +subdir('stm32')
>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
>>>> new file mode 100644
>>>> index 0000000..cb6f16b
>>>> --- /dev/null
>>>> +++ b/src/libcamera/pipeline/stm32/meson.build
>>>> @@ -0,0 +1,3 @@
>>>> +libcamera_sources += files([
>>>> +    'stm32.cpp',
>>>> +])
>>> Currently the stm32.cpp seems like a fairly simple implementation, which
>>> fits in its own file, much like pipeline/uvcvideo.cpp.
>>>
>>> Do you forsee needing to break the STM32 handler into multiple files
>>> later, or support other components on the STM32?
>>>
>>> I think it's fine if you want to have stm32 as a subdir either way  though.
>>>
>>> I wonder also, if it is just a 'simple' device if perhaps we should move
>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
>>> subclassed to simply add the device matching, or provide a table of
>>> supported match strings.
>>>
>>>
>>> What hardware is available on the STM32 for processing frames?
>>> Do you have a scaler/resizer? or any further ISP functionality?

No scaler, no resizer and no ISP functionality.

It is a basic device-to-memory driver.

>>>
>>> It would be interesting to see the output of 'media-ctl -p' for your
>>> platform.

The output of media-ctl -p (with Hugues's patches):

Media controller API version 4.19.24

Media device information
------------------------
driver          stm32-dcmi
model           stm32-dcmi
serial
bus info        platform:stm32-dcmi
hw revision     0x0
driver version  4.19.24

Device topology
- entity 1: stm32_dcmi (1 pad, 1 link)
             type Node subtype V4L flags 1
             device node name /dev/video0
         pad0: Sink
                 <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]

- entity 5: ov5640 0-003c (1 pad, 1 link)
             type V4L2 subdev subtype Sensor flags 0
             device node name /dev/v4l-subdev0
         pad0: Source
                 [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg xfer:srgb ycbcr:601 quantization:full-range]
                 -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]

>>>
>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
>>>> new file mode 100644
>>>> index 0000000..301fdfc
>>>> --- /dev/null
>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
>>>> @@ -0,0 +1,205 @@
>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
>>>> +/*
>>>> + * stm32.cpp - Pipeline handler for stm32 devices
>>>> + */
>>>> +
>>>> +#include <libcamera/camera.h>
>>>> +#include <libcamera/request.h>
>>>> +#include <libcamera/stream.h>
>>>> +
>>>> +#include "device_enumerator.h"
>>>> +#include "log.h"
>>>> +#include "media_device.h"
>>>> +#include "pipeline_handler.h"
>>>> +#include "utils.h"
>>>> +#include "v4l2_device.h"
>>>> +
>>>> +namespace libcamera {
>>>> +
>>>> +LOG_DEFINE_CATEGORY(STM32)
>>>> +
>>>> +class PipelineHandlerSTM32 : public PipelineHandler {
>>>> +public:
>>>> +  PipelineHandlerSTM32(CameraManager *manager);
>>>> +  ~PipelineHandlerSTM32();
>>> Our coding style uses a tab to indent.
>>>
>>> When you have committed your code, you can run ./utils/checkstyle.py to
>>> check the most recent commit (or you can specify a range suitable for git).
>>>
>>> This requires installing clang-format ideally, although astyle is also
>>> supported to some degree.
I have fixed the clang-format package version on my side; it will be better in v2.
>>>
>>>> +
>>>> +  std::map<Stream *, StreamConfiguration>
>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
>>>> +  int configureStreams(
>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
>>>> +
>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
>>>> +
>>>> +  int start(Camera *camera) override;
>>>> +  void stop(Camera *camera) override;
>>>> +
>>>> +  int queueRequest(Camera *camera, Request *request) override;
>>>> +
>>>> +  bool match(DeviceEnumerator *enumerator);
>>>> +
>>>> +private:
>>>> +  class STM32CameraData : public CameraData {
>>>> +  public:
>>>> +    STM32CameraData(PipelineHandler *pipe)
>>>> +        : CameraData(pipe), video_(nullptr) {}
>>>> +
>>>> +    ~STM32CameraData() { delete video_; }
>>>> +
>>>> +    void bufferReady(Buffer *buffer);
>>>> +
>>>> +    V4L2Device *video_;
>>>> +    Stream stream_;
>>>> +  };
>>>> +
>>>> +  STM32CameraData *cameraData(const Camera *camera) {
>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
>>>> +  }
>>>> +
>>>> +  std::shared_ptr<MediaDevice> media_;
>>>> +};
>>>> +
>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
>>>> +    : PipelineHandler(manager), media_(nullptr) {}
>>>> +
>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
>>>> +  if (media_)
>>>> +    media_->release();
>>>> +}
>>>> +
>>>> +std::map<Stream *, StreamConfiguration>
>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
>>>> +                                          std::set<Stream *> &streams) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +
>>>> +  std::map<Stream *, StreamConfiguration> configs;
>>>> +  StreamConfiguration config{};
>>>> +
>>>> +  LOG(STM32, Debug) << "Retrieving default format";
>>>> +  config.width = 640;
>>>> +  config.height = 480;
>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
>>>> +  config.bufferCount = 4;
>>>> +
>>>> +  configs[&data->stream_] = config;
>>>> +
>>>> +  return configs;
>>>> +}
>>>> +
>>>> +int PipelineHandlerSTM32::configureStreams(
>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  StreamConfiguration *cfg = &config[&data->stream_];
>>>> +  int ret;
>>>> +
>>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
>>>> +                    << "x" << cfg->height;
>>>> +
>>>> +  V4L2DeviceFormat format = {};
>>>> +  format.width = cfg->width;
>>>> +  format.height = cfg->height;
>>>> +  format.fourcc = cfg->pixelFormat;
>>>> +
>>>> +  ret = data->video_->setFormat(&format);
>>>> +  if (ret)
>>>> +    return ret;
>>>> +
>>>> +  if (format.width != cfg->width || format.height != cfg->height ||
>>>> +      format.fourcc != cfg->pixelFormat)
>>>> +    return -EINVAL;
>>>> +
>>>> +  return 0;
>>>> +}
>>>> +
>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  const StreamConfiguration &cfg = stream->configuration();
>>>> +
>>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
>>>> +
>>>> +  return data->video_->exportBuffers(&stream->bufferPool());
>>>> +}
>>>> +
>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  return data->video_->releaseBuffers();
>>>> +}
>>>> +
>>>> +int PipelineHandlerSTM32::start(Camera *camera) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  return data->video_->streamOn();
>>>> +}
>>>> +
>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  data->video_->streamOff();
>>>> +  PipelineHandler::stop(camera);
>>>> +}
>>>> +
>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
>>>> +  STM32CameraData *data = cameraData(camera);
>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
>>>> +  if (!buffer) {
>>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
>>>> +
>>>> +    return -ENOENT;
>>>> +  }
>>>> +
>>>> +  int ret = data->video_->queueBuffer(buffer);
>>>> +  if (ret < 0)
>>>> +    return ret;
>>>> +
>>>> +  PipelineHandler::queueRequest(camera, request);
>>>> +
>>>> +  return 0;
>>>> +}
>>>> +
>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
>>>> +  DeviceMatch dm("stm32-dcmi");
>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
>>> ever a need to interact with this subdevice directly?
>>>
>>> Or are all controls handled through the DCMI driver?
>>>
>>> I think it looks like everything simply goes through the stm32-dcmi
>>> V4L2Device interface.
Until today, yes, but Hugues is working on media-controller awareness in
the V4L2 driver.
>>>
>>> That said, I can't see if there is any media-controller device in the
>>> DCMI driver. Our DeviceMatch currently only looks at media-controller
>>> enabled devices.
>>>
>>>> +
>>>> +  media_ = enumerator->search(dm);
>>>> +  if (!media_)
>>>> +    return false;
>>>> +
>>>> +  media_->acquire();
>>>> +
>>>> +  std::unique_ptr<STM32CameraData> data =
>>>> +      utils::make_unique<STM32CameraData>(this);
>>>> +
>>>> +  /* Locate and open the default video node. */
>>>> +  for (MediaEntity *entity : media_->entities()) {
>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
>>>> +      data->video_ = new V4L2Device(entity);
>>>> +      break;
>>>> +    }
>>>> +  }
>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
>>> correctly match your device?

It will, because the flag is set in Hugues's patches.

Is another way of working possible?

Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how
to use it.

Do you have any advice?

Benjamin

>>>
>>>> +
>>>> +  if (!data->video_) {
>>>> +    LOG(STM32, Error) << "Could not find a default video device";
>>>> +    return false;
>>>> +  }
>>>> +
>>>> +  if (data->video_->open())
>>>> +    return false;
>>>> +
>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
>>>> +
>>>> +  /* Create and register the camera. */
>>>> +  std::set<Stream *> streams{&data->stream_};
>>>> +  std::shared_ptr<Camera> camera =
>>>> +      Camera::create(this, media_->model(), streams);
>>>> +  registerCamera(std::move(camera), std::move(data));
>>>> +
>>>> +  return true;
>>>> +}
>>>> +
>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
>>>> +  Request *request = queuedRequests_.front();
>>>> +
>>>> +  pipe_->completeBuffer(camera_, request, buffer);
>>>> +  pipe_->completeRequest(camera_, request);
>>>> +}
>>>> +
>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
>>>> +
>>>> +} /* namespace libcamera */
>>>>
>>> Thank you for joining the libcamera project! - If there is anything more
>>> we can do to help you - please let us know.
Laurent Pinchart April 17, 2019, 12:11 p.m. UTC | #5
Hi Benjamin,

On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:
> On 4/16/19 1:14 AM, Laurent Pinchart wrote:
> > On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
> >> On 4/9/19 11:02 AM, Kieran Bingham wrote:
> >>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
> >>>> Provide a pipeline handler for the stm32 driver.
> >>> Excellent :-) Thank you for your contribution.
> >>>
> >>> Have you been able to test this and successfully list your camera or
> >>> capture frames?
> >> Yes I have been able to list and capture frames with cam
> >>
> >>> To list successfully registered cameras on the device:
> >>> 	build/src/cam/cam -l
> >>>
> >>> I couldn't see any media-controller support in the stm32-dcmi driver...
> >>> Perhaps I'm missing something.
> >> Hugues has sent the patches for media-controller support a week ago.
> >>
> >> https://lkml.org/lkml/2019/4/1/298
> >>
> >> I have tested libcamera with those patches.
> > I think you may have missed the other comments from Kieran further down.
> 
> Yes, I have completely missed them.
> 
> >>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
> >>>> ---
> >>>>    src/libcamera/pipeline/meson.build       |   2 +
> >>>>    src/libcamera/pipeline/stm32/meson.build |   3 +
> >>>>    src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
> >>>>    3 files changed, 210 insertions(+)
> >>>>    create mode 100644 src/libcamera/pipeline/stm32/meson.build
> >>>>    create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
> >>>>
> >>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
> >>>> index 40bb264..08d6e1c 100644
> >>>> --- a/src/libcamera/pipeline/meson.build
> >>>> +++ b/src/libcamera/pipeline/meson.build
> >>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
> >>>>    ])
> >>>>    
> >>>>    subdir('ipu3')
> >>>> +
> >>>> +subdir('stm32')
> >>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
> >>>> new file mode 100644
> >>>> index 0000000..cb6f16b
> >>>> --- /dev/null
> >>>> +++ b/src/libcamera/pipeline/stm32/meson.build
> >>>> @@ -0,0 +1,3 @@
> >>>> +libcamera_sources += files([
> >>>> +    'stm32.cpp',
> >>>> +])
> >>> Currently the stm32.cpp seems like a fairly simple implementation, which
> >>> fits in its own file, much like pipeline/uvcvideo.cpp.
> >>>
> >>> Do you forsee needing to break the STM32 handler into multiple files
> >>> later, or support other components on the STM32?
> >>>
> >>> I think it's fine if you want to have stm32 as a subdir either way  though.
> >>>
> >>> I wonder also, if it is just a 'simple' device if perhaps we should move
> >>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
> >>> subclassed to simply add the device matching, or provide a table of
> >>> supported match strings.
> >>>
> >>>
> >>> What hardware is available on the STM32 for processing frames?
> >>> Do you have a scaler/resizer? or any further ISP functionality?
> 
> No scaler, no resizer and no ISP functionality.
> 
> It is a basic device-to-memory driver.

In that case I wonder if we should really have an stm32-specific
pipeline handler. It seems to me that we could implement a generic
pipeline handler that would support any graph made of a sensor subdev
connected to a video node.

> >>> It would be interesting to see the output of 'media-ctl -p' for your
> >>> platform.
> 
> The output of media-ctl -p (with Hugues's patches):
> 
> Media controller API version 4.19.24
> 
> Media device information
> ------------------------
> driver          stm32-dcmi
> model           stm32-dcmi
> serial
> bus info        platform:stm32-dcmi
> hw revision     0x0
> driver version  4.19.24
> 
> Device topology
> - entity 1: stm32_dcmi (1 pad, 1 link)
>              type Node subtype V4L flags 1
>              device node name /dev/video0
>          pad0: Sink
>                  <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
> 
> - entity 5: ov5640 0-003c (1 pad, 1 link)
>              type V4L2 subdev subtype Sensor flags 0
>              device node name /dev/v4l-subdev0
>          pad0: Source
>                  [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg xfer:srgb ycbcr:601 quantization:full-range]
>                  -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
> 
> >>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>> new file mode 100644
> >>>> index 0000000..301fdfc
> >>>> --- /dev/null
> >>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>> @@ -0,0 +1,205 @@
> >>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> >>>> +/*
> >>>> + * stm32.cpp - Pipeline handler for stm32 devices
> >>>> + */
> >>>> +
> >>>> +#include <libcamera/camera.h>
> >>>> +#include <libcamera/request.h>
> >>>> +#include <libcamera/stream.h>
> >>>> +
> >>>> +#include "device_enumerator.h"
> >>>> +#include "log.h"
> >>>> +#include "media_device.h"
> >>>> +#include "pipeline_handler.h"
> >>>> +#include "utils.h"
> >>>> +#include "v4l2_device.h"
> >>>> +
> >>>> +namespace libcamera {
> >>>> +
> >>>> +LOG_DEFINE_CATEGORY(STM32)
> >>>> +
> >>>> +class PipelineHandlerSTM32 : public PipelineHandler {
> >>>> +public:
> >>>> +  PipelineHandlerSTM32(CameraManager *manager);
> >>>> +  ~PipelineHandlerSTM32();
> >>> Our coding style uses a tab to indent.
> >>>
> >>> When you have committed your code, you can run ./utils/checkstyle.py to
> >>> check the most recent commit (or you can specify a range suitable for git).
> >>>
> >>> This requires installing clang-format ideally, although astyle is also
> >>> supported to some degree.
> 
> I have fixed the clang-format package version on my side; it will be better in v2.

Thank you.

> >>>
> >>>> +
> >>>> +  std::map<Stream *, StreamConfiguration>
> >>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
> >>>> +  int configureStreams(
> >>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
> >>>> +
> >>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
> >>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
> >>>> +
> >>>> +  int start(Camera *camera) override;
> >>>> +  void stop(Camera *camera) override;
> >>>> +
> >>>> +  int queueRequest(Camera *camera, Request *request) override;
> >>>> +
> >>>> +  bool match(DeviceEnumerator *enumerator);
> >>>> +
> >>>> +private:
> >>>> +  class STM32CameraData : public CameraData {
> >>>> +  public:
> >>>> +    STM32CameraData(PipelineHandler *pipe)
> >>>> +        : CameraData(pipe), video_(nullptr) {}
> >>>> +
> >>>> +    ~STM32CameraData() { delete video_; }
> >>>> +
> >>>> +    void bufferReady(Buffer *buffer);
> >>>> +
> >>>> +    V4L2Device *video_;
> >>>> +    Stream stream_;
> >>>> +  };
> >>>> +
> >>>> +  STM32CameraData *cameraData(const Camera *camera) {
> >>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
> >>>> +  }
> >>>> +
> >>>> +  std::shared_ptr<MediaDevice> media_;
> >>>> +};
> >>>> +
> >>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
> >>>> +    : PipelineHandler(manager), media_(nullptr) {}
> >>>> +
> >>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
> >>>> +  if (media_)
> >>>> +    media_->release();
> >>>> +}
> >>>> +
> >>>> +std::map<Stream *, StreamConfiguration>
> >>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
> >>>> +                                          std::set<Stream *> &streams) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +
> >>>> +  std::map<Stream *, StreamConfiguration> configs;
> >>>> +  StreamConfiguration config{};
> >>>> +
> >>>> +  LOG(STM32, Debug) << "Retrieving default format";
> >>>> +  config.width = 640;
> >>>> +  config.height = 480;
> >>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
> >>>> +  config.bufferCount = 4;
> >>>> +
> >>>> +  configs[&data->stream_] = config;
> >>>> +
> >>>> +  return configs;
> >>>> +}
> >>>> +
> >>>> +int PipelineHandlerSTM32::configureStreams(
> >>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  StreamConfiguration *cfg = &config[&data->stream_];
> >>>> +  int ret;
> >>>> +
> >>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
> >>>> +                    << "x" << cfg->height;
> >>>> +
> >>>> +  V4L2DeviceFormat format = {};
> >>>> +  format.width = cfg->width;
> >>>> +  format.height = cfg->height;
> >>>> +  format.fourcc = cfg->pixelFormat;
> >>>> +
> >>>> +  ret = data->video_->setFormat(&format);
> >>>> +  if (ret)
> >>>> +    return ret;
> >>>> +
> >>>> +  if (format.width != cfg->width || format.height != cfg->height ||
> >>>> +      format.fourcc != cfg->pixelFormat)
> >>>> +    return -EINVAL;
> >>>> +
> >>>> +  return 0;
> >>>> +}
> >>>> +
> >>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  const StreamConfiguration &cfg = stream->configuration();
> >>>> +
> >>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
> >>>> +
> >>>> +  return data->video_->exportBuffers(&stream->bufferPool());
> >>>> +}
> >>>> +
> >>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  return data->video_->releaseBuffers();
> >>>> +}
> >>>> +
> >>>> +int PipelineHandlerSTM32::start(Camera *camera) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  return data->video_->streamOn();
> >>>> +}
> >>>> +
> >>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  data->video_->streamOff();
> >>>> +  PipelineHandler::stop(camera);
> >>>> +}
> >>>> +
> >>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
> >>>> +  STM32CameraData *data = cameraData(camera);
> >>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
> >>>> +  if (!buffer) {
> >>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
> >>>> +
> >>>> +    return -ENOENT;
> >>>> +  }
> >>>> +
> >>>> +  int ret = data->video_->queueBuffer(buffer);
> >>>> +  if (ret < 0)
> >>>> +    return ret;
> >>>> +
> >>>> +  PipelineHandler::queueRequest(camera, request);
> >>>> +
> >>>> +  return 0;
> >>>> +}
> >>>> +
> >>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
> >>>> +  DeviceMatch dm("stm32-dcmi");
> >>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
> >>> ever a need to interact with this subdevice directly?
> >>>
> >>> Or are all controls handled through the DCMI driver?
> >>>
> >>> I think it looks like everything simply goes through the stm32-dcmi
> >>> V4L2Device interface.
> 
> Until today, yes, but Hugues is working on media-controller awareness in
> the V4L2 driver.
> 
> >>> That said, I can't see if there is any media-controller device in the
> >>> DCMI driver. Our DeviceMatch currently only looks at media-controller
> >>> enabled devices.
> >>>
> >>>> +
> >>>> +  media_ = enumerator->search(dm);
> >>>> +  if (!media_)
> >>>> +    return false;
> >>>> +
> >>>> +  media_->acquire();
> >>>> +
> >>>> +  std::unique_ptr<STM32CameraData> data =
> >>>> +      utils::make_unique<STM32CameraData>(this);
> >>>> +
> >>>> +  /* Locate and open the default video node. */
> >>>> +  for (MediaEntity *entity : media_->entities()) {
> >>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
> >>>> +      data->video_ = new V4L2Device(entity);
> >>>> +      break;
> >>>> +    }
> >>>> +  }
> >>> 
> >>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
> >>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
> >>> correctly match your device?
> 
> It will, because the flag is set in Hugues's patches.
> 
> Is another way of working possible?
> 
> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how
> to use it.

There's a single video node in your media graph, so there's no real need
to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,
can't you ?

> Do you have any advice?
> 
> >>>> +
> >>>> +  if (!data->video_) {
> >>>> +    LOG(STM32, Error) << "Could not find a default video device";
> >>>> +    return false;
> >>>> +  }
> >>>> +
> >>>> +  if (data->video_->open())
> >>>> +    return false;
> >>>> +
> >>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
> >>>> +
> >>>> +  /* Create and register the camera. */
> >>>> +  std::set<Stream *> streams{&data->stream_};
> >>>> +  std::shared_ptr<Camera> camera =
> >>>> +      Camera::create(this, media_->model(), streams);
> >>>> +  registerCamera(std::move(camera), std::move(data));
> >>>> +
> >>>> +  return true;
> >>>> +}
> >>>> +
> >>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
> >>>> +  Request *request = queuedRequests_.front();
> >>>> +
> >>>> +  pipe_->completeBuffer(camera_, request, buffer);
> >>>> +  pipe_->completeRequest(camera_, request);
> >>>> +}
> >>>> +
> >>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
> >>>> +
> >>>> +} /* namespace libcamera */
> >>>>
> >>> Thank you for joining the libcamera project! - If there is anything more
> >>> we can do to help you - please let us know.
Hugues FRUCHET April 18, 2019, 2:22 p.m. UTC | #6
Hi Laurent,

On 4/17/19 2:11 PM, Laurent Pinchart wrote:
> Hi Benjamin,
> 
> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:
>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:
>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:
>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
>>>>>> Provide a pipeline handler for the stm32 driver.
>>>>> Excellent :-) Thank you for your contribution.
>>>>>
>>>>> Have you been able to test this and successfully list your camera or
>>>>> capture frames?
>>>> Yes I have been able to list and capture frames with cam
>>>>
>>>>> To list successfully registered cameras on the device:
>>>>> 	build/src/cam/cam -l
>>>>>
>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...
>>>>> Perhaps I'm missing something.
>>>> Hugues has sent the patches for media-controller support a week ago.
>>>>
>>>> https://lkml.org/lkml/2019/4/1/298
>>>>
>>>> I have tested libcamera with those patches.
>>> I think you may have missed the other comments from Kieran further down.
>>
>> Yes, I have completely missed them.
>>
>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
>>>>>> ---
>>>>>>     src/libcamera/pipeline/meson.build       |   2 +
>>>>>>     src/libcamera/pipeline/stm32/meson.build |   3 +
>>>>>>     src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
>>>>>>     3 files changed, 210 insertions(+)
>>>>>>     create mode 100644 src/libcamera/pipeline/stm32/meson.build
>>>>>>     create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
>>>>>>
>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
>>>>>> index 40bb264..08d6e1c 100644
>>>>>> --- a/src/libcamera/pipeline/meson.build
>>>>>> +++ b/src/libcamera/pipeline/meson.build
>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
>>>>>>     ])
>>>>>>     
>>>>>>     subdir('ipu3')
>>>>>> +
>>>>>> +subdir('stm32')
>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
>>>>>> new file mode 100644
>>>>>> index 0000000..cb6f16b
>>>>>> --- /dev/null
>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build
>>>>>> @@ -0,0 +1,3 @@
>>>>>> +libcamera_sources += files([
>>>>>> +    'stm32.cpp',
>>>>>> +])
>>>>> Currently the stm32.cpp seems like a fairly simple implementation, which
>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.
>>>>>
>>>>> Do you forsee needing to break the STM32 handler into multiple files
>>>>> later, or support other components on the STM32?
>>>>>
>>>>> I think it's fine if you want to have stm32 as a subdir either way  though.
>>>>>
>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move
>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
>>>>> subclassed to simply add the device matching, or provide a table of
>>>>> supported match strings.
>>>>>
>>>>>
>>>>> What hardware is available on the STM32 for processing frames?
>>>>> Do you have a scaler/resizer? or any further ISP functionality?
>>
>> No scaler, no resizer and no ISP functionality.
>>
>> It is a basic device-to-memory driver.
> 
> In that case I wonder if we should really have an stm32-specific
> pipeline handler. It seems to me that we could implement a generic
> pipeline handler that would support any graph made of a sensor subdev
> connected to a video node.

We can also have a bridge in between, so the graph becomes:

root@stm32mp1:~# media-ctl -d /dev/media0 -p
Media controller API version 5.0.0

Media device information
------------------------
driver          stm32-dcmi
model           stm32-dcmi
serial
bus info        platform:stm32-dcmi
hw revision     0x0
driver version  5.0.0

Device topology
- entity 1: stm32_dcmi (1 pad, 1 link)
             type Node subtype V4L flags 1
             device node name /dev/video0
         pad0: Sink
                 <- "mipid02 0-0014":1 [ENABLED,IMMUTABLE]

- entity 5: mipid02 0-0014 (2 pads, 2 links)
             type V4L2 subdev subtype Unknown flags 0
             device node name /dev/v4l-subdev0
         pad0: Sink
                 <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
         pad1: Source
                 -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]

- entity 10: ov5640 0-003c (1 pad, 1 link)
              type V4L2 subdev subtype Sensor flags 0
              device node name /dev/v4l-subdev1
         pad0: Source
                 [fmt:JPEG_1X8/320x240 field:none]
                 -> "mipid02 0-0014":0 [ENABLED,IMMUTABLE]

We have discussed this on irc here:
https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue

And in this case, if I have understood correctly, the user has to use
media-ctl to set the format and resolution on the bridge and the sensor;
setting the resolution and format on the video node alone will not be
sufficient.

Moreover, after having configured the format and resolution using
media-ctl, the same format and resolution must be requested from
GStreamer, otherwise a mismatch will occur:

1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 ! decodebin ! fpsdisplaysink -v

/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = 
image/jpeg, width=(int)320, height=(int)240, 
pixel-aspect-ratio=(fraction)1/1
=> OK, QVGA JPEG is captured

but if I just omit the caps right after v4l2src (we don't force any 
format/resolution), this is broken:
1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v

/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = 
video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944, 
pixel-aspect-ratio=(fe
=> KO, we try to capture 5Mp YUYV while ov5640 is configured to send 
QVGA JPEG...

In summary, if I have understood correctly, the user must ensure that
the media-ctl format/resolution and the V4L2 S_FMT format/resolution
match.


So there are at least two cases to handle: the simple case without a
bridge, and the case with a bridge where the media controller is
involved.

I still feel uncomfortable with the fact that just introducing a bridge
in the middle (which is only a matter of physical link, with no change
in functionality) has an impact all the way up to the user side. What do
you think?

> 
>>>>> It would be interesting to see the output of 'media-ctl -p' for your
>>>>> platform.
>>
>> The output of media-ctl -p (with Hugues's patches):
>>
>> Media controller API version 4.19.24
>>
>> Media device information
>> ------------------------
>> driver          stm32-dcmi
>> model           stm32-dcmi
>> serial
>> bus info        platform:stm32-dcmi
>> hw revision     0x0
>> driver version  4.19.24
>>
>> Device topology
>> - entity 1: stm32_dcmi (1 pad, 1 link)
>>               type Node subtype V4L flags 1
>>               device node name /dev/video0
>>           pad0: Sink
>>                   <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
>>
>> - entity 5: ov5640 0-003c (1 pad, 1 link)
>>               type V4L2 subdev subtype Sensor flags 0
>>               device node name /dev/v4l-subdev0
>>           pad0: Source
>>                   [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg
>> xfer:srgb ycbcr:601 quantization:full-range]
>>                   -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
>>
>>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
>>>>>> new file mode 100644
>>>>>> index 0000000..301fdfc
>>>>>> --- /dev/null
>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
>>>>>> @@ -0,0 +1,205 @@
>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
>>>>>> +/*
>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices
>>>>>> + */
>>>>>> +
>>>>>> +#include <libcamera/camera.h>
>>>>>> +#include <libcamera/request.h>
>>>>>> +#include <libcamera/stream.h>
>>>>>> +
>>>>>> +#include "device_enumerator.h"
>>>>>> +#include "log.h"
>>>>>> +#include "media_device.h"
>>>>>> +#include "pipeline_handler.h"
>>>>>> +#include "utils.h"
>>>>>> +#include "v4l2_device.h"
>>>>>> +
>>>>>> +namespace libcamera {
>>>>>> +
>>>>>> +LOG_DEFINE_CATEGORY(STM32)
>>>>>> +
>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {
>>>>>> +public:
>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);
>>>>>> +  ~PipelineHandlerSTM32();
>>>>> Our coding style uses a tab to indent.
>>>>>
>>>>> When you have committed your code, you can run ./utils/checkstyle.py to
>>>>> check the most recent commit (or you can specify a range suitable for git).
>>>>>
>>>>> This requires installing clang-format ideally, although astyle is also
>>>>> supported to some degree.
>>
>> I have fixed the clang-format package version on my side; it will be better in v2.
> 
> Thank you.
> 
>>>>>
>>>>>> +
>>>>>> +  std::map<Stream *, StreamConfiguration>
>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
>>>>>> +  int configureStreams(
>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
>>>>>> +
>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
>>>>>> +
>>>>>> +  int start(Camera *camera) override;
>>>>>> +  void stop(Camera *camera) override;
>>>>>> +
>>>>>> +  int queueRequest(Camera *camera, Request *request) override;
>>>>>> +
>>>>>> +  bool match(DeviceEnumerator *enumerator);
>>>>>> +
>>>>>> +private:
>>>>>> +  class STM32CameraData : public CameraData {
>>>>>> +  public:
>>>>>> +    STM32CameraData(PipelineHandler *pipe)
>>>>>> +        : CameraData(pipe), video_(nullptr) {}
>>>>>> +
>>>>>> +    ~STM32CameraData() { delete video_; }
>>>>>> +
>>>>>> +    void bufferReady(Buffer *buffer);
>>>>>> +
>>>>>> +    V4L2Device *video_;
>>>>>> +    Stream stream_;
>>>>>> +  };
>>>>>> +
>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {
>>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
>>>>>> +  }
>>>>>> +
>>>>>> +  std::shared_ptr<MediaDevice> media_;
>>>>>> +};
>>>>>> +
>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}
>>>>>> +
>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
>>>>>> +  if (media_)
>>>>>> +    media_->release();
>>>>>> +}
>>>>>> +
>>>>>> +std::map<Stream *, StreamConfiguration>
>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
>>>>>> +                                          std::set<Stream *> &streams) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +
>>>>>> +  std::map<Stream *, StreamConfiguration> configs;
>>>>>> +  StreamConfiguration config{};
>>>>>> +
>>>>>> +  LOG(STM32, Debug) << "Retrieving default format";
>>>>>> +  config.width = 640;
>>>>>> +  config.height = 480;
>>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
>>>>>> +  config.bufferCount = 4;
>>>>>> +
>>>>>> +  configs[&data->stream_] = config;
>>>>>> +
>>>>>> +  return configs;
>>>>>> +}
>>>>>> +
>>>>>> +int PipelineHandlerSTM32::configureStreams(
>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];
>>>>>> +  int ret;
>>>>>> +
>>>>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
>>>>>> +                    << "x" << cfg->height;
>>>>>> +
>>>>>> +  V4L2DeviceFormat format = {};
>>>>>> +  format.width = cfg->width;
>>>>>> +  format.height = cfg->height;
>>>>>> +  format.fourcc = cfg->pixelFormat;
>>>>>> +
>>>>>> +  ret = data->video_->setFormat(&format);
>>>>>> +  if (ret)
>>>>>> +    return ret;
>>>>>> +
>>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||
>>>>>> +      format.fourcc != cfg->pixelFormat)
>>>>>> +    return -EINVAL;
>>>>>> +
>>>>>> +  return 0;
>>>>>> +}
>>>>>> +
>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  const StreamConfiguration &cfg = stream->configuration();
>>>>>> +
>>>>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
>>>>>> +
>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());
>>>>>> +}
>>>>>> +
>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  return data->video_->releaseBuffers();
>>>>>> +}
>>>>>> +
>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  return data->video_->streamOn();
>>>>>> +}
>>>>>> +
>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  data->video_->streamOff();
>>>>>> +  PipelineHandler::stop(camera);
>>>>>> +}
>>>>>> +
>>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
>>>>>> +  if (!buffer) {
>>>>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
>>>>>> +
>>>>>> +    return -ENOENT;
>>>>>> +  }
>>>>>> +
>>>>>> +  int ret = data->video_->queueBuffer(buffer);
>>>>>> +  if (ret < 0)
>>>>>> +    return ret;
>>>>>> +
>>>>>> +  PipelineHandler::queueRequest(camera, request);
>>>>>> +
>>>>>> +  return 0;
>>>>>> +}
>>>>>> +
>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
>>>>>> +  DeviceMatch dm("stm32-dcmi");
>>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
>>>>> ever a need to interact with this subdevice directly?
>>>>>
>>>>> Or are all controls handled through the DCMI driver?
>>>>>
>>>>> I think it looks like everything simply goes through the stm32-dcmi
>>>>> V4L2Device interface.
>>
>> Until today, yes, but Hugues is working on v4l media-controller
>> awareness.
>>
>>>>> That said, I can't see if there is any media-controller device in the
>>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller
>>>>> enabled devices.
>>>>>
>>>>>> +
>>>>>> +  media_ = enumerator->search(dm);
>>>>>> +  if (!media_)
>>>>>> +    return false;
>>>>>> +
>>>>>> +  media_->acquire();
>>>>>> +
>>>>>> +  std::unique_ptr<STM32CameraData> data =
>>>>>> +      utils::make_unique<STM32CameraData>(this);
>>>>>> +
>>>>>> +  /* Locate and open the default video node. */
>>>>>> +  for (MediaEntity *entity : media_->entities()) {
>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
>>>>>> +      data->video_ = new V4L2Device(entity);
>>>>>> +      break;
>>>>>> +    }
>>>>>> +  }
>>>>>
>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
>>>>> correctly match your device?
>>
>> It will because it is in Hugues's patches.
>>
>> Is another way of working possible?
>>
>> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to
>> use it.
> 
> There's a single video node in your media graph, so there's no real need
> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,
> can't you ?
> 
>> Do you have any advice?
>>
>>>>>> +
>>>>>> +  if (!data->video_) {
>>>>>> +    LOG(STM32, Error) << "Could not find a default video device";
>>>>>> +    return false;
>>>>>> +  }
>>>>>> +
>>>>>> +  if (data->video_->open())
>>>>>> +    return false;
>>>>>> +
>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
>>>>>> +
>>>>>> +  /* Create and register the camera. */
>>>>>> +  std::set<Stream *> streams{&data->stream_};
>>>>>> +  std::shared_ptr<Camera> camera =
>>>>>> +      Camera::create(this, media_->model(), streams);
>>>>>> +  registerCamera(std::move(camera), std::move(data));
>>>>>> +
>>>>>> +  return true;
>>>>>> +}
>>>>>> +
>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
>>>>>> +  Request *request = queuedRequests_.front();
>>>>>> +
>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);
>>>>>> +  pipe_->completeRequest(camera_, request);
>>>>>> +}
>>>>>> +
>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
>>>>>> +
>>>>>> +} /* namespace libcamera */
>>>>>>
>>>>> Thank you for joining the libcamera project! - If there is anything more
>>>>> we can do to help you - please let us know.
> 

BR,
Hugues.
Laurent Pinchart April 18, 2019, 4:01 p.m. UTC | #7
Hi Hugues,

On Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:
> On 4/17/19 2:11 PM, Laurent Pinchart wrote:
> > On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:
> >> On 4/16/19 1:14 AM, Laurent Pinchart wrote:
> >>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
> >>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:
> >>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
> >>>>>> Provide a pipeline handler for the stm32 driver.
> >>>>> Excellent :-) Thank you for your contribution.
> >>>>>
> >>>>> Have you been able to test this and successfully list your camera or
> >>>>> capture frames?
> >>>> Yes I have been able to list and capture frames with cam
> >>>>
> >>>>> To list successfully registered cameras on the device:
> >>>>> 	build/src/cam/cam -l
> >>>>>
> >>>>> I couldn't see any media-controller support in the stm32-dcmi driver...
> >>>>> Perhaps I'm missing something.
> >>>> Hugues sent the patches for media-controller support a week ago.
> >>>>
> >>>> https://lkml.org/lkml/2019/4/1/298
> >>>>
> >>>> I have tested libcamera with those patches.
> >>> I think you may have missed the other comments from Kieran further down.
> >>
> >> Yes, I have completely missed them.
> >>
> >>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
> >>>>>> ---
> >>>>>>     src/libcamera/pipeline/meson.build       |   2 +
> >>>>>>     src/libcamera/pipeline/stm32/meson.build |   3 +
> >>>>>>     src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
> >>>>>>     3 files changed, 210 insertions(+)
> >>>>>>     create mode 100644 src/libcamera/pipeline/stm32/meson.build
> >>>>>>     create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>>
> >>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
> >>>>>> index 40bb264..08d6e1c 100644
> >>>>>> --- a/src/libcamera/pipeline/meson.build
> >>>>>> +++ b/src/libcamera/pipeline/meson.build
> >>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
> >>>>>>     ])
> >>>>>>     
> >>>>>>     subdir('ipu3')
> >>>>>> +
> >>>>>> +subdir('stm32')
> >>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
> >>>>>> new file mode 100644
> >>>>>> index 0000000..cb6f16b
> >>>>>> --- /dev/null
> >>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build
> >>>>>> @@ -0,0 +1,3 @@
> >>>>>> +libcamera_sources += files([
> >>>>>> +    'stm32.cpp',
> >>>>>> +])
> >>>>> Currently the stm32.cpp seems like a fairly simple implementation, which
> >>>>> fits in its own file, much like pipeline/uvcvideo.cpp.
> >>>>>
>>>>> Do you foresee needing to break the STM32 handler into multiple files
> >>>>> later, or support other components on the STM32?
> >>>>>
> >>>>> I think it's fine if you want to have stm32 as a subdir either way  though.
> >>>>>
> >>>>> I wonder also, if it is just a 'simple' device if perhaps we should move
> >>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
> >>>>> subclassed to simply add the device matching, or provide a table of
> >>>>> supported match strings.
> >>>>>
> >>>>>
> >>>>> What hardware is available on the STM32 for processing frames?
> >>>>> Do you have a scaler/resizer? or any further ISP functionality?
> >>
> >> No scaler, no resizer, and no ISP functionality.
> >>
> >> It is a basic device to memory driver.
> > 
> > In that case I wonder if we should really have an stm32-specific
> > pipeline handler. It seems to me that we could implement a generic
> > pipeline handler that would support any graph made of a sensor subdev
> > connected to a video node.
> 
> We can also have a bridge in between, so graph becomes:
> 
> root@stm32mp1:~# media-ctl -d /dev/media0 -p
> Media controller API version 5.0.0
> 
> Media device information
> ------------------------
> driver          stm32-dcmi
> model           stm32-dcmi
> serial
> bus info        platform:stm32-dcmi
> hw revision     0x0
> driver version  5.0.0
> 
> Device topology
> - entity 1: stm32_dcmi (1 pad, 1 link)
>              type Node subtype V4L flags 1
>              device node name /dev/video0
>          pad0: Sink
>                  <- "mipid02 0-0014":1 [ENABLED,IMMUTABLE]
> 
> - entity 5: mipid02 0-0014 (2 pads, 2 links)
>              type V4L2 subdev subtype Unknown flags 0
>              device node name /dev/v4l-subdev0
>          pad0: Sink
>                  <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
>          pad1: Source
>                  -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]

This is a chip external to the SoC, right ?

> - entity 10: ov5640 0-003c (1 pad, 1 link)
>               type V4L2 subdev subtype Sensor flags 0
>               device node name /dev/v4l-subdev1
>          pad0: Source
>                  [fmt:JPEG_1X8/320x240 field:none]
>                  -> "mipid02 0-0014":0 [ENABLED,IMMUTABLE]
> 
> We have discussed this on irc here:
> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue
> 
> And in this case, if I have understood correctly, the user has to use
> media-ctl to set format and resolution on the bridge and sensor; just
> setting the resolution and format on the video node will not be sufficient.

Userspace has to configure the pipeline. The media-ctl tool is one way
to do so, but not the only one. libcamera aims to solve this issue
by calling the media controller API directly from the pipeline handlers,
completely transparently for the application.

> Moreover, after having configured format and resolution using media-ctl,
> the same format and resolution must be asked to GStreamer otherwise a 
> mismatch will occur:
> 
> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640 
> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 ! 
> decodebin ! fpsdisplaysink -v
> 
> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = 
> image/jpeg, width=(int)320, height=(int)240, 
> pixel-aspect-ratio=(fraction)1/1
> => OK, QVGA JPEG is captured
> 
> but if I just omit the caps right after v4l2src (we don't force any 
> format/resolution), this is broken:
> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640 
> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
> 2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v
> 
> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = 
> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944, 
> pixel-aspect-ratio=(fe
> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send 
> QVGA JPEG...
> 
> In summary, if I have understood correctly, the user must ensure that the
> media-ctl format/resolution and the v4l2 S_FMT format/resolution match.

Userspace does, but not the end-user :-)

> So there are 2 cases to handle at least, the simple case without bridge 
> and the case with bridge where MC is involved.
> 
> I still don't feel comfortable with the fact that merely introducing a
> bridge in the middle (which is just a matter of physical link, with no
> change in functionality) has an impact all the way up to the user side.
> What do you think?

The simplest case of a sensor connected to a camera receiver followed by
a DMA engine has traditionally been supported in V4L2 by a single video
device node. At the other extreme, very complex pipelines require usage
of the media controller API, with possibly complex logic in userspace to
connect all the pieces together. Then we have all the cases in-between,
with various degrees of complexity, and we need to draw the line
somewhere. Now that libcamera is becoming a reality, I think it's best
for kernelspace to be as simple as possible, and let userspace handle
the pipeline configuration in as many cases as possible. This will be
transparent for the end-user, both with native libcamera applications,
and for gstreamer-based applications once we have a gstreamer element
for libcamera (development hasn't started yet, but that's something we
want to have). Note that we also plan to offer a V4L2 adaptation layer
that will allow unmodified pure V4L2 applications to use a camera
handled by libcamera, and in theory gstreamer could use that, but it
will be limited to best effort, as not all features can be exposed that
way. A native gstreamer libcamera element will likely be better.

> >>>>> It would be interesting to see the output of 'media-ctl -p' for your
> >>>>> platform.
> >>
> >> The output of media-ctl -p (with Hugues's patches)
> >>
> >> Media controller API version 4.19.24
> >>
> >> Media device information
> >> ------------------------
> >> driver          stm32-dcmi
> >> model           stm32-dcmi
> >> serial
> >> bus info        platform:stm32-dcmi
> >> hw revision     0x0
> >> driver version  4.19.24
> >>
> >> Device topology
> >> - entity 1: stm32_dcmi (1 pad, 1 link)
> >>               type Node subtype V4L flags 1
> >>               device node name /dev/video0
> >>           pad0: Sink
> >>                   <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
> >>
> >> - entity 5: ov5640 0-003c (1 pad, 1 link)
> >>               type V4L2 subdev subtype Sensor flags 0
> >>               device node name /dev/v4l-subdev0
> >>           pad0: Source
> >>                   [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg
> >> xfer:srgb ycbcr:601 quantization:full-range]
> >>                   -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
> >>
> >>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>> new file mode 100644
> >>>>>> index 0000000..301fdfc
> >>>>>> --- /dev/null
> >>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>> @@ -0,0 +1,205 @@
> >>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> >>>>>> +/*
> >>>>>> + * stm32.cpp - Pipeline handler for stm32 devices
> >>>>>> + */
> >>>>>> +
> >>>>>> +#include <libcamera/camera.h>
> >>>>>> +#include <libcamera/request.h>
> >>>>>> +#include <libcamera/stream.h>
> >>>>>> +
> >>>>>> +#include "device_enumerator.h"
> >>>>>> +#include "log.h"
> >>>>>> +#include "media_device.h"
> >>>>>> +#include "pipeline_handler.h"
> >>>>>> +#include "utils.h"
> >>>>>> +#include "v4l2_device.h"
> >>>>>> +
> >>>>>> +namespace libcamera {
> >>>>>> +
> >>>>>> +LOG_DEFINE_CATEGORY(STM32)
> >>>>>> +
> >>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {
> >>>>>> +public:
> >>>>>> +  PipelineHandlerSTM32(CameraManager *manager);
> >>>>>> +  ~PipelineHandlerSTM32();
> >>>>> Our coding style uses a tab to indent.
> >>>>>
> >>>>> When you have committed your code, you can run ./utils/checkstyle.py to
> >>>>> check the most recent commit (or you can specify a range suitable for git).
> >>>>>
> >>>>> This requires installing clang-format ideally, although astyle is also
> >>>>> supported to some degree.
> >>
> >> I have fixed the clang-format package version on my side; it will be better in v2.
> > 
> > Thank you.
> > 
> >>>>>> +
> >>>>>> +  std::map<Stream *, StreamConfiguration>
> >>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
> >>>>>> +  int configureStreams(
> >>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
> >>>>>> +
> >>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
> >>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
> >>>>>> +
> >>>>>> +  int start(Camera *camera) override;
> >>>>>> +  void stop(Camera *camera) override;
> >>>>>> +
> >>>>>> +  int queueRequest(Camera *camera, Request *request) override;
> >>>>>> +
> >>>>>> +  bool match(DeviceEnumerator *enumerator);
> >>>>>> +
> >>>>>> +private:
> >>>>>> +  class STM32CameraData : public CameraData {
> >>>>>> +  public:
> >>>>>> +    STM32CameraData(PipelineHandler *pipe)
> >>>>>> +        : CameraData(pipe), video_(nullptr) {}
> >>>>>> +
> >>>>>> +    ~STM32CameraData() { delete video_; }
> >>>>>> +
> >>>>>> +    void bufferReady(Buffer *buffer);
> >>>>>> +
> >>>>>> +    V4L2Device *video_;
> >>>>>> +    Stream stream_;
> >>>>>> +  };
> >>>>>> +
> >>>>>> +  STM32CameraData *cameraData(const Camera *camera) {
> >>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
> >>>>>> +  }
> >>>>>> +
> >>>>>> +  std::shared_ptr<MediaDevice> media_;
> >>>>>> +};
> >>>>>> +
> >>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
> >>>>>> +    : PipelineHandler(manager), media_(nullptr) {}
> >>>>>> +
> >>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
> >>>>>> +  if (media_)
> >>>>>> +    media_->release();
> >>>>>> +}
> >>>>>> +
> >>>>>> +std::map<Stream *, StreamConfiguration>
> >>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
> >>>>>> +                                          std::set<Stream *> &streams) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +
> >>>>>> +  std::map<Stream *, StreamConfiguration> configs;
> >>>>>> +  StreamConfiguration config{};
> >>>>>> +
> >>>>>> +  LOG(STM32, Debug) << "Retrieving default format";
> >>>>>> +  config.width = 640;
> >>>>>> +  config.height = 480;
> >>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
> >>>>>> +  config.bufferCount = 4;
> >>>>>> +
> >>>>>> +  configs[&data->stream_] = config;
> >>>>>> +
> >>>>>> +  return configs;
> >>>>>> +}
> >>>>>> +
> >>>>>> +int PipelineHandlerSTM32::configureStreams(
> >>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];
> >>>>>> +  int ret;
> >>>>>> +
> >>>>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
> >>>>>> +                    << "x" << cfg->height;
> >>>>>> +
> >>>>>> +  V4L2DeviceFormat format = {};
> >>>>>> +  format.width = cfg->width;
> >>>>>> +  format.height = cfg->height;
> >>>>>> +  format.fourcc = cfg->pixelFormat;
> >>>>>> +
> >>>>>> +  ret = data->video_->setFormat(&format);
> >>>>>> +  if (ret)
> >>>>>> +    return ret;
> >>>>>> +
> >>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||
> >>>>>> +      format.fourcc != cfg->pixelFormat)
> >>>>>> +    return -EINVAL;
> >>>>>> +
> >>>>>> +  return 0;
> >>>>>> +}
> >>>>>> +
> >>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  const StreamConfiguration &cfg = stream->configuration();
> >>>>>> +
> >>>>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
> >>>>>> +
> >>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());
> >>>>>> +}
> >>>>>> +
> >>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  return data->video_->releaseBuffers();
> >>>>>> +}
> >>>>>> +
> >>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  return data->video_->streamOn();
> >>>>>> +}
> >>>>>> +
> >>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  data->video_->streamOff();
> >>>>>> +  PipelineHandler::stop(camera);
> >>>>>> +}
> >>>>>> +
> >>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
> >>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
> >>>>>> +  if (!buffer) {
> >>>>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
> >>>>>> +
> >>>>>> +    return -ENOENT;
> >>>>>> +  }
> >>>>>> +
> >>>>>> +  int ret = data->video_->queueBuffer(buffer);
> >>>>>> +  if (ret < 0)
> >>>>>> +    return ret;
> >>>>>> +
> >>>>>> +  PipelineHandler::queueRequest(camera, request);
> >>>>>> +
> >>>>>> +  return 0;
> >>>>>> +}
> >>>>>> +
> >>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
> >>>>>> +  DeviceMatch dm("stm32-dcmi");
> >>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
> >>>>> ever a need to interact with this subdevice directly?
> >>>>>
> >>>>> Or are all controls handled through the DCMI driver?
> >>>>>
> >>>>> I think it looks like everything simply goes through the stm32-dcmi
> >>>>> V4L2Device interface.
> >>
> >> Until today, yes, but Hugues is working on v4l media-controller
> >> awareness.
> >>
> >>>>> That said, I can't see if there is any media-controller device in the
> >>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller
> >>>>> enabled devices.
> >>>>>
> >>>>>> +
> >>>>>> +  media_ = enumerator->search(dm);
> >>>>>> +  if (!media_)
> >>>>>> +    return false;
> >>>>>> +
> >>>>>> +  media_->acquire();
> >>>>>> +
> >>>>>> +  std::unique_ptr<STM32CameraData> data =
> >>>>>> +      utils::make_unique<STM32CameraData>(this);
> >>>>>> +
> >>>>>> +  /* Locate and open the default video node. */
> >>>>>> +  for (MediaEntity *entity : media_->entities()) {
> >>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
> >>>>>> +      data->video_ = new V4L2Device(entity);
> >>>>>> +      break;
> >>>>>> +    }
> >>>>>> +  }
> >>>>>
> >>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
> >>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
> >>>>> correctly match your device?
> >>
> >> It will because it is in Hugues's patches.
> >>
> >> Is another way of working possible?
> >>
> >> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to
> >> use it.
> > 
> > There's a single video node in your media graph, so there's no real need
> > to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,
> > can't you ?
> > 
> >> Do you have any advice?
> >>
> >>>>>> +
> >>>>>> +  if (!data->video_) {
> >>>>>> +    LOG(STM32, Error) << "Could not find a default video device";
> >>>>>> +    return false;
> >>>>>> +  }
> >>>>>> +
> >>>>>> +  if (data->video_->open())
> >>>>>> +    return false;
> >>>>>> +
> >>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
> >>>>>> +
> >>>>>> +  /* Create and register the camera. */
> >>>>>> +  std::set<Stream *> streams{&data->stream_};
> >>>>>> +  std::shared_ptr<Camera> camera =
> >>>>>> +      Camera::create(this, media_->model(), streams);
> >>>>>> +  registerCamera(std::move(camera), std::move(data));
> >>>>>> +
> >>>>>> +  return true;
> >>>>>> +}
> >>>>>> +
> >>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
> >>>>>> +  Request *request = queuedRequests_.front();
> >>>>>> +
> >>>>>> +  pipe_->completeBuffer(camera_, request, buffer);
> >>>>>> +  pipe_->completeRequest(camera_, request);
> >>>>>> +}
> >>>>>> +
> >>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
> >>>>>> +
> >>>>>> +} /* namespace libcamera */
> >>>>>>
> >>>>> Thank you for joining the libcamera project! - If there is anything more
> >>>>> we can do to help you - please let us know.
Benjamin GAIGNARD April 19, 2019, 7:45 a.m. UTC | #8
On 4/18/19 6:01 PM, Laurent Pinchart wrote:
> Hi Hugues,
>
> On Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:
>> On 4/17/19 2:11 PM, Laurent Pinchart wrote:
>>> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:
>>>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:
>>>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
>>>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:
>>>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
>>>>>>>> Provide a pipeline handler for the stm32 driver.
>>>>>>> Excellent :-) Thank you for your contribution.
>>>>>>>
>>>>>>> Have you been able to test this and successfully list your camera or
>>>>>>> capture frames?
>>>>>> Yes I have been able to list and capture frames with cam
>>>>>>
>>>>>>> To list successfully registered cameras on the device:
>>>>>>> 	build/src/cam/cam -l
>>>>>>>
>>>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...
>>>>>>> Perhaps I'm missing something.
>>>>>> Hugues sent the patches for media-controller support a week ago.
>>>>>>
>>>>>> https://lkml.org/lkml/2019/4/1/298
>>>>>>
>>>>>> I have tested libcamera with those patches.
>>>>> I think you may have missed the other comments from Kieran further down.
>>>> Yes, I have completely missed them.
>>>>
>>>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
>>>>>>>> ---
>>>>>>>>      src/libcamera/pipeline/meson.build       |   2 +
>>>>>>>>      src/libcamera/pipeline/stm32/meson.build |   3 +
>>>>>>>>      src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
>>>>>>>>      3 files changed, 210 insertions(+)
>>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/meson.build
>>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
>>>>>>>>
>>>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
>>>>>>>> index 40bb264..08d6e1c 100644
>>>>>>>> --- a/src/libcamera/pipeline/meson.build
>>>>>>>> +++ b/src/libcamera/pipeline/meson.build
>>>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
>>>>>>>>      ])
>>>>>>>>      
>>>>>>>>      subdir('ipu3')
>>>>>>>> +
>>>>>>>> +subdir('stm32')
>>>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
>>>>>>>> new file mode 100644
>>>>>>>> index 0000000..cb6f16b
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build
>>>>>>>> @@ -0,0 +1,3 @@
>>>>>>>> +libcamera_sources += files([
>>>>>>>> +    'stm32.cpp',
>>>>>>>> +])
>>>>>>> Currently the stm32.cpp seems like a fairly simple implementation, which
>>>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.
>>>>>>>
>>>>>>> Do you foresee needing to break the STM32 handler into multiple files
>>>>>>> later, or support other components on the STM32?
>>>>>>>
>>>>>>> I think it's fine if you want to have stm32 as a subdir either way  though.
>>>>>>>
>>>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move
>>>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
>>>>>>> subclassed to simply add the device matching, or provide a table of
>>>>>>> supported match strings.
>>>>>>>
>>>>>>>
>>>>>>> What hardware is available on the STM32 for processing frames?
>>>>>>> Do you have a scaler/resizer? or any further ISP functionality?
>>>> No scaler, no resizer and no ISP functionality.
>>>>
>>>> It is a basic device-to-memory driver.
>>> In that case I wonder if we should really have an stm32-specific
>>> pipeline handler. It seems to me that we could implement a generic
>>> pipeline handler that would support any graph made of a sensor subdev
>>> connected to a video node.
>> We can also have a bridge in between, so graph becomes:
>>
>> root@stm32mp1:~# media-ctl -d /dev/media0 -p
>> Media controller API version 5.0.0
>>
>> Media device information
>> ------------------------
>> driver          stm32-dcmi
>> model           stm32-dcmi
>> serial
>> bus info        platform:stm32-dcmi
>> hw revision     0x0
>> driver version  5.0.0
>>
>> Device topology
>> - entity 1: stm32_dcmi (1 pad, 1 link)
>>               type Node subtype V4L flags 1
>>               device node name /dev/video0
>>           pad0: Sink
>>                   <- "mipid02 0-0014":1 [ENABLED,IMMUTABLE]
>>
>> - entity 5: mipid02 0-0014 (2 pads, 2 links)
>>               type V4L2 subdev subtype Unknown flags 0
>>               device node name /dev/v4l-subdev0
>>           pad0: Sink
>>                   <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
>>           pad1: Source
>>                   -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
> This is a chip external to the SoC, right ?
Yes it is.
>
>> - entity 10: ov5640 0-003c (1 pad, 1 link)
>>                type V4L2 subdev subtype Sensor flags 0
>>                device node name /dev/v4l-subdev1
>>           pad0: Source
>>                   [fmt:JPEG_1X8/320x240 field:none]
>>                   -> "mipid02 0-0014":0 [ENABLED,IMMUTABLE]
>>
>> We have discussed this on irc here:
>> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue
>>
>> And in this case, if I have understood correctly, the user has to use
>> media-ctl to set the format and resolution on both the bridge and the
>> sensor; setting the resolution and format on the video node alone will
>> not be sufficient.
> Userspace has to configure the pipeline. The media-ctl tool is one way
> to do so, but not the only one. libcamera aims at solving this issue,
> and calls the media controller API directly from pipeline handlers
> completely transparently for the application.
>
>> Moreover, after having configured format and resolution using media-ctl,
>> the same format and resolution must be asked to GStreamer otherwise a
>> mismatch will occur:
>>
>> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640
>> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
>> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 !
>> decodebin ! fpsdisplaysink -v
>>
>> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =
>> image/jpeg, width=(int)320, height=(int)240,
>> pixel-aspect-ratio=(fraction)1/1
>> => OK, QVGA JPEG is captured
>>
>> but if I just omit the caps right after v4l2src (we don't force any
>> format/resolution), this is broken:
>> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640
>> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
>> 2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v
>>
>> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps =
>> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944,
>> pixel-aspect-ratio=(fe
>> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send
>> QVGA JPEG...
>>
>> In summary, if I have understood correctly, the user must ensure that
>> the media-ctl format/resolution and the v4l2 S_FMT format/resolution
>> match.
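The mismatch described above disappears once a single piece of code derives both configurations from one request, which is what a pipeline handler does internally. The sketch below uses hypothetical, simplified mirrors of the two formats that must agree (the subdev pad format set via VIDIOC_SUBDEV_S_FMT and the video node format set via VIDIOC_S_FMT), not the real V4L2 ioctl structures; the bus-code table is illustrative, not exhaustive.

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical, simplified mirrors of the two formats that must agree:
 * the sensor/bridge pad format (VIDIOC_SUBDEV_S_FMT) and the video
 * node format (VIDIOC_S_FMT). */
struct PadFormat { uint32_t code; uint32_t width; uint32_t height; };
struct PixFormat { uint32_t fourcc; uint32_t width; uint32_t height; };

/* Map a media bus code to the pixel format produced by the DMA engine.
 * Only one entry, for illustration. */
static uint32_t busCodeToFourcc(uint32_t code)
{
	switch (code) {
	case 0x2008: /* MEDIA_BUS_FMT_YUYV8_2X8 */
		return 0x56595559; /* V4L2_PIX_FMT_YUYV ('YUYV') */
	default:
		return 0;
	}
}

/* Derive both formats from one requested configuration so they can
 * never diverge. */
static void configurePipeline(uint32_t code, uint32_t w, uint32_t h,
			      PadFormat *pad, PixFormat *pix)
{
	pad->code = code;
	pad->width = w;
	pad->height = h;

	pix->fourcc = busCodeToFourcc(code);
	pix->width = w;
	pix->height = h;
}
```

With this structure, the 5Mp-YUYV-vs-QVGA-JPEG disagreement shown in the gst-launch example cannot occur, because the end-user never configures the two halves independently.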
> Userspace does, but not the end-user :-)
>
>> So there are 2 cases to handle at least, the simple case without bridge
>> and the case with bridge where MC is involved.
>>
>> I still feel uncomfortable with the fact that just introducing a
>> bridge in the middle (which is just a matter of a physical link, with
>> no change in functionality) has impacts all the way up to the user
>> side; what do you think ?
> The simplest case of a sensor connected to a camera receiver followed by
> a DMA engine has traditionally been supported in V4L2 by a single video
> device node. At the other extreme, very complex pipelines require usage
> of the media controller API, with possibly complex logic in userspace to
> connect all the pieces together. Then we have all the cases in-between,
> with various degrees of complexity, and we need to draw the line
> somewhere. Now that libcamera is becoming a reality, I think it's best
> for kernelspace to be as simple as possible, and let userspace handle
> the pipeline configuration in as many cases as possible. This will be
> transparent for the end-user, both with native libcamera applications,
> and for gstreamer-based applications when we'll have a gstreamer element
> for libcamera (development hasn't started yet, but that's something we
> want to have). Note that we also plan to offer a V4L2 adaptation layer
> that will allow unmodified pure V4L2 applications to use a camera
> handled by libcamera, and in theory gstreamer could use that, but it
> will be limited to best effort as not all features can be exposed that
> way. A native gstreamer libcamera element will likely be better.
Since the pipeline handler has to manage all the complexity, I will
keep a dedicated file for the stm32 pipeline. Does that sound good to you ?

Benjamin

>
>>>>>>> It would be interesting to see the output of 'media-ctl -p' for your
>>>>>>> platform.
>>>> The output of media-ctl -p (with Hugues's patches)
>>>>
>>>> Media controller API version 4.19.24
>>>>
>>>> Media device information
>>>> ------------------------
>>>> driver          stm32-dcmi
>>>> model           stm32-dcmi
>>>> serial
>>>> bus info        platform:stm32-dcmi
>>>> hw revision     0x0
>>>> driver version  4.19.24
>>>>
>>>> Device topology
>>>> - entity 1: stm32_dcmi (1 pad, 1 link)
>>>>                type Node subtype V4L flags 1
>>>>                device node name /dev/video0
>>>>            pad0: Sink
>>>>                    <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
>>>>
>>>> - entity 5: ov5640 0-003c (1 pad, 1 link)
>>>>                type V4L2 subdev subtype Sensor flags 0
>>>>                device node name /dev/v4l-subdev0
>>>>            pad0: Source
>>>>                    [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg
>>>> xfer:srgb ycbcr:601 quantization:full-range]
>>>>                    -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
>>>>
>>>>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
>>>>>>>> new file mode 100644
>>>>>>>> index 0000000..301fdfc
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
>>>>>>>> @@ -0,0 +1,205 @@
>>>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
>>>>>>>> +/*
>>>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices
>>>>>>>> + */
>>>>>>>> +
>>>>>>>> +#include <libcamera/camera.h>
>>>>>>>> +#include <libcamera/request.h>
>>>>>>>> +#include <libcamera/stream.h>
>>>>>>>> +
>>>>>>>> +#include "device_enumerator.h"
>>>>>>>> +#include "log.h"
>>>>>>>> +#include "media_device.h"
>>>>>>>> +#include "pipeline_handler.h"
>>>>>>>> +#include "utils.h"
>>>>>>>> +#include "v4l2_device.h"
>>>>>>>> +
>>>>>>>> +namespace libcamera {
>>>>>>>> +
>>>>>>>> +LOG_DEFINE_CATEGORY(STM32)
>>>>>>>> +
>>>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {
>>>>>>>> +public:
>>>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);
>>>>>>>> +  ~PipelineHandlerSTM32();
>>>>>>> Our coding style uses a tab to indent.
>>>>>>>
>>>>>>> When you have committed your code, you can run ./utils/checkstyle.py to
>>>>>>> check the most recent commit (or you can specify a range suitable for git).
>>>>>>>
>>>>>>> This requires installing clang-format ideally, although astyle is also
>>>>>>> supported to some degree.
>>>> I have fixed the clang-format package version on my side; it will be better in v2.
>>> Thank you.
>>>
>>>>>>>> +
>>>>>>>> +  std::map<Stream *, StreamConfiguration>
>>>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
>>>>>>>> +  int configureStreams(
>>>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
>>>>>>>> +
>>>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
>>>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
>>>>>>>> +
>>>>>>>> +  int start(Camera *camera) override;
>>>>>>>> +  void stop(Camera *camera) override;
>>>>>>>> +
>>>>>>>> +  int queueRequest(Camera *camera, Request *request) override;
>>>>>>>> +
>>>>>>>> +  bool match(DeviceEnumerator *enumerator);
>>>>>>>> +
>>>>>>>> +private:
>>>>>>>> +  class STM32CameraData : public CameraData {
>>>>>>>> +  public:
>>>>>>>> +    STM32CameraData(PipelineHandler *pipe)
>>>>>>>> +        : CameraData(pipe), video_(nullptr) {}
>>>>>>>> +
>>>>>>>> +    ~STM32CameraData() { delete video_; }
>>>>>>>> +
>>>>>>>> +    void bufferReady(Buffer *buffer);
>>>>>>>> +
>>>>>>>> +    V4L2Device *video_;
>>>>>>>> +    Stream stream_;
>>>>>>>> +  };
>>>>>>>> +
>>>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {
>>>>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
>>>>>>>> +  }
>>>>>>>> +
>>>>>>>> +  std::shared_ptr<MediaDevice> media_;
>>>>>>>> +};
>>>>>>>> +
>>>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
>>>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}
>>>>>>>> +
>>>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
>>>>>>>> +  if (media_)
>>>>>>>> +    media_->release();
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +std::map<Stream *, StreamConfiguration>
>>>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
>>>>>>>> +                                          std::set<Stream *> &streams) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +
>>>>>>>> +  std::map<Stream *, StreamConfiguration> configs;
>>>>>>>> +  StreamConfiguration config{};
>>>>>>>> +
>>>>>>>> +  LOG(STM32, Debug) << "Retrieving default format";
>>>>>>>> +  config.width = 640;
>>>>>>>> +  config.height = 480;
>>>>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
>>>>>>>> +  config.bufferCount = 4;
>>>>>>>> +
>>>>>>>> +  configs[&data->stream_] = config;
>>>>>>>> +
>>>>>>>> +  return configs;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int PipelineHandlerSTM32::configureStreams(
>>>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];
>>>>>>>> +  int ret;
>>>>>>>> +
>>>>>>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
>>>>>>>> +                    << "x" << cfg->height;
>>>>>>>> +
>>>>>>>> +  V4L2DeviceFormat format = {};
>>>>>>>> +  format.width = cfg->width;
>>>>>>>> +  format.height = cfg->height;
>>>>>>>> +  format.fourcc = cfg->pixelFormat;
>>>>>>>> +
>>>>>>>> +  ret = data->video_->setFormat(&format);
>>>>>>>> +  if (ret)
>>>>>>>> +    return ret;
>>>>>>>> +
>>>>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||
>>>>>>>> +      format.fourcc != cfg->pixelFormat)
>>>>>>>> +    return -EINVAL;
>>>>>>>> +
>>>>>>>> +  return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  const StreamConfiguration &cfg = stream->configuration();
>>>>>>>> +
>>>>>>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
>>>>>>>> +
>>>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  return data->video_->releaseBuffers();
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  return data->video_->streamOn();
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  data->video_->streamOff();
>>>>>>>> +  PipelineHandler::stop(camera);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
>>>>>>>> +  STM32CameraData *data = cameraData(camera);
>>>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
>>>>>>>> +  if (!buffer) {
>>>>>>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
>>>>>>>> +
>>>>>>>> +    return -ENOENT;
>>>>>>>> +  }
>>>>>>>> +
>>>>>>>> +  int ret = data->video_->queueBuffer(buffer);
>>>>>>>> +  if (ret < 0)
>>>>>>>> +    return ret;
>>>>>>>> +
>>>>>>>> +  PipelineHandler::queueRequest(camera, request);
>>>>>>>> +
>>>>>>>> +  return 0;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
>>>>>>>> +  DeviceMatch dm("stm32-dcmi");
>>>>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
>>>>>>> ever a need to interact with this subdevice directly?
>>>>>>>
>>>>>>> Or are all controls handled through the DCMI driver?
>>>>>>>
>>>>>>> I think it looks like everything simply goes through the stm32-dcmi
>>>>>>> V4L2Device interface.
>>>> Until today, yes, but Hugues is working on making the V4L driver
>>>> media-controller aware.
>>>>
>>>>>>> That said, I can't see if there is any media-controller device in the
>>>>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller
>>>>>>> enabled devices.
>>>>>>>
>>>>>>>> +
>>>>>>>> +  media_ = enumerator->search(dm);
>>>>>>>> +  if (!media_)
>>>>>>>> +    return false;
>>>>>>>> +
>>>>>>>> +  media_->acquire();
>>>>>>>> +
>>>>>>>> +  std::unique_ptr<STM32CameraData> data =
>>>>>>>> +      utils::make_unique<STM32CameraData>(this);
>>>>>>>> +
>>>>>>>> +  /* Locate and open the default video node. */
>>>>>>>> +  for (MediaEntity *entity : media_->entities()) {
>>>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
>>>>>>>> +      data->video_ = new V4L2Device(entity);
>>>>>>>> +      break;
>>>>>>>> +    }
>>>>>>>> +  }
>>>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
>>>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
>>>>>>> correctly match your device?
>>>> It will, because it is in Hugues's patches.
>>>>
>>>> Is another way of working possible ?
>>>>
>>>> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to
>>>> use it.
>>> There's a single video node in your media graph, so there's no real need
>>> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,
>>> can't you ?
>>>
>>>> Do you have some advices ?
>>>>
>>>>>>>> +
>>>>>>>> +  if (!data->video_) {
>>>>>>>> +    LOG(STM32, Error) << "Could not find a default video device";
>>>>>>>> +    return false;
>>>>>>>> +  }
>>>>>>>> +
>>>>>>>> +  if (data->video_->open())
>>>>>>>> +    return false;
>>>>>>>> +
>>>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
>>>>>>>> +
>>>>>>>> +  /* Create and register the camera. */
>>>>>>>> +  std::set<Stream *> streams{&data->stream_};
>>>>>>>> +  std::shared_ptr<Camera> camera =
>>>>>>>> +      Camera::create(this, media_->model(), streams);
>>>>>>>> +  registerCamera(std::move(camera), std::move(data));
>>>>>>>> +
>>>>>>>> +  return true;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
>>>>>>>> +  Request *request = queuedRequests_.front();
>>>>>>>> +
>>>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);
>>>>>>>> +  pipe_->completeRequest(camera_, request);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
>>>>>>>> +
>>>>>>>> +} /* namespace libcamera */
>>>>>>>>
>>>>>>> Thank you for joining the libcamera project! - If there is anything more
>>>>>>> we can do to help you - please let us know.
Laurent Pinchart April 19, 2019, 10:18 a.m. UTC | #9
Hi Benjamin,

On Fri, Apr 19, 2019 at 07:45:45AM +0000, Benjamin GAIGNARD wrote:
> On 4/18/19 6:01 PM, Laurent Pinchart wrote:
> > On Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:
> >> On 4/17/19 2:11 PM, Laurent Pinchart wrote:
> >>> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:
> >>>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:
> >>>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:
> >>>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:
> >>>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:
> >>>>>>>> Provide a pipeline handler for the stm32 driver.
> >>>>>>> 
> >>>>>>> Excellent :-) Thank you for your contribution.
> >>>>>>> 
> >>>>>>> Have you been able to test this and successfully list your camera or
> >>>>>>> capture frames?
> >>>>>> 
> >>>>>> Yes I have been able to list and capture frames with cam
> >>>>>> 
> >>>>>>> To list successfully registered cameras on the device:
> >>>>>>> 	build/src/cam/cam -l
> >>>>>>>
> >>>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...
> >>>>>>> Perhaps I'm missing something.
> >>>>>> 
> >>>>>> Hugues has send the patches for media controller support a week ago.
> >>>>>>
> >>>>>> https://lkml.org/lkml/2019/4/1/298
> >>>>>>
> >>>>>> I have tested libcamera with those patches.
> >>>>> 
> >>>>> I think you may have missed the other comments from Kieran further down.
> >>>> 
> >>>> Yes, I had completely missed them.
> >>>>
> >>>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
> >>>>>>>> ---
> >>>>>>>>      src/libcamera/pipeline/meson.build       |   2 +
> >>>>>>>>      src/libcamera/pipeline/stm32/meson.build |   3 +
> >>>>>>>>      src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++
> >>>>>>>>      3 files changed, 210 insertions(+)
> >>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/meson.build
> >>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>>>>
> >>>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
> >>>>>>>> index 40bb264..08d6e1c 100644
> >>>>>>>> --- a/src/libcamera/pipeline/meson.build
> >>>>>>>> +++ b/src/libcamera/pipeline/meson.build
> >>>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([
> >>>>>>>>      ])
> >>>>>>>>      
> >>>>>>>>      subdir('ipu3')
> >>>>>>>> +
> >>>>>>>> +subdir('stm32')
> >>>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
> >>>>>>>> new file mode 100644
> >>>>>>>> index 0000000..cb6f16b
> >>>>>>>> --- /dev/null
> >>>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build
> >>>>>>>> @@ -0,0 +1,3 @@
> >>>>>>>> +libcamera_sources += files([
> >>>>>>>> +    'stm32.cpp',
> >>>>>>>> +])
> >>>>>>> 
> >>>>>>> Currently the stm32.cpp seems like a fairly simple implementation, which
> >>>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.
> >>>>>>>
> >>>>>>> Do you foresee needing to break the STM32 handler into multiple files
> >>>>>>> later, or support other components on the STM32?
> >>>>>>>
> >>>>>>> I think it's fine if you want to have stm32 as a subdir either way  though.
> >>>>>>>
> >>>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move
> >>>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be
> >>>>>>> subclassed to simply add the device matching, or provide a table of
> >>>>>>> supported match strings.
> >>>>>>>
> >>>>>>>
> >>>>>>> What hardware is available on the STM32 for processing frames?
> >>>>>>> Do you have a scaler/resizer? or any further ISP functionality?
> >>>> 
> >>>> No scaler, no resizer and no ISP functionality.
> >>>>
> >>>> It is a basic device-to-memory driver.
> >>> 
> >>> In that case I wonder if we should really have an stm32-specific
> >>> pipeline handler. It seems to me that we could implement a generic
> >>> pipeline handler that would support any graph made of a sensor subdev
> >>> connected to a video node.
> >> 
> >> We can also have a bridge in between, so graph becomes:
> >>
> >> root@stm32mp1:~# media-ctl -d /dev/media0 -p
> >> Media controller API version 5.0.0
> >>
> >> Media device information
> >> ------------------------
> >> driver          stm32-dcmi
> >> model           stm32-dcmi
> >> serial
> >> bus info        platform:stm32-dcmi
> >> hw revision     0x0
> >> driver version  5.0.0
> >>
> >> Device topology
> >> - entity 1: stm32_dcmi (1 pad, 1 link)
> >>               type Node subtype V4L flags 1
> >>               device node name /dev/video0
> >>           pad0: Sink
> >>                   <- "mipid02 0-0014":1 [ENABLED,IMMUTABLE]
> >>
> >> - entity 5: mipid02 0-0014 (2 pads, 2 links)
> >>               type V4L2 subdev subtype Unknown flags 0
> >>               device node name /dev/v4l-subdev0
> >>           pad0: Sink
> >>                   <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
> >>           pad1: Source
> >>                   -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
> > This is a chip external to the SoC, right ?
> 
> Yes it is.
> 
> >> - entity 10: ov5640 0-003c (1 pad, 1 link)
> >>                type V4L2 subdev subtype Sensor flags 0
> >>                device node name /dev/v4l-subdev1
> >>           pad0: Source
> >>                   [fmt:JPEG_1X8/320x240 field:none]
> >>                   -> "mipid02 0-0014":0 [ENABLED,IMMUTABLE]
> >>
> >> We have discussed this on irc here:
> >> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue
> >>
> >> And in this case, if I have understood correctly, the user has to use
> >> media-ctl to set the format and resolution on both the bridge and the
> >> sensor; setting the resolution and format on the video node alone will
> >> not be sufficient.
> >
> > Userspace has to configure the pipeline. The media-ctl tool is one way
> > to do so, but not the only one. libcamera aims at solving this issue,
> > and calls the media controller API directly from pipeline handlers
> > completely transparently for the application.
> >
> >> Moreover, after having configured format and resolution using media-ctl,
> >> the same format and resolution must be asked to GStreamer otherwise a
> >> mismatch will occur:
> >>
> >> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640
> >> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
> >> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 !
> >> decodebin ! fpsdisplaysink -v
> >>
> >> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =
> >> image/jpeg, width=(int)320, height=(int)240,
> >> pixel-aspect-ratio=(fraction)1/1
> >> => OK, QVGA JPEG is captured
> >>
> >> but if I just omit the caps right after v4l2src (we don't force any
> >> format/resolution), this is broken:
> >> 1) media-ctl -d /dev/media0 --set-v4l2 "'ov5640
> >> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]"
> >> 2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v
> >>
> >> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps =
> >> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944,
> >> pixel-aspect-ratio=(fe
> >> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send
> >> QVGA JPEG...
> >>
> >> In summary, if I have understood correctly, the user must ensure that
> >> the media-ctl format/resolution and the v4l2 S_FMT format/resolution
> >> match.
> >
> > Userspace does, but not the end-user :-)
> >
> >> So there are 2 cases to handle at least, the simple case without bridge
> >> and the case with bridge where MC is involved.
> >>
> >> I still feel uncomfortable with the fact that just introducing a
> >> bridge in the middle (which is just a matter of a physical link, with
> >> no change in functionality) has impacts all the way up to the user
> >> side; what do you think ?
> >
> > The simplest case of a sensor connected to a camera receiver followed by
> > a DMA engine has traditionally been supported in V4L2 by a single video
> > device node. At the other extreme, very complex pipelines require usage
> > of the media controller API, with possibly complex logic in userspace to
> > connect all the pieces together. Then we have all the cases in-between,
> > with various degrees of complexity, and we need to draw the line
> > somewhere. Now that libcamera is becoming a reality, I think it's best
> > for kernelspace to be as simple as possible, and let userspace handle
> > the pipeline configuration in as many cases as possible. This will be
> > transparent for the end-user, both with native libcamera applications,
> > and for gstreamer-based applications when we'll have a gstreamer element
> > for libcamera (development hasn't started yet, but that's something we
> > want to have). Note that we also plan to offer a V4L2 adaptation layer
> > that will allow unmodified pure V4L2 applications to use a camera
> > handled by libcamera, and in theory gstreamer could use that, but it
> > will be limited to best effort as not all features can be exposed that
> > way. A native gstreamer libcamera element will likely be better.
> 
> Since the pipeline handler has to manage all the complexity, I will
> keep a dedicated file for the stm32 pipeline. Does that sound good to you ?

But the code isn't tied to the STM32 in any way, is it ? I still think
it should be made generic to support any camera that has an identical
architecture. In particular handling of the bridge should go to a helper
class, probably related to the recently added CameraSensor class.
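One way to picture the bridge helper suggested here: propagate the requested format along the sensor → bridge → video node chain, asking each subdev what it emits for a given sink format. The class and method names below are hypothetical (not the real libcamera CameraSensor API); a pass-through bridge like the mipid02 simply emits what it receives.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

/* Illustrative sketch only: names are hypothetical, not the real
 * libcamera CameraSensor API. */
struct Format { unsigned int width; unsigned int height; };

class Subdev {
public:
	explicit Subdev(std::string name) : name_(std::move(name)) {}
	virtual ~Subdev() = default;

	/* A pass-through bridge: source format == sink format. */
	virtual Format setFormat(const Format &sink) { return sink; }

	std::string name_;
};

/* Walk the chain from the sensor towards the video node, propagating
 * the format across every link. Returns the format that finally
 * reaches the DMA engine. */
static Format propagateFormat(std::vector<Subdev *> &chain, Format fmt)
{
	for (Subdev *sd : chain)
		fmt = sd->setFormat(fmt);

	return fmt;
}
```

Because every link is configured in one pass, the with-bridge and without-bridge topologies become the same code path: the chain is just one element longer.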

> >>>>>>> It would be interesting to see the output of 'media-ctl -p' for your
> >>>>>>> platform.
> >>>>
> >>>> The output of media-ctl -p (with Hugues's patches)
> >>>>
> >>>> Media controller API version 4.19.24
> >>>>
> >>>> Media device information
> >>>> ------------------------
> >>>> driver          stm32-dcmi
> >>>> model           stm32-dcmi
> >>>> serial
> >>>> bus info        platform:stm32-dcmi
> >>>> hw revision     0x0
> >>>> driver version  4.19.24
> >>>>
> >>>> Device topology
> >>>> - entity 1: stm32_dcmi (1 pad, 1 link)
> >>>>                type Node subtype V4L flags 1
> >>>>                device node name /dev/video0
> >>>>            pad0: Sink
> >>>>                    <- "ov5640 0-003c":0 [ENABLED,IMMUTABLE]
> >>>>
> >>>> - entity 5: ov5640 0-003c (1 pad, 1 link)
> >>>>                type V4L2 subdev subtype Sensor flags 0
> >>>>                device node name /dev/v4l-subdev0
> >>>>            pad0: Source
> >>>>                    [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg
> >>>> xfer:srgb ycbcr:601 quantization:full-range]
> >>>>                    -> "stm32_dcmi":0 [ENABLED,IMMUTABLE]
> >>>>
> >>>>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>>>> new file mode 100644
> >>>>>>>> index 0000000..301fdfc
> >>>>>>>> --- /dev/null
> >>>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp
> >>>>>>>> @@ -0,0 +1,205 @@
> >>>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> >>>>>>>> +/*
> >>>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices
> >>>>>>>> + */
> >>>>>>>> +
> >>>>>>>> +#include <libcamera/camera.h>
> >>>>>>>> +#include <libcamera/request.h>
> >>>>>>>> +#include <libcamera/stream.h>
> >>>>>>>> +
> >>>>>>>> +#include "device_enumerator.h"
> >>>>>>>> +#include "log.h"
> >>>>>>>> +#include "media_device.h"
> >>>>>>>> +#include "pipeline_handler.h"
> >>>>>>>> +#include "utils.h"
> >>>>>>>> +#include "v4l2_device.h"
> >>>>>>>> +
> >>>>>>>> +namespace libcamera {
> >>>>>>>> +
> >>>>>>>> +LOG_DEFINE_CATEGORY(STM32)
> >>>>>>>> +
> >>>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {
> >>>>>>>> +public:
> >>>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);
> >>>>>>>> +  ~PipelineHandlerSTM32();
> >>>>>>> Our coding style uses a tab to indent.
> >>>>>>>
> >>>>>>> When you have committed your code, you can run ./utils/checkstyle.py to
> >>>>>>> check the most recent commit (or you can specify a range suitable for git).
> >>>>>>>
> >>>>>>> This requires installing clang-format ideally, although astyle is also
> >>>>>>> supported to some degree.
> >>>>
> >>>> I have fixed the clang-format package version on my side; it will be better in v2.
> >>>
> >>> Thank you.
> >>>
> >>>>>>>> +
> >>>>>>>> +  std::map<Stream *, StreamConfiguration>
> >>>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
> >>>>>>>> +  int configureStreams(
> >>>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
> >>>>>>>> +
> >>>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;
> >>>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;
> >>>>>>>> +
> >>>>>>>> +  int start(Camera *camera) override;
> >>>>>>>> +  void stop(Camera *camera) override;
> >>>>>>>> +
> >>>>>>>> +  int queueRequest(Camera *camera, Request *request) override;
> >>>>>>>> +
> >>>>>>>> +  bool match(DeviceEnumerator *enumerator);
> >>>>>>>> +
> >>>>>>>> +private:
> >>>>>>>> +  class STM32CameraData : public CameraData {
> >>>>>>>> +  public:
> >>>>>>>> +    STM32CameraData(PipelineHandler *pipe)
> >>>>>>>> +        : CameraData(pipe), video_(nullptr) {}
> >>>>>>>> +
> >>>>>>>> +    ~STM32CameraData() { delete video_; }
> >>>>>>>> +
> >>>>>>>> +    void bufferReady(Buffer *buffer);
> >>>>>>>> +
> >>>>>>>> +    V4L2Device *video_;
> >>>>>>>> +    Stream stream_;
> >>>>>>>> +  };
> >>>>>>>> +
> >>>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {
> >>>>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
> >>>>>>>> +  }
> >>>>>>>> +
> >>>>>>>> +  std::shared_ptr<MediaDevice> media_;
> >>>>>>>> +};
> >>>>>>>> +
> >>>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
> >>>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}
> >>>>>>>> +
> >>>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {
> >>>>>>>> +  if (media_)
> >>>>>>>> +    media_->release();
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +std::map<Stream *, StreamConfiguration>
> >>>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,
> >>>>>>>> +                                          std::set<Stream *> &streams) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +
> >>>>>>>> +  std::map<Stream *, StreamConfiguration> configs;
> >>>>>>>> +  StreamConfiguration config{};
> >>>>>>>> +
> >>>>>>>> +  LOG(STM32, Debug) << "Retrieving default format";
> >>>>>>>> +  config.width = 640;
> >>>>>>>> +  config.height = 480;
> >>>>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;
> >>>>>>>> +  config.bufferCount = 4;
> >>>>>>>> +
> >>>>>>>> +  configs[&data->stream_] = config;
> >>>>>>>> +
> >>>>>>>> +  return configs;
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +int PipelineHandlerSTM32::configureStreams(
> >>>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];
> >>>>>>>> +  int ret;
> >>>>>>>> +
> >>>>>>>> +  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
> >>>>>>>> +                    << "x" << cfg->height;
> >>>>>>>> +
> >>>>>>>> +  V4L2DeviceFormat format = {};
> >>>>>>>> +  format.width = cfg->width;
> >>>>>>>> +  format.height = cfg->height;
> >>>>>>>> +  format.fourcc = cfg->pixelFormat;
> >>>>>>>> +
> >>>>>>>> +  ret = data->video_->setFormat(&format);
> >>>>>>>> +  if (ret)
> >>>>>>>> +    return ret;
> >>>>>>>> +
> >>>>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||
> >>>>>>>> +      format.fourcc != cfg->pixelFormat)
> >>>>>>>> +    return -EINVAL;
> >>>>>>>> +
> >>>>>>>> +  return 0;
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  const StreamConfiguration &cfg = stream->configuration();
> >>>>>>>> +
> >>>>>>>> +  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
> >>>>>>>> +
> >>>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  return data->video_->releaseBuffers();
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  return data->video_->streamOn();
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  data->video_->streamOff();
> >>>>>>>> +  PipelineHandler::stop(camera);
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
> >>>>>>>> +  STM32CameraData *data = cameraData(camera);
> >>>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);
> >>>>>>>> +  if (!buffer) {
> >>>>>>>> +    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
> >>>>>>>> +
> >>>>>>>> +    return -ENOENT;
> >>>>>>>> +  }
> >>>>>>>> +
> >>>>>>>> +  int ret = data->video_->queueBuffer(buffer);
> >>>>>>>> +  if (ret < 0)
> >>>>>>>> +    return ret;
> >>>>>>>> +
> >>>>>>>> +  PipelineHandler::queueRequest(camera, request);
> >>>>>>>> +
> >>>>>>>> +  return 0;
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
> >>>>>>>> +  DeviceMatch dm("stm32-dcmi");
> >>>>>>>
> >>>>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there
> >>>>>>> ever a need to interact with this subdevice directly?
> >>>>>>>
> >>>>>>> Or are all controls handled through the DCMI driver?
> >>>>>>>
> >>>>>>> I think it looks like everything simply goes through the stm32-dcmi
> >>>>>>> V4L2Device interface.
> >>>>
> >>>> Until today, yes, but Hugues is working on making the V4L2 driver
> >>>> media-controller aware.
> >>>>
> >>>>>>> That said, I can't see if there is any media-controller device in the
> >>>>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller
> >>>>>>> enabled devices.
> >>>>>>>
> >>>>>>>> +
> >>>>>>>> +  media_ = enumerator->search(dm);
> >>>>>>>> +  if (!media_)
> >>>>>>>> +    return false;
> >>>>>>>> +
> >>>>>>>> +  media_->acquire();
> >>>>>>>> +
> >>>>>>>> +  std::unique_ptr<STM32CameraData> data =
> >>>>>>>> +      utils::make_unique<STM32CameraData>(this);
> >>>>>>>> +
> >>>>>>>> +  /* Locate and open the default video node. */
> >>>>>>>> +  for (MediaEntity *entity : media_->entities()) {
> >>>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
> >>>>>>>> +      data->video_ = new V4L2Device(entity);
> >>>>>>>> +      break;
> >>>>>>>> +    }
> >>>>>>>> +  }
> >>>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the
> >>>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will
> >>>>>>> correctly match your device?
> >>>>
> >>>> It will, because it is set in Hugues's patches.
> >>>>
> >>>> Is another way of working possible?
> >>>>
> >>>> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and
> >>>> how to use it.
> >>>
> >>> There's a single video node in your media graph, so there's no real need
> >>> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,
> >>> can't you ?
> >>>
> >>>> Do you have any advice?
> >>>>
> >>>>>>>> +
> >>>>>>>> +  if (!data->video_) {
> >>>>>>>> +    LOG(STM32, Error) << "Could not find a default video device";
> >>>>>>>> +    return false;
> >>>>>>>> +  }
> >>>>>>>> +
> >>>>>>>> +  if (data->video_->open())
> >>>>>>>> +    return false;
> >>>>>>>> +
> >>>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
> >>>>>>>> +
> >>>>>>>> +  /* Create and register the camera. */
> >>>>>>>> +  std::set<Stream *> streams{&data->stream_};
> >>>>>>>> +  std::shared_ptr<Camera> camera =
> >>>>>>>> +      Camera::create(this, media_->model(), streams);
> >>>>>>>> +  registerCamera(std::move(camera), std::move(data));
> >>>>>>>> +
> >>>>>>>> +  return true;
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
> >>>>>>>> +  Request *request = queuedRequests_.front();
> >>>>>>>> +
> >>>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);
> >>>>>>>> +  pipe_->completeRequest(camera_, request);
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
> >>>>>>>> +
> >>>>>>>> +} /* namespace libcamera */
> >>>>>>>>
> >>>>>>> Thank you for joining the libcamera project! - If there is anything more
> >>>>>>> we can do to help you - please let us know.

Patch

diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build
index 40bb264..08d6e1c 100644
--- a/src/libcamera/pipeline/meson.build
+++ b/src/libcamera/pipeline/meson.build
@@ -4,3 +4,5 @@  libcamera_sources += files([
 ])
 
 subdir('ipu3')
+
+subdir('stm32')
diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build
new file mode 100644
index 0000000..cb6f16b
--- /dev/null
+++ b/src/libcamera/pipeline/stm32/meson.build
@@ -0,0 +1,3 @@ 
+libcamera_sources += files([
+    'stm32.cpp',
+])
diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp
new file mode 100644
index 0000000..301fdfc
--- /dev/null
+++ b/src/libcamera/pipeline/stm32/stm32.cpp
@@ -0,0 +1,205 @@ 
+/* SPDX-License-Identifier: LGPL-2.1-or-later */
+/*
+ * stm32.cpp - Pipeline handler for stm32 devices
+ */
+
+#include <libcamera/camera.h>
+#include <libcamera/request.h>
+#include <libcamera/stream.h>
+
+#include "device_enumerator.h"
+#include "log.h"
+#include "media_device.h"
+#include "pipeline_handler.h"
+#include "utils.h"
+#include "v4l2_device.h"
+
+namespace libcamera {
+
+LOG_DEFINE_CATEGORY(STM32)
+
+class PipelineHandlerSTM32 : public PipelineHandler {
+public:
+  PipelineHandlerSTM32(CameraManager *manager);
+  ~PipelineHandlerSTM32();
+
+  std::map<Stream *, StreamConfiguration>
+  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;
+  int configureStreams(
+      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;
+
+  int allocateBuffers(Camera *camera, Stream *stream) override;
+  int freeBuffers(Camera *camera, Stream *stream) override;
+
+  int start(Camera *camera) override;
+  void stop(Camera *camera) override;
+
+  int queueRequest(Camera *camera, Request *request) override;
+
+  bool match(DeviceEnumerator *enumerator);
+
+private:
+  class STM32CameraData : public CameraData {
+  public:
+    STM32CameraData(PipelineHandler *pipe)
+        : CameraData(pipe), video_(nullptr) {}
+
+    ~STM32CameraData() { delete video_; }
+
+    void bufferReady(Buffer *buffer);
+
+    V4L2Device *video_;
+    Stream stream_;
+  };
+
+  STM32CameraData *cameraData(const Camera *camera) {
+    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));
+  }
+
+  std::shared_ptr<MediaDevice> media_;
+};
+
+PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)
+    : PipelineHandler(manager), media_(nullptr) {}
+
+PipelineHandlerSTM32::~PipelineHandlerSTM32() {
+  if (media_)
+    media_->release();
+}
+
+std::map<Stream *, StreamConfiguration>
+PipelineHandlerSTM32::streamConfiguration(Camera *camera,
+                                          std::set<Stream *> &streams) {
+  STM32CameraData *data = cameraData(camera);
+
+  std::map<Stream *, StreamConfiguration> configs;
+  StreamConfiguration config{};
+
+  LOG(STM32, Debug) << "Retrieving default format";
+  config.width = 640;
+  config.height = 480;
+  config.pixelFormat = V4L2_PIX_FMT_YUYV;
+  config.bufferCount = 4;
+
+  configs[&data->stream_] = config;
+
+  return configs;
+}
+
+int PipelineHandlerSTM32::configureStreams(
+    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {
+  STM32CameraData *data = cameraData(camera);
+  StreamConfiguration *cfg = &config[&data->stream_];
+  int ret;
+
+  LOG(STM32, Debug) << "Configure the camera for resolution " << cfg->width
+                    << "x" << cfg->height;
+
+  V4L2DeviceFormat format = {};
+  format.width = cfg->width;
+  format.height = cfg->height;
+  format.fourcc = cfg->pixelFormat;
+
+  ret = data->video_->setFormat(&format);
+  if (ret)
+    return ret;
+
+  if (format.width != cfg->width || format.height != cfg->height ||
+      format.fourcc != cfg->pixelFormat)
+    return -EINVAL;
+
+  return 0;
+}
+
+int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {
+  STM32CameraData *data = cameraData(camera);
+  const StreamConfiguration &cfg = stream->configuration();
+
+  LOG(STM32, Debug) << "Requesting " << cfg.bufferCount << " buffers";
+
+  return data->video_->exportBuffers(&stream->bufferPool());
+}
+
+int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {
+  STM32CameraData *data = cameraData(camera);
+  return data->video_->releaseBuffers();
+}
+
+int PipelineHandlerSTM32::start(Camera *camera) {
+  STM32CameraData *data = cameraData(camera);
+  return data->video_->streamOn();
+}
+
+void PipelineHandlerSTM32::stop(Camera *camera) {
+  STM32CameraData *data = cameraData(camera);
+  data->video_->streamOff();
+  PipelineHandler::stop(camera);
+}
+
+int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {
+  STM32CameraData *data = cameraData(camera);
+  Buffer *buffer = request->findBuffer(&data->stream_);
+  if (!buffer) {
+    LOG(STM32, Error) << "Attempt to queue request with invalid stream";
+
+    return -ENOENT;
+  }
+
+  int ret = data->video_->queueBuffer(buffer);
+  if (ret < 0)
+    return ret;
+
+  PipelineHandler::queueRequest(camera, request);
+
+  return 0;
+}
+
+bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {
+  DeviceMatch dm("stm32-dcmi");
+
+  media_ = enumerator->search(dm);
+  if (!media_)
+    return false;
+
+  media_->acquire();
+
+  std::unique_ptr<STM32CameraData> data =
+      utils::make_unique<STM32CameraData>(this);
+
+  /* Locate and open the default video node. */
+  for (MediaEntity *entity : media_->entities()) {
+    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {
+      data->video_ = new V4L2Device(entity);
+      break;
+    }
+  }
+
+  if (!data->video_) {
+    LOG(STM32, Error) << "Could not find a default video device";
+    return false;
+  }
+
+  if (data->video_->open())
+    return false;
+
+  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);
+
+  /* Create and register the camera. */
+  std::set<Stream *> streams{&data->stream_};
+  std::shared_ptr<Camera> camera =
+      Camera::create(this, media_->model(), streams);
+  registerCamera(std::move(camera), std::move(data));
+
+  return true;
+}
+
+void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {
+  Request *request = queuedRequests_.front();
+
+  pipe_->completeBuffer(camera_, request, buffer);
+  pipe_->completeRequest(camera_, request);
+}
+
+REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);
+
+} /* namespace libcamera */