[{"id":1320,"web_url":"https://patchwork.libcamera.org/comment/1320/","msgid":"<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>","date":"2019-04-09T09:02:38","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":4,"url":"https://patchwork.libcamera.org/api/people/4/","name":"Kieran Bingham","email":"kieran.bingham@ideasonboard.com"},"content":"Hi Benjamin,\n\nOn 08/04/2019 13:43, Benjamin Gaignard wrote:\n> Provide a pipeline handler for the stm32 driver.\n\nExcellent :-) Thank you for your contribution.\n\nHave you been able to test this and successfully list your camera or\ncapture frames?\n\nTo list successfully registered cameras on the device:\n\tbuild/src/cam/cam -l\n\nI couldn't see any media-controller support in the stm32-dcmi driver...\nPerhaps I'm missing something.\n\n> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n> ---\n>  src/libcamera/pipeline/meson.build       |   2 +\n>  src/libcamera/pipeline/stm32/meson.build |   3 +\n>  src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n>  3 files changed, 210 insertions(+)\n>  create mode 100644 src/libcamera/pipeline/stm32/meson.build\n>  create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n> \n> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n> index 40bb264..08d6e1c 100644\n> --- a/src/libcamera/pipeline/meson.build\n> +++ b/src/libcamera/pipeline/meson.build\n> @@ -4,3 +4,5 @@ libcamera_sources += files([\n>  ])\n>  \n>  subdir('ipu3')\n> +\n> +subdir('stm32')\n> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n> new file mode 100644\n> index 0000000..cb6f16b\n> --- /dev/null\n> +++ b/src/libcamera/pipeline/stm32/meson.build\n> @@ -0,0 +1,3 @@\n> +libcamera_sources += files([\n> +    'stm32.cpp',\n> +])\n\nCurrently the stm32.cpp seems like a fairly simple implementation, which\nfits in its own 
file, much like pipeline/uvcvideo.cpp.\n\nDo you foresee needing to break the STM32 handler into multiple files\nlater, or support other components on the STM32?\n\nI think it's fine if you want to have stm32 as a subdir either way, though.\n\nI wonder also, if it is just a 'simple' device if perhaps we should move\nthe bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\nsubclassed to simply add the device matching, or provide a table of\nsupported match strings.\n\n\nWhat hardware is available on the STM32 for processing frames?\nDo you have a scaler/resizer? or any further ISP functionality?\n\nIt would be interesting to see the output of 'media-ctl -p' for your\nplatform.\n\n\n> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n> new file mode 100644\n> index 0000000..301fdfc\n> --- /dev/null\n> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n> @@ -0,0 +1,205 @@\n> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> +/*\n> + * stm32.cpp - Pipeline handler for stm32 devices\n> + */\n> +\n> +#include <libcamera/camera.h>\n> +#include <libcamera/request.h>\n> +#include <libcamera/stream.h>\n> +\n> +#include \"device_enumerator.h\"\n> +#include \"log.h\"\n> +#include \"media_device.h\"\n> +#include \"pipeline_handler.h\"\n> +#include \"utils.h\"\n> +#include \"v4l2_device.h\"\n> +\n> +namespace libcamera {\n> +\n> +LOG_DEFINE_CATEGORY(STM32)\n> +\n> +class PipelineHandlerSTM32 : public PipelineHandler {\n> +public:\n> +  PipelineHandlerSTM32(CameraManager *manager);\n> +  ~PipelineHandlerSTM32();\n\nOur coding style uses a tab to indent.\n\nWhen you have committed your code, you can run ./utils/checkstyle.py to\ncheck the most recent commit (or you can specify a range suitable for git).\n\nThis requires installing clang-format ideally, although astyle is also\nsupported to some degree.\n\n> +\n> +  std::map<Stream *, StreamConfiguration>\n> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) 
override;\n> +  int configureStreams(\n> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n> +\n> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n> +  int freeBuffers(Camera *camera, Stream *stream) override;\n> +\n> +  int start(Camera *camera) override;\n> +  void stop(Camera *camera) override;\n> +\n> +  int queueRequest(Camera *camera, Request *request) override;\n> +\n> +  bool match(DeviceEnumerator *enumerator);\n> +\n> +private:\n> +  class STM32CameraData : public CameraData {\n> +  public:\n> +    STM32CameraData(PipelineHandler *pipe)\n> +        : CameraData(pipe), video_(nullptr) {}\n> +\n> +    ~STM32CameraData() { delete video_; }\n> +\n> +    void bufferReady(Buffer *buffer);\n> +\n> +    V4L2Device *video_;\n> +    Stream stream_;\n> +  };\n> +\n> +  STM32CameraData *cameraData(const Camera *camera) {\n> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n> +  }\n> +\n> +  std::shared_ptr<MediaDevice> media_;\n> +};\n> +\n> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n> +    : PipelineHandler(manager), media_(nullptr) {}\n> +\n> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n> +  if (media_)\n> +    media_->release();\n> +}\n> +\n> +std::map<Stream *, StreamConfiguration>\n> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n> +                                          std::set<Stream *> &streams) {\n> +  STM32CameraData *data = cameraData(camera);\n> +\n> +  std::map<Stream *, StreamConfiguration> configs;\n> +  StreamConfiguration config{};\n> +\n> +  LOG(STM32, Debug) << \"Retrieving default format\";\n> +  config.width = 640;\n> +  config.height = 480;\n> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n> +  config.bufferCount = 4;\n> +\n> +  configs[&data->stream_] = config;\n> +\n> +  return configs;\n> +}\n> +\n> +int PipelineHandlerSTM32::configureStreams(\n> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) 
{\n> +  STM32CameraData *data = cameraData(camera);\n> +  StreamConfiguration *cfg = &config[&data->stream_];\n> +  int ret;\n> +\n> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n> +                    << \"x\" << cfg->height;\n> +\n> +  V4L2DeviceFormat format = {};\n> +  format.width = cfg->width;\n> +  format.height = cfg->height;\n> +  format.fourcc = cfg->pixelFormat;\n> +\n> +  ret = data->video_->setFormat(&format);\n> +  if (ret)\n> +    return ret;\n> +\n> +  if (format.width != cfg->width || format.height != cfg->height ||\n> +      format.fourcc != cfg->pixelFormat)\n> +    return -EINVAL;\n> +\n> +  return 0;\n> +}\n> +\n> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n> +  STM32CameraData *data = cameraData(camera);\n> +  const StreamConfiguration &cfg = stream->configuration();\n> +\n> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n> +\n> +  return data->video_->exportBuffers(&stream->bufferPool());\n> +}\n> +\n> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n> +  STM32CameraData *data = cameraData(camera);\n> +  return data->video_->releaseBuffers();\n> +}\n> +\n> +int PipelineHandlerSTM32::start(Camera *camera) {\n> +  STM32CameraData *data = cameraData(camera);\n> +  return data->video_->streamOn();\n> +}\n> +\n> +void PipelineHandlerSTM32::stop(Camera *camera) {\n> +  STM32CameraData *data = cameraData(camera);\n> +  data->video_->streamOff();\n> +  PipelineHandler::stop(camera);\n> +}\n> +\n> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n> +  STM32CameraData *data = cameraData(camera);\n> +  Buffer *buffer = request->findBuffer(&data->stream_);\n> +  if (!buffer) {\n> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n> +\n> +    return -ENOENT;\n> +  }\n> +\n> +  int ret = data->video_->queueBuffer(buffer);\n> +  if (ret < 0)\n> +    return ret;\n> +\n> +  
PipelineHandler::queueRequest(camera, request);\n> +\n> +  return 0;\n> +}\n> +\n> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n> +  DeviceMatch dm(\"stm32-dcmi\");\n\nI see that the stm32-dcmi driver calls down to a subdevice. Is there\never a need to interact with this subdevice directly?\n\nOr are all controls handled through the DCMI driver?\n\nI think it looks like everything simply goes through the stm32-dcmi\nV4L2Device interface.\n\nThat said, I can't see if there is any media-controller device in the\nDCMI driver. Our DeviceMatch currently only looks at media-controller\nenabled devices.\n\n> +\n> +  media_ = enumerator->search(dm);\n> +  if (!media_)\n> +    return false;\n> +\n> +  media_->acquire();\n> +\n> +  std::unique_ptr<STM32CameraData> data =\n> +      utils::make_unique<STM32CameraData>(this);\n> +\n> +  /* Locate and open the default video node. */\n> +  for (MediaEntity *entity : media_->entities()) {\n> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n> +      data->video_ = new V4L2Device(entity);\n> +      break;\n> +    }\n> +  }\n\n\nI think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\nUVCVideo (and omap3isp) pipelines, so I'm not sure if this will\ncorrectly match your device?\n\n\n> +\n> +  if (!data->video_) {\n> +    LOG(STM32, Error) << \"Could not find a default video device\";\n> +    return false;\n> +  }\n> +\n> +  if (data->video_->open())\n> +    return false;\n> +\n> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n> +\n> +  /* Create and register the camera. 
*/\n> +  std::set<Stream *> streams{&data->stream_};\n> +  std::shared_ptr<Camera> camera =\n> +      Camera::create(this, media_->model(), streams);\n> +  registerCamera(std::move(camera), std::move(data));\n> +\n> +  return true;\n> +}\n> +\n> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n> +  Request *request = queuedRequests_.front();\n> +\n> +  pipe_->completeBuffer(camera_, request, buffer);\n> +  pipe_->completeRequest(camera_, request);\n> +}\n> +\n> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n> +\n> +} /* namespace libcamera */\n> \n\nThank you for joining the libcamera project! - If there is anything more\nwe can do to help you - please let us know.","headers":{"Return-Path":"<kieran.bingham@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 9A5E560B2D\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue,  9 Apr 2019 11:02:41 +0200 (CEST)","from [192.168.0.20]\n\t(cpc89242-aztw30-2-0-cust488.18-1.cable.virginm.net [86.31.129.233])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id D10D933A;\n\tTue,  9 Apr 2019 11:02:40 +0200 (CEST)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1554800561;\n\tbh=myGkdeimZC/I46737OId39COAsEhu7PfbJJsb4I5/14=;\n\th=Reply-To:Subject:To:Cc:References:From:Date:In-Reply-To:From;\n\tb=mxM9s2QgoiBxYfzdB3A/byK3Cvw8Xom7jnMdvzTR9vm/kHcsb54EVGCclpQTcnflw\n\tH/lYjVcoMoPmpVQmGMYst/rNfHE3AKNlHP+4IMD5SBr4PpE7sed6lDgD6+9Y8tVYHE\n\tV5A9ZOO/ewGrQch2yxJHthDVU9Ckvqs2WqY7yIWo=","Reply-To":"kieran.bingham@ideasonboard.com","To":"Benjamin Gaignard <benjamin.gaignard@st.com>,\n\tlibcamera-devel@lists.libcamera.org","Cc":"peter.griffin@linaro.org, hugues.fruchet@st.com","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>","From":"Kieran Bingham 
<kieran.bingham@ideasonboard.com>","Openpgp":"preference=signencrypt","Autocrypt":"addr=kieran.bingham@ideasonboard.com; keydata=\n\tmQINBFYE/WYBEACs1PwjMD9rgCu1hlIiUA1AXR4rv2v+BCLUq//vrX5S5bjzxKAryRf0uHat\n\tV/zwz6hiDrZuHUACDB7X8OaQcwhLaVlq6byfoBr25+hbZG7G3+5EUl9cQ7dQEdvNj6V6y/SC\n\trRanWfelwQThCHckbobWiQJfK9n7rYNcPMq9B8e9F020LFH7Kj6YmO95ewJGgLm+idg1Kb3C\n\tpotzWkXc1xmPzcQ1fvQMOfMwdS+4SNw4rY9f07Xb2K99rjMwZVDgESKIzhsDB5GY465sCsiQ\n\tcSAZRxqE49RTBq2+EQsbrQpIc8XiffAB8qexh5/QPzCmR4kJgCGeHIXBtgRj+nIkCJPZvZtf\n\tKr2EAbc6tgg6DkAEHJb+1okosV09+0+TXywYvtEop/WUOWQ+zo+Y/OBd+8Ptgt1pDRyOBzL8\n\tRXa8ZqRf0Mwg75D+dKntZeJHzPRJyrlfQokngAAs4PaFt6UfS+ypMAF37T6CeDArQC41V3ko\n\tlPn1yMsVD0p+6i3DPvA/GPIksDC4owjnzVX9kM8Zc5Cx+XoAN0w5Eqo4t6qEVbuettxx55gq\n\t8K8FieAjgjMSxngo/HST8TpFeqI5nVeq0/lqtBRQKumuIqDg+Bkr4L1V/PSB6XgQcOdhtd36\n\tOe9X9dXB8YSNt7VjOcO7BTmFn/Z8r92mSAfHXpb07YJWJosQOQARAQABtDBLaWVyYW4gQmlu\n\tZ2hhbSA8a2llcmFuLmJpbmdoYW1AaWRlYXNvbmJvYXJkLmNvbT6JAkAEEwEKACoCGwMFCwkI\n\tBwIGFQgJCgsCBBYCAwECHgECF4ACGQEFAlnDk/gFCQeA/YsACgkQoR5GchCkYf3X5w/9EaZ7\n\tcnUcT6dxjxrcmmMnfFPoQA1iQXr/MXQJBjFWfxRUWYzjvUJb2D/FpA8FY7y+vksoJP7pWDL7\n\tQTbksdwzagUEk7CU45iLWL/CZ/knYhj1I/+5LSLFmvZ/5Gf5xn2ZCsmg7C0MdW/GbJ8IjWA8\n\t/LKJSEYH8tefoiG6+9xSNp1p0Gesu3vhje/GdGX4wDsfAxx1rIYDYVoX4bDM+uBUQh7sQox/\n\tR1bS0AaVJzPNcjeC14MS226mQRUaUPc9250aj44WmDfcg44/kMsoLFEmQo2II9aOlxUDJ+x1\n\txohGbh9mgBoVawMO3RMBihcEjo/8ytW6v7xSF+xP4Oc+HOn7qebAkxhSWcRxQVaQYw3S9iZz\n\t2iA09AXAkbvPKuMSXi4uau5daXStfBnmOfalG0j+9Y6hOFjz5j0XzaoF6Pln0jisDtWltYhP\n\tX9LjFVhhLkTzPZB/xOeWGmsG4gv2V2ExbU3uAmb7t1VSD9+IO3Km4FtnYOKBWlxwEd8qOFpS\n\tjEqMXURKOiJvnw3OXe9MqG19XdeENA1KyhK5rqjpwdvPGfSn2V+SlsdJA0DFsobUScD9qXQw\n\tOvhapHe3XboK2+Rd7L+g/9Ud7ZKLQHAsMBXOVJbufA1AT+IaOt0ugMcFkAR5UbBg5+dZUYJj\n\t1QbPQcGmM3wfvuaWV5+SlJ+WeKIb8ta5Ag0EVgT9ZgEQAM4o5G/kmruIQJ3K9SYzmPishRHV\n\tDcUcvoakyXSX2mIoccmo9BHtD9MxIt+QmxOpYFNFM7YofX4lG0ld8H7FqoNVLd/+a0yru5Cx\n\tadeZBe3qr1eLns10Q90LuMo7/6zJhCW2w+HE7xgmCHejAwuNe3+7yt4QmwlSGUqdxl8cgtS1\n\tPlEK93xXDsgsJj/bw1EfSVdAUqhx8UQ3aVFxNug5Opo
X9FdWJLKROUrfNeBE16RLrNrq2ROc\n\tiSFETpVjyC/oZtzRFnwD9Or7EFMi76/xrWzk+/b15RJ9WrpXGMrttHUUcYZEOoiC2lEXMSAF\n\tSSSj4vHbKDJ0vKQdEFtdgB1roqzxdIOg4rlHz5qwOTynueiBpaZI3PHDudZSMR5Fk6QjFooE\n\tXTw3sSl/km/lvUFiv9CYyHOLdygWohvDuMkV/Jpdkfq8XwFSjOle+vT/4VqERnYFDIGBxaRx\n\tkoBLfNDiiuR3lD8tnJ4A1F88K6ojOUs+jndKsOaQpDZV6iNFv8IaNIklTPvPkZsmNDhJMRHH\n\tIu60S7BpzNeQeT4yyY4dX9lC2JL/LOEpw8DGf5BNOP1KgjCvyp1/KcFxDAo89IeqljaRsCdP\n\t7WCIECWYem6pLwaw6IAL7oX+tEqIMPph/G/jwZcdS6Hkyt/esHPuHNwX4guqTbVEuRqbDzDI\n\t2DJO5FbxABEBAAGJAiUEGAEKAA8CGwwFAlnDlGsFCQeA/gIACgkQoR5GchCkYf1yYRAAq+Yo\n\tnbf9DGdK1kTAm2RTFg+w9oOp2Xjqfhds2PAhFFvrHQg1XfQR/UF/SjeUmaOmLSczM0s6XMeO\n\tVcE77UFtJ/+hLo4PRFKm5X1Pcar6g5m4xGqa+Xfzi9tRkwC29KMCoQOag1BhHChgqYaUH3yo\n\tUzaPwT/fY75iVI+yD0ih/e6j8qYvP8pvGwMQfrmN9YB0zB39YzCSdaUaNrWGD3iCBxg6lwSO\n\tLKeRhxxfiXCIYEf3vwOsP3YMx2JkD5doseXmWBGW1U0T/oJF+DVfKB6mv5UfsTzpVhJRgee7\n\t4jkjqFq4qsUGxcvF2xtRkfHFpZDbRgRlVmiWkqDkT4qMA+4q1y/dWwshSKi/uwVZNycuLsz+\n\t+OD8xPNCsMTqeUkAKfbD8xW4LCay3r/dD2ckoxRxtMD9eOAyu5wYzo/ydIPTh1QEj9SYyvp8\n\tO0g6CpxEwyHUQtF5oh15O018z3ZLztFJKR3RD42VKVsrnNDKnoY0f4U0z7eJv2NeF8xHMuiU\n\tRCIzqxX1GVYaNkKTnb/Qja8hnYnkUzY1Lc+OtwiGmXTwYsPZjjAaDX35J/RSKAoy5wGo/YFA\n\tJxB1gWThL4kOTbsqqXj9GLcyOImkW0lJGGR3o/fV91Zh63S5TKnf2YGGGzxki+ADdxVQAm+Q\n\tsbsRB8KNNvVXBOVNwko86rQqF9drZuw=","Organization":"Ideas on Board","Message-ID":"<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>","Date":"Tue, 9 Apr 2019 10:02:38 +0100","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101\n\tThunderbird/60.5.1","MIME-Version":"1.0","In-Reply-To":"<20190408124322.6869-1-benjamin.gaignard@st.com>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-GB","Content-Transfer-Encoding":"8bit","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for 
stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Tue, 09 Apr 2019 09:02:41 -0000"}},{"id":1324,"web_url":"https://patchwork.libcamera.org/comment/1324/","msgid":"<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>","date":"2019-04-09T09:38:48","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":14,"url":"https://patchwork.libcamera.org/api/people/14/","name":"Benjamin GAIGNARD","email":"benjamin.gaignard@st.com"},"content":"On 4/9/19 11:02 AM, Kieran Bingham wrote:\n> Hi Benjamin,\n>\n> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n>> Provide a pipeline handler for the stm32 driver.\n> Excellent :-) Thank you for your contribution.\n>\n> Have you been able to test this and successfully list your camera or\n> capture frames?\nYes I have been able to list and capture frames with cam\n>\n> To list successfully registered cameras on the device:\n> \tbuild/src/cam/cam -l\n>\n> I couldn't see any media-controller support in the stm32-dcmi driver...\n> Perhaps I'm missing something.\n\nHugues has sent the patches for media controller support a week ago.\n\nhttps://lkml.org/lkml/2019/4/1/298\n\nI have tested libcamera with those patches.\n\nBenjamin\n\n>\n>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n>> ---\n>>   src/libcamera/pipeline/meson.build       |   
2 +\n>>   src/libcamera/pipeline/stm32/meson.build |   3 +\n>>   src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n>>   3 files changed, 210 insertions(+)\n>>   create mode 100644 src/libcamera/pipeline/stm32/meson.build\n>>   create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n>>\n>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n>> index 40bb264..08d6e1c 100644\n>> --- a/src/libcamera/pipeline/meson.build\n>> +++ b/src/libcamera/pipeline/meson.build\n>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n>>   ])\n>>   \n>>   subdir('ipu3')\n>> +\n>> +subdir('stm32')\n>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n>> new file mode 100644\n>> index 0000000..cb6f16b\n>> --- /dev/null\n>> +++ b/src/libcamera/pipeline/stm32/meson.build\n>> @@ -0,0 +1,3 @@\n>> +libcamera_sources += files([\n>> +    'stm32.cpp',\n>> +])\n> Currently the stm32.cpp seems like a fairly simple implementation, which\n> fits in its own file, much like pipeline/uvcvideo.cpp.\n>\n> Do you forsee needing to break the STM32 handler into multiple files\n> later, or support other components on the STM32?\n>\n> I think it's fine if you want to have stm32 as a subdir either way  though.\n>\n> I wonder also, if it is just a 'simple' device if perhaps we should move\n> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n> subclassed to simply add the device matching, or provide a table of\n> supported match strings.\n>\n>\n> What hardware is available on the STM32 for processing frames?\n> Do you have a scaler/resizer? 
or any further ISP functionality?\n>\n> It would be interesting to see the output of 'media-ctl -p' for your\n> platform.\n>\n>\n>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n>> new file mode 100644\n>> index 0000000..301fdfc\n>> --- /dev/null\n>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n>> @@ -0,0 +1,205 @@\n>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>> +/*\n>> + * stm32.cpp - Pipeline handler for stm32 devices\n>> + */\n>> +\n>> +#include <libcamera/camera.h>\n>> +#include <libcamera/request.h>\n>> +#include <libcamera/stream.h>\n>> +\n>> +#include \"device_enumerator.h\"\n>> +#include \"log.h\"\n>> +#include \"media_device.h\"\n>> +#include \"pipeline_handler.h\"\n>> +#include \"utils.h\"\n>> +#include \"v4l2_device.h\"\n>> +\n>> +namespace libcamera {\n>> +\n>> +LOG_DEFINE_CATEGORY(STM32)\n>> +\n>> +class PipelineHandlerSTM32 : public PipelineHandler {\n>> +public:\n>> +  PipelineHandlerSTM32(CameraManager *manager);\n>> +  ~PipelineHandlerSTM32();\n> Our coding style uses a tab to indent.\n>\n> When you have committed your code, you can run ./utils/checkstyle.py to\n> check the most recent commit (or you can specify a range suitable for git).\n>\n> This requires installing clang-format ideally, although astyle is also\n> supported to some degree.\n>\n>> +\n>> +  std::map<Stream *, StreamConfiguration>\n>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n>> +  int configureStreams(\n>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n>> +\n>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n>> +\n>> +  int start(Camera *camera) override;\n>> +  void stop(Camera *camera) override;\n>> +\n>> +  int queueRequest(Camera *camera, Request *request) override;\n>> +\n>> +  bool match(DeviceEnumerator *enumerator);\n>> +\n>> +private:\n>> +  class 
STM32CameraData : public CameraData {\n>> +  public:\n>> +    STM32CameraData(PipelineHandler *pipe)\n>> +        : CameraData(pipe), video_(nullptr) {}\n>> +\n>> +    ~STM32CameraData() { delete video_; }\n>> +\n>> +    void bufferReady(Buffer *buffer);\n>> +\n>> +    V4L2Device *video_;\n>> +    Stream stream_;\n>> +  };\n>> +\n>> +  STM32CameraData *cameraData(const Camera *camera) {\n>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n>> +  }\n>> +\n>> +  std::shared_ptr<MediaDevice> media_;\n>> +};\n>> +\n>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n>> +    : PipelineHandler(manager), media_(nullptr) {}\n>> +\n>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n>> +  if (media_)\n>> +    media_->release();\n>> +}\n>> +\n>> +std::map<Stream *, StreamConfiguration>\n>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n>> +                                          std::set<Stream *> &streams) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +\n>> +  std::map<Stream *, StreamConfiguration> configs;\n>> +  StreamConfiguration config{};\n>> +\n>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n>> +  config.width = 640;\n>> +  config.height = 480;\n>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n>> +  config.bufferCount = 4;\n>> +\n>> +  configs[&data->stream_] = config;\n>> +\n>> +  return configs;\n>> +}\n>> +\n>> +int PipelineHandlerSTM32::configureStreams(\n>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  StreamConfiguration *cfg = &config[&data->stream_];\n>> +  int ret;\n>> +\n>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n>> +                    << \"x\" << cfg->height;\n>> +\n>> +  V4L2DeviceFormat format = {};\n>> +  format.width = cfg->width;\n>> +  format.height = cfg->height;\n>> +  format.fourcc = cfg->pixelFormat;\n>> +\n>> +  ret = 
data->video_->setFormat(&format);\n>> +  if (ret)\n>> +    return ret;\n>> +\n>> +  if (format.width != cfg->width || format.height != cfg->height ||\n>> +      format.fourcc != cfg->pixelFormat)\n>> +    return -EINVAL;\n>> +\n>> +  return 0;\n>> +}\n>> +\n>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  const StreamConfiguration &cfg = stream->configuration();\n>> +\n>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n>> +\n>> +  return data->video_->exportBuffers(&stream->bufferPool());\n>> +}\n>> +\n>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  return data->video_->releaseBuffers();\n>> +}\n>> +\n>> +int PipelineHandlerSTM32::start(Camera *camera) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  return data->video_->streamOn();\n>> +}\n>> +\n>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  data->video_->streamOff();\n>> +  PipelineHandler::stop(camera);\n>> +}\n>> +\n>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n>> +  STM32CameraData *data = cameraData(camera);\n>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n>> +  if (!buffer) {\n>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n>> +\n>> +    return -ENOENT;\n>> +  }\n>> +\n>> +  int ret = data->video_->queueBuffer(buffer);\n>> +  if (ret < 0)\n>> +    return ret;\n>> +\n>> +  PipelineHandler::queueRequest(camera, request);\n>> +\n>> +  return 0;\n>> +}\n>> +\n>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n>> +  DeviceMatch dm(\"stm32-dcmi\");\n> I see that the stm32-dcmi driver calls down to a subdevice. 
Is there\n> ever a need to interact with this subdevice directly?\n>\n> Or are all controls handled through the DCMI driver?\n>\n> I think it looks like everything simply goes through the stm32-dcmi\n> V4L2Device interface.\n>\n> That said, I can't see if there is any media-controller device in the\n> DCMI driver. Our DeviceMatch currently only looks at media-controller\n> enabled devices.\n>\n>> +\n>> +  media_ = enumerator->search(dm);\n>> +  if (!media_)\n>> +    return false;\n>> +\n>> +  media_->acquire();\n>> +\n>> +  std::unique_ptr<STM32CameraData> data =\n>> +      utils::make_unique<STM32CameraData>(this);\n>> +\n>> +  /* Locate and open the default video node. */\n>> +  for (MediaEntity *entity : media_->entities()) {\n>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n>> +      data->video_ = new V4L2Device(entity);\n>> +      break;\n>> +    }\n>> +  }\n>\n> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n> correctly match your device?\n>\n>\n>> +\n>> +  if (!data->video_) {\n>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n>> +    return false;\n>> +  }\n>> +\n>> +  if (data->video_->open())\n>> +    return false;\n>> +\n>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n>> +\n>> +  /* Create and register the camera. 
*/\n>> +  std::set<Stream *> streams{&data->stream_};\n>> +  std::shared_ptr<Camera> camera =\n>> +      Camera::create(this, media_->model(), streams);\n>> +  registerCamera(std::move(camera), std::move(data));\n>> +\n>> +  return true;\n>> +}\n>> +\n>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n>> +  Request *request = queuedRequests_.front();\n>> +\n>> +  pipe_->completeBuffer(camera_, request, buffer);\n>> +  pipe_->completeRequest(camera_, request);\n>> +}\n>> +\n>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n>> +\n>> +} /* namespace libcamera */\n>>\n> Thank you for joining the libcamera project! - If there is anything more\n> we can do to help you - please let us know.\n>","headers":{"Return-Path":"<benjamin.gaignard@st.com>","Received":["from mx07-00178001.pphosted.com (mx07-00178001.pphosted.com\n\t[62.209.51.94])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 760A460B2D\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue,  9 Apr 2019 11:38:53 +0200 (CEST)","from pps.filterd (m0046668.ppops.net [127.0.0.1])\n\tby mx07-00178001.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id\n\tx399bNFG031953; Tue, 9 Apr 2019 11:38:49 +0200","from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])\n\tby mx07-00178001.pphosted.com with ESMTP id 2rprcnywh1-1\n\t(version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NOT);\n\tTue, 09 Apr 2019 11:38:49 +0200","from zeta.dmz-eu.st.com (zeta.dmz-eu.st.com [164.129.230.9])\n\tby beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id E65DE34;\n\tTue,  9 Apr 2019 09:38:48 +0000 (GMT)","from Webmail-eu.st.com (gpxdag5node6.st.com [10.75.127.79])\n\tby zeta.dmz-eu.st.com (STMicroelectronics) with ESMTP id AC83B27D3;\n\tTue,  9 Apr 2019 09:38:48 +0000 (GMT)","from GPXDAG3NODE6.st.com (10.75.127.73) by GPXDAG5NODE6.st.com\n\t(10.75.127.79) with Microsoft SMTP Server (TLS) id 15.0.1347.2;\n\tTue, 9 Apr 2019 11:38:48 +0200","from GPXDAG3NODE6.st.com ([fe80::fd7b:eed7:eb29:8581]) 
by\n\tGPXDAG3NODE6.st.com ([fe80::fd7b:eed7:eb29:8581%19]) with mapi id\n\t15.00.1347.000; Tue, 9 Apr 2019 11:38:48 +0200"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=st.com;\n\th=from : to : cc : subject\n\t: date : message-id : references : in-reply-to : content-type :\n\tcontent-id\n\t: content-transfer-encoding : mime-version; s=STMicroelectronics;\n\tbh=IkfvqZwWesr4WGYcMw4JwN1/dlFLVRWpE2NxxwOCkow=;\n\tb=Oi4i4KWMR+3D5pegXbHTmkROL8EFFhJOmQFq35zLIjiMiYkCqlzzeafK07cQlLX3PY3q\n\tXSsK5XFzkbR9iWYU6O2eCWgIzMb2REFchDpN5rj9Zqyvna0gWZprUV7fVD2tT6/2SHwL\n\tp9dhdTcOPgfxISCOy0ZpWN0qXkEW9IBl4/AQ9VX2+fQ6VRDVaFWLmnuQwMq9/FMDYdN9\n\tbjIkOAXuj/dgtk/Dv+RAgq3tjGZnxQhtT/QFeYlrXKy1vXudkhcIolulxxubbdjuyB0P\n\tq6D6RtyDmXiv1jKdwQoxPPvzp9oxXulcxHYM+y9BApR5PesGT6//dkRRVfvrk2v0YCQD\n\tXA== ","From":"Benjamin GAIGNARD <benjamin.gaignard@st.com>","To":"\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>","CC":"\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>, Hugues FRUCHET\n\t<hugues.fruchet@st.com>","Thread-Topic":"[libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","Thread-Index":"AQHU7girxkW0QpRzE0KeuHPcIgbIO6YzaF0AgAAKGYA=","Date":"Tue, 9 Apr 2019 09:38:48 +0000","Message-ID":"<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>\n\t<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>","In-Reply-To":"<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>","Accept-Language":"en-US","Content-Language":"en-US","X-MS-Has-Attach":"","X-MS-TNEF-Correlator":"","user-agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101\n\tThunderbird/60.6.1","x-ms-exchange-messagesentrepresentingtype":"1","x-ms-exchange-transport-fromentityheader":"Hosted","x-originating-ip":"[10.75.127.44]","Content-Type":"text/plain; 
charset=\"utf-8\"","Content-ID":"<C9C78719332D7640B9BCE217766004AC@st.com>","Content-Transfer-Encoding":"base64","MIME-Version":"1.0","X-Proofpoint-Virus-Version":"vendor=fsecure engine=2.50.10434:, ,\n\tdefinitions=2019-04-09_03:, , signatures=0","X-Mailman-Approved-At":"Tue, 09 Apr 2019 16:15:11 +0200","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Tue, 09 Apr 2019 09:38:53 -0000"}},{"id":1373,"web_url":"https://patchwork.libcamera.org/comment/1373/","msgid":"<20190415231435.GO17083@pendragon.ideasonboard.com>","date":"2019-04-15T23:14:35","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Benjamin,\n\nOn Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n> > On 08/04/2019 13:43, Benjamin Gaignard wrote:\n> >> Provide a pipeline handler for the stm32 driver.\n> > \n> > Excellent :-) Thank you for your contribution.\n> >\n> > Have you been able to test this and successfully list your camera or\n> > capture frames?\n> \n> Yes I have been able to list and capture 
frames with cam\n> \n> >\n> > To list successfully registered cameras on the device:\n> > \tbuild/src/cam/cam -l\n> >\n> > I couldn't see any media-controller support in the stm32-dcmi driver...\n> > Perhaps I'm missing something.\n> \n> Hugues has send the patches for media controller support a week ago.\n> \n> https://lkml.org/lkml/2019/4/1/298\n> \n> I have tested libcamera with those patches.\n\nI think you may have missed the other comments from Kieran further down.\n\n> >> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n> >> ---\n> >>   src/libcamera/pipeline/meson.build       |   2 +\n> >>   src/libcamera/pipeline/stm32/meson.build |   3 +\n> >>   src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n> >>   3 files changed, 210 insertions(+)\n> >>   create mode 100644 src/libcamera/pipeline/stm32/meson.build\n> >>   create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n> >>\n> >> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n> >> index 40bb264..08d6e1c 100644\n> >> --- a/src/libcamera/pipeline/meson.build\n> >> +++ b/src/libcamera/pipeline/meson.build\n> >> @@ -4,3 +4,5 @@ libcamera_sources += files([\n> >>   ])\n> >>   \n> >>   subdir('ipu3')\n> >> +\n> >> +subdir('stm32')\n> >> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n> >> new file mode 100644\n> >> index 0000000..cb6f16b\n> >> --- /dev/null\n> >> +++ b/src/libcamera/pipeline/stm32/meson.build\n> >> @@ -0,0 +1,3 @@\n> >> +libcamera_sources += files([\n> >> +    'stm32.cpp',\n> >> +])\n> >\n> > Currently the stm32.cpp seems like a fairly simple implementation, which\n> > fits in its own file, much like pipeline/uvcvideo.cpp.\n> >\n> > Do you forsee needing to break the STM32 handler into multiple files\n> > later, or support other components on the STM32?\n> >\n> > I think it's fine if you want to have stm32 as a subdir either way  though.\n> >\n> > I wonder also, if 
it is just a 'simple' device if perhaps we should move\n> > the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n> > subclassed to simply add the device matching, or provide a table of\n> > supported match strings.\n> >\n> >\n> > What hardware is available on the STM32 for processing frames?\n> > Do you have a scaler/resizer? or any further ISP functionality?\n> >\n> > It would be interesting to see the output of 'media-ctl -p' for your\n> > platform.\n> >\n> >> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n> >> new file mode 100644\n> >> index 0000000..301fdfc\n> >> --- /dev/null\n> >> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n> >> @@ -0,0 +1,205 @@\n> >> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> >> +/*\n> >> + * stm32.cpp - Pipeline handler for stm32 devices\n> >> + */\n> >> +\n> >> +#include <libcamera/camera.h>\n> >> +#include <libcamera/request.h>\n> >> +#include <libcamera/stream.h>\n> >> +\n> >> +#include \"device_enumerator.h\"\n> >> +#include \"log.h\"\n> >> +#include \"media_device.h\"\n> >> +#include \"pipeline_handler.h\"\n> >> +#include \"utils.h\"\n> >> +#include \"v4l2_device.h\"\n> >> +\n> >> +namespace libcamera {\n> >> +\n> >> +LOG_DEFINE_CATEGORY(STM32)\n> >> +\n> >> +class PipelineHandlerSTM32 : public PipelineHandler {\n> >> +public:\n> >> +  PipelineHandlerSTM32(CameraManager *manager);\n> >> +  ~PipelineHandlerSTM32();\n> >\n> > Our coding style uses a tab to indent.\n> >\n> > When you have committed your code, you can run ./utils/checkstyle.py to\n> > check the most recent commit (or you can specify a range suitable for git).\n> >\n> > This requires installing clang-format ideally, although astyle is also\n> > supported to some degree.\n> >\n> >> +\n> >> +  std::map<Stream *, StreamConfiguration>\n> >> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n> >> +  int configureStreams(\n> >> +      Camera *camera, std::map<Stream *, 
StreamConfiguration> &config) override;\n> >> +\n> >> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n> >> +  int freeBuffers(Camera *camera, Stream *stream) override;\n> >> +\n> >> +  int start(Camera *camera) override;\n> >> +  void stop(Camera *camera) override;\n> >> +\n> >> +  int queueRequest(Camera *camera, Request *request) override;\n> >> +\n> >> +  bool match(DeviceEnumerator *enumerator);\n> >> +\n> >> +private:\n> >> +  class STM32CameraData : public CameraData {\n> >> +  public:\n> >> +    STM32CameraData(PipelineHandler *pipe)\n> >> +        : CameraData(pipe), video_(nullptr) {}\n> >> +\n> >> +    ~STM32CameraData() { delete video_; }\n> >> +\n> >> +    void bufferReady(Buffer *buffer);\n> >> +\n> >> +    V4L2Device *video_;\n> >> +    Stream stream_;\n> >> +  };\n> >> +\n> >> +  STM32CameraData *cameraData(const Camera *camera) {\n> >> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n> >> +  }\n> >> +\n> >> +  std::shared_ptr<MediaDevice> media_;\n> >> +};\n> >> +\n> >> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n> >> +    : PipelineHandler(manager), media_(nullptr) {}\n> >> +\n> >> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n> >> +  if (media_)\n> >> +    media_->release();\n> >> +}\n> >> +\n> >> +std::map<Stream *, StreamConfiguration>\n> >> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n> >> +                                          std::set<Stream *> &streams) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +\n> >> +  std::map<Stream *, StreamConfiguration> configs;\n> >> +  StreamConfiguration config{};\n> >> +\n> >> +  LOG(STM32, Debug) << \"Retrieving default format\";\n> >> +  config.width = 640;\n> >> +  config.height = 480;\n> >> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n> >> +  config.bufferCount = 4;\n> >> +\n> >> +  configs[&data->stream_] = config;\n> >> +\n> >> +  return configs;\n> >> +}\n> >> +\n> >> +int 
PipelineHandlerSTM32::configureStreams(\n> >> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  StreamConfiguration *cfg = &config[&data->stream_];\n> >> +  int ret;\n> >> +\n> >> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n> >> +                    << \"x\" << cfg->height;\n> >> +\n> >> +  V4L2DeviceFormat format = {};\n> >> +  format.width = cfg->width;\n> >> +  format.height = cfg->height;\n> >> +  format.fourcc = cfg->pixelFormat;\n> >> +\n> >> +  ret = data->video_->setFormat(&format);\n> >> +  if (ret)\n> >> +    return ret;\n> >> +\n> >> +  if (format.width != cfg->width || format.height != cfg->height ||\n> >> +      format.fourcc != cfg->pixelFormat)\n> >> +    return -EINVAL;\n> >> +\n> >> +  return 0;\n> >> +}\n> >> +\n> >> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  const StreamConfiguration &cfg = stream->configuration();\n> >> +\n> >> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n> >> +\n> >> +  return data->video_->exportBuffers(&stream->bufferPool());\n> >> +}\n> >> +\n> >> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  return data->video_->releaseBuffers();\n> >> +}\n> >> +\n> >> +int PipelineHandlerSTM32::start(Camera *camera) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  return data->video_->streamOn();\n> >> +}\n> >> +\n> >> +void PipelineHandlerSTM32::stop(Camera *camera) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  data->video_->streamOff();\n> >> +  PipelineHandler::stop(camera);\n> >> +}\n> >> +\n> >> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n> >> +  STM32CameraData *data = cameraData(camera);\n> >> +  Buffer *buffer = 
request->findBuffer(&data->stream_);\n> >> +  if (!buffer) {\n> >> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n> >> +\n> >> +    return -ENOENT;\n> >> +  }\n> >> +\n> >> +  int ret = data->video_->queueBuffer(buffer);\n> >> +  if (ret < 0)\n> >> +    return ret;\n> >> +\n> >> +  PipelineHandler::queueRequest(camera, request);\n> >> +\n> >> +  return 0;\n> >> +}\n> >> +\n> >> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n> >> +  DeviceMatch dm(\"stm32-dcmi\");\n> >\n> > I see that the stm32-dcmi driver calls down to a subdevice. Is there\n> > ever a need to interact with this subdevice directly?\n> >\n> > Or are all controls handled through the DCMI driver?\n> >\n> > I think it looks like everything simply goes through the stm32-dcmi\n> > V4L2Device interface.\n> >\n> > That said, I can't see if there is any media-controller device in the\n> > DCMI driver. Our DeviceMatch currently only looks at media-controller\n> > enabled devices.\n> >\n> >> +\n> >> +  media_ = enumerator->search(dm);\n> >> +  if (!media_)\n> >> +    return false;\n> >> +\n> >> +  media_->acquire();\n> >> +\n> >> +  std::unique_ptr<STM32CameraData> data =\n> >> +      utils::make_unique<STM32CameraData>(this);\n> >> +\n> >> +  /* Locate and open the default video node. 
*/\n> >> +  for (MediaEntity *entity : media_->entities()) {\n> >> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n> >> +      data->video_ = new V4L2Device(entity);\n> >> +      break;\n> >> +    }\n> >> +  }\n> >\n> > I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n> > UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n> > correctly match your device?\n> >\n> >> +\n> >> +  if (!data->video_) {\n> >> +    LOG(STM32, Error) << \"Could not find a default video device\";\n> >> +    return false;\n> >> +  }\n> >> +\n> >> +  if (data->video_->open())\n> >> +    return false;\n> >> +\n> >> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n> >> +\n> >> +  /* Create and register the camera. */\n> >> +  std::set<Stream *> streams{&data->stream_};\n> >> +  std::shared_ptr<Camera> camera =\n> >> +      Camera::create(this, media_->model(), streams);\n> >> +  registerCamera(std::move(camera), std::move(data));\n> >> +\n> >> +  return true;\n> >> +}\n> >> +\n> >> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n> >> +  Request *request = queuedRequests_.front();\n> >> +\n> >> +  pipe_->completeBuffer(camera_, request, buffer);\n> >> +  pipe_->completeRequest(camera_, request);\n> >> +}\n> >> +\n> >> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n> >> +\n> >> +} /* namespace libcamera */\n> >>\n> > \n> > Thank you for joining the libcamera project! 
- If there is anything more\n> we can do to help you - please let us know.","headers":{"Date":"Tue, 16 Apr 2019 02:14:35 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Benjamin GAIGNARD <benjamin.gaignard@st.com>","Cc":"\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>,\n\tHugues FRUCHET <hugues.fruchet@st.com>","Message-ID":"<20190415231435.GO17083@pendragon.ideasonboard.com>","In-Reply-To":"<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for 
stm32","X-List-Received-Date":"Mon, 15 Apr 2019 23:14:45 -0000"}},{"id":1398,"web_url":"https://patchwork.libcamera.org/comment/1398/","msgid":"<8692ce69-0733-4807-dd25-e4568cec0443@st.com>","date":"2019-04-16T08:39:25","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":14,"url":"https://patchwork.libcamera.org/api/people/14/","name":"Benjamin GAIGNARD","email":"benjamin.gaignard@st.com"},"content":"On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n> Hi Benjamin,\n>\n> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n>> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n>>>> Provide a pipeline handler for the stm32 driver.\n>>> Excellent :-) Thank you for your contribution.\n>>>\n>>> Have you been able to test this and successfully list your camera or\n>>> capture frames?\n>> Yes I have been able to list and capture frames with cam\n>>\n>>> To list successfully registered cameras on the device:\n>>> \tbuild/src/cam/cam -l\n>>>\n>>> I couldn't see any media-controller support in the stm32-dcmi driver...\n>>> Perhaps I'm missing something.\n>> Hugues has send the patches for media controller support a week ago.\n>>\n>> https://lkml.org/lkml/2019/4/1/298\n>>\n>> I have tested libcamera with 
those patches.\n> I think you may have missed the other comments from Kieran further down.\nYes, I had completely missed them.\n>\n>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n>>>> ---\n>>>>    src/libcamera/pipeline/meson.build       |   2 +\n>>>>    src/libcamera/pipeline/stm32/meson.build |   3 +\n>>>>    src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n>>>>    3 files changed, 210 insertions(+)\n>>>>    create mode 100644 src/libcamera/pipeline/stm32/meson.build\n>>>>    create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n>>>>\n>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n>>>> index 40bb264..08d6e1c 100644\n>>>> --- a/src/libcamera/pipeline/meson.build\n>>>> +++ b/src/libcamera/pipeline/meson.build\n>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n>>>>    ])\n>>>>    \n>>>>    subdir('ipu3')\n>>>> +\n>>>> +subdir('stm32')\n>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n>>>> new file mode 100644\n>>>> index 0000000..cb6f16b\n>>>> --- /dev/null\n>>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n>>>> @@ -0,0 +1,3 @@\n>>>> +libcamera_sources += files([\n>>>> +    'stm32.cpp',\n>>>> +])\n>>> Currently the stm32.cpp seems like a fairly simple implementation, which\n>>> fits in its own file, much like pipeline/uvcvideo.cpp.\n>>>\n>>> Do you foresee needing to break the STM32 handler into multiple files\n>>> later, or support other components on the STM32?\n>>>\n>>> I think it's fine if you want to have stm32 as a subdir either way though.\n>>>\n>>> I wonder also, if it is just a 'simple' device, whether we should move\n>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n>>> subclassed to simply add the device matching, or provide a table of\n>>> supported match strings.\n>>>\n>>>\n>>> What hardware is available on the STM32 for processing frames?\n>>> Do you have a scaler/resizer? 
or any further ISP functionality?\n\nNo scaler, no resizer and no ISP functionality.\n\nIt is a basic device-to-memory capture driver.\n\n>>>\n>>> It would be interesting to see the output of 'media-ctl -p' for your\n>>> platform.\n\nThe output of media-ctl -p (with Hugues's patches):\n\nMedia controller API version 4.19.24\n\nMedia device information\n------------------------\ndriver          stm32-dcmi\nmodel           stm32-dcmi\nserial\nbus info        platform:stm32-dcmi\nhw revision     0x0\ndriver version  4.19.24\n\nDevice topology\n- entity 1: stm32_dcmi (1 pad, 1 link)\n             type Node subtype V4L flags 1\n             device node name /dev/video0\n         pad0: Sink\n                 <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n\n- entity 5: ov5640 0-003c (1 pad, 1 link)\n             type V4L2 subdev subtype Sensor flags 0\n             device node name /dev/v4l-subdev0\n         pad0: Source\n                 [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg\nxfer:srgb ycbcr:601 quantization:full-range]\n                 -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n\n>>>\n>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>> new file mode 100644\n>>>> index 0000000..301fdfc\n>>>> --- /dev/null\n>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>> @@ -0,0 +1,205 @@\n>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>>> +/*\n>>>> + * stm32.cpp - Pipeline handler for stm32 devices\n>>>> + */\n>>>> +\n>>>> +#include <libcamera/camera.h>\n>>>> +#include <libcamera/request.h>\n>>>> +#include <libcamera/stream.h>\n>>>> +\n>>>> +#include \"device_enumerator.h\"\n>>>> +#include \"log.h\"\n>>>> +#include \"media_device.h\"\n>>>> +#include \"pipeline_handler.h\"\n>>>> +#include \"utils.h\"\n>>>> +#include \"v4l2_device.h\"\n>>>> +\n>>>> +namespace libcamera {\n>>>> +\n>>>> +LOG_DEFINE_CATEGORY(STM32)\n>>>> +\n>>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n>>>> +public:\n>>>> +  
PipelineHandlerSTM32(CameraManager *manager);\n>>>> +  ~PipelineHandlerSTM32();\n>>> Our coding style uses a tab to indent.\n>>>\n>>> When you have committed your code, you can run ./utils/checkstyle.py to\n>>> check the most recent commit (or you can specify a range suitable for git).\n>>>\n>>> This requires installing clang-format ideally, although astyle is also\n>>> supported to some degree.\nI have fixed the clang-format package version on my side; it will be better in v2.\n>>>\n>>>> +\n>>>> +  std::map<Stream *, StreamConfiguration>\n>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n>>>> +  int configureStreams(\n>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n>>>> +\n>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n>>>> +\n>>>> +  int start(Camera *camera) override;\n>>>> +  void stop(Camera *camera) override;\n>>>> +\n>>>> +  int queueRequest(Camera *camera, Request *request) override;\n>>>> +\n>>>> +  bool match(DeviceEnumerator *enumerator);\n>>>> +\n>>>> +private:\n>>>> +  class STM32CameraData : public CameraData {\n>>>> +  public:\n>>>> +    STM32CameraData(PipelineHandler *pipe)\n>>>> +        : CameraData(pipe), video_(nullptr) {}\n>>>> +\n>>>> +    ~STM32CameraData() { delete video_; }\n>>>> +\n>>>> +    void bufferReady(Buffer *buffer);\n>>>> +\n>>>> +    V4L2Device *video_;\n>>>> +    Stream stream_;\n>>>> +  };\n>>>> +\n>>>> +  STM32CameraData *cameraData(const Camera *camera) {\n>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n>>>> +  }\n>>>> +\n>>>> +  std::shared_ptr<MediaDevice> media_;\n>>>> +};\n>>>> +\n>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n>>>> +    : PipelineHandler(manager), media_(nullptr) {}\n>>>> +\n>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n>>>> +  if (media_)\n>>>> +    media_->release();\n>>>> 
+}\n>>>> +\n>>>> +std::map<Stream *, StreamConfiguration>\n>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n>>>> +                                          std::set<Stream *> &streams) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +\n>>>> +  std::map<Stream *, StreamConfiguration> configs;\n>>>> +  StreamConfiguration config{};\n>>>> +\n>>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n>>>> +  config.width = 640;\n>>>> +  config.height = 480;\n>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n>>>> +  config.bufferCount = 4;\n>>>> +\n>>>> +  configs[&data->stream_] = config;\n>>>> +\n>>>> +  return configs;\n>>>> +}\n>>>> +\n>>>> +int PipelineHandlerSTM32::configureStreams(\n>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n>>>> +  int ret;\n>>>> +\n>>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n>>>> +                    << \"x\" << cfg->height;\n>>>> +\n>>>> +  V4L2DeviceFormat format = {};\n>>>> +  format.width = cfg->width;\n>>>> +  format.height = cfg->height;\n>>>> +  format.fourcc = cfg->pixelFormat;\n>>>> +\n>>>> +  ret = data->video_->setFormat(&format);\n>>>> +  if (ret)\n>>>> +    return ret;\n>>>> +\n>>>> +  if (format.width != cfg->width || format.height != cfg->height ||\n>>>> +      format.fourcc != cfg->pixelFormat)\n>>>> +    return -EINVAL;\n>>>> +\n>>>> +  return 0;\n>>>> +}\n>>>> +\n>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  const StreamConfiguration &cfg = stream->configuration();\n>>>> +\n>>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n>>>> +\n>>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n>>>> +}\n>>>> +\n>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream 
*stream) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  return data->video_->releaseBuffers();\n>>>> +}\n>>>> +\n>>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  return data->video_->streamOn();\n>>>> +}\n>>>> +\n>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  data->video_->streamOff();\n>>>> +  PipelineHandler::stop(camera);\n>>>> +}\n>>>> +\n>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n>>>> +  STM32CameraData *data = cameraData(camera);\n>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n>>>> +  if (!buffer) {\n>>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n>>>> +\n>>>> +    return -ENOENT;\n>>>> +  }\n>>>> +\n>>>> +  int ret = data->video_->queueBuffer(buffer);\n>>>> +  if (ret < 0)\n>>>> +    return ret;\n>>>> +\n>>>> +  PipelineHandler::queueRequest(camera, request);\n>>>> +\n>>>> +  return 0;\n>>>> +}\n>>>> +\n>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n>>>> +  DeviceMatch dm(\"stm32-dcmi\");\n>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there\n>>> ever a need to interact with this subdevice directly?\n>>>\n>>> Or are all controls handled through the DCMI driver?\n>>>\n>>> I think it looks like everything simply goes through the stm32-dcmi\n>>> V4L2Device interface.\nUntil today yes but Hugues is working with v4l for media-controller \nawareness.\n>>>\n>>> That said, I can't see if there is any media-controller device in the\n>>> DCMI driver. 
Our DeviceMatch currently only looks at media-controller\n>>> enabled devices.\n>>>\n>>>> +\n>>>> +  media_ = enumerator->search(dm);\n>>>> +  if (!media_)\n>>>> +    return false;\n>>>> +\n>>>> +  media_->acquire();\n>>>> +\n>>>> +  std::unique_ptr<STM32CameraData> data =\n>>>> +      utils::make_unique<STM32CameraData>(this);\n>>>> +\n>>>> +  /* Locate and open the default video node. */\n>>>> +  for (MediaEntity *entity : media_->entities()) {\n>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n>>>> +      data->video_ = new V4L2Device(entity);\n>>>> +      break;\n>>>> +    }\n>>>> +  }\n>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n>>> correctly match your device?\n\nIt will, because it is set in Hugues's patches.\n\nIs another way of working possible?\n\nMaybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to\nuse it.\n\nDo you have any advice?\n\nBenjamin\n\n>>>\n>>>> +\n>>>> +  if (!data->video_) {\n>>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n>>>> +    return false;\n>>>> +  }\n>>>> +\n>>>> +  if (data->video_->open())\n>>>> +    return false;\n>>>> +\n>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n>>>> +\n>>>> +  /* Create and register the camera. 
*/\n>>>> +  std::set<Stream *> streams{&data->stream_};\n>>>> +  std::shared_ptr<Camera> camera =\n>>>> +      Camera::create(this, media_->model(), streams);\n>>>> +  registerCamera(std::move(camera), std::move(data));\n>>>> +\n>>>> +  return true;\n>>>> +}\n>>>> +\n>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n>>>> +  Request *request = queuedRequests_.front();\n>>>> +\n>>>> +  pipe_->completeBuffer(camera_, request, buffer);\n>>>> +  pipe_->completeRequest(camera_, request);\n>>>> +}\n>>>> +\n>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n>>>> +\n>>>> +} /* namespace libcamera */\n>>>>\n>>> Thank you for joining the libcamera project! - If there is anything more\n>>> we can do to help you - please let us know.","headers":{"Return-Path":"<benjamin.gaignard@st.com>","Received":["from mx07-00178001.pphosted.com (mx08-00178001.pphosted.com\n\t[91.207.212.93])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 58AB760004\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tTue, 16 Apr 2019 10:39:30 +0200 (CEST)","from pps.filterd (m0046661.ppops.net [127.0.0.1])\n\tby mx08-00178001.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id\n\tx3G8au9D021534; Tue, 16 Apr 2019 10:39:27 +0200","from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])\n\tby mx08-00178001.pphosted.com with ESMTP id 2ru6ngqed4-1\n\t(version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NOT);\n\tTue, 16 Apr 2019 10:39:27 +0200","from zeta.dmz-eu.st.com (zeta.dmz-eu.st.com [164.129.230.9])\n\tby beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 4E46338;\n\tTue, 16 Apr 2019 08:39:26 +0000 (GMT)","from Webmail-eu.st.com (sfhdag5node2.st.com [10.75.127.14])\n\tby zeta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 1969D1382;\n\tTue, 16 Apr 2019 08:39:26 +0000 (GMT)","from SFHDAG3NODE3.st.com (10.75.127.9) by SFHDAG5NODE2.st.com\n\t(10.75.127.14) with Microsoft SMTP Server (TLS) id 15.0.1347.2;\n\tTue, 16 Apr 2019 10:39:25 +0200","from 
SFHDAG3NODE3.st.com ([fe80::3507:b372:7648:476]) by\n\tSFHDAG3NODE3.st.com ([fe80::3507:b372:7648:476%20]) with mapi id\n\t15.00.1347.000; Tue, 16 Apr 2019 10:39:25 +0200"],"From":"Benjamin GAIGNARD <benjamin.gaignard@st.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","CC":"\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>, Hugues FRUCHET\n\t<hugues.fruchet@st.com>","Date":"Tue, 16 Apr 2019 08:39:25 +0000","Message-ID":"<8692ce69-0733-4807-dd25-e4568cec0443@st.com>","In-Reply-To":"<20190415231435.GO17083@pendragon.ideasonboard.com>","user-agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) 
Gecko/20100101\n\tThunderbird/60.6.1","X-List-Received-Date":"Tue, 16 Apr 2019 08:39:30 -0000"}},{"id":1436,"web_url":"https://patchwork.libcamera.org/comment/1436/","msgid":"<20190417121153.GH4993@pendragon.ideasonboard.com>","date":"2019-04-17T12:11:53","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Benjamin,\n\nOn Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:\n> On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n> > On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n> >> On 4/9/19 11:02 AM, Kieran Bingham 
wrote:\n> >>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n> >>>> Provide a pipeline handler for the stm32 driver.\n> >>> Excellent :-) Thank you for your contribution.\n> >>>\n> >>> Have you been able to test this and successfully list your camera or\n> >>> capture frames?\n> >> Yes I have been able to list and capture frames with cam\n> >>\n> >>> To list successfully registered cameras on the device:\n> >>> \tbuild/src/cam/cam -l\n> >>>\n> >>> I couldn't see any media-controller support in the stm32-dcmi driver...\n> >>> Perhaps I'm missing something.\n> >> Hugues has send the patches for media controller support a week ago.\n> >>\n> >> https://lkml.org/lkml/2019/4/1/298\n> >>\n> >> I have tested libcamera with those patches.\n> > I think you may have missed the other comments from Kieran further down.\n> \n> Yes I have completely miss them\n> \n> >>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n> >>>> ---\n> >>>>    src/libcamera/pipeline/meson.build       |   2 +\n> >>>>    src/libcamera/pipeline/stm32/meson.build |   3 +\n> >>>>    src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n> >>>>    3 files changed, 210 insertions(+)\n> >>>>    create mode 100644 src/libcamera/pipeline/stm32/meson.build\n> >>>>    create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>\n> >>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n> >>>> index 40bb264..08d6e1c 100644\n> >>>> --- a/src/libcamera/pipeline/meson.build\n> >>>> +++ b/src/libcamera/pipeline/meson.build\n> >>>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n> >>>>    ])\n> >>>>    \n> >>>>    subdir('ipu3')\n> >>>> +\n> >>>> +subdir('stm32')\n> >>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n> >>>> new file mode 100644\n> >>>> index 0000000..cb6f16b\n> >>>> --- /dev/null\n> >>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n> >>>> @@ -0,0 +1,3 @@\n> >>>> 
+libcamera_sources += files([\n> >>>> +    'stm32.cpp',\n> >>>> +])\n> >>> Currently the stm32.cpp seems like a fairly simple implementation, which\n> >>> fits in its own file, much like pipeline/uvcvideo.cpp.\n> >>>\n> >>> Do you forsee needing to break the STM32 handler into multiple files\n> >>> later, or support other components on the STM32?\n> >>>\n> >>> I think it's fine if you want to have stm32 as a subdir either way  though.\n> >>>\n> >>> I wonder also, if it is just a 'simple' device if perhaps we should move\n> >>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n> >>> subclassed to simply add the device matching, or provide a table of\n> >>> supported match strings.\n> >>>\n> >>>\n> >>> What hardware is available on the STM32 for processing frames?\n> >>> Do you have a scaler/resizer? or any further ISP functionality?\n> \n> No scaler, no resize and no ISP functionality.\n> \n> It is a basic device to memory driver.\n\nIn that case I wonder if we should really have an stm32-specific\npipeline handler. 
It seems to me that we could implement a generic\npipeline handler that would support any graph made of a sensor subdev\nconnected to a video node.\n\n> >>> It would be interesting to see the output of 'media-ctl -p' for your\n> >>> platform.\n> \n> The ouput of media-ctl -p (with Hugues's patches)\n> \n> Media controller API version 4.19.24\n> \n> Media device information\n> ------------------------\n> driver          stm32-dcmi\n> model           stm32-dcmi\n> serial\n> bus info        platform:stm32-dcmi\n> hw revision     0x0\n> driver version  4.19.24\n> \n> Device topology\n> - entity 1: stm32_dcmi (1 pad, 1 link)\n>              type Node subtype V4L flags 1\n>              device node name /dev/video0\n>          pad0: Sink\n>                  <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n> \n> - entity 5: ov5640 0-003c (1 pad, 1 link)\n>              type V4L2 subdev subtype Sensor flags 0\n>              device node name /dev/v4l-subdev0\n>          pad0: Source\n>                  [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg \n> xfer:srgb ycbcr:601 quantization:full-range]\n>                  -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n> \n> >>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>> new file mode 100644\n> >>>> index 0000000..301fdfc\n> >>>> --- /dev/null\n> >>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>> @@ -0,0 +1,205 @@\n> >>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> >>>> +/*\n> >>>> + * stm32.cpp - Pipeline handler for stm32 devices\n> >>>> + */\n> >>>> +\n> >>>> +#include <libcamera/camera.h>\n> >>>> +#include <libcamera/request.h>\n> >>>> +#include <libcamera/stream.h>\n> >>>> +\n> >>>> +#include \"device_enumerator.h\"\n> >>>> +#include \"log.h\"\n> >>>> +#include \"media_device.h\"\n> >>>> +#include \"pipeline_handler.h\"\n> >>>> +#include \"utils.h\"\n> >>>> +#include \"v4l2_device.h\"\n> >>>> +\n> >>>> +namespace libcamera {\n> >>>> +\n> >>>> 
+LOG_DEFINE_CATEGORY(STM32)\n> >>>> +\n> >>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n> >>>> +public:\n> >>>> +  PipelineHandlerSTM32(CameraManager *manager);\n> >>>> +  ~PipelineHandlerSTM32();\n> >>> Our coding style uses a tab to indent.\n> >>>\n> >>> When you have committed your code, you can run ./utils/checkstyle.py to\n> >>> check the most recent commit (or you can specify a range suitable for git).\n> >>>\n> >>> This requires installing clang-format ideally, although astyle is also\n> >>> supported to some degree.\n> \n> I have fix clang-format package version on my side, it will be better in v2.\n\nThank you.\n\n> >>>\n> >>>> +\n> >>>> +  std::map<Stream *, StreamConfiguration>\n> >>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n> >>>> +  int configureStreams(\n> >>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n> >>>> +\n> >>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n> >>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n> >>>> +\n> >>>> +  int start(Camera *camera) override;\n> >>>> +  void stop(Camera *camera) override;\n> >>>> +\n> >>>> +  int queueRequest(Camera *camera, Request *request) override;\n> >>>> +\n> >>>> +  bool match(DeviceEnumerator *enumerator);\n> >>>> +\n> >>>> +private:\n> >>>> +  class STM32CameraData : public CameraData {\n> >>>> +  public:\n> >>>> +    STM32CameraData(PipelineHandler *pipe)\n> >>>> +        : CameraData(pipe), video_(nullptr) {}\n> >>>> +\n> >>>> +    ~STM32CameraData() { delete video_; }\n> >>>> +\n> >>>> +    void bufferReady(Buffer *buffer);\n> >>>> +\n> >>>> +    V4L2Device *video_;\n> >>>> +    Stream stream_;\n> >>>> +  };\n> >>>> +\n> >>>> +  STM32CameraData *cameraData(const Camera *camera) {\n> >>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n> >>>> +  }\n> >>>> +\n> >>>> +  std::shared_ptr<MediaDevice> media_;\n> >>>> +};\n> >>>> +\n> >>>> 
+PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n> >>>> +    : PipelineHandler(manager), media_(nullptr) {}\n> >>>> +\n> >>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n> >>>> +  if (media_)\n> >>>> +    media_->release();\n> >>>> +}\n> >>>> +\n> >>>> +std::map<Stream *, StreamConfiguration>\n> >>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n> >>>> +                                          std::set<Stream *> &streams) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +\n> >>>> +  std::map<Stream *, StreamConfiguration> configs;\n> >>>> +  StreamConfiguration config{};\n> >>>> +\n> >>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n> >>>> +  config.width = 640;\n> >>>> +  config.height = 480;\n> >>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n> >>>> +  config.bufferCount = 4;\n> >>>> +\n> >>>> +  configs[&data->stream_] = config;\n> >>>> +\n> >>>> +  return configs;\n> >>>> +}\n> >>>> +\n> >>>> +int PipelineHandlerSTM32::configureStreams(\n> >>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n> >>>> +  int ret;\n> >>>> +\n> >>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n> >>>> +                    << \"x\" << cfg->height;\n> >>>> +\n> >>>> +  V4L2DeviceFormat format = {};\n> >>>> +  format.width = cfg->width;\n> >>>> +  format.height = cfg->height;\n> >>>> +  format.fourcc = cfg->pixelFormat;\n> >>>> +\n> >>>> +  ret = data->video_->setFormat(&format);\n> >>>> +  if (ret)\n> >>>> +    return ret;\n> >>>> +\n> >>>> +  if (format.width != cfg->width || format.height != cfg->height ||\n> >>>> +      format.fourcc != cfg->pixelFormat)\n> >>>> +    return -EINVAL;\n> >>>> +\n> >>>> +  return 0;\n> >>>> +}\n> >>>> +\n> >>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n> >>>> +  
STM32CameraData *data = cameraData(camera);\n> >>>> +  const StreamConfiguration &cfg = stream->configuration();\n> >>>> +\n> >>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n> >>>> +\n> >>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n> >>>> +}\n> >>>> +\n> >>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +  return data->video_->releaseBuffers();\n> >>>> +}\n> >>>> +\n> >>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +  return data->video_->streamOn();\n> >>>> +}\n> >>>> +\n> >>>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +  data->video_->streamOff();\n> >>>> +  PipelineHandler::stop(camera);\n> >>>> +}\n> >>>> +\n> >>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n> >>>> +  STM32CameraData *data = cameraData(camera);\n> >>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n> >>>> +  if (!buffer) {\n> >>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n> >>>> +\n> >>>> +    return -ENOENT;\n> >>>> +  }\n> >>>> +\n> >>>> +  int ret = data->video_->queueBuffer(buffer);\n> >>>> +  if (ret < 0)\n> >>>> +    return ret;\n> >>>> +\n> >>>> +  PipelineHandler::queueRequest(camera, request);\n> >>>> +\n> >>>> +  return 0;\n> >>>> +}\n> >>>> +\n> >>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n> >>>> +  DeviceMatch dm(\"stm32-dcmi\");\n> >>> I see that the stm32-dcmi driver calls down to a subdevice. 
Is there\n> >>> ever a need to interact with this subdevice directly?\n> >>>\n> >>> Or are all controls handled through the DCMI driver?\n> >>>\n> >>> I think it looks like everything simply goes through the stm32-dcmi\n> >>> V4L2Device interface.\n> \n> Until today yes but Hugues is working with v4l for media-controller \n> awareness.\n> \n> >>> That said, I can't see if there is any media-controller device in the\n> >>> DCMI driver. Our DeviceMatch currently only looks at media-controller\n> >>> enabled devices.\n> >>>\n> >>>> +\n> >>>> +  media_ = enumerator->search(dm);\n> >>>> +  if (!media_)\n> >>>> +    return false;\n> >>>> +\n> >>>> +  media_->acquire();\n> >>>> +\n> >>>> +  std::unique_ptr<STM32CameraData> data =\n> >>>> +      utils::make_unique<STM32CameraData>(this);\n> >>>> +\n> >>>> +  /* Locate and open the default video node. */\n> >>>> +  for (MediaEntity *entity : media_->entities()) {\n> >>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n> >>>> +      data->video_ = new V4L2Device(entity);\n> >>>> +      break;\n> >>>> +    }\n> >>>> +  }\n> >>> \n> >>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n> >>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n> >>> correctly match your device?\n> \n> It will because it is in Hugues's patches.\n> \n> Does an another way of working is posssible ?\n> \n> Maybe we have misunderstood the mean of MEDIA_ENT_FL_DEFAULT and how to \n> use it.\n\nThere's a single video node in your media graph, so there's no real need\nto use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,\ncan't you ?\n\n> Do you have some advices ?\n> \n> >>>> +\n> >>>> +  if (!data->video_) {\n> >>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n> >>>> +    return false;\n> >>>> +  }\n> >>>> +\n> >>>> +  if (data->video_->open())\n> >>>> +    return false;\n> >>>> +\n> >>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n> 
>>>> +\n> >>>> +  /* Create and register the camera. */\n> >>>> +  std::set<Stream *> streams{&data->stream_};\n> >>>> +  std::shared_ptr<Camera> camera =\n> >>>> +      Camera::create(this, media_->model(), streams);\n> >>>> +  registerCamera(std::move(camera), std::move(data));\n> >>>> +\n> >>>> +  return true;\n> >>>> +}\n> >>>> +\n> >>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n> >>>> +  Request *request = queuedRequests_.front();\n> >>>> +\n> >>>> +  pipe_->completeBuffer(camera_, request, buffer);\n> >>>> +  pipe_->completeRequest(camera_, request);\n> >>>> +}\n> >>>> +\n> >>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n> >>>> +\n> >>>> +} /* namespace libcamera */\n> >>>>\n> >>> Thank you for joining the libcamera project! - If there is anything more\n> >>> we can do to help you - please let us know.","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 92694600F9\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tWed, 17 Apr 2019 14:12:03 +0200 (CEST)","from pendragon.ideasonboard.com (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id 9C30C312;\n\tWed, 17 Apr 2019 14:12:02 +0200 (CEST)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1555503122;\n\tbh=d2PF6ZGUdJJws20yZZ/89YqSiOvRw5WtywBqYUWEQV8=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=fc1XPw9Z6bDGfMrHAQZOMr3F+wn7GxQAaYk8Gl8EWM+H3UNiBzX3/+zO0vByQjaX5\n\tSCDvHMloWdvfjNj3zjYYIwOmeZOqZdtrWwk4XCllu/DJ6rIvgJ2QD+gYsqMcPUc+XL\n\tF139imxApUBVuZAALWNqFv0jW6eIBP21a7tS8oGs=","Date":"Wed, 17 Apr 2019 15:11:53 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Benjamin GAIGNARD <benjamin.gaignard@st.com>","Cc":"\"kieran.bingham@ideasonboard.com\" 
<kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>,\n\tHugues FRUCHET <hugues.fruchet@st.com>","Message-ID":"<20190417121153.GH4993@pendragon.ideasonboard.com>","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>\n\t<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>\n\t<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>\n\t<20190415231435.GO17083@pendragon.ideasonboard.com>\n\t<8692ce69-0733-4807-dd25-e4568cec0443@st.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<8692ce69-0733-4807-dd25-e4568cec0443@st.com>","User-Agent":"Mutt/1.10.1 (2018-07-13)","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Wed, 17 Apr 2019 12:12:03 -0000"}},{"id":1459,"web_url":"https://patchwork.libcamera.org/comment/1459/","msgid":"<bc4b94a7-2b32-937d-1583-881aad476aa2@st.com>","date":"2019-04-18T14:22:07","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":15,"url":"https://patchwork.libcamera.org/api/people/15/","name":"Hugues 
FRUCHET","email":"hugues.fruchet@st.com"},"content":"Hi Laurent,\n\nOn 4/17/19 2:11 PM, Laurent Pinchart wrote:\n> Hi Benjamin,\n> \n> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:\n>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n>>>>>> Provide a pipeline handler for the stm32 driver.\n>>>>> Excellent :-) Thank you for your contribution.\n>>>>>\n>>>>> Have you been able to test this and successfully list your camera or\n>>>>> capture frames?\n>>>> Yes I have been able to list and capture frames with cam\n>>>>\n>>>>> To list successfully registered cameras on the device:\n>>>>> \tbuild/src/cam/cam -l\n>>>>>\n>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...\n>>>>> Perhaps I'm missing something.\n>>>> Hugues has send the patches for media controller support a week ago.\n>>>>\n>>>> https://lkml.org/lkml/2019/4/1/298\n>>>>\n>>>> I have tested libcamera with those patches.\n>>> I think you may have missed the other comments from Kieran further down.\n>>\n>> Yes I have completely miss them\n>>\n>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n>>>>>> ---\n>>>>>>     src/libcamera/pipeline/meson.build       |   2 +\n>>>>>>     src/libcamera/pipeline/stm32/meson.build |   3 +\n>>>>>>     src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n>>>>>>     3 files changed, 210 insertions(+)\n>>>>>>     create mode 100644 src/libcamera/pipeline/stm32/meson.build\n>>>>>>     create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>>\n>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n>>>>>> index 40bb264..08d6e1c 100644\n>>>>>> --- a/src/libcamera/pipeline/meson.build\n>>>>>> +++ b/src/libcamera/pipeline/meson.build\n>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += 
files([\n>>>>>>     ])\n>>>>>>     \n>>>>>>     subdir('ipu3')\n>>>>>> +\n>>>>>> +subdir('stm32')\n>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n>>>>>> new file mode 100644\n>>>>>> index 0000000..cb6f16b\n>>>>>> --- /dev/null\n>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n>>>>>> @@ -0,0 +1,3 @@\n>>>>>> +libcamera_sources += files([\n>>>>>> +    'stm32.cpp',\n>>>>>> +])\n>>>>> Currently the stm32.cpp seems like a fairly simple implementation, which\n>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.\n>>>>>\n>>>>> Do you forsee needing to break the STM32 handler into multiple files\n>>>>> later, or support other components on the STM32?\n>>>>>\n>>>>> I think it's fine if you want to have stm32 as a subdir either way  though.\n>>>>>\n>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move\n>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n>>>>> subclassed to simply add the device matching, or provide a table of\n>>>>> supported match strings.\n>>>>>\n>>>>>\n>>>>> What hardware is available on the STM32 for processing frames?\n>>>>> Do you have a scaler/resizer? or any further ISP functionality?\n>>\n>> No scaler, no resize and no ISP functionality.\n>>\n>> It is a basic device to memory driver.\n> \n> In that case I wonder if we should really have an stm32-specific\n> pipeline handler. 
It seems to me that we could implement a generic\n> pipeline handler that would support any graph made of a sensor subdev\n> connected to a video node.\n\nWe can also have a bridge in between, so the graph becomes:\n\nroot@stm32mp1:~# media-ctl -d /dev/media0 -p\nMedia controller API version 5.0.0\n\nMedia device information\n------------------------\ndriver          stm32-dcmi\nmodel           stm32-dcmi\nserial\nbus info        platform:stm32-dcmi\nhw revision     0x0\ndriver version  5.0.0\n\nDevice topology\n- entity 1: stm32_dcmi (1 pad, 1 link)\n             type Node subtype V4L flags 1\n             device node name /dev/video0\n         pad0: Sink\n                 <- \"mipid02 0-0014\":1 [ENABLED,IMMUTABLE]\n\n- entity 5: mipid02 0-0014 (2 pads, 2 links)\n             type V4L2 subdev subtype Unknown flags 0\n             device node name /dev/v4l-subdev0\n         pad0: Sink\n                 <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n         pad1: Source\n                 -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n\n- entity 10: ov5640 0-003c (1 pad, 1 link)\n              type V4L2 subdev subtype Sensor flags 0\n              device node name /dev/v4l-subdev1\n         pad0: Source\n                 [fmt:JPEG_1X8/320x240 field:none]\n                 -> \"mipid02 0-0014\":0 [ENABLED,IMMUTABLE]\n\nWe have discussed this on irc here:\nhttps://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue\n\nAnd in this case, if I have understood correctly, the user has to use \nmedia-ctl to set the format and resolution on the bridge and the sensor; \nsetting the resolution and format on the video node alone will not be \nsufficient.\n\nMoreover, after having configured the format and resolution using media-ctl,\nthe same format and resolution must be requested from GStreamer, otherwise \na mismatch will occur:\n\n1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640 \n0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 ! \ndecodebin ! 
fpsdisplaysink -v\n\n/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = \nimage/jpeg, width=(int)320, height=(int)240, \npixel-aspect-ratio=(fraction)1/1\n=> OK, QVGA JPEG is captured\n\nbut if I just omit the caps right after v4l2src (we don't force any \nformat/resolution), this is broken:\n1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640 \n0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v\n\n/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = \nvideo/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944, \npixel-aspect-ratio=(fe\n=> KO, we try to capture 5Mp YUYV while ov5640 is configured to send \nQVGA JPEG...\n\nIn summary, if I have understood correctly, the user must ensure that the \nmedia-ctl format/resolution and the V4L2 S_FMT format/resolution match.\n\n\nSo there are at least two cases to handle: the simple case without a bridge \nand the case with a bridge where MC is involved.\n\nI still feel uncomfortable with the fact that just introducing a bridge in \nthe middle (which is only a matter of the physical link, with no change in \nfunctionality) has an impact all the way up to the user side, what do you \nthink?\n\n> \n>>>>> It would be interesting to see the output of 'media-ctl -p' for your\n>>>>> platform.\n>>\n>> The ouput of media-ctl -p (with Hugues's patches)\n>>\n>> Media controller API version 4.19.24\n>>\n>> Media device information\n>> ------------------------\n>> driver          stm32-dcmi\n>> model           stm32-dcmi\n>> serial\n>> bus info        platform:stm32-dcmi\n>> hw revision     0x0\n>> driver version  4.19.24\n>>\n>> Device topology\n>> - entity 1: stm32_dcmi (1 pad, 1 link)\n>>               type Node subtype V4L flags 1\n>>               device node name /dev/video0\n>>           pad0: Sink\n>>                   <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n>>\n>> - entity 5: ov5640 0-003c (1 pad, 1 link)\n>>               type V4L2 subdev subtype Sensor flags 0\n>>     
          device node name /dev/v4l-subdev0\n>>           pad0: Source\n>>                   [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg\n>> xfer:srgb ycbcr:601 quantization:full-range]\n>>                   -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n>>\n>>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>> new file mode 100644\n>>>>>> index 0000000..301fdfc\n>>>>>> --- /dev/null\n>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>> @@ -0,0 +1,205 @@\n>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>>>>> +/*\n>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices\n>>>>>> + */\n>>>>>> +\n>>>>>> +#include <libcamera/camera.h>\n>>>>>> +#include <libcamera/request.h>\n>>>>>> +#include <libcamera/stream.h>\n>>>>>> +\n>>>>>> +#include \"device_enumerator.h\"\n>>>>>> +#include \"log.h\"\n>>>>>> +#include \"media_device.h\"\n>>>>>> +#include \"pipeline_handler.h\"\n>>>>>> +#include \"utils.h\"\n>>>>>> +#include \"v4l2_device.h\"\n>>>>>> +\n>>>>>> +namespace libcamera {\n>>>>>> +\n>>>>>> +LOG_DEFINE_CATEGORY(STM32)\n>>>>>> +\n>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n>>>>>> +public:\n>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);\n>>>>>> +  ~PipelineHandlerSTM32();\n>>>>> Our coding style uses a tab to indent.\n>>>>>\n>>>>> When you have committed your code, you can run ./utils/checkstyle.py to\n>>>>> check the most recent commit (or you can specify a range suitable for git).\n>>>>>\n>>>>> This requires installing clang-format ideally, although astyle is also\n>>>>> supported to some degree.\n>>\n>> I have fix clang-format package version on my side, it will be better in v2.\n> \n> Thank you.\n> \n>>>>>\n>>>>>> +\n>>>>>> +  std::map<Stream *, StreamConfiguration>\n>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n>>>>>> +  int configureStreams(\n>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) 
override;\n>>>>>> +\n>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n>>>>>> +\n>>>>>> +  int start(Camera *camera) override;\n>>>>>> +  void stop(Camera *camera) override;\n>>>>>> +\n>>>>>> +  int queueRequest(Camera *camera, Request *request) override;\n>>>>>> +\n>>>>>> +  bool match(DeviceEnumerator *enumerator);\n>>>>>> +\n>>>>>> +private:\n>>>>>> +  class STM32CameraData : public CameraData {\n>>>>>> +  public:\n>>>>>> +    STM32CameraData(PipelineHandler *pipe)\n>>>>>> +        : CameraData(pipe), video_(nullptr) {}\n>>>>>> +\n>>>>>> +    ~STM32CameraData() { delete video_; }\n>>>>>> +\n>>>>>> +    void bufferReady(Buffer *buffer);\n>>>>>> +\n>>>>>> +    V4L2Device *video_;\n>>>>>> +    Stream stream_;\n>>>>>> +  };\n>>>>>> +\n>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {\n>>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n>>>>>> +  }\n>>>>>> +\n>>>>>> +  std::shared_ptr<MediaDevice> media_;\n>>>>>> +};\n>>>>>> +\n>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}\n>>>>>> +\n>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n>>>>>> +  if (media_)\n>>>>>> +    media_->release();\n>>>>>> +}\n>>>>>> +\n>>>>>> +std::map<Stream *, StreamConfiguration>\n>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n>>>>>> +                                          std::set<Stream *> &streams) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +\n>>>>>> +  std::map<Stream *, StreamConfiguration> configs;\n>>>>>> +  StreamConfiguration config{};\n>>>>>> +\n>>>>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n>>>>>> +  config.width = 640;\n>>>>>> +  config.height = 480;\n>>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n>>>>>> +  config.bufferCount = 4;\n>>>>>> +\n>>>>>> +  configs[&data->stream_] = 
config;\n>>>>>> +\n>>>>>> +  return configs;\n>>>>>> +}\n>>>>>> +\n>>>>>> +int PipelineHandlerSTM32::configureStreams(\n>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n>>>>>> +  int ret;\n>>>>>> +\n>>>>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n>>>>>> +                    << \"x\" << cfg->height;\n>>>>>> +\n>>>>>> +  V4L2DeviceFormat format = {};\n>>>>>> +  format.width = cfg->width;\n>>>>>> +  format.height = cfg->height;\n>>>>>> +  format.fourcc = cfg->pixelFormat;\n>>>>>> +\n>>>>>> +  ret = data->video_->setFormat(&format);\n>>>>>> +  if (ret)\n>>>>>> +    return ret;\n>>>>>> +\n>>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||\n>>>>>> +      format.fourcc != cfg->pixelFormat)\n>>>>>> +    return -EINVAL;\n>>>>>> +\n>>>>>> +  return 0;\n>>>>>> +}\n>>>>>> +\n>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  const StreamConfiguration &cfg = stream->configuration();\n>>>>>> +\n>>>>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n>>>>>> +\n>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n>>>>>> +}\n>>>>>> +\n>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  return data->video_->releaseBuffers();\n>>>>>> +}\n>>>>>> +\n>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  return data->video_->streamOn();\n>>>>>> +}\n>>>>>> +\n>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  data->video_->streamOff();\n>>>>>> +  PipelineHandler::stop(camera);\n>>>>>> +}\n>>>>>> +\n>>>>>> 
+int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n>>>>>> +  if (!buffer) {\n>>>>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n>>>>>> +\n>>>>>> +    return -ENOENT;\n>>>>>> +  }\n>>>>>> +\n>>>>>> +  int ret = data->video_->queueBuffer(buffer);\n>>>>>> +  if (ret < 0)\n>>>>>> +    return ret;\n>>>>>> +\n>>>>>> +  PipelineHandler::queueRequest(camera, request);\n>>>>>> +\n>>>>>> +  return 0;\n>>>>>> +}\n>>>>>> +\n>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n>>>>>> +  DeviceMatch dm(\"stm32-dcmi\");\n>>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there\n>>>>> ever a need to interact with this subdevice directly?\n>>>>>\n>>>>> Or are all controls handled through the DCMI driver?\n>>>>>\n>>>>> I think it looks like everything simply goes through the stm32-dcmi\n>>>>> V4L2Device interface.\n>>\n>> Until today yes but Hugues is working with v4l for media-controller\n>> awareness.\n>>\n>>>>> That said, I can't see if there is any media-controller device in the\n>>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller\n>>>>> enabled devices.\n>>>>>\n>>>>>> +\n>>>>>> +  media_ = enumerator->search(dm);\n>>>>>> +  if (!media_)\n>>>>>> +    return false;\n>>>>>> +\n>>>>>> +  media_->acquire();\n>>>>>> +\n>>>>>> +  std::unique_ptr<STM32CameraData> data =\n>>>>>> +      utils::make_unique<STM32CameraData>(this);\n>>>>>> +\n>>>>>> +  /* Locate and open the default video node. 
*/\n>>>>>> +  for (MediaEntity *entity : media_->entities()) {\n>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n>>>>>> +      data->video_ = new V4L2Device(entity);\n>>>>>> +      break;\n>>>>>> +    }\n>>>>>> +  }\n>>>>>\n>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n>>>>> correctly match your device?\n>>\n>> It will because it is in Hugues's patches.\n>>\n>> Does an another way of working is posssible ?\n>>\n>> Maybe we have misunderstood the mean of MEDIA_ENT_FL_DEFAULT and how to\n>> use it.\n> \n> There's a single video node in your media graph, so there's no real need\n> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,\n> can't you ?\n> \n>> Do you have some advices ?\n>>\n>>>>>> +\n>>>>>> +  if (!data->video_) {\n>>>>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n>>>>>> +    return false;\n>>>>>> +  }\n>>>>>> +\n>>>>>> +  if (data->video_->open())\n>>>>>> +    return false;\n>>>>>> +\n>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n>>>>>> +\n>>>>>> +  /* Create and register the camera. */\n>>>>>> +  std::set<Stream *> streams{&data->stream_};\n>>>>>> +  std::shared_ptr<Camera> camera =\n>>>>>> +      Camera::create(this, media_->model(), streams);\n>>>>>> +  registerCamera(std::move(camera), std::move(data));\n>>>>>> +\n>>>>>> +  return true;\n>>>>>> +}\n>>>>>> +\n>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n>>>>>> +  Request *request = queuedRequests_.front();\n>>>>>> +\n>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);\n>>>>>> +  pipe_->completeRequest(camera_, request);\n>>>>>> +}\n>>>>>> +\n>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n>>>>>> +\n>>>>>> +} /* namespace libcamera */\n>>>>>>\n>>>>> Thank you for joining the libcamera project! 
- If there is anything more\n>>>>> we can do to help you - please let us know.\n> \n\nBR,\nHugues.","headers":{"Return-Path":"<hugues.fruchet@st.com>","Received":["from mx07-00178001.pphosted.com (mx08-00178001.pphosted.com\n\t[91.207.212.93])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 8E27060DBE\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 18 Apr 2019 16:22:12 +0200 (CEST)","from pps.filterd (m0046661.ppops.net [127.0.0.1])\n\tby mx08-00178001.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id\n\tx3IELjq9030135; Thu, 18 Apr 2019 16:22:10 +0200","from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])\n\tby mx08-00178001.pphosted.com with ESMTP id 2ru6nh6hfa-1\n\t(version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NOT);\n\tThu, 18 Apr 2019 16:22:10 +0200","from zeta.dmz-eu.st.com (zeta.dmz-eu.st.com [164.129.230.9])\n\tby beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 6551934;\n\tThu, 18 Apr 2019 14:22:08 +0000 (GMT)","from Webmail-eu.st.com (sfhdag3node1.st.com [10.75.127.7])\n\tby zeta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 2D35828BC;\n\tThu, 18 Apr 2019 14:22:08 +0000 (GMT)","from SFHDAG5NODE1.st.com (10.75.127.13) by SFHDAG3NODE1.st.com\n\t(10.75.127.7) with Microsoft SMTP Server (TLS) id 15.0.1347.2;\n\tThu, 18 Apr 2019 16:22:08 +0200","from SFHDAG5NODE1.st.com ([fe80::cc53:528c:36c8:95f6]) by\n\tSFHDAG5NODE1.st.com ([fe80::cc53:528c:36c8:95f6%20]) with mapi id\n\t15.00.1347.000; Thu, 18 Apr 2019 16:22:07 +0200"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=st.com;\n\th=from : to : cc : subject\n\t: date : message-id : references : in-reply-to : content-type :\n\tcontent-id\n\t: content-transfer-encoding : mime-version; 
s=STMicroelectronics;\n\tbh=Ee+q5REC9/St+jRkyomWbYpp5u8zakEtmW0Vhoz1Zic=;\n\tb=JqlzsVxhX5o/TCHScUvn4HGO33yAMfmIA9vlUJobMiIzji1IyivKxAsUBgWDcxGArLd8\n\t+5S0c7vDsUnA2zvrzPeuVfm5/mnYvCPn1B66DB5VkoHXSZRblm91sqpLuRXJW+D2mOYr\n\tmNsCjkAVqrafi1lCya6vXwrJ5ibx3RPS78Yx6DUBTQmscnNeEOTz2JwNEccsR8YvBXjq\n\t1UR9KOVRNLYzHTC6jNFweTHRc4EZf5FpmsY4wBgc+BveJAgHkYD9Y3VrcCrXIOKYGL3h\n\tRYHfOIiZnIoP3xy2CB8Y8Lu0LO+0qDPxziypY7V8jwcLNObqFKGn8XltZQbSPKft2LP5\n\t5w== ","From":"Hugues FRUCHET <hugues.fruchet@st.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>, Benjamin GAIGNARD\n\t<benjamin.gaignard@st.com>","CC":"\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>","Thread-Topic":"[libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","Thread-Index":"AQHU7girm5QzcQSF/0eCDK+twiw+qKYzaF0AgAAKGwCAClHqgIAAndCAgAHNsoCAAba2AA==","Date":"Thu, 18 Apr 2019 14:22:07 +0000","Message-ID":"<bc4b94a7-2b32-937d-1583-881aad476aa2@st.com>","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>\n\t<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>\n\t<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>\n\t<20190415231435.GO17083@pendragon.ideasonboard.com>\n\t<8692ce69-0733-4807-dd25-e4568cec0443@st.com>\n\t<20190417121153.GH4993@pendragon.ideasonboard.com>","In-Reply-To":"<20190417121153.GH4993@pendragon.ideasonboard.com>","Accept-Language":"en-US","Content-Language":"en-US","X-MS-Has-Attach":"","X-MS-TNEF-Correlator":"","user-agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101\n\tThunderbird/60.6.1","x-ms-exchange-messagesentrepresentingtype":"1","x-ms-exchange-transport-fromentityheader":"Hosted","x-originating-ip":"[10.75.127.46]","Content-Type":"text/plain; 
charset=\"utf-8\"","Content-ID":"<3E3090625206C546AF0C9C2835EA943C@st.com>","Content-Transfer-Encoding":"base64","MIME-Version":"1.0","X-Proofpoint-Virus-Version":"vendor=fsecure engine=2.50.10434:, ,\n\tdefinitions=2019-04-18_07:, , signatures=0","X-Mailman-Approved-At":"Thu, 18 Apr 2019 17:16:04 +0200","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Thu, 18 Apr 2019 14:22:12 -0000"}},{"id":1463,"web_url":"https://patchwork.libcamera.org/comment/1463/","msgid":"<20190418160144.GX4806@pendragon.ideasonboard.com>","date":"2019-04-18T16:01:44","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Hugues,\n\nOn Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:\n> On 4/17/19 2:11 PM, Laurent Pinchart wrote:\n> > On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:\n> >> On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n> >>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n> >>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n> >>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n> >>>>>> Provide a 
pipeline handler for the stm32 driver.\n> >>>>> Excellent :-) Thank you for your contribution.\n> >>>>>\n> >>>>> Have you been able to test this and successfully list your camera or\n> >>>>> capture frames?\n> >>>> Yes I have been able to list and capture frames with cam\n> >>>>\n> >>>>> To list successfully registered cameras on the device:\n> >>>>> \tbuild/src/cam/cam -l\n> >>>>>\n> >>>>> I couldn't see any media-controller support in the stm32-dcmi driver...\n> >>>>> Perhaps I'm missing something.\n> >>>> Hugues has send the patches for media controller support a week ago.\n> >>>>\n> >>>> https://lkml.org/lkml/2019/4/1/298\n> >>>>\n> >>>> I have tested libcamera with those patches.\n> >>> I think you may have missed the other comments from Kieran further down.\n> >>\n> >> Yes I have completely miss them\n> >>\n> >>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n> >>>>>> ---\n> >>>>>>     src/libcamera/pipeline/meson.build       |   2 +\n> >>>>>>     src/libcamera/pipeline/stm32/meson.build |   3 +\n> >>>>>>     src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n> >>>>>>     3 files changed, 210 insertions(+)\n> >>>>>>     create mode 100644 src/libcamera/pipeline/stm32/meson.build\n> >>>>>>     create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>>\n> >>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n> >>>>>> index 40bb264..08d6e1c 100644\n> >>>>>> --- a/src/libcamera/pipeline/meson.build\n> >>>>>> +++ b/src/libcamera/pipeline/meson.build\n> >>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n> >>>>>>     ])\n> >>>>>>     \n> >>>>>>     subdir('ipu3')\n> >>>>>> +\n> >>>>>> +subdir('stm32')\n> >>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n> >>>>>> new file mode 100644\n> >>>>>> index 0000000..cb6f16b\n> >>>>>> --- /dev/null\n> >>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n> >>>>>> @@ -0,0 +1,3 
@@\n> >>>>>> +libcamera_sources += files([\n> >>>>>> +    'stm32.cpp',\n> >>>>>> +])\n> >>>>> Currently the stm32.cpp seems like a fairly simple implementation, which\n> >>>>> fits in its own file, much like pipeline/uvcvideo.cpp.\n> >>>>>\n> >>>>> Do you forsee needing to break the STM32 handler into multiple files\n> >>>>> later, or support other components on the STM32?\n> >>>>>\n> >>>>> I think it's fine if you want to have stm32 as a subdir either way  though.\n> >>>>>\n> >>>>> I wonder also, if it is just a 'simple' device if perhaps we should move\n> >>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n> >>>>> subclassed to simply add the device matching, or provide a table of\n> >>>>> supported match strings.\n> >>>>>\n> >>>>>\n> >>>>> What hardware is available on the STM32 for processing frames?\n> >>>>> Do you have a scaler/resizer? or any further ISP functionality?\n> >>\n> >> No scaler, no resize and no ISP functionality.\n> >>\n> >> It is a basic device to memory driver.\n> > \n> > In that case I wonder if we should really have an stm32-specific\n> > pipeline handler. 
It seems to me that we could implement a generic\n> > pipeline handler that would support any graph made of a sensor subdev\n> > connected to a video node.\n> \n> We can also have a bridge in between, so graph becomes:\n> \n> root@stm32mp1:~# media-ctl -d /dev/media0 -p\n> Media controller API version 5.0.0\n> \n> Media device information\n> ------------------------\n> driver          stm32-dcmi\n> model           stm32-dcmi\n> serial\n> bus info        platform:stm32-dcmi\n> hw revision     0x0\n> driver version  5.0.0\n> \n> Device topology\n> - entity 1: stm32_dcmi (1 pad, 1 link)\n>              type Node subtype V4L flags 1\n>              device node name /dev/video0\n>          pad0: Sink\n>                  <- \"mipid02 0-0014\":1 [ENABLED,IMMUTABLE]\n> \n> - entity 5: mipid02 0-0014 (2 pads, 2 links)\n>              type V4L2 subdev subtype Unknown flags 0\n>              device node name /dev/v4l-subdev0\n>          pad0: Sink\n>                  <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n>          pad1: Source\n>                  -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n\nThis is a chip external to the SoC, right ?\n\n> - entity 10: ov5640 0-003c (1 pad, 1 link)\n>               type V4L2 subdev subtype Sensor flags 0\n>               device node name /dev/v4l-subdev1\n>          pad0: Source\n>                  [fmt:JPEG_1X8/320x240 field:none]\n>                  -> \"mipid02 0-0014\":0 [ENABLED,IMMUTABLE]\n> \n> We have discussed this on irc here:\n> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue\n> \n> And in this case, if I have well understood, user has to use media-ctl \n> to set format and resolution on bridge and sensor, just set the \n> resolution and format on video node only will not be sufficient.\n\nUserspace has to configure the pipeline. The media-ctl tool is one way\nto do so, but not the only one. 
libcamera aims at solving this issue,\nand calls the media controller API directly from pipeline handlers\ncompletely transparently for the application.\n\n> Moreover, after having configured format and resolution using media-ctl,\n> the same format and resolution must be asked to GStreamer otherwise a \n> mismatch will occur:\n> \n> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640 \n> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 ! \n> decodebin ! fpsdisplaysink -v\n> \n> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = \n> image/jpeg, width=(int)320, height=(int)240, \n> pixel-aspect-ratio=(fraction)1/1\n> => OK, QVGA JPEG is captured\n> \n> but if I just omit the caps right after v4l2src (we don't force any \n> format/resolution), this is broken:\n> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640 \n> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n> 2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v\n> \n> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = \n> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944, \n> pixel-aspect-ratio=(fe\n> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send \n> QVGA JPEG...\n> \n> In summary if I have well understood, user must ensure right match of \n> both media-ctl format/resolution and v4l2 S_FMT format/resolution.\n\nUserspace does, but not the end-user :-)\n\n> So there are 2 cases to handle at least, the simple case without bridge \n> and the case with bridge where MC is involved.\n> \n> I still feel not comfortable with the fact that just introducing a \n> bridge in the middle (which is just a matter of physical link, no change \n> on functionalities) has impacts up to user side, what do you think ?\n\nThe simplest case of a sensor connected to a camera receiver followed by\na DMA engine has traditionally been supported in V4L2 by a single video\ndevice node. 
At the other extreme, very complex pipelines require usage\nof the media controller API, with possibly complex logic in userspace to\nconnect all the pieces together. Then we have all the cases in-between,\nwith various degrees of complexity, and we need to draw the line\nsomewhere. Now that libcamera is becoming a reality, I think it's best\nfor kernelspace to be as simple as possible, and let userspace handle\nthe pipeline configuration in as many cases as possible. This will be\ntransparent for the end-user, both with native libcamera applications,\nand for gstreamer-based applications when we'll have a gstreamer element\nfor libcamera (development hasn't started yet, but that's something we\nwant to have). Note that we also plan to offer a V4L2 adaptation layer\nthat will allow unmodified pure V4L2 applications to use a camera\nhandled by libcamera, and in theory gstreamer could use that, but it\nwill be limited to best effort as not all features can be exposed that\nway. A native gstreamer libcamera element will likely be better.\n\n> >>>>> It would be interesting to see the output of 'media-ctl -p' for your\n> >>>>> platform.\n> >>\n> >> The output of media-ctl -p (with Hugues's patches)\n> >>\n> >> Media controller API version 4.19.24\n> >>\n> >> Media device information\n> >> ------------------------\n> >> driver          stm32-dcmi\n> >> model           stm32-dcmi\n> >> serial\n> >> bus info        platform:stm32-dcmi\n> >> hw revision     0x0\n> >> driver version  4.19.24\n> >>\n> >> Device topology\n> >> - entity 1: stm32_dcmi (1 pad, 1 link)\n> >>               type Node subtype V4L flags 1\n> >>               device node name /dev/video0\n> >>           pad0: Sink\n> >>                   <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n> >>\n> >> - entity 5: ov5640 0-003c (1 pad, 1 link)\n> >>               type V4L2 subdev subtype Sensor flags 0\n> >>               device node name /dev/v4l-subdev0\n> >>           pad0: Source\n> >>                   
[fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg\n> >> xfer:srgb ycbcr:601 quantization:full-range]\n> >>                   -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n> >>\n> >>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>> new file mode 100644\n> >>>>>> index 0000000..301fdfc\n> >>>>>> --- /dev/null\n> >>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>> @@ -0,0 +1,205 @@\n> >>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> >>>>>> +/*\n> >>>>>> + * stm32.cpp - Pipeline handler for stm32 devices\n> >>>>>> + */\n> >>>>>> +\n> >>>>>> +#include <libcamera/camera.h>\n> >>>>>> +#include <libcamera/request.h>\n> >>>>>> +#include <libcamera/stream.h>\n> >>>>>> +\n> >>>>>> +#include \"device_enumerator.h\"\n> >>>>>> +#include \"log.h\"\n> >>>>>> +#include \"media_device.h\"\n> >>>>>> +#include \"pipeline_handler.h\"\n> >>>>>> +#include \"utils.h\"\n> >>>>>> +#include \"v4l2_device.h\"\n> >>>>>> +\n> >>>>>> +namespace libcamera {\n> >>>>>> +\n> >>>>>> +LOG_DEFINE_CATEGORY(STM32)\n> >>>>>> +\n> >>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n> >>>>>> +public:\n> >>>>>> +  PipelineHandlerSTM32(CameraManager *manager);\n> >>>>>> +  ~PipelineHandlerSTM32();\n> >>>>> Our coding style uses a tab to indent.\n> >>>>>\n> >>>>> When you have committed your code, you can run ./utils/checkstyle.py to\n> >>>>> check the most recent commit (or you can specify a range suitable for git).\n> >>>>>\n> >>>>> This requires installing clang-format ideally, although astyle is also\n> >>>>> supported to some degree.\n> >>\n> >> I have fix clang-format package version on my side, it will be better in v2.\n> > \n> > Thank you.\n> > \n> >>>>>> +\n> >>>>>> +  std::map<Stream *, StreamConfiguration>\n> >>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n> >>>>>> +  int configureStreams(\n> >>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) 
override;\n> >>>>>> +\n> >>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n> >>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n> >>>>>> +\n> >>>>>> +  int start(Camera *camera) override;\n> >>>>>> +  void stop(Camera *camera) override;\n> >>>>>> +\n> >>>>>> +  int queueRequest(Camera *camera, Request *request) override;\n> >>>>>> +\n> >>>>>> +  bool match(DeviceEnumerator *enumerator);\n> >>>>>> +\n> >>>>>> +private:\n> >>>>>> +  class STM32CameraData : public CameraData {\n> >>>>>> +  public:\n> >>>>>> +    STM32CameraData(PipelineHandler *pipe)\n> >>>>>> +        : CameraData(pipe), video_(nullptr) {}\n> >>>>>> +\n> >>>>>> +    ~STM32CameraData() { delete video_; }\n> >>>>>> +\n> >>>>>> +    void bufferReady(Buffer *buffer);\n> >>>>>> +\n> >>>>>> +    V4L2Device *video_;\n> >>>>>> +    Stream stream_;\n> >>>>>> +  };\n> >>>>>> +\n> >>>>>> +  STM32CameraData *cameraData(const Camera *camera) {\n> >>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n> >>>>>> +  }\n> >>>>>> +\n> >>>>>> +  std::shared_ptr<MediaDevice> media_;\n> >>>>>> +};\n> >>>>>> +\n> >>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n> >>>>>> +    : PipelineHandler(manager), media_(nullptr) {}\n> >>>>>> +\n> >>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n> >>>>>> +  if (media_)\n> >>>>>> +    media_->release();\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +std::map<Stream *, StreamConfiguration>\n> >>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n> >>>>>> +                                          std::set<Stream *> &streams) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +\n> >>>>>> +  std::map<Stream *, StreamConfiguration> configs;\n> >>>>>> +  StreamConfiguration config{};\n> >>>>>> +\n> >>>>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n> >>>>>> +  config.width = 640;\n> >>>>>> +  config.height = 480;\n> >>>>>> +  config.pixelFormat = 
V4L2_PIX_FMT_YUYV;\n> >>>>>> +  config.bufferCount = 4;\n> >>>>>> +\n> >>>>>> +  configs[&data->stream_] = config;\n> >>>>>> +\n> >>>>>> +  return configs;\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +int PipelineHandlerSTM32::configureStreams(\n> >>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n> >>>>>> +  int ret;\n> >>>>>> +\n> >>>>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n> >>>>>> +                    << \"x\" << cfg->height;\n> >>>>>> +\n> >>>>>> +  V4L2DeviceFormat format = {};\n> >>>>>> +  format.width = cfg->width;\n> >>>>>> +  format.height = cfg->height;\n> >>>>>> +  format.fourcc = cfg->pixelFormat;\n> >>>>>> +\n> >>>>>> +  ret = data->video_->setFormat(&format);\n> >>>>>> +  if (ret)\n> >>>>>> +    return ret;\n> >>>>>> +\n> >>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||\n> >>>>>> +      format.fourcc != cfg->pixelFormat)\n> >>>>>> +    return -EINVAL;\n> >>>>>> +\n> >>>>>> +  return 0;\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  const StreamConfiguration &cfg = stream->configuration();\n> >>>>>> +\n> >>>>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n> >>>>>> +\n> >>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  return data->video_->releaseBuffers();\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  return data->video_->streamOn();\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +void 
PipelineHandlerSTM32::stop(Camera *camera) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  data->video_->streamOff();\n> >>>>>> +  PipelineHandler::stop(camera);\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n> >>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n> >>>>>> +  if (!buffer) {\n> >>>>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n> >>>>>> +\n> >>>>>> +    return -ENOENT;\n> >>>>>> +  }\n> >>>>>> +\n> >>>>>> +  int ret = data->video_->queueBuffer(buffer);\n> >>>>>> +  if (ret < 0)\n> >>>>>> +    return ret;\n> >>>>>> +\n> >>>>>> +  PipelineHandler::queueRequest(camera, request);\n> >>>>>> +\n> >>>>>> +  return 0;\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n> >>>>>> +  DeviceMatch dm(\"stm32-dcmi\");\n> >>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there\n> >>>>> ever a need to interact with this subdevice directly?\n> >>>>>\n> >>>>> Or are all controls handled through the DCMI driver?\n> >>>>>\n> >>>>> I think it looks like everything simply goes through the stm32-dcmi\n> >>>>> V4L2Device interface.\n> >>\n> >> Until today yes but Hugues is working with v4l for media-controller\n> >> awareness.\n> >>\n> >>>>> That said, I can't see if there is any media-controller device in the\n> >>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller\n> >>>>> enabled devices.\n> >>>>>\n> >>>>>> +\n> >>>>>> +  media_ = enumerator->search(dm);\n> >>>>>> +  if (!media_)\n> >>>>>> +    return false;\n> >>>>>> +\n> >>>>>> +  media_->acquire();\n> >>>>>> +\n> >>>>>> +  std::unique_ptr<STM32CameraData> data =\n> >>>>>> +      utils::make_unique<STM32CameraData>(this);\n> >>>>>> +\n> >>>>>> +  /* Locate and open the default video node. 
*/\n> >>>>>> +  for (MediaEntity *entity : media_->entities()) {\n> >>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n> >>>>>> +      data->video_ = new V4L2Device(entity);\n> >>>>>> +      break;\n> >>>>>> +    }\n> >>>>>> +  }\n> >>>>>\n> >>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n> >>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n> >>>>> correctly match your device?\n> >>\n> >> It will because it is in Hugues's patches.\n> >>\n> >> Does an another way of working is posssible ?\n> >>\n> >> Maybe we have misunderstood the mean of MEDIA_ENT_FL_DEFAULT and how to\n> >> use it.\n> > \n> > There's a single video node in your media graph, so there's no real need\n> > to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,\n> > can't you ?\n> > \n> >> Do you have some advices ?\n> >>\n> >>>>>> +\n> >>>>>> +  if (!data->video_) {\n> >>>>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n> >>>>>> +    return false;\n> >>>>>> +  }\n> >>>>>> +\n> >>>>>> +  if (data->video_->open())\n> >>>>>> +    return false;\n> >>>>>> +\n> >>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n> >>>>>> +\n> >>>>>> +  /* Create and register the camera. 
*/\n> >>>>>> +  std::set<Stream *> streams{&data->stream_};\n> >>>>>> +  std::shared_ptr<Camera> camera =\n> >>>>>> +      Camera::create(this, media_->model(), streams);\n> >>>>>> +  registerCamera(std::move(camera), std::move(data));\n> >>>>>> +\n> >>>>>> +  return true;\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n> >>>>>> +  Request *request = queuedRequests_.front();\n> >>>>>> +\n> >>>>>> +  pipe_->completeBuffer(camera_, request, buffer);\n> >>>>>> +  pipe_->completeRequest(camera_, request);\n> >>>>>> +}\n> >>>>>> +\n> >>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n> >>>>>> +\n> >>>>>> +} /* namespace libcamera */\n> >>>>>>\n> >>>>> Thank you for joining the libcamera project! - If there is anything more\n> >>>>> we can do to help you - please let us know.","headers":{"Return-Path":"<laurent.pinchart@ideasonboard.com>","Received":["from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[213.167.242.64])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 7867060DB2\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 18 Apr 2019 18:01:53 +0200 (CEST)","from pendragon.ideasonboard.com (81-175-216-236.bb.dnainternet.fi\n\t[81.175.216.236])\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id C2AFD333;\n\tThu, 18 Apr 2019 18:01:52 +0200 (CEST)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1555603313;\n\tbh=fq02Zm9YpqqpzGYlw39nVr2IykM3z+1DdJv81aQrHEI=;\n\th=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n\tb=nRCOO9fd0liLOd4xxLNuEG+kYmxUL6xfrKSU58tNMRD12Xe1IzmtgYj2y7Ql0f9O+\n\tHTjSdB7WdcCn/nlzhK9sIGABnP/tMswk21rZhTxJmAh9vXjfUTvMhTqOQAzeinNPW7\n\tdhucUgUE6LhJGaR3f6MAYD0ZykKiICL1CMreHaf4=","Date":"Thu, 18 Apr 2019 19:01:44 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Hugues FRUCHET <hugues.fruchet@st.com>","Cc":"Benjamin GAIGNARD 
<benjamin.gaignard@st.com>,\n\t\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>, \n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>","Message-ID":"<20190418160144.GX4806@pendragon.ideasonboard.com>","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>\n\t<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>\n\t<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>\n\t<20190415231435.GO17083@pendragon.ideasonboard.com>\n\t<8692ce69-0733-4807-dd25-e4568cec0443@st.com>\n\t<20190417121153.GH4993@pendragon.ideasonboard.com>\n\t<bc4b94a7-2b32-937d-1583-881aad476aa2@st.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=utf-8","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<bc4b94a7-2b32-937d-1583-881aad476aa2@st.com>","User-Agent":"Mutt/1.10.1 (2018-07-13)","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Thu, 18 Apr 2019 16:01:53 -0000"}},{"id":1476,"web_url":"https://patchwork.libcamera.org/comment/1476/","msgid":"<8a14e88e-140e-79a7-c3ad-e185556dc97f@st.com>","date":"2019-04-19T07:45:45","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline 
handler for stm32","submitter":{"id":14,"url":"https://patchwork.libcamera.org/api/people/14/","name":"Benjamin GAIGNARD","email":"benjamin.gaignard@st.com"},"content":"On 4/18/19 6:01 PM, Laurent Pinchart wrote:\n> Hi Hugues,\n>\n> On Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:\n>> On 4/17/19 2:11 PM, Laurent Pinchart wrote:\n>>> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:\n>>>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n>>>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n>>>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n>>>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n>>>>>>>> Provide a pipeline handler for the stm32 driver.\n>>>>>>> Excellent :-) Thank you for your contribution.\n>>>>>>>\n>>>>>>> Have you been able to test this and successfully list your camera or\n>>>>>>> capture frames?\n>>>>>> Yes I have been able to list and capture frames with cam\n>>>>>>\n>>>>>>> To list successfully registered cameras on the device:\n>>>>>>> \tbuild/src/cam/cam -l\n>>>>>>>\n>>>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...\n>>>>>>> Perhaps I'm missing something.\n>>>>>> Hugues has send the patches for media controller support a week ago.\n>>>>>>\n>>>>>> https://lkml.org/lkml/2019/4/1/298\n>>>>>>\n>>>>>> I have tested libcamera with those patches.\n>>>>> I think you may have missed the other comments from Kieran further down.\n>>>> Yes I have completely miss them\n>>>>\n>>>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n>>>>>>>> ---\n>>>>>>>>      src/libcamera/pipeline/meson.build       |   2 +\n>>>>>>>>      src/libcamera/pipeline/stm32/meson.build |   3 +\n>>>>>>>>      src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n>>>>>>>>      3 files changed, 210 insertions(+)\n>>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/meson.build\n>>>>>>>>      create mode 100644 
src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>>>>\n>>>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n>>>>>>>> index 40bb264..08d6e1c 100644\n>>>>>>>> --- a/src/libcamera/pipeline/meson.build\n>>>>>>>> +++ b/src/libcamera/pipeline/meson.build\n>>>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n>>>>>>>>      ])\n>>>>>>>>      \n>>>>>>>>      subdir('ipu3')\n>>>>>>>> +\n>>>>>>>> +subdir('stm32')\n>>>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n>>>>>>>> new file mode 100644\n>>>>>>>> index 0000000..cb6f16b\n>>>>>>>> --- /dev/null\n>>>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n>>>>>>>> @@ -0,0 +1,3 @@\n>>>>>>>> +libcamera_sources += files([\n>>>>>>>> +    'stm32.cpp',\n>>>>>>>> +])\n>>>>>>> Currently the stm32.cpp seems like a fairly simple implementation, which\n>>>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.\n>>>>>>>\n>>>>>>> Do you forsee needing to break the STM32 handler into multiple files\n>>>>>>> later, or support other components on the STM32?\n>>>>>>>\n>>>>>>> I think it's fine if you want to have stm32 as a subdir either way  though.\n>>>>>>>\n>>>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move\n>>>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n>>>>>>> subclassed to simply add the device matching, or provide a table of\n>>>>>>> supported match strings.\n>>>>>>>\n>>>>>>>\n>>>>>>> What hardware is available on the STM32 for processing frames?\n>>>>>>> Do you have a scaler/resizer? or any further ISP functionality?\n>>>> No scaler, no resize and no ISP functionality.\n>>>>\n>>>> It is a basic device to memory driver.\n>>> In that case I wonder if we should really have an stm32-specific\n>>> pipeline handler. 
It seems to me that we could implement a generic\n>>> pipeline handler that would support any graph made of a sensor subdev\n>>> connected to a video node.\n>> We can also have a bridge in between, so graph becomes:\n>>\n>> root@stm32mp1:~# media-ctl -d /dev/media0 -p\n>> Media controller API version 5.0.0\n>>\n>> Media device information\n>> ------------------------\n>> driver          stm32-dcmi\n>> model           stm32-dcmi\n>> serial\n>> bus info        platform:stm32-dcmi\n>> hw revision     0x0\n>> driver version  5.0.0\n>>\n>> Device topology\n>> - entity 1: stm32_dcmi (1 pad, 1 link)\n>>               type Node subtype V4L flags 1\n>>               device node name /dev/video0\n>>           pad0: Sink\n>>                   <- \"mipid02 0-0014\":1 [ENABLED,IMMUTABLE]\n>>\n>> - entity 5: mipid02 0-0014 (2 pads, 2 links)\n>>               type V4L2 subdev subtype Unknown flags 0\n>>               device node name /dev/v4l-subdev0\n>>           pad0: Sink\n>>                   <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n>>           pad1: Source\n>>                   -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n> This is a chip external to the SoC, right ?\nYes it is.\n>\n>> - entity 10: ov5640 0-003c (1 pad, 1 link)\n>>                type V4L2 subdev subtype Sensor flags 0\n>>                device node name /dev/v4l-subdev1\n>>           pad0: Source\n>>                   [fmt:JPEG_1X8/320x240 field:none]\n>>                   -> \"mipid02 0-0014\":0 [ENABLED,IMMUTABLE]\n>>\n>> We have discussed this on irc here:\n>> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue\n>>\n>> And in this case, if I have well understood, user has to use media-ctl\n>> to set format and resolution on bridge and sensor, just set the\n>> resolution and format on video node only will not be sufficient.\n> Userspace has to configure the pipeline. The media-ctl tool is one way\n> to do so, but not the only one. 
libcamera aims at solving this issue,\n> and calls the media controller API directly from pipeline handlers\n> completely transparently for the application.\n>\n>> Moreover, after having configured format and resolution using media-ctl,\n>> the same format and resolution must be asked to GStreamer otherwise a\n>> mismatch will occur:\n>>\n>> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640\n>> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n>> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 !\n>> decodebin ! fpsdisplaysink -v\n>>\n>> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =\n>> image/jpeg, width=(int)320, height=(int)240,\n>> pixel-aspect-ratio=(fraction)1/1\n>> => OK, QVGA JPEG is captured\n>>\n>> but if I just omit the caps right after v4l2src (we don't force any\n>> format/resolution), this is broken:\n>> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640\n>> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n>> 2) gst-launch-1.0 v4l2src ! decodebin ! fakesink silent=false -v\n>>\n>> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps =\n>> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944,\n>> pixel-aspect-ratio=(fe\n>> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send\n>> QVGA JPEG...\n>>\n>> In summary if I have well understood, user must ensure right match of\n>> both media-ctl format/resolution and v4l2 S_FMT format/resolution.\n> Userspace does, but not the end-user :-)\n>\n>> So there are 2 cases to handle at least, the simple case without bridge\n>> and the case with bridge where MC is involved.\n>>\n>> I still feel not comfortable with the fact that just introducing a\n>> bridge in the middle (which is just a matter of physical link, no change\n>> on functionalities) has impacts up to user side, what do you think ?\n> The simplest case of a sensor connected to a camera receiver followed by\n> a DMA engine has traditionally been supported in V4L2 by a single video\n> device 
node. At the other extreme, very complex pipelines require usage\n> of the media controller API, with possibly complex logic in userspace to\n> connect all the pieces together. Then we have all the cases in-between,\n> with various degrees of complexity, and we need to draw the line\n> somewhere. Now that libcamera is becoming a reality, I think it's best\n> for kernelspace to be as simple as possible, and let userspace handle\n> the pipeline configuration in as many cases as possible. This will be\n> transparent for the end-user, both with native libcamera applications,\n> and for gstreamer-based applications when we'll have a gstreamer element\n> for libcamera (development hasn't started yet, but that's something we\n> want to have). Note that we also plan to offer a V4L2 adaptation layer\n> that will allow unmodified pure V4L2 applications to use a camera\n> handled by libcamera, and in theory gstreamer could use that, but it\n> will be limited to best effort as not all features can be exposed that\n> way. A native gstreamer libcamera element will likely be better.\nSince the pipeline has to manage all the complexity I will\nkeep a dedicated file for the stm32 pipeline. 
Does that sound good to you?\n\nBenjamin\n\n>\n>>>>>>> It would be interesting to see the output of 'media-ctl -p' for your\n>>>>>>> platform.\n>>>> The output of media-ctl -p (with Hugues's patches)\n>>>>\n>>>> Media controller API version 4.19.24\n>>>>\n>>>> Media device information\n>>>> ------------------------\n>>>> driver          stm32-dcmi\n>>>> model           stm32-dcmi\n>>>> serial\n>>>> bus info        platform:stm32-dcmi\n>>>> hw revision     0x0\n>>>> driver version  4.19.24\n>>>>\n>>>> Device topology\n>>>> - entity 1: stm32_dcmi (1 pad, 1 link)\n>>>>                type Node subtype V4L flags 1\n>>>>                device node name /dev/video0\n>>>>            pad0: Sink\n>>>>                    <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n>>>>\n>>>> - entity 5: ov5640 0-003c (1 pad, 1 link)\n>>>>                type V4L2 subdev subtype Sensor flags 0\n>>>>                device node name /dev/v4l-subdev0\n>>>>            pad0: Source\n>>>>                    [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg\n>>>> xfer:srgb ycbcr:601 quantization:full-range]\n>>>>                    -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n>>>>\n>>>>>>>> diff --git a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>>>> new file mode 100644\n>>>>>>>> index 0000000..301fdfc\n>>>>>>>> --- /dev/null\n>>>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n>>>>>>>> @@ -0,0 +1,205 @@\n>>>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n>>>>>>>> +/*\n>>>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices\n>>>>>>>> + */\n>>>>>>>> +\n>>>>>>>> +#include <libcamera/camera.h>\n>>>>>>>> +#include <libcamera/request.h>\n>>>>>>>> +#include <libcamera/stream.h>\n>>>>>>>> +\n>>>>>>>> +#include \"device_enumerator.h\"\n>>>>>>>> +#include \"log.h\"\n>>>>>>>> +#include \"media_device.h\"\n>>>>>>>> +#include \"pipeline_handler.h\"\n>>>>>>>> +#include \"utils.h\"\n>>>>>>>> +#include \"v4l2_device.h\"\n>>>>>>>> +\n>>>>>>>> +namespace libcamera 
{\n>>>>>>>> +\n>>>>>>>> +LOG_DEFINE_CATEGORY(STM32)\n>>>>>>>> +\n>>>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n>>>>>>>> +public:\n>>>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);\n>>>>>>>> +  ~PipelineHandlerSTM32();\n>>>>>>> Our coding style uses a tab to indent.\n>>>>>>>\n>>>>>>> When you have committed your code, you can run ./utils/checkstyle.py to\n>>>>>>> check the most recent commit (or you can specify a range suitable for git).\n>>>>>>>\n>>>>>>> This requires installing clang-format ideally, although astyle is also\n>>>>>>> supported to some degree.\n>>>> I have fixed the clang-format package version on my side; it will be better in v2.\n>>> Thank you.\n>>>\n>>>>>>>> +\n>>>>>>>> +  std::map<Stream *, StreamConfiguration>\n>>>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n>>>>>>>> +  int configureStreams(\n>>>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n>>>>>>>> +\n>>>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n>>>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n>>>>>>>> +\n>>>>>>>> +  int start(Camera *camera) override;\n>>>>>>>> +  void stop(Camera *camera) override;\n>>>>>>>> +\n>>>>>>>> +  int queueRequest(Camera *camera, Request *request) override;\n>>>>>>>> +\n>>>>>>>> +  bool match(DeviceEnumerator *enumerator);\n>>>>>>>> +\n>>>>>>>> +private:\n>>>>>>>> +  class STM32CameraData : public CameraData {\n>>>>>>>> +  public:\n>>>>>>>> +    STM32CameraData(PipelineHandler *pipe)\n>>>>>>>> +        : CameraData(pipe), video_(nullptr) {}\n>>>>>>>> +\n>>>>>>>> +    ~STM32CameraData() { delete video_; }\n>>>>>>>> +\n>>>>>>>> +    void bufferReady(Buffer *buffer);\n>>>>>>>> +\n>>>>>>>> +    V4L2Device *video_;\n>>>>>>>> +    Stream stream_;\n>>>>>>>> +  };\n>>>>>>>> +\n>>>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {\n>>>>>>>> +    return static_cast<STM32CameraData 
*>(PipelineHandler::cameraData(camera));\n>>>>>>>> +  }\n>>>>>>>> +\n>>>>>>>> +  std::shared_ptr<MediaDevice> media_;\n>>>>>>>> +};\n>>>>>>>> +\n>>>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n>>>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}\n>>>>>>>> +\n>>>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n>>>>>>>> +  if (media_)\n>>>>>>>> +    media_->release();\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +std::map<Stream *, StreamConfiguration>\n>>>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n>>>>>>>> +                                          std::set<Stream *> &streams) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +\n>>>>>>>> +  std::map<Stream *, StreamConfiguration> configs;\n>>>>>>>> +  StreamConfiguration config{};\n>>>>>>>> +\n>>>>>>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n>>>>>>>> +  config.width = 640;\n>>>>>>>> +  config.height = 480;\n>>>>>>>> +  config.pixelFormat = V4L2_PIX_FMT_YUYV;\n>>>>>>>> +  config.bufferCount = 4;\n>>>>>>>> +\n>>>>>>>> +  configs[&data->stream_] = config;\n>>>>>>>> +\n>>>>>>>> +  return configs;\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +int PipelineHandlerSTM32::configureStreams(\n>>>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n>>>>>>>> +  int ret;\n>>>>>>>> +\n>>>>>>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n>>>>>>>> +                    << \"x\" << cfg->height;\n>>>>>>>> +\n>>>>>>>> +  V4L2DeviceFormat format = {};\n>>>>>>>> +  format.width = cfg->width;\n>>>>>>>> +  format.height = cfg->height;\n>>>>>>>> +  format.fourcc = cfg->pixelFormat;\n>>>>>>>> +\n>>>>>>>> +  ret = data->video_->setFormat(&format);\n>>>>>>>> +  if (ret)\n>>>>>>>> +    return ret;\n>>>>>>>> +\n>>>>>>>> +  if (format.width != cfg->width || format.height != cfg->height 
||\n>>>>>>>> +      format.fourcc != cfg->pixelFormat)\n>>>>>>>> +    return -EINVAL;\n>>>>>>>> +\n>>>>>>>> +  return 0;\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  const StreamConfiguration &cfg = stream->configuration();\n>>>>>>>> +\n>>>>>>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n>>>>>>>> +\n>>>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  return data->video_->releaseBuffers();\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  return data->video_->streamOn();\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  data->video_->streamOff();\n>>>>>>>> +  PipelineHandler::stop(camera);\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n>>>>>>>> +  STM32CameraData *data = cameraData(camera);\n>>>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n>>>>>>>> +  if (!buffer) {\n>>>>>>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n>>>>>>>> +\n>>>>>>>> +    return -ENOENT;\n>>>>>>>> +  }\n>>>>>>>> +\n>>>>>>>> +  int ret = data->video_->queueBuffer(buffer);\n>>>>>>>> +  if (ret < 0)\n>>>>>>>> +    return ret;\n>>>>>>>> +\n>>>>>>>> +  PipelineHandler::queueRequest(camera, request);\n>>>>>>>> +\n>>>>>>>> +  return 0;\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n>>>>>>>> +  DeviceMatch dm(\"stm32-dcmi\");\n>>>>>>> I see that the stm32-dcmi 
driver calls down to a subdevice. Is there\n>>>>>>> ever a need to interact with this subdevice directly?\n>>>>>>>\n>>>>>>> Or are all controls handled through the DCMI driver?\n>>>>>>>\n>>>>>>> I think it looks like everything simply goes through the stm32-dcmi\n>>>>>>> V4L2Device interface.\n>>>> Until today yes but Hugues is working with v4l for media-controller\n>>>> awareness.\n>>>>\n>>>>>>> That said, I can't see if there is any media-controller device in the\n>>>>>>> DCMI driver. Our DeviceMatch currently only looks at media-controller\n>>>>>>> enabled devices.\n>>>>>>>\n>>>>>>>> +\n>>>>>>>> +  media_ = enumerator->search(dm);\n>>>>>>>> +  if (!media_)\n>>>>>>>> +    return false;\n>>>>>>>> +\n>>>>>>>> +  media_->acquire();\n>>>>>>>> +\n>>>>>>>> +  std::unique_ptr<STM32CameraData> data =\n>>>>>>>> +      utils::make_unique<STM32CameraData>(this);\n>>>>>>>> +\n>>>>>>>> +  /* Locate and open the default video node. */\n>>>>>>>> +  for (MediaEntity *entity : media_->entities()) {\n>>>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n>>>>>>>> +      data->video_ = new V4L2Device(entity);\n>>>>>>>> +      break;\n>>>>>>>> +    }\n>>>>>>>> +  }\n>>>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n>>>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n>>>>>>> correctly match your device?\n>>>> It will, because it is in Hugues's patches.\n>>>>\n>>>> Is another way of working possible?\n>>>>\n>>>> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to\n>>>> use it.\n>>> There's a single video node in your media graph, so there's no real need\n>>> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,\n>>> can't you ?\n>>>\n>>>> Do you have any advice?\n>>>>\n>>>>>>>> +\n>>>>>>>> +  if (!data->video_) {\n>>>>>>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n>>>>>>>> +    return false;\n>>>>>>>> +  }\n>>>>>>>> +\n>>>>>>>> +  if 
(data->video_->open())\n>>>>>>>> +    return false;\n>>>>>>>> +\n>>>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n>>>>>>>> +\n>>>>>>>> +  /* Create and register the camera. */\n>>>>>>>> +  std::set<Stream *> streams{&data->stream_};\n>>>>>>>> +  std::shared_ptr<Camera> camera =\n>>>>>>>> +      Camera::create(this, media_->model(), streams);\n>>>>>>>> +  registerCamera(std::move(camera), std::move(data));\n>>>>>>>> +\n>>>>>>>> +  return true;\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n>>>>>>>> +  Request *request = queuedRequests_.front();\n>>>>>>>> +\n>>>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);\n>>>>>>>> +  pipe_->completeRequest(camera_, request);\n>>>>>>>> +}\n>>>>>>>> +\n>>>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n>>>>>>>> +\n>>>>>>>> +} /* namespace libcamera */\n>>>>>>>>\n>>>>>>> Thank you for joining the libcamera project! - If there is anything more\n>>>>>>> we can do to help you - please let us know.","headers":{"Return-Path":"<benjamin.gaignard@st.com>","Received":["from mx07-00178001.pphosted.com (mx07-00178001.pphosted.com\n\t[62.209.51.94])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 90B3860004\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tFri, 19 Apr 2019 09:45:51 +0200 (CEST)","from pps.filterd (m0046668.ppops.net [127.0.0.1])\n\tby mx07-00178001.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id\n\tx3J7fRod019155; Fri, 19 Apr 2019 09:45:48 +0200","from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])\n\tby mx07-00178001.pphosted.com with ESMTP id 2ru580j6vc-1\n\t(version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NOT);\n\tFri, 19 Apr 2019 09:45:47 +0200","from zeta.dmz-eu.st.com (zeta.dmz-eu.st.com [164.129.230.9])\n\tby beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id DAF793A;\n\tFri, 19 Apr 2019 07:45:46 +0000 (GMT)","from Webmail-eu.st.com (sfhdag5node3.st.com 
[10.75.127.15])\n\tby zeta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 2BC921481;\n\tFri, 19 Apr 2019 07:45:46 +0000 (GMT)","from SFHDAG3NODE3.st.com (10.75.127.9) by SFHDAG5NODE3.st.com\n\t(10.75.127.15) with Microsoft SMTP Server (TLS) id 15.0.1347.2;\n\tFri, 19 Apr 2019 09:45:45 +0200","from SFHDAG3NODE3.st.com ([fe80::3507:b372:7648:476]) by\n\tSFHDAG3NODE3.st.com ([fe80::3507:b372:7648:476%20]) with mapi id\n\t15.00.1347.000; Fri, 19 Apr 2019 09:45:45 +0200"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=st.com;\n\th=from : to : cc : subject\n\t: date : message-id : references : in-reply-to : content-type :\n\tcontent-id\n\t: content-transfer-encoding : mime-version; s=STMicroelectronics;\n\tbh=s6GkHVrh3JG2/mtjooMDa8hv7BKfgIa8H3bLTw2qYcM=;\n\tb=T8ylDY/zFZ0U6wVLU0Dn8rq5oHUcMQeI53Q28RGV4aCFHHFBmGJlUcrCqTGXNbLwDW5I\n\trXwjDAC1na8Jujz0kocq4QgWC3V38HWf0r7y9XGuM2P4fB/s+N1DRqklWf9D5POi5+vu\n\tPkxQLI9F99ZY4u1+qJwNS8CX7XIp5T2yoEPcuuwl8YHLcdY9LiwLJIi98pR55h74GB2d\n\tySdOqnqlrrEyHR5dE4vr2ruwfskHkqHlGJosR3Ngqy18RJV98NhaDgjcW9056AM120HT\n\tsspeNuEm9PerMDbxi3FCqdSTBpUqVyIDeGc4Aj6jLxk2xL03gndBvUWALWrWVnHl7Z/F\n\tkg== ","From":"Benjamin GAIGNARD <benjamin.gaignard@st.com>","To":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>, Hugues FRUCHET\n\t<hugues.fruchet@st.com>","CC":"\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>,\n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>","Thread-Topic":"[libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","Thread-Index":"AQHU7girxkW0QpRzE0KeuHPcIgbIO6YzaF0AgAAKGYCAClHsgIAAnc8AgAHNs4CAAba3gIAAG9YAgAEHwYA=","Date":"Fri, 19 Apr 2019 07:45:45 
+0000","Message-ID":"<8a14e88e-140e-79a7-c3ad-e185556dc97f@st.com>","References":"<20190408124322.6869-1-benjamin.gaignard@st.com>\n\t<861d9702-3ccf-2686-76b5-8b7bd48e97ae@ideasonboard.com>\n\t<6ff76c69-a243-0ea5-d2a4-54e64b743f56@st.com>\n\t<20190415231435.GO17083@pendragon.ideasonboard.com>\n\t<8692ce69-0733-4807-dd25-e4568cec0443@st.com>\n\t<20190417121153.GH4993@pendragon.ideasonboard.com>\n\t<bc4b94a7-2b32-937d-1583-881aad476aa2@st.com>\n\t<20190418160144.GX4806@pendragon.ideasonboard.com>","In-Reply-To":"<20190418160144.GX4806@pendragon.ideasonboard.com>","Accept-Language":"en-US","Content-Language":"en-US","X-MS-Has-Attach":"","X-MS-TNEF-Correlator":"","user-agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101\n\tThunderbird/60.6.1","x-ms-exchange-messagesentrepresentingtype":"1","x-ms-exchange-transport-fromentityheader":"Hosted","x-originating-ip":"[10.75.127.50]","Content-Type":"text/plain; charset=\"utf-8\"","Content-ID":"<85998345638CDE41B4F7AE28FD5F7DAD@st.com>","Content-Transfer-Encoding":"base64","MIME-Version":"1.0","X-Proofpoint-Virus-Version":"vendor=fsecure engine=2.50.10434:, ,\n\tdefinitions=2019-04-19_04:, , signatures=0","X-Mailman-Approved-At":"Fri, 19 Apr 2019 12:00:50 +0200","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for 
stm32","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.23","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","X-List-Received-Date":"Fri, 19 Apr 2019 07:45:51 -0000"}},{"id":1478,"web_url":"https://patchwork.libcamera.org/comment/1478/","msgid":"<20190419101828.GA4788@pendragon.ideasonboard.com>","date":"2019-04-19T10:18:28","subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32","submitter":{"id":2,"url":"https://patchwork.libcamera.org/api/people/2/","name":"Laurent Pinchart","email":"laurent.pinchart@ideasonboard.com"},"content":"Hi Benjamin,\n\nOn Fri, Apr 19, 2019 at 07:45:45AM +0000, Benjamin GAIGNARD wrote:\n> On 4/18/19 6:01 PM, Laurent Pinchart wrote:\n> > On Thu, Apr 18, 2019 at 02:22:07PM +0000, Hugues FRUCHET wrote:\n> >> On 4/17/19 2:11 PM, Laurent Pinchart wrote:\n> >>> On Tue, Apr 16, 2019 at 08:39:25AM +0000, Benjamin GAIGNARD wrote:\n> >>>> On 4/16/19 1:14 AM, Laurent Pinchart wrote:\n> >>>>> On Tue, Apr 09, 2019 at 09:38:48AM +0000, Benjamin GAIGNARD wrote:\n> >>>>>> On 4/9/19 11:02 AM, Kieran Bingham wrote:\n> >>>>>>> On 08/04/2019 13:43, Benjamin Gaignard wrote:\n> >>>>>>>> Provide a pipeline handler for the stm32 driver.\n> >>>>>>> \n> >>>>>>> Excellent :-) Thank you for your contribution.\n> >>>>>>> \n> >>>>>>> Have you been able to test this and successfully list your camera or\n> >>>>>>> capture frames?\n> >>>>>> \n> >>>>>> Yes I have been 
able to list and capture frames with cam\n> >>>>>> \n> >>>>>>> To list successfully registered cameras on the device:\n> >>>>>>> \tbuild/src/cam/cam -l\n> >>>>>>>\n> >>>>>>> I couldn't see any media-controller support in the stm32-dcmi driver...\n> >>>>>>> Perhaps I'm missing something.\n> >>>>>> \n> >>>>>> Hugues has sent the patches for media controller support a week ago.\n> >>>>>>\n> >>>>>> https://lkml.org/lkml/2019/4/1/298\n> >>>>>>\n> >>>>>> I have tested libcamera with those patches.\n> >>>>> \n> >>>>> I think you may have missed the other comments from Kieran further down.\n> >>>> \n> >>>> Yes, I have completely missed them\n> >>>>\n> >>>>>>>> Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>\n> >>>>>>>> ---\n> >>>>>>>>      src/libcamera/pipeline/meson.build       |   2 +\n> >>>>>>>>      src/libcamera/pipeline/stm32/meson.build |   3 +\n> >>>>>>>>      src/libcamera/pipeline/stm32/stm32.cpp   | 205 +++++++++++++++++++++++++++++++\n> >>>>>>>>      3 files changed, 210 insertions(+)\n> >>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/meson.build\n> >>>>>>>>      create mode 100644 src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>>>>\n> >>>>>>>> diff --git a/src/libcamera/pipeline/meson.build b/src/libcamera/pipeline/meson.build\n> >>>>>>>> index 40bb264..08d6e1c 100644\n> >>>>>>>> --- a/src/libcamera/pipeline/meson.build\n> >>>>>>>> +++ b/src/libcamera/pipeline/meson.build\n> >>>>>>>> @@ -4,3 +4,5 @@ libcamera_sources += files([\n> >>>>>>>>      ])\n> >>>>>>>>      \n> >>>>>>>>      subdir('ipu3')\n> >>>>>>>> +\n> >>>>>>>> +subdir('stm32')\n> >>>>>>>> diff --git a/src/libcamera/pipeline/stm32/meson.build b/src/libcamera/pipeline/stm32/meson.build\n> >>>>>>>> new file mode 100644\n> >>>>>>>> index 0000000..cb6f16b\n> >>>>>>>> --- /dev/null\n> >>>>>>>> +++ b/src/libcamera/pipeline/stm32/meson.build\n> >>>>>>>> @@ -0,0 +1,3 @@\n> >>>>>>>> +libcamera_sources += files([\n> >>>>>>>> +    'stm32.cpp',\n> >>>>>>>> +])\n> >>>>>>> \n> >>>>>>> 
Currently the stm32.cpp seems like a fairly simple implementation, which\n> >>>>>>> fits in its own file, much like pipeline/uvcvideo.cpp.\n> >>>>>>>\n> >>>>>>> Do you foresee needing to break the STM32 handler into multiple files\n> >>>>>>> later, or support other components on the STM32?\n> >>>>>>>\n> >>>>>>> I think it's fine if you want to have stm32 as a subdir either way though.\n> >>>>>>>\n> >>>>>>> I wonder also, if it is just a 'simple' device if perhaps we should move\n> >>>>>>> the bulk of the uvcvideo handler to a 'simple.cpp' and allow it to be\n> >>>>>>> subclassed to simply add the device matching, or provide a table of\n> >>>>>>> supported match strings.\n> >>>>>>>\n> >>>>>>>\n> >>>>>>> What hardware is available on the STM32 for processing frames?\n> >>>>>>> Do you have a scaler/resizer? or any further ISP functionality?\n> >>>> \n> >>>> No scaler, no resizer and no ISP functionality.\n> >>>>\n> >>>> It is a basic device-to-memory driver.\n> >>> \n> >>> In that case I wonder if we should really have an stm32-specific\n> >>> pipeline handler. 
It seems to me that we could implement a generic\n> >>> pipeline handler that would support any graph made of a sensor subdev\n> >>> connected to a video node.\n> >> \n> >> We can also have a bridge in between, so the graph becomes:\n> >>\n> >> root@stm32mp1:~# media-ctl -d /dev/media0 -p\n> >> Media controller API version 5.0.0\n> >>\n> >> Media device information\n> >> ------------------------\n> >> driver          stm32-dcmi\n> >> model           stm32-dcmi\n> >> serial\n> >> bus info        platform:stm32-dcmi\n> >> hw revision     0x0\n> >> driver version  5.0.0\n> >>\n> >> Device topology\n> >> - entity 1: stm32_dcmi (1 pad, 1 link)\n> >>               type Node subtype V4L flags 1\n> >>               device node name /dev/video0\n> >>           pad0: Sink\n> >>                   <- \"mipid02 0-0014\":1 [ENABLED,IMMUTABLE]\n> >>\n> >> - entity 5: mipid02 0-0014 (2 pads, 2 links)\n> >>               type V4L2 subdev subtype Unknown flags 0\n> >>               device node name /dev/v4l-subdev0\n> >>           pad0: Sink\n> >>                   <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n> >>           pad1: Source\n> >>                   -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n> > This is a chip external to the SoC, right ?\n> \n> Yes it is.\n> \n> >> - entity 10: ov5640 0-003c (1 pad, 1 link)\n> >>                type V4L2 subdev subtype Sensor flags 0\n> >>                device node name /dev/v4l-subdev1\n> >>           pad0: Source\n> >>                   [fmt:JPEG_1X8/320x240 field:none]\n> >>                   -> \"mipid02 0-0014\":0 [ENABLED,IMMUTABLE]\n> >>\n> >> We have discussed this on irc here:\n> >> https://linuxtv.org/irc/irclogger_log/v4l?date=2019-04-02,Tue\n> >>\n> >> And in this case, if I have well understood, the user has to use media-ctl\n> >> to set the format and resolution on the bridge and sensor; just setting the\n> >> resolution and format on the video node will not be sufficient.\n> >\n> > Userspace has to configure the pipeline. 
The media-ctl tool is one way\n> > to do so, but not the only one. libcamera aims at solving this issue,\n> > and calls the media controller API directly from pipeline handlers\n> > completely transparently for the application.\n> >\n> >> Moreover, after having configured format and resolution using media-ctl,\n> >> the same format and resolution must be asked to GStreamer otherwise a\n> >> mismatch will occur:\n> >>\n> >> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640\n> >> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n> >> 2) gst-launch-1.0 v4l2src ! image/jpeg, width=320, height=240 !\n> >> decodebin ! fpsdisplaysink -v\n> >>\n> >> /GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =\n> >> image/jpeg, width=(int)320, height=(int)240,\n> >> pixel-aspect-ratio=(fraction)1/1\n> >> => OK, QVGA JPEG is captured\n> >>\n> >> but if I just omit the caps right after v4l2src (we don't force any\n> >> format/resolution), this is broken:\n> >> 1) media-ctl -d /dev/media0 --set-v4l2 \"'ov5640\n> >> 0-003c':0[fmt:JPEG_1X8/640x480 field:none]\"\n> >> 2) gst-launch-1.0 v4l2src ! decodebin ! 
fakesink silent=false -v\n> >>\n> >> /GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps =\n> >> video/x-raw, format=(string)YUY2, width=(int)2592, height=(int)1944,\n> >> pixel-aspect-ratio=(fe\n> >> => KO, we try to capture 5Mp YUYV while ov5640 is configured to send\n> >> QVGA JPEG...\n> >>\n> >> In summary if I have well understood, user must ensure right match of\n> >> both media-ctl format/resolution and v4l2 S_FMT format/resolution.\n> >\n> > Userspace does, but not the end-user :-)\n> >\n> >> So there are 2 cases to handle at least, the simple case without bridge\n> >> and the case with bridge where MC is involved.\n> >>\n> >> I still feel not comfortable with the fact that just introducing a\n> >> bridge in the middle (which is just a matter of physical link, no change\n> >> on functionalities) has impacts up to user side, what do you think ?\n> >\n> > The simplest case of a sensor connected to a camera receiver followed by\n> > a DMA engine has traditionally been supported in V4L2 by a single video\n> > device node. At the other extreme, very complex pipelines require usage\n> > of the media controller API, with possibly complex logic in userspace to\n> > connect all the pieces together. Then we have all the cases in-between,\n> > with various degrees of complexity, and we need to draw the line\n> > somewhere. Now that libcamera is becoming a reality, I think it's best\n> > for kernelspace to be as simple as possible, and let userspace handle\n> > the pipeline configuration in as many cases as possible. This will be\n> > transparent for the end-user, both with native libcamera applications,\n> > and for gstreamer-based applications when we'll have a gstreamer element\n> > for libcamera (development hasn't started yet, but that's something we\n> > want to have). 
Note that we also plan to offer a V4L2 adaptation layer\n> > that will allow unmodified pure V4L2 applications to use a camera\n> > handled by libcamera, and in theory gstreamer could use that, but it\n> > will be limited to best effort as not all features can be exposed that\n> > way. A native gstreamer libcamera element will likely be better.\n> \n> Since the pipeline has to manage all the complexity I will\n> keep a dedicated file for the stm32 pipeline. Does that sound good to you?\n\nBut the code isn't tied to the STM32 in any way, is it ? I still think\nit should be made generic to support any camera that has an identical\narchitecture. In particular handling of the bridge should go to a helper\nclass, probably related to the recently added CameraSensor class.\n\n> >>>>>>> It would be interesting to see the output of 'media-ctl -p' for your\n> >>>>>>> platform.\n> >>>>\n> >>>> The output of media-ctl -p (with Hugues's patches)\n> >>>>\n> >>>> Media controller API version 4.19.24\n> >>>>\n> >>>> Media device information\n> >>>> ------------------------\n> >>>> driver          stm32-dcmi\n> >>>> model           stm32-dcmi\n> >>>> serial\n> >>>> bus info        platform:stm32-dcmi\n> >>>> hw revision     0x0\n> >>>> driver version  4.19.24\n> >>>>\n> >>>> Device topology\n> >>>> - entity 1: stm32_dcmi (1 pad, 1 link)\n> >>>>                type Node subtype V4L flags 1\n> >>>>                device node name /dev/video0\n> >>>>            pad0: Sink\n> >>>>                    <- \"ov5640 0-003c\":0 [ENABLED,IMMUTABLE]\n> >>>>\n> >>>> - entity 5: ov5640 0-003c (1 pad, 1 link)\n> >>>>                type V4L2 subdev subtype Sensor flags 0\n> >>>>                device node name /dev/v4l-subdev0\n> >>>>            pad0: Source\n> >>>>                    [fmt:JPEG_1X8/320x240@1/30 field:none colorspace:jpeg\n> >>>> xfer:srgb ycbcr:601 quantization:full-range]\n> >>>>                    -> \"stm32_dcmi\":0 [ENABLED,IMMUTABLE]\n> >>>>\n> >>>>>>>> diff --git 
a/src/libcamera/pipeline/stm32/stm32.cpp b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>>>> new file mode 100644\n> >>>>>>>> index 0000000..301fdfc\n> >>>>>>>> --- /dev/null\n> >>>>>>>> +++ b/src/libcamera/pipeline/stm32/stm32.cpp\n> >>>>>>>> @@ -0,0 +1,205 @@\n> >>>>>>>> +/* SPDX-License-Identifier: LGPL-2.1-or-later */\n> >>>>>>>> +/*\n> >>>>>>>> + * stm32.cpp - Pipeline handler for stm32 devices\n> >>>>>>>> + */\n> >>>>>>>> +\n> >>>>>>>> +#include <libcamera/camera.h>\n> >>>>>>>> +#include <libcamera/request.h>\n> >>>>>>>> +#include <libcamera/stream.h>\n> >>>>>>>> +\n> >>>>>>>> +#include \"device_enumerator.h\"\n> >>>>>>>> +#include \"log.h\"\n> >>>>>>>> +#include \"media_device.h\"\n> >>>>>>>> +#include \"pipeline_handler.h\"\n> >>>>>>>> +#include \"utils.h\"\n> >>>>>>>> +#include \"v4l2_device.h\"\n> >>>>>>>> +\n> >>>>>>>> +namespace libcamera {\n> >>>>>>>> +\n> >>>>>>>> +LOG_DEFINE_CATEGORY(STM32)\n> >>>>>>>> +\n> >>>>>>>> +class PipelineHandlerSTM32 : public PipelineHandler {\n> >>>>>>>> +public:\n> >>>>>>>> +  PipelineHandlerSTM32(CameraManager *manager);\n> >>>>>>>> +  ~PipelineHandlerSTM32();\n> >>>>>>> Our coding style uses a tab to indent.\n> >>>>>>>\n> >>>>>>> When you have committed your code, you can run ./utils/checkstyle.py to\n> >>>>>>> check the most recent commit (or you can specify a range suitable for git).\n> >>>>>>>\n> >>>>>>> This requires installing clang-format ideally, although astyle is also\n> >>>>>>> supported to some degree.\n> >>>>\n> >>>> I have fixed the clang-format package version on my side; it will be better in v2.\n> >>>\n> >>> Thank you.\n> >>>\n> >>>>>>>> +\n> >>>>>>>> +  std::map<Stream *, StreamConfiguration>\n> >>>>>>>> +  streamConfiguration(Camera *camera, std::set<Stream *> &streams) override;\n> >>>>>>>> +  int configureStreams(\n> >>>>>>>> +      Camera *camera, std::map<Stream *, StreamConfiguration> &config) override;\n> >>>>>>>> +\n> >>>>>>>> +  int allocateBuffers(Camera *camera, Stream *stream) override;\n> 
>>>>>>>> +  int freeBuffers(Camera *camera, Stream *stream) override;\n> >>>>>>>> +\n> >>>>>>>> +  int start(Camera *camera) override;\n> >>>>>>>> +  void stop(Camera *camera) override;\n> >>>>>>>> +\n> >>>>>>>> +  int queueRequest(Camera *camera, Request *request) override;\n> >>>>>>>> +\n> >>>>>>>> +  bool match(DeviceEnumerator *enumerator);\n> >>>>>>>> +\n> >>>>>>>> +private:\n> >>>>>>>> +  class STM32CameraData : public CameraData {\n> >>>>>>>> +  public:\n> >>>>>>>> +    STM32CameraData(PipelineHandler *pipe)\n> >>>>>>>> +        : CameraData(pipe), video_(nullptr) {}\n> >>>>>>>> +\n> >>>>>>>> +    ~STM32CameraData() { delete video_; }\n> >>>>>>>> +\n> >>>>>>>> +    void bufferReady(Buffer *buffer);\n> >>>>>>>> +\n> >>>>>>>> +    V4L2Device *video_;\n> >>>>>>>> +    Stream stream_;\n> >>>>>>>> +  };\n> >>>>>>>> +\n> >>>>>>>> +  STM32CameraData *cameraData(const Camera *camera) {\n> >>>>>>>> +    return static_cast<STM32CameraData *>(PipelineHandler::cameraData(camera));\n> >>>>>>>> +  }\n> >>>>>>>> +\n> >>>>>>>> +  std::shared_ptr<MediaDevice> media_;\n> >>>>>>>> +};\n> >>>>>>>> +\n> >>>>>>>> +PipelineHandlerSTM32::PipelineHandlerSTM32(CameraManager *manager)\n> >>>>>>>> +    : PipelineHandler(manager), media_(nullptr) {}\n> >>>>>>>> +\n> >>>>>>>> +PipelineHandlerSTM32::~PipelineHandlerSTM32() {\n> >>>>>>>> +  if (media_)\n> >>>>>>>> +    media_->release();\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +std::map<Stream *, StreamConfiguration>\n> >>>>>>>> +PipelineHandlerSTM32::streamConfiguration(Camera *camera,\n> >>>>>>>> +                                          std::set<Stream *> &streams) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +\n> >>>>>>>> +  std::map<Stream *, StreamConfiguration> configs;\n> >>>>>>>> +  StreamConfiguration config{};\n> >>>>>>>> +\n> >>>>>>>> +  LOG(STM32, Debug) << \"Retrieving default format\";\n> >>>>>>>> +  config.width = 640;\n> >>>>>>>> +  config.height = 480;\n> >>>>>>>> +  config.pixelFormat = 
V4L2_PIX_FMT_YUYV;\n> >>>>>>>> +  config.bufferCount = 4;\n> >>>>>>>> +\n> >>>>>>>> +  configs[&data->stream_] = config;\n> >>>>>>>> +\n> >>>>>>>> +  return configs;\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +int PipelineHandlerSTM32::configureStreams(\n> >>>>>>>> +    Camera *camera, std::map<Stream *, StreamConfiguration> &config) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +  StreamConfiguration *cfg = &config[&data->stream_];\n> >>>>>>>> +  int ret;\n> >>>>>>>> +\n> >>>>>>>> +  LOG(STM32, Debug) << \"Configure the camera for resolution \" << cfg->width\n> >>>>>>>> +                    << \"x\" << cfg->height;\n> >>>>>>>> +\n> >>>>>>>> +  V4L2DeviceFormat format = {};\n> >>>>>>>> +  format.width = cfg->width;\n> >>>>>>>> +  format.height = cfg->height;\n> >>>>>>>> +  format.fourcc = cfg->pixelFormat;\n> >>>>>>>> +\n> >>>>>>>> +  ret = data->video_->setFormat(&format);\n> >>>>>>>> +  if (ret)\n> >>>>>>>> +    return ret;\n> >>>>>>>> +\n> >>>>>>>> +  if (format.width != cfg->width || format.height != cfg->height ||\n> >>>>>>>> +      format.fourcc != cfg->pixelFormat)\n> >>>>>>>> +    return -EINVAL;\n> >>>>>>>> +\n> >>>>>>>> +  return 0;\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +int PipelineHandlerSTM32::allocateBuffers(Camera *camera, Stream *stream) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +  const StreamConfiguration &cfg = stream->configuration();\n> >>>>>>>> +\n> >>>>>>>> +  LOG(STM32, Debug) << \"Requesting \" << cfg.bufferCount << \" buffers\";\n> >>>>>>>> +\n> >>>>>>>> +  return data->video_->exportBuffers(&stream->bufferPool());\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +int PipelineHandlerSTM32::freeBuffers(Camera *camera, Stream *stream) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +  return data->video_->releaseBuffers();\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +int PipelineHandlerSTM32::start(Camera *camera) {\n> >>>>>>>> +  STM32CameraData *data = 
cameraData(camera);\n> >>>>>>>> +  return data->video_->streamOn();\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +void PipelineHandlerSTM32::stop(Camera *camera) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +  data->video_->streamOff();\n> >>>>>>>> +  PipelineHandler::stop(camera);\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +int PipelineHandlerSTM32::queueRequest(Camera *camera, Request *request) {\n> >>>>>>>> +  STM32CameraData *data = cameraData(camera);\n> >>>>>>>> +  Buffer *buffer = request->findBuffer(&data->stream_);\n> >>>>>>>> +  if (!buffer) {\n> >>>>>>>> +    LOG(STM32, Error) << \"Attempt to queue request with invalid stream\";\n> >>>>>>>> +\n> >>>>>>>> +    return -ENOENT;\n> >>>>>>>> +  }\n> >>>>>>>> +\n> >>>>>>>> +  int ret = data->video_->queueBuffer(buffer);\n> >>>>>>>> +  if (ret < 0)\n> >>>>>>>> +    return ret;\n> >>>>>>>> +\n> >>>>>>>> +  PipelineHandler::queueRequest(camera, request);\n> >>>>>>>> +\n> >>>>>>>> +  return 0;\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +bool PipelineHandlerSTM32::match(DeviceEnumerator *enumerator) {\n> >>>>>>>> +  DeviceMatch dm(\"stm32-dcmi\");\n> >>>>>>>\n> >>>>>>> I see that the stm32-dcmi driver calls down to a subdevice. Is there\n> >>>>>>> ever a need to interact with this subdevice directly?\n> >>>>>>>\n> >>>>>>> Or are all controls handled through the DCMI driver?\n> >>>>>>>\n> >>>>>>> I think it looks like everything simply goes through the stm32-dcmi\n> >>>>>>> V4L2Device interface.\n> >>>>\n> >>>> Until today, yes, but Hugues is working on media-controller\n> >>>> awareness for the V4L2 driver.\n> >>>>\n> >>>>>>> That said, I can't see if there is any media-controller device in the\n> >>>>>>> DCMI driver. 
Our DeviceMatch currently only looks at media-controller\n> >>>>>>> enabled devices.\n> >>>>>>>\n> >>>>>>>> +\n> >>>>>>>> +  media_ = enumerator->search(dm);\n> >>>>>>>> +  if (!media_)\n> >>>>>>>> +    return false;\n> >>>>>>>> +\n> >>>>>>>> +  media_->acquire();\n> >>>>>>>> +\n> >>>>>>>> +  std::unique_ptr<STM32CameraData> data =\n> >>>>>>>> +      utils::make_unique<STM32CameraData>(this);\n> >>>>>>>> +\n> >>>>>>>> +  /* Locate and open the default video node. */\n> >>>>>>>> +  for (MediaEntity *entity : media_->entities()) {\n> >>>>>>>> +    if (entity->flags() & MEDIA_ENT_FL_DEFAULT) {\n> >>>>>>>> +      data->video_ = new V4L2Device(entity);\n> >>>>>>>> +      break;\n> >>>>>>>> +    }\n> >>>>>>>> +  }\n> >>>>>>> I think the MEDIA_ENT_FL_DEFAULT is currently quite specific to the\n> >>>>>>> UVCVideo (and omap3isp) pipelines, so I'm not sure if this will\n> >>>>>>> correctly match your device?\n> >>>>\n> >>>> It will, because it is in Hugues's patches.\n> >>>>\n> >>>> Is another way of working possible?\n> >>>>\n> >>>> Maybe we have misunderstood the meaning of MEDIA_ENT_FL_DEFAULT and how to\n> >>>> use it.\n> >>>\n> >>> There's a single video node in your media graph, so there's no real need\n> >>> to use MEDIA_ENT_FL_DEFAULT, you can simply pick the only video device,\n> >>> can't you?\n> >>>\n> >>>> Do you have any advice?\n> >>>>\n> >>>>>>>> +\n> >>>>>>>> +  if (!data->video_) {\n> >>>>>>>> +    LOG(STM32, Error) << \"Could not find a default video device\";\n> >>>>>>>> +    return false;\n> >>>>>>>> +  }\n> >>>>>>>> +\n> >>>>>>>> +  if (data->video_->open())\n> >>>>>>>> +    return false;\n> >>>>>>>> +\n> >>>>>>>> +  data->video_->bufferReady.connect(data.get(), &STM32CameraData::bufferReady);\n> >>>>>>>> +\n> >>>>>>>> +  /* Create and register the camera. 
*/\n> >>>>>>>> +  std::set<Stream *> streams{&data->stream_};\n> >>>>>>>> +  std::shared_ptr<Camera> camera =\n> >>>>>>>> +      Camera::create(this, media_->model(), streams);\n> >>>>>>>> +  registerCamera(std::move(camera), std::move(data));\n> >>>>>>>> +\n> >>>>>>>> +  return true;\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +void PipelineHandlerSTM32::STM32CameraData::bufferReady(Buffer *buffer) {\n> >>>>>>>> +  Request *request = queuedRequests_.front();\n> >>>>>>>> +\n> >>>>>>>> +  pipe_->completeBuffer(camera_, request, buffer);\n> >>>>>>>> +  pipe_->completeRequest(camera_, request);\n> >>>>>>>> +}\n> >>>>>>>> +\n> >>>>>>>> +REGISTER_PIPELINE_HANDLER(PipelineHandlerSTM32);\n> >>>>>>>> +\n> >>>>>>>> +} /* namespace libcamera */\n> >>>>>>>>\n> >>>>>>> Thank you for joining the libcamera project! - If there is anything more\n> >>>>>>> we can do to help you - please let us know.","headers":{"Date":"Fri, 19 Apr 2019 13:18:28 +0300","From":"Laurent Pinchart <laurent.pinchart@ideasonboard.com>","To":"Benjamin GAIGNARD 
<benjamin.gaignard@st.com>","Cc":"Hugues FRUCHET <hugues.fruchet@st.com>,\n\t\"kieran.bingham@ideasonboard.com\" <kieran.bingham@ideasonboard.com>, \n\t\"libcamera-devel@lists.libcamera.org\"\n\t<libcamera-devel@lists.libcamera.org>, \n\t\"peter.griffin@linaro.org\" <peter.griffin@linaro.org>","Message-ID":"<20190419101828.GA4788@pendragon.ideasonboard.com>","In-Reply-To":"<8a14e88e-140e-79a7-c3ad-e185556dc97f@st.com>","Subject":"Re: [libcamera-devel] [PATCH] libcamera: pipeline: stm32: add\n\tpipeline handler for stm32"}}]
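The review's main suggestion, picking the single video node from the media graph instead of relying on MEDIA_ENT_FL_DEFAULT, can be sketched in isolation. This is a minimal illustration, not the libcamera API: the `Entity` struct, the entity names, and the constant value used for `MEDIA_ENT_F_IO_V4L` are stand-ins (the real constant lives in linux/media.h, and the real graph walk uses libcamera's MediaEntity).

```cpp
#include <string>
#include <vector>

/* Hypothetical stand-in for a media-graph entity; the real libcamera
 * MediaEntity exposes a similar function/flags accessor. */
struct Entity {
	std::string name;
	unsigned int function; /* a MEDIA_ENT_F_* code from linux/media.h */
};

/* Illustrative value only; use the MEDIA_ENT_F_IO_V4L definition from
 * linux/media.h in real code. It marks a V4L2 video device node. */
constexpr unsigned int MEDIA_ENT_F_IO_V4L = 0x00010001;

/* Pick the only video node in the graph, as suggested in the review:
 * when the graph has exactly one video device, no MEDIA_ENT_FL_DEFAULT
 * flag is needed. Returns nullptr if none or more than one is found,
 * so an ambiguous graph fails loudly instead of picking arbitrarily. */
const Entity *findVideoNode(const std::vector<Entity> &entities)
{
	const Entity *video = nullptr;

	for (const Entity &entity : entities) {
		if (entity.function != MEDIA_ENT_F_IO_V4L)
			continue;
		if (video)
			return nullptr; /* more than one video node: ambiguous */
		video = &entity;
	}

	return video;
}
```

In the match() function above, the selected entity would then be handed to the V4L2Device constructor in place of the MEDIA_ENT_FL_DEFAULT loop.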