{"id":8456,"url":"https://patchwork.libcamera.org/api/1.1/patches/8456/?format=json","web_url":"https://patchwork.libcamera.org/patch/8456/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20200627140233.85781-1-chris@gregariousmammal.com>","date":"2020-06-27T14:02:33","name":"[libcamera-devel,v2] Create application developer guide","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"0db04da8586d06ca803722ca75a2923e6397dc01","submitter":{"id":55,"url":"https://patchwork.libcamera.org/api/1.1/people/55/?format=json","name":"Chris Chinchilla","email":"chris@gregariousmammal.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/8456/mbox/","series":[{"id":1047,"url":"https://patchwork.libcamera.org/api/1.1/series/1047/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=1047","date":"2020-06-27T14:02:33","name":"[libcamera-devel,v2] Create application developer guide","version":2,"mbox":"https://patchwork.libcamera.org/series/1047/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/8456/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/8456/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id BEDC3C2E66\n\tfor <parsemail@patchwork.libcamera.org>;\n\tSat, 27 Jun 2020 14:02:46 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 429F6609C5;\n\tSat, 27 Jun 2020 16:02:46 +0200 
(CEST)","from mail-wm1-x334.google.com (mail-wm1-x334.google.com\n\t[IPv6:2a00:1450:4864:20::334])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 811E0609C2\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat, 27 Jun 2020 16:02:44 +0200 (CEST)","by mail-wm1-x334.google.com with SMTP id q15so11289421wmj.2\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tSat, 27 Jun 2020 07:02:44 -0700 (PDT)","from localhost.localdomain (p54ac54f4.dip0.t-ipconnect.de.\n\t[84.172.84.244]) by smtp.gmail.com with ESMTPSA id\n\t26sm19360616wmj.25.2020.06.27.07.02.41\n\t(version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);\n\tSat, 27 Jun 2020 07:02:42 -0700 (PDT)"],"Authentication-Results":"lancelot.ideasonboard.com;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n\tunprotected) header.d=gregariousmammal.com\n\theader.i=@gregariousmammal.com header.b=\"UJap6lzB\"; \n\tdkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=gregariousmammal.com; s=google;\n\th=from:to:cc:subject:date:message-id:mime-version\n\t:content-transfer-encoding;\n\tbh=qZWbq5XwG9uhcH8p6DOs6P3cTfP69SKLkF9UM24/Zr8=;\n\tb=UJap6lzBUhJYIj6HmBnVLQovDVFoenbXkzO6urNDBRaf+szNrzNgVqRQJlTQZxyYpL\n\tnKTyYp2FfAGMcyvVGhd4hvbNUDaYwKAEr0BtIjU93eFSHxKDbrF/gMUMiN7n5onkq+os\n\tAuVVb7R4N5Mgtugtx8WqyK7NnorO3fVgj88zJcpUK1Wi3W1BGy6bdNN8wZz/qBGLDz7k\n\tutfkEkJvQGspiNmiz7fXW9mgBqLBvKmpAEvmjN/3bi1b8Tn5lbA8Ipqff/Id2cKhC3sz\n\tHgAH+6WaGewLnWgxXt16SvW0u5mii0zGzxsoEChAED6FXsgvBA+GCY/RmosnxFZ18X/J\n\tn/uw==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=1e100.net; 
s=20161025;\n\th=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version\n\t:content-transfer-encoding;\n\tbh=qZWbq5XwG9uhcH8p6DOs6P3cTfP69SKLkF9UM24/Zr8=;\n\tb=hS7PFau4J7Q/ikRVrUv1XYc0h2AIUC7bIvtZSWh/nVcS5P/Yb8PlpQgreyMZL1Z9SA\n\tDjzpVJfETUf3i6BmPV5T+Xm8e5O+l8TuaSuoibDXYbQDmtM9ao50hi56DVbuImm8tg8+\n\tx9R5WA4DhxbIGcZbO6I6HCYjTvZXT6Qwct+jnGCCJ00/Dn1er+YmGgp0lHlAh5pLGfiO\n\toc+kqOFnCO+shR1q4IsC9DSkjtokaROgtN8V1hFGgGFTC//ri+MD9u163MBSnnlywAPn\n\tDwxDVg2OyZMr6FVwy7ctXaVEHzwksxNKjBrZUidPOHHoPHn6NV2JWZ092jR+DcMvUgVb\n\tOttA==","X-Gm-Message-State":"AOAM533rw7kg0cg8hPoUgXWb6nd1ThpbVeWg/otOzeZXYS77T/jO+FRZ\n\tO+b7Di63ypEhLKbDfTlAVDDnOPMkQL6dWA==","X-Google-Smtp-Source":"ABdhPJxoABHtWqQXIj2vReWhW0c+tFtvgFcRRTYRJc7pg/HtY3lKFnhyOsDgxNrhyiYHKs5SHNkPKg==","X-Received":"by 2002:a1c:f616:: with SMTP id\n\tw22mr8679438wmc.155.1593266563162; \n\tSat, 27 Jun 2020 07:02:43 -0700 (PDT)","From":"chris@gregariousmammal.com","To":"libcamera-devel@lists.libcamera.org","Date":"Sat, 27 Jun 2020 16:02:33 +0200","Message-Id":"<20200627140233.85781-1-chris@gregariousmammal.com>","X-Mailer":"git-send-email 2.27.0","MIME-Version":"1.0","Subject":"[libcamera-devel] [PATCH v2] Create application developer guide","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Cc":"Chris Chinchilla <chris@gregariousmammal.com>","Content-Type":"text/plain; 
charset=\"utf-8\"","Content-Transfer-Encoding":"base64","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"From: Chris Chinchilla <chris@gregariousmammal.com>\n\nThis patch is a new version of the application developer guide that incorporates feedback from reviewers so far. It is still missing content on using controls, as I am not sure if it should include those details.\n\nReviewed-by: Umang Jain <email@uajain.com>\nReviewed-by: Paul Elder <paul.elder@ideasonboard.com>\nReviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>\nSigned-off-by: Chris Chinchilla <chris@gregariousmammal.com>\n---\n Documentation/application-developer.rst | 517 ++++++++++++++++++++++++\n 1 file changed, 517 insertions(+)\n create mode 100644 Documentation/application-developer.rst","diff":"diff --git a/Documentation/application-developer.rst b/Documentation/application-developer.rst\nnew file mode 100644\nindex 0000000..a26edab\n--- /dev/null\n+++ b/Documentation/application-developer.rst\n@@ -0,0 +1,517 @@\n+Using libcamera in a C++ application\n+====================================\n+\n+This tutorial shows how to create a C++ application that uses libcamera\n+to connect to a camera on a system, capture frames from it for 3\n+seconds, and write metadata about the frames to standard out.\n+\n+.. TODO: Check how much of the example code runs before camera start etc?\n+\n+Application skeleton\n+--------------------\n+\n+Most of the code in this tutorial runs in the ``int main()`` function\n+with a separate global function to handle events. The two functions need\n+to share data, which are stored in global variables for simplicity. A\n+production-ready application would organize the various objects created\n+in classes, and the event handler would be a class member function to\n+provide context data without requiring global variables.\n+\n+.. 
code:: cpp\n+\n+   // Global variables here\n+\n+   int main()\n+   {\n+       // Code to follow\n+   }\n+\n+Camera Manager\n+--------------\n+\n+Every libcamera-based application needs an instance of a\n+`CameraManager <http://libcamera.org/api-html/classlibcamera_1_1CameraManager.html>`_\n+that runs for the life of the application. When the Camera Manager\n+starts, it finds all the cameras available to the current system. Behind\n+the scenes, libcamera abstracts and manages the complex pipelines that\n+kernel drivers expose through the `Linux Media\n+Controller <https://www.kernel.org/doc/html/latest/media/uapi/mediactl/media-controller-intro.html>`_\n+and `Video for Linux (V4L2) <https://www.linuxtv.org/docs.php>`_ APIs,\n+meaning that an application doesn’t need to handle device or driver\n+specifics.\n+\n+Create a Camera Manager instance, and then start it. An application\n+should only create one Camera Manager instance.\n+\n+.. code:: cpp\n+\n+   CameraManager *cm = new CameraManager();\n+   cm->start();\n+\n+When the application runs, it starts the Camera Manager, which\n+identifies all supported devices and creates cameras the application can\n+interact with.\n+\n+Before the ``int main()`` function, create a global shared pointer\n+variable for the camera:\n+\n+.. code:: cpp\n+\n+   std::shared_ptr<Camera> camera;\n+\n+   int main()\n+   {\n+       // Code to follow\n+   }\n+\n+Add the code below that lists all available cameras, and for this\n+example, writes them to standard output:\n+\n+.. code:: cpp\n+\n+   for (auto const &camera : cm->cameras())\n+       std::cout << camera->name() << std::endl;\n+\n+For example, the output on a Linux machine with a connected USB webcam\n+is ``UVC Camera (046d:080a)``. 
When running Ubuntu in a VM on macOS, the\n+output is ``FaceTime HD Camera (Built-in):``.\n+\n+Create and acquire a camera\n+---------------------------\n+\n+What libcamera considers a camera\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+The libcamera library supports fixed and hot-pluggable cameras,\n+including cameras plugged and unplugged after the application starts.\n+It supports point-and-shoot still image and video capture, either\n+controlled directly by the CPU or exposed through an internal USB bus as\n+a UVC device designed for video conferencing usage. The libcamera\n+library considers any device that includes independent camera sensors,\n+such as front and back sensors, as multiple camera devices.\n+\n+This example application uses a single camera (the first camera) that\n+the Camera Manager reports as available to applications.\n+\n+Application code can access cameras by index or by name. The code below\n+retrieves the name of the first available camera and gets the camera by\n+name from the Camera Manager.\n+\n+.. code:: cpp\n+\n+   std::string cameraName = cm->cameras()[0]->name();\n+   camera = cm->get(cameraName);\n+\n+Once you know what camera you want to use, an application needs to\n+acquire an exclusive lock on it so no other application can use it.\n+\n+.. code:: cpp\n+\n+   camera->acquire();\n+\n+Configure the camera\n+--------------------\n+\n+Before the application can do anything with the camera, it needs to know\n+what its capabilities are. These capabilities include resolutions,\n+supported pixel formats, and more. The libcamera library uses\n+a ``StreamRole`` to define the predefined ways an application intends\n+to use a camera.\n+\n+To find out whether the way an application wants to use the camera is\n+possible, generate a new configuration using a vector of\n+``StreamRole``\\s, and send that vector to the camera. 
To do this, create a new configuration\n+variable and use the ``generateConfiguration`` function to produce a\n+``CameraConfiguration``. If the camera can handle the configuration, it\n+returns a full ``CameraConfiguration``, and if it can't, a null pointer.\n+\n+.. code:: cpp\n+\n+   std::unique_ptr<CameraConfiguration> config = camera->generateConfiguration( { StreamRole::Viewfinder } );\n+\n+A ``CameraConfiguration`` has a ``StreamConfiguration`` instance for\n+each ``StreamRole`` the application requested, and that the camera can\n+support. Each of these has a default size and format that the camera\n+assigned, depending on the ``StreamRole`` requested.\n+\n+The code below takes a reference to the first (and only)\n+``StreamConfiguration`` in the camera configuration, and then outputs\n+its value to standard out.\n+\n+.. code:: cpp\n+\n+   StreamConfiguration &streamConfig = config->at(0);\n+   std::cout << \"Default viewfinder configuration is: \" << streamConfig.toString() << std::endl;\n+\n+This outputs something like\n+``Default viewfinder configuration is: 1280x720-0x56595559``.\n+\n+Change and validate the configuration\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+With a ``StreamConfiguration`` defined, an application can make changes\n+to the parameters it contains. For example, to change the width and\n+height, use the following code:\n+\n+.. code:: cpp\n+\n+   streamConfig.size.width = 640;\n+   streamConfig.size.height = 480;\n+\n+If an application changes any parameters, validate them before applying\n+them to the camera using the ``validate`` function. If the new values\n+are invalid, the validation process adjusts the parameters to what it\n+considers valid values. An application should check that the adjusted\n+configuration is something expected. 
The ``validate`` method returns a\n+`Status <http://libcamera.org/api-html/classlibcamera_1_1CameraConfiguration.html#a64163f21db2fe1ce0a6af5a6f6847744>`_\n+enum an application can check to see if the Pipeline Handler adjusted\n+the configuration.\n+\n+For example, the code above set the width and height to 640x480, but if\n+the camera cannot produce an image that large, it might return the\n+configuration with a new size of 320x240 and a status of ``Adjusted``.\n+\n+For this example application, the code below prints the adjusted values\n+to standard out.\n+\n+.. code:: cpp\n+\n+   config->validate();\n+   std::cout << \"Validated viewfinder configuration is: \" << streamConfig.toString() << std::endl;\n+\n+For example, the output might be something like\n+``Validated viewfinder configuration is: 1280x720-0x56595559``.\n+\n+With a validated ``CameraConfiguration``, send it to the camera to\n+confirm the new configuration:\n+\n+.. code:: cpp\n+\n+   camera->configure(config.get());\n+\n+If an application doesn’t first validate the configuration before\n+calling ``configure``, there’s a chance that calling the function fails.\n+\n+Allocate FrameBuffers\n+---------------------\n+\n+An application needs to reserve the memory that libcamera can write\n+incoming camera data to, and that the application can then read data for\n+each frame from. The libcamera library uses ``FrameBuffer`` instances to\n+buffer frames of data from memory. 
An application should reserve enough\n+memory for the ``FrameBuffer``\\s that streams need based on the\n+configured sizes and formats.\n+\n+The libcamera library consumes buffers provided by applications as\n+``FrameBuffer`` instances, which makes libcamera a consumer of buffers\n+exported by other devices (such as displays or video encoders), or\n+allocated from an external allocator (such as ION on Android).\n+\n+In some situations, applications do not have any means to allocate or\n+get hold of suitable buffers, for instance, when no other device is\n+involved, or on Linux platforms that lack a centralized allocator. The\n+``FrameBufferAllocator`` class provides a buffer allocator an\n+application can use in these situations.\n+\n+An application doesn’t have to use the default ``FrameBufferAllocator``\n+that libcamera provides. It can instead allocate memory manually and\n+pass the buffers in ``Request``\\s (read more about ``Request`` in\n+`the frame capture section <#frame-capture>`_ of this guide). The\n+example in this guide covers using the ``FrameBufferAllocator`` that\n+libcamera provides.\n+\n+Using the libcamera ``FrameBufferAllocator``\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+As the camera manager knows what configuration is available, it can\n+allocate all the resources with a single method, and de-allocate with\n+another.\n+\n+Applications create a ``FrameBufferAllocator`` for a Camera and use it\n+to allocate buffers for streams of a ``CameraConfiguration`` with the\n+``allocate()`` function.\n+\n+.. 
code:: cpp\n+\n+   FrameBufferAllocator *allocator = new FrameBufferAllocator(camera);\n+\n+   for (StreamConfiguration &cfg : *config) {\n+       int ret = allocator->allocate(cfg.stream());\n+       if (ret < 0) {\n+           std::cerr << \"Can't allocate buffers\" << std::endl;\n+           return -ENOMEM;\n+       }\n+\n+       unsigned int allocated = allocator->buffers(cfg.stream()).size();\n+       std::cout << \"Allocated \" << allocated << \" buffers for stream\" << std::endl;\n+   }\n+\n+For the example camera above with ``1280x720-0x56595559`` configuration,\n+libcamera reserves **4** buffers for the stream.\n+\n+Frame Capture\n+~~~~~~~~~~~~~\n+\n+The libcamera library follows a familiar streaming request model for\n+data (frames of camera data). For each frame a camera captures, an\n+application must queue a request for it to the camera. With libcamera, a\n+``Request`` is at least one ``Stream`` (one source from a ``Camera``) that\n+has one ``FrameBuffer`` full of image data.\n+\n+First, retrieve the ``Stream`` from the ``StreamConfiguration``,\n+fetch the vector of ``FrameBuffer``\\s allocated for it above,\n+and create a vector of the requests the application will make.\n+\n+.. code:: cpp\n+\n+   Stream *stream = streamConfig.stream();\n+   const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator->buffers(stream);\n+   std::vector<Request *> requests;\n+\n+Create a ``Request`` for each ``FrameBuffer`` by using the\n+``createRequest`` function and adding all the requests created to a\n+vector. For each request add a buffer to it with the ``addBuffer``\n+function, passing the stream the buffer belongs to, and the ``FrameBuffer``.\n+\n+.. 
code:: cpp\n+\n+       for (unsigned int i = 0; i < buffers.size(); ++i) {\n+           Request *request = camera->createRequest();\n+           if (!request)\n+           {\n+               std::cerr << \"Can't create request\" << std::endl;\n+               return -ENOMEM;\n+           }\n+\n+           const std::unique_ptr<FrameBuffer> &buffer = buffers[i];\n+           int ret = request->addBuffer(stream, buffer.get());\n+           if (ret < 0)\n+           {\n+               std::cerr << \"Can't set buffer for request\"\n+                     << std::endl;\n+               return ret;\n+           }\n+\n+           requests.push_back(request);\n+       }\n+\n+.. TODO: Controls\n+\n+.. TODO: A request can also have controls or parameters that you can apply to the image.\n+\n+Start the camera\n+----------------\n+\n+With the code to handle processing camera data in place, start the\n+camera to begin capturing frames and queuing requests to it.\n+\n+.. code:: cpp\n+\n+   camera->start();\n+   for (Request *request : requests)\n+       camera->queueRequest(request);\n+\n+Event handling and callbacks\n+----------------------------\n+\n+The libcamera library uses the concept of signals and slots (`similar to Qt <https://doc.qt.io/qt-5/signalsandslots.html>`_) to connect events\n+with callbacks to handle those events.\n+\n+Signals\n+~~~~~~~\n+\n+A camera class instance emits a signal when the buffer has been\n+completed (image data written into). Because a Request can contain\n+multiple streams, libcamera emits the Request completed signal when all\n+streams within the request complete.\n+\n+To receive these signals, connect a slot function to the signal an\n+application should act on.\n+\n+.. 
code:: cpp\n+\n+   camera->requestCompleted.connect(requestComplete);\n+\n+Slots\n+~~~~~\n+\n+Every time a camera request completes, the camera emits a signal, and\n+the connected slot is invoked, passing the ``Request`` as a parameter.\n+\n+For this example application, the matching ``requestComplete`` slot\n+method outputs information about the FrameBuffer to standard out, but\n+the callback is typically where an application accesses the image data\n+from the camera and does something with it.\n+\n+Signals operate in the libcamera ``CameraManager`` thread context, so it\n+is important not to block the thread for a long time, as this blocks\n+internal processing of the camera pipelines, and can affect realtime\n+performance, with skipped frames etc.\n+\n+Start an event loop\n+~~~~~~~~~~~~~~~~~~~\n+\n+To emit signals that slots can respond to, an application needs an event\n+loop. An application can use the ``EventDispatcher`` class as an event\n+loop (inspired by `the Qt event system <https://doc.qt.io/qt-5/eventsandfilters.html>`_) for an\n+application to listen to signals from resources libcamera handles.\n+\n+The libcamera library does this by creating instances of the\n+``EventNotifier`` class, each modeling a file descriptor event source,\n+and registering them with the ``EventDispatcher``.\n+Whenever the ``EventDispatcher`` detects an event it is monitoring, it\n+emits an ``EventNotifier::activated`` signal. The ``Timer`` class\n+controls how long an event loop runs; an application can register a\n+timer with a dispatcher using the ``registerTimer`` function.\n+\n+The code below retrieves the ``EventDispatcher`` instance from the\n+camera manager, creates a timer to run for 3 seconds, and\n+during the length of that timer, the ``EventDispatcher`` processes\n+events that occur, and emits the relevant signals.\n+\n+.. 
code:: cpp\n+\n+   EventDispatcher *dispatcher = cm->eventDispatcher();\n+   Timer timer;\n+   timer.start(3000);\n+   while (timer.isRunning())\n+       dispatcher->processEvents();\n+\n+Create a matching slot method\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+Create the ``requestComplete`` function that matches the slot:\n+\n+.. code:: cpp\n+\n+   static void requestComplete(Request *request)\n+   {\n+       // Code to follow\n+   }\n+\n+The signal/slot flow is the only way to pass requests and buffers from\n+libcamera back to the application. There are times when a request can\n+emit a ``requestComplete`` signal, but this request is actually\n+canceled, for example, by application shutdown. To avoid an application\n+processing image data that doesn’t exist, it’s worth checking that the\n+request is still in an expected state (You can find `a full list of the completion statuses in the documentation <https://www.libcamera.org/api-html/classlibcamera_1_1Request.html#a2209ba8d51af8167b25f6e3e94d5c45b>`_).\n+\n+.. code:: cpp\n+\n+   if (request->status() == Request::RequestCancelled) return;\n+\n+When the request completes, an application can access the buffers from\n+the request using the ``buffers()`` function, which returns a map of\n+each buffer, and the stream it is associated with.\n+\n+.. code:: cpp\n+\n+   const std::map<Stream *, FrameBuffer *> &buffers = request->buffers();\n+\n+Iterating through the map allows you to inspect each buffer from each\n+stream completed in this request, and access the metadata for each frame\n+the camera captured. The buffer metadata contains information such as\n+capture status, a timestamp, and the bytes used.\n+\n+.. 
code:: cpp\n+\n+   for (auto bufferPair : buffers) {\n+       FrameBuffer *buffer = bufferPair.second;\n+       const FrameMetadata &metadata = buffer->metadata();\n+   }\n+\n+The buffer describes the image data, but a buffer can consist of more\n+than one plane, each a region of memory holding part of the frame's\n+image data. For example, the Y, U, and V components of a planar\n+YUV-encoded image are each described by a separate plane.\n+\n+For this example application, inside the ``for`` loop from above, print\n+the frame sequence number and details of the planes.\n+\n+.. code:: cpp\n+\n+   std::cout << \" seq: \" << std::setw(6) << std::setfill('0') << metadata.sequence << \" bytesused: \";\n+\n+   unsigned int nplane = 0;\n+   for (const FrameMetadata::Plane &plane : metadata.planes)\n+   {\n+       std::cout << plane.bytesused;\n+       if (++nplane < metadata.planes.size()) std::cout << \"/\";\n+   }\n+\n+   std::cout << std::endl;\n+\n+The expected output shows each monotonically increasing frame sequence\n+number and the bytes used by planes.\n+\n+.. 
code:: text\n+\n+   seq: 000000 bytesused: 1843200\n+   seq: 000002 bytesused: 1843200\n+   seq: 000004 bytesused: 1843200\n+   seq: 000006 bytesused: 1843200\n+   seq: 000008 bytesused: 1843200\n+   seq: 000010 bytesused: 1843200\n+   seq: 000012 bytesused: 1843200\n+   seq: 000014 bytesused: 1843200\n+   seq: 000016 bytesused: 1843200\n+   seq: 000018 bytesused: 1843200\n+   seq: 000020 bytesused: 1843200\n+   seq: 000022 bytesused: 1843200\n+   seq: 000024 bytesused: 1843200\n+   seq: 000026 bytesused: 1843200\n+   seq: 000028 bytesused: 1843200\n+   seq: 000030 bytesused: 1843200\n+   seq: 000032 bytesused: 1843200\n+   seq: 000034 bytesused: 1843200\n+   seq: 000036 bytesused: 1843200\n+   seq: 000038 bytesused: 1843200\n+   seq: 000040 bytesused: 1843200\n+   seq: 000042 bytesused: 1843200\n+\n+With the handling of this request complete, reuse the buffer by adding\n+it back to the request with its matching stream, and create a new\n+request using the ``createRequest`` function.\n+\n+.. code:: cpp\n+\n+   request = camera->createRequest();\n+   if (!request)\n+   {\n+       std::cerr << \"Can't create request\" << std::endl;\n+       return;\n+   }\n+\n+   for (auto it = buffers.begin(); it != buffers.end(); ++it)\n+   {\n+       Stream *stream = it->first;\n+       FrameBuffer *buffer = it->second;\n+\n+       request->addBuffer(stream, buffer);\n+   }\n+\n+   camera->queueRequest(request);\n+\n+Clean up and stop the application\n+---------------------------------\n+\n+The application is now finished with the camera and the resources the\n+camera uses, so needs to do the following:\n+\n+-  stop the camera\n+-  free the stream from the FrameBufferAllocator\n+-  delete the FrameBufferAllocator\n+-  release the lock on the camera and reset the pointer to it\n+-  stop the camera manager\n+-  exit the application\n+\n+.. 
code:: cpp\n+\n+   camera->stop();\n+   allocator->free(stream);\n+   delete allocator;\n+   camera->release();\n+   camera.reset();\n+   cm->stop();\n+\n+   return 0;\n\\ No newline at end of file\n","prefixes":["libcamera-devel","v2"]}