Patch Detail
Show a patch.
GET /api/patches/4037/?format=api
{ "id": 4037, "url": "https://patchwork.libcamera.org/api/patches/4037/?format=api", "web_url": "https://patchwork.libcamera.org/patch/4037/", "project": { "id": 1, "url": "https://patchwork.libcamera.org/api/projects/1/?format=api", "name": "libcamera", "link_name": "libcamera", "list_id": "libcamera_core", "list_email": "libcamera-devel@lists.libcamera.org", "web_url": "", "scm_url": "", "webscm_url": "" }, "msgid": "<20200615105002.555588-1-kieran.bingham@ideasonboard.com>", "date": "2020-06-15T10:50:02", "name": "[libcamera-devel,PATCH-Resend] Add getting started guide for application developers", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": false, "hash": "09a5e0b0a3950a5956496fd3c90213fe3e52e2d1", "submitter": { "id": 4, "url": "https://patchwork.libcamera.org/api/people/4/?format=api", "name": "Kieran Bingham", "email": "kieran.bingham@ideasonboard.com" }, "delegate": null, "mbox": "https://patchwork.libcamera.org/patch/4037/mbox/", "series": [ { "id": 1000, "url": "https://patchwork.libcamera.org/api/series/1000/?format=api", "web_url": "https://patchwork.libcamera.org/project/libcamera/list/?series=1000", "date": "2020-06-15T10:50:02", "name": "[libcamera-devel,PATCH-Resend] Add getting started guide for application developers", "version": 1, "mbox": "https://patchwork.libcamera.org/series/1000/mbox/" } ], "comments": "https://patchwork.libcamera.org/api/patches/4037/comments/", "check": "pending", "checks": "https://patchwork.libcamera.org/api/patches/4037/checks/", "tags": {}, "headers": { "Return-Path": "<kieran.bingham@ideasonboard.com>", "Received": [ "from perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id A3BC5603D8\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tMon, 15 Jun 2020 12:50:06 +0200 (CEST)", "from Q.local (cpc89242-aztw30-2-0-cust488.18-1.cable.virginm.net\n\t[86.31.129.233])\n\tby 
perceval.ideasonboard.com (Postfix) with ESMTPSA id 039AFF9;\n\tMon, 15 Jun 2020 12:50:05 +0200 (CEST)" ], "Authentication-Results": "lancelot.ideasonboard.com; dkim=pass (1024-bit key; \n\tunprotected) header.d=ideasonboard.com\n\theader.i=@ideasonboard.com\n\theader.b=\"tHJmgKI6\"; dkim-atps=neutral", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1592218206;\n\tbh=eR1+8NWUHN+ZjLcstwrm8TSRB/2tx1K9Dla9V948WEQ=;\n\th=From:To:Cc:Subject:Date:From;\n\tb=tHJmgKI6cf9TjEZb/eW9zXAZaU6q24UM0fYJMOg0Hw929hPD4nORKoxqxlRkmMVyx\n\tTr5Ec+5E8LcnyiPpyOVcD3w1gwC51YOy3Gt/bu1wwVITCFBL/Br2MffYfUCjootLrA\n\trC2pfhIc6VlNGwH+9ULjnKnahyhquAHJj2rZ3kL8=", "From": "Kieran Bingham <kieran.bingham@ideasonboard.com>", "To": "libcamera devel <libcamera-devel@lists.libcamera.org>", "Cc": "Chris Ward <chris@gregariousmammal.com>", "Date": "Mon, 15 Jun 2020 11:50:02 +0100", "Message-Id": "<20200615105002.555588-1-kieran.bingham@ideasonboard.com>", "X-Mailer": "git-send-email 2.25.1", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=UTF-8", "Content-Transfer-Encoding": "8bit", "Subject": "[libcamera-devel] [PATCH-Resend] Add getting started guide for\n\tapplication developers", "X-BeenThere": "libcamera-devel@lists.libcamera.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "<libcamera-devel.lists.libcamera.org>", "List-Unsubscribe": "<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>", "List-Archive": "<https://lists.libcamera.org/pipermail/libcamera-devel/>", "List-Post": "<mailto:libcamera-devel@lists.libcamera.org>", "List-Help": "<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>", "List-Subscribe": "<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>", "X-List-Received-Date": "Mon, 15 Jun 2020 10:50:07 -0000" }, "content": "From: Chris Chinchilla 
<chris@gregariousmammal.com>\n\n---\n\n[Kieran:]\nResending this inline to ease review on the libcamera mailing-list.\n\n\n .../guides/libcamera-application-author.rst | 472 ++++++++++++++++++\n 1 file changed, 472 insertions(+)\n create mode 100644 Documentation/guides/libcamera-application-author.rst", "diff": "diff --git a/Documentation/guides/libcamera-application-author.rst b/Documentation/guides/libcamera-application-author.rst\nnew file mode 100644\nindex 000000000000..c5f723820004\n--- /dev/null\n+++ b/Documentation/guides/libcamera-application-author.rst\n@@ -0,0 +1,472 @@\n+Supporting libcamera in your application\n+========================================\n+\n+This tutorial shows you how to create an application that uses libcamera\n+to connect to a camera on a system, capture frames from it for 3\n+seconds, and write metadata about the frames to standard out.\n+\n+.. TODO: How much of the example code runs before camera start etc?\n+\n+Create a pointer to the camera\n+------------------------------\n+\n+Before the ``int main()`` function, create a global shared pointer\n+variable for the camera:\n+\n+.. code:: cpp\n+\n+ std::shared_ptr<Camera> camera;\n+\n+ int main()\n+ {\n+ // Code to follow\n+ }\n+\n+Camera Manager\n+--------------\n+\n+Every libcamera-based application needs an instance of a\n+`CameraManager <http://libcamera.org/api-html/classlibcamera_1_1CameraManager.html>`_\n+that runs for the life of the application. 
When you start the Camera\n+Manager, it finds all the cameras available to the current system.\n+Behind the scenes, the libcamera Pipeline Handler abstracts and manages\n+the complex pipelines that kernel drivers expose through the `Linux\n+Media\n+Controller <https://www.kernel.org/doc/html/latest/media/uapi/mediactl/media-controller-intro.html>`__\n+and `V4L2 <https://www.linuxtv.org/docs.php>`__ APIs, meaning that an\n+application doesn’t need to handle device or driver specifics.\n+\n+To create and start a new Camera Manager, create a new pointer variable\n+to the instance, and then start it:\n+\n+.. code:: cpp\n+\n+ CameraManager *cm = new CameraManager();\n+ cm->start();\n+\n+When the application runs, the Camera Manager identifies all\n+supported devices and creates cameras the application can interact with.\n+\n+The code below identifies all available cameras, and for this example,\n+writes them to standard output:\n+\n+.. code:: cpp\n+\n+ for (auto const &camera : cm->cameras())\n+ std::cout << camera->name() << std::endl;\n+\n+For example, the output on Ubuntu running in a VM on macOS is\n+``FaceTime HD Camera (Built-in):``, and for x y, etc.\n+\n+.. 
TODO: Better examples\n+\n+Create and acquire a camera\n+---------------------------\n+\n+What libcamera considers a camera\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+The libcamera library supports fixed and hot-pluggable cameras,\n+including cameras plugged and unplugged after the library has been\n+initialized. The libcamera library supports point-and-shoot still image\n+and video capture, either controlled directly by the CPU or exposed through an\n+internal USB bus as a UVC device designed for video conferencing usage.\n+The libcamera library considers any device that includes independent\n+camera sensors, such as front and back sensors, as multiple different\n+camera devices.\n+\n+Once you know what camera you want to use, your application needs to\n+acquire a lock to it so no other application can use it.\n+\n+This example application uses a single camera that the Camera Manager\n+reports as available to applications.\n+\n+The code below creates the name of the first available camera as a\n+convenience variable, fetches that camera, and acquires the device for\n+exclusive access:\n+\n+.. code:: cpp\n+\n+ std::string cameraName = cm->cameras()[0]->name();\n+ camera = cm->get(cameraName);\n+ camera->acquire();\n+\n+Configure the camera\n+--------------------\n+\n+Before the application can do anything with the camera, you need to know\n+what its capabilities are. These capabilities include scalers,\n+resolutions, supported formats and converters. The libcamera library\n+uses ``StreamRole``\\ s to define four predefined ways an application\n+intends to use a camera (`You can read the full list in the API\n+documentation <http://libcamera.org/api-html/stream_8h.html#a295d1f5e7828d95c0b0aabc0a8baac03>`__).\n+\n+To find out whether the way your application wants to use the camera is\n+possible, generate a new configuration using a vector of ``StreamRole``\\ s, and\n+send that vector to the camera. 
To do this, create a new configuration\n+variable and use the ``generateConfiguration`` function to produce a\n+``CameraConfiguration`` for it. If the camera can handle the\n+configuration, it returns a full ``CameraConfiguration``, and if it\n+can’t, a null pointer.\n+\n+.. code:: cpp\n+\n+ std::unique_ptr<CameraConfiguration> config = camera->generateConfiguration( { StreamRole::Viewfinder } );\n+\n+A ``CameraConfiguration`` has a ``StreamConfiguration`` instance for\n+each ``StreamRole`` the application requested, and that the camera can\n+support. Each of these has a default size and format that the camera\n+assigned, depending on the ``StreamRole`` requested.\n+\n+The code below creates a ``StreamConfiguration`` reference to the first\n+(and only) stream configuration in the camera configuration. It then\n+outputs the value to standard out.\n+\n+.. code:: cpp\n+\n+ StreamConfiguration &streamConfig = config->at(0);\n+ std::cout << \"Default viewfinder configuration is: \" << streamConfig.toString() << std::endl;\n+\n+Change and validate the configuration\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+Once you have a ``StreamConfiguration``, your application can make\n+changes to the parameters it contains. For example, to change the width\n+and height, you could use the following code:\n+\n+.. code:: cpp\n+\n+ streamConfig.size.width = 640;\n+ streamConfig.size.height = 480;\n+\n+If your application makes changes to any parameters, validate them\n+before applying them to the camera by using the ``validate`` function.\n+If the new value is invalid, the validation process adjusts the\n+parameter to what it considers a valid value. 
Your application should\n+check that the adjusted configuration is something it expects (you can\n+use the\n+`Status <http://libcamera.org/api-html/classlibcamera_1_1CameraConfiguration.html#a64163f21db2fe1ce0a6af5a6f6847744>`_\n+value returned by ``validate`` to check if the Pipeline Handler adjusted\n+the configuration).\n+\n+For example, above you set the width and height to 640x480, but if the\n+camera cannot produce an image that large, it might return the\n+configuration with a new size of 320x240 and a status of ``Adjusted``.\n+\n+For this example application, the code below prints the adjusted values\n+to standard out.\n+\n+.. code:: cpp\n+\n+ config->validate();\n+ std::cout << \"Validated viewfinder configuration is: \" << streamConfig.toString() << std::endl;\n+\n+With a validated ``CameraConfiguration``, send it to the camera to\n+confirm the new configuration:\n+\n+.. code:: cpp\n+\n+ camera->configure(config.get());\n+\n+If you don’t validate the configuration before calling\n+``configure``, there’s a chance that the call fails.\n+\n+Allocate FrameBuffers\n+---------------------\n+\n+The libcamera library consumes buffers provided by applications as\n+``FrameBuffer`` instances, which makes libcamera a consumer of buffers\n+exported by other devices (such as displays or video encoders), or\n+allocated from an external allocator (such as ION on Android).\n+\n+The libcamera library uses ``FrameBuffer`` instances to buffer frames of\n+data from memory, but first, your application should reserve enough\n+memory for the ``FrameBuffer``\\ s your streams need based on the sizes\n+and formats you configured.\n+\n+In some situations, applications do not have any means to allocate or\n+get hold of suitable buffers, for instance, when no other device is\n+involved, or on Linux platforms that lack a centralized allocator. 
The\n+``FrameBufferAllocator`` class provides a buffer allocator that you can\n+use in these situations.\n+\n+An application doesn’t have to use the default ``FrameBufferAllocator``\n+that libcamera provides, and can instead allocate memory manually, and\n+pass the buffers in ``Request``\\ s (read more about ``Request``\\ s in\n+`the frame capture section <#frame-capture>`__ of this guide). The\n+example in this guide covers using the ``FrameBufferAllocator`` that\n+libcamera provides.\n+\n+Using the libcamera ``FrameBufferAllocator``\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+As the camera manager knows what configuration is available, it can\n+allocate all the resources for you with a single method, and de-allocate\n+with another.\n+\n+Applications create a ``FrameBufferAllocator`` for a Camera, and use it\n+to allocate buffers for streams of a ``CameraConfiguration`` with the\n+``allocate()`` function.\n+\n+.. code:: cpp\n+\n+ FrameBufferAllocator *allocator = new FrameBufferAllocator(camera);\n+\n+ for (StreamConfiguration &cfg : *config) {\n+ int ret = allocator->allocate(cfg.stream());\n+ if (ret < 0) {\n+ std::cerr << \"Can't allocate buffers\" << std::endl;\n+ return -ENOMEM;\n+ }\n+\n+ unsigned int allocated = allocator->buffers(cfg.stream()).size();\n+ std::cout << \"Allocated \" << allocated << \" buffers for stream\" << std::endl;\n+ }\n+\n+Frame Capture\n+~~~~~~~~~~~~~\n+\n+The libcamera library follows a familiar streaming request model for\n+data (frames in this case). For each frame a camera captures, your\n+application must queue a request for it to the camera.\n+\n+In the case of libcamera, a ‘Request’ is at least one Stream (one source\n+from a Camera), with a FrameBuffer full of image data.\n+\n+First, create an instance of a ``Stream`` from the ``StreamConfig``,\n+assign a vector of ``FrameBuffer``\\ s to the allocation created above,\n+and create a vector of the requests the application will make.\n+\n+.. 
code:: cpp\n+\n+ Stream *stream = streamConfig.stream();\n+ const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator->buffers(stream);\n+ std::vector<Request *> requests;\n+\n+Create one ``Request`` per ``FrameBuffer`` using the\n+``createRequest`` function, and add each request created to a\n+vector. For each request, add a buffer to it with the ``addBuffer``\n+function, passing the stream the buffer belongs to, and the ``FrameBuffer``.\n+\n+.. code:: cpp\n+\n+ for (unsigned int i = 0; i < buffers.size(); ++i) {\n+ Request *request = camera->createRequest();\n+ if (!request)\n+ {\n+ std::cerr << \"Can't create request\" << std::endl;\n+ return -ENOMEM;\n+ }\n+\n+ const std::unique_ptr<FrameBuffer> &buffer = buffers[i];\n+ int ret = request->addBuffer(stream, buffer.get());\n+ if (ret < 0)\n+ {\n+ std::cerr << \"Can't set buffer for request\"\n+ << std::endl;\n+ return ret;\n+ }\n+\n+ requests.push_back(request);\n+\n+ /*\n+ * todo: Set controls\n+ *\n+ * ControlList &Request::controls();\n+ * controls.set(controls::Brightness, 255);\n+ */\n+ }\n+\n+.. TODO: Controls\n+.. TODO: A request can also have controls or parameters that you can apply to the image.\n+\n+Event handling and callbacks\n+----------------------------\n+\n+The libcamera library uses the concept of signals and slots (`similar to\n+Qt <https://doc.qt.io/qt-5/signalsandslots.html>`__) to connect events\n+with callbacks to handle those events.\n+\n+Signals\n+~~~~~~~\n+\n+Signals are emitted when a buffer has been completed (image data\n+written into it). Because a Request can contain multiple buffers, the\n+Request completed signal is emitted when all buffers within the request\n+are completed.\n+\n+A camera class instance emits a completed request signal to report when\n+all the buffers in a request are complete with image data written to\n+them. To receive these signals, connect a slot function to the signal\n+you are interested in. 
For this example application, that’s when the\n+camera completes a request.\n+\n+.. code:: cpp\n+\n+ camera->requestCompleted.connect(requestComplete);\n+\n+Slots\n+~~~~~\n+\n+Every time the camera request completes, it emits a signal, and the\n+connected slot is invoked, passing the Request as a parameter.\n+\n+For this example application, the ``requestComplete`` slot outputs\n+information about the ``FrameBuffer`` to standard out, but the callback is\n+typically where your application accesses the image data from the camera\n+and does something with it.\n+\n+Signals operate in the libcamera ``CameraManager`` thread context, so it\n+is important not to block the thread for a long time, as this blocks\n+internal processing of the camera pipelines, and can affect realtime\n+performance, leading to skipped frames etc.\n+\n+First, create the function that matches the slot:\n+\n+.. code:: cpp\n+\n+ static void requestComplete(Request *request)\n+ {\n+ // Code to follow\n+ }\n+\n+The signal/slot flow is the only way to pass requests and buffers from\n+libcamera back to the application. There are times when a request can\n+emit a ``requestComplete`` signal, but the request was actually\n+cancelled, for example by application shutdown. To avoid an application\n+processing image data that doesn’t exist, it’s worth checking that the\n+request is still in the state you expect (you can find `a full list of\n+the completion statuses in the\n+documentation <https://www.libcamera.org/api-html/classlibcamera_1_1Request.html#a2209ba8d51af8167b25f6e3e94d5c45b>`__).\n+\n+.. code:: cpp\n+\n+ if (request->status() == Request::RequestCancelled) return;\n+\n+When the request completes, you can access the buffers from the request\n+using the ``buffers()`` function, which returns a map of each buffer and\n+the stream it is associated with.\n+\n+.. 
code:: cpp\n+\n+ const std::map<Stream *, FrameBuffer *> &buffers = request->buffers();\n+\n+Iterating through the map allows you to inspect each buffer from each\n+stream completed in this request, and access the metadata for each frame\n+the camera captured. The buffer metadata contains information such as\n+capture status, a timestamp and the bytes used.\n+\n+.. code:: cpp\n+\n+ for (auto bufferPair : buffers) {\n+ FrameBuffer *buffer = bufferPair.second;\n+ const FrameMetadata &metadata = buffer->metadata();\n+ }\n+\n+The buffer describes the image data, but a buffer can consist of more\n+than one plane, each holding part of the frame’s image data in memory.\n+For example, the Y, U, and V components of a YUV-encoded image are each\n+described by a separate plane.\n+\n+For this example application, still inside the ``for`` loop from above,\n+print the frame sequence number and details of the planes.\n+\n+.. code:: cpp\n+\n+ std::cout << \" seq: \" << std::setw(6) << std::setfill('0') << metadata.sequence << \" bytesused: \";\n+\n+ unsigned int nplane = 0;\n+ for (const FrameMetadata::Plane &plane : metadata.planes)\n+ {\n+ std::cout << plane.bytesused;\n+ if (++nplane < metadata.planes.size()) std::cout << \"/\";\n+ }\n+\n+ std::cout << std::endl;\n+\n+With the handling of this request complete, create a new request using\n+the ``createRequest`` function, and reuse the buffers by adding them\n+back to the new request with their matching streams.\n+\n+.. 
code:: cpp\n+\n+ request = camera->createRequest();\n+ if (!request)\n+ {\n+ std::cerr << \"Can't create request\" << std::endl;\n+ return;\n+ }\n+\n+ for (auto it = buffers.begin(); it != buffers.end(); ++it)\n+ {\n+ Stream *stream = it->first;\n+ FrameBuffer *buffer = it->second;\n+\n+ request->addBuffer(stream, buffer);\n+ }\n+\n+ camera->queueRequest(request);\n+\n+Start the camera and event loop\n+-------------------------------\n+\n+If you build and run the application at this point, none of the code in\n+the slot method above runs. While most of the code to handle processing\n+camera data is in place, you first need to start the camera to begin\n+capturing frames and queuing requests to it.\n+\n+.. code:: cpp\n+\n+ camera->start();\n+ for (Request *request : requests)\n+ camera->queueRequest(request);\n+\n+To emit signals that slots can respond to, your application needs an\n+event loop. You can use the ``EventDispatcher`` class as an event loop\n+for your application to listen to signals from resources libcamera\n+handles.\n+\n+The libcamera library does this by creating instances of the\n+``EventNotifier`` class, which models a file descriptor event source an\n+application can monitor, and registering them with the ``EventDispatcher``.\n+Whenever the ``EventDispatcher`` detects an event it is monitoring, it\n+emits an ``EventNotifier::activated`` signal. The ``Timer`` class\n+controls how long event loops run for; you can register a timer with a\n+dispatcher using the ``registerTimer`` function.\n+\n+The code below retrieves the ``EventDispatcher`` instance from the\n+camera manager, creates a timer to run for 3 seconds, and\n+during the length of that timer, the ``EventDispatcher`` processes\n+events that occur, and emits the relevant signals.\n+\n+.. 
code:: cpp\n+\n+ EventDispatcher *dispatcher = cm->eventDispatcher();\n+ Timer timer;\n+ timer.start(3000);\n+ while (timer.isRunning())\n+ dispatcher->processEvents();\n+\n+Clean up and stop application\n+-----------------------------\n+\n+The application is now finished with the camera and the resources the\n+camera uses, so you need to do the following:\n+\n+- stop the camera\n+- free the stream from the ``FrameBufferAllocator``\n+- delete the ``FrameBufferAllocator``\n+- release the lock on the camera and reset the pointer to it\n+- stop the camera manager\n+- exit the application\n+\n+.. code:: cpp\n+\n+ camera->stop();\n+ allocator->free(stream);\n+ delete allocator;\n+ camera->release();\n+ camera.reset();\n+ cm->stop();\n+\n+ return 0;\n+\n+Conclusion\n+----------\n", "prefixes": [ "libcamera-devel", "PATCH-Resend" ] }