{"id":15447,"url":"https://patchwork.libcamera.org/api/1.1/patches/15447/?format=json","web_url":"https://patchwork.libcamera.org/patch/15447/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20220314154633.506026-4-tomi.valkeinen@ideasonboard.com>","date":"2022-03-14T15:46:33","name":"[libcamera-devel,v5,3/3] py: Add cam.py","commit_ref":null,"pull_url":null,"state":"accepted","archived":false,"hash":"f7c0aacd0236cfbecfafa7b696aadcbebcc87049","submitter":{"id":109,"url":"https://patchwork.libcamera.org/api/1.1/people/109/?format=json","name":"Tomi Valkeinen","email":"tomi.valkeinen@ideasonboard.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/15447/mbox/","series":[{"id":2960,"url":"https://patchwork.libcamera.org/api/1.1/series/2960/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=2960","date":"2022-03-14T15:46:30","name":"Python bindings","version":5,"mbox":"https://patchwork.libcamera.org/series/2960/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/15447/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/15447/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\r\n\t[92.243.16.209])\r\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 8C0D3BF415\r\n\tfor <parsemail@patchwork.libcamera.org>;\r\n\tMon, 14 Mar 2022 15:47:01 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\r\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 278E0632ED;\r\n\tMon, 14 Mar 2022 16:47:01 +0100 (CET)","from perceval.ideasonboard.com 
(perceval.ideasonboard.com\r\n\t[213.167.242.64])\r\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 95F98632E4\r\n\tfor <libcamera-devel@lists.libcamera.org>;\r\n\tMon, 14 Mar 2022 16:46:58 +0100 (CET)","from deskari.lan (91-156-85-209.elisa-laajakaista.fi\r\n\t[91.156.85.209])\r\n\tby perceval.ideasonboard.com (Postfix) with ESMTPSA id D556E2E0;\r\n\tMon, 14 Mar 2022 16:46:57 +0100 (CET)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/simple; d=libcamera.org;\r\n\ts=mail; t=1647272821;\r\n\tbh=HcdSqEyfpPwXftDpYWaln+rLdxdHZ1UGUgkNAszEHzA=;\r\n\th=To:Date:In-Reply-To:References:Subject:List-Id:List-Unsubscribe:\r\n\tList-Archive:List-Post:List-Help:List-Subscribe:From:Reply-To:\r\n\tFrom;\r\n\tb=qsvU+rNuxit0qqDpe/httgz8Fyi/xN1YZUX2vUEx4YrD0+ZgSFvEmIJwhG4rGtI9X\r\n\tJztefAg29G4ESfFM2StPJBFc9ofpJmfLaCYjHNfDD/9voGuWIWtYKCr3ZADpWK2ROv\r\n\tH6wbvCXbRdDCei6jyfim69mXGY4uZdmWzVCG/VtbBr45aIPzqeOL6E1ycIJYIbnYPH\r\n\tD9iv0ms9vNalGyfuDxjIq383NSIh2ovFOQMVTCNzYNFKqO5lSkDEWrVJqPLoTb//6k\r\n\tvxKrzkVz5Aj57uyytAoHqqf1tAsRdfLWJqT+R+YVTCQV1GOvMpDqiGNRAFtWCx/MC9\r\n\tiJMN/fW2HZ81g==","v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\r\n\ts=mail; t=1647272818;\r\n\tbh=HcdSqEyfpPwXftDpYWaln+rLdxdHZ1UGUgkNAszEHzA=;\r\n\th=From:To:Cc:Subject:Date:In-Reply-To:References:From;\r\n\tb=fp+RhfrUDLx1aVoqoO8LMqR71STQQmUk0zADYiIDC9QZZIk8LvWKPOMMlYvhFycJd\r\n\tevh/jNXLm+6s9wAOeFJSpgj4zLMWAP81+yvdjy01bRqWgGUuSPFFoNJpycQ96lPIvg\r\n\tRi5K7imTqfSLPyXXKUJbXrJwfhJIAUG7dga+F7W0="],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key; \r\n\tunprotected) header.d=ideasonboard.com\r\n\theader.i=@ideasonboard.com\r\n\theader.b=\"fp+RhfrU\"; dkim-atps=neutral","To":"libcamera-devel@lists.libcamera.org,\r\n\tDavid Plowman <david.plowman@raspberrypi.com>,\r\n\tKieran Bingham <kieran.bingham@ideasonboard.com>,\r\n\tLaurent Pinchart <laurent.pinchart@ideasonboard.com>","Date":"Mon, 14 Mar 2022 17:46:33 
+0200","Message-Id":"<20220314154633.506026-4-tomi.valkeinen@ideasonboard.com>","X-Mailer":"git-send-email 2.25.1","In-Reply-To":"<20220314154633.506026-1-tomi.valkeinen@ideasonboard.com>","References":"<20220314154633.506026-1-tomi.valkeinen@ideasonboard.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Subject":"[libcamera-devel] [PATCH v5 3/3] py: Add cam.py","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\r\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\r\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","From":"Tomi Valkeinen via libcamera-devel\r\n\t<libcamera-devel@lists.libcamera.org>","Reply-To":"Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"Add cam.py, which mimics the 'cam' tool. 
Four rendering backends are\nadded:\n\n* null - Do nothing\n* kms - Use KMS with dmabufs\n* qt - SW render on a Qt window\n* qtgl - OpenGL render on a Qt window\n\nAll the renderers handle only a few pixel formats, and especially the GL\nrenderer is just a prototype.\n\nSigned-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\n---\n src/py/cam/cam.py        | 461 +++++++++++++++++++++++++++++++++++++++\n src/py/cam/cam_kms.py    | 183 ++++++++++++++++\n src/py/cam/cam_null.py   |  46 ++++\n src/py/cam/cam_qt.py     | 355 ++++++++++++++++++++++++++++++\n src/py/cam/cam_qtgl.py   | 386 ++++++++++++++++++++++++++++++++\n src/py/cam/gl_helpers.py |  67 ++++++\n 6 files changed, 1498 insertions(+)\n create mode 100755 src/py/cam/cam.py\n create mode 100644 src/py/cam/cam_kms.py\n create mode 100644 src/py/cam/cam_null.py\n create mode 100644 src/py/cam/cam_qt.py\n create mode 100644 src/py/cam/cam_qtgl.py\n create mode 100644 src/py/cam/gl_helpers.py","diff":"diff --git a/src/py/cam/cam.py b/src/py/cam/cam.py\r\nnew file mode 100755\r\nindex 00000000..b86662e4\r\n--- /dev/null\r\n+++ b/src/py/cam/cam.py\r\n@@ -0,0 +1,461 @@\r\n+#!/usr/bin/env python3\r\n+\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+\r\n+import argparse\r\n+import binascii\r\n+import libcamera as libcam\r\n+import os\r\n+import sys\r\n+\r\n+class CustomCameraAction(argparse.Action):\r\n+\tdef __call__(self, parser, namespace, values, option_string=None):\r\n+\t\tprint(self.dest, values)\r\n+\r\n+\t\tif not \"camera\" in namespace or namespace.camera == None:\r\n+\t\t\tsetattr(namespace, \"camera\", [])\r\n+\r\n+\t\tprevious = namespace.camera\r\n+\t\tprevious.append((self.dest, values))\r\n+\t\tsetattr(namespace, \"camera\", previous)\r\n+\r\n+class CustomAction(argparse.Action):\r\n+\tdef __init__(self, option_strings, dest, **kwargs):\r\n+\t\tsuper().__init__(option_strings, dest, default={}, 
**kwargs)\r\n+\r\n+\tdef __call__(self, parser, namespace, values, option_string=None):\r\n+\t\tif len(namespace.camera) == 0:\r\n+\t\t\tprint(f\"Option {option_string} requires a --camera context\")\r\n+\t\t\tsys.exit(-1)\r\n+\r\n+\t\tif self.type == bool:\r\n+\t\t\tvalues = True\r\n+\r\n+\t\tcurrent = namespace.camera[-1]\r\n+\r\n+\t\tdata = getattr(namespace, self.dest)\r\n+\r\n+\t\tif self.nargs == \"+\":\r\n+\t\t\tif not current in data:\r\n+\t\t\t\tdata[current] = []\r\n+\r\n+\t\t\tdata[current] += values\r\n+\t\telse:\r\n+\t\t\tdata[current] = values\r\n+\r\n+\r\n+\r\n+def do_cmd_list(cm):\r\n+\tprint(\"Available cameras:\")\r\n+\r\n+\tfor idx,c in enumerate(cm.cameras):\r\n+\t\tprint(f\"{idx + 1}: {c.id}\")\r\n+\r\n+def do_cmd_list_props(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tprint(\"Properties for\", ctx[\"id\"])\r\n+\r\n+\tfor name, prop in camera.properties.items():\r\n+\t\tprint(\"\\t{}: {}\".format(name, prop))\r\n+\r\n+def do_cmd_list_controls(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tprint(\"Controls for\", ctx[\"id\"])\r\n+\r\n+\tfor name, prop in camera.controls.items():\r\n+\t\tprint(\"\\t{}: {}\".format(name, prop))\r\n+\r\n+def do_cmd_info(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tprint(\"Stream info for\", ctx[\"id\"])\r\n+\r\n+\troles = [libcam.StreamRole.Viewfinder]\r\n+\r\n+\tcamconfig = camera.generateConfiguration(roles)\r\n+\tif camconfig == None:\r\n+\t\traise Exception(\"Generating config failed\")\r\n+\r\n+\tfor i, stream_config in enumerate(camconfig):\r\n+\t\tprint(\"\\t{}: {}\".format(i, stream_config.toString()))\r\n+\r\n+\t\tformats = stream_config.formats\r\n+\t\tfor fmt in formats.pixelFormats:\r\n+\t\t\tprint(\"\\t * Pixelformat:\", fmt, formats.range(fmt))\r\n+\r\n+\t\t\tfor size in formats.sizes(fmt):\r\n+\t\t\t\tprint(\"\\t  -\", size)\r\n+\r\n+def acquire(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tcamera.acquire()\r\n+\r\n+def release(ctx):\r\n+\tcamera = 
ctx[\"camera\"]\r\n+\r\n+\tcamera.release()\r\n+\r\n+def parse_streams(ctx):\r\n+\tstreams = []\r\n+\r\n+\tfor stream_desc in ctx[\"opt-stream\"]:\r\n+\t\tstream_opts = {\"role\": libcam.StreamRole.Viewfinder}\r\n+\r\n+\t\tfor stream_opt in stream_desc.split(\",\"):\r\n+\t\t\tif stream_opt == 0:\r\n+\t\t\t\tcontinue\r\n+\r\n+\t\t\tarr = stream_opt.split(\"=\")\r\n+\t\t\tif len(arr) != 2:\r\n+\t\t\t\tprint(\"Bad stream option\", stream_opt)\r\n+\t\t\t\tsys.exit(-1)\r\n+\r\n+\t\t\tkey = arr[0]\r\n+\t\t\tvalue = arr[1]\r\n+\r\n+\t\t\tif key in [\"width\", \"height\"]:\r\n+\t\t\t\tvalue = int(value)\r\n+\t\t\telif key == \"role\":\r\n+\t\t\t\trolemap = {\r\n+\t\t\t\t\t\"still\": libcam.StreamRole.StillCapture,\r\n+\t\t\t\t\t\"raw\": libcam.StreamRole.Raw,\r\n+\t\t\t\t\t\"video\": libcam.StreamRole.VideoRecording,\r\n+\t\t\t\t\t\"viewfinder\": libcam.StreamRole.Viewfinder,\r\n+\t\t\t\t}\r\n+\r\n+\t\t\t\trole = rolemap.get(value.lower(), None)\r\n+\r\n+\t\t\t\tif role == None:\r\n+\t\t\t\t\tprint(\"Bad stream role\", value)\r\n+\t\t\t\t\tsys.exit(-1)\r\n+\r\n+\t\t\t\tvalue = role\r\n+\t\t\telif key == \"pixelformat\":\r\n+\t\t\t\tpass\r\n+\t\t\telse:\r\n+\t\t\t\tprint(\"Bad stream option key\", key)\r\n+\t\t\t\tsys.exit(-1)\r\n+\r\n+\t\t\tstream_opts[key] = value\r\n+\r\n+\t\tstreams.append(stream_opts)\r\n+\r\n+\treturn streams\r\n+\r\n+def configure(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tstreams = parse_streams(ctx)\r\n+\r\n+\troles = [opts[\"role\"] for opts in streams]\r\n+\r\n+\tcamconfig = camera.generateConfiguration(roles)\r\n+\tif camconfig == None:\r\n+\t\traise Exception(\"Generating config failed\")\r\n+\r\n+\tfor idx,stream_opts in enumerate(streams):\r\n+\t\tstream_config = camconfig.at(idx)\r\n+\r\n+\t\tif \"width\" in stream_opts and \"height\" in stream_opts:\r\n+\t\t\tstream_config.size = (stream_opts[\"width\"], stream_opts[\"height\"])\r\n+\r\n+\t\tif \"pixelformat\" in stream_opts:\r\n+\t\t\tstream_config.pixelFormat = 
stream_opts[\"pixelformat\"]\r\n+\r\n+\tstat = camconfig.validate()\r\n+\r\n+\tif stat == libcam.ConfigurationStatus.Invalid:\r\n+\t\tprint(\"Camera configuration invalid\")\r\n+\t\texit(-1)\r\n+\telif stat == libcam.ConfigurationStatus.Adjusted:\r\n+\t\tif ctx[\"opt-strict-formats\"]:\r\n+\t\t\tprint(\"Adjusting camera configuration disallowed by --strict-formats argument\")\r\n+\t\t\texit(-1)\r\n+\r\n+\t\tprint(\"Camera configuration adjusted\")\r\n+\r\n+\tr = camera.configure(camconfig);\r\n+\tif r != 0:\r\n+\t\traise Exception(\"Configure failed\")\r\n+\r\n+\tctx[\"stream-names\"] = {}\r\n+\tctx[\"streams\"] = []\r\n+\r\n+\tfor idx, stream_config in enumerate(camconfig):\r\n+\t\tstream = stream_config.stream\r\n+\t\tctx[\"streams\"].append(stream)\r\n+\t\tctx[\"stream-names\"][stream] = \"stream\" + str(idx)\r\n+\t\tprint(\"{}-{}: stream config {}\".format(ctx[\"id\"], ctx[\"stream-names\"][stream], stream.configuration.toString()))\r\n+\r\n+def alloc_buffers(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tallocator = libcam.FrameBufferAllocator(camera);\r\n+\r\n+\tfor idx, stream in enumerate(ctx[\"streams\"]):\r\n+\t\tret = allocator.allocate(stream)\r\n+\t\tif ret < 0:\r\n+\t\t\tprint(\"Can't allocate buffers\")\r\n+\t\t\texit(-1)\r\n+\r\n+\t\tallocated = len(allocator.buffers(stream))\r\n+\r\n+\t\tprint(\"{}-{}: Allocated {} buffers\".format(ctx[\"id\"], ctx[\"stream-names\"][stream], allocated))\r\n+\r\n+\tctx[\"allocator\"] = allocator\r\n+\r\n+def create_requests(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tctx[\"requests\"] = []\r\n+\r\n+\t# Identify the stream with the least number of buffers\r\n+\tnum_bufs = min([len(ctx[\"allocator\"].buffers(stream)) for stream in ctx[\"streams\"]])\r\n+\r\n+\trequests = []\r\n+\r\n+\tfor buf_num in range(num_bufs):\r\n+\t\trequest = camera.createRequest(ctx[\"idx\"])\r\n+\r\n+\t\tif request == None:\r\n+\t\t\tprint(\"Can't create request\")\r\n+\t\t\texit(-1)\r\n+\r\n+\t\tfor stream in 
ctx[\"streams\"]:\r\n+\t\t\tbuffers = ctx[\"allocator\"].buffers(stream)\r\n+\t\t\tbuffer = buffers[buf_num]\r\n+\r\n+\t\t\tret = request.addBuffer(stream, buffer)\r\n+\t\t\tif ret < 0:\r\n+\t\t\t\tprint(\"Can't set buffer for request\")\r\n+\t\t\t\texit(-1)\r\n+\r\n+\t\trequests.append(request)\r\n+\r\n+\tctx[\"requests\"] = requests\r\n+\r\n+def start(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tcamera.start()\r\n+\r\n+def stop(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tcamera.stop()\r\n+\r\n+def queue_requests(ctx):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tfor request in ctx[\"requests\"]:\r\n+\t\tcamera.queueRequest(request)\r\n+\t\tctx[\"reqs-queued\"] += 1\r\n+\r\n+\tdel ctx[\"requests\"]\r\n+\r\n+def capture_init(contexts):\r\n+\tfor ctx in contexts:\r\n+\t\tacquire(ctx)\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\tconfigure(ctx)\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\talloc_buffers(ctx)\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\tcreate_requests(ctx)\r\n+\r\n+def capture_start(contexts):\r\n+\tfor ctx in contexts:\r\n+\t\tstart(ctx)\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\tqueue_requests(ctx)\r\n+\r\n+# Called from renderer when there is a libcamera event\r\n+def event_handler(state):\r\n+\tcm = state[\"cm\"]\r\n+\tcontexts = state[\"contexts\"]\r\n+\r\n+\tos.read(cm.efd, 8)\r\n+\r\n+\treqs = cm.getReadyRequests()\r\n+\r\n+\tfor req in reqs:\r\n+\t\tctx = next(ctx for ctx in contexts if ctx[\"idx\"] == req.cookie)\r\n+\t\trequest_handler(state, ctx, req)\r\n+\r\n+\trunning = any(ctx[\"reqs-completed\"] < ctx[\"opt-capture\"] for ctx in contexts)\r\n+\treturn running\r\n+\r\n+def request_handler(state, ctx, req):\r\n+\tif req.status != libcam.RequestStatus.Complete:\r\n+\t\traise Exception(\"{}: Request failed: {}\".format(ctx[\"id\"], req.status))\r\n+\r\n+\tbuffers = req.buffers\r\n+\r\n+\t# Compute the frame rate. 
The timestamp is arbitrarily retrieved from\r\n+\t# the first buffer, as all buffers should have matching timestamps.\r\n+\tts = buffers[next(iter(buffers))].metadata.timestamp\r\n+\tlast = ctx.get(\"last\", 0)\r\n+\tfps = 1000000000.0 / (ts - last) if (last != 0 and (ts - last) != 0) else 0\r\n+\tctx[\"last\"] = ts\r\n+\tctx[\"fps\"] = fps\r\n+\r\n+\tfor stream, fb in buffers.items():\r\n+\t\tstream_name = ctx[\"stream-names\"][stream]\r\n+\r\n+\t\tcrcs = []\r\n+\t\tif ctx[\"opt-crc\"]:\r\n+\t\t\twith fb.mmap(0) as b:\r\n+\t\t\t\tcrc = binascii.crc32(b)\r\n+\t\t\t\tcrcs.append(crc)\r\n+\r\n+\t\tmeta = fb.metadata\r\n+\r\n+\t\tprint(\"{:.6f} ({:.2f} fps) {}-{}: seq {}, bytes {}, CRCs {}\"\r\n+\t\t\t  .format(ts / 1000000000, fps,\r\n+\t\t\t\t\t  ctx[\"id\"], stream_name,\r\n+\t\t\t\t\t  meta.sequence, meta.bytesused,\r\n+\t\t\t\t\t  crcs))\r\n+\r\n+\t\tif ctx[\"opt-metadata\"]:\r\n+\t\t\treqmeta = req.metadata\r\n+\t\t\tfor ctrl, val in reqmeta.items():\r\n+\t\t\t\tprint(f\"\\t{ctrl} = {val}\")\r\n+\r\n+\t\tif ctx[\"opt-save-frames\"]:\r\n+\t\t\twith fb.mmap(0) as b:\r\n+\t\t\t\tfilename = \"frame-{}-{}-{}.data\".format(ctx[\"id\"], stream_name, ctx[\"reqs-completed\"])\r\n+\t\t\t\twith open(filename, \"wb\") as f:\r\n+\t\t\t\t\tf.write(b)\r\n+\r\n+\tstate[\"renderer\"].request_handler(ctx, req);\r\n+\r\n+\tctx[\"reqs-completed\"] += 1\r\n+\r\n+# Called from renderer when it has finished with a request\r\n+def request_prcessed(ctx, req):\r\n+\tcamera = ctx[\"camera\"]\r\n+\r\n+\tif ctx[\"reqs-queued\"] < ctx[\"opt-capture\"]:\r\n+\t\treq.reuse()\r\n+\t\tcamera.queueRequest(req)\r\n+\t\tctx[\"reqs-queued\"] += 1\r\n+\r\n+def capture_deinit(contexts):\r\n+\tfor ctx in contexts:\r\n+\t\tstop(ctx)\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\trelease(ctx)\r\n+\r\n+def do_cmd_capture(state):\r\n+\tcapture_init(state[\"contexts\"])\r\n+\r\n+\trenderer = 
state[\"renderer\"]\r\n+\r\n+\trenderer.setup()\r\n+\r\n+\tcapture_start(state[\"contexts\"])\r\n+\r\n+\trenderer.run()\r\n+\r\n+\tcapture_deinit(state[\"contexts\"])\r\n+\r\n+def main():\r\n+\tparser = argparse.ArgumentParser()\r\n+\t# global options\r\n+\tparser.add_argument(\"-l\", \"--list\", action=\"store_true\", help=\"List all cameras\")\r\n+\tparser.add_argument(\"-c\", \"--camera\", type=int, action=\"extend\", nargs=1, default=[], help=\"Specify which camera to operate on, by index\")\r\n+\tparser.add_argument(\"-p\", \"--list-properties\", action=\"store_true\", help=\"List cameras properties\")\r\n+\tparser.add_argument(\"--list-controls\", action=\"store_true\", help=\"List cameras controls\")\r\n+\tparser.add_argument(\"-I\", \"--info\", action=\"store_true\", help=\"Display information about stream(s)\")\r\n+\tparser.add_argument(\"-R\", \"--renderer\", default=\"null\", help=\"Renderer (null, kms, qt, qtgl)\")\r\n+\r\n+\t# per camera options\r\n+\tparser.add_argument(\"-C\", \"--capture\", nargs=\"?\", type=int, const=1000000, action=CustomAction, help=\"Capture until interrupted by user or until CAPTURE frames captured\")\r\n+\tparser.add_argument(\"--crc\", nargs=0, type=bool, action=CustomAction, help=\"Print CRC32 for captured frames\")\r\n+\tparser.add_argument(\"--save-frames\", nargs=0, type=bool, action=CustomAction, help=\"Save captured frames to files\")\r\n+\tparser.add_argument(\"--metadata\", nargs=0, type=bool, action=CustomAction, help=\"Print the metadata for completed requests\")\r\n+\tparser.add_argument(\"--strict-formats\", type=bool, nargs=0, action=CustomAction, help=\"Do not allow requested stream format(s) to be adjusted\")\r\n+\tparser.add_argument(\"-s\", \"--stream\", nargs=\"+\", action=CustomAction)\r\n+\targs = parser.parse_args()\r\n+\r\n+\tcm = libcam.CameraManager.singleton()\r\n+\r\n+\tif args.list:\r\n+\t\tdo_cmd_list(cm)\r\n+\r\n+\tcontexts = []\r\n+\r\n+\tfor cam_idx in args.camera:\r\n+\t\tcamera = next((c for 
i,c in enumerate(cm.cameras) if i + 1 == cam_idx), None)\r\n+\r\n+\t\tif camera == None:\r\n+\t\t\tprint(\"Unable to find camera\", cam_idx)\r\n+\t\t\treturn -1\r\n+\r\n+\t\tcontexts.append({\r\n+\t\t\t\t\t\t\"camera\": camera,\r\n+\t\t\t\t\t\t\"idx\": cam_idx,\r\n+\t\t\t\t\t\t\"id\": \"cam\" + str(cam_idx),\r\n+\t\t\t\t\t\t\"reqs-queued\": 0,\r\n+\t\t\t\t\t\t\"reqs-completed\": 0,\r\n+\t\t\t\t\t\t\"opt-capture\": args.capture.get(cam_idx, False),\r\n+\t\t\t\t\t\t\"opt-crc\": args.crc.get(cam_idx, False),\r\n+\t\t\t\t\t\t\"opt-save-frames\": args.save_frames.get(cam_idx, False),\r\n+\t\t\t\t\t\t\"opt-metadata\": args.metadata.get(cam_idx, False),\r\n+\t\t\t\t\t\t\"opt-strict-formats\": args.strict_formats.get(cam_idx, False),\r\n+\t\t\t\t\t\t\"opt-stream\": args.stream.get(cam_idx, [\"role=viewfinder\"]),\r\n+\t\t\t\t\t\t})\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\tprint(\"Using camera {} as {}\".format(ctx[\"camera\"].id, ctx[\"id\"]))\r\n+\r\n+\tfor ctx in contexts:\r\n+\t\tif args.list_properties:\r\n+\t\t\tdo_cmd_list_props(ctx)\r\n+\t\tif args.list_controls:\r\n+\t\t\tdo_cmd_list_controls(ctx)\r\n+\t\tif args.info:\r\n+\t\t\tdo_cmd_info(ctx)\r\n+\r\n+\tif args.capture:\r\n+\r\n+\t\tstate = {\r\n+\t\t\t\"cm\": cm,\r\n+\t\t\t\"contexts\": contexts,\r\n+\t\t\t\"event_handler\": event_handler,\r\n+\t\t\t\"request_prcessed\": request_prcessed,\r\n+\t\t}\r\n+\r\n+\t\tif args.renderer == \"null\":\r\n+\t\t\timport cam_null\r\n+\t\t\trenderer = cam_null.NullRenderer(state)\r\n+\t\telif args.renderer == \"kms\":\r\n+\t\t\timport cam_kms\r\n+\t\t\trenderer = cam_kms.KMSRenderer(state)\r\n+\t\telif args.renderer == \"qt\":\r\n+\t\t\timport cam_qt\r\n+\t\t\trenderer = cam_qt.QtRenderer(state)\r\n+\t\telif args.renderer == \"qtgl\":\r\n+\t\t\timport cam_qtgl\r\n+\t\t\trenderer = cam_qtgl.QtRenderer(state)\r\n+\t\telse:\r\n+\t\t\tprint(\"Bad renderer\", args.renderer)\r\n+\t\t\treturn -1\r\n+\r\n+\t\tstate[\"renderer\"] = 
renderer\r\n+\r\n+\t\tdo_cmd_capture(state)\r\n+\r\n+\treturn 0\r\n+\r\n+if __name__ == \"__main__\":\r\n+\tsys.exit(main())\r\ndiff --git a/src/py/cam/cam_kms.py b/src/py/cam/cam_kms.py\r\nnew file mode 100644\r\nindex 00000000..58da1779\r\n--- /dev/null\r\n+++ b/src/py/cam/cam_kms.py\r\n@@ -0,0 +1,183 @@\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+\r\n+import pykms\r\n+import selectors\r\n+import sys\r\n+\r\n+FMT_MAP = {\r\n+\t\"RGB888\": pykms.PixelFormat.RGB888,\r\n+\t\"YUYV\": pykms.PixelFormat.YUYV,\r\n+\t\"ARGB8888\": pykms.PixelFormat.ARGB8888,\r\n+\t\"XRGB8888\": pykms.PixelFormat.XRGB8888,\r\n+}\r\n+\r\n+class KMSRenderer:\r\n+\tdef __init__(self, state):\r\n+\t\tself.state = state\r\n+\r\n+\t\tself.cm = state[\"cm\"]\r\n+\t\tself.contexts = state[\"contexts\"]\r\n+\t\tself.running = False\r\n+\r\n+\t\tcard = pykms.Card()\r\n+\r\n+\t\tres = pykms.ResourceManager(card)\r\n+\t\tconn = res.reserve_connector()\r\n+\t\tcrtc = res.reserve_crtc(conn)\r\n+\t\tmode = conn.get_default_mode()\r\n+\t\tmodeb = mode.to_blob(card)\r\n+\r\n+\t\treq = pykms.AtomicReq(card)\r\n+\t\treq.add_connector(conn, crtc)\r\n+\t\treq.add_crtc(crtc, modeb)\r\n+\t\tr = req.commit_sync(allow_modeset = True)\r\n+\t\tassert(r == 0)\r\n+\r\n+\t\tself.card = card\r\n+\t\tself.resman = res\r\n+\t\tself.crtc = crtc\r\n+\t\tself.mode = mode\r\n+\r\n+\t\tself.bufqueue = []\r\n+\t\tself.current = None\r\n+\t\tself.next = None\r\n+\t\tself.cam_2_drm = {}\r\n+\r\n+\t# KMS\r\n+\r\n+\tdef close(self):\r\n+\t\treq = pykms.AtomicReq(self.card)\r\n+\t\tfor s in self.streams:\r\n+\t\t\treq.add_plane(s[\"plane\"], None, None, dst=(0, 0, 0, 0))\r\n+\t\treq.commit()\r\n+\r\n+\tdef add_plane(self, req, stream, fb):\r\n+\t\ts = next(s for s in self.streams if s[\"stream\"] == stream)\r\n+\t\tidx = s[\"idx\"]\r\n+\t\tplane = s[\"plane\"]\r\n+\r\n+\t\tif idx % 2 == 0:\r\n+\t\t\tx = 0\r\n+\t\telse:\r\n+\t\t\tx = 
self.mode.hdisplay - fb.width\r\n+\r\n+\t\tif idx // 2 == 0:\r\n+\t\t\ty = 0\r\n+\t\telse:\r\n+\t\t\ty = self.mode.vdisplay - fb.height\r\n+\r\n+\t\treq.add_plane(plane, fb, self.crtc, dst=(x, y, fb.width, fb.height))\r\n+\r\n+\tdef apply_request(self, drmreq):\r\n+\r\n+\t\tbuffers = drmreq[\"camreq\"].buffers\r\n+\r\n+\t\tfor stream, fb in buffers.items():\r\n+\t\t\tdrmfb = self.cam_2_drm.get(fb, None)\r\n+\r\n+\t\t\treq = pykms.AtomicReq(self.card)\r\n+\t\t\tself.add_plane(req, stream, drmfb)\r\n+\t\t\treq.commit()\r\n+\r\n+\tdef handle_page_flip(self, frame, time):\r\n+\t\told = self.current\r\n+\t\tself.current = self.next\r\n+\r\n+\t\tif len(self.bufqueue) > 0:\r\n+\t\t\tself.next = self.bufqueue.pop(0)\r\n+\t\telse:\r\n+\t\t\tself.next = None\r\n+\r\n+\t\tif self.next:\r\n+\t\t\tdrmreq = self.next\r\n+\r\n+\t\t\tself.apply_request(drmreq)\r\n+\r\n+\t\tif old:\r\n+\t\t\treq = old[\"camreq\"]\r\n+\t\t\tctx = old[\"camctx\"]\r\n+\t\t\tself.state[\"request_prcessed\"](ctx, req)\r\n+\r\n+\tdef queue(self, drmreq):\r\n+\t\tif not self.next:\r\n+\t\t\tself.next = drmreq\r\n+\t\t\tself.apply_request(drmreq)\r\n+\t\telse:\r\n+\t\t\tself.bufqueue.append(drmreq)\r\n+\r\n+\t# libcamera\r\n+\r\n+\tdef setup(self):\r\n+\t\tself.streams = []\r\n+\r\n+\t\tidx = 0\r\n+\t\tfor ctx in self.contexts:\r\n+\t\t\tfor stream in ctx[\"streams\"]:\r\n+\r\n+\t\t\t\tcfg = stream.configuration\r\n+\t\t\t\tfmt = cfg.pixelFormat\r\n+\t\t\t\tfmt = FMT_MAP[fmt]\r\n+\r\n+\t\t\t\tplane = self.resman.reserve_generic_plane(self.crtc, fmt)\r\n+\t\t\t\tassert(plane != None)\r\n+\r\n+\t\t\t\tself.streams.append({\r\n+\t\t\t\t\t\t\t\t\"idx\": idx,\r\n+\t\t\t\t\t\t\t\t\"stream\": stream,\r\n+\t\t\t\t\t\t\t\t\"plane\": plane,\r\n+\t\t\t\t\t\t\t\t\"fmt\": fmt,\r\n+\t\t\t\t\t\t\t\t\"size\": cfg.size,\r\n+\t\t\t\t\t\t\t   })\r\n+\r\n+\t\t\t\tfor fb in ctx[\"allocator\"].buffers(stream):\r\n+\t\t\t\t\tw, h = cfg.size\r\n+\t\t\t\t\tstride = cfg.stride\r\n+\t\t\t\t\tfd = fb.fd(0)\r\n+\t\t\t\t\tdrmfb = 
pykms.DmabufFramebuffer(self.card, w, h, fmt,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t[fd], [stride], [0])\r\n+\t\t\t\t\tself.cam_2_drm[fb] = drmfb\r\n+\r\n+\t\t\t\tidx += 1\r\n+\r\n+\r\n+\tdef readdrm(self, fileobj):\r\n+\t\tfor ev in self.card.read_events():\r\n+\t\t\tif ev.type == pykms.DrmEventType.FLIP_COMPLETE:\r\n+\t\t\t\tself.handle_page_flip(ev.seq, ev.time)\r\n+\r\n+\tdef readcam(self, fd):\r\n+\t\tself.running = self.state[\"event_handler\"](self.state)\r\n+\r\n+\tdef readkey(self, fileobj):\r\n+\t\tsys.stdin.readline()\r\n+\t\tself.running = False\r\n+\r\n+\tdef run(self):\r\n+\t\tprint(\"Capturing...\")\r\n+\r\n+\t\tself.running = True\r\n+\r\n+\t\tsel = selectors.DefaultSelector()\r\n+\t\tsel.register(self.card.fd, selectors.EVENT_READ, self.readdrm)\r\n+\t\tsel.register(self.cm.efd, selectors.EVENT_READ, self.readcam)\r\n+\t\tsel.register(sys.stdin, selectors.EVENT_READ, self.readkey)\r\n+\r\n+\t\tprint(\"Press enter to exit\")\r\n+\r\n+\t\twhile self.running:\r\n+\t\t\tevents = sel.select()\r\n+\t\t\tfor key, mask in events:\r\n+\t\t\t\tcallback = key.data\r\n+\t\t\t\tcallback(key.fileobj)\r\n+\r\n+\t\tprint(\"Exiting...\")\r\n+\r\n+\tdef request_handler(self, ctx, req):\r\n+\r\n+\t\tdrmreq = {\r\n+\t\t\t\"camctx\": ctx,\r\n+\t\t\t\"camreq\": req,\r\n+\t\t}\r\n+\r\n+\t\tself.queue(drmreq)\r\ndiff --git a/src/py/cam/cam_null.py b/src/py/cam/cam_null.py\r\nnew file mode 100644\r\nindex 00000000..db4aedbd\r\n--- /dev/null\r\n+++ b/src/py/cam/cam_null.py\r\n@@ -0,0 +1,46 @@\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+\r\n+import selectors\r\n+import sys\r\n+\r\n+class NullRenderer:\r\n+\tdef __init__(self, state):\r\n+\t\tself.state = state\r\n+\r\n+\t\tself.cm = state[\"cm\"]\r\n+\t\tself.contexts = state[\"contexts\"]\r\n+\r\n+\t\tself.running = False\r\n+\r\n+\tdef setup(self):\r\n+\t\tpass\r\n+\r\n+\tdef run(self):\r\n+\t\tprint(\"Capturing...\")\r\n+\r\n+\t\tself.running = 
True\r\n+\r\n+\t\tsel = selectors.DefaultSelector()\r\n+\t\tsel.register(self.cm.efd, selectors.EVENT_READ, self.readcam)\r\n+\t\tsel.register(sys.stdin, selectors.EVENT_READ, self.readkey)\r\n+\r\n+\t\tprint(\"Press enter to exit\")\r\n+\r\n+\t\twhile self.running:\r\n+\t\t\tevents = sel.select()\r\n+\t\t\tfor key, mask in events:\r\n+\t\t\t\tcallback = key.data\r\n+\t\t\t\tcallback(key.fileobj)\r\n+\r\n+\t\tprint(\"Exiting...\")\r\n+\r\n+\tdef readcam(self, fd):\r\n+\t\tself.running = self.state[\"event_handler\"](self.state)\r\n+\r\n+\tdef readkey(self, fileobj):\r\n+\t\tsys.stdin.readline()\r\n+\t\tself.running = False\r\n+\r\n+\tdef request_handler(self, ctx, req):\r\n+\t\tself.state[\"request_prcessed\"](ctx, req)\r\ndiff --git a/src/py/cam/cam_qt.py b/src/py/cam/cam_qt.py\r\nnew file mode 100644\r\nindex 00000000..d7a4f6f7\r\n--- /dev/null\r\n+++ b/src/py/cam/cam_qt.py\r\n@@ -0,0 +1,355 @@\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+#\r\n+# Debayering code from PiCamera documentation\r\n+\r\n+from io import BytesIO\r\n+from numpy.lib.stride_tricks import as_strided\r\n+from PIL import Image\r\n+from PIL.ImageQt import ImageQt\r\n+from PyQt5 import QtCore, QtGui, QtWidgets\r\n+import numpy as np\r\n+import sys\r\n+\r\n+def rgb_to_pix(rgb):\r\n+\timg = Image.frombuffer(\"RGB\", (rgb.shape[1], rgb.shape[0]), rgb)\r\n+\tqim = ImageQt(img).copy()\r\n+\tpix = QtGui.QPixmap.fromImage(qim)\r\n+\treturn pix\r\n+\r\n+\r\n+def separate_components(data, r0, g0, g1, b0):\r\n+\t# Now to split the data up into its red, green, and blue components. The\r\n+\t# Bayer pattern of the OV5647 sensor is BGGR. 
In other words the first\r\n+\t# row contains alternating green/blue elements, the second row contains\r\n+\t# alternating red/green elements, and so on as illustrated below:\r\n+\t#\r\n+\t# GBGBGBGBGBGBGB\r\n+\t# RGRGRGRGRGRGRG\r\n+\t# GBGBGBGBGBGBGB\r\n+\t# RGRGRGRGRGRGRG\r\n+\t#\r\n+\t# Please note that if you use vflip or hflip to change the orientation\r\n+\t# of the capture, you must flip the Bayer pattern accordingly\r\n+\r\n+\trgb = np.zeros(data.shape + (3,), dtype=data.dtype)\r\n+\trgb[r0[1]::2, r0[0]::2, 0] = data[r0[1]::2, r0[0]::2] # Red\r\n+\trgb[g0[1]::2, g0[0]::2, 1] = data[g0[1]::2, g0[0]::2] # Green\r\n+\trgb[g1[1]::2, g1[0]::2, 1] = data[g1[1]::2, g1[0]::2] # Green\r\n+\trgb[b0[1]::2, b0[0]::2, 2] = data[b0[1]::2, b0[0]::2] # Blue\r\n+\r\n+\treturn rgb\r\n+\r\n+def demosaic(rgb, r0, g0, g1, b0):\r\n+\t# At this point we now have the raw Bayer data with the correct values\r\n+\t# and colors but the data still requires de-mosaicing and\r\n+\t# post-processing. If you wish to do this yourself, end the script here!\r\n+\t#\r\n+\t# Below we present a fairly naive de-mosaic method that simply\r\n+\t# calculates the weighted average of a pixel based on the pixels\r\n+\t# surrounding it. The weighting is provided b0[1] a b0[1]te representation of\r\n+\t# the Bayer filter which we construct first:\r\n+\r\n+\tbayer = np.zeros(rgb.shape, dtype=np.uint8)\r\n+\tbayer[r0[1]::2, r0[0]::2, 0] = 1 # Red\r\n+\tbayer[g0[1]::2, g0[0]::2, 1] = 1 # Green\r\n+\tbayer[g1[1]::2, g1[0]::2, 1] = 1 # Green\r\n+\tbayer[b0[1]::2, b0[0]::2, 2] = 1 # Blue\r\n+\r\n+\t# Allocate an array to hold our output with the same shape as the input\r\n+\t# data. After this we define the size of window that will be used to\r\n+\t# calculate each weighted average (3x3). 
Then we pad out the rgb and\r\n+\t# bayer arrays, adding blank pixels at their edges to compensate for the\r\n+\t# size of the window when calculating averages for edge pixels.\r\n+\r\n+\toutput = np.empty(rgb.shape, dtype=rgb.dtype)\r\n+\twindow = (3, 3)\r\n+\tborders = (window[0] - 1, window[1] - 1)\r\n+\tborder = (borders[0] // 2, borders[1] // 2)\r\n+\r\n+\t#rgb_pad = np.zeros((\r\n+\t#\trgb.shape[0] + borders[0],\r\n+\t#\trgb.shape[1] + borders[1],\r\n+\t#\trgb.shape[2]), dtype=rgb.dtype)\r\n+\t#rgb_pad[\r\n+\t#\tborder[0]:rgb_pad.shape[0] - border[0],\r\n+\t#\tborder[1]:rgb_pad.shape[1] - border[1],\r\n+\t#\t:] = rgb\r\n+\t#rgb = rgb_pad\r\n+\t#\r\n+\t#bayer_pad = np.zeros((\r\n+\t#\tbayer.shape[0] + borders[0],\r\n+\t#\tbayer.shape[1] + borders[1],\r\n+\t#\tbayer.shape[2]), dtype=bayer.dtype)\r\n+\t#bayer_pad[\r\n+\t#\tborder[0]:bayer_pad.shape[0] - border[0],\r\n+\t#\tborder[1]:bayer_pad.shape[1] - border[1],\r\n+\t#\t:] = bayer\r\n+\t#bayer = bayer_pad\r\n+\r\n+\t# In numpy >=1.7.0 just use np.pad (version in Raspbian is 1.6.2 at the\r\n+\t# time of writing...)\r\n+\t#\r\n+\trgb = np.pad(rgb, [\r\n+\t\t(border[0], border[0]),\r\n+\t\t(border[1], border[1]),\r\n+\t\t(0, 0),\r\n+\t\t], 'constant')\r\n+\tbayer = np.pad(bayer, [\r\n+\t\t(border[0], border[0]),\r\n+\t\t(border[1], border[1]),\r\n+\t\t(0, 0),\r\n+\t\t], 'constant')\r\n+\r\n+\t# For each plane in the RGB data, we use a nifty numpy trick\r\n+\t# (as_strided) to construct a view over the plane of 3x3 matrices. 
We do\r\n+\t# the same for the bayer array, then use Einstein summation on each\r\n+\t# (np.sum is simpler, but copies the data so it's slower), and divide\r\n+\t# the results to get our weighted average:\r\n+\r\n+\tfor plane in range(3):\r\n+\t\tp = rgb[..., plane]\r\n+\t\tb = bayer[..., plane]\r\n+\t\tpview = as_strided(p, shape=(\r\n+\t\t\tp.shape[0] - borders[0],\r\n+\t\t\tp.shape[1] - borders[1]) + window, strides=p.strides * 2)\r\n+\t\tbview = as_strided(b, shape=(\r\n+\t\t\tb.shape[0] - borders[0],\r\n+\t\t\tb.shape[1] - borders[1]) + window, strides=b.strides * 2)\r\n+\t\tpsum = np.einsum('ijkl->ij', pview)\r\n+\t\tbsum = np.einsum('ijkl->ij', bview)\r\n+\t\toutput[..., plane] = psum // bsum\r\n+\r\n+\treturn output\r\n+\r\n+\r\n+\r\n+\r\n+def to_rgb(fmt, size, data):\r\n+\tw = size[0]\r\n+\th = size[1]\r\n+\r\n+\tif fmt == \"YUYV\":\r\n+\t\t# YUV422\r\n+\t\tyuyv = data.reshape((h, w // 2 * 4))\r\n+\r\n+\t\t# YUV444\r\n+\t\tyuv = np.empty((h, w, 3), dtype=np.uint8)\r\n+\t\tyuv[:, :, 0] = yuyv[:, 0::2]\t\t\t\t\t# Y\r\n+\t\tyuv[:, :, 1] = yuyv[:, 1::4].repeat(2, axis=1)\t# U\r\n+\t\tyuv[:, :, 2] = yuyv[:, 3::4].repeat(2, axis=1)\t# V\r\n+\r\n+\t\tm = np.array([\r\n+\t\t\t[ 1.0, 1.0, 1.0],\r\n+\t\t\t[-0.000007154783816076815, -0.3441331386566162, 1.7720025777816772],\r\n+\t\t\t[ 1.4019975662231445, -0.7141380310058594 , 0.00001542569043522235]\r\n+\t\t])\r\n+\r\n+\t\trgb = np.dot(yuv, m)\r\n+\t\trgb[:, :, 0] -= 179.45477266423404\r\n+\t\trgb[:, :, 1] += 135.45870971679688\r\n+\t\trgb[:, :, 2] -= 226.8183044444304\r\n+\t\trgb = rgb.astype(np.uint8)\r\n+\r\n+\telif fmt == \"RGB888\":\r\n+\t\trgb = data.reshape((h, w, 3))\r\n+\t\trgb[:, :, [0, 1, 2]] = rgb[:, :, [2, 1, 0]]\r\n+\r\n+\telif fmt == \"BGR888\":\r\n+\t\trgb = data.reshape((h, w, 3))\r\n+\r\n+\telif fmt in [\"ARGB8888\", \"XRGB8888\"]:\r\n+\t\trgb = data.reshape((h, w, 4))\r\n+\t\trgb = np.flip(rgb, axis=2)\r\n+\t\t# drop alpha component\r\n+\t\trgb = np.delete(rgb, np.s_[0::4], axis=2)\r\n+\r\n+\telif 
fmt.startswith(\"S\"):\r\n+\t\tbayer_pattern = fmt[1:5]\r\n+\t\tbitspp = int(fmt[5:])\r\n+\r\n+\t\t# TODO: shifting leaves the lowest bits 0\r\n+\t\tif bitspp == 8:\r\n+\t\t\tdata = data.reshape((h, w))\r\n+\t\t\tdata = data.astype(np.uint16) << 8\r\n+\t\telif bitspp in [10, 12]:\r\n+\t\t\tdata = data.view(np.uint16)\r\n+\t\t\tdata = data.reshape((h, w))\r\n+\t\t\tdata = data << (16 - bitspp)\r\n+\t\telse:\r\n+\t\t\traise Exception(\"Bad bitspp:\" + str(bitspp))\r\n+\r\n+\t\tidx = bayer_pattern.find(\"R\")\r\n+\t\tassert(idx != -1)\r\n+\t\tr0 = (idx % 2, idx // 2)\r\n+\r\n+\t\tidx = bayer_pattern.find(\"G\")\r\n+\t\tassert(idx != -1)\r\n+\t\tg0 = (idx % 2, idx // 2)\r\n+\r\n+\t\tidx = bayer_pattern.find(\"G\", idx + 1)\r\n+\t\tassert(idx != -1)\r\n+\t\tg1 = (idx % 2, idx // 2)\r\n+\r\n+\t\tidx = bayer_pattern.find(\"B\")\r\n+\t\tassert(idx != -1)\r\n+\t\tb0 = (idx % 2, idx // 2)\r\n+\r\n+\t\trgb = separate_components(data, r0, g0, g1, b0)\r\n+\t\trgb = demosaic(rgb, r0, g0, g1, b0)\r\n+\t\trgb = (rgb >> 8).astype(np.uint8)\r\n+\r\n+\telse:\r\n+\t\trgb = None\r\n+\r\n+\treturn rgb\r\n+\r\n+\r\n+class QtRenderer:\r\n+\tdef __init__(self, state):\r\n+\t\tself.state = state\r\n+\r\n+\t\tself.cm = state[\"cm\"]\r\n+\t\tself.contexts = state[\"contexts\"]\r\n+\r\n+\tdef setup(self):\r\n+\t\tself.app = QtWidgets.QApplication([])\r\n+\r\n+\t\twindows = []\r\n+\r\n+\t\tfor ctx in self.contexts:\r\n+\t\t\tcamera = ctx[\"camera\"]\r\n+\r\n+\t\t\tfor stream in ctx[\"streams\"]:\r\n+\t\t\t\tfmt = stream.configuration.pixelFormat\r\n+\t\t\t\tsize = stream.configuration.size\r\n+\r\n+\t\t\t\twindow = MainWindow(ctx, stream)\r\n+\t\t\t\twindow.setAttribute(QtCore.Qt.WA_ShowWithoutActivating)\r\n+\t\t\t\twindow.show()\r\n+\t\t\t\twindows.append(window)\r\n+\r\n+\t\tself.windows = windows\r\n+\r\n+\tdef run(self):\r\n+\t\tcamnotif = QtCore.QSocketNotifier(self.cm.efd, QtCore.QSocketNotifier.Read)\r\n+\t\tcamnotif.activated.connect(lambda x: self.readcam())\r\n+\r\n+\t\tkeynotif = 
QtCore.QSocketNotifier(sys.stdin.fileno(), QtCore.QSocketNotifier.Read)\r\n+\t\tkeynotif.activated.connect(lambda x: self.readkey())\r\n+\r\n+\t\tprint(\"Capturing...\")\r\n+\r\n+\t\tself.app.exec()\r\n+\r\n+\t\tprint(\"Exiting...\")\r\n+\r\n+\tdef readcam(self):\r\n+\t\trunning = self.state[\"event_handler\"](self.state)\r\n+\r\n+\t\tif not running:\r\n+\t\t\tself.app.quit()\r\n+\r\n+\tdef readkey(self):\r\n+\t\tsys.stdin.readline()\r\n+\t\tself.app.quit()\r\n+\r\n+\tdef request_handler(self, ctx, req):\r\n+\t\tbuffers = req.buffers\r\n+\r\n+\t\tfor stream, fb in buffers.items():\r\n+\t\t\twnd = next(wnd for wnd in self.windows if wnd.stream == stream)\r\n+\r\n+\t\t\twnd.handle_request(stream, fb)\r\n+\r\n+\t\tself.state[\"request_prcessed\"](ctx, req)\r\n+\r\n+\tdef cleanup(self):\r\n+\t\tfor w in self.windows:\r\n+\t\t\tw.close()\r\n+\r\n+\r\n+class MainWindow(QtWidgets.QWidget):\r\n+\tdef __init__(self, ctx, stream):\r\n+\t\tsuper().__init__()\r\n+\r\n+\t\tself.ctx = ctx\r\n+\t\tself.stream = stream\r\n+\r\n+\t\tself.label = QtWidgets.QLabel()\r\n+\r\n+\t\twindowLayout = QtWidgets.QHBoxLayout()\r\n+\t\tself.setLayout(windowLayout)\r\n+\r\n+\t\twindowLayout.addWidget(self.label)\r\n+\r\n+\t\tcontrolsLayout = QtWidgets.QVBoxLayout()\r\n+\t\twindowLayout.addLayout(controlsLayout)\r\n+\r\n+\t\twindowLayout.addStretch()\r\n+\r\n+\t\tgroup = QtWidgets.QGroupBox(\"Info\")\r\n+\t\tgroupLayout = QtWidgets.QVBoxLayout()\r\n+\t\tgroup.setLayout(groupLayout)\r\n+\t\tcontrolsLayout.addWidget(group)\r\n+\r\n+\t\tlab = QtWidgets.QLabel(ctx[\"id\"])\r\n+\t\tgroupLayout.addWidget(lab)\r\n+\r\n+\t\tself.frameLabel = QtWidgets.QLabel()\r\n+\t\tgroupLayout.addWidget(self.frameLabel)\r\n+\r\n+\r\n+\t\tgroup = QtWidgets.QGroupBox(\"Properties\")\r\n+\t\tgroupLayout = QtWidgets.QVBoxLayout()\r\n+\t\tgroup.setLayout(groupLayout)\r\n+\t\tcontrolsLayout.addWidget(group)\r\n+\r\n+\t\tcamera = ctx[\"camera\"]\r\n+\r\n+\t\tfor k, v in camera.properties.items():\r\n+\t\t\tlab = 
QtWidgets.QLabel()\r\n+\t\t\tlab.setText(k + \" = \" + str(v))\r\n+\t\t\tgroupLayout.addWidget(lab)\r\n+\r\n+\t\tgroup = QtWidgets.QGroupBox(\"Controls\")\r\n+\t\tgroupLayout = QtWidgets.QVBoxLayout()\r\n+\t\tgroup.setLayout(groupLayout)\r\n+\t\tcontrolsLayout.addWidget(group)\r\n+\r\n+\t\tfor k, (min, max, default) in camera.controls.items():\r\n+\t\t\tlab = QtWidgets.QLabel()\r\n+\t\t\tlab.setText(\"{} = {}/{}/{}\".format(k, min, max, default))\r\n+\t\t\tgroupLayout.addWidget(lab)\r\n+\r\n+\t\tcontrolsLayout.addStretch()\r\n+\r\n+\tdef buf_to_qpixmap(self, stream, fb):\r\n+\t\twith fb.mmap(0) as b:\r\n+\t\t\tcfg = stream.configuration\r\n+\t\t\tw, h = cfg.size\r\n+\t\t\tpitch = cfg.stride\r\n+\r\n+\t\t\tif cfg.pixelFormat == \"MJPEG\":\r\n+\t\t\t\timg = Image.open(BytesIO(b))\r\n+\t\t\t\tqim = ImageQt(img).copy()\r\n+\t\t\t\tpix = QtGui.QPixmap.fromImage(qim)\r\n+\t\t\telse:\r\n+\t\t\t\tdata = np.array(b, dtype=np.uint8)\r\n+\t\t\t\trgb = to_rgb(cfg.pixelFormat, cfg.size, data)\r\n+\r\n+\t\t\t\tif rgb is None:\r\n+\t\t\t\t\traise Exception(\"Format not supported: \" + cfg.pixelFormat)\r\n+\r\n+\t\t\t\tpix = rgb_to_pix(rgb)\r\n+\r\n+\t\treturn pix\r\n+\r\n+\tdef handle_request(self, stream, fb):\r\n+\t\tctx = self.ctx\r\n+\r\n+\t\tpix = self.buf_to_qpixmap(stream, fb)\r\n+\t\tself.label.setPixmap(pix)\r\n+\r\n+\t\tself.frameLabel.setText(\"Queued: {}\\nDone: {}\\nFps: {:.2f}\"\r\n+\t\t\t.format(ctx[\"reqs-queued\"], ctx[\"reqs-completed\"], ctx[\"fps\"]))\r\ndiff --git a/src/py/cam/cam_qtgl.py b/src/py/cam/cam_qtgl.py\r\nnew file mode 100644\r\nindex 00000000..748905de\r\n--- /dev/null\r\n+++ b/src/py/cam/cam_qtgl.py\r\n@@ -0,0 +1,386 @@\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+\r\n+from PyQt5 import QtCore, QtWidgets\r\n+from PyQt5.QtCore import Qt\r\n+\r\n+import math\r\n+import numpy as np\r\n+import os\r\n+import sys\r\n+\r\n+os.environ[\"PYOPENGL_PLATFORM\"] = 
\"egl\"\r\n+\r\n+import OpenGL\r\n+#OpenGL.FULL_LOGGING = True\r\n+\r\n+from OpenGL import GL as gl\r\n+from OpenGL.EGL.EXT.image_dma_buf_import import *\r\n+from OpenGL.EGL.KHR.image import *\r\n+from OpenGL.EGL.VERSION.EGL_1_0 import *\r\n+from OpenGL.EGL.VERSION.EGL_1_2 import *\r\n+from OpenGL.EGL.VERSION.EGL_1_3 import *\r\n+\r\n+from OpenGL.GLES2.OES.EGL_image import *\r\n+from OpenGL.GLES2.OES.EGL_image_external import *\r\n+from OpenGL.GLES2.VERSION.GLES2_2_0 import *\r\n+from OpenGL.GLES3.VERSION.GLES3_3_0 import *\r\n+\r\n+from OpenGL.GL import shaders\r\n+\r\n+from gl_helpers import *\r\n+\r\n+# libcamera format string -> DRM fourcc\r\n+FMT_MAP = {\r\n+\t\"RGB888\": \"RG24\",\r\n+\t\"XRGB8888\": \"XR24\",\r\n+\t\"ARGB8888\": \"AR24\",\r\n+\t\"YUYV\": \"YUYV\",\r\n+}\r\n+\r\n+class EglState:\r\n+\tdef __init__(self):\r\n+\t\tself.create_display()\r\n+\t\tself.choose_config()\r\n+\t\tself.create_context()\r\n+\t\tself.check_extensions()\r\n+\r\n+\tdef create_display(self):\r\n+\t\txdpy = getEGLNativeDisplay()\r\n+\t\tdpy = eglGetDisplay(xdpy)\r\n+\t\tself.display = dpy\r\n+\r\n+\tdef choose_config(self):\r\n+\t\tdpy = self.display\r\n+\r\n+\t\tmajor, minor = EGLint(), EGLint()\r\n+\r\n+\t\tb = eglInitialize(dpy, major, minor)\r\n+\t\tassert(b)\r\n+\r\n+\t\tprint(\"EGL {} {}\".format(\r\n+\t\t\t  eglQueryString(dpy, EGL_VENDOR).decode(),\r\n+\t\t\t  eglQueryString(dpy, EGL_VERSION).decode()))\r\n+\r\n+\t\tcheck_egl_extensions(dpy, [\"EGL_EXT_image_dma_buf_import\"])\r\n+\r\n+\t\tb = eglBindAPI(EGL_OPENGL_ES_API)\r\n+\t\tassert(b)\r\n+\r\n+\t\tdef print_config(dpy, cfg):\r\n+\r\n+\t\t\tdef _getconf(dpy, cfg, a):\r\n+\t\t\t\tvalue = ctypes.c_long()\r\n+\t\t\t\teglGetConfigAttrib(dpy, cfg, a, value)\r\n+\t\t\t\treturn value.value\r\n+\r\n+\t\t\tgetconf = lambda a: _getconf(dpy, cfg, a)\r\n+\r\n+\t\t\tprint(\"EGL Config {}: color buf {}/{}/{}/{} = {}, depth {}, stencil {}, native visualid {}, native visualtype 
{}\".format(\r\n+\t\t\t\tgetconf(EGL_CONFIG_ID),\r\n+\t\t\t\tgetconf(EGL_ALPHA_SIZE),\r\n+\t\t\t\tgetconf(EGL_RED_SIZE),\r\n+\t\t\t\tgetconf(EGL_GREEN_SIZE),\r\n+\t\t\t\tgetconf(EGL_BLUE_SIZE),\r\n+\t\t\t\tgetconf(EGL_BUFFER_SIZE),\r\n+\t\t\t\tgetconf(EGL_DEPTH_SIZE),\r\n+\t\t\t\tgetconf(EGL_STENCIL_SIZE),\r\n+\t\t\t\tgetconf(EGL_NATIVE_VISUAL_ID),\r\n+\t\t\t\tgetconf(EGL_NATIVE_VISUAL_TYPE)))\r\n+\r\n+\t\tif False:\r\n+\t\t\tnum_configs = ctypes.c_long()\r\n+\t\t\teglGetConfigs(dpy, None, 0, num_configs)\r\n+\t\t\tprint(\"{} configs\".format(num_configs.value))\r\n+\r\n+\t\t\tconfigs = (EGLConfig * num_configs.value)()\r\n+\t\t\teglGetConfigs(dpy, configs, num_configs.value, num_configs)\r\n+\t\t\tfor config_id in configs:\r\n+\t\t\t\tprint_config(dpy, config_id)\r\n+\r\n+\r\n+\t\tconfig_attribs = [\r\n+\t\t\tEGL_SURFACE_TYPE, EGL_WINDOW_BIT,\r\n+\t\t\tEGL_RED_SIZE, 8,\r\n+\t\t\tEGL_GREEN_SIZE, 8,\r\n+\t\t\tEGL_BLUE_SIZE, 8,\r\n+\t\t\tEGL_ALPHA_SIZE, 0,\r\n+\t\t\tEGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,\r\n+\t\t\tEGL_NONE,\r\n+\t\t]\r\n+\r\n+\t\tn = EGLint()\r\n+\t\tconfigs = (EGLConfig * 1)()\r\n+\t\tb = eglChooseConfig(dpy, config_attribs, configs, 1, n)\r\n+\t\tassert(b and n.value == 1)\r\n+\t\tconfig = configs[0]\r\n+\r\n+\t\tprint(\"Chosen Config:\")\r\n+\t\tprint_config(dpy, config)\r\n+\r\n+\t\tself.config = config\r\n+\r\n+\tdef create_context(self):\r\n+\t\tdpy = self.display\r\n+\r\n+\t\tcontext_attribs = [\r\n+\t\t\tEGL_CONTEXT_CLIENT_VERSION, 2,\r\n+\t\t\tEGL_NONE,\r\n+\t\t]\r\n+\r\n+\t\tcontext = eglCreateContext(dpy, self.config, EGL_NO_CONTEXT, context_attribs)\r\n+\t\tassert(context)\r\n+\r\n+\t\tb = eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, context)\r\n+\t\tassert(b)\r\n+\r\n+\t\tself.context = context\r\n+\r\n+\tdef check_extensions(self):\r\n+\t\tcheck_gl_extensions([\"GL_OES_EGL_image\"])\r\n+\r\n+\t\tassert(eglCreateImageKHR)\r\n+\t\tassert(eglDestroyImageKHR)\r\n+\t\tassert(glEGLImageTargetTexture2DOES)\r\n+\r\n+\r\n+class 
QtRenderer:\r\n+\tdef __init__(self, state):\r\n+\t\tself.state = state\r\n+\r\n+\tdef setup(self):\r\n+\t\tself.app = QtWidgets.QApplication([])\r\n+\r\n+\t\twindow = MainWindow(self.state)\r\n+\t\twindow.setAttribute(QtCore.Qt.WA_ShowWithoutActivating)\r\n+\t\twindow.show()\r\n+\r\n+\t\tself.window = window\r\n+\r\n+\tdef run(self):\r\n+\t\tcamnotif = QtCore.QSocketNotifier(self.state[\"cm\"].efd, QtCore.QSocketNotifier.Read)\r\n+\t\tcamnotif.activated.connect(lambda x: self.readcam())\r\n+\r\n+\t\tkeynotif = QtCore.QSocketNotifier(sys.stdin.fileno(), QtCore.QSocketNotifier.Read)\r\n+\t\tkeynotif.activated.connect(lambda x: self.readkey())\r\n+\r\n+\t\tprint(\"Capturing...\")\r\n+\r\n+\t\tself.app.exec()\r\n+\r\n+\t\tprint(\"Exiting...\")\r\n+\r\n+\tdef readcam(self):\r\n+\t\trunning = self.state[\"event_handler\"](self.state)\r\n+\r\n+\t\tif not running:\r\n+\t\t\tself.app.quit()\r\n+\r\n+\tdef readkey(self):\r\n+\t\tsys.stdin.readline()\r\n+\t\tself.app.quit()\r\n+\r\n+\tdef request_handler(self, ctx, req):\r\n+\t\tself.window.handle_request(ctx, req)\r\n+\r\n+\tdef cleanup(self):\r\n+\t\tself.window.close()\r\n+\r\n+\r\n+class MainWindow(QtWidgets.QWidget):\r\n+\tdef __init__(self, state):\r\n+\t\tsuper().__init__()\r\n+\r\n+\t\tself.setAttribute(Qt.WA_PaintOnScreen)\r\n+\t\tself.setAttribute(Qt.WA_NativeWindow)\r\n+\r\n+\t\tself.state = state\r\n+\r\n+\t\tself.textures = {}\r\n+\t\tself.reqqueue = {}\r\n+\t\tself.current = {}\r\n+\r\n+\t\tfor ctx in self.state[\"contexts\"]:\r\n+\r\n+\t\t\tself.reqqueue[ctx[\"idx\"]] = []\r\n+\t\t\tself.current[ctx[\"idx\"]] = []\r\n+\r\n+\t\t\tfor stream in ctx[\"streams\"]:\r\n+\t\t\t\tfmt = stream.configuration.pixelFormat\r\n+\t\t\t\tsize = stream.configuration.size\r\n+\r\n+\t\t\t\tif not fmt in FMT_MAP:\r\n+\t\t\t\t\traise Exception(\"Unsupported pixel format: \" + str(fmt))\r\n+\r\n+\t\t\t\tself.textures[stream] = None\r\n+\r\n+\t\tnum_tiles = len(self.textures)\r\n+\t\tself.num_columns = 
math.ceil(math.sqrt(num_tiles))\r\n+\t\tself.num_rows = math.ceil(num_tiles / self.num_columns)\r\n+\r\n+\t\tself.egl = EglState()\r\n+\r\n+\t\tself.surface = None\r\n+\r\n+\tdef paintEngine(self):\r\n+\t\treturn None\r\n+\r\n+\tdef create_surface(self):\r\n+\t\tnative_surface = c_void_p(self.winId().__int__())\r\n+\t\tsurface = eglCreateWindowSurface(self.egl.display, self.egl.config,\r\n+\t\t\t\t\t\t\t\t\t\t native_surface, None)\r\n+\r\n+\t\tb = eglMakeCurrent(self.egl.display, self.surface, self.surface, self.egl.context)\r\n+\t\tassert(b)\r\n+\r\n+\t\tself.surface = surface\r\n+\r\n+\tdef init_gl(self):\r\n+\t\tself.create_surface()\r\n+\r\n+\t\tvertShaderSrc = \"\"\"\r\n+\t\t\tattribute vec2 aPosition;\r\n+\t\t\tvarying vec2 texcoord;\r\n+\r\n+\t\t\tvoid main()\r\n+\t\t\t{\r\n+\t\t\t\tgl_Position = vec4(aPosition * 2.0 - 1.0, 0.0, 1.0);\r\n+\t\t\t\ttexcoord.x = aPosition.x;\r\n+\t\t\t\ttexcoord.y = 1.0 - aPosition.y;\r\n+\t\t\t}\r\n+\t\t\"\"\"\r\n+\t\tfragShaderSrc = \"\"\"\r\n+\t\t\t#extension GL_OES_EGL_image_external : enable\r\n+\t\t\tprecision mediump float;\r\n+\t\t\tvarying vec2 texcoord;\r\n+\t\t\tuniform samplerExternalOES texture;\r\n+\r\n+\t\t\tvoid main()\r\n+\t\t\t{\r\n+\t\t\t\tgl_FragColor = texture2D(texture, texcoord);\r\n+\t\t\t}\r\n+\t\t\"\"\"\r\n+\r\n+\t\tprogram = shaders.compileProgram(\r\n+\t\t\tshaders.compileShader(vertShaderSrc, GL_VERTEX_SHADER),\r\n+\t\t\tshaders.compileShader(fragShaderSrc, GL_FRAGMENT_SHADER)\r\n+\t\t)\r\n+\r\n+\t\tglUseProgram(program)\r\n+\r\n+\t\tglClearColor(0.5, 0.8, 0.7, 1.0)\r\n+\r\n+\t\tvertPositions = [\r\n+\t\t\t 0.0,  0.0,\r\n+\t\t\t 1.0,  0.0,\r\n+\t\t\t 1.0,  1.0,\r\n+\t\t\t 0.0,  1.0\r\n+\t\t]\r\n+\r\n+\t\tinputAttrib = glGetAttribLocation(program, \"aPosition\")\r\n+\t\tglVertexAttribPointer(inputAttrib, 2, GL_FLOAT, GL_FALSE, 0, vertPositions)\r\n+\t\tglEnableVertexAttribArray(inputAttrib)\r\n+\r\n+\r\n+\tdef create_texture(self, stream, fb):\r\n+\t\tcfg = stream.configuration\r\n+\t\tfmt = 
cfg.pixelFormat\r\n+\t\tfmt = str_to_fourcc(FMT_MAP[fmt])\r\n+\t\tw, h = cfg.size\r\n+\r\n+\t\tattribs = [\r\n+\t\t\tEGL_WIDTH, w,\r\n+\t\t\tEGL_HEIGHT, h,\r\n+\t\t\tEGL_LINUX_DRM_FOURCC_EXT, fmt,\r\n+\t\t\tEGL_DMA_BUF_PLANE0_FD_EXT, fb.fd(0),\r\n+\t\t\tEGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,\r\n+\t\t\tEGL_DMA_BUF_PLANE0_PITCH_EXT, cfg.stride,\r\n+\t\t\tEGL_NONE,\r\n+\t\t]\r\n+\r\n+\t\timage = eglCreateImageKHR(self.egl.display,\r\n+\t\t\t\t\t\t\t\t  EGL_NO_CONTEXT,\r\n+\t\t\t\t\t\t\t\t  EGL_LINUX_DMA_BUF_EXT,\r\n+\t\t\t\t\t\t\t\t  None,\r\n+\t\t\t\t\t\t\t\t  attribs)\r\n+\t\tassert(image)\r\n+\r\n+\t\ttextures = glGenTextures(1)\r\n+\t\tglBindTexture(GL_TEXTURE_EXTERNAL_OES, textures)\r\n+\t\tglTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR)\r\n+\t\tglTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR)\r\n+\t\tglTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)\r\n+\t\tglTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)\r\n+\t\tglEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image)\r\n+\r\n+\t\treturn textures\r\n+\r\n+\tdef resizeEvent(self, event):\r\n+\t\tsize = event.size()\r\n+\r\n+\t\tprint(\"Resize\", size)\r\n+\r\n+\t\tsuper().resizeEvent(event)\r\n+\r\n+\t\tif self.surface == None:\r\n+\t\t\treturn\r\n+\r\n+\t\tglViewport(0, 0, size.width()//2, size.height())\r\n+\r\n+\tdef paintEvent(self, event):\r\n+\t\tif self.surface == None:\r\n+\t\t\tself.init_gl()\r\n+\r\n+\t\tfor ctx_idx, queue in self.reqqueue.items():\r\n+\t\t\tif len(queue) == 0:\r\n+\t\t\t\tcontinue\r\n+\r\n+\t\t\tctx = next(ctx for ctx in self.state[\"contexts\"] if ctx[\"idx\"] == ctx_idx)\r\n+\r\n+\t\t\tif self.current[ctx_idx]:\r\n+\t\t\t\told = self.current[ctx_idx]\r\n+\t\t\t\tself.current[ctx_idx] = None\r\n+\t\t\t\tself.state[\"request_prcessed\"](ctx, old)\r\n+\r\n+\t\t\tnext_req = queue.pop(0)\r\n+\t\t\tself.current[ctx_idx] = next_req\r\n+\r\n+\t\t\tstream, fb = 
next(iter(next_req.buffers.items()))\r\n+\r\n+\t\t\tself.textures[stream] = self.create_texture(stream, fb)\r\n+\r\n+\t\tself.paint_gl()\r\n+\r\n+\tdef paint_gl(self):\r\n+\t\tb = eglMakeCurrent(self.egl.display, self.surface, self.surface, self.egl.context)\r\n+\t\tassert(b)\r\n+\r\n+\t\tglClear(GL_COLOR_BUFFER_BIT)\r\n+\r\n+\t\tsize = self.size()\r\n+\r\n+\t\tfor idx,ctx in enumerate(self.state[\"contexts\"]):\r\n+\t\t\tfor stream in ctx[\"streams\"]:\r\n+\t\t\t\tif self.textures[stream] == None:\r\n+\t\t\t\t\tcontinue\r\n+\r\n+\t\t\t\tw = size.width() // self.num_columns\r\n+\t\t\t\th = size.height() // self.num_rows\r\n+\r\n+\t\t\t\tx = idx % self.num_columns\r\n+\t\t\t\ty = idx // self.num_columns\r\n+\r\n+\t\t\t\tx *= w\r\n+\t\t\t\ty *= h\r\n+\r\n+\t\t\t\tglViewport(x, y, w, h)\r\n+\r\n+\t\t\t\tglBindTexture(GL_TEXTURE_EXTERNAL_OES, self.textures[stream])\r\n+\t\t\t\tglDrawArrays(GL_TRIANGLE_FAN, 0, 4)\r\n+\r\n+\t\tb = eglSwapBuffers(self.egl.display, self.surface)\r\n+\t\tassert(b)\r\n+\r\n+\tdef handle_request(self, ctx, req):\r\n+\t\tself.reqqueue[ctx[\"idx\"]].append(req)\r\n+\t\tself.update()\r\ndiff --git a/src/py/cam/gl_helpers.py b/src/py/cam/gl_helpers.py\r\nnew file mode 100644\r\nindex 00000000..a80b03b2\r\n--- /dev/null\r\n+++ b/src/py/cam/gl_helpers.py\r\n@@ -0,0 +1,67 @@\r\n+# SPDX-License-Identifier: GPL-2.0-or-later\r\n+# Copyright (C) 2021, Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>\r\n+\r\n+from OpenGL.EGL.VERSION.EGL_1_0 import EGLNativeDisplayType, eglGetProcAddress, eglQueryString, EGL_EXTENSIONS\r\n+\r\n+from OpenGL.raw.GLES2 import _types as _cs\r\n+from OpenGL.GLES2.VERSION.GLES2_2_0 import *\r\n+from OpenGL.GLES3.VERSION.GLES3_3_0 import *\r\n+from OpenGL import GL as gl\r\n+\r\n+from ctypes import c_int, c_char_p, c_void_p, cdll, POINTER, util, \\\r\n+\tpointer, CFUNCTYPE, c_bool\r\n+\r\n+def getEGLNativeDisplay():\r\n+\t_x11lib = cdll.LoadLibrary(util.find_library(\"X11\"))\r\n+\tXOpenDisplay = 
_x11lib.XOpenDisplay\r\n+\tXOpenDisplay.argtypes = [c_char_p]\r\n+\tXOpenDisplay.restype = POINTER(EGLNativeDisplayType)\r\n+\r\n+\txdpy = XOpenDisplay(None)\r\n+\treturn xdpy\r\n+# Hack. PyOpenGL doesn't seem to manage to find glEGLImageTargetTexture2DOES.\r\n+def getglEGLImageTargetTexture2DOES():\r\n+\tfuncptr = eglGetProcAddress(\"glEGLImageTargetTexture2DOES\")\r\n+\tprototype = CFUNCTYPE(None,_cs.GLenum,_cs.GLeglImageOES)\r\n+\treturn prototype(funcptr)\r\n+\r\n+glEGLImageTargetTexture2DOES = getglEGLImageTargetTexture2DOES()\r\n+\r\n+\r\n+def str_to_fourcc(str):\r\n+\tassert(len(str) == 4)\r\n+\tfourcc = 0\r\n+\tfor i,v in enumerate([ord(c) for c in str]):\r\n+\t\tfourcc |= v << (i * 8)\r\n+\treturn fourcc\r\n+\r\n+def get_gl_extensions():\r\n+\tn = GLint()\r\n+\tglGetIntegerv(GL_NUM_EXTENSIONS, n)\r\n+\tgl_extensions = []\r\n+\tfor i in range(n.value):\r\n+\t\tgl_extensions.append(gl.glGetStringi(GL_EXTENSIONS, i).decode())\r\n+\treturn gl_extensions\r\n+\r\n+def check_gl_extensions(required_extensions):\r\n+\textensions = get_gl_extensions()\r\n+\r\n+\tif False:\r\n+\t\tprint(\"GL EXTENSIONS: \", \" \".join(extensions))\r\n+\r\n+\tfor ext in required_extensions:\r\n+\t\tif not ext in extensions:\r\n+\t\t\traise Exception(ext + \" missing\")\r\n+\r\n+def get_egl_extensions(egl_display):\r\n+\treturn eglQueryString(egl_display, EGL_EXTENSIONS).decode().split(\" \")\r\n+\r\n+def check_egl_extensions(egl_display, required_extensions):\r\n+\textensions = get_egl_extensions(egl_display)\r\n+\r\n+\tif False:\r\n+\t\tprint(\"EGL EXTENSIONS: \", \" \".join(extensions))\r\n+\r\n+\tfor ext in required_extensions:\r\n+\t\tif not ext in extensions:\r\n+\t\t\traise Exception(ext + \" missing\")","prefixes":["libcamera-devel","v5","3/3"]}
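Editor's note on the demosaic averaging in the patch: the `as_strided` + `np.einsum('ijkl->ij', ...)` pattern used there computes per-pixel sliding-window sums without copying the data. A minimal standalone sketch of that pattern follows; the array and window size here are illustrative toy values, not taken from the patch:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# 4x4 toy image; compute 2x2 sliding-window sums the same way the
# patch's demosaic code does: a strided 4-D view plus an einsum
# reduction (np.sum over a copy would give the same result, slower).
a = np.arange(16, dtype=np.int64).reshape(4, 4)
window = (2, 2)

# view[i, j, k, l] == a[i + k, j + l]; no data is copied, the view
# just reuses the original row/column strides twice.
view = as_strided(a, shape=(3, 3) + window, strides=a.strides * 2)

# Sum over each 2x2 window ('ijkl->ij' reduces the two window axes).
sums = np.einsum('ijkl->ij', view)

print(sums[0, 0])  # a[0,0]+a[0,1]+a[1,0]+a[1,1] = 0+1+4+5 = 10
```

The patch applies this twice, once to the accumulated plane and once to the bayer sample-count mask, then divides the two sums to get the weighted average per output pixel.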
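Editor's note on `str_to_fourcc` in gl_helpers.py: a DRM fourcc is the four ASCII characters packed little-endian, so the `FMT_MAP` entry "XR24" (XRGB8888) packs to 0x34325258. A standalone sketch of the same packing (the `fourcc` name here is illustrative, not from the patch):

```python
def fourcc(code: str) -> int:
    # Pack four ASCII characters little-endian: byte 0 is the first
    # character, byte 3 the last, matching str_to_fourcc in the patch.
    assert len(code) == 4
    value = 0
    for i, c in enumerate(code):
        value |= ord(c) << (i * 8)
    return value

print(hex(fourcc("XR24")))  # 0x34325258, i.e. DRM_FORMAT_XRGB8888
```

This is the value handed to EGL as EGL_LINUX_DRM_FOURCC_EXT when importing the frame buffer dma-buf as an EGLImage.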