{"id":24747,"url":"https://patchwork.libcamera.org/api/1.1/patches/24747/?format=json","web_url":"https://patchwork.libcamera.org/patch/24747/","project":{"id":1,"url":"https://patchwork.libcamera.org/api/1.1/projects/1/?format=json","name":"libcamera","link_name":"libcamera","list_id":"libcamera_core","list_email":"libcamera-devel@lists.libcamera.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20251023144841.403689-21-stefan.klug@ideasonboard.com>","date":"2025-10-23T14:48:21","name":"[v2,20/35] libcamera: converter: Add dw100 vertex map class","commit_ref":null,"pull_url":null,"state":"superseded","archived":false,"hash":"d35339a0f227a75bc5f5a52457203ea069eb15e4","submitter":{"id":184,"url":"https://patchwork.libcamera.org/api/1.1/people/184/?format=json","name":"Stefan Klug","email":"stefan.klug@ideasonboard.com"},"delegate":null,"mbox":"https://patchwork.libcamera.org/patch/24747/mbox/","series":[{"id":5520,"url":"https://patchwork.libcamera.org/api/1.1/series/5520/?format=json","web_url":"https://patchwork.libcamera.org/project/libcamera/list/?series=5520","date":"2025-10-23T14:48:01","name":"Full dewarper support on imx8mp","version":2,"mbox":"https://patchwork.libcamera.org/series/5520/mbox/"}],"comments":"https://patchwork.libcamera.org/api/patches/24747/comments/","check":"pending","checks":"https://patchwork.libcamera.org/api/patches/24747/checks/","tags":{},"headers":{"Return-Path":"<libcamera-devel-bounces@lists.libcamera.org>","X-Original-To":"parsemail@patchwork.libcamera.org","Delivered-To":"parsemail@patchwork.libcamera.org","Received":["from lancelot.ideasonboard.com (lancelot.ideasonboard.com\n\t[92.243.16.209])\n\tby patchwork.libcamera.org (Postfix) with ESMTPS id 4AADFC3334\n\tfor <parsemail@patchwork.libcamera.org>;\n\tThu, 23 Oct 2025 14:49:48 +0000 (UTC)","from lancelot.ideasonboard.com (localhost [IPv6:::1])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTP id 01C586083A;\n\tThu, 23 Oct 2025 16:49:48 +0200 (CEST)","from 
perceval.ideasonboard.com (perceval.ideasonboard.com\n\t[IPv6:2001:4b98:dc2:55:216:3eff:fef7:d647])\n\tby lancelot.ideasonboard.com (Postfix) with ESMTPS id 9138F6081B\n\tfor <libcamera-devel@lists.libcamera.org>;\n\tThu, 23 Oct 2025 16:49:46 +0200 (CEST)","from ideasonboard.com (unknown\n\t[IPv6:2a00:6020:448c:6c00:7328:357b:4ce1:72b6])\n\tby perceval.ideasonboard.com (Postfix) with UTF8SMTPSA id 841631127; \n\tThu, 23 Oct 2025 16:48:01 +0200 (CEST)"],"Authentication-Results":"lancelot.ideasonboard.com; dkim=pass (1024-bit key;\n\tunprotected) header.d=ideasonboard.com header.i=@ideasonboard.com\n\theader.b=\"q2j6+grh\"; dkim-atps=neutral","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple; d=ideasonboard.com;\n\ts=mail; t=1761230881;\n\tbh=LbhVCKPmpquMvTodiUYckjrhEt0/lz50k6XNAmtyidg=;\n\th=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n\tb=q2j6+grh5C3iKJIJZ4WL2LYKeoyr945BNugpty5zDM7Q7rZI8NCH+W4FGpyu9JkcI\n\taqyGpXpIL0yMIJiptXu1hNZ7RN98AMSFlYop3OsHtoWLzQxH4RTLrFpbGxpuWEdPV2\n\tdlbIekmMQGzCPuR/bh8g2vubwLFNoRUNwUUKRfGM=","From":"Stefan Klug <stefan.klug@ideasonboard.com>","To":"libcamera-devel@lists.libcamera.org","Cc":"Stefan Klug <stefan.klug@ideasonboard.com>","Subject":"[PATCH v2 20/35] libcamera: converter: Add dw100 vertex map class","Date":"Thu, 23 Oct 2025 16:48:21 +0200","Message-ID":"<20251023144841.403689-21-stefan.klug@ideasonboard.com>","X-Mailer":"git-send-email 
2.48.1","In-Reply-To":"<20251023144841.403689-1-stefan.klug@ideasonboard.com>","References":"<20251023144841.403689-1-stefan.klug@ideasonboard.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-BeenThere":"libcamera-devel@lists.libcamera.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<libcamera-devel.lists.libcamera.org>","List-Unsubscribe":"<https://lists.libcamera.org/options/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=unsubscribe>","List-Archive":"<https://lists.libcamera.org/pipermail/libcamera-devel/>","List-Post":"<mailto:libcamera-devel@lists.libcamera.org>","List-Help":"<mailto:libcamera-devel-request@lists.libcamera.org?subject=help>","List-Subscribe":"<https://lists.libcamera.org/listinfo/libcamera-devel>,\n\t<mailto:libcamera-devel-request@lists.libcamera.org?subject=subscribe>","Errors-To":"libcamera-devel-bounces@lists.libcamera.org","Sender":"\"libcamera-devel\" <libcamera-devel-bounces@lists.libcamera.org>"},"content":"Using a custom vertex map, the dw100 dewarper is capable of performing\ncomplex and useful transformations on the image data. This class\nimplements a pipeline featuring:\n- Arbitrary ScalerCrop\n- Full transform support (Flip, 90deg rotations)\n- Arbitrary move, scale, rotate\n\nScalerCrop and Transform are implemented to provide an interface that is\nstandardized libcamera-wide. The rest is implemented on top for more\nflexible dw100-specific features.\n\nSigned-off-by: Stefan Klug <stefan.klug@ideasonboard.com>\n\n---\n\nChanges in v2:\n- Replaced manual transforms with an affine transformation matrix\n- Changed rotation direction to be in sync with the rotation in\n  CameraConfiguration::orientation\n- Changed offset parameter to be in ScalerCrop coordinates. 
This is\n  easier to explain and has the added benefit, that Scale/Rotate is\nalways centered to the visible image.\n- Improved code comments\n- Make dw100VerticesForLength a local function\n- Dropped unnecessary includes\n- Added documentation\n\nChanges in v0.9\n- Include header in meson.build\n- Fix black line at top and left when rotation 180 degrees\n\nChanges in v0.8\n- Cleanup & formatting\n\nChanges in v0.5\n- Fix crash in std::clamp() due to rounding errors\n---\n .../converter/converter_dw100_vertexmap.h     |  76 +++\n .../libcamera/internal/converter/meson.build  |   1 +\n .../converter/converter_dw100_vertexmap.cpp   | 566 ++++++++++++++++++\n src/libcamera/converter/meson.build           |   1 +\n 4 files changed, 644 insertions(+)\n create mode 100644 include/libcamera/internal/converter/converter_dw100_vertexmap.h\n create mode 100644 src/libcamera/converter/converter_dw100_vertexmap.cpp","diff":"diff --git a/include/libcamera/internal/converter/converter_dw100_vertexmap.h b/include/libcamera/internal/converter/converter_dw100_vertexmap.h\nnew file mode 100644\nindex 000000000000..e72cb72bb9f1\n--- /dev/null\n+++ b/include/libcamera/internal/converter/converter_dw100_vertexmap.h\n@@ -0,0 +1,76 @@\n+#pragma once\n+\n+#include <assert.h>\n+#include <cmath>\n+#include <stdint.h>\n+#include <vector>\n+\n+#include <libcamera/base/span.h>\n+\n+#include <libcamera/geometry.h>\n+#include <libcamera/transform.h>\n+\n+namespace libcamera {\n+\n+class Dw100VertexMap\n+{\n+public:\n+\tenum ScaleMode {\n+\t\tFill = 0,\n+\t\tCrop = 1,\n+\t};\n+\n+\tvoid applyLimits();\n+\tvoid setInputSize(const Size &size)\n+\t{\n+\t\tinputSize_ = size;\n+\t\tscalerCrop_ = Rectangle(size);\n+\t}\n+\n+\tvoid setSensorCrop(const Rectangle &rect) { sensorCrop_ = rect; }\n+\n+\tvoid setScalerCrop(const Rectangle &rect) { scalerCrop_ = rect; }\n+\tconst Rectangle &effectiveScalerCrop() const { return effectiveScalerCrop_; }\n+\tstd::pair<Rectangle, Rectangle> scalerCropBounds() 
const\n+\t{\n+\t\treturn { Rectangle(sensorCrop_.x, sensorCrop_.y, 1, 1),\n+\t\t\t sensorCrop_ };\n+\t}\n+\n+\tvoid setOutputSize(const Size &size) { outputSize_ = size; }\n+\tconst Size &outputSize() const { return outputSize_; }\n+\n+\tvoid setTransform(const Transform &transform) { transform_ = transform; }\n+\tconst Transform &transform() const { return transform_; }\n+\n+\tvoid setScale(const float scale) { scale_ = scale; }\n+\tfloat effectiveScale() const { return (effectiveScaleX_ + effectiveScaleY_) * 0.5; }\n+\n+\tvoid setRotation(const float rotation) { rotation_ = rotation; }\n+\tfloat rotation() const { return rotation_; }\n+\n+\tvoid setOffset(const Point &offset) { offset_ = offset; }\n+\tconst Point &effectiveOffset() const { return effectiveOffset_; }\n+\n+\tvoid setMode(const ScaleMode mode) { mode_ = mode; }\n+\tScaleMode mode() const { return mode_; }\n+\n+\tstd::vector<uint32_t> getVertexMap();\n+\n+private:\n+\tRectangle scalerCrop_;\n+\tRectangle sensorCrop_;\n+\tTransform transform_ = Transform::Identity;\n+\tSize inputSize_;\n+\tSize outputSize_;\n+\tPoint offset_;\n+\tdouble scale_ = 1.0;\n+\tdouble rotation_ = 0.0;\n+\tScaleMode mode_ = Fill;\n+\tdouble effectiveScaleX_;\n+\tdouble effectiveScaleY_;\n+\tPoint effectiveOffset_;\n+\tRectangle effectiveScalerCrop_;\n+};\n+\n+} /* namespace libcamera */\ndiff --git a/include/libcamera/internal/converter/meson.build b/include/libcamera/internal/converter/meson.build\nindex 85007a4b0f8b..128c644cb73f 100644\n--- a/include/libcamera/internal/converter/meson.build\n+++ b/include/libcamera/internal/converter/meson.build\n@@ -2,5 +2,6 @@\n \n libcamera_internal_headers += files([\n     'converter_dw100.h',\n+    'converter_dw100_vertexmap.h',\n     'converter_v4l2_m2m.h',\n ])\ndiff --git a/src/libcamera/converter/converter_dw100_vertexmap.cpp b/src/libcamera/converter/converter_dw100_vertexmap.cpp\nnew file mode 100644\nindex 000000000000..0e930479b6f7\n--- /dev/null\n+++ 
b/src/libcamera/converter/converter_dw100_vertexmap.cpp\n@@ -0,0 +1,566 @@\n+#include \"libcamera/internal/converter/converter_dw100_vertexmap.h\"\n+\n+#include <algorithm>\n+#include <assert.h>\n+#include <cmath>\n+#include <stdint.h>\n+#include <utility>\n+#include <vector>\n+\n+#include <libcamera/base/log.h>\n+#include <libcamera/base/span.h>\n+\n+#include <libcamera/geometry.h>\n+#include <libcamera/transform.h>\n+\n+#include \"libcamera/internal/vector.h\"\n+\n+constexpr int kDw100BlockSize = 16;\n+\n+namespace libcamera {\n+\n+LOG_DECLARE_CATEGORY(Converter)\n+namespace {\n+\n+using Vector2d = Vector<double, 2>;\n+using Vector3d = Vector<double, 3>;\n+using Matrix3x3 = Matrix<double, 3, 3>;\n+\n+Matrix3x3 makeTranslate(const double tx, const double ty)\n+{\n+\tMatrix3x3 m = Matrix3x3::identity();\n+\tm[0][2] = tx;\n+\tm[1][2] = ty;\n+\treturn m;\n+}\n+\n+Matrix3x3 makeTranslate(const Vector2d &t)\n+{\n+\treturn makeTranslate(t.x(), t.y());\n+}\n+\n+Matrix3x3 makeRotate(const double degrees)\n+{\n+\tdouble rad = degrees / 180.0 * M_PI;\n+\tdouble sa = std::sin(rad);\n+\tdouble ca = std::cos(rad);\n+\n+\tMatrix3x3 m = Matrix3x3::identity();\n+\tm[0][0] = ca;\n+\tm[0][1] = -sa;\n+\tm[1][0] = sa;\n+\tm[1][1] = ca;\n+\treturn m;\n+}\n+\n+Matrix3x3 makeScale(const double sx, const double sy)\n+{\n+\tMatrix3x3 m = Matrix3x3::identity();\n+\tm[0][0] = sx;\n+\tm[1][1] = sy;\n+\treturn m;\n+}\n+\n+/**\n+ * \\param t The transform to apply\n+ * \\param size The size of the rectangle that is transformed\n+ *\n+ * Create a matrix that represents the transform done by the \\a t. 
It assumes\n+ * that the origin of the coordinate system is at the top left corner of of the\n+ * rectangle.\n+ */\n+Matrix3x3 makeTransform(const Transform &t, const Size &size)\n+{\n+\tMatrix3x3 m = Matrix3x3::identity();\n+\tdouble wm = size.width * 0.5;\n+\tdouble hm = size.height * 0.5;\n+\tm = makeTranslate(-wm, -hm) * m;\n+\n+\tif (!!(t & Transform::HFlip))\n+\t\tm = makeScale(-1, 1) * m;\n+\n+\tif (!!(t & Transform::VFlip))\n+\t\tm = makeScale(1, -1) * m;\n+\n+\tif (!!(t & Transform::Transpose)) {\n+\t\tm = makeRotate(-90) * m;\n+\t\tm = makeScale(1, -1) * m;\n+\t\tstd::swap(wm, hm);\n+\t}\n+\n+\tm = makeTranslate(wm, hm) * m;\n+\n+\treturn m;\n+}\n+\n+/**\n+ * \\param from The source rectangle\n+ * \\param to The destination rectangle\n+ *\n+ * Create a matrix that transforms from the coordinate system of rectangle \\a\n+ * from into the coordinate system of rectangle \\a to, by overlaying the\n+ * rectangles.\n+ *\n+ * \\see Rectangle::transformedBetween()\n+ */\n+Matrix3x3 makeTransform(const Rectangle &from, const Rectangle &to)\n+{\n+\tMatrix3x3 m = Matrix3x3::identity();\n+\tdouble sx = to.width / static_cast<double>(from.width);\n+\tdouble sy = to.height / static_cast<double>(from.height);\n+\tm = makeTranslate(-from.x, -from.y) * m;\n+\tm = makeScale(sx, sy) * m;\n+\tm = makeTranslate(to.x, to.y) * m;\n+\treturn m;\n+}\n+\n+Vector2d transformPoint(const Matrix3x3 &m, const Vector2d &p)\n+{\n+\tVector3d p2{ { p.x(), p.y(), 1.0 } };\n+\tp2 = m * p2;\n+\treturn { { p2.x() / p2.z(), p2.y() / p2.z() } };\n+}\n+\n+Vector2d transformVector(const Matrix3x3 &m, const Vector2d &p)\n+{\n+\tVector3d p2{ { p.x(), p.y(), 0.0 } };\n+\tp2 = m * p2;\n+\treturn { { p2.x(), p2.y() } };\n+}\n+\n+Vector2d rotatedRectSize(const Vector2d &size, const double degrees)\n+{\n+\tdouble rad = degrees / 180.0 * M_PI;\n+\tdouble sa = sin(rad);\n+\tdouble ca = cos(rad);\n+\n+\treturn { { std::abs(size.x() * ca) + std::abs(size.y() * sa),\n+\t\t   std::abs(size.x() * sa) + 
std::abs(size.y() * ca) } };\n+}\n+\n+Vector2d point2Vec2d(const Point &p)\n+{\n+\treturn { { static_cast<double>(p.x), static_cast<double>(p.y) } };\n+}\n+\n+int dw100VerticesForLength(const int length)\n+{\n+\treturn (length + kDw100BlockSize - 1) / kDw100BlockSize + 1;\n+}\n+\n+} /* namespace */\n+\n+/**\n+ * \\class libcamera::Dw100VertexMap\n+ * \\brief Helper class to compute dw100 vertex maps\n+ *\n+ * The vertex map class represents a helper for handling dewarper vertex maps.\n+ * There are 3 important sizes in the system:\n+ *\n+ * - The sensor size. The number of pixels of the whole sensor (\\todo specify\n+ *    the crop rectangle).\n+ * - The input rectangle to the dewarper. Describes the pixel data flowing into\n+ *   the dewarper in sensor coordinates.\n+ * - ScalerCrop rectangle. The rectangle that shall be used for all further\n+ *   stages. It is applied after lens dewarping but is in sensor coordinate\n+ *   space.\n+ * - The output size. This defines the size, the dewarper should output.\n+ *\n+ * +------------------------+\n+ * |Sensor size             |\n+ * |   +----------------+   |\n+ * |   |  Input rect    |   |\n+ * |   |  +-------------+   |\n+ * |   |  | ScalerCrop  |   |\n+ * |   |  |             |   |\n+ * |   +--+-------------+   |\n+ * +------------------------+\n+ *\n+ * This class implements a vertex map that forms the following pipeline:\n+ *\n+ * +-------------+    +-------------+    +------------+    +-----------------+\n+ * |             | -> |             |    | Transform  |    | Pan/Zoom        |\n+ * | Lens Dewarp | -> | Scaler Crop | -> | (H/V Flip, | -> | (Offset, Scale, |\n+ * |             |    |             |    | Transpose) |    | Rotate)         |\n+ * +-------------+    +-------------+    +------------+    +-----------------+\n+ *\n+ * \\todo Lens dewarp is not yet implemented. 
An identity map is used instead.\n+ *\n+ * All parameters are clamped to valid values before creating the vertex map.\n+ *\n+ * The constraining process works as follows:\n+ * - The ScalerCrop rectangle is clamped to the input rectangle\n+ * - The ScalerCrop rectangle is transformed by the specified transform,\n+ *   forming ScalerCropT\n+ * - A rectangle of output size is placed in the center of ScalerCropT\n+ *   (OutputRect)\n+ * - Rotation is applied to OutputRect\n+ * - Scale is applied, but clamped so that the OutputRect fits completely into\n+ *   ScalerCropT (only regarding dimensions, not position)\n+ * - Offset is clamped so that the OutputRect lies inside ScalerCropT\n+ *\n+ * The lens dewarp map is usually calibrated during tuning and maps incoming\n+ * pixels to dewarped pixels.\n+ */\n+\n+/**\n+ * \\enum Dw100VertexMap::ScaleMode\n+ * \\brief The scale modes available for a vertex map\n+ *\n+ * \\var Dw100VertexMap::Fill\n+ * \\brief Scale the input to fill the output\n+ *\n+ * This scale mode does not preserve the aspect ratio. Offset and rotation are\n+ * taken into account.\n+ *\n+ * \\var Dw100VertexMap::Crop\n+ * \\brief Crop the input\n+ *\n+ * This scale mode preserves the aspect ratio. 
Offset, scale, rotation are taken\n+ * into account within the possible limits.\n+ */\n+\n+/**\n+ * \\brief Apply limits on scale and offset\n+ *\n+ * This function calculates \\a effectiveScalerCrop_, \\a effectiveScale_ and \\a\n+ * effectiveOffset_ based on the requested scaler crop, scale, rotation, offset\n+ * and the selected scale mode, so that the whole output area is filled with\n+ * valid input data.\n+ */\n+void Dw100VertexMap::applyLimits()\n+{\n+\tint ow = outputSize_.width;\n+\tint oh = outputSize_.height;\n+\teffectiveScalerCrop_ = scalerCrop_.boundedTo(sensorCrop_);\n+\n+\t/* Map the scalerCrop to the input pixel space */\n+\tRectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(\n+\t\tsensorCrop_, Rectangle(inputSize_));\n+\n+\tSize localCropSizeT = localScalerCrop.size();\n+\tif (!!(transform_ & Transform::Transpose))\n+\t\tstd::swap(localCropSizeT.width, localCropSizeT.height);\n+\n+\tVector2d size = rotatedRectSize(point2Vec2d({ ow, oh }), rotation_);\n+\n+\t/* Calculate constraints */\n+\tdouble scale = scale_;\n+\tif (mode_ == Crop) {\n+\t\t/* Scale up if needed */\n+\t\tscale = std::max(scale,\n+\t\t\t\t std::max(size.x() / localCropSizeT.width,\n+\t\t\t\t\t  size.y() / localCropSizeT.height));\n+\t\teffectiveScaleX_ = scale;\n+\t\teffectiveScaleY_ = scale;\n+\n+\t\tsize = size / scale;\n+\n+\t} else if (mode_ == Fill) {\n+\t\teffectiveScaleX_ = size.x() / localCropSizeT.width;\n+\t\teffectiveScaleY_ = size.y() / localCropSizeT.height;\n+\n+\t\tsize.x() /= effectiveScaleX_;\n+\t\tsize.y() /= effectiveScaleY_;\n+\t} else {\n+\t\tLOG(Converter, Error) << \"Unknown mode \" << mode_;\n+\t\treturn;\n+\t}\n+\n+\t/*\n+\t * Clamp offset. Due to rounding errors, size might be slightly bigger\n+\t * than scaler crop. 
Clamp the offset to 0 to prevent a crash in the\n+\t * next clamp.\n+\t */\n+\tdouble maxoffX, maxoffY;\n+\tmaxoffX = std::max(0.0, (localCropSizeT.width - size.x())) * 0.5;\n+\tmaxoffY = std::max(0.0, (localCropSizeT.height - size.y())) * 0.5;\n+\tif (!!(transform_ & Transform::Transpose))\n+\t\tstd::swap(maxoffX, maxoffY);\n+\n+\t/*\n+\t * Transform the offset from sensor space to local space, apply the\n+\t * limit and transform back.\n+\t */\n+\tVector2d offset = point2Vec2d(offset_);\n+\tMatrix3x3 m;\n+\n+\tm = makeTransform(effectiveScalerCrop_, localScalerCrop);\n+\toffset = transformVector(m, offset);\n+\toffset.x() = std::clamp(offset.x(), -maxoffX, maxoffX);\n+\toffset.y() = std::clamp(offset.y(), -maxoffY, maxoffY);\n+\tm = makeTransform(localScalerCrop, effectiveScalerCrop_);\n+\toffset = transformVector(m, offset);\n+\teffectiveOffset_.x = offset.x();\n+\teffectiveOffset_.y = offset.y();\n+}\n+\n+/**\n+ * \\fn Dw100VertexMap::setInputSize()\n+ * \\brief Set the size of the input data\n+ * \\param[in] size The input size\n+ *\n+ * To calculate a proper vertex map, the size of the input images must be set.\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setSensorCrop()\n+ * \\brief Set the crop rectangle that represents the input data\n+ * \\param[in] rect\n+ *\n+ * Set the rectangle that represents the input data in sensor coordinates. This\n+ * must be specified to properly calculate the vertex map.\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setScalerCrop()\n+ * \\brief Set the requested scaler crop\n+ * \\param[in] rect\n+ *\n+ * Set the requested scaler crop. 
The actually applied scaler crop can be\n+ * queried using \\a Dw100VertexMap::effectiveScalerCrop() after calling\n+ * Dw100VertexMap::applyLimits().\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::effectiveScalerCrop()\n+ * \\brief Get the effective scaler crop\n+ *\n+ * \\return The effective scaler crop\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::scalerCropBounds()\n+ * \\brief Get the min and max values for the scaler crop\n+ *\n+ * \\return A pair of rectangles that represent the scaler crop min/max values\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setOutputSize()\n+ * \\brief Set the output size\n+ * \\param[in] size The size of the output images\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::outputSize()\n+ * \\brief Get the output size\n+ * \\return The output size\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setTransform()\n+ * \\brief Sets the transform to apply\n+ * \\param[in] transform The transform\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::transform()\n+ * \\brief Get the transform\n+ * \\return The transform\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setScale()\n+ * \\brief Sets the scale to apply\n+ * \\param[in] scale The scale\n+ *\n+ * Set the requested scale. The actually applied scale can be queried using \\a\n+ * Dw100VertexMap::effectiveScale() after calling \\a\n+ * Dw100VertexMap::applyLimits().\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::effectiveScale()\n+ * \\brief Get the effective scale\n+ *\n+ * Returns the actual scale applied to the input pixels in x and y direction. 
So\n+ * a value of [2.0, 1.5] means that every input pixel is scaled to cover 2\n+ * output pixels in x-direction and 1.5 in y-direction.\n+ *\n+ * \\return The effective scale\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setRotation()\n+ * \\brief Sets the rotation to apply\n+ * \\param[in] rotation The rotation in degrees\n+ *\n+ * The rotation is in clockwise direction to allow the same transform as\n+ * CameraConfiguration::orientation\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::rotation()\n+ * \\brief Get the rotation\n+ * \\return The rotation in degrees\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setOffset()\n+ * \\brief Sets the offset to apply\n+ * \\param[in] offset The offset\n+ *\n+ * Set the requested offset. The actually applied offset can be queried using \\a\n+ * Dw100VertexMap::effectiveOffset() after calling \\a\n+ * Dw100VertexMap::applyLimits().\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::effectiveOffset()\n+ * \\brief Get the effective offset\n+ *\n+ * Returns the actual offset applied to the input pixels in ScalerCrop\n+ * coordinates.\n+ *\n+ * \\return The effective offset\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::setMode()\n+ * \\brief Sets the scaling mode to apply\n+ * \\param[in] mode The mode\n+ */\n+\n+/**\n+ * \\fn Dw100VertexMap::mode()\n+ * \\brief Get the scaling mode\n+ * \\return The scaling mode\n+ */\n+\n+/**\n+ * \\brief Get the dw100 vertex map\n+ *\n+ * Calculates the vertex map as a vector of hardware specific entries.\n+ *\n+ * \\return The vertex map\n+ */\n+std::vector<uint32_t> Dw100VertexMap::getVertexMap()\n+{\n+\tint ow = outputSize_.width;\n+\tint oh = outputSize_.height;\n+\tint tileCountW = dw100VerticesForLength(ow);\n+\tint tileCountH = dw100VerticesForLength(oh);\n+\n+\tapplyLimits();\n+\n+\t/*\n+\t * libcamera handles all crop rectangles in sensor space. But the\n+\t * dewarper \"sees\" only the pixels it gets passed. 
Note that these might\n+\t * not cover exactly the max sensor crop, as there might be a crop\n+\t * between the ISP and the dewarper to reach a format supported by the\n+\t * dewarper. effectiveScalerCrop_ is the crop in sensor space that gets\n+\t * fed into the dewarper. localScalerCrop is the sensor crop mapped to\n+\t * the data that is fed into the dewarper.\n+\t */\n+\tRectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(\n+\t\tsensorCrop_, Rectangle(inputSize_));\n+\tSize localCropSizeT = localScalerCrop.size();\n+\tif (!!(transform_ & Transform::Transpose))\n+\t\tstd::swap(localCropSizeT.width, localCropSizeT.height);\n+\n+\t/*\n+\t * The dw100 has a peculiarity in interpolation that has to be taken\n+\t * into account to use it in a pixel-perfect manner. To explain this, I\n+\t * will only use the x direction; the vertical axis behaves the same.\n+\t *\n+\t * Let's start with a pixel-perfect 1:1 mapping of an image with a width\n+\t * of 64 pixels. The coordinates of the vertex map would then be:\n+\t * 0 -- 16 -- 32 -- 48 -- 64\n+\t * Note how the last coordinate lies outside the image (which ends at\n+\t * 63), as it is basically the beginning of the next macro block.\n+\t *\n+\t * If we zoom out a bit, we might end up with something like\n+\t * -10 -- 0 -- 32 -- 64 -- 74\n+\t * As the dewarper coordinates are unsigned, it actually sees\n+\t * 0 -- 0 -- 32 -- 64 -- 74\n+\t * leading to stretched pixels at the beginning and black for everything\n+\t * > 63.\n+\t *\n+\t * Now let's rotate the image by 180 degrees. A trivial rotation would\n+\t * end up with:\n+\t *\n+\t * 64 -- 48 -- 32 -- 16 -- 0\n+\t *\n+\t * But as the first column now points to pixel 64, we get a single black\n+\t * line. 
So for a proper 180 degree rotation, the coordinates need to be\n+\t *\n+\t * 63 -- 47 -- 31 -- 15 -- -1\n+\t *\n+\t * The -1 is clamped to 0 again, leading to a theoretical slight\n+\t * interpolation error on the last 16 pixels.\n+\t *\n+\t * To create this proper transformation, there are two things to do:\n+\t *\n+\t * 1. The rotation centers are offset by -0.5. This evens out for no\n+\t *    rotation, and leads to a coordinate offset of -1 on 180 degree\n+\t *    rotations.\n+\t * 2. The transformations (flip and transpose) need to act on (size - 1)\n+\t *    to get the same effect.\n+\t */\n+\tVector2d centerS{ { localCropSizeT.width * 0.5 - 0.5,\n+\t\t\t    localCropSizeT.height * 0.5 - 0.5 } };\n+\tVector2d centerD{ { ow * 0.5 - 0.5,\n+\t\t\t    oh * 0.5 - 0.5 } };\n+\n+\tLOG(Converter, Debug)\n+\t\t<< \"Apply vertex map for\"\n+\t\t<< \" inputSize: \" << inputSize_\n+\t\t<< \" outputSize: \" << outputSize_\n+\t\t<< \" Transform: \" << transformToString(transform_)\n+\t\t<< \"\\n effectiveScalerCrop: \" << effectiveScalerCrop_\n+\t\t<< \" localCropSizeT: \" << localCropSizeT\n+\t\t<< \" scaleX: \" << effectiveScaleX_\n+\t\t<< \" scaleY: \" << effectiveScaleY_\n+\t\t<< \" rotation: \" << rotation_\n+\t\t<< \" offset: \" << effectiveOffset_;\n+\n+\tMatrix3x3 outputToSensor = Matrix3x3::identity();\n+\t/* Move to center of output */\n+\toutputToSensor = makeTranslate(-centerD) * outputToSensor;\n+\toutputToSensor = makeRotate(-rotation_) * outputToSensor;\n+\toutputToSensor = makeScale(1.0 / effectiveScaleX_, 1.0 / effectiveScaleY_) * outputToSensor;\n+\t/* Move to top left of localScalerCropT */\n+\toutputToSensor = makeTranslate(centerS) * outputToSensor;\n+\toutputToSensor = makeTransform(-transform_, localCropSizeT.shrunkBy({ 1, 1 })) *\n+\t\t\t outputToSensor;\n+\t/* Transform from \"within localScalerCrop\" to input reference frame */\n+\toutputToSensor = makeTranslate(localScalerCrop.x, localScalerCrop.y) * outputToSensor;\n+\toutputToSensor = 
makeTransform(localScalerCrop, effectiveScalerCrop_) * outputToSensor;\n+\toutputToSensor = makeTranslate(point2Vec2d(effectiveOffset_)) * outputToSensor;\n+\n+\tMatrix3x3 sensorToInput = makeTransform(effectiveScalerCrop_, localScalerCrop);\n+\n+\t/*\n+\t * For every output tile, calculate the position of the corners in the\n+\t * input image.\n+\t */\n+\tstd::vector<uint32_t> res;\n+\tres.reserve(tileCountW * tileCountH);\n+\tfor (int y = 0; y < tileCountH; y++) {\n+\t\tfor (int x = 0; x < tileCountW; x++) {\n+\t\t\tVector2d p{ { static_cast<double>(x) * kDw100BlockSize,\n+\t\t\t\t      static_cast<double>(y) * kDw100BlockSize } };\n+\t\t\tp = p.max(0.0).min(Vector2d{ { static_cast<double>(ow),\n+\t\t\t\t\t\t       static_cast<double>(oh) } });\n+\n+\t\t\tp = transformPoint(outputToSensor, p);\n+\n+\t\t\t/*\n+\t\t\t * \\todo: Transformations in sensor space to be added\n+\t\t\t * here.\n+\t\t\t */\n+\n+\t\t\tp = transformPoint(sensorToInput, p);\n+\n+\t\t\t/* Convert to fixed point */\n+\t\t\tuint32_t v = static_cast<uint32_t>(p.y() * 16) << 16 |\n+\t\t\t\t     (static_cast<uint32_t>(p.x() * 16) & 0xffff);\n+\t\t\tres.push_back(v);\n+\t\t}\n+\t}\n+\n+\treturn res;\n+}\n+\n+} /* namespace libcamera */\ndiff --git a/src/libcamera/converter/meson.build b/src/libcamera/converter/meson.build\nindex fe2dcebb67da..9f59b57c26b9 100644\n--- a/src/libcamera/converter/meson.build\n+++ b/src/libcamera/converter/meson.build\n@@ -1,6 +1,7 @@\n # SPDX-License-Identifier: CC0-1.0\n \n libcamera_internal_sources += files([\n+        'converter_dw100_vertexmap.cpp',\n         'converter_dw100.cpp',\n         'converter_v4l2_m2m.cpp'\n ])\n","prefixes":["v2","20/35"]}