[{"id":37056,"web_url":"https://patchwork.libcamera.org/comment/37056/","msgid":"<176409111643.567526.5379290066019566577@ping.linuxembedded.co.uk>","date":"2025-11-25T17:18:36","subject":"Re: [PATCH v3 15/29] libcamera: converter: Add dw100 vertex map\n\tclass","submitter":{"id":4,"url":"https://patchwork.libcamera.org/api/people/4/","name":"Kieran Bingham","email":"kieran.bingham@ideasonboard.com"},"content":"Quoting Stefan Klug (2025-11-25 16:28:27)\n> Using a custom vertex map, the dw100 dewarper is capable of doing\n> complex and useful transformations on the image data. This class\n> implements a pipeline featuring:\n> - Arbitrary ScalerCrop\n> - Full transform support (Flip, 90deg rotations)\n> - Arbitrary move, scale, rotate\n> \n> ScalerCrop and Transform are implemented to provide an interface that is\n> standardized libcamera-wide. The rest is implemented on top for more\n> flexible dw100-specific features.\n> \n> Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>\n> \n> ---\n> \n> Changes in v3:\n> - Dropped scalerCropBounds() function as it is not needed anymore\n> - Modified applyLimits() to always run until the end\n> - Small changes in docs\n> \n> Changes in v2:\n> - Replaced manual transforms with an affine transformation matrix\n> - Changed rotation direction to be in sync with the rotation in\n>   CameraConfiguration::orientation\n> - Changed offset parameter to be in ScalerCrop coordinates. 
This is\n>   easier to explain and has the added benefit, that Scale/Rotate is\n> always centered to the visible image.\n> - Improved code comments\n> - Make dw100VerticesForLength a local function\n> - Dropped unnecessary includes\n> - Added documentation\n> \n> Changes in v0.9\n> - Include header in meson.build\n> - Fix black line at top and left when rotation 180 degrees\n> \n> Changes in v0.8\n> - Cleanup & formatting\n> \n> Changes in v0.5\n> - Fix crash in std::clamp() due to rounding errors\n> ---\n>  .../converter/converter_dw100_vertexmap.h     |  71 +++\n>  .../libcamera/internal/converter/meson.build  |   1 +\n>  .../converter/converter_dw100_vertexmap.cpp   | 564 ++++++++++++++++++\n>  src/libcamera/converter/meson.build           |   1 +\n>  4 files changed, 637 insertions(+)\n>  create mode 100644 include/libcamera/internal/converter/converter_dw100_vertexmap.h\n>  create mode 100644 src/libcamera/converter/converter_dw100_vertexmap.cpp\n> \n> diff --git a/include/libcamera/internal/converter/converter_dw100_vertexmap.h b/include/libcamera/internal/converter/converter_dw100_vertexmap.h\n> new file mode 100644\n> index 000000000000..428b3d74d4d2\n> --- /dev/null\n> +++ b/include/libcamera/internal/converter/converter_dw100_vertexmap.h\n> @@ -0,0 +1,71 @@\n> +#pragma once\n> +\n> +#include <assert.h>\n> +#include <cmath>\n> +#include <stdint.h>\n> +#include <vector>\n> +\n> +#include <libcamera/base/span.h>\n> +\n> +#include <libcamera/geometry.h>\n> +#include <libcamera/transform.h>\n> +\n> +namespace libcamera {\n> +\n> +class Dw100VertexMap\n> +{\n> +public:\n> +       enum ScaleMode {\n> +               Fill = 0,\n> +               Crop = 1,\n> +       };\n> +\n> +       void applyLimits();\n> +       void setInputSize(const Size &size)\n> +       {\n> +               inputSize_ = size;\n> +               scalerCrop_ = Rectangle(size);\n> +       }\n> +\n> +       void setSensorCrop(const Rectangle &rect) { sensorCrop_ = rect; }\n> +\n> +       void 
setScalerCrop(const Rectangle &rect) { scalerCrop_ = rect; }\n> +       const Rectangle &effectiveScalerCrop() const { return effectiveScalerCrop_; }\n> +\n> +       void setOutputSize(const Size &size) { outputSize_ = size; }\n> +       const Size &outputSize() const { return outputSize_; }\n> +\n> +       void setTransform(const Transform &transform) { transform_ = transform; }\n> +       const Transform &transform() const { return transform_; }\n> +\n> +       void setScale(const float scale) { scale_ = scale; }\n> +       float effectiveScale() const { return (effectiveScaleX_ + effectiveScaleY_) * 0.5; }\n> +\n> +       void setRotation(const float rotation) { rotation_ = rotation; }\n> +       float rotation() const { return rotation_; }\n> +\n> +       void setOffset(const Point &offset) { offset_ = offset; }\n> +       const Point &effectiveOffset() const { return effectiveOffset_; }\n> +\n> +       void setMode(const ScaleMode mode) { mode_ = mode; }\n> +       ScaleMode mode() const { return mode_; }\n> +\n> +       std::vector<uint32_t> getVertexMap();\n> +\n> +private:\n> +       Rectangle scalerCrop_;\n> +       Rectangle sensorCrop_;\n> +       Transform transform_ = Transform::Identity;\n> +       Size inputSize_;\n> +       Size outputSize_;\n> +       Point offset_;\n> +       double scale_ = 1.0;\n> +       double rotation_ = 0.0;\n> +       ScaleMode mode_ = Fill;\n> +       double effectiveScaleX_;\n> +       double effectiveScaleY_;\n> +       Point effectiveOffset_;\n> +       Rectangle effectiveScalerCrop_;\n> +};\n> +\n> +} /* namespace libcamera */\n> diff --git a/include/libcamera/internal/converter/meson.build b/include/libcamera/internal/converter/meson.build\n> index 891e79e7d493..9d586293f63a 100644\n> --- a/include/libcamera/internal/converter/meson.build\n> +++ b/include/libcamera/internal/converter/meson.build\n> @@ -1,5 +1,6 @@\n>  # SPDX-License-Identifier: CC0-1.0\n>  \n>  libcamera_internal_headers += files([\n> +    
'converter_dw100_vertexmap.h',\n>      'converter_v4l2_m2m.h',\n>  ])\n> diff --git a/src/libcamera/converter/converter_dw100_vertexmap.cpp b/src/libcamera/converter/converter_dw100_vertexmap.cpp\n> new file mode 100644\n> index 000000000000..427d710743b8\n> --- /dev/null\n> +++ b/src/libcamera/converter/converter_dw100_vertexmap.cpp\n> @@ -0,0 +1,564 @@\n> +#include \"libcamera/internal/converter/converter_dw100_vertexmap.h\"\n> +\n> +#include <algorithm>\n> +#include <assert.h>\n> +#include <cmath>\n> +#include <stdint.h>\n> +#include <utility>\n> +#include <vector>\n> +\n> +#include <libcamera/base/log.h>\n> +#include <libcamera/base/span.h>\n> +\n> +#include <libcamera/geometry.h>\n> +#include <libcamera/transform.h>\n> +\n> +#include \"libcamera/internal/vector.h\"\n> +\n> +constexpr int kDw100BlockSize = 16;\n> +\n> +namespace libcamera {\n> +\n> +LOG_DECLARE_CATEGORY(Converter)\n> +namespace {\n> +\n> +using Vector2d = Vector<double, 2>;\n> +using Vector3d = Vector<double, 3>;\n> +using Matrix3x3 = Matrix<double, 3, 3>;\n> +\n> +Matrix3x3 makeTranslate(const double tx, const double ty)\n> +{\n> +       Matrix3x3 m = Matrix3x3::identity();\n> +       m[0][2] = tx;\n> +       m[1][2] = ty;\n> +       return m;\n> +}\n> +\n> +Matrix3x3 makeTranslate(const Vector2d &t)\n> +{\n> +       return makeTranslate(t.x(), t.y());\n> +}\n> +\n> +Matrix3x3 makeRotate(const double degrees)\n> +{\n> +       double rad = degrees / 180.0 * M_PI;\n> +       double sa = std::sin(rad);\n> +       double ca = std::cos(rad);\n> +\n> +       Matrix3x3 m = Matrix3x3::identity();\n> +       m[0][0] = ca;\n> +       m[0][1] = -sa;\n> +       m[1][0] = sa;\n> +       m[1][1] = ca;\n> +       return m;\n> +}\n> +\n> +Matrix3x3 makeScale(const double sx, const double sy)\n> +{\n> +       Matrix3x3 m = Matrix3x3::identity();\n> +       m[0][0] = sx;\n> +       m[1][1] = sy;\n> +       return m;\n> +}\n> +\n> +/**\n> + * \\param t The transform to apply\n> + * \\param size The size of the 
rectangle that is transformed\n> + *\n> + * Create a matrix that represents the transform done by \\a t. It assumes\n> + * that the origin of the coordinate system is at the top left corner of the\n> + * rectangle.\n> + */\n> +Matrix3x3 makeTransform(const Transform &t, const Size &size)\n> +{\n> +       Matrix3x3 m = Matrix3x3::identity();\n> +       double wm = size.width * 0.5;\n> +       double hm = size.height * 0.5;\n> +       m = makeTranslate(-wm, -hm) * m;\n> +\n> +       if (!!(t & Transform::HFlip))\n> +               m = makeScale(-1, 1) * m;\n> +\n> +       if (!!(t & Transform::VFlip))\n> +               m = makeScale(1, -1) * m;\n> +\n> +       if (!!(t & Transform::Transpose)) {\n> +               m = makeRotate(-90) * m;\n> +               m = makeScale(1, -1) * m;\n> +               std::swap(wm, hm);\n> +       }\n> +\n> +       m = makeTranslate(wm, hm) * m;\n> +\n> +       return m;\n> +}\n> +\n> +/**\n> + * \\param from The source rectangle\n> + * \\param to The destination rectangle\n> + *\n> + * Create a matrix that transforms from the coordinate system of rectangle \\a\n> + * from into the coordinate system of rectangle \\a to, by overlaying the\n> + * rectangles.\n> + *\n> + * \\see Rectangle::transformedBetween()\n> + */\n> +Matrix3x3 makeTransform(const Rectangle &from, const Rectangle &to)\n> +{\n> +       Matrix3x3 m = Matrix3x3::identity();\n> +       double sx = to.width / static_cast<double>(from.width);\n> +       double sy = to.height / static_cast<double>(from.height);\n> +       m = makeTranslate(-from.x, -from.y) * m;\n> +       m = makeScale(sx, sy) * m;\n> +       m = makeTranslate(to.x, to.y) * m;\n> +       return m;\n> +}\n> +\n> +Vector2d transformPoint(const Matrix3x3 &m, const Vector2d &p)\n> +{\n> +       Vector3d p2{ { p.x(), p.y(), 1.0 } };\n> +       p2 = m * p2;\n> +       return { { p2.x() / p2.z(), p2.y() / p2.z() } };\n> +}\n> +\n> +Vector2d transformVector(const Matrix3x3 &m, const Vector2d &p)\n> +{\n> 
+       Vector3d p2{ { p.x(), p.y(), 0.0 } };\n> +       p2 = m * p2;\n> +       return { { p2.x(), p2.y() } };\n> +}\n> +\n> +Vector2d rotatedRectSize(const Vector2d &size, const double degrees)\n> +{\n> +       double rad = degrees / 180.0 * M_PI;\n> +       double sa = sin(rad);\n> +       double ca = cos(rad);\n> +\n> +       return { { std::abs(size.x() * ca) + std::abs(size.y() * sa),\n> +                  std::abs(size.x() * sa) + std::abs(size.y() * ca) } };\n> +}\n> +\n> +Vector2d point2Vec2d(const Point &p)\n> +{\n> +       return { { static_cast<double>(p.x), static_cast<double>(p.y) } };\n> +}\n> +\n> +int dw100VerticesForLength(const int length)\n> +{\n> +       return (length + kDw100BlockSize - 1) / kDw100BlockSize + 1;\n> +}\n> +\n> +} /* namespace */\n> +\n> +/**\n> + * \\class libcamera::Dw100VertexMap\n> + * \\brief Helper class to compute dw100 vertex maps\n> + *\n> + * The vertex map class represents a helper for handling dewarper vertex maps.\n> + * There are 3 important sizes in the system:\n> + *\n> + * - The sensor size. The number of pixels of the whole sensor.\n> + * - The input rectangle to the dewarper. Describes the pixel data flowing into\n> + *   the dewarper in sensor coordinates.\n> + * - ScalerCrop rectangle. The rectangle that shall be used for all further\n> + *   stages. It is applied after lens dewarping but is in sensor coordinate\n> + *   space.\n> + * - The output size. 
This defines the size, the dewarper should output.\n> + *\n> + * +------------------------+\n> + * |Sensor size             |\n> + * |   +----------------+   |\n> + * |   |  Input rect    |   |\n> + * |   |  +-------------+   |\n> + * |   |  | ScalerCrop  |   |\n> + * |   |  |             |   |\n> + * |   +--+-------------+   |\n> + * +------------------------+\n> + *\n> + * This class implements a vertex map that forms the following pipeline:\n> + *\n> + * +-------------+    +-------------+    +------------+    +-----------------+\n> + * |             |    |             |    | Transform  |    | Pan/Zoom        |\n> + * | Lens Dewarp | -> | Scaler Crop | -> | (H/V Flip, | -> | (Offset, Scale, |\n> + * |             |    |             |    | Transpose) |    | Rotate)         |\n> + * +-------------+    +-------------+    +------------+    +-----------------+\n> + *\n> + * \\todo Lens dewarp is not yet implemented. An identity map is used instead.\n> + *\n> + * All parameters are clamped to valid values before creating the vertex map.\n> + *\n> + * The constrains process works as follows:\n\nI really like this implementation. 
I think it's clear, maintainable -\n*and* I could see this being reusable to other implementations later.\n\nAnd the only thing I can spot is a potential typo above\n\n s/constrains/constraints/ ?\n\n\nReviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>\n\n> + * - The ScalerCrop rectangle is clamped to the input rectangle\n> + * - The ScalerCrop rectangle is transformed by the specified transform\n> + *   forming ScalerCropT\n> + * - A rectangle of output size is placed in the center of ScalerCropT\n> + *   (OutputRect).\n> + * - Rotate gets applied to OutputRect,\n> + * - Scale is applied, but clamped so that the OutputRect fits completely into\n> + *   ScalerCropT (Only regarding dimensions, not position)\n> + * - Offset is clamped so that the OutputRect lies inside ScalerCropT\n> + *\n> + * After applying the limits, the actual values used for processing are stored\n> + * effectiveXXX members and can be queried using the corresponding functions.\n> + *\n> + * The lens dewarp map is usually calibrated during tuning and is a map that\n> + * maps from incoming pixels to dewarped pixels.\n> + */\n> +\n> +/**\n> + * \\enum Dw100VertexMap::ScaleMode\n> + * \\brief The scale modes available for a vertex map\n> + *\n> + * \\var Dw100VertexMap::Fill\n> + * \\brief Scale the input to fill the output\n> + *\n> + * This scale mode does not preserve aspect ratio. Offset and rotation are taken\n> + * into account.\n> + *\n> + * \\var Dw100VertexMap::Crop\n> + * \\brief Crop the input\n> + *\n> + * This scale mode preserves the aspect ratio. 
Offset, scale, rotation are taken\n> + * into account within the possible limits.\n> + */\n> +\n> +/**\n> + * \\brief Apply limits on scale and offset\n> + *\n> + * This function calculates \\a effectiveScalerCrop_, \\a effectiveScale_ and \\a\n> + * effectiveOffset_ based on the requested scaler crop, scale, rotation, offset\n> + * and the selected scale mode, so that the whole output area is filled with\n> + * valid input data.\n> + */\n> +void Dw100VertexMap::applyLimits()\n> +{\n> +       int ow = outputSize_.width;\n> +       int oh = outputSize_.height;\n> +       effectiveScalerCrop_ = scalerCrop_.boundedTo(sensorCrop_);\n> +\n> +       /* Map the scalerCrop to the input pixel space */\n> +       Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(\n> +               sensorCrop_, Rectangle(inputSize_));\n> +\n> +       Size localCropSizeT = localScalerCrop.size();\n> +       if (!!(transform_ & Transform::Transpose))\n> +               std::swap(localCropSizeT.width, localCropSizeT.height);\n> +\n> +       Vector2d size = rotatedRectSize(point2Vec2d({ ow, oh }), rotation_);\n> +\n> +       if (mode_ != Crop && mode_ != Fill) {\n> +               LOG(Converter, Error)\n> +                       << \"Unknown mode \" << mode_ << \". 
Default to 'Fill'\";\n> +               mode_ = Fill;\n> +       }\n> +\n> +       /* Calculate constraints */\n> +       double scale = scale_;\n> +       if (mode_ == Crop) {\n> +               /* Scale up if needed */\n> +               scale = std::max(scale,\n> +                                std::max(size.x() / localCropSizeT.width,\n> +                                         size.y() / localCropSizeT.height));\n> +               effectiveScaleX_ = scale;\n> +               effectiveScaleY_ = scale;\n> +\n> +               size = size / scale;\n> +\n> +       } else if (mode_ == Fill) {\n> +               effectiveScaleX_ = size.x() / localCropSizeT.width;\n> +               effectiveScaleY_ = size.y() / localCropSizeT.height;\n> +\n> +               size.x() /= effectiveScaleX_;\n> +               size.y() /= effectiveScaleY_;\n> +       }\n> +\n> +       /*\n> +        * Clamp offset. Due to rounding errors, size might be slightly bigger\n> +        * than scaler crop. Clamp the offset to 0 to prevent a crash in the\n> +        * next clamp.\n> +        */\n> +       double maxoffX, maxoffY;\n> +       maxoffX = std::max(0.0, (localCropSizeT.width - size.x())) * 0.5;\n> +       maxoffY = std::max(0.0, (localCropSizeT.height - size.y())) * 0.5;\n> +       if (!!(transform_ & Transform::Transpose))\n> +               std::swap(maxoffX, maxoffY);\n> +\n> +       /*\n> +        * Transform the offset from sensor space to local space, apply the\n> +        * limit and transform back.\n> +        */\n> +       Vector2d offset = point2Vec2d(offset_);\n> +       Matrix3x3 m;\n> +\n> +       m = makeTransform(effectiveScalerCrop_, localScalerCrop);\n> +       offset = transformVector(m, offset);\n> +       offset.x() = std::clamp(offset.x(), -maxoffX, maxoffX);\n> +       offset.y() = std::clamp(offset.y(), -maxoffY, maxoffY);\n> +       m = makeTransform(localScalerCrop, effectiveScalerCrop_);\n> +       offset = transformVector(m, offset);\n> +       
effectiveOffset_.x = offset.x();\n> +       effectiveOffset_.y = offset.y();\n> +}\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setInputSize()\n> + * \\brief Set the size of the input data\n> + * \\param[in] size The input size\n> + *\n> + * To calculate a proper vertex map, the size of the input images must be set.\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setSensorCrop()\n> + * \\brief Set the crop rectangle that represents the input data\n> + * \\param[in] rect\n> + *\n> + * Set the rectangle that represents the input data in sensor coordinates. This\n> + * must be specified to properly calculate the vertex map.\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setScalerCrop()\n> + * \\brief Set the requested scaler crop\n> + * \\param[in] rect\n> + *\n> + * Set the requested scaler crop. The actually applied scaler crop can be\n> + * queried using \\a Dw100VertexMap::effectiveScalerCrop() after calling\n> + * Dw100VertexMap::applyLimits().\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::effectiveScalerCrop()\n> + * \\brief Get the effective scaler crop\n> + *\n> + * \\return The effective scaler crop\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setOutputSize()\n> + * \\brief Set the output size\n> + * \\param[in] size The size of the output images\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::outputSize()\n> + * \\brief Get the output size\n> + * \\return The output size\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setTransform()\n> + * \\brief Sets the transform to apply\n> + * \\param[in] transform The transform\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::transform()\n> + * \\brief Get the transform\n> + * \\return The transform\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setScale()\n> + * \\brief Sets the scale to apply\n> + * \\param[in] scale The scale\n> + *\n> + * Set the requested scale. 
The actually applied scale can be queried using \\a\n> + * Dw100VertexMap::effectiveScale() after calling \\a\n> + * Dw100VertexMap::applyLimits().\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::effectiveScale()\n> + * \\brief Get the effective scale\n> + *\n> + * Returns the actual scale applied to the input pixels in x and y direction. So\n> + * a value of [2.0, 1.5] means that every input pixel is scaled to cover 2\n> + * output pixels in x-direction and 1.5 in y-direction.\n> + *\n> + * \\return The effective scale\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setRotation()\n> + * \\brief Sets the rotation to apply\n> + * \\param[in] rotation The rotation in degrees\n> + *\n> + * The rotation is in clockwise direction to allow the same transform as\n> + * CameraConfiguration::orientation\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::rotation()\n> + * \\brief Get the rotation\n> + * \\return The rotation in degrees\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setOffset()\n> + * \\brief Sets the offset to apply\n> + * \\param[in] offset The offset\n> + *\n> + * Set the requested offset. 
The actually applied offset can be queried using \\a\n> + * Dw100VertexMap::effectiveOffset() after calling \\a\n> + * Dw100VertexMap::applyLimits().\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::effectiveOffset()\n> + * \\brief Get the effective offset\n> + *\n> + * Returns the actual offset applied to the input pixels in ScalerCrop\n> + * coordinates.\n> + *\n> + * \\return The effective offset\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::setMode()\n> + * \\brief Sets the scaling mode to apply\n> + * \\param[in] mode The mode\n> + */\n> +\n> +/**\n> + * \\fn Dw100VertexMap::mode()\n> + * \\brief Get the scaling mode\n> + * \\return The scaling mode\n> + */\n> +\n> +/**\n> + * \\brief Get the dw100 vertex map\n> + *\n> + * Calculates the vertex map as a vector of hardware specific entries.\n> + *\n> + * \\return The vertex map\n> + */\n> +std::vector<uint32_t> Dw100VertexMap::getVertexMap()\n> +{\n> +       int ow = outputSize_.width;\n> +       int oh = outputSize_.height;\n> +       int tileCountW = dw100VerticesForLength(ow);\n> +       int tileCountH = dw100VerticesForLength(oh);\n> +\n> +       applyLimits();\n> +\n> +       /*\n> +        * libcamera handles all crop rectangles in sensor space. But the\n> +        * dewarper \"sees\" only the pixels it gets passed. Note that these might\n> +        * not cover exactly the max sensor crop, as there might be a crop\n> +        * between ISP and dewarper to crop to a format supported by the\n> +        * dewarper. effectiveScalerCrop_ is the crop in sensor space that gets\n> +        * fed into the dewarper. 
localScalerCrop is the sensor crop mapped to\n> +        * the data that is fed into the dewarper.\n> +        */\n> +       Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(\n> +               sensorCrop_, Rectangle(inputSize_));\n> +       Size localCropSizeT = localScalerCrop.size();\n> +       if (!!(transform_ & Transform::Transpose))\n> +               std::swap(localCropSizeT.width, localCropSizeT.height);\n> +\n> +       /*\n> +        * The dw100 has a specialty in interpolation that has to be taken into\n> +        * account to use it in a pixel-perfect manner. To explain this, I will\n> +        * only use the x direction; the vertical axis behaves the same.\n> +        *\n> +        * Let's start with a pixel perfect 1:1 mapping of an image with a width\n> +        * of 64 pixels. The coordinates of the vertex map would then be:\n> +        * 0 -- 16 -- 32 -- 48 -- 64\n> +        * Note how the last coordinate lies outside the image (which ends at\n> +        * 63) as it is basically the beginning of the next macro block.\n> +        *\n> +        * If we zoom out a bit we might end up with something like\n> +        * -10 -- 0 -- 32 -- 64 -- 74\n> +        * As the dewarper coordinates are unsigned, it actually sees\n> +        * 0 -- 0 -- 32 -- 64 -- 74\n> +        * Leading to stretched pixels at the beginning and black for everything\n> +        * > 63\n> +        *\n> +        * Now let's rotate the image by 180 degrees. A trivial rotation would\n> +        * end up with:\n> +        *\n> +        * 64 -- 48 -- 32 -- 16 -- 0\n> +        *\n> +        * But as the first column now points to pixel 64 we get a single black\n> +        * line. 
So for a proper 180 degree rotation, the coordinates need to be\n> +        *\n> +        * 63 -- 47 -- 31 -- 15 -- -1\n> +        *\n> +        * The -1 is clamped to 0 again, leading to a theoretical slight\n> +        * interpolation error on the last 16 pixels.\n> +        *\n> +        * To create this proper transformation there are two things to do:\n> +        *\n> +        * 1. The rotation centers are offset by -0.5. This evens out for no\n> +        *    rotation, and leads to a coordinate offset of -1 on 180 degree\n> +        *    rotations.\n> +        * 2. The transformation (flip and transpose) needs to act on a size-1\n> +        *    to get the same effect.\n> +        */\n> +       Vector2d centerS{ { localCropSizeT.width * 0.5 - 0.5,\n> +                           localCropSizeT.height * 0.5 - 0.5 } };\n> +       Vector2d centerD{ { ow * 0.5 - 0.5,\n> +                           oh * 0.5 - 0.5 } };\n> +\n> +       LOG(Converter, Debug)\n> +               << \"Apply vertex map for\"\n> +               << \" inputSize: \" << inputSize_\n> +               << \" outputSize: \" << outputSize_\n> +               << \" Transform: \" << transformToString(transform_)\n> +               << \"\\n effectiveScalerCrop: \" << effectiveScalerCrop_\n> +               << \" localCropSizeT: \" << localCropSizeT\n> +               << \" scaleX: \" << effectiveScaleX_\n> +               << \" scaleY: \" << effectiveScaleY_\n> +               << \" rotation: \" << rotation_\n> +               << \" offset: \" << effectiveOffset_;\n> +\n> +       Matrix3x3 outputToSensor = Matrix3x3::identity();\n> +       /* Move to center of output */\n> +       outputToSensor = makeTranslate(-centerD) * outputToSensor;\n> +       outputToSensor = makeRotate(-rotation_) * outputToSensor;\n> +       outputToSensor = makeScale(1.0 / effectiveScaleX_, 1.0 / effectiveScaleY_) * outputToSensor;\n> +       /* Move to top left of localScalerCropT */\n> +       outputToSensor = 
makeTranslate(centerS) * outputToSensor;\n> +       outputToSensor = makeTransform(-transform_, localCropSizeT.shrunkBy({ 1, 1 })) *\n> +                        outputToSensor;\n> +       /* Transform from \"within localScalerCrop\" to input reference frame */\n> +       outputToSensor = makeTranslate(localScalerCrop.x, localScalerCrop.y) * outputToSensor;\n> +       outputToSensor = makeTransform(localScalerCrop, effectiveScalerCrop_) * outputToSensor;\n> +       outputToSensor = makeTranslate(point2Vec2d(effectiveOffset_)) * outputToSensor;\n> +\n> +       Matrix3x3 sensorToInput = makeTransform(effectiveScalerCrop_, localScalerCrop);\n> +\n> +       /*\n> +        * For every output tile, calculate the position of the corners in the\n> +        * input image.\n> +        */\n> +       std::vector<uint32_t> res;\n> +       res.reserve(tileCountW * tileCountH);\n> +       for (int y = 0; y < tileCountH; y++) {\n> +               for (int x = 0; x < tileCountW; x++) {\n> +                       Vector2d p{ { static_cast<double>(x) * kDw100BlockSize,\n> +                                     static_cast<double>(y) * kDw100BlockSize } };\n> +                       p = p.max(0.0).min(Vector2d{ { static_cast<double>(ow),\n> +                                                      static_cast<double>(oh) } });\n> +\n> +                       p = transformPoint(outputToSensor, p);\n> +\n> +                       /*\n> +                        * \\todo: Transformations in sensor space to be added\n> +                        * here.\n> +                        */\n> +\n> +                       p = transformPoint(sensorToInput, p);\n> +\n> +                       /* Convert to fixed point */\n> +                       uint32_t v = static_cast<uint32_t>(p.y() * 16) << 16 |\n> +                                    (static_cast<uint32_t>(p.x() * 16) & 0xffff);\n> +                       res.push_back(v);\n> +               }\n> +       }\n> +\n> +       return res;\n> +}\n> +\n> 
+} /* namespace libcamera */\n> diff --git a/src/libcamera/converter/meson.build b/src/libcamera/converter/meson.build\n> index af1a80fec683..558d63a1bdd4 100644\n> --- a/src/libcamera/converter/meson.build\n> +++ b/src/libcamera/converter/meson.build\n> @@ -1,5 +1,6 @@\n>  # SPDX-License-Identifier: CC0-1.0\n>  \n>  libcamera_internal_sources += files([\n> +        'converter_dw100_vertexmap.cpp',\n>          'converter_v4l2_m2m.cpp'\n>  ])\n> -- \n> 2.51.0\n>"}]