[v2,20/35] libcamera: converter: Add dw100 vertex map class

Message ID 20251023144841.403689-21-stefan.klug@ideasonboard.com
State New
Series
  • Full dewarper support on imx8mp

Commit Message

Stefan Klug Oct. 23, 2025, 2:48 p.m. UTC
Using a custom vertex map, the dw100 dewarper is capable of doing
complex and useful transformations on the image data. This class
implements a pipeline featuring:
- Arbitrary ScalerCrop
- Full transform support (Flip, 90deg rotations)
- Arbitrary move, scale, rotate

ScalerCrop and Transform are implemented to provide an interface that is
standardized libcamera-wide. The rest is implemented on top for more
flexible dw100-specific features.
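
For illustration, a minimal usage sketch based on the accessors declared in
the new header (the helper function, the concrete sizes and the chosen
settings are illustrative only and not part of the patch):

  #include <vector>

  #include <libcamera/geometry.h>
  #include <libcamera/transform.h>

  #include "libcamera/internal/converter/converter_dw100_vertexmap.h"

  using namespace libcamera;

  std::vector<uint32_t> buildExampleMap()
  {
          Dw100VertexMap map;

          /* Pixel data actually fed into the dewarper. */
          map.setInputSize(Size(1920, 1080));
          /* The same data expressed in sensor coordinates. */
          map.setSensorCrop(Rectangle(0, 0, 1920, 1080));
          /* Digital zoom: a centered 1280x720 region of the sensor. */
          map.setScalerCrop(Rectangle(320, 180, 1280, 720));

          map.setOutputSize(Size(1280, 720));
          map.setTransform(Transform::HFlip);
          map.setMode(Dw100VertexMap::Crop);
          map.setScale(1.0f);
          map.setRotation(0.0f);

          /* Clamps the request internally and returns hardware map entries. */
          return map.getVertexMap();
  }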

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>

---

Changes in v2:
- Replaced manual transforms with an affine transformation matrix
- Changed rotation direction to be in sync with the rotation in
  CameraConfiguration::orientation
- Changed offset parameter to be in ScalerCrop coordinates. This is
  easier to explain and has the added benefit that Scale/Rotate is
  always centered on the visible image.
- Improved code comments
- Make dw100VerticesForLength a local function
- Dropped unnecessary includes
- Added documentation

Changes in v0.9
- Include header in meson.build
- Fix black line at top and left when rotating 180 degrees

Changes in v0.8
- Cleanup & formatting

Changes in v0.5
- Fix crash in std::clamp() due to rounding errors
---
 .../converter/converter_dw100_vertexmap.h     |  76 +++
 .../libcamera/internal/converter/meson.build  |   1 +
 .../converter/converter_dw100_vertexmap.cpp   | 566 ++++++++++++++++++
 src/libcamera/converter/meson.build           |   1 +
 4 files changed, 644 insertions(+)
 create mode 100644 include/libcamera/internal/converter/converter_dw100_vertexmap.h
 create mode 100644 src/libcamera/converter/converter_dw100_vertexmap.cpp
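
As a quick orientation for the map geometry in the patch below: the output is
divided into 16x16 blocks and the vertex map holds one entry per block corner,
each packing the source x/y coordinates with 4 fractional bits. A small
self-contained sketch (the helpers mirror dw100VerticesForLength() and the
fixed-point packing in getVertexMap(); the numbers are illustrative only):

  #include <cstdint>
  #include <cstdio>

  constexpr int kDw100BlockSize = 16;

  /* Mirrors dw100VerticesForLength(): one vertex per block corner. */
  constexpr int verticesForLength(int length)
  {
          return (length + kDw100BlockSize - 1) / kDw100BlockSize + 1;
  }

  /* Pack a source coordinate pair into one map entry (4 fractional bits). */
  uint32_t packEntry(double x, double y)
  {
          return static_cast<uint32_t>(y * 16) << 16 |
                 (static_cast<uint32_t>(x * 16) & 0xffff);
  }

  int main()
  {
          /* A 640x480 output needs 41 x 31 vertices. */
          std::printf("%d x %d\n", verticesForLength(640), verticesForLength(480));

          /* Source point (100.25, 42.5) packs to 0x02a80644. */
          std::printf("0x%08x\n", static_cast<unsigned int>(packEntry(100.25, 42.5)));
          return 0;
  }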

Comments

Paul Elder Nov. 5, 2025, 4:48 p.m. UTC | #1
Quoting Stefan Klug (2025-10-23 23:48:21)
> Using a custom vertex map, the dw100 dewarper is capable of doing
> complex and useful transformations on the image data. This class
> implements a pipeline featuring:
> - Arbitrary ScalerCrop
> - Full transform support (Flip, 90deg rotations)
> - Arbitrary move, scale, rotate
> 
> ScalerCrop and Transform are implemented to provide an interface that is
> standardized libcamera-wide. The rest is implemented on top for more
> flexible dw100-specific features.
> 
> Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
> 
> ---
> 
> Changes in v2:
> - Replaced manual transforms with an affine transformation matrix
> - Changed rotation direction to be in sync with the rotation in
>   CameraConfiguration::orientation
> - Changed offset parameter to be in ScalerCrop coordinates. This is
>   easier to explain and has the added benefit that Scale/Rotate is
>   always centered on the visible image.
> - Improved code comments
> - Make dw100VerticesForLength a local function
> - Dropped unnecessary includes
> - Added documentation
> 
> Changes in v0.9
> - Include header in meson.build
> - Fix black line at top and left when rotating 180 degrees
> 
> Changes in v0.8
> - Cleanup & formatting
> 
> Changes in v0.5
> - Fix crash in std::clamp() due to rounding errors
> ---
>  .../converter/converter_dw100_vertexmap.h     |  76 +++
>  .../libcamera/internal/converter/meson.build  |   1 +
>  .../converter/converter_dw100_vertexmap.cpp   | 566 ++++++++++++++++++
>  src/libcamera/converter/meson.build           |   1 +
>  4 files changed, 644 insertions(+)
>  create mode 100644 include/libcamera/internal/converter/converter_dw100_vertexmap.h
>  create mode 100644 src/libcamera/converter/converter_dw100_vertexmap.cpp
> 
> diff --git a/include/libcamera/internal/converter/converter_dw100_vertexmap.h b/include/libcamera/internal/converter/converter_dw100_vertexmap.h
> new file mode 100644
> index 000000000000..e72cb72bb9f1
> --- /dev/null
> +++ b/include/libcamera/internal/converter/converter_dw100_vertexmap.h
> @@ -0,0 +1,76 @@
> +#pragma once
> +
> +#include <assert.h>
> +#include <cmath>
> +#include <stdint.h>
> +#include <vector>
> +
> +#include <libcamera/base/span.h>
> +
> +#include <libcamera/geometry.h>
> +#include <libcamera/transform.h>
> +
> +namespace libcamera {
> +
> +class Dw100VertexMap
> +{
> +public:
> +       enum ScaleMode {
> +               Fill = 0,
> +               Crop = 1,
> +       };
> +
> +       void applyLimits();
> +       void setInputSize(const Size &size)
> +       {
> +               inputSize_ = size;
> +               scalerCrop_ = Rectangle(size);
> +       }
> +
> +       void setSensorCrop(const Rectangle &rect) { sensorCrop_ = rect; }
> +
> +       void setScalerCrop(const Rectangle &rect) { scalerCrop_ = rect; }
> +       const Rectangle &effectiveScalerCrop() const { return effectiveScalerCrop_; }
> +       std::pair<Rectangle, Rectangle> scalerCropBounds() const
> +       {
> +               return { Rectangle(sensorCrop_.x, sensorCrop_.y, 1, 1),
> +                        sensorCrop_ };
> +       }
> +
> +       void setOutputSize(const Size &size) { outputSize_ = size; }
> +       const Size &outputSize() const { return outputSize_; }
> +
> +       void setTransform(const Transform &transform) { transform_ = transform; }
> +       const Transform &transform() const { return transform_; }
> +
> +       void setScale(const float scale) { scale_ = scale; }
> +       float effectiveScale() const { return (effectiveScaleX_ + effectiveScaleY_) * 0.5; }
> +
> +       void setRotation(const float rotation) { rotation_ = rotation; }
> +       float rotation() const { return rotation_; }
> +
> +       void setOffset(const Point &offset) { offset_ = offset; }
> +       const Point &effectiveOffset() const { return effectiveOffset_; }
> +
> +       void setMode(const ScaleMode mode) { mode_ = mode; }
> +       ScaleMode mode() const { return mode_; }
> +
> +       std::vector<uint32_t> getVertexMap();
> +
> +private:
> +       Rectangle scalerCrop_;
> +       Rectangle sensorCrop_;
> +       Transform transform_ = Transform::Identity;
> +       Size inputSize_;
> +       Size outputSize_;
> +       Point offset_;
> +       double scale_ = 1.0;
> +       double rotation_ = 0.0;
> +       ScaleMode mode_ = Fill;
> +       double effectiveScaleX_;
> +       double effectiveScaleY_;
> +       Point effectiveOffset_;
> +       Rectangle effectiveScalerCrop_;
> +};
> +
> +} /* namespace libcamera */
> diff --git a/include/libcamera/internal/converter/meson.build b/include/libcamera/internal/converter/meson.build
> index 85007a4b0f8b..128c644cb73f 100644
> --- a/include/libcamera/internal/converter/meson.build
> +++ b/include/libcamera/internal/converter/meson.build
> @@ -2,5 +2,6 @@
>  
>  libcamera_internal_headers += files([
>      'converter_dw100.h',
> +    'converter_dw100_vertexmap.h',
>      'converter_v4l2_m2m.h',
>  ])
> diff --git a/src/libcamera/converter/converter_dw100_vertexmap.cpp b/src/libcamera/converter/converter_dw100_vertexmap.cpp
> new file mode 100644
> index 000000000000..0e930479b6f7
> --- /dev/null
> +++ b/src/libcamera/converter/converter_dw100_vertexmap.cpp
> @@ -0,0 +1,566 @@
> +#include "libcamera/internal/converter/converter_dw100_vertexmap.h"
> +
> +#include <algorithm>
> +#include <assert.h>
> +#include <cmath>
> +#include <stdint.h>
> +#include <utility>
> +#include <vector>
> +
> +#include <libcamera/base/log.h>
> +#include <libcamera/base/span.h>
> +
> +#include <libcamera/geometry.h>
> +#include <libcamera/transform.h>
> +
> +#include "libcamera/internal/vector.h"
> +
> +constexpr int kDw100BlockSize = 16;
> +
> +namespace libcamera {
> +
> +LOG_DECLARE_CATEGORY(Converter)
> +namespace {
> +
> +using Vector2d = Vector<double, 2>;
> +using Vector3d = Vector<double, 3>;
> +using Matrix3x3 = Matrix<double, 3, 3>;
> +
> +Matrix3x3 makeTranslate(const double tx, const double ty)
> +{
> +       Matrix3x3 m = Matrix3x3::identity();
> +       m[0][2] = tx;
> +       m[1][2] = ty;
> +       return m;
> +}
> +
> +Matrix3x3 makeTranslate(const Vector2d &t)
> +{
> +       return makeTranslate(t.x(), t.y());
> +}
> +
> +Matrix3x3 makeRotate(const double degrees)
> +{
> +       double rad = degrees / 180.0 * M_PI;
> +       double sa = std::sin(rad);
> +       double ca = std::cos(rad);
> +
> +       Matrix3x3 m = Matrix3x3::identity();
> +       m[0][0] = ca;
> +       m[0][1] = -sa;
> +       m[1][0] = sa;
> +       m[1][1] = ca;
> +       return m;
> +}
> +
> +Matrix3x3 makeScale(const double sx, const double sy)
> +{
> +       Matrix3x3 m = Matrix3x3::identity();
> +       m[0][0] = sx;
> +       m[1][1] = sy;
> +       return m;
> +}
> +
> +/**
> + * \param t The transform to apply
> + * \param size The size of the rectangle that is transformed
> + *
> + * Create a matrix that represents the transform done by \a t. It assumes
> + * that the origin of the coordinate system is at the top left corner of the
> + * rectangle.
> + */
> +Matrix3x3 makeTransform(const Transform &t, const Size &size)
> +{
> +       Matrix3x3 m = Matrix3x3::identity();
> +       double wm = size.width * 0.5;
> +       double hm = size.height * 0.5;
> +       m = makeTranslate(-wm, -hm) * m;
> +
> +       if (!!(t & Transform::HFlip))
> +               m = makeScale(-1, 1) * m;
> +
> +       if (!!(t & Transform::VFlip))
> +               m = makeScale(1, -1) * m;
> +
> +       if (!!(t & Transform::Transpose)) {
> +               m = makeRotate(-90) * m;
> +               m = makeScale(1, -1) * m;
> +               std::swap(wm, hm);
> +       }
> +
> +       m = makeTranslate(wm, hm) * m;
> +
> +       return m;
> +}
> +
> +/**
> + * \param from The source rectangle
> + * \param to The destination rectangle
> + *
> + * Create a matrix that transforms from the coordinate system of rectangle \a
> + * from into the coordinate system of rectangle \a to, by overlaying the
> + * rectangles.
> + *
> + * \see Rectangle::transformedBetween()
> + */
> +Matrix3x3 makeTransform(const Rectangle &from, const Rectangle &to)
> +{
> +       Matrix3x3 m = Matrix3x3::identity();
> +       double sx = to.width / static_cast<double>(from.width);
> +       double sy = to.height / static_cast<double>(from.height);
> +       m = makeTranslate(-from.x, -from.y) * m;
> +       m = makeScale(sx, sy) * m;
> +       m = makeTranslate(to.x, to.y) * m;
> +       return m;
> +}
> +
> +Vector2d transformPoint(const Matrix3x3 &m, const Vector2d &p)
> +{
> +       Vector3d p2{ { p.x(), p.y(), 1.0 } };
> +       p2 = m * p2;
> +       return { { p2.x() / p2.z(), p2.y() / p2.z() } };
> +}
> +
> +Vector2d transformVector(const Matrix3x3 &m, const Vector2d &p)
> +{
> +       Vector3d p2{ { p.x(), p.y(), 0.0 } };
> +       p2 = m * p2;
> +       return { { p2.x(), p2.y() } };
> +}
> +
> +Vector2d rotatedRectSize(const Vector2d &size, const double degrees)
> +{
> +       double rad = degrees / 180.0 * M_PI;
> +       double sa = sin(rad);
> +       double ca = cos(rad);
> +
> +       return { { std::abs(size.x() * ca) + std::abs(size.y() * sa),
> +                  std::abs(size.x() * sa) + std::abs(size.y() * ca) } };
> +}
> +
> +Vector2d point2Vec2d(const Point &p)
> +{
> +       return { { static_cast<double>(p.x), static_cast<double>(p.y) } };
> +}
> +
> +int dw100VerticesForLength(const int length)
> +{
> +       return (length + kDw100BlockSize - 1) / kDw100BlockSize + 1;
> +}
> +
> +} /* namespace */
> +
> +/**
> + * \class libcamera::Dw100VertexMap
> + * \brief Helper class to compute dw100 vertex maps
> + *
> + * The vertex map class represents a helper for handling dewarper vertex maps.
> + * The following sizes and rectangles are important in the system:
> + *
> + * - The sensor size. The number of pixels of the whole sensor (\todo specify
> + *   the crop rectangle).
> + * - The input rectangle to the dewarper. Describes the pixel data flowing into
> + *   the dewarper in sensor coordinates.
> + * - ScalerCrop rectangle. The rectangle that shall be used for all further
> + *   stages. It is applied after lens dewarping but is in sensor coordinate
> + *   space.
> + * - The output size. This defines the size the dewarper should output.
> + *
> + * +------------------------+
> + * |Sensor size             |
> + * |   +----------------+   |
> + * |   |  Input rect    |   |
> + * |   |  +-------------+   |
> + * |   |  | ScalerCrop  |   |
> + * |   |  |             |   |
> + * |   +--+-------------+   |
> + * +------------------------+
> + *
> + * This class implements a vertex map that forms the following pipeline:
> + *
> + * +-------------+    +-------------+    +------------+    +-----------------+
> + * |             | -> |             |    | Transform  |    | Pan/Zoom        |
> + * | Lens Dewarp | -> | Scaler Crop | -> | (H/V Flip, | -> | (Offset, Scale, |
> + * |             |    |             |    | Transpose) |    | Rotate)         |
> + * +-------------+    +-------------+    +------------+    +-----------------+
> + *
> + * \todo Lens dewarp is not yet implemented. An identity map is used instead.
> + *
> + * All parameters are clamped to valid values before creating the vertex map.
> + *
> + * The constraining process works as follows:
> + * - The ScalerCrop rectangle is clamped to the input rectangle
> + * - The ScalerCrop rectangle is transformed by the specified transform
> + *   forming ScalerCropT
> + * - A rectangle of output size is placed in the center of ScalerCropT
> + *   (OutputRect).
> + * - Rotate gets applied to OutputRect.
> + * - Scale is applied, but clamped so that the OutputRect fits completely into
> + *   ScalerCropT (only regarding dimensions, not position)
> + * - Offset is clamped so that the OutputRect lies inside ScalerCropT
> + *
> + * The lens dewarp map is usually calibrated during tuning and maps incoming
> + * pixels to dewarped pixels.

imo an explicit mention of what the effective parameters represent would be
useful.

> + */
> +
> +/**
> + * \enum Dw100VertexMap::ScaleMode
> + * \brief The scale modes available for a vertex map
> + *
> + * \var Dw100VertexMap::Fill
> + * \brief Scale the input to fill the output
> + *
> + * This scale mode does not preserve aspect ratio. Offset and rotation are taken
> + * into account.
> + *
> + * \var Dw100VertexMap::Crop
> + * \brief Crop the input
> + *
> + * This scale mode preserves the aspect ratio. Offset, scale, rotation are taken
> + * into account within the possible limits.
> + */
> +
> +/**
> + * \brief Apply limits on scale and offset
> + *
> + * This function calculates \a effectiveScalerCrop_, \a effectiveScaleX_,
> + * \a effectiveScaleY_ and \a effectiveOffset_ based on the requested scaler
> + * crop, scale, rotation, offset and the selected scale mode, so that the
> + * whole output area is filled with valid input data.
> + */
> +void Dw100VertexMap::applyLimits()
> +{
> +       int ow = outputSize_.width;
> +       int oh = outputSize_.height;
> +       effectiveScalerCrop_ = scalerCrop_.boundedTo(sensorCrop_);

If we error out below this will still be committed. Do we want that?

> +
> +       /* Map the scalerCrop to the input pixel space */
> +       Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(
> +               sensorCrop_, Rectangle(inputSize_));
> +
> +       Size localCropSizeT = localScalerCrop.size();
> +       if (!!(transform_ & Transform::Transpose))
> +               std::swap(localCropSizeT.width, localCropSizeT.height);
> +
> +       Vector2d size = rotatedRectSize(point2Vec2d({ ow, oh }), rotation_);
> +
> +       /* Calculate constraints */
> +       double scale = scale_;
> +       if (mode_ == Crop) {
> +               /* Scale up if needed */
> +               scale = std::max(scale,
> +                                std::max(size.x() / localCropSizeT.width,
> +                                         size.y() / localCropSizeT.height));
> +               effectiveScaleX_ = scale;
> +               effectiveScaleY_ = scale;
> +
> +               size = size / scale;
> +
> +       } else if (mode_ == Fill) {
> +               effectiveScaleX_ = size.x() / localCropSizeT.width;
> +               effectiveScaleY_ = size.y() / localCropSizeT.height;
> +
> +               size.x() /= effectiveScaleX_;
> +               size.y() /= effectiveScaleY_;
> +       } else {
> +               LOG(Converter, Error) << "Unknown mode " << mode_;
> +               return;
> +       }
> +
> +       /*
> +        * Clamp offset. Due to rounding errors, size might be slightly bigger
> +        * than the scaler crop. Clamp the maximum offset to at least 0 to
> +        * prevent a crash (lo > hi) in the std::clamp() calls below.
> +        */
> +       double maxoffX, maxoffY;
> +       maxoffX = std::max(0.0, (localCropSizeT.width - size.x())) * 0.5;
> +       maxoffY = std::max(0.0, (localCropSizeT.height - size.y())) * 0.5;
> +       if (!!(transform_ & Transform::Transpose))
> +               std::swap(maxoffX, maxoffY);
> +
> +       /*
> +        * Transform the offset from sensor space to local space, apply the
> +        * limit and transform back.
> +        */
> +       Vector2d offset = point2Vec2d(offset_);
> +       Matrix3x3 m;
> +
> +       m = makeTransform(effectiveScalerCrop_, localScalerCrop);
> +       offset = transformVector(m, offset);
> +       offset.x() = std::clamp(offset.x(), -maxoffX, maxoffX);
> +       offset.y() = std::clamp(offset.y(), -maxoffY, maxoffY);
> +       m = makeTransform(localScalerCrop, effectiveScalerCrop_);
> +       offset = transformVector(m, offset);
> +       effectiveOffset_.x = offset.x();
> +       effectiveOffset_.y = offset.y();
> +}
> +
> +/**
> + * \fn Dw100VertexMap::setInputSize()
> + * \brief Set the size of the input data
> + * \param[in] size The input size
> + *
> + * To calculate a proper vertex map, the size of the input images must be set.
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setSensorCrop()
> + * \brief Set the crop rectangle that represents the input data
> + * \param[in] rect The input data rectangle in sensor coordinates
> + *
> + * Set the rectangle that represents the input data in sensor coordinates. This
> + * must be specified to properly calculate the vertex map.
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setScalerCrop()
> + * \brief Set the requested scaler crop
> + * \param[in] rect The requested scaler crop rectangle
> + *
> + * Set the requested scaler crop. The actually applied scaler crop can be
> + * queried using \a Dw100VertexMap::effectiveScalerCrop() after calling
> + * Dw100VertexMap::applyLimits().
> + */
> +
> +/**
> + * \fn Dw100VertexMap::effectiveScalerCrop()
> + * \brief Get the effective scaler crop
> + *
> + * \return The effective scaler crop
> + */
> +
> +/**
> + * \fn Dw100VertexMap::scalerCropBounds()
> + * \brief Get the min and max values for the scaler crop
> + *
> + * \return A pair of rectangles that represent the scaler crop min/max values
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setOutputSize()
> + * \brief Set the output size
> + * \param[in] size The size of the output images
> + */
> +
> +/**
> + * \fn Dw100VertexMap::outputSize()
> + * \brief Get the output size
> + * \return The output size
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setTransform()
> + * \brief Sets the transform to apply
> + * \param[in] transform The transform
> + */
> +
> +/**
> + * \fn Dw100VertexMap::transform()
> + * \brief Get the transform
> + * \return The transform
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setScale()
> + * \brief Sets the scale to apply
> + * \param[in] scale The scale
> + *
> + * Set the requested scale. The actually applied scale can be queried using \a
> + * Dw100VertexMap::effectiveScale() after calling \a
> + * Dw100VertexMap::applyLimits().
> + */
> +
> +/**
> + * \fn Dw100VertexMap::effectiveScale()
> + * \brief Get the effective scale
> + *
> + * Returns the actual scale applied to the input pixels in x and y direction. So
> + * a value of [2.0, 1.5] means that every input pixel is scaled to cover 2
> + * output pixels in x-direction and 1.5 in y-direction.

Oh I see, here it is :)

> + *
> + * \return The effective scale
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setRotation()
> + * \brief Sets the rotation to apply
> + * \param[in] rotation The rotation in degrees
> + *
> + * The rotation is in the clockwise direction to match the rotation in
> + * CameraConfiguration::orientation.
> + */
> +
> +/**
> + * \fn Dw100VertexMap::rotation()
> + * \brief Get the rotation
> + * \return The rotation in degrees
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setOffset()
> + * \brief Sets the offset to apply
> + * \param[in] offset The offset
> + *
> + * Set the requested offset. The actually applied offset can be queried using \a
> + * Dw100VertexMap::effectiveOffset() after calling \a
> + * Dw100VertexMap::applyLimits().
> + */
> +
> +/**
> + * \fn Dw100VertexMap::effectiveOffset()
> + * \brief Get the effective offset
> + *
> + * Returns the actual offset applied to the input pixels in ScalerCrop
> + * coordinates.
> + *
> + * \return The effective offset
> + */
> +
> +/**
> + * \fn Dw100VertexMap::setMode()
> + * \brief Sets the scaling mode to apply
> + * \param[in] mode The mode
> + */
> +
> +/**
> + * \fn Dw100VertexMap::mode()
> + * \brief Get the scaling mode
> + * \return The scaling mode
> + */
> +
> +/**
> + * \brief Get the dw100 vertex map
> + *
> + * Calculates the vertex map as a vector of hardware specific entries.
> + *
> + * \return The vertex map
> + */
> +std::vector<uint32_t> Dw100VertexMap::getVertexMap()
> +{
> +       int ow = outputSize_.width;
> +       int oh = outputSize_.height;
> +       int tileCountW = dw100VerticesForLength(ow);
> +       int tileCountH = dw100VerticesForLength(oh);
> +
> +       applyLimits();
> +
> +       /*
> +        * libcamera handles all crop rectangles in sensor space. But the
> +        * dewarper "sees" only the pixels it gets passed. Note that these might
> +        * not cover exactly the max sensor crop, as there might be a crop
> +        * between ISP and dewarper to crop to a format supported by the
> +        * dewarper. effectiveScalerCrop_ is the crop in sensor space that gets
> +        * fed into the dewarper. localScalerCrop is the sensor crop mapped to
> +        * the data that is fed into the dewarper.

Aha, this is what I was looking for.

> +        */
> +       Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(
> +               sensorCrop_, Rectangle(inputSize_));
> +       Size localCropSizeT = localScalerCrop.size();
> +       if (!!(transform_ & Transform::Transpose))
> +               std::swap(localCropSizeT.width, localCropSizeT.height);
> +
> +       /*
> +        * The dw100 has a peculiarity in its interpolation that has to be
> +        * taken into account for pixel-perfect use. To explain this, only the
> +        * x direction is used; the vertical axis behaves the same.
> +        *
> +        * Let's start with a pixel perfect 1:1 mapping of an image with a width
> +        * of 64 pixels. The coordinates of the vertex map would then be:
> +        * 0 -- 16 -- 32 -- 48 -- 64
> +        * Note how the last coordinate lies outside the image (which ends at
> +        * 63) as it is basically the beginning of the next macro block.
> +        *
> +        * If we zoom out a bit, we might end up with something like
> +        * -10 -- 0 -- 32 -- 64 -- 74
> +        * As the dewarper coordinates are unsigned it actually sees
> +        * 0 -- 0 -- 32 -- 64 -- 74
> +        * This leads to stretched pixels at the beginning and black for
> +        * everything beyond pixel 63.
> +        *
> +        * Now let's rotate the image by 180 degrees. A trivial rotation would
> +        * end up with:
> +        *
> +        * 64 -- 48 -- 32 -- 16 -- 0
> +        *
> +        * But as the first column now points to pixel 64 we get a single black
> +        * line. So for a proper 180 degree rotation, the coordinates need to be
> +        *
> +        * 63 -- 47 -- 31 -- 15 -- -1
> +        *
> +        * The -1 is clamped to 0 again, leading to a theoretical slight
> +        * interpolation error on the last 16 pixels.
> +        *
> +        * To create this proper transformation, there are two things to do:
> +        *
> +        * 1. The rotation centers are offset by -0.5. This evens out for no
> +        *    rotation, and leads to a coordinate offset of -1 on 180 degree
> +        *    rotations.
> +        * 2. The transformations (flip and transpose) need to act on a size
> +        *    reduced by 1 to get the same effect.
> +        */
> +       Vector2d centerS{ { localCropSizeT.width * 0.5 - 0.5,
> +                           localCropSizeT.height * 0.5 - 0.5 } };
> +       Vector2d centerD{ { ow * 0.5 - 0.5,
> +                           oh * 0.5 - 0.5 } };
> +
> +       LOG(Converter, Debug)
> +               << "Apply vertex map for"
> +               << " inputSize: " << inputSize_
> +               << " outputSize: " << outputSize_
> +               << " Transform: " << transformToString(transform_)
> +               << "\n effectiveScalerCrop: " << effectiveScalerCrop_
> +               << " localCropSizeT: " << localCropSizeT
> +               << " scaleX: " << effectiveScaleX_
> +               << " scaleY: " << effectiveScaleY_
> +               << " rotation: " << rotation_
> +               << " offset: " << effectiveOffset_;
> +
> +       Matrix3x3 outputToSensor = Matrix3x3::identity();
> +       /* Move to center of output */
> +       outputToSensor = makeTranslate(-centerD) * outputToSensor;
> +       outputToSensor = makeRotate(-rotation_) * outputToSensor;
> +       outputToSensor = makeScale(1.0 / effectiveScaleX_, 1.0 / effectiveScaleY_) * outputToSensor;
> +       /* Move to top left of localScalerCropT */
> +       outputToSensor = makeTranslate(centerS) * outputToSensor;
> +       outputToSensor = makeTransform(-transform_, localCropSizeT.shrunkBy({ 1, 1 })) *
> +                        outputToSensor;
> +       /* Transform from "within localScalerCrop" to input reference frame */
> +       outputToSensor = makeTranslate(localScalerCrop.x, localScalerCrop.y) * outputToSensor;
> +       outputToSensor = makeTransform(localScalerCrop, effectiveScalerCrop_) * outputToSensor;
> +       outputToSensor = makeTranslate(point2Vec2d(effectiveOffset_)) * outputToSensor;
> +
> +       Matrix3x3 sensorToInput = makeTransform(effectiveScalerCrop_, localScalerCrop);
> +
> +       /*
> +        * For every output tile, calculate the position of the corners in the
> +        * input image.
> +        */
> +       std::vector<uint32_t> res;
> +       res.reserve(tileCountW * tileCountH);
> +       for (int y = 0; y < tileCountH; y++) {
> +               for (int x = 0; x < tileCountW; x++) {
> +                       Vector2d p{ { static_cast<double>(x) * kDw100BlockSize,
> +                                     static_cast<double>(y) * kDw100BlockSize } };
> +                       p = p.max(0.0).min(Vector2d{ { static_cast<double>(ow),
> +                                                      static_cast<double>(oh) } });
> +
> +                       p = transformPoint(outputToSensor, p);
> +
> +                       /*
> +                        * \todo: Transformations in sensor space to be added
> +                        * here.
> +                        */
> +
> +                       p = transformPoint(sensorToInput, p);
> +
> +                       /* Convert to fixed point */
> +                       uint32_t v = static_cast<uint32_t>(p.y() * 16) << 16 |
> +                                    (static_cast<uint32_t>(p.x() * 16) & 0xffff);
> +                       res.push_back(v);
> +               }
> +       }
> +
> +       return res;
> +}

Looks good to me.


Thanks,

Paul

> +
> +} /* namespace libcamera */
> diff --git a/src/libcamera/converter/meson.build b/src/libcamera/converter/meson.build
> index fe2dcebb67da..9f59b57c26b9 100644
> --- a/src/libcamera/converter/meson.build
> +++ b/src/libcamera/converter/meson.build
> @@ -1,6 +1,7 @@
>  # SPDX-License-Identifier: CC0-1.0
>  
>  libcamera_internal_sources += files([
> +        'converter_dw100_vertexmap.cpp',
>          'converter_dw100.cpp',
>          'converter_v4l2_m2m.cpp'
>  ])
> -- 
> 2.48.1
>
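
As background for the affine-matrix approach mentioned in the v2 changelog,
below is a small self-contained sketch of the same left-multiplication
composition style used in getVertexMap(). Plain arrays stand in for
libcamera's Matrix type; the point, centre and angle are illustrative only:

  #include <array>
  #include <cmath>
  #include <cstdio>

  using Mat3 = std::array<std::array<double, 3>, 3>;

  static Mat3 identity()
  {
          return { { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } } };
  }

  static Mat3 mul(const Mat3 &a, const Mat3 &b)
  {
          Mat3 r{};
          for (int i = 0; i < 3; i++)
                  for (int j = 0; j < 3; j++)
                          for (int k = 0; k < 3; k++)
                                  r[i][j] += a[i][k] * b[k][j];
          return r;
  }

  /* Same shapes as makeTranslate() and makeRotate() in the patch. */
  static Mat3 makeTranslate(double tx, double ty)
  {
          Mat3 m = identity();
          m[0][2] = tx;
          m[1][2] = ty;
          return m;
  }

  static Mat3 makeRotate(double degrees)
  {
          double rad = degrees / 180.0 * M_PI;
          Mat3 m = identity();
          m[0][0] = std::cos(rad);
          m[0][1] = -std::sin(rad);
          m[1][0] = std::sin(rad);
          m[1][1] = std::cos(rad);
          return m;
  }

  int main()
  {
          /*
           * Rotate the point (150, 100) by 90 degrees around (100, 100):
           * translate the centre to the origin, rotate, translate back.
           * Each step is left-multiplied onto the accumulated matrix. With
           * y pointing down, a positive angle rotates clockwise here.
           */
          Mat3 m = identity();
          m = mul(makeTranslate(-100, -100), m);
          m = mul(makeRotate(90), m);
          m = mul(makeTranslate(100, 100), m);

          double x = 150, y = 100;
          double rx = m[0][0] * x + m[0][1] * y + m[0][2];
          double ry = m[1][0] * x + m[1][1] * y + m[1][2];
          std::printf("(%.1f, %.1f)\n", rx, ry); /* (100.0, 150.0) */
          return 0;
  }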

Patch

diff --git a/include/libcamera/internal/converter/converter_dw100_vertexmap.h b/include/libcamera/internal/converter/converter_dw100_vertexmap.h
new file mode 100644
index 000000000000..e72cb72bb9f1
--- /dev/null
+++ b/include/libcamera/internal/converter/converter_dw100_vertexmap.h
@@ -0,0 +1,76 @@ 
+#pragma once
+
+#include <assert.h>
+#include <cmath>
+#include <stdint.h>
+#include <vector>
+
+#include <libcamera/base/span.h>
+
+#include <libcamera/geometry.h>
+#include <libcamera/transform.h>
+
+namespace libcamera {
+
+class Dw100VertexMap
+{
+public:
+	enum ScaleMode {
+		Fill = 0,
+		Crop = 1,
+	};
+
+	void applyLimits();
+	void setInputSize(const Size &size)
+	{
+		inputSize_ = size;
+		scalerCrop_ = Rectangle(size);
+	}
+
+	void setSensorCrop(const Rectangle &rect) { sensorCrop_ = rect; }
+
+	void setScalerCrop(const Rectangle &rect) { scalerCrop_ = rect; }
+	const Rectangle &effectiveScalerCrop() const { return effectiveScalerCrop_; }
+	std::pair<Rectangle, Rectangle> scalerCropBounds() const
+	{
+		return { Rectangle(sensorCrop_.x, sensorCrop_.y, 1, 1),
+			 sensorCrop_ };
+	}
+
+	void setOutputSize(const Size &size) { outputSize_ = size; }
+	const Size &outputSize() const { return outputSize_; }
+
+	void setTransform(const Transform &transform) { transform_ = transform; }
+	const Transform &transform() const { return transform_; }
+
+	void setScale(const float scale) { scale_ = scale; }
+	float effectiveScale() const { return (effectiveScaleX_ + effectiveScaleY_) * 0.5; }
+
+	void setRotation(const float rotation) { rotation_ = rotation; }
+	float rotation() const { return rotation_; }
+
+	void setOffset(const Point &offset) { offset_ = offset; }
+	const Point &effectiveOffset() const { return effectiveOffset_; }
+
+	void setMode(const ScaleMode mode) { mode_ = mode; }
+	ScaleMode mode() const { return mode_; }
+
+	std::vector<uint32_t> getVertexMap();
+
+private:
+	Rectangle scalerCrop_;
+	Rectangle sensorCrop_;
+	Transform transform_ = Transform::Identity;
+	Size inputSize_;
+	Size outputSize_;
+	Point offset_;
+	double scale_ = 1.0;
+	double rotation_ = 0.0;
+	ScaleMode mode_ = Fill;
+	double effectiveScaleX_;
+	double effectiveScaleY_;
+	Point effectiveOffset_;
+	Rectangle effectiveScalerCrop_;
+};
+
+} /* namespace libcamera */
diff --git a/include/libcamera/internal/converter/meson.build b/include/libcamera/internal/converter/meson.build
index 85007a4b0f8b..128c644cb73f 100644
--- a/include/libcamera/internal/converter/meson.build
+++ b/include/libcamera/internal/converter/meson.build
@@ -2,5 +2,6 @@ 
 
 libcamera_internal_headers += files([
     'converter_dw100.h',
+    'converter_dw100_vertexmap.h',
     'converter_v4l2_m2m.h',
 ])
diff --git a/src/libcamera/converter/converter_dw100_vertexmap.cpp b/src/libcamera/converter/converter_dw100_vertexmap.cpp
new file mode 100644
index 000000000000..0e930479b6f7
--- /dev/null
+++ b/src/libcamera/converter/converter_dw100_vertexmap.cpp
@@ -0,0 +1,566 @@ 
+#include "libcamera/internal/converter/converter_dw100_vertexmap.h"
+
+#include <algorithm>
+#include <assert.h>
+#include <cmath>
+#include <stdint.h>
+#include <utility>
+#include <vector>
+
+#include <libcamera/base/log.h>
+#include <libcamera/base/span.h>
+
+#include <libcamera/geometry.h>
+#include <libcamera/transform.h>
+
+#include "libcamera/internal/vector.h"
+
+constexpr int kDw100BlockSize = 16;
+
+namespace libcamera {
+
+LOG_DECLARE_CATEGORY(Converter)
+namespace {
+
+using Vector2d = Vector<double, 2>;
+using Vector3d = Vector<double, 3>;
+using Matrix3x3 = Matrix<double, 3, 3>;
+
+Matrix3x3 makeTranslate(const double tx, const double ty)
+{
+	Matrix3x3 m = Matrix3x3::identity();
+	m[0][2] = tx;
+	m[1][2] = ty;
+	return m;
+}
+
+Matrix3x3 makeTranslate(const Vector2d &t)
+{
+	return makeTranslate(t.x(), t.y());
+}
+
+Matrix3x3 makeRotate(const double degrees)
+{
+	double rad = degrees / 180.0 * M_PI;
+	double sa = std::sin(rad);
+	double ca = std::cos(rad);
+
+	Matrix3x3 m = Matrix3x3::identity();
+	m[0][0] = ca;
+	m[0][1] = -sa;
+	m[1][0] = sa;
+	m[1][1] = ca;
+	return m;
+}
+
+Matrix3x3 makeScale(const double sx, const double sy)
+{
+	Matrix3x3 m = Matrix3x3::identity();
+	m[0][0] = sx;
+	m[1][1] = sy;
+	return m;
+}
+
+/**
+ * \param t The transform to apply
+ * \param size The size of the rectangle that is transformed
+ *
+ * Create a matrix that represents the transform done by \a t. It assumes
+ * that the origin of the coordinate system is at the top left corner of the
+ * rectangle.
+ */
+Matrix3x3 makeTransform(const Transform &t, const Size &size)
+{
+	Matrix3x3 m = Matrix3x3::identity();
+	double wm = size.width * 0.5;
+	double hm = size.height * 0.5;
+	m = makeTranslate(-wm, -hm) * m;
+
+	if (!!(t & Transform::HFlip))
+		m = makeScale(-1, 1) * m;
+
+	if (!!(t & Transform::VFlip))
+		m = makeScale(1, -1) * m;
+
+	if (!!(t & Transform::Transpose)) {
+		m = makeRotate(-90) * m;
+		m = makeScale(1, -1) * m;
+		std::swap(wm, hm);
+	}
+
+	m = makeTranslate(wm, hm) * m;
+
+	return m;
+}
+
+/**
+ * \param from The source rectangle
+ * \param to The destination rectangle
+ *
+ * Create a matrix that transforms from the coordinate system of rectangle \a
+ * from into the coordinate system of rectangle \a to, by overlaying the
+ * rectangles.
+ *
+ * \see Rectangle::transformedBetween()
+ */
+Matrix3x3 makeTransform(const Rectangle &from, const Rectangle &to)
+{
+	Matrix3x3 m = Matrix3x3::identity();
+	double sx = to.width / static_cast<double>(from.width);
+	double sy = to.height / static_cast<double>(from.height);
+	m = makeTranslate(-from.x, -from.y) * m;
+	m = makeScale(sx, sy) * m;
+	m = makeTranslate(to.x, to.y) * m;
+	return m;
+}
+
+Vector2d transformPoint(const Matrix3x3 &m, const Vector2d &p)
+{
+	Vector3d p2{ { p.x(), p.y(), 1.0 } };
+	p2 = m * p2;
+	return { { p2.x() / p2.z(), p2.y() / p2.z() } };
+}
+
+Vector2d transformVector(const Matrix3x3 &m, const Vector2d &p)
+{
+	Vector3d p2{ { p.x(), p.y(), 0.0 } };
+	p2 = m * p2;
+	return { { p2.x(), p2.y() } };
+}
+
+Vector2d rotatedRectSize(const Vector2d &size, const double degrees)
+{
+	double rad = degrees / 180.0 * M_PI;
+	double sa = sin(rad);
+	double ca = cos(rad);
+
+	return { { std::abs(size.x() * ca) + std::abs(size.y() * sa),
+		   std::abs(size.x() * sa) + std::abs(size.y() * ca) } };
+}
+
+Vector2d point2Vec2d(const Point &p)
+{
+	return { { static_cast<double>(p.x), static_cast<double>(p.y) } };
+}
+
+int dw100VerticesForLength(const int length)
+{
+	return (length + kDw100BlockSize - 1) / kDw100BlockSize + 1;
+}
+
+} /* namespace */
+
+/**
+ * \class libcamera::Dw100VertexMap
+ * \brief Helper class to compute dw100 vertex maps
+ *
+ * The vertex map class represents a helper for handling dewarper vertex maps.
+ * The following sizes and rectangles are important in the system:
+ *
+ * - The sensor size. The number of pixels of the whole sensor (\todo specify
+ *   the crop rectangle).
+ * - The input rectangle to the dewarper. Describes the pixel data flowing into
+ *   the dewarper in sensor coordinates.
+ * - ScalerCrop rectangle. The rectangle that shall be used for all further
+ *   stages. It is applied after lens dewarping but is in sensor coordinate
+ *   space.
+ * - The output size. This defines the size the dewarper should output.
+ *
+ * +------------------------+
+ * |Sensor size             |
+ * |   +----------------+   |
+ * |   |  Input rect    |   |
+ * |   |  +-------------+   |
+ * |   |  | ScalerCrop  |   |
+ * |   |  |             |   |
+ * |   +--+-------------+   |
+ * +------------------------+
+ *
+ * This class implements a vertex map that forms the following pipeline:
+ *
+ * +-------------+    +-------------+    +------------+    +-----------------+
+ * |             | -> |             |    | Transform  |    | Pan/Zoom        |
+ * | Lens Dewarp | -> | Scaler Crop | -> | (H/V Flip, | -> | (Offset, Scale, |
+ * |             |    |             |    | Transpose) |    | Rotate)         |
+ * +-------------+    +-------------+    +------------+    +-----------------+
+ *
+ * \todo Lens dewarp is not yet implemented. An identity map is used instead.
+ *
+ * All parameters are clamped to valid values before creating the vertex map.
+ *
+ * The constraining process works as follows:
+ * - The ScalerCrop rectangle is clamped to the input rectangle
+ * - The ScalerCrop rectangle is transformed by the specified transform
+ *   forming ScalerCropT
+ * - A rectangle of output size is placed in the center of ScalerCropT
+ *   (OutputRect).
+ * - Rotate gets applied to OutputRect.
+ * - Scale is applied, but clamped so that the OutputRect fits completely into
+ *   ScalerCropT (only regarding dimensions, not position)
+ * - Offset is clamped so that the OutputRect lies inside ScalerCropT
+ *
+ * The lens dewarp map is usually calibrated during tuning and maps incoming
+ * pixels to dewarped pixels.
+ */
+
+/**
+ * \enum Dw100VertexMap::ScaleMode
+ * \brief The scale modes available for a vertex map
+ *
+ * \var Dw100VertexMap::Fill
+ * \brief Scale the input to fill the output
+ *
+ * This scale mode does not preserve aspect ratio. Offset and rotation are taken
+ * into account.
+ *
+ * \var Dw100VertexMap::Crop
+ * \brief Crop the input
+ *
+ * This scale mode preserves the aspect ratio. Offset, scale, rotation are taken
+ * into account within the possible limits.
+ */
+
+/**
+ * \brief Apply limits on scale and offset
+ *
+ * This function calculates \a effectiveScalerCrop_, \a effectiveScaleX_,
+ * \a effectiveScaleY_ and \a effectiveOffset_ based on the requested scaler
+ * crop, scale, rotation, offset and the selected scale mode, so that the
+ * whole output area is filled with valid input data.
+ */
+void Dw100VertexMap::applyLimits()
+{
+	int ow = outputSize_.width;
+	int oh = outputSize_.height;
+	effectiveScalerCrop_ = scalerCrop_.boundedTo(sensorCrop_);
+
+	/* Map the scalerCrop to the input pixel space */
+	Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(
+		sensorCrop_, Rectangle(inputSize_));
+
+	Size localCropSizeT = localScalerCrop.size();
+	if (!!(transform_ & Transform::Transpose))
+		std::swap(localCropSizeT.width, localCropSizeT.height);
+
+	Vector2d size = rotatedRectSize(point2Vec2d({ ow, oh }), rotation_);
+
+	/* Calculate constraints */
+	double scale = scale_;
+	if (mode_ == Crop) {
+		/* Scale up if needed */
+		scale = std::max(scale,
+				 std::max(size.x() / localCropSizeT.width,
+					  size.y() / localCropSizeT.height));
+		effectiveScaleX_ = scale;
+		effectiveScaleY_ = scale;
+
+		size = size / scale;
+
+	} else if (mode_ == Fill) {
+		effectiveScaleX_ = size.x() / localCropSizeT.width;
+		effectiveScaleY_ = size.y() / localCropSizeT.height;
+
+		size.x() /= effectiveScaleX_;
+		size.y() /= effectiveScaleY_;
+	} else {
+		LOG(Converter, Error) << "Unknown mode " << mode_;
+		return;
+	}
+
+	/*
+	 * Clamp offset. Due to rounding errors, size might be slightly bigger
+	 * than the scaler crop. Clamp the maximum offset to at least 0 to
+	 * prevent a crash (lo > hi) in the std::clamp() calls below.
+	 */
+	double maxoffX, maxoffY;
+	maxoffX = std::max(0.0, (localCropSizeT.width - size.x())) * 0.5;
+	maxoffY = std::max(0.0, (localCropSizeT.height - size.y())) * 0.5;
+	if (!!(transform_ & Transform::Transpose))
+		std::swap(maxoffX, maxoffY);
+
+	/*
+	 * Transform the offset from sensor space to local space, apply the
+	 * limit and transform back.
+	 */
+	Vector2d offset = point2Vec2d(offset_);
+	Matrix3x3 m;
+
+	m = makeTransform(effectiveScalerCrop_, localScalerCrop);
+	offset = transformVector(m, offset);
+	offset.x() = std::clamp(offset.x(), -maxoffX, maxoffX);
+	offset.y() = std::clamp(offset.y(), -maxoffY, maxoffY);
+	m = makeTransform(localScalerCrop, effectiveScalerCrop_);
+	offset = transformVector(m, offset);
+	effectiveOffset_.x = offset.x();
+	effectiveOffset_.y = offset.y();
+}
+
+/**
+ * \fn Dw100VertexMap::setInputSize()
+ * \brief Set the size of the input data
+ * \param[in] size The input size
+ *
+ * To calculate a proper vertex map, the size of the input images must be set.
+ */
+
+/**
+ * \fn Dw100VertexMap::setSensorCrop()
+ * \brief Set the crop rectangle that represents the input data
+ * \param[in] rect The input data rectangle in sensor coordinates
+ *
+ * Set the rectangle that represents the input data in sensor coordinates. This
+ * must be specified to properly calculate the vertex map.
+ */
+
+/**
+ * \fn Dw100VertexMap::setScalerCrop()
+ * \brief Set the requested scaler crop
+ * \param[in] rect The requested scaler crop rectangle
+ *
+ * Set the requested scaler crop. The actually applied scaler crop can be
+ * queried using \a Dw100VertexMap::effectiveScalerCrop() after calling
+ * Dw100VertexMap::applyLimits().
+ */
+
+/**
+ * \fn Dw100VertexMap::effectiveScalerCrop()
+ * \brief Get the effective scaler crop
+ *
+ * \return The effective scaler crop
+ */
+
+/**
+ * \fn Dw100VertexMap::scalerCropBounds()
+ * \brief Get the min and max values for the scaler crop
+ *
+ * \return A pair of rectangles that represent the scaler crop min/max values
+ */
+
+/**
+ * \fn Dw100VertexMap::setOutputSize()
+ * \brief Set the output size
+ * \param[in] size The size of the output images
+ */
+
+/**
+ * \fn Dw100VertexMap::outputSize()
+ * \brief Get the output size
+ * \return The output size
+ */
+
+/**
+ * \fn Dw100VertexMap::setTransform()
+ * \brief Sets the transform to apply
+ * \param[in] transform The transform
+ */
+
+/**
+ * \fn Dw100VertexMap::transform()
+ * \brief Get the transform
+ * \return The transform
+ */
+
+/**
+ * \fn Dw100VertexMap::setScale()
+ * \brief Sets the scale to apply
+ * \param[in] scale The scale
+ *
+ * Set the requested scale. The actually applied scale can be queried using \a
+ * Dw100VertexMap::effectiveScale() after calling \a
+ * Dw100VertexMap::applyLimits().
+ */
+
+/**
+ * \fn Dw100VertexMap::effectiveScale()
+ * \brief Get the effective scale
+ *
+ * Returns the actual scale applied to the input pixels in x and y direction. So
+ * a value of [2.0, 1.5] means that every input pixel is scaled to cover 2
+ * output pixels in x-direction and 1.5 in y-direction.
+ *
+ * \return The effective scale
+ */
+
+/**
+ * \fn Dw100VertexMap::setRotation()
+ * \brief Sets the rotation to apply
+ * \param[in] rotation The rotation in degrees
+ *
+ * The rotation is in the clockwise direction to match the rotation in
+ * CameraConfiguration::orientation.
+ */
+
+/**
+ * \fn Dw100VertexMap::rotation()
+ * \brief Get the rotation
+ * \return The rotation in degrees
+ */
+
+/**
+ * \fn Dw100VertexMap::setOffset()
+ * \brief Sets the offset to apply
+ * \param[in] offset The offset
+ *
+ * Set the requested offset. The actually applied offset can be queried using \a
+ * Dw100VertexMap::effectiveOffset() after calling \a
+ * Dw100VertexMap::applyLimits().
+ */
+
+/**
+ * \fn Dw100VertexMap::effectiveOffset()
+ * \brief Get the effective offset
+ *
+ * Returns the actual offset applied to the input pixels in ScalerCrop
+ * coordinates.
+ *
+ * \return The effective offset
+ */
+
+/**
+ * \fn Dw100VertexMap::setMode()
+ * \brief Sets the scaling mode to apply
+ * \param[in] mode The mode
+ */
+
+/**
+ * \fn Dw100VertexMap::mode()
+ * \brief Get the scaling mode
+ * \return The scaling mode
+ */
+
+/**
+ * \brief Get the dw100 vertex map
+ *
+ * Calculates the vertex map as a vector of hardware specific entries.
+ *
+ * \return The vertex map
+ */
+std::vector<uint32_t> Dw100VertexMap::getVertexMap()
+{
+	int ow = outputSize_.width;
+	int oh = outputSize_.height;
+	int tileCountW = dw100VerticesForLength(ow);
+	int tileCountH = dw100VerticesForLength(oh);
+
+	applyLimits();
+
+	/*
+	 * libcamera handles all crop rectangles in sensor space. But the
+	 * dewarper "sees" only the pixels it gets passed. Note that these might
+	 * not cover exactly the max sensor crop, as there might be a crop
+	 * between ISP and dewarper to crop to a format supported by the
+	 * dewarper. effectiveScalerCrop_ is the crop in sensor space that gets
+	 * fed into the dewarper. localScalerCrop is the sensor crop mapped to
+	 * the data that is fed into the dewarper.
+	 */
+	Rectangle localScalerCrop = effectiveScalerCrop_.transformedBetween(
+		sensorCrop_, Rectangle(inputSize_));
+	Size localCropSizeT = localScalerCrop.size();
+	if (!!(transform_ & Transform::Transpose))
+		std::swap(localCropSizeT.width, localCropSizeT.height);
+
+	/*
+	 * The dw100 has a peculiarity in its interpolation that has to be
+	 * taken into account for pixel-perfect use. To explain this, only the
+	 * x direction is used; the vertical axis behaves the same.
+	 *
+	 * Let's start with a pixel perfect 1:1 mapping of an image with a width
+	 * of 64 pixels. The coordinates of the vertex map would then be:
+	 * 0 -- 16 -- 32 -- 48 -- 64
+	 * Note how the last coordinate lies outside the image (which ends at
+	 * 63) as it is basically the beginning of the next macro block.
+	 *
+	 * If we zoom out a bit, we might end up with something like
+	 * -10 -- 0 -- 32 -- 64 -- 74
+	 * As the dewarper coordinates are unsigned it actually sees
+	 * 0 -- 0 -- 32 -- 64 -- 74
+	 * This leads to stretched pixels at the beginning and black for
+	 * everything beyond pixel 63.
+	 *
+	 * Now let's rotate the image by 180 degrees. A trivial rotation would
+	 * end up with:
+	 *
+	 * 64 -- 48 -- 32 -- 16 -- 0
+	 *
+	 * But as the first column now points to pixel 64 we get a single black
+	 * line. So for a proper 180 degree rotation, the coordinates need to be
+	 *
+	 * 63 -- 47 -- 31 -- 15 -- -1
+	 *
+	 * The -1 is clamped to 0 again, leading to a theoretical slight
+	 * interpolation error on the last 16 pixels.
+	 *
+	 * To create this proper transformation, there are two things to do:
+	 *
+	 * 1. The rotation centers are offset by -0.5. This evens out for no
+	 *    rotation, and leads to a coordinate offset of -1 on 180 degree
+	 *    rotations.
+	 * 2. The transformations (flip and transpose) need to act on a size
+	 *    reduced by 1 to get the same effect.
+	 */
+	Vector2d centerS{ { localCropSizeT.width * 0.5 - 0.5,
+			    localCropSizeT.height * 0.5 - 0.5 } };
+	Vector2d centerD{ { ow * 0.5 - 0.5,
+			    oh * 0.5 - 0.5 } };
+
+	LOG(Converter, Debug)
+		<< "Apply vertex map for"
+		<< " inputSize: " << inputSize_
+		<< " outputSize: " << outputSize_
+		<< " Transform: " << transformToString(transform_)
+		<< "\n effectiveScalerCrop: " << effectiveScalerCrop_
+		<< " localCropSizeT: " << localCropSizeT
+		<< " scaleX: " << effectiveScaleX_
+		<< " scaleY: " << effectiveScaleY_
+		<< " rotation: " << rotation_
+		<< " offset: " << effectiveOffset_;
+
+	Matrix3x3 outputToSensor = Matrix3x3::identity();
+	/* Move to center of output */
+	outputToSensor = makeTranslate(-centerD) * outputToSensor;
+	outputToSensor = makeRotate(-rotation_) * outputToSensor;
+	outputToSensor = makeScale(1.0 / effectiveScaleX_, 1.0 / effectiveScaleY_) * outputToSensor;
+	/* Move to top left of localScalerCropT */
+	outputToSensor = makeTranslate(centerS) * outputToSensor;
+	outputToSensor = makeTransform(-transform_, localCropSizeT.shrunkBy({ 1, 1 })) *
+			 outputToSensor;
+	/* Transform from "within localScalerCrop" to input reference frame */
+	outputToSensor = makeTranslate(localScalerCrop.x, localScalerCrop.y) * outputToSensor;
+	outputToSensor = makeTransform(localScalerCrop, effectiveScalerCrop_) * outputToSensor;
+	outputToSensor = makeTranslate(point2Vec2d(effectiveOffset_)) * outputToSensor;
+
+	Matrix3x3 sensorToInput = makeTransform(effectiveScalerCrop_, localScalerCrop);
+
+	/*
+	 * For every output tile, calculate the position of the corners in the
+	 * input image.
+	 */
+	std::vector<uint32_t> res;
+	res.reserve(tileCountW * tileCountH);
+	for (int y = 0; y < tileCountH; y++) {
+		for (int x = 0; x < tileCountW; x++) {
+			Vector2d p{ { static_cast<double>(x) * kDw100BlockSize,
+				      static_cast<double>(y) * kDw100BlockSize } };
+			p = p.max(0.0).min(Vector2d{ { static_cast<double>(ow),
+						       static_cast<double>(oh) } });
+
+			p = transformPoint(outputToSensor, p);
+
+			/*
+			 * \todo: Transformations in sensor space to be added
+			 * here.
+			 */
+
+			p = transformPoint(sensorToInput, p);
+
+			/* Convert to fixed point */
+			uint32_t v = static_cast<uint32_t>(p.y() * 16) << 16 |
+				     (static_cast<uint32_t>(p.x() * 16) & 0xffff);
+			res.push_back(v);
+		}
+	}
+
+	return res;
+}
+
+} /* namespace libcamera */
diff --git a/src/libcamera/converter/meson.build b/src/libcamera/converter/meson.build
index fe2dcebb67da..9f59b57c26b9 100644
--- a/src/libcamera/converter/meson.build
+++ b/src/libcamera/converter/meson.build
@@ -1,6 +1,7 @@ 
 # SPDX-License-Identifier: CC0-1.0
 
 libcamera_internal_sources += files([
+        'converter_dw100_vertexmap.cpp',
         'converter_dw100.cpp',
         'converter_v4l2_m2m.cpp'
 ])