Message ID: tencent_5B12F44775EB76DC77A7C123AD000ADB3905@qq.com
State: Superseded
Let me send this on the mailing list too to avoid duplicate comments. (Everything is the same as the private one that I sent, Siyuan.)

Hi Siyuan,

Thank you for the patch. Here's just a quick review, mostly on the beefy parts that you added.

On Wed, Aug 04, 2021 at 07:00:47AM +0100, Siyuan Fan wrote:
> From: Fan Siyuan <siyuan.fan@foxmail.com>
>
> Add class PipelineHandlerISP to connect three parts, including application, Camera Sensor
> and ISP processing. The process is roughly as follows.
>
> The application requests the format and size of rgb image(e.x. RGB888 640x480),
> then pipeline handler configures the format and size of raw image (e.x. BGGR10 640X480)
> and calls V4L2VideoDevice::allocateBuffers and V4L2VideoDevice::queueBuffer to queue raw buffer.
> When bufferReady signal is emitted, which means that the raw buffer is filled with data captured by camera sensor.
> Later, pipeline handler calls the bufferReady function to pass the raw buffer to class isp::processing
> and the raw buffer is processed by isp pipeline. When processing done, the rgb buffer is exported to application
> by PipelineHandlerISP::exportFrameBuffers.

The commit message should be wrapped at 80 characters. I thought git commit wrapped text automatically?

You don't need so much detail when explaining the pipeline (I used to write with this much detail too though :) ). That is, you don't need to discuss all the function calls; you can explain things conceptually. The names of the functions should match the concepts anyway.

Also, all pipeline handlers follow the same general flow: take buffers from the application and queue buffers to the sensor, get the buffer back from the sensor and queue it to the ISP, then get it back and return it to the application. What's special in your case is that your ISP is in software, not in a V4L2 device, so you have to manage some buffers yourself.
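Roughly, the round trip looks like this (just a conceptual sketch, not tied to any particular API):

    application --[rgb buffer, via Request]--> pipeline handler
    pipeline handler --[empty raw buffer]--> sensor (V4L2 video device)
    sensor --[filled raw buffer]--> software ISP (your own thread)
    software ISP --[filled rgb buffer]--> application (request completed)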
>
> Signed-off-by: Fan Siyuan <siyuan.fan@foxmail.com>
>
> ---
>  src/libcamera/pipeline/isp/isp.cpp | 323 +++++++++++++++++++++++++++++
>  1 file changed, 323 insertions(+)
>  create mode 100644 src/libcamera/pipeline/isp/isp.cpp
>
> diff --git a/src/libcamera/pipeline/isp/isp.cpp b/src/libcamera/pipeline/isp/isp.cpp
> new file mode 100644
> index 00000000..e5fbd536
> --- /dev/null
> +++ b/src/libcamera/pipeline/isp/isp.cpp
> @@ -0,0 +1,323 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2021, Siyuan Fan <siyuan.fan@foxmail.com>
> + *
> + * isp.cpp - The software ISP-based pipeline handler
> + */
> +
> +#include "isp_processing.h"
> +
> +#include <math.h>
> +#include <stdlib.h>
> +#include <sys/mman.h>
> +#include <unistd.h>
> +
> +#include <libcamera/camera.h>
> +#include <libcamera/control_ids.h>
> +#include <libcamera/controls.h>
> +#include <libcamera/formats.h>
> +
> +#include "libcamera/base/thread.h"
> +#include "libcamera/internal/device_enumerator.h"
> +#include "libcamera/internal/media_device.h"
> +#include "libcamera/internal/pipeline_handler.h"
> +#include "libcamera/internal/v4l2_videodevice.h"
> +
> +namespace libcamera {
> +
> +LOG_DEFINE_CATEGORY(ISP)
> +
> +class ISPCameraData : public CameraData
> +{
> +public:
> +    ISPCameraData(PipelineHandler *pipe, MediaDevice *media)
> +        : CameraData(pipe), media_(media), video_(nullptr)
> +    {
> +    }
> +
> +    ~ISPCameraData()
> +    {
> +        delete video_;
> +        delete rawBuffers;
> +    }
> +
> +    int init();
> +    void bufferReady(FrameBuffer *buffer);
> +    void ISPCompleted(FrameBuffer *buffer);
> +
> +    Stream stream_;
> +    ISP isp_;
> +    Thread threadISP_;
> +    int width;
> +    int height;

Member variables should end with _.

> +
> +    MediaDevice *media_;
> +    V4L2VideoDevice *video_;
> +    std::vector<std::unique_ptr<FrameBuffer>> *tempCopyExportBuffers;

This is unnecessary. I'll explain it in more detail later where you actually use this.

> +    std::vector<std::unique_ptr<FrameBuffer>> *rawBuffers;
> +    std::map<FrameBuffer *, FrameBuffer *> bufferPair;

Same here.
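For example, the data members could end up looking something like this (just a sketch; the deques assume the available/pending queue approach described further down):

    Stream stream_;
    ISP isp_;
    Thread threadISP_;
    unsigned int width_;
    unsigned int height_;

    MediaDevice *media_;
    V4L2VideoDevice *video_;

    /* Raw buffers are allocated and owned by the pipeline handler. */
    std::vector<std::unique_ptr<FrameBuffer>> rawBuffers_;
    std::deque<FrameBuffer *> rawAvailableQueue_;
    /* rgb buffers queued by the application, waiting for a raw frame. */
    std::deque<FrameBuffer *> rgbPendingQueue_;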
> +};
> +
> +
> +class ISPCameraConfiguration : public CameraConfiguration
> +{
> +public:
> +    ISPCameraConfiguration();
> +
> +    Status validate() override;
> +};
> +
> +class PipelineHandlerISP : public PipelineHandler
> +{
> +public:
> +    PipelineHandlerISP(CameraManager *manager);
> +
> +    CameraConfiguration *generateConfiguration(Camera *camera,
> +                                               const StreamRoles &roles) override;
> +    int configure(Camera *camera, CameraConfiguration *config) override;
> +
> +    int exportFrameBuffers(Camera *camera, Stream *stream,
> +                           std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;
> +
> +    int start(Camera *camera, const ControlList *controls) override;
> +    void stop(Camera *camera) override;
> +
> +    int queueRequestDevice(Camera *camera, Request *request) override;
> +
> +    bool match(DeviceEnumerator *enumerator) override;
> +
> +private:
> +    ISPCameraData *cameraData(const Camera *camera)
> +    {
> +        return static_cast<ISPCameraData *>(
> +            PipelineHandler::cameraData(camera));
> +    }
> +};
> +
> +ISPCameraConfiguration::ISPCameraConfiguration()
> +    : CameraConfiguration()
> +{
> +}
> +
> +CameraConfiguration::Status ISPCameraConfiguration::validate()
> +{
> +    Status status = Valid;
> +
> +    return status;
> +}
> +
> +PipelineHandlerISP::PipelineHandlerISP(CameraManager *manager)
> +    : PipelineHandler(manager)
> +{
> +}
> +
> +CameraConfiguration *PipelineHandlerISP::generateConfiguration(Camera *camera,
> +                                                               const StreamRoles &roles)
> +{
> +    ISPCameraData *data = cameraData(camera);
> +    CameraConfiguration *config = new ISPCameraConfiguration();
> +
> +    if (roles.empty())
> +        return config;
> +
> +    std::map<V4L2PixelFormat, std::vector<SizeRange>> v4l2Formats =
> +        data->video_->formats();
> +    std::map<PixelFormat, std::vector<SizeRange>> deviceFormats;
> +    std::transform(v4l2Formats.begin(), v4l2Formats.end(),
> +                   std::inserter(deviceFormats, deviceFormats.begin()),
> +                   [&](const decltype(v4l2Formats)::value_type &format) {
> +                       return decltype(deviceFormats)::value_type{
> +                           format.first.toPixelFormat(),
> +                           format.second
> +                       };
> +                   });
> +
> +    StreamFormats formats(deviceFormats);
> +    StreamConfiguration cfg(formats);
> +
> +    cfg.pixelFormat = formats::RGB888;
> +    cfg.size = { 640, 480 };
> +    cfg.bufferCount = 4;
> +
> +    config->addConfiguration(cfg);
> +
> +    config->validate();
> +
> +    return config;
> +}
> +
> +int PipelineHandlerISP::configure(Camera *camera, CameraConfiguration *config)
> +{
> +    ISPCameraData *data = cameraData(camera);
> +    StreamConfiguration &cfg = config->at(0);
> +
> +    V4L2VideoDevice::Formats fmts = data->video_->formats();
> +    V4L2PixelFormat v4l2Format = fmts.begin()->first;
> +
> +    V4L2DeviceFormat format = {};
> +    format.fourcc = v4l2Format;
> +    format.size = cfg.size;

Looks like you're configuring the sensor with the first format that it supports, but with the size that the application requested. Choosing the first format might be fine, but you should add a comment to point it out. Even better, I think, would be to choose the format that you want for your ISP; was it SRGGB10?

As for the size, I think it depends on your ISP. For example, depending on your demosaic algorithm, I imagine that you'd need double the SRGGB frame size to get the desired RGB888 size. If the same frame size is fine, then that's fine.
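For instance, you could request the Bayer format that the ISP expects explicitly, and fail if the device can't produce it (a sketch, assuming the ISP consumes SRGGB10 and that V4L2PixelFormat::fromPixelFormat() is available in your tree):

    /* The software ISP expects 10-bit Bayer input, so ask for it explicitly. */
    const V4L2PixelFormat rawFormat =
        V4L2PixelFormat::fromPixelFormat(formats::SRGGB10);

    V4L2DeviceFormat format = {};
    format.fourcc = rawFormat;
    format.size = cfg.size;

    int ret = data->video_->setFormat(&format);
    if (ret)
        return ret;

    /* setFormat() may adjust the format; check that we got what we asked for. */
    if (format.fourcc != rawFormat || format.size != cfg.size)
        return -EINVAL;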
> +
> +    data->width = format.size.width;
> +    data->height = format.size.height;
> +
> +    int ret = data->video_->setFormat(&format);
> +    if (ret)
> +        return ret;
> +
> +    cfg.setStream(&data->stream_);
> +    cfg.stride = format.planes[0].bpl;
> +
> +    return 0;
> +}
> +
> +int PipelineHandlerISP::exportFrameBuffers(Camera *camera, Stream *stream,
> +                                           std::vector<std::unique_ptr<FrameBuffer>> *buffers)
> +{
> +    unsigned int count = stream->configuration().bufferCount;
> +    ISPCameraData *data = cameraData(camera);
> +
> +    for (unsigned int i = 0; i < count; i++) {

Everything here should be moved to the ISP. Then you can just do:

    return isp_->exportBuffers(buffers);

> +        std::string name = "frame-" + std::to_string(i);
> +        const int isp_fd = memfd_create(name.c_str(), 0);
> +        int ret = ftruncate(isp_fd, data->width * data->height * 3);

Since you use data->width * data->height * 3 later as well, it might be good to declare it at the beginning of the function and reuse it.

> +        if (ret < 0) {
> +            LOG(ISP, Error)
> +                << "truncate: "
> +                << strerror(-ret);

A bit more detail in the error message would be good, like "Failed to resize memfd" or something. Also, this fits on one line.

> +            return ret;
> +        }
> +        FileDescriptor temp = FileDescriptor(std::move(isp_fd));
> +
> +        FrameBuffer::Plane rgbPlane;
> +        rgbPlane.fd = std::move(temp);

I think you can assign the fd here directly:

    rgbPlane.fd = FileDescriptor(std::move(isp_fd));

and get rid of temp.

> +        rgbPlane.length = data->width * data->height * 3;
> +
> +        std::vector<FrameBuffer::Plane> planes;
> +        planes.push_back(std::move(rgbPlane));
> +        std::unique_ptr<FrameBuffer> buffer = std::make_unique<FrameBuffer>(std::move(planes));
> +        buffers->push_back(std::move(buffer));

I think you can use emplace_back() instead:

    buffers->emplace_back(planes);

I don't know if you can declare the vector directly. You could try:

    buffers->emplace_back({ rgbPlane });

> +    }
> +
> +    data->tempCopyExportBuffers = buffers;

The buffers that you return here are owned by the application, so the pipeline handler does not hold on to them. I see that you want to pair the buffers, but you can do that when the sensor returns its buffer (I'll point it out again below when we get there).

> +
> +    return count;
> +
> +}
> +
> +int PipelineHandlerISP::start(Camera *camera, [[maybe_unused]] const ControlList *controls)
> +{
> +    ISPCameraData *data = cameraData(camera);
> +    unsigned int count = data->stream_.configuration().bufferCount;
> +
> +    data->rawBuffers = new std::vector<std::unique_ptr<FrameBuffer>>;

Maybe you should make data->rawBuffers_ a vector directly, not a pointer.

> +    int ret = data->video_->allocateBuffers(count, data->rawBuffers);

Then here you can pass in &data->rawBuffers_, and you don't have to worry about freeing it in the destructor. You still have to empty it in stop(), though.

> +    if (ret < 0) {
> +        LOG(ISP, Error) << strerror(-ret);
> +        return ret;
> +    }
> +
> +    for (unsigned int i = 0; i < count; i++)
> +        data->bufferPair[data->tempCopyExportBuffers->at(i).get()] = data->rawBuffers->at(i).get();

So you don't do the pairing here; you do the pairing when the sensor is done.
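Putting that together, the allocation part of start() could look roughly like this (untested sketch, using the rawBuffers_ vector and the raw available queue from above):

    int ret = data->video_->allocateBuffers(count, &data->rawBuffers_);
    if (ret < 0) {
        LOG(ISP, Error) << "Failed to allocate raw buffers: " << strerror(-ret);
        return ret;
    }

    /*
     * All raw buffers start out available; they get paired with rgb
     * buffers only as the application queues requests.
     */
    for (const std::unique_ptr<FrameBuffer> &buffer : data->rawBuffers_)
        data->rawAvailableQueue_.push_back(buffer.get());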
> +
> +    ret = data->video_->streamOn();
> +    if (ret < 0) {
> +        data->video_->releaseBuffers();
> +        return ret;
> +    }
> +
> +    data->threadISP_.start();
> +
> +    return 0;
> +}
> +
> +void PipelineHandlerISP::stop(Camera *camera)
> +{
> +    ISPCameraData *data = cameraData(camera);
> +
> +    data->threadISP_.exit();
> +    data->threadISP_.wait();
> +
> +    data->video_->streamOff();
> +    data->video_->releaseBuffers();
> +}
> +
> +int PipelineHandlerISP::queueRequestDevice(Camera *camera, Request *request)
> +{
> +    ISPCameraData *data = cameraData(camera);
> +    FrameBuffer *rgbBuffer = request->findBuffer(&data->stream_);
> +
> +    if (!rgbBuffer) {
> +        LOG(ISP, Error) << "Attempt to queue request with invalid stream";
> +        return -ENOENT;
> +    }
> +
> +    int ret = data->video_->queueBuffer(data->bufferPair[rgbBuffer]);

Instead of queueing it from the buffer pair, just queue the first v4l2 (raw) buffer that's available. When the sensor returns this buffer, you can get the first pending rgb buffer and queue the two together to the ISP. It might be good to use a std::deque or something, and maybe another one for the buffers that are waiting to be completed by the sensor.

> +    if (ret < 0)
> +        return ret;
> +
> +    return 0;
> +}
> +
> +bool PipelineHandlerISP::match(DeviceEnumerator *enumerator)
> +{
> +    DeviceMatch unicam("unicam");
> +
> +    unicam.add("unicam-embedded");
> +    unicam.add("unicam-image");
> +
> +    MediaDevice *unicam_ = acquireMediaDevice(enumerator, unicam);
> +    if (!unicam_) {
> +        LOG(ISP, Debug) << "unicam Device not found";
> +        return false;
> +    }
> +
> +    LOG(ISP, Debug) << "unicam Device Identified";
> +
> +    std::unique_ptr<ISPCameraData> data = std::make_unique<ISPCameraData>(this, unicam_);
> +
> +    if(data->init()) return false;
> +
> +    std::set<Stream *> streams{&data->stream_};
> +    std::shared_ptr<Camera> camera = Camera::create(this, data->video_->deviceName(), streams);
> +    registerCamera(std::move(camera), std::move(data));
> +
> +    return true;
> +}
> +
> +
> +void ISPCameraData::ISPCompleted(FrameBuffer *buffer)
> +{
> +    Request *request = buffer->request();

This is good. Remember that if you do have a pending queue and an available queue of buffers, this is where you move them back.

> +
> +    pipe_->completeBuffer(request, buffer);
> +    pipe_->completeRequest(request);
> +
> +}
> +
> +void ISPCameraData::bufferReady(FrameBuffer *buffer)
> +{
> +    for (std::map<FrameBuffer*, FrameBuffer*>::iterator it = bufferPair.begin(); it != bufferPair.end(); it++) {

Here you can just take the first pending rgb buffer, and pass it together with the returned raw buffer into the ISP.

Maybe a diagram would help...? It's kind of hard to draw here though... So you allocate rgb buffers 1-4 and raw buffers 1-4, and then you have an rgb pending queue and a raw available queue. The rgb buffers come from the application, while the pipeline handler always owns the raw buffers.

1. The application queues rgb buffer 1:
   -> add rgb buffer 1 to the rgb pending queue
   -> queue the first available raw buffer to the sensor (remove it from the available queue)

2. The sensor returns raw buffer 1:
   -> get the first pending rgb buffer -> rgb buffer 1
   -> queue rgb buffer 1 + raw buffer 1 to the ISP

3. The ISP returns rgb buffer 1 + raw buffer 1:
   -> add raw buffer 1 to the available queue
   -> return rgb buffer 1 to the application

It's a bit confusing with raw vs rgb and available vs pending...
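In code, the three steps could look roughly like this (untested sketch, using the two deques from above; error handling and empty-queue checks omitted):

    /* 1. The application queues an rgb buffer. */
    int PipelineHandlerISP::queueRequestDevice(Camera *camera, Request *request)
    {
        ISPCameraData *data = cameraData(camera);
        FrameBuffer *rgbBuffer = request->findBuffer(&data->stream_);
        if (!rgbBuffer)
            return -ENOENT;

        data->rgbPendingQueue_.push_back(rgbBuffer);

        FrameBuffer *rawBuffer = data->rawAvailableQueue_.front();
        data->rawAvailableQueue_.pop_front();

        return data->video_->queueBuffer(rawBuffer);
    }

    /*
     * 2. The sensor returns a filled raw buffer: pair it with the oldest
     * pending rgb buffer and hand both to the ISP.
     */
    void ISPCameraData::bufferReady(FrameBuffer *buffer)
    {
        FrameBuffer *rgbBuffer = rgbPendingQueue_.front();
        rgbPendingQueue_.pop_front();

        isp_.invokeMethod(&ISP::processing, ConnectionTypeQueued,
                          buffer, rgbBuffer, width_, height_);
    }

    /*
     * 3. The ISP is done: recycle the raw buffer and complete the request.
     * This assumes the ispCompleted signal is extended to carry the raw
     * buffer as well.
     */
    void ISPCameraData::ISPCompleted(FrameBuffer *rawBuffer, FrameBuffer *rgbBuffer)
    {
        rawAvailableQueue_.push_back(rawBuffer);

        Request *request = rgbBuffer->request();
        pipe_->completeBuffer(request, rgbBuffer);
        pipe_->completeRequest(request);
    }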
Drawing a little diagram while you read might help you organize it :S

Paul

> +        if (it->second == buffer)
> +            isp_.invokeMethod(&ISP::processing, ConnectionTypeQueued, buffer, it->first, width, height);
> +    }
> +
> +}
> +
> +int ISPCameraData::init()
> +{
> +    video_ = new V4L2VideoDevice(media_->getEntityByName("unicam-image"));
> +    if (video_->open())
> +        return -ENODEV;
> +
> +    video_->bufferReady.connect(this, &ISPCameraData::bufferReady);
> +
> +    isp_.moveToThread(&threadISP_);
> +    isp_.ispCompleted.connect(this, &ISPCameraData::ISPCompleted);
> +
> +    return 0;
> +}
> +
> +REGISTER_PIPELINE_HANDLER(PipelineHandlerISP)
> +
> +} /* namespace libcamera */
> --
> 2.20.1
>