| Message ID | 20260408115630.12456-1-johannes.goede@oss.qualcomm.com |
|---|---|
Hi Hans,
let me ask a few questions.

On Wed, Apr 08, 2026 at 01:56:27PM +0200, Hans de Goede wrote:
> Hi All,
>
> On Qualcomm chips any CSI-PHY can be connected to any CSI-decoder, and
> any CSI-decoder can be connected to any Video-Front-End (VFE, DMA write
> engine and co). Basically there are 2 big cross-switches, one between PHYs
> and decoders and one between decoders and VFEs, which can be controlled
> through media-controller links.
>
> As such the entire CAMSS block with CSI-PHYs, decoders and VFEs is
> represented to userspace as a single /dev/media# node.
>
> As long as active links from unrelated cameras are not touched when setting
> up a new camera, 2 independent raw data-streams can be run and managed by 2
> different libcamera instances.
>
> But the standard locking of the /dev/media# node by the first libcamera
> instance to start streaming from one of the cameras blocks this.
>
> This patch series allows pipeline-handlers to opt out of the base
> PipelineHandler MediaDevice locking and adds 2 helpers for pipeline
> handlers to implement finer-grained locking.

The first question I have is: why 2 libcamera instances? Doesn't libcamera
register one camera for each connected CSI-2 input?

I guess, however, that limiting to a single libcamera instance where there
actually is no need to might be too restrictive?

> This is the second of 3 series which together introduce the camss pipeline
> handler. Here is a branch with all 3 series:
> https://github.com/jwrdegoede/libcamera/commits/camss_pipeline_v1/
>
> I hope to get this prep series merged while work continues on the camss
> pipeline handler itself.
>
> For an example of how to use this, see this commit implementing
> finer-grained locking for the camss pipeline handler:
>
> https://github.com/jwrdegoede/libcamera/commit/4ffd7b47119978940b543ad0914bf46c767573ad
>
> For the first patch an alternative approach would be to add a lockingRequired
> flag to the MediaDevice class, allowing opting out of the locking on a
> per-media-device basis.

To me this indeed sounds like we need finer-grained control over the
locking of the media devices.

Could 'bool PipelineHandler::acquire(Camera *camera);' become a virtual
function to delegate the finer-grained locking to pipeline handlers?

> Regards,
>
> Hans
>
> Hans de Goede (3):
>   libcamera: pipeline: Allow pipeline-handlers to opt out of locking the
>     media devices
>   libcamera: media_object: Add MediaEntity::disableLinks()
>   libcamera: v4l2_device: add lock() and unlock() methods
>
>  include/libcamera/internal/media_object.h     |  1 +
>  include/libcamera/internal/pipeline_handler.h |  2 +
>  include/libcamera/internal/v4l2_device.h      |  3 ++
>  src/libcamera/media_device.cpp                | 16 ++------
>  src/libcamera/media_object.cpp                | 27 ++++++++++++++
>  src/libcamera/pipeline_handler.cpp            |  8 ++--
>  src/libcamera/v4l2_device.cpp                 | 37 +++++++++++++++++++
>  7 files changed, 77 insertions(+), 17 deletions(-)
>
> --
> 2.53.0
Hi,

On 13-Apr-26 4:00 PM, Jacopo Mondi wrote:
> Hi Hans,
> let me ask a few questions.
>
> On Wed, Apr 08, 2026 at 01:56:27PM +0200, Hans de Goede wrote:
>> Hi All,
>>
>> [snip]
>>
>> This patch series allows pipeline-handlers to opt out of the base
>> PipelineHandler MediaDevice locking and adds 2 helpers for pipeline
>> handlers to implement finer-grained locking.
>
> The first question I have is: why 2 libcamera instances? Doesn't libcamera
> register one camera for each connected CSI-2 input?

Yes it does.

> I guess, however, that limiting to a single libcamera instance where there
> actually is no need to might be too restrictive?

Right, e.g. users may want to use gst-launch twice to launch 2 gst-pipelines,
each accessing a single camera.

>> [snip]
>>
>> For the first patch an alternative approach would be to add a lockingRequired
>> flag to the MediaDevice class, allowing opting out of the locking on a
>> per-media-device basis.
>
> To me this indeed sounds like we need finer-grained control over the
> locking of the media devices.
>
> Could 'bool PipelineHandler::acquire(Camera *camera);' become a virtual
> function to delegate the finer-grained locking to pipeline handlers?

That would also be an option, yes. But currently the locking is something
"owned" by the core which must not be touched by pipeline handlers; I tried
to preserve that for pipeline handlers not opting out.

E.g. MediaDevice::lock()'s doxygen comment says:

With that said, I've no objection against making PipelineHandler::acquire()
virtual and updating the doc text here a bit to say, e.g.:

 * The base PipelineHandler implementation handles MediaDevice locking
 * on behalf of the specified implementation, so this function should not be
 * called from a pipeline handler implementation directly.
 * Optionally a pipeline handler may opt out of the base PipelineHandler
 * locking by overriding PipelineHandler::acquire().

Regards,

Hans
Hi Hans,

On Mon, Apr 13, 2026 at 04:10:57PM +0200, Hans de Goede wrote:
> Hi,
>
> On 13-Apr-26 4:00 PM, Jacopo Mondi wrote:
> > [snip]
> >
> > I guess, however, that limiting to a single libcamera instance where there
> > actually is no need to might be too restrictive?
>
> Right, e.g. users may want to use gst-launch twice to launch 2 gst-pipelines,
> each accessing a single camera.

I would say this is a fair requirement.

> > Could 'bool PipelineHandler::acquire(Camera *camera);' become a virtual
> > function to delegate the finer-grained locking to pipeline handlers?
>
> That would also be an option, yes. But currently the locking is something
> "owned" by the core which must not be touched by pipeline handlers; I tried
> to preserve that for pipeline handlers not opting out.

I'm suggesting a virtual, not a pure virtual, so pipelines that are
fine with the currently implemented mechanism won't need any change.

> E.g. MediaDevice::lock()'s doxygen comment says:

Did you mean to paste:

 * \brief Lock the device to prevent it from being used by other instances of
 * libcamera
 *
 * Multiple instances of libcamera might be running on the same system, at the
 * same time. To allow the different instances to coexist, system resources in
 * the form of media devices must be accessible for enumerating the cameras
 * they provide at all times, while still allowing an instance to lock a
 * resource while it prepares to actively use a camera from the resource.
 *
 * This function shall not be called from a pipeline handler implementation
 * directly, as the base PipelineHandler implementation handles this on the
 * behalf of the specified implementation.

This however prevents designs like yours from working. It might be ideal for
inline pipelines or m2m ones where each CSI-2 input lives in its own media
graph, but it won't work if all CSI-2 inputs are part of the same media
graph; and I don't have arguments against such designs at the kernel level,
even if I might be missing them right now.

Going forward we actually want (ideally) a single system-wide media graph.
I don't think the locking granularity we have implemented today would work
there.

> With that said, I've no objection against making PipelineHandler::acquire()
> virtual and updating the doc text here a bit to say, e.g.:
>
> * The base PipelineHandler implementation handles MediaDevice locking
> * on behalf of the specified implementation, so this function should not be
> * called from a pipeline handler implementation directly.
> * Optionally a pipeline handler may opt out of the base PipelineHandler
> * locking by overriding PipelineHandler::acquire().

Providing a method override for PipelineHandler::lock() would be
functionally an opt-out :)

Let's see what others think.
Hi,

On 13-Apr-26 4:30 PM, Jacopo Mondi wrote:
> Hi Hans,
>
> On Mon, Apr 13, 2026 at 04:10:57PM +0200, Hans de Goede wrote:
>> [snip]
>>
>> That would also be an option, yes. But currently the locking is something
>> "owned" by the core which must not be touched by pipeline handlers; I tried
>> to preserve that for pipeline handlers not opting out.
>
> I'm suggesting a virtual, not a pure virtual, so pipelines that are
> fine with the currently implemented mechanism won't need any change.

Right, I get that. But the current wording in MediaDevice::lock()'s
documentation suggests that currently it is not virtual at all on purpose.

I agree we can change that; I just wanted to point out that I believe it
currently *deliberately* is not virtual at all.

>> E.g. MediaDevice::lock()'s doxygen comment says:
>
> Did you mean to paste:
>
> * \brief Lock the device to prevent it from being used by other instances of
> * libcamera
> *
> * [snip]
> *
> * This function shall not be called from a pipeline handler implementation
> * directly, as the base PipelineHandler implementation handles this on the
> * behalf of the specified implementation.

Yes, I did mean to do that.

> This however prevents designs like yours from working. [snip]
>
> Going forward we actually want (ideally) a single system-wide media graph.
> I don't think the locking granularity we have implemented today would work
> there.
>
>> With that said, I've no objection against making PipelineHandler::acquire()
>> virtual and updating the doc text here a bit to say, e.g.:
>>
>> [snip]
>
> Providing a method override for PipelineHandler::lock() would be
> functionally an opt-out :)

I think you mean PipelineHandler::acquire() here? With that correction,
yes I agree, and I think that would be cleaner than what I'm currently
proposing.

> Let's see what others think.

Ack.

Regards,

Hans
Hi Hans,

On Mon, Apr 13, 2026 at 04:55:57PM +0200, Hans de Goede wrote:
> Hi,
>
> On 13-Apr-26 4:30 PM, Jacopo Mondi wrote:
> > [snip]
> >
> > Providing a method override for PipelineHandler::lock() would be
> > functionally an opt-out :)
>
> I think you mean PipelineHandler::acquire() here? With that correction,

Of course, thanks!

> yes I agree, and I think that would be cleaner than what I'm currently
> proposing.
>
> > Let's see what others think.
>
> Ack.
>
> Regards,
>
> Hans
Hi All,

On 8-Apr-26 13:56, Hans de Goede wrote:
> Hi All,
>
> On Qualcomm chips any CSI-PHY can be connected to any CSI-decoder, and
> any CSI-decoder can be connected to any Video-Front-End (VFE, DMA write
> engine and co). Basically there are 2 big cross-switches, one between PHYs
> and decoders and one between decoders and VFEs, which can be controlled
> through media-controller links.
>
> [snip]

As discussed during the meeting, here are 2 example media-graphs of camss
CSI media-controller nodes:

Agatti (the SoC found on the Uno Q):
https://fedorapeople.org/~jwrdegoede/agatti.dot
https://fedorapeople.org/~jwrdegoede/agatti.dot.svg

Hamoa (X1 Elite SoC on the T14s):
https://fedorapeople.org/~jwrdegoede/hamoa.dot
https://fedorapeople.org/~jwrdegoede/hamoa.dot.svg

Note that with Hamoa the devicetree bindings have changed and the PHYs are
now described as separate devicetree nodes, with the DT for the T14s only
enabling the PHY connected to the standard (color) user-facing sensor, so
the media-graph only shows 1 PHY.

Regards,

Hans
Hi All,

On Qualcomm chips any CSI-PHY can be connected to any CSI-decoder, and
any CSI-decoder can be connected to any Video-Front-End (VFE, DMA write
engine and co). Basically there are 2 big cross-switches, one between PHYs
and decoders and one between decoders and VFEs, which can be controlled
through media-controller links.

As such the entire CAMSS block with CSI-PHYs, decoders and VFEs is
represented to userspace as a single /dev/media# node.

As long as active links from unrelated cameras are not touched when setting
up a new camera, 2 independent raw data-streams can be run and managed by 2
different libcamera instances.

But the standard locking of the /dev/media# node by the first libcamera
instance to start streaming from one of the cameras blocks this.

This patch series allows pipeline-handlers to opt out of the base
PipelineHandler MediaDevice locking and adds 2 helpers for pipeline
handlers to implement finer-grained locking.

This is the second of 3 series which together introduce the camss pipeline
handler. Here is a branch with all 3 series:
https://github.com/jwrdegoede/libcamera/commits/camss_pipeline_v1/

I hope to get this prep series merged while work continues on the camss
pipeline handler itself.

For an example of how to use this, see this commit implementing
finer-grained locking for the camss pipeline handler:

https://github.com/jwrdegoede/libcamera/commit/4ffd7b47119978940b543ad0914bf46c767573ad

For the first patch an alternative approach would be to add a lockingRequired
flag to the MediaDevice class, allowing opting out of the locking on a
per-media-device basis.
Regards,

Hans

Hans de Goede (3):
  libcamera: pipeline: Allow pipeline-handlers to opt out of locking the
    media devices
  libcamera: media_object: Add MediaEntity::disableLinks()
  libcamera: v4l2_device: add lock() and unlock() methods

 include/libcamera/internal/media_object.h     |  1 +
 include/libcamera/internal/pipeline_handler.h |  2 +
 include/libcamera/internal/v4l2_device.h      |  3 ++
 src/libcamera/media_device.cpp                | 16 ++------
 src/libcamera/media_object.cpp                | 27 ++++++++++++++
 src/libcamera/pipeline_handler.cpp            |  8 ++--
 src/libcamera/v4l2_device.cpp                 | 37 +++++++++++++++++++
 7 files changed, 77 insertions(+), 17 deletions(-)