[libcamera-devel,1/2] cam: event_loop: Stop queuing calls when the event loop is exiting

Message ID 20210119123110.2976971-2-niklas.soderlund@ragnatech.se
State Superseded
Delegated to: Niklas Söderlund
Headers show
Series
  • cam: Fix races in event loop and long request processing times
Related show

Commit Message

Niklas Söderlund Jan. 19, 2021, 12:31 p.m. UTC
Terminating the event loop with EventLoop::exit() does not guarantee that
it will terminate. If the event loop's 'call later' queue is kept
non-empty by continuously queuing new calls using EventLoop::callLater(),
either from a different thread or from callbacks in the loop itself,
EventLoop::dispatchCalls() will never complete and the loop will run
forever.

Solve this by not accepting new callbacks once the event loop is
exiting.

Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
---
 src/cam/event_loop.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Laurent Pinchart Jan. 19, 2021, 12:43 p.m. UTC | #1
Hi Niklas,

Thank you for the patch.

On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> Terminating the event loop with EventLoop::exit() does not grantee that
> it will terminate. If the event loops 'call later' queue can be kept
> non-empty by continuously queuing new calls using EventLoop::callLater()
> either from a different thread or from callbacks in the loop itself
> EventLoop::dispatchCalls() will never complete and the loop will run
> forever.
> 
> Solve this by not accepting new callbacks once the event loop is
> exiting.
> 
> Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
> Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> ---
>  src/cam/event_loop.cpp | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> index 94c5d1d362455f33..34a55da30c537ac7 100644
> --- a/src/cam/event_loop.cpp
> +++ b/src/cam/event_loop.cpp
> @@ -62,7 +62,7 @@ void EventLoop::interrupt()
>  
>  void EventLoop::callLater(const std::function<void()> &func)
>  {
> -	{
> +	if (!exit_.load(std::memory_order_acquire)) {
>  		std::unique_lock<std::mutex> locker(lock_);
>  		calls_.push_back(func);
>  	}

That's a bit of a layering violation. Would it make sense to fix this
differently, by ensuring that EventLoop::dispatchCalls() will only
handle deferred calls that are on the list when the function is called ?
This would also ensure that callLater(), when called from the same
thread, will guarantee that pending events get processed before the
delayed call is processed.

I mean something along those lines (only compile-tested).

diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
index 94c5d1d36245..320ade525a3f 100644
--- a/src/cam/event_loop.cpp
+++ b/src/cam/event_loop.cpp
@@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
 {
 	std::unique_lock<std::mutex> locker(lock_);

-	for (auto iter = calls_.begin(); iter != calls_.end(); ) {
+	for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
 		std::function<void()> call = std::move(*iter);

 		iter = calls_.erase(iter);
Niklas Söderlund Jan. 19, 2021, 1:42 p.m. UTC | #2
Hi Laurent,

Thanks for your feedback.

On 2021-01-19 14:43:17 +0200, Laurent Pinchart wrote:
> Hi Niklas,
> 
> Thank you for the patch.
> 
> On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> > Terminating the event loop with EventLoop::exit() does not grantee that
> > it will terminate. If the event loops 'call later' queue can be kept
> > non-empty by continuously queuing new calls using EventLoop::callLater()
> > either from a different thread or from callbacks in the loop itself
> > EventLoop::dispatchCalls() will never complete and the loop will run
> > forever.
> > 
> > Solve this by not accepting new callbacks once the event loop is
> > exiting.
> > 
> > Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
> > Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> > Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > ---
> >  src/cam/event_loop.cpp | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > index 94c5d1d362455f33..34a55da30c537ac7 100644
> > --- a/src/cam/event_loop.cpp
> > +++ b/src/cam/event_loop.cpp
> > @@ -62,7 +62,7 @@ void EventLoop::interrupt()
> >  
> >  void EventLoop::callLater(const std::function<void()> &func)
> >  {
> > -	{
> > +	if (!exit_.load(std::memory_order_acquire)) {
> >  		std::unique_lock<std::mutex> locker(lock_);
> >  		calls_.push_back(func);
> >  	}
> 
> That's a bit of a layering violation. Would it make sense to fix this
> differently, by ensuring that EventLoop::dispatchCalls() will only
> handle deferred calls that are on the list when the function is called ?
> This would also ensure that callLater(), when called from the same
> thread, will guarantee that pending events get processed before the
> delayed call is processed.
> 
> I mean something along those lines (only compile-tested).
> 
> diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> index 94c5d1d36245..320ade525a3f 100644
> --- a/src/cam/event_loop.cpp
> +++ b/src/cam/event_loop.cpp
> @@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
>  {
>  	std::unique_lock<std::mutex> locker(lock_);
> 
> -	for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> +	for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
>  		std::function<void()> call = std::move(*iter);
> 
>  		iter = calls_.erase(iter);

I tried something like this but thought it was nastier than what I 
proposed in this patch. I do however agree that what is proposed in my 
patch is not the cleanest of solutions either. The change you propose 
here does not work as intended; it is still vulnerable to an 
ever-increasing calls_ list.

If we would like to go with this approach, how about something like this?  
It would also reduce lock_ contention as a side effect.

diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
index 94c5d1d362455f33..4bfcc503ac1f3b83 100644
--- a/src/cam/event_loop.cpp
+++ b/src/cam/event_loop.cpp
@@ -72,15 +72,13 @@ void EventLoop::callLater(const std::function<void()> &func)
 
 void EventLoop::dispatchCalls()
 {
-       std::unique_lock<std::mutex> locker(lock_);
+       std::list<std::function<void()>> calls;
 
-       for (auto iter = calls_.begin(); iter != calls_.end(); ) {
-               std::function<void()> call = std::move(*iter);
+       {
+               std::unique_lock<std::mutex> locker(lock_);
+               calls = std::move(calls_);
+       }
 
-               iter = calls_.erase(iter);
-
-               locker.unlock();
+       for (std::function<void()> call : calls)
                call();
-               locker.lock();
-       }
 }

> 
> 
> -- 
> Regards,
> 
> Laurent Pinchart
Niklas Söderlund Jan. 19, 2021, 2:07 p.m. UTC | #3
On 2021-01-19 14:42:27 +0100, Niklas Söderlund wrote:
> Hi Laurent,
> 
> Thanks for your feedback.
> 
> On 2021-01-19 14:43:17 +0200, Laurent Pinchart wrote:
> > Hi Niklas,
> > 
> > Thank you for the patch.
> > 
> > On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> > > Terminating the event loop with EventLoop::exit() does not grantee that
> > > it will terminate. If the event loops 'call later' queue can be kept
> > > non-empty by continuously queuing new calls using EventLoop::callLater()
> > > either from a different thread or from callbacks in the loop itself
> > > EventLoop::dispatchCalls() will never complete and the loop will run
> > > forever.
> > > 
> > > Solve this by not accepting new callbacks once the event loop is
> > > exiting.
> > > 
> > > Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
> > > Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> > > Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > > ---
> > >  src/cam/event_loop.cpp | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > index 94c5d1d362455f33..34a55da30c537ac7 100644
> > > --- a/src/cam/event_loop.cpp
> > > +++ b/src/cam/event_loop.cpp
> > > @@ -62,7 +62,7 @@ void EventLoop::interrupt()
> > >  
> > >  void EventLoop::callLater(const std::function<void()> &func)
> > >  {
> > > -	{
> > > +	if (!exit_.load(std::memory_order_acquire)) {
> > >  		std::unique_lock<std::mutex> locker(lock_);
> > >  		calls_.push_back(func);
> > >  	}
> > 
> > That's a bit of a layering violation. Would it make sense to fix this
> > differently, by ensuring that EventLoop::dispatchCalls() will only
> > handle deferred calls that are on the list when the function is called ?
> > This would also ensure that callLater(), when called from the same
> > thread, will guarantee that pending events get processed before the
> > delayed call is processed.
> > 
> > I mean something along those lines (only compile-tested).
> > 
> > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > index 94c5d1d36245..320ade525a3f 100644
> > --- a/src/cam/event_loop.cpp
> > +++ b/src/cam/event_loop.cpp
> > @@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
> >  {
> >  	std::unique_lock<std::mutex> locker(lock_);
> > 
> > -	for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > +	for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
> >  		std::function<void()> call = std::move(*iter);
> > 
> >  		iter = calls_.erase(iter);
> 
> I tried something like this but thought it was nastier then I proposed 
> in this patch. I do however agree that what is proposed in my patch is 
> also not the most cleanest of solutions. The change you propose here 
> does not work as intended, it is still venerable to an ever increasing 
> call_ vector.
> 
> If we would like to go with this solution how about something like this?  
> It would also reduce the lock_ contention as a side effect.
> 
> diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> index 94c5d1d362455f33..4bfcc503ac1f3b83 100644
> --- a/src/cam/event_loop.cpp
> +++ b/src/cam/event_loop.cpp
> @@ -72,15 +72,13 @@ void EventLoop::callLater(const std::function<void()> &func)
>  
>  void EventLoop::dispatchCalls()
>  {
> -       std::unique_lock<std::mutex> locker(lock_);
> +       std::list<std::function<void()>> calls;
>  
> -       for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> -               std::function<void()> call = std::move(*iter);
> +       {
> +               std::unique_lock<std::mutex> locker(lock_);
> +               calls = std::move(calls_);
> +       }
>  
> -               iter = calls_.erase(iter);
> -
> -               locker.unlock();
> +       for (std::function<void()> call : calls)
>                 call();
> -               locker.lock();
> -       }
>  }

I take this back, as both solutions that modify dispatchCalls() 
suffer from the same flaw: events queued before EventLoop::exit() 
are not executed if they are not already part of the active list of 
calls being processed.

What do you think of injecting an 'end of call processing' marker into 
the calls_ list and breaking the loop in dispatchCalls() when it's 
encountered? Spontaneously I'm thinking of a 'nullptr' marker, but it's 
untested.
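The sentinel idea floated here can be sketched like this — an illustrative assumption only (the names and structure are hypothetical, and the thread itself calls the idea untested). A default-constructed std::function is empty and converts to false, which makes nullptr a natural marker:

```cpp
#include <cassert>
#include <functional>
#include <list>

// Hypothetical stand-in for the calls_ member.
std::list<std::function<void()>> calls;

// Queue a null std::function as an 'end of call processing' marker.
void markEnd()
{
	calls.push_back(nullptr);
}

// Run queued calls until the list is empty or the marker is hit,
// returning the number of calls executed. Calls queued after the
// marker stay in the list for a later dispatch.
int dispatchUntilMarker()
{
	int executed = 0;

	while (!calls.empty()) {
		std::function<void()> call = std::move(calls.front());
		calls.pop_front();

		if (!call)	/* nullptr marker: stop processing */
			break;

		call();
		executed++;
	}

	return executed;
}
```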

> 
> > 
> > 
> > -- 
> > Regards,
> > 
> > Laurent Pinchart
> 
> -- 
> Regards,
> Niklas Söderlund
Laurent Pinchart Jan. 19, 2021, 4:04 p.m. UTC | #4
Hi Niklas,

On Tue, Jan 19, 2021 at 03:07:16PM +0100, Niklas Söderlund wrote:
> On 2021-01-19 14:42:27 +0100, Niklas Söderlund wrote:
> > On 2021-01-19 14:43:17 +0200, Laurent Pinchart wrote:
> > > On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> > > > Terminating the event loop with EventLoop::exit() does not grantee that
> > > > it will terminate. If the event loops 'call later' queue can be kept
> > > > non-empty by continuously queuing new calls using EventLoop::callLater()
> > > > either from a different thread or from callbacks in the loop itself
> > > > EventLoop::dispatchCalls() will never complete and the loop will run
> > > > forever.
> > > > 
> > > > Solve this by not accepting new callbacks once the event loop is
> > > > exiting.
> > > > 
> > > > Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
> > > > Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> > > > Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > > > ---
> > > >  src/cam/event_loop.cpp | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > > index 94c5d1d362455f33..34a55da30c537ac7 100644
> > > > --- a/src/cam/event_loop.cpp
> > > > +++ b/src/cam/event_loop.cpp
> > > > @@ -62,7 +62,7 @@ void EventLoop::interrupt()
> > > >  
> > > >  void EventLoop::callLater(const std::function<void()> &func)
> > > >  {
> > > > -	{
> > > > +	if (!exit_.load(std::memory_order_acquire)) {
> > > >  		std::unique_lock<std::mutex> locker(lock_);
> > > >  		calls_.push_back(func);
> > > >  	}
> > > 
> > > That's a bit of a layering violation. Would it make sense to fix this
> > > differently, by ensuring that EventLoop::dispatchCalls() will only
> > > handle deferred calls that are on the list when the function is called ?
> > > This would also ensure that callLater(), when called from the same
> > > thread, will guarantee that pending events get processed before the
> > > delayed call is processed.
> > > 
> > > I mean something along those lines (only compile-tested).
> > > 
> > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > index 94c5d1d36245..320ade525a3f 100644
> > > --- a/src/cam/event_loop.cpp
> > > +++ b/src/cam/event_loop.cpp
> > > @@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
> > >  {
> > >  	std::unique_lock<std::mutex> locker(lock_);
> > > 
> > > -	for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > > +	for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
> > >  		std::function<void()> call = std::move(*iter);
> > > 
> > >  		iter = calls_.erase(iter);
> > 
> > I tried something like this but thought it was nastier then I proposed 
> > in this patch. I do however agree that what is proposed in my patch is 
> > also not the most cleanest of solutions. The change you propose here 
> > does not work as intended, it is still venerable to an ever increasing 
> > call_ vector.
> > 
> > If we would like to go with this solution how about something like this?  
> > It would also reduce the lock_ contention as a side effect.
> > 
> > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > index 94c5d1d362455f33..4bfcc503ac1f3b83 100644
> > --- a/src/cam/event_loop.cpp
> > +++ b/src/cam/event_loop.cpp
> > @@ -72,15 +72,13 @@ void EventLoop::callLater(const std::function<void()> &func)
> >  
> >  void EventLoop::dispatchCalls()
> >  {
> > -       std::unique_lock<std::mutex> locker(lock_);
> > +       std::list<std::function<void()>> calls;
> >  
> > -       for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > -               std::function<void()> call = std::move(*iter);
> > +       {
> > +               std::unique_lock<std::mutex> locker(lock_);
> > +               calls = std::move(calls_);
> > +       }
> >  
> > -               iter = calls_.erase(iter);
> > -
> > -               locker.unlock();
> > +       for (std::function<void()> call : calls)
> >                 call();
> > -               locker.lock();
> > -       }
> >  }

That looks good too.

> I Think this back as both solutions that modifies dispatchCalls() 
> suffers from the same flaw. Events queued before the EventLoop::exit() 
> are not executed if they are not already part of the active list of 
> calls that are processed.

Is that a problem though ? If we need to ensure that the deferred calls
queue is flushed, maybe we could make dispatchCalls() public and call it
after EventLoop::exit() returns ?

> What do you think of injecting a 'end of call processing' marker into 
> the calls_ vector and break the loop in dispatchCalls() if it's 
> encounter? Spontaneously I'm thinking of a 'nullptr' marker, but it's 
> untested.

What's the problem that needs to be solved here ? There's an inherent
race condition if we queue calls from a different thread; how best to
handle it depends on what the use case is.
Niklas Söderlund Jan. 19, 2021, 11:32 p.m. UTC | #5
Hi Laurent,

On 2021-01-19 18:04:14 +0200, Laurent Pinchart wrote:
> Hi Niklas,
> 
> On Tue, Jan 19, 2021 at 03:07:16PM +0100, Niklas Söderlund wrote:
> > On 2021-01-19 14:42:27 +0100, Niklas Söderlund wrote:
> > > On 2021-01-19 14:43:17 +0200, Laurent Pinchart wrote:
> > > > On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> > > > > Terminating the event loop with EventLoop::exit() does not grantee that
> > > > > it will terminate. If the event loops 'call later' queue can be kept
> > > > > non-empty by continuously queuing new calls using EventLoop::callLater()
> > > > > either from a different thread or from callbacks in the loop itself
> > > > > EventLoop::dispatchCalls() will never complete and the loop will run
> > > > > forever.
> > > > > 
> > > > > Solve this by not accepting new callbacks once the event loop is
> > > > > exiting.
> > > > > 
> > > > > Reported-by: Sebastian Fricke <sebastian.fricke@posteo.net>
> > > > > Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> > > > > Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
> > > > > ---
> > > > >  src/cam/event_loop.cpp | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > > > index 94c5d1d362455f33..34a55da30c537ac7 100644
> > > > > --- a/src/cam/event_loop.cpp
> > > > > +++ b/src/cam/event_loop.cpp
> > > > > @@ -62,7 +62,7 @@ void EventLoop::interrupt()
> > > > >  
> > > > >  void EventLoop::callLater(const std::function<void()> &func)
> > > > >  {
> > > > > -	{
> > > > > +	if (!exit_.load(std::memory_order_acquire)) {
> > > > >  		std::unique_lock<std::mutex> locker(lock_);
> > > > >  		calls_.push_back(func);
> > > > >  	}
> > > > 
> > > > That's a bit of a layering violation. Would it make sense to fix this
> > > > differently, by ensuring that EventLoop::dispatchCalls() will only
> > > > handle deferred calls that are on the list when the function is called ?
> > > > This would also ensure that callLater(), when called from the same
> > > > thread, will guarantee that pending events get processed before the
> > > > delayed call is processed.
> > > > 
> > > > I mean something along those lines (only compile-tested).
> > > > 
> > > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > > index 94c5d1d36245..320ade525a3f 100644
> > > > --- a/src/cam/event_loop.cpp
> > > > +++ b/src/cam/event_loop.cpp
> > > > @@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
> > > >  {
> > > >  	std::unique_lock<std::mutex> locker(lock_);
> > > > 
> > > > -	for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > > > +	for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
> > > >  		std::function<void()> call = std::move(*iter);
> > > > 
> > > >  		iter = calls_.erase(iter);
> > > 
> > > I tried something like this but thought it was nastier then I proposed 
> > > in this patch. I do however agree that what is proposed in my patch is 
> > > also not the most cleanest of solutions. The change you propose here 
> > > does not work as intended, it is still venerable to an ever increasing 
> > > call_ vector.
> > > 
> > > If we would like to go with this solution how about something like this?  
> > > It would also reduce the lock_ contention as a side effect.
> > > 
> > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > index 94c5d1d362455f33..4bfcc503ac1f3b83 100644
> > > --- a/src/cam/event_loop.cpp
> > > +++ b/src/cam/event_loop.cpp
> > > @@ -72,15 +72,13 @@ void EventLoop::callLater(const std::function<void()> &func)
> > >  
> > >  void EventLoop::dispatchCalls()
> > >  {
> > > -       std::unique_lock<std::mutex> locker(lock_);
> > > +       std::list<std::function<void()>> calls;
> > >  
> > > -       for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > > -               std::function<void()> call = std::move(*iter);
> > > +       {
> > > +               std::unique_lock<std::mutex> locker(lock_);
> > > +               calls = std::move(calls_);
> > > +       }
> > >  
> > > -               iter = calls_.erase(iter);
> > > -
> > > -               locker.unlock();
> > > +       for (std::function<void()> call : calls)
> > >                 call();
> > > -               locker.lock();
> > > -       }
> > >  }
> 
> That looks good too.
> 
> > I Think this back as both solutions that modifies dispatchCalls() 
> > suffers from the same flaw. Events queued before the EventLoop::exit() 
> > are not executed if they are not already part of the active list of 
> > calls that are processed.
> 
> Is that a problem though ? If we need to ensure that the deferred calls
> queue is flushed, maybe we could make dispatchCalls() public and call it
> after EventLoop::exit() returns ?
> 
> > What do you think of injecting a 'end of call processing' marker into 
> > the calls_ vector and break the loop in dispatchCalls() if it's 
> > encounter? Spontaneously I'm thinking of a 'nullptr' marker, but it's 
> > untested.
> 
> What's the problem that needs to be solved here ? There's an inherent
> race condition if we queue calls from a different thread, how to handle
> it best depends on what the use case is.

I agree the proper solution is likely whatever best matches our 
use-case. I posted a v2 of this series (it's now just a single patch) 
that is the minimal change to restore the behavior to what it was before 
the deferred call support.

I'm sure we can do better on the overall goal of only queuing N 
requests instead of terminating when we are done. But that would be a 
larger change, and right now I just want to get back to deterministic and 
functional behavior :-) Overall I think we will soon need a compliance 
tool to verify pipeline behavior, to be able to make our other tools more 
strict.

> 
> -- 
> Regards,
> 
> Laurent Pinchart

Patch

diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
index 94c5d1d362455f33..34a55da30c537ac7 100644
--- a/src/cam/event_loop.cpp
+++ b/src/cam/event_loop.cpp
@@ -62,7 +62,7 @@ void EventLoop::interrupt()
 
 void EventLoop::callLater(const std::function<void()> &func)
 {
-	{
+	if (!exit_.load(std::memory_order_acquire)) {
 		std::unique_lock<std::mutex> locker(lock_);
 		calls_.push_back(func);
 	}
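The effect of this hunk can be reduced to a standalone sketch — hypothetical free functions, with an exitLoop() helper standing in for the flag-setting side of EventLoop::exit() (not the actual cam code). Once exit_ is set, new deferred calls are silently dropped, so the dispatch loop can always drain the queue and terminate:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <list>
#include <mutex>

// Hypothetical reduction of the patched callLater(); in the real
// code exit_, lock_ and calls_ are EventLoop members.
std::atomic<bool> exit_{ false };
std::mutex lock_;
std::list<std::function<void()>> calls_;

void callLater(const std::function<void()> &func)
{
	// Drop new calls once the loop is exiting, so the pending-call
	// queue can no longer be kept non-empty indefinitely.
	if (!exit_.load(std::memory_order_acquire)) {
		std::unique_lock<std::mutex> locker(lock_);
		calls_.push_back(func);
	}
}

void exitLoop()
{
	exit_.store(true, std::memory_order_release);
}
```

The acquire/release pair ensures that a thread observing exit_ as set also observes everything the exiting thread did before storing it.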