C++ Concurrent Queues
Project: ISO JTC1/SC22/WG21: Programming Language C++
Number: P0260R7
Date: 2023-06-15
Audience: LEWG, SG1
Revises: P0260R6
Authors: Lawrence Crowl, Chris Mysen, Detlef Vollmann, Gor Nishanov
Contact: dv@vollmann.ch
Abstract
Concurrent queues are a fundamental structuring tool for concurrent programs.
We propose a concurrent queue concept and a concrete implementation (in P1958).
We propose a set of communication types
that enable loosely bound program components
to dynamically construct and safely share concurrent queues.
Contents
Revision History
Introduction
Target Vehicle
Existing Practice
Concept of a Bounded Queue
Bounded Queues with C++ Interface
Conceptual Interface
Basic Operations
Non-Waiting Operations
Closed Queues
Empty and Full Queues
Element Type Requirements
Exception Handling
Concrete Queues
Response to Feedback by LEWGI at Prague 2020 meeting
Implementation
Historic Contents
Proposed Wording
?.? Concurrent queues [conqueues]
?.?.1 General [conqueues.general]
?.?.2 Header <conqueue> synopsis [conqueues.syn]
?.?.3 Operation status [conqueues.status]
?.?.4 Concepts [conqueues.concepts]
?.?.4.1 Element requirements [conqueues.concept.elemreq]
?.?.4.2 Element type naming [conqueues.concept.elemtype]
?.?.4.3 Lock-free attribute operations [conqueues.concept.lockfree]
?.?.4.4 Synchronization [conqueues.concept.sync]
?.?.4.4 State operations [conqueues.concept.state]
?.?.4.5 Waiting operations [conqueues.concept.wait]
?.?.4.6 Non-waiting operations [conqueues.concept.nonwait]
?.?.4.7 Type concepts [conqueues.concept.type]
?.?.5 Concrete queues [conqueues.concrete]
?.?.6 Tools [conqueues.tools]
?.?.6.1 Ends and Iterators [conqueues.tools.ends]
?.?.6.1.1 Class template generic_queue_back [conqueues.tools.back]
?.?.6.1.2 Class template generic_queue_front [conqueues.tools.front]
?.?.6.2 Binary interfaces [conqueues.tools.binary]
?.?.6.2.1 Class template queue_wrapper [conqueues.tools.wrapper]
?.?.6.2.2 Binary ends [conqueues.tools.binends]
?.?.6.3 Managed Ends [conqueues.tools.managed]
?.?.6.3.1 Class template shared_queue_back [conqueues.tools.sharedback]
?.?.6.3.2 Class template shared_queue_front [conqueues.tools.front]
?.?.6.3.3 Function template share_queue_ends [conqueues.tools.shareendsfront]
Abandoned Interfaces
Non-Blocking Operations
Push Front Operations
Queue Names
Lock-Free Buffer Queue
Storage Iterators
Queue Ordering
Lock-Free Implementations
Concrete Queues
Locking Buffer Queue
Abandoned Additional Conceptual Tools
Fronts and Backs
Streaming Iterators
Binary Interfaces
Managed Indirection
Revision History
This paper revises P0260R6 - 2023-06-16 as follows.
Fix typos.
Implement LEWG feedback to derive conqueue_errc from system_error
Implement LEWG feedback to add range constructor and go back to InputIterator
size_t capacity() added
Added TBB concurrent_bounded_queue as existing practice
Moved discussion about pop() interface to separate paper
This paper revises P0260R5 - 2023-01-15 as follows.
Fixing typos.
Added a scope for the target TS.
Added questions to be answered by a TS.
Added asynchronous interface
P0260R5 revises P0260R4 - 2020-01-12 as follows.
Added more introductory material.
Added response to feedback by LEWGI at Prague meeting 2020.
Added section on existing practice.
Replaced value_pop with pop.
Replaced is_lock_free with is_always_lockfree.
Removed is_empty and is_full.
Added move-into parameter to try_push(Element&&).
Added note that exceptions thrown by the queue operations themselves are derived from std::exception.
Added a note that the wording is partly invalid.
Moved more contents into the "Abandoned" part to avoid confusion.
P0260R4 revised P0260R3 - 2019-01-20 as follows.
Remove the binding of queue_op_status::success to a value of zero.
Correct stale use of the Queue template parameter in shared_queue_front to Value.
Change the return type of share_queue_ends from a pair to a custom struct.
Move the concrete queue proposal to a separate paper, P1958R0.
P0260R3 revised P0260R2 - 2017-10-15 as follows.
Convert queue_wrapper to a function-like interface.
This conversion removes the queue_base class.
Thanks to Zach Lane for the approach.
Removed the requirement that element types have a default constructor.
This removal implies that statically sized buffers cannot use an array implementation
and must grow a vector implementation to the maximum size.
Added a discussion of checking for output iterator end
in the wording.
Fill in synopsis section.
Remove stale discussion of queue_owner.
Move all abandoned interface discussion to a new section.
Update paper header to current practice.
P0260R2 revised P0260R1 - 2017-02-05 as follows.
Emphasize that non-blocking operations
were removed from the proposed changes.
Correct syntax typos for noexcept and template alias.
Remove static from is_lock_free for generic_queue_back and generic_queue_front.
P0260R1 revised P0260R0 - 2016-02-14 as follows.
Remove pure virtuals from queue_wrapper.
Correct queue::pop to value_pop.
Remove nonblocking operations.
Remove non-locking buffer queue concrete class.
Tighten up push/pop wording on closed queues.
Tighten up push/pop wording on synchronization.
Add note about possible non-FIFO behavior.
Define buffer_queue to be FIFO.
Make wording consistent across attributes.
Add a restriction on element special methods using the queue.
Make is_lock_free() apply only to non-waiting functions.
Make is_lock_free() static for non-indirect classes.
Make is_lock_free() noexcept.
Make has_queue() noexcept.
Make destructors noexcept.
Replace "throws nothing" with noexcept.
Make the remarks about the usefulness of is_empty() and is_full() into notes.
Make the non-static member functions is_... and has_... const.
P0260R0 revised N3533 - 2013-03-12 as follows.
Update links to source code.
Add wording.
Leave the name facility out of the wording.
Leave the push-front facility out of the wording.
Leave the reopen facility out of the wording.
Leave the storage iterator facility out of the wording.
N3533 revised N3434 = 12-0043 - 2012-01-14 as follows.
Add more exposition.
Provide separate non-blocking operations.
Add a section on the lock-free queues.
Argue against push-back operations.
Add a cautionary note on the usefulness of is_closed().
Expand the cautionary note on the usefulness of is_empty().
Add is_full().
Add a subsection on element type requirements.
Add a subsection on exception handling.
Clarify ordering constraints on the interface.
Add a subsection on a lock-free concrete queue.
Add a section on content iterators,
distinct from the existing streaming iterators section.
Swap front and back names, as requested.
General expository cleanup.
Add a 'Revision History' section.
N3434 revised N3353 = 12-0043 - 2012-01-14 as follows.
Change the inheritance-based interface
to a pure conceptual interface.
Put 'try' operations into a separate subsection.
Add a subsection on non-blocking operations.
Add a subsection on push-back operations.
Add a subsection on queue ordering.
Merge the 'Binary Interface' and 'Managed Indirection' sections
into a new 'Conceptual Tools' section.
Expand on the topics and their rationale.
Add a subsection to 'Conceptual Tools'
that provides for type erasure.
Remove the 'Synopsis' section.
Add an 'Implementation' section.
Introduction
Queues provide a mechanism
for communicating data between components of a system.
The existing
deque
in the standard library
is an inherently sequential data structure.
Its reference-returning element access operations
cannot synchronize access to those elements
with other queue operations.
So, concurrent pushes and pops on queues
require a different interface to the queue structure.
Moreover,
concurrency adds a new dimension for performance and semantics.
Different queue implementations must trade off
uncontended operation cost, contended operation cost,
and element order guarantees.
Some of these trade-offs will necessarily result
in semantics weaker than a serial queue.
Concurrent queues come in several different flavours, e.g.
bounded vs. unbounded
blocking vs. overwriting
single-ended vs. multi-ended
strict FIFO ordering vs. priority based ordering
The syntactic concept proposed here should be valid for all of these flavours,
while the concrete semantics might differ.
Target Vehicle
This proposal targets a TS.
It was originally sent to LEWG for inclusion into Concurrency TS v2.
As Concurrency TS v2 will probably be published before this proposal
is ready to be published, we propose to include concurrent queues into
Concurrency TS v3 and publish this as soon as concurrent queues are ready.
This leaves the door open for other proposals to share the same ship vehicle.
The scope for Concurrency TS v3 would be the same as that for v2:
"This document describes requirements for implementations of an interface
that computer programs written in the C++ programming language may use
to invoke algorithms with concurrent execution. The algorithms
described by this document are realizable across a broad class
of computer architectures."
Should the committee decide to restrict the scope of the TS to only contain
concurrent queues, we propose a slightly different scope:
"This document describes requirements for implementations of an interface
that computer programs written in the C++ programming language may use
to communicate between different execution agents of algorithms with
concurrent execution. The algorithms
described by this document are realizable across a broad class
of computer architectures."
Questions for a TS to Answer
We expect that the TS will inform future work on a variety of questions,
particularly those listed below, using real-world implementation
experience that cannot be obtained without a TS.
Is the proposed concept useful?
Specifically, does it cover different implementations and does it
work together with other concepts for concurrent queues, e.g.
queues with only non-blocking functions or queues with an asynchronous
interface?
Is the concrete queue useful without an asynchronous interface?
Can an asynchronous interface be added without extra overhead?
What other concrete implementations should be provided?
Is a queue that is ignorant of execution contexts from std::execution still useful?
Existing Practice
Concept of a Bounded Queue
The basic concept of a bounded queue with potentially blocking
push and pop operations is very old and widely used.
It's generally provided as an operating system level facility,
like other concurrency primitives.
POSIX 2001 has mq message queues (with priorities and timeout).
Windows ?
FreeRTOS, Mbed, vxWorks
Bounded Queues with C++ Interface
Literature
Boost
TBB has concurrent_bounded_queue (and an unbounded version concurrent_queue that has only non-blocking operations).
Conceptual Interface
We provide basic queue operations,
and then extend those operations to cover other important issues.
By analogy with how future defines its errors, we introduce a conqueue_errc enum and conqueue_error as follows:
enum class conqueue_errc { success, empty, full, closed };
template <>
struct is_error_code_enum<conqueue_errc> : public true_type {};
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : public system_error;
These errors will be reported from concurrent queue operations as specified below.
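As a concrete illustration of how such an enum plugs into the standard `<system_error>` machinery, here is a minimal sketch. The category name, the message strings, and the exact shape of `conqueue_error` are assumptions for illustration only, not the proposal's normative definitions (and `make_error_condition` is omitted for brevity):

```cpp
#include <string>
#include <system_error>

// Hypothetical sketch of the error plumbing proposed above.
enum class conqueue_errc { success, empty, full, closed };

// Opt conqueue_errc into implicit conversion to std::error_code.
namespace std {
template <> struct is_error_code_enum<conqueue_errc> : true_type {};
}

// Program-defined error category; name and messages are illustrative.
class conqueue_category_impl : public std::error_category {
public:
    const char* name() const noexcept override { return "conqueue"; }
    std::string message(int ev) const override {
        switch (static_cast<conqueue_errc>(ev)) {
            case conqueue_errc::success: return "success";
            case conqueue_errc::empty:   return "queue is empty";
            case conqueue_errc::full:    return "queue is full";
            case conqueue_errc::closed:  return "queue is closed";
        }
        return "unknown";
    }
};

const std::error_category& conqueue_category() noexcept {
    static conqueue_category_impl cat;  // single program-wide instance
    return cat;
}

std::error_code make_error_code(conqueue_errc e) noexcept {
    return {static_cast<int>(e), conqueue_category()};
}

// conqueue_error carries the conqueue_errc inside a system_error.
class conqueue_error : public std::system_error {
public:
    explicit conqueue_error(conqueue_errc e)
        : std::system_error(make_error_code(e)) {}
};
```

With `is_error_code_enum` specialized, `std::error_code ec = conqueue_errc::closed;` works directly, which is what lets the `ec`-taking queue operations report status without exceptions.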
Basic Operations
The essential solution to the problem of concurrent queuing
is to shift to value-based operations,
rather than reference-based operations.
The basic operations are:
void queue::push(const T& x);
void queue::push(T&& x);
bool queue::push(const T& x, std::error_code& ec);
bool queue::push(T&& x, std::error_code& ec);
Pushes x onto the queue via copy or move construction.
The first two versions throw std::conqueue_error(conqueue_errc::closed) if the queue is closed.
The ec-taking versions return true on success, or return false and set ec to error_code(conqueue_errc::closed) if the queue is closed.
T queue::pop();
std::optional<T> queue::pop(std::error_code& ec);
Pops a value from the queue via move construction into the return value.
The first version throws std::conqueue_error(conqueue_errc::closed) if the queue is empty and closed;
the second version, if the queue is empty and closed, returns std::nullopt and sets ec to std::error_code(conqueue_errc::closed).
If the queue is empty and open, the operation blocks until an element is available.
In the original buffer_queue paper, the pop function had the signature T pop_value().
Subsequently, it was changed to void pop(T&) due to concern about the problem of losing elements when an error occurs.
The exploration of different versions of error reporting was moved to a separate paper, P2921.
Asynchronous Operations
sender auto queue::async_push(T x);
sender auto queue::async_pop();
These operations return a sender that will push or pop the element.
Senders must support cancellation: if the receiver is currently waiting on a push or pop operation and is no longer interested in performing it, the operation should be removed from any waiting queues and be completed with std::execution::set_stopped.
Non-Waiting Operations
Waiting on a full or empty queue can take a while,
which has an opportunity cost.
Avoiding that wait enables algorithms
to avoid queuing speculative work when a queue is full,
to do other work rather than wait for a push on a full queue, and
to do other work rather than wait for a pop on an empty queue.
bool queue::try_push(const T& x, std::error_code& ec);
bool queue::try_push(T&& x, std::error_code& ec);
If the queue is full or closed, returns false and sets the respective status in ec.
Otherwise, pushes the value onto the queue via copy or move construction and returns true.
REVISITED in Varna
The following version was introduced in response to LEWG-I concerns about losing the element if an rvalue cannot be stored in the queue:
queue_op_status queue::try_push(T&&, T&);
However, SG1 reaffirmed the APIs above with the following rationale.
The same usage is possible in both versions:
T x = get_something();
if (q.try_push(std::move(x))) ...
With the two-parameter version:
T x;
if (q.try_push(get_something(), x)) ...
Ergonomically they are roughly identical.
The API is slightly simpler with the one-argument version; therefore, we reverted to the original one-argument version.
optional<T> queue::try_pop(std::error_code& ec);
If the queue is empty, returns nullopt and sets ec to conqueue_errc::empty.
Otherwise, pops the element from the queue via move construction into the optional,
returns it, and sets ec to conqueue_errc::success.
These operations will not wait when the queue is full or empty.
They may block for mutual exclusion.
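To show how non-waiting operations can fail fast instead of blocking, here is a sketch of a fixed-capacity single-producer/single-consumer ring buffer whose try operations never wait. The `spsc_ring` and `toy_status` names are hypothetical, and this is deliberately simpler than the proposal's lock-free queue (it omits the closed state and supports only one producer and one consumer):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Illustrative sketch only: non-waiting try_push/try_pop on an SPSC
// ring. One slot is left empty to distinguish full from empty, so the
// backing array holds N+1 slots for capacity N. T must be
// default-constructible for this toy layout.
enum class toy_status { success, empty, full };

template <class T, std::size_t N>
class spsc_ring {
    std::array<T, N + 1> buf_{};
    std::atomic<std::size_t> head_{0};   // next slot to pop (consumer side)
    std::atomic<std::size_t> tail_{0};   // next slot to push (producer side)
public:
    toy_status try_push(T x) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % (N + 1);
        if (next == head_.load(std::memory_order_acquire))
            return toy_status::full;     // no waiting: report and return
        buf_[t] = std::move(x);
        tail_.store(next, std::memory_order_release);
        return toy_status::success;
    }

    toy_status try_pop(std::optional<T>& out) {
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) {
            out = std::nullopt;
            return toy_status::empty;    // no waiting: report and return
        }
        out = std::move(buf_[h]);
        head_.store((h + 1) % (N + 1), std::memory_order_release);
        return toy_status::success;
    }
};
```

Because neither operation ever sleeps, a caller can immediately fall back to other work on `full` or `empty`, which is exactly the opportunity-cost argument made above.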
Closed Queues
Threads using a queue for communication
need some mechanism to signal when the queue is no longer needed.
The usual approach is to add an additional out-of-band signal.
However, this approach suffers from the flaw that
threads waiting on either full or empty queues
need to be woken up when the queue is no longer needed.
To do that, you need access to the condition variables
used for full/empty blocking,
which considerably increases the complexity and fragility of the interface.
It also leads to performance implications with additional mutexes or atomics.
Rather than require an out-of-band signal,
we chose to directly support such a signal in the queue itself,
which considerably simplifies coding.
To achieve this signal, a thread may close a queue.
Once closed, no new elements may be pushed onto the queue.
Push operations on a closed queue will either return conqueue_errc::closed (when they take an ec parameter)
or throw conqueue_error(conqueue_errc::closed) (when they do not).
Elements already on the queue may be popped off.
When a queue is empty and closed, pop operations will either set ec to conqueue_errc::closed (when they take an ec parameter)
or throw conqueue_error(conqueue_errc::closed) (when they do not).
The additional operations are as follows.
They are essentially equivalent to the basic operations
except that they return a status,
avoiding an exception when queues are closed.
void queue::close() noexcept;
Close the queue.
bool queue::is_closed() const noexcept;
Return true iff the queue is closed.
Element Type Requirements
The above operations require element types with copy/move constructors and a destructor.
These operations may be trivial.
The copy/move constructors may throw,
but must leave the objects in a valid state for subsequent operations.
Exception Handling
push() and pop() may throw an exception of type conqueue_error,
which is derived from std::system_error and will contain a conqueue_errc.
Concurrent queues cannot completely hide the effect of exceptions
thrown by the element type,
in part because changes cannot be transparently undone
when other threads are observing the queue.
Queues may rethrow exceptions
from storage allocation, mutexes, or condition variables.
If the required element type operations do not throw exceptions,
then only the exceptions above are rethrown.
When an element copy/move may throw,
some queue operations have additional behavior.
Construction shall rethrow,
destroying any elements allocated.
A push operation shall rethrow and the state of the queue is unaffected.
A pop operation shall rethrow and the element is popped from the queue.
The value popped is effectively lost.
(Doing otherwise would likely clog the queue with a bad element.)
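The pop policy above, namely that the element is removed even when its copy/move throws, can be demonstrated in miniature with a scope guard. `flaky` and `pop_lossy` are hypothetical names, and `std::deque` stands in for the queue's internal storage:

```cpp
#include <deque>
#include <stdexcept>

// Element type whose copy constructor can be made to throw,
// to exercise the "pop rethrows but the element is lost" policy.
struct flaky {
    int v;
    bool throw_on_copy = false;
    flaky(int x, bool t = false) : v(x), throw_on_copy(t) {}
    flaky(const flaky& o) : v(o.v), throw_on_copy(o.throw_on_copy) {
        if (throw_on_copy) throw std::runtime_error("copy failed");
    }
};

// Pop the front element by value. The scope guard runs pop_front()
// on both normal and exceptional exit, so a "bad" element cannot
// clog the queue: it is removed even if copying it out throws.
flaky pop_lossy(std::deque<flaky>& q) {
    struct guard {
        std::deque<flaky>& q;
        ~guard() { q.pop_front(); }   // removal happens regardless
    } g{q};
    return q.front();                 // may throw during the copy out
}
```

The trade-off is exactly the one stated above: the thrown-from element's value is effectively lost, in exchange for keeping the queue usable.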
Concrete Queues
In addition to the concept, the standard needs at least one concrete queue.
P1958R0 provides one such concrete queue, buffer_queue, which is outlined below:
enum class conqueue_errc { success, empty, full, closed };
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : system_error { ... };
template <typename T,
class Allocator = std::allocator<T>>
class buffer_queue
{
buffer_queue() = delete;
buffer_queue(const buffer_queue&) = delete;
buffer_queue& operator =(const buffer_queue&) = delete;
public:
typedef T value_type;
// construct/destroy
explicit buffer_queue(size_t max_elems, const Allocator& alloc = Allocator());
explicit buffer_queue(std::initializer_list<T>, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <typename InputIterator>
buffer_queue(InputIterator begin, InputIterator end, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <container-compatible-range<T> R>
buffer_queue(from_range_t, R&& rg, size_t max_elems = 0,
const Allocator& alloc = Allocator());
~buffer_queue() noexcept;
// observers
size_t capacity() const noexcept;
bool is_closed() const noexcept;
static constexpr bool is_always_lock_free() noexcept;
// modifiers
void close() noexcept;
T pop();
optional<T> pop(std::error_code& ec);
optional<T> try_pop(std::error_code& ec);
void push(const T& x);
void push(T&& x);
bool push(const T& x, std::error_code& ec);
bool push(T&& x, std::error_code& ec);
bool try_push(const T& x, std::error_code& ec);
bool try_push(T&& x, std::error_code& ec);
};
buffer_queue is only allowed to allocate in its constructor.
Constructors that take an initializing sequence are allowed to omit the size_t max_elems argument,
which is then assumed to be equal to the size of the initializing sequence.
Response to Feedback by LEWGI at Prague 2020 meeting
At the Prague meeting in February 2020, LEWGI provided feedback and set some action items.
"Explore P0059
ring_buffer
prior art and document it in paper."
ring_buffer
is like
std::queue
is a sequential
data structure and therefore provides a completely different
interface than concurrent queues.
"Consider removing
value_pop
to increase consensus."
value_pop
was replaced by
pop
that doesn't have the problem of loosing elements.
"Consider removing
is_empty
and
is_full
to increase consensus."
Done.
"Consider removing
is_lock_free
to increase consensus.
If
is_lock_free
remains, add
is_always_lock_free
(a la
atomic
)."
is_lock_free
was dropped but
is_always_lock_free
was added anyways.
"Remove the maybe-consuming
try_push(&&)
.
Investigate prior art (such as TBB's
concurrent_queue
)
and add either:
An always-consuming
try_push(&&)
which returns
queue_op_status
,
An always-consuming
try_push(&&)
which
returns the input on failure."
TBB's
concurrent_queue
doesn't have
try_push
.
try_push(&&)
now has an additional parameter
that gets the element if it couldn't be pushed.
"Require
buffer_queue
to allocate all storage only once".
"Require
buffer_queue
to allocate all storage
during construction".
"Instead of throwing
queue_op_status
objects,
add a standard library exception type and throw that."
These requests are all valid and added (partly to P1958R1).
Implementation
An implementation is available at https://github.com/GorNishanov/conqueue.
A free, open-source implementation of an earlier version of these interfaces
is available at the Google Concurrency Library project at
https://github.com/alasdairmackintosh/google-concurrency-library.
The concrete buffer_queue is in ..../blob/master/include/buffer_queue.h.
The concrete lock_free_buffer_queue is in ..../blob/master/include/lock_free_buffer_queue.h.
The corresponding implementation of the conceptual tools is in ..../blob/master/include/queue_base.h.
Historic Contents
The contents below are for historic reference only.
Proposed Wording
Note: This wording is left for general reference.
It was not updated from previous proposals as first the design
should be fixed.
So the wording here partly contradicts the design proposed above.
In these cases the design is proposed and not the wording!
The concurrent queue container definition is as follows.
The section, paragraph, and table references are based on those of
N4567, Working Draft, Standard for Programming Language C++, Richard Smith, November 2015.
?.? Concurrent queues [conqueues]
Add a new section.
?.?.1 General [conqueues.general]
Add a new section.
This section provides mechanisms for concurrent access to a queue.
These mechanisms ease the production of race-free programs
(1.10 [intro.multithread]).
?.?.2 Header <conqueue> synopsis [conqueues.syn]
Add a new section.
enum class queue_op_status { success, empty, full, closed };
template <typename Value> class buffer_queue;
template <typename Queue> class generic_queue_back;
template <typename Queue> class generic_queue_front;
template <typename Value> class queue_base;
template <typename Value>
using queue_back = generic_queue_back< queue_base< Value > >;
template <typename Value>
using queue_front = generic_queue_front< queue_base< Value > >;
template <typename Queue> class queue_wrapper;
template <typename Value> class shared_queue_back;
template <typename Value> class shared_queue_front;
template <typename Value> class shared_queue_ends;
template <typename Queue, typename ... Args>
shared_queue_ends<typename Queue::value_type>
share_queue_ends(Args ... args);
?.?.3 Operation status [conqueues.status]
Add a new section.
Many concurrent queue operations return a status
in the form of the following enumeration.
enum class queue_op_status
Enumerators:
success = 0, empty, full, closed
?.?.4 Concepts [conqueues.concepts]
Add a new section.
This section provides the conceptual operations for concurrent queues of type queue with element type Element.
?.?.4.1 Element requirements [conqueues.concept.elemreq]
Add a new section:
The types of the elements of a concurrent queue must provide
either or both of a copy constructor or a move constructor,
either or both of a copy assignment operator or a move assignment operator,
and a destructor.
Any copy/move constructor or copy/move assignment operator that throws
shall leave the objects in a valid state for subsequent operations.
None of the above constructors, assignments or destructor
may call any operation on a concurrent queue
for which their objects may become a member.
[
Note:
Queues may hold an internal lock while performing the above operations,
and if they were to call a queue operation, deadlock would result.
—
end note
]
?.?.4.2 Element type naming [conqueues.concept.elemtype]
Add a new section:
The queue class shall provide a typedef to its element value type.
typedef implementation-defined value_type;
?.?.4.3 Lock-free attribute operations [conqueues.concept.lockfree]
Add a new section:
A queue type either provides lock-free operations (1.10 [intro.multithread]), or it does not.
static bool queue::is_lock_free() noexcept;
Returns:
If the non-waiting operations of the queue type are lock-free, true.
Otherwise, false.
Remark:
The function returns the same result for all instances of the type.
?.?.4.4 Synchronization [conqueues.concept.sync]
Add a new section:
For synchronization purposes, and unless otherwise stated,
all queue operations appear to operate on a single memory location,
all non-const queue operations
appear to be sequentially consistent atomic read-modify-write operations,
and all const queue operations appear to be atomic loads from this location.
[
Note:
In particular,
all queue operations appear to execute in a single global order,
that is part of the total order
S
(29.3 [atomics.order])
of sequentially consistent operations.
Each non-const queue operation
A
strongly happens before every operation on the same queue
that follows
A
in
S
.
Whether or not the queue preserves a FIFO order
is a property of the concrete class.
—
end note
]
?.?.4.4 State operations [conqueues.concept.state]
Add a new section:
Upon construction, every queue shall be in an open state.
It may move to a closed state,
but shall not move back to an open state.
void queue::close() noexcept;
Effects:
Closes the queue.
No pushes subsequent to the close shall succeed.
bool queue::is_closed() const noexcept;
Returns:
true if the queue is closed; otherwise, false.
?.?.4.5 Waiting operations [conqueues.concept.wait]
Add a new section:
void queue::push(const Element&);
void queue::push(Element&&);
Effects:
If the queue is closed, throws an exception.
Otherwise, if space is available on the queue,
copies or moves the element onto the queue and returns.
Otherwise, waits until space is available or the queue is closed.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the state of the queue is unaffected and the push shall rethrow the exception.
If the operation cannot otherwise complete because the queue is closed,
throws queue_op_status::closed.
void queue::pop(Element&);
Effects:
If an element is available on the queue,
moves the element from the queue to the parameter and returns.
Otherwise, if the queue is closed, throws an exception.
Otherwise, waits until an element is available or the queue is closed.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the element is still popped from the queue and the pop shall rethrow the exception.
If the operation cannot otherwise complete because the queue is closed,
throws queue_op_status::closed.
queue_op_status queue::wait_push(const Element&);
queue_op_status queue::wait_push(Element&&);
Effects:
If the queue is closed, returns.
Otherwise, if space is available on the queue,
copies or moves the element onto the queue and returns.
Otherwise, waits until space is available or the queue is closed.
Returns:
If the queue was closed, queue_op_status::closed.
Otherwise, the push was successful: queue_op_status::success.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the state of the queue is unaffected and the push shall rethrow the exception.
queue_op_status queue::wait_pop(Element&);
Effects:
If an element is available on the queue,
moves the element from the queue to the parameter and returns.
Otherwise, if the queue is closed, returns.
Otherwise, waits until an element is available or the queue is closed.
Returns:
If the queue was closed, queue_op_status::closed.
Otherwise, the pop was successful: queue_op_status::success.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the element is still popped from the queue and the pop shall rethrow the exception.
?.?.4.6 Non-waiting operations [conqueues.concept.nonwait]
Add a new section:
queue_op_status queue::try_push(const Element&);
queue_op_status queue::try_push(Element&&);
Effects:
If the queue is closed, returns.
Otherwise, if space is available on the queue,
copies or moves the element onto the queue and returns.
Otherwise, returns.
Returns:
If the queue was closed, queue_op_status::closed.
Otherwise, if the push was successful, queue_op_status::success.
Otherwise, space was unavailable: queue_op_status::full.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the state of the queue is unaffected and the push shall rethrow the exception.
queue_op_status queue::try_pop(Element&);
Effects:
If an element is available on the queue,
moves the element from the queue to the parameter and returns.
Otherwise, returns.
Returns:
If the pop was successful, queue_op_status::success.
Otherwise, if the queue is closed, queue_op_status::closed.
Otherwise, no element was available: queue_op_status::empty.
Throws:
Any exception from operations on storage allocation, mutexes, or condition variables.
If an element copy/move operation throws,
the element is still popped from the queue and the pop shall rethrow the exception.
?.?.4.7 Type concepts [conqueues.concept.type]
Add a new section:
The WaitingConcurrentQueue concept provides all of the operations specified above.
The NonWaitingConcurrentQueue concept provides all of the operations specified above,
except the waiting operations ([conqueues.concept.wait]).
A NonWaitingConcurrentQueue is lock-free (1.10 [intro.multithread])
when its member function is_lock_free reports true.
The WaitingConcurrentQueueBack concept provides all of the operations specified above
except the pop operations.
The WaitingConcurrentQueueFront concept provides all of the operations specified above
except the push operations.
The NonWaitingConcurrentQueueBack concept provides all of the operations specified above
except the pop operations and the waiting push operations.
A NonWaitingConcurrentQueueBack is lock-free (1.10 [intro.multithread])
when its member function is_lock_free reports true.
The NonWaitingConcurrentQueueFront concept provides all of the operations specified above
except the push operations and the waiting pop operations.
A NonWaitingConcurrentQueueFront is lock-free (1.10 [intro.multithread])
when its member function is_lock_free reports true.
?.?.5 Concrete queues [conqueues.concrete]
Add a new section, with content to be provided by other papers.
?.?.6 Tools [conqueues.tools]
Add a new section:
Additional tools help to use and manage concurrent queues.
?.?.6.1 Ends and Iterators [conqueues.tools.ends]
Add a new section:
Access to only a single end of a queue is a valuable code structuring tool. A single end can also provide unambiguous `begin` and `end` operations that return iterators. Because queues may be closed and hence accept no further pushes, output iterators must also be checked for having reached the end, i.e. having been closed.
[ Example:
```cpp
void iterate(
    generic_queue_back<buffer_queue<int>>::iterator bitr,
    generic_queue_back<buffer_queue<int>>::iterator bend,
    generic_queue_front<buffer_queue<int>>::iterator fitr,
    generic_queue_front<buffer_queue<int>>::iterator fend,
    int (*compute)( int ) )
{
    while ( fitr != fend && bitr != bend )
        *bitr++ = compute(*fitr++);
}
```
— end example ]
?.?.6.1.1 Class template `generic_queue_back` [conqueues.tools.back]
Add a new section:
```cpp
template <typename Queue>
class generic_queue_back
{
public:
    typedef typename Queue::value_type value_type;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef implementation-defined iterator;
    typedef const iterator const_iterator;
    generic_queue_back(Queue& queue);
    generic_queue_back(Queue* queue);
    generic_queue_back(const generic_queue_back& other) = default;
    generic_queue_back& operator =(const generic_queue_back& other) = default;
    ~generic_queue_back() noexcept;
    void close() noexcept;
    bool is_closed() const noexcept;
    bool is_empty() const noexcept;
    bool is_full() const noexcept;
    bool is_lock_free() const noexcept;
    bool has_queue() const noexcept;
    iterator begin();
    iterator end();
    const iterator cbegin();
    const iterator cend();
    void push(const value_type& x);
    queue_op_status wait_push(const value_type& x);
    queue_op_status try_push(const value_type& x);
    void push(value_type&& x);
    queue_op_status wait_push(value_type&& x);
    queue_op_status try_push(value_type&& x);
};
```
The class template `generic_queue_back` implements the `WaitingConcurrentQueueBack` concept.
`generic_queue_back(Queue& queue);`
`generic_queue_back(Queue* queue);`
Effects: Constructs the queue back with a pointer to the queue object given.
`~generic_queue_back() noexcept;`
Effects: Destroys the queue back.
`bool has_queue() const noexcept;`
Returns: `true` if the contained pointer is not null, `false` otherwise.
?.?.6.1.2 Class template `generic_queue_front` [conqueues.tools.front]
Add a new section:
```cpp
template <typename Queue>
class generic_queue_front
{
public:
    typedef typename Queue::value_type value_type;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef implementation-defined iterator;
    typedef const iterator const_iterator;
    generic_queue_front(Queue& queue);
    generic_queue_front(Queue* queue);
    generic_queue_front(const generic_queue_front& other) = default;
    generic_queue_front& operator =(const generic_queue_front& other) = default;
    ~generic_queue_front() noexcept;
    void close() noexcept;
    bool is_closed() const noexcept;
    bool is_empty() const noexcept;
    bool is_full() const noexcept;
    bool is_lock_free() const noexcept;
    bool has_queue() const noexcept;
    iterator begin();
    iterator end();
    const iterator cbegin();
    const iterator cend();
    value_type value_pop();
    queue_op_status wait_pop(value_type& x);
    queue_op_status try_pop(value_type& x);
};
```
The class template `generic_queue_front` implements the `WaitingConcurrentQueueFront` concept.
`generic_queue_front(Queue& queue);`
`generic_queue_front(Queue* queue);`
Effects: Constructs the queue front with a pointer to the queue object given.
`~generic_queue_front() noexcept;`
Effects: Destroys the queue front.
`bool has_queue() const noexcept;`
Returns: `true` if the contained pointer is not null, `false` otherwise.
?.?.6.2 Binary interfaces [conqueues.tools.binary]
Add a new section:
Occasionally it is best to have a binary interface to any concurrent queue of a given element type. This binary interface is provided by a wrapper class that erases the type of the concrete queue class.
?.?.6.2.1 Class template `queue_wrapper` [conqueues.tools.wrapper]
Add a new section:
```cpp
template<typename Value>
struct queue_wrapper
{
    using value_type = Value;
    template<typename Queue>
        queue_wrapper(Queue * arg);
    template<typename Queue>
        queue_wrapper(Queue & arg);
    ~queue_wrapper() noexcept;
    void close() noexcept;
    bool is_closed() const noexcept;
    bool is_empty() const noexcept;
    bool is_full() const noexcept;
    bool is_lock_free() const noexcept;
    void push(const value_type & x);
    queue_op_status wait_push(const value_type & x);
    queue_op_status try_push(const value_type & x);
    queue_op_status nonblocking_push(const value_type & x);
    void push(value_type && x);
    queue_op_status wait_push(value_type && x);
    queue_op_status try_push(value_type && x);
    queue_op_status nonblocking_push(value_type && x);
    value_type value_pop();
    queue_op_status wait_pop(value_type &);
    queue_op_status try_pop(value_type &);
    queue_op_status nonblocking_pop(value_type &);
};
```
The template type parameter `Queue` and the class template `queue_wrapper` shall implement the `WaitingConcurrentQueue` concept.
`template<typename Queue> queue_wrapper(Queue* arg);`
`template<typename Queue> queue_wrapper(Queue& arg);`
Effects: Constructs the queue wrapper, referencing the given queue.
`~queue_wrapper() noexcept;`
Effects: Destroys the queue wrapper, but not the referenced queue.
?.?.6.2.2 Binary ends [conqueues.tools.binends]
Add a new section:
In addition to binary interfaces to queues, binary interfaces to ends are also useful.
```cpp
template <typename Value>
using queue_back = generic_queue_back< queue_wrapper< Value > >;
template <typename Value>
using queue_front = generic_queue_front< queue_wrapper< Value > >;
```
?.?.6.3 Managed Ends [conqueues.tools.managed]
Add a new section:
Automatically managing references to queues can be helpful when queues are used as a communication medium.
?.?.6.3.1 Class template `shared_queue_back` [conqueues.tools.sharedback]
Add a new section:
```cpp
template <typename Value>
class shared_queue_back
{
public:
    typedef Value value_type;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef implementation-defined iterator;
    typedef const iterator const_iterator;
    shared_queue_back(const shared_queue_back& other);
    shared_queue_back& operator =(const shared_queue_back& other);
    void close() noexcept;
    bool is_closed() const noexcept;
    bool is_empty() const noexcept;
    bool is_full() const noexcept;
    bool is_lock_free() const noexcept;
    iterator begin();
    iterator end();
    const iterator cbegin();
    const iterator cend();
    void push(const value_type& x);
    queue_op_status wait_push(const value_type& x);
    queue_op_status try_push(const value_type& x);
    void push(value_type&& x);
    queue_op_status wait_push(value_type&& x);
    queue_op_status try_push(value_type&& x);
};
```
The class template `shared_queue_back` implements the `WaitingConcurrentQueueBack` concept.
`shared_queue_back(const shared_queue_back& other);`
`shared_queue_back& operator =(const shared_queue_back& other) = default;`
Effects: Copy the pointer to the queue, but keep the back of the queue reference counted.
`~shared_queue_back() noexcept;`
Effects: Destroys the queue back. If this is the last back reference and there are no front references, destroy the queue. If this is the last back reference and there are front references, close the queue.
?.?.6.3.2 Class template `shared_queue_front` [conqueues.tools.sharedfront]
Add a new section:
```cpp
template <typename Value>
class shared_queue_front
{
public:
    typedef Value value_type;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef implementation-defined iterator;
    typedef const iterator const_iterator;
    shared_queue_front(const shared_queue_front& other) = default;
    shared_queue_front& operator =(const shared_queue_front& other) = default;
    void close() noexcept;
    bool is_closed() const noexcept;
    bool is_empty() const noexcept;
    bool is_full() const noexcept;
    bool is_lock_free() const noexcept;
    bool has_queue() const noexcept;
    iterator begin();
    iterator end();
    const iterator cbegin();
    const iterator cend();
    value_type value_pop();
    queue_op_status wait_pop(value_type& x);
    queue_op_status try_pop(value_type& x);
};
```
The class template `shared_queue_front` implements the `WaitingConcurrentQueueFront` concept.
`shared_queue_front(const shared_queue_front& other);`
`shared_queue_front& operator =(const shared_queue_front& other) = default;`
Effects: Copy the pointer to the queue, but keep the front of the queue reference counted.
`~shared_queue_front() noexcept;`
Effects: Destroys the queue front. If this is the last front reference and there are no back references, destroy the queue. If this is the last front reference and there are back references, close the queue.
?.?.6.3.3 Class template `shared_queue_ends` [conqueues.tools.shareqends]
Add a new section:
```cpp
template <typename Value>
class shared_queue_ends
{
public:
    shared_queue_back<Value> back;
    shared_queue_front<Value> front;
};
```
?.?.6.3.4 Function template `share_queue_ends` [conqueues.tools.shareends]
Add a new section:
```cpp
template <typename Queue, typename ... Args>
shared_queue_ends<typename Queue::value_type>
share_queue_ends(Args ... args);
```
Effects: Constructs a `Queue` with the given `Args`. Initializes a set of reference counters for that queue.
Returns: a `shared_queue_ends` consisting of one `shared_queue_back` and one `shared_queue_front` for the constructed queue.
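For illustration only, the managed-end behavior specified above (close the queue when the last reference to one end goes away, destroy it only when all references are gone) can be sketched with `std::shared_ptr` and custom deleters. All names here (`toy_queue`, `make_toy_ends`) are hypothetical, not the proposed API: each end group shares one sentinel `shared_ptr` whose deleter closes the queue, while the captured owning pointer keeps the queue alive until both groups are gone.

```cpp
#include <memory>

struct toy_queue {          // stand-in for any concrete concurrent queue
    bool closed = false;
    void close() { closed = true; }
};

struct toy_shared_ends {
    std::shared_ptr<toy_queue> back;
    std::shared_ptr<toy_queue> front;
};

// One sentinel shared_ptr per end group: destroying the last copy of a group
// runs its deleter, which closes the queue. The queue itself is freed only
// after both groups are gone, because each deleter captures the owner.
inline toy_shared_ends make_toy_ends() {
    auto owner = std::make_shared<toy_queue>();
    toy_shared_ends ends;
    ends.back = std::shared_ptr<toy_queue>(owner.get(),
        [owner](toy_queue* q) { q->close(); });
    ends.front = std::shared_ptr<toy_queue>(owner.get(),
        [owner](toy_queue* q) { q->close(); });
    return ends;
}
```
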
Abandoned Interfaces
Re-opening a Queue
There are use cases for opening a queue that is closed. While we are not aware of an implementation in which the ability to reopen a queue would be a hardship, we also imagine that such an implementation could exist. Open should generally only be called if the queue is closed and empty, providing a clean synchronization point, though it is possible to call open on a non-empty queue. An open operation following a close operation is guaranteed to be visible after the close operation, and the queue is guaranteed to be open upon completion of the open call. (But of course, another close call could occur immediately thereafter.)
`void queue::open();`
Open the queue.
Note that when `is_closed()` returns false, there is no assurance that any subsequent operation finds the queue open, because some other thread may close it concurrently.
If an open operation is not available, there is an assurance that once closed, a queue stays closed. So, unless the programmer takes care to ensure that all other threads will not close the queue, only a return value of true has any meaning.
Given these concerns with reopening queues, we do not propose wording to reopen a queue.
Non-Blocking Operations
For cases when blocking for mutual exclusion is undesirable, one can consider non-blocking operations. The interface is the same as the try operations, but is allowed to also return `queue_op_status::busy` in case the operation is unable to complete without blocking.
`queue_op_status queue::nonblocking_push(const Element&);`
`queue_op_status queue::nonblocking_push(Element&&);`
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the queue. Return `queue_op_status::success`.
`queue_op_status queue::nonblocking_pop(Element&);`
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is empty, return `queue_op_status::empty`. Otherwise, pop the `Element` from the queue. The element will be moved out of the queue in preference to being copied. Return `queue_op_status::success`.
These operations will neither wait nor block. However, they may do nothing.
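One plausible implementation strategy for a lock-based queue is `std::mutex::try_lock`: if the lock cannot be acquired immediately, report `busy` instead of waiting. A minimal sketch, not proposed wording; `toy_queue`, its capacity, and the `native()` accessor exist only for this demonstration:

```cpp
#include <cstddef>
#include <deque>
#include <mutex>
#include <thread>
#include <utility>

enum class queue_op_status { success, empty, full, closed, busy };

template <typename Element>
class toy_queue {
public:
    explicit toy_queue(std::size_t capacity) : capacity_(capacity) {}

    // Never blocks: reports busy instead of waiting for the lock.
    queue_op_status nonblocking_push(const Element& e) {
        std::unique_lock<std::mutex> lock(mutex_, std::try_to_lock);
        if (!lock.owns_lock()) return queue_op_status::busy;
        if (items_.size() >= capacity_) return queue_op_status::full;
        items_.push_back(e);
        return queue_op_status::success;
    }

    queue_op_status nonblocking_pop(Element& e) {
        std::unique_lock<std::mutex> lock(mutex_, std::try_to_lock);
        if (!lock.owns_lock()) return queue_op_status::busy;
        if (items_.empty()) return queue_op_status::empty;
        e = std::move(items_.front());
        items_.pop_front();
        return queue_op_status::success;
    }

    std::mutex& native() { return mutex_; }  // exposed only for the demo

private:
    std::mutex mutex_;
    std::deque<Element> items_;
    std::size_t capacity_;
};

// Deterministic demonstration: while this thread holds the queue's lock,
// another thread's nonblocking_push must report busy rather than block.
inline bool demo_busy() {
    toy_queue<int> q(4);
    q.native().lock();
    queue_op_status seen = queue_op_status::success;
    std::thread t([&q, &seen] { seen = q.nonblocking_push(1); });
    t.join();
    q.native().unlock();
    return seen == queue_op_status::busy;
}
```
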
The non-blocking operations highlight a terminology problem. In terms of synchronization effects, `nonwaiting_push` on queues is equivalent to `try_lock` on mutexes. And so one could conclude that the existing `try_push` should be renamed `nonwaiting_push` and `nonblocking_push` should be renamed `try_push`. However, at least Threading Building Blocks uses the existing terminology. Perhaps better is to not use `try_push` and instead use `nonwaiting_push` and `nonblocking_push`.
In November 2016, the Concurrency Study Group chose to defer non-blocking operations. Hence, the proposed wording does not include these functions. In addition, as these functions were the only ones that returned `busy`, that enumerator is also not included.
Push Front Operations
Occasionally, one may wish to return a popped item to the queue. We can provide for this with `push_front` operations.
`void queue::push_front(const Element&);`
`void queue::push_front(Element&&);`
Push the `Element` onto the front of the queue, i.e. in at the end of the queue that is normally popped.
`queue_op_status queue::try_push_front(const Element&);`
`queue_op_status queue::try_push_front(Element&&);`
If the queue was full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. in at the end of the queue that is normally popped. Return `queue_op_status::success`.
`queue_op_status queue::nonblocking_push_front(const Element&);`
`queue_op_status queue::nonblocking_push_front(Element&&);`
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. in at the end of the queue that is normally popped. Return `queue_op_status::success`.
This feature was requested at the Spring 2012 meeting. However, we do not think the feature works.
- The name `push_front` is inconsistent with existing "push back" nomenclature.
- The effects of `push_front` are only distinguishable from a regular push when there is a strong ordering of elements. Highly concurrent queues will likely have no strong ordering.
- The `push_front` call may fail due to full queues, closed queues, etc., in which case the operation will suffer contention and may succeed only after interposing push and pop operations. The consequence is that the original push order is not preserved in the final pop order. So, `push_front` cannot be directly used as an 'undo'.
- The operation implies an ability to reverse internal changes at the front of the queue. This ability implies a loss of efficiency in some implementations.
In short, we do not think that in a concurrent environment `push_front` provides sufficient semantic value to justify its cost. Consequently, the proposed wording does not provide this feature.
Queue Names
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnostics, particularly when 'ends' are dynamically passed around between threads. See Managed Indirection.
`const char* queue::name();`
Return the name string provided as a parameter to queue construction.
There is some debate on this facility, but we see no way to effectively replicate it. However, in recognition of that debate, the wording does not provide the name facility.
Lock-Free Buffer Queue
We provide a concrete concurrent queue in the form of a fixed-size `lock_free_buffer_queue`. It meets the `NonWaitingConcurrentQueue` concept. The queue is still under development, so details may change.
In November 2016, the Concurrency Study Group chose to defer lock-free queues. Hence, the proposed wording does not include a concrete lock-free queue.
Storage Iterators
In addition to iterators that stream data into and out of a queue, we could provide an iterator over the storage contents of a queue. Such an iterator, even when implementable, would most likely be valid only when the queue is otherwise quiescent. We believe such an iterator would be most useful for debugging, which may well require knowledge of the concrete class. Therefore, we do not propose wording for this feature.
Empty and Full Queues
It is sometimes desirable to know if a queue is empty.
`bool queue::is_empty() const noexcept;`
Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known to not be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime, or when the system is in a quiescent state (where there are no outstanding pushes).
We can imagine occasional use for knowing when a queue is full, for instance in system performance polling. The motivation is significantly weaker, though.
`bool queue::is_full() const noexcept;`
Return true iff the queue is full.
Not all queues will have a full state, and these would always return false.
Queue Ordering
The conceptual queue interface makes minimal guarantees.
- The queue is not empty if there is an element that has been pushed but not popped.
- A push operation synchronizes with the pop operation that obtains that element.
- A close operation synchronizes with an operation that observes that the queue is closed.
- There is a sequentially consistent order of operations.
In particular, the conceptual interface does not guarantee that the sequentially consistent order of element pushes matches the sequentially consistent order of pops. Concrete queues could specify more specific ordering guarantees.
Lock-Free Implementations
Lock-free queues will have some trouble waiting for the queue to be non-empty or non-full. Therefore, we propose two closely-related concepts: a full concurrent queue concept as described above, and a non-waiting concurrent queue concept that has all the operations except `push`, `wait_push`, `value_pop` and `wait_pop`. That is, it has only non-waiting operations (presumably emulated with busy wait) and non-blocking operations, but no waiting operations. We propose naming these `WaitingConcurrentQueue` and `NonWaitingConcurrentQueue`, respectively.
Note: Adopting this conceptual split requires splitting some of the facilities defined later.
For generic code it's sometimes important to know if a concurrent queue has a lock-free implementation.
`constexpr static bool queue::is_always_lock_free() noexcept;`
Return true iff the queue has a lock-free implementation of the non-waiting operations.
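A sketch of how generic code might branch on such a query at compile time. The two queue types here are stand-ins invented for the example, not proposed classes; the point is only that `if constexpr` lets the strategy differ per queue type with no runtime cost:

```cpp
// Two stand-in queue types advertising their lock-freedom at compile time.
struct lockfree_like {
    static constexpr bool is_always_lock_free() noexcept { return true; }
};
struct lock_based {
    static constexpr bool is_always_lock_free() noexcept { return false; }
};

// E.g. a real-time producer might spin retrying a lock-free queue, but
// should hand work off rather than retry against a lock-based one.
template <typename Queue>
constexpr bool can_spin() {
    if constexpr (Queue::is_always_lock_free())
        return true;   // no lock holder to get preempted mid-operation
    else
        return false;  // a retry loop could wait on a preempted lock holder
}
```
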
Abandoned Additional Conceptual Tools
There are a number of tools that support use of the conceptual interface. These tools are not part of the queue interface, but provide restricted views or adapters on top of the queue useful in implementing concurrent algorithms.
Fronts and Backs
Restricting an interface to one side of a queue is a valuable code structuring tool. This restriction is accomplished with the classes `generic_queue_front` and `generic_queue_back` parameterized on the concrete queue implementation. These act as pointers with access to only the front or the back of a queue. The front of the queue is where elements are popped. The back of the queue is where elements are pushed.
```cpp
void send( int number, generic_queue_back<buffer_queue<int>> arv );
```
These fronts and backs are also able to provide `begin` and `end` operations that unambiguously stream data into or out of a queue.
Streaming Iterators
In order to enable the use of existing algorithms streaming through concurrent queues, they need to support iterators. Output iterators will push to a queue and input iterators will pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues. Iterators implicitly require waiting for the advance, so iterators are only supportable with the `WaitingConcurrentQueue` concept.
```cpp
void iterate(
    generic_queue_back<buffer_queue<int>>::iterator bitr,
    generic_queue_back<buffer_queue<int>>::iterator bend,
    generic_queue_front<buffer_queue<int>>::iterator fitr,
    generic_queue_front<buffer_queue<int>>::iterator fend,
    int (*compute)( int ) )
{
    while ( fitr != fend && bitr != bend )
        *bitr++ = compute(*fitr++);
}
```
Note that contrary to existing iterator algorithms, we check both iterators for reaching their end, as either may be closed at any time.
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
Binary Interfaces
The standard library is template based, but it is often desirable to have a binary interface that shields clients from the concrete implementations. For example, `std::function` is a binary interface to callable objects (of a given signature). We achieve this capability in queues with type erasure.
We provide a `queue_base` class template parameterized by the value type. Its operations are virtual. This class provides the essential independence from the queue representation.
We also provide `queue_front` and `queue_back` class templates parameterized by the value types. These are essentially `generic_queue_front<queue_base<Value>>` and `generic_queue_back<queue_base<Value>>`, respectively.
To obtain a pointer to `queue_base` from a non-virtual concurrent queue, construct an instance of the `queue_wrapper` class template, which is parameterized on the queue and derived from `queue_base`. Upcasting a pointer to the `queue_wrapper` instance to a `queue_base` instance thus erases the concrete queue type.
```cpp
extern void seq_fill( int count, queue_back<int> b );

buffer_queue<int> body( 10 /*elements*/, /*named*/ "body" );
queue_wrapper<buffer_queue<int>> wrap( body );
seq_fill( 10, wrap.back() );
```
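The type-erasure pattern described above can be shown in a self-contained sketch. The names here (`queue_interface`, `queue_model`, `simple_queue`, `erase`) are illustrative and deliberately differ from the proposed `queue_base`/`queue_wrapper`; they show the mechanism, not the proposed API:

```cpp
#include <deque>
#include <memory>

// The binary interface: clients depend only on this abstract type.
template <typename Value>
struct queue_interface {
    virtual ~queue_interface() = default;
    virtual void push(const Value&) = 0;
    virtual bool try_pop(Value&) = 0;
};

// Erases the concrete queue type behind the interface, as queue_wrapper
// erases it behind queue_base.
template <typename Value, typename Queue>
struct queue_model : queue_interface<Value> {
    explicit queue_model(Queue& q) : q_(&q) {}
    void push(const Value& v) override { q_->push(v); }
    bool try_pop(Value& v) override { return q_->try_pop(v); }
    Queue* q_;
};

// Any queue with push/try_pop can sit behind queue_interface<int>.
struct simple_queue {
    std::deque<int> d;
    void push(const int& v) { d.push_back(v); }
    bool try_pop(int& v) {
        if (d.empty()) return false;
        v = d.front();
        d.pop_front();
        return true;
    }
};

inline std::unique_ptr<queue_interface<int>> erase(simple_queue& q) {
    return std::make_unique<queue_model<int, simple_queue>>(q);
}
```
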
Managed Indirection
Long running servers may have the need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming. To this end, we provide `shared_queue_front` and `shared_queue_back` template classes. These act as reference-counted versions of the `queue_front` and `queue_back` template classes.
The `share_queue_ends(Args ... args)` template function will provide a pair of `shared_queue_front` and `shared_queue_back` to a dynamically allocated `queue_object` instance containing an instance of the specified implementation queue. When the last of these fronts and backs is deleted, the queue itself will be deleted. Also, when the last of the fronts or the last of the backs is deleted, the queue will be closed.
```cpp
auto x = share_queue_ends<buffer_queue<int>>( 10, "shared" );
shared_queue_back<int> b(x.back);
shared_queue_front<int> f(x.front);
b.push(3);
assert(3 == f.value_pop());
```
| | |
|---|---|
| Project: | ISO JTC1/SC22/WG21: Programming Language C++ |
| Number: | P0260R7 |
| Date: | 2023-06-15 |
| Audience | LEWG, SG1 |
| Revises: | P0260R6 |
| Author: | Lawrence Crowl, Chris Mysen, Detlef Vollmann, Gor Nishanov |
| Contact | dv@vollmann.ch |
# C++ Concurrent Queues
Lawrence Crowl, Chris Mysen, Detlef Vollmann, Gor Nishanov
## Abstract
Concurrent queues are a fundamental structuring tool for concurrent programs. We propose a concurrent queue concept and a concrete implementation (in P1958). We propose a set of communication types that enable loosely bound program components to dynamically construct and safely share concurrent queues.
## Contents
[Revision History](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Revision)
[Introduction](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Introduction)
[Target Vehicle](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#TargetVehicle)
[Existing Practice](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArt)
[Concept of a Bounded Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArtConcept)
[Bounded Queues with C++ Interface](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArtCpp)
[Conceptual Interface](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Conceptual)
[Basic Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#basic_operations)
[Non-Waiting Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#non_waiting)
[Closed Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#closed_queues)
[Empty and Full Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#empty_full)
[Element Type Requirements](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#element_requirements)
[Exception Handling](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#exception_handling)
[Concrete Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Concrete)
[Response to Feedback by LEWGI at Prague 2020 meeting](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#FeedbackPragueResponse)
[Implementation](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Implementation)
[Historic Contents](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Historic)
[Proposed Wording](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Wording)
[?.? Concurrent queues \[conqueues\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues)
[?.?.1 General \[conqueues.general\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.general)
[?.?.2 Header \<conqueue\> synopsis \[conqueues.syn\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.syn)
[?.?.3 Operation status \[conqueues.status\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.status)
[?.?.4 Concepts \[conqueues.concepts\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concepts)
[?.?.4.1 Element requirements \[conqueues.concept.elemreq\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.elemreq)
[?.?.4.2 Element type naming \[conqueues.concept.elemtype\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.elemtype)
[?.?.4.3 Lock-free attribute operations \[conqueues.concept.lockfree\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.lockfree)
[?.?.4.4 Synchronization \[conqueues.concept.sync\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.lockfree)
[?.?.4.4 State operations \[conqueues.concept.state\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.state)
[?.?.4.5 Waiting operations \[conqueues.concept.wait\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.wait)
[?.?.4.6 Non-waiting operations \[conqueues.concept.nonwait\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.nonwait)
[?.?.4.7 Type concepts \[conqueues.concept.type\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.type)
[?.?.5 Concrete queues \[conqueues.concrete\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concrete)
[?.?.6 Tools \[conqueues.tools\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools)
[?.?.6.1 Ends and Iterators \[conqueues.tools.ends\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.ends)
[?.?.6.1.1 Class template `generic_queue_back` \[conqueues.tools.back\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.back)
[?.?.6.1.2 Class template `generic_queue_front` \[conqueues.tools.front\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.front)
[?.?.6.2 Binary interfaces \[conqueues.tools.binary\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.binary)
[?.?.6.2.1 Class template `queue_wrapper` \[conqueues.tools.wrapper\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.base)
[?.?.6.2.2 Binary ends \[conqueues.tools.binends\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.binends)
[?.?.6.3 Managed Ends \[conqueues.tools.managed\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.managed)
[?.?.6.3.1 Class template `shared_queue_back` \[conqueues.tools.sharedback\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.sharedback)
[?.?.6.3.2 Class template `shared_queue_front` \[conqueues.tools.front\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.front)
[?.?.6.3.3 Function template `share_queue_ends` \[conqueues.tools.shareendsfront\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.shareends)
[Abandoned Interfaces](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Abandoned)
[Non-Blocking Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#non_block)
[Push Front Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#push_front)
[Queue Names](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#queue_names)
[Lock-Free Buffer Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#lock_free_buffer_queue)
[Storage Iterators](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#storage_iterators)
[Queue Ordering](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#queue_order)
[Lock-Free Implementations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#lock_free)
[Concrete Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Concrete)
[Locking Buffer Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#buffer_queue)
[Abandoned Additional Conceptual Tools](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Tools)
[Fronts and Backs](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#front_back)
[Streaming Iterators](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#streaming_iterators)
[Binary Interfaces](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Binary)
[Managed Indirection](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Managed)
## Revision History
This paper revises P0260R6 - 2023-06-16 as follows.
- Fixed typos.
- Implemented LEWG feedback to derive `conqueue_errc` from `system_error`.
- Implemented LEWG feedback to add a range constructor and go back to `InputIterator`.
- Added `size_t capacity()`.
- Added TBB `concurrent_bounded_queue` as existing practice.
- Moved the discussion of the `pop()` interface to a separate paper.
This paper revises P0260R5 - 2023-01-15 as follows.
- Fixed typos.
- Added a scope for the target TS.
- Added questions to be answered by a TS.
- Added an asynchronous interface.
P0260R5 revises P0260R4 - 2020-01-12 as follows.
- Added more introductory material.
- Added response to feedback by LEWGI at Prague meeting 2020.
- Added section on existing practice.
- Replaced `value_pop` with `pop`.
- Replaced `is_lock_free` with `is_always_lock_free`.
- Removed `is_empty` and `is_full`.
- Added a move-into parameter to `try_push(Element&&)`.
- Added a note that exceptions thrown by the queue operations themselves are derived from `std::exception`.
- Added a note that the wording is partly invalid.
- Moved more contents into the "Abandoned" part to avoid confusion.
P0260R4 revised P0260R3 - 2019-01-20 as follows.
- Remove the binding of `queue_op_status::success` to a value of zero.
- Correct stale use of the `Queue` template parameter in `shared_queue_front` to `Value`.
- Change the return type of `share_queue_ends` from a `pair` to a custom struct.
- Move the concrete queue proposal to a separate paper, [P1958R0](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1958r0.html).
P0260R3 revised P0260R2 - 2017-10-15 as follows.
- Convert `queue_wrapper` to a `function`-like interface. This conversion removes the `queue_base` class. Thanks to Zach Lane for the approach.
- Removed the requirement that element types have a default constructor. This removal implies that statically sized buffers cannot use an array implementation and must grow a vector implementation to the maximum size.
- Added a discussion of checking for output iterator end in the wording.
- Fill in synopsis section.
- Remove stale discussion of `queue_owner`.
- Move all abandoned interface discussion to a new section.
- Update paper header to current practice.
P0260R2 revised P0260R1 - 2017-02-05 as follows.
- Emphasize that non-blocking operations were removed from the proposed changes.
- Correct syntax typos for noexcept and template alias.
- Remove `static` from `is_lock_free` for `generic_queue_back` and `generic_queue_front`.
P0260R1 revised P0260R0 - 2016-02-14 as follows.
- Remove pure virtuals from `queue_wrapper`.
- Correct `queue::pop` to `value_pop`.
- Remove nonblocking operations.
- Remove non-locking buffer queue concrete class.
- Tighten up push/pop wording on closed queues.
- Tighten up push/pop wording on synchronization.
- Add note about possible non-FIFO behavior.
- Define `buffer_queue` to be FIFO.
- Make wording consistent across attributes.
- Add a restriction on element special methods using the queue.
- Make `is_lock_free()` for only non-waiting functions.
- Make `is_lock_free()` static for non-indirect classes.
- Make `is_lock_free() noexcept`.
- Make `has_queue() noexcept`.
- Make destructors `noexcept`.
- Replace "throws nothing" with `noexcept`.
- Make the remarks about the usefulness of `is_empty()` and `is_full` into notes.
- Make the non-static `is_`... and `has_`... member functions `const`.
P0260R0 revised N3533 - 2013-03-12 as follows.
- Update links to source code.
- Add wording.
- Leave the name facility out of the wording.
- Leave the push-front facility out of the wording.
- Leave the reopen facility out of the wording.
- Leave the storage iterator facility out of the wording.
N3532 revised N3434 = 12-0043 - 2012-01-14 as follows.
- Add more exposition.
- Provide separate non-blocking operations.
- Add a section on the lock-free queues.
- Argue against push-back operations.
- Add a cautionary note on the usefulness of `is_closed()`.
- Expand the cautionary note on the usefulness of `is_empty()`. Add `is_full()`.
- Add a subsection on element type requirements.
- Add a subsection on exception handling.
- Clarify ordering constraints on the interface.
- Add a subsection on a lock-free concrete queue.
- Add a section on content iterators, distinct from the existing streaming iterators section.
- Swap front and back names, as requested.
- General expository cleanup.
- Add a 'Revision History' section.
N3434 revised N3353 = 12-0043 - 2012-01-14 as follows.
- Change the inheritance-based interface to a pure conceptual interface.
- Put 'try' operations into a separate subsection.
- Add a subsection on non-blocking operations.
- Add a subsection on push-back operations.
- Add a subsection on queue ordering.
- Merge the 'Binary Interface' and 'Managed Indirection' sections into a new 'Conceptual Tools' section. Expand on the topics and their rationale.
- Add a subsection to 'Conceptual Tools' that provides for type erasure.
- Remove the 'Synopsis' section.
- Add an 'Implementation' section.
## [Introduction]()
Queues provide a mechanism for communicating data between components of a system.
The existing `deque` in the standard library is an inherently sequential data structure. Its reference-returning element access operations cannot synchronize access to those elements with other queue operations. So, concurrent pushes and pops on queues require a different interface to the queue structure.
Moreover, concurrency adds a new dimension for performance and semantics. Different queue implementations must trade off uncontended operation cost, contended operation cost, and element order guarantees. Some of these trade-offs will necessarily result in semantics weaker than those of a serial queue.
Concurrent queues come in several different flavours, e.g.
- bounded vs. unbounded
- blocking vs. overwriting
- single-ended vs. multi-ended
- strict FIFO ordering vs. priority based ordering
The syntactic concept proposed here should be valid for all of these flavours, while the concrete semantics might differ.
### [Target Vehicle]()
This proposal targets a TS. It was originally sent to LEWG for inclusion into Concurrency TS v2. As Concurrency TS v2 will probably be published before this proposal is ready, we propose to include concurrent queues into Concurrency TS v3 and publish that as soon as concurrent queues are ready. This leaves the door open for other proposals to share the same ship vehicle.
The scope for Concurrency TS v3 would be the same as that for v2:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
Should the committee decide to restrict the scope of the TS to only contain concurrent queues, we propose a slightly different scope:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to communicate between different execution agents of algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
#### [Questions for a TS to Answer]()
We expect that the TS will inform future work on a variety of questions, particularly those listed below, using real-world implementation experience that cannot be obtained without a TS.
- Is the proposed concept useful? Specifically, does it cover different implementations and does it work together with other concepts for concurrent queues, e.g. queues with only non-blocking functions or queues with an asynchronous interface?
- Is the concrete queue useful without an asynchronous interface? Can an asynchronous interface be added without extra overhead?
- What other concrete implementations should be provided?
- Is a queue that is ignorant of execution contexts from `std::execution` still useful?
## [Existing Practice]()
### [Concept of a Bounded Queue]()
The basic concept of a bounded queue with potentially blocking push and pop operations is very old and widely used. It's generally provided as an operating system level facility, like other concurrency primitives.
- POSIX 2001 has `mq` message queues (with priorities and timeout).
- Windows ?
- FreeRTOS, Mbed, vxWorks
### [Bounded Queues with C++ Interface]()
- Literature
- Boost
- TBB has `concurrent_bounded_queue` (and an unbounded version, `concurrent_queue`, that has only non-blocking operations).
## [Conceptual Interface]()
We provide basic queue operations, and then extend those operations to cover other important issues.
By analogy with how `future` defines its errors, we introduce a `conqueue_errc` enum and `conqueue_error` as follows:
```
enum class conqueue_errc { success, empty, full, closed };
template <>
struct is_error_code_enum<conqueue_errc> : public true_type {};
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : public system_error;
```
These errors will be reported from concurrent queue operations as specified below.
### [Basic Operations]()
The essential solution to the problem of concurrent queuing is to shift to value-based operations, rather than reference-based operations.
The basic operations are:
```
void
queue::push(const T& x);
```
```
void
queue::push(T&& x);
```
```
bool
queue::push(const T& x, std::error_code& ec);
```
```
bool
queue::push(T&& x, std::error_code& ec);
```
Pushes `x` onto the queue via copy or move construction. The first two versions throw `std::conqueue_error(conqueue_errc::closed)` if the queue is closed. The last two versions return `true` on success; if the queue is closed, they return `false` and set `ec` to `error_code(conqueue_errc::closed)`.
`T queue::pop();`
```
std::optional<T>
queue::pop(std::error_code& ec);
```
Pops a value from the queue via move construction into the return value. The first version throws `std::conqueue_error(conqueue_errc::closed)` if the queue is empty and closed; the second version, if the queue is empty and closed, returns `std::nullopt` and sets `ec` to `std::error_code(conqueue_errc::closed)`. If the queue is empty and open, the operation blocks until an element is available.
In the original buffer\_queue paper, the pop function had the signature `T value_pop()`. Subsequently, it was changed to `void pop(T&)` due to concern about losing elements when an error occurs.
The exploration of different versions of error reporting was moved to a separate paper [P2921](https://wg21.link/P2921).
### [Asynchronous Operations]()
```
sender auto
queue::async_push(T x);
```
```
sender auto
queue::async_pop();
```
These operations return a sender that will push or pop an element. Senders must support cancellation: if the receiver is currently waiting on a push or pop operation and is no longer interested in performing it, the operation should be removed from any waiting queues and completed with `std::execution::set_stopped`.
### [Non-Waiting Operations]()
Waiting on a full or empty queue can take a while, which has an opportunity cost. Avoiding that wait enables algorithms to avoid queuing speculative work when a queue is full, to do other work rather than wait for a push on a full queue, and to do other work rather than wait for a pop on an empty queue.
```
bool
queue::try_push(const T& x, std::error_code& ec);
```
```
bool
queue::try_push(T&& x, std::error_code& ec);
```
If the queue is full or closed, returns `false` and sets the respective status in `ec`. Otherwise, pushes the value onto the queue via copy or move construction and returns `true`.
**REVISITED in Varna**
The following version was introduced in response to LEWG-I concerns about losing the element if an rvalue cannot be stored in the queue.
```
queue_op_status
queue::try_push(T&&, T&);
```
However, SG1 reaffirmed the APIs above with the following rationale:
Retaining the element on failure is possible with both versions. With the one-parameter version:
```
T x = get_something();
if (q.try_push(std::move(x))) ...
```
With the two-parameter version:
```
T x;
if (q.try_push(get_something(), x)) ...
```
Ergonomically they are roughly identical. The API is slightly simpler with the one-argument version, so we reverted to the original one-argument version.
```
optional<T>
queue::try_pop(std::error_code& ec);
```
If the queue is empty, returns `nullopt` and sets `ec` to `conqueue_errc::empty`. Otherwise, pops the element from the queue via move construction into the returned optional and sets `ec` to `conqueue_errc::success`.
These operations will not wait when the queue is full or empty. They may block for mutual exclusion.
### [Closed Queues]()
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. The usual approach is to add an additional out-of-band signal. However, this approach suffers from the flaw that threads waiting on either full or empty queues need to be woken up when the queue is no longer needed. To do that, you need access to the condition variables used for full/empty blocking, which considerably increases the complexity and fragility of the interface. It also has performance implications through additional mutexes or atomics. Rather than require an out-of-band signal, we chose to directly support such a signal in the queue itself, which considerably simplifies coding.
To achieve this signal, a thread may close a queue. Once closed, no new elements may be pushed onto the queue. Push operations on a closed queue will either return `conqueue_errc::closed` (when they have an `ec` parameter) or throw `conqueue_error(conqueue_errc::closed)` (when they do not). Elements already on the queue may be popped off. When a queue is empty and closed, pop operations will either set `ec` to `conqueue_errc::closed` (when they have an `ec` parameter) or throw `conqueue_error(conqueue_errc::closed)`.
The additional operations are as follows.
`void queue::close() noexcept;`
Close the queue.
`bool queue::is_closed() const noexcept;`
Return true iff the queue is closed.
### [Element Type Requirements]()
The above operations require element types with copy/move constructors and a destructor. These operations may be trivial. The copy/move constructors may throw, but must leave the objects in a valid state for subsequent operations.
### [Exception Handling]()
`push()` and `pop()` may throw an exception of type `conqueue_error`, which is derived from `std::system_error` and will contain a `conqueue_errc`.
Concurrent queues cannot completely hide the effect of exceptions thrown by the element type, in part because changes cannot be transparently undone when other threads are observing the queue.
Queues may rethrow exceptions from storage allocation, mutexes, or condition variables.
If the [element type operations required](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#element_requirements) do not throw exceptions, then only the exceptions above are rethrown.
When an element copy/move may throw, some queue operations have additional behavior.
- Construction shall rethrow, destroying any elements allocated.
- A push operation shall rethrow and the state of the queue is unaffected.
- A pop operation shall rethrow and the element is popped from the queue. The value popped is effectively lost. (Doing otherwise would likely clog the queue with a bad element.)
## [Concrete Queues]()
In addition to the concept, the standard needs at least one concrete queue. [P1958R0](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1958r0.html) provides one such concrete queue, `buffer_queue`.
`buffer_queue` is outlined below:
```
enum class conqueue_errc { success, empty, full, closed };
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : system_error { ... };
template <typename T,
class Allocator = std::allocator<T>>
class buffer_queue
{
buffer_queue() = delete;
buffer_queue(const buffer_queue&) = delete;
buffer_queue& operator =(const buffer_queue&) = delete;
public:
typedef T value_type;
// construct/destroy
explicit buffer_queue(size_t max_elems, const Allocator& alloc = Allocator());
explicit buffer_queue(std::initializer_list<T>, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <typename InputIterator>
buffer_queue(InputIterator begin, InputIterator end, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <container-compatible-range<T> R>
buffer_queue(from_range_t, R&& rg, size_t max_elems = 0,
const Allocator& alloc = Allocator());
~buffer_queue() noexcept;
// observers
size_t capacity() const noexcept;
bool is_closed() const noexcept;
static constexpr bool is_always_lock_free() noexcept;
// modifiers
void close() noexcept;
T pop();
optional<T> pop(std::error_code& ec);
optional<T> try_pop(std::error_code& ec);
void push(const T& x);
void push(T&& x);
bool push(const T& x, std::error_code& ec);
bool push(T&& x, std::error_code& ec);
bool try_push(const T& x, std::error_code& ec);
bool try_push(T&& x, std::error_code& ec);
};
```
`buffer_queue` is only allowed to allocate in its constructor.
Constructors that take an initializing sequence may omit the `size_t max_elems` argument, which is then assumed to be equal to the size of the initializing sequence.
## [Response to Feedback by LEWGI at Prague 2020 meeting]()
At the Prague meeting in February 2020, LEWGI provided feedback and set some action items.
"Explore P0059 `ring_buffer` prior art and document it in paper."
`ring_buffer`, like `std::queue`, is a sequential data structure and therefore provides a completely different interface than concurrent queues.
"Consider removing `value_pop` to increase consensus."
`value_pop` was replaced by `pop`, which doesn't have the problem of losing elements.
"Consider removing `is_empty` and `is_full` to increase consensus."
Done.
"Consider removing `is_lock_free` to increase consensus. If `is_lock_free` remains, add `is_always_lock_free` (a la `atomic`)."
`is_lock_free` was dropped, but `is_always_lock_free` was added anyway.
"Remove the maybe-consuming `try_push(&&)`. Investigate prior art (such as TBB's `concurrent_queue`) and add either:
- An always-consuming `try_push(&&)` which returns `queue_op_status`,
- An always-consuming `try_push(&&)` which returns the input on failure."
TBB's `concurrent_queue` doesn't have `try_push`. `try_push(&&)` now has an additional parameter that gets the element if it couldn't be pushed.
"Require `buffer_queue` to allocate all storage only once".
"Require `buffer_queue` to allocate all storage during construction".
"Instead of throwing `queue_op_status` objects, add a standard library exception type and throw that."
These requests are all valid and added (partly to P1958R1).
## [Implementation]()
An implementation is available at <https://github.com/GorNishanov/conqueue>.
A free, open-source implementation of an earlier version of these interfaces is available at the Google Concurrency Library project at <https://github.com/alasdairmackintosh/google-concurrency-library>. The concrete `buffer_queue` is in [..../blob/master/include/buffer\_queue.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/buffer_queue.h). The concrete `lock_free_buffer_queue` is in [..../blob/master/include/lock\_free\_buffer\_queue.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/lock_free_buffer_queue.h). The corresponding implementation of the conceptual tools is in [..../blob/master/include/queue\_base.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/queue_base.h).
# [Historic Contents]()
**The contents below are for historic reference only.**
## [Proposed Wording]()
**Note: This wording is left for general reference. It was not updated from previous proposals, since the design should be fixed first. So the wording here partly contradicts the design proposed above; in these cases the design is proposed, not the wording!**
The concurrent queue container definition is as follows. The section, paragraph, and table references are based on those of [N4567](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4567.pdf) Working Draft, Standard for Programming Language C++, Richard Smith, November 2015.
### [?.? Concurrent queues \[conqueues\]]()
Add a new section.
### [?.?.1 General \[conqueues.general\]]()
Add a new section.
> This section provides mechanisms for concurrent access to a queue. These mechanisms ease the production of race-free programs (1.10 \[intro.multithread\]).
### [?.?.2 Header \<conqueue\> synopsis \[conqueues.syn\]]()
Add a new section.
> ```
>
enum class queue_op_status { success, empty, full, closed };
template <typename Value> class buffer_queue;
template <typename Queue> class generic_queue_back;
template <typename Queue> class generic_queue_front;
template <typename Value> class queue_base;
template <typename Value>
using queue_back = generic_queue_back< queue_base< Value > >;
template <typename Value>
using queue_front = generic_queue_front< queue_base< Value > >;
template <typename Queue> class queue_wrapper;
template <typename Value> class shared_queue_back;
template <typename Value> class shared_queue_front;
template <typename Value> class shared_queue_ends;
template <typename Queue, typename ... Args>
shared_queue_ends<typename Queue::value_type>
share_queue_ends(Args ... args);
> ```
### [?.?.3 Operation status \[conqueues.status\]]()
Add a new section.
> Many concurrent queue operations return a status in the form of the following enumeration.
>
> `enum class queue_op_status`
>
> Enumerators:
>
> `success = 0, empty, full, closed`
### [?.?.4 Concepts \[conqueues.concepts\]]()
Add a new section.
> This section provides the conceptual operations for concurrent queues of type `queue` of `Element` types.
### [?.?.4.1 Element requirements \[conqueues.concept.elemreq\]]()
Add a new section:
> The types of the elements of a concurrent queue must provide either or both of a copy constructor or a move constructor, either or both of a copy assignment operator or a move assignment operator, and a destructor.
>
> Any copy/move constructor or copy/move assignment operator that throws shall leave the objects in a valid state for subsequent operations.
>
> None of the above constructors, assignments or destructor may call any operation on a concurrent queue for which their objects may become a member. \[*Note:* Queues may hold an internal lock while performing the above operations, and if they were to call a queue operation, deadlock would result. —*end note*\]
### [?.?.4.2 Element type naming \[conqueues.concept.elemtype\]]()
Add a new section:
> The queue class shall provide a typedef to its element value type.
>
> `typedef implementation-defined value_type;`
### [?.?.4.3 Lock-free attribute operations \[conqueues.concept.lockfree\]]()
Add a new section:
> A queue type provides lock-free operations (1.10 \[intro.multithread\]), or it does not.
>
> `static bool queue::is_lock_free() noexcept;`
>
> Returns:
>
> If the non-waiting operations of the queue type are lock-free, `true`. Otherwise, `false`.
>
> Remark:
>
> The function returns the same result for all instances of the type.
### [?.?.4.4 Synchronization \[conqueues.concept.sync\]]()
Add a new section:
> For synchronization purposes, and unless otherwise stated, all queue operations appear to operate on a single memory location, all non-const queue operations appear to be sequentially consistent atomic read-modify-write operations, and all const queue operations appear to be atomic loads from this location. \[*Note:* In particular, all queue operations appear to execute in a single global order, that is part of the total order S (29.3 \[atomics.order\]) of sequentially consistent operations. Each non-const queue operation A strongly happens before every operation on the same queue that follows A in S. Whether or not the queue preserves a FIFO order is a property of the concrete class. —*end note*\]
### [?.?.4.4 State operations \[conqueues.concept.state\]]()
Add a new section:
> Upon construction, every queue shall be in an open state. It may move to a closed state, but shall not move back to an open state.
>
> `void queue::close() noexcept;`
>
> Effects:
>
> Closes the queue. No pushes subsequent to the close shall succeed.
>
> `bool queue::is_closed() const noexcept;`
>
> Returns:
>
> `true` if the queue is closed, otherwise, `false`
### [?.?.4.5 Waiting operations \[conqueues.concept.wait\]]()
Add a new section:
> `void queue::push(const Element&);`
> `void queue::push(Element&&);`
>
> Effects:
>
> If the queue is closed, throws an exception. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, waits until space is available or the queue is closed.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception. If the operation cannot otherwise complete because the queue is closed, throws `queue_op_status::closed`.
>
> `void queue::pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, if the queue is closed, throws an exception. Otherwise, waits until an element is available or the queue is closed.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception. If the operation cannot otherwise complete because the queue is closed, throws `queue_op_status::closed`.
>
> `queue_op_status queue::wait_push(const Element&);`
> `queue_op_status queue::wait_push(Element&&);`
>
> Effects:
>
> If the queue is closed, returns. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, waits until space is available or the queue is closed.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, the push was successful, `queue_op_status::success`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception.
>
> `queue_op_status queue::wait_pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, if the queue is closed, returns. Otherwise, waits until an element is available or the queue is closed.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, the pop was successful, `queue_op_status::success`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception.
### [?.?.4.6 Non-waiting operations \[conqueues.concept.nonwait\]]()
Add a new section:
> `queue_op_status queue::try_push(const Element&);`
> `queue_op_status queue::try_push(Element&&);`
>
> Effects:
>
> If the queue is closed, returns. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, returns.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, if the push was successful, `queue_op_status::success`. Otherwise, space was unavailable, `queue_op_status::full`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception.
>
> `queue_op_status queue::try_pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, returns.
>
> Returns:
>
> If the pop was successful, `queue_op_status::success`. Otherwise, if the queue is closed, `queue_op_status::closed`. Otherwise, no element was available, `queue_op_status::empty`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception.
### [?.?.4.7 Type concepts \[conqueues.concept.type\]]()
Add a new section:
> The `WaitingConcurrentQueue` concept provides all of the operations specified above.
>
> The `NonWaitingConcurrentQueue` concept provides all of the operations specified above, except the waiting operations (\[conqueues.concept.wait\]). A `NonWaitingConcurrentQueue` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
>
> The `WaitingConcurrentQueueBack` concept provides all of the operations specified above except the pop operations.
>
> The `WaitingConcurrentQueueFront` concept provides all of the operations specified above except the push operations.
>
> The `NonWaitingConcurrentQueueBack` concept provides all of the operations specified above except the pop operations and the waiting push operations. A `NonWaitingConcurrentQueueBack` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
>
> The `NonWaitingConcurrentQueueFront` concept provides all of the operations specified above except the push operations and the waiting pop operations. A `NonWaitingConcurrentQueueFront` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
### [?.?.5 Concrete queues \[conqueues.concrete\]]()
Add a new section, with content to be provided by other papers.
### [?.?.6 Tools \[conqueues.tools\]]()
Add a new section:
> Additional tools help to use and manage concurrent queues.
### [?.?.6.1 Ends and Iterators \[conqueues.tools.ends\]]()
Add a new section:
> Access to only a single end of a queue is a valuable code structuring tool. A single end can also provide unambiguous `begin` and `end` operations that return iterators.
>
> Because queues may be closed and hence accept no further pushes, output iterators must also be checked for having reached the end, i.e. having been closed. \[*Example:*
> ```
>
> ```
> —*end example*\]
### [?.?.6.1.1 Class template `generic_queue_back` \[conqueues.tools.back\]]()
Add a new section:
> ```
> template <typename Queue>
class generic_queue_back
{
public:
typedef typename Queue::value_type value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
generic_queue_back(Queue& queue);
generic_queue_back(Queue* queue);
generic_queue_back(const generic_queue_back& other) = default;
generic_queue_back& operator =(const generic_queue_back& other) = default;
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
bool has_queue() const noexcept;
iterator begin();
iterator end();
const iterator cbegin();
const iterator cend();
void push(const value_type& x);
queue_op_status wait_push(const value_type& x);
queue_op_status try_push(const value_type& x);
void push(value_type&& x);
queue_op_status wait_push(value_type&& x);
queue_op_status try_push(value_type&& x);
};
> ```
> The class template `generic_queue_back` implements `WaitingConcurrentQueueBack`
>
> `generic_queue_back(Queue& queue);`
> `generic_queue_back(Queue* queue);`
>
> Effects:
>
> Constructs the queue back with a pointer to the queue object given.
>
> `~generic_queue_back() noexcept;`
>
> Effects:
>
> Destroys the queue back.
>
> `bool has_queue() const noexcept;`
>
> Returns:
>
> `true` if the contained pointer is not null. `false` otherwise.
### [?.?.6.1.2 Class template `generic_queue_front` \[conqueues.tools.front\]]()
Add a new section:
> ```
> template <typename Queue>
class generic_queue_front
{
public:
typedef typename Queue::value_type value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
generic_queue_front(Queue& queue);
generic_queue_front(Queue* queue);
generic_queue_front(const generic_queue_front& other) = default;
generic_queue_front& operator =(const generic_queue_front& other) = default;
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
bool has_queue() const noexcept;
iterator begin();
iterator end();
const iterator cbegin();
const iterator cend();
value_type value_pop();
queue_op_status wait_pop(value_type& x);
queue_op_status try_pop(value_type& x);
};
> ```
> The class template `generic_queue_front` implements `WaitingConcurrentQueueFront`
>
> `generic_queue_front(Queue& queue);`
> `generic_queue_front(Queue* queue);`
>
> Effects:
>
> Constructs the queue front with a pointer to the queue object given.
>
> `~generic_queue_front() noexcept;`
>
> Effects:
>
> Destroys the queue front.
>
> `bool has_queue() const noexcept;`
>
> Returns:
>
> `true` if the contained pointer is not null, `false` otherwise.
### [?.?.6.2 Binary interfaces \[conqueues.tools.binary\]]()
Add a new section:
> Occasionally it is best to have a binary interface to any concurrent queue of a given element type. This binary interface is provided by a wrapper class that erases the type of the concrete queue class.
### [?.?.6.2.1 Class template `queue_wrapper` \[conqueues.tools.wrapper\]]()
Add a new section:
> ```
>
> ```
> The template type parameter `Queue` and the class template `queue_base` shall implement the `WaitingConcurrentQueue` concept.
>
> Effects:
>
> Constructs the queue wrapper, referencing the given queue.
>
> `~queue_wrapper() noexcept;`
>
> Effects:
>
> Destroys the queue wrapper, but not the referenced queue.
### [?.?.6.2.2 Binary ends \[conqueues.tools.binends\]]()
Add a new section:
> In addition to binary interfaces to queues, binary interfaces to ends are also useful.
> ```
>
> ```
### [?.?.6.3 Managed Ends \[conqueues.tools.managed\]]()
Add a new section:
> Automatically managing references to queues can be helpful when queues are used as a communication medium.
### [?.?.6.3.1 Class template `shared_queue_back` \[conqueues.tools.sharedback\]]()
Add a new section:
> ```
> template <typename Value>
> class shared_queue_back
> {
> public:
> typedef Value value_type;
> typedef value_type& reference;
> typedef const value_type& const_reference;
> typedef implementation-defined iterator;
> typedef const iterator const_iterator;
> shared_queue_back(const shared_queue_back& other);
> shared_queue_back& operator =(const shared_queue_back& other);
> void close() noexcept;
> bool is_closed() const noexcept;
> bool is_empty() const noexcept;
> bool is_full() const noexcept;
> bool is_lock_free() const noexcept;
> iterator begin();
> iterator end();
> const_iterator cbegin();
> const_iterator cend();
> void push(const value_type& x);
> queue_op_status wait_push(const value_type& x);
> queue_op_status try_push(const value_type& x);
> void push(value_type&& x);
> queue_op_status wait_push(value_type&& x);
> queue_op_status try_push(value_type&& x);
> };
};
> ```
> The class template `shared_queue_back` implements `WaitingConcurrentQueueBack`.
>
> `shared_queue_back(const shared_queue_back& other);`
> `shared_queue_back& operator =(const shared_queue_back& other) = default;`
>
> Effects:
>
> Copies the pointer to the queue, keeping the back of the queue reference counted.
>
> `~shared_queue_back() noexcept;`
>
> Effects:
>
> Destroys the queue back. If this is the last back reference, and there are no front references, destroy the queue. If this is the last back reference, and there are front references, close the queue.
### [?.?.6.3.2 Class template `shared_queue_front` \[conqueues.tools.sharedfront\]]()
Add a new section:
> ```
> template <typename Value>
> class shared_queue_front
> {
> public:
> typedef Value value_type;
> typedef value_type& reference;
> typedef const value_type& const_reference;
> typedef implementation-defined iterator;
> typedef const iterator const_iterator;
> shared_queue_front(const shared_queue_front& other) = default;
> shared_queue_front& operator =(const shared_queue_front& other) = default;
> void close() noexcept;
> bool is_closed() const noexcept;
> bool is_empty() const noexcept;
> bool is_full() const noexcept;
> bool is_lock_free() const noexcept;
> iterator begin();
> iterator end();
> const_iterator cbegin();
> const_iterator cend();
> value_type value_pop();
> queue_op_status wait_pop(value_type& x);
> queue_op_status try_pop(value_type& x);
> };
};
> ```
> The class template `shared_queue_front` implements `WaitingConcurrentQueueFront`.
>
> `shared_queue_front(const shared_queue_front& other);`
> `shared_queue_front& operator =(const shared_queue_front& other) = default;`
>
> Effects:
>
> Copies the pointer to the queue, keeping the front of the queue reference counted.
>
> `~shared_queue_front() noexcept;`
>
> Effects:
>
> Destroys the queue front. If this is the last front reference, and there are no back references, destroy the queue. If this is the last front reference, and there are back references, close the queue.
### [?.?.6.3.3 Class template `shared_queue_ends` \[conqueues.tools.shareqends\]]()
Add a new section:
> ```
> template <typename Value>
> class shared_queue_ends
> {
> public:
> shared_queue_back<Value> back;
> shared_queue_front<Value> front;
> };
};
> ```
### [?.?.6.3.4 Function template `share_queue_ends` \[conqueues.tools.shareends\]]()
Add a new section:
> `template <typename Queue, typename ... Args>`
> `shared_queue_ends<typename Queue::value_type>`
> `share_queue_ends(Args ... args);`
>
> Effects:
>
> Constructs a `Queue` with the given `Args`. Initializes a set of reference counters for that queue.
>
> Returns:
>
> a `shared_queue_ends` consisting of one `shared_queue_back` and one `shared_queue_front` for the constructed queue.
## [Abandoned Interfaces]()
### [Re-opening a Queue]()
There are use cases for opening a queue that is closed. While we are not aware of an implementation in which the ability to reopen a queue would be a hardship, we can imagine that such an implementation could exist. Open should generally be called only when the queue is closed and empty, providing a clean synchronization point, though it is possible to call open on a non-empty queue. An open operation following a close operation is guaranteed to be visible after the close operation, and the queue is guaranteed to be open upon completion of the open call. (But of course, another close call could occur immediately thereafter.)
`void queue::open();`
Open the queue.
Note that when `is_closed()` returns false, there is no assurance that any subsequent operation finds the queue still open because some other thread may close it concurrently.
If an open operation is not available, there is an assurance that once closed, a queue stays closed. So, unless the programmer takes care to ensure that all other threads will not close the queue, only a return value of true has any meaning.
Given these concerns with reopening queues, we do not propose wording to reopen a queue.
### [Non-Blocking Operations]()
For cases when blocking for mutual exclusion is undesirable, one can consider non-blocking operations. The interface is the same as the try operations but is allowed to also return `queue_op_status::busy` in case the operation is unable to complete without blocking.
```
queue_op_status
queue::nonblocking_push(const Element&);
```
```
queue_op_status
queue::nonblocking_push(Element&&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the queue. Return `queue_op_status::success`.
```
queue_op_status
queue::nonblocking_pop(Element&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is empty, return `queue_op_status::empty`. Otherwise, pop the `Element` from the queue. The element will be moved out of the queue in preference to being copied. Return `queue_op_status::success`.
These operations will neither wait nor block. However, they may do nothing.
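The intended distinction can be sketched with a toy mutex-based queue, where `nonblocking_push` maps naturally onto `std::mutex::try_lock`. This is an illustration only, not the proposed implementation; `toy_queue` and its fixed capacity are invented for the example.

```cpp
#include <cstddef>
#include <deque>
#include <mutex>
#include <utility>

enum class queue_op_status { success, full, empty, busy };

// Minimal locked queue showing how the abandoned nonblocking_push/pop
// could return busy instead of blocking on the mutex.
template <typename T>
class toy_queue {
  std::mutex mtx_;
  std::deque<T> items_;
  std::size_t capacity_;
public:
  explicit toy_queue(std::size_t cap) : capacity_(cap) {}

  queue_op_status nonblocking_push(const T& x) {
    std::unique_lock<std::mutex> lk(mtx_, std::try_to_lock);
    if (!lk.owns_lock())
      return queue_op_status::busy;       // would have blocked on the mutex
    if (items_.size() == capacity_)
      return queue_op_status::full;
    items_.push_back(x);
    return queue_op_status::success;
  }

  queue_op_status nonblocking_pop(T& x) {
    std::unique_lock<std::mutex> lk(mtx_, std::try_to_lock);
    if (!lk.owns_lock())
      return queue_op_status::busy;
    if (items_.empty())
      return queue_op_status::empty;
    x = std::move(items_.front());        // move out in preference to copy
    items_.pop_front();
    return queue_op_status::success;
  }
};
```

Note how `busy` is distinct from `full`/`empty`: it says nothing about the state of the queue, only that the operation could not complete without blocking.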
The non-blocking operations highlight a terminology problem. In terms of synchronization effects, `nonwaiting_push` on queues is equivalent to `try_lock` on mutexes. One could therefore conclude that the existing `try_push` should be renamed `nonwaiting_push` and `nonblocking_push` should be renamed `try_push`. However, at least Threading Building Blocks uses the existing terminology. Perhaps it would be better to avoid `try_push` altogether and use `nonwaiting_push` and `nonblocking_push` instead.
**In November 2016, the Concurrency Study Group chose to defer non-blocking operations. Hence, the proposed wording does not include these functions. In addition, as these functions were the only ones that returned `busy`, that enumeration is also not included.**
### [Push Front Operations]()
Occasionally, one may wish to return a popped item to the queue. We can provide for this with `push_front` operations.
```
void
queue::push_front(const Element&);
```
```
void
queue::push_front(Element&&);
```
Push the `Element` onto the front of the queue, i.e. at the end of the queue that is normally popped.
```
queue_op_status
queue::try_push_front(const Element&);
```
```
queue_op_status
queue::try_push_front(Element&&);
```
If the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. at the end of the queue that is normally popped. Return `queue_op_status::success`.
```
queue_op_status
queue::nonblocking_push_front(const Element&);
```
```
queue_op_status
queue::nonblocking_push_front(Element&&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. at the end of the queue that is normally popped. Return `queue_op_status::success`.
This feature was requested at the Spring 2012 meeting. However, we do not think the feature works.
- The name `push_front` is inconsistent with existing "push back" nomenclature.
- The effects of `push_front` are only distinguishable from a regular push when there is a strong ordering of elements. Highly concurrent queues will likely have no strong ordering.
- The `push_front` call may fail due to full queues, closed queues, etc., in which case the operation will suffer contention and may succeed only after interposing push and pop operations. The consequence is that the original push order is not preserved in the final pop order. So, `push_front` cannot be directly used as an 'undo'.
- The operation implies an ability to reverse internal changes at the front of the queue. This ability implies a loss of efficiency in some implementations.
In short, we do not think that in a concurrent environment `push_front` provides sufficient semantic value to justify its cost. Consequently, the proposed wording does not provide this feature.
### [Queue Names]()
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnostics, especially when 'ends' become dynamically passed around between threads. See [Managed Indirection](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Managed).
`const char* queue::name();`
Return the name string provided as a parameter to queue construction.
There is some debate on this facility, but we see no way to effectively replicate the facility. However, in recognition of that debate, the wording does not provide the name facility.
### [Lock-Free Buffer Queue]()
We provide a concrete concurrent queue in the form of a fixed-size `lock_free_buffer_queue`. It meets the `NonWaitingConcurrentQueue` concept. The queue is still under development, so details may change.
**In November 2016, the Concurrency Study Group chose to defer lock-free queues. Hence, the proposed wording does not include a concrete lock-free queue.**
### [Storage Iterators]()
In addition to iterators that stream data into and out of a queue, we could provide an iterator over the storage contents of a queue. Such an iterator, even when implementable, would most likely be valid only when the queue is otherwise quiescent. We believe such an iterator would be most useful for debugging, which may well require knowledge of the concrete class. Therefore, we do not propose wording for this feature.
### [Empty and Full Queues]()
It is sometimes desirable to know if a queue is empty.
`bool queue::is_empty() const noexcept;`
Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known to not be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime, or when the system is in a quiescent state (where there are no outstanding pushes).
We can imagine occasional use for knowing when a queue is full, for instance in system performance polling. The motivation is significantly weaker though.
`bool queue::is_full() const noexcept;`
Return true iff the queue is full.
Not all queues will have a full state, and these would always return false.
### [Queue Ordering]()
The conceptual queue interface makes minimal guarantees.
- The queue is not empty if there is an element that has been pushed but not popped.
- A push operation *synchronizes with* the pop operation that obtains that element.
- A close operation *synchronizes with* an operation that observes that the queue is closed.
- There is a sequentially consistent order of operations.
In particular, the conceptual interface does not guarantee that the sequentially consistent order of element pushes matches the sequentially consistent order of pops. Concrete queues could specify more specific ordering guarantees.
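The practical value of the *synchronizes-with* guarantee is that element payloads need no synchronization of their own: everything the producer wrote before the push is visible to the consumer after the pop. The sketch below illustrates this with a hypothetical mutex-and-condition-variable queue; `locked_queue`, `payload`, and `demo` are invented names for this example, not part of the proposal.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// A minimal waiting queue: push releases the mutex, value_pop acquires it,
// which is what establishes the synchronizes-with relationship.
template <typename T>
class locked_queue {
  std::mutex mtx_;
  std::condition_variable cv_;
  std::queue<T> items_;
public:
  void push(T x) {
    { std::lock_guard<std::mutex> lk(mtx_); items_.push(std::move(x)); }
    cv_.notify_one();
  }
  T value_pop() {
    std::unique_lock<std::mutex> lk(mtx_);
    cv_.wait(lk, [&]{ return !items_.empty(); });
    T x = std::move(items_.front());
    items_.pop();
    return x;
  }
};

struct payload { int data[3]; };

int demo() {
  locked_queue<payload*> q;
  payload p{};
  std::thread producer([&]{
    p.data[0] = 1; p.data[1] = 2; p.data[2] = 3;  // writes before the push...
    q.push(&p);
  });
  payload* got = q.value_pop();   // ...are visible after the matching pop
  int sum = got->data[0] + got->data[1] + got->data[2];
  producer.join();
  return sum;
}
```

The consumer reads `got->data` without any further locking; the push/pop pair alone orders the producer's writes before the consumer's reads.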
### [Lock-Free Implementations]()
Lock-free queues will have some trouble waiting for the queue to be non-empty or non-full. Therefore, we propose two closely-related concepts. A full concurrent queue concept as described above, and a non-waiting concurrent queue concept that has all the operations except `push`, `wait_push`, `value_pop` and `wait_pop`. That is, it has only non-waiting operations (presumably emulated with busy wait) and non-blocking operations, but no waiting operations. We propose naming these `WaitingConcurrentQueue` and `NonWaitingConcurrentQueue`, respectively.
Note: Adopting this conceptual split requires splitting some of the facilities defined later.
For generic code it's sometimes important to know if a concurrent queue has a lock free implementation.
`constexpr static bool queue::is_always_lock_free() noexcept;`
Return true iff the queue has a lock-free implementation of the non-waiting operations.
## [Abandoned Additional Conceptual Tools]()
There are a number of tools that support use of the conceptual interface. These tools are not part of the queue interface, but provide restricted views or adapters on top of the queue useful in implementing concurrent algorithms.
### [Fronts and Backs]()
Restricting an interface to one side of a queue is a valuable code structuring tool. This restriction is accomplished with the classes `generic_queue_front` and `generic_queue_back` parameterized on the concrete queue implementation. These act as pointers with access to only the front or the back of a queue. The front of the queue is where elements are popped. The back of the queue is where elements are pushed.
```
void send( int number, generic_queue_back<buffer_queue<int>> arv );
```
These fronts and backs are also able to provide `begin` and `end` operations that unambiguously stream data into or out of a queue.
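The "pointer to one side" idea can be reduced to a few lines. The sketch below forwards operations to the underlying queue and omits iterators and error handling; `generic_queue_back_sketch` and the `int_queue` stand-in are invented names for illustration.

```cpp
#include <queue>

// Stand-in for a concrete concurrent queue (not synchronized; illustration only).
struct int_queue {
  using value_type = int;
  std::queue<int> items;
  bool closed = false;
  void push(int x) { items.push(x); }
  void close() noexcept { closed = true; }
};

// A back end: a non-owning handle exposing only the push side of the queue.
template <typename Queue>
class generic_queue_back_sketch {
  Queue* q_;
public:
  using value_type = typename Queue::value_type;
  explicit generic_queue_back_sketch(Queue& q) : q_(&q) {}
  void push(const value_type& x) { q_->push(x); }   // forward to the queue
  void close() noexcept { q_->close(); }
  bool has_queue() const noexcept { return q_ != nullptr; }
};
```

A producer handed only a `generic_queue_back_sketch<int_queue>` cannot pop, which enforces the one-directional use the section describes.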
### [Streaming Iterators]()
In order to enable the use of existing algorithms streaming through concurrent queues, the queues need to support iterators. Output iterators will push to a queue and input iterators will pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues.
Iterators implicitly require waiting for the advance, so iterators are only supportable with the `WaitingConcurrentQueue` concept.
```
```
Note that contrary to existing iterator algorithms, we check both iterators for reaching their end, as either may be closed at any time.
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
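The push half of such iterators can be sketched as an ordinary output-iterator adapter in the spirit of `std::back_insert_iterator`. The names `back_push_iterator` and `int_queue` are invented; the proposal's iterators also handle closed queues, which this sketch omits.

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <queue>

// Minimal stand-in queue (not synchronized; illustration only).
struct int_queue {
  using value_type = int;
  std::queue<int> items;
  void push(int x) { items.push(x); }
};

// Output iterator that pushes each assigned value into the queue.
template <typename Queue>
class back_push_iterator {
  Queue* q_;
public:
  using iterator_category = std::output_iterator_tag;
  using value_type = void;
  using difference_type = std::ptrdiff_t;
  using pointer = void;
  using reference = void;
  explicit back_push_iterator(Queue& q) : q_(&q) {}
  back_push_iterator& operator=(const typename Queue::value_type& x) {
    q_->push(x);                 // assignment is the push
    return *this;
  }
  back_push_iterator& operator*() { return *this; }
  back_push_iterator& operator++() { return *this; }
  back_push_iterator operator++(int) { return *this; }
};
```

With this adapter, `std::copy(first, last, back_push_iterator<int_queue>(q))` streams a range into the queue.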
### [Binary Interfaces]()
The standard library is template based, but it is often desirable to have a binary interface that shields clients from the concrete implementations. For example, `std::function` is a binary interface to callable objects (of a given signature). We achieve this capability in queues with type erasure.
We provide a `queue_base` class template parameterized by the value type. Its operations are virtual. This class provides the essential independence from the queue representation.
We also provide `queue_front` and `queue_back` class templates parameterized by the value types. These are essentially `generic_queue_front<queue_base<Value>>` and `generic_queue_back<queue_base<Value>>`, respectively.
To obtain a pointer to `queue_base` from a non-virtual concurrent queue, construct an instance of the `queue_wrapper` class template, which is parameterized on the queue and derived from `queue_base`. Upcasting a pointer to the `queue_wrapper` instance to a `queue_base` instance thus erases the concrete queue type.
```
```
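A minimal rendering of this type-erasure scheme, reduced to `push` and `value_pop`: the class names follow the paper, but the bodies and the `int_queue` stand-in are our own sketch, not the proposed wording.

```cpp
#include <queue>

// Binary interface: virtual operations, independent of the concrete queue.
template <typename Value>
class queue_base {
public:
  virtual ~queue_base() = default;
  virtual void push(const Value& x) = 0;
  virtual Value value_pop() = 0;
};

// Adapter deriving from queue_base; upcasting erases the concrete type.
template <typename Queue>
class queue_wrapper : public queue_base<typename Queue::value_type> {
  Queue* q_;                       // non-owning: wraps, never owns
public:
  explicit queue_wrapper(Queue& q) : q_(&q) {}
  void push(const typename Queue::value_type& x) override { q_->push(x); }
  typename Queue::value_type value_pop() override { return q_->value_pop(); }
};

// Stand-in concrete queue (not synchronized; illustration only).
struct int_queue {
  using value_type = int;
  std::queue<int> items;
  void push(int x) { items.push(x); }
  int value_pop() { int x = items.front(); items.pop(); return x; }
};
```

Client code written against `queue_base<int>&` compiles once and works with any wrapped concrete queue of `int`.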
### [Managed Indirection]()
Long-running servers may need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming.
To this end, we provide `shared_queue_front` and `shared_queue_back` template classes. These act as reference-counted versions of the `queue_front` and `queue_back` template classes.
The `share_queue_ends(Args ... args)` template function will provide a pair of `shared_queue_front` and `shared_queue_back` to a dynamically allocated `queue_object` instance containing an instance of the specified implementation queue. When the last of these fronts and backs is deleted, the queue itself will be deleted. Also, when the last of the fronts or the last of the backs is deleted, the queue will be closed.
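The close-on-last-back behavior can be approximated today with `std::shared_ptr` and a custom deleter, as the sketch below shows. `closable_queue`, `shared_back_sketch`, and `make_shared_back` are invented names; a real implementation would use dedicated classes as described above.

```cpp
#include <memory>
#include <queue>

// Stand-in for a concrete queue that can be closed (not synchronized).
struct closable_queue {
  std::queue<int> items;
  bool closed = false;
  void push(int x) { items.push(x); }
  void close() noexcept { closed = true; }
};

// A "back end" handle: copies share one reference count; when the last copy
// goes away the queue is closed, not deleted.
using shared_back_sketch = std::shared_ptr<closable_queue>;

inline shared_back_sketch
make_shared_back(const std::shared_ptr<closable_queue>& owner) {
  // The deleter closes rather than deletes; capturing 'owner' keeps the
  // queue alive for as long as any back handle exists.
  return shared_back_sketch(owner.get(),
                            [owner](closable_queue* q) { q->close(); });
}
```

A symmetric front handle, plus a second owning count, would complete the delete-on-last-end behavior; the sketch shows only the close-on-last-back half.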
```
```
| Project: | ISO JTC1/SC22/WG21: Programming Language C++ |
| Number: | P0260R7 |
| Date: | 2023-06-15 |
| Audience | LEWG, SG1 |
| Revises: | P0260R6 |
| Author: | Lawrence Crowl, Chris Mysen, Detlef Vollmann, Gor Nishanov |
| Contact | dv@vollmann.ch |
Lawrence Crowl, Chris Mysen, Detlef Vollmann, Gor Nishanov
## Abstract
Concurrent queues are a fundamental structuring tool for concurrent programs. We propose a concurrent queue concept and a concrete implementation (in P1958). We propose a set of communication types that enable loosely bound program components to dynamically construct and safely share concurrent queues.
## Contents
[Revision History](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Revision)
[Introduction](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Introduction)
[Target Vehicle](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#TargetVehicle)
[Existing Practice](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArt)
[Concept of a Bounded Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArtConcept)
[Bounded Queues with C++ Interface](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#PriorArtCpp)
[Conceptual Interface](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Conceptual)
[Basic Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#basic_operations)
[Non-Waiting Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#non_waiting)
[Closed Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#closed_queues)
[Empty and Full Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#empty_full)
[Element Type Requirements](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#element_requirements)
[Exception Handling](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#exception_handling)
[Concrete Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Concrete)
[Response to Feedback by LEWGI at Prague 2020 meeting](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#FeedbackPragueResponse)
[Implementation](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Implementation)
[Historic Contents](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Historic)
[Proposed Wording](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Wording)
[?.? Concurrent queues \[conqueues\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues)
[?.?.1 General \[conqueues.general\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.general)
[?.?.2 Header \<conqueue\> synopsis \[conqueues.syn\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.syn)
[?.?.3 Operation status \[conqueues.status\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.status)
[?.?.4 Concepts \[conqueues.concepts\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concepts)
[?.?.4.1 Element requirements \[conqueues.concept.elemreq\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.elemreq)
[?.?.4.2 Element type naming \[conqueues.concept.elemtype\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.elemtype)
[?.?.4.3 Lock-free attribute operations \[conqueues.concept.lockfree\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.lockfree)
[?.?.4.4 Synchronization \[conqueues.concept.sync\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.lockfree)
[?.?.4.4 State operations \[conqueues.concept.state\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.state)
[?.?.4.5 Waiting operations \[conqueues.concept.wait\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.wait)
[?.?.4.6 Non-waiting operations \[conqueues.concept.nonwait\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.nonwait)
[?.?.4.7 Type concepts \[conqueues.concept.type\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concept.type)
[?.?.5 Concrete queues \[conqueues.concrete\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.concrete)
[?.?.6 Tools \[conqueues.tools\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools)
[?.?.6.1 Ends and Iterators \[conqueues.tools.ends\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.ends)
[?.?.6.1.1 Class template `generic_queue_back` \[conqueues.tools.back\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.back)
[?.?.6.1.2 Class template `generic_queue_front` \[conqueues.tools.front\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.front)
[?.?.6.2 Binary interfaces \[conqueues.tools.binary\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.binary)
[?.?.6.2.1 Class template `queue_wrapper` \[conqueues.tools.wrapper\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.base)
[?.?.6.2.2 Binary ends \[conqueues.tools.binends\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.binends)
[?.?.6.3 Managed Ends \[conqueues.tools.managed\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.managed)
[?.?.6.3.1 Class template `shared_queue_back` \[conqueues.tools.sharedback\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.sharedback)
[?.?.6.3.2 Class template `shared_queue_front` \[conqueues.tools.front\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.front)
[?.?.6.3.3 Function template `share_queue_ends` \[conqueues.tools.shareendsfront\]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#conqueues.tools.shareends)
[Abandoned Interfaces](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Abandoned)
[Non-Blocking Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#non_block)
[Push Front Operations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#push_front)
[Queue Names](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#queue_names)
[Lock-Free Buffer Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#lock_free_buffer_queue)
[Storage Iterators](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#storage_iterators)
[Queue Ordering](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#queue_order)
[Lock-Free Implementations](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#lock_free)
[Concrete Queues](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Concrete)
[Locking Buffer Queue](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#buffer_queue)
[Abandoned Additional Conceptual Tools](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Tools)
[Fronts and Backs](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#front_back)
[Streaming Iterators](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#streaming_iterators)
[Binary Interfaces](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Binary)
[Managed Indirection](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Managed)
## [Revision History]()
This paper revises P0260R6 - 2023-06-16 as follows.
- Fix typos.
- Implement LEWG feedback to derive `conqueue_errc` from `system_error`.
- Implement LEWG feedback to add a range constructor and go back to `InputIterator`.
- Added `size_t capacity()`.
- Added TBB `concurrent_bounded_queue` as existing practice.
- Moved discussion about the `pop()` interface to a separate paper.
This paper revises P0260R5 - 2023-01-15 as follows.
- Fixing typos.
- Added a scope for the target TS.
- Added questions to be answered by a TS.
- Added asynchronous interface
P0260R5 revises P0260R4 - 2020-01-12 as follows.
- Added more introductory material.
- Added response to feedback by LEWGI at Prague meeting 2020.
- Added section on existing practice.
- Replaced `value_pop` with `pop`.
- Replaced `is_lock_free` with `is_always_lock_free`.
- Removed `is_empty` and `is_full`.
- Added move-into parameter to `try_push(Element&&)`
- Added note that exception thrown by the queue operations themselves are derived from `std::exception`.
- Added a note that the wording is partly invalid.
- Moved more contents into the "Abandoned" part to avoid confusion.
P0260R4 revised P0260R3 - 2019-01-20 as follows.
- Remove the binding of `queue_op_status::success` to a value of zero.
- Correct stale use of the `Queue` template parameter in `shared_queue_front` to `Value`.
- Change the return type of `share_queue_ends` from a `pair` to a custom struct.
- Move the concrete queue proposal to a separate paper, [P1958R0](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1958r0.html).
P0260R3 revised P0260R2 - 2017-10-15 as follows.
- Convert `queue_wrapper` to a `function`\-like interface. This conversion removes the `queue_base` class. Thanks to Zach Lane for the approach.
- Removed the requirement that element types have a default constructor. This removal implies that statically sized buffers cannot use an array implementation and must grow a vector implementation to the maximum size.
- Added a discussion of checking for output iterator end in the wording.
- Fill in synopsis section.
- Remove stale discussion of `queue_owner`.
- Move all abandoned interface discussion to a new section.
- Update paper header to current practice.
P0260R2 revised P0260R1 - 2017-02-05 as follows.
- Emphasize that non-blocking operations were removed from the proposed changes.
- Correct syntax typos for noexcept and template alias.
- Remove `static` from `is_lock_free` for `generic_queue_back` and `generic_queue_front`.
P0260R1 revised P0260R0 - 2016-02-14 as follows.
- Remove pure virtuals from `queue_wrapper`.
- Correct `queue::pop` to `value_pop`.
- Remove nonblocking operations.
- Remove non-locking buffer queue concrete class.
- Tighten up push/pop wording on closed queues.
- Tighten up push/pop wording on synchronization.
- Add note about possible non-FIFO behavior.
- Define `buffer_queue` to be FIFO.
- Make wording consistent across attributes.
- Add a restriction on element special methods using the queue.
- Make `is_lock_free()` for only non-waiting functions.
- Make `is_lock_free()` static for non-indirect classes.
- Make `is_lock_free() noexcept`.
- Make `has_queue() noexcept`.
- Make destructors `noexcept`.
- Replace "throws nothing" with `noexcept`.
- Make the remarks about the usefulness of `is_empty()` and `is_full` into notes.
- Make the non-static member functions `is_`... and `has_`... functions `const`.
P0260R0 revised N3533 - 2013-03-12 as follows.
- Update links to source code.
- Add wording.
- Leave the name facility out of the wording.
- Leave the push-front facility out of the wording.
- Leave the reopen facility out of the wording.
- Leave the storage iterator facility out of the wording.
N3533 revised N3434 = 12-0043 - 2012-01-14 as follows.
- Add more exposition.
- Provide separate non-blocking operations.
- Add a section on the lock-free queues.
- Argue against push-back operations.
- Add a cautionary note on the usefulness of `is_closed()`.
- Expand the cautionary note on the usefulness of `is_empty()`. Add `is_full()`.
- Add a subsection on element type requirements.
- Add a subsection on exception handling.
- Clarify ordering constraints on the interface.
- Add a subsection on a lock-free concrete queue.
- Add a section on content iterators, distinct from the existing streaming iterators section.
- Swap front and back names, as requested.
- General expository cleanup.
- Add a 'Revision History' section.
N3434 revised N3353 = 12-0043 - 2012-01-14 as follows.
- Change the inheritance-based interface to a pure conceptual interface.
- Put 'try' operations into a separate subsection.
- Add a subsection on non-blocking operations.
- Add a subsection on push-back operations.
- Add a subsection on queue ordering.
- Merge the 'Binary Interface' and 'Managed Indirection' sections into a new 'Conceptual Tools' section. Expand on the topics and their rationale.
- Add a subsection to 'Conceptual Tools' that provides for type erasure.
- Remove the 'Synopsis' section.
- Add an 'Implementation' section.
## [Introduction]()
Queues provide a mechanism for communicating data between components of a system.
The existing `deque` in the standard library is an inherently sequential data structure. Its reference-returning element access operations cannot synchronize access to those elements with other queue operations. So, concurrent pushes and pops on queues require a different interface to the queue structure.
Moreover, concurrency adds a new dimension for performance and semantics. Different queue implementations must trade off uncontended operation cost, contended operation cost, and element order guarantees. Some of these trade-offs will necessarily result in semantics weaker than those of a serial queue.
Concurrent queues come in several different flavours, e.g.
- bounded vs. unbounded
- blocking vs. overwriting
- single-ended vs. multi-ended
- strict FIFO ordering vs. priority based ordering
The syntactic concept proposed here should be valid for all of these flavours, while the concrete semantics might differ.
### [Target Vehicle]()
This proposal targets a TS. It was originally sent to LEWG for inclusion in Concurrency TS v2. As Concurrency TS v2 will probably be published before this proposal is ready, we propose to include concurrent queues in Concurrency TS v3 and to publish that as soon as concurrent queues are ready. This leaves the door open for other proposals to share the same ship vehicle.
The scope for Concurrency TS v3 would be the same as that for v2:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
Should the committee decide to restrict the scope of the TS to only contain concurrent queues, we propose a slightly different scope:
"This document describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to communicate between different execution agents of algorithms with concurrent execution. The algorithms described by this document are realizable across a broad class of computer architectures."
#### [Questions for a TS to Answer]()
We expect that the TS will inform future work on a variety of questions, particularly those listed below, using real-world implementation experience that cannot be obtained without a TS.
- Is the proposed concept useful? Specifically, does it cover different implementations and does it work together with other concepts for concurrent queues, e.g. queues with only non-blocking functions or queues with an asynchronous interface?
- Is the concrete queue useful without an asynchronous interface? Can an asynchronous interface be added without extra overhead?
- What other concrete implementations should be provided?
- Is a queue that is ignorant of execution contexts from `std::execution` still useful?
## [Existing Practice]()
### [Concept of a Bounded Queue]()
The basic concept of a bounded queue with potentially blocking push and pop operations is very old and widely used. It's generally provided as an operating system level facility, like other concurrency primitives.
POSIX 2001 has `mq` message queues (with priorities and timeout).
Windows ?
FreeRTOS, Mbed, vxWorks
### [Bounded Queues with C++ Interface]()
Literature
Boost
TBB has `concurrent_bounded_queue` (and an unbounded version `concurrent_queue` that has only non-blocking operations).
## [Conceptual Interface]()
We provide basic queue operations, and then extend those operations to cover other important issues.
By analogy with how `future` defines its errors, we introduce the `conqueue_errc` enum and `conqueue_error` as follows:
```
enum class conqueue_errc { success, empty, full, closed };
template <>
struct is_error_code_enum<conqueue_errc> : public true_type {};
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : public system_error;
```
These errors will be reported from concurrent queue operations as specified below.
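As a sketch of how this error machinery hangs together, the following shows a minimal `std::error_code` integration. It uses an illustrative `queue_errc` enum and `queue_category` rather than the proposed names, and is not the proposed implementation:

```cpp
// Sketch only: minimal std::error_code integration for a queue-status enum,
// mirroring the shape of the proposed conqueue_errc machinery.
#include <string>
#include <system_error>

enum class queue_errc { success = 0, empty, full, closed };

namespace std {
// Opt in to implicit conversion from queue_errc to std::error_code.
template <> struct is_error_code_enum<queue_errc> : true_type {};
}

namespace {
struct queue_category_t : std::error_category {
    const char* name() const noexcept override { return "queue"; }
    std::string message(int ev) const override {
        switch (static_cast<queue_errc>(ev)) {
            case queue_errc::success: return "success";
            case queue_errc::empty:   return "queue is empty";
            case queue_errc::full:    return "queue is full";
            case queue_errc::closed:  return "queue is closed";
        }
        return "unknown";
    }
};
}

const std::error_category& queue_category() noexcept {
    static queue_category_t cat;   // single category instance for all codes
    return cat;
}

// Found by ADL when converting queue_errc to std::error_code.
std::error_code make_error_code(queue_errc e) noexcept {
    return {static_cast<int>(e), queue_category()};
}
```

With this in place, a queue operation can report `ec = queue_errc::closed` and callers can compare `ec == queue_errc::closed` directly.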
### [Basic Operations]()
The essential solution to the problem of concurrent queuing is to shift to value-based operations, rather than reference-based operations.
The basic operations are:
```
void
queue::push(const T& x);
```
```
void
queue::push(T&& x);
```
```
bool
queue::push(const T& x, std::error_code& ec);
```
```
bool
queue::push(T&& x, std::error_code& ec);
```
Pushes `x` onto the queue via copy or move construction. The first two overloads throw `std::conqueue_error(conqueue_errc::closed)` if the queue is closed. The last two return `true` on success; if the queue is closed, they return `false` and set `ec` to `error_code(conqueue_errc::closed)`.
`T queue::pop();`
```
std::optional<T>
queue::pop(std::error_code& ec);
```
Pops a value from the queue via move construction into the return value. If the queue is empty and closed, the first version throws `std::conqueue_error(conqueue_errc::closed)`, while the second version returns `std::nullopt` and sets `ec` to `std::error_code(conqueue_errc::closed)`. If the queue is empty and open, the operation blocks until an element is available.
In the original buffer\_queue paper, the pop function had the signature `T pop_value()`. It was subsequently changed to `void pop(T&)` due to concern about losing elements when an error occurs.
The exploration of different versions of error reporting was moved to a separate paper [P2921](https://wg21.link/P2921).
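A minimal, illustrative model of these blocking semantics, built on a mutex and two condition variables, might look as follows. This `toy_queue` is not the proposed `buffer_queue`; in particular, a single `optional`-returning `pop` stands in for the two proposed overloads:

```cpp
// Sketch of the blocking push/pop and close semantics described above.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>
#include <stdexcept>

template <typename T>
class toy_queue {
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
    std::deque<T> buf_;
    std::size_t cap_;
    bool closed_ = false;
public:
    explicit toy_queue(std::size_t cap) : cap_(cap) {}

    // Blocks while the queue is full; throws if the queue is closed.
    void push(T x) {
        std::unique_lock lk(m_);
        not_full_.wait(lk, [&] { return buf_.size() < cap_ || closed_; });
        if (closed_) throw std::runtime_error("queue closed");
        buf_.push_back(std::move(x));
        not_empty_.notify_one();
    }

    // Blocks while the queue is empty and open;
    // returns nullopt once the queue is drained and closed.
    std::optional<T> pop() {
        std::unique_lock lk(m_);
        not_empty_.wait(lk, [&] { return !buf_.empty() || closed_; });
        if (buf_.empty()) return std::nullopt;   // empty and closed
        T x = std::move(buf_.front());
        buf_.pop_front();
        not_full_.notify_one();
        return x;
    }

    void close() {
        std::lock_guard lk(m_);
        closed_ = true;
        not_empty_.notify_all();   // wake waiters so they observe the close
        not_full_.notify_all();
    }
};
```

A producer pushes and then closes; consumers keep popping until `pop()` returns `nullopt`.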
### [Asynchronous Operations]()
```
sender auto
queue::async_push(T x);
```
```
sender auto
queue::async_pop();
```
These operations return a sender that will push or pop an element. The senders must support cancellation: if a receiver is waiting on a push or pop operation and is no longer interested in performing it, the operation should be removed from any waiting queues and complete with `std::execution::set_stopped`.
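Since `std::execution` senders are beyond the scope of a short sketch, the completion behaviour can be modelled with plain callbacks standing in for receivers. In this hypothetical model, `cancel_waiters` plays the role of the `set_stopped` completion path:

```cpp
// Sketch only: callback-based model of async_pop completion semantics.
// A real implementation would return a sender; here a callback receives
// either a value or nullopt (modelling the stopped completion).
#include <deque>
#include <functional>
#include <optional>

template <typename T>
class callback_queue {
    std::deque<T> buf_;
    std::deque<std::function<void(std::optional<T>)>> waiters_;
public:
    void push(T x) {
        if (!waiters_.empty()) {           // complete a pending "async_pop"
            auto cb = std::move(waiters_.front());
            waiters_.pop_front();
            cb(std::move(x));
        } else {
            buf_.push_back(std::move(x));
        }
    }

    // Invokes cb with a value, either immediately or once one is pushed.
    void async_pop(std::function<void(std::optional<T>)> cb) {
        if (!buf_.empty()) {
            T x = std::move(buf_.front());
            buf_.pop_front();
            cb(std::move(x));
        } else {
            waiters_.push_back(std::move(cb));
        }
    }

    // Models cancellation: pending waiters complete with "stopped" (nullopt)
    // and are removed from the waiting queue.
    void cancel_waiters() {
        for (auto& cb : waiters_) cb(std::nullopt);
        waiters_.clear();
    }
};
```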
### [Non-Waiting Operations]()
Waiting on a full or empty queue can take a while, which has an opportunity cost. Avoiding that wait enables algorithms to avoid queuing speculative work when a queue is full, to do other work rather than wait for a push on a full queue, and to do other work rather than wait for a pop on an empty queue.
```
bool
queue::try_push(const T& x, std::error_code& ec);
```
```
bool
queue::try_push(T&& x, std::error_code& ec);
```
If the queue is full or closed, returns `false` and sets the respective status in `ec`. Otherwise, pushes the value onto the queue via copy or move construction and returns `true`.
**REVISITED in Varna**
The following version was introduced in response to LEWG-I concerns about losing the element if an rvalue cannot be stored in the queue.
```
queue_op_status
queue::try_push(T&&, T&);
```
However, SG1 reaffirmed the APIs above with the following rationale:
Retaining the element on failure is possible with both versions:
```
T x = get_something();
if (q.try_push(std::move(x))) ...
```
With the two-parameter version:
```
T x;
if (q.try_push(get_something(), x)) ...
```
Ergonomically the two are roughly identical, and the API is slightly simpler with the one-argument version; therefore, we reverted to the original one-argument version.
```
optional<T>
queue::try_pop(std::error_code& ec);
```
If the queue is empty, returns `nullopt` and sets `ec` to `conqueue_errc::empty`. Otherwise, pops the element from the queue via move construction into the returned optional and sets `ec` to `conqueue_errc::success`.
These operations will not wait when the queue is full or empty. They may block for mutual exclusion.
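A single-threaded model of the non-waiting semantics makes the status outcomes easy to see. The sketch below is illustrative only, with a plain `status` enum standing in for the proposed `std::error_code` reporting:

```cpp
// Sketch of try_push/try_pop status semantics: full, empty, closed, success.
#include <deque>
#include <optional>

enum class status { success, empty, full, closed };

template <typename T>
struct try_queue {
    std::deque<T> buf;
    std::size_t cap;
    bool closed = false;

    // Never waits: reports full/closed instead of blocking.
    bool try_push(T x, status& st) {
        if (closed)            { st = status::closed; return false; }
        if (buf.size() == cap) { st = status::full;   return false; }
        buf.push_back(std::move(x));
        st = status::success;
        return true;
    }

    // Never waits: reports empty (or closed, once drained) instead of blocking.
    std::optional<T> try_pop(status& st) {
        if (buf.empty()) {
            st = closed ? status::closed : status::empty;
            return std::nullopt;
        }
        T x = std::move(buf.front());
        buf.pop_front();
        st = status::success;
        return x;
    }
};
```

A real concurrent implementation would additionally take a lock (or use atomics) around each operation; the status logic stays the same.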
### [Closed Queues]()
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. The usual approach is to add an out-of-band signal. However, this approach suffers from the flaw that threads waiting on either full or empty queues need to be woken up when the queue is no longer needed. To do that, one needs access to the condition variables used for full/empty blocking, which considerably increases the complexity and fragility of the interface. It also has performance implications, requiring additional mutexes or atomics. Rather than require an out-of-band signal, we chose to support such a signal directly in the queue itself, which considerably simplifies coding.
To achieve this signal, a thread may close a queue. Once closed, no new elements may be pushed onto the queue. Push operations on a closed queue will either return `conqueue_errc::closed` (when they have an `ec` parameter) or throw `conqueue_error(conqueue_errc::closed)` (when they do not). Elements already on the queue may still be popped off. When a queue is empty and closed, pop operations will either set `ec` to `conqueue_errc::closed` (when they have an `ec` parameter) or throw `conqueue_error(conqueue_errc::closed)` otherwise.
The additional operations for managing the closed state are as follows.
`void queue::close() noexcept;`
Close the queue.
`bool queue::is_closed() const noexcept;`
Return true iff the queue is closed.
### [Element Type Requirements]()
The above operations require element types with copy/move constructors and a destructor. These operations may be trivial. The copy/move constructors may throw, but must leave the objects in a valid state for subsequent operations.
### [Exception Handling]()
`push()` and `pop()` may throw an exception of type `conqueue_error`, which derives from `std::system_error` and contains a `conqueue_errc`.
Concurrent queues cannot completely hide the effect of exceptions thrown by the element type, in part because changes cannot be transparently undone when other threads are observing the queue.
Queues may rethrow exceptions from storage allocation, mutexes, or condition variables.
If the [element type operations required](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#element_requirements) do not throw exceptions, then only the exceptions above are rethrown.
When an element copy/move may throw, some queue operations have additional behavior.
- Construction shall rethrow, destroying any elements allocated.
- A push operation shall rethrow and the state of the queue is unaffected.
- A pop operation shall rethrow and the element is popped from the queue. The value popped is effectively lost. (Doing otherwise would likely clog the queue with a bad element.)
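The pop rule can be illustrated with a type whose move constructor throws. The hypothetical `pop_losing` helper below removes the element from the underlying buffer even when the move fails, matching the "value popped is effectively lost" behaviour:

```cpp
// Sketch: a pop that removes the element even if its move constructor throws.
#include <deque>
#include <stdexcept>

struct Fragile {
    bool bomb = false;
    Fragile(bool b) : bomb(b) {}
    Fragile(Fragile&& other) : bomb(other.bomb) {
        if (bomb) throw std::runtime_error("move failed");
    }
};

template <typename T>
T pop_losing(std::deque<T>& buf) {
    // The eraser's destructor runs whether the move below succeeds or
    // throws, so the front slot is always removed ("element is popped").
    struct eraser {
        std::deque<T>* b;
        ~eraser() { b->pop_front(); }
    } e{&buf};
    return std::move(buf.front());   // may throw; the slot is popped regardless
}
```

Keeping a bad element in place instead would mean every subsequent pop re-attempts the same throwing move, clogging the queue.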
## [Concrete Queues]()
In addition to the concept, the standard needs at least one concrete queue. [P1958R0](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1958r0.html) provides one such concrete queue, `buffer_queue`.
`buffer_queue` is outlined below:
```
enum class conqueue_errc { success, empty, full, closed };
const error_category& conqueue_category() noexcept;
error_code make_error_code(conqueue_errc e) noexcept;
error_condition make_error_condition(conqueue_errc e) noexcept;
class conqueue_error : system_error { ... };
template <typename T,
class Allocator = std::allocator<T>>
class buffer_queue
{
buffer_queue() = delete;
buffer_queue(const buffer_queue&) = delete;
buffer_queue& operator =(const buffer_queue&) = delete;
public:
typedef T value_type;
// construct/destroy
explicit buffer_queue(size_t max_elems, const Allocator& alloc = Allocator());
explicit buffer_queue(std::initializer_list<T>, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <typename InputIterator>
buffer_queue(InputIterator begin, InputIterator end, size_t max_elems = 0,
const Allocator& alloc = Allocator());
template <container-compatible-range<T> R>
buffer_queue(from_range_t, R&& rg, size_t max_elems = 0,
const Allocator& alloc = Allocator());
~buffer_queue() noexcept;
// observers
size_t capacity() const noexcept;
bool is_closed() const noexcept;
static constexpr bool is_always_lock_free() noexcept;
// modifiers
void close() noexcept;
T pop();
optional<T> pop(std::error_code& ec);
optional<T> try_pop(std::error_code& ec);
void push(const T& x);
void push(T&& x);
bool push(const T& x, std::error_code& ec);
bool push(T&& x, std::error_code& ec);
bool try_push(const T& x, std::error_code& ec);
bool try_push(T&& x, std::error_code& ec);
};
```
`buffer_queue` is only allowed to allocate in its constructor.
Constructors that take an initializing sequence may omit the `max_elems` argument, in which case it is assumed to be equal to the size of the initializing sequence.
## [Response to Feedback by LEWGI at Prague 2020 meeting]()
At the Prague meeting in February 2020, LEWGI provided feedback and set some action items.
"Explore P0059 `ring_buffer` prior art and document it in paper."
`ring_buffer`, like `std::queue`, is a sequential data structure and therefore provides a completely different interface than concurrent queues.
"Consider removing `value_pop` to increase consensus."
`value_pop` was replaced by `pop`, which doesn't have the problem of losing elements.
"Consider removing `is_empty` and `is_full` to increase consensus."
Done.
"Consider removing `is_lock_free` to increase consensus. If `is_lock_free` remains, add `is_always_lock_free` (a la `atomic`)."
`is_lock_free` was dropped, but `is_always_lock_free` was added anyway.
"Remove the maybe-consuming `try_push(&&)`. Investigate prior art (such as TBB's `concurrent_queue`) and add either:
- An always-consuming `try_push(&&)` which returns `queue_op_status`,
- An always-consuming `try_push(&&)` which returns the input on failure."
TBB's `concurrent_queue` doesn't have `try_push`. `try_push(&&)` now has an additional parameter that gets the element if it couldn't be pushed.
"Require `buffer_queue` to allocate all storage only once".
"Require `buffer_queue` to allocate all storage during construction".
"Instead of throwing `queue_op_status` objects, add a standard library exception type and throw that."
These requests are all valid and have been addressed (partly in P1958R1).
## [Implementation]()
An implementation is available at <https://github.com/GorNishanov/conqueue>.
A free, open-source implementation of an earlier version of these interfaces is available at the Google Concurrency Library project at <https://github.com/alasdairmackintosh/google-concurrency-library>. The concrete `buffer_queue` is in [..../blob/master/include/buffer\_queue.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/buffer_queue.h). The concrete `lock_free_buffer_queue` is in [..../blob/master/include/lock\_free\_buffer\_queue.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/lock_free_buffer_queue.h). The corresponding implementation of the conceptual tools is in [..../blob/master/include/queue\_base.h](https://github.com/alasdairmackintosh/google-concurrency-library/blob/master/include/queue_base.h).
## [Historic Contents]()
**The Contents below is for historic reference only.**
## [Proposed Wording]()
**Note: This wording is left for general reference. It was not updated from previous proposals as first the design should be fixed. So the wording here partly contradicts the design proposed above. In these cases the design is proposed and not the wording\!**
The concurrent queue container definition is as follows. The section, paragraph, and table references are based on those of [N4567](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4567.pdf) Working Draft, Standard for Programming Language C++, Richard Smith, November 2015.
### [?.? Concurrent queues \[conqueues\]]()
Add a new section.
### [?.?.1 General \[conqueues.general\]]()
Add a new section.
> This section provides mechanisms for concurrent access to a queue. These mechanisms ease the production of race-free programs (1.10 \[intro.multithread\]).
### [?.?.2 Header \<conqueue\> synopsis \[conqueues.syn\]]()
Add a new section.
> ```
>
enum class queue_op_status { success, empty, full, closed };
template <typename Value> class buffer_queue;
template <typename Queue> class generic_queue_back;
template <typename Queue> class generic_queue_front;
template <typename Value> class queue_base;
template <typename Value>
using queue_back = generic_queue_back< queue_base< Value > >;
template <typename Value>
using queue_front = generic_queue_front< queue_base< Value > >;
template <typename Queue> class queue_wrapper;
template <typename Value> class shared_queue_back;
template <typename Value> class shared_queue_front;
template <typename Value> class shared_queue_ends;
template <typename Queue, typename ... Args>
shared_queue_ends<typename Queue::value_type>
share_queue_ends(Args ... args);
> ```
### [?.?.3 Operation status \[conqueues.status\]]()
Add a new section.
> Many concurrent queue operations return a status in the form of the following enumeration.
>
> `enum class queue_op_status`
>
> Enumerators:
>
> `success = 0, empty, full, closed`
### [?.?.4 Concepts \[conqueues.concepts\]]()
Add a new section.
> This section provides the conceptual operations for concurrent queues of type `queue` of `Element` types.
### [?.?.4.1 Element requirements \[conqueues.concept.elemreq\]]()
Add a new section:
> The types of the elements of a concurrent queue must provide either or both of a copy constructor or a move constructor, either or both of a copy assignment operator or a move assignment operator, and a destructor.
>
> Any copy/move constructor or copy/move assignment operator that throws shall leave the objects in a valid state for subsequent operations.
>
> None of the above constructors, assignments or destructor may call any operation on a concurrent queue for which their objects may become a member. \[*Note:* Queues may hold an internal lock while performing the above operations, and if they were to call a queue operation, deadlock would result. —*end note*\]
### [?.?.4.2 Element type naming \[conqueues.concept.elemtype\]]()
Add a new section:
> The queue class shall provide a typedef to its element value type.
>
> `typedef implementation-defined value_type;`
### [?.?.4.3 Lock-free attribute operations \[conqueues.concept.lockfree\]]()
Add a new section:
> A queue type provides lock-free operations (1.10 \[intro.multithread\]), or it does not.
>
> `static bool queue::is_lock_free() noexcept;`
>
> Returns:
>
> If the non-waiting operations of the queue type are lock-free, `true`. Otherwise, `false`.
>
> Remark:
>
> The function returns the same result for all instances of the type.
### [?.?.4.4 Synchronization \[conqueues.concept.sync\]]()
Add a new section:
> For synchronization purposes, and unless otherwise stated, all queue operations appear to operate on a single memory location, all non-const queue operations appear to be sequentially consistent atomic read-modify-write operations, and all const queue operations appear to be atomic loads from this location. \[*Note:* In particular, all queue operations appear to execute in a single global order, that is part of the total order S (29.3 \[atomics.order\]) of sequentially consistent operations. Each non-const queue operation A strongly happens before every operation on the same queue that follows A in S. Whether or not the queue preserves a FIFO order is a property of the concrete class. —*end note*\]
### [?.?.4.4 State operations \[conqueues.concept.state\]]()
Add a new section:
> Upon construction, every queue shall be in an open state. It may move to a closed state, but shall not move back to an open state.
>
> `void queue::close() noexcept;`
>
> Effects:
>
> Closes the queue. No pushes subsequent to the close shall succeed.
>
> `bool queue::is_closed() const noexcept;`
>
> Returns:
>
> `true` if the queue is closed, otherwise, `false`
### [?.?.4.5 Waiting operations \[conqueues.concept.wait\]]()
Add a new section:
> `void queue::push(const Element&);`
> `void queue::push(Element&&);`
>
> Effects:
>
> If the queue is closed, throws an exception. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, waits until space is available or the queue is closed.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception. If the operation cannot otherwise complete because the queue is closed, throws `queue_op_status::closed`.
>
> `void queue::pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, if the queue is closed, throws an exception. Otherwise, waits until an element is available or the queue is closed.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception. If the operation cannot otherwise complete because the queue is closed, throws `queue_op_status::closed`.
>
> `queue_op_status queue::wait_push(const Element&);`
> `queue_op_status queue::wait_push(Element&&);`
>
> Effects:
>
> If the queue is closed, returns. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, waits until space is available or the queue is closed.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, the push was successful, `queue_op_status::success`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception.
>
> `queue_op_status queue::wait_pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, if the queue is closed, returns. Otherwise, waits until an element is available or the queue is closed.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, the pop was successful, `queue_op_status::success`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception.
### [?.?.4.6 Non-waiting operations \[conqueues.concept.nonwait\]]()
Add a new section:
> `queue_op_status queue::try_push(const Element&);`
> `queue_op_status queue::try_push(Element&&);`
>
> Effects:
>
> If the queue is closed, returns. Otherwise, if space is available on the queue, copies or moves the `element` onto the queue and returns. Otherwise, returns.
>
> Returns:
>
> If the queue was closed, `queue_op_status::closed`. Otherwise, if the push was successful, `queue_op_status::success`. Otherwise, space was unavailable, `queue_op_status::full`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the state of the queue is unaffected and the push shall rethrow the exception.
>
> `queue_op_status queue::try_pop(Element&);`
>
> Effects:
>
> If an element is available on the queue, moves the element from the queue to the parameter and returns. Otherwise, returns.
>
> Returns:
>
> If the pop was successful, `queue_op_status::success`. Otherwise, if the queue is closed, `queue_op_status::closed`. Otherwise, no element was available, `queue_op_status::empty`.
>
> Throws:
>
> Any exception from operations on storage allocation, mutexes, or condition variables. If an element copy/move operation throws, the element is popped and the pop shall rethrow the exception.
### [?.?.4.7 Type concepts \[conqueues.concept.type\]]()
Add a new section:
> The `WaitingConcurrentQueue` concept provides all of the operations specified above.
>
> The `NonWaitingConcurrentQueue` concept provides all of the operations specified above, except the waiting operations (\[conqueues.concept.wait\]). A `NonWaitingConcurrentQueue` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
>
> The `WaitingConcurrentQueueBack` concept provides all of the operations specified above except the pop operations.
>
> The `WaitingConcurrentQueueFront` concept provides all of the operations specified above except the push operations.
>
> The `NonWaitingConcurrentQueueBack` concept provides all of the operations specified above except the pop operations and the waiting push operations. A `NonWaitingConcurrentQueueBack` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
>
> The `NonWaitingConcurrentQueueFront` concept provides all of the operations specified above except the push operations and the waiting pop operations. A `NonWaitingConcurrentQueueFront` is lock-free (1.10 \[intro.multithread\]) when its member function `is_lock_free` reports true.
### [?.?.5 Concrete queues \[conqueues.concrete\]]()
Add a new section, with content to be provided by other papers.
### [?.?.6 Tools \[conqueues.tools\]]()
Add a new section:
> Additional tools help to use and manage concurrent queues.
### [?.?.6.1 Ends and Iterators \[conqueues.tools.ends\]]()
Add a new section:
> Access to only a single end of a queue is a valuable code structuring tool. A single end can also provide unambiguous `begin` and `end` operations that return iterators.
>
> Because queues may be closed and hence accept no further pushes, output iterators must also be checked for having reached the end, i.e. having been closed. \[*Example:*
> ```
>
> ```
> —*end example*\]
### [?.?.6.1.1 Class template `generic_queue_back` \[conqueues.tools.back\]]()
Add a new section:
> ```
> template <typename Queue>
class generic_queue_back
{
public:
typedef typename Queue::value_type value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
generic_queue_back(Queue& queue);
generic_queue_back(Queue* queue);
generic_queue_back(const generic_queue_back& other) = default;
generic_queue_back& operator =(const generic_queue_back& other) = default;
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
bool has_queue() const noexcept;
iterator begin();
iterator end();
const_iterator cbegin();
const_iterator cend();
void push(const value_type& x);
queue_op_status wait_push(const value_type& x);
queue_op_status try_push(const value_type& x);
void push(value_type&& x);
queue_op_status wait_push(value_type&& x);
queue_op_status try_push(value_type&& x);
};
> ```
> The class template `generic_queue_back` implements `WaitingConcurrentQueueBack`
>
> `generic_queue_back(Queue& queue);`
> `generic_queue_back(Queue* queue);`
>
> Effects:
>
> Constructs the queue back with a pointer to the queue object given.
>
> `~generic_queue_back() noexcept;`
>
> Effects:
>
> Destroys the queue back.
>
> `bool has_queue() const noexcept;`
>
> Returns:
>
> `true` if the contained pointer is not null. `false` otherwise.
### [?.?.6.1.2 Class template `generic_queue_front` \[conqueues.tools.front\]]()
Add a new section:
> ```
> template <typename Queue>
class generic_queue_front
{
public:
typedef typename Queue::value_type value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
generic_queue_front(Queue& queue);
generic_queue_front(Queue* queue);
generic_queue_front(const generic_queue_front& other) = default;
generic_queue_front& operator =(const generic_queue_front& other) = default;
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
bool has_queue() const noexcept;
iterator begin();
iterator end();
const_iterator cbegin();
const_iterator cend();
value_type value_pop();
queue_op_status wait_pop(value_type& x);
queue_op_status try_pop(value_type& x);
};
> ```
> The class template `generic_queue_front` implements `WaitingConcurrentQueueFront`
>
> `generic_queue_front(Queue& queue);`
> `generic_queue_front(Queue* queue);`
>
> Effects:
>
> Constructs the queue front with a pointer to the queue object given.
>
> `~generic_queue_front() noexcept;`
>
> Effects:
>
> Destroys the queue front.
>
> `bool has_queue() const noexcept;`
>
> Returns:
>
> `true` if the contained pointer is not null. `false` otherwise.
### [?.?.6.2 Binary interfaces \[conqueues.tools.binary\]]()
Add a new section:
> Occasionally it is best to have a binary interface to any concurrent queue of a given element type. This binary interface is provided by a wrapper class that erases the type of the concrete queue class.
### [?.?.6.2.1 Class template `queue_wrapper` \[conqueues.tools.wrapper\]]()
Add a new section:
> ```
>
> ```
> The template type parameter `Queue` and the class template `queue_base` shall implement the `WaitingConcurrentQueue` concept.
>
> Effects:
>
> Constructs the queue wrapper, referencing the given queue.
>
> `~queue_base() noexcept;`
>
> Effects:
>
> Destroys the queue wrapper, but not the referenced queue.
### [?.?.6.2.2 Binary ends \[conqueues.tools.binends\]]()
Add a new section:
> In addition to binary interfaces to queues, binary interfaces to ends are also useful.
> ```
>
> ```
### [?.?.6.3 Managed Ends \[conqueues.tools.managed\]]()
Add a new section:
> Automatically managing references to queues can be helpful when queues are used as a communication medium.
### [?.?.6.3.1 Class template `shared_queue_back` \[conqueues.tools.sharedback\]]()
Add a new section:
> ```
> template <typename Value>
class shared_queue_back
{
public:
typedef Value value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
shared_queue_back(const shared_queue_back& other);
shared_queue_back& operator =(const shared_queue_back& other);
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
iterator begin();
iterator end();
const_iterator cbegin();
const_iterator cend();
void push(const value_type& x);
queue_op_status wait_push(const value_type& x);
queue_op_status try_push(const value_type& x);
void push(value_type&& x);
queue_op_status wait_push(value_type&& x);
queue_op_status try_push(value_type&& x);
};
> ```
> The class template `shared_queue_back` implements `WaitingConcurrentQueueBack`
>
> `shared_queue_back(const shared_queue_back& other);`
> `shared_queue_back& operator =(const shared_queue_back& other) = default;`
>
> Effects:
>
> Copy the pointer to the queue, but keep the back of the queue reference counted.
>
> `~shared_queue_back() noexcept;`
>
> Effects:
>
> Destroys the queue back. If this is the last back reference, and there are no front references, destroy the queue. If this is the last back reference, and there are front references, close the queue.
### [?.?.6.3.2 Class template `shared_queue_front` \[conqueues.tools.sharedfront\]]()
Add a new section:
> ```
> template <typename Value>
class shared_queue_front
{
public:
typedef Value value_type;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef implementation-defined iterator;
typedef const iterator const_iterator;
shared_queue_front(const shared_queue_front& other) = default;
shared_queue_front& operator =(const shared_queue_front& other) = default;
void close() noexcept;
bool is_closed() const noexcept;
bool is_empty() const noexcept;
bool is_full() const noexcept;
bool is_lock_free() const noexcept;
bool has_queue() const noexcept;
iterator begin();
iterator end();
const_iterator cbegin();
const_iterator cend();
value_type value_pop();
queue_op_status wait_pop(value_type& x);
queue_op_status try_pop(value_type& x);
};
> ```
> The class template `shared_queue_front` implements `WaitingConcurrentQueueFront`
>
> `shared_queue_front(const shared_queue_front& other);`
> `shared_queue_front& operator =(const shared_queue_front& other) = default;`
>
> Effects:
>
> Copy the pointer to the queue, but keep the front of the queue reference counted.
>
> `~shared_queue_front() noexcept;`
>
> Effects:
>
> Destroys the queue front. If this is the last front reference, and there are no back references, destroy the queue. If this is the last front reference, and there are back references, close the queue.
### [?.?.6.3.3 Class template `shared_queue_ends` \[conqueues.tools.shareqends\]]()
Add a new section:
> ```
> template <typename Value>
> class shared_queue_ends
> {
> public:
>     shared_queue_back<Value> back;
>     shared_queue_front<Value> front;
> };
> ```
### [?.?.6.3.4 Function template `share_queue_ends` \[conqueues.tools.shareends\]]()
Add a new section:
> ```
> template <typename Queue, typename ... Args>
> shared_queue_ends<typename Queue::value_type>
> share_queue_ends(Args ... args);
> ```
>
> Effects:
>
> Constructs a `Queue` with the given `Args`. Initializes a set of reference counters for that queue.
>
> Returns:
>
> a `shared_queue_ends` consisting of one `shared_queue_back` and one `shared_queue_front` for the constructed queue.
## [Abandoned Interfaces]()
### [Re-opening a Queue]()
There are use cases for opening a queue that is closed. While we are not aware of an implementation in which the ability to reopen a queue would be a hardship, we also imagine that such an implementation could exist. Open should generally only be called if the queue is closed and empty, providing a clean synchronization point, though it is possible to call open on a non-empty queue. An open operation following a close operation is guaranteed to be visible after the close operation and the queue is guaranteed to be open upon completion of the open call. (But of course, another close call could occur immediately thereafter.)
`void queue::open();`
Open the queue.
Note that when `is_closed()` returns false, there is no assurance that any subsequent operation finds the queue open, because some other thread may close it concurrently.
If an open operation is not available, there is an assurance that once closed, a queue stays closed. So, unless the programmer takes care to ensure that all other threads will not close the queue, only a return value of true has any meaning.
Given these concerns with reopening queues, we do not propose wording to reopen a queue.
### [Non-Blocking Operations]()
For cases when blocking for mutual exclusion is undesirable, one can consider non-blocking operations. The interface is the same as the try operations but is allowed to also return `queue_op_status::busy` in case the operation is unable to complete without blocking.
```
queue_op_status
queue::nonblocking_push(const Element&);
```
```
queue_op_status
queue::nonblocking_push(Element&&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the queue. Return `queue_op_status::success`.
```
queue_op_status
queue::nonblocking_pop(Element&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is empty, return `queue_op_status::empty`. Otherwise, pop the `Element` from the queue. The element will be moved out of the queue in preference to being copied. Return `queue_op_status::success`.
These operations will neither wait nor block. However, they may do nothing.
The non-blocking operations highlight a terminology problem. In terms of synchronization effects, `nonwaiting_push` on queues is equivalent to `try_lock` on mutexes. One could therefore conclude that the existing `try_push` should be renamed `nonwaiting_push` and `nonblocking_push` should be renamed `try_push`. However, at least Threading Building Blocks uses the existing terminology. Perhaps the better course is to avoid `try_push` entirely and use `nonwaiting_push` and `nonblocking_push` instead.
**In November 2016, the Concurrency Study Group chose to defer non-blocking operations. Hence, the proposed wording does not include these functions. In addition, as these functions were the only ones that returned `busy`, that enumeration is also not included.**
### [Push Front Operations]()
Occasionally, one may wish to return a popped item to the queue. We can provide for this with `push_front` operations.
```
void
queue::push_front(const Element&);
```
```
void
queue::push_front(Element&&);
```
Push the `Element` onto the front of the queue, i.e. at the end of the queue from which elements are normally popped.
```
queue_op_status
queue::try_push_front(const Element&);
```
```
queue_op_status
queue::try_push_front(Element&&);
```
If the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. at the end of the queue from which elements are normally popped. Return `queue_op_status::success`.
```
queue_op_status
queue::nonblocking_push_front(const Element&);
```
```
queue_op_status
queue::nonblocking_push_front(Element&&);
```
If the operation would block, return `queue_op_status::busy`. Otherwise, if the queue is full, return `queue_op_status::full`. Otherwise, push the `Element` onto the front of the queue, i.e. at the end of the queue from which elements are normally popped. Return `queue_op_status::success`.
This feature was requested at the Spring 2012 meeting. However, we do not think the feature works.
- The name `push_front` is inconsistent with existing "push back" nomenclature.
- The effects of `push_front` are only distinguishable from a regular push when there is a strong ordering of elements. Highly concurrent queues will likely have no strong ordering.
- The `push_front` call may fail due to full queues, closed queues, etc., in which case the operation will suffer contention and may succeed only after intervening push and pop operations. The consequence is that the original push order is not preserved in the final pop order, so `push_front` cannot be used directly as an 'undo'.
- The operation implies an ability to reverse internal changes at the front of the queue. This ability implies a loss of efficiency in some implementations.
In short, we do not think that in a concurrent environment `push_front` provides sufficient semantic value to justify its cost. Consequently, the proposed wording does not provide this feature.
### [Queue Names]()
It is sometimes desirable for queues to be able to identify themselves. This feature is particularly helpful for run-time diagnostics, especially when 'ends' are dynamically passed around between threads. See [Managed Indirection](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p0260r7.html#Managed).
`const char* queue::name();`
Return the name string provided as a parameter to queue construction.
There is some debate about this facility, but we see no way to replicate it effectively. However, in recognition of that debate, the wording does not provide the name facility.
### [Lock-Free Buffer Queue]()
We provide a concrete concurrent queue in the form of a fixed-size `lock_free_buffer_queue`. It meets the `NonWaitingConcurrentQueue` concept. The queue is still under development, so details may change.
**In November 2016, the Concurrency Study Group chose to defer lock-free queues. Hence, the proposed wording does not include a concrete lock-free queue.**
### [Storage Iterators]()
In addition to iterators that stream data into and out of a queue, we could provide an iterator over the storage contents of a queue. Such an iterator, even when implementable, would most likely be valid only when the queue is otherwise quiescent. We believe such an iterator would be most useful for debugging, which may well require knowledge of the concrete class. Therefore, we do not propose wording for this feature.
### [Empty and Full Queues]()
It is sometimes desirable to know if a queue is empty.
`bool queue::is_empty() const noexcept;`
Return true iff the queue is empty.
This operation is useful only during intervals when the queue is known to not be subject to pushes and pops from other threads. Its primary use case is assertions on the state of the queue at the end of its lifetime, or when the system is in a quiescent state (where there are no outstanding pushes).
We can imagine occasional use for knowing when a queue is full, for instance in system performance polling. The motivation is significantly weaker though.
`bool queue::is_full() const noexcept;`
Return true iff the queue is full.
Not all queues will have a full state, and these would always return false.
### [Queue Ordering]()
The conceptual queue interface makes minimal guarantees.
- The queue is not empty if there is an element that has been pushed but not popped.
- A push operation *synchronizes with* the pop operation that obtains that element.
- A close operation *synchronizes with* an operation that observes that the queue is closed.
- There is a sequentially consistent order of operations.
In particular, the conceptual interface does not guarantee that the sequentially consistent order of element pushes matches the sequentially consistent order of pops. Concrete queues could specify more specific ordering guarantees.
### [Lock-Free Implementations]()
Lock-free queues will have some trouble waiting for the queue to be non-empty or non-full. Therefore, we propose two closely-related concepts. A full concurrent queue concept as described above, and a non-waiting concurrent queue concept that has all the operations except `push`, `wait_push`, `value_pop` and `wait_pop`. That is, it has only non-waiting operations (presumably emulated with busy wait) and non-blocking operations, but no waiting operations. We propose naming these `WaitingConcurrentQueue` and `NonWaitingConcurrentQueue`, respectively.
Note: Adopting this conceptual split requires splitting some of the facilities defined later.
For generic code it is sometimes important to know whether a concurrent queue has a lock-free implementation.
`constexpr static bool queue::is_always_lock_free() noexcept;`
Return true iff the queue has a lock-free implementation of the non-waiting operations.
## [Abandoned Additional Conceptual Tools]()
There are a number of tools that support use of the conceptual interface. These tools are not part of the queue interface, but provide restricted views or adapters on top of the queue useful in implementing concurrent algorithms.
### [Fronts and Backs]()
Restricting an interface to one side of a queue is a valuable code structuring tool. This restriction is accomplished with the classes `generic_queue_front` and `generic_queue_back` parameterized on the concrete queue implementation. These act as pointers with access to only the front or the back of a queue. The front of the queue is where elements are popped. The back of the queue is where elements are pushed.
```
void send( int number, generic_queue_back<buffer_queue<int>> arv );
```
These fronts and backs are also able to provide `begin` and `end` operations that unambiguously stream data into or out of a queue.
### [Streaming Iterators]()
To enable existing algorithms to stream through concurrent queues, the queues need to support iterators. Output iterators push to a queue and input iterators pop from a queue. Stronger forms of iterators are in general not possible with concurrent queues.
Iterators implicitly require waiting for the advance, so iterators are only supportable with the `WaitingConcurrentQueue` concept.
Note that contrary to existing iterator algorithms, we check both iterators for reaching their end, as either may be closed at any time.
Note that with suitable renaming, the existing standard front insert and back insert iterators could work as is. However, there is nothing like a pop iterator adapter.
### [Binary Interfaces]()
The standard library is template based, but it is often desirable to have a binary interface that shields clients from the concrete implementations. For example, `std::function` is a binary interface to callable objects (of a given signature). We achieve this capability in queues with type erasure.
We provide a `queue_base` class template parameterized by the value type. Its operations are virtual. This class provides the essential independence from the queue representation.
We also provide `queue_front` and `queue_back` class templates parameterized by the value types. These are essentially `generic_queue_front<queue_base<Value>>` and `generic_queue_back<queue_base<Value>>`, respectively.
To obtain a pointer to `queue_base` from a non-virtual concurrent queue, construct an instance of the `queue_wrapper` class template, which is parameterized on the queue and derived from `queue_base`. Upcasting a pointer to the `queue_wrapper` instance to a pointer to `queue_base` thus erases the concrete queue type.
### [Managed Indirection]()
Long running servers may have the need to reconfigure the relationship between queues and threads. The ability to pass 'ends' of queues between threads with automatic memory management eases programming.
To this end, we provide `shared_queue_front` and `shared_queue_back` template classes. These act as reference-counted versions of the `queue_front` and `queue_back` template classes.
The `share_queue_ends(Args ... args)` template function will provide a pair of `shared_queue_front` and `shared_queue_back` referring to a dynamically allocated `queue_object` instance containing an instance of the specified implementation queue. When the last of these fronts and backs is deleted, the queue itself will be deleted. Also, when the last of the fronts or the last of the backs is deleted, the queue will be closed.