Concurrent Queue - spirits away

In his DDJ article "The Pillars of Concurrency", Herb Sutter lays out three simple theses for parallel programming: first, isolate tasks and use finer-grained locks or lock-free programming; second, use CPU resources through parallel tasks as much as possible to improve system throughput and scalability; third, guarantee consistency of access to shared resources. The third point is already solved by `atomic`, `mutex`, `lock`, and `condition_variable`; the first and second boil down to how tasks are partitioned into suitable granularity and handed to execution units to be scheduled and run. Task partitioning depends on understanding each business domain, e.g. networking versus rendering, and it is hard to extract much commonality from it. Task scheduling and execution, on the other hand, is a generic structure that can be divided into four parts:

  1. Task encapsulation: C++11 provides three basic forms of task packaging: `future`, `promise`, and `packaged_task` (see the sketch after this list).
  2. Task composition: the Concurrency TS (around the C++17 timeframe) rounds out task-graph control, mainly via `then`, `when_all`, and `when_any`, three functions for chaining multiple `future`s.
  3. Task execution: executors are almost always thread pools; each thread repeatedly tries to fetch a task and run it, essentially a while loop.
  4. Task scheduling: this part handles task submission and dispatch. It maintains a collection of task containers shared among threads, and its interface mainly consists of accepting a new task, taking out a task, and testing whether the container is empty.
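
As a quick illustration of the first point, here is a minimal sketch of wrapping work in a `std::packaged_task` and retrieving the result through its `future`:

#include <future>
#include <thread>
#include <iostream>

int main()
{
    // wrap a callable so its result is delivered through a future
    std::packaged_task<int(int, int)> task([](int a, int b) { return a + b; });
    std::future<int> result = task.get_future();

    // the task can now be handed to any execution unit, e.g. a thread
    std::thread worker(std::move(task), 1, 2);

    std::cout << result.get() << '\n'; // blocks until the task has run; prints 3
    worker.join();
}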

In the whole concurrent task system, the scheduling structure sitting on top of the task-container collection is the core. The most widely used task container today is the concurrent queue, so let us analyze concurrent queues below.

naive concurrent queue

A queue maintains first-in-first-out (FIFO) order; many STL implementations back it with a deque, i.e. a doubly linked list of contiguous memory blocks. The simplest way to use `std::queue` across threads is to add a lock to resolve data races, while also reshaping the original interface so that the data structure cannot be misused.

#include <queue>
#include <memory>
#include <mutex>
#include <condition_variable>
template<typename T>
class concurrent_queue
{
private:
    mutable std::mutex mut;
    std::queue<T> data_queue;
    std::condition_variable data_cond;
public:
    concurrent_queue()
    {
        // pass
    }
    void push(T new_value)
    {
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(std::move(new_value));
        data_cond.notify_one();
    }
    void wait_and_pop(T& value)
    {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this]{
            return !data_queue.empty();
        });
        value = std::move(data_queue.front());
        data_queue.pop();
    }
    std::shared_ptr<T> wait_and_pop()
    {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this]{
            return !data_queue.empty();
        });
        auto res = std::make_shared<T>(std::move(data_queue.front()));
        data_queue.pop();
        return res;
    }
    bool try_pop(T& value)
    {
        std::lock_guard<std::mutex> lk(mut);
        if(data_queue.empty())
        {
            return false;
        }
        value = std::move(data_queue.front());
        data_queue.pop();
        return true;
    }
    std::shared_ptr<T> try_pop()
    {
        std::lock_guard<std::mutex> lk(mut);
        if(data_queue.empty())
        {
            return std::shared_ptr<T>();
        }
        auto res = std::make_shared<T>(std::move(data_queue.front()));
        data_queue.pop();
        return res;
    }
    bool empty() const
    {
        std::lock_guard<std::mutex> lk(mut);
        return data_queue.empty();
    }
};

The main considerations behind this code:

  1. Because of interference from other threads, the usual flow of checking `empty` and then calling `pop` is wrong; the two operations must be packaged together, which is why the data-retrieval interfaces here are `try_pop` and `wait_and_pop`.
  2. To avoid inconsistent state when copying the data throws, two return styles are offered: one takes a reference from the caller, the other returns a `shared_ptr`. This guarantees that even if the copy construction of `front` throws, the queue's structure stays intact.

This `concurrent_queue` is not very efficient; its main drawbacks fall into three areas:

  1. Every interface call has to take a lock, and always the same lock.
  2. A failed attempt to fetch data triggers a yield and hence a thread switch.
  3. It maintains one global FIFO order. With multiple consumers this forced total order is meaningless, and even with a single consumer it is rarely required.

The common corresponding remedies:

  1. Replace the mutex with lock-free techniques. Since the biggest problem in lock-free code is memory allocation, some concurrent queues pre-allocate memory by fixing a maximum size up front, which also sidesteps the notorious ABA problem.
  2. Maintain the queue with a doubly linked list instead of `std::queue`, so that accesses to the head node and the tail node can be separated; for a fixed-size queue, a ring buffer can maintain the queue structure instead.
  3. When an attempt to fetch data fails, poll for a while first; only if no data arrives within that window, yield, i.e. wrap a layer around `condition_variable` (see the sketch after this list).
  4. Give each producer its own submission queue, and let each consumer walk the producers' queues in its own priority order to fetch tasks.
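
A minimal sketch of the spin-then-wait idea from point 3; the `spin_count` value is an arbitrary tuning knob, not taken from any particular library:

#include <mutex>
#include <condition_variable>
#include <queue>

template<typename T>
class spin_then_wait_queue
{
    std::mutex mut;
    std::condition_variable data_cond;
    std::queue<T> data_queue;
    static constexpr int spin_count = 4000; // tuning knob, value is arbitrary
public:
    void push(T v)
    {
        {
            std::lock_guard<std::mutex> lk(mut);
            data_queue.push(std::move(v));
        }
        data_cond.notify_one();
    }
    void wait_and_pop(T& value)
    {
        // phase 1: poll without blocking, hoping data arrives quickly
        for (int i = 0; i < spin_count; ++i)
        {
            std::lock_guard<std::mutex> lk(mut);
            if (!data_queue.empty())
            {
                value = std::move(data_queue.front());
                data_queue.pop();
                return;
            }
        }
        // phase 2: give up the CPU and sleep on the condition variable
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this]{ return !data_queue.empty(); });
        value = std::move(data_queue.front());
        data_queue.pop();
    }
};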

In fact, when designing a concurrent queue, the following questions must be settled first:

  1. How many producers and consumers does the queue have? The common cases are single-producer single-consumer (SPSC), single-producer multi-consumer (SPMC), multi-producer single-consumer (MPSC), and multi-producer multi-consumer (MPMC).

  2. Is the maximum number of elements known? If a maximum size can be fixed, dynamic memory allocation can be avoided entirely and a ring buffer used directly as the container. If not, dynamic allocation is the only option, which raises the difficulty considerably, because multi-threaded memory allocation must then be handled.

Below we look at several mainstream `concurrent_queue` implementations and analyze the optimizations each one makes.

intel spsc concurrent queue

Intel's official site provides an SPSC queue. It places no limit on the number of elements; if its node cache runs dry it calls `new`, which may take a lock.

// load with 'consume' (data-dependent) memory ordering
template<typename T>
T load_consume(T const* addr)
{
    // hardware fence is implicit on x86
    T v = *const_cast<T const volatile*>(addr);
    __memory_barrier(); // compiler fence
    return v;
}

// store with 'release' memory ordering
template<typename T>
void store_release(T* addr, T v)
{
    // hardware fence is implicit on x86
    __memory_barrier(); // compiler fence
    *const_cast<T volatile*>(addr) = v;
}

// cache line size on modern x86 processors (in bytes)
size_t const cache_line_size = 64;

// single-producer/single-consumer queue
template<typename T>
class spsc_queue
{
public:
    spsc_queue()
    {
        node* n = new node;
        n->next_ = 0;
        tail_ = head_ = first_ = tail_copy_ = n;
    }

    ~spsc_queue()
    {
        node* n = first_;
        do
        {
            node* next = n->next_;
            delete n;
            n = next;
        }
        while (n);
    }

    void enqueue(T v)
    {
        node* n = alloc_node();
        n->next_ = 0;
        n->value_ = v;
        store_release(&head_->next_, n);
        head_ = n;
    }

    // returns 'false' if queue is empty
    bool dequeue(T& v)
    {
        if (load_consume(&tail_->next_))
        {
            v = tail_->next_->value_;
            store_release(&tail_, tail_->next_);
            return true;
        }
        else
        {
            return false;
        }
    }

private:
    // internal node structure
    struct node
    {
        node* next_;
        T value_;
    };

    // consumer part
    // accessed mainly by consumer, infrequently by producer
    node* tail_; // tail of the queue

    // delimiter between consumer part and producer part,
    // so that they are situated on different cache lines
    char cache_line_pad_ [cache_line_size];

    // producer part
    // accessed only by producer
    node* head_; // head of the queue
    node* first_; // last unused node (tail of node cache)
    node* tail_copy_; // helper (points somewhere between first_ and tail_)

    node* alloc_node()
    {
        // first tries to allocate node from internal node cache,
        // if attempt fails, allocates node via ::operator new()
        if (first_ != tail_copy_)
        {
            node* n = first_;
            first_ = first_->next_;
            return n;
        }
        tail_copy_ = load_consume(&tail_);
        if (first_ != tail_copy_)
        {
            node* n = first_;
            first_ = first_->next_;
            return n;
        }
        node* n = new node;
        return n;
    }

    spsc_queue(spsc_queue const&);
    spsc_queue& operator = (spsc_queue const&);
};

// usage example
int main()
{
    spsc_queue<int> q;
    q.enqueue(1);
    q.enqueue(2);
    int v;
    bool b = q.dequeue(v);
    b = q.dequeue(v);
    q.enqueue(3);
    q.enqueue(4);
    b = q.dequeue(v);
    b = q.dequeue(v);
    b = q.dequeue(v);
}

The implementation is simple and direct. Its core is a singly linked list, on which every operation is wait-free. The list carries four pointers:

  1. The `tail_` pointer, pointing at the position of the next dequeue
  2. The `head_` pointer, pointing at the most recently enqueued node
  3. The `first_` pointer, pointing at the first node available for recycling
  4. The `tail_copy_` pointer, a cached snapshot of `tail_` marking how far node recycling is known to be safe; it does not necessarily equal `tail_`

Within the list the pointers satisfy \(first \le tail\_copy \le tail \le head\). The core optimization is updating `tail_copy_` on demand: there is no need to refresh `tail_copy_` every time `tail_` is updated; it is refreshed only when `first_ == tail_copy_` is observed. No operation uses CAS, so every operation is wait-free, except of course the one line that calls `new`.

Padding is used here to avoid false sharing. Since the consumer thread only needs to modify `tail_`, a single pad after `tail_` suffices.
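
For reference, since C++17 the same separation can be expressed with `std::hardware_destructive_interference_size` instead of a hand-written 64-byte pad (a sketch; compiler support for this constant varies):

#include <new> // std::hardware_destructive_interference_size

struct queue_pointers
{
    // consumer-side pointer on its own cache line
    alignas(std::hardware_destructive_interference_size) void* tail_;
    // producer-side pointers start on a different cache line
    alignas(std::hardware_destructive_interference_size) void* head_;
    void* first_;
    void* tail_copy_;
};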

facebook spsc concurrent queue

Facebook provides a fixed-size SPSC queue; the code lives in folly's `ProducerConsumerQueue`.

/*
 * Copyright 2017 Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// @author Bo Hu ([email protected])
// @author Jordan DeLong ([email protected])

#pragma once

#include <atomic>
#include <cassert>
#include <cstdlib>
#include <memory>
#include <stdexcept>
#include <type_traits>
#include <utility>

#include <folly/concurrency/CacheLocality.h>

namespace folly {

/*
 * ProducerConsumerQueue is a one producer and one consumer queue
 * without locks.
 */
template <class T>
struct ProducerConsumerQueue {
  typedef T value_type;

  ProducerConsumerQueue(const ProducerConsumerQueue&) = delete;
  ProducerConsumerQueue& operator=(const ProducerConsumerQueue&) = delete;

  // size must be >= 2.
  //
  // Also, note that the number of usable slots in the queue at any
  // given time is actually (size-1), so if you start with an empty queue,
  // isFull() will return true after size-1 insertions.
  explicit ProducerConsumerQueue(uint32_t size)
      : size_(size),
        records_(static_cast<T*>(std::malloc(sizeof(T) * size))),
        readIndex_(0),
        writeIndex_(0) {
    assert(size >= 2);
    if (!records_) {
      throw std::bad_alloc();
    }
  }

  ~ProducerConsumerQueue() {
    // We need to destruct anything that may still exist in our queue.
    // (No real synchronization needed at destructor time: only one
    // thread can be doing this.)
    if (!std::is_trivially_destructible<T>::value) {
      size_t readIndex = readIndex_;
      size_t endIndex = writeIndex_;
      while (readIndex != endIndex) {
        records_[readIndex].~T();
        if (++readIndex == size_) {
          readIndex = 0;
        }
      }
    }

    std::free(records_);
  }

  template <class... Args>
  bool write(Args&&... recordArgs) {
    auto const currentWrite = writeIndex_.load(std::memory_order_relaxed);
    auto nextRecord = currentWrite + 1;
    if (nextRecord == size_) {
      nextRecord = 0;
    }
    if (nextRecord != readIndex_.load(std::memory_order_acquire)) {
      new (&records_[currentWrite]) T(std::forward<Args>(recordArgs)...);
      writeIndex_.store(nextRecord, std::memory_order_release);
      return true;
    }

    // queue is full
    return false;
  }

  // move (or copy) the value at the front of the queue to given variable
  bool read(T& record) {
    auto const currentRead = readIndex_.load(std::memory_order_relaxed);
    if (currentRead == writeIndex_.load(std::memory_order_acquire)) {
      // queue is empty
      return false;
    }

    auto nextRecord = currentRead + 1;
    if (nextRecord == size_) {
      nextRecord = 0;
    }
    record = std::move(records_[currentRead]);
    records_[currentRead].~T();
    readIndex_.store(nextRecord, std::memory_order_release);
    return true;
  }

  // pointer to the value at the front of the queue (for use in-place) or
  // nullptr if empty.
  T* frontPtr() {
    auto const currentRead = readIndex_.load(std::memory_order_relaxed);
    if (currentRead == writeIndex_.load(std::memory_order_acquire)) {
      // queue is empty
      return nullptr;
    }
    return &records_[currentRead];
  }

  // queue must not be empty
  void popFront() {
    auto const currentRead = readIndex_.load(std::memory_order_relaxed);
    assert(currentRead != writeIndex_.load(std::memory_order_acquire));

    auto nextRecord = currentRead + 1;
    if (nextRecord == size_) {
      nextRecord = 0;
    }
    records_[currentRead].~T();
    readIndex_.store(nextRecord, std::memory_order_release);
  }

  bool isEmpty() const {
    return readIndex_.load(std::memory_order_acquire) ==
        writeIndex_.load(std::memory_order_acquire);
  }

  bool isFull() const {
    auto nextRecord = writeIndex_.load(std::memory_order_acquire) + 1;
    if (nextRecord == size_) {
      nextRecord = 0;
    }
    if (nextRecord != readIndex_.load(std::memory_order_acquire)) {
      return false;
    }
    // queue is full
    return true;
  }

  // * If called by consumer, then true size may be more (because producer may
  //   be adding items concurrently).
  // * If called by producer, then true size may be less (because consumer may
  //   be removing items concurrently).
  // * It is undefined to call this from any other thread.
  size_t sizeGuess() const {
    int ret = writeIndex_.load(std::memory_order_acquire) -
        readIndex_.load(std::memory_order_acquire);
    if (ret < 0) {
      ret += size_;
    }
    return ret;
  }

 private:
  char pad0_[CacheLocality::kFalseSharingRange];
  const uint32_t size_;
  T* const records_;

  FOLLY_ALIGN_TO_AVOID_FALSE_SHARING std::atomic<unsigned int> readIndex_;
  FOLLY_ALIGN_TO_AVOID_FALSE_SHARING std::atomic<unsigned int> writeIndex_;

  char pad1_[CacheLocality::kFalseSharingRange - sizeof(writeIndex_)];
};

} // namespace folly

Here a ring buffer serves as the container, with two indices as head and tail. The consumer loads `readIndex_` with relaxed ordering and the producer likewise loads `writeIndex_` with relaxed ordering, because each of these variables is only ever modified by its own thread and can be treated as thread-private. Reading the other thread's variable requires acquire, provided of course that the modifications were made with release. Padding is again used against false sharing, only this time via macros.

One further optimization is possible here, in the spirit of Intel's lazily refreshed `tail_copy_`: first read the other thread's variable with relaxed ordering, and only when it looks like the operation cannot proceed, re-read it with acquire.
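
A minimal sketch of that idea on the consumer side, assuming a member `writeIndexCache_` (my name, not folly's) that only the consumer touches:

// consumer-private cache of the producer's index, analogous to Intel's tail_copy_
size_t writeIndexCache_ = 0;

bool read(T& record) {
  auto const currentRead = readIndex_.load(std::memory_order_relaxed);
  if (currentRead == writeIndexCache_) {
    // looks empty: refresh the cache with the expensive acquire load
    writeIndexCache_ = writeIndex_.load(std::memory_order_acquire);
    if (currentRead == writeIndexCache_) {
      return false; // really empty
    }
  }
  // the acquire load that filled writeIndexCache_ synchronized with the
  // producer's release store, so records_[currentRead] is fully visible
  record = std::move(records_[currentRead]);
  records_[currentRead].~T();
  auto nextRecord = currentRead + 1;
  if (nextRecord == size_) {
    nextRecord = 0;
  }
  readIndex_.store(nextRecord, std::memory_order_release);
  return true;
}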

All in all, this lock-free SPSC queue is also wait-free.

moodycamel spsc concurrent queue

moodycamel improves on the above: it supports an unbounded queue while driving the need for dynamic memory allocation very low, and the capacity can grow dynamically. Its container is a two-level queue of queues: the first level is a circular linked list whose handling is essentially the code in Intel's SPSC queue, and the second level is a circular buffer whose handling is essentially the folly code. When an enqueue finds no free memory it calls `malloc`, but this is far simpler than Intel's allocating every new node, so on the whole it stays close to wait-free. Rather than walk through the code, I will simply quote the author's own explanation.

Enqueue:
If room in tail block, add to tail
Else check next block
    If next block is not the head block, enqueue on next block
    Else create a new block and enqueue there
    Advance tail to the block we just enqueued to

Dequeue:
Remember where the tail block is
If the front block has an element in it, dequeue it
Else
    If front block was the tail block when we entered the function, return false
    Else advance to next block and dequeue the item there

naive spmc concurrent queue

On top of the SPSC concurrent queues introduced above, an SPMC queue is fairly easy to build, whereas an MPSC queue is much harder. The reason is that enqueue may involve dynamic memory allocation, and if several threads race to allocate, they contend on malloc's lock. Multiple threads competing to dequeue, by contrast, only need CAS to keep `tail` updated; that is not wait-free, but lock-freedom is essentially guaranteed, as the sketch below illustrates.
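
A minimal sketch of the multi-consumer side over a fixed-size ring (names and layout are mine, for illustration): consumers race on one atomic read index with `compare_exchange_weak`, so a failed CAS simply retries, which is lock-free but not wait-free.

#include <atomic>
#include <cstddef>

template<typename T, size_t N>
class naive_spmc_queue
{
    T slots_[N];
    std::atomic<size_t> writeIndex_{0}; // advanced only by the single producer
    std::atomic<size_t> readIndex_{0};  // consumers race on this with CAS

public:
    bool try_pop(T& out)
    {
        size_t current = readIndex_.load(std::memory_order_relaxed);
        for (;;)
        {
            if (current == writeIndex_.load(std::memory_order_acquire))
                return false; // empty
            // try to claim slot `current`; on failure `current` is refreshed
            if (readIndex_.compare_exchange_weak(current, current + 1,
                                                 std::memory_order_acq_rel))
            {
                // caveat: a production design must also keep the producer from
                // overwriting a claimed-but-not-yet-copied slot; glossed over here
                out = slots_[current % N];
                return true;
            }
        }
    }
};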

boost mpmc concurrent queue

The boost concurrent queue supports both fixed-size and unbounded queues through template parameters.

For the fixed-size case it stores elements in a ring buffer with a dummy head, marking the queue's head and tail with a head index and a tail index. So that both indices can be modified in one shot, the maximum queue size is capped at \(2^{16} - 2\); the two indices then merge into a single int32 and can be updated together with a single compare_exchange. On platforms supporting compare_exchange on int64, the cap can be raised to \(2^{32} - 2\), with the two indices packed into one int64 and modified together.
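
A sketch of the packing trick (not boost's actual code): two 16-bit indices live in one 32-bit atomic, so a single CAS updates both atomically.

#include <atomic>
#include <cstdint>

// pack head and tail into one 32-bit word: high 16 bits head, low 16 bits tail
static uint32_t pack(uint16_t head, uint16_t tail)
{
    return (uint32_t(head) << 16) | tail;
}

std::atomic<uint32_t> indices{0};

// advance the tail only if neither index changed since we last looked
bool advance_tail(uint16_t expected_head, uint16_t expected_tail, uint16_t new_tail)
{
    uint32_t expected = pack(expected_head, expected_tail);
    uint32_t desired  = pack(expected_head, new_tail);
    return indices.compare_exchange_strong(expected, desired);
}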

For the unbounded case, a linked list maintains the queue structure; the code follows.

struct BOOST_LOCKFREE_CACHELINE_ALIGNMENT node
{
    typedef typename detail::select_tagged_handle<node, node_based>::tagged_handle_type tagged_node_handle;
    typedef typename detail::select_tagged_handle<node, node_based>::handle_type handle_type;

    node(T const & v, handle_type null_handle):
        data(v)//, next(tagged_node_handle(0, 0))
    {
        /* increment tag to avoid ABA problem */
        tagged_node_handle old_next = next.load(memory_order_relaxed);
        tagged_node_handle new_next (null_handle, old_next.get_next_tag());
        next.store(new_next, memory_order_release);
    }

    node (handle_type null_handle):
        next(tagged_node_handle(null_handle, 0))
    {}

    node(void)
    {}

    atomic<tagged_node_handle> next;
    T data;
};

The interesting part is the comment in the first constructor: the pointer's tag bits are incremented to avoid the ABA problem. The `next` pointer here is a tagged_pointer whose allocations are memory-aligned; the alignment is defined by BOOST_LOCKFREE_CACHELINE_BYTES, which on Windows is defined as follows:

#define BOOST_LOCKFREE_CACHELINE_BYTES 64
#ifdef _MSC_VER
#define BOOST_LOCKFREE_CACHELINE_ALIGNMENT __declspec(align(BOOST_LOCKFREE_CACHELINE_BYTES))
#endif

When the pointer is 64-byte aligned, its lowest 6 bits carry no information, so those 6 bits can be used to store extra data. Such a pointer is called a tagged pointer; this pointer structure is also very common inside llvm.
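
A sketch of the low-bit tagging idea (illustrative only; boost's tagged pointer abstracts this behind a handle type):

#include <cstdint>
#include <cassert>

// For a 64-byte-aligned pointee, the low 6 bits of the address are always
// zero, so a small counter (the tag) can be folded into them.
template<typename T>
class low_bit_tagged_ptr
{
    static constexpr uintptr_t tag_mask = 0x3F; // low 6 bits
    uintptr_t bits_;

public:
    low_bit_tagged_ptr(T* p, unsigned tag)
        : bits_(reinterpret_cast<uintptr_t>(p) | (tag & tag_mask))
    {
        assert((reinterpret_cast<uintptr_t>(p) & tag_mask) == 0); // must be aligned
    }

    T* get_ptr() const { return reinterpret_cast<T*>(bits_ & ~tag_mask); }
    unsigned get_tag() const { return unsigned(bits_ & tag_mask); }
    // a CAS on the whole word then swaps pointer and tag together,
    // which is what defeats ABA
    unsigned get_next_tag() const { return (get_tag() + 1) & tag_mask; }
};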

Inside the boost lockfree queue, the data members are defined as follows:

typedef typename detail::queue_signature::bind<A0, A1, A2>::type bound_args;
static const bool has_capacity = detail::extract_capacity<bound_args>::has_capacity;
static const size_t capacity = detail::extract_capacity<bound_args>::capacity + 1; // the queue uses one dummy node
static const bool fixed_sized = detail::extract_fixed_sized<bound_args>::value;
static const bool node_based = !(has_capacity || fixed_sized);
static const bool compile_time_sized = has_capacity;
typedef typename detail::extract_allocator<bound_args, node>::type node_allocator;
typedef typename detail::select_freelist<node, node_allocator, compile_time_sized, fixed_sized, capacity>::type pool_t;
typedef typename pool_t::tagged_node_handle tagged_node_handle;
typedef typename detail::select_tagged_handle<node, node_based>::handle_type handle_type;

atomic<tagged_node_handle> head_;
static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(tagged_node_handle);
char padding1[padding_size];
atomic<tagged_node_handle> tail_;
char padding2[padding_size];
pool_t pool; // the pool of nodes; effectively the memory allocator

Because `atomic<T*>` contains nothing but a `T*` member, `atomic<T*>` and `T*` have the same memory layout, which is why `padding_size` can be computed this way. The point of the padding is to make `pool` start at an address aligned to BOOST_LOCKFREE_CACHELINE_BYTES; it is split into two pads rather than one so that `tail_` and `head_` land on two different cache lines, avoiding cache contention between threads.

Now let us look at the interfaces this lockfree queue provides.

First, `empty`:

/** Check if the queue is empty
*
* \return true, if the queue is empty, false otherwise
* \note The result is only accurate, if no other thread modifies the queue. Therefore it is rarely practical to use this
*       value in program logic.
* */
bool empty(void)
{
return pool.get_handle(head_.load()) == pool.get_handle(tail_.load());
}

As the comment states plainly, the return value is not accurate, because without a lock there is no way to obtain consistent values of `head_` and `tail_` at the same instant.

Now `push`. There are two interfaces, `push` and `bounded_push`, which differ when the memory pool is exhausted: the first, `push`, returns false if the queue is fixed-size and otherwise tries to request more memory from the operating system before returning; the second, `bounded_push`, simply returns false.

/** Pushes object t to the queue.
*
* \post object will be pushed to the queue, if internal node can be allocated
* \returns true, if the push operation is successful.
*
* \note Thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
*                    from the OS. This may not be lock-free.
* */
bool push(T const & t)
{
    return do_push<false>(t);
}

/** Pushes object t to the queue.
*
* \post object will be pushed to the queue, if internal node can be allocated
* \returns true, if the push operation is successful.
*
* \note Thread-safe and non-blocking. If internal memory pool is exhausted, operation will fail
* \throws if memory allocator throws
* */
bool bounded_push(T const & t)
{
return do_push<true>(t);
}

Both functions call `do_push`, which is defined as follows:

template <bool Bounded>
bool do_push(T const & t)
{
    using detail::likely;

    node * n = pool.template construct<true, Bounded>(t, pool.null_handle());
    handle_type node_handle = pool.get_handle(n);

    if (n == NULL)
        return false;

    for (;;) {
        tagged_node_handle tail = tail_.load(memory_order_acquire);
        node * tail_node = pool.get_pointer(tail);
        tagged_node_handle next = tail_node->next.load(memory_order_acquire);
        node * next_ptr = pool.get_pointer(next);

        tagged_node_handle tail2 = tail_.load(memory_order_acquire);
        if (likely(tail == tail2)) {
            if (next_ptr == 0) {
                tagged_node_handle new_tail_next(node_handle, next.get_next_tag());
                if ( tail_node->next.compare_exchange_weak(next, new_tail_next) ) {
                    tagged_node_handle new_tail(node_handle, tail.get_next_tag());
                    tail_.compare_exchange_strong(tail, new_tail);
                    return true;
                }
            }
            else {
                tagged_node_handle new_tail(pool.get_handle(next_ptr), tail.get_next_tag());
                tail_.compare_exchange_strong(tail, new_tail);
            }
        }
    }
}

The tricky parts here are the `tail`/`tail2` pair and the `compare_exchange_strong` in the final else branch. The point of the `tail == tail2` check is to avoid useless work inside the loop: even without it, once `tail_` has changed every check in the inner branch would fail anyway, so correctness is unaffected, but the early check is a big performance win. A complete successful push performs two CAS operations, and the case to worry about is the thread being scheduled out between them, i.e. after linking the new node but before swinging `tail_`. At that moment the old tail's `next` already points at the new data while `tail_` has not yet been updated. The next thread to enter will observe `tail->next != 0` and therefore take the else branch, where it tries to advance `tail_` to `tail->next`, rescuing the queue from the awkward half-updated state.

For `pop` there is only one function:

/** Pops object from queue.
 *
 * \pre type U must be constructible by T and copyable, or T must be convertible to U
 * \post if pop operation is successful, object will be copied to ret.
 * \returns true, if the pop operation is successful, false if queue was empty.
 *
 * \note Thread-safe and non-blocking
 * */
template <typename U>
bool pop (U & ret)
{
    using detail::likely;
    for (;;) {
        tagged_node_handle head = head_.load(memory_order_acquire);
        node * head_ptr = pool.get_pointer(head);

        tagged_node_handle tail = tail_.load(memory_order_acquire);
        tagged_node_handle next = head_ptr->next.load(memory_order_acquire);
        node * next_ptr = pool.get_pointer(next);

        tagged_node_handle head2 = head_.load(memory_order_acquire);
        if (likely(head == head2)) {
            if (pool.get_handle(head) == pool.get_handle(tail)) {
                if (next_ptr == 0)
                    return false;

                tagged_node_handle new_tail(pool.get_handle(next), tail.get_next_tag());
                tail_.compare_exchange_strong(tail, new_tail);

            } else {
                if (next_ptr == 0)
                    /* this check is not part of the original algorithm as published by michael and scott
                     *
                     * however we reuse the tagged_ptr part for the freelist and clear the next part during node
                     * allocation. we can observe a null-pointer here.
                     * */
                    continue;
                detail::copy_payload(next_ptr->data, ret);

                tagged_node_handle new_head(pool.get_handle(next), head.get_next_tag());
                if (head_.compare_exchange_weak(head, new_head)) {
                    pool.template destruct<true>(head);
                    return true;
                }
            }
        }
    }
}

This side is much simpler: a successful pop needs only one `compare_exchange_weak`, so there is no half-updated state of its own to worry about; the `compare_exchange_strong` in the head == tail branch again repairs a `tail_` left half-updated by a push.

Another interesting detail is the `.template` in `pool.template destruct<true>(head)`. This is called a template disambiguator; its job is to tell the compiler that `destruct<true>` is a template instantiation rather than the comparison `destruct < true`.
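
A minimal standalone illustration of why the disambiguator is needed (hypothetical types):

struct Pool {
    template <bool B>
    void destruct(int) {}
};

template <typename P>
void use(P& pool) {
    // pool has a dependent type, so without .template the compiler would
    // parse the next line as the comparison (pool.destruct < true) > (0)
    pool.template destruct<true>(0);
}

int main() {
    Pool p;
    use(p);
}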

Overall, the attention in the boost lockfree queue goes entirely into lock freedom; it does not adopt a queue-per-producer scheme to reduce contention, although such a thing could be built on top of it.

intel tbb concurrent queue

Its overall implementation is similar to boost's. A placeholder, to be filled in later. At first glance, the implementation has a very STL-like flavor.

moodycamel concurrent queue

This concurrent queue implementation has been used by many projects and deserves close analysis. Its distinguishing feature is that every producer maintains its own dedicated queue, while different consumers visit the producers' queues in different orders until they encounter a non-empty one. In short, the MPMC (multiple producer, multiple consumer) queue it implements is built on top of an SPMC multi-threaded queue. That SPMC implementation is lock-free and additionally supports bulk operations. Its design is introduced step by step below.

First, when consumers are constructed, they are bound to producers as evenly as possible; internally a token maintains the affinity between consumer and producer. The simplest affinity scheme is to give each consumer a producer index and poll round-robin on dequeue, starting each scan from the producer queue that last yielded a successful dequeue (sketched below).
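
A sketch of that simplest scheme (the names and the `try_dequeue` interface are assumptions for illustration, not moodycamel's API):

#include <vector>
#include <cstddef>

// a consumer token remembering where the last successful dequeue happened
struct consumer_token {
    size_t last_hit = 0;
};

template<typename Queue, typename T>
bool dequeue_any(std::vector<Queue>& producer_queues, consumer_token& tok, T& out)
{
    const size_t n = producer_queues.size();
    for (size_t i = 0; i < n; ++i) {
        // start scanning at the queue that succeeded last time
        size_t idx = (tok.last_hit + i) % n;
        if (producer_queues[idx].try_dequeue(out)) {
            tok.last_hit = idx;
            return true;
        }
    }
    return false; // every producer queue looked empty
}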

With the mapping between multiple producers and multiple consumers handled, what remains is how to handle single-producer multi-consumer more efficiently. moodycamel's main improvement here is the storage structure of the individual queue: a two-level circular structure in which the first-level ring stores pointers to second-level circular buffers. A queue needs to maintain only four indices. For atomic modification, the consumer's two indices can be merged into one uint64 or uint32, since only consumers race against each other; for convenient comparison, the producer's two indices are merged the same way, so plain integer comparison works. On enqueue, once the data copy completes, the producer index is simply incremented. Dequeue is not so easy: the consumer index is incremented first and then checked against the producer index; if it has overshot, a separate overcommit counter is incremented, and only the three counters together yield the true occupancy (see the sketch below).
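
A sketch of that optimistic-dequeue bookkeeping, heavily simplified from moodycamel's actual code (all names are mine; the real implementation packs the indices differently and also manages blocks):

#include <atomic>
#include <cstdint>

struct spmc_counters {
    std::atomic<uint64_t> enqueueCount{0};       // bumped by the producer after each copy
    std::atomic<uint64_t> dequeueOptimistic{0};  // bumped by every dequeue attempt
    std::atomic<uint64_t> dequeueOvercommit{0};  // bumped by attempts that overshot

    // consumer side: may we actually consume an element?
    bool try_claim()
    {
        uint64_t over = dequeueOvercommit.load(std::memory_order_relaxed);
        uint64_t mine = dequeueOptimistic.fetch_add(1, std::memory_order_relaxed);
        // (mine - over) elements have really been claimed before us
        int64_t available = int64_t(enqueueCount.load(std::memory_order_acquire)
                                    - (mine - over));
        if (available > 0)
            return true;  // the real slot index is then taken from a separate head counter
        // we overshot: record it so future computations stay correct
        dequeueOvercommit.fetch_add(1, std::memory_order_release);
        return false;
    }
};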

Ring buffers are used for growth here rather than linked lists mainly because contiguous storage supports bulk enqueue and dequeue operations: a batch can simply reserve a range of indices up front.
