From patchwork Wed May 26 04:51:39 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v2 1/2] RDMA/rxe: Add a type flag to rxe_queue structs
Date: Tue, 25 May 2021 23:51:39 -0500
Message-Id: <20210526045139.634978-2-rpearsonhpe@gmail.com>
In-Reply-To: <20210526045139.634978-1-rpearsonhpe@gmail.com>
References: <20210526045139.634978-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

To generate optimal code we only want to use smp_load_acquire() and
smp_store_release() for user space indices in the rxe_queue APIs, since
kernel indices are protected by locks which also act as memory barriers.
By adding a type to the queues we can determine which indices need to
be protected.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v2: This patch added in v2 to add the type field.
---
 drivers/infiniband/sw/rxe/rxe_cq.c    |  4 +++-
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++++++----
 drivers/infiniband/sw/rxe/rxe_queue.c |  8 ++++----
 drivers/infiniband/sw/rxe/rxe_queue.h | 13 ++++++++++---
 drivers/infiniband/sw/rxe/rxe_srq.c   |  4 +++-
 5 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index b315ebf041ac..1d4d8a31bc12 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -59,9 +59,11 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 		     struct rxe_create_cq_resp __user *uresp)
 {
 	int err;
+	enum queue_type type;
 
+	type = uresp ? QUEUE_TYPE_TO_USER : QUEUE_TYPE_KERNEL;
 	cq->queue = rxe_queue_init(rxe, &cqe,
-				   sizeof(struct rxe_cqe));
+				   sizeof(struct rxe_cqe), type);
 	if (!cq->queue) {
 		pr_warn("unable to create cq\n");
 		return -ENOMEM;
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 34ae957a315c..9bd6bf8f9bd9 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -206,6 +206,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 {
 	int err;
 	int wqe_size;
+	enum queue_type type;
 
 	err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
 	if (err < 0)
@@ -231,7 +232,9 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
 	wqe_size += sizeof(struct rxe_send_wqe);
 
-	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size);
+	type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
+	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr,
+			wqe_size, type);
 	if (!qp->sq.queue)
 		return -ENOMEM;
 
@@ -273,6 +276,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 {
 	int err;
 	int wqe_size;
+	enum queue_type type;
 
 	if (!qp->srq) {
 		qp->rq.max_wr = init->cap.max_recv_wr;
@@ -283,9 +287,9 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 		pr_debug("qp#%d max_wr = %d, max_sge = %d, wqe_size = %d\n",
 			 qp_num(qp), qp->rq.max_wr, qp->rq.max_sge, wqe_size);
 
-		qp->rq.queue = rxe_queue_init(rxe,
-					      &qp->rq.max_wr,
-					      wqe_size);
+		type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
+		qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr,
+				wqe_size, type);
 		if (!qp->rq.queue)
 			return -ENOMEM;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
index fa69241b1187..8f844d0b9e77 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.c
+++ b/drivers/infiniband/sw/rxe/rxe_queue.c
@@ -52,9 +52,8 @@ inline void rxe_queue_reset(struct rxe_queue *q)
 	memset(q->buf->data, 0, q->buf_size - sizeof(struct rxe_queue_buf));
 }
 
-struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
-				 int *num_elem,
-				 unsigned int elem_size)
+struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe, int *num_elem,
+			unsigned int elem_size, enum queue_type type)
 {
 	struct rxe_queue *q;
 	size_t buf_size;
@@ -69,6 +68,7 @@ struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
 		goto err1;
 
 	q->rxe = rxe;
+	q->type = type;
 
 	/* used in resize, only need to copy used part of queue */
 	q->elem_size = elem_size;
@@ -136,7 +136,7 @@ int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p,
 	int err;
 	unsigned long flags = 0, flags1;
 
-	new_q = rxe_queue_init(q->rxe, &num_elem, elem_size);
+	new_q = rxe_queue_init(q->rxe, &num_elem, elem_size, q->type);
 	if (!new_q)
 		return -ENOMEM;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
index 2902ca7b288c..4512745419f8 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.h
+++ b/drivers/infiniband/sw/rxe/rxe_queue.h
@@ -19,6 +19,13 @@
  * of the queue is one less than the number of element slots
  */
 
+/* type of queue */
+enum queue_type {
+	QUEUE_TYPE_KERNEL,
+	QUEUE_TYPE_TO_USER,
+	QUEUE_TYPE_FROM_USER,
+};
+
 struct rxe_queue {
 	struct rxe_dev *rxe;
 	struct rxe_queue_buf *buf;
@@ -27,6 +34,7 @@ struct rxe_queue {
 	size_t elem_size;
 	unsigned int log2_elem_size;
 	u32 index_mask;
+	enum queue_type type;
 };
 
 int do_mmap_info(struct rxe_dev *rxe, struct mminfo __user *outbuf,
@@ -35,9 +43,8 @@ int do_mmap_info(struct rxe_dev *rxe, struct mminfo __user *outbuf,
 
 void rxe_queue_reset(struct rxe_queue *q);
 
-struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
-				 int *num_elem,
-				 unsigned int elem_size);
+struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe, int *num_elem,
+			unsigned int elem_size, enum queue_type type);
 
 int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p,
 		     unsigned int elem_size, struct ib_udata *udata,
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 41b0d1e11baf..52c5593741ec 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -78,6 +78,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	int err;
 	int srq_wqe_size;
 	struct rxe_queue *q;
+	enum queue_type type;
 
 	srq->ibsrq.event_handler = init->event_handler;
 	srq->ibsrq.srq_context = init->srq_context;
@@ -91,8 +92,9 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	spin_lock_init(&srq->rq.producer_lock);
 	spin_lock_init(&srq->rq.consumer_lock);
 
+	type = uresp ? QUEUE_TYPE_FROM_USER : QUEUE_TYPE_KERNEL;
 	q = rxe_queue_init(rxe, &srq->rq.max_wr,
-			   srq_wqe_size);
+			   srq_wqe_size, type);
 	if (!q) {
 		pr_warn("unable to allocate queue for srq\n");
 		return -ENOMEM;

From patchwork Wed May 26 04:51:40 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v2 2/2] RDMA/rxe: Protect user space index loads/stores
Date: Tue, 25 May 2021 23:51:40 -0500
Message-Id: <20210526045139.634978-3-rpearsonhpe@gmail.com>
In-Reply-To: <20210526045139.634978-1-rpearsonhpe@gmail.com>
References: <20210526045139.634978-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Modify the queue APIs to protect all user space index loads with
smp_load_acquire() and all user space index stores with
smp_store_release(). Base this on the type of the queue, which can be
one of QUEUE_TYPE_KERNEL, QUEUE_TYPE_FROM_USER, or QUEUE_TYPE_TO_USER.
Kernel space indices are protected by locks which also provide memory
barriers.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v2: In v2 use queue type to selectively protect user space indices.
---
 drivers/infiniband/sw/rxe/rxe_queue.h | 168 ++++++++++++++++++--------
 1 file changed, 117 insertions(+), 51 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
index 4512745419f8..6e705e09d357 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.h
+++ b/drivers/infiniband/sw/rxe/rxe_queue.h
@@ -66,12 +66,22 @@ static inline int queue_empty(struct rxe_queue *q)
 	u32 prod;
 	u32 cons;
 
-	/* make sure all changes to queue complete before
-	 * testing queue empty
-	 */
-	prod = smp_load_acquire(&q->buf->producer_index);
-	/* same */
-	cons = smp_load_acquire(&q->buf->consumer_index);
+	switch (q->type) {
+	case QUEUE_TYPE_FROM_USER:
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+		cons = q->buf->consumer_index;
+		break;
+	case QUEUE_TYPE_TO_USER:
+		prod = q->buf->producer_index;
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+		break;
+	case QUEUE_TYPE_KERNEL:
+		prod = q->buf->producer_index;
+		cons = q->buf->consumer_index;
+		break;
+	}
 
 	return ((prod - cons) & q->index_mask) == 0;
 }
@@ -81,95 +91,151 @@ static inline int queue_full(struct rxe_queue *q)
 {
 	u32 prod;
 	u32 cons;
 
-	/* make sure all changes to queue complete before
-	 * testing queue full
-	 */
-	prod = smp_load_acquire(&q->buf->producer_index);
-	/* same */
-	cons = smp_load_acquire(&q->buf->consumer_index);
+	switch (q->type) {
+	case QUEUE_TYPE_FROM_USER:
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+		cons = q->buf->consumer_index;
+		break;
+	case QUEUE_TYPE_TO_USER:
+		prod = q->buf->producer_index;
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+		break;
+	case QUEUE_TYPE_KERNEL:
+		prod = q->buf->producer_index;
+		cons = q->buf->consumer_index;
+		break;
+	}
 
 	return ((prod + 1 - cons) & q->index_mask) == 0;
 }
 
-static inline void advance_producer(struct rxe_queue *q)
+static inline unsigned int queue_count(const struct rxe_queue *q)
 {
 	u32 prod;
+	u32 cons;
 
-	prod = (q->buf->producer_index + 1) & q->index_mask;
+	switch (q->type) {
+	case QUEUE_TYPE_FROM_USER:
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+		cons = q->buf->consumer_index;
+		break;
+	case QUEUE_TYPE_TO_USER:
+		prod = q->buf->producer_index;
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+		break;
+	case QUEUE_TYPE_KERNEL:
+		prod = q->buf->producer_index;
+		cons = q->buf->consumer_index;
+		break;
+	}
+
+	return (prod - cons) & q->index_mask;
+}
+
+static inline void advance_producer(struct rxe_queue *q)
+{
+	u32 prod;
 
-	/* make sure all changes to queue complete before
-	 * changing producer index
-	 */
-	smp_store_release(&q->buf->producer_index, prod);
+	if (q->type == QUEUE_TYPE_FROM_USER) {
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+		prod = (prod + 1) & q->index_mask;
+		/* same */
+		smp_store_release(&q->buf->producer_index, prod);
+	} else {
+		prod = q->buf->producer_index;
+		q->buf->producer_index = (prod + 1) & q->index_mask;
+	}
 }
 
 static inline void advance_consumer(struct rxe_queue *q)
 {
 	u32 cons;
 
-	cons = (q->buf->consumer_index + 1) & q->index_mask;
-
-	/* make sure all changes to queue complete before
-	 * changing consumer index
-	 */
-	smp_store_release(&q->buf->consumer_index, cons);
+	if (q->type == QUEUE_TYPE_TO_USER) {
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+		cons = (cons + 1) & q->index_mask;
+		/* same */
+		smp_store_release(&q->buf->consumer_index, cons);
+	} else {
+		cons = q->buf->consumer_index;
+		q->buf->consumer_index = (cons + 1) & q->index_mask;
+	}
 }
 
 static inline void *producer_addr(struct rxe_queue *q)
 {
-	return q->buf->data + ((q->buf->producer_index & q->index_mask)
-				<< q->log2_elem_size);
+	u32 prod;
+
+	if (q->type == QUEUE_TYPE_FROM_USER)
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+	else
+		prod = q->buf->producer_index;
+
+	return q->buf->data + ((prod & q->index_mask) << q->log2_elem_size);
 }
 
 static inline void *consumer_addr(struct rxe_queue *q)
 {
-	return q->buf->data + ((q->buf->consumer_index & q->index_mask)
-				<< q->log2_elem_size);
+	u32 cons;
+
+	if (q->type == QUEUE_TYPE_TO_USER)
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+	else
+		cons = q->buf->consumer_index;
+
+	return q->buf->data + ((cons & q->index_mask) << q->log2_elem_size);
 }
 
 static inline unsigned int producer_index(struct rxe_queue *q)
 {
-	u32 index;
+	u32 prod;
+
+	if (q->type == QUEUE_TYPE_FROM_USER)
+		/* protect user space index */
+		prod = smp_load_acquire(&q->buf->producer_index);
+	else
+		prod = q->buf->producer_index;
 
-	/* make sure all changes to queue
-	 * complete before getting producer index
-	 */
-	index = smp_load_acquire(&q->buf->producer_index);
-	index &= q->index_mask;
+	prod &= q->index_mask;
 
-	return index;
+	return prod;
 }
 
 static inline unsigned int consumer_index(struct rxe_queue *q)
 {
-	u32 index;
+	u32 cons;
 
-	/* make sure all changes to queue
-	 * complete before getting consumer index
-	 */
-	index = smp_load_acquire(&q->buf->consumer_index);
-	index &= q->index_mask;
+	if (q->type == QUEUE_TYPE_TO_USER)
+		/* protect user space index */
+		cons = smp_load_acquire(&q->buf->consumer_index);
+	else
+		cons = q->buf->consumer_index;
 
-	return index;
+	cons &= q->index_mask;
+
+	return cons;
 }
 
-static inline void *addr_from_index(struct rxe_queue *q, unsigned int index)
+static inline void *addr_from_index(struct rxe_queue *q,
+				unsigned int index)
 {
 	return q->buf->data + ((index & q->index_mask)
 				<< q->buf->log2_elem_size);
 }
 
 static inline unsigned int index_from_addr(const struct rxe_queue *q,
-					   const void *addr)
+				const void *addr)
 {
 	return (((u8 *)addr - q->buf->data) >> q->log2_elem_size)
-		& q->index_mask;
-}
-
-static inline unsigned int queue_count(const struct rxe_queue *q)
-{
-	return (q->buf->producer_index - q->buf->consumer_index)
-		& q->index_mask;
+			& q->index_mask;
 }
 
 static inline void *queue_head(struct rxe_queue *q)