From patchwork Tue Apr 18 11:47:11 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 13215542
X-Patchwork-Delegate: kuba@kernel.org
From: Arnd Bergmann
To: Tariq Toukan, "Gustavo A. R. Silva"
Cc: Arnd Bergmann, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] net/mlx4: fix build error from usercopy size check
Date: Tue, 18 Apr 2023 13:47:11 +0200
Message-Id: <20230418114730.3674657-1-arnd@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Arnd Bergmann

The array_size() helper is used here to prevent accidental overflow in
mlx4_init_user_cqes(), but since it returns SIZE_MAX when an overflow
would happen, the size check in copy_to_user() now detects that as
overflowing the source:

In file included from arch/x86/include/asm/preempt.h:9,
                 from include/linux/preempt.h:78,
                 from include/linux/percpu.h:6,
                 from include/linux/context_tracking_state.h:5,
                 from include/linux/hardirq.h:5,
                 from drivers/net/ethernet/mellanox/mlx4/cq.c:37:
In function 'check_copy_size',
    inlined from 'copy_to_user' at include/linux/uaccess.h:190:6,
    inlined from 'mlx4_init_user_cqes' at drivers/net/ethernet/mellanox/mlx4/cq.c:317:9,
    inlined from 'mlx4_cq_alloc' at drivers/net/ethernet/mellanox/mlx4/cq.c:394:10:
include/linux/thread_info.h:244:4: error: call to '__bad_copy_from' declared with attribute error: copy source size is too small
  244 |    __bad_copy_from();
      |    ^~~~~~~~~~~~~~~~~

Move the size logic out, and instead use the same size value for the
comparison and the copy.
Fixes: f69bf5dee7ef ("net/mlx4: Use array_size() helper in copy_to_user()")
Signed-off-by: Arnd Bergmann
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx4/cq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c
index 4d4f9cf9facb..020cb8e2883f 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cq.c
@@ -290,6 +290,7 @@ static void mlx4_cq_free_icm(struct mlx4_dev *dev, int cqn)
 static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
 {
 	int entries_per_copy = PAGE_SIZE / cqe_size;
+	size_t copy_size = array_size(entries, cqe_size);
 	void *init_ents;
 	int err = 0;
 	int i;
@@ -304,7 +305,7 @@ static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
 	 */
 	memset(init_ents, 0xcc, PAGE_SIZE);
 
-	if (entries_per_copy < entries) {
+	if (copy_size > PAGE_SIZE) {
 		for (i = 0; i < entries / entries_per_copy; i++) {
 			err = copy_to_user((void __user *)buf, init_ents,
 					   PAGE_SIZE) ? -EFAULT : 0;
@@ -315,7 +316,7 @@ static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
 		}
 	} else {
 		err = copy_to_user((void __user *)buf, init_ents,
-				   array_size(entries, cqe_size)) ?
+				   copy_size) ?
 			-EFAULT : 0;
 	}

From patchwork Tue Apr 18 11:47:12 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 13215546
X-Patchwork-Delegate: kuba@kernel.org
From: Arnd Bergmann
To: Yishai Hadas, Jason Gunthorpe, Leon Romanovsky, Tariq Toukan
Cc: Arnd Bergmann, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org
Subject: [PATCH 2/2] net/mlx4: avoid overloading user/kernel pointers
Date: Tue, 18 Apr 2023 13:47:12 +0200
Message-Id: <20230418114730.3674657-2-arnd@kernel.org>
In-Reply-To: <20230418114730.3674657-1-arnd@kernel.org>
References: <20230418114730.3674657-1-arnd@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Arnd Bergmann

The mlx4_ib_create_cq() and mlx4_init_user_cqes() functions cast between
kernel pointers and user pointers, which is confusing and can easily hide
bugs. Change the code to use the correct address spaces consistently, and
use separate pointer variables in mlx4_cq_alloc() to avoid mixing them.

Signed-off-by: Arnd Bergmann
---
I ran into this while fixing the link error in the first patch, and
decided it would be useful to clean up.
---
 drivers/infiniband/hw/mlx4/cq.c         | 11 +++++++----
 drivers/net/ethernet/mellanox/mlx4/cq.c | 17 ++++++++---------
 include/linux/mlx4/device.h             |  2 +-
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 4cd738aae53c..b12713fdde99 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -180,7 +180,8 @@ int mlx4_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 	struct mlx4_ib_dev *dev = to_mdev(ibdev);
 	struct mlx4_ib_cq *cq = to_mcq(ibcq);
 	struct mlx4_uar *uar;
-	void *buf_addr;
+	void __user *ubuf_addr;
+	void *kbuf_addr;
 	int err;
 	struct mlx4_ib_ucontext *context = rdma_udata_to_drv_context(
 		udata, struct mlx4_ib_ucontext, ibucontext);
@@ -209,7 +210,8 @@ int mlx4_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 			goto err_cq;
 		}
 
-		buf_addr = (void *)(unsigned long)ucmd.buf_addr;
+		ubuf_addr = u64_to_user_ptr(ucmd.buf_addr);
+		kbuf_addr = NULL;
 		err = mlx4_ib_get_cq_umem(dev, &cq->buf, &cq->umem,
 					  ucmd.buf_addr, entries);
 		if (err)
@@ -235,7 +237,8 @@ int mlx4_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 		if (err)
 			goto err_db;
 
-		buf_addr = &cq->buf.buf;
+		ubuf_addr = NULL;
+		kbuf_addr = &cq->buf.buf;
 		uar = &dev->priv_uar;
 		cq->mcq.usage = MLX4_RES_USAGE_DRIVER;
@@ -248,7 +251,7 @@ int mlx4_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 			    &cq->mcq, vector, 0,
 			    !!(cq->create_flags &
 			       IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION),
-			    buf_addr, !!udata);
+			    ubuf_addr, kbuf_addr);
 	if (err)
 		goto err_dbmap;

diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c
index 020cb8e2883f..22216f4e409b 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cq.c
@@ -287,7 +287,7 @@ static void mlx4_cq_free_icm(struct mlx4_dev *dev, int cqn)
 	__mlx4_cq_free_icm(dev, cqn);
 }
 
-static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
+static int mlx4_init_user_cqes(void __user *buf, int entries, int cqe_size)
 {
 	int entries_per_copy = PAGE_SIZE / cqe_size;
 	size_t copy_size = array_size(entries, cqe_size);
@@ -307,7 +307,7 @@ static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
 	if (copy_size > PAGE_SIZE) {
 		for (i = 0; i < entries / entries_per_copy; i++) {
-			err = copy_to_user((void __user *)buf, init_ents, PAGE_SIZE) ?
+			err = copy_to_user(buf, init_ents, PAGE_SIZE) ?
 				-EFAULT : 0;
 			if (err)
 				goto out;
@@ -315,8 +315,7 @@ static int mlx4_init_user_cqes(void *buf, int entries, int cqe_size)
 			buf += PAGE_SIZE;
 		}
 	} else {
-		err = copy_to_user((void __user *)buf, init_ents,
-				   copy_size) ?
+		err = copy_to_user(buf, init_ents, copy_size) ?
 			-EFAULT : 0;
 	}
 
@@ -343,7 +342,7 @@ static void mlx4_init_kernel_cqes(struct mlx4_buf *buf,
 int mlx4_cq_alloc(struct mlx4_dev *dev, int nent, struct mlx4_mtt *mtt,
 		  struct mlx4_uar *uar, u64 db_rec, struct mlx4_cq *cq,
 		  unsigned vector, int collapsed,
-		  int timestamp_en, void *buf_addr, bool user_cq)
+		  int timestamp_en, void __user *ubuf_addr, void *kbuf_addr)
 {
 	bool sw_cq_init = dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_SW_CQ_INIT;
 	struct mlx4_priv *priv = mlx4_priv(dev);
@@ -391,13 +390,13 @@ int mlx4_cq_alloc(struct mlx4_dev *dev, int nent,
 	cq_context->db_rec_addr = cpu_to_be64(db_rec);
 
 	if (sw_cq_init) {
-		if (user_cq) {
-			err = mlx4_init_user_cqes(buf_addr, nent,
+		if (ubuf_addr) {
+			err = mlx4_init_user_cqes(ubuf_addr, nent,
 						  dev->caps.cqe_size);
 			if (err)
 				sw_cq_init = false;
-		} else {
-			mlx4_init_kernel_cqes(buf_addr, nent,
+		} else if (kbuf_addr) {
+			mlx4_init_kernel_cqes(kbuf_addr, nent,
 					      dev->caps.cqe_size);
 		}
 	}

diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index 6646634a0b9d..dd8f3396dcba 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -1126,7 +1126,7 @@ void mlx4_free_hwq_res(struct mlx4_dev *mdev, struct mlx4_hwq_resources *wqres,
 int mlx4_cq_alloc(struct mlx4_dev *dev, int nent,
 		  struct mlx4_mtt *mtt, struct mlx4_uar *uar, u64 db_rec,
 		  struct mlx4_cq *cq, unsigned int vector, int collapsed,
-		  int timestamp_en, void *buf_addr, bool user_cq);
+		  int timestamp_en, void __user *ubuf_addr, void *kbuf_addr);
 void mlx4_cq_free(struct mlx4_dev *dev, struct mlx4_cq *cq);
 int mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align, int *base,
 			  u8 flags, u8 usage);