From patchwork Fri Jun 25 16:23:28 2021
X-Patchwork-Submitter: "Nikolova, Tatyana E"
X-Patchwork-Id: 12345591
From: Tatyana Nikolova
To: jgg@nvidia.com, dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, shiraz.saleem@intel.com,
    mustafa.ismail@intel.com, coverity-bot, Tatyana Nikolova
Subject: [PATCH v2 rdma-next 1/2] RDMA/irdma: Check contents of user-space
 irdma_mem_reg_req object
Date: Fri, 25 Jun 2021 11:23:28 -0500
Message-Id: <20210625162329.1654-2-tatyana.e.nikolova@intel.com>
In-Reply-To: <20210625162329.1654-1-tatyana.e.nikolova@intel.com>
References: <20210625162329.1654-1-tatyana.e.nikolova@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Shiraz Saleem

The contents of the user-space req object are used as array indices in
irdma_handle_q_mem() without being checked for valid values. Guard
against bad input by checking each of the req object page counts
against the number of pages that make up the region.
Reported-by: coverity-bot
Addresses-Coverity-ID: 1505160 ("TAINTED_SCALAR")
Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Shiraz Saleem
Signed-off-by: Tatyana Nikolova
---
 drivers/infiniband/hw/irdma/verbs.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 5bb46a4d26ff..9712f6902ba8 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2358,12 +2358,10 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev,
 	struct irdma_cq_mr *cqmr = &iwpbl->cq_mr;
 	struct irdma_hmc_pble *hmc_p;
 	u64 *arr = iwmr->pgaddrmem;
-	u32 pg_size;
+	u32 pg_size, total;
 	int err = 0;
-	int total;
 	bool ret = true;
 
-	total = req->sq_pages + req->rq_pages + req->cq_pages;
 	pg_size = iwmr->page_size;
 	err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles);
 	if (err)
@@ -2380,6 +2378,7 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev,
 
 	switch (iwmr->type) {
 	case IRDMA_MEMREG_TYPE_QP:
+		total = req->sq_pages + req->rq_pages;
 		hmc_p = &qpmr->sq_pbl;
 		qpmr->shadow = (dma_addr_t)arr[total];
 
@@ -2406,7 +2405,7 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev,
 		hmc_p = &cqmr->cq_pbl;
 
 		if (!cqmr->split)
-			cqmr->shadow = (dma_addr_t)arr[total];
+			cqmr->shadow = (dma_addr_t)arr[req->cq_pages];
 
 		if (use_pbles)
 			ret = irdma_check_mem_contiguous(arr, req->cq_pages,
@@ -2747,7 +2746,8 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 	struct irdma_mr *iwmr;
 	struct ib_umem *region;
 	struct irdma_mem_reg_req req;
-	u32 stag = 0;
+	u32 total, stag = 0;
+	u8 shadow_pgcnt = 1;
 	bool use_pbles = false;
 	unsigned long flags;
 	int err = -EINVAL;
@@ -2801,7 +2801,13 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 
 	switch (req.reg_type) {
 	case IRDMA_MEMREG_TYPE_QP:
-		use_pbles = ((req.sq_pages + req.rq_pages) > 2);
+		total = req.sq_pages + req.rq_pages + shadow_pgcnt;
+		if (total > iwmr->page_cnt) {
+			err = -EINVAL;
+			goto error;
+		}
+		total = req.sq_pages + req.rq_pages;
+		use_pbles = (total > 2);
 		err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
 		if (err)
 			goto error;
@@ -2814,6 +2820,14 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags);
 		break;
 	case IRDMA_MEMREG_TYPE_CQ:
+		if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
+			shadow_pgcnt = 0;
+		total = req.cq_pages + shadow_pgcnt;
+		if (total > iwmr->page_cnt) {
+			err = -EINVAL;
+			goto error;
+		}
+		use_pbles = (req.cq_pages > 1);
 		err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
 		if (err)
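[Editor's note: the following is a minimal standalone userspace sketch of
the guard pattern this patch adds, not driver code. The types mem_reg_req
and mr_region and the helper validate_qp_pages() are hypothetical
stand-ins for the irdma structures; the point is that user-controlled
page counts must be checked against the number of pinned pages before
they are used as array indices.]

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's structures; the field names
 * follow the uapi irdma_mem_reg_req, but the types are illustrative. */
struct mem_reg_req {
	uint16_t sq_pages;	/* user-controlled, untrusted */
	uint16_t rq_pages;	/* user-controlled, untrusted */
};

struct mr_region {
	uint32_t page_cnt;	/* pages actually pinned for this region */
};

/* Mirrors the guard added in irdma_reg_user_mr(): the requested queue
 * pages plus the shadow-area page must not index past the pinned
 * region, or a later arr[total] access reads out of bounds. */
static int validate_qp_pages(const struct mem_reg_req *req,
			     const struct mr_region *mr,
			     uint32_t shadow_pgcnt)
{
	uint32_t total = (uint32_t)req->sq_pages + req->rq_pages + shadow_pgcnt;

	return total > mr->page_cnt ? -1 : 0;	/* -EINVAL in the kernel */
}

int main(void)
{
	struct mr_region mr = { .page_cnt = 8 };
	struct mem_reg_req good = { .sq_pages = 4, .rq_pages = 3 };
	struct mem_reg_req bad = { .sq_pages = 400, .rq_pages = 300 };

	printf("good: %d\n", validate_qp_pages(&good, &mr, 1));	/* 0 */
	printf("bad:  %d\n", validate_qp_pages(&bad, &mr, 1));	/* -1 */
	return 0;
}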
From patchwork Fri Jun 25 16:23:29 2021
X-Patchwork-Submitter: "Nikolova, Tatyana E"
X-Patchwork-Id: 12345593
From: Tatyana Nikolova
To: jgg@nvidia.com, dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, shiraz.saleem@intel.com,
    mustafa.ismail@intel.com, coverity-bot, Tatyana Nikolova
Subject: [PATCH v2 rdma-next 2/2] RDMA/irdma: Fix potential overflow
 expression in irdma_prm_get_pbles
Date: Fri, 25 Jun 2021 11:23:29 -0500
Message-Id: <20210625162329.1654-3-tatyana.e.nikolova@intel.com>
In-Reply-To: <20210625162329.1654-1-tatyana.e.nikolova@intel.com>
References: <20210625162329.1654-1-tatyana.e.nikolova@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Shiraz Saleem

Coverity reports a signed 32-bit overflow on "1 << pprm->pble_shift"
when the expression is used to compute bits_needed, which expects an
unsigned 64-bit value. Fix this by shifting an unsigned 64-bit constant
instead (BIT_ULL()) and converting mem_size to u64.
Reported-by: coverity-bot
Addresses-Coverity-ID: 1505157 ("Integer handling issues")
Fixes: 915cc7ac0f8e ("RDMA/irdma: Add miscellaneous utility definitions")
Signed-off-by: Shiraz Saleem
Signed-off-by: Tatyana Nikolova
---
 drivers/infiniband/hw/irdma/pble.h  | 2 +-
 drivers/infiniband/hw/irdma/utils.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/pble.h b/drivers/infiniband/hw/irdma/pble.h
index e4e635dc4fd9..e1b3b8118a2c 100644
--- a/drivers/infiniband/hw/irdma/pble.h
+++ b/drivers/infiniband/hw/irdma/pble.h
@@ -121,7 +121,7 @@ enum irdma_status_code irdma_prm_add_pble_mem(struct irdma_pble_prm *pprm,
 					      struct irdma_chunk *pchunk);
 enum irdma_status_code
 irdma_prm_get_pbles(struct irdma_pble_prm *pprm,
-		    struct irdma_pble_chunkinfo *chunkinfo, u32 mem_size,
+		    struct irdma_pble_chunkinfo *chunkinfo, u64 mem_size,
 		    u64 **vaddr, u64 *fpm_addr);
 void irdma_prm_return_pbles(struct irdma_pble_prm *pprm,
 			    struct irdma_pble_chunkinfo *chunkinfo);
diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
index ea1df5918c11..5bbe44e54f9a 100644
--- a/drivers/infiniband/hw/irdma/utils.c
+++ b/drivers/infiniband/hw/irdma/utils.c
@@ -2314,7 +2314,7 @@ enum irdma_status_code irdma_prm_add_pble_mem(struct irdma_pble_prm *pprm,
  */
 enum irdma_status_code
 irdma_prm_get_pbles(struct irdma_pble_prm *pprm,
-		    struct irdma_pble_chunkinfo *chunkinfo, u32 mem_size,
+		    struct irdma_pble_chunkinfo *chunkinfo, u64 mem_size,
 		    u64 **vaddr, u64 *fpm_addr)
 {
 	u64 bits_needed;
@@ -2326,7 +2326,7 @@ irdma_prm_get_pbles(struct irdma_pble_prm *pprm,
 	*vaddr = NULL;
 	*fpm_addr = 0;
 
-	bits_needed = (mem_size + (1 << pprm->pble_shift) - 1) >> pprm->pble_shift;
+	bits_needed = DIV_ROUND_UP_ULL(mem_size, BIT_ULL(pprm->pble_shift));
 
 	spin_lock_irqsave(&pprm->prm_lock, flags);
 	while (chunk_entry != &pprm->clist) {
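[Editor's note: a minimal standalone sketch of the arithmetic this patch
fixes, assuming userspace re-definitions of BIT_ULL() and
DIV_ROUND_UP_ULL() with the same semantics as the kernel macros. It is
illustrative only: with a u32 mem_size, a registration of 4 GiB or
larger loses its high bits before the old rounding expression even
runs, while the all-64-bit form computes the correct PBLE count.]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace re-definitions matching the kernel macros' semantics. */
#define BIT_ULL(nr)		(1ULL << (nr))
#define DIV_ROUND_UP_ULL(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int pble_shift = 3;	/* 8-byte PBLE granularity */
	uint64_t mem_size = 1ULL << 32;	/* a 4 GiB registration */

	/* Old form: mem_size arrived as u32, so a 4 GiB size truncates
	 * to 0 before the rounding arithmetic runs, and the
	 * "1 << shift" term is evaluated as a signed 32-bit int. */
	uint32_t truncated = (uint32_t)mem_size;
	uint64_t old = (truncated + (1 << pble_shift) - 1) >> pble_shift;

	/* Fixed form: everything stays unsigned 64-bit, as in the patch. */
	uint64_t fixed = DIV_ROUND_UP_ULL(mem_size, BIT_ULL(pble_shift));

	printf("old:   %" PRIu64 "\n", old);	/* 0 (wrong) */
	printf("fixed: %" PRIu64 "\n", fixed);	/* 536870912 */
	return 0;
}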