From patchwork Wed Mar 29 14:23:40 2023
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13192667
X-Patchwork-Delegate: jgg@ziepe.ca
From: Linus Walleij
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, Linus Walleij, Bob Pearson
Subject: [PATCH v2 1/2] RDMA/rxe: Treat physical addresses right
Date: Wed, 29 Mar 2023 16:23:40 +0200
Message-Id: <20230329142341.863175-1-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-rdma@vger.kernel.org

Whenever the IB_MR_TYPE_DMA flag is set in ibmr.type, the "iova"
(I/O virtual address) is not really a virtual address but a physical
address. This means that calling virt_to_page() on these addresses is
incorrect as well: they need to be converted to a page by way of the
page frame number, using pfn_to_page(). Fix up all users in this file.

Fixes: 592627ccbdff ("RDMA/rxe: Replace rxe_map and rxe_phys_buf by xarray")
Cc: Bob Pearson
Reported-by: Jason Gunthorpe
Link: https://lore.kernel.org/linux-rdma/ZB2s3GeaN%2FFBpR5K@nvidia.com/
Signed-off-by: Linus Walleij
Acked-by: Zhu Yanjun
---
ChangeLog v1->v2:
- New patch prepended to patch set.
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)
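As an aside, the distinction the commit message relies on can be
summarized in a minimal sketch (not part of the patch; both helper
names are made up for illustration):

#include <linux/mm.h>

/* Correct for an address inside the kernel's linear map */
static struct page *page_of_virt(void *vaddr)
{
	return virt_to_page(vaddr);
}

/* Correct for a physical address: go via the page frame number */
static struct page *page_of_phys(u64 phys)
{
	return pfn_to_page(phys >> PAGE_SHIFT);
}

Feeding a physical address to virt_to_page() mixes up the two address
spaces, which is exactly the bug the diff below removes.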
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index b10aa1580a64..8e8250652f9d 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -279,16 +279,20 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
 	return 0;
 }
 
-static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
+/*
+ * This function is always called with a physical address as parameter,
+ * since DMA only operates on physical addresses.
+ */
+static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 phys, void *addr,
 			    unsigned int length, enum rxe_mr_copy_dir dir)
 {
-	unsigned int page_offset = iova & (PAGE_SIZE - 1);
+	unsigned int page_offset = phys & (PAGE_SIZE - 1);
 	unsigned int bytes;
 	struct page *page;
 	u8 *va;
 
 	while (length) {
-		page = virt_to_page(iova & mr->page_mask);
+		page = pfn_to_page(phys >> PAGE_SHIFT);
 		bytes = min_t(unsigned int, length,
 				PAGE_SIZE - page_offset);
 		va = kmap_local_page(page);
@@ -300,7 +304,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
 		kunmap_local(va);
 
 		page_offset = 0;
-		iova += bytes;
+		phys += bytes;
 		addr += bytes;
 		length -= bytes;
 	}
@@ -487,8 +491,11 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 	}
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-		page_offset = iova & (PAGE_SIZE - 1);
-		page = virt_to_page(iova & PAGE_MASK);
+		/* In this case iova is a physical address */
+		u64 phys = iova;
+
+		page_offset = phys & (PAGE_SIZE - 1);
+		page = pfn_to_page(phys >> PAGE_SHIFT);
 	} else {
 		unsigned long index;
 		int err;
@@ -544,8 +551,11 @@ int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
 	}
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-		page_offset = iova & (PAGE_SIZE - 1);
-		page = virt_to_page(iova & PAGE_MASK);
+		/* In this case iova is a physical address */
+		u64 phys = iova;
+
+		page_offset = phys & (PAGE_SIZE - 1);
+		page = pfn_to_page(phys >> PAGE_SHIFT);
 	} else {
 		unsigned long index;
 		int err;
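The copy loop being fixed above walks the region one page at a time; a
condensed sketch of the same pattern (illustrative only, the function
name is made up), copying out of a physically contiguous region:

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/minmax.h>
#include <linux/string.h>

static void copy_from_phys(u64 phys, u8 *dst, unsigned int length)
{
	unsigned int page_offset = phys & (PAGE_SIZE - 1);

	while (length) {
		struct page *page = pfn_to_page(phys >> PAGE_SHIFT);
		unsigned int bytes = min_t(unsigned int, length,
					   PAGE_SIZE - page_offset);
		u8 *va = kmap_local_page(page);

		memcpy(dst, va + page_offset, bytes);
		kunmap_local(va);

		/* Only the first page can start mid-page */
		page_offset = 0;
		phys += bytes;
		dst += bytes;
		length -= bytes;
	}
}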
From patchwork Wed Mar 29 14:23:41 2023
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13192670
X-Patchwork-Delegate: jgg@ziepe.ca

From: Linus Walleij
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, Linus Walleij
Subject: [PATCH v2 2/2] RDMA/rxe: Pass a pointer to virt_to_page()
Date: Wed, 29 Mar 2023 16:23:41 +0200
Message-Id: <20230329142341.863175-2-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230329142341.863175-1-linus.walleij@linaro.org>
References: <20230329142341.863175-1-linus.walleij@linaro.org>
X-Mailing-List: linux-rdma@vger.kernel.org

Like the other calls in this function, virt_to_page() expects a
pointer, not an integer. However, since many architectures implement
virt_to_pfn() as a macro, this function becomes polymorphic and
accepts both an (unsigned long) and a (void *). Fix this up with an
explicit cast.

Then we need a second cast to (uintptr_t). This is because the kernel
build robot also builds this driver for PARISC, yielding the following
warning on PARISC when a (void *) is passed to virt_to_page():

drivers/infiniband/sw/rxe/rxe_mr.c: In function 'rxe_set_page':
>> drivers/infiniband/sw/rxe/rxe_mr.c:216:42: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     216 |         struct page *page = virt_to_page((void *)(iova & mr->page_mask));
         |                                          ^
   include/asm-generic/memory_model.h:18:46: note: in definition of macro '__pfn_to_page'
      18 | #define __pfn_to_page(pfn)      (mem_map + ((pfn) - ARCH_PFN_OFFSET))
         |                                              ^~~
   arch/parisc/include/asm/page.h:179:45: note: in expansion of macro '__pa'
     179 | #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
         |                                             ^~~~
   drivers/infiniband/sw/rxe/rxe_mr.c:216:29: note: in expansion of macro 'virt_to_page'
     216 |         struct page *page = virt_to_page((void *)(iova & mr->page_mask));
         |                             ^~~~~~~~~~~~

First: I think this happens because iova is u64 by design, while
(void *) on PARISC is sometimes 32 bits wide. Second: compiling the
SW RXE driver on PARISC is possible at all because it satisfies the
dependency on INFINIBAND_VIRT_DMA; that symbol is just
def_bool !HIGHMEM, and PARISC does not have HIGHMEM.

By first casting iova to (uintptr_t) it is turned into a u32 on PARISC
or any other 32-bit system, and a u64 on any 64BIT system.

Link: https://lore.kernel.org/linux-rdma/202303242000.HmTaa6yB-lkp@intel.com/
Signed-off-by: Linus Walleij
Reviewed-by: Bob Pearson
---
ChangeLog v1->v2:
- Fix up confusion between virtual and physical addresses found by
  Jason in a separate patch.
- Fix up compilation on PARISC by an additional cast. I don't know if
  this is the right solution; perhaps RDMA should rather depend on
  64BIT if the subsystem is only for 64-bit systems?
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 8e8250652f9d..a5efb0575956 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -213,7 +213,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	struct page *page = virt_to_page(iova & mr->page_mask);
+	struct page *page = virt_to_page((void *)(uintptr_t)(iova & mr->page_mask));
 	bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
 	int err;
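For reference, a minimal sketch (not part of the patch; the helper
name is made up) of why the intermediate cast is needed:

#include <linux/types.h>

/*
 * On a 32-bit architecture such as 32-bit PARISC, (void *)iova would
 * narrow a u64 and trip -Wint-to-pointer-cast; casting through the
 * pointer-sized uintptr_t first makes the narrowing explicit, so the
 * pointer cast itself is then between types of the same width.
 */
static void *iova_to_ptr(u64 iova)
{
	return (void *)(uintptr_t)iova;
}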