From patchwork Tue Jul 2 09:09:41 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13719175
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Keith Busch, Christoph Hellwig, "Zeng, Oak", Chaitanya Kulkarni
Cc: Leon Romanovsky, Sagi Grimberg, Bjorn Helgaas, Logan Gunthorpe,
	Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
	Marek Szyprowski, Jérôme Glisse, Andrew Morton,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v1 11/18] RDMA/core: Separate DMA mapping to caching IOVA
	and page linkage
Date: Tue, 2 Jul 2024 12:09:41 +0300
Message-ID: 
X-Mailer: git-send-email 2.45.2
In-Reply-To: 
References: 
From: Leon Romanovsky

Reuse newly added DMA API to cache IOVA and only link/unlink pages
in fast path.

Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/umem_odp.c | 61 +++---------------------------
 drivers/infiniband/hw/mlx5/odp.c   |  7 +++-
 include/rdma/ib_umem_odp.h         |  8 +---
 3 files changed, 12 insertions(+), 64 deletions(-)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index c628a98c41b7..6e170cb5110c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -81,20 +81,13 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 	if (!umem_odp->pfn_list)
 		return -ENOMEM;
 
-	umem_odp->dma_list = kvcalloc(
-		ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
-	if (!umem_odp->dma_list) {
-		ret = -ENOMEM;
-		goto out_pfn_list;
-	}
 	umem_odp->iova.dev = dev->dma_device;
 	umem_odp->iova.size = end - start;
 	umem_odp->iova.dir = DMA_BIDIRECTIONAL;
 	ret = dma_alloc_iova(&umem_odp->iova);
 	if (ret)
-		goto out_dma_list;
-
+		goto out_pfn_list;
 
 	ret = mmu_interval_notifier_insert(&umem_odp->notifier,
 					   umem_odp->umem.owning_mm,
@@ -107,8 +100,6 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 
 out_free_iova:
 	dma_free_iova(&umem_odp->iova);
-out_dma_list:
-	kvfree(umem_odp->dma_list);
 out_pfn_list:
 	kvfree(umem_odp->pfn_list);
 	return ret;
@@ -286,7 +277,6 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 		mutex_unlock(&umem_odp->umem_mutex);
 		mmu_interval_notifier_remove(&umem_odp->notifier);
 		dma_free_iova(&umem_odp->iova);
-		kvfree(umem_odp->dma_list);
 		kvfree(umem_odp->pfn_list);
 	}
 	put_pid(umem_odp->tgid);
@@ -294,40 +284,10 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 }
 EXPORT_SYMBOL(ib_umem_odp_release);
 
-/*
- * Map for DMA and insert a single page into the on-demand paging page tables.
- *
- * @umem: the umem to insert the page to.
- * @dma_index: index in the umem to add the dma to.
- * @page: the page struct to map and add.
- * @access_mask: access permissions needed for this page.
- *
- * The function returns -EFAULT if the DMA mapping operation fails.
- *
- */
-static int ib_umem_odp_map_dma_single_page(
-		struct ib_umem_odp *umem_odp,
-		unsigned int dma_index,
-		struct page *page)
-{
-	struct ib_device *dev = umem_odp->umem.ibdev;
-	dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index];
-
-	*dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift,
-				    DMA_BIDIRECTIONAL);
-	if (ib_dma_mapping_error(dev, *dma_addr)) {
-		*dma_addr = 0;
-		return -EFAULT;
-	}
-	umem_odp->npages++;
-	return 0;
-}
-
 /**
  * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it.
  *
  * Maps the range passed in the argument to DMA addresses.
- * The DMA addresses of the mapped pages is updated in umem_odp->dma_list.
  * Upon success the ODP MR will be locked to let caller complete its device
  * page table update.
@@ -435,15 +395,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 				   __func__, hmm_order, page_shift);
 			break;
 		}
-
-		ret = ib_umem_odp_map_dma_single_page(
-				umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]));
-		if (ret < 0) {
-			ibdev_dbg(umem_odp->umem.ibdev,
-				  "ib_umem_odp_map_dma_single_page failed with error %d\n", ret);
-			break;
-		}
-		range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED;
 	}
 	/* upon success lock should stay on hold for the callee */
 	if (!ret)
@@ -463,10 +414,8 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock);
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 				 u64 bound)
 {
-	dma_addr_t dma;
 	int idx;
 	u64 addr;
-	struct ib_device *dev = umem_odp->umem.ibdev;
 
 	lockdep_assert_held(&umem_odp->umem_mutex);
 
@@ -474,19 +423,19 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 	bound = min_t(u64, bound, ib_umem_end(umem_odp));
 	for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) {
 		unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
-		struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]);
 
 		idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
-		dma = umem_odp->dma_list[idx];
 
 		if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID))
 			continue;
 		if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED))
 			continue;
 
-		ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift),
-				  DMA_BIDIRECTIONAL);
+		dma_hmm_unlink_page(&umem_odp->pfn_list[pfn_idx],
+				    &umem_odp->iova,
+				    idx * (1 << umem_odp->page_shift));
 		if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) {
+			struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]);
 			struct page *head_page = compound_head(page);
 
 			/*
 			 * set_page_dirty prefers being called with
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 5713fe25f4de..b2aeaef9d0e1 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -149,6 +149,7 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries,
 {
 	struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem);
 	bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE;
+	struct ib_device *dev = odp->umem.ibdev;
 	unsigned long pfn;
 	dma_addr_t pa;
 	size_t i;
@@ -162,12 +163,16 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries,
 			/* Initial ODP init */
 			continue;
 
-		pa = odp->dma_list[idx + i];
+		pa = dma_hmm_link_page(&odp->pfn_list[idx + i], &odp->iova,
+				       (idx + i) * (1 << odp->page_shift));
+		WARN_ON_ONCE(ib_dma_mapping_error(dev, pa));
+
 		pa |= MLX5_IB_MTT_READ;
 		if ((pfn & HMM_PFN_WRITE) && !downgrade)
 			pa |= MLX5_IB_MTT_WRITE;
 
 		pas[i] = cpu_to_be64(pa);
+		odp->npages++;
 	}
 }
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index a3f4a5c03bf8..653fc076b6ee 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -18,15 +18,9 @@ struct ib_umem_odp {
 	/* An array of the pfns included in the on-demand paging umem. */
 	unsigned long *pfn_list;
 
-	/*
-	 * An array with DMA addresses mapped for pfns in pfn_list.
-	 * The lower two bits designate access permissions.
-	 * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT.
-	 */
-	dma_addr_t *dma_list;
 	struct dma_iova_attrs iova;
 	/*
-	 * The umem_mutex protects the page_list and dma_list fields of an ODP
+	 * The umem_mutex protects the page_list field of an ODP
 	 * umem, allowing only a single thread to map/unmap pages. The mutex
 	 * also protects access to the mmu notifier counters.
 	 */