From patchwork Tue Jul 2 09:09:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719166 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5559FC3064D for ; Tue, 2 Jul 2024 09:10:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CCDC46B0096; Tue, 2 Jul 2024 05:10:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C829E6B0098; Tue, 2 Jul 2024 05:10:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AF0236B0099; Tue, 2 Jul 2024 05:10:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 8F3BE6B0096 for ; Tue, 2 Jul 2024 05:10:08 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 4D569141F93 for ; Tue, 2 Jul 2024 09:10:08 +0000 (UTC) X-FDA: 82294240896.20.6F86913 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf26.hostedemail.com (Postfix) with ESMTP id 7E82714001E for ; Tue, 2 Jul 2024 09:10:05 +0000 (UTC) Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=WdB+vVAS; spf=pass (imf26.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911388; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=O4AbwGmGnj47IV1Y8hzENMzunZyP4axPgZfTlB+LSU4=; b=iKkpo2XJPUsErQ79LNJGUfdi3Z8S+MR+gJ5JTxadtBatEeNF3woTqZ4wkHIoplNo3GTPck 6ExbcOAhTAYMNKvXcn8+brGYbpAllh5ZaOEDRNFrL/BFCo667RapJcafcR7e5LvRoKBGiU TOSMtmosMYZWYbS1FFvAc+ed3h/WI3w= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=WdB+vVAS; spf=pass (imf26.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911388; a=rsa-sha256; cv=none; b=2nBFhnD8+oXk9bN5a+VXgBVmp2b+2dRsxMGASckAr31dOgzzCUBXCZihilP9YMWlwP3EjA hW+5VB7R6rrs2D/qmCsB/w+TkdSG9pSCDo5/it9Vtm2Jsw18zOHfvYhdoURatQe7s6vawm XlIIb5l6d5dAkNjEgRvFujAR8RcH+9Y= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id B00FE61A0C; Tue, 2 Jul 2024 09:10:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 97D44C116B1; Tue, 2 Jul 2024 09:10:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911404; bh=PsDcRtwRrZy3xcTFmq5V+v2aFvC1FX6fMcSIsPNIgJ4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WdB+vVAS8uHPdPme4myJO22f4FbGRcG1dHDJHwEW7t+ki8CzsXtGbIJKrwNZzL+oN HZhggaNWlUq2LDjYroZErlO93jF4y+zjRYe2QMVqmzz1MDxk1Rl8xeJex/5Q1jOGA8 
kNM7rlYFNxKQypQjyLEv1W8ppPwmxff79s/KS7zpU97+5kGBVsrGR36Z/Ui8WyxfjN +2WhW3GmeKGr7A5xpE0coC6WO/48R8aI2ccSdCQQ38AntHiLnCHjCF0ItXO1X8/fim N1g17gLjq4qvdcY6NksGgFPtVrvn0kKJnJryFF33jW+ntCtDHOmI6RkHi12DWkTu0B MTNXNHg+0HJIQ== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 01/18] dma-mapping: query DMA memory type Date: Tue, 2 Jul 2024 12:09:31 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Stat-Signature: 9s8tzwgf5m618cmxzzky5ruah6861isd X-Rspam-User: X-Rspamd-Queue-Id: 7E82714001E X-Rspamd-Server: rspam02 X-HE-Tag: 1719911405-122911 X-HE-Meta: U2FsdGVkX1+lQamM9zKDNcgmGQL2wlVMm/pQGUOU6EE01zGvfXArkjIiIJCV/REihIqKzJUMTgwcgqvX0ZodAyBg8dXTeIOE1Qd1lFG7ZRccZ7g63soZ0qRY5xNt8yvLEusqhcTRKX1z0EMaMA4r/vfQDRf/QKfutcBzvUFEAIfmel78lcyt/kEnaWMzGDgAJlCykepoXMPfA4ICG+u68h5D8wMiAWJ1xQBhIPd6u/eY7zZx5g7NW8Y3xN2O8VMMDBfx/O1F6socjghyxCH8eWX5g7xXGNxGzt1myp6UlYI54roTjcWkmeySyCglL1LksVqIhP6oOK9kWjk/qp/qDX2VhtIehhyrM6YPHX7L2eG6d7JQeRk9JFwCKeFBEd8bTjlqYOf8CGZg2qHoqizuCvccqFfc4zBFEqXP8wpn+Uo+6D/9OJNy7DgYctebyyA5pNexhX4BHW+DjeG92xkolAxaOF6InyQrVUVpw46gfTNWlLgCh4KkHLldYMU4IUkyGLpGnQsTKA4rXQ6jhwYtWgBpKTvLSVUFrRnJMNmqSq6ZiLOiM+ynaqBqqApvO+QjD6t2M9VKaGgycafFdqOjQklTyJl6TwOWxZgTqHV0c4l8Po55ymsV0y43eFtpZzWj3cGFZaOd4vI1h/k4Y1BwuPuCMUUr11iEoH6NRz94YR59GYzOtNoU5nX1nniXOof80hKlc1ON/3dPpuDBEYjkMNGDTvQ06wXQICRyJBBlCKtmhx5fVaRBNjijUsvSHzph6Krm9gqVuFM86PgAU8qEqFUZ2iHjHldmB9b1gQ/GMl7DO4LqFdNRHIk2vN/X3eQ5HHuqYC9EERQkQ3sLq/+E84UVTIw8Ia8MLYCMYtaFU8REj/rkX6ZN+uMmE3zJVJJZBflVhmTPMjmPhloZVh6sya2ENtwGTZHI2AvKeYzIEd7iuBqBGYJPancrmj67s9a7n2bqJ7trO9rQlW+hRE7 hBQ98PEQ aYwC5XmVG5Ryy1Grhd9oN5yQeqLDNvuO/x329lokmSe5DW8F6u2Ojk3UdMMUU4KUNjCirJN8dfuFY8UpieseqgfEh1tyB9PONCLc/u0GHexuO2BIEz8lTxRiZCdmyslobYeG9yJYR1Fnvq8O7emdRTsfw7/g3/uPUJbqTcmg/NYTAQlZ40MyE6xZ4bvcbN7V1r8we2SrqR22ldJU= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Provide an option to query and set DMA memory type so callers who supply range of pages can perform it only once as the whole range is supposed to have same memory type. Signed-off-by: Leon Romanovsky --- include/linux/dma-mapping.h | 20 ++++++++++++++++++++ kernel/dma/mapping.c | 30 ++++++++++++++++++++++++++++++ 2 files changed, 50 insertions(+) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index f693aafe221f..49b99c6e7ec5 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -76,6 +76,20 @@ #define DMA_BIT_MASK(n) (((n) == 64) ? 
~0ULL : ((1ULL<<(n))-1)) +enum dma_memory_types { + /* Normal memory without any extra properties like P2P, etc. */ + DMA_MEMORY_TYPE_NORMAL, + /* Memory which is p2p capable */ + DMA_MEMORY_TYPE_P2P, + /* Encrypted memory (TDX) */ + DMA_MEMORY_TYPE_ENCRYPTED, +}; + +struct dma_memory_type { + enum dma_memory_types type; + struct dev_pagemap *p2p_pgmap; +}; + #ifdef CONFIG_DMA_API_DEBUG void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); void debug_dma_map_single(struct device *dev, const void *addr, @@ -149,6 +163,8 @@ void *dma_vmap_noncontiguous(struct device *dev, size_t size, void dma_vunmap_noncontiguous(struct device *dev, void *vaddr); int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma, size_t size, struct sg_table *sgt); + +void dma_get_memory_type(struct page *page, struct dma_memory_type *type); #else /* CONFIG_HAS_DMA */ static inline dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, @@ -279,6 +295,10 @@ static inline int dma_mmap_noncontiguous(struct device *dev, { return -EINVAL; } +static inline void dma_get_memory_type(struct page *page, + struct dma_memory_type *type) +{ +} #endif /* CONFIG_HAS_DMA */ #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 81de84318ccc..877e43b39c06 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -6,6 +6,7 @@ * Copyright (c) 2006 Tejun Heo */ #include /* for max_pfn */ +#include #include #include #include @@ -14,6 +15,7 @@ #include #include #include +#include #include "debug.h" #include "direct.h" @@ -894,3 +896,31 @@ unsigned long dma_get_merge_boundary(struct device *dev) return ops->get_merge_boundary(dev); } EXPORT_SYMBOL_GPL(dma_get_merge_boundary); + +/** + * dma_get_memory_type - get the DMA memory type of the page supplied + * @page: page to check + * @type: memory type of that page + * + * Return the DMA memory type for the struct page. Pages with the same + * memory type can be combined into the same IOVA mapping. Users of the + * dma_iova family of functions must separate the memory they want to map + * into same-memory type ranges.
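+ * For example, a buffer that mixes PCI peer-to-peer pages (DMA_MEMORY_TYPE_P2P) and + * regular system memory (DMA_MEMORY_TYPE_NORMAL) has to be split and mapped as two + * separate ranges.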
+ */ +void dma_get_memory_type(struct page *page, struct dma_memory_type *type) +{ + /* TODO: Rewrite this check to rely on specific struct page flags */ + if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) { + type->type = DMA_MEMORY_TYPE_ENCRYPTED; + return; + } + + if (is_pci_p2pdma_page(page)) { + type->type = DMA_MEMORY_TYPE_P2P; + type->p2p_pgmap = page->pgmap; + return; + } + + type->type = DMA_MEMORY_TYPE_NORMAL; +} +EXPORT_SYMBOL_GPL(dma_get_memory_type); From patchwork Tue Jul 2 09:09:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 736A6C3064D for ; Tue, 2 Jul 2024 09:10:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DD9DA6B009D; Tue, 2 Jul 2024 05:10:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D87E56B009E; Tue, 2 Jul 2024 05:10:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C293A6B009F; Tue, 2 Jul 2024 05:10:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id A27D16B009D for ; Tue, 2 Jul 2024 05:10:18 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 4F0B61C388C for ; Tue, 2 Jul 2024 09:10:18 +0000 (UTC) X-FDA: 82294241316.15.20CB428 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf30.hostedemail.com (Postfix) with ESMTP id 0FC608002B for ; Tue, 2 Jul 2024 09:10:15 +0000 (UTC) Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=jZsLMwcx; spf=pass (imf30.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911405; a=rsa-sha256; cv=none; b=q78VTjrEXYtg1XGU3nItX0P+iSImTKPRpKbno+mIijQtXEF+3TOcMyPPrDU5/YALTC2pmP I33Pzjufb2Qq67JxauXLu2azaF+agfOM3dulg16ntwdTXqgq+RzvgimgtGv8c7ee9JCzNS YsGNmRAZRJ6Y80yKPC99xCcdx0ACq98= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=jZsLMwcx; spf=pass (imf30.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911405; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=/Tk6HO/OFPztC8ZmvMZj3eONbvN9mSpoSMmb7Ix2/1w=; b=x2cMC4S6T6VZvpglBMM9xYbcH7FMenUZNi4ulU3N4IemQGu1mMBJVZ+A3QO0s87WaanNzH XhNkL+49vw7GsDxmNXcTdOuWpX256h2FjYLXO02Dpva5uNldrdF/QjpDmE7Ed5bm8QLuJa r43euWLnvgOskMb4ZK43TP75iJBIMe8= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id F02CACE1D06; Tue, 2 Jul 2024 09:10:12 +0000 (UTC) Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id CB762C116B1; Tue, 2 Jul 2024 09:10:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911412; bh=GdqxFreVifTUH7yLunaT0Kp4VH1OcNBKz7jklxCA7IE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jZsLMwcxfIhbNdXAH7Erajavnd9zt8wZwqJVAhbFF8wuHrgj9ZFLnS0M824YeY1WI lsBBR7Huv75yN3CNdjq3wGOj6d6XRdgf43l4OFc9s6OcECQ4HjS13MEjvrsvp/qDtC V9bMyC4eCtQlzz7bAocuoWalQBZeKMuoUk0+t2JOvKUiwHjZ7NOHkhxRQ2bfGbsofJ 9EiPqR5DPCSASk+2Ji+dPRehAt5lU8Xk41chq5PYN3dkl0zeRpGP7rNQIh7zxcQ/Ux TOXwMCzKrQG/bRPwo4SyChwOnWE10AWLJZqnWPQAFqWJhu2cqfYHV+DULsrE24NbAI pEN9z2lU0w7oA== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 02/18] dma-mapping: provide an interface to allocate IOVA Date: Tue, 2 Jul 2024 12:09:32 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Stat-Signature: rq4qbghus7fratnwjq7hym7q8uwe8eaj X-Rspamd-Queue-Id: 0FC608002B X-Rspam-User: X-Rspamd-Server: rspam10 X-HE-Tag: 1719911415-995556 X-HE-Meta: U2FsdGVkX185LZjQbaPXux0D83LDKFqmvIMxEQssD0inSjgJRGijaMOl4bmZL+ddPH30PfFxCakHl5HIWkh9iT42/rhacy+pi+LhDh6RFOiOTe5GkTRakLFBghwedWoUbBa9dJ4ZZNyU+OTin3VTyHcxUuaqmF4e1pxQQBBnC23Koeg0pVfGkrOTOy/9Nxy6B6hkVMxTaIu9PEtEMlX0pRF756G6rhQ0W9EGyA6+uZFJ2ceHlMt89Z4x5n1obJW2pk4lTxM37olXHHzOX0z1EA44qtLIpjMxkfmLcMiYio0PjqttQwQodMke3jMbLOVuwdsuzI4kORN380SnGc6un4XQjTYP0l7hgOTJ6S1XRLk6wpr18ygOz4AYPmQmtl5R8vx5F6TDsP0uqo1l8vBo0wqz1L9+3BfI/PRmpBKKKq7mlhksZ+fyhEZS63zN5hbcUS6neY/QTNQ6M2lnOp+FyMa1ov4O+wKKbmJMa1bJTRZ0GQ8vuYcjn/pjyCvzrZaqPt++1/VXwvHopNfl1o4k8ZhRD8SrILKOmDwO19hmStgIf7H7+kZkiimjq7mE+2TqRhGQY+7ebspDFZ2JcCDEzma+ZCqXFhJhEC3gm7/lq0Up8idIDUBe3CWK4l7/lUmO0Oa3zRaT5kuHpZUZzaLujPfO5DOk5s8wM6BrykWucRirUGS7UhRwhQNvsIiv3snyoSqZQtbi/9OhMV2SRC4aVccpItDIBQePJPj73pUI6KO30wwD/DvT/S0jQL7D3BIixat0QwX5BMUIGhzPHLj4+CuQp+xEz0AJjb4GYGHYATUa9xMJPV7JFjMkgd7o3bolKw+vW6EK+J1i0nAte8IresiOs5dZ5UUvlzMeS3KN8yARIX7dTJFlqD9ASTOj+5PJrVKGQ9gdUD1i14uiEm//aGu0vAbdHx35aZDOoePvYgMWF1s8z0pafsUYJakcz0rZ9JX9ACNwD/gx9BAg7rC gTh6n4+y CKvg7LYz8hksGOb5GMy2YVKKQxkKl4aLa66CVklnZ+Hq6MNcmd+rzU5kkVqUzB2FJExyblfIsXyi5ApS+W3ZJ9DM4coeuh+rnSGwguIHckuxTVA1gJ4ZwS12YOOqFVPsE/ooSwrU9iZD3sO7IS1dZ5aUYHWtuTE13MfLm3BqyjMsnr+EghoYw++cWQhiJ9KqUdOOwwQQ/flS98c8= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Existing .map_page() callback provides two things at the same time: allocates IOVA and links DMA pages. That combination works great for most of the callers who use it in control paths, but less effective in fast paths. These advanced callers already manage their data in some sort of database and can perform IOVA allocation in advance, leaving range linkage operation to be in fast path. Provide an interface to allocate/deallocate IOVA and next patch link/unlink DMA ranges to that specific IOVA. 
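For illustration only (this snippet is not part of the patch): a caller could fill struct dma_iova_attrs once in its slow path, keep the resulting IOVA for later range linking, and release it on teardown. The context structure and helper names below are hypothetical; dma_alloc_iova(), dma_free_iova() and the dma_iova_attrs fields are the ones introduced here.

#include <linux/dma-mapping.h>

/* Hypothetical caller context; only the IOVA attributes matter for this sketch. */
struct my_ctx {
        struct dma_iova_attrs iova;
};

static int my_ctx_setup_iova(struct my_ctx *ctx, struct device *dev,
                             size_t worst_case_len)
{
        ctx->iova.dev = dev;
        ctx->iova.size = worst_case_len;        /* IOVA is sized for the worst case */
        ctx->iova.dir = DMA_BIDIRECTIONAL;
        ctx->iova.attrs = 0;

        /*
         * On success ctx->iova.addr holds the base IOVA; it stays 0 when the
         * direct-mapping path is taken and no IOVA space is needed.
         */
        return dma_alloc_iova(&ctx->iova);
}

static void my_ctx_teardown_iova(struct my_ctx *ctx)
{
        dma_free_iova(&ctx->iova);
}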
Signed-off-by: Leon Romanovsky --- include/linux/dma-map-ops.h | 3 +++ include/linux/dma-mapping.h | 20 +++++++++++++++++ kernel/dma/mapping.c | 44 +++++++++++++++++++++++++++++++++++++ 3 files changed, 67 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 02a1c825896b..23e5e2f63a1c 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -86,6 +86,9 @@ struct dma_map_ops { size_t (*max_mapping_size)(struct device *dev); size_t (*opt_mapping_size)(void); unsigned long (*get_merge_boundary)(struct device *dev); + + dma_addr_t (*alloc_iova)(struct device *dev, size_t size); + void (*free_iova)(struct device *dev, dma_addr_t dma_addr, size_t size); }; #ifdef CONFIG_DMA_OPS diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 49b99c6e7ec5..673ddcf140ff 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -90,6 +90,16 @@ struct dma_memory_type { struct dev_pagemap *p2p_pgmap; }; +struct dma_iova_attrs { + /* OUT field */ + dma_addr_t addr; + /* IN fields */ + struct device *dev; + size_t size; + enum dma_data_direction dir; + unsigned long attrs; +}; + #ifdef CONFIG_DMA_API_DEBUG void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); void debug_dma_map_single(struct device *dev, const void *addr, @@ -115,6 +125,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) return 0; } +int dma_alloc_iova(struct dma_iova_attrs *iova); +void dma_free_iova(struct dma_iova_attrs *iova); + dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs); @@ -166,6 +179,13 @@ int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma, void dma_get_memory_type(struct page *page, struct dma_memory_type *type); #else /* CONFIG_HAS_DMA */ +static inline int dma_alloc_iova(struct dma_iova_attrs *iova) +{ + return -EOPNOTSUPP; +} +static inline void dma_free_iova(struct dma_iova_attrs *iova) +{ +} static inline dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 877e43b39c06..0c8f51010d08 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -924,3 +924,47 @@ void dma_get_memory_type(struct page *page, struct dma_memory_type *type) type->type = DMA_MEMORY_TYPE_NORMAL; } EXPORT_SYMBOL_GPL(dma_get_memory_type); + +/** + * dma_alloc_iova - Allocate an IOVA space + * @iova: IOVA attributes + * + * Allocate an IOVA space for the given IOVA attributes. The IOVA space + * is allocated to the worst case when whole range is going to be used. + */ +int dma_alloc_iova(struct dma_iova_attrs *iova) +{ + struct device *dev = iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || !ops->alloc_iova) { + /* dma_map_direct(..) check is for HMM range fault callers */ + iova->addr = 0; + return 0; + } + + iova->addr = ops->alloc_iova(dev, iova->size); + if (dma_mapping_error(dev, iova->addr)) + return -ENOMEM; + + return 0; +} +EXPORT_SYMBOL_GPL(dma_alloc_iova); + +/** + * dma_free_iova - Free an IOVA space + * @iova: IOVA attributes + * + * Free an IOVA space for the given IOVA attributes. 
+ */ +void dma_free_iova(struct dma_iova_attrs *iova) +{ + struct device *dev = iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || !ops->free_iova || !iova->addr) + return; + + ops->free_iova(dev, iova->addr, iova->size); +} +EXPORT_SYMBOL_GPL(dma_free_iova); From patchwork Tue Jul 2 09:09:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2D6BC41513 for ; Tue, 2 Jul 2024 09:10:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 414766B007B; Tue, 2 Jul 2024 05:10:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 375346B009A; Tue, 2 Jul 2024 05:10:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 23EC26B009B; Tue, 2 Jul 2024 05:10:15 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 058126B007B for ; Tue, 2 Jul 2024 05:10:14 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 6F534141AC8 for ; Tue, 2 Jul 2024 09:10:14 +0000 (UTC) X-FDA: 82294241148.27.F1D6701 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf11.hostedemail.com (Postfix) with ESMTP id 252394001D for ; Tue, 2 Jul 2024 09:10:11 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=PqzjMLOj; spf=pass (imf11.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911391; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=g9SErVqLGwmT74XKztdMcaMW7Z/xOj3VfCtyRrAV9cY=; b=u1dh3ZsupD5IgoJAh3eDGQuWAJu5UiKH3zHVd+HvtqGnMbmEafPXf8XR5mREi1NyzZ1yNf yOuB+uzTgGfaFLRi4Ka+97uJEgj3Uq7q+BdrUwcZurOo6YTrgUXh/6nGOnwFS8mythVMFa bTESKutDBglCYRYYEedsz5yBFWenBR4= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911391; a=rsa-sha256; cv=none; b=pgxhAb9VI3E166AnVx4ywxKxVA+YwAsjwThFSFnm+8rpLII+F8A0MhJ+Y1wPwKq6rYwgSX aNaUTcDFvOZ3lCwbzAsWgFuVT3xREiYITlmmjzxKpt6dDBi7hCtyX02a1YJS+Emg4jBuSL 7w1Hw5XIODevA1awwcW35lWna+xd50Q= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=PqzjMLOj; spf=pass (imf11.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id C12ACCE1CFA; Tue, 2 Jul 2024 09:10:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9DCD7C4AF0C; Tue, 2 Jul 2024 09:10:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; 
s=k20201202; t=1719911408; bh=I7mD3dENF4HZBNBumoLyPjANfOtz0e/DAq7U9sRqLk0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PqzjMLOjXeye9zq/mHzTj6Cl4RxSudqszFKsPx54AmE/tJ+OHgWu5ivCnN1LbBRUN 1Yv5vxJ4cXclv2ETPV3ZWWpK913fTcXr7yVC/nJo7AtJ6rnu3SJBTnF9+JY89SOEvd TYkrX534Vg7HQIcEHujxARKgvk/nUOly8dQEFeqHVLZriAax5EaH09H4nUGZ/y8Loi vDIjzYbeVRyYivHIMQkPD/nstZnUBJhmQWQx2JPLSSZymUewhzPZooZ7yJC7ufM6es ONXggsrnv88nbqTGWDYThjoAZxFxhMpavbb+IIZpeIVu2VNBfqxKFRLDEd3u7X+sYd zohrHgWstJ+9w== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanvosky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 03/18] dma-mapping: check if IOVA can be used Date: Tue, 2 Jul 2024 12:09:33 +0300 Message-ID: <4c479ac482c3bd123a5f999fdff46454a7faa905.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: 252394001D X-Stat-Signature: 8it5o7xcuco93ci8pneu18jbgc7u4g7w X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1719911411-963374 X-HE-Meta: U2FsdGVkX1/LAvc1gulmwHuJx92z1d+iIoZZ3i+LXy+TW20zPy20Tyvc1lpI1Rm9kWjCGZj9mXP4PR/8QjorsrIJ3ZjfWxWVFkIrF6Ld4hzn5S9BtAUOr875pGs/qubL/9KEEX80P5R8bj6WAiEfzUp0T9032pshwpJzBR/UBY7pQUPWPh1o5Pv72bEW9mn069R0UvMz0d3fZSp3HS7saJxIHILBjwhmV1HMHhVLTrHh8KYa0v0SkUBwI21GaMyTv8UMHP2Q/C1wxE2sLuDjHWIylloisFjzZVTr3nPq7561xT3iN7MdtXjybhxwotrv54G/K7gzcDsXGMvhkeGSFj6zKtLvD7k6i2DF9VYQVZty8LzG1+OM6mvLa4976bCnVOHovYuMoyqyh38PvytE1+Vnvgxm5zsZZfKWx+1ht4DmtL8I5RmKzyCfGijPxxFZtJqrCfPj7lierpsvWkj7mlboFSiiqYDfdyB0uu2ZT5ttTEIbJoNNhVVKsg0FqiOQJgRBSclorNGX0AHSp8PT6uzFvyniedcDowDcTsTc+qpJO/vNgAwr5M542IzJN0/cIczCbg5wZa22p2Au0j/JiS1Zxyu7HZiczJ1rWy81id6/h1eMkR0zsJrm2iOjGHjT3THqJ6VwJijaeXK4sSihaKCcZlu7ELaTjKmihJ8i8rRKO2dPemS5tqgePHStLxXGSVfoi2h7gSxnPmCTiSLHvpbAyQh794tO9fBtFtLslAtGcq1XBxQr5/C4aCWe7df7CLNF1F48bp+sO+YhwQTvGspvyXuAs+xSNwqN9HPzcucQYtfbb0TXe6ILizWQwig7xH9LI3Swth7OykwM5Xn1JRY8aWHIu/15uOp9H1pNlmcq+Q5VjO8WwJkgVqthq1jo0iQjZE10B3v2eA/mQrePgwwT2rHekA/gsLCxCumlArymP95jmK0sXhKSs05w4hLbqyyR13hp6i9M9LdJPc6 brk/flko ipwNSAqMeUdVofPbchj+I7czu7BbsOJEskUR10PxYCZzePrtixKp5vXtnB6QS5lpJTfzx3KfcDwvz77g3d61KqnqaM8bv/UWVcWKwhGbDteAG5uRkxLURyk4MOuqqoQAIk8684XdCWhBviJ8+01Z4FLwOaTLyUG+tcz+rR587v/kGzBPQXHlhypDtm5D3aMqY474ltklqh5tc+Do= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanvosky Provide a way to the callers to see if IOVA can be used for specific DMA memory type. 
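As a hedged sketch (not taken from the series itself), this check is meant to be combined with dma_get_memory_type() from the first patch: query the memory type once for the whole range, then ask whether the IOVA path applies. my_range_can_use_iova() is a hypothetical helper and "attrs" is assumed to have been filled by an earlier dma_alloc_iova() call.

#include <linux/dma-mapping.h>

static bool my_range_can_use_iova(struct dma_iova_attrs *attrs,
                                  struct page *first_page, size_t len)
{
        struct dma_memory_type type;
        struct dma_iova_state state = {
                .iova = attrs,
                .type = &type,
        };

        /* One query is enough: the whole range shares the same memory type. */
        dma_get_memory_type(first_page, &type);

        return dma_can_use_iova(&state, len);
}

A false result means the caller falls back to the existing dma_map_page()/dma_map_sg() path.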
Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 13 ------------- drivers/pci/p2pdma.c | 4 ++-- include/linux/dma-map-ops.h | 21 +++++++++++++++++++++ include/linux/dma-mapping.h | 10 ++++++++++ kernel/dma/mapping.c | 32 ++++++++++++++++++++++++++++++++ 5 files changed, 65 insertions(+), 15 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 43520e7275cc..89e34503e0bb 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -597,19 +597,6 @@ static int iova_reserve_iommu_regions(struct device *dev, return ret; } -static bool dev_is_untrusted(struct device *dev) -{ - return dev_is_pci(dev) && to_pci_dev(dev)->untrusted; -} - -static bool dev_use_swiotlb(struct device *dev, size_t size, - enum dma_data_direction dir) -{ - return IS_ENABLED(CONFIG_SWIOTLB) && - (dev_is_untrusted(dev) || - dma_kmalloc_needs_bounce(dev, size, dir)); -} - static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir) { diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 4f47a13cb500..6ceea32bb041 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -964,8 +964,8 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) } EXPORT_SYMBOL_GPL(pci_p2pmem_publish); -static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap, - struct device *dev) +enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap, + struct device *dev) { enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED; struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider; diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 23e5e2f63a1c..b52e9c8db241 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -9,6 +9,7 @@ #include #include #include +#include struct cma; struct iommu_ops; @@ -348,6 +349,19 @@ static inline bool dma_kmalloc_needs_bounce(struct device *dev, size_t size, return !dma_kmalloc_safe(dev, dir) && !dma_kmalloc_size_aligned(size); } +static inline bool dev_is_untrusted(struct device *dev) +{ + return dev_is_pci(dev) && to_pci_dev(dev)->untrusted; +} + +static inline bool dev_use_swiotlb(struct device *dev, size_t size, + enum dma_data_direction dir) +{ + return IS_ENABLED(CONFIG_SWIOTLB) && + (dev_is_untrusted(dev) || + dma_kmalloc_needs_bounce(dev, size, dir)); +} + void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs); void arch_dma_free(struct device *dev, size_t size, void *cpu_addr, @@ -514,6 +528,8 @@ struct pci_p2pdma_map_state { enum pci_p2pdma_map_type pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, struct scatterlist *sg); +enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap, + struct device *dev); #else /* CONFIG_PCI_P2PDMA */ static inline enum pci_p2pdma_map_type pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, @@ -521,6 +537,11 @@ pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev, { return PCI_P2PDMA_MAP_NOT_SUPPORTED; } +static inline enum pci_p2pdma_map_type +pci_p2pdma_map_type(struct dev_pagemap *pgmap, struct device *dev) +{ + return PCI_P2PDMA_MAP_NOT_SUPPORTED; +} #endif /* CONFIG_PCI_P2PDMA */ #endif /* _LINUX_DMA_MAP_OPS_H */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 673ddcf140ff..9d1e020869a6 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -100,6 +100,11 @@ 
struct dma_iova_attrs { unsigned long attrs; }; +struct dma_iova_state { + struct dma_iova_attrs *iova; + struct dma_memory_type *type; +}; + #ifdef CONFIG_DMA_API_DEBUG void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); void debug_dma_map_single(struct device *dev, const void *addr, @@ -178,6 +183,7 @@ int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma, size_t size, struct sg_table *sgt); void dma_get_memory_type(struct page *page, struct dma_memory_type *type); +bool dma_can_use_iova(struct dma_iova_state *state, size_t size); #else /* CONFIG_HAS_DMA */ static inline int dma_alloc_iova(struct dma_iova_attrs *iova) { @@ -319,6 +325,10 @@ static inline void dma_get_memory_type(struct page *page, struct dma_memory_type *type) { } +static inline bool dma_can_use_iova(struct dma_iova_state *state, size_t size) +{ + return false; +} #endif /* CONFIG_HAS_DMA */ #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 0c8f51010d08..9044ee525fdb 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -968,3 +968,35 @@ void dma_free_iova(struct dma_iova_attrs *iova) ops->free_iova(dev, iova->addr, iova->size); } EXPORT_SYMBOL_GPL(dma_free_iova); + +/** + * dma_can_use_iova - check if the device type is valid + * and won't take SWIOTLB path + * @state: IOVA state + * @size: size of the buffer + * + * Return %true if the buffer can be mapped through the IOVA path, %false if the + * caller must fall back to the regular DMA mapping path (e.g. swiotlb bounce + * buffering or direct mapping). + */ +bool dma_can_use_iova(struct dma_iova_state *state, size_t size) +{ + struct device *dev = state->iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + struct dma_memory_type *type = state->type; + enum pci_p2pdma_map_type map; + + if (is_swiotlb_force_bounce(dev) || + dev_use_swiotlb(dev, size, state->iova->dir)) + return false; + + if (dma_map_direct(dev, ops) || !ops->alloc_iova) + return false; + + if (type->type == DMA_MEMORY_TYPE_P2P) { + map = pci_p2pdma_map_type(type->p2p_pgmap, dev); + return map == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE; + } + + return type->type == DMA_MEMORY_TYPE_NORMAL; +} +EXPORT_SYMBOL_GPL(dma_can_use_iova); From patchwork Tue Jul 2 09:09:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719173 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6BCCBC30658 for ; Tue, 2 Jul 2024 09:10:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E794C6B00AA; Tue, 2 Jul 2024 05:10:36 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E28876B00AB; Tue, 2 Jul 2024 05:10:36 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BDB746B00AC; Tue, 2 Jul 2024 05:10:36 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 9E85B6B00AA for ; Tue, 2 Jul 2024 05:10:36 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 4F84681FD2 for ; Tue, 2 Jul 2024 09:10:36 +0000 (UTC) X-FDA: 82294242072.19.BE052BD Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf04.hostedemail.com (Postfix)
with ESMTP id 94FEA40013 for ; Tue, 2 Jul 2024 09:10:34 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="YYKS1Ot/"; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911422; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Sg8VrrbO63E4vxe03RzTRxr3bmVdW29o9ynUGItU87Y=; b=TE7nVyka9jHANOCuoqRprWqJlHSbyXcpB/K1FAJ4k/E9mLNZtXtHLFsBz5+y5xHpGcp3Yc HNHxd9MUs8huINfYqOL9FkSbIWOZvaup+5/X9MHK9KZ9IBqDyR36O7+oCJ5HZGW3A34KXf Grd6voMRFNe58UkLff1f6XSppHImISQ= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="YYKS1Ot/"; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911422; a=rsa-sha256; cv=none; b=XZfE6rG58LjC6tnJ4+eTo30ClPks0IKOnFAzU0Jmx4ILP5wY3t/GmyrurCimvUh9ct5oq2 DZx5JSA2jpfSmEGgTi2xGlwXJmFo58Hur8kBjck8+67/7hQw94RFT1pnXXdjRKClTRFfRR uS0JqgHXksakhSkVSZERfTws2Moyrg4= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id C632061A0C; Tue, 2 Jul 2024 09:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AD899C4AF0A; Tue, 2 Jul 2024 09:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911433; bh=0ck5rd5jPLMOP1nHX+50lNZNnmmRrxabk/SQIbsa6U4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YYKS1Ot/toIO1bChF3hjNIXQnLRGAFZW60cYZGmLLcwlJGabkFTKyyB9S4UjzMRw4 OWdB+gcPZfiFupAaORoIq2eLPt1FH+wTYrENPy5qAeVHARSxgi1nIntNE+I24Dz9VP HGwRhb2VAiNoV6XRMQ24UVg2ee/gd+HohgenhbH1Ffqm3bb0/puHq59LU7IGpEL/vk sVPeLGHK/A9Eqt9VnWOf9HrTGNvjmQUyWmuLi3tGXthIBjVPGtIFwFF4zX749XQYhs NGGEjZMQlBISbiHromVsoVz6nukh5hle9SkHC3Rj4ksdesgD1SzycXc0OVeUTNAhcG ADXWSXiEd0N1A== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 04/18] dma-mapping: implement link range API Date: Tue, 2 Jul 2024 12:09:34 +0300 Message-ID: <8944a1211b243fed1234a56bc8004a11dbf85a87.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 94FEA40013 X-Stat-Signature: rcqm86kw5u6x8ayd77pcym6991rj8y6o X-Rspam-User: X-HE-Tag: 1719911434-996644 X-HE-Meta: 
U2FsdGVkX19f9j1oWHTe05mEVAwLyL45SnrsONoj6lyv+lCpPXsC4SnriTefe+WMWj7VcSpeQN+PwS1iVVENVpXg2xna8TgR4TIgFSLNcvWOD9Lqrdn6dfk8VEDW+W5hDW37XG0bL99nHH8qsZvFBxKR6oxTXvjvwUxt80jv+8AlQ8IRPZ+4u5fUUlFLxjuevbeGMdbNHzEEFiWmDLPKhwfELwspz9grDHGpqZ2I1c9vU+H5yOAdmxk6+ey5ygmD/EvQEw5Nd1aj5ZH0AsOCEfq1zZ77rm62XtY8Jtj2T0C+0TzVlI9VukvKXusx3tIvf03ynVMLEAJaKTCM8diKjZj+nSr+H26dcO6YjuBCl9UXMDv4OTX4cu3qEOn+mUCxo1LhY5iSXsOKvESJyiSmbJHGwD+40R0n0Yih11Bp+aobk2f7zkwG4dWusZs7rll3zcNPyJvTRxGRGgBFFlfVfkvPEpk/UbTxfamrKBgkNta4i3iPlcw2KGF+xUoX65TTeyGghNbeg2hZnhqPCDGV9tVl++3VHB8yLPpk4PbM92yRBr/lqqZpdzRiUFcCJjYWKIWCHvzR7cnJ7ULDGGokJ9h7N9JL/gNhvMO+E1Xva9ZmITwQ+dcK7oGgeId8p6HpvBnYLAvfesz8eszhRkHsBvMRvD/tZh0UKw93S9dPdgsapl2PEV20+YXPunAPVyqDnmrxvLS+dc3nSRysZ9Nb5Owye5JuNqBM1e4ugFA9nU9r5955qoSMFi4qie+VLORyXsAcZAdCgws1zXV7b0Mgac8y5M9M8qPrBv6dJKqbVSwOffA7QXffIHPjMdEmD8eVc8jyuVksAiFdUCzk9p9FVPLi/+6/wq2VBnYClBU4GzDGX/MVLr0mgDNFcVNZRFt8pxryqyC+6MOs1zWNIYKNe6XKR6LykzKgR7r8R5p7NeqeSgQ5XzrshZWlynMGzSkXbgjtvpaLE2y2OdMdL57 jb3pu5nR 1MdJAESU6CVTK1N4JXJB/RcNWE2BoX75vyze0flcIkk8ZIimYDzE7W+xWUrslpgepKbCRl377gpAANuP0/B/ptmV6nq5l/eaSoUBrNO0FyD4YKzeQPCZBIykAOeYdSkGRSeA1J7Y8g5CYTEdU+iAhUmBD+k5jHt8DVYKZU+if3s9YmEFZmOYzJOOMgoA9AjQjUlWl8JJJNQzYHRc= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Introduce new DMA APIs to perform DMA linkage of buffers in layers higher than DMA. In proposed API, the callers will perform the following steps: dma_alloc_iova() if (dma_can_use_iova(...)) dma_start_range(...) for (page in range) dma_link_range(...) dma_end_range(...) else /* Fallback to legacy map pages */ dma_map_page(...) Signed-off-by: Leon Romanovsky --- include/linux/dma-map-ops.h | 6 +++ include/linux/dma-mapping.h | 22 +++++++++++ kernel/dma/mapping.c | 78 ++++++++++++++++++++++++++++++++++++- 3 files changed, 105 insertions(+), 1 deletion(-) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index b52e9c8db241..4868586b015e 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -90,6 +90,12 @@ struct dma_map_ops { dma_addr_t (*alloc_iova)(struct device *dev, size_t size); void (*free_iova)(struct device *dev, dma_addr_t dma_addr, size_t size); + int (*link_range)(struct dma_iova_state *state, phys_addr_t phys, + dma_addr_t addr, size_t size); + void (*unlink_range)(struct dma_iova_state *state, + dma_addr_t dma_handle, size_t size); + int (*start_range)(struct dma_iova_state *state); + void (*end_range)(struct dma_iova_state *state); }; #ifdef CONFIG_DMA_OPS diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 9d1e020869a6..c530095ff232 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -11,6 +11,7 @@ #include #include #include +#include /** * List of possible attributes associated with a DMA mapping. 
The semantics @@ -103,6 +104,8 @@ struct dma_iova_attrs { struct dma_iova_state { struct dma_iova_attrs *iova; struct dma_memory_type *type; + struct iommu_domain *domain; + size_t range_size; }; #ifdef CONFIG_DMA_API_DEBUG @@ -184,6 +187,10 @@ int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma, void dma_get_memory_type(struct page *page, struct dma_memory_type *type); bool dma_can_use_iova(struct dma_iova_state *state, size_t size); +int dma_start_range(struct dma_iova_state *state); +void dma_end_range(struct dma_iova_state *state); +int dma_link_range(struct dma_iova_state *state, phys_addr_t phys, size_t size); +void dma_unlink_range(struct dma_iova_state *state); #else /* CONFIG_HAS_DMA */ static inline int dma_alloc_iova(struct dma_iova_attrs *iova) { @@ -329,6 +336,21 @@ static inline bool dma_can_use_iova(struct dma_iova_state *state, size_t size) { return false; } +static inline int dma_start_range(struct dma_iova_state *state) +{ + return -EOPNOTSUPP; +} +static inline void dma_end_range(struct dma_iova_state *state) +{ +} +static inline int dma_link_range(struct dma_iova_state *state, phys_addr_t phys, + size_t size) +{ + return -EOPNOTSUPP; +} +static inline void dma_unlink_range(struct dma_iova_state *state) +{ +} #endif /* CONFIG_HAS_DMA */ #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 9044ee525fdb..089b4a977bab 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -989,7 +989,8 @@ bool dma_can_use_iova(struct dma_iova_state *state, size_t size) dev_use_swiotlb(dev, size, state->iova->dir)) return false; - if (dma_map_direct(dev, ops) || !ops->alloc_iova) + if (dma_map_direct(dev, ops) || !ops->alloc_iova || !ops->link_range || + !ops->start_range) return false; if (type->type == DMA_MEMORY_TYPE_P2P) { @@ -1000,3 +1001,78 @@ bool dma_can_use_iova(struct dma_iova_state *state, size_t size) return type->type == DMA_MEMORY_TYPE_NORMAL; } EXPORT_SYMBOL_GPL(dma_can_use_iova); + +/** + * dma_start_range - Start a range of IOVA space + * @state: IOVA state + * + * Start a range of IOVA space for the given IOVA state. + */ +int dma_start_range(struct dma_iova_state *state) +{ + struct device *dev = state->iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (!ops->start_range) + return 0; + + return ops->start_range(state); +} +EXPORT_SYMBOL_GPL(dma_start_range); + +/** + * dma_end_range - End a range of IOVA space + * @state: IOVA state + * + * End a range of IOVA space for the given IOVA state. + */ +void dma_end_range(struct dma_iova_state *state) +{ + struct device *dev = state->iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (!ops->end_range) + return; + + ops->end_range(state); +} +EXPORT_SYMBOL_GPL(dma_end_range); + +/** + * dma_link_range - Link a range of IOVA space + * @state: IOVA state + * @phys: physical address to link + * @size: size of the buffer + * + * Link a range of IOVA space for the given IOVA state. 
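+ * + * As shown in the proposed calling sequence, dma_start_range() is called before the + * first link; each successful call maps @phys at @state->iova->addr + @state->range_size + * and then advances @state->range_size by @size, so consecutive calls pack pages back + * to back inside the IOVA allocated by dma_alloc_iova().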
+ */ +int dma_link_range(struct dma_iova_state *state, phys_addr_t phys, size_t size) +{ + struct device *dev = state->iova->dev; + dma_addr_t addr = state->iova->addr + state->range_size; + const struct dma_map_ops *ops = get_dma_ops(dev); + int ret; + + ret = ops->link_range(state, phys, addr, size); + if (ret) + return ret; + + state->range_size += size; + return 0; +} +EXPORT_SYMBOL_GPL(dma_link_range); + +/** + * dma_unlink_range - Unlink a range of IOVA space + * @state: IOVA state + * + * Unlink a range of IOVA space for the given IOVA state. + */ +void dma_unlink_range(struct dma_iova_state *state) +{ + struct device *dev = state->iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + ops->unlink_range(state, state->iova->addr, state->range_size); +} +EXPORT_SYMBOL_GPL(dma_unlink_range); From patchwork Tue Jul 2 09:09:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7826EC3064D for ; Tue, 2 Jul 2024 09:10:23 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 044386B00A0; Tue, 2 Jul 2024 05:10:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F36206B00A1; Tue, 2 Jul 2024 05:10:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DAF806B00A2; Tue, 2 Jul 2024 05:10:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id B89666B00A0 for ; Tue, 2 Jul 2024 05:10:22 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 5565BA2F83 for ; Tue, 2 Jul 2024 09:10:22 +0000 (UTC) X-FDA: 82294241484.07.E91162E Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf04.hostedemail.com (Postfix) with ESMTP id 1E90C40012 for ; Tue, 2 Jul 2024 09:10:19 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=DERWykst; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911390; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=9l3RqBQ3wmhiS4Devu11UR9cNpL8lZMERvC1jOBPYeU=; b=2EK0nWsGXAAPt22JhMkBagBYlycxq2ZdbR3RzsV+UUWAahCqw92eeKg6ec0Y1ho2egkVzi HJNrQv5Vs734VFC8JiqfR9fmXBZI2nuKEPVMvvtLSDZDOSPnqvayHjuRIsqIqrEi5oqafE mtetU+x+ROxoy0wi9NQNw2XlVskZw9c= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=DERWykst; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911390; a=rsa-sha256; cv=none; 
b=bEQwVRkwcdy+FFVrdPVAuQNmkhRSq7MfCd4hbq8WSMMPwIUivQN9ofOxbdukqMZ+AqghXO gVpDF8Cb6FKWeLS5FgFN3TZz1eQJCrh7i+5wgcxnwzTPgM4SYyaj5vfVqOFGDEGJWaaz1T X62K6g0r0BILrK8xaVBwF6Up9J3AO2E= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 1FC19CE1CF5; Tue, 2 Jul 2024 09:10:17 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E8E09C4AF0E; Tue, 2 Jul 2024 09:10:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911416; bh=r8LbBOjlbH4F69KVmFsNaHjJ/rODIFYYIWRpcAsJOVI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DERWykstv7XxGRlK5d+9zV0EQM0MK52hTvvioFtFjlLgY6y7WByKUa78ayxSb4kv4 I+xUVII03CJetjfBvTMv9CMMhQJfzVzLz89kaGypwXqqSFHhbw4Gb2EbFJGDnfX17l 0aC4Byhh+gWLSP7xqntoErbx/0yPnHiKuF5CB/Xs9/iSM9GN7ebAkO6YOTDfK+Sr8y dJtkIzyXzi0gzX+IoPLVGBHI3F2b9pwZ+KO31UXjhbL9I3Iuq75SeGIsrKaVl1FT3q sHBmNAh3u7+dpLJB9+kZkDVxWvxbuHf4tfrlHP2fR8bQCt2MPqKJJkSIF05sdFYODC r+/wFzkIKdacg== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 05/18] mm/hmm: let users to tag specific PFN with DMA mapped bit Date: Tue, 2 Jul 2024 12:09:35 +0300 Message-ID: <769b6266d5e8638322f25550dd01a85515bf9d08.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 1E90C40012 X-Stat-Signature: jmijq75ffp4iwo734a4bo7fqjjfcfpej X-Rspam-User: X-HE-Tag: 1719911419-360060 X-HE-Meta: U2FsdGVkX18pBdWrfJ76yAxlukYpflm6e36IiiI0v9JKBLCUJLp8CP2W5iAAvvFw3uDTYXpR3ece4C2S8ysgojX1jBFvCK5VHKvxBPL/eworGKQNkdo1ccpKda4nbHUB17UuScoUforw7nQW07MzlG9xrfQuR73uHL+qGdoqmpPagED41oaetOb6iVn+6iF8x2+kPhwgmKE1b6ESItczXE+O6C+6rSpdtsUdYj13o8KYdeqt5GMJvhZXmH59+l9BC/Ybr+McT2zt8maGs6nYl0DBUErmB17x1kT4igTEVTLp2KGPGLdBdTo7/BBPNTaqPjyuCq/jFR7swHJq4Z+qCahEZHxdQg/Z0DjVc9RIgpUMT/5GHHc0qg8aLJX789TzGxz1ckxyVcsoKxlTuX4GbRUnSTCSyAvkWEg/bm7+ogvaJd5qCHJW7ZC4T69bCRpv/5RKIHMIQLOjGVpot8PUi1tDi3Az84+0DgdEdzlIWinl7cj9QK0R4nR193b2g6nbJFugDMx0O7inxN+BNALpr6H5cic2ipz3MLxUi6PfDGUK6lCtj+CvEF4odOPypG3DUyWUZaBagRHZbG0/wiHDrprVJTkX6CFu2SlABbWn7dKSwjuzPZCHSVV2YqRkShsjoWgu2oCgMeIOvAjEP2GhELo2rK7+/IeeLAXCj+p+5zaQrDia1SlDj2zrE6vts69oUI0br74p+PJbQMEi4Hv04Ocb6XJTJpNP5QEEKvizw2HioWNA5DdcIco4MtbIptWq+CwTpgIPrZKdWei9sqS/VRilmei/dM7hAfcTBY/QaVYoDjCJ9mWmlK+nG/YmXgBUS5dG6L7ldPtT7/Xsq2zNaNrQx8MzqhLHXvnBYDbpu15A/MThm2AE0YJd3u8k0gQZG/4sxUxZdo3KvwyUWUUxmFPuh7tQJQ9ppuO14bH50b3pSTU6dXdehlE3c8vvwJvQ+xaSlxJV+7cw2fsZchN DFDCC/wE kzwocMl5g2tPab9knSdvBvLn9NgLVRGDhdNcjeM+rSl5qWEXwwP08id18M3wu9S6Z10mj4REUeYpOLjCV8QHdzrTDQEE7Ua11w06fKObLLNq2ye08fya0SP2K/QdgxLyI7HtkwQ98vwkPeMTLER17DonKn6RKgfwXR+R5Z8hoO0MG1NriJ9LiH5q1QnNqq7UYvu1Oc23LgzOaJ3k= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Introduce new sticky flag (HMM_PFN_DMA_MAPPED), which isn't overwritten by HMM range fault. 
Such flag allows users to tag specific PFNs with information if this specific PFN was already DMA mapped. Signed-off-by: Leon Romanovsky --- include/linux/hmm.h | 4 ++++ mm/hmm.c | 34 +++++++++++++++++++++------------- 2 files changed, 25 insertions(+), 13 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 126a36571667..2999697db83a 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,8 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation + * to mark that page is already DMA mapped * * On input: * 0 - Return the current state of the page, do not fault it. @@ -36,6 +38,8 @@ enum hmm_pfn_flags { HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1), HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2), HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), + /* Sticky lag, carried from Input to Output */ + HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7), HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), /* Input flags */ diff --git a/mm/hmm.c b/mm/hmm.c index 93aebd9cc130..03aeb9929d9e 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -44,8 +44,10 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end, { unsigned long i = (addr - range->start) >> PAGE_SHIFT; - for (; addr < end; addr += PAGE_SIZE, i++) - range->hmm_pfns[i] = cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++) { + range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + range->hmm_pfns[i] |= cpu_flags; + } return 0; } @@ -202,8 +204,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, return hmm_vma_fault(addr, end, required_fault, walk); pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { + hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + hmm_pfns[i] |= pfn | cpu_flags; + } return 0; } #else /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -236,7 +240,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (required_fault) goto fault; - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED; return 0; } @@ -253,14 +257,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, cpu_flags = HMM_PFN_VALID; if (is_writable_device_private_entry(entry)) cpu_flags |= HMM_PFN_WRITE; - *hmm_pfn = swp_offset_pfn(entry) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | swp_offset_pfn(entry) | cpu_flags; return 0; } required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (!required_fault) { - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED; return 0; } @@ -304,11 +308,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, pte_unmap(ptep); return -EFAULT; } - *hmm_pfn = HMM_PFN_ERROR; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | HMM_PFN_ERROR; return 0; } - *hmm_pfn = pte_pfn(pte) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | pte_pfn(pte) | cpu_flags; return 0; fault: @@ -448,8 +452,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, } pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - for (i = 0; i < npages; ++i, ++pfn) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; i < npages; ++i, ++pfn) { + hmm_pfns[i] &= 
HMM_PFN_DMA_MAPPED; + hmm_pfns[i] |= pfn | cpu_flags; + } goto out_unlock; } @@ -507,8 +513,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask, } pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT); - for (; addr < end; addr += PAGE_SIZE, i++, pfn++) - range->hmm_pfns[i] = pfn | cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++, pfn++) { + range->hmm_pfns[i] &= HMM_PFN_DMA_MAPPED; + range->hmm_pfns[i] |= pfn | cpu_flags; + } spin_unlock(ptl); return 0; From patchwork Tue Jul 2 09:09:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719170 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80DD0C30658 for ; Tue, 2 Jul 2024 09:10:25 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 947C26B00A1; Tue, 2 Jul 2024 05:10:24 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8F5076B00A2; Tue, 2 Jul 2024 05:10:24 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 795A36B00A3; Tue, 2 Jul 2024 05:10:24 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 589C46B00A1 for ; Tue, 2 Jul 2024 05:10:24 -0400 (EDT) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 092EA121FA5 for ; Tue, 2 Jul 2024 09:10:24 +0000 (UTC) X-FDA: 82294241568.26.62D7E5C Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf18.hostedemail.com (Postfix) with ESMTP id 572F71C000F for ; Tue, 2 Jul 2024 09:10:22 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=B0P6xSie; spf=pass (imf18.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911399; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=rbchBOHSC9nt77VwqMfCPdOcuY7N/xjcpgjKjtqVxUQ=; b=KZtq2Xl86fNlTMzj9ajrS6OFqbYGjkqHkwuo5uo5oMwOXOm6htI1FtG6hDWLbsw+k2JyLn 6+LBL5wAmyrSqET/POb2kzm328MyRr/pMJBz+7/TSXd6g6oVljfkWkz0/4/953BCt7ZGga xemvYYlEjAkbu6cx1w6QsFQeFaVFxi8= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911399; a=rsa-sha256; cv=none; b=r596BU4TDNOjWnblOjrcxejrSfzwn/VuQKQewTBkpUiqf/iYoflfFiU3qb4PE1Q0F2SXrn oXnuwIWNsqKW1S+AqjPccfw7Fo3EueB9Te3m//wca5KwR0/6tigEqZk4FvgZrQ8STnw4Uj bQMA91ncycq32H8ILdG4HxI3QSxCgnY= ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=B0P6xSie; spf=pass (imf18.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 6611C61A2A; Tue, 2 Jul 2024 
09:10:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 26C0AC4AF0D; Tue, 2 Jul 2024 09:10:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911420; bh=SqRJkswAXpWAsD53KsFAbDwitxXp1HHK8jZehp+jNVQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=B0P6xSien3K1c31vVmNKC6qS649bWpjI5mYxXWH8emcmDGg2eigH8FA2DdzWikUeV OTGCsKzhgCxVn0yFExmtnvDeYvSdw03wWMH3YCkE0Is5H/+FrihVahL3vShxIXpt5H ekIE/hI3wHawgVD7zRlm3769EJWxwN6cjJq9LjWDY8CwX07zGC8TIt4ox3kCloi44h 8tByePaihSYoAZbPDlvArG+KbAuvnYoS6wLwWd3uqYtGr/qXHBBtoAFke7v4CagF12 +Ba7rdX8nQKBDA4nkUOmts95lMSEr2/fvOG5MxU0r8Xs/bA45+3GA81F9bXChbQUkP XrFGs/s9NsLMw== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 06/18] dma-mapping: provide callbacks to link/unlink HMM PFNs to specific IOVA Date: Tue, 2 Jul 2024 12:09:36 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Stat-Signature: qxe78ycsy9yt1x3nu4bdqotna9pfzxxj X-Rspamd-Queue-Id: 572F71C000F X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1719911422-92444 X-HE-Meta: U2FsdGVkX198PbgmRAh/hL+CiJGPekIJbYTKxOjlrmuY/pQfJ50v9IFJ/D60V96vt1xmLraSE1l/eOx74K2K0j8xf7JOyws67uVJdPKJybRjyLrPYjmP84XKmaw51XAUlLAAUM9embkAm1umtDO72lGzzDWdIhdcf/4KGmbWh04ZLkz0D5QR5ClGBd1Ikxatr8IDmRMdXj2TE/VVQLRbI38Ot1Kkq/hHFi068OEJAjPw8sB91pLT4m6vsqHjPobx97D52JiMMFOKnRaqgafWiayxoBsiyq3Hf8gzfZPjb7tEEddsgYZyEgjlx0gjXNi64Q6FZEYs51ZCFbv7kiB5gkVPjVF7AZcAtKfNZa7DU65kksJ8nTDQQxB1je1wqaGBAWfYlznFi1yl0DUmGJfe1Gfz0hnGkn7jnXIvtCaqYer3zlhjUuvPYvNwwyQotm1F/Ctl4F3xMKg1+pX+/9NCZ4TTO29VbTJmuCz5+hId5adHKeX6SVV4zXWy7CoI7Ak9XBH/AdhnIEODa9Pe+MmwDO8iAKydKnWO9Y611p5Ws+1J+ums/Wm7HRzgVHopKtwHE65BTrlEeyJXVHUDiDkIDPbun3+kAuE0FQKRPCxDP3zTaTk4GP1o3je/GCx49DpnqY2qdJ553wvSasFfuPccVXs2dtcKKntv4++N7ibSf16WLoaGz6+qYn3X9TLTpYPq+znti+oaue/KsWLaJ59oOQs7alm8bMtgWGp+3ntr0rLOckiEgzsGq9W/7u8Yf4iyz5lfJc1lMmVK5Na5QquhyTSoBg+up+3lY5E+PofhqQQHo8fH7jMjvI1MMFe2I6g469X7ai6xaDVZdIieZgryiz6nzRjRyqpS7Q+WqrPr67IsKu4k9my97Ei/9MtvDv2AFL5P7yg47Z77DRPvVAbOhSNLVjclVr6c8bbLCghCRga78R0dUt6MIwd8q1ZxVlU1pN1k/t/SJbdCT4SijHw aCJzr7Vx WyVVcHrd+gCf8hy3gIpj6/0p2U7TpEClJQ2KbRe3EmI2HfOVsvmE54iAbCkJeA7ARt3Ci3ANDW4DwBzfZiODVxsF3Af7YDQdAJWaVyL3PWXP9EPqFiZHQT2mQiG5Ozmxq+w15PCplvpQDo5FVYn3YZLfX4xArP6iAVHD8zNwgdEaN0kdbLyqcV12UObRCyS8AWUoNoMNXHY2gufs= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Introduce new DMA link/unlink API to provide a way for HMM users to link pages to already preallocated IOVA. 
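As a rough sketch only (none of this code is in the patch): an HMM user would walk the PFN array filled by hmm_range_fault() and link each page into the IOVA range pre-allocated with dma_alloc_iova(), one PAGE_SIZE slot per page. The helpers below are hypothetical and error unwinding is omitted; dma_hmm_link_page() and dma_hmm_unlink_page() are the functions added by this patch.

#include <linux/dma-mapping.h>
#include <linux/hmm.h>

static int my_link_hmm_range(struct dma_iova_attrs *iova, unsigned long *pfns,
                             unsigned long npages, dma_addr_t *dma_addrs)
{
        unsigned long i;

        for (i = 0; i < npages; i++) {
                /* dma_offset advances by one page per PFN */
                dma_addrs[i] = dma_hmm_link_page(&pfns[i], iova, i * PAGE_SIZE);
                if (dma_mapping_error(iova->dev, dma_addrs[i]))
                        return -EFAULT;
        }
        return 0;
}

static void my_unlink_hmm_range(struct dma_iova_attrs *iova, unsigned long *pfns,
                                unsigned long npages)
{
        unsigned long i;

        for (i = 0; i < npages; i++)
                dma_hmm_unlink_page(&pfns[i], iova, i * PAGE_SIZE);
}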
Signed-off-by: Leon Romanovsky --- include/linux/dma-mapping.h | 15 +++++ kernel/dma/mapping.c | 108 ++++++++++++++++++++++++++++++++++++ 2 files changed, 123 insertions(+) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index c530095ff232..2578b6615a2f 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -135,6 +135,10 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) int dma_alloc_iova(struct dma_iova_attrs *iova); void dma_free_iova(struct dma_iova_attrs *iova); +dma_addr_t dma_hmm_link_page(unsigned long *pfn, struct dma_iova_attrs *iova, + dma_addr_t dma_offset); +void dma_hmm_unlink_page(unsigned long *pfn, struct dma_iova_attrs *iova, + dma_addr_t dma_offset); dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, @@ -199,6 +203,17 @@ static inline int dma_alloc_iova(struct dma_iova_attrs *iova) static inline void dma_free_iova(struct dma_iova_attrs *iova) { } +static inline dma_addr_t dma_hmm_link_page(unsigned long *pfn, + struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + return DMA_MAPPING_ERROR; +} +static inline void dma_hmm_unlink_page(unsigned long *pfn, + struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ +} static inline dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 089b4a977bab..69c431bd89e6 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -16,6 +16,7 @@ #include #include #include +#include #include "debug.h" #include "direct.h" @@ -1076,3 +1077,110 @@ void dma_unlink_range(struct dma_iova_state *state) ops->unlink_range(state, state->iova->addr, state->range_size); } EXPORT_SYMBOL_GPL(dma_unlink_range); + +/** + * dma_hmm_link_page - Link a physical HMM page to DMA address + * @pfn: HMM PFN + * @iova: Preallocated IOVA space + * @dma_offset: DMA offset form which this page needs to be linked + * + * dma_alloc_iova() allocates IOVA based on the size specified by their use in + * iova->size. Call this function after IOVA allocation to link whole @page + * to get the DMA address. Note that very first call to this function + * will have @dma_offset set to 0 in the IOVA space allocated from + * dma_alloc_iova(). For subsequent calls to this function on same @iova, + * @dma_offset needs to be advanced by the caller with the size of previous + * page that was linked + DMA address returned for the previous page that was + * linked by this function. + */ +dma_addr_t dma_hmm_link_page(unsigned long *pfn, struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + struct device *dev = iova->dev; + struct page *page = hmm_pfn_to_page(*pfn); + phys_addr_t phys = page_to_phys(page); + bool coherent = dev_is_dma_coherent(dev); + struct dma_memory_type type = {}; + struct dma_iova_state state = {}; + dma_addr_t addr; + int ret; + + if (*pfn & HMM_PFN_DMA_MAPPED) + /* + * We are in this flow when there is a need to resync flags, + * for example when page was already linked in prefetch call + * with READ flag and now we need to add WRITE flag + * + * This page was already programmed to HW and we don't want/need + * to unlink and link it again just to resync flags. + * + * The DMA address calculation below is based on the fact that + * HMM doesn't work with swiotlb. + */ + return (iova->addr) ? 
iova->addr + dma_offset : + phys_to_dma(dev, phys); + + dma_get_memory_type(page, &type); + + state.iova = iova; + state.type = &type; + state.range_size = dma_offset; + + if (!dma_can_use_iova(&state, PAGE_SIZE)) { + if (!coherent && !(iova->attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(phys, PAGE_SIZE, iova->dir); + + addr = phys_to_dma(dev, phys); + goto done; + } + + ret = dma_start_range(&state); + if (ret) + return DMA_MAPPING_ERROR; + + ret = dma_link_range(&state, page_to_phys(page), PAGE_SIZE); + dma_end_range(&state); + if (ret) + return DMA_MAPPING_ERROR; + + addr = iova->addr + dma_offset; +done: + kmsan_handle_dma(page, 0, PAGE_SIZE, iova->dir); + *pfn |= HMM_PFN_DMA_MAPPED; + return addr; +} +EXPORT_SYMBOL_GPL(dma_hmm_link_page); + +/** + * dma_hmm_unlink_page - Unlink a physical HMM page from DMA address + * @pfn: HMM PFN + * @iova: Preallocated IOVA space + * @dma_offset: DMA offset form which this page needs to be unlinked + * from the IOVA space + */ +void dma_hmm_unlink_page(unsigned long *pfn, struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + struct device *dev = iova->dev; + struct page *page = hmm_pfn_to_page(*pfn); + struct dma_memory_type type = {}; + struct dma_iova_state state = {}; + const struct dma_map_ops *ops = get_dma_ops(dev); + + dma_get_memory_type(page, &type); + + state.iova = iova; + state.type = &type; + + *pfn &= ~HMM_PFN_DMA_MAPPED; + + if (!dma_can_use_iova(&state, PAGE_SIZE)) { + if (!(iova->attrs & DMA_ATTR_SKIP_CPU_SYNC)) + dma_direct_sync_single_for_cpu(dev, dma_offset, + PAGE_SIZE, iova->dir); + return; + } + + ops->unlink_range(&state, state.iova->addr + dma_offset, PAGE_SIZE); +} +EXPORT_SYMBOL_GPL(dma_hmm_unlink_page); From patchwork Tue Jul 2 09:09:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A6C99C30658 for ; Tue, 2 Jul 2024 09:10:31 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 39F456B00A6; Tue, 2 Jul 2024 05:10:31 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 34E7B6B00A7; Tue, 2 Jul 2024 05:10:31 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 23DD86B00A8; Tue, 2 Jul 2024 05:10:31 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 077026B00A6 for ; Tue, 2 Jul 2024 05:10:31 -0400 (EDT) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id A8B101403F7 for ; Tue, 2 Jul 2024 09:10:30 +0000 (UTC) X-FDA: 82294241820.29.8B64B24 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf04.hostedemail.com (Postfix) with ESMTP id 5636840007 for ; Tue, 2 Jul 2024 09:10:27 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=jzgab3Uf; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; 
s=arc-20220608; t=1719911407; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=/wSyz3lVz4kACp7GRUzUvVI1JSwAXJQSeDPkH1ecdos=; b=y4n8NGC90jBa/Ry+FVpNJCJnVy4aDAKJcU3dOOH1lLHiPMnoFi9zlpSXeVSLzqtlMpkDl4 RAxz0vMtBJo/b75llI4tSqdnwgqK8cJvUXHB6eG90/ca5u8TbEZuwBbET2gdqg4p6fg2hZ TmLrLRlTUYX63iJ3oqU2KVOnVf5lnv4= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911407; a=rsa-sha256; cv=none; b=dE52UmH8GQAz8YfsMK1vHYtZgiK8IVPCSwyCU8cv5Z5NVVt6w/J/1Stl6jzq4O6BKgza9U KFmd6/gsOfQYSQEQLoP9++/rizxkoiTj0SZwg+l6IqZdayzkIxnNPrlx/6XB45V/kh0joR H7eGQ/97PwhNZdk2moDdzW04dRuxDww= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=jzgab3Uf; spf=pass (imf04.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id A5E71CE1CE4; Tue, 2 Jul 2024 09:10:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3695BC116B1; Tue, 2 Jul 2024 09:10:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911425; bh=3LQD0/3Cm7R9Az4oi5cuZkbR6iiFWyUtiH8FYV5qU6g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jzgab3UfsAsewcAa24eobcDOjpYZ/7PPMau2CiYGMxN+FfxGb00wDLL3BI2ELXmGU moRg+6YKgbtiDf2JVUe5eQJT8XKaWX5rrUt19Vkg6yDcQ3EOq1SxCxg+IKYC8PD3l0 HS940fsiVLIaK9q8PMwxocD9VU27iN2gNzMGrNBw8q8UcVws7pN76iWryjdgzqisHm VXjMg0SkcQEtPyECwWKViptiiD9d+jmj3OBTAB4yD01wnJBA03aMuR0LwlA7S+3cjd 2MBCI6RclneZK65+IxUismvWUQZUYYw63kyUErE4LhTZye947A31418PnSxjVrmxDt Tsc8PYJOJc7iA== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 07/18] iommu/dma: Provide an interface to allow preallocate IOVA Date: Tue, 2 Jul 2024 12:09:37 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: 5636840007 X-Stat-Signature: p5dj7ezgm7zqya918dbjtu3eg1wxwq1t X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1719911427-664238 X-HE-Meta: 
U2FsdGVkX19rsdVkNfD15653DA8b18CygM9RuShFuUVnLqK0r9XLgtjcnMtL5ap2RllY64k32rzWpLrJaf8zreeit64n7sT7QEQHcBR1RUQqRHh+kAmRuW8Gtgy7+m/SoSqJZKzVC7BaI3TAktsdq2DZwRYBTNibYlwPrtIBNW0HvvLNOTsWdphzhoqu9ajflaYG9Wd26Ji61nHM/WbQk682Mw47H0u6uGo8MVc9r7OCFk2HFXXxl8lV8cBrUKmGHAuvJkpcs9MUHDkXW5FqhweGNulPdbe/KUS+hfqCCVrQ529Z2Sp0Lpp/YugO4Ld7Zn8ND6MwN6/uxCpsD8J7w5xTWq2mjcH7l4B6FEZ5q4vl4bPGAlGiioves+1kmdfpV+15+xDB9bSR/AAguTcAqVrtxYvsczuxxQeGW94nvJaLBMHBIkmTanFzNKAgh/BBSLwWFm34Gp+Cf8Er534I6sOMgPCMMrD0HnaSJe+qrxu/E5TJBSdT6mVNRMn11O9GSnKl0m1mOmokNZZHrm8KRVCd12FnVloYfrpajmrVSGdQyJhG7UOkWPBqiJoRBOn3Oxv+FXnjVrrq5PvkXG5qH2+X8LfixYlqHa90lLBTfccIqsKUukvoe99gs4ln9WnyA9BYAqJjwvbIaorpG1B7+ngbA/dYCIZ8QChqWcaG1KzLnOIoRr4L3z55sUoW0vmHTNyRIQ42lshYmSu074hiE7KYYoR0e3l9Mcv4+K5D/Ym2+hiqB2wwHmCxZT6Aau/xxRQVP7Vz8rcUflCS1s67PDFaO+k2/bje7nQAgNXxtQckSfz//WXnxNCAVKoRdUVgH05QHTCYj7YHj1xdbAre1TsaP6FzGyoXFDRJ9OAU0crnWAiLjP+zK/9wIGODAfhfpvYUC50vEIMFEXelpm7bFj9RbNrVMjpoWEvNqqxVCBWaiGc7Mb+88U56t30OK4kYbIgBd1EFtMKtvk/nenL 7XQkqNUk bDt3jmJ/ii0H1T+4mebbgmqxBtPDfxrUbXWiUwmaOxglkUMea6mIuw0LXBU54F+ZtQxSiJiol+uraHug/jR4aLbM/v1rjw/LrAzCyFl4EN9+jr76bco2xIbA/urEsNjjvRHB6De4kBDF+7rixgtyCr659MEVYyd++PbeNDgb8TXs4pofRKf3p/2/rxtV6vZRiRN7iYKcuZv3pUVI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Separate IOVA allocation to dedicated callback so it will allow cache of IOVA and reuse it in fast paths for devices which support ODP (on-demand-paging) mechanism. Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++---------- 1 file changed, 38 insertions(+), 12 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 89e34503e0bb..0b5ca6961940 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -357,7 +357,7 @@ int iommu_dma_init_fq(struct iommu_domain *domain) atomic_set(&cookie->fq_timer_on, 0); /* * Prevent incomplete fq state being observable. 
Pairs with path from - * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova() + * __iommu_dma_unmap() through __iommu_dma_free_iova() to queue_iova() */ smp_wmb(); WRITE_ONCE(cookie->fq_domain, domain); @@ -745,7 +745,7 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent, } } -static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain, +static dma_addr_t __iommu_dma_alloc_iova(struct iommu_domain *domain, size_t size, u64 dma_limit, struct device *dev) { struct iommu_dma_cookie *cookie = domain->iova_cookie; @@ -791,7 +791,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain, return (dma_addr_t)iova << shift; } -static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, +static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie, dma_addr_t iova, size_t size, struct iommu_iotlb_gather *gather) { struct iova_domain *iovad = &cookie->iovad; @@ -828,7 +828,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, if (!iotlb_gather.queued) iommu_iotlb_sync(domain, &iotlb_gather); - iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); + __iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); } static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, @@ -851,12 +851,12 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, size = iova_align(iovad, size + iova_off); - iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev); + iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) { - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); return DMA_MAPPING_ERROR; } return iova + iova_off; @@ -960,7 +960,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev, return NULL; size = iova_align(iovad, size); - iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev); + iova = __iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev); if (!iova) goto out_free_pages; @@ -994,7 +994,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev, out_free_sg: sg_free_table(sgt); out_free_iova: - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); out_free_pages: __iommu_dma_free_pages(pages, count); return NULL; @@ -1429,7 +1429,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, if (!iova_len) return __finalise_sg(dev, sg, nents, 0); - iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); + iova = __iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); if (!iova) { ret = -ENOMEM; goto out_restore_sg; @@ -1446,7 +1446,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, return __finalise_sg(dev, sg, nents, iova); out_free_iova: - iommu_dma_free_iova(cookie, iova, iova_len, NULL); + __iommu_dma_free_iova(cookie, iova, iova_len, NULL); out_restore_sg: __invalidate_sg(sg, nents); out: @@ -1707,6 +1707,30 @@ static size_t iommu_dma_max_mapping_size(struct device *dev) return SIZE_MAX; } +static dma_addr_t iommu_dma_alloc_iova(struct device *dev, size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t dma_mask = dma_get_mask(dev); + + size = iova_align(iovad, size); + return __iommu_dma_alloc_iova(domain, size, 
dma_mask, dev); +} + +static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova, + size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + struct iommu_iotlb_gather iotlb_gather; + + size = iova_align(iovad, size); + iommu_iotlb_gather_init(&iotlb_gather); + __iommu_dma_free_iova(cookie, iova, size, &iotlb_gather); +} + static const struct dma_map_ops iommu_dma_ops = { .flags = DMA_F_PCI_P2PDMA_SUPPORTED | DMA_F_CAN_SKIP_SYNC, @@ -1731,6 +1755,8 @@ static const struct dma_map_ops iommu_dma_ops = { .get_merge_boundary = iommu_dma_get_merge_boundary, .opt_mapping_size = iommu_dma_opt_mapping_size, .max_mapping_size = iommu_dma_max_mapping_size, + .alloc_iova = iommu_dma_alloc_iova, + .free_iova = iommu_dma_free_iova, }; void iommu_setup_dma_ops(struct device *dev) @@ -1773,7 +1799,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, if (!msi_page) return NULL; - iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); + iova = __iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); if (!iova) goto out_free_page; @@ -1787,7 +1813,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, return msi_page; out_free_iova: - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); out_free_page: kfree(msi_page); return NULL; From patchwork Tue Jul 2 09:09:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35D93C3064D for ; Tue, 2 Jul 2024 09:10:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B1AC46B00A9; Tue, 2 Jul 2024 05:10:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AC9D06B00AA; Tue, 2 Jul 2024 05:10:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 91C336B00AB; Tue, 2 Jul 2024 05:10:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 71AD76B00A9 for ; Tue, 2 Jul 2024 05:10:35 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 3442AA127A for ; Tue, 2 Jul 2024 09:10:35 +0000 (UTC) X-FDA: 82294242030.25.8A38091 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf03.hostedemail.com (Postfix) with ESMTP id D84E920012 for ; Tue, 2 Jul 2024 09:10:32 +0000 (UTC) Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=rq4OARQm; spf=pass (imf03.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911422; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references:dkim-signature; bh=6gy+X4jxy6GcXu5cpZnf0rnb3egcZsh/8Gh8+BiJ9zA=; b=Whh7pHccJN+uFUFxDkIQFXg9o/8nHDx09Hud30ZYvlK/tybSJVhj5WhkJK2CvTJOhVUb1w LAZquMTy7nM0jYjP/MVbm8vYaYlO5eIQVqqTTwNQs0gO8QKacIzNWahkGWeN9q/9EVf0bY HXFUJzTIrJhDleeP83SJZSKdGVLKmKE= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=rq4OARQm; spf=pass (imf03.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911423; a=rsa-sha256; cv=none; b=qQhHMrWPD9n2CTNhd7WBxWKl0mVzRh5RcOYwmx1vYTl82oCvErNOq8aIt4ZB2RUFLsFIUZ dQonvKQX2xfspDgHiqcWOb2jGEktRWzxwtkm144wjPQkXho8FuqF+OZEfGx5vjoKq5KKLc R0LFzDcf4xd1XKDVWPsOIqCHwb7EP6I= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id E4D27CE1CF0; Tue, 2 Jul 2024 09:10:29 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3ADDBC116B1; Tue, 2 Jul 2024 09:10:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911429; bh=suT8AWgnm6GBnp/JyhIgcl0q4xx7OPR0g7KMmtGZr9Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rq4OARQmYhfiLIiz6G9wms51urucovfps8babGp4qOGPRB4+5JYgUHLAf6DLfrqPT hJXfo5keL4ouKe9Y4EGQdEL9StrtH1y0FoD0WIEG6+W6CO3MX++fPmfQii7hxJRJux fusUTN23YcTMGW6P7yhGGqD1y1aMsDmtGmRN8YuzUAp5T8XEoteErwwnKbH0sieTGn NDceqME/1xe7SEg4mP6v20tLeGjQsiNWxolnzJ+VOCoSlueXx8IZMPwdetZjLTcW8K X/5eBD/D3Zv08vRmpy+naCb5TKMeQuI9m5F7mEmHjHwRQltCAQDN/qS030U5hOEtlB dKCPiM7uI6ZuQ== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 08/18] iommu/dma: Implement link/unlink ranges callbacks Date: Tue, 2 Jul 2024 12:09:38 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: D84E920012 X-Stat-Signature: i1bd64tzci79o5sq6sfy5cx6t7wojxnb X-HE-Tag: 1719911432-99406 X-HE-Meta: 
U2FsdGVkX18TTbBns3abkOukfJafrfmyGXgV4hmTTCfwlJuzRiYlN7jRzfZN/jBfmwuTFLiFNY8Zes/OwzJC7LRs6qmorOVIdzCEQXYcDt7wBZD2prGDNSDgMhj97ZpLf85wIdn6RImpby1/O5Dgb3/Shl+ionuHypwIPdvlyqzYlflNJJx0I79RNhejRWz7h1zs9Ohs/zOQ31pPqpTvsnyVG/q1pZUKUiQVN9r1XCahpUmXbrS1TVP84Ih4vTRrOl21DIlf8CF3x80nWkDD06K83VooFzpk/5f36kjmfeN0XC/OvWyl02RM7uvQd5D6KNJ/FN2Ujir04pIBBvt14pQhhaUBFV13nx9jH5oUFXDGlcDUG0IzjZK2xIhY72V3elSd9rAZARKZdKtJBBNLwf6yBUk2yZUv0HMgw0uboTVdo2wvfZ19O5PFpiPZzjwApkSIoN7cHYVodIlvEdSKdGO9lSapAZhmQ6Yoj9M2C5STA6GCAwtwrwcrk46id7cUYt9XTdSZ4HvywB0BGGH5XCah2akFCOKz/F0/MKwlabQ2snPChShrb5UvSK8IJbHR9V85INasfuQj+VnfFNLmg7PihZJZw7zw+S14aW5c0PDbEpzH81h3uDBOY7y4FvWeDc8WKBHdwiYgpbKOt4Wse+BErCMkPwo9fkBpKgfdTuAhNXswP+/Ri+WmBrIf1CRgwy1MTf10cyC791OaaFuZZXmp1ujk/YlQZtN7Czdt1lKrU9JyR7jcWJ8ClFAXjeyfMSOpc6NYcjsemQhJl3rmgY4KdFnGvfUsMVkgVOpGGIR/W3O5ALKcRhIBaghuED3WDQeneKQIxUKyyGBifeSPO2oqqFRQ5dHd5jSbHgiPNhMONPqcJyT5GDD/2z/I3KtBwTzY1PjBXKT8y4jfnnsLLTphr4twfs10DSmR+UkFiZIgWwKeQiaZ8fWugUdIrJSBcZ08HbTo1wWSfX3a5aH moBnSgVs 2jGN66unQz8ma73I2qUy+GSoiq7f99ME9x+n0rUPKm1ZV57jhAosKf1Ve7kuuG5OAJsTQ1IJ27905xJGl4W7dMSCQxGrRlne/LWVsVGKiFAgwFvy6ZQdg6GmiE+FWi0mTBavdIXGkjITDgHyN2MbrmWPNbVwEaBRAZuXgwCbjksa08Y8Z2vlWNSH6RVPi/iWH4dXJpn2RYovyGEHzu9nx9zpd8qNCrq4Ffne/OCEQFCWlFds= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Add an implementation of link/unlink interface to perform in map/unmap pages in fast patch for pre-allocated IOVA. Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 79 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 79 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 0b5ca6961940..7425d155a14e 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1731,6 +1731,82 @@ static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova, __iommu_dma_free_iova(cookie, iova, size, &iotlb_gather); } +static int iommu_dma_start_range(struct dma_iova_state *state) +{ + struct device *dev = state->iova->dev; + + state->domain = iommu_get_dma_domain(dev); + + if (static_branch_unlikely(&iommu_deferred_attach_enabled)) + return iommu_deferred_attach(dev, state->domain); + + return 0; +} + +static int iommu_dma_link_range(struct dma_iova_state *state, phys_addr_t phys, + dma_addr_t addr, size_t size) +{ + struct iommu_domain *domain = state->domain; + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + struct device *dev = state->iova->dev; + enum dma_data_direction dir = state->iova->dir; + bool coherent = dev_is_dma_coherent(dev); + unsigned long attrs = state->iova->attrs; + int prot = dma_info_to_prot(dir, coherent, attrs); + + if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + arch_sync_dma_for_device(phys, size, dir); + + size = iova_align(iovad, size); + return iommu_map(domain, addr, phys, size, prot, GFP_ATOMIC); +} + +static void iommu_sync_dma_for_cpu(struct iommu_domain *domain, + dma_addr_t start, size_t size, + enum dma_data_direction dir) +{ + size_t sync_size, unmapped = 0; + phys_addr_t phys; + + do { + phys = iommu_iova_to_phys(domain, start + unmapped); + if (WARN_ON(!phys)) + continue; + + sync_size = (unmapped + PAGE_SIZE > size) ? 
size % PAGE_SIZE : + PAGE_SIZE; + arch_sync_dma_for_cpu(phys, sync_size, dir); + unmapped += sync_size; + } while (unmapped < size); +} + +static void iommu_dma_unlink_range(struct dma_iova_state *state, + dma_addr_t start, size_t size) +{ + struct device *dev = state->iova->dev; + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + struct iommu_iotlb_gather iotlb_gather; + bool coherent = dev_is_dma_coherent(dev); + unsigned long attrs = state->iova->attrs; + size_t unmapped; + + iommu_iotlb_gather_init(&iotlb_gather); + iotlb_gather.queued = READ_ONCE(cookie->fq_domain); + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !coherent) + iommu_sync_dma_for_cpu(domain, start, size, state->iova->dir); + + size = iova_align(iovad, size); + unmapped = iommu_unmap_fast(domain, start, size, &iotlb_gather); + WARN_ON(unmapped != size); + + if (!iotlb_gather.queued) + iommu_iotlb_sync(domain, &iotlb_gather); +} + static const struct dma_map_ops iommu_dma_ops = { .flags = DMA_F_PCI_P2PDMA_SUPPORTED | DMA_F_CAN_SKIP_SYNC, @@ -1757,6 +1833,9 @@ static const struct dma_map_ops iommu_dma_ops = { .max_mapping_size = iommu_dma_max_mapping_size, .alloc_iova = iommu_dma_alloc_iova, .free_iova = iommu_dma_free_iova, + .link_range = iommu_dma_link_range, + .unlink_range = iommu_dma_unlink_range, + .start_range = iommu_dma_start_range, }; void iommu_setup_dma_ops(struct device *dev) From patchwork Tue Jul 2 09:09:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719179 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55C34C3064D for ; Tue, 2 Jul 2024 09:11:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D1C706B00BE; Tue, 2 Jul 2024 05:11:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CC8356B00BF; Tue, 2 Jul 2024 05:11:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B40626B00C0; Tue, 2 Jul 2024 05:11:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 9540B6B00BE for ; Tue, 2 Jul 2024 05:11:03 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 5CA1DA1D90 for ; Tue, 2 Jul 2024 09:11:03 +0000 (UTC) X-FDA: 82294243206.19.427B0EA Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf15.hostedemail.com (Postfix) with ESMTP id 27A6CA000F for ; Tue, 2 Jul 2024 09:11:00 +0000 (UTC) Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=A9QLYC0x; spf=pass (imf15.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911439; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: 
in-reply-to:in-reply-to:references:references:dkim-signature; bh=aDBFkSM5rrLaLXmHSA7DZ1pVkef/e5P7sT7Jt69Tq9A=; b=70/+WE4J+ibKJnddlpVJ9ci1jfEHNO/+R/bUimgSvo09TL5ly+HkwqCYSVW0I9BwGIRTvj 8e8DXNVnAlFlTPyDhtOJoDN5A88CeEoEkWjNqpMyIWqIEjNjC0G6YkhUNPf4LiZ9vGX6/P O6ZVRQRj7ZKTS6I7wDlHSPLg5EnTwf8= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911439; a=rsa-sha256; cv=none; b=cWD/nDJgt27syeUZrlsG4WKr5HSP1Qb+YXchWFUP2Spi2M1O1DS6J0ImDNIlGvfK4h1Px/ lIpYRFiiiSPQ2R/Cd2BiHGQVP9qnUI4Y/Dvph+bSDhuqVewbczb3lXt5hdQC7Y3GzmkTIN aOV7Qo7rBPSEHHClhVfB0PMpj5cEXEQ= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=A9QLYC0x; spf=pass (imf15.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 9160FCE1CEF; Tue, 2 Jul 2024 09:10:58 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0CADCC116B1; Tue, 2 Jul 2024 09:10:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911457; bh=SkqFHwOM6KJzTXdmnNBw1x88eUJ1/2PlxqGXP9zV2ZU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=A9QLYC0xA24DxduqY3heCuvEMv2g+F4kcqj46l7twDf7eoAjLAfDXUqwWewJv15rI V4ngl5OKU+cYCEmcoA1OWEcg+g4LDqDL7IRuO5NiDytGijovb3mdlf1czP84Qh/8w1 gojGNLjQCBZqSCRgaAAiURw+YEKUsuwnCMNl6nCr2hlpPPb4vGZE4NXCFAJUYCIcpD dAJi7vB/mecxRkG5SOV39gduWkn/DXvyFUwGe+XPrw4P8XlIk54V56l6s3+MBenBuR aHhZJAvI2iVoMTtjqvR2irRwgfTxB8wUnp7xjwsi9Cd0akPOCz7fVXYus3N3dY3QeP 1txEpEBqSvlag== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 09/18] RDMA/umem: Preallocate and cache IOVA for UMEM ODP Date: Tue, 2 Jul 2024 12:09:39 +0300 Message-ID: <2d04e220fea52a41f2005c3a3e2123c3967af88f.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 27A6CA000F X-Stat-Signature: r3ih5byrifxwrhmxqi6cjf54eux4njqp X-HE-Tag: 1719911460-310888 X-HE-Meta: 
U2FsdGVkX182s664ToQ5yxuNgO+1ZVssxFpiG6GmLGZLOJEPHkdBuMvcI7FXjDxpJQ+nOeRM2Cp0Ll3oiHyE98/6FfRAvb35o2SkiNz365LeXLkhbbCMqRnIFUCm3Zfr0V41wQDOHLP7Bl6nBQAZzS5BuoEPWjc6FKKfkRtdl0KSSNR5hmAflpdVlpVliO1x2HjpzaMkOFScXBlWreUhjJvqPpUXIV4v7bFDpOZ0qeppcxb5LmKfrRsgg2qB6RPqPJVaP7k259mQvrr14AaGDiAPxsHRMhDY8t26+r3g/dRSGBvVa+zzsGtfkTju3RznKXdZAvGUR7y0W3cY0NiGkd9NnVPYxRbTIyvKKUlfji9d+2X3horMYWOXmM+QrlLONwvtam0Wk3+vMj0YwugfgL0XdF649/FDCUUekEA+4zOiS5pmLUwxltEuihU/ENk5DZUzFt4UKwM16rGVZyZaffgp3wkxNTVZKhdFOnIBMsGy4Lf0jbaCmMSLDWwLvqoJwFTf2Huatpr9EacJDuugXcNcnfVxTtv0qtMTcWdZauuhAjmdFBvR0aCDedMgQlUK8hr8TL59ZYVK1KUMqv33VfM3yXx5ubKihpIGMNSNL614DX03jcsq0pijHWCWNPClXskAdbr1UPkFxvJgF8h/igTZM79l3zwWeNb3dQX546lijQugWvhsl2oH9I7unqYONL/23kmKIjz7CZpl7bmwM+eKcHAf9yV/19R2A+RhJ88A8ohywQZa1q9S6LwJIRPCRWOQ2WH3eGYBJX0Xy/aHzax+bWOhA4tuEL6BxV2FpBG+deE3Qk5V6XUbykzWeNy6aOVRkRBr+V2NPZc+NWW59VaGl8e4zFxF1iIePJhpx4sOCNIffRS0Cu+0gAMRQ73NsCARyQpdV5/YIeOR9eVStFIICYKUajjXuF0M+5xy5V0cAYlCLZ3FgOtbtWFUQCcaqvHwLjN9zRbpPe8PAzv NgbYUuUi gCyNHlH1+g78gO3Wdu7aSj2nhbSOiTYGrDP7CHltvm8rYkvwnW6z0d/jPuh8BW3FUdS7EjlA6en9k+dmyhlWE5ayj1E6dpGDKPtL+7xt3I+SrguQqt4SCfPCd0/gTg1wPk4rFsvmxAKxhgt+bIC71SdPrINYVt7QQX7QbKi5AtlKiGCMiTL+BeN6e94p6w89oJx9G X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to provide two step interface to map pages, preallocate IOVA when UMEM is initialized. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 14 +++++++++++++- include/rdma/ib_umem_odp.h | 1 + 2 files changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e9fa22d31c23..955bf338b1bf 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -50,6 +50,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { + struct ib_device *dev = umem_odp->umem.ibdev; int ret; umem_odp->umem.is_odp = 1; @@ -87,15 +88,25 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, goto out_pfn_list; } + umem_odp->iova.dev = dev->dma_device; + umem_odp->iova.size = end - start; + umem_odp->iova.dir = DMA_BIDIRECTIONAL; + ret = dma_alloc_iova(&umem_odp->iova); + if (ret) + goto out_dma_list; + + ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, start, end - start, ops); if (ret) - goto out_dma_list; + goto out_free_iova; } return 0; +out_free_iova: + dma_free_iova(&umem_odp->iova); out_dma_list: kvfree(umem_odp->dma_list); out_pfn_list: @@ -274,6 +285,7 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) ib_umem_end(umem_odp)); mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); + dma_free_iova(&umem_odp->iova); kvfree(umem_odp->dma_list); kvfree(umem_odp->pfn_list); } diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 0844c1d05ac6..bb2d7f2a5b04 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -23,6 +23,7 @@ struct ib_umem_odp { * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. */ dma_addr_t *dma_list; + struct dma_iova_attrs iova; /* * The umem_mutex protects the page_list and dma_list fields of an ODP * umem, allowing only a single thread to map/unmap pages. 
The mutex

From patchwork Tue Jul 2 09:09:40 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13719174
From: Leon Romanovsky
Subject: [RFC PATCH v1 10/18] RDMA/umem: Store ODP access mask information in PFN
Date: Tue, 2 Jul 2024 12:09:40 +0300

From: Leon Romanovsky

As a preparation for removing dma_list, store the access mask in the PFN entry rather than in the low bits of the dma_addr_t.
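The net effect on the permission bookkeeping can be summarized by this before/after sketch (editorial illustration; the helper names are made up, the bit definitions are the ones added or removed by the diff below).

	/*
	 * Before: R/W permissions were folded into the low bits of the
	 * cached DMA address (ODP_READ_ALLOWED_BIT / ODP_WRITE_ALLOWED_BIT),
	 * and the real address had to be recovered with ODP_DMA_ADDR_MASK.
	 */
	static dma_addr_t odp_encode_old(dma_addr_t dma, bool writable)
	{
		return dma | ODP_READ_ALLOWED_BIT |
		       (writable ? ODP_WRITE_ALLOWED_BIT : 0);
	}

	/*
	 * After: the DMA address is stored untouched and the access mask is
	 * read back from the HMM PFN entry itself.
	 */
	static bool odp_pfn_is_writable(unsigned long pfn_entry)
	{
		return pfn_entry & HMM_PFN_WRITE;
	}

populate_mtt() in the mlx5 driver then derives MLX5_IB_MTT_READ/WRITE directly from these PFN flags, as the hunk below shows.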
Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 98 +++++++++++----------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/odp.c | 37 ++++++----- include/rdma/ib_umem_odp.h | 14 +--- 4 files changed, 59 insertions(+), 91 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 955bf338b1bf..c628a98c41b7 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -308,22 +308,11 @@ EXPORT_SYMBOL(ib_umem_odp_release); static int ib_umem_odp_map_dma_single_page( struct ib_umem_odp *umem_odp, unsigned int dma_index, - struct page *page, - u64 access_mask) + struct page *page) { struct ib_device *dev = umem_odp->umem.ibdev; dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - if (*dma_addr) { - /* - * If the page is already dma mapped it means it went through - * a non-invalidating trasition, like read-only to writable. - * Resync the flags. - */ - *dma_addr = (*dma_addr & ODP_DMA_ADDR_MASK) | access_mask; - return 0; - } - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, DMA_BIDIRECTIONAL); if (ib_dma_mapping_error(dev, *dma_addr)) { @@ -331,7 +320,6 @@ static int ib_umem_odp_map_dma_single_page( return -EFAULT; } umem_odp->npages++; - *dma_addr |= access_mask; return 0; } @@ -367,9 +355,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, struct hmm_range range = {}; unsigned long timeout; - if (access_mask == 0) - return -EINVAL; - if (user_virt < ib_umem_start(umem_odp) || user_virt + bcnt > ib_umem_end(umem_odp)) return -EFAULT; @@ -395,7 +380,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, if (fault) { range.default_flags = HMM_PFN_REQ_FAULT; - if (access_mask & ODP_WRITE_ALLOWED_BIT) + if (access_mask & HMM_PFN_WRITE) range.default_flags |= HMM_PFN_REQ_WRITE; } @@ -427,22 +412,17 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, for (pfn_index = 0; pfn_index < num_pfns; pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) { - if (fault) { - /* - * Since we asked for hmm_range_fault() to populate - * pages it shouldn't return an error entry on success. - */ - WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); - WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); - } else { - if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) { - WARN_ON(umem_odp->dma_list[dma_index]); - continue; - } - access_mask = ODP_READ_ALLOWED_BIT; - if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) - access_mask |= ODP_WRITE_ALLOWED_BIT; - } + /* + * Since we asked for hmm_range_fault() to populate + * pages it shouldn't return an error entry on success. 
+ */ + WARN_ON(fault && range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); + WARN_ON(fault && !(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); + if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) + continue; + + if (range.hmm_pfns[pfn_index] & HMM_PFN_DMA_MAPPED) + continue; hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]); /* If a hugepage was detected and ODP wasn't set for, the umem @@ -457,13 +437,13 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, } ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]), - access_mask); + umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); if (ret < 0) { ibdev_dbg(umem_odp->umem.ibdev, "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); break; } + range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -483,7 +463,6 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma_addr; dma_addr_t dma; int idx; u64 addr; @@ -494,34 +473,33 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { + unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; + struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); + idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; dma = umem_odp->dma_list[idx]; - /* The access flags guaranteed a valid DMA address in case was NULL */ - if (dma) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - dma_addr = dma & ODP_DMA_ADDR_MASK; - ib_dma_unmap_page(dev, dma_addr, - BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (dma & ODP_WRITE_ALLOWED_BIT) { - struct page *head_page = compound_head(page); - /* - * set_page_dirty prefers being called with - * the page lock. However, MMU notifiers are - * called sometimes with and sometimes without - * the lock. We rely on the umem_mutex instead - * to prevent other mmu notifiers from - * continuing and allowing the page mapping to - * be removed. - */ - set_page_dirty(head_page); - } - umem_odp->dma_list[idx] = 0; - umem_odp->npages--; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) + continue; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) + continue; + + ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), + DMA_BIDIRECTIONAL); + if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + struct page *head_page = compound_head(page); + /* + * set_page_dirty prefers being called with + * the page lock. However, MMU notifiers are + * called sometimes with and sometimes without + * the lock. We rely on the umem_mutex instead + * to prevent other mmu notifiers from + * continuing and allowing the page mapping to + * be removed. 
+ */ + set_page_dirty(head_page); } + umem_odp->npages--; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index f255a12e26a0..e8494a803a58 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -334,6 +334,7 @@ struct mlx5_ib_flow_db { #define MLX5_IB_UPD_XLT_PD BIT(4) #define MLX5_IB_UPD_XLT_ACCESS BIT(5) #define MLX5_IB_UPD_XLT_INDIRECT BIT(6) +#define MLX5_IB_UPD_XLT_DOWNGRADE BIT(7) /* Private QP creation flags to be passed in ib_qp_init_attr.create_flags. * diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 4a04cbc5b78a..5713fe25f4de 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -143,22 +144,12 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static u64 umem_dma_to_mtt(dma_addr_t umem_dma) -{ - u64 mtt_entry = umem_dma & ODP_DMA_ADDR_MASK; - - if (umem_dma & ODP_READ_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_READ; - if (umem_dma & ODP_WRITE_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_WRITE; - - return mtt_entry; -} - static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); + bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + unsigned long pfn; dma_addr_t pa; size_t i; @@ -166,8 +157,17 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, return; for (i = 0; i < nentries; i++) { + pfn = odp->pfn_list[idx + i]; + if (!(pfn & HMM_PFN_VALID)) + /* Initial ODP init */ + continue; + pa = odp->dma_list[idx + i]; - pas[i] = cpu_to_be64(umem_dma_to_mtt(pa)); + pa |= MLX5_IB_MTT_READ; + if ((pfn & HMM_PFN_WRITE) && !downgrade) + pa |= MLX5_IB_MTT_WRITE; + + pas[i] = cpu_to_be64(pa); } } @@ -268,8 +268,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. 
*/ - if (umem_odp->dma_list[idx] & - (ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) { + if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; @@ -555,7 +554,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, { int page_shift, ret, np; bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE; - u64 access_mask; + u64 access_mask = 0; u64 start_idx; bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT); u32 xlt_flags = MLX5_IB_UPD_XLT_ATOMIC; @@ -563,12 +562,14 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, if (flags & MLX5_PF_FLAGS_ENABLE) xlt_flags |= MLX5_IB_UPD_XLT_ENABLE; + if (flags & MLX5_PF_FLAGS_DOWNGRADE) + xlt_flags |= MLX5_IB_UPD_XLT_DOWNGRADE; + page_shift = odp->page_shift; start_idx = (user_va - ib_umem_start(odp)) >> page_shift; - access_mask = ODP_READ_ALLOWED_BIT; if (odp->umem.writable && !downgrade) - access_mask |= ODP_WRITE_ALLOWED_BIT; + access_mask |= HMM_PFN_WRITE; np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault); if (np < 0) diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index bb2d7f2a5b04..a3f4a5c03bf8 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -8,6 +8,7 @@ #include #include +#include struct ib_umem_odp { struct ib_umem umem; @@ -68,19 +69,6 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) umem_odp->page_shift; } -/* - * The lower 2 bits of the DMA address signal the R/W permissions for - * the entry. To upgrade the permissions, provide the appropriate - * bitmask to the map_dma_pages function. - * - * Be aware that upgrading a mapped address might result in change of - * the DMA address for the page. - */ -#define ODP_READ_ALLOWED_BIT (1<<0ULL) -#define ODP_WRITE_ALLOWED_BIT (1<<1ULL) - -#define ODP_DMA_ADDR_MASK (~(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) - #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_umem_odp * From patchwork Tue Jul 2 09:09:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1FB7C3064D for ; Tue, 2 Jul 2024 09:10:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 267586B00B1; Tue, 2 Jul 2024 05:10:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 217F16B00B2; Tue, 2 Jul 2024 05:10:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0429C6B00B3; Tue, 2 Jul 2024 05:10:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id D9A756B00B1 for ; Tue, 2 Jul 2024 05:10:44 -0400 (EDT) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 5E06981C13 for ; Tue, 2 Jul 2024 09:10:44 +0000 (UTC) X-FDA: 82294242408.11.7B59EE9 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf17.hostedemail.com (Postfix) with ESMTP id B06CA40018 for ; Tue, 2 Jul 2024 09:10:42 +0000 (UTC) Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=VczNJJsV; spf=pass 
(imf17.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911412; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=OYMgxE0TdJyy7aZE07VN/5Dh/CRuDuWFFrQDIKb/9oQ=; b=oNkdjd3KypsWirdV88wxWq6bUz37VN9svyqPUaWKAcADFdWRSODfdpf48oVWqTEMazPF09 eywOukCXs5T6vIKopL78SwVN4SSLaXJEKBjDo7aQw1tYUxCn1SXThdW9RF2O5i/sUUX43E aR9vZ414t3JG2ufMiEGzS/V7pPT1PGI= ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=VczNJJsV; spf=pass (imf17.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911412; a=rsa-sha256; cv=none; b=8Cq40vzOQ01YwE85Vwkq4FF229CU5WFT8ejD3klGiDc+Npl/wHiq/AADXI97nLEbvmeB+W aPV0U7fFKWyYr9fv5AKexP0C7AbHrAtf29VZivV9qyV+FqNvgAlRMNKqrVcDrgFUe6lkSs q37kol+flF8uGRs8/MhsXF5FZh59c8o= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id C612161A28; Tue, 2 Jul 2024 09:10:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CD16CC116B1; Tue, 2 Jul 2024 09:10:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911441; bh=iroMohzT1Q8AWRK2+Thbq89BnX39U8SKlcdoPnAxJYY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VczNJJsVneBG8U1/XVDb6geZHvZySIGG4DY2l4QAi9gXw0LZ4UZx7I111sTW/pzG+ 4gFYx4HiuOTIS7ymyC1ZaLERA/X33A5BIU5kAJM/SZyFgB5GRFBxOi0L/j9m8A/bcs lgRBq3rNRiPVGh9h5iFW+4ma1ZmCVnY4jodvAce1csOdqMDIlLFM1gpYm0fdzdf3dc DXGkry+mEYxh70j6i+/ZAGXhXHz4SK5EzMJ4XXxg16NwaKkpXS/s2cAIDA+bNQUsB3 T6fOcXLArV3UrnEVTuSxL9/A8so/gdTs+kjzN4UkmqpH6Ew92iDJu+Z7ECGV8U0SDk gTnjyBi/EKyDQ== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 11/18] RDMA/core: Separate DMA mapping to caching IOVA and page linkage Date: Tue, 2 Jul 2024 12:09:41 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: B06CA40018 X-Stat-Signature: 8hmu4968d4nrhbyxup1ur6t41895gcgk X-Rspam-User: X-HE-Tag: 1719911442-114533 X-HE-Meta: 
U2FsdGVkX18lqen2W3a0E8vehmBpWbEC1lHT/r/WbqQkXx1jGvSZ7bmTc+idcB5g06xyX8ija0exZnz0uUJ9DKtUWGwcAJDVsOw6sqq7OgrzK+RTWtMGGVcT108Wp+PufaICCqF5cGZ3yAiexOnO4kbs7+dik6Jv+fnm8Runvd6USbFqfI4X1q6O8ze/AE9LCk2HVW3Y+FjS4ySjprSkEkMFCAbMGwNMCef8qBf2YwaZbVf8rjIQNqaQRtgvg8lWcxoLfhlTJC8uYCg9K39t8Jysp9J22tmrZ0cY17xs2V3eaQmhzHUxXAjNatT1v6iGDdY2CIuCpQu9VgAKG71+EjH1lUfVs3RrSjdEcgQBcb5rPoMI9BxFvA2CwVEFAOODIxNAASPikPRgtlFPXWnAWxYo5pPv0iv2+iRrQqDyFwKsJ+02zttoVaW8oORDECBS+w/BuOh1HfqW34cqEAtMq+21EKoDXib9Wa3ipNA4za+buCB3Cvp/JYtVSkOOLoK2q0Z0NavzJHvyvjzAg02/kxUe5hhGO8U5UcP4qCPF0+cY1IkgMpyfYMNt0F1PN65EzGMoAylpZQaFysWBgReZU0CzhzjeAiANIJ4O7fRrRQCuNv40zYhQ/FLVDwbgMyguVq9QYLnJfgkndYQYp/Xet3pmlE/vWYF04MSXnORx1HxK872/1jCEPOhkWtFrr68KVz3gsDCV6ZHjxNaqqkZmj3lZO6lEZprC+1FqhoKc+CokdMLEQjLD3YL3w2FrPD8cD1/bqFtpu4X5H7/5q6Mc6yLAPuO5pZ0EsGjB7ArX2O9mNDSUHkjFjVgBrynP0Dnp/3hjtioKXsQ5lyiDyt5Ydh6RJudYVg5buzYWmQEutl2vbgZPRE7IuN2cpslq88b22ftSn+bGa70iLrFAYbb/Sprz2quLhglOeqdzM6LdflJFtJAsAsGJHxH3cnCGqg3jciGs97nCIW4eOvgYxGP mRPYbLCq T0e/uPSRepDn5Hd+UZ8CT4z4nt1X4LyVCffOY4Hoi2Dz9UAiXVyY1NtGQH95pzvAH/Xv4Glz0dHr4vDhP8l6h8Fl0M1r+eqaJDq6Bk5lptNHR3zmoHJCesMSnIWVxAwoMpmEwMwpWrXjZGMA7GaGNRQQoZZdPhyrrvVlESxCqlVcaEiKKeO53FQloKpoSxIoz6iXpI9V5ea7qNPU= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Reuse newly added DMA API to cache IOVA and only link/unlink pages in fast path. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 61 +++--------------------------- drivers/infiniband/hw/mlx5/odp.c | 7 +++- include/rdma/ib_umem_odp.h | 8 +--- 3 files changed, 12 insertions(+), 64 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index c628a98c41b7..6e170cb5110c 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -81,20 +81,13 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, if (!umem_odp->pfn_list) return -ENOMEM; - umem_odp->dma_list = kvcalloc( - ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL); - if (!umem_odp->dma_list) { - ret = -ENOMEM; - goto out_pfn_list; - } umem_odp->iova.dev = dev->dma_device; umem_odp->iova.size = end - start; umem_odp->iova.dir = DMA_BIDIRECTIONAL; ret = dma_alloc_iova(&umem_odp->iova); if (ret) - goto out_dma_list; - + goto out_pfn_list; ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, @@ -107,8 +100,6 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, out_free_iova: dma_free_iova(&umem_odp->iova); -out_dma_list: - kvfree(umem_odp->dma_list); out_pfn_list: kvfree(umem_odp->pfn_list); return ret; @@ -286,7 +277,6 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); dma_free_iova(&umem_odp->iova); - kvfree(umem_odp->dma_list); kvfree(umem_odp->pfn_list); } put_pid(umem_odp->tgid); @@ -294,40 +284,10 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) } EXPORT_SYMBOL(ib_umem_odp_release); -/* - * Map for DMA and insert a single page into the on-demand paging page tables. - * - * @umem: the umem to insert the page to. - * @dma_index: index in the umem to add the dma to. - * @page: the page struct to map and add. - * @access_mask: access permissions needed for this page. - * - * The function returns -EFAULT if the DMA mapping operation fails. 
- * - */ -static int ib_umem_odp_map_dma_single_page( - struct ib_umem_odp *umem_odp, - unsigned int dma_index, - struct page *page) -{ - struct ib_device *dev = umem_odp->umem.ibdev; - dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, - DMA_BIDIRECTIONAL); - if (ib_dma_mapping_error(dev, *dma_addr)) { - *dma_addr = 0; - return -EFAULT; - } - umem_odp->npages++; - return 0; -} - /** * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it. * * Maps the range passed in the argument to DMA addresses. - * The DMA addresses of the mapped pages is updated in umem_odp->dma_list. * Upon success the ODP MR will be locked to let caller complete its device * page table update. * @@ -435,15 +395,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, __func__, hmm_order, page_shift); break; } - - ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); - if (ret < 0) { - ibdev_dbg(umem_odp->umem.ibdev, - "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); - break; - } - range.hmm_pfns[pfn_index] |= HMM_PFN_DMA_MAPPED; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -463,10 +414,8 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma; int idx; u64 addr; - struct ib_device *dev = umem_odp->umem.ibdev; lockdep_assert_held(&umem_odp->umem_mutex); @@ -474,19 +423,19 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; - dma = umem_odp->dma_list[idx]; if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) continue; if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_DMA_MAPPED)) continue; - ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); + dma_hmm_unlink_page(&umem_odp->pfn_list[pfn_idx], + &umem_odp->iova, + idx * (1 << umem_odp->page_shift)); if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); struct page *head_page = compound_head(page); /* * set_page_dirty prefers being called with diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 5713fe25f4de..b2aeaef9d0e1 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -149,6 +149,7 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + struct ib_device *dev = odp->umem.ibdev; unsigned long pfn; dma_addr_t pa; size_t i; @@ -162,12 +163,16 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, /* Initial ODP init */ continue; - pa = odp->dma_list[idx + i]; + pa = dma_hmm_link_page(&odp->pfn_list[idx + i], &odp->iova, + (idx + i) * (1 << odp->page_shift)); + WARN_ON_ONCE(ib_dma_mapping_error(dev, pa)); + pa |= MLX5_IB_MTT_READ; if ((pfn & HMM_PFN_WRITE) && !downgrade) pa |= MLX5_IB_MTT_WRITE; pas[i] = cpu_to_be64(pa); + odp->npages++; } } diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 
a3f4a5c03bf8..653fc076b6ee 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -18,15 +18,9 @@ struct ib_umem_odp { /* An array of the pfns included in the on-demand paging umem. */ unsigned long *pfn_list; - /* - * An array with DMA addresses mapped for pfns in pfn_list. - * The lower two bits designate access permissions. - * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. - */ - dma_addr_t *dma_list; struct dma_iova_attrs iova; /* - * The umem_mutex protects the page_list and dma_list fields of an ODP + * The umem_mutex protects the page_list field of an ODP * umem, allowing only a single thread to map/unmap pages. The mutex * also protects access to the mmu notifier counters. */ From patchwork Tue Jul 2 09:09:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719176 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27268C30658 for ; Tue, 2 Jul 2024 09:10:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 06C816B00B4; Tue, 2 Jul 2024 05:10:49 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F38126B00B5; Tue, 2 Jul 2024 05:10:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D66396B00B6; Tue, 2 Jul 2024 05:10:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id AEF7C6B00B4 for ; Tue, 2 Jul 2024 05:10:48 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 6967481C49 for ; Tue, 2 Jul 2024 09:10:48 +0000 (UTC) X-FDA: 82294242576.04.28EEEE0 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf29.hostedemail.com (Postfix) with ESMTP id BF9DD12001E for ; Tue, 2 Jul 2024 09:10:46 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=KTz8NtUc; spf=pass (imf29.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911425; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=wlDjMGMoY5vmg8XP+VTRHx2+O1DZwsQWCcm7mjUpOdM=; b=8lbvfVUhHenuL1J/MbtJuPfefgdbfMYVtw4yfEOeZEX9AijBhNxjgQtDk+PQ4NcgQBIMup YDgvYwEZc7rfRyy0DYYIql02N+Z6w2OcI6QG5sydcmsJZPmVanui30IytAfhxdxUa3bYaC 8gBV4xpWYomb78o2jijbq3EiJL/u3HA= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911425; a=rsa-sha256; cv=none; b=feVNpSt4AsMldvPyccR1vuDL2qqUPwG3UjzaWHysKtj+Nx+xZh9xRKHpApkme1gwXR4VuK LhtboG2rMo7igfXf2p7XCuhYuV3fjtz8cQB/3EyQpfhtk26CDjGn8ZIPE4JA/LiTNZq3m4 hYfOnYx4DOLJPJwpTysqCaJnD9bkI6g= ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=KTz8NtUc; spf=pass (imf29.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) 
smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 0574461A24; Tue, 2 Jul 2024 09:10:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DB9F3C4AF0A; Tue, 2 Jul 2024 09:10:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911445; bh=pdK7V9xHdNBFoHCz59AGR5WOTx5xl2Vj8twOHvengYk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KTz8NtUcFIGsDCeQDH+Db9PamT6/nlViuegQcsznIc+DF9srj7x240NEr4il//Xc/ 8i6luwPgO0yiJWdOnRcYYR5NL7yfawxlEKbv7drJYa8B4a339djv3OvkkqYT8c8H3F 2fawWfN+3TdW5300DYyGBMMir31FVKME3Mb0rFDkEafNNmyz4K9325deM1XSwCqvmZ j68Pmkr7YplRS9KcdrYMe4vUV3x2d5CnPCIdbVmN1wj2l59g8sgHeEOk1PKw3gUGEO sLuttA9bzZDdVviGMe/UOzCvZZ/jZMpHBqkeBTmM9HISPNn+o452NCrh1hoAqBSCYz dRKD1bcV6Pa7w== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 12/18] RDMA/umem: Prevent UMEM ODP creation with SWIOTLB Date: Tue, 2 Jul 2024 12:09:42 +0300 Message-ID: X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: BF9DD12001E X-Stat-Signature: xi9jito6p7gxrdfmj7tjdenpg6xsi46q X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1719911446-15292 X-HE-Meta: U2FsdGVkX1+dBjH6ev3kLrCHfbTUMkXKOM+W23C1dHQoGzQv5kN3Ejz0KgdtG4N+P2h7VHZBF+rgeuya2emnIDxcS41E0uottMhlb+tj6+cG68xSHA4rxmST4+ATr5tU5oESvqLI+7597mc1bmhc+ZTiwCohL6pMUI5gUcZdWKvYGf1GsCEhc2OryksbmiPiEWiXI0sghrXOh8RBLzr2do40AL0qbUIOmi6fULuAWFSX9ObLw0WLhCbTbZcbvzh3heD1lYRwN4HMzviuLj00wGq7TBKSzyo7bN97X8nRClecyKAsunqH8JiI0SIuuQSs5doYoQRWfH9GdFCXeQmP/1ePNMImD6kHiSDPOoct5v89/yctbkUeMbY+6zZuwIwU1xvR+UXU8AlNk3i9fmJdPu94qWh5zWPvhvmnEqxbRMEQT527YmReP87s4Hg1u/89Ssv4cyHUgb95Mnqqa5GnEJPj2T4PhgESwsCekdb0HxlcULxJqqWuE77qR6WFryHJCI6XV84SKc7UYJqv9u8v//7MT09x2RWx5rGWExgKUp3vIcg2C0Nqfoq+JoIIDklmS8KyF+qEKq17XQhZogKOHOwTSHY7wtJgFGSoDGq00s+rD/YUEP6cIbyr2k/mR/+1pMnOyoBoqB/8BiNQG/IvMGARtSWjHKzdBjrM9uGNTk1n6Cv/G3atalQv2cMzIuQ4Xr6CBxmh+UOYXLNPg/byvB0gDTDgyCNwFuo9KD5sq7EUjO61i89OjiFQE6gNtgPycI6VsFOrilVluwVv2as3AVY/BUzGXaicreNyxDqtjlwr1B1/YekhVhaKINn0++teWhFdD0yTP6u/FIf0KCxAyQu9JpFtb/Saum/JUQFe8Ejx83gjnABCAwmSyMWZHDYVggJOX12eiBBs3U/ENzkZnZXogbL5rY8wr0/YyZ3HRNFWTxLjKFEcpy+sLIPS1GTZvxYFVi9UtH5r+P290RR fc1+X8sq 8T12TE4cjdFcOcOr2FJiYgFttZTdP8Q5lx8KOHrowpkTtUSzih/Qz2rp+J5bCoQKX6/pW+CsG+4QobpNhi2WCCwDJtJgijHJzNUlqR1kbxFbiGQ1OjFn/vG+NMhvlsPO+q5+0f+qt3QyzSzB09Fnf3s8y2nbP4TWy9mNQvsziC4pALtsjYsLcGiEF51Pbr6cnPIPhntcRSvj1Cio= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky RDMA UMEM never supported DMA addresses returned from SWIOTLB, as these addresses should be programmed to the hardware which is not aware that it is bounce buffers and not real ones. 
Instead of silently leave broken system for the users who didn't know it, let's be explicit and return an error to them. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 81 +++++++++++++++--------------- 1 file changed, 41 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 6e170cb5110c..12186717a892 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -42,7 +42,8 @@ #include #include #include - +#include +#include #include #include "uverbs.h" @@ -51,50 +52,50 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { struct ib_device *dev = umem_odp->umem.ibdev; + size_t page_size = 1UL << umem_odp->page_shift; + unsigned long start, end; + size_t ndmas, npfns; int ret; umem_odp->umem.is_odp = 1; mutex_init(&umem_odp->umem_mutex); + if (umem_odp->is_implicit_odp) + return 0; + + if (dev_use_swiotlb(dev->dma_device, page_size, DMA_BIDIRECTIONAL) || + is_swiotlb_force_bounce(dev->dma_device)) + return -EOPNOTSUPP; + + start = ALIGN_DOWN(umem_odp->umem.address, page_size); + if (check_add_overflow(umem_odp->umem.address, + (unsigned long)umem_odp->umem.length, &end)) + return -EOVERFLOW; + end = ALIGN(end, page_size); + if (unlikely(end < page_size)) + return -EOVERFLOW; + + ndmas = (end - start) >> umem_odp->page_shift; + if (!ndmas) + return -EINVAL; + + npfns = (end - start) >> PAGE_SHIFT; + umem_odp->pfn_list = + kvcalloc(npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); + if (!umem_odp->pfn_list) + return -ENOMEM; + + umem_odp->iova.dev = dev->dma_device; + umem_odp->iova.size = end - start; + umem_odp->iova.dir = DMA_BIDIRECTIONAL; + ret = dma_alloc_iova(&umem_odp->iova); + if (ret) + goto out_pfn_list; - if (!umem_odp->is_implicit_odp) { - size_t page_size = 1UL << umem_odp->page_shift; - unsigned long start; - unsigned long end; - size_t ndmas, npfns; - - start = ALIGN_DOWN(umem_odp->umem.address, page_size); - if (check_add_overflow(umem_odp->umem.address, - (unsigned long)umem_odp->umem.length, - &end)) - return -EOVERFLOW; - end = ALIGN(end, page_size); - if (unlikely(end < page_size)) - return -EOVERFLOW; - - ndmas = (end - start) >> umem_odp->page_shift; - if (!ndmas) - return -EINVAL; - - npfns = (end - start) >> PAGE_SHIFT; - umem_odp->pfn_list = kvcalloc( - npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); - if (!umem_odp->pfn_list) - return -ENOMEM; - - - umem_odp->iova.dev = dev->dma_device; - umem_odp->iova.size = end - start; - umem_odp->iova.dir = DMA_BIDIRECTIONAL; - ret = dma_alloc_iova(&umem_odp->iova); - if (ret) - goto out_pfn_list; - - ret = mmu_interval_notifier_insert(&umem_odp->notifier, - umem_odp->umem.owning_mm, - start, end - start, ops); - if (ret) - goto out_free_iova; - } + ret = mmu_interval_notifier_insert(&umem_odp->notifier, + umem_odp->umem.owning_mm, start, + end - start, ops); + if (ret) + goto out_free_iova; return 0; From patchwork Tue Jul 2 09:09:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE5C2C3065C for ; Tue, 2 Jul 2024 09:10:56 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 267916B0082; Tue, 2 Jul 2024 
05:10:56 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1A2306B00BA; Tue, 2 Jul 2024 05:10:56 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EEC5E6B00BC; Tue, 2 Jul 2024 05:10:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id CCD4A6B00BA for ; Tue, 2 Jul 2024 05:10:55 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 7E07DA417E for ; Tue, 2 Jul 2024 09:10:55 +0000 (UTC) X-FDA: 82294242870.06.16DD666 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf03.hostedemail.com (Postfix) with ESMTP id 35F092001C for ; Tue, 2 Jul 2024 09:10:52 +0000 (UTC) Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=OiiAr9+n; spf=pass (imf03.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911431; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=3p3NSd/GjJmBxM2OO6GrqLbMgfdyhgIx44JQZKa1IEw=; b=z4hgc0LT01VNYUcQnrDlMgkhRxkJ16thsPihQnkOdDEpe5IWLegJv1wRyaXH8F9WSpUeRs lsYUK5syQH6A3NIBqmokEgVzHCeWHe44xerqj0vKhwHpHOus4AFQje9UQMh5pSthNVBR5g NPUBNo5t7y9j4Zzk/qwdj9gAc9ZHTZI= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911431; a=rsa-sha256; cv=none; b=PiPRySA8nG/aaJSfWyM5/7EYIfU46/MYr4O3OM+IH9d8rpaHKA84bOamFoaBNVJvSsj4mh Nl8Slystv8oDMeIf9wd77YtakzRgqytrZC1Cs/a/yFyEyaUP7lElELmCfmeKzVz4jmo4Vc DZ/yx88Lm/39lXM6Ux/y8IbP+33eugA= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=OiiAr9+n; spf=pass (imf03.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 763E7CE1CF3; Tue, 2 Jul 2024 09:10:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EFE14C32781; Tue, 2 Jul 2024 09:10:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911449; bh=UBtvWw7Gy5bkZxQWGiITiVmdZjOyVzq6Q6pe5h5nMZE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OiiAr9+nYxAigqsEDGipeVXi0TP3nXWi7mK/LlWrRaIz0IcR+fZxFtgFqfr7E/wnG 7ZazMCjRFgPgPtVEEwp8YXy2SRazjjt2+Uuh41m/x79CqLIGsWzLXbd0r9lNK9Or5x l0Dh89IJNVkfH11huzLO5HsmiCGKKTkWGUjZRb6B+EJuLxL80CuIGX0+K0APVMQ33N //In69xpEVXNX92YXJyAHM1hzpqFPmSrIFIfZU9FgI5S1zYNwGXUeWmBpHGSexRleS 7FyjezEJi+u8u8fSnqRA9/cx/u52Z9MzzW9NFEmTl8EU9VPoOXKOuFBhcBVID0QXA/ iAgDsM4unOl2Q== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , 
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 13/18] vfio/mlx5: Explicitly use number of pages instead of allocated length Date: Tue, 2 Jul 2024 12:09:43 +0300 Message-ID: <8feabd70634bc8d5c4bda4afe3f5083e56044006.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Stat-Signature: yxxg7t46wnkfz8ii7ztdhrc8icwwhh75 X-Rspamd-Queue-Id: 35F092001C X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1719911452-394249 X-HE-Meta: U2FsdGVkX19Be//QUd7BntivtU/0YBHGsoQzjWLRvxnjjZOLG3V7kd+IpfaYpws+7ZXNhOLAQ4l6z9juHwdW2FmM0ntc5E+XJO+NVFgFRpXphXqjLNl1vuK38yzYx5YMwryCkCiz33eonxvI6RKZpWpce46d8WwHFMojpxz1nQBnMFKik7gH37+DaSwKIa427mU3QB2/pLxXK5mZUAnMEZvpq35zbK7Xykaiml2VH24rbVLT4mQ4N9mme7ZynjEvOKj/plwgWB1oQe9fyXnk2oB7u8v/4R4Y1zGGRxeDbnPGyEtAwOHjHzLnOQQPLhu/4I22XrRZwgHVQTZSz8uC3FEbIWI6hpCbKLVaWYz4Re5Ng7GSOmlTmYFArwmCWNdzhS5n67RAoKHpKidWKwyJEc3v+VpUz3/IrGWvvuc4CIy7cwRFBctyDRVBNn9vTRoKZ03uYyd/2HH7TRd4MuD2aOr8CMdi69+bIG7S5YFhwttW/VVRca8eYdGgTKjVpaJTRMi524gtCiFzGkM1ln02SpZY8IM+ucB8Ku2ePVaLxVvKPBm/LZosxeCBJU4ztswFjgyb8j2iiF3zhUMU/88SaE4rgsZUy5PX0fLBorlf+0qvq14ysOTPUHTeM36fJo/90uS02guiOV2iDE8wJjC14G4pz0uK0N0YAM7VCUTfodR0b9B8sY1L5zxuV6/SYLUrdSjN4nGYP9kNM9PtWhd+4hJHPJwioZHeQDyKRwL/yman4Zs10ofNQ1b4MH27zln316uIOWuYGb1XiTAJmq+IDU6HZfuiMb2KPGHwhn6paJ/oLyOvSMpT0scfor2vYSwuTW5wQttffvuSP9ldoqqFSrjuNp3oG0ceZbl+6EvaoYCh02FPI7Wl3xY9h+uNvEvs/fCuaD8IzP1zS8MbukYvLG1WNUL4dp7cK3apF87EPsgfxWjrVNlJTwoI73iD5PwgE8yE9xSskhm8+UCqfn3 CKMTm18c NSyMEbg/uqhVgdhD82YgoE1aKQmH1RVUgz/g0i51L06Lbgxqi58NL5WJL4v41sdUJEIKDPHiedbTm3rKZ7RI8gaNNg+VWeU3/xwDIpTiJS4QOZjJ14JPaHcXHo2oTg8YUHuKj9Bp/H88cdDaSfJCQiEdM8EeL498CF+KchNX9XBHZlfm0iV0VBSfE5lY33KWpB3w5GTxOPViNDXk= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky allocated_length is a multiple of page size and number of pages, so let's change the functions to accept number of pages. It opens us a venue to combine receive and send paths together with code readability improvement. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 32 +++++++++++----------- drivers/vfio/pci/mlx5/cmd.h | 10 +++---- drivers/vfio/pci/mlx5/main.c | 53 +++++++++++++++++++++++------------- 3 files changed, 55 insertions(+), 40 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 41a4b0cf4297..fdc3e515741f 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -318,8 +318,7 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, struct mlx5_vhca_recv_buf *recv_buf, u32 *mkey) { - size_t npages = buf ? DIV_ROUND_UP(buf->allocated_length, PAGE_SIZE) : - recv_buf->npages; + size_t npages = buf ? 
buf->npages : recv_buf->npages; int err = 0, inlen; __be64 *mtt; void *mkc; @@ -375,7 +374,7 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->allocated_length) + if (buf->dmaed || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); @@ -444,7 +443,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, if (ret) goto err; - buf->allocated_length += filled * PAGE_SIZE; + buf->npages += filled; /* clean input for another bulk allocation */ memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, @@ -460,8 +459,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, } struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf; @@ -473,9 +471,8 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, buf->dma_dir = dma_dir; buf->migf = migf; - if (length) { - ret = mlx5vf_add_migration_pages(buf, - DIV_ROUND_UP_ULL(length, PAGE_SIZE)); + if (npages) { + ret = mlx5vf_add_migration_pages(buf, npages); if (ret) goto end; @@ -501,8 +498,8 @@ void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf) } struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir) +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf, *temp_buf; struct list_head free_list; @@ -517,7 +514,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { if (buf->dma_dir == dma_dir) { list_del_init(&buf->buf_elm); - if (buf->allocated_length >= length) { + if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); goto found; } @@ -531,7 +528,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, } } spin_unlock_irq(&migf->list_lock); - buf = mlx5vf_alloc_data_buffer(migf, length, dma_dir); + buf = mlx5vf_alloc_data_buffer(migf, npages, dma_dir); found: while ((temp_buf = list_first_entry_or_null(&free_list, @@ -712,7 +709,7 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, MLX5_SET(save_vhca_state_in, in, op_mod, 0); MLX5_SET(save_vhca_state_in, in, vhca_id, mvdev->vhca_id); MLX5_SET(save_vhca_state_in, in, mkey, buf->mkey); - MLX5_SET(save_vhca_state_in, in, size, buf->allocated_length); + MLX5_SET(save_vhca_state_in, in, size, buf->npages * PAGE_SIZE); MLX5_SET(save_vhca_state_in, in, incremental, inc); MLX5_SET(save_vhca_state_in, in, set_track, track); @@ -734,8 +731,11 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, } if (!header_buf) { - header_buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + header_buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) { err = PTR_ERR(header_buf); goto err_free; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index df421dc6de04..7d4a833b6900 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -56,7 +56,7 @@ struct mlx5_vhca_data_buffer { struct sg_append_table table; loff_t start_pos; u64 
length; - u64 allocated_length; + u32 npages; u32 mkey; enum dma_data_direction dma_dir; u8 dmaed:1; @@ -217,12 +217,12 @@ int mlx5vf_cmd_alloc_pd(struct mlx5_vf_migration_file *migf); void mlx5vf_cmd_dealloc_pd(struct mlx5_vf_migration_file *migf); void mlx5fv_cmd_clean_migf_resources(struct mlx5_vf_migration_file *migf); struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf); struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf); struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, unsigned long offset); diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 61d9b0f9146d..0925cd7d2f17 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -308,6 +308,7 @@ static struct mlx5_vhca_data_buffer * mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, u8 index, size_t required_length) { + u32 npages = DIV_ROUND_UP(required_length, PAGE_SIZE); struct mlx5_vhca_data_buffer *buf = migf->buf[index]; u8 chunk_num; @@ -315,12 +316,11 @@ mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, chunk_num = buf->stop_copy_chunk_num; buf->migf->buf[index] = NULL; /* Checking whether the pre-allocated buffer can fit */ - if (buf->allocated_length >= required_length) + if (buf->npages >= npages) return buf; mlx5vf_put_data_buffer(buf); - buf = mlx5vf_get_data_buffer(buf->migf, required_length, - DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(buf->migf, npages, DMA_FROM_DEVICE); if (IS_ERR(buf)) return buf; @@ -373,7 +373,8 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, u8 *to_buff; int ret; - header_buf = mlx5vf_get_data_buffer(migf, size, DMA_NONE); + header_buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(size, PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) return PTR_ERR(header_buf); @@ -388,7 +389,7 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, to_buff = kmap_local_page(page); memcpy(to_buff, &header, sizeof(header)); header_buf->length = sizeof(header); - data.stop_copy_size = cpu_to_le64(migf->buf[0]->allocated_length); + data.stop_copy_size = cpu_to_le64(migf->buf[0]->npages * PAGE_SIZE); memcpy(to_buff + sizeof(header), &data, sizeof(data)); header_buf->length += sizeof(data); kunmap_local(to_buff); @@ -437,15 +438,20 @@ static int mlx5vf_prep_stop_copy(struct mlx5vf_pci_core_device *mvdev, num_chunks = mvdev->chunk_mode ? 
MAX_NUM_CHUNKS : 1; for (i = 0; i < num_chunks; i++) { - buf = mlx5vf_get_data_buffer(migf, inc_state_size, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer( + migf, DIV_ROUND_UP(inc_state_size, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; } migf->buf[i] = buf; - buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; @@ -553,7 +559,8 @@ static long mlx5vf_precopy_ioctl(struct file *filp, unsigned int cmd, * We finished transferring the current state and the device has a * dirty state, save a new state to be ready for. */ - buf = mlx5vf_get_data_buffer(migf, inc_length, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(inc_length, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); mlx5vf_mark_err(migf); @@ -674,8 +681,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) if (track) { /* leave the allocated buffer ready for the stop-copy phase */ - buf = mlx5vf_alloc_data_buffer(migf, - migf->buf[0]->allocated_length, DMA_FROM_DEVICE); + buf = mlx5vf_alloc_data_buffer(migf, migf->buf[0]->npages, + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_pd; @@ -918,11 +925,14 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, goto out_unlock; break; case MLX5_VF_LOAD_STATE_PREP_HEADER_DATA: - if (vhca_buf_header->allocated_length < migf->record_size) { + { + u32 npages = DIV_ROUND_UP(migf->record_size, PAGE_SIZE); + + if (vhca_buf_header->npages < npages) { mlx5vf_free_data_buffer(vhca_buf_header); - migf->buf_header[0] = mlx5vf_alloc_data_buffer(migf, - migf->record_size, DMA_NONE); + migf->buf_header[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_NONE); if (IS_ERR(migf->buf_header[0])) { ret = PTR_ERR(migf->buf_header[0]); migf->buf_header[0] = NULL; @@ -935,6 +945,7 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, vhca_buf_header->start_pos = migf->max_pos; migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER_DATA; break; + } case MLX5_VF_LOAD_STATE_READ_HEADER_DATA: ret = mlx5vf_resume_read_header_data(migf, vhca_buf_header, &buf, &len, pos, &done); @@ -945,12 +956,13 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, { u64 size = max(migf->record_size, migf->stop_copy_prep_size); + u32 npages = DIV_ROUND_UP(size, PAGE_SIZE); - if (vhca_buf->allocated_length < size) { + if (vhca_buf->npages < npages) { mlx5vf_free_data_buffer(vhca_buf); - migf->buf[0] = mlx5vf_alloc_data_buffer(migf, - size, DMA_TO_DEVICE); + migf->buf[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_TO_DEVICE); if (IS_ERR(migf->buf[0])) { ret = PTR_ERR(migf->buf[0]); migf->buf[0] = NULL; @@ -1033,8 +1045,11 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) } migf->buf[0] = buf; - buf = mlx5vf_alloc_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_alloc_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_buf; From patchwork Tue Jul 2 09:09:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719178 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3948CC3065C for ; Tue, 2 Jul 2024 09:11:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 241386B00BB; Tue, 2 Jul 2024 05:11:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1CAEC6B00BC; Tue, 2 Jul 2024 05:11:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 043BD6B00BE; Tue, 2 Jul 2024 05:10:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id D8BC36B00BB for ; Tue, 2 Jul 2024 05:10:59 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 922481C22CC for ; Tue, 2 Jul 2024 09:10:59 +0000 (UTC) X-FDA: 82294243038.12.CCE4ACE Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf30.hostedemail.com (Postfix) with ESMTP id 2DE2980019 for ; Tue, 2 Jul 2024 09:10:56 +0000 (UTC) Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=IH4G738G; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf30.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911439; a=rsa-sha256; cv=none; b=war9nE+rBl/FPck7zpYJdm3avJUHPFi0/Is549qrl8lR7zNyqpuov7C8tXoc5dpRABOevC FKJUULqmMPjODPbXtPH3vcHyEHZotY3d3Pk6TgtGxgDXEnIyCEVI9lu/Cp/0y8cHXf3O6j oJ15wRjsYtQOy7hS1EJ9Pnl7RyEFMbc= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=IH4G738G; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf30.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911439; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=8O8SGV/3uQ565YneOj8KgnPNqyeiKyEahvUcFa0TAAc=; b=ZPa0Xgyrl8kYV4qkNh0L6Bex7KdNgFGdHWe+bpVrf3NGrClUDL0wMRSfoPBs+if9GboUFu k3SVho27OMus2vpRNhRK+8ng7AO8uXAheR9kkIswpJBrv3lwGbYSuOQfIRxPJAYXTWNajE VPvOjEQFwlYkvN5FR7OCT8nTqj73Euo= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 88DA5CE1CE4; Tue, 2 Jul 2024 09:10:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 07E58C116B1; Tue, 2 Jul 2024 09:10:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911453; bh=9UOnDPAidcjKmf/OJX9UmmceN7wfX+lWZhw4XRQHsBI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IH4G738GSwErqAnN9PIIpPbA4v1TGUQrymBnSy8qOyXwgifbXPO3u0KTdl/89ps+h p9f/IyRPlypt/XGrwR9R/A17oUdbh2wo7NHA51sch1NhyLLlQ3xFLkNVK+2gYK3L/p RiNnu7NpFOyPRV50Li1NyNWm2X77rSAW5sS6b//Ylakd8AaLb8hx4XC3S6rNJe2Z0h E4oVem7p4gi38QGhSck1S5LwRFj2SF8x9KKB6Dg0XuKwF2o2WEiiOWOFq0+Ztr+Um9 BaNIQR5Ix4Ytpr7fe4YRJVgcFLvt4s+kf6lOEEyBFj9lMldCW7PCA3KoX846v1mE+2 Oqtm9QZbIvssg== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg 
Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 14/18] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Date: Tue, 2 Jul 2024 12:09:44 +0300 Message-ID: <1818fe62955e127f49469595706b1eb40b02d352.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 2DE2980019 X-Stat-Signature: unm5a94yuder85hjnqdtc8peb18u9qoj X-Rspam-User: X-HE-Tag: 1719911456-233722 X-HE-Meta: U2FsdGVkX1937L0kg6Hmna/P9UyN07j9A4UfrBi1D4hob4Ap8nToAwEJVcPlC2JBdFiENg+NHTIY26tI28H3gXClsVLs/itz9IRV1X+1X0eiuxATOZJoK2n6d7a2QSmbhQP3hCwN9R9jc5gwNoPqigtZd91l6/IhdqYwSH/VscmolvqvAnrtW86SyQk3D9SPZ9NYiSRHLzkGkG5yIqUJyPLo+nbFUX3iyG4TRSWdS7lYOpYDICkXYupqmCu6gfW2b+9vTpnVTmj6xBRuG91Tju+FZSAc/tau4AqPsdRZ+sYK0bgjIDEV1cfTTcbGjrzkCr2fwGyaQeOIlTudYko3gwMIw1vJFuMe4v5dT557SusYP6TJleBdU789Ck6HARhy40VEL6PVkSjr+exvcYM2bLkOekShsRH+OGHyip+SHrjM9VSD6UhxL9HrPLhkloFkVh+ppTt4BlvhjnsFILbsuiF276OASg0gOEv78Nq+y1XojmediB/tVh3Xz5FRPuaTn2UBbnyax8qXDcJWHRijsLdGUKa2Flcw9NO5BDtl/VhrrvPOW6QL3mu+CIyG/4auXRf4n8m5fTh4GUxviMd5zety8GVplCrxrzYPWiH455+x19nauKZgN4amaxZGd8Qh58GAj8mAqgS6CzZbgaNducGEbSHzvUYLXHJ49rEOVt1KkYsVbauElk8DytAHTI2xbGAhRemWQ+xm4CyTmR19W8SzgOoVfYkMSbSmUFYhigNBiU5LkV0viYi62KzvceGKIYeBthz1PJl7y3JlknQWnLRrdfcqZnjkV8KaWFKpALQVRKSnPj2nYhGr/02A0kOYipvQsYgFwS+keApstnYKFwafzvQoLED1YVzcY6m8U/SDBB+6Qx2cvZJlteX2Mxe6XW7VPxr02Eg5dhgSKxnGTq3juP7qW9uAz0XjWvyE8iu/3ligNmza+dNtFHtDDavkNEg8uOuLEOLFTHbBWq8 QRBZT9MT DrhA3Blc1e3qhNB8hHLEHNZKlxReh/1knArRChJytiuFaM26ubsEMzsbCxwhYgJptM+PJFA6D4KsTRzLmWkLHsha3vrgW7Bei3UpPephGq6z4H+brow1beB159Ls9yf4/5QiucAYjM9ZIFqt0Ywyy8m97s9TPbxJtslFchKqUliHH5ExQg1/rCHQ7ZSXIdlxu5ov8fXcA8ZKT8Ds= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Change the creation of mkey to be performed in multiple steps: data allocation, DMA setup and actual call to HW to create that mkey. In this new flow, the whole input to MKEY command is saved to eliminate the need to keep array of pointers for DMA addresses for receive list and in the future patches for send list too. In addition to memory size reduce and elimination of unnecessary data movements to set MKEY input, the code is prepared for future reuse. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 154 ++++++++++++++++++++---------------- drivers/vfio/pci/mlx5/cmd.h | 4 +- 2 files changed, 88 insertions(+), 70 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index fdc3e515741f..adf57104555a 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -313,39 +313,21 @@ static int mlx5vf_cmd_get_vhca_id(struct mlx5_core_dev *mdev, u16 function_id, return ret; } -static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, - struct mlx5_vhca_data_buffer *buf, - struct mlx5_vhca_recv_buf *recv_buf, - u32 *mkey) +static u32 *alloc_mkey_in(u32 npages, u32 pdn) { - size_t npages = buf ? 
buf->npages : recv_buf->npages; - int err = 0, inlen; - __be64 *mtt; + int inlen; void *mkc; u32 *in; inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(*mtt) * round_up(npages, 2); + sizeof(__be64) * round_up(npages, 2); - in = kvzalloc(inlen, GFP_KERNEL); + in = kvzalloc(inlen, GFP_KERNEL_ACCOUNT); if (!in) - return -ENOMEM; + return NULL; MLX5_SET(create_mkey_in, in, translations_octword_actual_size, DIV_ROUND_UP(npages, 2)); - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); - - if (buf) { - struct sg_dma_page_iter dma_iter; - - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); - } else { - int i; - - for (i = 0; i < npages; i++) - *mtt++ = cpu_to_be64(recv_buf->dma_addrs[i]); - } mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT); @@ -359,9 +341,29 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2)); MLX5_SET64(mkc, mkc, len, npages * PAGE_SIZE); - err = mlx5_core_create_mkey(mdev, mkey, in, inlen); - kvfree(in); - return err; + + return in; +} + +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, + struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, + u32 *mkey) +{ + __be64 *mtt; + int inlen; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + if (buf) { + struct sg_dma_page_iter dma_iter; + + for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) + *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + } + + inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -374,20 +376,27 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->npages) + if (buf->mkey_in || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); if (ret) return ret; - ret = _create_mkey(mdev, buf->migf->pdn, buf, NULL, &buf->mkey); - if (ret) + buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); + if (!buf->mkey_in) { + ret = -ENOMEM; goto err; + } - buf->dmaed = true; + ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + if (ret) + goto err_create_mkey; return 0; + +err_create_mkey: + kvfree(buf->mkey_in); err: dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; @@ -401,8 +410,9 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) lockdep_assert_held(&migf->mvdev->state_mutex); WARN_ON(migf->mvdev->mdev_detach); - if (buf->dmaed) { + if (buf->mkey_in) { mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + kvfree(buf->mkey_in); dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, buf->dma_dir, 0); } @@ -779,7 +789,7 @@ int mlx5vf_cmd_load_vhca_state(struct mlx5vf_pci_core_device *mvdev, if (mvdev->mdev_detach) return -ENOTCONN; - if (!buf->dmaed) { + if (!buf->mkey_in) { err = mlx5vf_dma_data_buffer(buf); if (err) return err; @@ -1380,56 +1390,54 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, kvfree(recv_buf->page_list); return -ENOMEM; } +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in) +{ + dma_addr_t addr; + __be64 *mtt; + int i; + + mtt = (__be64 
*)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_single(mdev->device, addr, PAGE_SIZE, + DMA_FROM_DEVICE); + } +} -static int register_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in) { - int i, j; + dma_addr_t addr; + __be64 *mtt; + int i; - recv_buf->dma_addrs = kvcalloc(recv_buf->npages, - sizeof(*recv_buf->dma_addrs), - GFP_KERNEL_ACCOUNT); - if (!recv_buf->dma_addrs) - return -ENOMEM; + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = 0; i < recv_buf->npages; i++) { - recv_buf->dma_addrs[i] = dma_map_page(mdev->device, - recv_buf->page_list[i], - 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, recv_buf->dma_addrs[i])) + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, + DMA_FROM_DEVICE); + if (dma_mapping_error(mdev->device, addr)) goto error; + + *mtt++ = cpu_to_be64(addr); } + return 0; error: - for (j = 0; j < i; j++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[j], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); + unregister_dma_pages(mdev, i, mkey_in); return -ENOMEM; } -static void unregister_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - for (i = 0; i < recv_buf->npages; i++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[i], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); -} - static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) { struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + kvfree(recv_buf->mkey_in); free_recv_pages(&qp->recv_buf); } @@ -1445,18 +1453,28 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, if (err < 0) return err; - err = register_dma_recv_pages(mdev, recv_buf); - if (err) + recv_buf->mkey_in = alloc_mkey_in(npages, pdn); + if (!recv_buf->mkey_in) { + err = -ENOMEM; goto end; + } + + err = register_dma_pages(mdev, npages, recv_buf->page_list, + recv_buf->mkey_in); + if (err) + goto err_register_dma; - err = _create_mkey(mdev, pdn, NULL, recv_buf, &recv_buf->mkey); + err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, + &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in); +err_register_dma: + kvfree(recv_buf->mkey_in); end: free_recv_pages(recv_buf); return err; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 7d4a833b6900..25dd6ff54591 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -58,8 +58,8 @@ struct mlx5_vhca_data_buffer { u64 length; u32 npages; u32 mkey; + u32 *mkey_in; enum dma_data_direction dma_dir; - u8 dmaed:1; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; @@ -133,8 +133,8 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; - dma_addr_t *dma_addrs; u32 next_rq_offset; + u32 *mkey_in; u32 mkey; }; From patchwork Tue Jul 2 09:09:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719183 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8134AC3064D for ; Tue, 2 Jul 2024 09:11:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1A4526B00C8; Tue, 2 Jul 2024 05:11:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1553F6B00C9; Tue, 2 Jul 2024 05:11:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E4F2E6B00CA; Tue, 2 Jul 2024 05:11:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id BC5506B00C8 for ; Tue, 2 Jul 2024 05:11:20 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id EFBCB121C0B for ; Tue, 2 Jul 2024 09:11:19 +0000 (UTC) X-FDA: 82294243878.15.7E09BC9 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf25.hostedemail.com (Postfix) with ESMTP id B0A0AA000E for ; Tue, 2 Jul 2024 09:11:17 +0000 (UTC) Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="NhTi/KqE"; spf=pass (imf25.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911456; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=txb2CNn7jIKzUxY/PPc6Z0wBMF1GDdG0pioCYdqCWUs=; b=r1sYdmr6Ez7Dasa2OgJPfFPVbBXgczk5QEbW4kBwi9E1aqFfG3EuGw/QZvAq5PBWRYa1if nMcD8cmGtJPIpshDt9Vvylvmf+j226lhNH9zHvWEGYXfx2QKEHCu4WED7XYupsmBPvlUqw b2Ix5CRb4GKy9AylFSbZuq9EIn55C3k= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911456; a=rsa-sha256; cv=none; b=pZOTJ2o4uF1t0szXNHvfq023zK7rbqEf9Oy0M8yxrvaW2d4ZObQviB/GkniyyR2fsCDhOj oR52O2krk7tW6g8ZlnaGxvirBTtvBZqROUMIrGKH6yveiOMHXucFFFA1WAMlGzRGuVZFFw CRLQeDtaB8i1SdKYeCri6QQ+Fwtx++w= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="NhTi/KqE"; spf=pass (imf25.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id CE6B1CE1CF2; Tue, 2 Jul 2024 09:11:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4CABDC116B1; Tue, 2 Jul 2024 09:11:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911474; bh=PIW7uecyqbo8l+/QQ3xYJoQlnbA8PPRXn3HjOM7nqAY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NhTi/KqELO0ntEOXqd97jqF285urhVtp0ovXZbMP4lcmZaDwcgPg/5W9aGAWMevzM GDilwAbddqGsrw6Zr4IKwgEv/nWRXIl2ieuB+HDrivIcEW+HlwLJRNlmEiKZLN2rHp 0muXrcmfjl0zBQ2vUDmhRG8u6kxF++q1H1AnwZx2MUDaHup1Eyt+Cx95IXnEJ7sI4X am+hv9pcwPTPSt+yJK71cxH2vUFES69TrTtxJVdEaMHZrLleT2dLw+rgqcyKXSgXqu 
Hh1xdmh+LLCL2umU4Gf1ef/Cnv3MoUC9ltluYS0I8qeptgpVu09+wfyU+0sTdQu4iU nF6+LSv5nDujA== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 15/18] vfio/mlx5: Explicitly store page list Date: Tue, 2 Jul 2024 12:09:45 +0300 Message-ID: <2691374c551bc276ec135ea58207a440e34f7be4.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: B0A0AA000E X-Stat-Signature: rw7fq8twh6riupbhdsac4959htqtiyou X-HE-Tag: 1719911477-307784 X-HE-Meta: U2FsdGVkX18ssh8o8afm50VmCXHK5h4AS29/PJt4BlvZb4PsQWH1Hp9bmSDO2gXddl+MOMGZVgpZdk1yVPPZtSyn/to7GYJ6kz9Ts7e03JSk54mgjMY5yRNDj9lMQlHJxRGuhW2DMkCuhhIAoOzt2azjhRJMW48U3xo4Tlbhx9GejbPDONDW/ooloC05mEsLq6zlZIkbvwy2NpzKyqq1dgepixvGbcVI27iE9hdt2fSnSS42zoXrHihXNSyhuvLOqExj3/+CjU/YC1hzKDWm6cJZYaWr+bNqPUcEHGNUKXSL+HpY98TAnwea17l/m6WtEHmp+HrXNE1k7+MHCD9CXsxMyrMfHSgR5k8Of3yHbt19x0kgWnqo6xUyICxAiMvsnXtRdnXoxB/nyr/N0+YnqmZUJW2M6u/HwmGU4mi3qUAR91BVvTQuOGPtWnaNmLbq/Tp9OM9MugIPQ1/aULBgulJVm7YtfSCo2OWt2vOrweOeKOGqA8L+midn1BxV3NA8FFmMEoemE645PdnPKYGW+Rb5OpuL6cBmMAz8NCPD5HifoY10y+uQRZ5UPrUxJxr+HilmVgafxTArCRUYVeA/9X38UvLBG8n9AydcSz6WHL6YEfYMnRdht9TpCGdFbG6V3bFGx1U/ScncfH3MhucklIcfULHGrx3mY8B9QHz+ZWA2r48/QF6sR/2n8mbWZIPddiDRLrjyk+1wN6t2mdYQ3ZsGDkPcV34L22UEYC7wgK6VqsQLLzoinkvCY6jbMzCysfAMsWgyk8lD1WC2HYtYP8KiONi/knCzq/d8MKU7M5beEjxbQjZz65APtk1P59y8REwgSPi11IkbMgIg/ggtC+WQkHk3LKDFmBeYERiskt2m4PBMwW02EPpX4ZruuVO8jESCBRL/I1QEA/A+DOPvKRjH5un81qbAsNUm6vnxuVNvlY582T/snFAUMnxSNQoKhXPJcJdmra+XTBB0iWg iOAMrLoD kRBF/919zidiLpAbukOeaA4WW5gliGEIcNSWbk1fl/xcOBRuJO5YgKhjQdicnsxJoe8VVEkugb7KKrVFFw+bFEZnueCYJyaQumxml4tRY7TrYB94XyoIlBmfIpZZ2TUGHXXUhAn+08lPFCeT7aVWocpqLQMZsf/7sqbC3V3x2IM/ZhQsdodHTlVxPAA84+DdLkWqN X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to removal scatter-gather table and unifying receive and send list, explicitly store page list. 
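As a rough illustration of the new allocation scheme, the stand-alone sketch below grows an explicit page array the way mlx5vf_add_migration_pages() does after this patch. It is illustrative only: the helper name and signature are invented for the example, while the kvrealloc() and alloc_pages_bulk_array() calls follow the kernel APIs exactly as this patch uses them (kvrealloc() is called with the old size, as in the diff).

/*
 * Sketch, not part of the patch: grow an explicit struct page array in
 * place of a temporary scratch list. Assumes <linux/mm.h>, <linux/slab.h>
 * and <linux/types.h>.
 */
static int grow_page_list(struct page ***page_list, u32 *npages, u32 extra)
{
	size_t old_size = *npages * sizeof(**page_list);
	size_t new_size = old_size + extra * sizeof(**page_list);
	struct page **tmp;
	unsigned long filled;

	/* Extend the array itself; new tail entries start out zeroed. */
	tmp = kvrealloc(*page_list, old_size, new_size,
			GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!tmp)
		return -ENOMEM;
	*page_list = tmp;

	/* Bulk-allocate pages straight into the tail of the array. */
	while (extra) {
		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, extra,
						*page_list + *npages);
		if (!filled)
			return -ENOMEM;
		*npages += filled;
		extra -= filled;
	}
	return 0;
}

Keeping the pages in one long-lived array is what lets the next patch drop the intermediate scatter-gather table entirely.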
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 33 ++++++++++++++++----------------- drivers/vfio/pci/mlx5/cmd.h | 1 + 2 files changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index adf57104555a..cb23f03d58f4 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -421,6 +421,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) __free_page(sg_page_iter_page(&sg_iter)); sg_free_append_table(&buf->table); + kvfree(buf->page_list); kfree(buf); } @@ -428,44 +429,42 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, unsigned int npages) { unsigned int to_alloc = npages; + size_t old_size, new_size; struct page **page_list; unsigned long filled; unsigned int to_fill; int ret; - to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); - page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT); + to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list)); + old_size = buf->npages * sizeof(*buf->page_list); + new_size = old_size + to_alloc * sizeof(*buf->page_list); + page_list = kvrealloc(buf->page_list, old_size, new_size, + GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page_list) return -ENOMEM; + buf->page_list = page_list; + do { filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - page_list); - if (!filled) { - ret = -ENOMEM; - goto err; - } + buf->page_list + buf->npages); + if (!filled) + return -ENOMEM; + to_alloc -= filled; ret = sg_alloc_append_table_from_pages( - &buf->table, page_list, filled, 0, + &buf->table, buf->page_list + buf->npages, filled, 0, filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, GFP_KERNEL_ACCOUNT); if (ret) - goto err; + return ret; buf->npages += filled; - /* clean input for another bulk allocation */ - memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*page_list)); + PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); - kvfree(page_list); return 0; - -err: - kvfree(page_list); - return ret; } struct mlx5_vhca_data_buffer * diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 25dd6ff54591..5b764199db53 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -53,6 +53,7 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { + struct page **page_list; struct sg_append_table table; loff_t start_pos; u64 length; From patchwork Tue Jul 2 09:09:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719180 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23742C30658 for ; Tue, 2 Jul 2024 09:11:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A58436B00C3; Tue, 2 Jul 2024 05:11:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A05E76B00C4; Tue, 2 Jul 2024 05:11:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 880686B00C5; Tue, 2 Jul 2024 05:11:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 
61D156B00C3 for ; Tue, 2 Jul 2024 05:11:08 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 1EF1A8203C for ; Tue, 2 Jul 2024 09:11:08 +0000 (UTC) X-FDA: 82294243416.05.CEA82D5 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by imf23.hostedemail.com (Postfix) with ESMTP id CCA7A140011 for ; Tue, 2 Jul 2024 09:11:05 +0000 (UTC) Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=UxFYX5T8; spf=pass (imf23.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1719911445; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=zwrKUCzADWV/reomyey7/DXNocFin+1WODzqFL7FKto=; b=eTcIC/lWJ9hQ0HBK+U04JiHeT/e2ORKSaXflBpCS0jvS/WUu8YP3WjgktUlXWonFo8Ou+U qvBjVZGHSmUTnIAL/HQ921QvvmaJXaDPQ92yV2H9TQbOLL7NbQouRt0Kna0gsLFFnZ46pK jZfD16mDj/bbqMv+IReUy0T/cPGxCjc= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1719911445; a=rsa-sha256; cv=none; b=dkmon+Iz3nOVpoYe8GPiJzmX6ztWaSUSR7UGkX8WDHf98aIQ+dBqwoTTy5Avl4hKUk6r0x LwdGMIG+OO0x7QCJmpYkEArJsLJtx+9f1La4LdNqzQOqMDXK9NBNKbJFM/gj0R7QJEVkwj Cf2FR5GivtcHCMSUouW+CVcDVkx4EFg= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=UxFYX5T8; spf=pass (imf23.hostedemail.com: domain of leon@kernel.org designates 145.40.73.55 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id C1323CE1CF7; Tue, 2 Jul 2024 09:11:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3FBF7C4AF0C; Tue, 2 Jul 2024 09:11:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1719911462; bh=fa03j06JQsZdSB5w8kukQGr1qEK3gqfe4kVoo/wFmOU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UxFYX5T8UxyxQNqh0dNbCpGazWGtmPierV/QvwE9BGOpvSWKPtM6wsts3yNmS5dqL tH5CgKntyE6eqP9YqyyoyG6O7j1aw3evZjy6f1qzlgNBPBlk1iiY4Ykr5qqF3To1RQ KUEah33CnUzMnFc49cw5jU5avrg/pQowUAbFlRIsygLY92LOPvWfpYjXI6LM3NfuF9 NQD+XRyxBALu/EiXi7jhLvldrcxv78UCmLUaPa2f1JvSQIDvVX3+xit5nWY6qlb5Ei XqArm6GygNZMsWo8UtofSP2LhTuU2Nvje1MspK6zjEvpFFN/4kMz/wRXyU5akAE+yK bIlJRc6et2iPQ== From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Leon Romanovsky , Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 16/18] vfio/mlx5: Convert vfio to use DMA link API Date: Tue, 2 Jul 2024 12:09:46 +0300 Message-ID: <34e6da6903d31e26dbc08138eb37d1ccae3b2d3d.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 
In-Reply-To: References: MIME-Version: 1.0 From: Leon Romanovsky Remove the intermediate scatter-gather table, as it is not needed once the DMA link API is used. This conversion drastically reduces the memory used to manage that table.
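As a rough sketch (not part of the patch), assuming the dma_iova_attrs/dma_iova_state helpers proposed earlier in this series (dma_alloc_iova(), dma_get_memory_type(), dma_can_use_iova(), dma_start_range(), dma_link_range(), dma_end_range()), the mapping flow that register_dma_pages() below implements looks like this; sketch_map_pages() is a made-up name and error unwinding is trimmed:

static int sketch_map_pages(struct device *dev, struct page **pages,
			    u32 npages, struct dma_iova_attrs *iova,
			    struct dma_memory_type *type)
{
	struct dma_iova_state state = {};
	u32 i;
	int err;

	/* Caller fills iova->dir and iova->attrs; reserve one IOVA range. */
	iova->dev = dev;
	iova->size = (size_t)npages * PAGE_SIZE;
	err = dma_alloc_iova(iova);
	if (err)
		return err;

	/* All pages of the buffer share one memory type; sample the first. */
	dma_get_memory_type(pages[0], type);
	state.iova = iova;
	state.type = type;

	if (dma_can_use_iova(&state, PAGE_SIZE)) {
		/* One dma_link_range() call per page, no scatterlist at all. */
		err = dma_start_range(&state);
		if (err)
			return err;		/* unwind omitted in this sketch */
		for (i = 0; i < npages && !err; i++)
			err = dma_link_range(&state, page_to_phys(pages[i]),
					     PAGE_SIZE);
		dma_end_range(&state);
	} else {
		/* Fallback: classic per-page dma_map_page_attrs() mapping. */
		for (i = 0; i < npages; i++) {
			dma_addr_t addr = dma_map_page_attrs(dev, pages[i], 0,
							     PAGE_SIZE,
							     iova->dir,
							     iova->attrs);
			if (dma_mapping_error(dev, addr))
				return -ENOMEM;	/* unwind omitted */
		}
	}

	return err;
}

With the scatterlist gone, the driver keeps only the plain page_list array, and mlx5vf_get_migration_page() (see the main.c hunk below) becomes a direct buf->page_list[offset / PAGE_SIZE] lookup instead of a scatterlist walk.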
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 241 ++++++++++++++++++++--------------- drivers/vfio/pci/mlx5/cmd.h | 10 +- drivers/vfio/pci/mlx5/main.c | 33 +---- 3 files changed, 143 insertions(+), 141 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index cb23f03d58f4..4520eaf78767 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -345,25 +345,106 @@ static u32 *alloc_mkey_in(u32 npages, u32 pdn) return in; } -static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, - struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, u32 *mkey_in, u32 *mkey) { + int inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +} + +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in, struct dma_iova_attrs *iova, + struct dma_memory_type *type) +{ + struct dma_iova_state state = {}; + dma_addr_t addr; __be64 *mtt; - int inlen; + int i; + + WARN_ON_ONCE(iova->dir == DMA_NONE); + + state.iova = iova; + state.type = type; + state.range_size = PAGE_SIZE * npages; + + if (dma_can_use_iova(&state, PAGE_SIZE)) { + dma_unlink_range(&state); + } else { + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, + klm_pas_mtt); + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_page_attrs(iova->dev, addr, PAGE_SIZE, + iova->dir, iova->attrs); + } + } + dma_free_iova(iova); +} + +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in, + struct dma_iova_attrs *iova, + struct dma_memory_type *type) +{ + struct dma_iova_state state = {}; + dma_addr_t addr; + bool use_iova; + __be64 *mtt; + int i, err; + + WARN_ON_ONCE(iova->dir == DMA_NONE); + + iova->dev = mdev->device; + iova->size = npages * PAGE_SIZE; + err = dma_alloc_iova(iova); + if (err) + return err; + + /* + * All VFIO pages are of the same type, and it is enough + * to check one page only + */ + dma_get_memory_type(page_list[0], type); + state.iova = iova; + state.type = type; + + use_iova = dma_can_use_iova(&state, PAGE_SIZE); mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - if (buf) { - struct sg_dma_page_iter dma_iter; + if (use_iova) + err = dma_start_range(&state); + if (err) { + dma_free_iova(iova); + return err; + } + for (i = 0; i < npages; i++) { + if (use_iova) { + err = dma_link_range(&state, page_to_phys(page_list[i]), + PAGE_SIZE); + addr = iova->addr; + } else { + addr = dma_map_page_attrs(iova->dev, page_list[i], 0, + PAGE_SIZE, iova->dir, + iova->attrs); + err = dma_mapping_error(mdev->device, addr); + } + if (err) + goto error; - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + /* In IOVA case, we can use one MTT entry for whole buffer */ + if (i == 0 || !use_iova) + *mtt++ = cpu_to_be64(addr); } + if (use_iova) + dma_end_range(&state); - inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(__be64) * round_up(npages, 2); + return 0; - return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +error: + unregister_dma_pages(mdev, i, mkey_in, iova, type); + return err; } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -379,49 +460,56 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (buf->mkey_in || !buf->npages) return -EINVAL; - ret = 
dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); - if (ret) - return ret; - buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); - if (!buf->mkey_in) { - ret = -ENOMEM; - goto err; - } + if (!buf->mkey_in) + return -ENOMEM; - ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + ret = register_dma_pages(mdev, buf->npages, buf->page_list, + buf->mkey_in, &buf->iova, &buf->type); + if (ret) + goto err_register_dma; + + ret = create_mkey(mdev, buf->npages, buf->mkey_in, &buf->mkey); if (ret) goto err_create_mkey; return 0; err_create_mkey: + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, &buf->iova, + &buf->type); +err_register_dma: kvfree(buf->mkey_in); -err: - dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; } +static void free_page_list(u32 npages, struct page **page_list) +{ + int i; + + /* Undo alloc_pages_bulk_array() */ + for (i = npages - 1; i >= 0; i--) + __free_page(page_list[i]); + + kvfree(page_list); +} + void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) { - struct mlx5_vf_migration_file *migf = buf->migf; - struct sg_page_iter sg_iter; + struct mlx5vf_pci_core_device *mvdev = buf->migf->mvdev; + struct mlx5_core_dev *mdev = mvdev->mdev; - lockdep_assert_held(&migf->mvdev->state_mutex); - WARN_ON(migf->mvdev->mdev_detach); + lockdep_assert_held(&mvdev->state_mutex); + WARN_ON(mvdev->mdev_detach); if (buf->mkey_in) { - mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + mlx5_core_destroy_mkey(mdev, buf->mkey); + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, + &buf->iova, &buf->type); kvfree(buf->mkey_in); - dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, - buf->dma_dir, 0); } - /* Undo alloc_pages_bulk_array() */ - for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) - __free_page(sg_page_iter_page(&sg_iter)); - sg_free_append_table(&buf->table); - kvfree(buf->page_list); + free_page_list(buf->npages, buf->page_list); kfree(buf); } @@ -432,10 +520,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, size_t old_size, new_size; struct page **page_list; unsigned long filled; - unsigned int to_fill; - int ret; - to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list)); old_size = buf->npages * sizeof(*buf->page_list); new_size = old_size + to_alloc * sizeof(*buf->page_list); page_list = kvrealloc(buf->page_list, old_size, new_size, @@ -446,22 +531,13 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, buf->page_list = page_list; do { - filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - buf->page_list + buf->npages); + filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_alloc, + buf->page_list + buf->npages); if (!filled) return -ENOMEM; to_alloc -= filled; - ret = sg_alloc_append_table_from_pages( - &buf->table, buf->page_list + buf->npages, filled, 0, - filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, - GFP_KERNEL_ACCOUNT); - - if (ret) - return ret; buf->npages += filled; - to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); return 0; @@ -478,7 +554,7 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, if (!buf) return ERR_PTR(-ENOMEM); - buf->dma_dir = dma_dir; + buf->iova.dir = dma_dir; buf->migf = migf; if (npages) { ret = mlx5vf_add_migration_pages(buf, npages); @@ -521,7 +597,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, spin_lock_irq(&migf->list_lock); 
list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { - if (buf->dma_dir == dma_dir) { + if (buf->iova.dir == dma_dir) { list_del_init(&buf->buf_elm); if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); @@ -1343,17 +1419,6 @@ static void mlx5vf_destroy_qp(struct mlx5_core_dev *mdev, kfree(qp); } -static void free_recv_pages(struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - /* Undo alloc_pages_bulk_array() */ - for (i = 0; i < recv_buf->npages; i++) - __free_page(recv_buf->page_list[i]); - - kvfree(recv_buf->page_list); -} - static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, unsigned int npages) { @@ -1389,45 +1454,6 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, kvfree(recv_buf->page_list); return -ENOMEM; } -static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = npages - 1; i >= 0; i--) { - addr = be64_to_cpu(mtt[i]); - dma_unmap_single(mdev->device, addr, PAGE_SIZE, - DMA_FROM_DEVICE); - } -} - -static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - struct page **page_list, u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - - for (i = 0; i < npages; i++) { - addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, addr)) - goto error; - - *mtt++ = cpu_to_be64(addr); - } - - return 0; - -error: - unregister_dma_pages(mdev, i, mkey_in); - return -ENOMEM; -} static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) @@ -1435,9 +1461,10 @@ static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in, + &recv_buf->iova, &recv_buf->type); kvfree(recv_buf->mkey_in); - free_recv_pages(&qp->recv_buf); + free_page_list(recv_buf->npages, recv_buf->page_list); } static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, @@ -1458,24 +1485,26 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, goto end; } + recv_buf->iova.dir = DMA_FROM_DEVICE; err = register_dma_pages(mdev, npages, recv_buf->page_list, - recv_buf->mkey_in); + recv_buf->mkey_in, &recv_buf->iova, + &recv_buf->type); if (err) goto err_register_dma; - err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, - &recv_buf->mkey); + err = create_mkey(mdev, npages, recv_buf->mkey_in, &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_pages(mdev, npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in, &recv_buf->iova, + &recv_buf->type); err_register_dma: kvfree(recv_buf->mkey_in); end: - free_recv_pages(recv_buf); + free_page_list(npages, recv_buf->page_list); return err; } diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 5b764199db53..1b2552c238d8 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -53,21 +53,17 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { + struct dma_iova_attrs iova; struct page **page_list; - struct sg_append_table table; + struct dma_memory_type type; loff_t start_pos; u64 length; u32 
npages; u32 mkey; u32 *mkey_in; - enum dma_data_direction dma_dir; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; - /* Optimize mlx5vf_get_migration_page() for sequential access */ - struct scatterlist *last_offset_sg; - unsigned int sg_last_entry; - unsigned long last_offset; }; struct mlx5vf_async_data { @@ -132,8 +128,10 @@ struct mlx5_vhca_cq { }; struct mlx5_vhca_recv_buf { + struct dma_iova_attrs iova; u32 npages; struct page **page_list; + struct dma_memory_type type; u32 next_rq_offset; u32 *mkey_in; u32 mkey; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 0925cd7d2f17..ddadf8ccae87 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -34,35 +34,10 @@ static struct mlx5vf_pci_core_device *mlx5vf_drvdata(struct pci_dev *pdev) core_device); } -struct page * -mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, - unsigned long offset) +struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, + unsigned long offset) { - unsigned long cur_offset = 0; - struct scatterlist *sg; - unsigned int i; - - /* All accesses are sequential */ - if (offset < buf->last_offset || !buf->last_offset_sg) { - buf->last_offset = 0; - buf->last_offset_sg = buf->table.sgt.sgl; - buf->sg_last_entry = 0; - } - - cur_offset = buf->last_offset; - - for_each_sg(buf->last_offset_sg, sg, - buf->table.sgt.orig_nents - buf->sg_last_entry, i) { - if (offset < sg->length + cur_offset) { - buf->last_offset_sg = sg; - buf->sg_last_entry += i; - buf->last_offset = cur_offset; - return nth_page(sg_page(sg), - (offset - cur_offset) / PAGE_SIZE); - } - cur_offset += sg->length; - } - return NULL; + return buf->page_list[offset / PAGE_SIZE]; } static void mlx5vf_disable_fd(struct mlx5_vf_migration_file *migf) @@ -121,7 +96,7 @@ static void mlx5vf_buf_read_done(struct mlx5_vhca_data_buffer *vhca_buf) struct mlx5_vf_migration_file *migf = vhca_buf->migf; if (vhca_buf->stop_copy_chunk_num) { - bool is_header = vhca_buf->dma_dir == DMA_NONE; + bool is_header = vhca_buf->iova.dir == DMA_NONE; u8 chunk_num = vhca_buf->stop_copy_chunk_num; size_t next_required_umem_size = 0; From patchwork Tue Jul 2 09:09:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B820C30658 for ; Tue, 2 Jul 2024 09:11:14 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 416146B00C4; Tue, 2 Jul 2024 05:11:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3C6CC6B00C5; Tue, 2 Jul 2024 05:11:09 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2668D6B00C6; Tue, 2 Jul 2024 05:11:09 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 06A256B00C4 for ; Tue, 2 Jul 2024 05:11:08 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id AD14A8203C for ; Tue, 2 Jul 2024 09:11:08 +0000 (UTC) X-FDA: 82294243416.20.9793324 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by 
imf18.hostedemail.com (Postfix) with ESMTP id 11FAF1C0008 for ; Tue, 2 Jul 2024 09:11:06 +0000 (UTC) From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 17/18] block: export helper to get segment max size Date: Tue, 2 Jul 2024 12:09:47 +0300 Message-ID: <3649c1dc673ea0a49a90f3e01b76ef91fb90f076.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0
From: Chaitanya Kulkarni Export get_max_segment_size() so that drivers can use it to create DMA mappings when they receive a request. Signed-off-by: Chaitanya Kulkarni Signed-off-by: Leon Romanovsky --- block/blk-merge.c | 3 ++- include/linux/blk-mq.h | 3 +++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 8534c35e0497..0561e728ef95 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -190,7 +190,7 @@ static inline unsigned get_max_io_size(struct bio *bio, * * Returns the maximum number of bytes that can be added as a single segment.
*/ -static inline unsigned get_max_segment_size(const struct queue_limits *lim, +inline unsigned get_max_segment_size(const struct queue_limits *lim, struct page *start_page, unsigned long offset) { unsigned long mask = lim->seg_boundary_mask; @@ -203,6 +203,7 @@ static inline unsigned get_max_segment_size(const struct queue_limits *lim, */ return min(mask - offset, (unsigned long)lim->max_segment_size - 1) + 1; } +EXPORT_SYMBOL_GPL(get_max_segment_size); /** * bvec_split_segs - verify whether or not a bvec should be split in the middle diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 89ba6b16fe8b..008c77c9b518 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -1150,4 +1150,7 @@ static inline int blk_rq_map_sg(struct request_queue *q, struct request *rq, } void blk_dump_rq_flags(struct request *, char *); +unsigned get_max_segment_size(const struct queue_limits *lim, + struct page *start_page, unsigned long offset); + #endif /* BLK_MQ_H */ From patchwork Tue Jul 2 09:09:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13719182
From: Leon Romanovsky To: Jens Axboe , Jason Gunthorpe , Robin Murphy , Joerg Roedel , Will Deacon , Keith Busch , Christoph Hellwig , "Zeng, Oak" , Chaitanya Kulkarni Cc: Sagi Grimberg , Bjorn Helgaas , Logan Gunthorpe , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , Marek Szyprowski , Jérôme Glisse , Andrew Morton , linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org Subject: [RFC PATCH v1 18/18] nvme-pci: use new dma API Date: Tue, 2 Jul 2024 12:09:48 +0300 Message-ID: <47eb0510b0a6aa52d9f5665d75fa7093dd6af53f.1719909395.git.leon@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: References: MIME-Version: 1.0
From: Chaitanya Kulkarni Introduce a new structure, iod_dma_map, to hold the DMA mapping for each I/O. This includes the iova state and the mapped addresses from dma_link_range() or dma_map_page_attrs(). Replace the existing sg_table in nvme_iod with struct iod_dma_map. The size difference: struct nvme_iod with struct sg_table is 184 bytes, while struct nvme_iod with struct iod_dma_map is 176 bytes. In nvme_map_data(), allocate the dma_map from the mempool and the iova using dma_alloc_iova(). Obtain the memory type from the first bvec of the first bio of the request and use that to decide whether we want to use the iova or not. In the newly added function nvme_rq_dma_map(), perform the DMA mapping for the bvec pages using nvme_dma_link_page(). Additionally, if an NVMe SGL is provided, build the SGL entries inline while creating this mapping to avoid an extra traversal. Call nvme_rq_dma_map() from nvme_pci_setup_prps() and nvme_pci_setup_sgls(). For the NVMe SGL case, nvme_rq_dma_map() will handle building the SGL inline. To build PRPs, use iod->dma_map->dma_link_address in nvme_pci_setup_prps() and increment the counter appropriately to retrieve the next set of DMA addresses. This demonstrates how the new DMA API can fit into the NVMe driver and replace the old DMA APIs. As this is an RFC, I expect more robust error handling, optimizations, and in-depth testing for the final version once we agree on the DMA API architecture. Following is the performance comparison for the existing DMA API case with sg_table and with iod_dma_map; once we have agreement on the new DMA API design I intend to get similar profiling numbers for the new DMA API. sgl (sg_table + old dma API) vs no_sgl (iod_dma_map + new DMA API) :- block size IOPS (k) Average of 3 4K -------------------------------------------------------------- sg-list-fio-perf.bs-4k-1.fio: 68.6 sg-list-fio-perf.bs-4k-2.fio: 68 68.36 sg-list-fio-perf.bs-4k-3.fio: 68.5 no-sg-list-fio-perf.bs-4k-1.fio: 68.7 no-sg-list-fio-perf.bs-4k-2.fio: 68.5 68.43 no-sg-list-fio-perf.bs-4k-3.fio: 68.1 % Change default vs new DMA API = +0.0975% 8K -------------------------------------------------------------- sg-list-fio-perf.bs-8k-1.fio: 67 sg-list-fio-perf.bs-8k-2.fio: 67.1 67.03 sg-list-fio-perf.bs-8k-3.fio: 67 no-sg-list-fio-perf.bs-8k-1.fio: 66.7 no-sg-list-fio-perf.bs-8k-2.fio: 66.7 66.7 no-sg-list-fio-perf.bs-8k-3.fio: 66.7 % Change default vs new DMA API = +0.4993% 16K -------------------------------------------------------------- sg-list-fio-perf.bs-16k-1.fio: 63.8 sg-list-fio-perf.bs-16k-2.fio: 63.4 63.5 sg-list-fio-perf.bs-16k-3.fio: 63.3 no-sg-list-fio-perf.bs-16k-1.fio: 63.5 no-sg-list-fio-perf.bs-16k-2.fio: 63.4 63.33 no-sg-list-fio-perf.bs-16k-3.fio: 63.1 % Change default vs new DMA API = -0.2632% 32K -------------------------------------------------------------- sg-list-fio-perf.bs-32k-1.fio: 59.3 sg-list-fio-perf.bs-32k-2.fio: 59.3 59.36 sg-list-fio-perf.bs-32k-3.fio: 59.5 no-sg-list-fio-perf.bs-32k-1.fio: 59.5 no-sg-list-fio-perf.bs-32k-2.fio: 59.6 59.43 no-sg-list-fio-perf.bs-32k-3.fio: 59.2 % Change default vs new DMA API = +0.1122% 64K -------------------------------------------------------------- sg-list-fio-perf.bs-64k-1.fio: 53.7 sg-list-fio-perf.bs-64k-2.fio: 53.4 53.56 sg-list-fio-perf.bs-64k-3.fio: 53.6 no-sg-list-fio-perf.bs-64k-1.fio: 53.5 no-sg-list-fio-perf.bs-64k-2.fio: 53.8 53.63 no-sg-list-fio-perf.bs-64k-3.fio: 53.6 % Change default vs new DMA API = +0.1246% 128K
-------------------------------------------------------------- sg-list-fio-perf/bs-128k-1.fio: 48 sg-list-fio-perf/bs-128k-2.fio: 46.4 47.13 sg-list-fio-perf/bs-128k-3.fio: 47 no-sg-list-fio-perf/bs-128k-1.fio: 46.6 no-sg-list-fio-perf/bs-128k-2.fio: 47 46.9 no-sg-list-fio-perf/bs-128k-3.fio: 47.1 % Change default vs new DMA API = −0.495% 256K -------------------------------------------------------------- sg-list-fio-perf/bs-256k-1.fio: 37 sg-list-fio-perf/bs-256k-2.fio: 41 39.93 sg-list-fio-perf/bs-256k-3.fio: 41.8 no-sg-list-fio-perf/bs-256k-1.fio: 37.5 no-sg-list-fio-perf/bs-256k-2.fio: 41.4 40.5 no-sg-list-fio-perf/bs-256k-3.fio: 42.6 % Change default vs new DMA API = +1.42% 512K -------------------------------------------------------------- sg-list-fio-perf/bs-512k-1.fio: 28.5 sg-list-fio-perf/bs-512k-2.fio: 28.2 28.4 sg-list-fio-perf/bs-512k-3.fio: 28.5 no-sg-list-fio-perf/bs-512k-1.fio: 28.7 no-sg-list-fio-perf/bs-512k-2.fio: 28.6 28.7 no-sg-list-fio-perf/bs-512k-3.fio: 28.8 % Change default vs new DMA API = +1.06% Signed-off-by: Chaitanya Kulkarni Signed-off-by: Leon Romanovsky --- drivers/nvme/host/pci.c | 283 ++++++++++++++++++++++++++++++---------- 1 file changed, 213 insertions(+), 70 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 102a9fb0c65f..53a71b03c794 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -221,6 +221,16 @@ union nvme_descriptor { __le64 *prp_list; }; +struct iod_dma_map { + bool use_iova; + struct dma_iova_state state; + struct dma_memory_type type; + struct dma_iova_attrs iova; + dma_addr_t dma_link_address[NVME_MAX_SEGS]; + u32 len[NVME_MAX_SEGS]; + u16 nr_dma_link_address; +}; + /* * The nvme_iod describes the data in an I/O. * @@ -236,7 +246,7 @@ struct nvme_iod { unsigned int dma_len; /* length of single DMA segment mapping */ dma_addr_t first_dma; dma_addr_t meta_dma; - struct sg_table sgt; + struct iod_dma_map *dma_map; union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS]; }; @@ -521,6 +531,26 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, return true; } +static inline void nvme_dma_unlink_range(struct nvme_iod *iod) +{ + struct dma_iova_attrs *iova = &iod->dma_map->iova; + dma_addr_t addr; + u16 len; + u32 i; + + if (iod->dma_map->use_iova) { + dma_unlink_range(&iod->dma_map->state); + return; + } + + for (i = 0; i < iod->dma_map->nr_dma_link_address; i++) { + addr = iod->dma_map->dma_link_address[i]; + len = iod->dma_map->len[i]; + dma_unmap_page_attrs(iova->dev, addr, len, + iova->dir, iova->attrs); + } +} + static void nvme_free_prps(struct nvme_dev *dev, struct request *req) { const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1; @@ -547,9 +577,7 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) return; } - WARN_ON_ONCE(!iod->sgt.nents); - - dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); + nvme_dma_unlink_range(iod); if (iod->nr_allocations == 0) dma_pool_free(dev->prp_small_pool, iod->list[0].sg_list, @@ -559,21 +587,123 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) iod->first_dma); else nvme_free_prps(dev, req); - mempool_free(iod->sgt.sgl, dev->iod_mempool); + + dma_free_iova(&iod->dma_map->iova); + mempool_free(iod->dma_map, dev->iod_mempool); } -static void nvme_print_sgl(struct scatterlist *sgl, int nents) +static inline dma_addr_t nvme_dma_link_page(struct page *page, + unsigned int poffset, + unsigned int len, + struct nvme_iod *iod) { - int i; - struct scatterlist *sg; + struct 
dma_iova_attrs *iova = &iod->dma_map->iova; + struct dma_iova_state *state = &iod->dma_map->state; + dma_addr_t dma_addr; + int ret; + + if (iod->dma_map->use_iova) { + phys_addr_t phys = page_to_phys(page) + poffset; + + dma_addr = state->iova->addr + state->range_size; + ret = dma_link_range(&iod->dma_map->state, phys, len); + if (ret) + return DMA_MAPPING_ERROR; + } else { + dma_addr = dma_map_page_attrs(iova->dev, page, poffset, len, + iova->dir, iova->attrs); + } + return dma_addr; +} + +static void nvme_pci_sgl_set_data(struct nvme_sgl_desc *sge, + dma_addr_t dma_addr, + unsigned int dma_len); + +static int __nvme_rq_dma_map(struct request *req, struct nvme_iod *iod, + struct nvme_sgl_desc *sgl_list) +{ + struct dma_iova_attrs *iova = &iod->dma_map->iova; + struct req_iterator iter; + struct bio_vec bv; + int cnt = 0; + dma_addr_t addr; + + iod->dma_map->nr_dma_link_address = 0; + rq_for_each_bvec(bv, req, iter) { + unsigned nbytes = bv.bv_len; + unsigned total = 0; + unsigned offset, len; + + if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) { + addr = nvme_dma_link_page(bv.bv_page, bv.bv_offset, + bv.bv_len, iod); + if (dma_mapping_error(iova->dev, addr)) { + pr_err("dma_mapping_error %d\n", + dma_mapping_error(iova->dev, addr)); + return -ENOMEM; + } + + iod->dma_map->dma_link_address[cnt] = addr; + iod->dma_map->len[cnt] = bv.bv_len; + iod->dma_map->nr_dma_link_address++; + + if (sgl_list) + nvme_pci_sgl_set_data(&sgl_list[cnt], addr, + bv.bv_len); + cnt++; + continue; + } + while (nbytes > 0) { + struct page *page = bv.bv_page; + + offset = bv.bv_offset + total; + len = min(get_max_segment_size(&req->q->limits, page, + offset), nbytes); + + page += (offset >> PAGE_SHIFT); + offset &= ~PAGE_MASK; + + addr = nvme_dma_link_page(page, offset, len, iod); + if (dma_mapping_error(iova->dev, addr)) { + pr_err("dma_mapping_error2 %d\n", + dma_mapping_error(iova->dev, addr)); + return -ENOMEM; + } + + iod->dma_map->dma_link_address[cnt] = addr; + iod->dma_map->len[cnt] = len; + iod->dma_map->nr_dma_link_address++; - for_each_sg(sgl, sg, nents, i) { - dma_addr_t phys = sg_phys(sg); - pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d " - "dma_address:%pad dma_length:%d\n", - i, &phys, sg->offset, sg->length, &sg_dma_address(sg), - sg_dma_len(sg)); + if (sgl_list) + nvme_pci_sgl_set_data(&sgl_list[cnt], addr, len); + + total += len; + nbytes -= len; + cnt++; + } + } + return cnt; +} + +static int nvme_rq_dma_map(struct request *req, struct nvme_iod *iod, + struct nvme_sgl_desc *sgl_list) +{ + int ret; + + if (iod->dma_map->use_iova) { + ret = dma_start_range(&iod->dma_map->state); + if (ret) { + pr_err("dma_start_dange_failed %d", ret); + return ret; + } + + ret = __nvme_rq_dma_map(req, iod, sgl_list); + dma_end_range(&iod->dma_map->state); + return ret; } + + return __nvme_rq_dma_map(req, iod, sgl_list); } static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, @@ -582,13 +712,23 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, struct nvme_iod *iod = blk_mq_rq_to_pdu(req); struct dma_pool *pool; int length = blk_rq_payload_bytes(req); - struct scatterlist *sg = iod->sgt.sgl; - int dma_len = sg_dma_len(sg); - u64 dma_addr = sg_dma_address(sg); - int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); + u16 dma_addr_cnt = 0; + int dma_len; + u64 dma_addr; + int offset; __le64 *prp_list; dma_addr_t prp_dma; int nprps, i; + int ret; + + ret = nvme_rq_dma_map(req, iod, NULL); + if (ret < 0) + return errno_to_blk_status(ret); + + dma_len = iod->dma_map->len[dma_addr_cnt]; + dma_addr = 
iod->dma_map->dma_link_address[dma_addr_cnt]; + offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); + dma_addr_cnt++; length -= (NVME_CTRL_PAGE_SIZE - offset); if (length <= 0) { @@ -600,9 +740,9 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, if (dma_len) { dma_addr += (NVME_CTRL_PAGE_SIZE - offset); } else { - sg = sg_next(sg); - dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); + dma_addr = iod->dma_map->dma_link_address[dma_addr_cnt]; + dma_len = iod->dma_map->len[dma_addr_cnt]; + dma_addr_cnt++; } if (length <= NVME_CTRL_PAGE_SIZE) { @@ -646,31 +786,29 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, break; if (dma_len > 0) continue; - if (unlikely(dma_len < 0)) - goto bad_sgl; - sg = sg_next(sg); - dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); + if (dma_addr_cnt >= iod->dma_map->nr_dma_link_address) + pr_err_ratelimited("dma_addr_cnt exceeded %u and %u\n", + dma_addr_cnt, + iod->dma_map->nr_dma_link_address); + dma_addr = iod->dma_map->dma_link_address[dma_addr_cnt]; + dma_len = iod->dma_map->len[dma_addr_cnt]; + dma_addr_cnt++; } done: - cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sgt.sgl)); + cmnd->dptr.prp1 = cpu_to_le64(iod->dma_map->dma_link_address[0]); cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); + return BLK_STS_OK; free_prps: nvme_free_prps(dev, req); return BLK_STS_RESOURCE; -bad_sgl: - WARN(DO_ONCE(nvme_print_sgl, iod->sgt.sgl, iod->sgt.nents), - "Invalid SGL for payload:%d nents:%d\n", - blk_rq_payload_bytes(req), iod->sgt.nents); - return BLK_STS_IOERR; } static void nvme_pci_sgl_set_data(struct nvme_sgl_desc *sge, - struct scatterlist *sg) + dma_addr_t dma_addr, unsigned int dma_len) { - sge->addr = cpu_to_le64(sg_dma_address(sg)); - sge->length = cpu_to_le32(sg_dma_len(sg)); + sge->addr = cpu_to_le64(dma_addr); + sge->length = cpu_to_le32(dma_len); sge->type = NVME_SGL_FMT_DATA_DESC << 4; } @@ -685,22 +823,16 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge, static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, struct request *req, struct nvme_rw_command *cmd) { + unsigned int entries = blk_rq_nr_phys_segments(req); struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - struct dma_pool *pool; struct nvme_sgl_desc *sg_list; - struct scatterlist *sg = iod->sgt.sgl; - unsigned int entries = iod->sgt.nents; + struct dma_pool *pool; dma_addr_t sgl_dma; - int i = 0; + int ret; /* setting the transfer type as SGL */ cmd->flags = NVME_CMD_SGL_METABUF; - if (entries == 1) { - nvme_pci_sgl_set_data(&cmd->dptr.sgl, sg); - return BLK_STS_OK; - } - if (entries <= (256 / sizeof(struct nvme_sgl_desc))) { pool = dev->prp_small_pool; iod->nr_allocations = 0; @@ -718,12 +850,11 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, iod->list[0].sg_list = sg_list; iod->first_dma = sgl_dma; - nvme_pci_sgl_set_seg(&cmd->dptr.sgl, sgl_dma, entries); - do { - nvme_pci_sgl_set_data(&sg_list[i++], sg); - sg = sg_next(sg); - } while (--entries > 0); + ret = nvme_rq_dma_map(req, iod, sg_list); + if (ret < 0) + return errno_to_blk_status(ret); + nvme_pci_sgl_set_seg(&cmd->dptr.sgl, sgl_dma, ret); return BLK_STS_OK; } @@ -791,34 +922,47 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, } iod->dma_len = 0; - iod->sgt.sgl = mempool_alloc(dev->iod_mempool, GFP_ATOMIC); - if (!iod->sgt.sgl) + iod->dma_map = mempool_alloc(dev->iod_mempool, GFP_ATOMIC); + if (!iod->dma_map) return BLK_STS_RESOURCE; - sg_init_table(iod->sgt.sgl, blk_rq_nr_phys_segments(req)); - iod->sgt.orig_nents = 
blk_rq_map_sg(req->q, req, iod->sgt.sgl); - if (!iod->sgt.orig_nents) - goto out_free_sg; - rc = dma_map_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), - DMA_ATTR_NO_WARN); - if (rc) { - if (rc == -EREMOTEIO) - ret = BLK_STS_TARGET; - goto out_free_sg; - } + iod->dma_map->state.range_size = 0; + iod->dma_map->iova.dev = dev->dev; + iod->dma_map->iova.dir = rq_dma_dir(req); + iod->dma_map->iova.attrs = DMA_ATTR_NO_WARN; + iod->dma_map->iova.size = blk_rq_payload_bytes(req); + if (!iod->dma_map->iova.size) + goto free_iod_map; + + rc = dma_alloc_iova(&iod->dma_map->iova); + if (rc) + goto free_iod_map; + + /* + * Following call assumes that all the biovecs belongs to this request + * are of the same type. + */ + dma_get_memory_type(req->bio->bi_io_vec[0].bv_page, + &iod->dma_map->type); + iod->dma_map->state.iova = &iod->dma_map->iova; + iod->dma_map->state.type = &iod->dma_map->type; + + iod->dma_map->use_iova = + dma_can_use_iova(&iod->dma_map->state, + req->bio->bi_io_vec[0].bv_len); - if (nvme_pci_use_sgls(dev, req, iod->sgt.nents)) + if (nvme_pci_use_sgls(dev, req, blk_rq_nr_phys_segments(req))) ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw); else ret = nvme_pci_setup_prps(dev, req, &cmnd->rw); if (ret != BLK_STS_OK) - goto out_unmap_sg; + goto free_iova; return BLK_STS_OK; -out_unmap_sg: - dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); -out_free_sg: - mempool_free(iod->sgt.sgl, dev->iod_mempool); +free_iova: + dma_free_iova(&iod->dma_map->iova); +free_iod_map: + mempool_free(iod->dma_map, dev->iod_mempool); return ret; } @@ -842,7 +986,6 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) iod->aborted = false; iod->nr_allocations = -1; - iod->sgt.nents = 0; ret = nvme_setup_cmd(req->q->queuedata, req); if (ret) @@ -2670,7 +2813,7 @@ static void nvme_release_prp_pools(struct nvme_dev *dev) static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev) { - size_t alloc_size = sizeof(struct scatterlist) * NVME_MAX_SEGS; + size_t alloc_size = sizeof(struct iod_dma_map); dev->iod_mempool = mempool_create_node(1, mempool_kmalloc, mempool_kfree,