From patchwork Tue Mar 5 10:15:11 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13581928
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, Martin K. Petersen, daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Subject: [RFC 01/16] mm/hmm: let users tag specific PFNs
Date: Tue, 5 Mar 2024 12:15:11 +0200
X-Mailer: git-send-email 2.44.0

From: Leon Romanovsky

Introduce a new sticky flag that is not overwritten by HMM range fault. Such a flag lets users tag specific PFNs with extra data in addition to the flags already filled in by HMM.

Signed-off-by: Leon Romanovsky --- include/linux/hmm.h | 3 +++ mm/hmm.c | 34 +++++++++++++++++++++------------- 2 files changed, 24 insertions(+), 13 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 126a36571667..b90902baa593 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -23,6 +23,7 @@ struct mmu_interval_notifier; * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_STICKY - Flag preserved on input-to-output transformation * * On input: * 0 - Return the current state of the page, do not fault it.
@@ -36,6 +37,8 @@ enum hmm_pfn_flags { HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1), HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2), HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), + /* Sticky lag, carried from Input to Output */ + HMM_PFN_STICKY = 1UL << (BITS_PER_LONG - 7), HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8), /* Input flags */ diff --git a/mm/hmm.c b/mm/hmm.c index 277ddcab4947..9645a72beec0 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -44,8 +44,10 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end, { unsigned long i = (addr - range->start) >> PAGE_SHIFT; - for (; addr < end; addr += PAGE_SIZE, i++) - range->hmm_pfns[i] = cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++) { + range->hmm_pfns[i] &= HMM_PFN_STICKY; + range->hmm_pfns[i] |= cpu_flags; + } return 0; } @@ -202,8 +204,10 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, return hmm_vma_fault(addr, end, required_fault, walk); pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { + hmm_pfns[i] &= HMM_PFN_STICKY; + hmm_pfns[i] |= pfn | cpu_flags; + } return 0; } #else /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -236,7 +240,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (required_fault) goto fault; - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_STICKY; return 0; } @@ -253,14 +257,14 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, cpu_flags = HMM_PFN_VALID; if (is_writable_device_private_entry(entry)) cpu_flags |= HMM_PFN_WRITE; - *hmm_pfn = swp_offset_pfn(entry) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_STICKY) | swp_offset_pfn(entry) | cpu_flags; return 0; } required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0); if (!required_fault) { - *hmm_pfn = 0; + *hmm_pfn = *hmm_pfn & HMM_PFN_STICKY; return 0; } @@ -304,11 +308,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, pte_unmap(ptep); return -EFAULT; } - *hmm_pfn = HMM_PFN_ERROR; + *hmm_pfn = (*hmm_pfn & HMM_PFN_STICKY) | HMM_PFN_ERROR; return 0; } - *hmm_pfn = pte_pfn(pte) | cpu_flags; + *hmm_pfn = (*hmm_pfn & HMM_PFN_STICKY) | pte_pfn(pte) | cpu_flags; return 0; fault: @@ -453,8 +457,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, } pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - for (i = 0; i < npages; ++i, ++pfn) - hmm_pfns[i] = pfn | cpu_flags; + for (i = 0; i < npages; ++i, ++pfn) { + hmm_pfns[i] &= HMM_PFN_STICKY; + hmm_pfns[i] |= pfn | cpu_flags; + } goto out_unlock; } @@ -512,8 +518,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask, } pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT); - for (; addr < end; addr += PAGE_SIZE, i++, pfn++) - range->hmm_pfns[i] = pfn | cpu_flags; + for (; addr < end; addr += PAGE_SIZE, i++, pfn++) { + range->hmm_pfns[i] &= HMM_PFN_STICKY; + range->hmm_pfns[i] |= pfn | cpu_flags; + } spin_unlock(ptl); return 0; From patchwork Tue Mar 5 10:15:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, Martin K. Petersen, daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Subject: [RFC 02/16] dma-mapping: provide an interface to allocate IOVA
Date: Tue, 5 Mar 2024 12:15:12 +0200
Message-ID: <54a3554639bfb963c9919c5d7c1f449021bebdb3.1709631413.git.leon@kernel.org>
X-Mailer: git-send-email 2.44.0

From: Leon Romanovsky

The existing .map_page() callback does two things at once: it allocates an IOVA and links DMA pages to it. That combination works well for most callers, who use it in control paths, but it is less effective in fast paths. Advanced callers already track their data in some sort of database and can perform the IOVA allocation in advance, leaving only the range-linkage operation in the fast path.

Provide an interface to allocate/deallocate an IOVA; the next patch adds the interface to link/unlink DMA ranges to that specific IOVA.
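To make the intended split concrete, here is a minimal usage sketch built only from the declarations this patch adds (struct dma_iova_attrs, dma_alloc_iova() and dma_free_iova()); the driver context, the 2 MiB window size and the example_* function names are illustrative assumptions, not part of the patch:

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/* Illustrative only: reserve one IOVA window at setup time (control path)
 * so that later per-page mapping can happen without IOVA allocation. */
static int example_setup_iova(struct device *dev, struct dma_iova_attrs *iova)
{
	iova->dev   = dev;
	iova->size  = SZ_2M;			/* span to be linked later */
	iova->dir   = DMA_BIDIRECTIONAL;
	iova->attrs = 0;

	/* On dma-direct devices, or when the ops lack ->alloc_iova, this is
	 * a no-op and iova->addr is left at 0. */
	return dma_alloc_iova(iova);
}

static void example_teardown_iova(struct dma_iova_attrs *iova)
{
	dma_free_iova(iova);
}

The OUT field iova->addr then serves as the base address for the linking interface introduced in the next patch.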
Signed-off-by: Leon Romanovsky --- include/linux/dma-map-ops.h | 3 +++ include/linux/dma-mapping.h | 20 ++++++++++++++++++++ kernel/dma/mapping.c | 30 ++++++++++++++++++++++++++++++ 3 files changed, 53 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 4abc60f04209..bd605b44bb57 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -83,6 +83,9 @@ struct dma_map_ops { size_t (*max_mapping_size)(struct device *dev); size_t (*opt_mapping_size)(void); unsigned long (*get_merge_boundary)(struct device *dev); + + dma_addr_t (*alloc_iova)(struct device *dev, size_t size); + void (*free_iova)(struct device *dev, dma_addr_t dma_addr, size_t size); }; #ifdef CONFIG_DMA_OPS diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 4a658de44ee9..176fb8a86d63 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -91,6 +91,16 @@ static inline void debug_dma_map_single(struct device *dev, const void *addr, } #endif /* CONFIG_DMA_API_DEBUG */ +struct dma_iova_attrs { + /* OUT field */ + dma_addr_t addr; + /* IN fields */ + struct device *dev; + size_t size; + enum dma_data_direction dir; + unsigned long attrs; +}; + #ifdef CONFIG_HAS_DMA static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) { @@ -101,6 +111,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) return 0; } +int dma_alloc_iova(struct dma_iova_attrs *iova); +void dma_free_iova(struct dma_iova_attrs *iova); + dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs); @@ -159,6 +172,13 @@ void dma_vunmap_noncontiguous(struct device *dev, void *vaddr); int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma, size_t size, struct sg_table *sgt); #else /* CONFIG_HAS_DMA */ +static inline int dma_alloc_iova(struct dma_iova_attrs *iova) +{ + return -EOPNOTSUPP; +} +static inline void dma_free_iova(struct dma_iova_attrs *iova) +{ +} static inline dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 58db8fd70471..b6b27bab90f3 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -183,6 +183,36 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, } EXPORT_SYMBOL(dma_unmap_page_attrs); +int dma_alloc_iova(struct dma_iova_attrs *iova) +{ + struct device *dev = iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || !ops->alloc_iova) { + iova->addr = 0; + return 0; + } + + iova->addr = ops->alloc_iova(dev, iova->size); + if (dma_mapping_error(dev, iova->addr)) + return -ENOMEM; + + return 0; +} +EXPORT_SYMBOL(dma_alloc_iova); + +void dma_free_iova(struct dma_iova_attrs *iova) +{ + struct device *dev = iova->dev; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || !ops->free_iova) + return; + + ops->free_iova(dev, iova->addr, iova->size); +} +EXPORT_SYMBOL(dma_free_iova); + static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) { From patchwork Tue Mar 5 10:15:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581930 Return-Path: 
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, Martin K. Petersen, daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Subject: [RFC 03/16] dma-mapping: provide callbacks to link/unlink pages to specific IOVA
Date: Tue, 5 Mar 2024 12:15:13 +0200
X-Mailer: git-send-email 2.44.0

From: Leon Romanovsky

Introduce a new DMA link/unlink API that lets advanced users map/unmap pages directly, without the need to allocate an IOVA on every map call.
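As a rough sketch of how the allocation and link/unlink calls are meant to compose across this patch and the previous one: the page array, its length and the example_map_pages() helper are hypothetical, and only dma_alloc_iova(), dma_link_range(), dma_unlink_range() and dma_free_iova() come from the series:

#include <linux/dma-mapping.h>

/* Hypothetical fast-path mapping of npages pages into one preallocated
 * IOVA window; error handling is simplified for illustration. */
static int example_map_pages(struct device *dev, struct page **pages,
			     unsigned int npages, struct dma_iova_attrs *iova,
			     dma_addr_t *dma_addrs)
{
	dma_addr_t dma_offset = 0;
	unsigned int i;
	int ret;

	iova->dev   = dev;
	iova->size  = (size_t)npages << PAGE_SHIFT;
	iova->dir   = DMA_TO_DEVICE;
	iova->attrs = 0;

	ret = dma_alloc_iova(iova);	/* slow path, done once */
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		/* Fast path: link only, no per-page IOVA allocation. */
		dma_addrs[i] = dma_link_range(pages[i], 0, iova, dma_offset);
		if (dma_mapping_error(dev, dma_addrs[i]))
			goto err_unlink;
		dma_offset += PAGE_SIZE;
	}
	return 0;

err_unlink:
	while (i--)
		dma_unlink_range(iova, i * PAGE_SIZE);
	dma_free_iova(iova);
	return -ENOMEM;
}

Whether dma_unlink_range() must be called once per linked page, as assumed here, or once per window follows from the IOMMU-side implementation later in the series.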
Signed-off-by: Leon Romanovsky --- include/linux/dma-map-ops.h | 10 +++++++ include/linux/dma-mapping.h | 13 +++++++++ kernel/dma/debug.h | 2 ++ kernel/dma/direct.h | 3 ++ kernel/dma/mapping.c | 57 +++++++++++++++++++++++++++++++++++++ 5 files changed, 85 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index bd605b44bb57..fd03a080df1e 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -86,6 +86,13 @@ struct dma_map_ops { dma_addr_t (*alloc_iova)(struct device *dev, size_t size); void (*free_iova)(struct device *dev, dma_addr_t dma_addr, size_t size); + dma_addr_t (*link_range)(struct device *dev, struct page *page, + unsigned long offset, dma_addr_t addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs); + void (*unlink_range)(struct device *dev, dma_addr_t dma_handle, + size_t size, enum dma_data_direction dir, + unsigned long attrs); }; #ifdef CONFIG_DMA_OPS @@ -428,6 +435,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg, #define arch_dma_unmap_sg_direct(d, s, n) (false) #endif +#define arch_dma_link_range_direct arch_dma_map_page_direct +#define arch_dma_unlink_range_direct arch_dma_unmap_page_direct + #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, bool coherent); diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 176fb8a86d63..91cc084adb53 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -113,6 +113,9 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) int dma_alloc_iova(struct dma_iova_attrs *iova); void dma_free_iova(struct dma_iova_attrs *iova); +dma_addr_t dma_link_range(struct page *page, unsigned long offset, + struct dma_iova_attrs *iova, dma_addr_t dma_offset); +void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset); dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, @@ -179,6 +182,16 @@ static inline int dma_alloc_iova(struct dma_iova_attrs *iova) static inline void dma_free_iova(struct dma_iova_attrs *iova) { } +static inline dma_addr_t dma_link_range(struct page *page, unsigned long offset, + struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + return DMA_MAPPING_ERROR; +} +static inline void dma_unlink_range(struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ +} static inline dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, size_t offset, size_t size, enum dma_data_direction dir, unsigned long attrs) diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h index f525197d3cae..3d529f355c6d 100644 --- a/kernel/dma/debug.h +++ b/kernel/dma/debug.h @@ -127,4 +127,6 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev, { } #endif /* CONFIG_DMA_API_DEBUG */ +#define debug_dma_link_range debug_dma_map_page +#define debug_dma_unlink_range debug_dma_unmap_page #endif /* _KERNEL_DMA_DEBUG_H */ diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 18d346118fe8..1c30e1cd607a 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -125,4 +125,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr, swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC); } + +#define dma_direct_link_range dma_direct_map_page +#define dma_direct_unlink_range dma_direct_unmap_page #endif /* _KERNEL_DMA_DIRECT_H */ diff --git a/kernel/dma/mapping.c 
b/kernel/dma/mapping.c index b6b27bab90f3..f989c64622c2 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -213,6 +213,63 @@ void dma_free_iova(struct dma_iova_attrs *iova) } EXPORT_SYMBOL(dma_free_iova); +/** + * dma_link_range - Link a physical page to DMA address + * @page: The page to be mapped + * @offset: The offset within the page + * @iova: Preallocated IOVA attributes + * @dma_offset: DMA offset from which this page needs to be linked + * + * dma_alloc_iova() allocates IOVA based on the size specified by the user in + * iova->size. Call this function after IOVA allocation to link @page from + * @offset to get the DMA address. Note that the very first call to this function + * will have @dma_offset set to 0 in the IOVA space allocated from + * dma_alloc_iova(). For subsequent calls to this function on the same @iova, + * @dma_offset needs to be advanced by the caller with the size of the previous + * page that was linked + the DMA address returned for the previous page that was + * linked by this function. + */ +dma_addr_t dma_link_range(struct page *page, unsigned long offset, + struct dma_iova_attrs *iova, dma_addr_t dma_offset) +{ + struct device *dev = iova->dev; + size_t size = iova->size; + enum dma_data_direction dir = iova->dir; + unsigned long attrs = iova->attrs; + dma_addr_t addr = iova->addr + dma_offset; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || + arch_dma_link_range_direct(dev, page_to_phys(page) + offset + size)) + addr = dma_direct_link_range(dev, page, offset, size, dir, attrs); + else if (ops->link_range) + addr = ops->link_range(dev, page, offset, addr, size, dir, attrs); + + kmsan_handle_dma(page, offset, size, dir); + debug_dma_link_range(dev, page, offset, size, dir, addr, attrs); + return addr; +} +EXPORT_SYMBOL(dma_link_range); + +void dma_unlink_range(struct dma_iova_attrs *iova, dma_addr_t dma_offset) +{ + struct device *dev = iova->dev; + size_t size = iova->size; + enum dma_data_direction dir = iova->dir; + unsigned long attrs = iova->attrs; + dma_addr_t addr = iova->addr + dma_offset; + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (dma_map_direct(dev, ops) || + arch_dma_unlink_range_direct(dev, addr + size)) + dma_direct_unlink_range(dev, addr, size, dir, attrs); + else if (ops->unlink_range) + ops->unlink_range(dev, addr, size, dir, attrs); + + debug_dma_unlink_range(dev, addr, size, dir); +} +EXPORT_SYMBOL(dma_unlink_range); + static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) {
From patchwork Tue Mar 5 10:15:14 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13581931
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, Martin K. Petersen, daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 04/16] iommu/dma: Provide an interface to allow preallocate IOVA Date: Tue, 5 Mar 2024 12:15:14 +0200 Message-ID: X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: BB530140002 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: u3ogyoyhxx9zktcft86z8qy54iyk5s1f X-HE-Tag: 1709633752-801602 X-HE-Meta: U2FsdGVkX18MHY1daoArHzp7QLRb0A4lztJ623Vupelwv/pdKBdS1yA2q52rbY2rTXPBu1QuF5EsGAJh8QUDMYiv+631Hm9OwEftafLs3lAMQeexEpulWmeR/NtddLtTqAWtePzz/TE8kpDJE+A8q9jHROvUid2Lw53T0/4UBdgWPNsKjYhOmaCK6Qxr2kVFwErqg6ZISNxcWip665fesj3jbrT/KNrpUbP6eDwj750APJy3ok/BWVtUp+AEXfWmE0Tc43mgM1adn3hD49kV9MnjC+NkvneNsjN0bWHlok50Q1wYIMDd+3dPhETti4MKhRiXLitCFi7n4QAb0nM/Msk6rP/xp+QErj4RVRHUGx/qmNs0vbfZ5o8sz5sUBbACodgEqN+kr1eSCPQ6yGz4Zoxeoaz0kW4R3OqNc6xniPIgNaQ9NO/UwEGa6C34AWmAwT+7AL9OA30Erw2JZQMfzlTRFKUPRmgcKachMXyhTyFzJLQgM1U8+qrDlv5QoeM8jfgIQYWgLs18HOvnHCGovvNRwTcZ7HHDWqgeCsAFlliMy2SqwoXl/HHJ1Cu7CNlXmQQLQzR1Z/GVlHnY3Ly8vhzxwSpMQwYaGEr5N/+yC7L8tfnEpuAOq8zjLty+4h3dUHLwBP5soGsOAgdj58JY47XVU21QE4uOdH8zlax7hTMksVq+SgvCdSO2aFsxDCAjI/iQT0DhzMgVcXgI3SODEgBwwHb80hBD/XSRp5SE9huCksgfIx3qf+LGhT0R0DSeS3v+SFzh/X8uwDepfVl2U1+uRXKTFgrzagaCAoqGhakBnIXwMD82hdX5h9fmkfUfJSPx3gp564suAaajj3ObbNHfF9jm2XRHRx5X/QZgSAiCzWGf3B5E9dsYAgHNWUQOjsaQiLk3MywlFIXOlTIeHQ1+2SoUZ9ybwZuzdoNyrR1b7TzZK3h1hxrF74sPZcyM33ZrCJyhoKCtkPbbz2t 01LlZXbz 1CObwgJV8jwUkxlQ+nKPX9lpgjCQMdQHq7QRFTubAJ80IwmmGWfshVQPhk2qn9WSL+ZGWLrHT1/NbCi6wrpj9hXTRKKr8GPo7C0lnwrcWi9F0IqkqfJt/7WV9kuBnslEIA6YDwrpwB/RiFMMypveAVvD7Ykof9rjMq2Ocp9PQ7Ha2arA9I0kE8hLaMnGZWXM+Pc2ogA2aePK0DvbLCRTaWf568BVKHb6z//EJaTX03w4nFvo5xcxjFg3xGg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Separate IOVA allocation to dedicated callback so it will allow cache of IOVA and reuse it in fast paths for devices which support ODP (on-demand-paging) mechanism. Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++---------- 1 file changed, 38 insertions(+), 12 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 50ccc4f1ef81..e55726783501 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -356,7 +356,7 @@ int iommu_dma_init_fq(struct iommu_domain *domain) atomic_set(&cookie->fq_timer_on, 0); /* * Prevent incomplete fq state being observable. 
Pairs with path from - * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova() + * __iommu_dma_unmap() through __iommu_dma_free_iova() to queue_iova() */ smp_wmb(); WRITE_ONCE(cookie->fq_domain, domain); @@ -760,7 +760,7 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent, } } -static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain, +static dma_addr_t __iommu_dma_alloc_iova(struct iommu_domain *domain, size_t size, u64 dma_limit, struct device *dev) { struct iommu_dma_cookie *cookie = domain->iova_cookie; @@ -806,7 +806,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain, return (dma_addr_t)iova << shift; } -static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, +static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie, dma_addr_t iova, size_t size, struct iommu_iotlb_gather *gather) { struct iova_domain *iovad = &cookie->iovad; @@ -843,7 +843,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, if (!iotlb_gather.queued) iommu_iotlb_sync(domain, &iotlb_gather); - iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); + __iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); } static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, @@ -861,12 +861,12 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, size = iova_align(iovad, size + iova_off); - iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev); + iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) { - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); return DMA_MAPPING_ERROR; } return iova + iova_off; @@ -970,7 +970,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev, return NULL; size = iova_align(iovad, size); - iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev); + iova = __iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev); if (!iova) goto out_free_pages; @@ -1004,7 +1004,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev, out_free_sg: sg_free_table(sgt); out_free_iova: - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); out_free_pages: __iommu_dma_free_pages(pages, count); return NULL; @@ -1436,7 +1436,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, if (!iova_len) return __finalise_sg(dev, sg, nents, 0); - iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); + iova = __iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); if (!iova) { ret = -ENOMEM; goto out_restore_sg; @@ -1453,7 +1453,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, return __finalise_sg(dev, sg, nents, iova); out_free_iova: - iommu_dma_free_iova(cookie, iova, iova_len, NULL); + __iommu_dma_free_iova(cookie, iova, iova_len, NULL); out_restore_sg: __invalidate_sg(sg, nents); out: @@ -1706,6 +1706,30 @@ static size_t iommu_dma_opt_mapping_size(void) return iova_rcache_range(); } +static dma_addr_t iommu_dma_alloc_iova(struct device *dev, size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + dma_addr_t dma_mask = dma_get_mask(dev); + + size = iova_align(iovad, size); + return __iommu_dma_alloc_iova(domain, size, 
dma_mask, dev); +} + +static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova, + size_t size) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + struct iommu_iotlb_gather iotlb_gather; + + size = iova_align(iovad, size); + iommu_iotlb_gather_init(&iotlb_gather); + __iommu_dma_free_iova(cookie, iova, size, &iotlb_gather); +} + static const struct dma_map_ops iommu_dma_ops = { .flags = DMA_F_PCI_P2PDMA_SUPPORTED, .alloc = iommu_dma_alloc, @@ -1728,6 +1752,8 @@ static const struct dma_map_ops iommu_dma_ops = { .unmap_resource = iommu_dma_unmap_resource, .get_merge_boundary = iommu_dma_get_merge_boundary, .opt_mapping_size = iommu_dma_opt_mapping_size, + .alloc_iova = iommu_dma_alloc_iova, + .free_iova = iommu_dma_free_iova, }; /* @@ -1776,7 +1802,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, if (!msi_page) return NULL; - iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); + iova = __iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); if (!iova) goto out_free_page; @@ -1790,7 +1816,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, return msi_page; out_free_iova: - iommu_dma_free_iova(cookie, iova, size, NULL); + __iommu_dma_free_iova(cookie, iova, size, NULL); out_free_page: kfree(msi_page); return NULL; From patchwork Tue Mar 5 10:15:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581932 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E79E1C54E41 for ; Tue, 5 Mar 2024 10:16:04 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B25D16B00B7; Tue, 5 Mar 2024 05:15:59 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AA9906B00B6; Tue, 5 Mar 2024 05:15:59 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8FB446B00B7; Tue, 5 Mar 2024 05:15:59 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 7D4D76B00B5 for ; Tue, 5 Mar 2024 05:15:59 -0500 (EST) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 11BAEA0CE8 for ; Tue, 5 Mar 2024 10:15:59 +0000 (UTC) X-FDA: 81862579638.01.25484FC Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf22.hostedemail.com (Postfix) with ESMTP id 6B3C2C0025 for ; Tue, 5 Mar 2024 10:15:57 +0000 (UTC) Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=ByTfLnoJ; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf22.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633757; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; 
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch, Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche, Damien Le Moal, Amir Goldstein, josef@toxicpanda.com, Martin K. Petersen, daniel@iogearbox.net, Dan Williams, jack@suse.com, Zhu Yanjun
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 05/16] iommu/dma: Prepare map/unmap page functions to receive IOVA Date: Tue, 5 Mar 2024 12:15:15 +0200 Message-ID: <13187a8682ab4f8708ca88cc4363f90e64e14ccc.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 6B3C2C0025 X-Stat-Signature: c8t3jnbjsxnjpwjs6jbppnywrbn7ct4c X-Rspam-User: X-HE-Tag: 1709633757-90111 X-HE-Meta: U2FsdGVkX193UAGfRVAf2WaU80zOe4ioHy4s/EaPa15x/HkGpeTFaNcv/NX93IMySlQJZdrP+RUmc7F5BMhKqqkooq8x7a2Vz+UHok5FmuNEJ6QAVAHY3j8XKeT+9RNr4/FU2aLm64+UWqG8Kh+eSjGFbGs+111m5W1CDtSqYPC9VxkdoM4gwerCUNbFzYQT4RPvDcSNu43DTXoaN4mr5avSdPTiL2Ptsr3eFhUIKcp6JOeh9L4LZJmbfmP76avvngYG8g51k+1C/4mcV5fjSDlBPdnxiYQM8D6W0YkTZojnaMvcOA34Q3YZ9rBV7trWT1OmR0osEUNYX5Vp/5U0/2xJqtdJpz0MD2zW30gQtkEaRW6TNXrTMif4FfqKrGFStMADCCYdXDotDxFnygOgtY9px4tLD4yL15Ahd3vhEnAhsJpzv1dLUf19jdvIhrTBPQ9ydDhcQVi31RbabyT3pm7kRP0BYqlwZdYRmRZ5V0W02I6sIrL1YGmI2HUqEfOf34TY6TL3lq1HdtcP3DG1KOY1eqqC3HBAsEBnpM+5DFYrFqXn+EclpdCSdZLczffwXZg/bvbAAPo+/0nYeFgybWoRf3efJUFvQzIqdGi9bVweXSE9vL42cF4VuRFaFZMdE7pAIyuEegv7rPHZZjZsqld7blVJg0wN2EB22dB9JdCSA4TTCPfOQybWBMgw9SjICBk9PwqqV7YYhwVPn73VWGe7ZAoXviytyK70dbE4xxCykguOH0CIjfVrFbolMd1kugcnZFANrcpaHhRHPKomJ2AqOSHLYDu9qE/2yEiSHd+og2pkUmxPcjir+sxQoZzrp3zf/rrvzu/WAMCJwkw13UpAU7dvVvgb//qTCzAFJKhz8HsWvlUBMQ8ybnPOYya4HRagG1wz+VqaHoBJPX7cbtyEX+LMAaoiINxyHbTGZO9DJKzTnN63j+J0Z2wgsGJ9s8GlN6wOBuNGvvxDwIf ieAvpR7X jlVvI X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Extend the existing map_page/unmap_page function implementations to get preallocated IOVA. In such case, the IOVA allocation needs to be skipped, but rest of the code stays the same. 
Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 68 ++++++++++++++++++++++++++------------- 1 file changed, 45 insertions(+), 23 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index e55726783501..dbdd373a609a 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -824,7 +824,7 @@ static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie, } static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, - size_t size) + size_t size, bool free_iova) { struct iommu_domain *domain = iommu_get_dma_domain(dev); struct iommu_dma_cookie *cookie = domain->iova_cookie; @@ -843,17 +843,19 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, if (!iotlb_gather.queued) iommu_iotlb_sync(domain, &iotlb_gather); - __iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); + if (free_iova) + __iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather); } static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, - size_t size, int prot, u64 dma_mask) + dma_addr_t iova, size_t size, int prot, + u64 dma_mask) { struct iommu_domain *domain = iommu_get_dma_domain(dev); struct iommu_dma_cookie *cookie = domain->iova_cookie; struct iova_domain *iovad = &cookie->iovad; size_t iova_off = iova_offset(iovad, phys); - dma_addr_t iova; + bool no_iova = !iova; if (static_branch_unlikely(&iommu_deferred_attach_enabled) && iommu_deferred_attach(dev, domain)) @@ -861,12 +863,14 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, size = iova_align(iovad, size + iova_off); - iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev); + if (no_iova) + iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) { - __iommu_dma_free_iova(cookie, iova, size, NULL); + if (no_iova) + __iommu_dma_free_iova(cookie, iova, size, NULL); return DMA_MAPPING_ERROR; } return iova + iova_off; @@ -1031,7 +1035,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size, return vaddr; out_unmap: - __iommu_dma_unmap(dev, *dma_handle, size); + __iommu_dma_unmap(dev, *dma_handle, size, true); __iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT); return NULL; } @@ -1060,7 +1064,7 @@ static void iommu_dma_free_noncontiguous(struct device *dev, size_t size, { struct dma_sgt_handle *sh = sgt_handle(sgt); - __iommu_dma_unmap(dev, sgt->sgl->dma_address, size); + __iommu_dma_unmap(dev, sgt->sgl->dma_address, size, true); __iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT); sg_free_table(&sh->sgt); kfree(sh); @@ -1131,9 +1135,11 @@ static void iommu_dma_sync_sg_for_device(struct device *dev, arch_sync_dma_for_device(sg_phys(sg), sg->length, dir); } -static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction dir, - unsigned long attrs) +static dma_addr_t __iommu_dma_map_pages(struct device *dev, struct page *page, + unsigned long offset, dma_addr_t iova, + size_t size, + enum dma_data_direction dir, + unsigned long attrs) { phys_addr_t phys = page_to_phys(page) + offset; bool coherent = dev_is_dma_coherent(dev); @@ -1141,7 +1147,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, struct iommu_domain *domain = iommu_get_dma_domain(dev); struct iommu_dma_cookie *cookie = domain->iova_cookie; struct iova_domain *iovad = &cookie->iovad; - dma_addr_t iova, dma_mask = 
dma_get_mask(dev); + dma_addr_t addr, dma_mask = dma_get_mask(dev); /* * If both the physical buffer start address and size are @@ -1182,14 +1188,23 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) arch_sync_dma_for_device(phys, size, dir); - iova = __iommu_dma_map(dev, phys, size, prot, dma_mask); - if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys)) + addr = __iommu_dma_map(dev, phys, iova, size, prot, dma_mask); + if (addr == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys)) swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs); - return iova; + return addr; } -static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, - size_t size, enum dma_data_direction dir, unsigned long attrs) +static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, + enum dma_data_direction dir, + unsigned long attrs) +{ + return __iommu_dma_map_pages(dev, page, offset, 0, size, dir, attrs); +} + +static void __iommu_dma_unmap_pages(struct device *dev, dma_addr_t dma_handle, + size_t size, enum dma_data_direction dir, + unsigned long attrs, bool free_iova) { struct iommu_domain *domain = iommu_get_dma_domain(dev); phys_addr_t phys; @@ -1201,12 +1216,19 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(phys, size, dir); - __iommu_dma_unmap(dev, dma_handle, size); + __iommu_dma_unmap(dev, dma_handle, size, free_iova); if (unlikely(is_swiotlb_buffer(dev, phys))) swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs); } +static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + __iommu_dma_unmap_pages(dev, dma_handle, size, dir, attrs, true); +} + /* * Prepare a successfully-mapped scatterlist to give back to the caller. 
* @@ -1509,13 +1531,13 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, } if (end) - __iommu_dma_unmap(dev, start, end - start); + __iommu_dma_unmap(dev, start, end - start, true); } static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, size_t size, enum dma_data_direction dir, unsigned long attrs) { - return __iommu_dma_map(dev, phys, size, + return __iommu_dma_map(dev, phys, 0, size, dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, dma_get_mask(dev)); } @@ -1523,7 +1545,7 @@ static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, size_t size, enum dma_data_direction dir, unsigned long attrs) { - __iommu_dma_unmap(dev, handle, size); + __iommu_dma_unmap(dev, handle, size, true); } static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr) @@ -1560,7 +1582,7 @@ static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr) static void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle, unsigned long attrs) { - __iommu_dma_unmap(dev, handle, size); + __iommu_dma_unmap(dev, handle, size, true); __iommu_dma_free(dev, size, cpu_addr); } @@ -1626,7 +1648,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size, if (!cpu_addr) return NULL; - *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot, + *handle = __iommu_dma_map(dev, page_to_phys(page), 0, size, ioprot, dev->coherent_dma_mask); if (*handle == DMA_MAPPING_ERROR) { __iommu_dma_free(dev, size, cpu_addr); From patchwork Tue Mar 5 10:15:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581933 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 912F2C54798 for ; Tue, 5 Mar 2024 10:16:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C365B94000A; Tue, 5 Mar 2024 05:16:02 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id BBE0F940007; Tue, 5 Mar 2024 05:16:02 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A354C940009; Tue, 5 Mar 2024 05:16:02 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 893B3940007 for ; Tue, 5 Mar 2024 05:16:02 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 6436C40DC8 for ; Tue, 5 Mar 2024 10:16:02 +0000 (UTC) X-FDA: 81862579764.30.F339C28 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf09.hostedemail.com (Postfix) with ESMTP id DA30D14000E for ; Tue, 5 Mar 2024 10:16:00 +0000 (UTC) Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=hD0RcuJh; spf=pass (imf09.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633761; a=rsa-sha256; cv=none; b=dJQjpWjUsgziTMDiUrqyxMyJRK0koxhQ3sQr0Pksfp42HtOiK48hzpLE8mu8iWwekqskGL 
Kdv473d1aBgdxvLJN2dK9SWszxl6mU0BlglknaogtlmZ3EHbr7XCowthuZKZsR5bid67Ii mJecUyf/sgIg+JbKK9zjFCuDJe/wt2s= ARC-Authentication-Results: i=1; imf09.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=hD0RcuJh; spf=pass (imf09.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633761; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=nX2hAFEH1HSNcc2953eD/Rf6lJqonK3MWEmZQ3VuDkE=; b=QVlle0R5IYeDjjBZozSFTOPI2CNcxvUCpg9nwNGpaaY+nxJX6x01o7UEo69p3Z5+U+mc9B uy6/UX5ZD71IatTA6iQ2JKe28NFZHAHCkVD1E20bcJ0OpM60sW8TNsneY3ECbIy41984kv XY6YrxtFOZQ51vmFh02vuJlusgnI7vM= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 0330961486; Tue, 5 Mar 2024 10:16:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05EE4C433B1; Tue, 5 Mar 2024 10:15:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633759; bh=buI8QIvB9qQbf6UKyQhoBlduuXAN0B68Jk0a66fCfeA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hD0RcuJh00bicYhn0Ta6Ju9jPnBJ6gf5WneHPY3WuSGytfQ3Rc+DL/DoEICP3FM75 TX/RbpezM81fbB7g0M+/pUDbb5pBwp/U/cmchQIoNl1PajmoWeXVhqKZvBxR2j5GQ/ Xdi601RNr9cazQkefNIGomp16HunWfV85J/ruCFKJmmEcwPO6ApMTZcJ4UuZr1UsdF IMcAM0aIzs2G0O2qY8xakXVQgmneCmZ8NqSMafyLl2qF/5B13f9WQleGpTBPKI7GT6 DWWSpgNOxOOcGpvNc8RrhGDXjyzyq5YkBJQZFkgzf+EGBUNHt2uhxv49wDRBLex/sm mq/GhRcT5gfAA== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 06/16] iommu/dma: Implement link/unlink page callbacks Date: Tue, 5 Mar 2024 12:15:16 +0200 Message-ID: <1d3d26afcdbf95b053a3a44ceff34a4fa5334582.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: DA30D14000E X-Stat-Signature: y4uuqiabcpnczd3mc3zt73anxeebt3fg X-Rspam-User: X-HE-Tag: 1709633760-292589 X-HE-Meta: U2FsdGVkX1/eEXsWXeUgA2mD1nuXbF+YnXZd2gv15F1ppmh6hQ8pFmpMUVLgfQ2Zby6pGOifgaiLOI3kjbzgvyQ74aJE5b0rHZUlaJBUGGsLurTyyio5OhJlYvjk6yKC4ojxvEHpx33/0cqjauw49STHqZ7w4Ntskh89sECEsqiHQ+70wJkpYgKflqHHV1QrQlilOrshTWfSjajrhZn/0XkxfRzPeS/vc+wxro8fQ9aOEcm463fzJ5pABrcDju/en7Zh+wRQsnbejyzgBS0WHjENwwrGcoqqLZ7N0q9LxXB8FAu9NNIMI/LocYxUL0G50oiVKCdf+uLV38Glp/uWYa6YfTZVMbFL8a73fgQq6lgOWIbGkQ9WPoF541tSqlmtNImUiV1EPcpJe2DX1pH622q5wQVMMYQR2ltyltTjpzMytnH6gxL0Oo3Fy8tT5PXdvvTSXKrwUjQOd5WzCn/hb8ob/57z+LOgL+z836W/MKAVg8VX2xDJ5ExGmfMjmmglFGBWlVimAwDKWjfBIc6q3XY4rmdEamVjiOtS7oJBD3BIS3ZupWr8kqHJ6ugC37+6jd3smF1nH5dxSSlQOu+fYdRI29rZ2p5EN/Ci6HZkd9sKT5DvdC6y57fE1iiNhBsKsdIaW1qurfECylZ618FH4yUSo+cSR0op4aUREXd4qMoJQZ0KDTlFr+76ylv2uh3xy0Dw0jnNWyLWhYapf9KFq08wLCBBzhEYUu4HFdYPCviTVoa5Jle1yNDjZIzqcTXo4N65QH/BIOlpwC7U2roNV6RfAZBXWv0X2S6BFyZnZmn+Cx5Nx0CsibMcnOwAoeCWv6EgxGlkuIv1sEZCpUHa9ctT2rbC1WWNmLnB82wD4RrEicMDjLSlCFPYT/6dLu0yeVTt3WPOUB7EObwHhcPpOEK0mYCZD3PTCEZBXcP+UBc9GlQmGDp8P/PTq2QCGT390QEa7bEjb2n5oR6qHiY T4By0YmF 0rKax X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Add an implementation of link/unlink interface to perform in map/unmap pages in fast patch for pre-allocated IOVA. 
Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index dbdd373a609a..b683c4a4e9f8 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1752,6 +1752,21 @@ static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova, __iommu_dma_free_iova(cookie, iova, size, &iotlb_gather); } +static dma_addr_t iommu_dma_link_range(struct device *dev, struct page *page, + unsigned long offset, dma_addr_t iova, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + return __iommu_dma_map_pages(dev, page, offset, iova, size, dir, attrs); +} + +static void iommu_dma_unlink_range(struct device *dev, dma_addr_t addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + __iommu_dma_unmap_pages(dev, addr, size, dir, attrs, false); +} + static const struct dma_map_ops iommu_dma_ops = { .flags = DMA_F_PCI_P2PDMA_SUPPORTED, .alloc = iommu_dma_alloc, @@ -1776,6 +1791,8 @@ static const struct dma_map_ops iommu_dma_ops = { .opt_mapping_size = iommu_dma_opt_mapping_size, .alloc_iova = iommu_dma_alloc_iova, .free_iova = iommu_dma_free_iova, + .link_range = iommu_dma_link_range, + .unlink_range = iommu_dma_unlink_range, }; /* From patchwork Tue Mar 5 10:15:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581934 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CE1FC54E41 for ; Tue, 5 Mar 2024 10:16:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C0A1C6B00B8; Tue, 5 Mar 2024 05:16:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B6D136B00B9; Tue, 5 Mar 2024 05:16:06 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 949A16B00BA; Tue, 5 Mar 2024 05:16:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 7F79F6B00B8 for ; Tue, 5 Mar 2024 05:16:06 -0500 (EST) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 5BE9FC0DEC for ; Tue, 5 Mar 2024 10:16:06 +0000 (UTC) X-FDA: 81862579932.27.A3A715E Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf02.hostedemail.com (Postfix) with ESMTP id B85928000F for ; Tue, 5 Mar 2024 10:16:04 +0000 (UTC) Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Cimcj0rC; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf02.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633764; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=zYSEuAz/VcQQ9oYOdYsxLecJdgLxoOHzBKYScKbWY7E=; b=6MHH2L2Hx+ZOaQNPF2RaDQhgY/qnxg/m8uzV2vjIF6YQbuJfBBY0xxXvDV6OBwbKLgYjgx 
nVPh6svQS3qpPGhdd6tThX/NC3qWsat8hoHQIEUCHJd1Rt+0kafp17BaMY97n3AP8OeNVY Qdzha8txvA4cLQ2mpb1PoNvyoPVC4NE= ARC-Authentication-Results: i=1; imf02.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Cimcj0rC; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf02.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633764; a=rsa-sha256; cv=none; b=u5vupOsJsFsLmd2gvJzbPFuhzebba65gU37SfLcur5/UFHjtRd/u+rkQnJi/jsQQVGAOru 2BRzeCGwTyNGqxWjNpFBzPQnHRpxMRhnRnCwnxPROWXwuCkxyXiRUzAU1C01IbhGHR8R1z /HqMDeA6hst1UK89ola2Ig8Lcl665bQ= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id CE02A614B0; Tue, 5 Mar 2024 10:16:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CC633C43390; Tue, 5 Mar 2024 10:16:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633763; bh=E8ahaIx2eHfPJk0G81HhbupoEvLGKO0ealmbIxFQ2TY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Cimcj0rCFoKg3KWHRwTEgnoqc8qrWIhxvFizI1kU7loe/t73myo7KMiGNWQpQmDYK YkCKGSvq63GGBNQ2CLnL24z1RXznU+8F5XkWj/5/ZksOhZ0kBQccN2aT7CgAKP9v9x yl9owxSK+HXI1J5nvkmlDd9aQR0PNfqbHP8QIEKRMOoQAznwZDxj6A2AMZSNfEgi32 fiuIyDdyjU1OE4zsZOG6BbNIThso3SY9ITUaG4nRwQoZTR27ICISHS92MWtSOWycWO /4uENNscCuCDEqeAucc8qKvGe+qf/1JpEGEZlgyvbFkeMhHjGMpx2I6GdKSl26ERxE hhl/ov41ZTT0g== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 07/16] RDMA/umem: Preallocate and cache IOVA for UMEM ODP Date: Tue, 5 Mar 2024 12:15:17 +0200 Message-ID: <47cc27fbaf9f4bd19edbcaac380bdd9684c5d12f.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: B85928000F X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: wbr3ewy164z8kpj74bonmb7i6c3zsd19 X-HE-Tag: 1709633764-735124 X-HE-Meta: U2FsdGVkX19pkoXnNpdjqRXG/CsitiPSZo+B9pjnNcZt5ABWB1VTLvVC5xlRo5EWm8NEcFCiAX1RHG/otammGQM6euy+R9evGfgcdCU3vjSD0sdgRo3ySNofbmqXTlkwUMgUBivvFuppEKPuKKF+kPF6x1l3zTJd5hBx8DLIP8IBgtHc+JyRBLp1caJOYaDmiOKce8Dz+GA8b6HGIpXAOYnxl2pMDHxDFJJrV+iIsuJonPY4jKaSRWeZbBEe50Tr1wsb/HJ10HCfhom3917bprv6AH4Njn9S3xp0t0tV2EFCTvAE31Mdm6eYcxlBdot4sjtvrvcXHrgCXVK70giaSd+KWGRVeCVx4EJ821Tddr7aeis9pTmqG0LRNVg70mXZeSttkw5rcGcKnm0cLhE/vTD1Cc6O4rRFryjaYXklzo0nC8+BbIIR4VhpDvjXBaJI59CIkE0rSnCbWabnNKGx5xjGBmjH8cUHUzW/ZKFpEpaoXr+SmUURvxdSOdLOBwla2Pf/SLbQICQ4/fSfRuBX6cQn2tDkyPbtz4ZJyCGDGo09GPpQbygwHZC1DjavhE7wd7Rs7sWiAvJwW9Fu1OsD6sgESWOcMamGNpIaCiVEdL69uuiJfezBtIjWIyyoS39IkQtJAAfCmTM5BMU+dR8pSKfNtkB8EKYs+bBUBpYXdoUKIpXtg17xEWIXja3dz0gR9UfcnMXKxxCy6wzsrEV8X2MK7O5LT4UTMpPRH14uSt2rBRkuM9qJ1WWvA2oYVaDkw8W85LTnP59n2a2Gzg/lI9NwpRQoYit/r4GsFwOO6Qc8L9Fsx+RaguOPtRGGZbwBU53fRIAbsigN8GLtYU+mpCQ9MmlBqLzKqTIua/DkJx62rb/SMEkr8mkIJ4unufV8bv//A2kNGG5rHSTOguDChusRCh2OS9vA/DSa3mOpERfetZxmmgdCZ+95dRlKvnfBFRewJ7SNGf+H51mjLgk Tt40A+bI 8p0y+QY92yciFmVufpU2zdTdCgcFvr145wUZa6yUpEkF8v2WN1GwoMSI0Ht+zNUlDQ2hp5grniTVAmuyhsbJL+BVMzaZ1iwYfWwlJR1DB5ejqMlGx6o2d3yatLT6qM8pBs+/yqpjuqlmV3pkiuGf/iUOXlIkLR6nxJZqnxVvvZFmUFCSK2OGmr0n+tjzwM8raPNgLV1WBChcHAXreZVju7ustpHqLC4bvYjdN/lmTg9JeWEDh/O7y22UnFg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to provide two step interface to map pages, preallocate IOVA when UMEM is initialized. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 16 +++++++++++++++- include/rdma/ib_umem_odp.h | 1 + include/rdma/ib_verbs.h | 18 ++++++++++++++++++ 3 files changed, 34 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e9fa22d31c23..f69d1233dc82 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -50,6 +50,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { + struct ib_device *dev = umem_odp->umem.ibdev; int ret; umem_odp->umem.is_odp = 1; @@ -87,15 +88,25 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, goto out_pfn_list; } + umem_odp->iova.dev = dev->dma_device; + umem_odp->iova.size = end - start; + umem_odp->iova.dir = DMA_BIDIRECTIONAL; + ret = ib_dma_alloc_iova(dev, &umem_odp->iova); + if (ret) + goto out_dma_list; + + ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, start, end - start, ops); if (ret) - goto out_dma_list; + goto out_free_iova; } return 0; +out_free_iova: + ib_dma_free_iova(dev, &umem_odp->iova); out_dma_list: kvfree(umem_odp->dma_list); out_pfn_list: @@ -262,6 +273,8 @@ EXPORT_SYMBOL(ib_umem_odp_get); void ib_umem_odp_release(struct ib_umem_odp *umem_odp) { + struct ib_device *dev = umem_odp->umem.ibdev; + /* * Ensure that no more pages are mapped in the umem. 
* @@ -274,6 +287,7 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) ib_umem_end(umem_odp)); mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); + ib_dma_free_iova(dev, &umem_odp->iova); kvfree(umem_odp->dma_list); kvfree(umem_odp->pfn_list); } diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 0844c1d05ac6..bb2d7f2a5b04 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -23,6 +23,7 @@ struct ib_umem_odp { * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. */ dma_addr_t *dma_list; + struct dma_iova_attrs iova; /* * The umem_mutex protects the page_list and dma_list fields of an ODP * umem, allowing only a single thread to map/unmap pages. The mutex diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index b7b6b58dd348..e71fa19187cc 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4077,6 +4077,24 @@ static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr) return dma_mapping_error(dev->dma_device, dma_addr); } +static inline int ib_dma_alloc_iova(struct ib_device *dev, + struct dma_iova_attrs *iova) +{ + if (ib_uses_virt_dma(dev)) + return 0; + + return dma_alloc_iova(iova); +} + +static inline void ib_dma_free_iova(struct ib_device *dev, + struct dma_iova_attrs *iova) +{ + if (ib_uses_virt_dma(dev)) + return; + + dma_free_iova(iova); +} + /** * ib_dma_map_single - Map a kernel virtual address to DMA address * @dev: The device for which the dma_addr is to be created From patchwork Tue Mar 5 10:15:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581935 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4DDD0C54E41 for ; Tue, 5 Mar 2024 10:16:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id AD61F940009; Tue, 5 Mar 2024 05:16:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A61D494000C; Tue, 5 Mar 2024 05:16:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 86419940009; Tue, 5 Mar 2024 05:16:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 71101940007 for ; Tue, 5 Mar 2024 05:16:10 -0500 (EST) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 4BE4814072D for ; Tue, 5 Mar 2024 10:16:10 +0000 (UTC) X-FDA: 81862580100.22.7E3B1EB Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf28.hostedemail.com (Postfix) with ESMTP id B9095C0015 for ; Tue, 5 Mar 2024 10:16:08 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=odKrLpzw; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf28.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633768; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: 
content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Iy08ScejRLVciARpa7RdIV8Z6fV5+d/bdAy2bLacZEA=; b=GGACxgOYHRKN/tMawTuIoatd/jtzyLkHWuS9DhjDfPcOovzQTOUprR19t4hEBtgIS9XGaQ mvcMi59Atmy8zu/ECiR3CaCAfNxXqZfLSq+61GEWHuIgF9pxL8qLFBaAfZqYK4U6M0cvUu CsVyYC4+ZcSC+rfadA4NAbrxK63n5Dw= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=odKrLpzw; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf28.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633768; a=rsa-sha256; cv=none; b=UlGYIbT2u2gM3PJvQPA11ejxy0IzEgo6J+sMMebCx+d7tSZDeQtsxAS+wVzM32czDAfKFQ Hs1rnaY76qrQ/Pt/EOzV4WsixgDG2kThLqVvh/lvEKY3lgHOYB8RFyENveYBQ5MkHdo6XA nswmSE3kcVOTo4RMXzy3rNsrxQ49Gg0= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id E9F21614B7; Tue, 5 Mar 2024 10:16:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8AB2BC43390; Tue, 5 Mar 2024 10:16:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633767; bh=aiC62CDa6AOfDDD5eldoWW566/DmWHKZI7ImrxLpIDI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=odKrLpzwo1+ckRGRNVfwyJTveQktZ8kd/S5yCucQpqZHcu4+kPj9mCgK/xgfYyPv3 I1yxGUXHANUZsFY2/zao6yrw+TC1BO6Ug1tU2XFw43OtQovJo9iqD1Ai4PUuOW2ek4 X/9Elu7Uq5ZyXbu6pqOIjh/WRKRxCRaxho9zx8G+nbRwIw8EWxL/eMZEcymmglY26U WQHazEjuKBpCIJbiC5wZCkJazHnRrQozUZyHySSTthx/1IlEePhg78T3ZBxMiNMYg+ cr2bSbBp/+udUr2p98mwq716WamQdsJgUH8ZChwuFyRcfal6uhhPyUvWkzAuA7xFxs NQnrJCt3MltQw== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 08/16] RDMA/umem: Store ODP access mask information in PFN Date: Tue, 5 Mar 2024 12:15:18 +0200 Message-ID: <88b042d29a28a2866d5bc5ca20bdba4a71bc7aca.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: B9095C0015 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: fybuxy4j1cdnhaea8azcayksdk9escfz X-HE-Tag: 1709633768-289871 X-HE-Meta: U2FsdGVkX19hXDXdxea7jefJSr9+lJc7aZAFAq3FZnXYCAMupb7PJ1jxg46xgFUNDX2E6qqkOZw8q9DRwSQ+m78Zd+KuQ90YN+JtWQaJcoT4qkeztaaveR31zalJ6lx7LrhTAkqHw/vCirB8x/5MyG1NnbRIkMrzL90+wI9dpqrSjCbfUkq/gpjgtNS6zsK+mBljDa2+uOgQA7P7fO3P0AtKfd1fwANj34f0qZe6tvbXhMXT4PsuCPCMyc2vSLCgMERaVedyKKo6G87iQPSJrEqafmkG+l4Bj3CaBna5HjEVySWbUDsVFEQWzrQ4grjjNjRpekB7h4tQT/PyfXg8RZUuXlUjFXsABXeUfQkhrvyCzuxfphUi5tcfux/Se8FYgLQiqGnXTbHY0kAlTFmjNP7jtTAfaDiSKOwbdV02VMmWS6vwN77ypSZBokn5bTIKYnVa8LsJT/w8XUgqVJ6LuClkSLV0TvR471BE1dGjLz8wxHbOXhjw44fMZEWgh6ZMdoDtW+aFbF9X0QZ/bog7e35Bju1yduPyUQrF7gycF93zFEonK/x1mc8nAPr0BZ8pTNO3s2g9TkIZ5N7WSHgXcrWK7wZs+HLjdKj0ZLnLyQS+Mf1ImstwJwFeaAGgz7IkT442N7TGsVPaVQCFLmAH4TsXx2ZtjOqs0tbSZx7CRyp8L+mw/93A/Noubq/r0y2t24VU6KOpUsUzLNmbVgH8sNTXRbIv2ipuMEYl7is+n5A7awZ1QpL7MxjFLVzYvCaHKkgDp3adgCEasUAnXQ16cla5theBjtwWJkleaty+JlIgVFAbvw2KAunFbbO3szzqcBuzl8623PH4lOlgFt7xxY9wc5Q16nTZ3jP8sAaR/ipSJJUzveyR4BK7R+vBHl8xJ3n82ulSTUQkHa+4rYEV8D1vZwXTyO+ka23zEHACMiama58R/bWzFF6DeS1I4rKZKT8MCGEqrVrCutOFgU7 Sap57azt fcFMF X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to remove of dma_list, store access mask in PFN pointer and not in dma_addr_t. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 99 +++++++++++----------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/odp.c | 37 ++++++----- include/rdma/ib_umem_odp.h | 13 ---- 4 files changed, 59 insertions(+), 91 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index f69d1233dc82..3619fb78f786 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -310,22 +310,11 @@ EXPORT_SYMBOL(ib_umem_odp_release); static int ib_umem_odp_map_dma_single_page( struct ib_umem_odp *umem_odp, unsigned int dma_index, - struct page *page, - u64 access_mask) + struct page *page) { struct ib_device *dev = umem_odp->umem.ibdev; dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - if (*dma_addr) { - /* - * If the page is already dma mapped it means it went through - * a non-invalidating trasition, like read-only to writable. - * Resync the flags. 
- */ - *dma_addr = (*dma_addr & ODP_DMA_ADDR_MASK) | access_mask; - return 0; - } - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, DMA_BIDIRECTIONAL); if (ib_dma_mapping_error(dev, *dma_addr)) { @@ -333,7 +322,6 @@ static int ib_umem_odp_map_dma_single_page( return -EFAULT; } umem_odp->npages++; - *dma_addr |= access_mask; return 0; } @@ -369,9 +357,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, struct hmm_range range = {}; unsigned long timeout; - if (access_mask == 0) - return -EINVAL; - if (user_virt < ib_umem_start(umem_odp) || user_virt + bcnt > ib_umem_end(umem_odp)) return -EFAULT; @@ -397,7 +382,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, if (fault) { range.default_flags = HMM_PFN_REQ_FAULT; - if (access_mask & ODP_WRITE_ALLOWED_BIT) + if (access_mask & HMM_PFN_WRITE) range.default_flags |= HMM_PFN_REQ_WRITE; } @@ -429,22 +414,17 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, for (pfn_index = 0; pfn_index < num_pfns; pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) { - if (fault) { - /* - * Since we asked for hmm_range_fault() to populate - * pages it shouldn't return an error entry on success. - */ - WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); - WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); - } else { - if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) { - WARN_ON(umem_odp->dma_list[dma_index]); - continue; - } - access_mask = ODP_READ_ALLOWED_BIT; - if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) - access_mask |= ODP_WRITE_ALLOWED_BIT; - } + /* + * Since we asked for hmm_range_fault() to populate + * pages it shouldn't return an error entry on success. + */ + WARN_ON(fault && range.hmm_pfns[pfn_index] & HMM_PFN_ERROR); + WARN_ON(fault && !(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)); + if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) + continue; + + if (range.hmm_pfns[pfn_index] & HMM_PFN_STICKY) + continue; hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]); /* If a hugepage was detected and ODP wasn't set for, the umem @@ -459,13 +439,13 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, } ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]), - access_mask); + umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); if (ret < 0) { ibdev_dbg(umem_odp->umem.ibdev, "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); break; } + range.hmm_pfns[pfn_index] |= HMM_PFN_STICKY; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -485,7 +465,6 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma_addr; dma_addr_t dma; int idx; u64 addr; @@ -496,34 +475,34 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, virt = max_t(u64, virt, ib_umem_start(umem_odp)); bound = min_t(u64, bound, ib_umem_end(umem_odp)); for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) { + unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; + struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); + idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; dma = umem_odp->dma_list[idx]; - /* The access flags guaranteed a valid DMA address in case was NULL */ - if (dma) { - unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT; - struct page *page = 
hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); - - dma_addr = dma & ODP_DMA_ADDR_MASK; - ib_dma_unmap_page(dev, dma_addr, - BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); - if (dma & ODP_WRITE_ALLOWED_BIT) { - struct page *head_page = compound_head(page); - /* - * set_page_dirty prefers being called with - * the page lock. However, MMU notifiers are - * called sometimes with and sometimes without - * the lock. We rely on the umem_mutex instead - * to prevent other mmu notifiers from - * continuing and allowing the page mapping to - * be removed. - */ - set_page_dirty(head_page); - } - umem_odp->dma_list[idx] = 0; - umem_odp->npages--; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) + continue; + if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_STICKY)) + continue; + + ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), + DMA_BIDIRECTIONAL); + if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { + struct page *head_page = compound_head(page); + /* + * set_page_dirty prefers being called with + * the page lock. However, MMU notifiers are + * called sometimes with and sometimes without + * the lock. We rely on the umem_mutex instead + * to prevent other mmu notifiers from + * continuing and allowing the page mapping to + * be removed. + */ + set_page_dirty(head_page); } + umem_odp->pfn_list[pfn_idx] &= ~HMM_PFN_STICKY; + umem_odp->npages--; } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index bbe79b86c717..4f368242680d 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -334,6 +334,7 @@ struct mlx5_ib_flow_db { #define MLX5_IB_UPD_XLT_PD BIT(4) #define MLX5_IB_UPD_XLT_ACCESS BIT(5) #define MLX5_IB_UPD_XLT_INDIRECT BIT(6) +#define MLX5_IB_UPD_XLT_DOWNGRADE BIT(7) /* Private QP creation flags to be passed in ib_qp_init_attr.create_flags. * diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 4a04cbc5b78a..5713fe25f4de 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "mlx5_ib.h" #include "cmd.h" @@ -143,22 +144,12 @@ static void populate_klm(struct mlx5_klm *pklm, size_t idx, size_t nentries, } } -static u64 umem_dma_to_mtt(dma_addr_t umem_dma) -{ - u64 mtt_entry = umem_dma & ODP_DMA_ADDR_MASK; - - if (umem_dma & ODP_READ_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_READ; - if (umem_dma & ODP_WRITE_ALLOWED_BIT) - mtt_entry |= MLX5_IB_MTT_WRITE; - - return mtt_entry; -} - static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, struct mlx5_ib_mr *mr, int flags) { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); + bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + unsigned long pfn; dma_addr_t pa; size_t i; @@ -166,8 +157,17 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, return; for (i = 0; i < nentries; i++) { + pfn = odp->pfn_list[idx + i]; + if (!(pfn & HMM_PFN_VALID)) + /* Initial ODP init */ + continue; + pa = odp->dma_list[idx + i]; - pas[i] = cpu_to_be64(umem_dma_to_mtt(pa)); + pa |= MLX5_IB_MTT_READ; + if ((pfn & HMM_PFN_WRITE) && !downgrade) + pa |= MLX5_IB_MTT_WRITE; + + pas[i] = cpu_to_be64(pa); } } @@ -268,8 +268,7 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, * estimate the cost of another UMR vs. the cost of bigger * UMR. 
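Condensed from the populate_mtt() hunk above (this is a sketch, not code from the patch, and the helper name is invented): access permissions are now derived from the HMM PFN flags rather than from permission bits stored in the low bits of the dma_addr_t, with HMM_PFN_STICKY marking entries that are already DMA-mapped:

/* Sketch: derive MTT access flags from an HMM PFN entry. */
static u64 odp_mtt_access_flags(unsigned long pfn, bool downgrade)
{
	u64 access = MLX5_IB_MTT_READ;

	if ((pfn & HMM_PFN_WRITE) && !downgrade)
		access |= MLX5_IB_MTT_WRITE;

	return access;
}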
*/ - if (umem_odp->dma_list[idx] & - (ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) { + if (umem_odp->pfn_list[idx] & HMM_PFN_VALID) { if (!in_block) { blk_start_idx = idx; in_block = 1; @@ -555,7 +554,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, { int page_shift, ret, np; bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE; - u64 access_mask; + u64 access_mask = 0; u64 start_idx; bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT); u32 xlt_flags = MLX5_IB_UPD_XLT_ATOMIC; @@ -563,12 +562,14 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp, if (flags & MLX5_PF_FLAGS_ENABLE) xlt_flags |= MLX5_IB_UPD_XLT_ENABLE; + if (flags & MLX5_PF_FLAGS_DOWNGRADE) + xlt_flags |= MLX5_IB_UPD_XLT_DOWNGRADE; + page_shift = odp->page_shift; start_idx = (user_va - ib_umem_start(odp)) >> page_shift; - access_mask = ODP_READ_ALLOWED_BIT; if (odp->umem.writable && !downgrade) - access_mask |= ODP_WRITE_ALLOWED_BIT; + access_mask |= HMM_PFN_WRITE; np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault); if (np < 0) diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index bb2d7f2a5b04..095b1297cfb1 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -68,19 +68,6 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) umem_odp->page_shift; } -/* - * The lower 2 bits of the DMA address signal the R/W permissions for - * the entry. To upgrade the permissions, provide the appropriate - * bitmask to the map_dma_pages function. - * - * Be aware that upgrading a mapped address might result in change of - * the DMA address for the page. - */ -#define ODP_READ_ALLOWED_BIT (1<<0ULL) -#define ODP_WRITE_ALLOWED_BIT (1<<1ULL) - -#define ODP_DMA_ADDR_MASK (~(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT)) - #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_umem_odp * From patchwork Tue Mar 5 10:15:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13581936 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4D2CBC54E4A for ; Tue, 5 Mar 2024 10:16:27 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B47CC94000D; Tue, 5 Mar 2024 05:16:14 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AF8BE94000B; Tue, 5 Mar 2024 05:16:14 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8FC8694000D; Tue, 5 Mar 2024 05:16:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 76BD294000B for ; Tue, 5 Mar 2024 05:16:14 -0500 (EST) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 477B080CEA for ; Tue, 5 Mar 2024 10:16:14 +0000 (UTC) X-FDA: 81862580268.18.A371F70 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf05.hostedemail.com (Postfix) with ESMTP id B91C110000B for ; Tue, 5 Mar 2024 10:16:12 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="Re5/PREc"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf05.hostedemail.com: domain of 
leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633772; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=hDi5xvojWRIJEBMJj35ZWcDoY+2xWx5fHfKioYJwfDU=; b=YGgOAkLpb3tMWMEcZ+JOfyeV0xA/H2q1D33JFdA3OUcA8EHfxtgTOt8FwnvMyM9UvkL8g6 v0FQCCfD5ilAKe0Oo6Y3C9Nlikv5L00YLpG1g5C+LYUnLskt44IROfTLKw8+/UAjgVjzoT IvyDKOurtZav+gXqrPEkCA/GTq/YRqc= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="Re5/PREc"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf05.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633772; a=rsa-sha256; cv=none; b=PFKiDkPWk2vPsGQC2ddT5gpQq3cmcD/puATWCr0oDXBPRictq/e9mU7Fbngs0iqtzLkG2F AD2Efn6ql4LM1LOxNX9zO8Xy7IZGxmJB5MfjUUWte3zJ7y6s+429o5Leywq1FR1wpah5YF 73ZVLa1GeJZS9f2keCmqdinjvrDtd60= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id EA3EE61460; Tue, 5 Mar 2024 10:16:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D2032C433C7; Tue, 5 Mar 2024 10:16:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633771; bh=VVD1wxwJfNeGBkq+0kcqb+dcMkI1qAL6P7iY3YcEByk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Re5/PREctans6bjAaCScdeo+OGx9Q0gK1V/DUyZejXI/cOEwqo9NbvH0bS0ukSRzY IO/UKo4/zvHoXhFlzBm7rkytV2fXO3sfDTwKk1YF4g7KnjIZJnhC38ThoJ8egOf97m g4a34Fomxt5HbL1QobDIYe5WO49HDJFoBuSUR3sxyehwushCKt9mQaenNr4ah8bZjI iRyOhCdtO8jqtiPOwv6kquAL+SI7/sZiOaVipfxPtHPM9YEoRLtuvc3A7Jy/k5unuZ O5i2rMBduMf6GtLh0Na4vtbnu75fR3CLlqQUL81nhWRM2LWeNit78vQf/3WGAsKjtj dpiN3RI5d/DnQ== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 09/16] RDMA/core: Separate DMA mapping to caching IOVA and page linkage Date: Tue, 5 Mar 2024 12:15:19 +0200 Message-ID: <22f9bd2e33ca2ec2b3d3bbd4cbac55122991e02f.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: B91C110000B X-Stat-Signature: 7k5dsdibmteqmurb53q81q4o6pouie5b X-HE-Tag: 1709633772-889247 X-HE-Meta: U2FsdGVkX1+vLhO8Hs800dRlcXo/2jsF3xE2a3FTk7IPV8WRCjFdFb51gbXs7u5cucJneufg+aUtJHVSXC4JaRNlYltSiZD7XA7RpVcVgO1baL7J3nVY2f+f3iAWWoP2nxz0SgsMIE2/HJGLP682McmT6JsmDMpnIwivbwm4UYSrV6pKWVMDCEKUfw3y7TvB7WQ0bK4U1QeemsmChmFdH6PwQsjMudHQjAw3K/gLCo6VLb3gPIeu8C9+3chN68cd5hEpM+F0sNQBx9/4iJeATnm+OpkAFDBbmkp9TRxH8Fiw5d8q8PJo6UdKqZV6iwrhDmGXetGXGc58LWKMfu4E1emwZX8qBPsQCaEM3/BP+MRAIbYc+3QRGXkXgIMMto2LYvwvYOUYtMvfRaxnNVWLRZrNShX7g1iea+ia9zgpZvAuwm62ZyyH7h2CEkCTeOcromIm1Bojzu1RrDG/K5uhXu9fLUdalkNy00nI/wn58VfPgm2ux8t268LVWQ4AfmQ/oAtuiZqsoPL/4lW187iRUG+a6JjtN99BNsr+GRCD69JACfkhnhlHLx7/bvK7SRJJkXWaz9yilX7hSmtI3CvQa2wFKQugDPoMfzviCnHOa7dYZifSQhaoTLiApWxXAtyCEYoCA0Gup27wa1upSnb+/nSBkz93ZU90nPDT84xhM9ZUuI02KP1L6aA9ny6LG617GKEnpk4hFT+e8luDlHHmuI7P4mJkmpEVQqVCf2a+eXV0+37pOvEPq6c0Mk5NS8sf7MCnxT/B4nuGYz6eQcA0WO7TQFQaD09KesJUjG1/PNmk+c0wnzbVDeSPiJY88FsJZc7vf0Ny9Xzie2FMikbXJSK614sOcmI4upKoL0gs+6NjAlf9gqIfZY0TSFb49b5AvutrSuPO9dxnaXZ7glU0abcLUSEcENVHi2If9HcnCUpKR2J3tCD0L52C9GL6qTXMxBv8qHrUKuR1tpt3lZo i8BysMxD rcziP X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Reuse newly added DMA API to cache IOVA and only link/unlink pages in fast path. 
Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem_odp.c | 57 ++---------------------------- drivers/infiniband/hw/mlx5/odp.c | 22 +++++++++++- include/rdma/ib_umem_odp.h | 8 +---- include/rdma/ib_verbs.h | 36 +++++++++++++++++++ 4 files changed, 61 insertions(+), 62 deletions(-) diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 3619fb78f786..1301009a6b78 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -81,20 +81,13 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, if (!umem_odp->pfn_list) return -ENOMEM; - umem_odp->dma_list = kvcalloc( - ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL); - if (!umem_odp->dma_list) { - ret = -ENOMEM; - goto out_pfn_list; - } umem_odp->iova.dev = dev->dma_device; umem_odp->iova.size = end - start; umem_odp->iova.dir = DMA_BIDIRECTIONAL; ret = ib_dma_alloc_iova(dev, &umem_odp->iova); if (ret) - goto out_dma_list; - + goto out_pfn_list; ret = mmu_interval_notifier_insert(&umem_odp->notifier, umem_odp->umem.owning_mm, @@ -107,8 +100,6 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, out_free_iova: ib_dma_free_iova(dev, &umem_odp->iova); -out_dma_list: - kvfree(umem_odp->dma_list); out_pfn_list: kvfree(umem_odp->pfn_list); return ret; @@ -288,7 +279,6 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) mutex_unlock(&umem_odp->umem_mutex); mmu_interval_notifier_remove(&umem_odp->notifier); ib_dma_free_iova(dev, &umem_odp->iova); - kvfree(umem_odp->dma_list); kvfree(umem_odp->pfn_list); } put_pid(umem_odp->tgid); @@ -296,40 +286,10 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) } EXPORT_SYMBOL(ib_umem_odp_release); -/* - * Map for DMA and insert a single page into the on-demand paging page tables. - * - * @umem: the umem to insert the page to. - * @dma_index: index in the umem to add the dma to. - * @page: the page struct to map and add. - * @access_mask: access permissions needed for this page. - * - * The function returns -EFAULT if the DMA mapping operation fails. - * - */ -static int ib_umem_odp_map_dma_single_page( - struct ib_umem_odp *umem_odp, - unsigned int dma_index, - struct page *page) -{ - struct ib_device *dev = umem_odp->umem.ibdev; - dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index]; - - *dma_addr = ib_dma_map_page(dev, page, 0, 1 << umem_odp->page_shift, - DMA_BIDIRECTIONAL); - if (ib_dma_mapping_error(dev, *dma_addr)) { - *dma_addr = 0; - return -EFAULT; - } - umem_odp->npages++; - return 0; -} - /** * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it. * * Maps the range passed in the argument to DMA addresses. - * The DMA addresses of the mapped pages is updated in umem_odp->dma_list. * Upon success the ODP MR will be locked to let caller complete its device * page table update. 
* @@ -437,15 +397,6 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt, __func__, hmm_order, page_shift); break; } - - ret = ib_umem_odp_map_dma_single_page( - umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index])); - if (ret < 0) { - ibdev_dbg(umem_odp->umem.ibdev, - "ib_umem_odp_map_dma_single_page failed with error %d\n", ret); - break; - } - range.hmm_pfns[pfn_index] |= HMM_PFN_STICKY; } /* upon success lock should stay on hold for the callee */ if (!ret) @@ -465,7 +416,6 @@ EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock); void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, u64 bound) { - dma_addr_t dma; int idx; u64 addr; struct ib_device *dev = umem_odp->umem.ibdev; @@ -479,15 +429,14 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt, struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]); idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift; - dma = umem_odp->dma_list[idx]; if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_VALID)) continue; if (!(umem_odp->pfn_list[pfn_idx] & HMM_PFN_STICKY)) continue; - ib_dma_unmap_page(dev, dma, BIT(umem_odp->page_shift), - DMA_BIDIRECTIONAL); + ib_dma_unlink_range(dev, &umem_odp->iova, + idx * (1 << umem_odp->page_shift)); if (umem_odp->pfn_list[pfn_idx] & HMM_PFN_WRITE) { struct page *head_page = compound_head(page); /* diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 5713fe25f4de..13d61f1ab40b 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -149,6 +149,7 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); bool downgrade = flags & MLX5_IB_UPD_XLT_DOWNGRADE; + struct ib_device *dev = odp->umem.ibdev; unsigned long pfn; dma_addr_t pa; size_t i; @@ -162,12 +163,31 @@ static void populate_mtt(__be64 *pas, size_t idx, size_t nentries, /* Initial ODP init */ continue; - pa = odp->dma_list[idx + i]; + if (pfn & HMM_PFN_STICKY && odp->iova.addr) + /* + * We are in this flow when there is a need to resync flags, + * for example when page was already linked in prefetch call + * with READ flag and now we need to add WRITE flag + * + * This page was already programmed to HW and we don't want/need + * to unlink and link it again just to resync flags. + * + * The DMA address calculation below is based on the fact that + * RDMA UMEM doesn't work with swiotlb. + */ + pa = odp->iova.addr + (idx + i) * (1 << odp->page_shift); + else + pa = ib_dma_link_range(dev, hmm_pfn_to_page(pfn), 0, &odp->iova, + (idx + i) * (1 << odp->page_shift)); + WARN_ON_ONCE(ib_dma_mapping_error(dev, pa)); + pa |= MLX5_IB_MTT_READ; if ((pfn & HMM_PFN_WRITE) && !downgrade) pa |= MLX5_IB_MTT_WRITE; pas[i] = cpu_to_be64(pa); + odp->pfn_list[idx + i] |= HMM_PFN_STICKY; + odp->npages++; } } diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 095b1297cfb1..a786556c65f9 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -17,15 +17,9 @@ struct ib_umem_odp { /* An array of the pfns included in the on-demand paging umem. */ unsigned long *pfn_list; - /* - * An array with DMA addresses mapped for pfns in pfn_list. - * The lower two bits designate access permissions. - * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT. 
- */ - dma_addr_t *dma_list; struct dma_iova_attrs iova; /* - * The umem_mutex protects the page_list and dma_list fields of an ODP + * The umem_mutex protects the page_list field of an ODP * umem, allowing only a single thread to map/unmap pages. The mutex * also protects access to the mmu notifier counters. */ diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index e71fa19187cc..c9e2bcd5268a 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4160,6 +4160,42 @@ static inline void ib_dma_unmap_page(struct ib_device *dev, dma_unmap_page(dev->dma_device, addr, size, direction); } +/** + * ib_dma_link_range - Link a physical page to DMA address + * @dev: The device for which the dma_addr is to be created + * @page: The page to be mapped + * @offset: The offset within the page + * @iova: Preallocated IOVA attributes + * @dma_offset: DMA offset + */ +static inline dma_addr_t ib_dma_link_range(struct ib_device *dev, + struct page *page, + unsigned long offset, + struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + if (ib_uses_virt_dma(dev)) + return (uintptr_t)(page_address(page) + offset); + + return dma_link_range(page, offset, iova, dma_offset); +} + +/** + * ib_dma_unlink_range - Unlink a mapping created by ib_dma_link_page() + * @dev: The device for which the DMA address was created + * @iova: DMA IOVA properties + * @dma_offset: DMA offset + */ +static inline void ib_dma_unlink_range(struct ib_device *dev, + struct dma_iova_attrs *iova, + dma_addr_t dma_offset) +{ + if (ib_uses_virt_dma(dev)) + return; + + dma_unlink_range(iova, dma_offset); +} + int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents); static inline int ib_dma_map_sg_attrs(struct ib_device *dev, struct scatterlist *sg, int nents, From patchwork Tue Mar 5 10:15:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582010 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB61BC54E41 for ; Tue, 5 Mar 2024 10:16:31 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A871D940010; Tue, 5 Mar 2024 05:16:18 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9E67994000F; Tue, 5 Mar 2024 05:16:18 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 810EA940010; Tue, 5 Mar 2024 05:16:18 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 6E2CB94000B for ; Tue, 5 Mar 2024 05:16:18 -0500 (EST) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 505DEA1210 for ; Tue, 5 Mar 2024 10:16:18 +0000 (UTC) X-FDA: 81862580436.17.BA8B384 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf19.hostedemail.com (Postfix) with ESMTP id BAF591A0021 for ; Tue, 5 Mar 2024 10:16:16 +0000 (UTC) Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=fCHRXvyt; spf=pass (imf19.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; 
s=arc-20220608; d=hostedemail.com; t=1709633776; a=rsa-sha256; cv=none; b=iHJLI5m3EaBh+c0RarC643OHBii5Em8iji6jawEe3Xc6yJc9qZ9duomJjqBJk8VyHxxQLc 88UAzv5SJ0L1cqG72nVy2nbHKA4/hb6Va6oa4drxUgf30sigbxA3mgXxMkO9xsTGL3TpZk PenOFcr/GIEdSuEBLKyV97JbdzOWJrU= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=fCHRXvyt; spf=pass (imf19.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633776; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=qYb+zBT+l4bRumlZjw61WjNIKyLOpqGiPdeR0ak+ylo=; b=KpGx5JRzOpy/yaWS+LnY4ZnM+5S8LhWxNKypy/vFXk3AHslMspQLFcwQu9eGK4g8LtnI8Q rY5ix02LmW0og1iOOsfspIX/C03Sv8p/GUZ7dPMCBfMkvwRya4we+0kQ1KXBO0qbRNWRUn xeZUqh76MwTuPi/gxLA8k5MwxGe8KwQ= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id F0F34614BA; Tue, 5 Mar 2024 10:16:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0AC60C433C7; Tue, 5 Mar 2024 10:16:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633775; bh=inT0MN5YzB4Hs5AmOhFFcCz6zSy1Lq0FeGslm64Slg8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fCHRXvytmeqDSxdCD2EfXr+ZdX+jOkx56Bb2+TCYBawnqgSnLqcXlkFgDF5qmcEOp E7PMxA+vsc9YowiUDs+N+vKmKNosJclbR8xYEqnO1rRvVNV4WnPpECqgsHVJpNlMKA lB8rm/cBF860sPiBVcTFadxXT46nj5cyTjjsLn7L1IQPJJauWBLzOauVDxRC7JMriq hqBOuvcngfO3Ag48oQOmKFqqEd24H42kIyQ6pFAEtQwXjfX4+Lv6LdF8+Wba5yxu2m Q4I0UNsudFH3cUZ/4s6hLK+r8y5Wx3e1b6prKFfNnu92fF6dlE3FGGxbKpEWGUnQh+ 39aiIZNMUkzUA== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 10/16] RDMA/umem: Prevent UMEM ODP creation with SWIOTLB Date: Tue, 5 Mar 2024 12:15:20 +0200 Message-ID: <8c6d5e7db2d1a01888cc7b9b9850b05e19c75c64.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: BAF591A0021 X-Stat-Signature: oksyfj8ur5185aid34kwi9zqbg3y33nd X-Rspam-User: X-HE-Tag: 1709633776-432418 X-HE-Meta: U2FsdGVkX18B3flbjBsNMS0vNTsFE6ypWNR/WI3ZeeYxY9WuWv8chYZkyVtr9tgippE0568kF7WO1MTYQWV4KYtib47vnaV8YPdYwZo+YwcnxXsKV/WRoivkaDoymF0DoAl+Xeikq1BMV0qMntKkYrMim0RiU+NQKjgQ/0ef3CSbhy23FYXcgEvpO8XULJb8xEKDqTSjBykOk4yQb5qx0DyemR+u3pZXsV4Uchds17PoiyHlmN1dtNLs8YOSQwJqy+Hghmaor3xRInh7od6D3LvN2UXd1IZ+wrMFGPq5pqF1ckUoK5yCdPK1pU8W4Oq3M0nEU5yjw5k+T2kLou27eyepEUGQYfDzKNc7AQwXQM5THzZw3AkAaxixSEE85SAQxPfWc7Z8n1q2hTKNBlt2LyNtjjKM/eAO9+8YgyLy6UQqKnJ5Y0ff+4vTqqh+C9BdorHcjzvQEfLYJBWwwUFdNDSPPO639qsNL+ouU6/fqLPJp1AROUa/LJh/OUH6qJp1lX+/LpqBeZ59a6PmoW78UsYYIFf6z+PV1LooX7vNbKhBf2X5tEXnPQRGH6oh1NWYbFQ3eEiLxO63es4caH9RgvoTnPOQs48WksRaEHxs2ImGlLonoZByUtfaS7U1tiaeCjy1NcaR830aRwaJUm761agAkvJye3MUbSSp6na7CNCkMwSwMzfYXerbz+GqVUUMIn18IDpt+BA8BwgBGY7GTA4hV9vVRQsYx1xS/k1e2eF6+TtHwB8NuxjW/36AlI7257Ic4xnwFVxKH6z8imX3eO7Jo+SVkzQXahKLG7ITxIlHwxxE89QemCasnHKkPgZWm8Nce4Lfr4eXArYfN5Zy3N4sR5p+XJQpjlU/N8gQvqGgm+hdQjWHSxw9hxqCb4QWS1KfMSVIqP8PpIV3aChyB8HbSCU7M1qTdZfmHE+vI68xurMS4SWXcRQk2OfKqHEcjB7uReunI7HOhCMz26j mgQidmsJ oxXKI X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky RDMA UMEM never supported DMA addresses returned from SWIOTLB, as these addresses should be programmed to the hardware which is not aware that it is bounce buffers and not real ones. Instead of silently leave broken system for the users who didn't know it, let's be explicit and return an error to them. Signed-off-by: Leon Romanovsky --- Documentation/core-api/dma-attributes.rst | 7 +++ drivers/infiniband/core/umem_odp.c | 77 +++++++++++------------ include/linux/dma-mapping.h | 6 ++ kernel/dma/direct.h | 4 +- kernel/dma/mapping.c | 4 ++ 5 files changed, 58 insertions(+), 40 deletions(-) diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst index 1887d92e8e92..b337ec65d506 100644 --- a/Documentation/core-api/dma-attributes.rst +++ b/Documentation/core-api/dma-attributes.rst @@ -130,3 +130,10 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged subsystem that the buffer is fully accessible at the elevated privilege level (and ideally inaccessible or at least read-only at the lesser-privileged levels). + +DMA_ATTR_NO_TRANSLATION +----------------------- + +This attribute is used to indicate to the DMA-mapping subsystem that the +buffer is not subject to any address translation. This is used for devices +that doesn't need buffer bouncing or fixing DMA addresses. 
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index 1301009a6b78..57c56000f60e 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -50,51 +50,50 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, const struct mmu_interval_notifier_ops *ops) { + size_t page_size = 1UL << umem_odp->page_shift; struct ib_device *dev = umem_odp->umem.ibdev; + size_t ndmas, npfns; + unsigned long start; + unsigned long end; int ret; umem_odp->umem.is_odp = 1; mutex_init(&umem_odp->umem_mutex); - if (!umem_odp->is_implicit_odp) { - size_t page_size = 1UL << umem_odp->page_shift; - unsigned long start; - unsigned long end; - size_t ndmas, npfns; - - start = ALIGN_DOWN(umem_odp->umem.address, page_size); - if (check_add_overflow(umem_odp->umem.address, - (unsigned long)umem_odp->umem.length, - &end)) - return -EOVERFLOW; - end = ALIGN(end, page_size); - if (unlikely(end < page_size)) - return -EOVERFLOW; - - ndmas = (end - start) >> umem_odp->page_shift; - if (!ndmas) - return -EINVAL; - - npfns = (end - start) >> PAGE_SHIFT; - umem_odp->pfn_list = kvcalloc( - npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); - if (!umem_odp->pfn_list) - return -ENOMEM; - - - umem_odp->iova.dev = dev->dma_device; - umem_odp->iova.size = end - start; - umem_odp->iova.dir = DMA_BIDIRECTIONAL; - ret = ib_dma_alloc_iova(dev, &umem_odp->iova); - if (ret) - goto out_pfn_list; - - ret = mmu_interval_notifier_insert(&umem_odp->notifier, - umem_odp->umem.owning_mm, - start, end - start, ops); - if (ret) - goto out_free_iova; - } + if (umem_odp->is_implicit_odp) + return 0; + + start = ALIGN_DOWN(umem_odp->umem.address, page_size); + if (check_add_overflow(umem_odp->umem.address, + (unsigned long)umem_odp->umem.length, &end)) + return -EOVERFLOW; + end = ALIGN(end, page_size); + if (unlikely(end < page_size)) + return -EOVERFLOW; + + ndmas = (end - start) >> umem_odp->page_shift; + if (!ndmas) + return -EINVAL; + + npfns = (end - start) >> PAGE_SHIFT; + umem_odp->pfn_list = + kvcalloc(npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); + if (!umem_odp->pfn_list) + return -ENOMEM; + + umem_odp->iova.dev = dev->dma_device; + umem_odp->iova.size = end - start; + umem_odp->iova.dir = DMA_BIDIRECTIONAL; + umem_odp->iova.attrs = DMA_ATTR_NO_TRANSLATION; + ret = ib_dma_alloc_iova(dev, &umem_odp->iova); + if (ret) + goto out_pfn_list; + + ret = mmu_interval_notifier_insert(&umem_odp->notifier, + umem_odp->umem.owning_mm, start, + end - start, ops); + if (ret) + goto out_free_iova; return 0; diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 91cc084adb53..89945e707a9b 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -62,6 +62,12 @@ */ #define DMA_ATTR_PRIVILEGED (1UL << 9) +/* + * DMA_ATTR_NO_TRANSLATION: used to indicate that the buffer should not be mapped + * through address translation. + */ +#define DMA_ATTR_NO_TRANSLATION (1UL << 10) + /* * A dma_addr_t can hold any valid DMA or bus address for the platform. It can * be given to a device to use as a DMA source or target. 
It is specific to a diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 1c30e1cd607a..1c9ec204c999 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -92,6 +92,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, if (is_swiotlb_force_bounce(dev)) { if (is_pci_p2pdma_page(page)) return DMA_MAPPING_ERROR; + if (attrs & DMA_ATTR_NO_TRANSLATION) + return DMA_MAPPING_ERROR; return swiotlb_map(dev, phys, size, dir, attrs); } @@ -99,7 +101,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, dma_kmalloc_needs_bounce(dev, size, dir)) { if (is_pci_p2pdma_page(page)) return DMA_MAPPING_ERROR; - if (is_swiotlb_active(dev)) + if (is_swiotlb_active(dev) && !(attrs & DMA_ATTR_NO_TRANSLATION)) return swiotlb_map(dev, phys, size, dir, attrs); dev_WARN_ONCE(dev, 1, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index f989c64622c2..49b1fde510c5 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -188,6 +188,10 @@ int dma_alloc_iova(struct dma_iova_attrs *iova) struct device *dev = iova->dev; const struct dma_map_ops *ops = get_dma_ops(dev); + if (dma_map_direct(dev, ops) && is_swiotlb_force_bounce(dev) && + iova->attrs & DMA_ATTR_NO_TRANSLATION) + return -EOPNOTSUPP; + if (dma_map_direct(dev, ops) || !ops->alloc_iova) { iova->addr = 0; return 0; From patchwork Tue Mar 5 10:15:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582011 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D7C1C54E41 for ; Tue, 5 Mar 2024 10:16:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CF5A5940011; Tue, 5 Mar 2024 05:16:22 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C7EF094000F; Tue, 5 Mar 2024 05:16:22 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A8371940011; Tue, 5 Mar 2024 05:16:22 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 9484994000F for ; Tue, 5 Mar 2024 05:16:22 -0500 (EST) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 708098016E for ; Tue, 5 Mar 2024 10:16:22 +0000 (UTC) X-FDA: 81862580604.10.158459E Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf16.hostedemail.com (Postfix) with ESMTP id E8E3118001D for ; Tue, 5 Mar 2024 10:16:20 +0000 (UTC) Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=L2QU2W8h; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf16.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633781; a=rsa-sha256; cv=none; b=MoGEwbYhOpny/1rr1x66MrUCPXALeziVMlC5e2TuLGweDAiv6kg5L0bTZR1CgCUyEMHaVO peXkcBqsafjzAd+rP9HIj6bfE+memzpGbita+p9Bhfw54nK9JlCRaQSPOBCVLchMX45iUy UjPFtZ1uBfdysgoOKx+LjzJJ/5LaM9M= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=L2QU2W8h; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf16.hostedemail.com: domain of 
leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633781; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=H9MuA0krlFy3lQiwWj2HqWERSYJGSqT7XtPlY8u0Lgc=; b=XXQnGFCZ2K0mid9d6JM5yRcjvdxBGQp77PxpBCaloxNXCTDsA2S5TuHf2EuBT3gPuwq7Yg nhC02oWQteWaDTsT4qKLSvDTyUQu5WikZ7zvMq4YC53fgcxknOr4mDpmNcvlvvgrCu2757 Wv3lpt2cfpZnWOLrnrANx4bQTb8t6WM= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 11AE0614B8; Tue, 5 Mar 2024 10:16:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D2EC1C43390; Tue, 5 Mar 2024 10:16:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633779; bh=utZ3KdFtdsQAESX2EpyzMrGiuaExMhARHy+JfNBKyD0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=L2QU2W8hopktnjqtLh1dUA+1NmamN3jO2hagY+NwOntBW3lcniRVvhjZhygSySvOU zmT1qrJchWcoqJ2R8mvRPMDLWCJ3zSLtuMGhw4NQgQMYE8T+IPVf6ITidiC1VwGRir gtTb1oFKQVhxzVAOxoUh7eF6aVHsgij8QiaxeWlZYGmhQGHOcBXawhmIlI8lY89hFI r3LrJVTUJ9WGUE1lZOaEerkHmLwDIG0ZpwWAZk8UiAYs18llZFSY7SCbVioinjMAVf lBBpzlWMXaUIBxBZqwoyXLVY13am2lZaFJpc9bCZSCgoFvDIi5G4lUMLgNXKm0lxt6 HypLpEtFTk/XQ== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. 
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 11/16] vfio/mlx5: Explicitly use number of pages instead of allocated length Date: Tue, 5 Mar 2024 12:15:21 +0200 Message-ID: <01606f62be051034035ef1501b7c721b8a319dcc.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: E8E3118001D X-Stat-Signature: surmyrtjo599ptdhp8y5pqyrqsa8qocd X-HE-Tag: 1709633780-148280 X-HE-Meta: U2FsdGVkX19GcfegY0H9/Fm9SzEQOxpcyhG2nDdmO1Q63qfqieEmz0YhehS2EhSo3XJBf1NgSxPdE5pzrBYoAGsFU/dC64mEDyPj9kE3/xCdhyW0MS37sLkPCsSKKYzyAPo+iBzysCuPeUOOZPOhlxJdCh0X09aeUlLokeL+N9Yiat2nrg+iJ/ApiIeK5C1rQdmyDHKogPC3tnp6N8e5LEHLotcDF5ptfKeuAjfEFeAhEbjMMxmP6nRr+fUPntRvnhU0AN2omaOqRMrhrf1oRRxVKIxAb1gSYvMDJ5qct6QaHeJX1/3x7+M8SxfiGIAZWq/MJU9F8rJrAyC0CIqlmr6VBxBYlsTPZ2BsY5wtyE38V3S1o30kfw950oAxztMdKzk4SORGvICCMr8hjTH+el1ZYxZVuiJpdQcsokbYyobsvEeTqeKqCPle9Kr6rE1frRXJrBUj1UsRG/vdCGZzZaiBA/ewNflGtzJ4p+EbE23KE3cdVt1z9FbIgXxDLoiUidbt/tbnyoAQle4nTSEk44kWODoWc/B1HO6udn/I4PYS0Yz/6s7hqHiBIOK7Yfbz8mmsowsZiNxdEzSCfhJAPsYyqFOSQwaLYsmPbLoLfCI/m6POe7xE7kQRtqQyh0QilnbtiAOPmW9odURFMod/2oC0MHddbiMXYG0YkSMy8fjEOKtRhVjqkIs4bcRTseG9xJ9mpsXJ5OgxRnt3EIpSoink9EhNU2Q41m+r2/Vqm5l+3uIuKPDzPdiDBGhOaz3UWz2791L5uoD2uGtvEmgE+cwglngti5WrW006S7ummGWcnhB7QsXwTWoeljqrvpbuUdx41/M+FE3YJzuO9SR7TJcy+FP42xe+tcqYG5OoEjMknlZc+i03fIyQLUoWDg5MPHpQMyt3nWZCtVkiFWl/vuFQuVP81gvbz8dzayt2oCZUpFPkSjhb8V4vbjvz7vpfD/bVV0bFgLd33bOOphy 2AUocClR xj1hB X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky allocated_length is a multiple of page size and number of pages, so let's change the functions to accept number of pages. It opens us a venue to combine receive and send paths together with code readability improvement. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 31 ++++++++--------- drivers/vfio/pci/mlx5/cmd.h | 10 +++--- drivers/vfio/pci/mlx5/main.c | 65 +++++++++++++++++++++++------------- 3 files changed, 62 insertions(+), 44 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index efd1d252cdc9..45104e47b7b2 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -305,8 +305,7 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, struct mlx5_vhca_recv_buf *recv_buf, u32 *mkey) { - size_t npages = buf ? DIV_ROUND_UP(buf->allocated_length, PAGE_SIZE) : - recv_buf->npages; + size_t npages = buf ? 
buf->npages : recv_buf->npages; int err = 0, inlen; __be64 *mtt; void *mkc; @@ -362,7 +361,7 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (mvdev->mdev_detach) return -ENOTCONN; - if (buf->dmaed || !buf->allocated_length) + if (buf->dmaed || !buf->npages) return -EINVAL; ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); @@ -403,8 +402,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) } struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf; @@ -416,9 +414,8 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, buf->dma_dir = dma_dir; buf->migf = migf; - if (length) { - ret = mlx5vf_add_migration_pages(buf, - DIV_ROUND_UP_ULL(length, PAGE_SIZE)); + if (npages) { + ret = mlx5vf_add_migration_pages(buf, npages); if (ret) goto end; @@ -444,8 +441,8 @@ void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf) } struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir) +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir) { struct mlx5_vhca_data_buffer *buf, *temp_buf; struct list_head free_list; @@ -460,7 +457,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { if (buf->dma_dir == dma_dir) { list_del_init(&buf->buf_elm); - if (buf->allocated_length >= length) { + if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); goto found; } @@ -474,7 +471,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, } } spin_unlock_irq(&migf->list_lock); - buf = mlx5vf_alloc_data_buffer(migf, length, dma_dir); + buf = mlx5vf_alloc_data_buffer(migf, npages, dma_dir); found: while ((temp_buf = list_first_entry_or_null(&free_list, @@ -645,7 +642,7 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, MLX5_SET(save_vhca_state_in, in, op_mod, 0); MLX5_SET(save_vhca_state_in, in, vhca_id, mvdev->vhca_id); MLX5_SET(save_vhca_state_in, in, mkey, buf->mkey); - MLX5_SET(save_vhca_state_in, in, size, buf->allocated_length); + MLX5_SET(save_vhca_state_in, in, size, buf->npages * PAGE_SIZE); MLX5_SET(save_vhca_state_in, in, incremental, inc); MLX5_SET(save_vhca_state_in, in, set_track, track); @@ -668,8 +665,12 @@ int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev, } if (!header_buf) { - header_buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + u32 npages = DIV_ROUND_UP( + sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE); + + header_buf = + mlx5vf_get_data_buffer(migf, npages, DMA_NONE); if (IS_ERR(header_buf)) { err = PTR_ERR(header_buf); goto err_free; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index f2c7227fa683..887267ebbd8a 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -60,7 +60,7 @@ struct mlx5_vhca_data_buffer { struct sg_append_table table; loff_t start_pos; u64 length; - u64 allocated_length; + u32 npages; u32 mkey; enum dma_data_direction dma_dir; u8 dmaed:1; @@ -219,12 +219,12 @@ int mlx5vf_cmd_alloc_pd(struct mlx5_vf_migration_file *migf); void mlx5vf_cmd_dealloc_pd(struct mlx5_vf_migration_file *migf); void mlx5fv_cmd_clean_migf_resources(struct 
mlx5_vf_migration_file *migf); struct mlx5_vhca_data_buffer * -mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf); struct mlx5_vhca_data_buffer * -mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, - size_t length, enum dma_data_direction dma_dir); +mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, + enum dma_data_direction dma_dir); void mlx5vf_put_data_buffer(struct mlx5_vhca_data_buffer *buf); int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, unsigned int npages); diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index fe09a8c8af95..b11b1c27d284 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -94,7 +94,7 @@ int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, if (ret) goto err; - buf->allocated_length += filled * PAGE_SIZE; + buf->npages += filled; /* clean input for another bulk allocation */ memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, @@ -352,6 +352,7 @@ static struct mlx5_vhca_data_buffer * mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, u8 index, size_t required_length) { + u32 npages = DIV_ROUND_UP(required_length, PAGE_SIZE); struct mlx5_vhca_data_buffer *buf = migf->buf[index]; u8 chunk_num; @@ -359,12 +360,11 @@ mlx5vf_mig_file_get_stop_copy_buf(struct mlx5_vf_migration_file *migf, chunk_num = buf->stop_copy_chunk_num; buf->migf->buf[index] = NULL; /* Checking whether the pre-allocated buffer can fit */ - if (buf->allocated_length >= required_length) + if (buf->npages >= npages) return buf; mlx5vf_put_data_buffer(buf); - buf = mlx5vf_get_data_buffer(buf->migf, required_length, - DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(buf->migf, npages, DMA_FROM_DEVICE); if (IS_ERR(buf)) return buf; @@ -417,7 +417,9 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, u8 *to_buff; int ret; - header_buf = mlx5vf_get_data_buffer(migf, size, DMA_NONE); + BUILD_BUG_ON(size > PAGE_SIZE); + header_buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(size, PAGE_SIZE), + DMA_NONE); if (IS_ERR(header_buf)) return PTR_ERR(header_buf); @@ -432,7 +434,7 @@ static int mlx5vf_add_stop_copy_header(struct mlx5_vf_migration_file *migf, to_buff = kmap_local_page(page); memcpy(to_buff, &header, sizeof(header)); header_buf->length = sizeof(header); - data.stop_copy_size = cpu_to_le64(migf->buf[0]->allocated_length); + data.stop_copy_size = cpu_to_le64(migf->buf[0]->npages * PAGE_SIZE); memcpy(to_buff + sizeof(header), &data, sizeof(data)); header_buf->length += sizeof(data); kunmap_local(to_buff); @@ -481,15 +483,22 @@ static int mlx5vf_prep_stop_copy(struct mlx5vf_pci_core_device *mvdev, num_chunks = mvdev->chunk_mode ? 
MAX_NUM_CHUNKS : 1; for (i = 0; i < num_chunks; i++) { - buf = mlx5vf_get_data_buffer(migf, inc_state_size, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer( + migf, DIV_ROUND_UP(inc_state_size, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; } + BUILD_BUG_ON(sizeof(struct mlx5_vf_migration_header) > + PAGE_SIZE); migf->buf[i] = buf; - buf = mlx5vf_get_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_get_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto err; @@ -597,7 +606,8 @@ static long mlx5vf_precopy_ioctl(struct file *filp, unsigned int cmd, * We finished transferring the current state and the device has a * dirty state, save a new state to be ready for. */ - buf = mlx5vf_get_data_buffer(migf, inc_length, DMA_FROM_DEVICE); + buf = mlx5vf_get_data_buffer(migf, DIV_ROUND_UP(inc_length, PAGE_SIZE), + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); mlx5vf_mark_err(migf); @@ -718,8 +728,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track) if (track) { /* leave the allocated buffer ready for the stop-copy phase */ - buf = mlx5vf_alloc_data_buffer(migf, - migf->buf[0]->allocated_length, DMA_FROM_DEVICE); + buf = mlx5vf_alloc_data_buffer(migf, migf->buf[0]->npages, + DMA_FROM_DEVICE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_pd; @@ -783,16 +793,15 @@ mlx5vf_resume_read_image_no_header(struct mlx5_vhca_data_buffer *vhca_buf, const char __user **buf, size_t *len, loff_t *pos, ssize_t *done) { + u32 npages = DIV_ROUND_UP(requested_length, PAGE_SIZE); int ret; if (requested_length > MAX_LOAD_SIZE) return -ENOMEM; - if (vhca_buf->allocated_length < requested_length) { - ret = mlx5vf_add_migration_pages( - vhca_buf, - DIV_ROUND_UP(requested_length - vhca_buf->allocated_length, - PAGE_SIZE)); + if (vhca_buf->npages < npages) { + ret = mlx5vf_add_migration_pages(vhca_buf, + npages - vhca_buf->npages); if (ret) return ret; } @@ -992,11 +1001,14 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, goto out_unlock; break; case MLX5_VF_LOAD_STATE_PREP_HEADER_DATA: - if (vhca_buf_header->allocated_length < migf->record_size) { + { + u32 npages = DIV_ROUND_UP(migf->record_size, PAGE_SIZE); + + if (vhca_buf_header->npages < npages) { mlx5vf_free_data_buffer(vhca_buf_header); - migf->buf_header[0] = mlx5vf_alloc_data_buffer(migf, - migf->record_size, DMA_NONE); + migf->buf_header[0] = mlx5vf_alloc_data_buffer( + migf, npages, DMA_NONE); if (IS_ERR(migf->buf_header[0])) { ret = PTR_ERR(migf->buf_header[0]); migf->buf_header[0] = NULL; @@ -1009,6 +1021,7 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, vhca_buf_header->start_pos = migf->max_pos; migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER_DATA; break; + } case MLX5_VF_LOAD_STATE_READ_HEADER_DATA: ret = mlx5vf_resume_read_header_data(migf, vhca_buf_header, &buf, &len, pos, &done); @@ -1019,12 +1032,13 @@ static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, { u64 size = max(migf->record_size, migf->stop_copy_prep_size); + u32 npages = DIV_ROUND_UP(size, PAGE_SIZE); - if (vhca_buf->allocated_length < size) { + if (vhca_buf->npages < npages) { mlx5vf_free_data_buffer(vhca_buf); migf->buf[0] = mlx5vf_alloc_data_buffer(migf, - size, DMA_TO_DEVICE); + npages, DMA_TO_DEVICE); if (IS_ERR(migf->buf[0])) { ret = PTR_ERR(migf->buf[0]); migf->buf[0] = NULL; @@ -1115,8 
+1129,11 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev) migf->buf[0] = buf; if (MLX5VF_PRE_COPY_SUPP(mvdev)) { - buf = mlx5vf_alloc_data_buffer(migf, - sizeof(struct mlx5_vf_migration_header), DMA_NONE); + buf = mlx5vf_alloc_data_buffer( + migf, + DIV_ROUND_UP(sizeof(struct mlx5_vf_migration_header), + PAGE_SIZE), + DMA_NONE); if (IS_ERR(buf)) { ret = PTR_ERR(buf); goto out_buf; From patchwork Tue Mar 5 10:15:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582012 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B1D0C54E5D for ; Tue, 5 Mar 2024 10:16:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3AA4A940013; Tue, 5 Mar 2024 05:16:27 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 356A494000F; Tue, 5 Mar 2024 05:16:27 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 15C22940013; Tue, 5 Mar 2024 05:16:27 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 0192E94000F for ; Tue, 5 Mar 2024 05:16:26 -0500 (EST) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id CE7D9160DBA for ; Tue, 5 Mar 2024 10:16:26 +0000 (UTC) X-FDA: 81862580772.06.6E79102 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf11.hostedemail.com (Postfix) with ESMTP id 21CE54000F for ; Tue, 5 Mar 2024 10:16:24 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=qXJ4kuzE; spf=pass (imf11.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633785; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=4ym1mYA2A1wcBt6xWLNgPkj9MHmnOsKdsN3joHsUxhA=; b=QGsNJm4TY25Odjyttybl15wyyvxRXpT5QL9VSl2OJDFfODqd/qVvbW7DbBrjEj8kFIvNOv aP2g2R5dZOUbWSMPoIhqyRJCCk4gsygLwW5eAf181dI5FzYGKQMstEYjU2ctNkH0YIJa10 tSmnxXXLKZ8yFpuVdHeLWiD/FTdtr2E= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633785; a=rsa-sha256; cv=none; b=BpwNR+XwKfooRbYTfFWsQSiQwd1qD/K65cp2ygpVqlEURKdAfYvVTuEXp3TdHzWgjljo/0 lpZ5Ja3es2UzGYDBt+HcwT584KFIKkoIHEYmaSjlMZwh/+EgDAWP6S5jwnPxCY8HjtDupP ST6gl0wOriaJyzAfP/N6m8N6D9o9lT4= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=qXJ4kuzE; spf=pass (imf11.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 4CDD0614A0; Tue, 5 Mar 2024 10:16:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2CCB1C433C7; 
Tue, 5 Mar 2024 10:16:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633783; bh=ktX9dWW2cFJcgdIR5D6ZbOLB18GCIDFrg21/P33AIJM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qXJ4kuzEHOpNp6UZHPKJ9BBgqb6aOqWMZjNX8Mr/NyeYNLbWIluFUQs/il5Qm/nbz rKyGKW497bjfduFoZd0wr9iokEFIosoXqEoSp9FYHE5Xp8JwBP0eMkdQRVZkL+6Uww ryPQG/188kpX4HM5ZM0WopQQkatTizmCi7p5dHYxsiDfGbVRvELF0KDSVp5gVbevRn iuQfGnjX1e6Gup0T5r/Y3LWU+2RN3KFhzCkDeTVBPZBoB5S341ai7MgzL+1qV7wvO/ N+O9x3owVU80s6i+zBYKUlhCTukDtgGYcYE4+1ebdg9cXWrQxpzmVk5+1Ra5IET5cB /WlNvWjN00B8g== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 12/16] vfio/mlx5: Rewrite create mkey flow to allow better code reuse Date: Tue, 5 Mar 2024 12:15:22 +0200 Message-ID: <9366169430357d953e961cd41ae912c5fbd3f568.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: 21CE54000F X-Rspam-User: X-Stat-Signature: n36gxfydfoqqxwyhxhqnn4hwfsab3wnr X-Rspamd-Server: rspam03 X-HE-Tag: 1709633784-991825 X-HE-Meta: U2FsdGVkX180cWANo8oyqYyV19+gNqMdM/k4MJOGGd/ehseAlO1D2x8lKJECdqFRWxlSoiax4IIxYF+bjAokh1gpUX3TMjv7eT7ziSt68GqiLSR7oB+7tfVqV1mnPrkWmk3R1iYInUgcToajpWhBj9evp/gE3OJWIzRKyPA79IB1l2v9uHdqemKUksSWcH9wxxAyxtl1RLhdIvJ3LkAf7+lVJ72+4GpsBMrvLTwG7YfTho4HDQbu29hfd+PItzemG+xVDp5m01u6zdA+Dyo/wQqeEM0AwO1dlnmP+I3gA90jBuxd5ykhwoKJH1Kg/eLY3lf8E6AhnnTrxSKlMiv6T6eaygxQi5mJOGQmY20BQyQyMQzVRdVHCa0E1oRDteS7KpucR9txeVbEpNN5MnxDRj6IxtYlkvwFiw09ltok2Ha45qKiEBjysad43uzkotwax2MoOmf6yk15ui8i9IVHPmyuItj8r2AFLP/T8ca0pQc0MQG9q4kvpi6nSACjp/Qb3uDJnTQNJOwylpmUA4FtTuzywOM6nVOurwV7wz6M+GFDBjq0e2mOJjod1Fqd/AxIl4GzYCcMLl67Vlfqw0d1jMCygG7S4eqBZPkgBJM8v9nF+bmv1z3ZO8rAl+1im07xYSKS/j2qkiWwys+G4gPst6uTZLFfCxQkGw6N3JofWzq28LgXH3dtWkcKEJSdYN1WNHkjXEk2JhOYApJJTPerL0HXs7twIN0LOLkL4nduvRTQDajGVK6LOxtFiTzii/JgvQOM/B5gvXaU7B6ZgHXCwU9FZnEGmgOdr4tsNyzjTcIJi1n6Z0B4kGtEsophbZofqaBTpT0I7Hy+HyhDjTk5cDST3jSYxVSLoBnFUmz/VPOK2lbCtDDk4fl4LMK1AFSj1uayhniIiviLMt+GT/n5emt30vIckEQGYGazdWKhlcua0BkRSVJol8TZfsai9WJY/aurB8Jg0dNZukm8Tfl NM9rrobs DNyha8p0bmjimMB6sOvtjI+Y94Otww+ShkQJlV0ti1juLqs06W3DB8yG68g5L6oUuG9d3/3j/VD4M0Vk6GISgC3+ykhKKdGaVsnbve9TU2RmV+1M8G5Rno9zJYGroKkRQ86APFtveBOVcCAeCWNqK0yrq7CGZqrdRDYVboctZnO8e0FbmfzGAGJLgW5DvvDuIMyy5gAzNPmgBrWvf8UsZU1GVnWkM7mugqGIBDBzafXpkxcbhU03ezMR60yeC5WJvIHKZa6/SibCmHDowNM74Ym/5UZPeyjTbEn2E X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Change the creation of mkey to be performed in multiple steps: data allocation, DMA setup and actual call to HW to create that mkey. 
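Condensed from the mlx5vf_alloc_qp_recv_resources() hunk further below (error unwinding simplified), the receive-buffer side of the flow becomes:

	recv_buf->mkey_in = alloc_mkey_in(npages, pdn);	/* 1. data allocation */
	if (!recv_buf->mkey_in)
		return -ENOMEM;

	/* 2. DMA setup: map the pages and write their addresses into the MTT */
	err = register_dma_pages(mdev, npages, recv_buf->page_list,
				 recv_buf->mkey_in);
	if (err)
		goto err_register_dma;

	/* 3. only now issue the CREATE_MKEY command to the HW */
	err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in,
			  &recv_buf->mkey);
	if (err)
		goto err_create_mkey;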
In this new flow, the whole input to MKEY command is saved to eliminate the need to keep array of pointers for DMA addresses for receive list and in the future patches for send list too. In addition to memory size reduce and elimination of unnecessary data movements to set MKEY input, the code is prepared for future reuse. Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 149 +++++++++++++++++++++--------------- drivers/vfio/pci/mlx5/cmd.h | 3 +- 2 files changed, 88 insertions(+), 64 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 45104e47b7b2..44762980fcb9 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -300,39 +300,21 @@ static int mlx5vf_cmd_get_vhca_id(struct mlx5_core_dev *mdev, u16 function_id, return ret; } -static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, - struct mlx5_vhca_data_buffer *buf, - struct mlx5_vhca_recv_buf *recv_buf, - u32 *mkey) +static u32 *alloc_mkey_in(u32 npages, u32 pdn) { - size_t npages = buf ? buf->npages : recv_buf->npages; - int err = 0, inlen; - __be64 *mtt; + int inlen; void *mkc; u32 *in; inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(*mtt) * round_up(npages, 2); + sizeof(__be64) * round_up(npages, 2); - in = kvzalloc(inlen, GFP_KERNEL); + in = kvzalloc(inlen, GFP_KERNEL_ACCOUNT); if (!in) - return -ENOMEM; + return NULL; MLX5_SET(create_mkey_in, in, translations_octword_actual_size, DIV_ROUND_UP(npages, 2)); - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); - - if (buf) { - struct sg_dma_page_iter dma_iter; - - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); - } else { - int i; - - for (i = 0; i < npages; i++) - *mtt++ = cpu_to_be64(recv_buf->dma_addrs[i]); - } mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT); @@ -346,9 +328,30 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn, MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2)); MLX5_SET64(mkc, mkc, len, npages * PAGE_SIZE); - err = mlx5_core_create_mkey(mdev, mkey, in, inlen); - kvfree(in); - return err; + + return in; +} + +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, + struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, + u32 *mkey) +{ + __be64 *mtt; + int inlen; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + + if (buf) { + struct sg_dma_page_iter dma_iter; + + for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) + *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + } + + inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -368,13 +371,22 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (ret) return ret; - ret = _create_mkey(mdev, buf->migf->pdn, buf, NULL, &buf->mkey); - if (ret) + buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); + if (!buf->mkey_in) { + ret = -ENOMEM; goto err; + } + + ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + if (ret) + goto err_create_mkey; buf->dmaed = true; return 0; + +err_create_mkey: + kvfree(buf->mkey_in); err: dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; @@ -390,6 +402,7 @@ void mlx5vf_free_data_buffer(struct 
mlx5_vhca_data_buffer *buf) if (buf->dmaed) { mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + kvfree(buf->mkey_in); dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, buf->dma_dir, 0); } @@ -1286,46 +1299,45 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, return -ENOMEM; } -static int register_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in) { - int i, j; + dma_addr_t addr; + __be64 *mtt; + int i; - recv_buf->dma_addrs = kvcalloc(recv_buf->npages, - sizeof(*recv_buf->dma_addrs), - GFP_KERNEL_ACCOUNT); - if (!recv_buf->dma_addrs) - return -ENOMEM; + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - for (i = 0; i < recv_buf->npages; i++) { - recv_buf->dma_addrs[i] = dma_map_page(mdev->device, - recv_buf->page_list[i], - 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, recv_buf->dma_addrs[i])) - goto error; + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unmap_single(mdev->device, addr, PAGE_SIZE, + DMA_FROM_DEVICE); } - return 0; - -error: - for (j = 0; j < i; j++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[j], - PAGE_SIZE, DMA_FROM_DEVICE); - - kvfree(recv_buf->dma_addrs); - return -ENOMEM; } -static void unregister_dma_recv_pages(struct mlx5_core_dev *mdev, - struct mlx5_vhca_recv_buf *recv_buf) +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in) { + dma_addr_t addr; + __be64 *mtt; int i; - for (i = 0; i < recv_buf->npages; i++) - dma_unmap_single(mdev->device, recv_buf->dma_addrs[i], - PAGE_SIZE, DMA_FROM_DEVICE); + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + + for (i = 0; i < npages; i++) { + addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, + DMA_FROM_DEVICE); + if (dma_mapping_error(mdev->device, addr)) + goto error; + + *mtt++ = cpu_to_be64(addr); + } + + return 0; - kvfree(recv_buf->dma_addrs); +error: + unregister_dma_pages(mdev, i, mkey_in); + return -ENOMEM; } static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, @@ -1334,7 +1346,8 @@ static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + kvfree(recv_buf->mkey_in); free_recv_pages(&qp->recv_buf); } @@ -1350,18 +1363,28 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, if (err < 0) return err; - err = register_dma_recv_pages(mdev, recv_buf); - if (err) + recv_buf->mkey_in = alloc_mkey_in(npages, pdn); + if (!recv_buf->mkey_in) { + err = -ENOMEM; goto end; + } + + err = register_dma_pages(mdev, npages, recv_buf->page_list, + recv_buf->mkey_in); + if (err) + goto err_register_dma; - err = _create_mkey(mdev, pdn, NULL, recv_buf, &recv_buf->mkey); + err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, + &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_recv_pages(mdev, recv_buf); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in); +err_register_dma: + kvfree(recv_buf->mkey_in); end: free_recv_pages(recv_buf); return err; diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 887267ebbd8a..83728c0669e7 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ 
b/drivers/vfio/pci/mlx5/cmd.h @@ -62,6 +62,7 @@ struct mlx5_vhca_data_buffer { u64 length; u32 npages; u32 mkey; + u32 *mkey_in; enum dma_data_direction dma_dir; u8 dmaed:1; u8 stop_copy_chunk_num; @@ -137,8 +138,8 @@ struct mlx5_vhca_cq { struct mlx5_vhca_recv_buf { u32 npages; struct page **page_list; - dma_addr_t *dma_addrs; u32 next_rq_offset; + u32 *mkey_in; u32 mkey; }; From patchwork Tue Mar 5 10:15:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582013 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E87AC54E4A for ; Tue, 5 Mar 2024 10:16:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A8861940015; Tue, 5 Mar 2024 05:16:30 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9E72C94000F; Tue, 5 Mar 2024 05:16:30 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8118D940015; Tue, 5 Mar 2024 05:16:30 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 6C4AE94000F for ; Tue, 5 Mar 2024 05:16:30 -0500 (EST) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 4EA5C80C82 for ; Tue, 5 Mar 2024 10:16:30 +0000 (UTC) X-FDA: 81862580940.09.6A537C5 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf06.hostedemail.com (Postfix) with ESMTP id B494318001C for ; Tue, 5 Mar 2024 10:16:28 +0000 (UTC) Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="IkRk/fb0"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633788; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=CTugSmgLwJgDA9ipI5mnVckl7fOXK4aUNX016/UL5M0=; b=liDHxBjoSP58V3HFHhvRqeZHyp4uQ7nRn/9Lds1pfjA7DESCXU4Roe1oj9OISrxVY2eqKl KrPL8VUFsJyGT+QLexAXEFT/ci0hebYtiNVhbNlx8CuhXMYfjnbXxFmOoRf6O5975tL8GP 5S3CHcwfZ66qPpTOQoC97bQSa45CO0Y= ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="IkRk/fb0"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633788; a=rsa-sha256; cv=none; b=jf3IWbIkesWVs8QDisKk76uY/RmF+E+CwPWsWJm0pog6FvkGUtqz0SyVG9dxE9q2WlwJiD yKJJYYnnxxYpCj7NqqA05TOCVoTndBWGBPm13PLMS7/BPPytSd6ikV55qGcY9XOVqfyvHJ 3+RcuxwJNyCiixntsAHr5mgiw8wWS0A= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 0149A614A9; Tue, 5 Mar 2024 10:16:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 03941C43394; Tue, 5 Mar 2024 10:16:27 
+0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633787; bh=WBfM8ZvY1JfgPv87VMT5XwFqFK1pOKYXqtZNVa4/Kmg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IkRk/fb09cNpaq1ao/YIjVQRUd7549ASV5vUrE0loQ3I40W0g4lHqMa2RnlPOC+SL 7O47LLrDs24FE154PCpzD9gxTzqQT73wARI+zqa5pCclRYy+mfUAb3VXN5E67rLGEq 3FJ29W4o1dvJg1oBPDj9nWxWk+GDyXcjXC0TwrN0jPCHmoAcRi2VmERQ0yOXxC4y76 SQTUfIFSI98tR0AmIVTshw3fDy/384A966eKFUmESBoo+LVwjOUtuC+rAvhyqDK1EM ycf5A5dNTFKwMwlUDywAbqNnUHwzdQvu0BxoWJduZUbSgbO83ABzuBc7P68A9/gK4g OYyahJ7w7P7Rw== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 13/16] vfio/mlx5: Explicitly store page list Date: Tue, 5 Mar 2024 12:15:23 +0200 Message-ID: <1d0ca7408af6e5f0bb09baffd021bc72287e5ed8.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: B494318001C X-Stat-Signature: hamtkg88mcnak6tctk95tfkdtaf74wrt X-HE-Tag: 1709633788-262000 X-HE-Meta: U2FsdGVkX19Sy7FRU4qyoOb1VLCHtHO7CYulu6QYaXL6dCs/wTR1j60iVsYfBtQXDCqzpgMoJXB/lizZbJo1gPgmViDYzKHEwI3pfGrV2KKSon5W7ag2tE0rWy6zG7ZlfXphKrMLx6PjT+ZUe2DznHm5OalIJizb6zwTKRYCqY34hruPfUGO9BOMF6HujsC67lLOz3pgDQN3GkFXIoZT7AOVKDOJslm6MhZbOyQCE79pv1tCL2e/oU0NhlvJk4v8YV1qaTcxL+vI9+4S9iGoTUsSgmKzCIRD506kkNmaNquSAeuLgKAGSdzGK+6K23/0UOQD6FyCB5NOgViAWsrTOK5TrAQ3FxGbcoXfDYuwRN3fzEJe4MZiDW8YzF919KjJ6czD8JQJkZk5/LxuIREhHZINH6nUfHc+wU/2ZlGNrzZefHv30oYW1u/7i+A2ufqp4sSwvbDpScp2lwpH31mjbWgICejajpYZkfa5+42WDiFFKxjVU/McGj2ZgceQexYGrMv77Hs2jkwo7lS90fOLW358Ntxvo4gWKqcoJKiRZm8rsE5+dIyYRtzLqJW6/vQZ0gqYFc0Z0JoAIJCO+xxacVuO4/4o1tIkRQeJ9xG5kyFGfVMnwQwbz9djeOtJ8AkSpMrsj9c9BMUER64VLlXBjCOQT0C5ZR0266lfa/IUz193BaNMT/MQ9/hs2zcGIAv2IQ2oJ1ds08RkzHtZ8iqG7CxxdixEWZb0hiLv853MXJAK0ElCoyMqpXOHUtBnXjoF9iDyOM1fGXy+YFnGLoTCN5frub/wk9brF1vQN5FFmMSC6cp19pceRkqWBtUHHx/+oVAMKXFvoIRwj/FSFwydjmjXMdgBGRPat+tg9eFGW0KHLKh4oe6hEtnkB9E2QU99z6xAURzI7MOQbl7iZj8k+mZ2bbUhY0oZccI4U5UjCteDolC3wTNE72hdik4qlLDNy8ouanIfAE4KQBtHHF3 8fgbg4Qh IfbpXYFtCtAkW1ES03CceEVRySK3VbrEwA+eb+3bosGBzb2UwN15opIVBAng6szPwRHzRIr3tRNlHkNAXCRO3Nn7Kn632u1f8W8KSwEeD8PvciXGzzS+YnuYSBzrIOLmTTa3zS4Vx4OfetWWAKrvMagNOxGCeEmiVHievWGQOmI0Nm4mVi+DXn8AUWAHQCRgxUsn9ISvmUCDZVw6c5RzJABIJq1lMIDFYAxYQK6nAOBUh6cn/Sr38fQ4tzw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky As a preparation to removal scatter-gather table and unifying receive and send list, explicitly store page list. 
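A condensed excerpt of the mlx5vf_add_migration_pages() hunk below shows the new allocation pattern; note that at this point the scatter-gather table is still filled in as before and is only removed by a later patch in the series:

	/* grow the per-buffer page array instead of using a temporary list */
	old_size = buf->npages * sizeof(*buf->page_list);
	new_size = old_size + to_fill * sizeof(*buf->page_list);
	page_list = kvrealloc(buf->page_list, old_size, new_size,
			      GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!page_list)
		return -ENOMEM;

	buf->page_list = page_list;

	/* bulk-allocate straight into the tail of the stored page list */
	filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill,
					buf->page_list + buf->npages);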
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 1 + drivers/vfio/pci/mlx5/cmd.h | 1 + drivers/vfio/pci/mlx5/main.c | 35 +++++++++++++++++------------------ 3 files changed, 19 insertions(+), 18 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 44762980fcb9..5e2103042d9b 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -411,6 +411,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf) for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) __free_page(sg_page_iter_page(&sg_iter)); sg_free_append_table(&buf->table); + kvfree(buf->page_list); kfree(buf); } diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 83728c0669e7..815fcb54494d 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -57,6 +57,7 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { + struct page **page_list; struct sg_append_table table; loff_t start_pos; u64 length; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index b11b1c27d284..7ffe24693a55 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -69,44 +69,43 @@ int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, unsigned int npages) { unsigned int to_alloc = npages; + size_t old_size, new_size; struct page **page_list; unsigned long filled; unsigned int to_fill; int ret; - to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); - page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT); + to_fill = min_t(unsigned int, npages, + PAGE_SIZE / sizeof(*buf->page_list)); + old_size = buf->npages * sizeof(*buf->page_list); + new_size = old_size + to_fill * sizeof(*buf->page_list); + page_list = kvrealloc(buf->page_list, old_size, new_size, + GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page_list) return -ENOMEM; + buf->page_list = page_list; + do { filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, - page_list); - if (!filled) { - ret = -ENOMEM; - goto err; - } + buf->page_list + buf->npages); + if (!filled) + return -ENOMEM; + to_alloc -= filled; ret = sg_alloc_append_table_from_pages( - &buf->table, page_list, filled, 0, + &buf->table, buf->page_list + buf->npages, filled, 0, filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, GFP_KERNEL_ACCOUNT); - if (ret) - goto err; + return ret; + buf->npages += filled; - /* clean input for another bulk allocation */ - memset(page_list, 0, filled * sizeof(*page_list)); to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*page_list)); + PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); - kvfree(page_list); return 0; - -err: - kvfree(page_list); - return ret; } static void mlx5vf_disable_fd(struct mlx5_vf_migration_file *migf) From patchwork Tue Mar 5 10:15:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582014 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55C20C54E4A for ; Tue, 5 Mar 2024 10:16:50 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4E5C280007; Tue, 5 Mar 2024 05:16:35 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 4917594000F; Tue, 5 Mar 2024 05:16:35 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix, from userid 63042) id 2717180007; Tue, 5 Mar 2024 05:16:35 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 0611194000F for ; Tue, 5 Mar 2024 05:16:35 -0500 (EST) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id D80BF140DD1 for ; Tue, 5 Mar 2024 10:16:34 +0000 (UTC) X-FDA: 81862581108.04.2B48661 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf23.hostedemail.com (Postfix) with ESMTP id 1ED9D140009 for ; Tue, 5 Mar 2024 10:16:32 +0000 (UTC) Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=ME6sTK28; spf=pass (imf23.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1709633793; a=rsa-sha256; cv=none; b=pobPxxfuO8dfEd6YJyW7iXvskVW36VSybdLUEV4kvDYD8UI5PiMDbnToAH4WPsVlP773vF LRFnwiusub1sAcLhRGSLCnoH1XvG7ohYVGnXClnoVLpA7bXqo11DcGvfxmPfFna1tDR7QH X0SJf4Nw1DkddUpYQg5lMgcmXnkFVAk= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=ME6sTK28; spf=pass (imf23.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org; dmarc=pass (policy=none) header.from=kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633793; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=kP6IjCrCtQDnpe3Kioc5v3kxzu1trppQiw50mdHvFFY=; b=QlIYOqQQ4LGO1IgHvTF4G0J8mTywKomu+QA9o9hqSf1vob7RqyaIxOnitthOec6ZpHnV9Y xt97fDhBE11MuOgtvhTNQMgsYAbK/5liym5KviighW5EWnG1DYoUR/dABbv8cvnweXHOfd 80bMUCC4Nrn3hD55tdnGMahBoSdwHRs= Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 4F997614AD; Tue, 5 Mar 2024 10:16:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3995DC433C7; Tue, 5 Mar 2024 10:16:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1709633792; bh=4IocgUHLj9eV065W5P3anYxI4nhTW2naxln23AtMtis=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ME6sTK286uwoXoz+IpcdDC5iiCveCz60c3QC6e2Qoo6RT1vn1QMiK79xbl2Z0iJEQ lF+WIuufy46vobE2X5BkxN4zFsuEFfqCY7nNLgqtqaGEzXhdvxV79SceqYL9IBcm1m TvpaFHgFXCBfyApy9K0LVj6Gp960SddtGc3bQwTBI9C4yeb0N89NQ9nYnZIjPV3lud HWI1R9l5KpNKb/eOURGHv33QqpDtRBoDV6zIP7rZo4UIRb0nFH+r1oENLRXAryoJBH l4+GOIMyR7QDazy9mrVtajxBlb0ezE2lg1MG7ckSp0k65UFJTLImRNCeq2jbANxI8Z 7hpa9NbeUdkHw== From: Leon Romanovsky To: Christoph Hellwig , Robin Murphy , Marek Szyprowski , Joerg Roedel , Will Deacon , Jason Gunthorpe , Chaitanya Kulkarni Cc: Leon Romanovsky , Jonathan Corbet , Jens Axboe , Keith Busch , Sagi Grimberg , Yishai Hadas , Shameer Kolothum , Kevin Tian , Alex Williamson , =?utf-8?b?SsOpcsO0bWUgR2xp?= =?utf-8?b?c3Nl?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-rdma@vger.kernel.org, iommu@lists.linux.dev, linux-nvme@lists.infradead.org, kvm@vger.kernel.org, 
linux-mm@kvack.org, Bart Van Assche , Damien Le Moal , Amir Goldstein , "josef@toxicpanda.com" , "Martin K. Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Zhu Yanjun Subject: [RFC 14/16] vfio/mlx5: Convert vfio to use DMA link API Date: Tue, 5 Mar 2024 12:15:24 +0200 Message-ID: X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 1ED9D140009 X-Stat-Signature: oiu4ynq1ucsa58tt5xk9a796xpowyobi X-Rspam-User: X-HE-Tag: 1709633792-617770 X-HE-Meta: U2FsdGVkX1/X/ptoG5YLwFfY7LhLilNNH6Pr8gnmdAke5PS7dKhyXBAbuAiz/NZbh628yRapH4bd0lnSTvxoyufhyRzOEc+IEj2U6Vy1bTn2gXTUC+GC5PKc3SDM9yWdX5ZYABD0YrMyHPdTnKCT7lL+Z233LL/ZkBFl7Tt95pLXsxHyjHwtk623MqxO4LcIfJVrrXbRKEyJTm64DYk5/Hu9N60jwfJ4EGDDIl49nvORyjy1vW/GiNBES255pKw4rX+tXZy5nS54SOwlsoZEpNovr7DuVTJSmK0HGhvPKXApFPhjw+TzTziLjCAWsajSBrBSzTz+Tq5U7kcavtmkLj1I0UK2LsbFdGOBIVqashKBFiB1R0NJASy0o/ar/sH+n0g2W7VzTULJVL2HE+wfEZ6ZnBopdiKxIhhCQAuhppgLi5DooXEvFzQ0TG5HSZOVUyya7PZ/7BgjF8HJu6p0EFXwB6uNCCo3kTaeRX4W+oBCwgmb0uzepsy9Z6kHodAfqodqF2U0JHKRjgAA4BGMgQZLHEnbSjFrR0fhgIlPsNTJ4Eu0aocTjmtvaS6Pu76lz8ooqaSJ9ewI5g3Exa85VsxDNJmAXP8pZo4A2BvoFUMCQJX2HQEvtuQt4IgnogZGlNlZ4l9hRoPVuBJxztxDqFCfBD4G3NKTAHkZVPPY8VpteHOpFgmgMTyLjc4M+MQ5v9IbEPmQXeWVXJ//iu1hikf0UjA/ps5LJvg7KkQ2T5B2wMCdAhac502hg/8ccRtRyBtXC7Qcebyk6FO6rGiGoLsos1en2URU8GNtP76jTcL2CK9LUuf7UjQotnnz7NkVdmUxm49D14OETxG0QCwcn7J7ZdkJcl9OBXGVNJT9wiGXprGLs8Z8aoBhOCwJ+MMnG0stnVI5ikHJRXp5ipMMWg2wK/HKJvZxE5qurgRJcE9v2ivpjSnOpVfWVG4rwd2PNkrp+omUaSJia5Fjhr3 CJdOmWu+ R6rSuzl6/NMXjmj+Vli1YaKJ2cXErybKKlLOVvC5ZXBi1MC6C8Bbn/b4doW66INnE92Gio2VnCGFE7Ep6bMTbDmAaSTgzrGqC2U5GQvUEzGUpY4KZZUzQDemLeR2+hhni3hrWepboFDjsqhv6cTUJfu8CqRQ5qioACSCh7Yd5XWGIn+KsUmxSnDoN+HKG2axRxmXrvyK8xTA4oDj07bLC7SzB+sjNCYWzekeNNMBdddPsd5bP3xJfr5jUlQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Leon Romanovsky Remove intermediate scatter-gather table as it is not needed if DMA link API is used. This conversion reduces drastically the memory used to manage that table. 
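The core of the conversion is the new register_dma_pages() loop; a condensed excerpt of the cmd.c hunk below, with the error unwinding through dma_unlink_range() elided:

	/*
	 * One IOVA range covers the whole buffer; every page is linked at
	 * its offset and the resulting DMA address is written directly
	 * into the MKEY MTT, so no scatter-gather table is needed to
	 * remember the mappings.
	 */
	iova->dev = mdev->device;
	iova->size = npages * PAGE_SIZE;
	err = dma_alloc_iova(iova);
	if (err)
		return err;

	mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt);

	for (i = 0; i < npages; i++) {
		addr = dma_link_range(page_list[i], 0, iova, i * PAGE_SIZE);
		if (dma_mapping_error(mdev->device, addr))
			goto error;

		*mtt++ = cpu_to_be64(addr);
	}

Together with the explicit page list from the previous patch this also lets mlx5vf_get_migration_page() collapse to a direct buf->page_list[offset / PAGE_SIZE] lookup, as seen in the main.c hunk.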
Signed-off-by: Leon Romanovsky --- drivers/vfio/pci/mlx5/cmd.c | 177 ++++++++++++++++------------------- drivers/vfio/pci/mlx5/cmd.h | 8 +- drivers/vfio/pci/mlx5/main.c | 50 ++-------- 3 files changed, 91 insertions(+), 144 deletions(-) diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c index 5e2103042d9b..cfae03f7b7da 100644 --- a/drivers/vfio/pci/mlx5/cmd.c +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -332,26 +332,60 @@ static u32 *alloc_mkey_in(u32 npages, u32 pdn) return in; } -static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, - struct mlx5_vhca_data_buffer *buf, u32 *mkey_in, +static int create_mkey(struct mlx5_core_dev *mdev, u32 npages, u32 *mkey_in, u32 *mkey) { + int inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(__be64) * round_up(npages, 2); + + return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +} + +static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + u32 *mkey_in, struct dma_iova_attrs *iova) +{ + dma_addr_t addr; __be64 *mtt; - int inlen; + int i; mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - if (buf) { - struct sg_dma_page_iter dma_iter; + for (i = npages - 1; i >= 0; i--) { + addr = be64_to_cpu(mtt[i]); + dma_unlink_range(iova, addr); + } + dma_free_iova(iova); +} + +static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, + struct page **page_list, u32 *mkey_in, + struct dma_iova_attrs *iova) +{ + dma_addr_t addr; + __be64 *mtt; + int i, err; + + iova->dev = mdev->device; + iova->size = npages * PAGE_SIZE; + err = dma_alloc_iova(iova); + if (err) + return err; + + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); + + for (i = 0; i < npages; i++) { + addr = dma_link_range(page_list[i], 0, iova, i * PAGE_SIZE); + if (dma_mapping_error(mdev->device, addr)) + goto error; - for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0) - *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + *mtt++ = cpu_to_be64(addr); } - inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + - sizeof(__be64) * round_up(npages, 2); + return 0; - return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen); +error: + unregister_dma_pages(mdev, i, mkey_in, iova); + return -ENOMEM; } static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) @@ -367,17 +401,16 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) if (buf->dmaed || !buf->npages) return -EINVAL; - ret = dma_map_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); - if (ret) - return ret; - buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn); - if (!buf->mkey_in) { - ret = -ENOMEM; - goto err; - } + if (!buf->mkey_in) + return -ENOMEM; + + ret = register_dma_pages(mdev, buf->npages, buf->page_list, + buf->mkey_in, &buf->iova); + if (ret) + goto err_register_dma; - ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey); + ret = create_mkey(mdev, buf->npages, buf->mkey_in, &buf->mkey); if (ret) goto err_create_mkey; @@ -386,32 +419,39 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf) return 0; err_create_mkey: + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, &buf->iova); +err_register_dma: kvfree(buf->mkey_in); -err: - dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0); return ret; } +static void free_page_list(u32 npages, struct page **page_list) +{ + int i; + + /* Undo alloc_pages_bulk_array() */ + for (i = npages - 1; i >= 0; i--) + __free_page(page_list[i]); + + kvfree(page_list); +} + void mlx5vf_free_data_buffer(struct 
mlx5_vhca_data_buffer *buf) { - struct mlx5_vf_migration_file *migf = buf->migf; - struct sg_page_iter sg_iter; + struct mlx5vf_pci_core_device *mvdev = buf->migf->mvdev; + struct mlx5_core_dev *mdev = mvdev->mdev; - lockdep_assert_held(&migf->mvdev->state_mutex); - WARN_ON(migf->mvdev->mdev_detach); + lockdep_assert_held(&mvdev->state_mutex); + WARN_ON(mvdev->mdev_detach); if (buf->dmaed) { - mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey); + mlx5_core_destroy_mkey(mdev, buf->mkey); + unregister_dma_pages(mdev, buf->npages, buf->mkey_in, + &buf->iova); kvfree(buf->mkey_in); - dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt, - buf->dma_dir, 0); } - /* Undo alloc_pages_bulk_array() */ - for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0) - __free_page(sg_page_iter_page(&sg_iter)); - sg_free_append_table(&buf->table); - kvfree(buf->page_list); + free_page_list(buf->npages, buf->page_list); kfree(buf); } @@ -426,7 +466,7 @@ mlx5vf_alloc_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, if (!buf) return ERR_PTR(-ENOMEM); - buf->dma_dir = dma_dir; + buf->iova.dir = dma_dir; buf->migf = migf; if (npages) { ret = mlx5vf_add_migration_pages(buf, npages); @@ -469,7 +509,7 @@ mlx5vf_get_data_buffer(struct mlx5_vf_migration_file *migf, u32 npages, spin_lock_irq(&migf->list_lock); list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) { - if (buf->dma_dir == dma_dir) { + if (buf->iova.dir == dma_dir) { list_del_init(&buf->buf_elm); if (buf->npages >= npages) { spin_unlock_irq(&migf->list_lock); @@ -1253,17 +1293,6 @@ static void mlx5vf_destroy_qp(struct mlx5_core_dev *mdev, kfree(qp); } -static void free_recv_pages(struct mlx5_vhca_recv_buf *recv_buf) -{ - int i; - - /* Undo alloc_pages_bulk_array() */ - for (i = 0; i < recv_buf->npages; i++) - __free_page(recv_buf->page_list[i]); - - kvfree(recv_buf->page_list); -} - static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, unsigned int npages) { @@ -1300,56 +1329,16 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf, return -ENOMEM; } -static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - - for (i = npages - 1; i >= 0; i--) { - addr = be64_to_cpu(mtt[i]); - dma_unmap_single(mdev->device, addr, PAGE_SIZE, - DMA_FROM_DEVICE); - } -} - -static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages, - struct page **page_list, u32 *mkey_in) -{ - dma_addr_t addr; - __be64 *mtt; - int i; - - mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt); - - for (i = 0; i < npages; i++) { - addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE, - DMA_FROM_DEVICE); - if (dma_mapping_error(mdev->device, addr)) - goto error; - - *mtt++ = cpu_to_be64(addr); - } - - return 0; - -error: - unregister_dma_pages(mdev, i, mkey_in); - return -ENOMEM; -} - static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev, struct mlx5_vhca_qp *qp) { struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf; mlx5_core_destroy_mkey(mdev, recv_buf->mkey); - unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in, + &recv_buf->iova); kvfree(recv_buf->mkey_in); - free_recv_pages(&qp->recv_buf); + free_page_list(recv_buf->npages, recv_buf->page_list); } static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, @@ -1370,24 +1359,24 @@ static int 
mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev, goto end; } + recv_buf->iova.dir = DMA_FROM_DEVICE; err = register_dma_pages(mdev, npages, recv_buf->page_list, - recv_buf->mkey_in); + recv_buf->mkey_in, &recv_buf->iova); if (err) goto err_register_dma; - err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in, - &recv_buf->mkey); + err = create_mkey(mdev, npages, recv_buf->mkey_in, &recv_buf->mkey); if (err) goto err_create_mkey; return 0; err_create_mkey: - unregister_dma_pages(mdev, npages, recv_buf->mkey_in); + unregister_dma_pages(mdev, npages, recv_buf->mkey_in, &recv_buf->iova); err_register_dma: kvfree(recv_buf->mkey_in); end: - free_recv_pages(recv_buf); + free_page_list(npages, recv_buf->page_list); return err; } diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 815fcb54494d..3a046166d9f2 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -57,22 +57,17 @@ struct mlx5_vf_migration_header { }; struct mlx5_vhca_data_buffer { + struct dma_iova_attrs iova; struct page **page_list; - struct sg_append_table table; loff_t start_pos; u64 length; u32 npages; u32 mkey; u32 *mkey_in; - enum dma_data_direction dma_dir; u8 dmaed:1; u8 stop_copy_chunk_num; struct list_head buf_elm; struct mlx5_vf_migration_file *migf; - /* Optimize mlx5vf_get_migration_page() for sequential access */ - struct scatterlist *last_offset_sg; - unsigned int sg_last_entry; - unsigned long last_offset; }; struct mlx5vf_async_data { @@ -137,6 +132,7 @@ struct mlx5_vhca_cq { }; struct mlx5_vhca_recv_buf { + struct dma_iova_attrs iova; u32 npages; struct page **page_list; u32 next_rq_offset; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index 7ffe24693a55..668c28bc429c 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -34,35 +34,10 @@ static struct mlx5vf_pci_core_device *mlx5vf_drvdata(struct pci_dev *pdev) core_device); } -struct page * -mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, - unsigned long offset) +struct page *mlx5vf_get_migration_page(struct mlx5_vhca_data_buffer *buf, + unsigned long offset) { - unsigned long cur_offset = 0; - struct scatterlist *sg; - unsigned int i; - - /* All accesses are sequential */ - if (offset < buf->last_offset || !buf->last_offset_sg) { - buf->last_offset = 0; - buf->last_offset_sg = buf->table.sgt.sgl; - buf->sg_last_entry = 0; - } - - cur_offset = buf->last_offset; - - for_each_sg(buf->last_offset_sg, sg, - buf->table.sgt.orig_nents - buf->sg_last_entry, i) { - if (offset < sg->length + cur_offset) { - buf->last_offset_sg = sg; - buf->sg_last_entry += i; - buf->last_offset = cur_offset; - return nth_page(sg_page(sg), - (offset - cur_offset) / PAGE_SIZE); - } - cur_offset += sg->length; - } - return NULL; + return buf->page_list[offset / PAGE_SIZE]; } int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, @@ -72,13 +47,9 @@ int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, size_t old_size, new_size; struct page **page_list; unsigned long filled; - unsigned int to_fill; - int ret; - to_fill = min_t(unsigned int, npages, - PAGE_SIZE / sizeof(*buf->page_list)); old_size = buf->npages * sizeof(*buf->page_list); - new_size = old_size + to_fill * sizeof(*buf->page_list); + new_size = old_size + to_alloc * sizeof(*buf->page_list); page_list = kvrealloc(buf->page_list, old_size, new_size, GFP_KERNEL_ACCOUNT | __GFP_ZERO); if (!page_list) @@ -87,22 +58,13 @@ int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf, 
buf->page_list = page_list; do { - filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill, + filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_alloc, buf->page_list + buf->npages); if (!filled) return -ENOMEM; to_alloc -= filled; - ret = sg_alloc_append_table_from_pages( - &buf->table, buf->page_list + buf->npages, filled, 0, - filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, - GFP_KERNEL_ACCOUNT); - if (ret) - return ret; - buf->npages += filled; - to_fill = min_t(unsigned int, to_alloc, - PAGE_SIZE / sizeof(*buf->page_list)); } while (to_alloc > 0); return 0; @@ -164,7 +126,7 @@ static void mlx5vf_buf_read_done(struct mlx5_vhca_data_buffer *vhca_buf) struct mlx5_vf_migration_file *migf = vhca_buf->migf; if (vhca_buf->stop_copy_chunk_num) { - bool is_header = vhca_buf->dma_dir == DMA_NONE; + bool is_header = vhca_buf->iova.dir == DMA_NONE; u8 chunk_num = vhca_buf->stop_copy_chunk_num; size_t next_required_umem_size = 0; From patchwork Tue Mar 5 10:15:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582015 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59392C54798 for ; Tue, 5 Mar 2024 10:16:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0F35794000F; Tue, 5 Mar 2024 05:16:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id F1E2E8000A; Tue, 5 Mar 2024 05:16:38 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D686A80009; Tue, 5 Mar 2024 05:16:38 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id BEB3494000F for ; Tue, 5 Mar 2024 05:16:38 -0500 (EST) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 9878B140D30 for ; Tue, 5 Mar 2024 10:16:38 +0000 (UTC) X-FDA: 81862581276.01.3FDABC5 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by imf05.hostedemail.com (Postfix) with ESMTP id E8904100007 for ; Tue, 5 Mar 2024 10:16:36 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=WNEwTJ5c; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf05.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=leon@kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1709633797; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=qz+nVanLpL+n9z9iwjMPgdzjVfAYTSIqRoaTaCVsNO8=; b=41w3n8Z3e95pGjGtI4QDB27z+4cLH7JLUCXKoKCmvbEuWMz9kmny7Indka+mt7dQetmnS9 EPF58VVk5PYCcjkgX+LGzNfXHtdp79u2EbvsOUqFosXkT9DIF0+s1q/ZBomfgCW1awIL5J TodN02tem39H2T06XLTwq8gJCgAeOgA= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=WNEwTJ5c; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf05.hostedemail.com: domain of leon@kernel.org designates 139.178.84.217 as permitted sender) 
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Petersen" , "daniel@iogearbox.net" , Dan Williams , "jack@suse.com" , Leon Romanovsky , Zhu Yanjun Subject: [RFC 15/16] block: add dma_link_range() based API Date: Tue, 5 Mar 2024 12:15:25 +0200 Message-ID: <1e52aa392b9c434f55203c9d630dd06fcdb75c32.1709631413.git.leon@kernel.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Queue-Id: E8904100007 X-Rspam-User: X-Rspamd-Server: rspam05 X-Stat-Signature: o6w4zrfq8dqpasobm4mhmand1rpouuxb X-HE-Tag: 1709633796-491331 X-HE-Meta: U2FsdGVkX19yLgJY7TdbLbAjtKFtZ+sXwf2dw3kaU446WmFv1FipX7/efsx8UV912hAm7fbSCZjxplxk/Eaot0bVkWubxhXoRJmCUh+lt8smsPTTnTHHoSvsc7EqVd4Aq42VfLWY4zT+L2NnxU9BCfidvmjM/d59QM01T7OYkPccdhryacHfugQJNPFgvPIkA97fg6P9d8dhnclEbvgMqH6wkdztKr3oEdTE4JYx/CMTJKCbQt7TcEbGa2q/HFDxCo/85zJQ8FU5U1GspodDzhtFGRez57bZA44M/VMvS0E2QWRdONlnwD4CjqraaufWKnFpaEY3d1AEwRgAeZ8OEHWZyeHAMIu6Yv1Ue1/+1NflCoxNz1TiQ8SK6wAOVEPmA7MFydylFnX+aK6di1+qItC+QYqp3PLvpODgq/AKrNlV4m/v+G9dZbnmPDJaJUUZieubcyQYr7ewM2As8TYjr7vooebNxEazsfgh9xBPEZG+5Fjfhx1HsmltIW4u4Gou4CoJKsI8mDXG/uNV7OCaK28/1mW1pAkT32chu4+/5D4KdvrW/pXA0VGAgoxQ6tCiLVozXcvJRroX0oQq3q7VuuMm3bm424PiNSGtxYvB6bQ6LBuB5Tj15l8U37LmfXymjzwd4I9jhEpu3UmZW+VxrpKSAhe+N1T1aJwALP2D6GljdZBFLxumOWBySTNJvn9w8AkWOc7j3YRpx5oH6EKwSf1TuGMX3HyRyUUXob6tHChxaJ8O3Ht9z2PMHqqiqtJIfe82p6J8Iea8gEaQhe93YWib5/4qBWCvt+Zc25/pT0v1FI8McTIwiyaZ9Nmgpa+sbkN4YSCM+6e/vryq4b02XztC7JO3oG5iodjmy6ovPnozHCKgVcLnbXQWg9a2T8xE9y6YQRZlaySSgz6+Ju4G8qo3leYP9a+1xp9/rH/rZK7G+7EShcL5utbZUI2BhEs4+mHP+lGzrpo6OYx2075 nWhXV0qX BilyZEbg24Cxrt2o0COXWnI0s/lGZDvDQagDABm4TOUIFO6dQyQsfLlMaR0sH91POs4qI/r8b/RdKx6sN4TTETEsQODqYwnfbg189Ef9+A3P1kb4njqcycB/ED3kfiGJ6MzP0LkWIaxq43/kYoF7gOk5PkDWKaNO7ASNJ07gre8ICx/pX30G20o+BiH31P2DpQRy+PNumv8Cc+vWz82vUCsE01O8etIiTTyirYjnom1tl14G+eeVqcQnXlX8xE2kXZv/UnPq+OPl9OZpHrP5mPFgmuINdVkZyIdcl X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Chaitanya Kulkarni Add two helper functions that are needed to calculate the total DMA length of the request blk_rq_get_dma_length() and to create DMA mapping blk_rq_dma_map(). blk_rq_get_dma_length() is used to get the total length of the request, when driver is allocating IOVA space for this request with the call to dma_alloc_iova(). This length is then initialized to the iova->size and passed to allocate iova call chain :- dma_map_ops->allov_iova() iommu_dma_alloc_iova() alloc_iova_fast() iova_rcache_get() OR alloc_iova() blk_rq_dma_map() iterates through bvec list and creates DMA mapping for each page using iova parameter with the help of dma_link_range(). Note that @iova is allocated & pre-initialized using dma_alloc_iova() by the caller. After creating a mapping for each page, call into the callback function @cb provided by the drive with a mapped DMA address for this page, offset into the iova space (needed at the time of unlink), length of the mapped page, and page number that is mapped in this request. Driver is responsible for using this DMA address to complete the mapping of underlying protocol-specific data structures, such as NVMe PRPs or NVMe SGLs. This callback approach allows us to iterate bvec list only once to create bvec to DMA mapping and use that DMA address in driver to build the protocol-specific data structure, essentially mapping one bvec page at a time to DMA address and using that DMA address to create underlying protocol-specific data structures. Finally, returning the number of linked count. 
Signed-off-by: Chaitanya Kulkarni
Signed-off-by: Leon Romanovsky
---
 block/blk-merge.c      | 156 +++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |   9 +++
 2 files changed, 165 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2d470cf2173e..63effc8ac1db 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -583,6 +583,162 @@ int __blk_rq_map_sg(struct request_queue *q, struct request *rq,
 }
 EXPORT_SYMBOL(__blk_rq_map_sg);
 
+static dma_addr_t blk_dma_link_page(struct page *page, unsigned int page_offset,
+				    struct dma_iova_attrs *iova,
+				    dma_addr_t dma_offset)
+{
+	dma_addr_t dma_addr;
+	int ret;
+
+	dma_addr = dma_link_range(page, page_offset, iova, dma_offset);
+	ret = dma_mapping_error(iova->dev, dma_addr);
+	if (ret) {
+		pr_err("dma_mapping_err %d dma_addr 0x%llx dma_offset %llu\n",
+		       ret, dma_addr, dma_offset);
+		/* better way ? */
+		dma_addr = 0;
+	}
+	return dma_addr;
+}
+
+/**
+ * blk_rq_dma_map: block layer request to DMA mapping helper.
+ *
+ * @req     : [in] request to be mapped
+ * @cb      : [in] callback to be called for each mapped bvec into the
+ *            underlying driver.
+ * @cb_data : [in] callback data to be passed, private to the underlying
+ *            driver.
+ * @iova    : [in] iova to be used to create DMA mapping for this request's
+ *            bvecs.
+ * Description:
+ * Iterates through the bvec list and creates a DMA mapping for each bvec page
+ * using @iova with dma_link_range(). Note that @iova needs to be allocated and
+ * pre-initialized using dma_alloc_iova() by the caller. After creating
+ * a mapping for each page, call into the callback function @cb provided by the
+ * driver with the mapped DMA address for this bvec, the offset into the iova
+ * space, the length of the mapped page, and the bvec number mapped in this
+ * request. The driver is responsible for using this DMA address to complete
+ * the mapping of the underlying protocol-specific data structure, such as
+ * NVMe PRPs or NVMe SGLs. This callback approach allows us to iterate the
+ * bvec list only once to create the bvec-to-DMA mapping and use that DMA
+ * address in the driver to build the protocol-specific data structure,
+ * essentially mapping one bvec page at a time to a DMA address and using that
+ * DMA address to create the underlying protocol-specific data structure.
+ *
+ * The caller needs to ensure @iova is initialized and allocated using
+ * dma_alloc_iova().
+ */ +int blk_rq_dma_map(struct request *req, driver_map_cb cb, void *cb_data, + struct dma_iova_attrs *iova) +{ + dma_addr_t curr_dma_offset = 0; + dma_addr_t prev_dma_addr = 0; + dma_addr_t dma_addr; + size_t prev_dma_len = 0; + struct req_iterator iter; + struct bio_vec bv; + int linked_cnt = 0; + + rq_for_each_bvec(bv, req, iter) { + if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) { + curr_dma_offset = prev_dma_addr + prev_dma_len; + + dma_addr = blk_dma_link_page(bv.bv_page, bv.bv_offset, + iova, curr_dma_offset); + if (!dma_addr) + break; + + cb(cb_data, linked_cnt, dma_addr, curr_dma_offset, + bv.bv_len); + + prev_dma_len = bv.bv_len; + prev_dma_addr = dma_addr; + linked_cnt++; + } else { + unsigned nbytes = bv.bv_len; + unsigned total = 0; + unsigned offset, len; + + while (nbytes > 0) { + struct page *page = bv.bv_page; + + offset = bv.bv_offset + total; + len = min(get_max_segment_size(&req->q->limits, + page, offset), + nbytes); + + page += (offset >> PAGE_SHIFT); + offset &= ~PAGE_MASK; + + curr_dma_offset = prev_dma_addr + prev_dma_len; + + dma_addr = blk_dma_link_page(page, offset, + iova, + curr_dma_offset); + if (!dma_addr) + break; + + cb(cb_data, linked_cnt, dma_addr, + curr_dma_offset, len); + + total += len; + nbytes -= len; + + prev_dma_len = len; + prev_dma_addr = dma_addr; + linked_cnt++; + } + } + } + return linked_cnt; +} +EXPORT_SYMBOL_GPL(blk_rq_dma_map); + +/* + * Calculate total DMA length needed to satisfy this request. + */ +size_t blk_rq_get_dma_length(struct request *rq) +{ + struct request_queue *q = rq->q; + struct bio *bio = rq->bio; + unsigned int offset, len; + struct bvec_iter iter; + size_t dma_length = 0; + struct bio_vec bvec; + + if (rq->rq_flags & RQF_SPECIAL_PAYLOAD) + return rq->special_vec.bv_len; + + if (!rq->bio) + return 0; + + for_each_bio(bio) { + bio_for_each_bvec(bvec, bio, iter) { + unsigned int nbytes = bvec.bv_len; + unsigned int total = 0; + + if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE) { + dma_length += bvec.bv_len; + continue; + } + + while (nbytes > 0) { + offset = bvec.bv_offset + total; + len = min(get_max_segment_size(&q->limits, + bvec.bv_page, + offset), nbytes); + total += len; + nbytes -= len; + dma_length += len; + } + } + } + + return dma_length; +} +EXPORT_SYMBOL(blk_rq_get_dma_length); + static inline unsigned int blk_rq_get_max_sectors(struct request *rq, sector_t offset) { diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 7a8150a5f051..80b9c7f2c3a0 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -8,6 +8,7 @@ #include #include #include +#include struct blk_mq_tags; struct blk_flush_queue; @@ -1144,7 +1145,15 @@ static inline int blk_rq_map_sg(struct request_queue *q, struct request *rq, return __blk_rq_map_sg(q, rq, sglist, &last_sg); } + +typedef void (*driver_map_cb)(void *cb_data, u32 cnt, dma_addr_t dma_addr, + dma_addr_t offset, u32 len); + +int blk_rq_dma_map(struct request *req, driver_map_cb cb, void *cb_data, + struct dma_iova_attrs *iova); + void blk_dump_rq_flags(struct request *, char *); +size_t blk_rq_get_dma_length(struct request *rq); #ifdef CONFIG_BLK_DEV_ZONED static inline unsigned int blk_rq_zone_no(struct request *rq) From patchwork Tue Mar 5 10:15:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13582016 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org 
From: Leon Romanovsky
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel, Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Subject: [RFC 16/16] nvme-pci: use blk_rq_dma_map() for NVMe SGL
Date: Tue, 5 Mar 2024 12:15:26 +0200
Message-ID: <016fc02cbfa9be3c156a6f74df38def1e09c08f1.1709631413.git.leon@kernel.org>

From: Chaitanya Kulkarni

Update the nvme_iod structure to hold the iova, the list of DMA linked
addresses, and the total linked count. The first is needed in the request
submission path to create the request-to-DMA mapping; the last two are needed
in the request completion path to remove the DMA mapping.

In nvme_map_data(), initialize the iova with the device, the direction, and
the iova DMA length obtained with blk_rq_get_dma_length(), then allocate the
iova using dma_alloc_iova(). In nvme_pci_setup_sgls(), call the newly added
blk_rq_dma_map() to create the request-to-DMA mapping and provide the callback
function nvme_pci_sgl_map(), which initializes the NVMe SGL DMA addresses.
Finally, in nvme_unmap_data(), unlink the DMA addresses and free the iova.

Full disclosure:
----------------
This is an RFC to demonstrate that the newly added DMA APIs can be used to
map/unmap bvecs without the use of an sg list, hence I've modified the PCI
code to only handle SGLs for now. Once we have some agreement on the structure
of the new DMA API, I'll add support for PRPs along with all the optimizations
that I've removed from the code for this RFC for NVMe SGLs and PRPs.
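For reference, a condensed sketch of the submission/completion pairing
described above. The nvme_iod fields and the dma_alloc_iova(), dma_unlink_range()
and dma_free_iova() helpers follow the diff below; nvme_sketch_map() and
nvme_sketch_unmap() are invented names, and error handling, PRP support, and
the SGL pool allocation are omitted.

/* Submission side: size and allocate the IOVA range for the whole request.
 * The actual linking is then done by blk_rq_dma_map() in nvme_pci_setup_sgls().
 */
static blk_status_t nvme_sketch_map(struct nvme_dev *dev, struct request *req,
				    struct nvme_iod *iod)
{
	iod->iova.dev = dev->dev;
	iod->iova.dir = rq_dma_dir(req);
	iod->iova.attrs = DMA_ATTR_NO_WARN;
	iod->iova.size = blk_rq_get_dma_length(req);
	if (!iod->iova.size || dma_alloc_iova(&iod->iova))
		return BLK_STS_RESOURCE;
	return BLK_STS_OK;
}

/* Completion side: undo every dma_link_range() recorded by nvme_pci_sgl_map(),
 * then release the IOVA range.
 */
static void nvme_sketch_unmap(struct nvme_iod *iod)
{
	u16 i;

	for (i = 0; i < iod->nr_dma_link_address; i++)
		dma_unlink_range(&iod->iova, iod->dma_link_address[i]);
	dma_free_iova(&iod->iova);
}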
I was able to run fio verification job successfully :- $ fio fio/verify.fio --ioengine=io_uring --filename=/dev/nvme0n1 --loops=10 write-and-verify: (g=0): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=io_uring, iodepth=16 fio-3.36 Starting 1 process Jobs: 1 (f=1): [V(1)][81.6%][r=12.2MiB/s][r=1559 IOPS][eta 03m:00s] write-and-verify: (groupid=0, jobs=1): err= 0: pid=4435: Mon Mar 4 20:54:48 2024 read: IOPS=2789, BW=21.8MiB/s (22.9MB/s)(6473MiB/297008msec) slat (usec): min=4, max=5124, avg=356.51, stdev=604.30 clat (nsec): min=1593, max=23376k, avg=5377076.99, stdev=2039189.93 lat (usec): min=493, max=23407, avg=5733.58, stdev=2103.22 clat percentiles (usec): | 1.00th=[ 1172], 5.00th=[ 2114], 10.00th=[ 2835], 20.00th=[ 3654], | 30.00th=[ 4228], 40.00th=[ 4752], 50.00th=[ 5276], 60.00th=[ 5800], | 70.00th=[ 6325], 80.00th=[ 7046], 90.00th=[ 8094], 95.00th=[ 8979], | 99.00th=[10421], 99.50th=[11076], 99.90th=[12780], 99.95th=[14222], | 99.99th=[16909] write: IOPS=2608, BW=20.4MiB/s (21.4MB/s)(10.0GiB/502571msec); 0 zone resets slat (usec): min=4, max=5787, avg=382.68, stdev=649.01 clat (nsec): min=521, max=23650k, avg=5751363.17, stdev=2676065.35 lat (usec): min=95, max=23674, avg=6134.04, stdev=2813.48 clat percentiles (usec): | 1.00th=[ 709], 5.00th=[ 1270], 10.00th=[ 1958], 20.00th=[ 3261], | 30.00th=[ 4228], 40.00th=[ 5014], 50.00th=[ 5800], 60.00th=[ 6521], | 70.00th=[ 7373], 80.00th=[ 8225], 90.00th=[ 9241], 95.00th=[ 9896], | 99.00th=[11469], 99.50th=[11863], 99.90th=[13960], 99.95th=[15270], | 99.99th=[17695] bw ( KiB/s): min= 1440, max=132496, per=99.28%, avg=20715.88, stdev=13123.13, samples=1013 iops : min= 180, max=16562, avg=2589.34, stdev=1640.39, samples=1013 lat (nsec) : 750=0.01% lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=0.01%, 500=0.07% lat (usec) : 750=0.79%, 1000=1.22% lat (msec) : 2=5.94%, 4=18.87%, 10=69.53%, 20=3.58%, 50=0.01% cpu : usr=1.01%, sys=98.95%, ctx=1591, majf=0, minf=2286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=828524,1310720,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=6473MiB (6787MB), run=297008-297008msec WRITE: bw=20.4MiB/s (21.4MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=10.0GiB (10.7GB), run=502571-502571msec Disk stats (read/write): nvme0n1: ios=829189/1310720, sectors=13293416/20971520, merge=0/0, ticks=836561/1340351, in_queue=2176913, util=99.30% Signed-off-by: Chaitanya Kulkarni Signed-off-by: Leon Romanovsky --- drivers/nvme/host/pci.c | 220 +++++++++------------------------------- 1 file changed, 49 insertions(+), 171 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index e6267a6aa380..140939228409 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -236,7 +236,9 @@ struct nvme_iod { unsigned int dma_len; /* length of single DMA segment mapping */ dma_addr_t first_dma; dma_addr_t meta_dma; - struct sg_table sgt; + struct dma_iova_attrs iova; + dma_addr_t dma_link_address[128]; + u16 nr_dma_link_address; union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS]; }; @@ -521,25 +523,10 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req, return true; } -static void nvme_free_prps(struct 
nvme_dev *dev, struct request *req) -{ - const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1; - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - dma_addr_t dma_addr = iod->first_dma; - int i; - - for (i = 0; i < iod->nr_allocations; i++) { - __le64 *prp_list = iod->list[i].prp_list; - dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]); - - dma_pool_free(dev->prp_page_pool, prp_list, dma_addr); - dma_addr = next_dma_addr; - } -} - static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) { struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + u16 i; if (iod->dma_len) { dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len, @@ -547,9 +534,8 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) return; } - WARN_ON_ONCE(!iod->sgt.nents); - - dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); + for (i = 0; i < iod->nr_dma_link_address; i++) + dma_unlink_range(&iod->iova, iod->dma_link_address[i]); if (iod->nr_allocations == 0) dma_pool_free(dev->prp_small_pool, iod->list[0].sg_list, @@ -557,120 +543,15 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) else if (iod->nr_allocations == 1) dma_pool_free(dev->prp_page_pool, iod->list[0].sg_list, iod->first_dma); - else - nvme_free_prps(dev, req); - mempool_free(iod->sgt.sgl, dev->iod_mempool); -} - -static void nvme_print_sgl(struct scatterlist *sgl, int nents) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sgl, sg, nents, i) { - dma_addr_t phys = sg_phys(sg); - pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d " - "dma_address:%pad dma_length:%d\n", - i, &phys, sg->offset, sg->length, &sg_dma_address(sg), - sg_dma_len(sg)); - } -} - -static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, - struct request *req, struct nvme_rw_command *cmnd) -{ - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - struct dma_pool *pool; - int length = blk_rq_payload_bytes(req); - struct scatterlist *sg = iod->sgt.sgl; - int dma_len = sg_dma_len(sg); - u64 dma_addr = sg_dma_address(sg); - int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); - __le64 *prp_list; - dma_addr_t prp_dma; - int nprps, i; - - length -= (NVME_CTRL_PAGE_SIZE - offset); - if (length <= 0) { - iod->first_dma = 0; - goto done; - } - - dma_len -= (NVME_CTRL_PAGE_SIZE - offset); - if (dma_len) { - dma_addr += (NVME_CTRL_PAGE_SIZE - offset); - } else { - sg = sg_next(sg); - dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); - } - - if (length <= NVME_CTRL_PAGE_SIZE) { - iod->first_dma = dma_addr; - goto done; - } - - nprps = DIV_ROUND_UP(length, NVME_CTRL_PAGE_SIZE); - if (nprps <= (256 / 8)) { - pool = dev->prp_small_pool; - iod->nr_allocations = 0; - } else { - pool = dev->prp_page_pool; - iod->nr_allocations = 1; - } - - prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); - if (!prp_list) { - iod->nr_allocations = -1; - return BLK_STS_RESOURCE; - } - iod->list[0].prp_list = prp_list; - iod->first_dma = prp_dma; - i = 0; - for (;;) { - if (i == NVME_CTRL_PAGE_SIZE >> 3) { - __le64 *old_prp_list = prp_list; - prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); - if (!prp_list) - goto free_prps; - iod->list[iod->nr_allocations++].prp_list = prp_list; - prp_list[0] = old_prp_list[i - 1]; - old_prp_list[i - 1] = cpu_to_le64(prp_dma); - i = 1; - } - prp_list[i++] = cpu_to_le64(dma_addr); - dma_len -= NVME_CTRL_PAGE_SIZE; - dma_addr += NVME_CTRL_PAGE_SIZE; - length -= NVME_CTRL_PAGE_SIZE; - if (length <= 0) - break; - if (dma_len > 0) - continue; - if (unlikely(dma_len < 0)) - goto bad_sgl; - sg = 
sg_next(sg); - dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); - } -done: - cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sgt.sgl)); - cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); - return BLK_STS_OK; -free_prps: - nvme_free_prps(dev, req); - return BLK_STS_RESOURCE; -bad_sgl: - WARN(DO_ONCE(nvme_print_sgl, iod->sgt.sgl, iod->sgt.nents), - "Invalid SGL for payload:%d nents:%d\n", - blk_rq_payload_bytes(req), iod->sgt.nents); - return BLK_STS_IOERR; + dma_free_iova(&iod->iova); } static void nvme_pci_sgl_set_data(struct nvme_sgl_desc *sge, - struct scatterlist *sg) + dma_addr_t dma_addr, + unsigned int dma_len) { - sge->addr = cpu_to_le64(sg_dma_address(sg)); - sge->length = cpu_to_le32(sg_dma_len(sg)); + sge->addr = cpu_to_le64(dma_addr); + sge->length = cpu_to_le32(dma_len); sge->type = NVME_SGL_FMT_DATA_DESC << 4; } @@ -682,25 +563,37 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge, sge->type = NVME_SGL_FMT_LAST_SEG_DESC << 4; } +struct nvme_pci_sgl_map_data { + struct nvme_iod *iod; + struct nvme_sgl_desc *sgl_list; +}; + +static void nvme_pci_sgl_map(void *data, u32 cnt, dma_addr_t dma_addr, + dma_addr_t offset, u32 len) +{ + struct nvme_pci_sgl_map_data *d = data; + struct nvme_sgl_desc *sgl_list = d->sgl_list; + struct nvme_iod *iod = d->iod; + + nvme_pci_sgl_set_data(&sgl_list[cnt], dma_addr, len); + iod->dma_link_address[cnt] = offset; + iod->nr_dma_link_address++; +} + static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, struct request *req, struct nvme_rw_command *cmd) { + unsigned int entries = blk_rq_nr_phys_segments(req); struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - struct dma_pool *pool; struct nvme_sgl_desc *sg_list; - struct scatterlist *sg = iod->sgt.sgl; - unsigned int entries = iod->sgt.nents; + struct dma_pool *pool; dma_addr_t sgl_dma; - int i = 0; + int linked_count; + struct nvme_pci_sgl_map_data data; /* setting the transfer type as SGL */ cmd->flags = NVME_CMD_SGL_METABUF; - if (entries == 1) { - nvme_pci_sgl_set_data(&cmd->dptr.sgl, sg); - return BLK_STS_OK; - } - if (entries <= (256 / sizeof(struct nvme_sgl_desc))) { pool = dev->prp_small_pool; iod->nr_allocations = 0; @@ -718,11 +611,13 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, iod->list[0].sg_list = sg_list; iod->first_dma = sgl_dma; - nvme_pci_sgl_set_seg(&cmd->dptr.sgl, sgl_dma, entries); - do { - nvme_pci_sgl_set_data(&sg_list[i++], sg); - sg = sg_next(sg); - } while (--entries > 0); + data.iod = iod; + data.sgl_list = sg_list; + + linked_count = blk_rq_dma_map(req, nvme_pci_sgl_map, &data, + &iod->iova); + + nvme_pci_sgl_set_seg(&cmd->dptr.sgl, sgl_dma, linked_count); return BLK_STS_OK; } @@ -788,36 +683,20 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, &cmnd->rw, &bv); } } - - iod->dma_len = 0; - iod->sgt.sgl = mempool_alloc(dev->iod_mempool, GFP_ATOMIC); - if (!iod->sgt.sgl) + iod->iova.dev = dev->dev; + iod->iova.dir = rq_dma_dir(req); + iod->iova.attrs = DMA_ATTR_NO_WARN; + iod->iova.size = blk_rq_get_dma_length(req); + if (!iod->iova.size) return BLK_STS_RESOURCE; - sg_init_table(iod->sgt.sgl, blk_rq_nr_phys_segments(req)); - iod->sgt.orig_nents = blk_rq_map_sg(req->q, req, iod->sgt.sgl); - if (!iod->sgt.orig_nents) - goto out_free_sg; - rc = dma_map_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), - DMA_ATTR_NO_WARN); - if (rc) { - if (rc == -EREMOTEIO) - ret = BLK_STS_TARGET; - goto out_free_sg; - } + rc = dma_alloc_iova(&iod->iova); + if (rc) + return BLK_STS_RESOURCE; - if (nvme_pci_use_sgls(dev, req, 
iod->sgt.nents)) - ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw); - else - ret = nvme_pci_setup_prps(dev, req, &cmnd->rw); - if (ret != BLK_STS_OK) - goto out_unmap_sg; - return BLK_STS_OK; + iod->dma_len = 0; -out_unmap_sg: - dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0); -out_free_sg: - mempool_free(iod->sgt.sgl, dev->iod_mempool); + ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw); return ret; } @@ -841,7 +720,6 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req) iod->aborted = false; iod->nr_allocations = -1; - iod->sgt.nents = 0; ret = nvme_setup_cmd(req->q->queuedata, req); if (ret)