From patchwork Wed Jun 3 02:42:29 2020
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 11584911
From: Barry Song
Subject: [PATCH 1/3] dma-direct: provide the ability to reserve per-numa CMA
Date: Wed, 3 Jun 2020 14:42:29 +1200
Message-ID: <20200603024231.61748-2-song.bao.hua@hisilicon.com>
In-Reply-To: <20200603024231.61748-1-song.bao.hua@hisilicon.com>
References: <20200603024231.61748-1-song.bao.hua@hisilicon.com>

This is useful for two scenarios:

1. The ARM64 SMMU will get memory from the local NUMA node, so it can keep
its command queues and page tables local. Tests show this decreases dma_unmap
latency a lot. For example, without this patch, an SMMU on node2 gets memory
from node0 by calling dma_alloc_coherent() and typically has to wait more
than 560ns for the completion of CMD_SYNC in an empty command queue; with
this patch, it needs only 240ns.

2. When the IOMMU is set to passthrough, drivers will get memory from CMA,
and local memory means much lower latency.

Cc: Will Deacon
Cc: Robin Murphy
Signed-off-by: Barry Song
Reported-by: kbuild test robot
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 include/linux/dma-contiguous.h |  4 ++++
 kernel/dma/Kconfig             | 10 +++++++++
 kernel/dma/contiguous.c        | 41 +++++++++++++++++++++++++++++++++-
 3 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 03f8e98e3bcc..278a80a40456 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -79,6 +79,8 @@ static inline void dma_contiguous_set_default(struct cma *cma)
 
 void dma_contiguous_reserve(phys_addr_t addr_limit);
 
+void dma_pernuma_cma_reserve(void);
+
 int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed);
@@ -128,6 +130,8 @@ static inline void dma_contiguous_set_default(struct cma *cma) { }
 
 static inline void dma_contiguous_reserve(phys_addr_t limit) { }
 
+static inline void dma_pernuma_cma_reserve(void) { }
+
 static inline int dma_contiguous_reserve_area(phys_addr_t size,
 					      phys_addr_t base, phys_addr_t limit,
 					      struct cma **res_cma, bool fixed)
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 4c103a24e380..14f4a1fe4285 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -100,6 +100,16 @@ config DMA_CMA
 if DMA_CMA
 comment "Default contiguous memory area size:"
 
+config CMA_PERNUMA_SIZE_MBYTES
+	int "Size in Mega Bytes for per-numa CMA areas"
+	depends on NUMA
+	default 16 if ARM64
+	default 0
+	help
+	  Defines the size (in MiB) of the per-numa memory area for Contiguous
+	  Memory Allocator. Every numa node will get a separate CMA with this
+	  size. If the size of 0 is selected, per-numa CMA is disabled.
+
 config CMA_SIZE_MBYTES
 	int "Size in Mega Bytes"
 	depends on !CMA_SIZE_SEL_PERCENTAGE
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 8bc6f2d670f9..4b10d0ca0456 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -30,7 +30,14 @@
 #define CMA_SIZE_MBYTES 0
 #endif
 
+#ifdef CONFIG_CMA_PERNUMA_SIZE_MBYTES
+#define CMA_SIZE_PERNUMA_MBYTES CONFIG_CMA_PERNUMA_SIZE_MBYTES
+#else
+#define CMA_SIZE_PERNUMA_MBYTES 0
+#endif
+
 struct cma *dma_contiguous_default_area;
+struct cma *dma_contiguous_pernuma_area[MAX_NUMNODES];
 
 /*
  * Default global CMA area size can be defined in kernel's .config.
@@ -44,6 +51,8 @@ struct cma *dma_contiguous_default_area;
  */
 static const phys_addr_t size_bytes __initconst =
 	(phys_addr_t)CMA_SIZE_MBYTES * SZ_1M;
+static const phys_addr_t pernuma_size_bytes __initconst =
+	(phys_addr_t)CMA_SIZE_PERNUMA_MBYTES * SZ_1M;
 static phys_addr_t size_cmdline __initdata = -1;
 static phys_addr_t base_cmdline __initdata;
 static phys_addr_t limit_cmdline __initdata;
@@ -96,6 +105,31 @@ static inline __maybe_unused phys_addr_t cma_early_percent_memory(void)
 
 #endif
 
+void __init dma_pernuma_cma_reserve(void)
+{
+	int nid;
+
+	if (!pernuma_size_bytes || nr_online_nodes <= 1)
+		return;
+
+	for_each_node_state(nid, N_ONLINE) {
+		int ret;
+
+		ret = cma_declare_contiguous_nid(0, pernuma_size_bytes, 0, 0,
+						 0, false, "pernuma",
+						 &dma_contiguous_pernuma_area[nid],
+						 nid);
+		if (ret) {
+			pr_warn("%s: reservation failed: err %d, node %d", __func__,
+				ret, nid);
+			continue;
+		}
+
+		pr_debug("%s: reserved %llu MiB on node %d\n", __func__,
+			 pernuma_size_bytes / SZ_1M, nid);
+	}
+}
+
 /**
  * dma_contiguous_reserve() - reserve area(s) for contiguous memory handling
  * @limit: End address of the reserved memory (optional, 0 for any).
@@ -223,7 +257,8 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
  *
  * This function allocates contiguous memory buffer for specified device. It
  * first tries to use device specific contiguous memory area if available or
- * the default global one, then tries a fallback allocation of normal pages.
+ * the per-numa ones and default global one, then tries a fallback allocation
+ * of normal pages. per-numa memory areas don't support address limit
  *
  * Note that it byapss one-page size of allocations from the global area as
  * the addresses within one page are always contiguous, so there is no need
@@ -234,9 +269,13 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 	size_t count = size >> PAGE_SHIFT;
 	struct page *page = NULL;
 	struct cma *cma = NULL;
+	int nid = dev_to_node(dev);
 
 	if (dev && dev->cma_area)
 		cma = dev->cma_area;
+	else if ((nid != NUMA_NO_NODE) && dma_contiguous_pernuma_area[nid]
+		&& !(gfp & (GFP_DMA | GFP_DMA32)))
+		cma = dma_contiguous_pernuma_area[nid];
 	else if (count > 1)
 		cma = dma_contiguous_default_area;
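
For reviewers, here is a rough sketch of how the new interface is expected to
be used. This is not part of the patch: the bootmem_init() hook placement and
the example_probe() driver function are assumptions made purely for
illustration; the actual architecture hook-up is not shown in this patch.

/*
 * Illustrative only -- not part of this patch. The hook placement in
 * bootmem_init() and the example_probe() function are assumptions.
 */
#include <linux/dma-contiguous.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

/*
 * An architecture would call the new reservation helper during early boot,
 * after the NUMA topology is known but before the buddy allocator takes over
 * the memory, e.g. from its bootmem_init().
 */
void __init bootmem_init(void)
{
	/* ... NUMA initialisation and other early memory setup ... */

	/* reserve one CMA area per online NUMA node (added by this patch) */
	dma_pernuma_cma_reserve();

	/* ... */
}

/*
 * A coherent allocation from a driver is then served from
 * dma_contiguous_pernuma_area[dev_to_node(dev)] via dma_alloc_contiguous(),
 * provided the device has no dedicated CMA area and the allocation is not
 * restricted by GFP_DMA/GFP_DMA32.
 */
static int example_probe(struct device *dev)
{
	dma_addr_t dma;
	void *buf;

	buf = dma_alloc_coherent(dev, SZ_1M, &dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... use the node-local buffer ... */

	dma_free_coherent(dev, SZ_1M, buf, dma);
	return 0;
}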