From patchwork Mon Mar 1 07:44:23 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108869
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 01/14] powerpc/svm: stop using io_tlb_start
Date: Mon, 1 Mar 2021 08:44:23 +0100
Message-Id: <20210301074436.919889-2-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Use the local variable that is passed to swiotlb_init_with_tbl for
freeing the memory in the failure case to isolate the code a little
better from swiotlb internals.

Signed-off-by: Christoph Hellwig
---
 arch/powerpc/platforms/pseries/svm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a93e..1d829e257996fb 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,9 +55,9 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;
 
-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
-				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+
+	memblock_free_early(__pa(vstart),
+			    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
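The point of the change above is that the failure path frees through the
caller's own vstart pointer instead of reaching into swiotlb's internal
io_tlb_start. A minimal userspace sketch of that pattern (init_with_tbl
and the sizes are illustrative stand-ins, not the kernel API):

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for swiotlb_init_with_tbl(); returns 0 on success. */
static int init_with_tbl(char *tlb, size_t nslabs)
{
	return tlb && nslabs ? 0 : -1;
}

int main(void)
{
	size_t nslabs = 1024;
	char *vstart = malloc(nslabs << 11);	/* IO_TLB_SHIFT is 11 */

	if (vstart && !init_with_tbl(vstart, nslabs))
		return 0;
	/*
	 * Failure path: free the caller's own allocation (vstart)
	 * rather than peeking at the allocator's internal state.
	 */
	free(vstart);
	fprintf(stderr, "cannot allocate bounce buffer\n");
	return 1;
}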
From patchwork Mon Mar 1 07:44:24 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108873
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 02/14] swiotlb: remove the alloc_size parameter to
 swiotlb_tbl_unmap_single
Date: Mon, 1 Mar 2021 08:44:24 +0100
Message-Id: <20210301074436.919889-3-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Now that swiotlb remembers the allocation size there is no need to pass
it back to swiotlb_tbl_unmap_single.
Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wilk
---
 drivers/iommu/dma-iommu.c | 11 +++-------
 drivers/xen/swiotlb-xen.c |  4 ++--
 include/linux/swiotlb.h   |  1 -
 kernel/dma/direct.h       |  2 +-
 kernel/dma/swiotlb.c      | 45 ++++++++++++++++++++-------------------
 5 files changed, 29 insertions(+), 34 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9ab6ee22c11088..da2bd8f0885e6e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -493,8 +493,6 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 		unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
-	struct iommu_dma_cookie *cookie = domain->iova_cookie;
-	struct iova_domain *iovad = &cookie->iovad;
 	phys_addr_t phys;
 
 	phys = iommu_iova_to_phys(domain, dma_addr);
@@ -504,8 +502,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 	__iommu_dma_unmap(dev, dma_addr, size);
 
 	if (unlikely(is_swiotlb_buffer(phys)))
-		swiotlb_tbl_unmap_single(dev, phys, size,
-				iova_align(iovad, size), dir, attrs);
+		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
 static bool dev_is_untrusted(struct device *dev)
@@ -580,10 +577,8 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys))
-		swiotlb_tbl_unmap_single(dev, phys, org_size,
-				aligned_size, dir, attrs);
-
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99cb..d47f1b311caac0 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -406,7 +406,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 * Ensure that the address returned is DMA'ble
 	 */
 	if (unlikely(!dma_capable(dev, dev_addr, size, true))) {
-		swiotlb_tbl_unmap_single(dev, map, size, size, dir,
+		swiotlb_tbl_unmap_single(dev, map, size, dir,
 				attrs | DMA_ATTR_SKIP_CPU_SYNC);
 		return DMA_MAPPING_ERROR;
 	}
@@ -445,7 +445,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
-		swiotlb_tbl_unmap_single(hwdev, paddr, size, size, dir, attrs);
+		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
 }
 
 static void
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 5857a937c63722..59f421d041ed9e 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -57,7 +57,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     phys_addr_t tlb_addr,
 				     size_t mapping_size,
-				     size_t alloc_size,
 				     enum dma_data_direction dir,
 				     unsigned long attrs);
 
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index b9861557873768..e1bf721591c0cf 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -114,6 +114,6 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
 	if (unlikely(is_swiotlb_buffer(phys)))
-		swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs);
+		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c10e855a03bc16..03aa614565e417 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -102,7 +102,7 @@ static phys_addr_t *io_tlb_orig_addr;
 /*
  * The mapped buffer's size should be validated during a sync operation.
  */
-static size_t *io_tlb_orig_size;
+static size_t *io_tlb_alloc_size;
 
 /*
  * Protect the above data structures in the map and unmap calls
@@ -253,15 +253,15 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 		      __func__, alloc_size, PAGE_SIZE);
 
 	alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(size_t));
-	io_tlb_orig_size = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!io_tlb_orig_size)
+	io_tlb_alloc_size = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!io_tlb_alloc_size)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
 	for (i = 0; i < io_tlb_nslabs; i++) {
 		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
-		io_tlb_orig_size[i] = 0;
+		io_tlb_alloc_size[i] = 0;
 	}
 	io_tlb_index = 0;
 	no_iotlb_memory = false;
@@ -393,18 +393,18 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!io_tlb_orig_addr)
 		goto cleanup4;
 
-	io_tlb_orig_size = (size_t *)
+	io_tlb_alloc_size = (size_t *)
 		__get_free_pages(GFP_KERNEL,
 				 get_order(io_tlb_nslabs * sizeof(size_t)));
-	if (!io_tlb_orig_size)
+	if (!io_tlb_alloc_size)
 		goto cleanup5;
 
 	for (i = 0; i < io_tlb_nslabs; i++) {
 		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
-		io_tlb_orig_size[i] = 0;
+		io_tlb_alloc_size[i] = 0;
 	}
 	io_tlb_index = 0;
 	no_iotlb_memory = false;
@@ -436,7 +436,7 @@ void __init swiotlb_exit(void)
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_orig_size,
+		free_pages((unsigned long)io_tlb_alloc_size,
 			   get_order(io_tlb_nslabs * sizeof(size_t)));
 		free_pages((unsigned long)io_tlb_orig_addr,
 			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
@@ -447,7 +447,7 @@ void __init swiotlb_exit(void)
 	} else {
 		memblock_free_late(__pa(io_tlb_orig_addr),
 				   PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(io_tlb_orig_size),
+		memblock_free_late(__pa(io_tlb_alloc_size),
 				   PAGE_ALIGN(io_tlb_nslabs * sizeof(size_t)));
 		memblock_free_late(__pa(io_tlb_list),
 				   PAGE_ALIGN(io_tlb_nslabs * sizeof(int)));
@@ -639,7 +639,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 */
 	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
 		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
-		io_tlb_orig_size[index+i] = alloc_size - (i << IO_TLB_SHIFT);
+		io_tlb_alloc_size[index+i] = alloc_size - (i << IO_TLB_SHIFT);
 	}
 	tlb_addr = slot_addr(io_tlb_start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
@@ -648,14 +648,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-static void validate_sync_size_and_truncate(struct device *hwdev, size_t orig_size, size_t *size)
+static void validate_sync_size_and_truncate(struct device *hwdev, size_t alloc_size, size_t *size)
 {
-	if (*size > orig_size) {
+	if (*size > alloc_size) {
 		/* Warn and truncate mapping_size */
 		dev_WARN_ONCE(hwdev, 1,
 			"Attempt for buffer overflow. Original size: %zu. Mapping size: %zu.\n",
-			orig_size, *size);
-		*size = orig_size;
+			alloc_size, *size);
+		*size = alloc_size;
 	}
 }
 
@@ -663,16 +663,17 @@ static void validate_sync_size_and_truncate(struct device *hwdev, size_t orig_si
  * tlb_addr is the physical address of the bounce buffer to unmap.
  */
 void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, size_t alloc_size,
-			      enum dma_data_direction dir, unsigned long attrs)
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
 {
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
-	int i, count, nslots = nr_slots(alloc_size + offset);
 	int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	size_t alloc_size = io_tlb_alloc_size[index];
+	int i, count, nslots = nr_slots(alloc_size + offset);
 
-	validate_sync_size_and_truncate(hwdev, io_tlb_orig_size[index], &mapping_size);
+	validate_sync_size_and_truncate(hwdev, alloc_size, &mapping_size);
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -701,7 +702,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	for (i = index + nslots - 1; i >= index; i--) {
 		io_tlb_list[i] = ++count;
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
-		io_tlb_orig_size[i] = 0;
+		io_tlb_alloc_size[i] = 0;
 	}
 
 	/*
@@ -721,13 +722,13 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     enum dma_sync_target target)
 {
 	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	size_t orig_size = io_tlb_orig_size[index];
+	size_t alloc_size = io_tlb_alloc_size[index];
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
 
-	validate_sync_size_and_truncate(hwdev, orig_size, &size);
+	validate_sync_size_and_truncate(hwdev, alloc_size, &size);
 
 	switch (target) {
 	case SYNC_FOR_CPU:
@@ -770,7 +771,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	/* Ensure that the address returned is DMA'ble */
 	dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
-		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, size, dir,
+		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, dir,
 				attrs | DMA_ATTR_SKIP_CPU_SYNC);
 		dev_WARN_ONCE(dev, 1,
 			"swiotlb addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
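The bookkeeping that makes this API change possible can be modeled in
plain C: the map side records each slot's allocation size in a table,
so the unmap side looks it up instead of taking it as a parameter. All
names here (map_slot, unmap_slot, slot_alloc_size) are invented
stand-ins for io_tlb_alloc_size[] and friends, not the kernel
interfaces:

#include <stddef.h>
#include <stdio.h>

#define NSLOTS 128
#define SLOT_SHIFT 11			/* like IO_TLB_SHIFT */

static size_t slot_alloc_size[NSLOTS];	/* like io_tlb_alloc_size[] */

static void map_slot(int index, size_t alloc_size)
{
	slot_alloc_size[index] = alloc_size;
}

static void unmap_slot(int index)
{
	/* alloc_size is looked up, not passed in by the caller */
	size_t alloc_size = slot_alloc_size[index];
	size_t nslots = (alloc_size + (1 << SLOT_SHIFT) - 1) >> SLOT_SHIFT;

	printf("freeing %zu slot(s) for a %zu-byte allocation\n",
	       nslots, alloc_size);
	slot_alloc_size[index] = 0;
}

int main(void)
{
	map_slot(7, 6000);
	unmap_slot(7);		/* note: no size argument, as in the patch */
	return 0;
}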
From patchwork Mon Mar 1 07:44:25 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108871
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 03/14] swiotlb: move orig addr and size validation into
 swiotlb_bounce
Date: Mon, 1 Mar 2021 08:44:25 +0100
Message-Id: <20210301074436.919889-4-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Move the code to find and validate the original buffer address and size
from the callers into swiotlb_bounce.  This means a tiny bit of extra
work in the swiotlb_map path, but avoids code duplication and leads to
a better code structure.

Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wilk
---
 kernel/dma/swiotlb.c | 59 +++++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 36 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 03aa614565e417..a9063092f6f566 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -460,12 +460,25 @@ void __init swiotlb_exit(void)
 /*
  * Bounce: copy the swiotlb buffer from or back to the original dma location
  */
-static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
-			   size_t size, enum dma_data_direction dir)
+static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
+			   enum dma_data_direction dir)
 {
+	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	size_t alloc_size = io_tlb_alloc_size[index];
+	phys_addr_t orig_addr = io_tlb_orig_addr[index];
 	unsigned long pfn = PFN_DOWN(orig_addr);
 	unsigned char *vaddr = phys_to_virt(tlb_addr);
 
+	if (orig_addr == INVALID_PHYS_ADDR)
+		return;
+
+	if (size > alloc_size) {
+		dev_WARN_ONCE(dev, 1,
+			"Buffer overflow detected. Allocation size: %zu. Mapping size: %zu.\n",
+			alloc_size, size);
+		size = alloc_size;
+	}
+
 	if (PageHighMem(pfn_to_page(pfn))) {
 		/* The buffer does not have a mapping.  Map it in and copy */
 		unsigned int offset = orig_addr & ~PAGE_MASK;
@@ -644,21 +657,10 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	tlb_addr = slot_addr(io_tlb_start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
 	return tlb_addr;
 }
 
-static void validate_sync_size_and_truncate(struct device *hwdev, size_t alloc_size, size_t *size)
-{
-	if (*size > alloc_size) {
-		/* Warn and truncate mapping_size */
-		dev_WARN_ONCE(hwdev, 1,
-			"Attempt for buffer overflow. Original size: %zu. Mapping size: %zu.\n",
-			alloc_size, *size);
-		*size = alloc_size;
-	}
-}
-
 /*
  * tlb_addr is the physical address of the bounce buffer to unmap.
  */
@@ -669,19 +671,15 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
-	size_t alloc_size = io_tlb_alloc_size[index];
-	int i, count, nslots = nr_slots(alloc_size + offset);
-
-	validate_sync_size_and_truncate(hwdev, alloc_size, &mapping_size);
+	int nslots = nr_slots(io_tlb_alloc_size[index] + offset);
+	int count, i;
 
 	/*
 	 * First, sync the memory before unmapping the entry
 	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -721,27 +719,16 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	size_t alloc_size = io_tlb_alloc_size[index];
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
-
-	if (orig_addr == INVALID_PHYS_ADDR)
-		return;
-
-	validate_sync_size_and_truncate(hwdev, alloc_size, &size);
-
 	switch (target) {
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(orig_addr, tlb_addr,
-				       size, DMA_FROM_DEVICE);
+			swiotlb_bounce(hwdev, tlb_addr, size, DMA_FROM_DEVICE);
 		else
 			BUG_ON(dir != DMA_TO_DEVICE);
 		break;
 	case SYNC_FOR_DEVICE:
 		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(orig_addr, tlb_addr,
-				       size, DMA_TO_DEVICE);
+			swiotlb_bounce(hwdev, tlb_addr, size, DMA_TO_DEVICE);
 		else
 			BUG_ON(dir != DMA_FROM_DEVICE);
 		break;
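The consolidation can be sketched outside the kernel: bounce() itself
looks up the slot's original address and allocation size, bails out on
unused slots, and clamps oversized requests, so callers need no
duplicated validation. All names are illustrative, not the kernel API:

#include <stdint.h>
#include <stdio.h>

#define INVALID_ADDR UINT64_MAX

static uint64_t slot_orig_addr = INVALID_ADDR;
static size_t slot_alloc_size;

static void bounce(size_t size)
{
	if (slot_orig_addr == INVALID_ADDR)
		return;			/* slot not mapped: nothing to copy */
	if (size > slot_alloc_size) {
		fprintf(stderr, "overflow: alloc %zu, asked %zu\n",
			slot_alloc_size, size);
		size = slot_alloc_size;	/* clamp instead of overrunning */
	}
	printf("copying %zu bytes\n", size);
}

int main(void)
{
	bounce(64);		/* ignored: slot is unmapped */
	slot_orig_addr = 0x1000;
	slot_alloc_size = 128;
	bounce(256);		/* warns, then clamps to 128 */
	return 0;
}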
From patchwork Mon Mar 1 07:44:26 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108875
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 04/14] swiotlb: split swiotlb_tbl_sync_single
Date: Mon, 1 Mar 2021 08:44:26 +0100
Message-Id: <20210301074436.919889-5-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Split swiotlb_tbl_sync_single into two separate functions for the
to-device and to-cpu synchronization.
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  4 ++--
 include/linux/swiotlb.h   | 17 ++++------------
 kernel/dma/direct.c       |  8 ++++----
 kernel/dma/direct.h       |  4 ++--
 kernel/dma/swiotlb.c      | 34 +++++++++++++++------------------
 6 files changed, 33 insertions(+), 46 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index da2bd8f0885e6e..b57a0e3e21f6c7 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -749,7 +749,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu(phys, size, dir);
 
 	if (is_swiotlb_buffer(phys))
-		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU);
+		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
 static void iommu_dma_sync_single_for_device(struct device *dev,
@@ -762,7 +762,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (is_swiotlb_buffer(phys))
-		swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE);
+		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_device(phys, size, dir);
@@ -783,8 +783,8 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
 		if (is_swiotlb_buffer(sg_phys(sg)))
-			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
-						dir, SYNC_FOR_CPU);
+			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
+						    sg->length, dir);
 	}
 }
 
@@ -800,8 +800,8 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 
 	for_each_sg(sgl, sg, nelems, i) {
 		if (is_swiotlb_buffer(sg_phys(sg)))
-			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
-						dir, SYNC_FOR_DEVICE);
+			swiotlb_sync_single_for_device(dev, sg_phys(sg),
+						       sg->length, dir);
 
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index d47f1b311caac0..4e8a4e14942afd 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -462,7 +462,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	}
 
 	if (is_xen_swiotlb_buffer(dev, dma_addr))
-		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
+		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 }
 
 static void
@@ -472,7 +472,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	if (is_xen_swiotlb_buffer(dev, dma_addr))
-		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
+		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev)) {
 		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 59f421d041ed9e..0696bdc8072e97 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -42,14 +42,6 @@ extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 extern void __init swiotlb_update_mem_attributes(void);
 
-/*
- * Enumeration for sync targets
- */
-enum dma_sync_target {
-	SYNC_FOR_CPU = 0,
-	SYNC_FOR_DEVICE = 1,
-};
-
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs);
@@ -60,11 +52,10 @@ extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     enum dma_data_direction dir,
 				     unsigned long attrs);
 
-extern void swiotlb_tbl_sync_single(struct device *hwdev,
-				    phys_addr_t tlb_addr,
-				    size_t size, enum dma_data_direction dir,
-				    enum dma_sync_target target);
-
+void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir);
+void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir);
 
 dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 002268262c9ad8..f737e334705945 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -344,8 +344,8 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
 		if (unlikely(is_swiotlb_buffer(paddr)))
-			swiotlb_tbl_sync_single(dev, paddr, sg->length,
-					dir, SYNC_FOR_DEVICE);
+			swiotlb_sync_single_for_device(dev, paddr, sg->length,
+						       dir);
 
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_device(paddr, sg->length,
@@ -370,8 +370,8 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
 		if (unlikely(is_swiotlb_buffer(paddr)))
-			swiotlb_tbl_sync_single(dev, paddr, sg->length, dir,
-					SYNC_FOR_CPU);
+			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
+						    dir);
 
 		if (dir == DMA_FROM_DEVICE)
 			arch_dma_mark_clean(paddr, sg->length);
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index e1bf721591c0cf..50afc05b6f1dcb 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -57,7 +57,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
 	if (unlikely(is_swiotlb_buffer(paddr)))
-		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
+		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_device(paddr, size, dir);
@@ -74,7 +74,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 	}
 
 	if (unlikely(is_swiotlb_buffer(paddr)))
-		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
+		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
 		arch_dma_mark_clean(paddr, size);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a9063092f6f566..388d9be35b5795 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -715,26 +715,22 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 }
 
-void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
-			     size_t size, enum dma_data_direction dir,
-			     enum dma_sync_target target)
+void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir)
 {
-	switch (target) {
-	case SYNC_FOR_CPU:
-		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(hwdev, tlb_addr, size, DMA_FROM_DEVICE);
-		else
-			BUG_ON(dir != DMA_TO_DEVICE);
-		break;
-	case SYNC_FOR_DEVICE:
-		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(hwdev, tlb_addr, size, DMA_TO_DEVICE);
-		else
-			BUG_ON(dir != DMA_FROM_DEVICE);
-		break;
-	default:
-		BUG();
-	}
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+	else
+		BUG_ON(dir != DMA_FROM_DEVICE);
+}
+
+void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir)
+{
+	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
+	else
+		BUG_ON(dir != DMA_TO_DEVICE);
 }
 
 /*
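The shape of the split can be sketched in plain C: two directional
helpers replace a single function that switched on a
SYNC_FOR_CPU/SYNC_FOR_DEVICE target enum. Illustrative names only, not
the kernel API:

#include <assert.h>
#include <stdio.h>

enum dir { TO_DEVICE, FROM_DEVICE, BIDIRECTIONAL };

static void bounce(enum dir d) { printf("bounce %d\n", d); }

static void sync_for_device(enum dir d)
{
	if (d == TO_DEVICE || d == BIDIRECTIONAL)
		bounce(TO_DEVICE);
	else
		assert(d == FROM_DEVICE);	/* like BUG_ON() */
}

static void sync_for_cpu(enum dir d)
{
	if (d == FROM_DEVICE || d == BIDIRECTIONAL)
		bounce(FROM_DEVICE);
	else
		assert(d == TO_DEVICE);
}

int main(void)
{
	sync_for_device(TO_DEVICE);	/* bounces toward the device */
	sync_for_cpu(TO_DEVICE);	/* no bounce; direction asserted */
	return 0;
}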
From patchwork Mon Mar 1 07:44:27 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108877
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 05/14] xen-swiotlb: use is_swiotlb_buffer in
 is_xen_swiotlb_buffer
Date: Mon, 1 Mar 2021 08:44:27 +0100
Message-Id: <20210301074436.919889-6-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Use is_swiotlb_buffer to check if a physical address is a swiotlb
buffer.  This works because xen-swiotlb does use the same buffer as
the main swiotlb code, and xen_io_tlb_{start,end} are just the
addresses for it that went through phys_to_virt.

Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wilk
---
 drivers/xen/swiotlb-xen.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4e8a4e14942afd..bffb35993c9d5f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -111,10 +111,8 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * have the same virtual address as another address
 	 * in our domain.  Therefore _only_ check address within our domain.
 	 */
-	if (pfn_valid(PFN_DOWN(paddr))) {
-		return paddr >= virt_to_phys(xen_io_tlb_start) &&
-		       paddr < virt_to_phys(xen_io_tlb_end);
-	}
+	if (pfn_valid(PFN_DOWN(paddr)))
+		return is_swiotlb_buffer(paddr);
 	return 0;
 }
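The deduplication pattern is simple enough to model in a few lines of
plain C: the wrapper defers to the allocator's own range check instead
of keeping a private copy of the buffer bounds. Illustrative names
only, not the kernel API:

#include <stdbool.h>
#include <stdio.h>

static unsigned long tlb_start = 0x100000, tlb_end = 0x200000;

/* the generic check, like is_swiotlb_buffer() */
static bool is_buffer(unsigned long paddr)
{
	return paddr >= tlb_start && paddr < tlb_end;
}

/* the wrapper, like is_xen_swiotlb_buffer(): no cached bounds */
static bool is_xen_buffer(unsigned long paddr)
{
	return is_buffer(paddr);
}

int main(void)
{
	printf("%d %d\n", is_xen_buffer(0x150000), is_xen_buffer(0x250000));
	return 0;
}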
outflank-mailman (input) for mailman id 91271; Mon, 01 Mar 2021 07:46:02 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdFi-0002yc-EZ for xen-devel@lists.xenproject.org; Mon, 01 Mar 2021 07:46:02 +0000 Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 39a37b42-4e08-4a33-8ddc-30ce59f6d3b8; Mon, 01 Mar 2021 07:45:59 +0000 (UTC) Received: from [2001:4bb8:19b:e4b7:cdf9:733f:4874:8eb4] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux)) id 1lGdFQ-00FRCL-0I; Mon, 01 Mar 2021 07:45:46 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 39a37b42-4e08-4a33-8ddc-30ce59f6d3b8 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=OCILNlokJCxmLaXrgvg794b6b4atNJeGi+QbETqZtzY=; b=n2VQKtfAYQonjn8kWQvsXpZj00 EL3jl9UA5+wN2UQu9NxyAYOuJG12NX6uYa2qLyFZLMxl5fbXDop5kVsZqCdyIIEt29aRzatWQNkqA OB32u9XjFpGo6wSQwtWRgvklyPmyuzphmHFTWaISqvgVzYH/0+SL5NXnpAQR1qI8SO9VWURLQ/L4n I58qbE6gV6GeOQgJJDsqODfRzK69AeitDS/OZpoBARnsIv2KwoMN7N7xiB+e4O9LnfWUlocfnSzlr lnJBcw8Dvv2laYJlgev77JFgu8iEqDd6mVEcY0zRgaRY3lx3eyp9LomM4YYkhb+Z/UHcQnIhmcpoM JcA5BOhQ==; From: Christoph Hellwig To: Konrad Rzeszutek Wilk Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org Subject: [PATCH 06/14] xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported Date: Mon, 1 Mar 2021 08:44:28 +0100 Message-Id: <20210301074436.919889-7-hch@lst.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210301074436.919889-1-hch@lst.de> References: <20210301074436.919889-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Use the existing variable that holds the physical address for xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus helper. Signed-off-by: Christoph Hellwig Reviewed-by: Konrad Rzeszutek Wilk --- drivers/xen/swiotlb-xen.c | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index bffb35993c9d5f..e99f0614dcb979 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -46,7 +46,7 @@ * API. */ -static char *xen_io_tlb_start, *xen_io_tlb_end; +static char *xen_io_tlb_start; static unsigned long xen_io_tlb_nslabs; /* * Quick lookup value of the bus address of the IOTLB. 
@@ -82,11 +82,6 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev, return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr)); } -static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address) -{ - return xen_phys_to_dma(dev, virt_to_phys(address)); -} - static inline int range_straddles_page_boundary(phys_addr_t p, size_t size) { unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p); @@ -250,7 +245,6 @@ int __ref xen_swiotlb_init(int verbose, bool early) rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs); end: - xen_io_tlb_end = xen_io_tlb_start + bytes; if (!rc) swiotlb_set_max_segment(PAGE_SIZE); @@ -558,7 +552,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, static int xen_swiotlb_dma_supported(struct device *hwdev, u64 mask) { - return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask; + return xen_phys_to_dma(hwdev, io_tlb_end - 1) <= mask; } const struct dma_map_ops xen_swiotlb_dma_ops = { From patchwork Mon Mar 1 07:44:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12108881 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80B4EC433E0 for ; Mon, 1 Mar 2021 07:46:14 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4239A64E3F for ; Mon, 1 Mar 2021 07:46:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4239A64E3F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.91273.172455 (Exim 4.92) (envelope-from ) id 1lGdFm-000320-LB; Mon, 01 Mar 2021 07:46:06 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 91273.172455; Mon, 01 Mar 2021 07:46:06 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdFm-00031r-GR; Mon, 01 Mar 2021 07:46:06 +0000 Received: by outflank-mailman (input) for mailman id 91273; Mon, 01 Mar 2021 07:46:05 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdFl-0002yc-BV for xen-devel@lists.xenproject.org; Mon, 01 Mar 2021 07:46:05 +0000 Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id ce8998b2-166e-4703-93ae-db34b11e3ad7; Mon, 01 Mar 2021 07:46:04 +0000 (UTC) Received: from [2001:4bb8:19b:e4b7:cdf9:733f:4874:8eb4] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux)) id 1lGdFa-00FRD5-OZ; Mon, 01 Mar 2021 07:45:56 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer 
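The simplified support check boils down to: a device can use the
bounce buffer if the DMA address of the buffer's last byte fits under
the device's mask. A compilable model of that predicate, with
phys_to_dma modeled as the identity (illustrative values only):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t tlb_end = 0x40000000;	/* one past the last buffer byte */

static uint64_t phys_to_dma(uint64_t paddr) { return paddr; }

static bool dma_supported(uint64_t mask)
{
	return phys_to_dma(tlb_end - 1) <= mask;
}

int main(void)
{
	printf("32-bit mask: %d\n", dma_supported(0xffffffffULL));  /* 1 */
	printf("30-bit mask: %d\n", dma_supported(0x3fffffffULL));  /* 1 */
	printf("29-bit mask: %d\n", dma_supported(0x1fffffffULL));  /* 0 */
	return 0;
}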
From patchwork Mon Mar 1 07:44:29 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108881
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 07/14] xen-swiotlb: remove xen_set_nslabs
Date: Mon, 1 Mar 2021 08:44:29 +0100
Message-Id: <20210301074436.919889-8-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

The xen_set_nslabs function is a little weird: it has just one caller,
that caller passes a global variable as the argument, which is then
overridden in the function and a derivative of it returned.  Just add
a cpp symbol for the default size using a readable constant and open
code the remaining three lines in the caller.

Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index e99f0614dcb979..5352655432e724 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -138,16 +138,6 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	} while (i < nslabs);
 	return 0;
 }
-static unsigned long xen_set_nslabs(unsigned long nr_tbl)
-{
-	if (!nr_tbl) {
-		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
-	} else
-		xen_io_tlb_nslabs = nr_tbl;
-
-	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
-}
 
 enum xen_swiotlb_err {
 	XEN_SWIOTLB_UNKNOWN = 0,
@@ -170,6 +160,9 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 	}
 	return "";
 }
+
+#define DEFAULT_NSLABS		ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
+
 int __ref xen_swiotlb_init(int verbose, bool early)
 {
 	unsigned long bytes, order;
@@ -179,8 +172,10 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
-	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
-	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
+	if (!xen_io_tlb_nslabs)
+		xen_io_tlb_nslabs = DEFAULT_NSLABS;
+	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(bytes);
 
 	/*
 	 * IO TLB memory already allocated. Just use it.
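A quick check of what the new DEFAULT_NSLABS constant evaluates to: 64
MB worth of IO_TLB_SHIFT-sized slabs, rounded up to a whole
IO_TLB_SEGSIZE segment. The values below mirror the kernel constants
of this era (2 KB slabs, 128-slab segments); the program itself is a
standalone sketch, not kernel code:

#include <stdio.h>

#define SZ_64M		(64UL << 20)
#define IO_TLB_SHIFT	11
#define IO_TLB_SEGSIZE	128
#define ALIGN(x, a)	(((x) + (a) - 1) / (a) * (a))

int main(void)
{
	unsigned long nslabs = ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);

	/* 32768 slabs of 2 KB each, i.e. the familiar 64 MB default */
	printf("DEFAULT_NSLABS = %lu (%lu MB)\n",
	       nslabs, (nslabs << IO_TLB_SHIFT) >> 20);
	return 0;
}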
From patchwork Mon Mar 1 07:44:30 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108883
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 08/14] xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs
Date: Mon, 1 Mar 2021 08:44:30 +0100
Message-Id: <20210301074436.919889-9-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

The xen_io_tlb_start and xen_io_tlb_nslabs variables are now only used in xen_swiotlb_init, so replace them with local variables.

Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c | 57 +++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 32 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 5352655432e724..1a31ddf7139799 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -40,14 +40,7 @@ #include #define MAX_DMA_BITS 32 -/* - * Used to do a quick range check in swiotlb_tbl_unmap_single and - * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this - * API. - */ -static char *xen_io_tlb_start; -static unsigned long xen_io_tlb_nslabs; /* * Quick lookup value of the bus address of the IOTLB. */ @@ -169,75 +162,75 @@ int __ref xen_swiotlb_init(int verbose, bool early) int rc = -ENOMEM; enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN; unsigned int repeat = 3; + char *start; + unsigned long nslabs; - xen_io_tlb_nslabs = swiotlb_nr_tbl(); + nslabs = swiotlb_nr_tbl(); retry: - if (!xen_io_tlb_nslabs) - xen_io_tlb_nslabs = DEFAULT_NSLABS; - bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT; + if (!nslabs) + nslabs = DEFAULT_NSLABS; + bytes = nslabs << IO_TLB_SHIFT; order = get_order(bytes); /* * IO TLB memory already allocated. Just use it. */ - if (io_tlb_start != 0) { - xen_io_tlb_start = phys_to_virt(io_tlb_start); + if (io_tlb_start != 0) goto end; - } /* * Get IO TLB memory from any location. */ if (early) { - xen_io_tlb_start = memblock_alloc(PAGE_ALIGN(bytes), + start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE); - if (!xen_io_tlb_start) + if (!start) panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, PAGE_ALIGN(bytes), PAGE_SIZE); } else { #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT)) #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT) while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { - xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order); - if (xen_io_tlb_start) + start = (void *)xen_get_swiotlb_free_pages(order); + if (start) break; order--; } if (order != get_order(bytes)) { pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n", (PAGE_SIZE << order) >> 20); - xen_io_tlb_nslabs = SLABS_PER_PAGE << order; - bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT; + nslabs = SLABS_PER_PAGE << order; + bytes = nslabs << IO_TLB_SHIFT; } } - if (!xen_io_tlb_start) { + if (!start) { m_ret = XEN_SWIOTLB_ENOMEM; goto error; } /* * And replace that memory with pages under 4GB.
*/ - rc = xen_swiotlb_fixup(xen_io_tlb_start, + rc = xen_swiotlb_fixup(start, bytes, - xen_io_tlb_nslabs); + nslabs); if (rc) { if (early) - memblock_free(__pa(xen_io_tlb_start), + memblock_free(__pa(start), PAGE_ALIGN(bytes)); else { - free_pages((unsigned long)xen_io_tlb_start, order); - xen_io_tlb_start = NULL; + free_pages((unsigned long)start, order); + start = NULL; } m_ret = XEN_SWIOTLB_EFIXUP; goto error; } if (early) { - if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, + if (swiotlb_init_with_tbl(start, nslabs, verbose)) panic("Cannot allocate SWIOTLB buffer"); rc = 0; } else - rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs); + rc = swiotlb_late_init_with_tbl(start, nslabs); end: if (!rc) @@ -246,17 +239,17 @@ int __ref xen_swiotlb_init(int verbose, bool early) return rc; error: if (repeat--) { - xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */ - (xen_io_tlb_nslabs >> 1)); + nslabs = max(1024UL, /* Min is 2MB */ + (nslabs >> 1)); pr_info("Lowering to %luMB\n", - (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20); + (nslabs << IO_TLB_SHIFT) >> 20); goto retry; } pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc); if (early) panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc); else - free_pages((unsigned long)xen_io_tlb_start, order); + free_pages((unsigned long)start, order); return rc; } From patchwork Mon Mar 1 07:44:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12108885 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4401EC433DB for ; Mon, 1 Mar 2021 07:46:28 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0BF6564DE7 for ; Mon, 1 Mar 2021 07:46:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0BF6564DE7 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.91277.172479 (Exim 4.92) (envelope-from ) id 1lGdG0-0003Fl-6y; Mon, 01 Mar 2021 07:46:20 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 91277.172479; Mon, 01 Mar 2021 07:46:20 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdG0-0003Fd-2K; Mon, 01 Mar 2021 07:46:20 +0000 Received: by outflank-mailman (input) for mailman id 91277; Mon, 01 Mar 2021 07:46:18 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdFy-00036y-E0 for xen-devel@lists.xenproject.org; Mon, 01 Mar 2021 07:46:18 +0000 Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1]) by us1-rack-iad1.inumbo.com 
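The retry path that survives this cleanup halves the slab count on each failure, with a floor of 1024 slabs (2 MB). A minimal user-space sketch of that backoff, under the same assumed IO_TLB_SHIFT = 11:

#include <stdio.h>

#define IO_TLB_SHIFT	11	/* assumed: 2 KB slabs */

int main(void)
{
	unsigned long nslabs = 32768;	/* the 64 MB default */
	unsigned int repeat = 3;

	while (repeat--) {
		/* the driver's max(1024UL, nslabs >> 1): never below 2 MB */
		nslabs = (nslabs >> 1) > 1024 ? nslabs >> 1 : 1024;
		printf("Lowering to %luMB\n", (nslabs << IO_TLB_SHIFT) >> 20);
	}
	return 0;	/* prints 32, 16, 8 - then the driver gives up */
}

Starting from the 64 MB default this tries 32 MB, 16 MB and 8 MB before failing for good, matching the three-attempt repeat counter in the driver.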
From patchwork Mon Mar 1 07:44:31 2021
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 09/14] swiotlb: lift the double initialization protection from xen-swiotlb
Date: Mon, 1 Mar 2021 08:44:31 +0100
Message-Id: <20210301074436.919889-10-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Lift the double initialization protection from xen-swiotlb to the core code to avoid exposing too many swiotlb internals. Also upgrade the check to a warning, as it should not happen.

Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c | 7 -------
 kernel/dma/swiotlb.c | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 1a31ddf7139799..060eeb056486f5 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -172,12 +172,6 @@ int __ref xen_swiotlb_init(int verbose, bool early) bytes = nslabs << IO_TLB_SHIFT; order = get_order(bytes); - /* - * IO TLB memory already allocated. Just use it. - */ - if (io_tlb_start != 0) - goto end; - /* * Get IO TLB memory from any location.
*/ @@ -232,7 +226,6 @@ int __ref xen_swiotlb_init(int verbose, bool early) } else rc = swiotlb_late_init_with_tbl(start, nslabs); -end: if (!rc) swiotlb_set_max_segment(PAGE_SIZE); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 388d9be35b5795..ebe7c123e27e52 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -229,6 +229,10 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) unsigned long i, bytes; size_t alloc_size; + /* protect against double initialization */ + if (WARN_ON_ONCE(io_tlb_start)) + return -ENOMEM; + bytes = nslabs << IO_TLB_SHIFT; io_tlb_nslabs = nslabs; @@ -367,6 +371,10 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) { unsigned long i, bytes; + /* protect against double initialization */ + if (WARN_ON_ONCE(io_tlb_start)) + return -ENOMEM; + bytes = nslabs << IO_TLB_SHIFT; io_tlb_nslabs = nslabs; From patchwork Mon Mar 1 07:44:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12108887 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 781CDC433DB for ; Mon, 1 Mar 2021 07:46:33 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2D7AE64E40 for ; Mon, 1 Mar 2021 07:46:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2D7AE64E40 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.91279.172491 (Exim 4.92) (envelope-from ) id 1lGdG4-0003Ka-IC; Mon, 01 Mar 2021 07:46:24 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 91279.172491; Mon, 01 Mar 2021 07:46:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdG4-0003KQ-DR; Mon, 01 Mar 2021 07:46:24 +0000 Received: by outflank-mailman (input) for mailman id 91279; Mon, 01 Mar 2021 07:46:23 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1lGdG3-00036y-EK for xen-devel@lists.xenproject.org; Mon, 01 Mar 2021 07:46:23 +0000 Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 59568fdb-0e0b-468d-bfce-b5d40220191a; Mon, 01 Mar 2021 07:46:19 +0000 (UTC) Received: from [2001:4bb8:19b:e4b7:cdf9:733f:4874:8eb4] (helo=localhost) by casper.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux)) id 1lGdFr-00FRE9-3o; Mon, 01 Mar 2021 07:46:11 +0000 X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , 
From patchwork Mon Mar 1 07:44:32 2021
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 10/14] xen-swiotlb: split xen_swiotlb_init
Date: Mon, 1 Mar 2021 08:44:32 +0100
Message-Id: <20210301074436.919889-11-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Split xen_swiotlb_init into a normal and an early case. That makes both much simpler and more readable, and also allows marking the early code as __init and x86-only.

Signed-off-by: Christoph Hellwig
---
 arch/arm/xen/mm.c | 2 +-
 arch/x86/xen/pci-swiotlb-xen.c | 4 +-
 drivers/xen/swiotlb-xen.c | 124 +++++++++++++++++++--------------
 include/xen/swiotlb-xen.h | 3 +-
 4 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c index 467fa225c3d0ed..aae950cd053fea 100644 --- a/arch/arm/xen/mm.c +++ b/arch/arm/xen/mm.c @@ -140,7 +140,7 @@ static int __init xen_mm_init(void) struct gnttab_cache_flush cflush; if (!xen_initial_domain()) return 0; - xen_swiotlb_init(1, false); + xen_swiotlb_init(); cflush.op = 0; cflush.a.dev_bus_addr = 0; diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c index 19ae3e4fe4e98e..54f9aa7e845739 100644 --- a/arch/x86/xen/pci-swiotlb-xen.c +++ b/arch/x86/xen/pci-swiotlb-xen.c @@ -59,7 +59,7 @@ int __init pci_xen_swiotlb_detect(void) void __init pci_xen_swiotlb_init(void) { if (xen_swiotlb) { - xen_swiotlb_init(1, true /* early */); + xen_swiotlb_init_early(); dma_ops = &xen_swiotlb_dma_ops; #ifdef CONFIG_PCI @@ -76,7 +76,7 @@ int pci_xen_swiotlb_init_late(void) if (xen_swiotlb) return 0; - rc = xen_swiotlb_init(1, false /* late */); + rc = xen_swiotlb_init(); if (rc) return rc; diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 060eeb056486f5..00adeb95ebb9df 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -156,96 +156,112 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err) #define DEFAULT_NSLABS ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE) -int __ref xen_swiotlb_init(int verbose, bool early) +int __ref xen_swiotlb_init(void) { - unsigned long bytes, order; - int rc = -ENOMEM; enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN; + unsigned long nslabs, bytes, order; unsigned int repeat = 3; + int rc = -ENOMEM; char *start; - unsigned long nslabs; nslabs = swiotlb_nr_tbl(); -retry: if (!nslabs) nslabs = DEFAULT_NSLABS; +retry: + m_ret = XEN_SWIOTLB_ENOMEM; bytes =
nslabs << IO_TLB_SHIFT; order = get_order(bytes); /* * Get IO TLB memory from any location. */ - if (early) { - start = memblock_alloc(PAGE_ALIGN(bytes), - PAGE_SIZE); - if (!start) - panic("%s: Failed to allocate %lu bytes align=0x%lx\n", - __func__, PAGE_ALIGN(bytes), PAGE_SIZE); - } else { #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT)) #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT) - while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { - start = (void *)xen_get_swiotlb_free_pages(order); - if (start) - break; - order--; - } - if (order != get_order(bytes)) { - pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n", - (PAGE_SIZE << order) >> 20); - nslabs = SLABS_PER_PAGE << order; - bytes = nslabs << IO_TLB_SHIFT; - } + while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { + start = (void *)xen_get_swiotlb_free_pages(order); + if (start) + break; + order--; } - if (!start) { - m_ret = XEN_SWIOTLB_ENOMEM; + if (!start) goto error; + if (order != get_order(bytes)) { + pr_warn("Warning: only able to allocate %ld MB for software IO TLB\n", + (PAGE_SIZE << order) >> 20); + nslabs = SLABS_PER_PAGE << order; + bytes = nslabs << IO_TLB_SHIFT; } + /* * And replace that memory with pages under 4GB. */ - rc = xen_swiotlb_fixup(start, - bytes, - nslabs); + rc = xen_swiotlb_fixup(start, bytes, nslabs); if (rc) { - if (early) - memblock_free(__pa(start), - PAGE_ALIGN(bytes)); - else { - free_pages((unsigned long)start, order); - start = NULL; - } + free_pages((unsigned long)start, order); m_ret = XEN_SWIOTLB_EFIXUP; goto error; } - if (early) { - if (swiotlb_init_with_tbl(start, nslabs, - verbose)) - panic("Cannot allocate SWIOTLB buffer"); - rc = 0; - } else - rc = swiotlb_late_init_with_tbl(start, nslabs); - - if (!rc) - swiotlb_set_max_segment(PAGE_SIZE); - - return rc; + rc = swiotlb_late_init_with_tbl(start, nslabs); + if (rc) + return rc; + swiotlb_set_max_segment(PAGE_SIZE); + return 0; error: if (repeat--) { - nslabs = max(1024UL, /* Min is 2MB */ - (nslabs >> 1)); + /* Min is 2MB */ + nslabs = max(1024UL, (nslabs >> 1)); pr_info("Lowering to %luMB\n", (nslabs << IO_TLB_SHIFT) >> 20); goto retry; } pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc); - if (early) - panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc); - else - free_pages((unsigned long)start, order); + free_pages((unsigned long)start, order); return rc; } +#ifdef CONFIG_X86 +void __init xen_swiotlb_init_early(void) +{ + unsigned long nslabs, bytes; + unsigned int repeat = 3; + char *start; + int rc; + + nslabs = swiotlb_nr_tbl(); + if (!nslabs) + nslabs = DEFAULT_NSLABS; +retry: + /* + * Get IO TLB memory from any location. + */ + bytes = nslabs << IO_TLB_SHIFT; + start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE); + if (!start) + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", + __func__, PAGE_ALIGN(bytes), PAGE_SIZE); + + /* + * And replace that memory with pages under 4GB. 
+ */ + rc = xen_swiotlb_fixup(start, bytes, nslabs); + if (rc) { + memblock_free(__pa(start), PAGE_ALIGN(bytes)); + if (repeat--) { + /* Min is 2MB */ + nslabs = max(1024UL, (nslabs >> 1)); + pr_info("Lowering to %luMB\n", + (nslabs << IO_TLB_SHIFT) >> 20); + goto retry; + } + panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc); + } + + if (swiotlb_init_with_tbl(start, nslabs, false)) + panic("Cannot allocate SWIOTLB buffer"); + swiotlb_set_max_segment(PAGE_SIZE); +} +#endif /* CONFIG_X86 */ + static void * xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t flags, diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h index d5eaf9d682b804..6206b1ec99168a 100644 --- a/include/xen/swiotlb-xen.h +++ b/include/xen/swiotlb-xen.h @@ -9,7 +9,8 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle, void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle, size_t size, enum dma_data_direction dir); -extern int xen_swiotlb_init(int verbose, bool early); +int xen_swiotlb_init(void); +void __init xen_swiotlb_init_early(void); extern const struct dma_map_ops xen_swiotlb_dma_ops; #endif /* __LINUX_SWIOTLB_XEN_H */
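The shape of this split is worth spelling out: a boolean flag that selects between two mostly disjoint code paths usually means one function is really two. A sketch of the pattern with hypothetical names (buf_init and friends are illustrations, not the xen-swiotlb functions):

#include <stdbool.h>
#include <stdlib.h>

/* stand-ins for the two allocators the real code chooses between */
static void *boot_alloc(size_t n)    { return calloc(1, n); }
static void *runtime_alloc(size_t n) { return malloc(n); }

/* Before: one entry point, two interleaved paths selected by a flag. */
static int buf_init(size_t n, bool early)
{
	void *p = early ? boot_alloc(n) : runtime_alloc(n);

	if (!p) {
		if (early)
			abort();	/* the boot path cannot fail gracefully */
		return -1;
	}
	free(p);
	return 0;
}

/* After: two entry points; each reads straight through. */
static void buf_init_early(size_t n)
{
	void *p = boot_alloc(n);

	if (!p)
		abort();
	free(p);
}

static int buf_init_late(size_t n)
{
	void *p = runtime_alloc(n);

	if (!p)
		return -1;
	free(p);
	return 0;
}

int main(void)
{
	(void)buf_init(32, true);	/* old style */
	buf_init_early(32);		/* new style */
	return buf_init_late(32);
}

Besides readability, the split lets the boot-time variant be marked __init (so it is discarded after boot) and compiled only on the architectures that can call it, which is exactly what the patch does with xen_swiotlb_init_early().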
From patchwork Mon Mar 1 07:44:33 2021
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 11/14] xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup
Date: Mon, 1 Mar 2021 08:44:33 +0100
Message-Id: <20210301074436.919889-12-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 00adeb95ebb9df..4ecfce2c6f7263 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -104,8 +104,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) return 0; } -static int -xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs) +static int xen_swiotlb_fixup(void *buf, unsigned long nslabs) { int i, rc; int dma_bits; @@ -195,7 +194,7 @@ int __ref xen_swiotlb_init(void) /* * And replace that memory with pages under 4GB. */ - rc = xen_swiotlb_fixup(start, bytes, nslabs); + rc = xen_swiotlb_fixup(start, nslabs); if (rc) { free_pages((unsigned long)start, order); m_ret = XEN_SWIOTLB_EFIXUP; @@ -243,7 +242,7 @@ void __init xen_swiotlb_init_early(void) /* * And replace that memory with pages under 4GB.
*/ - rc = xen_swiotlb_fixup(start, bytes, nslabs); + rc = xen_swiotlb_fixup(start, nslabs); if (rc) { memblock_free(__pa(start), PAGE_ALIGN(bytes)); if (repeat--) {
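The point of dropping the size argument is that the callee can always derive it from nslabs, so the two values can never disagree. A toy sketch of the pattern (fixup() is a hypothetical stand-in for xen_swiotlb_fixup(), again assuming IO_TLB_SHIFT = 11):

#include <stdio.h>

#define IO_TLB_SHIFT	11	/* assumed: 2 KB slabs */

/* hypothetical stand-in for xen_swiotlb_fixup(): derives bytes itself */
static int fixup(void *buf, unsigned long nslabs)
{
	size_t bytes = (size_t)nslabs << IO_TLB_SHIFT;	/* was a parameter */

	printf("fixing up %zu bytes at %p\n", bytes, buf);
	return 0;
}

int main(void)
{
	static char buf[1 << IO_TLB_SHIFT];

	return fixup(buf, 1);
}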
From patchwork Mon Mar 1 07:44:34 2021
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 12/14] swiotlb: move global variables into a new io_tlb_mem structure
Date: Mon, 1 Mar 2021 08:44:34 +0100
Message-Id: <20210301074436.919889-13-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

From: Claire Chang

Added a new struct, io_tlb_mem, as the IO TLB memory pool descriptor and moved relevant global variables into that struct. This will be useful later to allow for a restricted DMA pool.

Signed-off-by: Claire Chang
[hch: rebased]
Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c | 2 +-
 include/linux/swiotlb.h | 43 ++++-
 kernel/dma/swiotlb.c | 354 +++++++++++++++++---------------------
 3 files changed, 203 insertions(+), 196 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 4ecfce2c6f7263..5329ad54a5f34e 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -548,7 +548,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, static int xen_swiotlb_dma_supported(struct device *hwdev, u64 mask) { - return xen_phys_to_dma(hwdev, io_tlb_end - 1) <= mask; + return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask; } const struct dma_map_ops xen_swiotlb_dma_ops = { diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 0696bdc8072e97..5ec5378b17c333 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -6,6 +6,7 @@ #include #include #include +#include struct device; struct page; @@ -61,11 +62,49 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys, #ifdef CONFIG_SWIOTLB extern enum swiotlb_force swiotlb_force; -extern phys_addr_t io_tlb_start, io_tlb_end; + +/** + * struct io_tlb_mem - IO TLB Memory Pool Descriptor + * + * @start: The start address of the swiotlb memory pool. Used to do a quick + * range check to see if the memory was in fact allocated by this + * API. + * @end: The end address of the swiotlb memory pool. Used to do a quick + * range check to see if the memory was in fact allocated by this + * API. + * @nslabs: The number of IO TLB blocks (in groups of 64) between @start and + * @end. This is command line adjustable via setup_io_tlb_npages. + * @used: The number of used IO TLB block. + * @list: The free list describing the number of free entries available + * from each index. + * @index: The index to start searching in the next round. + * @orig_addr: The original address corresponding to a mapped entry. + * @alloc_size: Size of the allocated buffer. + * @lock: The lock to protect the above data structures in the map and + * unmap calls. + * @debugfs: The dentry to debugfs.
+ * @late_alloc: %true if allocated using the page allocator + */ +struct io_tlb_mem { + phys_addr_t start; + phys_addr_t end; + unsigned long nslabs; + unsigned long used; + unsigned int *list; + unsigned int index; + phys_addr_t *orig_addr; + size_t *alloc_size; + spinlock_t lock; + struct dentry *debugfs; + bool late_alloc; +}; +extern struct io_tlb_mem io_tlb_default_mem; static inline bool is_swiotlb_buffer(phys_addr_t paddr) { - return paddr >= io_tlb_start && paddr < io_tlb_end; + struct io_tlb_mem *mem = &io_tlb_default_mem; + + return paddr >= mem->start && paddr < mem->end; } void __init swiotlb_exit(void); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index ebe7c123e27e52..6aa84fa3b1467e 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -59,32 +59,11 @@ */ #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT) -enum swiotlb_force swiotlb_force; - -/* - * Used to do a quick range check in swiotlb_tbl_unmap_single and - * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this - * API. - */ -phys_addr_t io_tlb_start, io_tlb_end; - -/* - * The number of IO TLB blocks (in groups of 64) between io_tlb_start and - * io_tlb_end. This is command line adjustable via setup_io_tlb_npages. - */ -static unsigned long io_tlb_nslabs; +#define INVALID_PHYS_ADDR (~(phys_addr_t)0) -/* - * The number of used IO TLB block - */ -static unsigned long io_tlb_used; +enum swiotlb_force swiotlb_force; -/* - * This is a free list describing the number of free entries available from - * each index - */ -static unsigned int *io_tlb_list; -static unsigned int io_tlb_index; +struct io_tlb_mem io_tlb_default_mem; /* * Max segment that we can provide which (if pages are contingous) will @@ -92,32 +71,15 @@ static unsigned int io_tlb_index; */ static unsigned int max_segment; -/* - * We need to save away the original address corresponding to a mapped entry - * for the sync operations. - */ -#define INVALID_PHYS_ADDR (~(phys_addr_t)0) -static phys_addr_t *io_tlb_orig_addr; - -/* - * The mapped buffer's size should be validated during a sync operation. - */ -static size_t *io_tlb_alloc_size; - -/* - * Protect the above data structures in the map and unmap calls - */ -static DEFINE_SPINLOCK(io_tlb_lock); - -static int late_alloc; - static int __init setup_io_tlb_npages(char *str) { + struct io_tlb_mem *mem = &io_tlb_default_mem; + if (isdigit(*str)) { - io_tlb_nslabs = simple_strtoul(str, &str, 0); + mem->nslabs = simple_strtoul(str, &str, 0); /* avoid tail segment of size < IO_TLB_SEGSIZE */ - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } if (*str == ',') ++str; @@ -125,7 +87,7 @@ setup_io_tlb_npages(char *str) swiotlb_force = SWIOTLB_FORCE; } else if (!strcmp(str, "noforce")) { swiotlb_force = SWIOTLB_NO_FORCE; - io_tlb_nslabs = 1; + mem->nslabs = 1; } return 0; @@ -136,7 +98,7 @@ static bool no_iotlb_memory; unsigned long swiotlb_nr_tbl(void) { - return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs; + return unlikely(no_iotlb_memory) ? 0 : io_tlb_default_mem.nslabs; } EXPORT_SYMBOL_GPL(swiotlb_nr_tbl); @@ -158,13 +120,14 @@ unsigned long swiotlb_size_or_default(void) { unsigned long size; - size = io_tlb_nslabs << IO_TLB_SHIFT; + size = io_tlb_default_mem.nslabs << IO_TLB_SHIFT; return size ? 
size : (IO_TLB_DEFAULT_SIZE); } void __init swiotlb_adjust_size(unsigned long new_size) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long size; /* @@ -172,10 +135,10 @@ void __init swiotlb_adjust_size(unsigned long new_size) * architectures such as those supporting memory encryption to * adjust/expand SWIOTLB size for their use. */ - if (!io_tlb_nslabs) { + if (!mem->nslabs) { size = ALIGN(new_size, IO_TLB_SIZE); - io_tlb_nslabs = size >> IO_TLB_SHIFT; - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + mem->nslabs = size >> IO_TLB_SHIFT; + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20); } @@ -183,14 +146,15 @@ void __init swiotlb_adjust_size(unsigned long new_size) void swiotlb_print_info(void) { - unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT; + struct io_tlb_mem *mem = &io_tlb_default_mem; + unsigned long bytes = mem->nslabs << IO_TLB_SHIFT; if (no_iotlb_memory) { pr_warn("No low mem\n"); return; } - pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end, + pr_info("mapped [mem %pa-%pa] (%luMB)\n", &mem->start, &mem->end, bytes >> 20); } @@ -212,68 +176,65 @@ static inline unsigned long nr_slots(u64 val) */ void __init swiotlb_update_mem_attributes(void) { + struct io_tlb_mem *mem = &io_tlb_default_mem; void *vaddr; unsigned long bytes; - if (no_iotlb_memory || late_alloc) + if (no_iotlb_memory || mem->late_alloc) return; - vaddr = phys_to_virt(io_tlb_start); - bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT); + vaddr = phys_to_virt(mem->start); + bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT); set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT); memset(vaddr, 0, bytes); } int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long i, bytes; size_t alloc_size; /* protect against double initialization */ - if (WARN_ON_ONCE(io_tlb_start)) + if (WARN_ON_ONCE(mem->start)) return -ENOMEM; bytes = nslabs << IO_TLB_SHIFT; - io_tlb_nslabs = nslabs; - io_tlb_start = __pa(tlb); - io_tlb_end = io_tlb_start + bytes; + mem->nslabs = nslabs; + mem->start = __pa(tlb); + mem->end = mem->start + bytes; + mem->index = 0; + spin_lock_init(&mem->lock); /* * Allocate and initialize the free list array. This array is used * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE - * between io_tlb_start and io_tlb_end. + * between mem->start and mem->end. 
*/ - alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int)); - io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE); - if (!io_tlb_list) + alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(int)); + mem->list = memblock_alloc(alloc_size, PAGE_SIZE); + if (!mem->list) panic("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); - alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)); - io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE); - if (!io_tlb_orig_addr) + alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t)); + mem->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE); + if (!mem->orig_addr) panic("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); - alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(size_t)); - io_tlb_alloc_size = memblock_alloc(alloc_size, PAGE_SIZE); - if (!io_tlb_alloc_size) - panic("%s: Failed to allocate %zu bytes align=0x%lx\n", - __func__, alloc_size, PAGE_SIZE); - - for (i = 0; i < io_tlb_nslabs; i++) { - io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i); - io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; - io_tlb_alloc_size[i] = 0; + for (i = 0; i < mem->nslabs; i++) { + mem->list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i); + mem->orig_addr[i] = INVALID_PHYS_ADDR; + mem->alloc_size[i] = 0; } - io_tlb_index = 0; no_iotlb_memory = false; if (verbose) swiotlb_print_info(); - swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT); + swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; } @@ -284,26 +245,27 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) void __init swiotlb_init(int verbose) { + struct io_tlb_mem *mem = &io_tlb_default_mem; size_t default_size = IO_TLB_DEFAULT_SIZE; unsigned char *vstart; unsigned long bytes; - if (!io_tlb_nslabs) { - io_tlb_nslabs = (default_size >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + if (!mem->nslabs) { + mem->nslabs = (default_size >> IO_TLB_SHIFT); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } - bytes = io_tlb_nslabs << IO_TLB_SHIFT; + bytes = mem->nslabs << IO_TLB_SHIFT; /* Get IO TLB memory from the low pages */ vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE); - if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose)) + if (vstart && !swiotlb_init_with_tbl(vstart, mem->nslabs, verbose)) return; - if (io_tlb_start) { - memblock_free_early(io_tlb_start, - PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); - io_tlb_start = 0; + if (mem->start) { + memblock_free_early(mem->start, + PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT)); + mem->start = 0; } pr_warn("Cannot allocate buffer"); no_iotlb_memory = true; @@ -317,22 +279,23 @@ swiotlb_init(int verbose) int swiotlb_late_init_with_default_size(size_t default_size) { - unsigned long bytes, req_nslabs = io_tlb_nslabs; + struct io_tlb_mem *mem = &io_tlb_default_mem; + unsigned long bytes, req_nslabs = mem->nslabs; unsigned char *vstart = NULL; unsigned int order; int rc = 0; - if (!io_tlb_nslabs) { - io_tlb_nslabs = (default_size >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + if (!mem->nslabs) { + mem->nslabs = (default_size >> IO_TLB_SHIFT); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } /* * Get IO TLB memory from the low pages */ - order = get_order(io_tlb_nslabs << IO_TLB_SHIFT); - io_tlb_nslabs = SLABS_PER_PAGE << order; - bytes = io_tlb_nslabs << IO_TLB_SHIFT; + order = get_order(mem->nslabs << IO_TLB_SHIFT); + mem->nslabs = SLABS_PER_PAGE << order; + bytes = mem->nslabs << IO_TLB_SHIFT; 
while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, @@ -343,15 +306,15 @@ swiotlb_late_init_with_default_size(size_t default_size) } if (!vstart) { - io_tlb_nslabs = req_nslabs; + mem->nslabs = req_nslabs; return -ENOMEM; } if (order != get_order(bytes)) { pr_warn("only able to allocate %ld MB\n", (PAGE_SIZE << order) >> 20); - io_tlb_nslabs = SLABS_PER_PAGE << order; + mem->nslabs = SLABS_PER_PAGE << order; } - rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs); + rc = swiotlb_late_init_with_tbl(vstart, mem->nslabs); if (rc) free_pages((unsigned long)vstart, order); @@ -360,26 +323,32 @@ swiotlb_late_init_with_default_size(size_t default_size) static void swiotlb_cleanup(void) { - io_tlb_end = 0; - io_tlb_start = 0; - io_tlb_nslabs = 0; + struct io_tlb_mem *mem = &io_tlb_default_mem; + + mem->end = 0; + mem->start = 0; + mem->nslabs = 0; max_segment = 0; } int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long i, bytes; /* protect against double initialization */ - if (WARN_ON_ONCE(io_tlb_start)) + if (WARN_ON_ONCE(mem->start)) return -ENOMEM; bytes = nslabs << IO_TLB_SHIFT; - io_tlb_nslabs = nslabs; - io_tlb_start = virt_to_phys(tlb); - io_tlb_end = io_tlb_start + bytes; + mem->nslabs = nslabs; + mem->start = virt_to_phys(tlb); + mem->end = mem->start + bytes; + mem->index = 0; + mem->late_alloc = 1; + spin_lock_init(&mem->lock); set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT); memset(tlb, 0, bytes); @@ -387,52 +356,45 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) /* * Allocate and initialize the free list array. This array is used * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE - * between io_tlb_start and io_tlb_end. + * between mem->start and mem->end. 
*/ - io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL, - get_order(io_tlb_nslabs * sizeof(int))); - if (!io_tlb_list) + mem->list = (unsigned int *)__get_free_pages(GFP_KERNEL, + get_order(mem->nslabs * sizeof(int))); + if (!mem->list) goto cleanup3; - io_tlb_orig_addr = (phys_addr_t *) + mem->orig_addr = (phys_addr_t *) __get_free_pages(GFP_KERNEL, - get_order(io_tlb_nslabs * + get_order(mem->nslabs * sizeof(phys_addr_t))); - if (!io_tlb_orig_addr) + if (!mem->orig_addr) goto cleanup4; - io_tlb_alloc_size = (size_t *) + mem->alloc_size = (size_t *) __get_free_pages(GFP_KERNEL, - get_order(io_tlb_nslabs * + get_order(mem->nslabs * sizeof(size_t))); - if (!io_tlb_alloc_size) + if (!mem->alloc_size) goto cleanup5; - - for (i = 0; i < io_tlb_nslabs; i++) { - io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i); - io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; - io_tlb_alloc_size[i] = 0; + for (i = 0; i < mem->nslabs; i++) { + mem->list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i); + mem->orig_addr[i] = INVALID_PHYS_ADDR; + mem->alloc_size[i] = 0; } - io_tlb_index = 0; no_iotlb_memory = false; swiotlb_print_info(); - - late_alloc = 1; - - swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT); - + swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; cleanup5: - free_pages((unsigned long)io_tlb_orig_addr, get_order(io_tlb_nslabs * - sizeof(phys_addr_t))); - + free_pages((unsigned long)mem->orig_addr, + get_order(mem->nslabs * sizeof(phys_addr_t))); cleanup4: - free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * - sizeof(int))); - io_tlb_list = NULL; + free_pages((unsigned long)mem->list, + get_order(mem->nslabs * sizeof(int))); + mem->list = NULL; cleanup3: swiotlb_cleanup(); return -ENOMEM; @@ -440,27 +402,29 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) void __init swiotlb_exit(void) { - if (!io_tlb_orig_addr) + struct io_tlb_mem *mem = &io_tlb_default_mem; + + if (!mem->orig_addr) return; - if (late_alloc) { - free_pages((unsigned long)io_tlb_alloc_size, - get_order(io_tlb_nslabs * sizeof(size_t))); - free_pages((unsigned long)io_tlb_orig_addr, - get_order(io_tlb_nslabs * sizeof(phys_addr_t))); - free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * - sizeof(int))); - free_pages((unsigned long)phys_to_virt(io_tlb_start), - get_order(io_tlb_nslabs << IO_TLB_SHIFT)); + if (mem->late_alloc) { + free_pages((unsigned long)mem->alloc_size, + get_order(mem->nslabs * sizeof(size_t))); + free_pages((unsigned long)mem->orig_addr, + get_order(mem->nslabs * sizeof(phys_addr_t))); + free_pages((unsigned long)mem->list, + get_order(mem->nslabs * sizeof(int))); + free_pages((unsigned long)phys_to_virt(mem->start), + get_order(mem->nslabs << IO_TLB_SHIFT)); } else { - memblock_free_late(__pa(io_tlb_orig_addr), - PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t))); - memblock_free_late(__pa(io_tlb_alloc_size), - PAGE_ALIGN(io_tlb_nslabs * sizeof(size_t))); - memblock_free_late(__pa(io_tlb_list), - PAGE_ALIGN(io_tlb_nslabs * sizeof(int))); - memblock_free_late(io_tlb_start, - PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); + memblock_free_late(__pa(mem->alloc_size), + PAGE_ALIGN(mem->nslabs * sizeof(size_t))); + memblock_free_late(__pa(mem->orig_addr), + PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t))); + memblock_free_late(__pa(mem->list), + PAGE_ALIGN(mem->nslabs * sizeof(int))); + memblock_free_late(mem->start, + PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT)); } swiotlb_cleanup(); } @@ -471,9 +435,10 @@ void __init swiotlb_exit(void) static void swiotlb_bounce(struct device 
*dev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir) { - int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT; - size_t alloc_size = io_tlb_alloc_size[index]; - phys_addr_t orig_addr = io_tlb_orig_addr[index]; + struct io_tlb_mem *mem = &io_tlb_default_mem; + int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; + phys_addr_t orig_addr = mem->orig_addr[index]; + size_t alloc_size = mem->alloc_size[index]; unsigned long pfn = PFN_DOWN(orig_addr); unsigned char *vaddr = phys_to_virt(tlb_addr); @@ -538,9 +503,9 @@ static inline unsigned long get_max_slots(unsigned long boundary_mask) return nr_slots(boundary_mask + 1); } -static unsigned int wrap_index(unsigned int index) +static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index) { - if (index >= io_tlb_nslabs) + if (index >= mem->nslabs) return 0; return index; } @@ -552,9 +517,10 @@ static unsigned int wrap_index(unsigned int index) static int find_slots(struct device *dev, phys_addr_t orig_addr, size_t alloc_size) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long boundary_mask = dma_get_seg_boundary(dev); dma_addr_t tbl_dma_addr = - phys_to_dma_unencrypted(dev, io_tlb_start) & boundary_mask; + phys_to_dma_unencrypted(dev, mem->start) & boundary_mask; unsigned long max_slots = get_max_slots(boundary_mask); unsigned int iotlb_align_mask = dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1); @@ -573,15 +539,15 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, if (alloc_size >= PAGE_SIZE) stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT)); - spin_lock_irqsave(&io_tlb_lock, flags); - if (unlikely(nslots > io_tlb_nslabs - io_tlb_used)) + spin_lock_irqsave(&mem->lock, flags); + if (unlikely(nslots > mem->nslabs - mem->used)) goto not_found; - index = wrap = wrap_index(ALIGN(io_tlb_index, stride)); + index = wrap = wrap_index(mem, ALIGN(mem->index, stride)); do { if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) != (orig_addr & iotlb_align_mask)) { - index = wrap_index(index + 1); + index = wrap_index(mem, index + 1); continue; } @@ -593,34 +559,34 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr, if (!iommu_is_span_boundary(index, nslots, nr_slots(tbl_dma_addr), max_slots)) { - if (io_tlb_list[index] >= nslots) + if (mem->list[index] >= nslots) goto found; } - index = wrap_index(index + stride); + index = wrap_index(mem, index + stride); } while (index != wrap); not_found: - spin_unlock_irqrestore(&io_tlb_lock, flags); + spin_unlock_irqrestore(&mem->lock, flags); return -1; found: for (i = index; i < index + nslots; i++) - io_tlb_list[i] = 0; + mem->list[i] = 0; for (i = index - 1; io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && - io_tlb_list[i]; i--) - io_tlb_list[i] = ++count; + mem->list[i]; i--) + mem->list[i] = ++count; /* * Update the indices to avoid searching in the next round. 
*/ - if (index + nslots < io_tlb_nslabs) - io_tlb_index = index + nslots; + if (index + nslots < mem->nslabs) + mem->index = index + nslots; else - io_tlb_index = 0; - io_tlb_used += nslots; + mem->index = 0; + mem->used += nslots; - spin_unlock_irqrestore(&io_tlb_lock, flags); + spin_unlock_irqrestore(&mem->lock, flags); return index; } @@ -628,6 +594,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned int offset = swiotlb_align_offset(dev, orig_addr); unsigned int index, i; phys_addr_t tlb_addr; @@ -649,7 +616,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, if (!(attrs & DMA_ATTR_NO_WARN)) dev_warn_ratelimited(dev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n", - alloc_size, io_tlb_nslabs, io_tlb_used); + alloc_size, mem->nslabs, mem->used); return (phys_addr_t)DMA_MAPPING_ERROR; } @@ -659,10 +626,10 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, * needed. */ for (i = 0; i < nr_slots(alloc_size + offset); i++) { - io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i); - io_tlb_alloc_size[index+i] = alloc_size - (i << IO_TLB_SHIFT); + mem->orig_addr[index + i] = slot_addr(orig_addr, i); + mem->alloc_size[index + i] = alloc_size - (i << IO_TLB_SHIFT); } - tlb_addr = slot_addr(io_tlb_start, index) + offset; + tlb_addr = slot_addr(mem->start, index) + offset; if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE); @@ -676,10 +643,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, enum dma_data_direction dir, unsigned long attrs) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long flags; unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr); - int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT; - int nslots = nr_slots(io_tlb_alloc_size[index] + offset); + int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT; + int nslots = nr_slots(mem->alloc_size[index] + offset); int count, i; /* @@ -695,9 +663,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, * While returning the entries to the free list, we merge the entries * with slots below and above the pool being returned. 
*/ - spin_lock_irqsave(&io_tlb_lock, flags); + spin_lock_irqsave(&mem->lock, flags); if (index + nslots < ALIGN(index + 1, IO_TLB_SEGSIZE)) - count = io_tlb_list[index + nslots]; + count = mem->list[index + nslots]; else count = 0; @@ -706,9 +674,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, * superceeding slots */ for (i = index + nslots - 1; i >= index; i--) { - io_tlb_list[i] = ++count; - io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; - io_tlb_alloc_size[i] = 0; + mem->list[i] = ++count; + mem->orig_addr[i] = INVALID_PHYS_ADDR; + mem->alloc_size[i] = 0; } /* @@ -716,11 +684,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, * available (non zero) */ for (i = index - 1; - io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && io_tlb_list[i]; + io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && mem->list[i]; i--) - io_tlb_list[i] = ++count; - io_tlb_used -= nslots; - spin_unlock_irqrestore(&io_tlb_lock, flags); + mem->list[i] = ++count; + mem->used -= nslots; + spin_unlock_irqrestore(&mem->lock, flags); } void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr, @@ -783,21 +751,21 @@ size_t swiotlb_max_mapping_size(struct device *dev) bool is_swiotlb_active(void) { /* - * When SWIOTLB is initialized, even if io_tlb_start points to physical - * address zero, io_tlb_end surely doesn't. + * When SWIOTLB is initialized, even if mem->start points to physical + * address zero, mem->end surely doesn't. */ - return io_tlb_end != 0; + return io_tlb_default_mem.end != 0; } #ifdef CONFIG_DEBUG_FS static int __init swiotlb_create_debugfs(void) { - struct dentry *root; + struct io_tlb_mem *mem = &io_tlb_default_mem; - root = debugfs_create_dir("swiotlb", NULL); - debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs); - debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used); + mem->debugfs = debugfs_create_dir("swiotlb", NULL); + debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs); + debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used); return 0; }
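The mechanical part of this patch is the classic globals-to-descriptor refactor: a family of file-scope variables becomes one struct, and every function takes (or, for now, looks up) the instance it operates on. A minimal sketch with hypothetical names, showing only a fraction of the real struct's fields:

#include <stdio.h>

/*
 * Before: loose file-scope globals, implicitly describing the one and
 * only pool. After: one descriptor that can later be instantiated more
 * than once (e.g. for a restricted DMA pool). "pool" is a hypothetical
 * name; the real io_tlb_mem carries many more fields.
 */
struct pool {
	unsigned long nslabs;
	unsigned long used;
};

static struct pool default_pool = { .nslabs = 32768 };

static void pool_account(struct pool *p, unsigned long n)
{
	p->used += n;
	printf("%lu/%lu slabs used\n", p->used, p->nslabs);
}

int main(void)
{
	pool_account(&default_pool, 128);
	return 0;
}

Once callers pass the descriptor explicitly, a second pool is just a second instance, which is exactly the door the commit message says this opens.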
From patchwork Mon Mar 1 07:44:35 2021
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman , Dongli Zhang , Claire Chang , xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 13/14] swiotlb: dynamically allocate io_tlb_default_mem
Date: Mon, 1 Mar 2021 08:44:35 +0100
Message-Id: <20210301074436.919889-14-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

Instead of allocating ->list and ->orig_addr separately, just do one dynamic allocation for the actual io_tlb_mem structure. This simplifies a lot of the initialization code, and also allows just checking io_tlb_default_mem to see if swiotlb is in use.
Signed-off-by: Christoph Hellwig
---
 drivers/xen/swiotlb-xen.c |  22 +--
 include/linux/swiotlb.h   |  18 ++-
 kernel/dma/swiotlb.c      | 300 +++++++++++++------------------------
 3 files changed, 117 insertions(+), 223 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 5329ad54a5f34e..4c89afc0df6289 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -158,17 +158,14 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 int __ref xen_swiotlb_init(void)
 {
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
-	unsigned long nslabs, bytes, order;
-	unsigned int repeat = 3;
+	unsigned long bytes = swiotlb_size_or_default();
+	unsigned long nslabs = bytes >> IO_TLB_SHIFT;
+	unsigned int order, repeat = 3;
 	int rc = -ENOMEM;
 	char *start;
 
-	nslabs = swiotlb_nr_tbl();
-	if (!nslabs)
-		nslabs = DEFAULT_NSLABS;
 retry:
 	m_ret = XEN_SWIOTLB_ENOMEM;
-	bytes = nslabs << IO_TLB_SHIFT;
 	order = get_order(bytes);
 
 	/*
@@ -221,19 +218,16 @@ int __ref xen_swiotlb_init(void)
 #ifdef CONFIG_X86
 void __init xen_swiotlb_init_early(void)
 {
-	unsigned long nslabs, bytes;
+	unsigned long bytes = swiotlb_size_or_default();
+	unsigned long nslabs = bytes >> IO_TLB_SHIFT;
 	unsigned int repeat = 3;
 	char *start;
 	int rc;
 
-	nslabs = swiotlb_nr_tbl();
-	if (!nslabs)
-		nslabs = DEFAULT_NSLABS;
 retry:
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-	bytes = nslabs << IO_TLB_SHIFT;
 	start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
 	if (!start)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
@@ -248,8 +242,8 @@ void __init xen_swiotlb_init_early(void)
 	if (repeat--) {
 		/* Min is 2MB */
 		nslabs = max(1024UL, (nslabs >> 1));
-		pr_info("Lowering to %luMB\n",
-			(nslabs << IO_TLB_SHIFT) >> 20);
+		bytes = nslabs << IO_TLB_SHIFT;
+		pr_info("Lowering to %luMB\n", bytes >> 20);
 		goto retry;
 	}
 	panic("%s (rc:%d)", xen_swiotlb_error(XEN_SWIOTLB_EFIXUP), rc);
@@ -548,7 +542,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask;
+	return xen_phys_to_dma(hwdev, io_tlb_default_mem->end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 5ec5378b17c333..63f7a63f61d098 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -90,28 +90,30 @@ struct io_tlb_mem {
 	phys_addr_t end;
 	unsigned long nslabs;
 	unsigned long used;
-	unsigned int *list;
 	unsigned int index;
-	phys_addr_t *orig_addr;
-	size_t *alloc_size;
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	struct io_tlb_slot {
+		phys_addr_t orig_addr;
+		size_t alloc_size;
+		unsigned int list;
+	} slots[];
 };
-extern struct io_tlb_mem io_tlb_default_mem;
+extern struct io_tlb_mem *io_tlb_default_mem;
 
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 
-	return paddr >= mem->start && paddr < mem->end;
+	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
-void __init swiotlb_adjust_size(unsigned long new_size);
+void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
@@ -135,7 +137,7 @@ static inline bool is_swiotlb_active(void)
 	return false;
 }
 
-static inline void swiotlb_adjust_size(unsigned long new_size)
+static inline void swiotlb_adjust_size(unsigned long size)
 {
 }
 #endif /* CONFIG_SWIOTLB */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6aa84fa3b1467e..b7bcd7b804bfe8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -63,7 +63,7 @@
 
 enum swiotlb_force swiotlb_force;
 
-struct io_tlb_mem io_tlb_default_mem;
+struct io_tlb_mem *io_tlb_default_mem;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -71,15 +71,15 @@ struct io_tlb_mem io_tlb_default_mem;
  */
 static unsigned int max_segment;
 
+static unsigned long default_nslabs = IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT;
+
 static int __init
 setup_io_tlb_npages(char *str)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-
 	if (isdigit(*str)) {
-		mem->nslabs = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE);
+		default_nslabs =
+			ALIGN(simple_strtoul(str, &str, 0), IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -87,24 +87,22 @@ setup_io_tlb_npages(char *str)
 		swiotlb_force = SWIOTLB_FORCE;
 	} else if (!strcmp(str, "noforce")) {
 		swiotlb_force = SWIOTLB_NO_FORCE;
-		mem->nslabs = 1;
+		default_nslabs = 1;
 	}
 
 	return 0;
 }
 early_param("swiotlb", setup_io_tlb_npages);
 
-static bool no_iotlb_memory;
-
 unsigned long swiotlb_nr_tbl(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : io_tlb_default_mem.nslabs;
+	return io_tlb_default_mem ? io_tlb_default_mem->nslabs : 0;
 }
 EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 
 unsigned int swiotlb_max_segment(void)
 {
-	return unlikely(no_iotlb_memory) ? 0 : max_segment;
+	return io_tlb_default_mem ? max_segment : 0;
 }
 EXPORT_SYMBOL_GPL(swiotlb_max_segment);
 
@@ -118,44 +116,32 @@ void swiotlb_set_max_segment(unsigned int val)
 unsigned long swiotlb_size_or_default(void)
 {
-	unsigned long size;
-
-	size = io_tlb_default_mem.nslabs << IO_TLB_SHIFT;
-
-	return size ? size : (IO_TLB_DEFAULT_SIZE);
+	return default_nslabs << IO_TLB_SHIFT;
 }
 
-void __init swiotlb_adjust_size(unsigned long new_size)
+void __init swiotlb_adjust_size(unsigned long size)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long size;
-
 	/*
 	 * If swiotlb parameter has not been specified, give a chance to
 	 * architectures such as those supporting memory encryption to
 	 * adjust/expand SWIOTLB size for their use.
 	 */
-	if (!mem->nslabs) {
-		size = ALIGN(new_size, IO_TLB_SIZE);
-		mem->nslabs = size >> IO_TLB_SHIFT;
-		mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE);
-
-		pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
-	}
+	size = ALIGN(size, IO_TLB_SIZE);
+	default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 }
 
 void swiotlb_print_info(void)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long bytes = mem->nslabs << IO_TLB_SHIFT;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 
-	if (no_iotlb_memory) {
+	if (!mem) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
 	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &mem->start, &mem->end,
-	       bytes >> 20);
+	       (mem->nslabs << IO_TLB_SHIFT) >> 20);
 }
 
 static inline unsigned long io_tlb_offset(unsigned long val)
@@ -176,13 +162,12 @@ static inline unsigned long nr_slots(u64 val)
  */
 void __init swiotlb_update_mem_attributes(void)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 	void *vaddr;
 	unsigned long bytes;
 
-	if (no_iotlb_memory || mem->late_alloc)
+	if (!mem || mem->late_alloc)
 		return;
-
 	vaddr = phys_to_virt(mem->start);
 	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
@@ -191,49 +176,33 @@ void __init swiotlb_update_mem_attributes(void)
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long i, bytes;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
 	/* protect against double initialization */
-	if (WARN_ON_ONCE(mem->start))
+	if (WARN_ON_ONCE(io_tlb_default_mem))
 		return -ENOMEM;
 
-	bytes = nslabs << IO_TLB_SHIFT;
-
+	alloc_size = PAGE_ALIGN(struct_size(mem, slots, nslabs));
+	mem = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!mem)
+		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
+		      __func__, alloc_size, PAGE_SIZE);
 	mem->nslabs = nslabs;
 	mem->start = __pa(tlb);
 	mem->end = mem->start + bytes;
 	mem->index = 0;
 	spin_lock_init(&mem->lock);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between mem->start and mem->end.
-	 */
-	alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(int));
-	mem->list = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!mem->list)
-		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
-		      __func__, alloc_size, PAGE_SIZE);
-
-	alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t));
-	mem->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!mem->orig_addr)
-		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
-		      __func__, alloc_size, PAGE_SIZE);
-
 	for (i = 0; i < mem->nslabs; i++) {
-		mem->list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->orig_addr[i] = INVALID_PHYS_ADDR;
-		mem->alloc_size[i] = 0;
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
 	}
-	no_iotlb_memory = false;
+	io_tlb_default_mem = mem;
 
 	if (verbose)
 		swiotlb_print_info();
-
 	swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
 	return 0;
 }
@@ -245,30 +214,21 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 void __init
 swiotlb_init(int verbose)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	size_t default_size = IO_TLB_DEFAULT_SIZE;
-	unsigned char *vstart;
-	unsigned long bytes;
-
-	if (!mem->nslabs) {
-		mem->nslabs = (default_size >> IO_TLB_SHIFT);
-		mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE);
-	}
-
-	bytes = mem->nslabs << IO_TLB_SHIFT;
+	size_t bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT);
+	void *tlb;
 
 	/* Get IO TLB memory from the low pages */
-	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
-	if (vstart && !swiotlb_init_with_tbl(vstart, mem->nslabs, verbose))
-		return;
-
-	if (mem->start) {
-		memblock_free_early(mem->start,
-				    PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT));
-		mem->start = 0;
-	}
+	tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	if (!tlb)
+		goto fail;
+	if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
+		goto fail_free_mem;
+	return;
+
+fail_free_mem:
+	memblock_free_early(__pa(tlb), bytes);
+fail:
 	pr_warn("Cannot allocate buffer");
-	no_iotlb_memory = true;
 }
 
 /*
@@ -279,23 +239,19 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long bytes, req_nslabs = mem->nslabs;
+	unsigned long nslabs =
+		ALIGN(default_size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	unsigned long bytes;
 	unsigned char *vstart = NULL;
 	unsigned int order;
 	int rc = 0;
 
-	if (!mem->nslabs) {
-		mem->nslabs = (default_size >> IO_TLB_SHIFT);
-		mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE);
-	}
-
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(mem->nslabs << IO_TLB_SHIFT);
-	mem->nslabs = SLABS_PER_PAGE << order;
-	bytes = mem->nslabs << IO_TLB_SHIFT;
+	order = get_order(nslabs << IO_TLB_SHIFT);
+	nslabs = SLABS_PER_PAGE << order;
+	bytes = nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
 						  order);
 		if (vstart)
 			break;
 		order--;
 	}
 
-	if (!vstart) {
-		mem->nslabs = req_nslabs;
+	if (!vstart)
 		return -ENOMEM;
-	}
+
 	if (order != get_order(bytes)) {
 		pr_warn("only able to allocate %ld MB\n",
 			(PAGE_SIZE << order) >> 20);
-		mem->nslabs = SLABS_PER_PAGE << order;
+		nslabs = SLABS_PER_PAGE << order;
 	}
-	rc = swiotlb_late_init_with_tbl(vstart, mem->nslabs);
+	rc = swiotlb_late_init_with_tbl(vstart, nslabs);
 	if (rc)
 		free_pages((unsigned long)vstart, order);
 
 	return rc;
 }
 
-static void swiotlb_cleanup(void)
-{
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-
-	mem->end = 0;
-	mem->start = 0;
-	mem->nslabs = 0;
-	max_segment = 0;
-}
-
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long i, bytes;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+	struct io_tlb_mem *mem;
 
 	/* protect against double initialization */
-	if (WARN_ON_ONCE(mem->start))
+	if (WARN_ON_ONCE(io_tlb_default_mem))
 		return -ENOMEM;
 
-	bytes = nslabs << IO_TLB_SHIFT;
+	mem = (void *)__get_free_pages(GFP_KERNEL,
+		get_order(struct_size(mem, slots, nslabs)));
+	if (!mem)
+		return -ENOMEM;
 
 	mem->nslabs = nslabs;
 	mem->start = virt_to_phys(tlb);
@@ -349,84 +297,35 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	mem->index = 0;
 	mem->late_alloc = 1;
 	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
 
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	memset(tlb, 0, bytes);
 
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between mem->start and mem->end.
-	 */
-	mem->list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-					get_order(mem->nslabs * sizeof(int)));
-	if (!mem->list)
-		goto cleanup3;
-
-	mem->orig_addr = (phys_addr_t *)
-		__get_free_pages(GFP_KERNEL,
-				 get_order(mem->nslabs *
-					   sizeof(phys_addr_t)));
-	if (!mem->orig_addr)
-		goto cleanup4;
-
-	mem->alloc_size = (size_t *)
-		__get_free_pages(GFP_KERNEL,
-				 get_order(mem->nslabs *
-					   sizeof(size_t)));
-	if (!mem->alloc_size)
-		goto cleanup5;
-
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->orig_addr[i] = INVALID_PHYS_ADDR;
-		mem->alloc_size[i] = 0;
-	}
-	no_iotlb_memory = false;
-
+	io_tlb_default_mem = mem;
 	swiotlb_print_info();
 	swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
-
 	return 0;
-
-cleanup5:
-	free_pages((unsigned long)mem->orig_addr,
-		   get_order(mem->nslabs * sizeof(phys_addr_t)));
-cleanup4:
-	free_pages((unsigned long)mem->list,
-		   get_order(mem->nslabs * sizeof(int)));
-	mem->list = NULL;
-cleanup3:
-	swiotlb_cleanup();
-	return -ENOMEM;
 }
 
 void __init swiotlb_exit(void)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+	size_t size;
 
-	if (!mem->orig_addr)
+	if (!mem)
 		return;
 
-	if (mem->late_alloc) {
-		free_pages((unsigned long)mem->alloc_size,
-			   get_order(mem->nslabs * sizeof(size_t)));
-		free_pages((unsigned long)mem->orig_addr,
-			   get_order(mem->nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)mem->list,
-			   get_order(mem->nslabs * sizeof(int)));
-		free_pages((unsigned long)phys_to_virt(mem->start),
-			   get_order(mem->nslabs << IO_TLB_SHIFT));
-	} else {
-		memblock_free_late(__pa(mem->alloc_size),
-				   PAGE_ALIGN(mem->nslabs * sizeof(size_t)));
-		memblock_free_late(__pa(mem->orig_addr),
-				   PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t)));
-		memblock_free_late(__pa(mem->list),
-				   PAGE_ALIGN(mem->nslabs * sizeof(int)));
-		memblock_free_late(mem->start,
-				   PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT));
-	}
-	swiotlb_cleanup();
+	size = struct_size(mem, slots, mem->nslabs);
+	if (mem->late_alloc)
+		free_pages((unsigned long)mem, get_order(size));
+	else
+		memblock_free_late(__pa(mem), PAGE_ALIGN(size));
+	io_tlb_default_mem = NULL;
 }
 
 /*
@@ -435,10 +334,10 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr,
 			   size_t size, enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = mem->orig_addr[index];
-	size_t alloc_size = mem->alloc_size[index];
+	phys_addr_t orig_addr = mem->slots[index].orig_addr;
+	size_t alloc_size = mem->slots[index].alloc_size;
 	unsigned long pfn = PFN_DOWN(orig_addr);
 	unsigned char *vaddr = phys_to_virt(tlb_addr);
 
@@ -517,7 +416,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		      size_t alloc_size)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -559,7 +458,7 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		if (!iommu_is_span_boundary(index, nslots,
 					    nr_slots(tbl_dma_addr),
 					    max_slots)) {
-			if (mem->list[index] >= nslots)
+			if (mem->slots[index].list >= nslots)
 				goto found;
 		}
 		index = wrap_index(mem, index + stride);
@@ -571,11 +470,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 found:
 	for (i = index; i < index + nslots; i++)
-		mem->list[i] = 0;
+		mem->slots[i].list = 0;
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
-	     mem->list[i]; i--)
-		mem->list[i] = ++count;
+	     mem->slots[i].list; i--)
+		mem->slots[i].list = ++count;
 
 	/*
 	 * Update the indices to avoid searching in the next round.
@@ -594,12 +493,12 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int index, i;
 	phys_addr_t tlb_addr;
 
-	if (no_iotlb_memory)
+	if (!mem)
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
 	if (mem_encrypt_active())
@@ -626,8 +525,9 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * needed.
 	 */
 	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
-		mem->orig_addr[index + i] = slot_addr(orig_addr, i);
-		mem->alloc_size[index + i] = alloc_size - (i << IO_TLB_SHIFT);
+		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
+		mem->slots[index + i].alloc_size =
+			alloc_size - (i << IO_TLB_SHIFT);
 	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
@@ -643,11 +543,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 		size_t mapping_size, enum dma_data_direction dir,
 		unsigned long attrs)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-	int nslots = nr_slots(mem->alloc_size[index] + offset);
+	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
 	/*
@@ -665,7 +565,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 */
 	spin_lock_irqsave(&mem->lock, flags);
 	if (index + nslots < ALIGN(index + 1, IO_TLB_SEGSIZE))
-		count = mem->list[index + nslots];
+		count = mem->slots[index + nslots].list;
 	else
 		count = 0;
 
@@ -674,9 +574,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 * superceeding slots
 	 */
 	for (i = index + nslots - 1; i >= index; i--) {
-		mem->list[i] = ++count;
-		mem->orig_addr[i] = INVALID_PHYS_ADDR;
-		mem->alloc_size[i] = 0;
+		mem->slots[i].list = ++count;
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
 	}
 
 	/*
@@ -684,9 +584,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	 * available (non zero)
 	 */
 	for (i = index - 1;
-	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && mem->list[i];
+	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 && mem->slots[i].list;
 	     i--)
-		mem->list[i] = ++count;
+		mem->slots[i].list = ++count;
 	mem->used -= nslots;
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
@@ -750,19 +650,17 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 bool is_swiotlb_active(void)
 {
-	/*
-	 * When SWIOTLB is initialized, even if mem->start points to physical
-	 * address zero, mem->end surely doesn't.
-	 */
-	return io_tlb_default_mem.end != 0;
+	return io_tlb_default_mem != NULL;
 }
 
 #ifdef CONFIG_DEBUG_FS
 
 static int __init swiotlb_create_debugfs(void)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_default_mem;
 
+	if (!mem)
+		return 0;
 	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
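[As background for the list-counter manipulation in the hunks above:
each slot's list value counts the contiguous free slots from that slot
to the end of its IO_TLB_SEGSIZE-aligned segment, which is what lets
find_slots() test a single counter per candidate index, and what the
unmap path restores.  A small stand-alone model of the scheme follows;
it is a hypothetical simplification to one segment in userspace, not
kernel code.]

#include <stdio.h>

#define SEGSIZE 8                       /* stand-in for IO_TLB_SEGSIZE */

static unsigned int list[SEGSIZE];

static void init_seg(void)
{
        /* all free: list[i] = slots remaining in the segment from i */
        for (int i = 0; i < SEGSIZE; i++)
                list[i] = SEGSIZE - i;  /* 8 7 6 5 4 3 2 1 */
}

static void alloc_slots(int index, int nslots)
{
        for (int i = index; i < index + nslots; i++)
                list[i] = 0;            /* mark allocated */
        /* shrink the counts of the free run preceding the allocation */
        for (int i = index - 1, count = 0; i >= 0 && list[i]; i--)
                list[i] = ++count;
}

static void free_slots(int index, int nslots)
{
        /* free slots that follow the freed range, if still in-segment */
        int count = (index + nslots < SEGSIZE) ? list[index + nslots] : 0;

        /* rebuild counts over the freed range, back to front */
        for (int i = index + nslots - 1; i >= index; i--)
                list[i] = ++count;
        /* and extend through the free run that precedes it */
        for (int i = index - 1; i >= 0 && list[i]; i--)
                list[i] = ++count;
}

int main(void)
{
        init_seg();
        alloc_slots(2, 3);              /* take slots 2..4 */
        for (int i = 0; i < SEGSIZE; i++)
                printf("%u ", list[i]); /* prints: 2 1 0 0 0 3 2 1 */
        printf("\n");
        free_slots(2, 3);
        for (int i = 0; i < SEGSIZE; i++)
                printf("%u ", list[i]); /* prints: 8 7 6 5 4 3 2 1 */
        printf("\n");
        return 0;
}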
From patchwork Mon Mar 1 07:44:36 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12108897
From: Christoph Hellwig
To: Konrad Rzeszutek Wilk
Cc: Michael Ellerman, Dongli Zhang, Claire Chang, xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org
Subject: [PATCH 14/14] swiotlb: remove swiotlb_nr_tbl
Date: Mon, 1 Mar 2021 08:44:36 +0100
Message-Id: <20210301074436.919889-15-hch@lst.de>
In-Reply-To: <20210301074436.919889-1-hch@lst.de>
References: <20210301074436.919889-1-hch@lst.de>

All callers only use it to check if swiotlb is active at all, and they
can use is_swiotlb_active for that instead.  In the longer run drivers
need to stop using is_swiotlb_active as well, but let's take the simple
step first.

Signed-off-by: Christoph Hellwig
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 1 -
 kernel/dma/swiotlb.c                         | 7 +------
 5 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ad22f42541bda6..a9d65fc8aa0eab 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (swiotlb_nr_tbl()) {
+	if (is_swiotlb_active()) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index a37bc3d7b38b3b..9662522aa0664a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = !!swiotlb_nr_tbl();
+	need_swiotlb = is_swiotlb_active();
 #endif
 
 	ret = ttm_bo_device_init(&drm->ttm.bdev, &nouveau_bo_driver,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index c6fe0cfec0f681..a549e822033fd6 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !swiotlb_nr_tbl()) {
+	if (!err && !is_swiotlb_active()) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 63f7a63f61d098..216854a5e5134b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -37,7 +37,6 @@ enum swiotlb_force {
 
 extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
-extern unsigned long swiotlb_nr_tbl(void);
 unsigned long swiotlb_size_or_default(void);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern int swiotlb_late_init_with_default_size(size_t default_size);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b7bcd7b804bfe8..809d5fdc204675 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -94,12 +94,6 @@ setup_io_tlb_npages(char *str)
 }
 early_param("swiotlb", setup_io_tlb_npages);
 
-unsigned long swiotlb_nr_tbl(void)
-{
-	return io_tlb_default_mem ? io_tlb_default_mem->nslabs : 0;
-}
-EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
-
 unsigned int swiotlb_max_segment(void)
 {
 	return io_tlb_default_mem ? max_segment : 0;
@@ -652,6 +646,7 @@ bool is_swiotlb_active(void)
 {
 	return io_tlb_default_mem != NULL;
 }
+EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
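[The EXPORT_SYMBOL_GPL added above is what lets modular code keep
working after the swiotlb_nr_tbl export disappears.  A hypothetical
out-of-tree module performing the same conversion as the i915, nouveau
and pcifront hunks might look like the sketch below; it is illustrative
only, not part of the series, and compiles only inside a kernel tree
that carries this patch.]

// Hypothetical demo module: detect an active swiotlb the new way.
// Before this patch such code would have tested swiotlb_nr_tbl() != 0;
// afterwards is_swiotlb_active() is the supported check (the
// !CONFIG_SWIOTLB stub simply returns false, so no #ifdef is needed).
#include <linux/module.h>
#include <linux/swiotlb.h>

static int __init swiotlb_probe_demo_init(void)
{
	if (is_swiotlb_active())
		pr_info("swiotlb active: bounce buffering is available\n");
	else
		pr_info("swiotlb not initialized\n");
	return 0;
}

static void __exit swiotlb_probe_demo_exit(void)
{
}

module_init(swiotlb_probe_demo_init);
module_exit(swiotlb_probe_demo_exit);
/* GPL license required: is_swiotlb_active is EXPORT_SYMBOL_GPL */
MODULE_LICENSE("GPL");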