From patchwork Wed Jan 6 03:41:19 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12000997
From: Claire Chang <tientzu@chromium.org>
To: robh+dt@kernel.org, mpe@ellerman.id.au, benh@kernel.crashing.org,
 paulus@samba.org, joro@8bytes.org, will@kernel.org, frowand.list@gmail.com,
 konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, hch@lst.de, m.szyprowski@samsung.com,
 robin.murphy@arm.com
Cc: grant.likely@arm.com, xypron.glpk@gmx.de, treding@nvidia.com,
 mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
 gregkh@linuxfoundation.org, saravanak@google.com, rafael.j.wysocki@intel.com,
 heikki.krogerus@linux.intel.com, andriy.shevchenko@linux.intel.com,
 rdunlap@infradead.org, dan.j.williams@intel.com, bgolaszewski@baylibre.com,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org, tfiga@chromium.org, drinkcat@chromium.org,
 Claire Chang <tientzu@chromium.org>
Subject: [RFC PATCH v3 1/6] swiotlb: Add io_tlb_mem struct
Date: Wed, 6 Jan 2021 11:41:19 +0800
Message-Id: <20210106034124.30560-2-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

Add a new struct, io_tlb_mem, as the IO TLB memory pool descriptor, and
move the relevant global variables into it. This will be useful later
when adding support for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 arch/powerpc/platforms/pseries/svm.c |   4 +-
 drivers/xen/swiotlb-xen.c            |   4 +-
 include/linux/swiotlb.h              |  39 +++-
 kernel/dma/swiotlb.c                 | 292 +++++++++++++--------------
 4 files changed, 178 insertions(+), 161 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 7b739cc7a8a9..2b767f1ca5fd 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -55,8 +55,8 @@ void __init svm_swiotlb_init(void)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
 		return;

-	if (io_tlb_start)
-		memblock_free_early(io_tlb_start,
+	if (io_tlb_default_mem.start)
+		memblock_free_early(io_tlb_default_mem.start,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
 	panic("SVM: Cannot allocate SWIOTLB buffer");
 }
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 2b385c1b4a99..4d17dff7ffd2 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -192,8 +192,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	/*
 	 * IO TLB memory already allocated. Just use it.
	 */
-	if (io_tlb_start != 0) {
-		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+	if (io_tlb_default_mem.start != 0) {
+		xen_io_tlb_start = phys_to_virt(io_tlb_default_mem.start);
 		goto end;
 	}

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d9c9fc9ca5d2..dd8eb57cbb8f 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -70,11 +70,46 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,

 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
-extern phys_addr_t io_tlb_start, io_tlb_end;
+
+/**
+ * struct io_tlb_mem - IO TLB Memory Pool Descriptor
+ *
+ * @start:	The start address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @end:	The end address of the swiotlb memory pool. Used to do a quick
+ *		range check to see if the memory was in fact allocated by this
+ *		API.
+ * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
+ *		@end. This is command line adjustable via setup_io_tlb_npages.
+ * @used:	The number of used IO TLB blocks.
+ * @list:	The free list describing the number of free entries available
+ *		from each index.
+ * @index:	The index to start searching in the next round.
+ * @orig_addr:	The original address corresponding to a mapped entry for the
+ *		sync operations.
+ * @lock:	The lock to protect the above data structures in the map and
+ *		unmap calls.
+ * @debugfs:	The dentry for debugfs.
+ */
+struct io_tlb_mem {
+	phys_addr_t start;
+	phys_addr_t end;
+	unsigned long nslabs;
+	unsigned long used;
+	unsigned int *list;
+	unsigned int index;
+	phys_addr_t *orig_addr;
+	spinlock_t lock;
+	struct dentry *debugfs;
+};
+extern struct io_tlb_mem io_tlb_default_mem;

 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= io_tlb_start && paddr < io_tlb_end;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
+
+	return paddr >= mem->start && paddr < mem->end;
 }

 void __init swiotlb_exit(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7c42df6e6100..e4368159f88a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -61,33 +61,11 @@
  * allocate a contiguous 1MB, we're probably in trouble anyway.
  */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)

 enum swiotlb_force swiotlb_force;

-/*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-phys_addr_t io_tlb_start, io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * The number of used IO TLB block
- */
-static unsigned long io_tlb_used;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+struct io_tlb_mem io_tlb_default_mem;

 /*
  * Max segment that we can provide which (if pages are contiguous) will
@@ -95,27 +73,17 @@ static unsigned int io_tlb_index;
  */
 static unsigned int max_segment;

-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */ -#define INVALID_PHYS_ADDR (~(phys_addr_t)0) -static phys_addr_t *io_tlb_orig_addr; - -/* - * Protect the above data structures in the map and unmap calls - */ -static DEFINE_SPINLOCK(io_tlb_lock); - static int late_alloc; static int __init setup_io_tlb_npages(char *str) { + struct io_tlb_mem *mem = &io_tlb_default_mem; + if (isdigit(*str)) { - io_tlb_nslabs = simple_strtoul(str, &str, 0); + mem->nslabs = simple_strtoul(str, &str, 0); /* avoid tail segment of size < IO_TLB_SEGSIZE */ - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } if (*str == ',') ++str; @@ -123,7 +91,7 @@ setup_io_tlb_npages(char *str) swiotlb_force = SWIOTLB_FORCE; } else if (!strcmp(str, "noforce")) { swiotlb_force = SWIOTLB_NO_FORCE; - io_tlb_nslabs = 1; + mem->nslabs = 1; } return 0; @@ -134,7 +102,7 @@ static bool no_iotlb_memory; unsigned long swiotlb_nr_tbl(void) { - return unlikely(no_iotlb_memory) ? 0 : io_tlb_nslabs; + return unlikely(no_iotlb_memory) ? 0 : io_tlb_default_mem.nslabs; } EXPORT_SYMBOL_GPL(swiotlb_nr_tbl); @@ -156,13 +124,14 @@ unsigned long swiotlb_size_or_default(void) { unsigned long size; - size = io_tlb_nslabs << IO_TLB_SHIFT; + size = io_tlb_default_mem.nslabs << IO_TLB_SHIFT; return size ? size : (IO_TLB_DEFAULT_SIZE); } void __init swiotlb_adjust_size(unsigned long new_size) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long size; /* @@ -170,10 +139,10 @@ void __init swiotlb_adjust_size(unsigned long new_size) * architectures such as those supporting memory encryption to * adjust/expand SWIOTLB size for their use. */ - if (!io_tlb_nslabs) { + if (!mem->nslabs) { size = ALIGN(new_size, 1 << IO_TLB_SHIFT); - io_tlb_nslabs = size >> IO_TLB_SHIFT; - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + mem->nslabs = size >> IO_TLB_SHIFT; + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20); } @@ -181,14 +150,15 @@ void __init swiotlb_adjust_size(unsigned long new_size) void swiotlb_print_info(void) { - unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT; + struct io_tlb_mem *mem = &io_tlb_default_mem; + unsigned long bytes = mem->nslabs << IO_TLB_SHIFT; if (no_iotlb_memory) { pr_warn("No low mem\n"); return; } - pr_info("mapped [mem %pa-%pa] (%luMB)\n", &io_tlb_start, &io_tlb_end, + pr_info("mapped [mem %pa-%pa] (%luMB)\n", &mem->start, &mem->end, bytes >> 20); } @@ -200,57 +170,59 @@ void swiotlb_print_info(void) */ void __init swiotlb_update_mem_attributes(void) { + struct io_tlb_mem *mem = &io_tlb_default_mem; void *vaddr; unsigned long bytes; if (no_iotlb_memory || late_alloc) return; - vaddr = phys_to_virt(io_tlb_start); - bytes = PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT); + vaddr = phys_to_virt(mem->start); + bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT); set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT); memset(vaddr, 0, bytes); } int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long i, bytes; size_t alloc_size; bytes = nslabs << IO_TLB_SHIFT; - io_tlb_nslabs = nslabs; - io_tlb_start = __pa(tlb); - io_tlb_end = io_tlb_start + bytes; + mem->nslabs = nslabs; + mem->start = __pa(tlb); + mem->end = mem->start + bytes; /* * Allocate and initialize the free list array. This array is used * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE - * between io_tlb_start and io_tlb_end. + * between mem->start and mem->end. 
*/ - alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int)); - io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE); - if (!io_tlb_list) + alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(int)); + mem->list = memblock_alloc(alloc_size, PAGE_SIZE); + if (!mem->list) panic("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); - alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)); - io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE); - if (!io_tlb_orig_addr) + alloc_size = PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t)); + mem->orig_addr = memblock_alloc(alloc_size, PAGE_SIZE); + if (!mem->orig_addr) panic("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); - for (i = 0; i < io_tlb_nslabs; i++) { - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); - io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; + for (i = 0; i < mem->nslabs; i++) { + mem->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); + mem->orig_addr[i] = INVALID_PHYS_ADDR; } - io_tlb_index = 0; + mem->index = 0; no_iotlb_memory = false; if (verbose) swiotlb_print_info(); - swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT); + swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; } @@ -261,26 +233,27 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) void __init swiotlb_init(int verbose) { + struct io_tlb_mem *mem = &io_tlb_default_mem; size_t default_size = IO_TLB_DEFAULT_SIZE; unsigned char *vstart; unsigned long bytes; - if (!io_tlb_nslabs) { - io_tlb_nslabs = (default_size >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + if (!mem->nslabs) { + mem->nslabs = (default_size >> IO_TLB_SHIFT); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } - bytes = io_tlb_nslabs << IO_TLB_SHIFT; + bytes = mem->nslabs << IO_TLB_SHIFT; /* Get IO TLB memory from the low pages */ vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE); - if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose)) + if (vstart && !swiotlb_init_with_tbl(vstart, mem->nslabs, verbose)) return; - if (io_tlb_start) { - memblock_free_early(io_tlb_start, - PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); - io_tlb_start = 0; + if (mem->start) { + memblock_free_early(mem->start, + PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT)); + mem->start = 0; } pr_warn("Cannot allocate buffer"); no_iotlb_memory = true; @@ -294,22 +267,23 @@ swiotlb_init(int verbose) int swiotlb_late_init_with_default_size(size_t default_size) { - unsigned long bytes, req_nslabs = io_tlb_nslabs; + struct io_tlb_mem *mem = &io_tlb_default_mem; + unsigned long bytes, req_nslabs = mem->nslabs; unsigned char *vstart = NULL; unsigned int order; int rc = 0; - if (!io_tlb_nslabs) { - io_tlb_nslabs = (default_size >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + if (!mem->nslabs) { + mem->nslabs = (default_size >> IO_TLB_SHIFT); + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); } /* * Get IO TLB memory from the low pages */ - order = get_order(io_tlb_nslabs << IO_TLB_SHIFT); - io_tlb_nslabs = SLABS_PER_PAGE << order; - bytes = io_tlb_nslabs << IO_TLB_SHIFT; + order = get_order(mem->nslabs << IO_TLB_SHIFT); + mem->nslabs = SLABS_PER_PAGE << order; + bytes = mem->nslabs << IO_TLB_SHIFT; while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, @@ -320,15 +294,15 @@ swiotlb_late_init_with_default_size(size_t default_size) } if (!vstart) { - io_tlb_nslabs = req_nslabs; + mem->nslabs = 
req_nslabs; return -ENOMEM; } if (order != get_order(bytes)) { pr_warn("only able to allocate %ld MB\n", (PAGE_SIZE << order) >> 20); - io_tlb_nslabs = SLABS_PER_PAGE << order; + mem->nslabs = SLABS_PER_PAGE << order; } - rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs); + rc = swiotlb_late_init_with_tbl(vstart, mem->nslabs); if (rc) free_pages((unsigned long)vstart, order); @@ -337,22 +311,25 @@ swiotlb_late_init_with_default_size(size_t default_size) static void swiotlb_cleanup(void) { - io_tlb_end = 0; - io_tlb_start = 0; - io_tlb_nslabs = 0; + struct io_tlb_mem *mem = &io_tlb_default_mem; + + mem->end = 0; + mem->start = 0; + mem->nslabs = 0; max_segment = 0; } int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long i, bytes; bytes = nslabs << IO_TLB_SHIFT; - io_tlb_nslabs = nslabs; - io_tlb_start = virt_to_phys(tlb); - io_tlb_end = io_tlb_start + bytes; + mem->nslabs = nslabs; + mem->start = virt_to_phys(tlb); + mem->end = mem->start + bytes; set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT); memset(tlb, 0, bytes); @@ -360,39 +337,39 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) /* * Allocate and initialize the free list array. This array is used * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE - * between io_tlb_start and io_tlb_end. + * between mem->start and mem->end. */ - io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL, - get_order(io_tlb_nslabs * sizeof(int))); - if (!io_tlb_list) + mem->list = (unsigned int *)__get_free_pages(GFP_KERNEL, + get_order(mem->nslabs * sizeof(int))); + if (!mem->list) goto cleanup3; - io_tlb_orig_addr = (phys_addr_t *) + mem->orig_addr = (phys_addr_t *) __get_free_pages(GFP_KERNEL, - get_order(io_tlb_nslabs * + get_order(mem->nslabs * sizeof(phys_addr_t))); - if (!io_tlb_orig_addr) + if (!mem->orig_addr) goto cleanup4; - for (i = 0; i < io_tlb_nslabs; i++) { - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); - io_tlb_orig_addr[i] = INVALID_PHYS_ADDR; + for (i = 0; i < mem->nslabs; i++) { + mem->list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); + mem->orig_addr[i] = INVALID_PHYS_ADDR; } - io_tlb_index = 0; + mem->index = 0; no_iotlb_memory = false; swiotlb_print_info(); late_alloc = 1; - swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT); + swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; cleanup4: - free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * - sizeof(int))); - io_tlb_list = NULL; + free_pages((unsigned long)mem->list, + get_order(mem->nslabs * sizeof(int))); + mem->list = NULL; cleanup3: swiotlb_cleanup(); return -ENOMEM; @@ -400,23 +377,25 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) void __init swiotlb_exit(void) { - if (!io_tlb_orig_addr) + struct io_tlb_mem *mem = &io_tlb_default_mem; + + if (!mem->orig_addr) return; if (late_alloc) { - free_pages((unsigned long)io_tlb_orig_addr, - get_order(io_tlb_nslabs * sizeof(phys_addr_t))); - free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * - sizeof(int))); - free_pages((unsigned long)phys_to_virt(io_tlb_start), - get_order(io_tlb_nslabs << IO_TLB_SHIFT)); + free_pages((unsigned long)mem->orig_addr, + get_order(mem->nslabs * sizeof(phys_addr_t))); + free_pages((unsigned long)mem->list, + get_order(mem->nslabs * sizeof(int))); + free_pages((unsigned long)phys_to_virt(mem->start), + get_order(mem->nslabs << IO_TLB_SHIFT)); } else { - memblock_free_late(__pa(io_tlb_orig_addr), - 
PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t))); - memblock_free_late(__pa(io_tlb_list), - PAGE_ALIGN(io_tlb_nslabs * sizeof(int))); - memblock_free_late(io_tlb_start, - PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); + memblock_free_late(__pa(mem->orig_addr), + PAGE_ALIGN(mem->nslabs * sizeof(phys_addr_t))); + memblock_free_late(__pa(mem->list), + PAGE_ALIGN(mem->nslabs * sizeof(int))); + memblock_free_late(mem->start, + PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT)); } swiotlb_cleanup(); } @@ -465,7 +444,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start); + struct io_tlb_mem *mem = &io_tlb_default_mem; + dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, mem->start); unsigned long flags; phys_addr_t tlb_addr; unsigned int nslots, stride, index, wrap; @@ -516,13 +496,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, * Find suitable number of IO TLB entries size that will fit this * request and allocate a buffer from that IO TLB pool. */ - spin_lock_irqsave(&io_tlb_lock, flags); + spin_lock_irqsave(&mem->lock, flags); - if (unlikely(nslots > io_tlb_nslabs - io_tlb_used)) + if (unlikely(nslots > mem->nslabs - mem->used)) goto not_found; - index = ALIGN(io_tlb_index, stride); - if (index >= io_tlb_nslabs) + index = ALIGN(mem->index, stride); + if (index >= mem->nslabs) index = 0; wrap = index; @@ -530,7 +510,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, while (iommu_is_span_boundary(index, nslots, offset_slots, max_slots)) { index += stride; - if (index >= io_tlb_nslabs) + if (index >= mem->nslabs) index = 0; if (index == wrap) goto not_found; @@ -541,40 +521,40 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, * contiguous buffers, we allocate the buffers from that slot * and mark the entries as '0' indicating unavailable. */ - if (io_tlb_list[index] >= nslots) { + if (mem->list[index] >= nslots) { int count = 0; for (i = index; i < (int) (index + nslots); i++) - io_tlb_list[i] = 0; - for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--) - io_tlb_list[i] = ++count; - tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT); + mem->list[i] = 0; + for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && mem->list[i]; i--) + mem->list[i] = ++count; + tlb_addr = mem->start + (index << IO_TLB_SHIFT); /* * Update the indices to avoid searching in the next * round. */ - io_tlb_index = ((index + nslots) < io_tlb_nslabs - ? (index + nslots) : 0); + mem->index = ((index + nslots) < mem->nslabs + ? 
(index + nslots) : 0); goto found; } index += stride; - if (index >= io_tlb_nslabs) + if (index >= mem->nslabs) index = 0; } while (index != wrap); not_found: - tmp_io_tlb_used = io_tlb_used; + tmp_io_tlb_used = mem->used; - spin_unlock_irqrestore(&io_tlb_lock, flags); + spin_unlock_irqrestore(&mem->lock, flags); if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n", - alloc_size, io_tlb_nslabs, tmp_io_tlb_used); + alloc_size, mem->nslabs, tmp_io_tlb_used); return (phys_addr_t)DMA_MAPPING_ERROR; found: - io_tlb_used += nslots; - spin_unlock_irqrestore(&io_tlb_lock, flags); + mem->used += nslots; + spin_unlock_irqrestore(&mem->lock, flags); /* * Save away the mapping from the original address to the DMA address. @@ -582,7 +562,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, * needed. */ for (i = 0; i < nslots; i++) - io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT); + mem->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT); if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE); @@ -597,10 +577,11 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { + struct io_tlb_mem *mem = &io_tlb_default_mem; unsigned long flags; int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; - int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT; - phys_addr_t orig_addr = io_tlb_orig_addr[index]; + int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; + phys_addr_t orig_addr = mem->orig_addr[index]; /* * First, sync the memory before unmapping the entry @@ -616,36 +597,37 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, * While returning the entries to the free list, we merge the entries * with slots below and above the pool being returned. */ - spin_lock_irqsave(&io_tlb_lock, flags); + spin_lock_irqsave(&mem->lock, flags); { count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ? 
-			 io_tlb_list[index + nslots] : 0);
+			 mem->list[index + nslots] : 0);

 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with succeeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--) {
-			io_tlb_list[i] = ++count;
-			io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+			mem->list[i] = ++count;
+			mem->orig_addr[i] = INVALID_PHYS_ADDR;
 		}

 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non-zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && mem->list[i]; i--)
+			mem->list[i] = ++count;

-		io_tlb_used -= nslots;
+		mem->used -= nslots;
 	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&mem->lock, flags);
 }

 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
 {
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = mem->orig_addr[index];

 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
@@ -713,21 +695,21 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 bool is_swiotlb_active(void)
 {
 	/*
-	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
-	 * address zero, io_tlb_end surely doesn't.
+	 * When SWIOTLB is initialized, even if mem->start points to physical
+	 * address zero, mem->end surely doesn't.
 	 */
-	return io_tlb_end != 0;
+	return io_tlb_default_mem.end != 0;
 }

 #ifdef CONFIG_DEBUG_FS

 static int __init swiotlb_create_debugfs(void)
 {
-	struct dentry *root;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;

-	root = debugfs_create_dir("swiotlb", NULL);
-	debugfs_create_ulong("io_tlb_nslabs", 0400, root, &io_tlb_nslabs);
-	debugfs_create_ulong("io_tlb_used", 0400, root, &io_tlb_used);
+	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
+	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);

 	return 0;
 }
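The descriptor above makes the pool's slot arithmetic explicit: a bounce
buffer's physical address is mem->start plus the slot index shifted by
IO_TLB_SHIFT, and unmapping inverts that to recover the index. Below is a
minimal user-space sketch of just that arithmetic; it is not part of the
patch, and it assumes IO_TLB_SHIFT == 11 (2 KiB slots, as in this kernel)
and an invented pool placement.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define IO_TLB_SHIFT 11			/* 2 KiB per slot, as in the kernel */

struct io_tlb_mem_model {
	uint64_t start;			/* mem->start: pool base (phys) */
	uint64_t end;			/* mem->end:   pool limit (phys) */
	unsigned long nslabs;		/* mem->nslabs: number of slots */
};

/* Models is_swiotlb_buffer(): the quick range check the struct documents. */
static int in_pool(const struct io_tlb_mem_model *mem, uint64_t paddr)
{
	return paddr >= mem->start && paddr < mem->end;
}

int main(void)
{
	/* A 64 MiB pool at 256 MiB, i.e. 32768 slots of 2 KiB. */
	struct io_tlb_mem_model mem = {
		.start  = 256 << 20,
		.end    = (256 << 20) + (64 << 20),
		.nslabs = (64 << 20) >> IO_TLB_SHIFT,
	};
	unsigned long index = 42;

	/* tlb_addr = mem->start + (index << IO_TLB_SHIFT), as in map_single */
	uint64_t tlb_addr = mem.start + ((uint64_t)index << IO_TLB_SHIFT);

	/* index = (tlb_addr - mem->start) >> IO_TLB_SHIFT, as in unmap_single */
	assert((tlb_addr - mem.start) >> IO_TLB_SHIFT == index);
	assert(in_pool(&mem, tlb_addr));

	printf("slot %lu -> phys %#llx\n", index, (unsigned long long)tlb_addr);
	return 0;
}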
From patchwork Wed Jan 6 03:41:20 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12001005
From: Claire Chang <tientzu@chromium.org>
Subject: [RFC PATCH v3 2/6] swiotlb: Add restricted DMA pool
Date: Wed, 6 Jan 2021 11:41:20 +0800
Message-Id: <20210106034124.30560-3-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes in the device tree.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h  |   4 ++
 include/linux/swiotlb.h |   7 +-
 kernel/dma/Kconfig      |   1 +
 kernel/dma/swiotlb.c    | 144 ++++++++++++++++++++++++++++++++++------
 4 files changed, 131 insertions(+), 25 deletions(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 89bb8b84173e..ca6f71ec8871 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -413,6 +413,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -515,6 +516,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd8eb57cbb8f..a1bbd7788885 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -76,12 +76,13 @@ extern enum swiotlb_force swiotlb_force;
 *
 * @start:	The start address of the swiotlb memory pool. Used to do a quick
 *		range check to see if the memory was in fact allocated by this
- *		API.
+ *		API. For a restricted DMA pool, this is device-tree adjustable.
 * @end:	The end address of the swiotlb memory pool. Used to do a quick
 *		range check to see if the memory was in fact allocated by this
- *		API.
+ *		API. For a restricted DMA pool, this is device-tree adjustable.
 * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For the default swiotlb, this is command line adjustable
+ *		via setup_io_tlb_npages.
 * @used:	The number of used IO TLB blocks.
 * @list:	The free list describing the number of free entries available
 *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 479fc145acfc..131a0a66781b 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -82,6 +82,7 @@ config ARCH_HAS_FORCE_DMA_UNENCRYPTED
 config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
+	select OF_EARLY_FLATTREE

 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index e4368159f88a..7fb2ac087d23 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -36,6 +36,11 @@ #include #include #include +#include +#include +#include +#include +#include #ifdef CONFIG_DEBUG_FS #include #endif @@ -319,20 +324,21 @@ static void swiotlb_cleanup(void) max_segment = 0; } -int -swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) +static int swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start, + size_t size) { - struct io_tlb_mem *mem = &io_tlb_default_mem; - unsigned long i, bytes; + unsigned long i; + void *vaddr = phys_to_virt(start); - bytes = nslabs << IO_TLB_SHIFT; + size = ALIGN(size, 1 << IO_TLB_SHIFT); + mem->nslabs = size >> IO_TLB_SHIFT; + mem->nslabs = ALIGN(mem->nslabs, IO_TLB_SEGSIZE); - mem->nslabs = nslabs; - mem->start = virt_to_phys(tlb); - mem->end = mem->start + bytes; + mem->start = start; + mem->end = mem->start + size; - set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT); - memset(tlb, 0, bytes); + set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT); + memset(vaddr, 0, size); /* * Allocate and initialize the free list array. This array is used @@ -356,13 +362,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) mem->orig_addr[i] = INVALID_PHYS_ADDR; } mem->index = 0; - no_iotlb_memory = false; - - swiotlb_print_info(); - - late_alloc = 1; - - swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT); return 0; @@ -375,6 +374,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) return -ENOMEM; } +int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) +{ + struct io_tlb_mem *mem = &io_tlb_default_mem; + unsigned long bytes = nslabs << IO_TLB_SHIFT; + int ret; + + ret = swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), bytes); + if (ret) + return ret; + + no_iotlb_memory = false; + + swiotlb_print_info(); + + late_alloc = 1; + + swiotlb_set_max_segment(bytes); + + return 0; +} + void __init swiotlb_exit(void) { struct io_tlb_mem *mem = &io_tlb_default_mem; @@ -703,16 +723,96 @@ bool is_swiotlb_active(void) #ifdef CONFIG_DEBUG_FS -static int __init swiotlb_create_debugfs(void) +static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name, + struct dentry *node) { - struct io_tlb_mem *mem = &io_tlb_default_mem; - - mem->debugfs = debugfs_create_dir("swiotlb", NULL); + mem->debugfs = debugfs_create_dir(name, node); debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs); debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used); +} + +static int __init swiotlb_create_default_debugfs(void) +{ + swiotlb_create_debugfs(&io_tlb_default_mem, "swiotlb", NULL); + return 0; } -late_initcall(swiotlb_create_debugfs); +late_initcall(swiotlb_create_default_debugfs); #endif + +static int rmem_swiotlb_device_init(struct reserved_mem *rmem, + struct device *dev) +{ + struct io_tlb_mem *mem = rmem->priv; + int ret; + + if (dev->dma_io_tlb_mem) + return -EBUSY; + + if (!mem) { + mem = kzalloc(sizeof(*mem), GFP_KERNEL); + if (!mem) + return -ENOMEM; + + if (!memremap(rmem->base, rmem->size, MEMREMAP_WB)) { + ret = -EINVAL; + goto cleanup; + } + + ret = swiotlb_init_io_tlb_mem(mem, rmem->base, rmem->size); + if (ret) + goto cleanup; + + rmem->priv = mem; + } + +#ifdef CONFIG_DEBUG_FS + swiotlb_create_debugfs(mem, dev_name(dev), io_tlb_default_mem.debugfs); +#endif + + dev->dma_io_tlb_mem = mem; + + return 0; + +cleanup: + kfree(mem); + + return ret; +} + +static void 
rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (!dev)
+		return;
+
+#ifdef CONFIG_DEBUG_FS
+	debugfs_remove_recursive(dev->dma_io_tlb_mem->debugfs);
+#endif
+	dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init	= rmem_swiotlb_device_init,
+	.device_release	= rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
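For reference, a reserved-memory node that rmem_swiotlb_setup() above would
match looks roughly like the following. This is an illustrative sketch only:
the node names, addresses and sizes are invented, and the authoritative
binding is added by a later patch in this series (not shown here). The pool
is tied to a device through the standard memory-region phandle, which is
what ends up invoking rmem_swiotlb_device_init() for that device via the
reserved_mem_ops registered above.

reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	wifi_pool: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x4000000>;	/* 64 MiB */
	};
};

wifi: wifi@10000000 {
	/* DMA for this device is bounced through the pool above. */
	memory-region = <&wifi_pool>;
};

Note that the setup function rejects nodes carrying "reusable",
"linux,cma-default", "linux,dma-default" or "no-map", so the region must be
a plain, mapped reservation.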
From patchwork Wed Jan 6 03:41:21 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12001003
From: Claire Chang <tientzu@chromium.org>
Subject: [RFC PATCH v3 3/6] swiotlb: Use restricted DMA pool if available
Date: Wed, 6 Jan 2021 11:41:21 +0800
Message-Id: <20210106034124.30560-4-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available. Restricted DMA pools provide a basic level of protection
against DMA overwriting buffer contents at unexpected times. However, to
protect against general data leakage and system memory corruption, the
system needs a way to restrict DMA to a predefined memory region.
Signed-off-by: Claire Chang --- drivers/iommu/dma-iommu.c | 12 ++++++------ include/linux/swiotlb.h | 17 +++++++++++------ kernel/dma/direct.c | 8 ++++---- kernel/dma/direct.h | 10 ++++++---- kernel/dma/swiotlb.c | 13 ++++++------- 5 files changed, 33 insertions(+), 27 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index f0305e6aac1b..1343cc2ef27a 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -516,7 +516,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr, __iommu_dma_unmap(dev, dma_addr, size); - if (unlikely(is_swiotlb_buffer(phys))) + if (unlikely(is_swiotlb_buffer(dev, phys))) swiotlb_tbl_unmap_single(dev, phys, size, iova_align(iovad, size), dir, attrs); } @@ -592,7 +592,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys, } iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask); - if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys)) + if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(dev, phys)) swiotlb_tbl_unmap_single(dev, phys, org_size, aligned_size, dir, attrs); @@ -764,7 +764,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(phys, size, dir); - if (is_swiotlb_buffer(phys)) + if (is_swiotlb_buffer(dev, phys)) swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU); } @@ -777,7 +777,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev, return; phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle); - if (is_swiotlb_buffer(phys)) + if (is_swiotlb_buffer(dev, phys)) swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE); if (!dev_is_dma_coherent(dev)) @@ -798,7 +798,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir); - if (is_swiotlb_buffer(sg_phys(sg))) + if (is_swiotlb_buffer(dev, sg_phys(sg))) swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length, dir, SYNC_FOR_CPU); } @@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev, return; for_each_sg(sgl, sg, nelems, i) { - if (is_swiotlb_buffer(sg_phys(sg))) + if (is_swiotlb_buffer(dev, sg_phys(sg))) swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length, dir, SYNC_FOR_DEVICE); diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index a1bbd7788885..5135e5636042 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -2,12 +2,12 @@ #ifndef __LINUX_SWIOTLB_H #define __LINUX_SWIOTLB_H +#include #include #include #include #include -struct device; struct page; struct scatterlist; @@ -106,9 +106,14 @@ struct io_tlb_mem { }; extern struct io_tlb_mem io_tlb_default_mem; -static inline bool is_swiotlb_buffer(phys_addr_t paddr) +static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev) { - struct io_tlb_mem *mem = &io_tlb_default_mem; + return dev->dma_io_tlb_mem ? 
dev->dma_io_tlb_mem : &io_tlb_default_mem; +} + +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) +{ + struct io_tlb_mem *mem = get_io_tlb_mem(dev); return paddr >= mem->start && paddr < mem->end; } @@ -116,11 +121,11 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr) void __init swiotlb_exit(void); unsigned int swiotlb_max_segment(void); size_t swiotlb_max_mapping_size(struct device *dev); -bool is_swiotlb_active(void); +bool is_swiotlb_active(struct device *dev); void __init swiotlb_adjust_size(unsigned long new_size); #else #define swiotlb_force SWIOTLB_NO_FORCE -static inline bool is_swiotlb_buffer(phys_addr_t paddr) +static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr) { return false; } @@ -136,7 +141,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev) return SIZE_MAX; } -static inline bool is_swiotlb_active(void) +static inline bool is_swiotlb_active(struct device *dev) { return false; } diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 002268262c9a..30ccbc08e229 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev, for_each_sg(sgl, sg, nents, i) { phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg)); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_tbl_sync_single(dev, paddr, sg->length, dir, SYNC_FOR_DEVICE); @@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev, if (!dev_is_dma_coherent(dev)) arch_sync_dma_for_cpu(paddr, sg->length, dir); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_tbl_sync_single(dev, paddr, sg->length, dir, SYNC_FOR_CPU); @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask) size_t dma_direct_max_mapping_size(struct device *dev) { /* If SWIOTLB is active, use its maximum mapping size */ - if (is_swiotlb_active() && + if (is_swiotlb_active(dev) && (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE)) return swiotlb_max_mapping_size(dev); return SIZE_MAX; @@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev) bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr) { return !dev_is_dma_coherent(dev) || - is_swiotlb_buffer(dma_to_phys(dev, dma_addr)); + is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr)); } /** diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index b98615578737..7188834cc4c7 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev, { phys_addr_t paddr = dma_to_phys(dev, addr); - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE); if (!dev_is_dma_coherent(dev)) @@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev, arch_sync_dma_for_cpu_all(); } - if (unlikely(is_swiotlb_buffer(paddr))) + if (unlikely(is_swiotlb_buffer(dev, paddr))) swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU); if (dir == DMA_FROM_DEVICE) @@ -87,8 +87,10 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev, phys_addr_t phys = page_to_phys(page) + offset; dma_addr_t dma_addr = phys_to_dma(dev, phys); - if (unlikely(swiotlb_force == SWIOTLB_FORCE)) +#ifdef CONFIG_SWIOTLB + if (unlikely(swiotlb_force == SWIOTLB_FORCE) || dev->dma_io_tlb_mem) return swiotlb_map(dev, phys, size, dir, 
attrs); +#endif if (unlikely(!dma_capable(dev, dma_addr, size, true))) { if (swiotlb_force != SWIOTLB_NO_FORCE) @@ -113,7 +115,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr, if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) dma_direct_sync_single_for_cpu(dev, addr, size, dir); - if (unlikely(is_swiotlb_buffer(phys))) + if (unlikely(is_swiotlb_buffer(dev, phys))) swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs); } #endif /* _KERNEL_DMA_DIRECT_H */ diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 7fb2ac087d23..1f05af09e61a 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -222,7 +222,6 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose) mem->orig_addr[i] = INVALID_PHYS_ADDR; } mem->index = 0; - no_iotlb_memory = false; if (verbose) swiotlb_print_info(); @@ -464,7 +463,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = &io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, mem->start); unsigned long flags; phys_addr_t tlb_addr; @@ -475,7 +474,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr, unsigned long max_slots; unsigned long tmp_io_tlb_used; - if (no_iotlb_memory) + if (no_iotlb_memory && !hwdev->dma_io_tlb_mem) panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer"); if (mem_encrypt_active()) @@ -597,7 +596,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs) { - struct io_tlb_mem *mem = &io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); unsigned long flags; int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; @@ -645,7 +644,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr, size_t size, enum dma_data_direction dir, enum dma_sync_target target) { - struct io_tlb_mem *mem = &io_tlb_default_mem; + struct io_tlb_mem *mem = get_io_tlb_mem(hwdev); int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT; phys_addr_t orig_addr = mem->orig_addr[index]; @@ -712,13 +711,13 @@ size_t swiotlb_max_mapping_size(struct device *dev) return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE; } -bool is_swiotlb_active(void) +bool is_swiotlb_active(struct device *dev) { /* * When SWIOTLB is initialized, even if mem->start points to physical * address zero, mem->end surely doesn't. 
	 */
-	return io_tlb_default_mem.end != 0;
+	return io_tlb_default_mem.end != 0 || dev->dma_io_tlb_mem;
 }

 #ifdef CONFIG_DEBUG_FS
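The selection rule this patch adds is small enough to model on its own:
every swiotlb entry point now asks get_io_tlb_mem(dev), which returns the
device's restricted pool when rmem_swiotlb_device_init() attached one and
the global io_tlb_default_mem otherwise. A stand-alone C sketch of that
fallback (simplified types; not the kernel's code):

#include <stdio.h>

struct pool { const char *name; };

static struct pool default_pool = { "io_tlb_default_mem" };

struct device_model { struct pool *dma_io_tlb_mem; };

/* Mirrors get_io_tlb_mem(): the per-device pool wins, the global pool
 * is the fallback. */
static struct pool *get_pool(const struct device_model *dev)
{
	return dev->dma_io_tlb_mem ? dev->dma_io_tlb_mem : &default_pool;
}

int main(void)
{
	struct pool restricted = { "restricted pool" };
	struct device_model plain = { NULL };
	struct device_model wifi = { &restricted };

	printf("plain device uses: %s\n", get_pool(&plain)->name);
	printf("wifi device uses:  %s\n", get_pool(&wifi)->name);
	return 0;
}

Note also the dma_direct_map_page() hunk above: a device with
dma_io_tlb_mem set always takes the swiotlb_map() path, so its DMA is
bounced through the restricted pool even when swiotlb_force is off.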
From patchwork Wed Jan 6 03:41:22 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12001007
From: Claire Chang
To: robh+dt@kernel.org, mpe@ellerman.id.au, benh@kernel.crashing.org,
 paulus@samba.org, joro@8bytes.org, will@kernel.org, frowand.list@gmail.com,
 konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, hch@lst.de, m.szyprowski@samsung.com,
 robin.murphy@arm.com
Cc: grant.likely@arm.com, xypron.glpk@gmx.de, treding@nvidia.com,
 mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
 gregkh@linuxfoundation.org, saravanak@google.com, rafael.j.wysocki@intel.com,
 heikki.krogerus@linux.intel.com, andriy.shevchenko@linux.intel.com,
 rdunlap@infradead.org, dan.j.williams@intel.com, bgolaszewski@baylibre.com,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org, tfiga@chromium.org, drinkcat@chromium.org,
 Claire Chang
Subject: [RFC PATCH v3 4/6] swiotlb: Add restricted DMA alloc/free support.
Date: Wed, 6 Jan 2021 11:41:22 +0800
Message-Id: <20210106034124.30560-5-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

Add swiotlb_alloc() and swiotlb_free() to support memory allocation from
the restricted DMA pool.
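Drivers are not expected to call swiotlb_alloc()/swiotlb_free() directly;
once a device has a restricted pool attached (see patch 6/6), allocations
flow through the regular DMA API. A hypothetical driver-side sketch (the
device, buffer size, and probe function are illustrative, not part of this
patch):

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static int example_probe(struct device *dev)
{
	dma_addr_t dma_handle;
	void *buf;

	/*
	 * Backed by swiotlb_alloc() when dev->dma_io_tlb_mem is set, so
	 * the buffer is guaranteed to come from the restricted pool.
	 */
	buf = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... hand dma_handle to the device ... */

	dma_free_coherent(dev, SZ_4K, buf, dma_handle); /* swiotlb_free() */
	return 0;
}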
Signed-off-by: Claire Chang
---
 include/linux/swiotlb.h |   6 ++
 kernel/dma/direct.c     |  12 +++
 kernel/dma/swiotlb.c    | 171 +++++++++++++++++++++++++++++-----------
 3 files changed, 144 insertions(+), 45 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 5135e5636042..84fe96e40685 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -68,6 +68,12 @@ extern void swiotlb_tbl_sync_single(struct device *hwdev,
 dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 
+void *swiotlb_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		unsigned long attrs);
+
+void swiotlb_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_addr, unsigned long attrs);
+
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
 
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 30ccbc08e229..126e9b3354d6 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -137,6 +137,11 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	void *ret;
 	int err;
 
+#ifdef CONFIG_SWIOTLB
+	if (unlikely(dev->dma_io_tlb_mem))
+		return swiotlb_alloc(dev, size, dma_handle, attrs);
+#endif
+
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
@@ -246,6 +251,13 @@ void dma_direct_free(struct device *dev, size_t size,
 {
 	unsigned int page_order = get_order(size);
 
+#ifdef CONFIG_SWIOTLB
+	if (unlikely(dev->dma_io_tlb_mem)) {
+		swiotlb_free(dev, size, cpu_addr, dma_addr, attrs);
+		return;
+	}
+#endif
+
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
 	    !force_dma_unencrypted(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1f05af09e61a..ca88ef59435d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -459,14 +459,13 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
-		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+static int swiotlb_tbl_find_free_region(struct device *hwdev,
+					dma_addr_t tbl_dma_addr,
+					size_t alloc_size,
+					unsigned long attrs)
 {
 	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, mem->start);
 	unsigned long flags;
-	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
 	int i;
 	unsigned long mask;
@@ -477,15 +476,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	if (no_iotlb_memory && !hwdev->dma_io_tlb_mem)
 		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
-	if (mem_encrypt_active())
-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
-
-	if (mapping_size > alloc_size) {
-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
-			      mapping_size, alloc_size);
-		return (phys_addr_t)DMA_MAPPING_ERROR;
-	}
-
 	mask = dma_get_seg_boundary(hwdev);
 
 	tbl_dma_addr &= mask;
@@ -547,7 +537,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 			mem->list[i] = 0;
 		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && mem->list[i]; i--)
 			mem->list[i] = ++count;
-		tlb_addr = mem->start + (index << IO_TLB_SHIFT);
 
 		/*
 		 * Update the indices to avoid searching in the next
@@ -570,45 +559,21 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
 			 alloc_size, mem->nslabs, tmp_io_tlb_used);
-	return (phys_addr_t)DMA_MAPPING_ERROR;
+	return -ENOMEM;
+
 found:
 	mem->used += nslots;
 	spin_unlock_irqrestore(&mem->lock, flags);
 
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	for (i = 0; i < nslots; i++)
-		mem->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
-	return tlb_addr;
+	return index;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, size_t alloc_size,
-			      enum dma_data_direction dir, unsigned long attrs)
+static void swiotlb_tbl_release_region(struct device *hwdev, int index,
+				       size_t size)
 {
 	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = mem->orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -640,6 +605,69 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, mem->start);
+	phys_addr_t tlb_addr;
+	unsigned int nslots;
+	int index, i;
+
+	if (mem_encrypt_active())
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+
+	if (mapping_size > alloc_size) {
+		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+			      mapping_size, alloc_size);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	index = swiotlb_tbl_find_free_region(hwdev, tbl_dma_addr, alloc_size,
+					     attrs);
+	if (index < 0)
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+
+	tlb_addr = mem->start + (index << IO_TLB_SHIFT);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	for (i = 0; i < nslots; i++)
+		mem->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+
+	return tlb_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+			      size_t mapping_size, size_t alloc_size,
+			      enum dma_data_direction dir, unsigned long attrs)
+{
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = mem->orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_tbl_release_region(hwdev, index, alloc_size);
+}
+
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
@@ -706,6 +734,59 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	return dma_addr;
 }
 
+void *swiotlb_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		unsigned long attrs)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	int index;
+	void *vaddr;
+	phys_addr_t tlb_addr;
+
+	size = PAGE_ALIGN(size);
+	index = swiotlb_tbl_find_free_region(dev, mem->start, size, attrs);
+	if (index < 0)
+		return NULL;
+
+	tlb_addr = mem->start + (index << IO_TLB_SHIFT);
+	*dma_handle = phys_to_dma_unencrypted(dev, tlb_addr);
+
+	if (!dev_is_dma_coherent(dev)) {
+		unsigned long pfn = PFN_DOWN(tlb_addr);
+
+		/* remove any dirty cache lines on the kernel alias */
+		arch_dma_prep_coherent(pfn_to_page(pfn), size);
+
+		/* create a coherent mapping */
+		vaddr = dma_common_contiguous_remap(
+				pfn_to_page(pfn), size,
+				dma_pgprot(dev, PAGE_KERNEL, attrs),
+				__builtin_return_address(0));
+		if (!vaddr) {
+			swiotlb_tbl_release_region(dev, index, size);
+			return NULL;
+		}
+	} else {
+		vaddr = phys_to_virt(tlb_addr);
+	}
+
+	memset(vaddr, 0, size);
+
+	return vaddr;
+}
+
+void swiotlb_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_addr, unsigned long attrs)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	unsigned int index;
+
+	if (!dev_is_dma_coherent(dev))
+		vunmap(vaddr);
+
+	index = (dma_addr - mem->start) >> IO_TLB_SHIFT;
+	swiotlb_tbl_release_region(dev, index, PAGE_ALIGN(size));
+}
+
 size_t swiotlb_max_mapping_size(struct device *dev)
 {
 	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
From patchwork Wed Jan 6 03:41:23 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12000995
From: Claire Chang
To: robh+dt@kernel.org, mpe@ellerman.id.au, benh@kernel.crashing.org,
 paulus@samba.org, joro@8bytes.org, will@kernel.org, frowand.list@gmail.com,
 konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, hch@lst.de, m.szyprowski@samsung.com,
 robin.murphy@arm.com
Cc: grant.likely@arm.com, xypron.glpk@gmx.de, treding@nvidia.com,
 mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
 gregkh@linuxfoundation.org, saravanak@google.com, rafael.j.wysocki@intel.com,
 heikki.krogerus@linux.intel.com, andriy.shevchenko@linux.intel.com,
 rdunlap@infradead.org, dan.j.williams@intel.com, bgolaszewski@baylibre.com,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org, tfiga@chromium.org, drinkcat@chromium.org,
 Claire Chang
Subject: [RFC PATCH v3 5/6] dt-bindings: of: Add restricted DMA pool
Date: Wed, 6 Jan 2021 11:41:23 +0800
Message-Id: <20210106034124.30560-6-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region via a restricted-dma-pool reserved-memory node in the device tree.

Signed-off-by: Claire Chang
---
 .../reserved-memory/reserved-memory.txt       | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..44975e2a1fd2 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,20 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be
+          used for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to
+          data leakage or corruption. The feature on its own provides a basic
+          level of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage
+          and system memory corruption, the system needs to provide a way to
+          restrict the DMA to a predefined memory region.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
From patchwork Wed Jan 6 03:41:24 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12001001
From: Claire Chang
To: robh+dt@kernel.org, mpe@ellerman.id.au, benh@kernel.crashing.org,
 paulus@samba.org, joro@8bytes.org, will@kernel.org, frowand.list@gmail.com,
 konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, jgross@suse.com,
 sstabellini@kernel.org, hch@lst.de, m.szyprowski@samsung.com,
 robin.murphy@arm.com
Cc: grant.likely@arm.com, xypron.glpk@gmx.de, treding@nvidia.com,
 mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
 gregkh@linuxfoundation.org, saravanak@google.com, rafael.j.wysocki@intel.com,
 heikki.krogerus@linux.intel.com, andriy.shevchenko@linux.intel.com,
 rdunlap@infradead.org, dan.j.williams@intel.com, bgolaszewski@baylibre.com,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, iommu@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org, tfiga@chromium.org, drinkcat@chromium.org,
 Claire Chang
Subject: [RFC PATCH v3 6/6] of: Add plumbing for restricted DMA pool
Date: Wed, 6 Jan 2021 11:41:24 +0800
Message-Id: <20210106034124.30560-7-tientzu@chromium.org>
In-Reply-To: <20210106034124.30560-1-tientzu@chromium.org>
References: <20210106034124.30560-1-tientzu@chromium.org>

If a device is not behind an IOMMU, look up its device-tree node and set up
the restricted DMA pool when a restricted-dma-pool memory region is present.
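The of_reserved_mem_device_init_by_idx() call in the diff below hands the
region to the reserved_mem ops registered for the restricted-dma-pool
compatible earlier in the series. A rough sketch of that registration —
function and helper names here are assumptions based on how the binding is
consumed, not the series' literal code:

#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>

static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
				    struct device *dev)
{
	/*
	 * Hypothetical helper: carve rmem->base/rmem->size into an
	 * io_tlb_mem and store it in dev->dma_io_tlb_mem.
	 */
	return swiotlb_device_init(dev, rmem->base, rmem->size);
}

static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
					struct device *dev)
{
	dev->dma_io_tlb_mem = NULL;
}

static const struct reserved_mem_ops rmem_swiotlb_ops = {
	.device_init	= rmem_swiotlb_device_init,
	.device_release	= rmem_swiotlb_device_release,
};

static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
{
	unsigned long node = rmem->fdt_node;

	/* The binding forbids no-map and reusable on this region. */
	if (of_get_flat_dt_prop(node, "no-map", NULL) ||
	    of_get_flat_dt_prop(node, "reusable", NULL))
		return -EINVAL;

	rmem->ops = &rmem_swiotlb_ops;
	return 0;
}
RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);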
Signed-off-by: Claire Chang
---
 drivers/of/address.c    | 21 +++++++++++++++++++++
 drivers/of/device.c     |  4 ++++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 30 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 73ddf2540f3f..94eca8249854 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1094,3 +1095,23 @@ bool of_dma_is_coherent(struct device_node *np)
 	return false;
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
+
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		if (of_device_is_compatible(node, "restricted-dma-pool"))
+			return of_reserved_mem_device_init_by_idx(
+					dev, dev->of_node, i);
+	}
+
+	return 0;
+}
diff --git a/drivers/of/device.c b/drivers/of/device.c
index aedfaaafd3e7..e2c7409956ab 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -182,6 +182,10 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
 	dev->dma_range_map = map;
+
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d9e6a324de0a..28a2dfa197ba 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -161,12 +161,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */