From patchwork Tue May 18 06:42:08 2021
X-Patchwork-Submitter: Claire Chang
X-Patchwork-Id: 12263757
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
 Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig, Marek Szyprowski
Subject: [PATCH v7 08/15] swiotlb: Bounce data from/to restricted DMA pool
 if available
Date: Tue, 18 May 2021 14:42:08 +0800
Message-Id: <20210518064215.2856977-9-tientzu@chromium.org>
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
Cc: heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
 peterz@infradead.org, dri-devel@lists.freedesktop.org,
 chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
 mingo@kernel.org, jxgao@google.com, sstabellini@kernel.org,
 Saravana Kannan, xypron.glpk@gmx.de, "Rafael J. Wysocki",
 Bartosz Golaszewski, bskeggs@redhat.com, linux-pci@vger.kernel.org,
 xen-devel@lists.xenproject.org, Thierry Reding,
 intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
 linux-devicetree, airlied@linux.ie, Robin Murphy, Nicolas Boichat,
 rodrigo.vivi@intel.com, bhelgaas@google.com, tientzu@chromium.org,
 Dan Williams, Andy Shevchenko, Greg KH, Randy Dunlap, lkml,
 tfiga@chromium.org, "list@263.net:IOMMU DRIVERS", Jim Quinlan,
 linuxppc-dev@lists.ozlabs.org, bauerman@linux.ibm.com

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

The restricted DMA pool provides a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force() doesn't check swiotlb_force ==
SWIOTLB_FORCE. Otherwise, the memory allocation behavior with the
default swiotlb would be changed by the following patch ("dma-direct:
Allocate memory from restricted DMA pool if available").
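For context, the pool-selection logic this patch relies on can be
sketched as follows. This is an illustration only, assuming the
get_io_tlb_mem() helper introduced earlier in this series; the exact
kernel code may differ:

	/*
	 * Illustrative sketch, not the exact kernel code: pick the
	 * device's restricted pool when one was bound (e.g. via device
	 * tree), otherwise fall back to the default swiotlb pool.
	 */
	static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
	{
	#ifdef CONFIG_DMA_RESTRICTED_POOL
		if (dev->dma_io_tlb_mem)
			return dev->dma_io_tlb_mem;
	#endif /* CONFIG_DMA_RESTRICTED_POOL */
		return io_tlb_default_mem;
	}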
Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  8 ++++----
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index c530c976d18b..0c5a18d9cf89 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev->dma_io_tlb_mem)
+		return true;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+	return false;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 68e7633f11fe..95f482c4408c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -347,7 +347,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -557,7 +557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
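
As a usage illustration (hypothetical driver code, not part of this
patch): once a restricted DMA pool is bound to a device, every streaming
mapping for that device is bounced through the pool, because
dma_direct_map_page() now calls swiotlb_map() whenever
is_dev_swiotlb_force() is true:

	/* Hypothetical driver snippet; dev and size are driver-local. */
	void *buf = kmalloc(size, GFP_KERNEL);
	dma_addr_t dma_addr;

	if (!buf)
		return -ENOMEM;
	/* Bounced through the device's restricted pool, if one is set. */
	dma_addr = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		kfree(buf);
		return -ENOMEM;
	}
	/* ... program the device with dma_addr and run the transfer ... */
	dma_unmap_single(dev, dma_addr, size, DMA_TO_DEVICE);
	kfree(buf);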