From patchwork Mon Jul 29 10:51:26 2024
X-Patchwork-Submitter: Baruch Siach
X-Patchwork-Id: 13744708
From: Baruch Siach
To: Christoph Hellwig, Marek Szyprowski, Catalin Marinas, Will Deacon
Cc: Baruch Siach, Robin Murphy, iommu@lists.linux.dev,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    Petr Tesařík, Ramon Fried, Elad Nachman
Subject: [PATCH v3 3/3] dma-direct: use RAM start to offset zone_dma_limit
Date: Mon, 29 Jul 2024 13:51:26 +0300
Message-ID: <629b184354fa22cb32a90bd1fa0e1dc258251f81.1722249878.git.baruch@tkos.co.il>
X-Mailer: git-send-email 2.43.0
Current code using zone_dma_limit assumes that the entire address range
below the limit is suitable for DMA. For some existing platforms this
assumption is not correct: the DMA range might have a non-zero lower
limit.

Commit 791ab8b2e3db ("arm64: Ignore any DMA offsets in the
max_zone_phys() calculation") made the DMA/DMA32 zones span the entire
RAM when RAM starts above 32 bits. This breaks hardware with a DMA area
that starts above 32 bits. The commit log says that "we haven't noticed
any such hardware", but it turns out that such hardware does exist.

One such platform has RAM starting at 32GB with an internal bus that
has the following DMA limits:

  #address-cells = <2>;
  #size-cells = <2>;
  dma-ranges = <0x00 0xc0000000 0x08 0x00000000 0x00 0x40000000>;

Devices under this bus see a 1GB DMA range between 3GB and 4GB in each
device's address space. This range is mapped to CPU memory at
32GB-33GB. With the current code, DMA allocations for devices under
this bus are not limited to the DMA area, leading to run-time
allocation failures.

Add the start-of-RAM address to zone_dma_limit to make DMA allocation
for constrained devices possible. The result is a DMA zone that
properly reflects the hardware constraints:

[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000800000000-0x000000083fffffff]
[    0.000000]   DMA32    empty
[    0.000000]   Normal   [mem 0x0000000840000000-0x0000000bffffffff]

Rename the dma_direct_supported() local 'min_mask' variable to better
describe its use as a limit.

Suggested-by: Catalin Marinas
Signed-off-by: Baruch Siach
---
 kernel/dma/direct.c  | 7 ++++---
 kernel/dma/pool.c    | 3 ++-
 kernel/dma/swiotlb.c | 4 ++--
 3 files changed, 8 insertions(+), 6 deletions(-)
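(Note for readers, not part of the patch: the address arithmetic behind
the zone ranges above can be checked standalone. The sketch below
assumes zone_dma_limit resolves to 0x3fffffff on this platform, i.e.
the 1GB DMA window expressed relative to the start of RAM, consistent
with the dmesg output quoted in the commit message.)

  #include <stdio.h>

  int main(void)
  {
          /* Assumed values for the platform described above: RAM starts
           * at 32GB; zone_dma_limit is taken to be 0x3fffffff.
           */
          unsigned long long ram_start = 0x800000000ULL;  /* memblock_start_of_DRAM() */
          unsigned long long zone_dma_limit = 0x3fffffffULL;

          /* The patch offsets the limit by the start of RAM, so the
           * resulting physical limit falls inside actual memory.
           */
          unsigned long long phys_limit = ram_start + zone_dma_limit;

          /* Prints: DMA [mem 0x0000000800000000-0x000000083fffffff],
           * matching the DMA zone range quoted above.
           */
          printf("DMA [mem 0x%016llx-0x%016llx]\n", ram_start, phys_limit);
          return 0;
  }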
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 3dbc0b89d6fb..bd7972d3b101 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -563,7 +563,7 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 
 int dma_direct_supported(struct device *dev, u64 mask)
 {
-	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
+	u64 min_limit = (max_pfn - 1) << PAGE_SHIFT;
 
 	/*
 	 * Because 32-bit DMA masks are so common we expect every architecture
@@ -580,8 +580,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	 * part of the check.
 	 */
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = min_t(u64, min_mask, zone_dma_limit);
-	return mask >= phys_to_dma_unencrypted(dev, min_mask);
+		min_limit = min_t(u64, min_limit,
+				  memblock_start_of_DRAM() + zone_dma_limit);
+	return mask >= phys_to_dma_unencrypted(dev, min_limit);
 }
 
 /*
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 410a7b40e496..ded3d841c88c 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -12,6 +12,7 @@
 #include <linux/set_memory.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/memblock.h>
 
 static struct gen_pool *atomic_pool_dma __ro_after_init;
 static unsigned long pool_size_dma;
@@ -70,7 +71,7 @@ static bool cma_in_zone(gfp_t gfp)
 	/* CMA can't cross zone boundaries, see cma_activate_area() */
 	end = cma_get_base(cma) + size - 1;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
-		return end <= zone_dma_limit;
+		return end <= memblock_start_of_DRAM() + zone_dma_limit;
 	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
 		return end <= DMA_BIT_MASK(32);
 	return true;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dfd83e5ee0b3..2813eeb8b375 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -450,7 +450,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	if (!remap)
 		io_tlb_default_mem.can_grow = true;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp_mask & __GFP_DMA))
-		io_tlb_default_mem.phys_limit = zone_dma_limit;
+		io_tlb_default_mem.phys_limit = memblock_start_of_DRAM() + zone_dma_limit;
 	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp_mask & __GFP_DMA32))
 		io_tlb_default_mem.phys_limit = DMA_BIT_MASK(32);
 	else
@@ -629,7 +629,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	}
 
 	gfp &= ~GFP_ZONEMASK;
-	if (phys_limit <= zone_dma_limit)
+	if (phys_limit <= memblock_start_of_DRAM() + zone_dma_limit)
 		gfp |= __GFP_DMA;
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;
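
(Note for readers, not part of the patch: the dma-ranges property
quoted in the commit message encodes <child-addr parent-addr size>.
The sketch below illustrates the translation it describes; the helper
names are hypothetical, not kernel API.)

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Values from the dma-ranges property quoted above:
   * <child-addr parent-addr size> = <0xc0000000 0x800000000 0x40000000>
   */
  #define BUS_DMA_BASE  0xc0000000ULL   /* device-visible base: 3GB  */
  #define CPU_DMA_BASE  0x800000000ULL  /* CPU physical base:   32GB */
  #define DMA_WINDOW_SZ 0x40000000ULL   /* window size:         1GB  */

  /* Hypothetical helper: does a CPU physical address fall inside the
   * bus's 1GB DMA window?
   */
  static bool bus_can_dma(uint64_t cpu_addr)
  {
          return cpu_addr >= CPU_DMA_BASE &&
                 cpu_addr < CPU_DMA_BASE + DMA_WINDOW_SZ;
  }

  /* Hypothetical helper: translate a CPU physical address to the bus
   * address a device must use, per the dma-ranges above.
   */
  static uint64_t cpu_to_bus(uint64_t cpu_addr)
  {
          return cpu_addr - CPU_DMA_BASE + BUS_DMA_BASE;
  }

  int main(void)
  {
          uint64_t phys = 0x800000000ULL; /* first byte of RAM (32GB) */

          if (bus_can_dma(phys))
                  /* Prints: CPU 0x800000000 -> bus 0xc0000000 */
                  printf("CPU 0x%llx -> bus 0x%llx\n",
                         (unsigned long long)phys,
                         (unsigned long long)cpu_to_bus(phys));
          return 0;
  }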