From patchwork Tue Apr 21 11:11:01 2015
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 6246611
Date: Tue, 21 Apr 2015 12:11:01 +0100
From: Stefano Stabellini
To: Stefano Stabellini
Subject: Re: [Xen-devel] [PATCH] xen: Add __GFP_DMA flag when
 xen_swiotlb_init gets free pages.
Cc: Ian Campbell, linux-kernel@vger.kernel.org, xen-devel@lists.xen.org,
 Chen Baozi, David Vrabel, linux-arm-kernel@lists.infradead.org,
 Roger Pau Monne
References: <1429526904-27176-1-git-send-email-cbz@baozis.org>
 <5534DABB.5060305@citrix.com> <20150420110729.GA27707@cbz-thinkpad>
 <5534EAE3.8060403@citrix.com> <1429603030.6174.21.camel@citrix.com>

On Tue, 21 Apr 2015, Stefano Stabellini wrote:
> On Tue, 21 Apr 2015, Ian Campbell wrote:
> > On Mon, 2015-04-20 at 18:54 +0100, Stefano Stabellini wrote:
> > > This should definitely be done only on ARM and ARM64, as on x86 PVH
> > > assumes the presence of an IOMMU. We need an ifdef.
> > >
> > > Also we need to figure out a way to try without GFP_DMA in case no RAM
> > > under 4G is available at all, as some arm64 platforms don't have any. Of
> > > course in those cases we don't need to worry about devices and their DMA
> > > masks. Maybe we could use memblock for that?
> >
> > It's pretty ugly, but I've not got any better ideas.
> >
> > It would perhaps be less ugly as an arch-specific
> > get_me_a_swiotlb_region type function, with the bare __get_free_pages as
> > the generic fallback.
>
> We could do that, but even open code like this isn't too bad: it might
> be ugly, but at least it is very obvious.
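For illustration, the arch-specific helper with a generic fallback that
Ian describes could be shaped roughly as below. This is only a sketch of
that alternative, not the patch that follows; it assumes a __weak
generic definition that an architecture can override at link time, and
it reuses the memblock scan from the patch:

/* Generic fallback, e.g. in drivers/xen/swiotlb-xen.c: plain
 * allocation with no zone restriction. */
unsigned long __weak xen_get_swiotlb_free_pages(unsigned int order)
{
	return __get_free_pages(__GFP_NOWARN, order);
}

/* ARM override, e.g. in arch/arm/xen/mm.c: request ZONE_DMA memory
 * whenever any RAM exists below 4G, so the swiotlb bounce buffer
 * stays reachable by devices with 32-bit DMA masks. */
unsigned long xen_get_swiotlb_free_pages(unsigned int order)
{
	struct memblock_region *reg;
	gfp_t flags = __GFP_NOWARN;

	for_each_memblock(memory, reg) {
		if (reg->base < (phys_addr_t)0xffffffff) {
			flags |= __GFP_DMA;
			break;
		}
	}
	return __get_free_pages(flags, order);
}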
Chen, could you please try the patch below in your repro scenario? I
have only build tested it.

Tested-by: Chen Baozi <cbz@baozis.org>

---

xen: Add __GFP_DMA flag when xen_swiotlb_init gets free pages on ARM

From: Chen Baozi <cbz@baozis.org>

Make sure that xen_swiotlb_init allocates buffers that are DMA capable
when at least one memblock is available below 4G. Otherwise we assume
that all devices on the SoC can cope with >4G addresses.

No functional changes on x86.

Signed-off-by: Chen Baozi <cbz@baozis.org>
Signed-off-by: Stefano Stabellini

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 2f7e6ff..0b579b2 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -110,5 +110,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 bool xen_arch_need_swiotlb(struct device *dev,
 			   unsigned long pfn,
 			   unsigned long mfn);
+unsigned long xen_get_swiotlb_free_pages(unsigned int order);
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 793551d..4983250 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -4,6 +4,7 @@
 #include <linux/gfp.h>
 #include <linux/highmem.h>
 #include <linux/export.h>
+#include <linux/memblock.h>
 #include <linux/of_address.h>
 #include <linux/slab.h>
 #include <linux/types.h>
@@ -21,6 +22,20 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/interface.h>
 
+unsigned long xen_get_swiotlb_free_pages(unsigned int order)
+{
+	struct memblock_region *reg;
+	gfp_t flags = __GFP_NOWARN;
+
+	for_each_memblock(memory, reg) {
+		if (reg->base < (phys_addr_t)0xffffffff) {
+			flags |= __GFP_DMA;
+			break;
+		}
+	}
+	return __get_free_pages(flags, order);
+}
+
 enum dma_cache_op {
 	DMA_UNMAP,
 	DMA_MAP,
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 358dcd3..c44a5d5 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -269,4 +269,9 @@ static inline bool xen_arch_need_swiotlb(struct device *dev,
 	return false;
 }
 
+static inline unsigned long xen_get_swiotlb_free_pages(unsigned int order)
+{
+	return __get_free_pages(__GFP_NOWARN, order);
+}
+
 #endif /* _ASM_X86_XEN_PAGE_H */
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 810ad41..4c54932 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -235,7 +235,7 @@ retry:
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-		xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
+		xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order);
 		if (xen_io_tlb_start)
 			break;
 		order--;
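For anyone who wants to sanity-check the decision rule outside the
kernel: the memblock scan above reduces to a simple predicate, shown
here as a standalone user-space program with hypothetical memory
layouts (illustration only, not part of the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t base;
	uint64_t size;
};

/* Same test as the ARM hook: any region starting below 4G means
 * DMA-capable memory exists, so __GFP_DMA should be requested. */
static bool need_gfp_dma(const struct region *regs, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (regs[i].base < UINT64_C(0xffffffff))
			return true;
	return false;
}

int main(void)
{
	/* hypothetical arm64 SoC with all RAM above 4G */
	struct region high_only[] = { { 0x8000000000ULL, 0x100000000ULL } };
	/* hypothetical board with RAM starting at 2G */
	struct region low_ram[] = { { 0x80000000ULL, 0x80000000ULL } };

	printf("RAM only above 4G -> __GFP_DMA? %d\n",
	       need_gfp_dma(high_only, 1));
	printf("RAM below 4G      -> __GFP_DMA? %d\n",
	       need_gfp_dma(low_ram, 1));
	return 0;
}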