From patchwork Mon Sep 2 13:03:34 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11126681
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini, Konrad Rzeszutek Wilk, gross@suse.com,
	boris.ostrovsky@oracle.com
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 08/13] swiotlb-xen: always use dma-direct helpers to alloc
 coherent pages
Date: Mon, 2 Sep 2019 15:03:34 +0200
Message-Id: <20190902130339.23163-9-hch@lst.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190902130339.23163-1-hch@lst.de>
References: <20190902130339.23163-1-hch@lst.de>

x86 currently calls alloc_pages, but using dma-direct works as well
there, with the added benefit of using the CMA pool if available. The
biggest advantage is of course to remove a pointless bit of
architecture-specific code.
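For reference, the helpers this patch switches to roughly behave as
below: dma_direct_alloc() attempts a CMA-backed contiguous allocation
before falling back to the plain page allocator. This is a simplified
sketch assuming the v5.3-era dma-direct internals (the sketch_* name
is made up here, and details such as gfp-zone retries and uncached
remapping are omitted):

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/dma-direct.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch only: try CMA first, fall back to the buddy allocator. */
static void *sketch_dma_direct_alloc(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t gfp)
{
	struct page *page;

	/* Uses the device's (or the global) CMA area when one is set up. */
	page = dma_alloc_contiguous(dev, size, gfp);
	if (!page)
		page = alloc_pages(gfp, get_order(size));
	if (!page)
		return NULL;

	/* Unlike the removed x86 helper, failure is checked before this. */
	*dma_handle = phys_to_dma(dev, page_to_phys(page));
	return page_address(page);
}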
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
 arch/x86/include/asm/xen/page-coherent.h | 16 ----------------
 drivers/xen/swiotlb-xen.c                |  7 +++----
 include/xen/arm/page-coherent.h          | 12 ------------
 3 files changed, 3 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/xen/page-coherent.h b/arch/x86/include/asm/xen/page-coherent.h
index 116777e7f387..8ee33c5edded 100644
--- a/arch/x86/include/asm/xen/page-coherent.h
+++ b/arch/x86/include/asm/xen/page-coherent.h
@@ -5,22 +5,6 @@
 #include
 #include
 
-static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
-		dma_addr_t *dma_handle, gfp_t flags,
-		unsigned long attrs)
-{
-	void *vstart = (void*)__get_free_pages(flags, get_order(size));
-	*dma_handle = virt_to_phys(vstart);
-	return vstart;
-}
-
-static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
-		void *cpu_addr, dma_addr_t dma_handle,
-		unsigned long attrs)
-{
-	free_pages((unsigned long) cpu_addr, get_order(size));
-}
-
 static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 	     dma_addr_t dev_addr, unsigned long offset, size_t size,
 	     enum dma_data_direction dir, unsigned long attrs) { }
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b8808677ae1d..f9dd4cb6e4b3 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -299,8 +299,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	 * address. In fact on ARM virt_to_phys only works for kernel direct
 	 * mapped RAM memory. Also see comment below.
 	 */
-	ret = xen_alloc_coherent_pages(hwdev, size, dma_handle, flags, attrs);
-
+	ret = dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
 	if (!ret)
 		return ret;
 
@@ -319,7 +318,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	else {
 		if (xen_create_contiguous_region(phys, order,
 						 fls64(dma_mask), dma_handle) != 0) {
-			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
+			dma_direct_free(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
 		SetPageXenRemapped(virt_to_page(ret));
@@ -351,7 +350,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	    TestClearPageXenRemapped(virt_to_page(vaddr)))
 		xen_destroy_contiguous_region(phys, order);
 
-	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
+	dma_direct_free(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
 }
 
 /*
diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
index a840d6949a87..0e244f4fec1a 100644
--- a/include/xen/arm/page-coherent.h
+++ b/include/xen/arm/page-coherent.h
@@ -16,18 +16,6 @@ void __xen_dma_sync_single_for_cpu(struct device *hwdev,
 void __xen_dma_sync_single_for_device(struct device *hwdev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir);
 
-static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
-		dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
-{
-	return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
-}
-
-static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
-		void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
-{
-	dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
-}
-
 static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
 	     dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
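With the hunks above applied, the allocation path in
xen_swiotlb_alloc_coherent() condenses to roughly the sketch below.
This is an illustration only: the real function also skips
xen_create_contiguous_region() when the buffer already fits the DMA
mask, and derives dma_mask from hwdev->coherent_dma_mask; the
sketch_* name is made up here.

#include <linux/bitops.h>
#include <linux/dma-direct.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/page-flags.h>
#include <xen/xen-ops.h>

static void *sketch_xen_swiotlb_alloc(struct device *hwdev, size_t size,
		dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs,
		u64 dma_mask)
{
	unsigned int order = get_order(size);
	phys_addr_t phys;
	void *ret;

	ret = dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
	if (!ret)
		return NULL;

	phys = virt_to_phys(ret);
	if (xen_create_contiguous_region(phys, order,
					 fls64(dma_mask), dma_handle) != 0) {
		/* dma_direct_free() now also covers the error path. */
		dma_direct_free(hwdev, size, ret, (dma_addr_t)phys, attrs);
		return NULL;
	}
	SetPageXenRemapped(virt_to_page(ret));
	return ret;
}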