From patchwork Mon Nov 10 16:14:00 2014
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 5267071
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [PATCH v8 08/13] xen/arm: use hypercall to flush caches in map_page
Date: Mon, 10 Nov 2014 16:14:00 +0000
Message-ID: <1415636045-24669-8-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: Ian.Campbell@citrix.com, Stefano Stabellini,
 catalin.marinas@arm.com, konrad.wilk@oracle.com,
 linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
 linux-arm-kernel@lists.infradead.org

In xen_dma_map_page, if the page is a local page, call the native
map_page dma_ops. If the page is foreign, call __xen_dma_map_page,
which issues any required cache maintenance operations via hypercall.

The reason for doing this is that the native dma_ops map_page could
allocate buffers that need to be freed by the native unmap_page. Since
we don't call the native unmap_page for foreign pages, sending them
through the native map_page would result in a memory leak.
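For reference, the local/foreign test used in the first hunk below relies
on dom0 being mapped 1:1 on Xen/ARM: for a local page the bus address
handed to the device resolves to the same frame number as its struct page,
while a foreign (grant-mapped) page keeps the granting domain's machine
frame, so the two differ. A minimal sketch of that test follows; the
helper name dev_addr_is_local_page is hypothetical and not part of this
patch:

#include <linux/pfn.h>		/* PFN_DOWN() */
#include <linux/mm.h>		/* page_to_pfn() */
#include <linux/types.h>	/* dma_addr_t, bool */

/*
 * Sketch of the check xen_dma_map_page() performs inline: with dom0
 * mapped 1:1, pseudo-physical and machine frame numbers agree for
 * local pages, so comparing the device (bus) address against the
 * page's pfn distinguishes local from foreign.
 */
static inline bool dev_addr_is_local_page(dma_addr_t dev_addr,
					  struct page *page)
{
	return PFN_DOWN(dev_addr) == page_to_pfn(page);
}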
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/page-coherent.h |    9 ++++++++-
 arch/arm/xen/mm32.c                      |   12 ++++++++++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/xen/page-coherent.h b/arch/arm/include/asm/xen/page-coherent.h
index 25d450c..36b79a8 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -5,6 +5,9 @@
 #include <linux/dma-attrs.h>
 #include <linux/dma-mapping.h>
 
+void __xen_dma_map_page(struct device *hwdev, struct page *page,
+	     dma_addr_t dev_addr, unsigned long offset, size_t size,
+	     enum dma_data_direction dir, struct dma_attrs *attrs);
 void __xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs);
@@ -32,7 +35,11 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 	     dma_addr_t dev_addr, unsigned long offset, size_t size,
 	     enum dma_data_direction dir, struct dma_attrs *attrs)
 {
-	__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
+	if (PFN_DOWN(dev_addr) == page_to_pfn(page)) {
+		if (__generic_dma_ops(hwdev)->map_page)
+			__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
+	} else
+		__xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, attrs);
 }
 
 void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
diff --git a/arch/arm/xen/mm32.c b/arch/arm/xen/mm32.c
index 7824498..d611c4b 100644
--- a/arch/arm/xen/mm32.c
+++ b/arch/arm/xen/mm32.c
@@ -45,6 +45,18 @@ static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
 	dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_MAP);
 }
 
+void __xen_dma_map_page(struct device *hwdev, struct page *page,
+	     dma_addr_t dev_addr, unsigned long offset, size_t size,
+	     enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	if (is_device_dma_coherent(hwdev))
+		return;
+	if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+		return;
+
+	__xen_dma_page_cpu_to_dev(hwdev, dev_addr, size, dir);
+}
+
 void __xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
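For context (not part of this patch): on the foreign-page path,
__xen_dma_page_cpu_to_dev() lands in dma_cache_maint() in this same file,
which performs the maintenance via the GNTTABOP_cache_flush hypercall
introduced earlier in this series, since dom0 cannot perform cache
maintenance on foreign pages by virtual address alone. A condensed,
single-range sketch of that hypercall, assuming the
struct gnttab_cache_flush interface from the earlier patch (the function
name sketch_flush_one_range is hypothetical):

#include <linux/types.h>		/* dma_addr_t */
#include <asm/page.h>			/* PAGE_MASK */
#include <xen/interface/grant_table.h>	/* struct gnttab_cache_flush */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_grant_table_op() */

/*
 * Sketch: clean one sub-page range by bus address before a
 * DMA_TO_DEVICE transfer.  The real dma_cache_maint() also selects
 * GNTTAB_CACHE_INVAL for the from-device/unmap cases and loops when
 * the buffer crosses a page boundary.
 */
static void sketch_flush_one_range(dma_addr_t handle, size_t len)
{
	struct gnttab_cache_flush cflush;

	cflush.a.dev_bus_addr = handle & PAGE_MASK;	/* page-aligned bus address */
	cflush.offset = handle & ~PAGE_MASK;		/* offset into the page */
	cflush.length = len;				/* must stay within one page */
	cflush.op = GNTTAB_CACHE_CLEAN;			/* write back dirty lines */

	HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
}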