From patchwork Thu May 14 17:00:51 2015
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 6408251
From: Julien Grall
Subject: [RFC 11/23] xen: Add Xen specific page definition
Date: Thu, 14 May 2015 18:00:51 +0100
Message-ID: <1431622863-28575-12-git-send-email-julien.grall@citrix.com>
In-Reply-To: <1431622863-28575-1-git-send-email-julien.grall@citrix.com>
References: <1431622863-28575-1-git-send-email-julien.grall@citrix.com>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
    Konrad Rzeszutek Wilk, tim@xen.org, linux-kernel@vger.kernel.org,
    Julien Grall, David Vrabel, Boris Ostrovsky,
    linux-arm-kernel@lists.infradead.org

The Xen hypercall interface always uses 4KB page granularity on both the
ARM and x86 architectures. With the incoming support of 64KB page
granularity for ARM64 guests, it will no longer be possible to reuse the
Linux page definitions in the Xen drivers.

Introduce Xen page definition helpers based on the Linux page definitions.
They have the same names as their Linux counterparts, prefixed with
XEN_/xen_. Also modify page_to_mfn to use the new Xen page definitions.
Signed-off-by: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Boris Ostrovsky
Cc: David Vrabel
---
 include/xen/page.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/xen/page.h b/include/xen/page.h
index c5ed20b..89ae01c 100644
--- a/include/xen/page.h
+++ b/include/xen/page.h
@@ -1,11 +1,28 @@
 #ifndef _XEN_PAGE_H
 #define _XEN_PAGE_H
 
+#include <linux/const.h>
+
+/* The hypercall interface supports only 4KB page */
+#define XEN_PAGE_SHIFT		12
+#define XEN_PAGE_SIZE		(_AC(1,UL) << XEN_PAGE_SHIFT)
+#define XEN_PAGE_MASK		(~(XEN_PAGE_SIZE-1))
+#define xen_offset_in_page(p)	((unsigned long)(p) & ~XEN_PAGE_MASK)
+#define xen_pfn_to_page(pfn)	\
+	((pfn_to_page(((unsigned long)(pfn) << XEN_PAGE_SHIFT) >> PAGE_SHIFT)))
+#define xen_page_to_pfn(page)	\
+	(((page_to_pfn(page)) << PAGE_SHIFT) >> XEN_PAGE_SHIFT)
+
+#define XEN_PFN_PER_PAGE	(PAGE_SIZE / XEN_PAGE_SIZE)
+
+#define XEN_PFN_DOWN(x)		((x) >> XEN_PAGE_SHIFT)
+#define XEN_PFN_PHYS(x)		((phys_addr_t)(x) << XEN_PAGE_SHIFT)
+
 #include <asm/xen/page.h>
 
 static inline unsigned long page_to_mfn(struct page *page)
 {
-	return pfn_to_mfn(page_to_pfn(page));
+	return pfn_to_mfn(xen_page_to_pfn(page));
 }
 
 struct xen_memory_region {
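
Not part of the patch, just an illustration of the arithmetic the new
helpers perform: a minimal userspace sketch assuming a hypothetical 64KB
Linux page (PAGE_SHIFT = 16). The helpers linux_pfn_to_xen_pfn() and
xen_pfn_to_linux_pfn() below are illustrative names only; they mirror the
shifts done by xen_page_to_pfn()/xen_pfn_to_page() on bare frame numbers
so the example compiles and runs outside the kernel.

/*
 * Standalone demo of the XEN_PAGE_* conversions, assuming a 64KB Linux
 * page.  All names here are local to the demo, not kernel API.
 */
#include <stdio.h>

#define PAGE_SHIFT       16                      /* assumed 64KB Linux page */
#define PAGE_SIZE        (1UL << PAGE_SHIFT)

#define XEN_PAGE_SHIFT   12                      /* hypercall ABI: 4KB */
#define XEN_PAGE_SIZE    (1UL << XEN_PAGE_SHIFT)
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)

/* Same shifts as xen_page_to_pfn(), applied to a bare Linux frame number */
static unsigned long linux_pfn_to_xen_pfn(unsigned long pfn)
{
	return (pfn << PAGE_SHIFT) >> XEN_PAGE_SHIFT;
}

/* Same shifts as xen_pfn_to_page(), applied to a bare Xen frame number */
static unsigned long xen_pfn_to_linux_pfn(unsigned long xen_pfn)
{
	return (xen_pfn << XEN_PAGE_SHIFT) >> PAGE_SHIFT;
}

int main(void)
{
	unsigned long pfn = 5;          /* arbitrary 64KB frame number */
	unsigned long xen_pfn = linux_pfn_to_xen_pfn(pfn);

	/* One 64KB page covers XEN_PFN_PER_PAGE (16) consecutive 4KB frames */
	printf("XEN_PFN_PER_PAGE = %lu\n", XEN_PFN_PER_PAGE);
	printf("Linux pfn %lu -> first Xen pfn %lu\n", pfn, xen_pfn);
	printf("Xen pfn %lu maps back to Linux pfn %lu\n",
	       xen_pfn + 3, xen_pfn_to_linux_pfn(xen_pfn + 3));
	return 0;
}

With these assumptions a single 64KB Linux page spans 16 Xen PFNs, which
is what XEN_PFN_PER_PAGE evaluates to and why drivers built on the 4KB
hypercall ABI have to iterate over Xen-sized frames within one Linux page.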