From patchwork Thu Jul 28 14:51:37 2016
FVXgRo4OFgEVCXuLIwGCfMKuEjM33YUrFdCQE7i5LHJrCAlnEDx191CIGEhAWeJ6TOXMk5g5F 7AyLCKUb04tagstUjXRC+pKDM9oyQ3MTNH19DATC83tbg4MT01JzGpWC85P3cTI9C7DECwg7H 7sv8hRkkOJiVR3rDQmeFCfEn5KZUZicUZ8UWlOanFhxhlODiUJHhvSc0KFxIsSk1PrUjLzAGG GUxagoNHSYRXThoozVtckJhbnJkOkTrFqCglzssLkhAASWSU5sG1wUL7EqOslDAvI9AhQjwFq UW5mSWo8q8YxTkYlYR5DUCm8GTmlcBNfwW0mAlocXHsDJDFJYkIKakGxogD868E9s2XXVq2UP BSX9shqXksyk/4RecdaG8R27lag/lXg/iniy93TpEIXVl5xbpN9VWMzof7b3Pd72RHL69LURX 21Pf9VrRP1dhEKmKX3/sTX97YHl/Iah0muKKrod6mR+zlC+GnE/P3hvFsuy/+yqM9Xof5zu4p t+K9i1/+O8nzY93fEiWW4oxEQy3mouJEAOKpWitoAgAA X-Env-Sender: julien.grall@arm.com X-Msg-Ref: server-16.tower-21.messagelabs.com!1469717538!21659317!1 X-Originating-IP: [217.140.101.70] X-SpamReason: No, hits=0.0 required=7.0 tests= X-StarScan-Received: X-StarScan-Version: 8.77; banners=-,-,- X-VirusChecked: Checked Received: (qmail 47597 invoked from network); 28 Jul 2016 14:52:19 -0000 Received: from foss.arm.com (HELO foss.arm.com) (217.140.101.70) by server-16.tower-21.messagelabs.com with SMTP; 28 Jul 2016 14:52:19 -0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B600ABF3; Thu, 28 Jul 2016 07:53:28 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.218.32]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 80F353F21A; Thu, 28 Jul 2016 07:52:10 -0700 (PDT) From: Julien Grall To: xen-devel@lists.xen.org Date: Thu, 28 Jul 2016 15:51:37 +0100 Message-Id: <1469717505-8026-15-git-send-email-julien.grall@arm.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1469717505-8026-1-git-send-email-julien.grall@arm.com> References: <1469717505-8026-1-git-send-email-julien.grall@arm.com> Cc: proskurin@sec.in.tum.de, Julien Grall , sstabellini@kernel.org, steve.capper@arm.com, wei.chen@linaro.org Subject: [Xen-devel] [RFC 14/22] xen/arm: p2m: Re-implement p2m_cache_flush using p2m_get_entry X-BeenThere: xen-devel@lists.xen.org 
The function p2m_cache_flush can be re-implemented using the generic
function p2m_get_entry by iterating over the range and using the
mapping order given by the callee.

As in the current implementation, no preemption is implemented,
although a comment in the current code claims it. As the function is
called by a DOMCTL with a region of 1GB maximum, I think preemption
can be left unimplemented for now.

Finally, drop the operation CACHEFLUSH in apply_one_level as nobody is
using it anymore.

Note that the function could have been dropped in one go at the end;
however, I find it easier to drop the operations one by one, avoiding
a big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---

The loop pattern will be very similar for the relinquish function. It
might be possible to extract it into a separate function.
---
 xen/arch/arm/p2m.c | 67 +++++++++++++++++++++++++++---------------------------
 1 file changed, 34 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9a9c85c..e7697bb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -722,7 +722,6 @@ enum p2m_operation {
     INSERT,
     REMOVE,
     RELINQUISH,
-    CACHEFLUSH,
     MEMACCESS,
 };
 
@@ -978,36 +977,6 @@ static int apply_one_level(struct domain *d,
          */
         return P2M_ONE_PROGRESS;
 
-    case CACHEFLUSH:
-        if ( !p2m_valid(orig_pte) )
-        {
-            *addr = (*addr + level_size) & level_mask;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
-        if ( level < 3 && p2m_table(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        /*
-         * could flush up to the next superpage boundary, but would
-         * need to be careful about preemption, so just do one 4K page
-         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
-         * continue to loop over the rest of the range.
-         */
-        if ( p2m_is_ram(orig_pte.p2m.type) )
-        {
-            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
-            flush_page_to_ram(orig_pte.p2m.base + offset);
-
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS;
-        }
-        else
-        {
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
     case MEMACCESS:
         if ( level < 3 )
         {
@@ -1555,12 +1524,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
+    p2m_type_t t;
+    unsigned int order;
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             0, p2m_invalid, d->arch.p2m.default_access);
+    /* XXX: Should we use write lock here? */
+    p2m_read_lock(p2m);
+
+    for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
+    {
+        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+        /* Skip hole and non-RAM page */
+        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
+        {
+            /*
+             * the order corresponds to the order of the mapping in the
+             * page table. so we need to align the gfn before
+             * incrementing.
+             */
+            start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
+            continue;
+        }
+
+        /*
+         * Could flush up to the next superpage boundary, but we would
+         * need to be careful about preemption, so just do one 4K page
+         * now.
+         * XXX: Implement preemption.
+         */
+        flush_page_to_ram(mfn_x(mfn));
+        order = 0;
+    }
+
+    p2m_read_unlock(p2m);
+
+    return 0;
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)