From patchwork Mon Apr 12 14:05:41 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12198163
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
Message-ID: <4b842581-e24d-6b74-eef5-7ac48f0ff0a4@suse.com>
In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:05:41 +0200

Extend a respective #ifdef from inside set_typed_p2m_entry() to around
all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
check path.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1260,6 +1260,8 @@ int p2m_finish_type_change(struct domain
     return rc;
 }
 
+#ifdef CONFIG_HVM
+
 /*
  * Returns:
  *     0        for success
@@ -1280,7 +1282,10 @@ static int set_typed_p2m_entry(struct do
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return -EIO;
+    }
 
     gfn_lock(p2m, gfn, order);
     omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
@@ -1313,7 +1318,6 @@ static int set_typed_p2m_entry(struct do
     if ( rc )
         gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
                  gfn_l, order, rc, mfn_x(mfn));
-#ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {
         pod_lock(p2m);
@@ -1321,7 +1325,6 @@ static int set_typed_p2m_entry(struct do
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
     }
-#endif
 
     gfn_unlock(p2m, gfn, order);
 
     return rc;
@@ -1349,6 +1352,8 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
+#endif /* CONFIG_HVM */
+
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {

From patchwork Mon Apr 12 14:06:34 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12198165
Subject: [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() are HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné
Message-ID: <7f8ca70d-8bbe-bd5d-533a-c5ea81dc91a2@suse.com>
In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:06:34 +0200

Mirror the "translated" check the functions do to do_domctl(), allowing
the calls to be DCEd by the compiler. Add ASSERT_UNREACHABLE() to the
original checks.

Also arrange for {set,clear}_mmio_p2m_entry() and
{set,clear}_identity_p2m_entry() to respectively live next to each
other, such that clear_mmio_p2m_entry() can also be covered by the
#ifdef already covering set_mmio_p2m_entry().

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2: Fix build.
---
Arguably the original checks, returning success, could also be dropped
at this point.
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1352,52 +1352,6 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
-                           p2m_access_t p2ma, unsigned int flag)
-{
-    p2m_type_t p2mt;
-    p2m_access_t a;
-    gfn_t gfn = _gfn(gfn_l);
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int ret;
-
-    if ( !paging_mode_translate(p2m->domain) )
-    {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
-    }
-
-    gfn_lock(p2m, gfn, 0);
-
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
-
-    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
-        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
-                            p2m_mmio_direct, p2ma);
-    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
-        ret = 0;
-    else
-    {
-        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
-            ret = 0;
-        else
-            ret = -EBUSY;
-        printk(XENLOG_G_WARNING
-               "Cannot setup identity map d%d:%lx,"
-               " gfn already mapped to %lx.\n",
-               d->domain_id, gfn_l, mfn_x(mfn));
-    }
-
-    gfn_unlock(p2m, gfn, 0);
-    return ret;
-}
-
 /*
  * Returns:
  *     0        for success
@@ -1447,6 +1401,52 @@ int clear_mmio_p2m_entry(struct domain *
     return rc;
 }
 
+#endif /* CONFIG_HVM */
+
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    p2m_type_t p2mt;
+    p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int ret;
+
+    if ( !paging_mode_translate(p2m->domain) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
+                                1ul << PAGE_ORDER_4K,
+                                IOMMUF_readable | IOMMUF_writable);
+    }
+
+    gfn_lock(p2m, gfn, 0);
+
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
+        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
+                            p2m_mmio_direct, p2ma);
+    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
+        ret = 0;
+    else
+    {
+        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
+            ret = 0;
+        else
+            ret = -EBUSY;
+        printk(XENLOG_G_WARNING
+               "Cannot setup identity map d%d:%lx,"
+               " gfn already mapped to %lx.\n",
+               d->domain_id, gfn_l, mfn_x(mfn));
+    }
+
+    gfn_unlock(p2m, gfn, 0);
+    return ret;
+}
+
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 {
     p2m_type_t p2mt;
@@ -1892,6 +1892,8 @@ void *map_domain_gfn(struct p2m_domain *
     return map_domain_page(*mfn);
 }
 
+#ifdef CONFIG_HVM
+
 static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {
@@ -1932,7 +1934,10 @@ int map_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1964,7 +1969,10 @@ int unmap_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1986,8 +1994,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-#ifdef CONFIG_HVM
-
 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
                                p2m_type_t *t, p2m_access_t *a,
                                bool prepopulate)
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -750,6 +750,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( ret )
             break;
 
+        if ( !paging_mode_translate(d) )
+            break;
+
         if ( add )
         {
             printk(XENLOG_G_DEBUG
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -678,11 +678,19 @@ int p2m_finish_type_change(struct domain
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
+#ifdef CONFIG_HVM
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
                          unsigned int order);
+#else
+static inline int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn,
+                                       mfn_t mfn, unsigned int order)
+{
+    return -EIO;
+}
+#endif
 
 /* Set identity addresses in the p2m table (for pass-through) */
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn,

From patchwork Mon Apr 12 14:07:10 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12198167
Subject: [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan
Message-ID:
In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:07:10 +0200

As is the adjacent ga_to_gfn() one as well as paging_gva_to_gfn().

Signed-off-by: Jan Beulich
Acked-by: Tim Deegan
---
v2: Fix !SHADOW_PAGING && !HVM build.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1793,7 +1793,6 @@ void np2m_schedule(int dir)
             p2m_unlock(p2m);
         }
     }
 }
-#endif
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
                                 unsigned long va,
@@ -1844,6 +1843,8 @@ unsigned long paging_gva_to_gfn(struct v
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * If the map is non-NULL, we leave this function having acquired an extra ref
  * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3030,6 +3030,7 @@ static bool sh_invlpg(struct vcpu *v, un
     return true;
 }
 
+#ifdef CONFIG_HVM
 static unsigned long
 sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
@@ -3063,6 +3064,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m
     return gfn_x(gfn);
 }
+#endif /* CONFIG_HVM */
 
 static inline void
 sh_update_linear_entries(struct vcpu *v)
@@ -4183,7 +4185,9 @@ int sh_audit_l4_table(struct vcpu *v, mf
 const struct paging_mode sh_paging_mode = {
     .page_fault          = sh_page_fault,
     .invlpg              = sh_invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn          = sh_gva_to_gfn,
+#endif
     .update_cr3          = sh_update_cr3,
     .update_paging_modes = shadow_update_paging_modes,
     .flush_tlb           = shadow_flush_tlb,
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -43,12 +43,14 @@ static bool _invlpg(struct vcpu *v, unsi
     return true;
 }
 
+#ifdef CONFIG_HVM
 static unsigned long _gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
                                  unsigned long va, uint32_t *pfec)
 {
     ASSERT_UNREACHABLE();
     return gfn_x(INVALID_GFN);
 }
+#endif
 
 static void _update_cr3(struct vcpu *v, int do_locking, bool noflush)
 {
@@ -63,7 +65,9 @@ static void _update_paging_modes(struct
 static const struct paging_mode sh_paging_none = {
     .page_fault          = _page_fault,
     .invlpg              = _invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn          = _gva_to_gfn,
+#endif
     .update_cr3          = _update_cr3,
     .update_paging_modes = _update_paging_modes,
 };
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -127,6 +127,7 @@ struct paging_mode {
                                             struct cpu_user_regs *regs);
     bool          (*invlpg                )(struct vcpu *v,
                                             unsigned long linear);
+#ifdef CONFIG_HVM
     unsigned long (*gva_to_gfn            )(struct vcpu *v,
                                             struct p2m_domain *p2m,
                                             unsigned long va,
@@ -136,6 +137,7 @@ struct paging_mode {
                                             unsigned long cr3,
                                             paddr_t ga, uint32_t *pfec,
                                             unsigned int *page_order);
+#endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
@@ -287,6 +289,8 @@ unsigned long paging_gva_to_gfn(struct v
                                 unsigned long va,
                                 uint32_t *pfec);
 
+#ifdef CONFIG_HVM
+
 /* Translate a guest address using a particular CR3 value.  This is used
  * to by nested HAP code, to walk the guest-supplied NPT tables as if
  * they were pagetables.
@@ -305,6 +309,8 @@ static inline unsigned long paging_ga_to
                                       page_order);
 }
 
+#endif /* CONFIG_HVM */
+
 /* Update all the things that are derived from the guest's CR3.
  * Called when the guest changes CR3; the caller can then use v->arch.cr3
  * as the value to load into the host CR3 to schedule this vcpu */

From patchwork Mon Apr 12 14:07:50 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12198189
Subject: [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap
Message-ID: <187fa031-c237-2298-6b39-2c4a63b06c0f@suse.com>
In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:07:50 +0200

Generally all of this is dead code anyway, but there's a caller of
guest_iommu_add_ppr_log(), and the code itself calls
p2m_change_type_one(), which is about to become HVM-only. Hence this
code, short of deleting it altogether, needs to become properly
HVM-only as well.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v2: New.

--- a/xen/drivers/passthrough/amd/Makefile
+++ b/xen/drivers/passthrough/amd/Makefile
@@ -5,4 +5,4 @@ obj-y += pci_amd_iommu.o
 obj-bin-y += iommu_acpi.init.o
 obj-y += iommu_intr.o
 obj-y += iommu_cmd.o
-obj-y += iommu_guest.o
+obj-$(CONFIG_HVM) += iommu_guest.o
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -307,12 +307,16 @@ int __must_check amd_iommu_suspend(void)
 void amd_iommu_crash_shutdown(void);
 
 /* guest iommu support */
+#ifdef CONFIG_HVM
 void amd_iommu_send_guest_cmd(struct amd_iommu *iommu, u32 cmd[]);
 void guest_iommu_add_ppr_log(struct domain *d, u32 entry[]);
 void guest_iommu_add_event_log(struct domain *d, u32 entry[]);
 int guest_iommu_init(struct domain* d);
 void guest_iommu_destroy(struct domain *d);
 int guest_iommu_set_base(struct domain *d, uint64_t base);
+#else
+static inline void guest_iommu_add_ppr_log(struct domain *d, uint32_t entry[]) {}
+#endif
 
 static inline u32 get_field_from_reg_u32(u32 reg_value, u32 mask, u32 shift)
 {

From patchwork Mon Apr 12 14:08:29 2021
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12198191
Subject: [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
Message-ID: <8f6e2834-fbc1-4f3b-c305-71bb20b04502@suse.com>
In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:08:29 +0200

Exclude functions using them from !HVM builds, thus making it possible
to exclude the hooks as well. Also cover the already unused
memory_type_changed hook while inserting the #ifdef in the struct.
While no respective check was there, I can't see how XEN_DOMCTL_set_broken_page_p2m could have been meant to work for PV the way it is implemented. Restrict this operation to acting on HVM guests. Signed-off-by: Jan Beulich Acked-by: Roger Pau Monné --- v2: New. --- a/xen/arch/x86/domctl.c +++ b/xen/arch/x86/domctl.c @@ -1080,9 +1080,12 @@ long arch_do_domctl( { p2m_type_t pt; unsigned long pfn = domctl->u.set_broken_page_p2m.pfn; - mfn_t mfn = get_gfn_query(d, pfn, &pt); - if ( unlikely(!mfn_valid(mfn)) || unlikely(!p2m_is_ram(pt)) ) + if ( !is_hvm_domain(d) ) + return -EINVAL; + + if ( unlikely(!mfn_valid(get_gfn_query(d, pfn, &pt))) || + unlikely(!p2m_is_ram(pt)) ) ret = -EINVAL; else ret = p2m_change_type_one(d, pfn, pt, p2m_ram_broken); --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -292,6 +292,8 @@ int p2m_is_logdirty_range(struct p2m_dom return 0; } +#ifdef CONFIG_HVM + static void change_entry_type_global(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt) { @@ -316,7 +318,6 @@ void p2m_change_entry_type_global(struct change_entry_type_global(hostp2m, ot, nt); -#ifdef CONFIG_HVM if ( unlikely(altp2m_active(d)) ) { unsigned int i; @@ -331,12 +332,10 @@ void p2m_change_entry_type_global(struct p2m_unlock(altp2m); } } -#endif p2m_unlock(hostp2m); } -#ifdef CONFIG_HVM /* There's already a memory_type_changed() in asm/mtrr.h. */ static void _memory_type_changed(struct p2m_domain *p2m) { @@ -369,7 +368,8 @@ void p2m_memory_type_changed(struct doma p2m_unlock(hostp2m); } -#endif + +#endif /* CONFIG_HVM */ int p2m_set_ioreq_server(struct domain *d, unsigned int flags, @@ -876,6 +876,7 @@ guest_physmap_add_page(struct domain *d, } #ifdef CONFIG_HVM + int guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, unsigned int page_order, p2m_type_t t) @@ -1024,7 +1025,6 @@ out: return rc; } -#endif /* * Modify the p2m type of a single gfn from ot to nt. 
@@ -1161,7 +1161,6 @@ void p2m_change_type_range(struct domain change_type_range(hostp2m, start, end, ot, nt); -#ifdef CONFIG_HVM if ( unlikely(altp2m_active(d)) ) { unsigned int i; @@ -1176,7 +1175,6 @@ void p2m_change_type_range(struct domain p2m_unlock(altp2m); } } -#endif hostp2m->defer_nested_flush = 0; if ( nestedhvm_enabled(d) ) p2m_flush_nestedp2m(d); @@ -1184,6 +1182,8 @@ void p2m_change_type_range(struct domain p2m_unlock(hostp2m); } +#endif /* CONFIG_HVM */ + /* * Finish p2m type change for gfns which are marked as need_recalc in a range. * Uses the current p2m's max_mapped_pfn to further clip the invalidation --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -324,6 +324,7 @@ p2m_next_level(struct p2m_domain *p2m, v return rc; } +#ifdef CONFIG_HVM /* * Mark (via clearing the U flag) as needing P2M type re-calculation all valid * present entries at the targeted level for the passed in GFN range, which is @@ -392,6 +393,7 @@ static int p2m_pt_set_recalc_range(struc return err; } +#endif /* CONFIG_HVM */ /* * Handle possibly necessary P2M type re-calculation (U flag clear for a @@ -930,6 +932,8 @@ pod_retry_l1: return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN; } +#ifdef CONFIG_HVM + static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt) { @@ -1011,6 +1015,8 @@ static int p2m_pt_change_entry_type_rang return err; } +#endif /* CONFIG_HVM */ + #if P2M_AUDIT static long p2m_pt_audit_p2m(struct p2m_domain *p2m) { @@ -1168,8 +1174,10 @@ void p2m_pt_init(struct p2m_domain *p2m) p2m->set_entry = p2m_pt_set_entry; p2m->get_entry = p2m_pt_get_entry; p2m->recalc = do_recalc; +#ifdef CONFIG_HVM p2m->change_entry_type_global = p2m_pt_change_entry_type_global; p2m->change_entry_type_range = p2m_pt_change_entry_type_range; +#endif /* Still too early to use paging_mode_hap(). 
*/ if ( hap_enabled(p2m->domain) ) --- a/xen/arch/x86/mm/paging.c +++ b/xen/arch/x86/mm/paging.c @@ -588,6 +588,7 @@ static int paging_log_dirty_op(struct do return rv; } +#ifdef CONFIG_HVM void paging_log_dirty_range(struct domain *d, unsigned long begin_pfn, unsigned long nr, @@ -617,6 +618,7 @@ void paging_log_dirty_range(struct domai guest_flush_tlb_mask(d, d->dirty_cpumask); } +#endif /* * Callers must supply log_dirty_ops for the log dirty code to call. This --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -264,6 +264,7 @@ struct p2m_domain { void (*enable_hardware_log_dirty)(struct p2m_domain *p2m); void (*disable_hardware_log_dirty)(struct p2m_domain *p2m); void (*flush_hardware_cached_dirty)(struct p2m_domain *p2m); +#ifdef CONFIG_HVM void (*change_entry_type_global)(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt); @@ -272,6 +273,7 @@ struct p2m_domain { unsigned long first_gfn, unsigned long last_gfn); void (*memory_type_changed)(struct p2m_domain *p2m); +#endif void (*write_p2m_entry_pre)(struct domain *d, unsigned long gfn, l1_pgentry_t old, From patchwork Mon Apr 12 14:08:50 2021 
Subject: [PATCH v2 06/12] x86/p2m: hardware-log-dirty related hooks are HVM-only From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Message-ID: Date: Mon, 12 Apr 2021 16:08:50 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0 MIME-Version: 1.0 In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Content-Language: en-US Exclude functions using them from !HVM builds, thus making it possible to exclude the hooks as well. By moving an #endif in p2m.c (instead of introducing yet another one) p2m_{get,set}_ioreq_server() get excluded for !HVM builds as well. Signed-off-by: Jan Beulich Acked-by: Roger Pau Monné --- v2: New. --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -369,8 +369,6 @@ void p2m_memory_type_changed(struct doma p2m_unlock(hostp2m); } -#endif /* CONFIG_HVM */ - int p2m_set_ioreq_server(struct domain *d, unsigned int flags, struct ioreq_server *s) @@ -464,6 +462,8 @@ void p2m_flush_hardware_cached_dirty(str } } +#endif /* CONFIG_HVM */ + /* * Force a synchronous P2M TLB flush if a deferred flush is pending. 
* --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -261,10 +261,10 @@ struct p2m_domain { bool_t *sve); int (*recalc)(struct p2m_domain *p2m, unsigned long gfn); +#ifdef CONFIG_HVM void (*enable_hardware_log_dirty)(struct p2m_domain *p2m); void (*disable_hardware_log_dirty)(struct p2m_domain *p2m); void (*flush_hardware_cached_dirty)(struct p2m_domain *p2m); -#ifdef CONFIG_HVM void (*change_entry_type_global)(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt); @@ -630,6 +630,9 @@ int guest_physmap_add_page(struct domain /* Set a p2m range as populate-on-demand */ int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, unsigned int order); + +#ifdef CONFIG_HVM + /* Enable hardware-assisted log-dirty. */ void p2m_enable_hardware_log_dirty(struct domain *d); @@ -639,6 +642,12 @@ void p2m_disable_hardware_log_dirty(stru /* Flush hardware cached dirty GFNs */ void p2m_flush_hardware_cached_dirty(struct domain *d); +#else + +static inline void p2m_flush_hardware_cached_dirty(struct domain *d) {} + +#endif + /* Change types across all p2m entries in a domain */ void p2m_change_entry_type_global(struct domain *d, p2m_type_t ot, p2m_type_t nt); From patchwork Mon Apr 12 14:09:18 2021 
Subject: [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Message-ID: Date: Mon, 12 Apr 2021 16:09:18 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0 MIME-Version: 1.0 In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Content-Language: en-US Exclude functions involved in its use from !HVM builds, thus making it possible to exclude the hook as well. Signed-off-by: Jan Beulich Acked-by: Roger Pau Monné --- v2: New. --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -1173,8 +1173,8 @@ void p2m_pt_init(struct p2m_domain *p2m) { p2m->set_entry = p2m_pt_set_entry; p2m->get_entry = p2m_pt_get_entry; - p2m->recalc = do_recalc; #ifdef CONFIG_HVM + p2m->recalc = do_recalc; p2m->change_entry_type_global = p2m_pt_change_entry_type_global; p2m->change_entry_type_range = p2m_pt_change_entry_type_range; #endif --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1182,8 +1182,6 @@ void p2m_change_type_range(struct domain p2m_unlock(hostp2m); } -#endif /* CONFIG_HVM */ - /* * Finish p2m type change for gfns which are marked as need_recalc in a range. 
* Uses the current p2m's max_mapped_pfn to further clip the invalidation @@ -1234,7 +1232,6 @@ int p2m_finish_type_change(struct domain if ( rc < 0 ) goto out; -#ifdef CONFIG_HVM if ( unlikely(altp2m_active(d)) ) { unsigned int i; @@ -1252,7 +1249,6 @@ int p2m_finish_type_change(struct domain goto out; } } -#endif out: p2m_unlock(hostp2m); @@ -1260,8 +1256,6 @@ int p2m_finish_type_change(struct domain return rc; } -#ifdef CONFIG_HVM - /* * Returns: * 0 for success --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -259,9 +259,9 @@ struct p2m_domain { p2m_query_t q, unsigned int *page_order, bool_t *sve); +#ifdef CONFIG_HVM int (*recalc)(struct p2m_domain *p2m, unsigned long gfn); -#ifdef CONFIG_HVM void (*enable_hardware_log_dirty)(struct p2m_domain *p2m); void (*disable_hardware_log_dirty)(struct p2m_domain *p2m); void (*flush_hardware_cached_dirty)(struct p2m_domain *p2m); From patchwork Mon Apr 12 14:10:42 2021 
Subject: [PATCH v2 08/12] x86: mem-access is HVM-only From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap , Tamas K Lengyel , Alexandru Isaila , Petre Pircalabu References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Message-ID: <91921cd4-902e-3341-10b7-718550658c8d@suse.com> Date: Mon, 12 Apr 2021 16:10:42 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0 MIME-Version: 1.0 In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Content-Language: en-US By excluding the file from being built for !HVM, #ifdef-ary can be removed from it. The new HVM dependency on the Kconfig option is benign for Arm. Signed-off-by: Jan Beulich Acked-by: Tamas K Lengyel Reviewed-by: Alexandru Isaila --- v2: New. --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -15,7 +15,6 @@ config X86 select HAS_FAST_MULTIPLY select HAS_IOPORTS select HAS_KEXEC - select MEM_ACCESS_ALWAYS_ON select HAS_MEM_PAGING select HAS_NS16550 select HAS_PASSTHROUGH @@ -92,6 +91,7 @@ config PV_LINEAR_PT config HVM def_bool !PV_SHIM_EXCLUSIVE select IOREQ_SERVER + select MEM_ACCESS_ALWAYS_ON prompt "HVM support" ---help--- Interfaces to support HVM domains. 
HVM domains require hardware --- a/xen/arch/x86/mm/mem_access.c +++ b/xen/arch/x86/mm/mem_access.c @@ -139,7 +139,6 @@ bool p2m_mem_access_emulate_check(struct return violation; } -#ifdef CONFIG_HVM bool p2m_mem_access_check(paddr_t gpa, unsigned long gla, struct npfec npfec, vm_event_request_t **req_ptr) @@ -282,7 +281,6 @@ int p2m_set_altp2m_mem_access(struct dom */ return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1); } -#endif static int set_mem_access(struct domain *d, struct p2m_domain *p2m, struct p2m_domain *ap2m, p2m_access_t a, @@ -290,7 +288,6 @@ static int set_mem_access(struct domain { int rc = 0; -#ifdef CONFIG_HVM if ( ap2m ) { rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn); @@ -299,9 +296,6 @@ static int set_mem_access(struct domain rc = 0; } else -#else - ASSERT(!ap2m); -#endif { p2m_access_t _a; p2m_type_t t; @@ -362,7 +356,6 @@ long p2m_set_mem_access(struct domain *d long rc = 0; /* altp2m view 0 is treated as the hostp2m */ -#ifdef CONFIG_HVM if ( altp2m_idx ) { if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || @@ -372,9 +365,6 @@ long p2m_set_mem_access(struct domain *d ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx); } -#else - ASSERT(!altp2m_idx); -#endif if ( !xenmem_access_to_p2m_access(p2m, access, &a) ) return -EINVAL; @@ -422,7 +412,6 @@ long p2m_set_mem_access_multi(struct dom long rc = 0; /* altp2m view 0 is treated as the hostp2m */ -#ifdef CONFIG_HVM if ( altp2m_idx ) { if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || @@ -432,9 +421,6 @@ long p2m_set_mem_access_multi(struct dom ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx); } -#else - ASSERT(!altp2m_idx); -#endif p2m_lock(p2m); if ( ap2m ) @@ -484,7 +470,6 @@ int p2m_get_mem_access(struct domain *d, { struct p2m_domain *p2m = p2m_get_hostp2m(d); -#ifdef CONFIG_HVM if ( !altp2m_active(d) ) { if ( altp2m_idx ) @@ -499,9 +484,6 @@ int p2m_get_mem_access(struct domain *d, p2m = 
array_access_nospec(d->arch.altp2m_p2m, altp2m_idx); } -#else - ASSERT(!altp2m_idx); -#endif return _p2m_get_mem_access(p2m, gfn, access); } @@ -512,7 +494,6 @@ void arch_p2m_set_access_required(struct p2m_get_hostp2m(d)->access_required = access_required; -#ifdef CONFIG_HVM if ( altp2m_active(d) ) { unsigned int i; @@ -524,7 +505,6 @@ void arch_p2m_set_access_required(struct p2m->access_required = access_required; } } -#endif } bool p2m_mem_access_sanity_check(const struct domain *d) --- a/xen/arch/x86/vm_event.c +++ b/xen/arch/x86/vm_event.c @@ -265,6 +265,7 @@ void vm_event_emulate_check(struct vcpu return; } +#ifdef CONFIG_HVM switch ( rsp->reason ) { case VM_EVENT_REASON_MEM_ACCESS: @@ -298,6 +299,7 @@ void vm_event_emulate_check(struct vcpu default: break; }; +#endif } void vm_event_reset_vmtrace(struct vcpu *v) --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -58,6 +58,7 @@ config MEM_ACCESS_ALWAYS_ON config MEM_ACCESS def_bool MEM_ACCESS_ALWAYS_ON prompt "Memory Access and VM events" if !MEM_ACCESS_ALWAYS_ON + depends on HVM ---help--- Framework to configure memory access types for guests and receive From patchwork Mon Apr 12 14:12:41 2021 
Subject: [PATCH v2 09/12] x86: make mem-paging configurable and default it to off for being unsupported From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Daniel de Graaf , Paul Durrant , Tamas K Lengyel , Petre Pircalabu , Alexandru Isaila References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Message-ID: <26dae9ef-5fcb-f806-059d-7cdd2974ad40@suse.com> Date: Mon, 12 Apr 2021 16:12:41 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0 MIME-Version: 1.0 In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Content-Language: en-US While doing so, make the option dependent upon HVM, which really is the main purpose of the change. Signed-off-by: Jan Beulich Reviewed-by: Alexandru Isaila Acked-by: Roger Pau Monné --- v2: New. 
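As background for the diff below, the key pattern is that the feature predicate folds to a compile-time constant when CONFIG_MEM_PAGING is off, so callers such as the IOMMU code stay #ifdef-free. This is a hypothetical, self-contained sketch of that idea — the real Xen macro checks the vm_event ring via vm_event_check_ring(), not a bare pointer:

```c
#include <stdbool.h>
#include <stddef.h>

struct vm_event_domain;                      /* opaque, as in Xen */
struct domain { struct vm_event_domain *vm_event_paging; };

#ifdef CONFIG_MEM_PAGING
# define mem_paging_enabled(d) ((d)->vm_event_paging != NULL)
#else
/* Feature compiled out: constant false, dead branches get eliminated. */
# define mem_paging_enabled(d) false
#endif

/* A caller needs no #ifdef of its own. */
static bool iommu_use_permitted(const struct domain *d)
{
    return !mem_paging_enabled(d);
}
```

The same shape is what lets the series replace the open-coded vm_event_check_ring() call in xen/drivers/passthrough/x86/iommu.c with mem_paging_enabled(d).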
--- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -15,7 +15,6 @@ config X86 select HAS_FAST_MULTIPLY select HAS_IOPORTS select HAS_KEXEC - select HAS_MEM_PAGING select HAS_NS16550 select HAS_PASSTHROUGH select HAS_PCI @@ -251,6 +250,10 @@ config HYPERV_GUEST endif +config MEM_PAGING + bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED + depends on HVM + config MEM_SHARING bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED depends on HVM --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -1932,9 +1932,11 @@ int hvm_hap_nested_page_fault(paddr_t gp goto out_put_gfn; } +#ifdef CONFIG_MEM_PAGING /* Check if the page has been paged out */ if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) ) paged = 1; +#endif #ifdef CONFIG_MEM_SHARING /* Mem sharing: if still shared on write access then its enomem */ --- a/xen/arch/x86/mm/Makefile +++ b/xen/arch/x86/mm/Makefile @@ -5,7 +5,7 @@ obj-$(CONFIG_HVM) += altp2m.o obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o obj-$(CONFIG_MEM_ACCESS) += mem_access.o -obj-y += mem_paging.o +obj-$(CONFIG_MEM_PAGING) += mem_paging.o obj-$(CONFIG_MEM_SHARING) += mem_sharing.o obj-y += p2m.o p2m-pt.o obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o --- a/xen/arch/x86/x86_64/compat/mm.c +++ b/xen/arch/x86/x86_64/compat/mm.c @@ -155,8 +155,10 @@ int compat_arch_memory_op(unsigned long case XENMEM_get_sharing_shared_pages: return mem_sharing_get_nr_shared_mfns(); +#ifdef CONFIG_MEM_PAGING case XENMEM_paging_op: return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t)); +#endif #ifdef CONFIG_MEM_SHARING case XENMEM_sharing_op: --- a/xen/arch/x86/x86_64/mm.c +++ b/xen/arch/x86/x86_64/mm.c @@ -994,8 +994,10 @@ long subarch_memory_op(unsigned long cmd case XENMEM_get_sharing_shared_pages: return mem_sharing_get_nr_shared_mfns(); +#ifdef CONFIG_MEM_PAGING case XENMEM_paging_op: return mem_paging_memop(guest_handle_cast(arg, 
xen_mem_paging_op_t)); +#endif #ifdef CONFIG_MEM_SHARING case XENMEM_sharing_op: --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -40,9 +40,6 @@ config HAS_IOPORTS config HAS_KEXEC bool -config HAS_MEM_PAGING - bool - config HAS_PDX bool --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -1096,7 +1096,7 @@ static void complete_domain_destroy(stru free_xenoprof_pages(d); #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING xfree(d->vm_event_paging); #endif xfree(d->vm_event_monitor); --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1856,7 +1856,7 @@ int check_get_page_from_gfn(struct domai page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q); -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING if ( p2m_is_paging(p2mt) ) { if ( page ) --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -390,7 +390,7 @@ static int vm_event_resume(struct domain /* Check flags which apply only when the vCPU is paused */ if ( atomic_read(&v->vm_event_pause_count) ) { -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING ) p2m_mem_paging_resume(d, &rsp); #endif @@ -521,7 +521,7 @@ int __vm_event_claim_slot(struct domain return vm_event_grab_slot(ved, current->domain != d); } -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING /* Registered with Xen-bound event channel for incoming notifications. 
*/ static void mem_paging_notification(struct vcpu *v, unsigned int port) { @@ -546,7 +546,7 @@ static void mem_sharing_notification(str /* Clean up on domain destruction */ void vm_event_cleanup(struct domain *d) { -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING if ( vm_event_check_ring(d->vm_event_paging) ) { /* Destroying the wait queue head means waking up all @@ -613,7 +613,7 @@ int vm_event_domctl(struct domain *d, st switch ( vec->mode ) { -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING case XEN_DOMCTL_VM_EVENT_OP_PAGING: { rc = -EINVAL; --- a/xen/drivers/passthrough/x86/iommu.c +++ b/xen/drivers/passthrough/x86/iommu.c @@ -23,6 +23,7 @@ #include #include +#include #include const struct iommu_init_ops *__initdata iommu_init_ops; @@ -336,7 +337,7 @@ bool arch_iommu_use_permitted(const stru */ return d == dom_io || (likely(!mem_sharing_enabled(d)) && - likely(!vm_event_check_ring(d->vm_event_paging)) && + likely(!mem_paging_enabled(d)) && likely(!p2m_get_hostp2m(d)->global_logdirty)); } --- a/xen/include/asm-x86/mem_paging.h +++ b/xen/include/asm-x86/mem_paging.h @@ -24,6 +24,12 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg); +#ifdef CONFIG_MEM_PAGING +# define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging) +#else +# define mem_paging_enabled(d) false +#endif + #endif /*__ASM_X86_MEM_PAGING_H__ */ /* --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -136,11 +136,16 @@ typedef unsigned int p2m_query_t; #define P2M_PAGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \ | p2m_to_mask(p2m_ram_logdirty) ) +#ifdef CONFIG_MEM_PAGING #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out) \ | p2m_to_mask(p2m_ram_paged) \ | p2m_to_mask(p2m_ram_paging_in)) #define P2M_PAGED_TYPES (p2m_to_mask(p2m_ram_paged)) +#else +#define P2M_PAGING_TYPES 0 +#define P2M_PAGED_TYPES 0 +#endif /* Shared types */ /* XXX: Sharable types could include p2m_ram_ro too, but we would need to --- a/xen/include/xen/sched.h +++ 
b/xen/include/xen/sched.h @@ -530,7 +530,7 @@ struct domain struct domain *parent; /* VM fork parent */ #endif /* Memory paging support */ -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING struct vm_event_domain *vm_event_paging; #endif /* VM event monitor support */ --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -592,7 +592,7 @@ static XSM_INLINE int xsm_mem_access(XSM } #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d) { XSM_ASSERT_ACTION(XSM_DM_PRIV); --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -146,7 +146,7 @@ struct xsm_operations { int (*mem_access) (struct domain *d); #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING int (*mem_paging) (struct domain *d); #endif @@ -592,7 +592,7 @@ static inline int xsm_mem_access (xsm_de } #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING static inline int xsm_mem_paging (xsm_default_t def, struct domain *d) { return xsm_ops->mem_paging(d); --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -124,7 +124,7 @@ void __init xsm_fixup_ops (struct xsm_op set_to_dummy_if_null(ops, mem_access); #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING set_to_dummy_if_null(ops, mem_paging); #endif --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1256,7 +1256,7 @@ static int flask_mem_access(struct domai } #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING static int flask_mem_paging(struct domain *d) { return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_PAGING); @@ -1829,7 +1829,7 @@ static struct xsm_operations flask_ops = .mem_access = flask_mem_access, #endif -#ifdef CONFIG_HAS_MEM_PAGING +#ifdef CONFIG_MEM_PAGING .mem_paging = flask_mem_paging, #endif From patchwork Mon Apr 12 14:13:19 2021 
Subject: [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:13:19 +0200

With the hooks no longer needing setting, p2m_pt_init() degenerates to (about) nothing when !HVM. As a result, p2m-pt.c doesn't need building anymore in this case, as long as p2m_pt_init() has proper surrogates put in place.
Unfortunately this means some new #ifdef-ary in p2m.c, but the mid-term plan there is to get rid of (almost) all of it by splitting out the then hopefully very few remaining non-HVM pieces. While the movement of the paging_mode_translate() check from p2m_remove_page() to guest_physmap_remove_page() may not be strictly necessary anymore (it was in an early version of this change), it seems more logical for the check to live in the latter function, which also allows us to avoid acquiring the lock in the PV case. All other callers of p2m_remove_page() already have such a check anyway (in the altp2m case up the call stack). Signed-off-by: Jan Beulich Acked-by: Roger Pau Monné --- v2: New. --- a/xen/arch/x86/mm/Makefile +++ b/xen/arch/x86/mm/Makefile @@ -7,8 +7,8 @@ obj-$(CONFIG_SHADOW_PAGING) += guest_wal obj-$(CONFIG_MEM_ACCESS) += mem_access.o obj-$(CONFIG_MEM_PAGING) += mem_paging.o obj-$(CONFIG_MEM_SHARING) += mem_sharing.o -obj-y += p2m.o p2m-pt.o -obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o +obj-y += p2m.o +obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o obj-y += paging.o guest_walk_%.o: guest_walk.c Makefile --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -494,8 +494,10 @@ mfn_t __get_gfn_type_access(struct p2m_d p2m_type_t *t, p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked) { +#ifdef CONFIG_HVM mfn_t mfn; gfn_t gfn = _gfn(gfn_l); +#endif /* Unshare makes no sense withuot populate.
*/ if ( q & P2M_UNSHARE ) @@ -509,6 +511,7 @@ mfn_t __get_gfn_type_access(struct p2m_d return _mfn(gfn_l); } +#ifdef CONFIG_HVM if ( locked ) /* Grab the lock here, don't release until put_gfn */ gfn_lock(p2m, gfn, 0); @@ -542,6 +545,7 @@ mfn_t __get_gfn_type_access(struct p2m_d } return mfn; +#endif } void __put_gfn(struct p2m_domain *p2m, unsigned long gfn) @@ -620,6 +624,7 @@ struct page_info *p2m_get_page_from_gfn( return page; } +#ifdef CONFIG_HVM /* Returns: 0 for success, -errno for failure */ int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma) @@ -659,6 +664,7 @@ int p2m_set_entry(struct p2m_domain *p2m return rc; } +#endif mfn_t p2m_alloc_ptp(struct p2m_domain *p2m, unsigned int level) { @@ -777,6 +783,8 @@ void p2m_final_teardown(struct domain *d p2m_teardown_hostp2m(d); } +#ifdef CONFIG_HVM + static int __must_check p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order) @@ -785,10 +793,6 @@ p2m_remove_page(struct p2m_domain *p2m, p2m_type_t t; p2m_access_t a; - /* IOMMU for PV guests is handled in get_page_type() and put_page(). */ - if ( !paging_mode_translate(p2m->domain) ) - return 0; - ASSERT(gfn_locked_by_me(p2m, gfn)); P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_x(gfn), mfn_x(mfn)); @@ -829,6 +833,10 @@ guest_physmap_remove_page(struct domain struct p2m_domain *p2m = p2m_get_hostp2m(d); int rc; + /* IOMMU for PV guests is handled in get_page_type() and put_page(). 
*/ + if ( !paging_mode_translate(d) ) + return 0; + gfn_lock(p2m, gfn, page_order); rc = p2m_remove_page(p2m, gfn, mfn, page_order); gfn_unlock(p2m, gfn, page_order); @@ -836,6 +844,8 @@ guest_physmap_remove_page(struct domain return rc; } +#endif /* CONFIG_HVM */ + int guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn, unsigned int page_order) @@ -1400,14 +1410,16 @@ int clear_mmio_p2m_entry(struct domain * int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l, p2m_access_t p2ma, unsigned int flag) { +#ifdef CONFIG_HVM p2m_type_t p2mt; p2m_access_t a; gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret; +#endif - if ( !paging_mode_translate(p2m->domain) ) + if ( !paging_mode_translate(d) ) { if ( !is_iommu_enabled(d) ) return 0; @@ -1416,6 +1428,7 @@ int set_identity_p2m_entry(struct domain IOMMUF_readable | IOMMUF_writable); } +#ifdef CONFIG_HVM gfn_lock(p2m, gfn, 0); mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); @@ -1439,16 +1452,19 @@ int set_identity_p2m_entry(struct domain gfn_unlock(p2m, gfn, 0); return ret; +#endif } int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l) { +#ifdef CONFIG_HVM p2m_type_t p2mt; p2m_access_t a; gfn_t gfn = _gfn(gfn_l); mfn_t mfn; struct p2m_domain *p2m = p2m_get_hostp2m(d); int ret; +#endif if ( !paging_mode_translate(d) ) { @@ -1457,6 +1473,7 @@ int clear_identity_p2m_entry(struct doma return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K); } +#ifdef CONFIG_HVM gfn_lock(p2m, gfn, 0); mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL); @@ -1476,6 +1493,7 @@ int clear_identity_p2m_entry(struct doma } return ret; +#endif } #ifdef CONFIG_MEM_SHARING --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -324,7 +324,6 @@ p2m_next_level(struct p2m_domain *p2m, v return rc; } -#ifdef CONFIG_HVM /* * Mark (via clearing the U flag) as needing P2M type re-calculation all valid * present entries at the targeted level for the 
passed in GFN range, which is @@ -393,7 +392,6 @@ static int p2m_pt_set_recalc_range(struc return err; } -#endif /* CONFIG_HVM */ /* * Handle possibly necessary P2M type re-calculation (U flag clear for a @@ -932,8 +930,6 @@ pod_retry_l1: return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN; } -#ifdef CONFIG_HVM - static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt) { @@ -1015,8 +1011,6 @@ static int p2m_pt_change_entry_type_rang return err; } -#endif /* CONFIG_HVM */ - #if P2M_AUDIT static long p2m_pt_audit_p2m(struct p2m_domain *p2m) { @@ -1173,11 +1167,9 @@ void p2m_pt_init(struct p2m_domain *p2m) { p2m->set_entry = p2m_pt_set_entry; p2m->get_entry = p2m_pt_get_entry; -#ifdef CONFIG_HVM p2m->recalc = do_recalc; p2m->change_entry_type_global = p2m_pt_change_entry_type_global; p2m->change_entry_type_range = p2m_pt_change_entry_type_range; -#endif /* Still too early to use paging_mode_hap(). */ if ( hap_enabled(p2m->domain) ) --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -251,6 +251,7 @@ struct p2m_domain { /* Pages used to construct the p2m */ struct page_list_head pages; +#ifdef CONFIG_HVM int (*set_entry)(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order, @@ -264,7 +265,6 @@ struct p2m_domain { p2m_query_t q, unsigned int *page_order, bool_t *sve); -#ifdef CONFIG_HVM int (*recalc)(struct p2m_domain *p2m, unsigned long gfn); void (*enable_hardware_log_dirty)(struct p2m_domain *p2m); @@ -785,8 +785,14 @@ int __must_check p2m_set_entry(struct p2 unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma); +#if defined(CONFIG_HVM) /* Set up function pointers for PT implementation: only for use by p2m code */ extern void p2m_pt_init(struct p2m_domain *p2m); +#elif defined(CONFIG_SHADOW_PAGING) +# define p2m_pt_init shadow_p2m_init +#else +static inline void p2m_pt_init(struct p2m_domain *p2m) {} +#endif void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t 
*mfn, p2m_query_t q, uint32_t *pfec); --- a/xen/include/xen/p2m-common.h +++ b/xen/include/xen/p2m-common.h @@ -8,9 +8,18 @@ int set_foreign_p2m_entry(struct domain unsigned long gfn, mfn_t mfn); /* Remove a page from a domain's p2m table */ +#ifdef CONFIG_HVM int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn, unsigned int page_order); +#else +static inline int +guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn, + unsigned int page_order) +{ + return 0; +} +#endif /* Map MMIO regions in the p2m: start_gfn and nr describe the range in * * the guest physical address space to map, starting from the machine
From patchwork Mon Apr 12 14:13:52 2021
Subject: [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks are HVM-only From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap , Tim Deegan References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Message-ID: <507182a8-38c3-cda3-d921-7494a5df63a7@suse.com> Date: Mon, 12 Apr 2021 16:13:52 +0200 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0 MIME-Version: 1.0 In-Reply-To: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com> Content-Language: en-US Move respective shadow code to its HVM-only source file, thus making it possible to exclude the hooks as well. This then shows that shadow_p2m_init() also isn't needed in !HVM builds. Signed-off-by: Jan Beulich Acked-by: Tim Deegan --- v2: New. --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -2327,22 +2327,6 @@ void shadow_prepare_page_type_change(str shadow_remove_all_shadows(d, page_to_mfn(page)); } -static void -sh_remove_all_shadows_and_parents(struct domain *d, mfn_t gmfn) -/* Even harsher: this is a HVM page that we thing is no longer a pagetable. - * Unshadow it, and recursively unshadow pages that reference it. */ -{ - sh_remove_shadows(d, gmfn, 0, 1); - /* XXX TODO: - * Rework this hashtable walker to return a linked-list of all - * the shadows it modified, then do breadth-first recursion - * to find the way up to higher-level tables and unshadow them too. - * - * The current code (just tearing down each page's shadows as we - * detect that it is not a pagetable) is correct, but very slow. - * It means extra emulated writes and slows down removal of mappings. */ -} - /**************************************************************************/ /* Reset the up-pointers of every L3 shadow to 0. 
@@ -3086,126 +3070,6 @@ static int shadow_test_disable(struct do } /**************************************************************************/ -/* P2M map manipulations */ - -/* shadow specific code which should be called when P2M table entry is updated - * with new content. It is responsible for update the entry, as well as other - * shadow processing jobs. - */ - -static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn, - l1_pgentry_t old, l1_pgentry_t new, - unsigned int level) -{ - mfn_t omfn = l1e_get_mfn(old); - unsigned int oflags = l1e_get_flags(old); - p2m_type_t p2mt = p2m_flags_to_type(oflags); - bool flush = false; - - /* - * If there are any shadows, update them. But if shadow_teardown() - * has already been called then it's not safe to try. - */ - if ( unlikely(!d->arch.paging.shadow.total_pages) ) - return; - - switch ( level ) - { - default: - /* - * The following assertion is to make sure we don't step on 1GB host - * page support of HVM guest. - */ - ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE))); - break; - - /* If we're removing an MFN from the p2m, remove it from the shadows too */ - case 1: - if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) ) - { - sh_remove_all_shadows_and_parents(d, omfn); - if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) ) - flush = true; - } - break; - - /* - * If we're removing a superpage mapping from the p2m, we need to check - * all the pages covered by it. If they're still there in the new - * scheme, that's OK, but otherwise they must be unshadowed. 
- */ - case 2: - if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) ) - break; - - if ( p2m_is_valid(p2mt) && mfn_valid(omfn) ) - { - unsigned int i; - mfn_t nmfn = l1e_get_mfn(new); - l1_pgentry_t *npte = NULL; - - /* If we're replacing a superpage with a normal L1 page, map it */ - if ( (l1e_get_flags(new) & _PAGE_PRESENT) && - !(l1e_get_flags(new) & _PAGE_PSE) && - mfn_valid(nmfn) ) - npte = map_domain_page(nmfn); - - gfn &= ~(L1_PAGETABLE_ENTRIES - 1); - - for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) - { - if ( !npte || - !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) || - !mfn_eq(l1e_get_mfn(npte[i]), omfn) ) - { - /* This GFN->MFN mapping has gone away */ - sh_remove_all_shadows_and_parents(d, omfn); - if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) ) - flush = true; - } - omfn = mfn_add(omfn, 1); - } - - if ( npte ) - unmap_domain_page(npte); - } - - break; - } - - if ( flush ) - guest_flush_tlb_mask(d, d->dirty_cpumask); -} - -#if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH) -static void -sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags) -{ - struct domain *d = p2m->domain; - - /* If we're doing FAST_FAULT_PATH, then shadow mode may have - cached the fact that this is an mmio region in the shadow - page tables. Blow the tables away to remove the cache. - This is pretty heavy handed, but this is a rare operation - (it might happen a dozen times during boot and then never - again), so it doesn't matter too much. 
*/ - if ( d->arch.paging.shadow.has_fast_mmio_entries ) - { - shadow_blow_tables(d); - d->arch.paging.shadow.has_fast_mmio_entries = false; - } -} -#else -# define sh_write_p2m_entry_post NULL -#endif - -void shadow_p2m_init(struct p2m_domain *p2m) -{ - p2m->write_p2m_entry_pre = sh_unshadow_for_p2m_change; - p2m->write_p2m_entry_post = sh_write_p2m_entry_post; -} - -/**************************************************************************/ /* Log-dirty mode support */ /* Shadow specific code which is called in paging_log_dirty_enable(). --- a/xen/arch/x86/mm/shadow/hvm.c +++ b/xen/arch/x86/mm/shadow/hvm.c @@ -774,6 +774,142 @@ void sh_destroy_monitor_table(const stru } /**************************************************************************/ +/* P2M map manipulations */ + +/* shadow specific code which should be called when P2M table entry is updated + * with new content. It is responsible for update the entry, as well as other + * shadow processing jobs. + */ + +static void +sh_remove_all_shadows_and_parents(struct domain *d, mfn_t gmfn) +/* Even harsher: this is a HVM page that we thing is no longer a pagetable. + * Unshadow it, and recursively unshadow pages that reference it. */ +{ + sh_remove_shadows(d, gmfn, 0, 1); + /* XXX TODO: + * Rework this hashtable walker to return a linked-list of all + * the shadows it modified, then do breadth-first recursion + * to find the way up to higher-level tables and unshadow them too. + * + * The current code (just tearing down each page's shadows as we + * detect that it is not a pagetable) is correct, but very slow. + * It means extra emulated writes and slows down removal of mappings. 
*/ +} + +static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn, + l1_pgentry_t old, l1_pgentry_t new, + unsigned int level) +{ + mfn_t omfn = l1e_get_mfn(old); + unsigned int oflags = l1e_get_flags(old); + p2m_type_t p2mt = p2m_flags_to_type(oflags); + bool flush = false; + + /* + * If there are any shadows, update them. But if shadow_teardown() + * has already been called then it's not safe to try. + */ + if ( unlikely(!d->arch.paging.shadow.total_pages) ) + return; + + switch ( level ) + { + default: + /* + * The following assertion is to make sure we don't step on 1GB host + * page support of HVM guest. + */ + ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE))); + break; + + /* If we're removing an MFN from the p2m, remove it from the shadows too */ + case 1: + if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) ) + { + sh_remove_all_shadows_and_parents(d, omfn); + if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) ) + flush = true; + } + break; + + /* + * If we're removing a superpage mapping from the p2m, we need to check + * all the pages covered by it. If they're still there in the new + * scheme, that's OK, but otherwise they must be unshadowed. 
+ */ + case 2: + if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) ) + break; + + if ( p2m_is_valid(p2mt) && mfn_valid(omfn) ) + { + unsigned int i; + mfn_t nmfn = l1e_get_mfn(new); + l1_pgentry_t *npte = NULL; + + /* If we're replacing a superpage with a normal L1 page, map it */ + if ( (l1e_get_flags(new) & _PAGE_PRESENT) && + !(l1e_get_flags(new) & _PAGE_PSE) && + mfn_valid(nmfn) ) + npte = map_domain_page(nmfn); + + gfn &= ~(L1_PAGETABLE_ENTRIES - 1); + + for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) + { + if ( !npte || + !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) || + !mfn_eq(l1e_get_mfn(npte[i]), omfn) ) + { + /* This GFN->MFN mapping has gone away */ + sh_remove_all_shadows_and_parents(d, omfn); + if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) ) + flush = true; + } + omfn = mfn_add(omfn, 1); + } + + if ( npte ) + unmap_domain_page(npte); + } + + break; + } + + if ( flush ) + guest_flush_tlb_mask(d, d->dirty_cpumask); +} + +#if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH) +static void +sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags) +{ + struct domain *d = p2m->domain; + + /* If we're doing FAST_FAULT_PATH, then shadow mode may have + cached the fact that this is an mmio region in the shadow + page tables. Blow the tables away to remove the cache. + This is pretty heavy handed, but this is a rare operation + (it might happen a dozen times during boot and then never + again), so it doesn't matter too much. 
*/ + if ( d->arch.paging.shadow.has_fast_mmio_entries ) + { + shadow_blow_tables(d); + d->arch.paging.shadow.has_fast_mmio_entries = false; + } +} +#else +# define sh_write_p2m_entry_post NULL +#endif + +void shadow_p2m_init(struct p2m_domain *p2m) +{ + p2m->write_p2m_entry_pre = sh_unshadow_for_p2m_change; + p2m->write_p2m_entry_post = sh_write_p2m_entry_post; +} + +/**************************************************************************/ /* VRAM dirty tracking support */ int shadow_track_dirty_vram(struct domain *d, unsigned long begin_pfn, --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -278,7 +278,6 @@ struct p2m_domain { unsigned long first_gfn, unsigned long last_gfn); void (*memory_type_changed)(struct p2m_domain *p2m); -#endif void (*write_p2m_entry_pre)(struct domain *d, unsigned long gfn, l1_pgentry_t old, @@ -286,6 +285,7 @@ struct p2m_domain { unsigned int level); void (*write_p2m_entry_post)(struct p2m_domain *p2m, unsigned int oflags); +#endif #if P2M_AUDIT long (*audit_p2m)(struct p2m_domain *p2m); #endif @@ -788,8 +788,6 @@ int __must_check p2m_set_entry(struct p2 #if defined(CONFIG_HVM) /* Set up function pointers for PT implementation: only for use by p2m code */ extern void p2m_pt_init(struct p2m_domain *p2m); -#elif defined(CONFIG_SHADOW_PAGING) -# define p2m_pt_init shadow_p2m_init #else static inline void p2m_pt_init(struct p2m_domain *p2m) {} #endif
From patchwork Mon Apr 12 14:14:22 2021
Subject: [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
References: <3cf73378-b9d6-0eca-12b6-0f628518bebf@suse.com>
Date: Mon, 12 Apr 2021 16:14:22 +0200

Combine two HVM-specific sections in two cases (i.e. going from four of them to just two). Make defer_nested_flush bool and HVM-only, moving it next to other nested stuff. Move default_access up into a padding hole. When moving them anyway, also adjust comment style.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2: New.
--- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1167,7 +1167,7 @@ void p2m_change_type_range(struct domain ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt)); p2m_lock(hostp2m); - hostp2m->defer_nested_flush = 1; + hostp2m->defer_nested_flush = true; change_type_range(hostp2m, start, end, ot, nt); @@ -1185,7 +1185,7 @@ void p2m_change_type_range(struct domain p2m_unlock(altp2m); } } - hostp2m->defer_nested_flush = 0; + hostp2m->defer_nested_flush = false; if ( nestedhvm_enabled(d) ) p2m_flush_nestedp2m(d); --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -216,20 +216,15 @@ struct p2m_domain { p2m_class_t p2m_class; /* host/nested/alternate */ -#ifdef CONFIG_HVM - /* Nested p2ms only: nested p2m base value that this p2m shadows. - * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but - * needs both the per-p2m lock and the per-domain nestedp2m lock - * to set it to any other value. */ -#define P2M_BASE_EADDR (~0ULL) - uint64_t np2m_base; - uint64_t np2m_generation; + /* + * Default P2M access type for each page in the the domain: new pages, + * swapped in pages, cleared pages, and pages that are ambiguously + * retyped get this access type. See definition of p2m_access_t. + */ + p2m_access_t default_access; - /* Nested p2ms: linked list of n2pms allocated to this domain. - * The host p2m hasolds the head of the list and the np2ms are - * threaded on in LRU order. */ - struct list_head np2m_list; -#endif + /* Pages used to construct the p2m */ + struct page_list_head pages; /* Host p2m: Log-dirty ranges registered for the domain. */ struct rangeset *logdirty_ranges; @@ -237,21 +232,10 @@ struct p2m_domain { /* Host p2m: Global log-dirty mode enabled for the domain. */ bool global_logdirty; - /* Host p2m: when this flag is set, don't flush all the nested-p2m - * tables on every host-p2m change. The setter of this flag - * is responsible for performing the full flush before releasing the - * host p2m's lock. 
*/ - int defer_nested_flush; - #ifdef CONFIG_HVM /* Alternate p2m: count of vcpu's currently using this p2m. */ atomic_t active_vcpus; -#endif - - /* Pages used to construct the p2m */ - struct page_list_head pages; -#ifdef CONFIG_HVM int (*set_entry)(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order, @@ -306,11 +290,6 @@ struct p2m_domain { unsigned int defer_flush; bool_t need_flush; - /* Default P2M access type for each page in the the domain: new pages, - * swapped in pages, cleared pages, and pages that are ambiguously - * retyped get this access type. See definition of p2m_access_t. */ - p2m_access_t default_access; - /* If true, and an access fault comes in and there is no vm_event listener, * pause domain. Otherwise, remove access restrictions. */ bool_t access_required; @@ -357,6 +336,31 @@ struct p2m_domain { mm_lock_t lock; /* Locking of private pod structs, * * not relying on the p2m lock. */ } pod; + + /* + * Host p2m: when this flag is set, don't flush all the nested-p2m + * tables on every host-p2m change. The setter of this flag + * is responsible for performing the full flush before releasing the + * host p2m's lock. + */ + bool defer_nested_flush; + + /* + * Nested p2ms only: nested p2m base value that this p2m shadows. + * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but + * needs both the per-p2m lock and the per-domain nestedp2m lock + * to set it to any other value. + */ +#define P2M_BASE_EADDR (~0ULL) + uint64_t np2m_base; + uint64_t np2m_generation; + + /* + * Nested p2ms: linked list of n2pms allocated to this domain. + * The host p2m hasolds the head of the list and the np2ms are + * threaded on in LRU order. + */ + struct list_head np2m_list; #endif union {