From patchwork Wed Nov 10 22:29:52 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12613433
Date: Wed, 10 Nov 2021 14:29:52 -0800
In-Reply-To: <20211110223010.1392399-1-bgardon@google.com>
Message-Id: <20211110223010.1392399-2-bgardon@google.com>
Subject: [RFC 01/19] KVM: x86/mmu: Fix TLB flush range when handling disconnected pt
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai
Huang , Keqian Zhu , David Hildenbrand , Ben Gardon , stable@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When recursively clearing out disconnected pts, the range based TLB flush in handle_removed_tdp_mmu_page uses the wrong starting GFN, resulting in the flush mostly missing the affected range. Fix this by using base_gfn for the flush. Fixes: a066e61f13cf ("KVM: x86/mmu: Factor out handling of removed page tables") CC: stable@vger.kernel.org Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7c5dd83e52de..866c2b191e1e 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -374,7 +374,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt, shared); } - kvm_flush_remote_tlbs_with_address(kvm, gfn, + kvm_flush_remote_tlbs_with_address(kvm, base_gfn, KVM_PAGES_PER_HPAGE(level + 1)); call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); From patchwork Wed Nov 10 22:29:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613435 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49496C433F5 for ; Wed, 10 Nov 2021 22:30:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 27D026117A for ; Wed, 10 Nov 2021 22:30:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233759AbhKJWdV (ORCPT ); Wed, 10 Nov 2021 17:33:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38970 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233766AbhKJWdP (ORCPT ); Wed, 10 Nov 2021 17:33:15 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 329D5C061767 for ; Wed, 10 Nov 2021 14:30:27 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id w2-20020a627b02000000b0049fa951281fso2730919pfc.9 for ; Wed, 10 Nov 2021 14:30:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ZyHddNGopGY/lza1UZlncFMjZJJM1j4g87L00tHisx8=; b=hZXGX2BqInvhBr17VwQFPDZblpGbwErhA25L417eM0C4wya2Mz0BSmHZ57PExAB5C8 l/ejrmTc/Ev0pYYaxinvsvJEiPplUg2VGRSk/pd2vtP1NPITzXwD1rgdlQwPHrPz65F0 Fy0Lvc9d6h0EQV81QZkhtgPAgSUuJyq52RcO0Kf3oSAgKJ4HlkehzNLHDAPHkuZbGp3d kNP8cpP5sRcVxhUZKHBh5p207E4hCotQS/Kjw9Oaq9WdQNr6nVefOzfBBgTdCwiNHFVE 77jMLZLM2TWMyTgu4AoK5osaOl8q73ag+A1wjvdHdm6zB6FkpORAovVRBXP2MeXJ8QRP n23w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ZyHddNGopGY/lza1UZlncFMjZJJM1j4g87L00tHisx8=; b=dvlZKM5aFiK8BB+hKrQqv2BQKPqFeO6TDb0LkZ0IfBdzSH0iHbAWBwwtqFffMVLQvc FDpx7t8QJ8zpaNO9tDQw2FuWNLie+y4QBkxnJqogZ2i6cGIf4FoG6ZOB8fgUbdHsd6JF T1QUfawOcob5EmkhED0iXfKWm/98i6xoO3y6WAoQ39GxGElsMwQvd1t59YZXWXPtnDTq bISOB4dALuq7JwxRC6Kji5yvgV4a0wgNHKRgfcJx499tVw0sOk1GHImw5rsKZmrvyWeT 130N2KD+toaQjvASHIPuNaTDyeq+esMP2Sz2vW8gR9v38ctDGFc2hnv1pfZRVRxGvFvV DQqQ== X-Gm-Message-State: 
AOAM531iMToZvLJ9MUDjdhX3aKxfc5N7mq7+d6+27sej3AJjWg7oyUZE S6swb+iIN+PA6JCcvmPu+roWJiVFC76w X-Google-Smtp-Source: ABdhPJwaci3mHy1I59FmDVZFexbfweOAJYQShAUkg2GBiTBX6eyMa46UQ8SopCwDR5guJydvHtX++tE3UPpV X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90a:c3:: with SMTP id v3mr48530pjd.0.1636583425446; Wed, 10 Nov 2021 14:30:25 -0800 (PST) Date: Wed, 10 Nov 2021 14:29:53 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-3-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 02/19] KVM: x86/mmu: Batch TLB flushes for a single zap From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When recursively handling a removed TDP page table, the TDP MMU will flush the TLBs and queue an RCU callback to free the PT. If the original change zapped a non-leaf SPTE at PG_LEVEL_1G or above, that change will result in many unnecessary TLB flushes when one would suffice. Queue all the PTs which need to be freed on a list and wait to queue RCU callbacks to free them until after all the recursive callbacks are done. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 88 ++++++++++++++++++++++++++++++-------- 1 file changed, 70 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 866c2b191e1e..5b31d046df78 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -220,7 +220,8 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level, - bool shared); + bool shared, + struct list_head *disconnected_sps); static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level) { @@ -302,6 +303,11 @@ static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp, * @shared: This operation may not be running under the exclusive use * of the MMU lock and the operation must synchronize with other * threads that might be modifying SPTEs. + * @disconnected_sps: If null, the TLBs will be flushed and the disconnected + * TDP MMU page will be queued to be freed after an RCU + * callback. If non-null the page will be added to the list + * and flushing the TLBs and queueing an RCU callback to + * free the page will be the caller's responsibility. * * Given a page table that has been removed from the TDP paging structure, * iterates through the page table to clear SPTEs and free child page tables. @@ -312,7 +318,8 @@ static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp, * early rcu_dereferences in the function. 
*/ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt, - bool shared) + bool shared, + struct list_head *disconnected_sps) { struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt)); int level = sp->role.level; @@ -371,13 +378,16 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt, } handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, old_child_spte, REMOVED_SPTE, level, - shared); + shared, disconnected_sps); } - kvm_flush_remote_tlbs_with_address(kvm, base_gfn, - KVM_PAGES_PER_HPAGE(level + 1)); - - call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); + if (disconnected_sps) { + list_add_tail(&sp->link, disconnected_sps); + } else { + kvm_flush_remote_tlbs_with_address(kvm, base_gfn, + KVM_PAGES_PER_HPAGE(level + 1)); + call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); + } } /** @@ -391,13 +401,21 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt, * @shared: This operation may not be running under the exclusive use of * the MMU lock and the operation must synchronize with other * threads that might be modifying SPTEs. + * @disconnected_sps: Only used if a page of page table memory has been + * removed from the paging structure by this change. + * If null, the TLBs will be flushed and the disconnected + * TDP MMU page will be queued to be freed after an RCU + * callback. If non-null the page will be added to the list + * and flushing the TLBs and queueing an RCU callback to + * free the page will be the caller's responsibility. * * Handle bookkeeping that might result from the modification of a SPTE. * This function must be called for all TDP SPTE modifications. */ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level, - bool shared) + bool shared, + struct list_head *disconnected_sps) { bool was_present = is_shadow_present_pte(old_spte); bool is_present = is_shadow_present_pte(new_spte); @@ -475,22 +493,39 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, */ if (was_present && !was_leaf && (pfn_changed || !is_present)) handle_removed_tdp_mmu_page(kvm, - spte_to_child_pt(old_spte, level), shared); + spte_to_child_pt(old_spte, level), shared, + disconnected_sps); } static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level, - bool shared) + bool shared, struct list_head *disconnected_sps) { __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, - shared); + shared, disconnected_sps); handle_changed_spte_acc_track(old_spte, new_spte, level); handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte, level); } /* - * tdp_mmu_set_spte_atomic - Set a TDP MMU SPTE atomically + * The TLBs must be flushed between the pages linked from disconnected_sps + * being removed from the paging structure and this function being called. + */ +static void handle_disconnected_sps(struct kvm *kvm, + struct list_head *disconnected_sps) +{ + struct kvm_mmu_page *sp; + struct kvm_mmu_page *next; + + list_for_each_entry_safe(sp, next, disconnected_sps, link) { + list_del(&sp->link); + call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); + } +} + +/* + * __tdp_mmu_set_spte_atomic - Set a TDP MMU SPTE atomically * and handle the associated bookkeeping. Do not mark the page dirty * in KVM's dirty bitmaps. * @@ -500,9 +535,10 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, * Returns: true if the SPTE was set, false if it was not. 
If false is returned, * this function will have no side-effects. */ -static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, - struct tdp_iter *iter, - u64 new_spte) +static inline bool __tdp_mmu_set_spte_atomic(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte, + struct list_head *disconnected_sps) { lockdep_assert_held_read(&kvm->mmu_lock); @@ -522,22 +558,32 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, return false; __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, - new_spte, iter->level, true); + new_spte, iter->level, true, disconnected_sps); handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level); return true; } +static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte) +{ + return __tdp_mmu_set_spte_atomic(kvm, iter, new_spte, NULL); +} + static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm, struct tdp_iter *iter) { + LIST_HEAD(disconnected_sps); + /* * Freeze the SPTE by setting it to a special, * non-present value. This will stop other threads from * immediately installing a present entry in its place * before the TLBs are flushed. */ - if (!tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_SPTE)) + if (!__tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_SPTE, + &disconnected_sps)) return false; kvm_flush_remote_tlbs_with_address(kvm, iter->gfn, @@ -553,6 +599,8 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm, */ WRITE_ONCE(*rcu_dereference(iter->sptep), 0); + handle_disconnected_sps(kvm, &disconnected_sps); + return true; } @@ -577,6 +625,8 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte, bool record_acc_track, bool record_dirty_log) { + LIST_HEAD(disconnected_sps); + lockdep_assert_held_write(&kvm->mmu_lock); /* @@ -591,7 +641,7 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte); __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, - new_spte, iter->level, false); + new_spte, iter->level, false, &disconnected_sps); if (record_acc_track) handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level); @@ -599,6 +649,8 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn, iter->old_spte, new_spte, iter->level); + + handle_disconnected_sps(kvm, &disconnected_sps); } static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, From patchwork Wed Nov 10 22:29:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613437 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17F27C433EF for ; Wed, 10 Nov 2021 22:30:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E75A561213 for ; Wed, 10 Nov 2021 22:30:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233826AbhKJWdX (ORCPT ); Wed, 10 Nov 2021 17:33:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233898AbhKJWdS (ORCPT ); Wed, 10 Nov 2021 17:33:18 -0500 Received: from mail-pf1-x44a.google.com 
Date: Wed, 10 Nov 2021 14:29:54 -0800
In-Reply-To: <20211110223010.1392399-1-bgardon@google.com>
Message-Id: <20211110223010.1392399-4-bgardon@google.com>
Subject: [RFC 03/19] KVM: x86/mmu: Factor flush and free up when zapping under MMU write lock
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon
Precedence: bulk
List-ID: X-Mailing-List: kvm@vger.kernel.org

When zapping a GFN range under the MMU write lock, there is no need to flush the TLBs for every zap. Instead, follow the lead of the legacy MMU and collect disconnected SPs to be freed after a flush at the end of the routine.
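The pattern the patch adopts looks roughly like the following sketch (simplified; zap_sptes_in_range() is a hypothetical stand-in for the zap loop in zap_gfn_range(), while disconnected_sps and handle_disconnected_sps() come from the previous patch in this series):

static bool zap_range_batched(struct kvm *kvm, struct kvm_mmu_page *root,
			      gfn_t start, gfn_t end)
{
	LIST_HEAD(disconnected_sps);
	bool flush;

	/*
	 * Zap SPTEs under the write lock; page tables disconnected from the
	 * paging structure are queued on the local list instead of being
	 * flushed and freed one at a time.
	 */
	flush = zap_sptes_in_range(kvm, root, start, end, &disconnected_sps);

	if (!list_empty(&disconnected_sps)) {
		/* A single flush covers every SP disconnected above. */
		kvm_flush_remote_tlbs(kvm);
		handle_disconnected_sps(kvm, &disconnected_sps);
		flush = false;
	}

	return flush;
}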
Signed-off-by: Ben Gardon Reviewed-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 5b31d046df78..a448f0f2d993 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -623,10 +623,9 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm, */ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte, bool record_acc_track, - bool record_dirty_log) + bool record_dirty_log, + struct list_head *disconnected_sps) { - LIST_HEAD(disconnected_sps); - lockdep_assert_held_write(&kvm->mmu_lock); /* @@ -641,7 +640,7 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte); __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, - new_spte, iter->level, false, &disconnected_sps); + new_spte, iter->level, false, disconnected_sps); if (record_acc_track) handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level); @@ -649,28 +648,32 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn, iter->old_spte, new_spte, iter->level); +} - handle_disconnected_sps(kvm, &disconnected_sps); +static inline void tdp_mmu_zap_spte(struct kvm *kvm, struct tdp_iter *iter, + struct list_head *disconnected_sps) +{ + __tdp_mmu_set_spte(kvm, iter, 0, true, true, disconnected_sps); } static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { - __tdp_mmu_set_spte(kvm, iter, new_spte, true, true); + __tdp_mmu_set_spte(kvm, iter, new_spte, true, true, NULL); } static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { - __tdp_mmu_set_spte(kvm, iter, new_spte, false, true); + __tdp_mmu_set_spte(kvm, iter, new_spte, false, true, NULL); } static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { - __tdp_mmu_set_spte(kvm, iter, new_spte, true, false); + __tdp_mmu_set_spte(kvm, iter, new_spte, true, false, NULL); } #define tdp_root_for_each_pte(_iter, _root, _start, _end) \ @@ -757,6 +760,7 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, gfn_t max_gfn_host = 1ULL << (shadow_phys_bits - PAGE_SHIFT); bool zap_all = (start == 0 && end >= max_gfn_host); struct tdp_iter iter; + LIST_HEAD(disconnected_sps); /* * No need to try to step down in the iterator when zapping all SPTEs, @@ -799,7 +803,7 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, continue; if (!shared) { - tdp_mmu_set_spte(kvm, &iter, 0); + tdp_mmu_zap_spte(kvm, &iter, &disconnected_sps); flush = true; } else if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { /* @@ -811,6 +815,12 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, } } + if (!list_empty(&disconnected_sps)) { + kvm_flush_remote_tlbs(kvm); + handle_disconnected_sps(kvm, &disconnected_sps); + flush = false; + } + rcu_read_unlock(); return flush; } From patchwork Wed Nov 10 22:29:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613439 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org 
Date: Wed, 10 Nov 2021 14:29:55 -0800
In-Reply-To: <20211110223010.1392399-1-bgardon@google.com>
Message-Id: <20211110223010.1392399-5-bgardon@google.com>
Subject: [RFC 04/19] KVM: x86/mmu: Yield while processing disconnected_sps
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon
Precedence: bulk
List-ID: X-Mailing-List: kvm@vger.kernel.org

When preparing to free disconnected SPs, the list can accumulate many entries; enough that it is likely necessary to yield while queuing RCU callbacks to free the SPs.
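The yield itself follows the standard TDP MMU rescheduling pattern: leave the RCU read-side critical section, reschedule against mmu_lock in whichever mode it is held, then re-enter RCU. In sketch form (these are the same kernel primitives used in the hunk below):

	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
		rcu_read_unlock();
		if (shared)
			cond_resched_rwlock_read(&kvm->mmu_lock);
		else
			cond_resched_rwlock_write(&kvm->mmu_lock);
		rcu_read_lock();
	}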
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index a448f0f2d993..c2a9f7acf8ef 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -513,7 +513,8 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, * being removed from the paging structure and this function being called. */ static void handle_disconnected_sps(struct kvm *kvm, - struct list_head *disconnected_sps) + struct list_head *disconnected_sps, + bool can_yield, bool shared) { struct kvm_mmu_page *sp; struct kvm_mmu_page *next; @@ -521,6 +522,16 @@ static void handle_disconnected_sps(struct kvm *kvm, list_for_each_entry_safe(sp, next, disconnected_sps, link) { list_del(&sp->link); call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); + + if (can_yield && + (need_resched() || rwlock_needbreak(&kvm->mmu_lock))) { + rcu_read_unlock(); + if (shared) + cond_resched_rwlock_read(&kvm->mmu_lock); + else + cond_resched_rwlock_write(&kvm->mmu_lock); + rcu_read_lock(); + } } } @@ -599,7 +610,7 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm, */ WRITE_ONCE(*rcu_dereference(iter->sptep), 0); - handle_disconnected_sps(kvm, &disconnected_sps); + handle_disconnected_sps(kvm, &disconnected_sps, false, true); return true; } @@ -817,7 +828,8 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if (!list_empty(&disconnected_sps)) { kvm_flush_remote_tlbs(kvm); - handle_disconnected_sps(kvm, &disconnected_sps); + handle_disconnected_sps(kvm, &disconnected_sps, + can_yield, shared); flush = false; } From patchwork Wed Nov 10 22:29:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613441 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ADA3DC433F5 for ; Wed, 10 Nov 2021 22:30:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9597F6117A for ; Wed, 10 Nov 2021 22:30:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233905AbhKJWda (ORCPT ); Wed, 10 Nov 2021 17:33:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233751AbhKJWdZ (ORCPT ); Wed, 10 Nov 2021 17:33:25 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6C8DC061766 for ; Wed, 10 Nov 2021 14:30:37 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id t7-20020a17090a5d8700b001a7604b85f5so1763146pji.8 for ; Wed, 10 Nov 2021 14:30:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=HxuPuqGkbTjSIg2Su8P7hCQVY2FZaCOTe+WC7xPaI7c=; b=nYv5Z9KofpdEzDPn8k6Dj3N3h4bGmdGBZSObIUm5pHTaj8GTpoM67DnJHAd17YOjSD S35CPXcrZXckEjm1mfZHXhTRkR0G6NuvdspFlthNlpEhNTubVhN6oK4ldmQkebEBnbkz gfZz3ei6yhgmd/BRb0Zojc0lIP+9Lnl43zs6Q3j6cIaNgCCSUZGO9mKrPP8sMlqB443X EP27exc5uIjp0as5AoSktWMHiEVQwmHSBjSrCCraUZkl1A2L0m74ICzb0xL4jAdbdGEV 
HXE+/F1b61tSCGhPD1UIWxO8KsX5c26c1BsSlntNdndzNNHm9S/szCyKvGnNGPPBA3DW CZgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=HxuPuqGkbTjSIg2Su8P7hCQVY2FZaCOTe+WC7xPaI7c=; b=0/R75OnygqYz7oFlD7LdJNHsE1R+KB8kS508rNQhyvDV7ug516DY9yd2XdQ3S9dUpN 5csSUR5JaS3jYhugepY9ec0CAQjW/FAV3UCAjVTYXwxYk6rs/afUnazHNJbrPy9pmdMO e8xKW5/BIiDs6uIUjiBz586At8WyAR/KLqiY5cwSouHax7oK76ulVMe/NTMTAusDOQxn JabFZUl9PEWSImFcfgmTVm37XXBdHedF+M4CPq/aYqN4FFg42ZPnZ/e6N+kpZrs7HMl1 e80u/t45POJMZJ/pS5JYH5hMDfdlzT2W8bZ0aZtIdLhnuj7EKkBexGMA6QFEd8TH2TDY qqCQ== X-Gm-Message-State: AOAM531tIzo93eQSiUDNLbxOHKx2VXPNDt07/Mc96mj17XTyzg8M+9Ug ouq6ISAxFz6qhujDRuXUV8TfeCV/YiHP X-Google-Smtp-Source: ABdhPJyo8uAV/k8H7CE2MqwmQTT8qNVtaTXISNS2gn9aK+hCVK8iTqvWvCYXG02UbdAOde0Tyl+J7KMn/Fj6 X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90a:284f:: with SMTP id p15mr48426pjf.1.1636583436858; Wed, 10 Nov 2021 14:30:36 -0800 (PST) Date: Wed, 10 Nov 2021 14:29:56 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-6-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 05/19] KVM: x86/mmu: Remove redundant flushes when disabling dirty logging From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org tdp_mmu_zap_spte_atomic flushes on every zap already, so no need to flush again after it's done. Signed-off-by: Ben Gardon Reviewed-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 4 +--- arch/x86/kvm/mmu/tdp_mmu.c | 21 ++++++--------------- arch/x86/kvm/mmu/tdp_mmu.h | 5 ++--- 3 files changed, 9 insertions(+), 21 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 354d2ca92df4..baa94acab516 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5870,9 +5870,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, if (is_tdp_mmu_enabled(kvm)) { read_lock(&kvm->mmu_lock); - flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush); - if (flush) - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); read_unlock(&kvm->mmu_lock); } } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c2a9f7acf8ef..1ece645e737f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1438,10 +1438,9 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, * Clear leaf entries which could be replaced by large mappings, for * GFNs within the slot. 
*/ -static bool zap_collapsible_spte_range(struct kvm *kvm, +static void zap_collapsible_spte_range(struct kvm *kvm, struct kvm_mmu_page *root, - const struct kvm_memory_slot *slot, - bool flush) + const struct kvm_memory_slot *slot) { gfn_t start = slot->base_gfn; gfn_t end = start + slot->npages; @@ -1452,10 +1451,8 @@ static bool zap_collapsible_spte_range(struct kvm *kvm, tdp_root_for_each_pte(iter, root, start, end) { retry: - if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) { - flush = false; + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - } if (!is_shadow_present_pte(iter.old_spte) || !is_last_spte(iter.old_spte, iter.level)) @@ -1475,30 +1472,24 @@ static bool zap_collapsible_spte_range(struct kvm *kvm, iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); goto retry; } - flush = true; } rcu_read_unlock(); - - return flush; } /* * Clear non-leaf entries (and free associated page tables) which could * be replaced by large mappings, for GFNs within the slot. */ -bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, - const struct kvm_memory_slot *slot, - bool flush) +void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, + const struct kvm_memory_slot *slot) { struct kvm_mmu_page *root; lockdep_assert_held_read(&kvm->mmu_lock); for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true) - flush = zap_collapsible_spte_range(kvm, root, slot, flush); - - return flush; + zap_collapsible_spte_range(kvm, root, slot); } /* diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 476b133544dd..3899004a5d91 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -64,9 +64,8 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, unsigned long mask, bool wrprot); -bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, - const struct kvm_memory_slot *slot, - bool flush); +void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, + const struct kvm_memory_slot *slot); bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, From patchwork Wed Nov 10 22:29:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613443 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ACCBCC433F5 for ; Wed, 10 Nov 2021 22:30:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 906D461213 for ; Wed, 10 Nov 2021 22:30:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233962AbhKJWdh (ORCPT ); Wed, 10 Nov 2021 17:33:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39058 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233898AbhKJWd1 (ORCPT ); Wed, 10 Nov 2021 17:33:27 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F12F7C061767 for ; Wed, 10 Nov 2021 14:30:39 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id jx2-20020a17090b46c200b001a62e9db321so1803894pjb.7 for ; Wed, 10 Nov 2021 14:30:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
Date: Wed, 10 Nov 2021 14:29:57 -0800
In-Reply-To: <20211110223010.1392399-1-bgardon@google.com>
Message-Id: <20211110223010.1392399-7-bgardon@google.com>
Subject: [RFC 06/19] KVM: x86/mmu: Introduce vcpu_make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon
Precedence: bulk
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add a wrapper around make_spte which conveys the vCPU-specific context of the function. This will facilitate factoring out all uses of the vCPU pointer from make_spte in subsequent commits. No functional change intended.
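For this patch the wrapper is a straight pass-through; the value comes later in the series, when the vCPU-derived inputs migrate into the wrapper. A sketch of where RFC 07 and RFC 08 take it (not the code added in this patch):

bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
		    struct kvm_memory_slot *slot, unsigned int pte_access,
		    gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
		    bool can_unsync, bool host_writable, u64 *new_spte)
{
	/*
	 * vCPU-specific context is computed here so that make_spte() itself
	 * no longer needs the vCPU (see RFC 07 and RFC 08).
	 */
	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
	u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
						       kvm_is_mmio_pfn(pfn));

	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
			 prefetch, can_unsync, host_writable,
			 ad_need_write_protect, mt_mask, new_spte);
}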
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/paging_tmpl.h | 6 +++--- arch/x86/kvm/mmu/spte.c | 17 +++++++++++++---- arch/x86/kvm/mmu/spte.h | 12 ++++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++--- 5 files changed, 29 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index baa94acab516..2ada6dee920a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2723,7 +2723,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, was_rmapped = 1; } - wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch, + wrprot = vcpu_make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch, true, host_writable, &spte); if (*sptep == spte) { diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index f87d36898c44..edb8ebd1a775 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -1129,9 +1129,9 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) spte = *sptep; host_writable = spte & shadow_host_writable_mask; slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); - make_spte(vcpu, sp, slot, pte_access, gfn, - spte_to_pfn(spte), spte, true, false, - host_writable, &spte); + vcpu_make_spte(vcpu, sp, slot, pte_access, gfn, + spte_to_pfn(spte), spte, true, false, + host_writable, &spte); flush |= mmu_spte_update(sptep, spte); } diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 0c76c45fdb68..04d26e913941 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -90,10 +90,9 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) } bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, - unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, - u64 old_spte, bool prefetch, bool can_unsync, - bool host_writable, u64 *new_spte) + struct kvm_memory_slot *slot, unsigned int pte_access, + gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, + bool can_unsync, bool host_writable, u64 *new_spte) { int level = sp->role.level; u64 spte = SPTE_MMU_PRESENT_MASK; @@ -191,6 +190,16 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, return wrprot; } +bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, + struct kvm_memory_slot *slot, unsigned int pte_access, + gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, + bool can_unsync, bool host_writable, u64 *new_spte) +{ + return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte, + prefetch, can_unsync, host_writable, new_spte); + +} + u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) { u64 spte = SPTE_MMU_PRESENT_MASK; diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index cc432f9a966b..14f18082d505 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -330,10 +330,14 @@ static inline u64 get_mmio_spte_generation(u64 spte) } bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, - unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, - u64 old_spte, bool prefetch, bool can_unsync, - bool host_writable, u64 *new_spte); + struct kvm_memory_slot *slot, unsigned int pte_access, + gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, + bool can_unsync, bool host_writable, u64 *new_spte); +bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, + struct kvm_memory_slot *slot, + unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, + u64 old_spte, bool prefetch, bool can_unsync, + bool host_writable, u64 
*new_spte); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 1ece645e737f..836eadd4e73a 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -980,9 +980,10 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, if (unlikely(!fault->slot)) new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL); else - wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn, - fault->pfn, iter->old_spte, fault->prefetch, true, - fault->map_writable, &new_spte); + wrprot = vcpu_make_spte(vcpu, sp, fault->slot, ACC_ALL, + iter->gfn, fault->pfn, iter->old_spte, + fault->prefetch, true, + fault->map_writable, &new_spte); if (new_spte == iter->old_spte) ret = RET_PF_SPURIOUS; From patchwork Wed Nov 10 22:29:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613447 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E0AE1C433EF for ; Wed, 10 Nov 2021 22:30:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C96766124C for ; Wed, 10 Nov 2021 22:30:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233716AbhKJWdl (ORCPT ); Wed, 10 Nov 2021 17:33:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233932AbhKJWdb (ORCPT ); Wed, 10 Nov 2021 17:33:31 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A02D4C061208 for ; Wed, 10 Nov 2021 14:30:42 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id k63-20020a628442000000b004812ea67c34so2755458pfd.2 for ; Wed, 10 Nov 2021 14:30:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=7mdCvtcLdHr4CBiJSM0JyhidXjaakUIXa0mwfIITG4U=; b=XN6a3I6PjHvm5Ar/2g+NNS3gv+POg2AhfK/2qSa/WIUkP5Unpm2fS1p2roOVcCTF2E ak5fz8+RtInZGm/GxLJDAl5DPUvywcDbiLqh0akEy6T25/vLL4QsbK6XLdSHe2sZLgeP 63/6uu05RAXgqcmvtOMr2p4bK0bPa55sBHKzAPngPb7tiOHQ/+QAYTdV1AJm+fsLbuLp q8Hl+HOHGDF1DmkLv2LEB7SrHOGCRJZgvPw/W3iCar35oKr/OTrC34ecbo7YShyIA7Tk TFrQc+T+78K8bXiPjwG5DazWhnUNxmeaDBTtXB/wxj5xKTgA6ieBuGy+ugssRzH3PWjZ LjYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=7mdCvtcLdHr4CBiJSM0JyhidXjaakUIXa0mwfIITG4U=; b=pxFlnepDUrJ1w4WuiJtGy4JhpzrnqD6XMYO9BCsKy6trh6ix5ojvLhNQxbnLmubLz1 rv1vSbsxzjPwmADROSg8oZvepfPaqasHJG8j2rUXC1woWM8J5+WyBxRBNqvnRCgDgGUn mY5kmARiHMi8JvR9GQ+uUAg1DiC7prm0SqbFGHTvTe96SD/ad2WE05aiCcHCUFJwmC0n ayyCAlK5idPwRT5TkL2vZGT3eRZCMMPX3KiIV6Ann1250d+aYK32CqRGQ7J1TUtuwXF3 0tQwEGHQ0uquoni9I4iO/jbZGVIVSkbxxM3HYkqtgibZaC9xQud1kBA+wZ4vfKwEky0F eMeg== X-Gm-Message-State: AOAM530AkGp48gomFYJxpIDnubFgmSFyCR2sA9lTWK4dTpTngKyZxIXj YXtHH6UCyhTDG69WfFhA8EwrfObzWVpJ X-Google-Smtp-Source: 
ABdhPJzgUfoZW15hmkTkx4ifInPQGv4C5+MNkhrE8JA+yI2RTx3AKNuUgQK/Us4wcCAzHAXrVvzbwGpbNnyS X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a65:6a4a:: with SMTP id o10mr1488725pgu.357.1636583442186; Wed, 10 Nov 2021 14:30:42 -0800 (PST) Date: Wed, 10 Nov 2021 14:29:58 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-8-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 07/19] KVM: x86/mmu: Factor wrprot for nested PML out of make_spte From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When running a nested VM, KVM write protects SPTEs in the EPT/NPT02 instead of using PML for dirty tracking. This avoids expensive translation later, when emptying the Page Modification Log. In service of removing the vCPU pointer from make_spte, factor the check for nested PML out of the function. Signed-off-by: Ben Gardon Signed-off-by: Sean Christopherson Reviewed-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 10 +++++++--- arch/x86/kvm/mmu/spte.h | 3 ++- 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 04d26e913941..3cf08a534a16 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -92,7 +92,8 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, - bool can_unsync, bool host_writable, u64 *new_spte) + bool can_unsync, bool host_writable, bool ad_need_write_protect, + u64 *new_spte) { int level = sp->role.level; u64 spte = SPTE_MMU_PRESENT_MASK; @@ -100,7 +101,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, if (sp->role.ad_disabled) spte |= SPTE_TDP_AD_DISABLED_MASK; - else if (kvm_vcpu_ad_need_write_protect(vcpu)) + else if (ad_need_write_protect) spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK; /* @@ -195,8 +196,11 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, u64 *new_spte) { + bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu); + return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte, - prefetch, can_unsync, host_writable, new_spte); + prefetch, can_unsync, host_writable, + ad_need_write_protect, new_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 14f18082d505..bcf58602f224 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -332,7 +332,8 @@ static inline u64 get_mmio_spte_generation(u64 spte) bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, - bool can_unsync, bool host_writable, u64 *new_spte); + bool can_unsync, bool host_writable, bool ad_need_write_protect, + u64 *new_spte); bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, From patchwork 
Wed Nov 10 22:29:59 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12613445
Date: Wed, 10 Nov 2021 14:29:59 -0800
In-Reply-To: <20211110223010.1392399-1-bgardon@google.com>
Message-Id: <20211110223010.1392399-9-bgardon@google.com>
Subject: [RFC 08/19] KVM: x86/mmu: Factor mt_mask out of make_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David
Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In service of removing the vCPU pointer from make_spte, factor the memory type mask calculation out of make_spte. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 9 +++++---- arch/x86/kvm/mmu/spte.h | 2 +- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 3cf08a534a16..75c666d3e7f1 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -93,7 +93,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, - u64 *new_spte) + u64 mt_mask, u64 *new_spte) { int level = sp->role.level; u64 spte = SPTE_MMU_PRESENT_MASK; @@ -130,8 +130,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, if (level > PG_LEVEL_4K) spte |= PT_PAGE_SIZE_MASK; if (tdp_enabled) - spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn, - kvm_is_mmio_pfn(pfn)); + spte |= mt_mask; if (host_writable) spte |= shadow_host_writable_mask; @@ -197,10 +196,12 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool can_unsync, bool host_writable, u64 *new_spte) { bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu); + u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn, + kvm_is_mmio_pfn(pfn)); return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte, prefetch, can_unsync, host_writable, - ad_need_write_protect, new_spte); + ad_need_write_protect, mt_mask, new_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index bcf58602f224..e739f2ebf844 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -333,7 +333,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, - u64 *new_spte); + u64 mt_mask, u64 *new_spte); bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, From patchwork Wed Nov 10 22:30:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7057BC433F5 for ; Wed, 10 Nov 2021 22:30:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 55BEC61284 for ; Wed, 10 Nov 2021 22:30:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234013AbhKJWdo (ORCPT ); Wed, 10 Nov 2021 17:33:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233911AbhKJWdh (ORCPT ); Wed, 10 Nov 2021 17:33:37 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DC132C061766 for ; Wed, 10 Nov 2021 14:30:48 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id 
y124-20020a623282000000b0047a09271e49so2701303pfy.16 for ; Wed, 10 Nov 2021 14:30:48 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:00 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-10-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 09/19] KVM: x86/mmu: Remove need for a vcpu from kvm_slot_page_track_is_active From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org kvm_slot_page_track_is_active only uses its vCPU argument to get a pointer to the associated struct kvm, so just pass in the struct kvm to remove the need for a vCPU pointer. No functional change intended.
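A minimal sketch of the payoff (the wrapper below is hypothetical and not part of this patch): once the function takes a struct kvm directly, a VM-scoped path that has no vCPU in hand can query write tracking with only the state it already holds, for example:

	/*
	 * Illustrative only: a made-up VM-scoped helper. Nothing beyond
	 * the struct kvm and the memslot the caller already holds is
	 * needed any more.
	 */
	static bool gfn_is_write_tracked(struct kvm *kvm,
					 struct kvm_memory_slot *slot,
					 gfn_t gfn)
	{
		return kvm_slot_page_track_is_active(kvm, slot, gfn,
						     KVM_PAGE_TRACK_WRITE);
	}

Existing vCPU callers simply pass vcpu->kvm, as the hunks below show.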
Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_page_track.h | 2 +- arch/x86/kvm/mmu/mmu.c | 4 ++-- arch/x86/kvm/mmu/page_track.c | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index 9d4a3b1b25b9..e99a30a4d38b 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -63,7 +63,7 @@ void kvm_slot_page_track_add_page(struct kvm *kvm, void kvm_slot_page_track_remove_page(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode); -bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu, +bool kvm_slot_page_track_is_active(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2ada6dee920a..7d0da79668c0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2587,7 +2587,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, * track machinery is used to write-protect upper-level shadow pages, * i.e. this guards the role.level == 4K assertion below! */ - if (kvm_slot_page_track_is_active(vcpu, slot, gfn, KVM_PAGE_TRACK_WRITE)) + if (kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn, KVM_PAGE_TRACK_WRITE)) return -EPERM; /* @@ -3884,7 +3884,7 @@ static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu, * guest is writing the page which is write tracked which can * not be fixed by page fault handler. */ - if (kvm_slot_page_track_is_active(vcpu, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE)) + if (kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE)) return true; return false; diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index cc4eb5b7fb76..35c221d5f6ce 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -173,7 +173,7 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page); /* * check if the corresponding access on the specified guest page is tracked. 
*/ -bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu, +bool kvm_slot_page_track_is_active(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode) { @@ -186,7 +186,7 @@ bool kvm_slot_page_track_is_active(struct kvm_vcpu *vcpu, return false; if (mode == KVM_PAGE_TRACK_WRITE && - !kvm_page_track_write_tracking_enabled(vcpu->kvm)) + !kvm_page_track_write_tracking_enabled(kvm)) return false; index = gfn_to_index(gfn, slot->base_gfn, PG_LEVEL_4K); From patchwork Wed Nov 10 22:30:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DAB3C433FE for ; Wed, 10 Nov 2021 22:30:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 423DC61381 for ; Wed, 10 Nov 2021 22:30:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233959AbhKJWdp (ORCPT ); Wed, 10 Nov 2021 17:33:45 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39130 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233947AbhKJWdk (ORCPT ); Wed, 10 Nov 2021 17:33:40 -0500 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 04325C061208 for ; Wed, 10 Nov 2021 14:30:52 -0800 (PST) Received: by mail-pg1-x54a.google.com with SMTP id 65-20020a630344000000b002d9865f61efso2198985pgd.16 for ; Wed, 10 Nov 2021 14:30:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=42+d8YjAZxClfWgdhFomXy37B3KIO2GCLS50eyLEhmM=; b=Io7wOvgnheuOgfMytVU2lNhyzaqT1yMo7DnIxxVjg3AiREAoEdMqEOYFJEwc8qusVi brk8qH7Hn8m+rhZrHwzYHFlDS/t96TGUkgWbNW7tlbOm51sf/irvYkVc+3agDMVaYvwa LS4KSwNLvyXPv52GJHfxArHXpSGdD2EtuHSyAhfOupFdyWmw/uLz+qN2YoAzSpOUarwY W3CvNk1Kg1ifL3rbu0n7cY6OjCThFfEQqxkY8b1tyLhdfvdEzLnZd69DN7cg2JADv1vN LkHLSkdh5Ro5vtGgPwoUOZbJ+3fxG5dudh4JnMzyhbNIy9pF6VIQ69KdQvtgnR32oBnf Hc7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=42+d8YjAZxClfWgdhFomXy37B3KIO2GCLS50eyLEhmM=; b=MftPl+2LXX2w5xbZU5JwTlg3evuYHlDVlLrITk57Dn0YiF6pxbLj+KEQ3Tk5XSROrB 7YReleOc6ivMnFnLP6aBH01fgSK/MRIooOBhsKfbaJVUf41/L7rccYgTtJycmZ69KLTi jJrSvnzKwgRRe+obterTguxYDdpWgrhJLFuChw+JnrRM8/4Q93dFMWeIlE6WkZVzDlUA C7brm/rwCtTuvcraypIpQZnCZpKOWEcZyy6f24y48gp3Wmfy4J2AU1f8anTewe41wHPZ ghxkRuRV7OVRwvjI8+3JpGdU2PSG74d/U9kqgtGSRKO2lardtylBbtu0VeUmmrs31/dV +n6g== X-Gm-Message-State: AOAM533ZpMGRniE9uf3X88BP5aAufrZKw3nywnkCfgVdynqNxAleN2Vw 03jSgm6oA3/lt/G8LDCPQdTBQc/ypcKS X-Google-Smtp-Source: ABdhPJwwo4pGevtCowaRTd49LsoEZHU1m3kSXl8vfKEeqFOgmriW6TQS89bHjWS2JWZtAPzyZnz/XH5+2dK+ X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90a:284f:: with SMTP id p15mr48592pjf.1.1636583451110; Wed, 10 Nov 2021 14:30:51 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:01 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-11-bgardon@google.com> 
Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 10/19] KVM: x86/mmu: Remove need for a vcpu from mmu_try_to_unsync_pages From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The vCPU argument to mmu_try_to_unsync_pages is now only used to get a pointer to the associated struct kvm, so pass in the kvm pointer from the beginning to remove the need for a vCPU when calling the function. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 16 ++++++++-------- arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/mmu/spte.c | 2 +- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 7d0da79668c0..1e890509b93f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2561,10 +2561,10 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) return r; } -static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) +static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) { trace_kvm_mmu_unsync_page(sp); - ++vcpu->kvm->stat.mmu_unsync; + ++kvm->stat.mmu_unsync; sp->unsync = 1; kvm_mmu_mark_parents_unsync(sp); @@ -2576,7 +2576,7 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must * be write-protected. */ -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch) { struct kvm_mmu_page *sp; @@ -2587,7 +2587,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, * track machinery is used to write-protect upper-level shadow pages, * i.e. this guards the role.level == 4K assertion below! */ - if (kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn, KVM_PAGE_TRACK_WRITE)) + if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE)) return -EPERM; /* @@ -2596,7 +2596,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, * that case, KVM must complete emulation of the guest TLB flush before * allowing shadow pages to become unsync (writable by the guest). 
*/ - for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) { + for_each_gfn_indirect_valid_sp(kvm, sp, gfn) { if (!can_unsync) return -EPERM; @@ -2615,7 +2615,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, */ if (!locked) { locked = true; - spin_lock(&vcpu->kvm->arch.mmu_unsync_pages_lock); + spin_lock(&kvm->arch.mmu_unsync_pages_lock); /* * Recheck after taking the spinlock, a different vCPU @@ -2630,10 +2630,10 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, } WARN_ON(sp->role.level != PG_LEVEL_4K); - kvm_unsync_page(vcpu, sp); + kvm_unsync_page(kvm, sp); } if (locked) - spin_unlock(&vcpu->kvm->arch.mmu_unsync_pages_lock); + spin_unlock(&kvm->arch.mmu_unsync_pages_lock); /* * We need to ensure that the marking of unsync pages is visible diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 52c6527b1a06..1073d10cce91 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -118,7 +118,7 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu) kvm_x86_ops.cpu_dirty_log_size; } -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch); void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 75c666d3e7f1..b7271daa06c5 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -160,7 +160,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, * e.g. it's write-tracked (upper-level SPs) or has one or more * shadow pages and unsync'ing pages is not allowed. 
*/ - if (mmu_try_to_unsync_pages(vcpu, slot, gfn, can_unsync, prefetch)) { + if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) { pgprintk("%s: found shadow page for %llx, marking ro\n", __func__, gfn); wrprot = true; From patchwork Wed Nov 10 22:30:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613453 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73047C4332F for ; Wed, 10 Nov 2021 22:31:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5EEBC61264 for ; Wed, 10 Nov 2021 22:31:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234033AbhKJWdr (ORCPT ); Wed, 10 Nov 2021 17:33:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233985AbhKJWdm (ORCPT ); Wed, 10 Nov 2021 17:33:42 -0500 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 78D13C06127A for ; Wed, 10 Nov 2021 14:30:54 -0800 (PST) Received: by mail-pg1-x54a.google.com with SMTP id u6-20020a63f646000000b002dbccd46e61so2188621pgj.18 for ; Wed, 10 Nov 2021 14:30:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=kuLHe6e00eLqYAgnzNJ6bYA8Sh2Prapi6rstCXNiDsA=; b=lnI1+kEabppgiUIHN+wwolHhYiIObAfxxJmXyqQ4dQLUkXQAXdoU3vOfmwZdlyEyWR 1xY7S4WV4tbj7B6FbCdgqztWGP1YWwzjYbfPh7TdrtzPzYmV9WeDcH4aerdaujpPjxum lPGSfaJnHzO9E3dtYABLxaXZWcGealMZ/fXQBui4k7KUukcITmBdDxjrruRQKJvGukMb RQuMXtjJ9gJuPSktwQhvbBO/+A0boTU8OR4uEVRuPaO8JPRUb8uFXN6LaD8gColctetV 4Yci/bND2JX7QM+MxIZlJp+eT61PleH+OgzHHTeByK00mXVBLgm+BsZOnJWSH6vm+agQ FtDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=kuLHe6e00eLqYAgnzNJ6bYA8Sh2Prapi6rstCXNiDsA=; b=6fkm0e1+gYhmN2zMsV6D2UKLt0d64lwLSyqrsCEueXVT0DbImuV33Fb5T/cX+m+wCu M1yaF/3NGI6tSBhTOhD851E0nCjTLwecuQdZPjUFuBJ6xQb2vDoK5EME9v57ylJYTfTW Tonk06pVDpVzOM/govDD22h51BroG9K0Y/bSfH3b/TZ9XbsqF5dji3LszrGKQdebNcpA gXh7mMHGG4dwxwokgTrnge5Y6YeC8a5sglJ9/jW6VreTev5FPNGwjMKE7CatsqHB/8ZU fhODCLUMn/wnmrBFErLHeAMEt9fARGcCDXauBPx2bvXwg0wyMWP3xNxrxEu8aiTb/Q5m TPVA== X-Gm-Message-State: AOAM532OdubLp+/O6tX63vwmaAlXbQxfyremRo+LsswSow/9QqiCLe0R FzmPQuBT311lglonauHEGUZ4Yy4dFoz5 X-Google-Smtp-Source: ABdhPJz01H22b5GUVOY5rp3KousEMs96eKNC+pb2NAZAn6edDHG5YGrvBlD81bLMGSnt78nu1exsqRQJlY0A X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:aa7:8059:0:b0:47e:5de6:5bc7 with SMTP id y25-20020aa78059000000b0047e5de65bc7mr2360482pfm.78.1636583454003; Wed, 10 Nov 2021 14:30:54 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:02 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-12-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 11/19] KVM: x86/mmu: Factor shadow_zero_check out 
of make_spte From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In the interest of developing a version of make_spte that can function without a vCPU pointer, factor out the shadow_zero_check to be an additional argument to the function. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 11 +++++++---- arch/x86/kvm/mmu/spte.h | 3 ++- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index b7271daa06c5..d3b059e96c6e 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -93,7 +93,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, - u64 mt_mask, u64 *new_spte) + u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, + u64 *new_spte) { int level = sp->role.level; u64 spte = SPTE_MMU_PRESENT_MASK; @@ -176,9 +177,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, if (prefetch) spte = mark_spte_for_access_track(spte); - WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level), + WARN_ONCE(is_rsvd_spte(shadow_zero_check, spte, level), "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level, - get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level)); + get_rsvd_bits(shadow_zero_check, spte, level)); if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) { /* Enforced by kvm_mmu_hugepage_adjust.
*/ @@ -198,10 +199,12 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu); u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn)); + struct rsvd_bits_validate *shadow_zero_check = &vcpu->arch.mmu->shadow_zero_check; return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte, prefetch, can_unsync, host_writable, - ad_need_write_protect, mt_mask, new_spte); + ad_need_write_protect, mt_mask, shadow_zero_check, + new_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index e739f2ebf844..6134a10487c4 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -333,7 +333,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, - u64 mt_mask, u64 *new_spte); + u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, + u64 *new_spte); bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, From patchwork Wed Nov 10 22:30:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613455 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0B99C433F5 for ; Wed, 10 Nov 2021 22:31:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE0C461269 for ; Wed, 10 Nov 2021 22:31:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234048AbhKJWds (ORCPT ); Wed, 10 Nov 2021 17:33:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39156 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234018AbhKJWdo (ORCPT ); Wed, 10 Nov 2021 17:33:44 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBC46C06120C for ; Wed, 10 Nov 2021 14:30:56 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id 184-20020a6217c1000000b0049f9aad0040so2685448pfx.21 for ; Wed, 10 Nov 2021 14:30:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=U0pc7GNjNaZx6eJ03QWleInOW3Gt7Fz+Qc/8LyqURhE=; b=px2lwMl/TZx0yUhF4SwNScmSrQAUFBznydcVGXRUwo/79hhD0xBfJ3/LE9EB545hKE HLfb07EwbOpxADghWPpD89GqAwaOVd9LQFKKUe4oNZzV18T0FwtJhKUpgBziskAKWqHu 9fOTUHgSI3rik7QjveUsjqyTjKBNTeD7tTZu+zqECNEEXyTF41Bwl86wH2WTMvExAaCG OFAeVw+LtiyW3rdHd2FCFGLBLI8C1ik2ZQbb5jidCHAwu3kte/5rz17Fwq+usTZXmYP+ OjwRBnMMAQI0NjT84Ipa7zkh42kIMc/dUfOxZPbUC6T5WJ2mqMxmOGRHujdULkcmYgz8 mRnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=U0pc7GNjNaZx6eJ03QWleInOW3Gt7Fz+Qc/8LyqURhE=; b=HEvFRsMD5YW3iRK2OqanpHXjDJ3MlRswwQg64x11gG+nEd2Ih/KSnNRrQcVxP9GvVH rHyPQaRkPTI603mIGiZG/zhBVTZ2eFkli5ad6UYf86kY/fuWUAaZhflUTJ1/AiUC8ZIJ 3yzkaovS084hACI6pxXwxwDG3bDXh1wrbJvLH/IJxEvRcnSHxNXqwNCNJC3+xnrRr7U8 
Date: Wed, 10 Nov 2021 14:30:03 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-13-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 12/19] KVM: x86/mmu: Replace vcpu argument with kvm pointer in make_spte From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Now that nothing in make_spte actually needs the vCPU argument, just pass in a pointer to the struct kvm. This allows the function to be used in situations where there is no relevant struct vcpu. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 8 ++++---- arch/x86/kvm/mmu/spte.h | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index d3b059e96c6e..d98723b14cec 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -89,7 +89,7 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) E820_TYPE_RAM); } -bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, +bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, @@ -161,7 +161,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, * e.g. it's write-tracked (upper-level SPs) or has one or more * shadow pages and unsync'ing pages is not allowed.
*/ WARN_ON(level > PG_LEVEL_4K); - mark_page_dirty_in_slot(vcpu->kvm, slot, gfn); + mark_page_dirty_in_slot(kvm, slot, gfn); } *new_spte = spte; @@ -201,7 +201,7 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, kvm_is_mmio_pfn(pfn)); struct rsvd_bits_validate *shadow_zero_check = &vcpu->arch.mmu->shadow_zero_check; - return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte, + return make_spte(vcpu->kvm, sp, slot, pte_access, gfn, pfn, old_spte, prefetch, can_unsync, host_writable, ad_need_write_protect, mt_mask, shadow_zero_check, new_spte); diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 6134a10487c4..5bb055688080 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -329,7 +329,7 @@ static inline u64 get_mmio_spte_generation(u64 spte) return gen; } -bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, +bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, From patchwork Wed Nov 10 22:30:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613457 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43ECBC433EF for ; Wed, 10 Nov 2021 22:31:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2D35461269 for ; Wed, 10 Nov 2021 22:31:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234058AbhKJWdu (ORCPT ); Wed, 10 Nov 2021 17:33:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233985AbhKJWdr (ORCPT ); Wed, 10 Nov 2021 17:33:47 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3DF7C06127A for ; Wed, 10 Nov 2021 14:30:59 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id l10-20020a17090a4d4a00b001a6f817f57eso1808798pjh.3 for ; Wed, 10 Nov 2021 14:30:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=LbThUuaOpzCbOzZAIPBZcmEtqbOiPlEJtesgwMaSYDQ=; b=GdALKheein9Cb85IAuoDwJOYJGpvSzm5GwacVa/XVjVCJl1ynuK+EpRV3FPQMTk9CY 9THiBOiwH1i4LGVI97p6IPgtVpxwEkZy9Mtvs6Ih1BjdW80NRz43d0dlyG739eeZ666F ghqI8TdJmMwDdm2sify4R0wlPK7OVgv5K0iCLdiK0B406bBxCvjmd7gcFjZseXynMAHn UpZ/FahkjP9LPAUzx8xeLTKs98McpM+9hGDrgJK1DX/e8ezLHlPe9JLayL7KL4ijA157 JzY89qViwsCprqSzX1+rheQPZNA0F8CzdLSQ4x7FnZIM15Gcjbj4v7E0BluWlP7pMBY4 wXRw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=LbThUuaOpzCbOzZAIPBZcmEtqbOiPlEJtesgwMaSYDQ=; b=DY7zjX+glBa9UqT9UPOv1UQwqI2lUZbeew6OswOUSm1vYqGWwvNKaFT5SDdI5ngnim o4bjYNEVSJ0WRQLUY1PQ0ForFKq8Bfp/LGZzXQ4mKkcoiOwPNbLVrA3INQZwqHdBnfxc ZFjaPzEo55CYGOgBmjdUTv0V/BjaFlnVtYlHLykVuF5y6X2dFLFrvm/R2svRIChK/eg8 dj5/8MlubnIk2EuJlYy75RaE3hldSRPvK+s9pZAGVEA1boVhfin4+/v9pLEWPgwvhvux 
1GUHIVMutz3xqxcYayRdhJPuMVirrzUbNjBETX4hRyxBF7hK01xUqu7Dvozeyf3zTNpF iOUw== X-Gm-Message-State: AOAM532eAv5pwsuPbet/Khut3Z5UbQr0+k979cyVTbXwmwQp0T9/Qdo1 WLxetNk5qScWVGm1D12EzMq2C9rB10b2 X-Google-Smtp-Source: ABdhPJwVt+Qc1aJV9QEwJFviIGlNtIREHh7km4Ycaep0pTfsFtmaYAT/hX7M+OXMOkXF8niNc+7/1tfKcBRF X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90b:384d:: with SMTP id nl13mr2941669pjb.80.1636583459164; Wed, 10 Nov 2021 14:30:59 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:04 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-14-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 13/19] KVM: x86/mmu: Factor out the meat of reset_tdp_shadow_zero_bits_mask From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Factor out the implementation of reset_tdp_shadow_zero_bits_mask to a helper function which does not require a vCPU pointer. The only element of the struct kvm_mmu context used by the function is the shadow root level, so pass that in too instead of the mmu context. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1e890509b93f..fdf0f15ab19d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4450,17 +4450,14 @@ static inline bool boot_cpu_is_amd(void) * possible, however, kvm currently does not do execution-protection. */ static void -reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, - struct kvm_mmu *context) +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check, + int shadow_root_level) { - struct rsvd_bits_validate *shadow_zero_check; int i; - shadow_zero_check = &context->shadow_zero_check; - if (boot_cpu_is_amd()) __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), - context->shadow_root_level, false, + shadow_root_level, false, boot_cpu_has(X86_FEATURE_GBPAGES), false, true); else @@ -4470,12 +4467,20 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, if (!shadow_me_mask) return; - for (i = context->shadow_root_level; --i >= 0;) { + for (i = shadow_root_level; --i >= 0;) { shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask; shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask; } } +static void +reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, + struct kvm_mmu *context) +{ + build_tdp_shadow_zero_bits_mask(&context->shadow_zero_check, + context->shadow_root_level); +} + /* * as the comments in reset_shadow_zero_bits_mask() except it * is the shadow page table for intel nested guest. 
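A rough sketch of how the factored-out helper is meant to be consumed (hypothetical caller, not part of this patch): a path that has no vCPU can build the reserved-bit check on its own, from a root level it already knows, and later hand it to make_spte:

	/*
	 * Sketch only: shadow_root_level is assumed to come from the
	 * caller's own context (e.g. the TDP MMU root level), not from
	 * a struct kvm_mmu.
	 */
	struct rsvd_bits_validate shadow_zero_check;

	build_tdp_shadow_zero_bits_mask(&shadow_zero_check, shadow_root_level);
	/* ... &shadow_zero_check is later passed as make_spte()'s shadow_zero_check argument ... */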
From patchwork Wed Nov 10 22:30:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E47BCC433F5 for ; Wed, 10 Nov 2021 22:31:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CD8BD6117A for ; Wed, 10 Nov 2021 22:31:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233772AbhKJWdw (ORCPT ); Wed, 10 Nov 2021 17:33:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39086 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233583AbhKJWdu (ORCPT ); Wed, 10 Nov 2021 17:33:50 -0500 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 68A4AC061766 for ; Wed, 10 Nov 2021 14:31:02 -0800 (PST) Received: by mail-pg1-x54a.google.com with SMTP id p13-20020a63c14d000000b002da483902b1so2207185pgi.12 for ; Wed, 10 Nov 2021 14:31:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=a4a77MBQXK3EPSbKlE5NpJfybpZMYGifteD+19ZgEjc=; b=IuG2Y7KCWiPU+iXM0AcH5e0ewPXWRzUhuQvhQg+veDfzVu0xJ5nNA8aZU1tmXgm22+ djWMUC9RyVAVJfEgFqPIqcqW0ttS2in9eeqMLh3Fwq7KgzCtsNAh2fYviM49QeMunRZr lfKm3GBZO7/uaD112sGEUVIvE1cgQwUZgr+NnAg9wJU4p+cYWPnnkAOMCYBso0nZtFOH abi0NbVX9XGtMZfs5x1NL/TNo8wC16iOgZw2D3gkoACpnQ/iLZlkGc9oJp0kZfVMt+DK /A3l1J3hMgLNm9u6KsHIXEqEECTUiv2ggnEMwlDrSmKsA719XeH8nRMKqniwngiw1rKe mYbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=a4a77MBQXK3EPSbKlE5NpJfybpZMYGifteD+19ZgEjc=; b=ip8dFYPLb4ZYtW4YEw/LgyAn1E60ihkERVcrnxjhbXLGUy/AU4jGQ2Dhbs+klZa9Bp WiB1alwsA0jL/rGPrDFvfxIAioR4ypVVH0bsHPNmLdS9yMXzKEz11UnA7irmF3bB7Usu PQSuvKwgCGAV8DzpTmKGh+gOwKBKwkpbsyReYjDSftXkgxUzps3tnektvVWoBo+sGnC4 gqA3agJ8utFjrfqLOgm0LqDSre/xSfiYUPU2RyfCECaOskb2WFM23xT++OGjPkEL+oph CiDfuEgGIVF+Oh1slx0lkRdEc4DjLNx1yPjzf7BqolxuP16gFPL/JhyPl+j8B/KqDg+3 aZpQ== X-Gm-Message-State: AOAM5334XDwShFXN8FwZJeXdZ3vgK26yKzR3NBsICtgY0U18UQS2kN7P XsWRn3fOXLyP04ybTEVcT90D2cLkAlGT X-Google-Smtp-Source: ABdhPJy5pNHY3QLlEDTlYU+CYFqdW3NuAm+AtIIaMKz9xm6e82UqadMagYn2vdF+giyTsRw7zrxUiVPTtj41 X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a05:6a00:1583:b0:49f:dc1c:a0fe with SMTP id u3-20020a056a00158300b0049fdc1ca0femr2489601pfk.46.1636583461905; Wed, 10 Nov 2021 14:31:01 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:05 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-15-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 14/19] KVM: x86/mmu: Propagate memslot const qualifier From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian 
Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In preparation for implementing in-place hugepage promotion, various functions will need to be called from zap_collapsible_spte_range, which has the const qualifier on its memslot argument. Propagate the const qualifier to the various functions which will be needed. This just serves to simplify the following patch. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_page_track.h | 4 ++-- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/mmu/page_track.c | 4 ++-- arch/x86/kvm/mmu/spte.c | 2 +- arch/x86/kvm/mmu/spte.h | 2 +- include/linux/kvm_host.h | 10 +++++----- virt/kvm/kvm_main.c | 12 ++++++------ 8 files changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index e99a30a4d38b..eb186bc57f6a 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -64,8 +64,8 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode); bool kvm_slot_page_track_is_active(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode); + const struct kvm_memory_slot *slot, + gfn_t gfn, enum kvm_page_track_mode mode); void kvm_page_track_register_notifier(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index fdf0f15ab19d..ef7a84422463 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2576,7 +2576,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must * be write-protected. */ -int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch) { struct kvm_mmu_page *sp; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1073d10cce91..6563cce9c438 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -118,7 +118,7 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu) kvm_x86_ops.cpu_dirty_log_size; } -int mmu_try_to_unsync_pages(struct kvm *kvm, struct kvm_memory_slot *slot, +int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, bool can_unsync, bool prefetch); void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index 35c221d5f6ce..68eb1fb548b6 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -174,8 +174,8 @@ EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page); * check if the corresponding access on the specified guest page is tracked. 
*/ bool kvm_slot_page_track_is_active(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, - enum kvm_page_track_mode mode) + const struct kvm_memory_slot *slot, + gfn_t gfn, enum kvm_page_track_mode mode) { int index; diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index d98723b14cec..7be41d2dbb02 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -90,7 +90,7 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) } bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, unsigned int pte_access, + const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 5bb055688080..d7598506fbad 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -330,7 +330,7 @@ static inline u64 get_mmio_spte_generation(u64 spte) } bool make_spte(struct kvm *kvm, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, unsigned int pte_access, + const struct kvm_memory_slot *slot, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, bool ad_need_write_protect, u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 60a35d9fe259..675da38fac7f 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -435,7 +435,7 @@ struct kvm_memory_slot { u16 as_id; }; -static inline bool kvm_slot_dirty_track_enabled(struct kvm_memory_slot *slot) +static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; } @@ -855,9 +855,9 @@ void kvm_set_page_accessed(struct page *page); kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable); -kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn); -kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn); -kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, +kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn); +kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn); +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, bool *writable, hva_t *hva); @@ -934,7 +934,7 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn); bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn); bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn); unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn); -void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot, gfn_t gfn); +void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot, gfn_t gfn); void mark_page_dirty(struct kvm *kvm, gfn_t gfn); struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 3f6d450355f0..6dbf8cba1900 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2138,12 +2138,12 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn) return size; } -static bool memslot_is_readonly(struct kvm_memory_slot *slot) +static bool memslot_is_readonly(const struct kvm_memory_slot *slot) { 
return slot->flags & KVM_MEM_READONLY; } -static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn, +static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn, gfn_t *nr_pages, bool write) { if (!slot || slot->flags & KVM_MEMSLOT_INVALID) @@ -2438,7 +2438,7 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, return pfn; } -kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, bool *writable, hva_t *hva) { @@ -2478,13 +2478,13 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); -kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn) +kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); -kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn) +kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn) { return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL); } @@ -3079,7 +3079,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len) EXPORT_SYMBOL_GPL(kvm_clear_guest); void mark_page_dirty_in_slot(struct kvm *kvm, - struct kvm_memory_slot *memslot, + const struct kvm_memory_slot *memslot, gfn_t gfn) { if (memslot && kvm_slot_dirty_track_enabled(memslot)) { From patchwork Wed Nov 10 22:30:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20F7CC433FE for ; Wed, 10 Nov 2021 22:31:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0B61761250 for ; Wed, 10 Nov 2021 22:31:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234113AbhKJWd5 (ORCPT ); Wed, 10 Nov 2021 17:33:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39192 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233874AbhKJWdx (ORCPT ); Wed, 10 Nov 2021 17:33:53 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F052C061766 for ; Wed, 10 Nov 2021 14:31:05 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id hg9-20020a17090b300900b001a6aa0b7d8cso1805640pjb.2 for ; Wed, 10 Nov 2021 14:31:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=FWwXwDfBnz5LFBsDCQ1wpQlpwGxzZTjObPP1GkJajQw=; b=IROO4E6G6/Tu+tdfzalPUTSYwK6uUaJHX2LmS4eaKh5TTZtnmAE9yCGDoEY62aX/vL 7OTdREb33pFkvarZYwglLn1c93rjqtLfoUEZG8aV0oSTvejcx8hXPUqEXKb5oqtsSxWn p4S+1x/dh9KDaZ8U5NNI/J/t4w5QJC5mDT349cbnv0qjic1Y5SZQvmTQZ7wxqMNuwkO1 f9oflPCjnaenuvAnzSVA0Mv2U/aGvR/9D9g9bTas6o3KG8PiqjLgrCMH6QIOa9acVC1O CjiAzXzcscqZTr1+BITbqpcYiOY4CQ+qrBDl+1JBVpRg0BS/eJa7y+G/APsZ25ArjaKk Sxuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FWwXwDfBnz5LFBsDCQ1wpQlpwGxzZTjObPP1GkJajQw=; b=DWwwqW5tOqnRLnPKdA0elvLYoE+8axbdG6G7ec7cNLnSy2sGwA7rGLt+KJOPIccgwS yuRJcQH2RIVQoKVDjTQu75dgkVg6vnDgKPcfrfJIGwPIQM6xNJTW1edjw4GPkYcS5ILs WgUwpxaiyytXkGxeXqeoVNvt5QPIFVEkTcZx6cenEQ2ZcKBq/KW4Zf2Kz6x1veD32ZSa FrEkWlucE8TBDdLVeYr6KX80mNfMOWbbLZCebJN2L9IebOYUjVFvuRHZKtzypXSZ03p4 uqaPYmvAnxxNzgCRhqytOuqm675uo53kTm/fSKqmWOacpHCzydkInbuHhXPYkjGKnl37 Pbvw== X-Gm-Message-State: AOAM530/Kxa8wxR/v5rlkPm9IfOB3GfDE2SMZ9srnNSB3yK9j7BvGV/X ECQelHB9sFNBjvm9g+2MyRO8mtRZrGRz X-Google-Smtp-Source: ABdhPJxUzvRq1lCmdEsikMFwEJuiowRKri6jjr8/fv+T3xudb3seeFH03JVAy+YjfbSGQ4njn1KEsGfxjt9X X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a05:6a00:174e:b0:47b:d4d6:3b9e with SMTP id j14-20020a056a00174e00b0047bd4d63b9emr2356149pfc.21.1636583464658; Wed, 10 Nov 2021 14:31:04 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:06 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-16-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 15/19] KVM: x86/MMU: Refactor vmx_get_mt_mask From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Remove the gotos from vmx_get_mt_mask to make it easier to separate out the parts which do not depend on vcpu state. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/vmx/vmx.c | 23 +++++++---------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 71f54d85f104..77f45c005f28 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6987,7 +6987,6 @@ static int __init vmx_check_processor_compat(void) static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { u8 cache; - u64 ipat = 0; /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in * memory aliases with conflicting memory types and sometimes MCEs. @@ -7007,30 +7006,22 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) * EPT memory type is used to emulate guest CD/MTRR. 
*/ - if (is_mmio) { - cache = MTRR_TYPE_UNCACHABLE; - goto exit; - } + if (is_mmio) + return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; - if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) { - ipat = VMX_EPT_IPAT_BIT; - cache = MTRR_TYPE_WRBACK; - goto exit; - } + if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) + return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; if (kvm_read_cr0(vcpu) & X86_CR0_CD) { - ipat = VMX_EPT_IPAT_BIT; if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) cache = MTRR_TYPE_WRBACK; else cache = MTRR_TYPE_UNCACHABLE; - goto exit; - } - cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn); + return (cache << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + } -exit: - return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat; + return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT; } static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx, u32 new_ctl) From patchwork Wed Nov 10 22:30:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 976ACC433EF for ; Wed, 10 Nov 2021 22:31:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8271E61250 for ; Wed, 10 Nov 2021 22:31:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234110AbhKJWeK (ORCPT ); Wed, 10 Nov 2021 17:34:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234120AbhKJWd5 (ORCPT ); Wed, 10 Nov 2021 17:33:57 -0500 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08F31C061766 for ; Wed, 10 Nov 2021 14:31:08 -0800 (PST) Received: by mail-pl1-x649.google.com with SMTP id m15-20020a170902bb8f00b0014382b67873so1279492pls.19 for ; Wed, 10 Nov 2021 14:31:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=dkTH62+axVdrikmzQkaE0kznpVJj+gZnKCcWL87PylQ=; b=mHD54jNcbxYTQavv0t79LapRbwT16TqVnoCpLWulzvkw8VF7/sQOmaNWDyl0SIWwYu KWTf+MQgGkOv3h+nnKULdskJO3xwTrT6/IAqsZVOBxhtjiF7imMh4DpXyAOh7mlB/gar GG55sxGQu9zohvycDtbFbzUIVXJ74SfcAXYXPBlb0WYqr4AOJov8FhPRxBT/dtJOR3mO V5UAoFF1Qh6aWYkf9P20FmWqwuKkf80PKJwPScQwtyEA3cpeCfV7ga/nJiZh+7dLoc1d xZf41CTK/VQQpTmqxsMiEwH4kXWgD8Nyrp2gvXKjH0gIgxQ07UryZH1PEshe2iT+A3W2 W0CA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=dkTH62+axVdrikmzQkaE0kznpVJj+gZnKCcWL87PylQ=; b=REx21lNdUkHVUB0Rac5j5PqMpkv9MP5cuzANBEL81D3pOtf+czKvm9lWDwsiCkM3I1 Fj3k2jBwPsBB4afZHLU4NE9aPRBBezmaRslK/NpWzyGdcO0zTrIOcrmE2RNUeUIyDx7/ LlS7XEOtRTGYkso5gXS7JPKkDESXGqdagUMwreCnkdu1po7GL0yMtDK4H4wWRlH1hi61 nJ2/LhnwwLrFKlB+qGcdk/i4pVLaDtannk+sGhrwxwHs6E7qiBs+3M/vg0bL6v0sdoqo 9FlLMvyAHgmej7uvndKZlvVUp+tfz7AnZBRSOyTnUgn0YXIL1X5ZGQLLC9maXsQ8hBaf ujrg== X-Gm-Message-State: AOAM532qNZ6Cg97HI5YeGw3wjyavr2als7VONs1aW3SGC2MluDIbI/S4 u8g+8r3/b0/O2Wr4NDpqh8B6/9bklZce X-Google-Smtp-Source: 
ABdhPJyMssxC4Jao+H/dnJG0lmW43OOcR8GJJBzIio46unX/DwCuEaTf/pjH/awCAWxkCaqmEXvswBTFX4d/ X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90b:1d81:: with SMTP id pf1mr2857835pjb.79.1636583467517; Wed, 10 Nov 2021 14:31:07 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:07 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-17-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 16/19] KVM: x86/mmu: Factor out part of vmx_get_mt_mask which does not depend on vcpu From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Factor out the parts of vmx_get_mt_mask which do not depend on the vCPU argument. This also requires adding some error reporting to the helper function to say whether it was possible to generate the MT mask without a vCPU argument. This refactoring will allow the MT mask to be computed when noncoherent DMA is not enabled on a VM. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/vmx/vmx.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 77f45c005f28..4129614262e8 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6984,9 +6984,26 @@ static int __init vmx_check_processor_compat(void) return 0; } +static bool vmx_try_get_mt_mask(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask) +{ + if (is_mmio) { + *mask = MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; + return true; + } + + if (!kvm_arch_has_noncoherent_dma(kvm)) { + *mask = (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + return true; + } + + return false; +} + static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { u8 cache; + u64 mask; /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in * memory aliases with conflicting memory types and sometimes MCEs. @@ -7006,11 +7023,8 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) * EPT memory type is used to emulate guest CD/MTRR. 
*/ - if (is_mmio) - return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT; - - if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) - return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; + if (vmx_try_get_mt_mask(vcpu->kvm, gfn, is_mmio, &mask)) + return mask; if (kvm_read_cr0(vcpu) & X86_CR0_CD) { if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) From patchwork Wed Nov 10 22:30:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E0A3C433F5 for ; Wed, 10 Nov 2021 22:31:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 701A76117A for ; Wed, 10 Nov 2021 22:31:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234166AbhKJWeR (ORCPT ); Wed, 10 Nov 2021 17:34:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39210 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234074AbhKJWeG (ORCPT ); Wed, 10 Nov 2021 17:34:06 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80E55C06120F for ; Wed, 10 Nov 2021 14:31:10 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id h21-20020a056a001a5500b0049fc7bcb45aso2721105pfv.11 for ; Wed, 10 Nov 2021 14:31:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bZTXtoL6rjrzQ4HwLwZZ68cG+tVDuxM3qYy8DkAd4cQ=; b=baoKiQKjU6YQRi/VZRG/k8cZ6kvnxiNjxD9yt8sWBe/hdX0Q/VLTZWrq30Eona0gDe LvijFooMTQipe3XWgcJZlN0FN2PaJV7BwUkfGPuabQ4n2LlkBlZ1B6PoymrI2YlWPkjl gbbNWMhjZYdmAutSvOIbx/UgznfZG7XGQIKE9SVPjFXhp/M8dZBT4PZSSi1jUOnE3SkY EVYh9EFoN6Gx+e64KvtpEcbqQfjdAf+qoV/dg9lFHyFU1LGag8SmPcFIkULn90THRfQB AeqMVenOvjia/xJ4+xqfg9PH0/nTYu/o+Q2lK1tVOdPMMw5EPbux0HPsScpom9HVcTC9 o7vw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bZTXtoL6rjrzQ4HwLwZZ68cG+tVDuxM3qYy8DkAd4cQ=; b=SXRUik9GUp4B7Iv6CYH/3vpdkBGDXvNum+a9whBAe0+opJlHqVPUuZVQBOHJsQa7Mf hoQt3IpyJuM5X5DWlxiqa1YyHPWDBqmGB4/9GywpvnTVFK4W/Ey+/tL7w0k2tHZYYhHw kzjCwraNd/b1wKFCIFIc02Qs9mKMTXOohZNttDFvdxy58nQr6dkFCjuvo76LB/yj8K9V Zc9mlpYT0w0qOZZkXIvZY0SIxya7r4ecIgcS9VlT3/Y7DPghHEMOruXzfi/qsEEegvno ueAa5/QaJb3GHcvURFkeGarM0vve63N50bpUZpbZDrdCodbtzzNbSN/AEa2RLOQHq8QA 5ilA== X-Gm-Message-State: AOAM531n73IaWzcRXagLtqmDm+w+2h+harKXFOjj2SnCDlrL4OyiJR9R wSbWrQbddqDOkBbJ1vQrxupUOK4hRADy X-Google-Smtp-Source: ABdhPJwl96mkKsHES8Eqe+lbSV/Cm0gOjq4Js/ovxC9x8ZbcgihmVW5WWukxAIvwv6UHgbSSO2vghgeeXVY7 X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a05:6a00:1312:b0:44c:becf:b329 with SMTP id j18-20020a056a00131200b0044cbecfb329mr2529651pfu.5.1636583470017; Wed, 10 Nov 2021 14:31:10 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:08 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-18-bgardon@google.com> Mime-Version: 1.0 References: 
<20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 17/19] KVM: x86/mmu: Add try_get_mt_mask to x86_ops From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add another function for getting the memory type mask to x86_ops. This version of the function can fail, but it does not require a vCPU pointer. It will be used in a subsequent commit for in-place large page promotion when disabling dirty logging. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm-x86-ops.h | 1 + arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/svm/svm.c | 8 ++++++++ arch/x86/kvm/vmx/vmx.c | 1 + 4 files changed, 12 insertions(+) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index cefe1d81e2e8..c86e9629ff1a 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -84,6 +84,7 @@ KVM_X86_OP_NULL(sync_pir_to_irr) KVM_X86_OP(set_tss_addr) KVM_X86_OP(set_identity_map_addr) KVM_X86_OP(get_mt_mask) +KVM_X86_OP(try_get_mt_mask) KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_NULL(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 88fce6ab4bbd..ae13075f4d4c 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1400,6 +1400,8 @@ struct kvm_x86_ops { int (*set_tss_addr)(struct kvm *kvm, unsigned int addr); int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr); u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio); + bool (*try_get_mt_mask)(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask); void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 21bb81710e0f..d073cc3985e6 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4067,6 +4067,13 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index) return true; } +static bool svm_try_get_mt_mask(struct kvm *kvm, gfn_t gfn, + bool is_mmio, u64 *mask) +{ + *mask = 0; + return true; +} + static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; @@ -4660,6 +4667,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .set_tss_addr = svm_set_tss_addr, .set_identity_map_addr = svm_set_identity_map_addr, .get_mt_mask = svm_get_mt_mask, + .try_get_mt_mask = svm_try_get_mt_mask, .get_exit_info = svm_get_exit_info, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 4129614262e8..8cd6c1f50d3e 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7658,6 +7658,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .set_tss_addr = vmx_set_tss_addr, .set_identity_map_addr = vmx_set_identity_map_addr, .get_mt_mask = vmx_get_mt_mask, + .try_get_mt_mask = vmx_try_get_mt_mask, .get_exit_info = vmx_get_exit_info, From patchwork Wed Nov 10 22:30:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613467 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org 
(mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF14DC433F5 for ; Wed, 10 Nov 2021 22:31:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C439961288 for ; Wed, 10 Nov 2021 22:31:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234108AbhKJWeT (ORCPT ); Wed, 10 Nov 2021 17:34:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234099AbhKJWeJ (ORCPT ); Wed, 10 Nov 2021 17:34:09 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37479C061229 for ; Wed, 10 Nov 2021 14:31:13 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id z19-20020aa79593000000b0049472f5e52dso2711838pfj.13 for ; Wed, 10 Nov 2021 14:31:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=kiIatwAlHvvu2uBeY4ii96e1dHU62kAmoLcLl2YIwuU=; b=RnDqEGa/fEcU2We4hgto9VMn8KlebfcLVZiOFRmtEeoUfA42ScMbhhwf2EyLQ657fN mIJcNlq//gcq7sGDAaP2vVXJkRZ9KHQfahF6uNMfjkKte9KKk26GkrRuP78aDPZBlvaN 28ulYf5vXbt5A1h6hgPU9OM+XyKvThbrmG+hrgl8mchydnzLzsiP5P50bJEVr5l+Qeo1 ewZR2wDzQN+gEhyct5BRm6QZVul+MEvcMKGHwIWe0UKRNUntcWMRkcwIgpVZ0QZ0FVCA ACjP1Hh+pHd9ROVQpC22dUxnSoa0rSHPQU92ogo+RXYssTlhPRTA2Yn6OEr8SuHpgj/m HIBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=kiIatwAlHvvu2uBeY4ii96e1dHU62kAmoLcLl2YIwuU=; b=NebRCcw1pWmwf49flx8ZvP2v7QiIM0h0sV0DmEn+oHW4jT0xFvmFCxqQIoY0aHg53b DxweXPPb6Ef/FgQ4oOgW6ewfe2cYjn6sHn7QUkP3Sg+++qxeMhKZtx3YxCouHd145sqH eeRmeafWyLdCiej3+yQSdMju9P0WB2FErYDRKUePxJPjA/WxfjbI28OzB6W8+cNm/eQC eBtQuAQonAxzO8J+KYCTPSnrpehLHgRaBiBrldf091A9u6o7ptcN59KKj438Qqz7eOiK RWSfUK5V5iX7oZTGG03PMv0MUghMpG+PtBCiCi26sgt/Dxs2lR+Q1CER9LpA4Kmty/TX mtIA== X-Gm-Message-State: AOAM532Zuz114CN1xZwWbwJsVt8y/ooYIV+1TjZDLXIsBVstNx15WLBD zV7S5spXumYCU4h/0ac6L4dRqa5aFFHc X-Google-Smtp-Source: ABdhPJxRkOjjvaAjzujyF2bBDUfldGJ3U4LikPZF4vwA1t4IZQZ/JS3wXmtHCpgOAl3XM9zKZg/6GD+smOA7 X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90a:4a85:: with SMTP id f5mr21194680pjh.92.1636583472609; Wed, 10 Nov 2021 14:31:12 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:09 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-19-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 18/19] KVM: x86/mmu: Make kvm_is_mmio_pfn usable outside of spte.c From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Export kvm_is_mmio_pfn from spte.c. It will be used in a subsequent commit for in-place lpage promotion when disabling dirty logging. 
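For illustration only, a minimal sketch of how a caller outside spte.c might combine the newly visible kvm_is_mmio_pfn() with the vCPU-less try_get_mt_mask operation introduced earlier in this series; the wrapper name example_get_mt_mask_for_pfn is a placeholder and not part of this patch (it assumes "spte.h" and the kvm_x86_ops static calls are in scope, as they are in arch/x86/kvm/mmu):

/* Hypothetical caller sketch, not part of this patch. */
static bool example_get_mt_mask_for_pfn(struct kvm *kvm, gfn_t gfn,
					kvm_pfn_t pfn, u64 *mt_mask)
{
	/*
	 * kvm_is_mmio_pfn() is only callable here because of this export.
	 * If no memory type can be produced without a vCPU, the caller is
	 * expected to bail, as the promotion path in the final patch does.
	 */
	return static_call(kvm_x86_try_get_mt_mask)(kvm, gfn,
						    kvm_is_mmio_pfn(pfn),
						    mt_mask);
}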
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/spte.c | 2 +- arch/x86/kvm/mmu/spte.h | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 7be41d2dbb02..13b6143f6333 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -68,7 +68,7 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access) return spte; } -static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) +bool kvm_is_mmio_pfn(kvm_pfn_t pfn) { if (pfn_valid(pfn)) return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) && diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index d7598506fbad..909c24c733c4 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -347,4 +347,5 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn); void kvm_mmu_reset_all_pte_masks(void); +bool kvm_is_mmio_pfn(kvm_pfn_t pfn); #endif From patchwork Wed Nov 10 22:30:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12613469 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BBC3C433F5 for ; Wed, 10 Nov 2021 22:31:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 885CE61186 for ; Wed, 10 Nov 2021 22:31:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234175AbhKJWeX (ORCPT ); Wed, 10 Nov 2021 17:34:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39210 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234138AbhKJWeQ (ORCPT ); Wed, 10 Nov 2021 17:34:16 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 44D77C06122B for ; Wed, 10 Nov 2021 14:31:16 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id x25-20020aa79199000000b0044caf0d1ba8so2763399pfa.1 for ; Wed, 10 Nov 2021 14:31:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ldW91wlXn8Nh81WNcDS8T3qMJSnkBkv0dUkW0EdfFg8=; b=pv4JNA3zdg+m6PIYkxBmnczNxzjzSCX427piXb8HddSLJ99m7hBH54Jzu65IImWEMG 2ucz1Y6pLDJ12Gtw+N1OqJQaUf5DwXv8ES9CuhRos30u6ql6Ioj+h6VtgZUTGzsm8P6J FMDQe3KY4GmutldldZSPZLqdMtH1x63txjc18pVA5T5m1pCaZ2ZKJIcd/xVdkY+WnXqp 0rPXAEy6HgO5PdSF0ygXgtxioPkwAjtR7KwH71aYcfrizJA/SdH3AFoIEq1G8xtAIeGt 5ZLhCV0bvU2q4zUDLwrXj6UgH1GXgLRzCa0zF309x6dnr0H2Ye7DfXaCxrNQ/nHM5z61 o8aQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ldW91wlXn8Nh81WNcDS8T3qMJSnkBkv0dUkW0EdfFg8=; b=QHeuwNFYE952sslKvkHOs1+jeZGXYUJKfg9Ma2SLvtLYjv5DdCfympppcGWWSLPTx8 2dkBudP2RWOlzR7xY6VD00qw4huVuSo0tNw5hB23f43Cr8xUI04ps9+d6NX7rAEr2Lla kLEjtiLzLMr/F/HmoZ9P6hPW597l/Gy5Jwtm/qT6xFl7lH09kVco5JLoMP5RkviEI/KD N+yazQ/L3LQdiDElb0ZFA0EUDoQOFjep7KGtnOewp/oOSAthFaMc03NvxiQjV/39BbVI gZBcWgHSAqYK0BohGqtGMToFHy8T4yWLV8aiQoiQ1Ny4gY1oRYi1G/8za8PM2xeDv9wb vCJw== X-Gm-Message-State: AOAM530uEIzkORa89IxyHPztzk9XqNUfgZSq1L/d6WCdkLSAArtyjAZl kdFksbO80Pi92HVMeJ5wfhqSQ2e0oxRj X-Google-Smtp-Source: 
ABdhPJyFKSswS7Pz+l1CcGPccn7T/6K9wxzQnw7pOs4wiwT54m3groMD/MDFiPgg3hfc6u7nCwLtDAk1bbym X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:6586:7b2f:b259:2011]) (user=bgardon job=sendgmr) by 2002:a17:90a:c3:: with SMTP id v3mr49033pjd.0.1636583475503; Wed, 10 Nov 2021 14:31:15 -0800 (PST) Date: Wed, 10 Nov 2021 14:30:10 -0800 In-Reply-To: <20211110223010.1392399-1-bgardon@google.com> Message-Id: <20211110223010.1392399-20-bgardon@google.com> Mime-Version: 1.0 References: <20211110223010.1392399-1-bgardon@google.com> X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog Subject: [RFC 19/19] KVM: x86/mmu: Promote pages in-place when disabling dirty logging From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , David Matlack , Mingwei Zhang , Yulei Zhang , Wanpeng Li , Xiao Guangrong , Kai Huang , Keqian Zhu , David Hildenbrand , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When disabling dirty logging, the TDP MMU currently zaps each leaf entry mapping memory in the relevant memslot. This is very slow. Doing the zaps under the mmu read lock requires a TLB flush for every zap and the zapping causes a storm of EPT/NPT violations. Instead of zapping, replace the split large pages with large page mappings directly. While this sort of operation has historically only been done in the vCPU page fault handler context, refactorings earlier in this series and the relative simplicity of the TDP MMU make it possible here as well. Running the dirty_log_perf_test on an Intel Skylake with 96 vCPUs and 1G of memory per vCPU, this reduces the time required to disable dirty logging from over 45 seconds to just over 1 second. It also avoids provoking page faults, improving vCPU performance while disabling dirty logging. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmu_internal.h | 4 ++ arch/x86/kvm/mmu/tdp_mmu.c | 69 ++++++++++++++++++++++++++++++++- 3 files changed, 72 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ef7a84422463..add724aa9e8c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4449,7 +4449,7 @@ static inline bool boot_cpu_is_amd(void) * the direct page table on host, use as much mmu features as * possible, however, kvm currently does not do execution-protection.
*/ -static void +void build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check, int shadow_root_level) { diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 6563cce9c438..84d439432acf 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -161,4 +161,8 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +void +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check, + int shadow_root_level); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 836eadd4e73a..77ff7f1d0d0a 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1435,6 +1435,66 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot); } +static void try_promote_lpage(struct kvm *kvm, + const struct kvm_memory_slot *slot, + struct tdp_iter *iter) +{ + struct kvm_mmu_page *sp = sptep_to_sp(iter->sptep); + struct rsvd_bits_validate shadow_zero_check; + /* + * Since the TDP MMU doesn't manage nested PTs, there's no need to + * write protect for a nested VM when PML is in use. + */ + bool ad_need_write_protect = false; + bool map_writable; + kvm_pfn_t pfn; + u64 new_spte; + u64 mt_mask; + + /* + * If addresses are being invalidated, don't do in-place promotion to + * avoid accidentally mapping an invalidated address. + */ + if (unlikely(kvm->mmu_notifier_count)) + return; + + pfn = __gfn_to_pfn_memslot(slot, iter->gfn, true, NULL, true, + &map_writable, NULL); + + /* + * Can't reconstitute an lpage if the constituent pages can't be + * mapped higher. + */ + if (iter->level > kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, + pfn, PG_LEVEL_NUM)) + return; + + build_tdp_shadow_zero_bits_mask(&shadow_zero_check, iter->root_level); + + /* + * In some cases, a vCPU pointer is required to get the MT mask, + * however in most cases it can be generated without one. If a + * vCPU pointer is needed, kvm_x86_try_get_mt_mask will fail. + * In that case, bail on in-place promotion. + */ + if (unlikely(!static_call(kvm_x86_try_get_mt_mask)(kvm, iter->gfn, + kvm_is_mmio_pfn(pfn), + &mt_mask))) + return; + + make_spte(kvm, sp, slot, ACC_ALL, iter->gfn, pfn, 0, false, true, + map_writable, ad_need_write_protect, mt_mask, + &shadow_zero_check, &new_spte); + + tdp_mmu_set_spte_atomic(kvm, iter, new_spte); + + /* + * Re-read the SPTE to avoid recursing into one of the removed child + * page tables. + */ + iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep)); +} + /* * Clear leaf entries which could be replaced by large mappings, for * GFNs within the slot. @@ -1455,9 +1515,14 @@ static void zap_collapsible_spte_range(struct kvm *kvm, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!is_shadow_present_pte(iter.old_spte)) + continue; + + /* Try to promote the constituent pages to an lpage. */ + if (!is_last_spte(iter.old_spte, iter.level)) { + try_promote_lpage(kvm, slot, &iter); continue; + } pfn = spte_to_pfn(iter.old_spte); if (kvm_is_reserved_pfn(pfn) ||