From patchwork Mon Mar 15 23:38:00 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140819
Date: Mon, 15 Mar 2021 16:38:00 -0700
Message-Id: <20210315233803.2706477-2-bgardon@google.com>
In-Reply-To: <20210315233803.2706477-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH v3 1/4] KVM: x86/mmu: Fix RCU usage in
 handle_removed_tdp_mmu_page
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
    Ben Gardon, kernel test robot

The pt passed into handle_removed_tdp_mmu_page does not need RCU
protection, as it is not at any risk of being freed by another thread at
that point. However, the implicit cast from tdp_ptep_t to u64 * dropped
the __rcu annotation without a proper rcu_dereference. Fix this by
passing pt as a tdp_ptep_t and then rcu_dereferencing it in the
function.

Suggested-by: Sean Christopherson
Reported-by: kernel test robot
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/tdp_mmu.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d78915019b08..db2936cca4bf 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -301,11 +301,16 @@ static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp,
  *
  * Given a page table that has been removed from the TDP paging structure,
  * iterates through the page table to clear SPTEs and free child page tables.
+ *
+ * Note that pt is passed in as a tdp_ptep_t, but it does not need RCU
+ * protection. Since this thread removed it from the paging structure,
+ * this thread will be responsible for ensuring the page is freed. Hence the
+ * early rcu_dereferences in the function.
  */
-static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
+static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 					bool shared)
 {
-	struct kvm_mmu_page *sp = sptep_to_sp(pt);
+	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
 	int level = sp->role.level;
 	gfn_t base_gfn = sp->gfn;
 	u64 old_child_spte;
@@ -318,7 +323,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
 	tdp_mmu_unlink_page(kvm, sp, shared);
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-		sptep = pt + i;
+		sptep = rcu_dereference(pt) + i;
 		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
 
 		if (shared) {
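
For context, a minimal stand-alone sketch of the __rcu pattern this fix
adopts. The names below are hypothetical and not taken from the patch; it
assumes the caller is inside an RCU read-side critical section, as the TDP
MMU paths are here. The point is that the parameter keeps its __rcu
annotation and is stripped explicitly with rcu_dereference(), so Sparse can
check every access.

#include <linux/types.h>
#include <linux/rcupdate.h>

struct demo_table {
	u64 entries[512];
};

/* Like tdp_ptep_t, the typedef carries the __rcu address-space annotation. */
typedef struct demo_table __rcu *demo_tablep_t;

/*
 * The caller has already unlinked @pt from the shared structure, so this
 * thread owns its lifetime; rcu_dereference() is only needed to satisfy
 * the annotation, mirroring handle_removed_tdp_mmu_page().
 */
static void demo_handle_removed_table(demo_tablep_t pt)
{
	struct demo_table *raw = rcu_dereference(pt);
	int i;

	for (i = 0; i < 512; i++)
		raw->entries[i] = 0;
}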
From patchwork Mon Mar 15 23:38:01 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140821
Date: Mon, 15 Mar 2021 16:38:01 -0700
Message-Id: <20210315233803.2706477-3-bgardon@google.com>
In-Reply-To: <20210315233803.2706477-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH v3 2/4] KVM: x86/mmu: Fix RCU usage when atomically zapping SPTEs
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
    Ben Gardon, kernel test robot

Fix a missing rcu_dereference in tdp_mmu_zap_spte_atomic.

Reported-by: kernel test robot
Signed-off-by: Ben Gardon
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index db2936cca4bf..946da74e069c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -543,7 +543,7 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * here since the SPTE is going from non-present
 	 * to non-present.
 	 */
-	WRITE_ONCE(*iter->sptep, 0);
+	WRITE_ONCE(*rcu_dereference(iter->sptep), 0);
 
 	return true;
 }
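
As an illustration (hypothetical helper, not part of the patch): Sparse
flags a write through a u64 __rcu * because of the address-space
annotation, so the pointer has to be laundered through rcu_dereference()
before it can be handed to WRITE_ONCE(), which is exactly what the one-line
fix above does.

#include <linux/types.h>
#include <linux/rcupdate.h>

/* Assumes the caller holds rcu_read_lock() and @sptep points at a live SPTE. */
static void demo_clear_spte(u64 __rcu *sptep)
{
	/*
	 * WRITE_ONCE(*sptep, 0) would warn under Sparse; rcu_dereference()
	 * first converts the annotated pointer to a plain u64 *.
	 */
	WRITE_ONCE(*rcu_dereference(sptep), 0);
}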
From patchwork Mon Mar 15 23:38:02 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140825
Date: Mon, 15 Mar 2021 16:38:02 -0700
Message-Id: <20210315233803.2706477-4-bgardon@google.com>
In-Reply-To: <20210315233803.2706477-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH v3 3/4] KVM: x86/mmu: Factor out tdp_iter_return_to_root
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
    Ben Gardon

In tdp_mmu_iter_cond_resched there is a call to tdp_iter_start which
causes the iterator to continue its walk over the paging structure from
the root. This is needed after a yield, as the paging structure could
have been freed in the interim.

The tdp_iter_start call is not very clear and something of a hack. It
requires exposing tdp_iter fields not used elsewhere in tdp_mmu.c, and
the effect is not obvious from the function name. Factor a more aptly
named function out of tdp_iter_start and call it from
tdp_mmu_iter_cond_resched and tdp_iter_start.

No functional change intended.

Signed-off-by: Ben Gardon
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.c | 24 +++++++++++++++++-------
 arch/x86/kvm/mmu/tdp_iter.h |  1 +
 arch/x86/kvm/mmu/tdp_mmu.c  |  4 +---
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index e5f148106e20..f7f94ea65243 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -20,6 +20,21 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
 	return gfn & -KVM_PAGES_PER_HPAGE(level);
 }
 
+/*
+ * Return the TDP iterator to the root PT and allow it to continue its
+ * traversal over the paging structure from there.
+ */
+void tdp_iter_restart(struct tdp_iter *iter)
+{
+	iter->yielded_gfn = iter->next_last_level_gfn;
+	iter->level = iter->root_level;
+
+	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+
+	iter->valid = true;
+}
+
 /*
  * Sets a TDP iterator to walk a pre-order traversal of the paging structure
  * rooted at root_pt, starting with the walk to translate next_last_level_gfn.
@@ -31,16 +46,11 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
 	WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
 
 	iter->next_last_level_gfn = next_last_level_gfn;
-	iter->yielded_gfn = iter->next_last_level_gfn;
 	iter->root_level = root_level;
 	iter->min_level = min_level;
-	iter->level = root_level;
-	iter->pt_path[iter->level - 1] = (tdp_ptep_t)root_pt;
+	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root_pt;
 
-	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
-	tdp_iter_refresh_sptep(iter);
-
-	iter->valid = true;
+	tdp_iter_restart(iter);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 4cc177d75c4a..8eb424d17c91 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -63,5 +63,6 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
+void tdp_iter_restart(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 946da74e069c..38b6b6936171 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -664,9 +664,7 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
 
 		WARN_ON(iter->gfn > iter->next_last_level_gfn);
 
-		tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
-			       iter->root_level, iter->min_level,
-			       iter->next_last_level_gfn);
+		tdp_iter_restart(iter);
 
 		return true;
 	}
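
The shape of the refactor, reduced to a toy example (all names hypothetical,
not the kernel's): the restart logic becomes its own helper that reads only
fields the iterator already stores, so the resched path no longer has to
re-pass them to the full init function.

#include <linux/types.h>

struct demo_iter {
	int root_level;
	int level;
	unsigned long next_gfn;
	unsigned long yielded_gfn;
	bool valid;
};

/* Rewind to the root using only state the iterator already carries. */
static void demo_iter_restart(struct demo_iter *it)
{
	it->yielded_gfn = it->next_gfn;
	it->level = it->root_level;
	it->valid = true;
}

static void demo_iter_start(struct demo_iter *it, int root_level,
			    unsigned long next_gfn)
{
	it->root_level = root_level;
	it->next_gfn = next_gfn;
	demo_iter_restart(it);	/* shared tail, reused after a yield */
}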
From patchwork Mon Mar 15 23:38:03 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140823
Date: Mon, 15 Mar 2021 16:38:03 -0700
Message-Id: <20210315233803.2706477-5-bgardon@google.com>
In-Reply-To: <20210315233803.2706477-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH v3 4/4] KVM: x86/mmu: Store the address space ID in the TDP iterator
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
    kernel test robot, Ben Gardon

From: Sean Christopherson

Store the address space ID in the TDP iterator so that it can be
retrieved without having to bounce through the root shadow page. This
streamlines the code and fixes a Sparse warning about not properly
using rcu_dereference() when grabbing the ID from the root on the fly.

Reported-by: kernel test robot
Signed-off-by: Sean Christopherson
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu_internal.h |  5 +++++
 arch/x86/kvm/mmu/tdp_iter.c     |  6 +-----
 arch/x86/kvm/mmu/tdp_iter.h     |  3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c      | 23 +++++------------------
 4 files changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ec4fc28b325a..1f6f98c76bdf 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -78,6 +78,11 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 	return to_shadow_page(__pa(sptep));
 }
 
+static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
+{
+	return sp->role.smm ? 1 : 0;
+}
+
 static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index f7f94ea65243..b3ed302c1a35 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -49,6 +49,7 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
 	iter->root_level = root_level;
 	iter->min_level = min_level;
 	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root_pt;
+	iter->as_id = kvm_mmu_page_as_id(sptep_to_sp(root_pt));
 
 	tdp_iter_restart(iter);
 }
@@ -169,8 +170,3 @@ void tdp_iter_next(struct tdp_iter *iter)
 
 	iter->valid = false;
 }
-
-tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter)
-{
-	return iter->pt_path[iter->root_level - 1];
-}
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 8eb424d17c91..b1748b988d3a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -36,6 +36,8 @@ struct tdp_iter {
 	int min_level;
 	/* The iterator's current level within the paging structure */
 	int level;
+	/* The address space ID, i.e. SMM vs. regular. */
+	int as_id;
 	/* A snapshot of the value at sptep */
 	u64 old_spte;
 	/*
@@ -62,7 +64,6 @@ tdp_ptep_t spte_to_child_pt(u64 pte, int level);
 void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
-tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 38b6b6936171..462b1f71c77f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -203,11 +203,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				u64 old_spte, u64 new_spte, int level,
 				bool shared);
 
-static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
-{
-	return sp->role.smm ? 1 : 0;
-}
-
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
@@ -497,10 +492,6 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
 					   struct tdp_iter *iter,
 					   u64 new_spte)
 {
-	u64 *root_pt = tdp_iter_root_pt(iter);
-	struct kvm_mmu_page *root = sptep_to_sp(root_pt);
-	int as_id = kvm_mmu_page_as_id(root);
-
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
 	/*
@@ -514,8 +505,8 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
 		      new_spte) != iter->old_spte)
 		return false;
 
-	handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
-			    iter->level, true);
+	handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+			    new_spte, iter->level, true);
 
 	return true;
 }
@@ -569,10 +560,6 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 				      u64 new_spte, bool record_acc_track,
 				      bool record_dirty_log)
 {
-	tdp_ptep_t root_pt = tdp_iter_root_pt(iter);
-	struct kvm_mmu_page *root = sptep_to_sp(root_pt);
-	int as_id = kvm_mmu_page_as_id(root);
-
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	/*
@@ -586,13 +573,13 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte);
 
-	__handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
-			      iter->level, false);
+	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+			      new_spte, iter->level, false);
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(iter->old_spte, new_spte,
 					      iter->level);
 	if (record_dirty_log)
-		handle_changed_spte_dirty_log(kvm, as_id, iter->gfn,
+		handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn,
 					      iter->old_spte, new_spte,
 					      iter->level);
 }
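
To summarize the idea with a toy example (hypothetical names, not the
kernel's): the value derived from the root is cached once when the iterator
is initialized, so later updates read it straight from the iterator instead
of re-deriving it through the RCU-annotated root pointer on every call.

#include <linux/types.h>

struct demo_root {
	bool smm;
};

struct demo_walk {
	/* Cached at init, like tdp_iter.as_id. */
	int as_id;
};

static int demo_root_as_id(struct demo_root *root)
{
	return root->smm ? 1 : 0;
}

static void demo_walk_init(struct demo_walk *walk, struct demo_root *root)
{
	/* Derive once here; callees then take walk->as_id directly. */
	walk->as_id = demo_root_as_id(root);
}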