From patchwork Fri Dec 18 00:31:36 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11981049
Date: Thu, 17 Dec 2020 16:31:36 -0800
In-Reply-To: <20201218003139.2167891-1-seanjc@google.com>
Message-Id: <20201218003139.2167891-2-seanjc@google.com>
References: <20201218003139.2167891-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 1/4] KVM: x86/mmu: Use -1 to flag an undefined spte in get_mmio_spte()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon, Richard Herbert

Return -1 from the get_walk() helpers if the shadow walk doesn't fill at
least one spte, which can theoretically happen if the walk hits a
not-present PDPTR.  Returning the root level in such a case will cause
get_mmio_spte() to return garbage (uninitialized stack data).  In
practice, such a scenario should be impossible as KVM shouldn't get a
reserved-bit page fault with a not-present PDPTR.

Note, using mmu->root_level in get_walk() is wrong for other reasons,
too, but that's now a moot point.

Fixes: 95fb5b0258b7 ("kvm: x86/mmu: Support MMIO in the TDP MMU")
Cc: Ben Gardon
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 7 ++++++-
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7a6ae9e90bd7..a48cd12c01d7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3488,7 +3488,7 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
 {
 	struct kvm_shadow_walk_iterator iterator;
-	int leaf = vcpu->arch.mmu->root_level;
+	int leaf = -1;
 	u64 spte;
 
@@ -3532,6 +3532,11 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	else
 		leaf = get_walk(vcpu, addr, sptes);
 
+	if (unlikely(leaf < 0)) {
+		*sptep = 0ull;
+		return reserved;
+	}
+
 	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
 
 	for (level = root; level >= leaf; level--) {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 84c8f06bec26..50cec7a15ddb 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1152,8 +1152,8 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
 {
 	struct tdp_iter iter;
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
-	int leaf = vcpu->arch.mmu->shadow_root_level;
 	gfn_t gfn = addr >> PAGE_SHIFT;
+	int leaf = -1;
 
 	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
 		leaf = iter.level;
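
The guard pattern above generalizes beyond KVM.  A minimal standalone C
sketch (hypothetical names and table layout, not kernel code) of the same
bug class and its fix: a walker whose loop may never execute must flag
that explicitly, or its caller ends up reading stack slots that were
never written.

#include <stdio.h>

#define MAX_LEVEL 5

/* Returns the lowest level written to entries[], or -1 if none was. */
static int get_walk(const unsigned long *table, int start_level,
		    unsigned long *entries)
{
	int leaf = -1;	/* the fix: don't assume the loop will run */
	int level;

	for (level = start_level; level >= 1; level--) {
		leaf = level;
		entries[level - 1] = table[level - 1];
		if (!table[level - 1])	/* not-present entry ends the walk */
			break;
	}
	return leaf;
}

int main(void)
{
	unsigned long entries[MAX_LEVEL];	/* uninitialized, like sptes[] */
	unsigned long table[MAX_LEVEL] = { 0x1000, 0x2000, 0, 0, 0 };
	/* A start level of 0 models a walk that terminates immediately. */
	int leaf = get_walk(table, 0, entries);

	if (leaf < 0)	/* the caller must bail instead of reading entries[] */
		puts("walk wrote no entries");
	else
		printf("leaf level %d: 0x%lx\n", leaf, entries[leaf - 1]);
	return 0;
}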

From patchwork Fri Dec 18 00:31:37 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11981051
Date: Thu, 17 Dec 2020 16:31:37 -0800
In-Reply-To: <20201218003139.2167891-1-seanjc@google.com>
Message-Id: <20201218003139.2167891-3-seanjc@google.com>
References: <20201218003139.2167891-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 2/4] KVM: x86/mmu: Get root level from walkers when retrieving MMIO SPTE
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon, Richard Herbert

Get the so-called "root" level from the low-level shadow page table
walkers instead of manually attempting to calculate it higher up the
stack, e.g. in get_mmio_spte().  When KVM is using PAE shadow paging,
the starting level of the walk, from the caller's perspective, is not
the CR3 root but rather the PDPTR "root".  Checking for reserved bits
from the CR3 root causes get_mmio_spte() to consume uninitialized stack
data due to indexing into sptes[] for a level that was not filled by
get_walk().  This can result in false positives and/or negatives
depending on what garbage happens to be on the stack.

Opportunistically nuke a few extra newlines.
Fixes: 95fb5b0258b7 ("kvm: x86/mmu: Support MMIO in the TDP MMU")
Reported-by: Richard Herbert
Cc: Ben Gardon
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kvm/mmu/mmu.c     | 15 ++++++---------
 arch/x86/kvm/mmu/tdp_mmu.c |  5 ++++-
 arch/x86/kvm/mmu/tdp_mmu.h |  4 +++-
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a48cd12c01d7..52f36c879086 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3485,16 +3485,16 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
  * Return the level of the lowest level SPTE added to sptes.
  * That SPTE may be non-present.
  */
-static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
+static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	int leaf = -1;
 	u64 spte;
 
-
 	walk_shadow_page_lockless_begin(vcpu);
 
-	for (shadow_walk_init(&iterator, vcpu, addr);
+	for (shadow_walk_init(&iterator, vcpu, addr),
+	     *root_level = iterator.level;
 	     shadow_walk_okay(&iterator);
 	     __shadow_walk_next(&iterator, spte)) {
 		leaf = iterator.level;
@@ -3504,7 +3504,6 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
 
 		if (!is_shadow_present_pte(spte))
 			break;
-
 	}
 
 	walk_shadow_page_lockless_end(vcpu);
@@ -3517,9 +3516,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 {
 	u64 sptes[PT64_ROOT_MAX_LEVEL];
 	struct rsvd_bits_validate *rsvd_check;
-	int root = vcpu->arch.mmu->shadow_root_level;
-	int leaf;
-	int level;
+	int root, leaf, level;
 	bool reserved = false;
 
 	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) {
@@ -3528,9 +3525,9 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
-		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes);
+		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
 	else
-		leaf = get_walk(vcpu, addr, sptes);
+		leaf = get_walk(vcpu, addr, sptes, &root);
 
 	if (unlikely(leaf < 0)) {
 		*sptep = 0ull;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 50cec7a15ddb..a4f9447f8327 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1148,13 +1148,16 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
  * Return the level of the lowest level SPTE added to sptes.
  * That SPTE may be non-present.
  */
-int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
+int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			 int *root_level)
 {
 	struct tdp_iter iter;
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	int leaf = -1;
 
+	*root_level = vcpu->arch.mmu->shadow_root_level;
+
 	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf - 1] = iter.old_spte;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 556e065503f6..cbbdbadd1526 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -44,5 +44,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn);
 
-int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes);
+int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			 int *root_level);
+
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
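
The same idea in miniature: a self-contained C sketch (hypothetical
names, not kernel code) where the walker reports its actual starting
level through an out-parameter, so the caller scans exactly the slots
the walk filled.  This mirrors the PAE case, where the walk starts at
the PDPTR level rather than the CR3 root level the caller might assume.

#include <stdio.h>

static int get_walk(const unsigned long *table, int start_level,
		    unsigned long *entries, int *root_level)
{
	int leaf = -1;
	int level;

	*root_level = start_level;	/* single source of truth: the walker */
	for (level = start_level; level >= 1; level--) {
		leaf = level;
		entries[level - 1] = table[level - 1];
		if (!table[level - 1])
			break;
	}
	return leaf;
}

int main(void)
{
	unsigned long entries[4];	/* levels 1..4, uninitialized */
	unsigned long table[4] = { 0, 0x1000, 0x2000, 0 };
	int root, leaf, level;

	/* PAE-style: the walk begins at level 3; entries[3] stays garbage. */
	leaf = get_walk(table, 3, entries, &root);
	for (level = root; level >= leaf; level--)	/* never reads [3] */
		printf("level %d: 0x%lx\n", level, entries[level - 1]);
	return 0;
}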

From patchwork Fri Dec 18 00:31:38 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11981055
Date: Thu, 17 Dec 2020 16:31:38 -0800
In-Reply-To: <20201218003139.2167891-1-seanjc@google.com>
Message-Id: <20201218003139.2167891-4-seanjc@google.com>
References: <20201218003139.2167891-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 3/4] KVM: x86/mmu: Use raw level to index into MMIO walks' sptes array
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon, Richard Herbert

Bump the size of the sptes array by one and use the raw level of the
SPTE to index into the sptes array.  Using the SPTE level directly
improves readability by eliminating the need to reason out why the level
is being adjusted when indexing the array.  The array is on the stack
and is not explicitly initialized; bumping its size is nothing more than
a superficial adjustment to the stack frame.

Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kvm/mmu/mmu.c     | 15 +++++++--------
 arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52f36c879086..4798a4472066 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3500,7 +3500,7 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level
 		leaf = iterator.level;
 
 		spte = mmu_spte_get_lockless(iterator.sptep);
-		sptes[leaf - 1] = spte;
+		sptes[leaf] = spte;
 
 		if (!is_shadow_present_pte(spte))
 			break;
@@ -3514,7 +3514,7 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level
 /* return true if reserved bit is detected on spte. */
 static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 {
-	u64 sptes[PT64_ROOT_MAX_LEVEL];
+	u64 sptes[PT64_ROOT_MAX_LEVEL + 1];
 	struct rsvd_bits_validate *rsvd_check;
 	int root, leaf, level;
 	bool reserved = false;
@@ -3537,16 +3537,15 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
 
 	for (level = root; level >= leaf; level--) {
-		if (!is_shadow_present_pte(sptes[level - 1]))
+		if (!is_shadow_present_pte(sptes[level]))
 			break;
 
 		/*
 		 * Use a bitwise-OR instead of a logical-OR to aggregate the
 		 * reserved bit and EPT's invalid memtype/XWR checks to avoid
 		 * adding a Jcc in the loop.
 		 */
-		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) |
-			    __is_rsvd_bits_set(rsvd_check, sptes[level - 1],
-					       level);
+		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level]) |
+			    __is_rsvd_bits_set(rsvd_check, sptes[level], level);
 	}
 
 	if (reserved) {
@@ -3554,10 +3553,10 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
 		       __func__, addr);
 		for (level = root; level >= leaf; level--)
 			pr_err("------ spte 0x%llx level %d.\n",
-			       sptes[level - 1], level);
+			       sptes[level], level);
 	}
 
-	*sptep = sptes[leaf - 1];
+	*sptep = sptes[leaf];
 
 	return reserved;
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a4f9447f8327..efef571806ad 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1160,7 +1160,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 
 	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
 		leaf = iter.level;
-		sptes[leaf - 1] = iter.old_spte;
+		sptes[leaf] = iter.old_spte;
 	}
 
 	return leaf;
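
The indexing change, reduced to a toy C sketch (hypothetical names, not
kernel code): trade one wasted array slot for the removal of a "- 1"
adjustment at every access site.

#define MAX_LEVEL 5

static unsigned long packed[MAX_LEVEL];		/* old style: packed[level - 1] */
static unsigned long raw[MAX_LEVEL + 1];	/* new style: raw[level], [0] unused */

static void record(int level, unsigned long entry)
{
	packed[level - 1] = entry;	/* adjustment must be reasoned about */
	raw[level] = entry;		/* index matches the level directly */
}

int main(void)
{
	record(3, 0x3000);
	return raw[3] == packed[2] ? 0 : 1;	/* same entry, two indexings */
}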

From patchwork Fri Dec 18 00:31:39 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11981053
Date: Thu, 17 Dec 2020 16:31:39 -0800
In-Reply-To: <20201218003139.2167891-1-seanjc@google.com>
Message-Id: <20201218003139.2167891-5-seanjc@google.com>
References: <20201218003139.2167891-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [PATCH 4/4] KVM: x86/mmu: Optimize not-present/MMIO SPTE check in get_mmio_spte()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon, Richard Herbert

Check only the terminal leaf for a "!PRESENT || MMIO" SPTE when looking
for reserved bits on valid, non-MMIO SPTEs.  The get_walk() helpers
terminate their walks if a not-present or MMIO SPTE is encountered, i.e.
the non-terminal SPTEs have already been verified to be regular SPTEs.
This eliminates an extra check-and-branch in a relatively hot loop.

Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kvm/mmu/mmu.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4798a4472066..769855f5f0a1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3511,7 +3511,7 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level
 	return leaf;
 }
 
-/* return true if reserved bit is detected on spte. */
+/* return true if reserved bit(s) are detected on a valid, non-MMIO SPTE. */
 static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 {
 	u64 sptes[PT64_ROOT_MAX_LEVEL + 1];
@@ -3534,11 +3534,20 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		return reserved;
 	}
 
+	*sptep = sptes[leaf];
+
+	/*
+	 * Skip reserved bits checks on the terminal leaf if it's not a valid
+	 * SPTE.  Note, this also (intentionally) skips MMIO SPTEs, which, by
+	 * design, always have reserved bits set.  The purpose of the checks is
+	 * to detect reserved bits on non-MMIO SPTEs. i.e. buggy SPTEs.
+	 */
+	if (!is_shadow_present_pte(sptes[leaf]))
+		leaf++;
+
 	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
 
-	for (level = root; level >= leaf; level--) {
-		if (!is_shadow_present_pte(sptes[level]))
-			break;
+	for (level = root; level >= leaf; level--)
 		/*
 		 * Use a bitwise-OR instead of a logical-OR to aggregate the
 		 * reserved bit and EPT's invalid memtype/XWR checks to avoid
@@ -3546,7 +3555,6 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		 */
 		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level]) |
 			    __is_rsvd_bits_set(rsvd_check, sptes[level], level);
-	}
 
 	if (reserved) {
 		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
@@ -3556,8 +3564,6 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		for (level = root; level >= leaf; level--)
 			pr_err("------ spte 0x%llx level %d.\n",
 			       sptes[level], level);
 	}
 
-	*sptep = sptes[leaf];
-
 	return reserved;
 }
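
The optimization, restated as a small standalone C sketch (hypothetical
names and bit layout, not kernel code): since only the terminal entry
can be non-present, check it once and bump the loop bound, leaving a
branch-free loop body.

#include <stdbool.h>

#define PRESENT		0x1UL
#define RSVD_MASK	0xf0UL	/* made-up reserved bits for the sketch */

static bool check_reserved(const unsigned long *entries, int root, int leaf)
{
	bool reserved = false;
	int level;

	if (!(entries[leaf] & PRESENT))	/* terminal-only check, done once */
		leaf++;

	for (level = root; level >= leaf; level--)	/* hot loop, no branch */
		reserved |= (entries[level] & RSVD_MASK) != 0;

	return reserved;
}

int main(void)
{
	/* indexed by raw level, slot 0 unused; level 1 is !PRESENT, skipped */
	unsigned long entries[5] = { 0, 0x00, 0xf1, 0x01, 0x01 };

	return check_reserved(entries, 4, 1) ? 0 : 1;	/* level 2 trips the mask */
}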