From patchwork Fri Aug 26 23:12:26 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12956699
Date: Fri, 26 Aug 2022 16:12:26 -0700
In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20220826231227.4096391-1-dmatlack@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220826231227.4096391-10-dmatlack@google.com>
Subject: [PATCH v2 09/10] KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU faults
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Peter Xu
List-ID: X-Mailing-List: kvm@vger.kernel.org

Stop calling make_mmu_pages_available() when handling TDP MMU faults.

The TDP MMU does not participate in the "available MMU pages" tracking
and limiting, so calling this function is unnecessary work when handling
TDP MMU faults.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8f124a23ab4c..803aed2c0e74 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4347,10 +4347,6 @@ int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
-	if (r)
-		goto out_unlock;
-
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
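For readers outside the KVM MMU code, the rationale can be illustrated with a toy model (NOT kernel code; all names, fields, and the limit value here are invented for this sketch). In the legacy/shadow MMU, each fault may allocate a page that counts against a per-VM limit, so the fault path must first reclaim pages to guarantee headroom. TDP MMU pages are not tracked by that limit, so the equivalent reclaim call on the TDP MMU fault path was pure overhead:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the two accounting regimes. Only "shadow" pages count
 * toward the limit; TDP MMU pages are tracked separately and never
 * trigger limit-based reclaim. */
struct toy_kvm {
	unsigned long shadow_pages_used; /* counts toward the limit */
	unsigned long shadow_pages_max;  /* per-VM limit (invented) */
	unsigned long tdp_pages_used;    /* not covered by the limit */
};

/* Legacy-path helper: zap shadow pages until there is headroom.
 * Stands in for make_mmu_pages_available(). */
static void toy_make_mmu_pages_available(struct toy_kvm *kvm)
{
	while (kvm->shadow_pages_used >= kvm->shadow_pages_max)
		kvm->shadow_pages_used--; /* stand-in for zapping a page */
}

/* Legacy fault path: must ensure headroom before allocating. */
static void toy_shadow_fault(struct toy_kvm *kvm)
{
	toy_make_mmu_pages_available(kvm);
	kvm->shadow_pages_used++;
}

/* TDP MMU fault path: allocates without touching the shadow-page
 * accounting, so a make-available step would accomplish nothing. */
static void toy_tdp_fault(struct toy_kvm *kvm)
{
	kvm->tdp_pages_used++;
}
```

In the model, only toy_shadow_fault() ever consults the limit; toy_tdp_fault() never reads or writes the accounted counter, which mirrors why the real patch can drop the call without changing TDP MMU behavior.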