From patchwork Mon Jun 22 20:08:06 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11619021
From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Subject: [PATCH v2 05/21] KVM: x86/mmu: Try to avoid crashing KVM if a MMU memory cache is empty
Date: Mon, 22 Jun 2020 13:08:06 -0700
Message-Id: <20200622200822.4426-6-sean.j.christopherson@intel.com>
In-Reply-To: <20200622200822.4426-1-sean.j.christopherson@intel.com>
References: <20200622200822.4426-1-sean.j.christopherson@intel.com>
Cc: linux-arch@vger.kernel.org, Junaid Shahid, Christoffer Dall, Wanpeng Li,
 kvm@vger.kernel.org, Suzuki K Poulose, Joerg Roedel, Peter Shier,
 linux-mips@vger.kernel.org, Sean Christopherson, linux-kernel@vger.kernel.org,
 James Morse, linux-arm-kernel@lists.infradead.org, Ben Gardon,
 Vitaly Kuznetsov, Peter Feiner, kvmarm@lists.cs.columbia.edu,
 Julien Thierry, Jim Mattson

Attempt to allocate a new object instead of crashing KVM (and likely the
kernel) if a memory cache is unexpectedly empty.  Use GFP_ATOMIC for the
allocation as the caches are used while holding mmu_lock.  The immediate
BUG_ON() makes the code unnecessarily explosive and led to confusing
minimums being used in the past, e.g. allocating 4 objects where 1 would
suffice.

Signed-off-by: Sean Christopherson
Reviewed-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ba70de24a5b0..5e773564ab20 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1060,6 +1060,15 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 }
 
+static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
+					       gfp_t gfp_flags)
+{
+	if (mc->kmem_cache)
+		return kmem_cache_zalloc(mc->kmem_cache, gfp_flags);
+	else
+		return (void *)__get_free_page(gfp_flags);
+}
+
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 {
 	void *obj;
@@ -1067,10 +1076,7 @@ static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 	if (mc->nobjs >= min)
 		return 0;
 	while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
-		if (mc->kmem_cache)
-			obj = kmem_cache_zalloc(mc->kmem_cache, GFP_KERNEL_ACCOUNT);
-		else
-			obj = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
+		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
 		if (!obj)
 			return mc->nobjs >= min ? 0 : -ENOMEM;
 		mc->objects[mc->nobjs++] = obj;
@@ -1118,8 +1124,11 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
 {
 	void *p;
 
-	BUG_ON(!mc->nobjs);
-	p = mc->objects[--mc->nobjs];
+	if (WARN_ON(!mc->nobjs))
+		p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
+	else
+		p = mc->objects[--mc->nobjs];
+	BUG_ON(!p);
 	return p;
 }
 
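
---

Note: the pattern the patch preserves is fill-then-consume: callers top up
the cache with sleeping (GFP_KERNEL_ACCOUNT) allocations before taking
mmu_lock, then pop objects from the cache while the lock is held, where
sleeping is forbidden; that is why the emergency fallback on an empty cache
must use GFP_ATOMIC. Below is a minimal, self-contained sketch of that
pattern in plain userspace C, for illustration only: the demo_* names are
hypothetical, the types are simplified, and calloc() stands in for the
kernel's kmem_cache/__get_free_page allocators.

/*
 * Standalone model of the fill-then-consume cache pattern used by the
 * KVM MMU memory caches.  Not kernel code; calloc() stands in for the
 * real allocators.
 */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_CAPACITY 40
#define OBJ_SIZE 64

struct demo_memory_cache {
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Fill the cache; may "sleep", so it is called before taking the lock. */
static int demo_topup(struct demo_memory_cache *mc, int min)
{
	if (mc->nobjs >= min)
		return 0;
	while (mc->nobjs < CACHE_CAPACITY) {
		void *obj = calloc(1, OBJ_SIZE);  /* ~GFP_KERNEL_ACCOUNT */
		if (!obj)
			return mc->nobjs >= min ? 0 : -1;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/*
 * Pop an object.  With the patch, an unexpectedly empty cache falls back
 * to an emergency allocation (GFP_ATOMIC in the kernel, since mmu_lock is
 * held) instead of crashing immediately.
 */
static void *demo_alloc(struct demo_memory_cache *mc)
{
	if (mc->nobjs == 0) {
		fprintf(stderr, "cache empty, emergency allocation\n");
		return calloc(1, OBJ_SIZE);  /* ~GFP_ATOMIC | __GFP_ACCOUNT */
	}
	return mc->objects[--mc->nobjs];
}

int main(void)
{
	struct demo_memory_cache mc = { 0 };

	demo_topup(&mc, 4);             /* outside the "lock": may sleep   */
	/* ... take mmu_lock ... */
	void *p = demo_alloc(&mc);      /* under the "lock": must not sleep */
	/* ... release mmu_lock ... */
	printf("got object %p, %d objects left in cache\n", p, mc.nobjs);
	return 0;
}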