From patchwork Fri Apr 1 17:55:52 2022
Date: Fri, 1 Apr 2022 17:55:52 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-22-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 21/23] KVM: Allow GFP flags to be passed when topping up MMU caches
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
    Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
    "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
    "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
    "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
    Peter Feiner, David Matlack
X-Mailing-List: kvm@vger.kernel.org

This will be used in a subsequent commit to top up MMU caches under the
MMU lock with GFP_NOWAIT as part of eager page splitting.

No functional change intended.
Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 include/linux/kvm_host.h | 1 +
 virt/kvm/kvm_main.c      | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 252ee4a61b58..7d3a1f28beb2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1335,6 +1335,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);

 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp);
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c4cac4195f4a..554148ea0c30 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -369,7 +369,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
 	return (void *)__get_free_page(gfp_flags);
 }

-int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp)
 {
 	void *obj;

@@ -384,7 +384,7 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 	if (mc->nobjs >= min)
 		return 0;
 	while (mc->nobjs < mc->capacity) {
-		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
+		obj = mmu_memory_cache_alloc_obj(mc, gfp);
 		if (!obj)
 			return mc->nobjs >= min ? 0 : -ENOMEM;
 		mc->objects[mc->nobjs++] = obj;
@@ -392,6 +392,11 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 	return 0;
 }

+int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+{
+	return __kvm_mmu_topup_memory_cache(mc, min, GFP_KERNEL_ACCOUNT);
+}
+
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
 {
 	return mc->nobjs;