From patchwork Tue Jun 6 19:03:59 2023
X-Patchwork-Submitter: Ackerley Tng <ackerleytng@google.com>
X-Patchwork-Id: 13269619
Date: Tue, 6 Jun 2023 19:03:59 +0000
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <44ac051fab6315161905949caefed78381b6c5fe.1686077275.git.ackerleytng@google.com>
Subject: [RFC PATCH 14/19] KVM: guest_mem: Refactor cleanup to separate inode and file cleanup
From: Ackerley Tng <ackerleytng@google.com>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev,
    pbonzini@redhat.com, seanjc@google.com, shuah@kernel.org, willy@infradead.org
Cc: brauner@kernel.org, chao.p.peng@linux.intel.com, coltonlewis@google.com,
    david@redhat.com, dhildenb@redhat.com, dmatlack@google.com,
    erdemaktas@google.com, hughd@google.com, isaku.yamahata@gmail.com,
    jarkko@kernel.org, jmattson@google.com, joro@8bytes.org,
    jthoughton@google.com, jun.nakajima@intel.com,
    kirill.shutemov@linux.intel.com, liam.merwick@oracle.com,
    mail@maciej.szmigiero.name, mhocko@suse.com, michael.roth@amd.com,
    qperret@google.com, rientjes@google.com, rppt@kernel.org,
    steven.price@arm.com, tabba@google.com, vannapurve@google.com,
    vbabka@suse.cz, vipinsh@google.com, vkuznets@redhat.com,
    wei.w.wang@intel.com, yu.c.zhang@linux.intel.com, kvm@vger.kernel.org,
    linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, qemu-devel@nongnu.org, x86@kernel.org,
    Ackerley Tng <ackerleytng@google.com>

Cleanup in kvm_gmem_release() should be the reverse of
kvm_gmem_create_file().

Cleanup in kvm_gmem_evict_inode() should be the reverse of
kvm_gmem_create_inode().

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 virt/kvm/guest_mem.c | 105 +++++++++++++++++++++++++++++--------------
 1 file changed, 71 insertions(+), 34 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 2f69ef666871..13253af40be6 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -247,42 +247,13 @@ static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset,
 
 static int kvm_gmem_release(struct inode *inode, struct file *file)
 {
-	struct kvm_gmem *gmem = inode->i_mapping->private_data;
-	struct kvm_memory_slot *slot;
-	struct kvm *kvm = gmem->kvm;
-	unsigned long index;
-
 	/*
-	 * Prevent concurrent attempts to *unbind* a memslot. This is the last
-	 * reference to the file and thus no new bindings can be created, but
-	 * deferencing the slot for existing bindings needs to be protected
-	 * against memslot updates, specifically so that unbind doesn't race
-	 * and free the memslot (kvm_gmem_get_file() will return NULL).
+	 * This is called when the last reference to the file is released. Only
+	 * clean up file-related stuff. struct kvm_gmem is also referred to in
+	 * the inode, so clean that up in kvm_gmem_evict_inode().
 	 */
-	mutex_lock(&kvm->slots_lock);
-
-	xa_for_each(&gmem->bindings, index, slot)
-		rcu_assign_pointer(slot->gmem.file, NULL);
-
-	synchronize_rcu();
-
-	/*
-	 * All in-flight operations are gone and new bindings can be created.
-	 * Free the backing memory, and more importantly, zap all SPTEs that
-	 * pointed at this file.
-	 */
-	kvm_gmem_invalidate_begin(kvm, gmem, 0, -1ul);
-	truncate_inode_pages_final(file->f_mapping);
-	kvm_gmem_invalidate_end(kvm, gmem, 0, -1ul);
-
-	mutex_unlock(&kvm->slots_lock);
-
-	WARN_ON_ONCE(!(mapping_empty(file->f_mapping)));
-
-	xa_destroy(&gmem->bindings);
-	kfree(gmem);
-
-	kvm_put_kvm(kvm);
+	file->f_mapping = NULL;
+	file->private_data = NULL;
 
 	return 0;
 }
@@ -603,11 +574,77 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
+static void kvm_gmem_evict_inode(struct inode *inode)
+{
+	struct kvm_gmem *gmem = inode->i_mapping->private_data;
+	struct kvm_memory_slot *slot;
+	struct kvm *kvm;
+	unsigned long index;
+
+	/*
+	 * If iput() was called before inode is completely set up due to some
+	 * error in kvm_gmem_create_inode(), gmem will be NULL.
+	 */
+	if (!gmem)
+		goto basic_cleanup;
+
+	kvm = gmem->kvm;
+
+	/*
+	 * Prevent concurrent attempts to *unbind* a memslot. This is the last
+	 * reference to the file and thus no new bindings can be created, but
+	 * deferencing the slot for existing bindings needs to be protected
+	 * against memslot updates, specifically so that unbind doesn't race
+	 * and free the memslot (kvm_gmem_get_file() will return NULL).
+	 */
+	mutex_lock(&kvm->slots_lock);
+
+	xa_for_each(&gmem->bindings, index, slot)
+		rcu_assign_pointer(slot->gmem.file, NULL);
+
+	synchronize_rcu();
+
+	/*
+	 * All in-flight operations are gone and new bindings can be created.
+	 * Free the backing memory, and more importantly, zap all SPTEs that
+	 * pointed at this file.
+	 */
+	kvm_gmem_invalidate_begin(kvm, gmem, 0, -1ul);
+	truncate_inode_pages_final(inode->i_mapping);
+	kvm_gmem_invalidate_end(kvm, gmem, 0, -1ul);
+
+	mutex_unlock(&kvm->slots_lock);
+
+	WARN_ON_ONCE(!(mapping_empty(inode->i_mapping)));
+
+	xa_destroy(&gmem->bindings);
+	kfree(gmem);
+
+	kvm_put_kvm(kvm);
+
+basic_cleanup:
+	clear_inode(inode);
+}
+
+static const struct super_operations kvm_gmem_super_operations = {
+	/*
+	 * TODO update statfs handler for kvm_gmem. What should the statfs
+	 * handler return?
+	 */
+	.statfs		= simple_statfs,
+	.evict_inode	= kvm_gmem_evict_inode,
+};
+
 static int kvm_gmem_init_fs_context(struct fs_context *fc)
 {
+	struct pseudo_fs_context *ctx;
+
 	if (!init_pseudo(fc, GUEST_MEMORY_MAGIC))
 		return -ENOMEM;
 
+	ctx = fc->fs_private;
+	ctx->ops = &kvm_gmem_super_operations;
+
 	return 0;
 }
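
For readers less familiar with the pseudo-filesystem plumbing used in the
last hunk: init_pseudo() allocates a struct pseudo_fs_context and stores it
in fc->fs_private, so overriding ctx->ops is what routes inode teardown into
the custom evict_inode handler. Below is a minimal sketch of that pattern.
It is not part of this patch; the example_* names and the magic number are
hypothetical, and the guest_mem-specific teardown is reduced to a comment.

#include <linux/fs.h>
#include <linux/fs_context.h>
#include <linux/mm.h>
#include <linux/pseudo_fs.h>

/* Hypothetical magic value; a real filesystem would define its own. */
#define EXAMPLE_FS_MAGIC	0x6b676d65

static void example_evict_inode(struct inode *inode)
{
	/*
	 * Inode-lifetime cleanup goes here. In the patch, this is where
	 * struct kvm_gmem is unbound from memslots and freed; the inode may
	 * not be fully set up, so private state can be NULL.
	 */
	truncate_inode_pages_final(&inode->i_data);
	clear_inode(inode);
}

static const struct super_operations example_super_ops = {
	.statfs		= simple_statfs,
	.evict_inode	= example_evict_inode,
};

static int example_init_fs_context(struct fs_context *fc)
{
	struct pseudo_fs_context *ctx;

	/* init_pseudo() allocates ctx and stores it in fc->fs_private. */
	ctx = init_pseudo(fc, EXAMPLE_FS_MAGIC);
	if (!ctx)
		return -ENOMEM;

	/* Install super_operations carrying the custom evict_inode. */
	ctx->ops = &example_super_ops;
	return 0;
}

static struct file_system_type example_fs_type = {
	.name		 = "example_gmem",
	.init_fs_context = example_init_fs_context,
	.kill_sb	 = kill_anon_super,
};

The filesystem would then be registered with register_filesystem() and
mounted internally (e.g. via kern_mount()), so that inodes created on it are
torn down through example_evict_inode() rather than in a file's release
handler, which is the split this patch makes for guest_mem.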