From patchwork Thu Aug 1 09:01:08 2024
From: Fuad Tabba <tabba@google.com>
Date: Thu, 1 Aug 2024 10:01:08 +0100
Subject: [RFC PATCH v2 01/10] KVM: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock
Message-ID: <20240801090117.3841080-2-tabba@google.com>
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com,
    roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org,
    jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
    fvdl@google.com, hughd@google.com, tabba@google.com

Create a new variant of kvm_gmem_get_pfn(), which retains the folio
lock if it returns successfully.
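
As a sketch of the intended locking contract (illustration only, not
part of the patch; the caller below is hypothetical, though it mirrors
how a later patch in this series uses the helper): on success, the
folio backing *pfn is returned locked and with a reference held, and
the caller must drop both.

	kvm_pfn_t pfn;
	int ret;

	ret = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
	if (!ret) {
		struct page *page = pfn_to_page(pfn);

		/* The folio is still locked; its state is stable here. */
		bool mapped = page_mapped(page);

		unlock_page(page);	/* drop the lock the helper took */
		put_page(page);		/* drop the reference it returned */
	}
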
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h | 11 +++++++++++
 virt/kvm/guest_memfd.c   | 19 ++++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 692c01e41a18..43a157f8171a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2431,6 +2431,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 #ifdef CONFIG_KVM_PRIVATE_MEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2439,6 +2441,15 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
	KVM_BUG_ON(1, kvm);
	return -EIO;
 }
+
+static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, kvm_pfn_t *pfn,
+					  int *max_order)
+{
+	KVM_BUG_ON(1, kvm);
+	return -EIO;
+}
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
 #endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 747fe251e445..f3f4334a9ccb 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -482,8 +482,8 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
	fput(file);
 }
 
-int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
 {
	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
	struct kvm_gmem *gmem;
@@ -524,10 +524,23 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 
	r = 0;
 
-	folio_unlock(folio);
 out_fput:
	fput(file);
 
	return r;
 }
+EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn_locked);
+
+int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+{
+	int r;
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, pfn, max_order);
+	if (r)
+		return r;
+
+	unlock_page(pfn_to_page(*pfn));
+
+	return 0;
+}
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);

From patchwork Thu Aug 1 09:01:09 2024
From: Fuad Tabba <tabba@google.com>
Date: Thu, 1 Aug 2024 10:01:09 +0100
Subject: [RFC PATCH v2 02/10] KVM: Add restricted support for mapping guestmem by the host
Message-ID: <20240801090117.3841080-3-tabba@google.com>
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add support for mmap() and fault() for guest_memfd in the host. The
ability to fault in a guest page is contingent on that page being
shared with the host.

To track this, this patch adds a new xarray to each guest_memfd object,
which tracks the mappability of guest frames.

The guest_memfd PRIVATE memory attribute is not used for two reasons.
First, because it reflects the userspace expectation for that memory
location, and therefore can be toggled by userspace. Second, although
each guest_memfd file has a 1:1 binding with a KVM instance, the plan
is to allow multiple files per inode, e.g. to allow intra-host
migration to a new KVM instance without destroying the guest_memfd.

This new feature is gated by a new configuration option,
CONFIG_KVM_PRIVATE_MEM_MAPPABLE.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h |  61 ++++++++++++++++++++
 virt/kvm/Kconfig         |   4 ++
 virt/kvm/guest_memfd.c   | 110 +++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      | 122 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 297 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 43a157f8171a..ab1344327e57 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2452,4 +2452,65 @@ static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
 }
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end);
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable);
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start,
+			       gfn_t end);
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				 gfn_t end);
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+#else
+static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start,
+					  gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot,
+						gfn_t start, gfn_t end,
+						bool is_mappable)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot,
+					     gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot,
+					       gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot,
+					     gfn_t gfn)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 29b73eedfe74..a3970c5eca7b 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -109,3 +109,7 @@ config KVM_GENERIC_PRIVATE_MEM
	select KVM_GENERIC_MEMORY_ATTRIBUTES
	select KVM_PRIVATE_MEM
	bool
+
+config KVM_PRIVATE_MEM_MAPPABLE
+	select KVM_PRIVATE_MEM
+	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index f3f4334a9ccb..0a1f266a16f9 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -11,6 +11,9 @@ struct kvm_gmem {
	struct kvm *kvm;
	struct xarray bindings;
	struct list_head entry;
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	struct xarray unmappable_gfns;
+#endif
 };
 
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
@@ -230,6 +233,11 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
	mutex_unlock(&kvm->slots_lock);
 
	xa_destroy(&gmem->bindings);
+
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_destroy(&gmem->unmappable_gfns);
+#endif
+
	kfree(gmem);
 
	kvm_put_kvm(kvm);
@@ -248,7 +256,105 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
	return get_file_active(&slot->gmem.file);
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	void *xval = is_mappable ? NULL : xa_mk_value(true);
+	void *r;
+
+	r = xa_store_range(&gmem->unmappable_gfns, start, end - 1, xval, GFP_KERNEL);
+
+	return xa_err(r);
+}
+
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, true);
+}
+
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, false);
+}
+
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	unsigned long _gfn = gfn;
+
+	return !xa_find(&gmem->unmappable_gfns, &_gfn, ULONG_MAX, XA_PRESENT);
+}
+
+static bool kvm_gmem_isfaultable(struct vm_fault *vmf)
+{
+	struct kvm_gmem *gmem = vmf->vma->vm_file->private_data;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t pgoff = vmf->pgoff;
+	struct kvm_memory_slot *slot;
+	unsigned long index;
+	bool r = true;
+
+	filemap_invalidate_lock(inode->i_mapping);
+
+	xa_for_each_range(&gmem->bindings, index, slot, pgoff, pgoff) {
+		pgoff_t base_gfn = slot->base_gfn;
+		pgoff_t gfn_pgoff = slot->gmem.pgoff;
+		pgoff_t gfn = base_gfn + max(gfn_pgoff, pgoff) - gfn_pgoff;
+
+		if (!kvm_slot_gmem_is_mappable(slot, gfn)) {
+			r = false;
+			break;
+		}
+	}
+
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return r;
+}
+
+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct folio *folio;
+
+	folio = kvm_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	if (!kvm_gmem_isfaultable(vmf)) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	file_accessed(file);
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
	.open		= generic_file_open,
	.release	= kvm_gmem_release,
	.fallocate	= kvm_gmem_fallocate,
@@ -369,6 +475,10 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
	xa_init(&gmem->bindings);
	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_init(&gmem->unmappable_gfns);
+#endif
+
	fd_install(fd, file);
	return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1192942aef91..f4b4498d4de6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3265,6 +3265,128 @@ static int next_segment(unsigned long len, int offset)
	return len;
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+static bool __kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			if (!kvm_slot_gmem_is_mappable(memslot, i))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mappable(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static bool __kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			struct page *page;
+			bool is_mapped;
+			kvm_pfn_t pfn;
+
+			if (WARN_ON_ONCE(kvm_gmem_get_pfn_locked(kvm, memslot, i, &pfn, NULL)))
+				continue;
+
+			page = pfn_to_page(pfn);
+			is_mapped = page_mapped(page) || page_maybe_dma_pinned(page);
+			unlock_page(page);
+			put_page(page);
+
+			if (is_mapped)
+				return true;
+		}
+	}
+
+	return false;
+}
+
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mapped(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static int kvm_gmem_toggle_mappable(struct kvm *kvm, gfn_t start, gfn_t end,
+				    bool is_mappable)
+{
+	struct kvm_memslot_iter iter;
+	int r = 0;
+
+	mutex_lock(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(start >= end))
+			continue;
+
+		r = kvm_slot_gmem_toggle_mappable(memslot, gfn_start, gfn_end, is_mappable);
+		if (WARN_ON_ONCE(r))
+			break;
+	}
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, true);
+}
+
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, false);
+}
+
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
				 void *data, int offset, int len)
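
For illustration only (not part of the patch), here is the userspace
view of the new behavior, mirroring the selftest added later in this
series. KVM_CREATE_GUEST_MEMFD and its struct are the existing
guest_memfd uapi; the fault succeeds only while the backing range is
still mappable (i.e., shared with the host):

	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/kvm.h>

	int map_gmem(int vm_fd, size_t size)
	{
		struct kvm_create_guest_memfd gmem = { .size = size };
		int fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
		char *mem;

		if (fd < 0)
			return -1;

		mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			   fd, 0);
		if (mem == MAP_FAILED)	/* fails without this series */
			return -1;

		mem[0] = 0xaa;	/* faults in the first page via kvm_gmem_fault() */
		return fd;
	}
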

From patchwork Thu Aug 1 09:01:10 2024
From: Fuad Tabba <tabba@google.com>
Date: Thu, 1 Aug 2024 10:01:10 +0100
Subject: [RFC PATCH v2 03/10] KVM: Implement kvm_(read|write)_guest_page for private memory slots
Message-ID: <20240801090117.3841080-4-tabba@google.com>
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Make __kvm_read_guest_page() and __kvm_write_guest_page() capable of
accessing guest memory even if no userspace address is available.
Moreover, check that the memory being accessed is shared with the host
before attempting the access.

KVM on the host might need to access shared memory that is not mapped
in host userspace but is in fact shared with the host, e.g., when
accounting for stolen time. This allows such access without relying on
the slot's userspace_addr being set.

This does not circumvent protection, since the access is only attempted
if the memory is mappable by the host, which implies shareability.
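
A rough sketch of the kind of in-kernel caller this enables (the caller
and variable names here are hypothetical; the -EPERM contract comes
from the diff below):

	u64 steal;
	int ret;

	/* Works even when slot->userspace_addr is 0. */
	ret = kvm_read_guest_page(kvm, gfn, &steal, offset, sizeof(steal));
	if (ret == -EPERM) {
		/* The page is private to the guest; leave it alone. */
	}
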
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 virt/kvm/kvm_main.c | 127 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 111 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f4b4498d4de6..ec6255c7325e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3385,20 +3385,108 @@ int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
	return kvm_gmem_toggle_mappable(kvm, start, end, false);
 }
 
+static int __kvm_read_private_guest_page(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn, void *data, int offset,
+					 int len)
+{
+	struct page *page;
+	u64 pfn;
+	int r = 0;
+
+	if (size_add(offset, len) > PAGE_SIZE)
+		return -E2BIG;
+
+	mutex_lock(&kvm->slots_lock);
+
+	if (!__kvm_gmem_is_mappable(kvm, gfn, gfn + 1)) {
+		r = -EPERM;
+		goto unlock;
+	}
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
+	if (r)
+		goto unlock;
+
+	page = pfn_to_page(pfn);
+	memcpy(data, page_address(page) + offset, len);
+	unlock_page(page);
+	kvm_release_pfn_clean(pfn);
+unlock:
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static int __kvm_write_private_guest_page(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, const void *data,
+					  int offset, int len)
+{
+	struct page *page;
+	u64 pfn;
+	int r = 0;
+
+	if (size_add(offset, len) > PAGE_SIZE)
+		return -E2BIG;
+
+	mutex_lock(&kvm->slots_lock);
+
+	if (!__kvm_gmem_is_mappable(kvm, gfn, gfn + 1)) {
+		r = -EPERM;
+		goto unlock;
+	}
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
+	if (r)
+		goto unlock;
+
+	page = pfn_to_page(pfn);
+	memcpy(page_address(page) + offset, data, len);
+	unlock_page(page);
+	kvm_release_pfn_dirty(pfn);
+unlock:
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+#else
+static int __kvm_read_private_guest_page(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn, void *data, int offset,
+					 int len)
+{
+	WARN_ON_ONCE(1);
+	return -EIO;
+}
+
+static int __kvm_write_private_guest_page(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, const void *data,
+					  int offset, int len)
+{
+	WARN_ON_ONCE(1);
+	return -EIO;
+}
 #endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
 
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
-static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
-				 void *data, int offset, int len)
+
+static int __kvm_read_guest_page(struct kvm *kvm, struct kvm_memory_slot *slot,
+				 gfn_t gfn, void *data, int offset, int len)
 {
-	int r;
	unsigned long addr;
 
+	if (IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_MAPPABLE) &&
+	    kvm_slot_can_be_private(slot)) {
+		return __kvm_read_private_guest_page(kvm, slot, gfn, data,
+						     offset, len);
+	}
+
	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
	if (kvm_is_error_hva(addr))
		return -EFAULT;
-	r = __copy_from_user(data, (void __user *)addr + offset, len);
-	if (r)
+	if (__copy_from_user(data, (void __user *)addr + offset, len))
		return -EFAULT;
	return 0;
 }
@@ -3408,7 +3496,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 {
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
@@ -3417,7 +3505,7 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 {
	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
 
@@ -3492,17 +3580,24 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 /* Copy @len bytes from @data into guest memory at '(@gfn * PAGE_SIZE) + @offset' */
 static int __kvm_write_guest_page(struct kvm *kvm,
				  struct kvm_memory_slot *memslot, gfn_t gfn,
-			          const void *data, int offset, int len)
+				  const void *data, int offset, int len)
 {
-	int r;
-	unsigned long addr;
+	if (IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_MAPPABLE) &&
+	    kvm_slot_can_be_private(memslot)) {
+		int r = __kvm_write_private_guest_page(kvm, memslot, gfn, data,
+						       offset, len);
+
+		if (r)
+			return r;
+	} else {
+		unsigned long addr = gfn_to_hva_memslot(memslot, gfn);
+
+		if (kvm_is_error_hva(addr))
+			return -EFAULT;
+		if (__copy_to_user((void __user *)addr + offset, data, len))
+			return -EFAULT;
+	}
 
-	addr = gfn_to_hva_memslot(memslot, gfn);
-	if (kvm_is_error_hva(addr))
-		return -EFAULT;
-	r = __copy_to_user((void __user *)addr + offset, data, len);
-	if (r)
-		return -EFAULT;
	mark_page_dirty_in_slot(kvm, memslot, gfn);
	return 0;
 }

From patchwork Thu Aug 1 09:01:11 2024
From: Fuad Tabba <tabba@google.com>
Date: Thu, 1 Aug 2024 10:01:11 +0100
Subject: [RFC PATCH v2 04/10] KVM: Add KVM capability to check if guest_memfd can be mapped by the host
Message-ID: <20240801090117.3841080-5-tabba@google.com>
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add the KVM capability KVM_CAP_GUEST_MEMFD_MAPPABLE, which is true if
mapping guest memory is supported by the host.
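
For illustration only (not part of the patch), userspace would probe
the capability through the standard KVM_CHECK_EXTENSION ioctl on the
system fd, much as the selftest in patch 05/10 does via kvm_has_cap():

	#include <stdbool.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static bool gmem_mappable(int kvm_fd)
	{
		/* KVM_CHECK_EXTENSION returns > 0 if the cap is supported. */
		return ioctl(kvm_fd, KVM_CHECK_EXTENSION,
			     KVM_CAP_GUEST_MEMFD_MAPPABLE) > 0;
	}
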
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/uapi/linux/kvm.h | 3 ++-
 virt/kvm/kvm_main.c      | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d03842abae57..783d0c3f4cb1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -916,7 +916,8 @@ struct kvm_enable_cap {
 #define KVM_CAP_MEMORY_FAULT_INFO 232
 #define KVM_CAP_MEMORY_ATTRIBUTES 233
 #define KVM_CAP_GUEST_MEMFD 234
-#define KVM_CAP_VM_TYPES 235
+#define KVM_CAP_GUEST_MEMFD_MAPPABLE 235
+#define KVM_CAP_VM_TYPES 236
 
 struct kvm_irq_routing_irqchip {
	__u32 irqchip;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ec6255c7325e..485c39fc373c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5077,6 +5077,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_PRIVATE_MEM
	case KVM_CAP_GUEST_MEMFD:
		return !kvm || kvm_arch_has_private_mem(kvm);
+#endif
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	case KVM_CAP_GUEST_MEMFD_MAPPABLE:
+		return !kvm || kvm_arch_has_private_mem(kvm);
 #endif
	default:
		break;

From patchwork Thu Aug 1 09:01:12 2024
From: Fuad Tabba <tabba@google.com>
Date: Thu, 1 Aug 2024 10:01:12 +0100
Subject: [RFC PATCH v2 05/10] KVM: selftests: guest_memfd mmap() test when mapping is allowed
Message-ID: <20240801090117.3841080-6-tabba@google.com>
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Expand the guest_memfd selftests to test mapping guest memory when the
capability is supported, and to keep checking that memory is not
mappable when the capability isn't supported.

Also, build the guest_memfd selftest for aarch64.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../testing/selftests/kvm/guest_memfd_test.c  | 47 ++++++++++++++++++-
 2 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ac280dcba996..fb63f7e956d4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -166,6 +166,7 @@ TEST_GEN_PROGS_aarch64 += arch_timer
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
+TEST_GEN_PROGS_aarch64 += guest_memfd_test
 TEST_GEN_PROGS_aarch64 += guest_print_test
 TEST_GEN_PROGS_aarch64 += get-reg-list
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ba0c8e996035..c6bb2be5b6e2 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,55 @@ static void test_file_read_write(int fd)
		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t total_size)
 {
+	size_t page_size = getpagesize();
+	char *mem;
+	int ret;
+	int i;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, 0xaa, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	memset(mem, 0xaa, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t total_size)
+{
+	size_t page_size = getpagesize();
	char *mem;
 
	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
+}
+
+static void test_mmap(int fd, size_t total_size)
+{
+	if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_MAPPABLE))
+		test_mmap_allowed(fd, total_size);
+	else
+		test_mmap_denied(fd, total_size);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -190,7 +233,7 @@ int main(int argc, char *argv[])
	fd = vm_create_guest_memfd(vm, total_size, 0);
 
	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+	test_mmap(fd, total_size);
	test_file_size(fd, page_size, total_size);
	test_fallocate(fd, page_size, total_size);
	test_invalid_punch_hole(fd, page_size, total_size);
List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801090117.3841080-1-tabba@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801090117.3841080-7-tabba@google.com> Subject: [RFC PATCH v2 06/10] KVM: arm64: Skip VMA checks for slots without userspace address From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, tabba@google.com Memory slots backed by guest memory might be created with no intention of being mapped by the host. These are recognized by not having a userspace address in the memory slot. VMA checks are neither possible nor necessary for this kind of slot, so skip them. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/mmu.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 8bcab0cc3fe9..e632e10ea395 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -948,6 +948,10 @@ static void stage2_unmap_memslot(struct kvm *kvm, phys_addr_t size = PAGE_SIZE * memslot->npages; hva_t reg_end = hva + size; + /* Host will not map this private memory without a userspace address. */ + if (kvm_slot_can_be_private(memslot) && !hva) + return; + /* * A memory region could potentially cover multiple VMAs, and any holes * between them, so iterate over all of them to find out if we should @@ -1976,6 +1980,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, hva = new->userspace_addr; reg_end = hva + (new->npages << PAGE_SHIFT); + /* Host will not map this private memory without a userspace address. 
+	if (kvm_slot_can_be_private(new) && !hva)
+		return 0;
+
 	mmap_read_lock(current->mm);
 	/*
 	 * A memory region could potentially cover multiple VMAs, and any holes
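[Illustration, not part of the patch: a minimal userspace sketch of
creating such a host-unmapped slot, using the guest_memfd UAPI as it
exists upstream (KVM_CREATE_GUEST_MEMFD, KVM_SET_USER_MEMORY_REGION2).
The size, guest physical address, and slot number are arbitrary
placeholders, and error handling is omitted.]

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch only: create a purely private, guest_memfd-backed memslot. */
static int create_private_only_slot(int vm_fd)
{
	struct kvm_create_guest_memfd gmem = {
		.size = 0x200000,	/* 2 MiB of guest-private memory */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	struct kvm_userspace_memory_region2 region = {
		.slot = 0,
		.flags = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0x80000000,
		.memory_size = gmem.size,
		.userspace_addr = 0,	/* no host mapping: VMA checks skipped */
		.guest_memfd = gmem_fd,
		.guest_memfd_offset = 0,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}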
From patchwork Thu Aug 1 09:01:14 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750009
Date: Thu, 1 Aug 2024 10:01:14 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-8-tabba@google.com>
Subject: [RFC PATCH v2 07/10] KVM: arm64: Do not allow changes to private memory slots
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Handling changes to private memory slots is difficult, since it would
likely require cooperation from the hypervisor and/or the guest. Do not
allow such changes for now.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e632e10ea395..b1fc636fb670 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1970,6 +1970,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	    change != KVM_MR_FLAGS_ONLY)
 		return 0;
 
+	if ((change == KVM_MR_MOVE || change == KVM_MR_FLAGS_ONLY) &&
+	    (kvm_slot_can_be_private(old) || kvm_slot_can_be_private(new)))
+		return -EPERM;
+
 	/*
 	 * Prevent userspace from creating a memory region outside of the IPA
 	 * space addressable by the KVM guest IPA space.
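[Illustration, not part of the patch: the userspace-visible effect of
this check. It assumes vm_fd and region from the earlier sketch; the
address delta is a placeholder.]

/*
 * Sketch: with this patch applied, moving a private slot (KVM_MR_MOVE)
 * is refused; the ioctl fails with errno == EPERM.
 */
static int try_move_private_slot(int vm_fd,
				 struct kvm_userspace_memory_region2 *region)
{
	region->guest_phys_addr += 0x10000000;	/* requests a slot move */

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, region);
}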
From patchwork Thu Aug 1 09:01:15 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750010
Date: Thu, 1 Aug 2024 10:01:15 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-9-tabba@google.com>
Subject: [RFC PATCH v2 08/10] KVM: arm64: Handle guest_memfd()-backed guest page faults
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add arm64 support for resolving guest page faults on guest_memfd()-backed
memslots. This support is not contingent on pKVM or other confidential
computing technology, and works in both VHE and nVHE modes.

Without confidential computing, this support is useful for testing and
debugging. In the future, it might also be useful should a user want to
back all guest memory with guest_memfd(), whether the guest is protected
or not.

For now, the fault granule is restricted to PAGE_SIZE.
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 127 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 125 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b1fc636fb670..e15167865cab 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1378,6 +1378,123 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static int guest_memfd_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_memory_slot *memslot, bool fault_is_perm)
+{
+	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	bool exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+	bool logging_active = memslot_is_logging(memslot);
+	struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+	bool write_fault = kvm_is_write_fault(vcpu);
+	struct mm_struct *mm = current->mm;
+	gfn_t gfn = gpa_to_gfn(fault_ipa);
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long mmu_seq;
+	struct page *page;
+	kvm_pfn_t pfn;
+	int ret;
+
+	/* For now, guest_memfd() only supports PAGE_SIZE granules. */
+	if (WARN_ON_ONCE(fault_is_perm &&
+			 kvm_vcpu_trap_get_perm_fault_granule(vcpu) != PAGE_SIZE)) {
+		return -EFAULT;
+	}
+
+	VM_BUG_ON(write_fault && exec_fault);
+
+	if (fault_is_perm && !write_fault && !exec_fault) {
+		kvm_err("Unexpected L2 read permission error\n");
+		return -EFAULT;
+	}
+
+	/*
+	 * Permission faults just need to update the existing leaf entry,
+	 * and so normally don't require allocations from the memcache. The
+	 * only exception to this is when dirty logging is enabled at runtime
+	 * and a write fault needs to collapse a block entry into a table.
+	 */
+	if (!fault_is_perm || (logging_active && write_fault)) {
+		ret = kvm_mmu_topup_memory_cache(memcache,
+						 kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		if (ret)
+			return ret;
+	}
+
+	/*
+	 * Read mmu_invalidate_seq so that KVM can detect if the results of
+	 * kvm_gmem_get_pfn_locked() become stale prior to acquiring
+	 * kvm->mmu_lock.
+	 */
+	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+
+	/* To pair with the smp_wmb() in kvm_mmu_invalidate_end(). */
+	smp_rmb();
+
+	ret = kvm_gmem_get_pfn_locked(kvm, memslot, gfn, &pfn, NULL);
+	if (ret)
+		return ret;
+
+	page = pfn_to_page(pfn);
+
+	if (!kvm_gmem_is_mappable(kvm, gfn, gfn + 1) &&
+	    (page_mapped(page) || page_maybe_dma_pinned(page))) {
+		ret = -EPERM;
+		goto unlock_page;
+	}
+
+	/*
+	 * Once it's faulted in, a guest_memfd() page will stay in memory.
+	 * Therefore, count it as locked.
+	 */
+	if (!fault_is_perm) {
+		ret = account_locked_vm(mm, 1, true);
+		if (ret)
+			goto unlock_page;
+	}
+
+	read_lock(&kvm->mmu_lock);
+	if (mmu_invalidate_retry(kvm, mmu_seq))
+		goto unlock_mmu;
+
+	if (write_fault)
+		prot |= KVM_PGTABLE_PROT_W;
+
+	if (exec_fault)
+		prot |= KVM_PGTABLE_PROT_X;
+
+	if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
+		prot |= KVM_PGTABLE_PROT_X;
+
+	/*
+	 * For an FSC_PERM fault, we only need to relax permissions.
+	 */
+	if (fault_is_perm)
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+	else
+		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, PAGE_SIZE,
+					     __pfn_to_phys(pfn), prot,
+					     memcache,
+					     KVM_PGTABLE_WALK_HANDLE_FAULT |
+					     KVM_PGTABLE_WALK_SHARED);
+
+	/* Mark the page dirty only if the fault is handled successfully. */
+	if (write_fault && !ret) {
+		kvm_set_pfn_dirty(pfn);
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
+	}
+
+unlock_mmu:
+	read_unlock(&kvm->mmu_lock);
+
+	if (ret && !fault_is_perm)
+		account_locked_vm(mm, 1, false);
+unlock_page:
+	unlock_page(page);
+	put_page(page);
+	return ret != -EAGAIN ? ret : 0;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  bool fault_is_perm)
@@ -1748,8 +1865,14 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 	}
 
-	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva,
-			     esr_fsc_is_permission_fault(esr));
+	if (kvm_slot_can_be_private(memslot)) {
+		ret = guest_memfd_abort(vcpu, fault_ipa, memslot,
+					esr_fsc_is_permission_fault(esr));
+	} else {
+		ret = user_mem_abort(vcpu, fault_ipa, memslot, hva,
+				     esr_fsc_is_permission_fault(esr));
+	}
+
 	if (ret == 0)
 		ret = 1;
 out:
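[Illustration, not part of the patch: guest_memfd_abort() above follows
KVM's standard invalidation-retry protocol. The sketch below distills
that ordering; the helper names are the real KVM ones, but the function
itself is a hypothetical stand-in, not code that exists in this form.]

/* Sketch of the invalidation-retry pattern used by guest_memfd_abort(). */
static int gmem_fault_retry_sketch(struct kvm *kvm,
				   struct kvm_memory_slot *slot, gfn_t gfn)
{
	unsigned long mmu_seq;
	kvm_pfn_t pfn;
	int ret;

	/* 1. Snapshot the sequence count before the (sleepable) lookup. */
	mmu_seq = kvm->mmu_invalidate_seq;
	smp_rmb();	/* pairs with smp_wmb() in kvm_mmu_invalidate_end() */

	/* 2. Resolve the pfn outside mmu_lock; the folio comes back locked. */
	ret = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
	if (ret)
		return ret;

	/* 3. Under mmu_lock, detect an invalidation that raced with step 2. */
	read_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq))
		ret = -EAGAIN;	/* caller folds this to 0: the vCPU refaults */
	else
		ret = 0;	/* safe to install the stage-2 mapping here */
	read_unlock(&kvm->mmu_lock);

	return ret;
}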
From patchwork Thu Aug 1 09:01:16 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750011
Date: Thu, 1 Aug 2024 10:01:16 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-10-tabba@google.com>
Subject: [RFC PATCH v2 09/10] KVM: arm64: arm64 has private memory support when config is enabled
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Implement kvm_arch_has_private_mem() for arm64, making it dependent on
the configuration option.
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 36b8e97bf49e..8f7d78ee9557 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1414,4 +1414,7 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 	 (pa + pi + pa3) == 1;	\
 })
 
+#define kvm_arch_has_private_mem(kvm)	\
+	(IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) && is_protected_kvm_enabled())
+
 #endif /* __ARM64_KVM_HOST_H__ */
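[Illustration, not part of the patch: generic KVM keys behavior off this
arch hook. The sketch below paraphrases the upstream flag check in
virt/kvm/kvm_main.c, simplified for illustration; with the arm64
definition above, KVM_MEM_GUEST_MEMFD slots are only accepted when
CONFIG_KVM_PRIVATE_MEM is enabled and the kernel booted in protected
(pKVM) mode.]

/* Sketch, paraphrasing generic KVM's memslot flag validation. */
static int check_memory_region_flags(struct kvm *kvm,
				     const struct kvm_userspace_memory_region2 *mem)
{
	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;

	/* The arch hook gates guest_memfd-backed (private) slots. */
	if (kvm_arch_has_private_mem(kvm))
		valid_flags |= KVM_MEM_GUEST_MEMFD;

	if (mem->flags & ~valid_flags)
		return -EINVAL;

	return 0;
}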
From patchwork Thu Aug 1 09:01:17 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750012
Date: Thu, 1 Aug 2024 10:01:17 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-11-tabba@google.com>
Subject: [RFC PATCH v2 10/10] KVM: arm64: Enable private memory kconfig for arm64
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Now that the infrastructure is in place for arm64 to support guest
private memory, enable it in the arm64 kernel configuration.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 58f09370d17e..8b166c697930 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -37,6 +37,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_PRIVATE_MEM_MAPPABLE
 	help
 	  Support hosting virtualized guest machines.