From patchwork Thu Aug 1 09:01:08 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750014
Date: Thu, 1 Aug 2024 10:01:08 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-2-tabba@google.com>
Subject: [RFC PATCH v2 01/10] KVM: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com,
    roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
    rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
    hughd@google.com, tabba@google.com

Create a new variant of kvm_gmem_get_pfn(), which retains the folio
lock if it returns successfully.
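As an illustration (not part of the patch), a minimal sketch of the
intended calling convention; the caller context and the NULL max_order
are hypothetical, the function names are the ones introduced here:

	kvm_pfn_t pfn;
	int r;

	/* The _locked() variant returns with the folio lock still held. */
	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
	if (r)
		return r;

	/* ... inspect or copy the page while truncation is excluded ... */

	/* Then drop the lock, as kvm_gmem_get_pfn() now does internally. */
	unlock_page(pfn_to_page(pfn));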
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h | 11 +++++++++++
 virt/kvm/guest_memfd.c   | 19 ++++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 692c01e41a18..43a157f8171a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2431,6 +2431,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 #ifdef CONFIG_KVM_PRIVATE_MEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2439,6 +2441,15 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
+
+static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, kvm_pfn_t *pfn,
+					  int *max_order)
+{
+	KVM_BUG_ON(1, kvm);
+	return -EIO;
+}
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
 #endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 747fe251e445..f3f4334a9ccb 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -482,8 +482,8 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 	fput(file);
 }
 
-int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
 {
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem;
@@ -524,10 +524,23 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 
 	r = 0;
 
-	folio_unlock(folio);
 out_fput:
 	fput(file);
 
 	return r;
 }
+EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn_locked);
+
+int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+{
+	int r;
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, pfn, max_order);
+	if (r)
+		return r;
+
+	unlock_page(pfn_to_page(*pfn));
+
+	return 0;
+}
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);

From patchwork Thu Aug 1 09:01:09 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750015
Date: Thu, 1 Aug 2024 10:01:09 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-3-tabba@google.com>
Subject: [RFC PATCH v2 02/10] KVM: Add restricted support for mapping guestmem by the host
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Add support for mmap() and fault() for guest_memfd in the host. The
ability to fault in a guest page is contingent on that page being
shared with the host.

To track this, this patch adds a new xarray to each guest_memfd object,
which tracks the mappability of guest frames.

The guest_memfd PRIVATE memory attribute is not used, for two reasons.
First, it reflects the userspace expectation for that memory location,
and therefore can be toggled by userspace. Second, although each
guest_memfd file has a 1:1 binding with a KVM instance, the plan is to
allow multiple files per inode, e.g. to allow intra-host migration to a
new KVM instance, without destroying guest_memfd.

This new feature is gated with a new configuration option,
CONFIG_KVM_PRIVATE_MEM_MAPPABLE.
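As an illustration (not part of the patch), the intended userspace flow
might look like the following sketch; vm_fd, size, and the omitted error
handling are hypothetical, while KVM_CREATE_GUEST_MEMFD is the existing
guest_memfd creation ioctl:

	struct kvm_create_guest_memfd gmem = { .size = size };
	int gmem_fd;
	char *mem;

	gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	/* With this patch, creating the mapping itself is allowed... */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);

	/*
	 * ...but faulting a page in succeeds only while its gfn is not
	 * marked unmappable in the new xarray; otherwise the fault
	 * handler returns VM_FAULT_SIGBUS.
	 */
	mem[0] = 1;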
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h |  61 ++++++++++++++++++++
 virt/kvm/Kconfig         |   4 ++
 virt/kvm/guest_memfd.c   | 110 +++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      | 122 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 297 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 43a157f8171a..ab1344327e57 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2452,4 +2452,65 @@ static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
 }
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end);
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable);
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start,
+			       gfn_t end);
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				 gfn_t end);
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+#else
+static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start,
+					  gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot,
+						gfn_t start, gfn_t end,
+						bool is_mappable)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot,
+					     gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot,
+					       gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot,
+					     gfn_t gfn)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 29b73eedfe74..a3970c5eca7b 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -109,3 +109,7 @@ config KVM_GENERIC_PRIVATE_MEM
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_PRIVATE_MEM
 	bool
+
+config KVM_PRIVATE_MEM_MAPPABLE
+	select KVM_PRIVATE_MEM
+	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index f3f4334a9ccb..0a1f266a16f9 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -11,6 +11,9 @@ struct kvm_gmem {
 	struct kvm *kvm;
 	struct xarray bindings;
 	struct list_head entry;
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	struct xarray unmappable_gfns;
+#endif
 };
 
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
@@ -230,6 +233,11 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
 	mutex_unlock(&kvm->slots_lock);
 
 	xa_destroy(&gmem->bindings);
+
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_destroy(&gmem->unmappable_gfns);
+#endif
+
 	kfree(gmem);
 
 	kvm_put_kvm(kvm);
@@ -248,7 +256,105 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
 	return get_file_active(&slot->gmem.file);
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	void *xval = is_mappable ? NULL : xa_mk_value(true);
+	void *r;
+
+	r = xa_store_range(&gmem->unmappable_gfns, start, end - 1, xval, GFP_KERNEL);
+
+	return xa_err(r);
+}
+
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, true);
+}
+
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, false);
+}
+
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	unsigned long _gfn = gfn;
+
+	return !xa_find(&gmem->unmappable_gfns, &_gfn, ULONG_MAX, XA_PRESENT);
+}
+
+static bool kvm_gmem_isfaultable(struct vm_fault *vmf)
+{
+	struct kvm_gmem *gmem = vmf->vma->vm_file->private_data;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t pgoff = vmf->pgoff;
+	struct kvm_memory_slot *slot;
+	unsigned long index;
+	bool r = true;
+
+	filemap_invalidate_lock(inode->i_mapping);
+
+	xa_for_each_range(&gmem->bindings, index, slot, pgoff, pgoff) {
+		pgoff_t base_gfn = slot->base_gfn;
+		pgoff_t gfn_pgoff = slot->gmem.pgoff;
+		pgoff_t gfn = base_gfn + max(gfn_pgoff, pgoff) - gfn_pgoff;
+
+		if (!kvm_slot_gmem_is_mappable(slot, gfn)) {
+			r = false;
+			break;
+		}
+	}
+
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return r;
+}
+
+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct folio *folio;
+
+	folio = kvm_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	if (!kvm_gmem_isfaultable(vmf)) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	file_accessed(file);
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,
@@ -369,6 +475,10 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	xa_init(&gmem->bindings);
 	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_init(&gmem->unmappable_gfns);
+#endif
+
 	fd_install(fd, file);
 
 	return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1192942aef91..f4b4498d4de6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3265,6 +3265,128 @@ static int next_segment(unsigned long len, int offset)
 	return len;
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+static bool __kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			if (!kvm_slot_gmem_is_mappable(memslot, i))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mappable(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static bool __kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			struct page *page;
+			bool is_mapped;
+			kvm_pfn_t pfn;
+
+			if (WARN_ON_ONCE(kvm_gmem_get_pfn_locked(kvm, memslot, i, &pfn, NULL)))
+				continue;
+
+			page = pfn_to_page(pfn);
+			is_mapped = page_mapped(page) || page_maybe_dma_pinned(page);
+			unlock_page(page);
+			put_page(page);
+
+			if (is_mapped)
+				return true;
+		}
+	}
+
+	return false;
+}
+
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mapped(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static int kvm_gmem_toggle_mappable(struct kvm *kvm, gfn_t start, gfn_t end,
+				    bool is_mappable)
+{
+	struct kvm_memslot_iter iter;
+	int r = 0;
+
+	mutex_lock(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(start >= end))
+			continue;
+
+		r = kvm_slot_gmem_toggle_mappable(memslot, gfn_start, gfn_end, is_mappable);
+		if (WARN_ON_ONCE(r))
+			break;
+	}
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, true);
+}
+
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, false);
+}
+
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
 				 void *data, int offset, int len)
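A usage note (illustrative, not part of the patch): with the xarray in
place, sharing and unsharing become range operations. A hypothetical
hypervisor-driven flow, using only the functions introduced here, could
look like:

	/* The guest asked to share [gfn, gfn + nr_pages) with the host. */
	r = kvm_gmem_set_mappable(kvm, gfn, gfn + nr_pages);

	/*
	 * Before making the range private again, refuse while the host
	 * still has any of it mapped or pinned.
	 */
	if (kvm_gmem_is_mapped(kvm, gfn, gfn + nr_pages))
		return -EBUSY;

	r = kvm_gmem_clear_mappable(kvm, gfn, gfn + nr_pages);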
To:Cc:Content-Type; b=GWcg1G3A0cNQ82aUG3UE3Z24nVfeI/oqIMdMInz2XkKXcxiSUtF/ngbNWz+6882Zj2VoHd3HBEGDNvI2H/UQ1vIIHr1cnhwaLT2Xt8nwmamHC74vUsUXajiQK9G5b1wwiWY9G2PYBXY6I0hBiSHjxU91pzlRsWnWY0jzjUclgmU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=e0CcJCoq; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="e0CcJCoq" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-664b7a67ad4so134698887b3.2 for ; Thu, 01 Aug 2024 02:01:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722502887; x=1723107687; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=r2dA+/WuAip0LraCdE4saiE0NevgxBDmZ5s6G2UnSY8=; b=e0CcJCoq43TphP/o5oVTNQ1sYcFUArHjtJYw7NTVX0P9MvPDRWBtxGgrc2Bv3gbB45 HIElwa3fSEVQcss+p3sbLON9DwHZqCLwTfy+PigPVYOjjjxdiyM+h29xO7kKGdhYY8zW FPildZ6t3XgrHN13FRoE3hw3hWpfM/URzklrF4ssu/in4sc0bL1kDW53gezfS2NBAKnN aMhYp1iLaFaCmSPLNSyfgUFozHohi6awoNSKfoHDeFvQHR3Z9PszK/vb0aUVw6B3gpJr SNJ0H4PlPxwd6pdj9gJLZZvy6CTtJoicGkI7OwqAL23EqpPYykhSyj5SVVp+8/lPHQOE T/IA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722502887; x=1723107687; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=r2dA+/WuAip0LraCdE4saiE0NevgxBDmZ5s6G2UnSY8=; b=FB+P9cS95Hfi7NzrseFts2gjMdynCFBDrXcqmk5Zzi33EEkqyta1lJBwDaoDhHlPut /AqdjfjQL77GL4rX5ARKcxrDcoTO1oIXrUZZ+m+OUmnGnWZAW2lM/wS6Dicluxhg51ls ooCYKM7+ni927tJ2po3vIQr9MvO1FV2yBcfxdyNWjZRhjMre4AxupV/1lC9gVluY0EUQ lYtu5w8MNbu/ZhDrcVTiQ/28EDEtbCm8dOXUK2rxfpeE/kQQw/rP4xwPA+WtQy1DWxap NNsZ6uwRJ/Aoowtun+qgtofUbVnJ4AHKS7zzM2ej58VnKl7aAoo4s9/kcs1A/3iMtywv pVHA== X-Forwarded-Encrypted: i=1; AJvYcCX5xoTyITE3cC6F5URa3vSknbTvWTbsGZaK/k9dBdXMAdzMJorRL11UMFcj8TtnnGSmF7NfnSO/t4OynwyQqVy8cdqYdTlWUavoZwKeNA== X-Gm-Message-State: AOJu0YznEn1tkZ/W2dNXIlsJEP7i4t0to4hlFhZBoU+eueFyi2uZULlv xQKIJyv3UhCoDBbFu9MuYGcQc5hxs3ddx/VRR3THB3T3ev8KJwydAob6Q7euy7QjwjbJ/dpF1Q= = X-Google-Smtp-Source: AGHT+IFTyhbnlkMzi/3eqbxm5qXZ73AAT9ul7/e/eVlxOdal7iBxLuY+Lba4cs8t3PeL8JTir4ZyTcDElQ== X-Received: from fuad.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:1613]) (user=tabba job=sendgmr) by 2002:a05:690c:9e:b0:673:b39a:92ce with SMTP id 00721157ae682-6874be4e4b8mr30147b3.3.1722502887498; Thu, 01 Aug 2024 02:01:27 -0700 (PDT) Date: Thu, 1 Aug 2024 10:01:10 +0100 In-Reply-To: <20240801090117.3841080-1-tabba@google.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801090117.3841080-1-tabba@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801090117.3841080-4-tabba@google.com> Subject: [RFC PATCH v2 03/10] KVM: Implement kvm_(read|/write)_guest_page for private memory slots From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, 
mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, tabba@google.com Make __kvm_read_guest_page/__kvm_write_guest_page capable of accessing guest memory if no userspace address is available. Moreover, check that the memory being accessed is shared with the host before attempting the access. KVM at the host might need to access shared memory that is not mapped in the host userspace but is in fact shared with the host, e.g., when accounting for stolen time. This allows the access without relying on the slot's userspace_addr being set. This does not circumvent protection, since the access is only attempted if the memory is mappable by the host, which implies shareability. 
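For example (hypothetical call site, not part of the patch), stolen-time
accounting could then read a shared gfn through the existing
kvm_read_guest_page() API without any host userspace mapping:

	u64 stolen;
	int r;

	/* 'offset' is the byte offset of the stolen-time field in the page. */
	r = kvm_read_guest_page(kvm, gfn, &stolen, offset, sizeof(stolen));
	if (r == -EPERM) {
		/* The gfn is private: the access is refused, not faked. */
	}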
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 virt/kvm/kvm_main.c | 127 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 111 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f4b4498d4de6..ec6255c7325e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3385,20 +3385,108 @@ int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
 	return kvm_gmem_toggle_mappable(kvm, start, end, false);
 }
 
+static int __kvm_read_private_guest_page(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn, void *data, int offset,
+					 int len)
+{
+	struct page *page;
+	u64 pfn;
+	int r = 0;
+
+	if (size_add(offset, len) > PAGE_SIZE)
+		return -E2BIG;
+
+	mutex_lock(&kvm->slots_lock);
+
+	if (!__kvm_gmem_is_mappable(kvm, gfn, gfn + 1)) {
+		r = -EPERM;
+		goto unlock;
+	}
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
+	if (r)
+		goto unlock;
+
+	page = pfn_to_page(pfn);
+	memcpy(data, page_address(page) + offset, len);
+	unlock_page(page);
+	kvm_release_pfn_clean(pfn);
+unlock:
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static int __kvm_write_private_guest_page(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, const void *data,
+					  int offset, int len)
+{
+	struct page *page;
+	u64 pfn;
+	int r = 0;
+
+	if (size_add(offset, len) > PAGE_SIZE)
+		return -E2BIG;
+
+	mutex_lock(&kvm->slots_lock);
+
+	if (!__kvm_gmem_is_mappable(kvm, gfn, gfn + 1)) {
+		r = -EPERM;
+		goto unlock;
+	}
+
+	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, NULL);
+	if (r)
+		goto unlock;
+
+	page = pfn_to_page(pfn);
+	memcpy(page_address(page) + offset, data, len);
+	unlock_page(page);
+	kvm_release_pfn_dirty(pfn);
+unlock:
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+#else
+static int __kvm_read_private_guest_page(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn, void *data, int offset,
+					 int len)
+{
+	WARN_ON_ONCE(1);
+	return -EIO;
+}
+
+static int __kvm_write_private_guest_page(struct kvm *kvm,
+					  struct kvm_memory_slot *slot,
+					  gfn_t gfn, const void *data,
+					  int offset, int len)
+{
+	WARN_ON_ONCE(1);
+	return -EIO;
+}
 #endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
 
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
-static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
-				 void *data, int offset, int len)
+
+static int __kvm_read_guest_page(struct kvm *kvm, struct kvm_memory_slot *slot,
+				 gfn_t gfn, void *data, int offset, int len)
 {
-	int r;
 	unsigned long addr;
 
+	if (IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_MAPPABLE) &&
+	    kvm_slot_can_be_private(slot)) {
+		return __kvm_read_private_guest_page(kvm, slot, gfn, data,
+						     offset, len);
+	}
+
 	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
-	r = __copy_from_user(data, (void __user *)addr + offset, len);
-	if (r)
+
+	if (__copy_from_user(data, (void __user *)addr + offset, len))
 		return -EFAULT;
 
 	return 0;
 }
@@ -3408,7 +3496,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
@@ -3417,7 +3505,7 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
 
@@ -3492,17 +3580,24 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 /* Copy @len bytes from @data into guest memory at '(@gfn * PAGE_SIZE) + @offset' */
 static int __kvm_write_guest_page(struct kvm *kvm,
 				  struct kvm_memory_slot *memslot, gfn_t gfn,
-			          const void *data, int offset, int len)
+				  const void *data, int offset, int len)
 {
-	int r;
-	unsigned long addr;
+	if (IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_MAPPABLE) &&
+	    kvm_slot_can_be_private(memslot)) {
+		int r = __kvm_write_private_guest_page(kvm, memslot, gfn, data,
+						       offset, len);
+
+		if (r)
+			return r;
+	} else {
+		unsigned long addr = gfn_to_hva_memslot(memslot, gfn);
+
+		if (kvm_is_error_hva(addr))
+			return -EFAULT;
+
+		if (__copy_to_user((void __user *)addr + offset, data, len))
+			return -EFAULT;
+	}
 
-	addr = gfn_to_hva_memslot(memslot, gfn);
-	if (kvm_is_error_hva(addr))
-		return -EFAULT;
-	r = __copy_to_user((void __user *)addr + offset, data, len);
-	if (r)
-		return -EFAULT;
 	mark_page_dirty_in_slot(kvm, memslot, gfn);
 
 	return 0;
 }

From patchwork Thu Aug 1 09:01:11 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750017
Date: Thu, 1 Aug 2024 10:01:11 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-5-tabba@google.com>
Subject: [RFC PATCH v2 04/10] KVM: Add KVM capability to check if guest_memfd can be mapped by the host
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add the KVM capability KVM_CAP_GUEST_MEMFD_MAPPABLE, which is true if
mapping guest memory is supported by the host.
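A userspace probe might look like this (hypothetical snippet;
KVM_CHECK_EXTENSION is the standard KVM capability query):

	/* Does this host allow guest_memfd to be mmap()ed? */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_MEMFD_MAPPABLE) > 0)
		use_host_mapping = true;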
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/uapi/linux/kvm.h | 3 ++-
 virt/kvm/kvm_main.c      | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d03842abae57..783d0c3f4cb1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -916,7 +916,8 @@ struct kvm_enable_cap {
 #define KVM_CAP_MEMORY_FAULT_INFO 232
 #define KVM_CAP_MEMORY_ATTRIBUTES 233
 #define KVM_CAP_GUEST_MEMFD 234
-#define KVM_CAP_VM_TYPES 235
+#define KVM_CAP_GUEST_MEMFD_MAPPABLE 235
+#define KVM_CAP_VM_TYPES 236
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ec6255c7325e..485c39fc373c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5077,6 +5077,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_PRIVATE_MEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_has_private_mem(kvm);
+#endif
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	case KVM_CAP_GUEST_MEMFD_MAPPABLE:
+		return !kvm || kvm_arch_has_private_mem(kvm);
 #endif
 	default:
 		break;

From patchwork Thu Aug 1 09:01:12 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750018
Date: Thu, 1 Aug 2024 10:01:12 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-6-tabba@google.com>
Subject: [RFC PATCH v2 05/10] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Expand the guest_memfd selftests to test mapping guest memory when the
capability is supported, and to check that memory is still not mappable
when the capability isn't supported.
Also, build the guest_memfd selftest for aarch64.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 tools/testing/selftests/kvm/Makefile         |  1 +
 .../testing/selftests/kvm/guest_memfd_test.c | 47 ++++++++++++++++++-
 2 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ac280dcba996..fb63f7e956d4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -166,6 +166,7 @@ TEST_GEN_PROGS_aarch64 += arch_timer
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
+TEST_GEN_PROGS_aarch64 += guest_memfd_test
 TEST_GEN_PROGS_aarch64 += guest_print_test
 TEST_GEN_PROGS_aarch64 += get-reg-list
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ba0c8e996035..c6bb2be5b6e2 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,55 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t total_size)
 {
+	size_t page_size = getpagesize();
+	char *mem;
+	int ret;
+	int i;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, 0xaa, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	memset(mem, 0xaa, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0xaa);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t total_size)
+{
+	size_t page_size = getpagesize();
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
+}
+
+static void test_mmap(int fd, size_t total_size)
+{
+	if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_MAPPABLE))
+		test_mmap_allowed(fd, total_size);
+	else
+		test_mmap_denied(fd, total_size);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -190,7 +233,7 @@ int main(int argc, char *argv[])
 	fd = vm_create_guest_memfd(vm, total_size, 0);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+	test_mmap(fd, total_size);
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);

From patchwork Thu Aug 1 09:01:13 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750019
Date: Thu, 1 Aug 2024 10:01:13 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-7-tabba@google.com>
Subject: [RFC PATCH v2 06/10] KVM: arm64: Skip VMA checks for slots without userspace address
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Memory slots backed by guest memory might be created with no intention
of being mapped by the host. These are recognized by not having a
userspace address in the memory slot. VMA checks are neither possible
nor necessary for this kind of slot, so skip them.
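For context (hypothetical userspace sketch, using the existing
KVM_SET_USER_MEMORY_REGION2 API), such a slot would be registered with
no host mapping at all:

	struct kvm_userspace_memory_region2 region = {
		.slot = 0,
		.flags = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = 0,	/* no host VMA to check */
		.guest_memfd = gmem_fd,
		.guest_memfd_offset = 0,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);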
+	/* Host will not map this private memory without a userspace address. */
+	if ((kvm_slot_can_be_private(new)) && !hva)
+		return 0;
+
 	mmap_read_lock(current->mm);
 	/*
 	 * A memory region could potentially cover multiple VMAs, and any holes
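
To make the intent concrete, here is a rough userspace sketch of the kind of
slot this patch is about: a guest_memfd()-backed memslot created with no
userspace address. It assumes the guest_memfd uAPI that this series builds on
(KVM_CREATE_GUEST_MEMFD, KVM_SET_USER_MEMORY_REGION2 and the
KVM_MEM_GUEST_MEMFD flag); the helper name and slot number are illustrative,
and error handling is elided:

/*
 * Illustrative only: create a private, guest_memfd()-backed memslot with
 * userspace_addr == 0, i.e. one the host never intends to map. With this
 * patch, the arm64 VMA walks skip such a slot entirely.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int add_unmapped_private_slot(int vm_fd, __u64 gpa, __u64 size)
{
	struct kvm_create_guest_memfd gmem = { .size = size };
	struct kvm_userspace_memory_region2 region;
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (gmem_fd < 0)
		return -1;

	memset(&region, 0, sizeof(region));
	region.slot            = 0;
	region.flags           = KVM_MEM_GUEST_MEMFD;
	region.guest_phys_addr = gpa;
	region.memory_size     = size;
	region.userspace_addr  = 0;	/* host will never map this slot */
	region.guest_memfd     = gmem_fd;

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}

With userspace_addr left at zero, the VMA walk in
kvm_arch_prepare_memory_region() has nothing meaningful to check, which is
exactly the case the patch short-circuits.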
From patchwork Thu Aug 1 09:01:14 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750020
Date: Thu, 1 Aug 2024 10:01:14 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-8-tabba@google.com>
Subject: [RFC PATCH v2 07/10] KVM: arm64: Do not allow changes to private
 memory slots
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Handling changes to private memory slots can be difficult, since it would
likely require cooperation from the hypervisor and/or the guest. Do not
allow such changes for now.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e632e10ea395..b1fc636fb670 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1970,6 +1970,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	    change != KVM_MR_FLAGS_ONLY)
 		return 0;
 
+	if ((change == KVM_MR_MOVE || change == KVM_MR_FLAGS_ONLY) &&
+	    ((kvm_slot_can_be_private(old)) || (kvm_slot_can_be_private(new))))
+		return -EPERM;
+
 	/*
 	 * Prevent userspace from creating a memory region outside of the IPA
 	 * space addressable by the KVM guest IPA space.
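
Continuing the illustrative sketch from the previous patch, a hypothetical
caller can observe the new behaviour: once a slot is private, a later
KVM_SET_USER_MEMORY_REGION2 call that moves it (or changes only its flags)
should now fail with EPERM. The function below is an assumption-laden
illustration, not part of the patch:

#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative only: try to move an existing private slot; expect EPERM. */
static void try_move_private_slot(int vm_fd,
				  struct kvm_userspace_memory_region2 *region)
{
	region->guest_phys_addr += 0x200000;	/* KVM_MR_MOVE, from KVM's view */
	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, region) < 0 &&
	    errno == EPERM)
		fprintf(stderr, "changes to private memslots are rejected\n");
}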
From patchwork Thu Aug 1 09:01:15 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750021
Date: Thu, 1 Aug 2024 10:01:15 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-9-tabba@google.com>
Subject: [RFC PATCH v2 08/10] KVM: arm64: Handle guest_memfd()-backed guest
 page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add arm64 support for resolving guest page faults on guest_memfd()-backed
memslots. This support is not contingent on pKVM or other confidential
computing technology, and works in both VHE and nVHE modes. Without
confidential computing, it is useful for testing and debugging. In the
future, it might also be useful should a user want to use guest_memfd()
for all guest memory, whether the guest is protected or not.

For now, the fault granule is restricted to PAGE_SIZE.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 127 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 125 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b1fc636fb670..e15167865cab 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1378,6 +1378,123 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static int guest_memfd_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_memory_slot *memslot, bool fault_is_perm)
+{
+	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	bool exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+	bool logging_active = memslot_is_logging(memslot);
+	struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+	bool write_fault = kvm_is_write_fault(vcpu);
+	struct mm_struct *mm = current->mm;
+	gfn_t gfn = gpa_to_gfn(fault_ipa);
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long mmu_seq;
+	struct page *page;
+	kvm_pfn_t pfn;
+	int ret;
+
+	/* For now, guest_memfd() only supports PAGE_SIZE granules. */
+	if (WARN_ON_ONCE(fault_is_perm &&
+			 kvm_vcpu_trap_get_perm_fault_granule(vcpu) != PAGE_SIZE)) {
+		return -EFAULT;
+	}
+
+	VM_BUG_ON(write_fault && exec_fault);
+
+	if (fault_is_perm && !write_fault && !exec_fault) {
+		kvm_err("Unexpected L2 read permission error\n");
+		return -EFAULT;
+	}
+
+	/*
+	 * Permission faults just need to update the existing leaf entry,
+	 * and so normally don't require allocations from the memcache. The
+	 * only exception to this is when dirty logging is enabled at runtime
+	 * and a write fault needs to collapse a block entry into a table.
+	 */
+	if (!fault_is_perm || (logging_active && write_fault)) {
+		ret = kvm_mmu_topup_memory_cache(memcache,
+				kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+		if (ret)
+			return ret;
+	}
+
+	/*
+	 * Read mmu_invalidate_seq so that KVM can detect if the results of
+	 * kvm_gmem_get_pfn_locked() become stale prior to acquiring
+	 * kvm->mmu_lock.
+	 */
+	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+
+	/* To pair with the smp_wmb() in kvm_mmu_invalidate_end(). */
+	smp_rmb();
+
+	ret = kvm_gmem_get_pfn_locked(kvm, memslot, gfn, &pfn, NULL);
+	if (ret)
+		return ret;
+
+	page = pfn_to_page(pfn);
+
+	if (!kvm_gmem_is_mappable(kvm, gfn, gfn + 1) &&
+	    (page_mapped(page) || page_maybe_dma_pinned(page))) {
+		return -EPERM;
+	}
+
+	/*
+	 * Once it's faulted in, a guest_memfd() page will stay in memory.
+	 * Therefore, count it as locked.
+	 */
+	if (!fault_is_perm) {
+		ret = account_locked_vm(mm, 1, true);
+		if (ret)
+			goto unlock_page;
+	}
+
+	read_lock(&kvm->mmu_lock);
+	if (mmu_invalidate_retry(kvm, mmu_seq))
+		goto unlock_mmu;
+
+	if (write_fault)
+		prot |= KVM_PGTABLE_PROT_W;
+
+	if (exec_fault)
+		prot |= KVM_PGTABLE_PROT_X;
+
+	if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
+		prot |= KVM_PGTABLE_PROT_X;
+
+	/*
+	 * Under the premise of getting a FSC_PERM fault, we just need to relax
+	 * permissions.
+	 */
+	if (fault_is_perm)
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+	else
+		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, PAGE_SIZE,
+					     __pfn_to_phys(pfn), prot,
+					     memcache,
+					     KVM_PGTABLE_WALK_HANDLE_FAULT |
+					     KVM_PGTABLE_WALK_SHARED);
+
+	/* Mark the page dirty only if the fault is handled successfully */
+	if (write_fault && !ret) {
+		kvm_set_pfn_dirty(pfn);
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
+	}
+
+unlock_mmu:
+	read_unlock(&kvm->mmu_lock);
+
+	if (ret && !fault_is_perm)
+		account_locked_vm(mm, 1, false);
+unlock_page:
+	unlock_page(page);
+	put_page(page);
+	return ret != -EAGAIN ? ret : 0;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  bool fault_is_perm)
@@ -1748,8 +1865,14 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 	}
 
-	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva,
-			     esr_fsc_is_permission_fault(esr));
+	if (kvm_slot_can_be_private(memslot)) {
+		ret = guest_memfd_abort(vcpu, fault_ipa, memslot,
+					esr_fsc_is_permission_fault(esr));
+	} else {
+		ret = user_mem_abort(vcpu, fault_ipa, memslot, hva,
+				     esr_fsc_is_permission_fault(esr));
+	}
+
 	if (ret == 0)
 		ret = 1;
 out:
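
The sequence-counter handling in guest_memfd_abort() above follows KVM's
usual invalidate-retry idiom. Stripped of the arm64 specifics, the pattern
looks roughly like the sketch below; lookup_pfn() and install_mapping() are
hypothetical stand-ins for kvm_gmem_get_pfn_locked() and
kvm_pgtable_stage2_map():

/*
 * Sketch of the invalidate-retry idiom (not part of the patch).
 * lookup_pfn() and install_mapping() are hypothetical stand-ins.
 */
	mmu_seq = kvm->mmu_invalidate_seq;
	/* Pairs with the smp_wmb() in kvm_mmu_invalidate_end(). */
	smp_rmb();

	pfn = lookup_pfn(memslot, gfn);		/* may sleep; outside mmu_lock */

	read_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq)) {
		/* An invalidation raced with the lookup; let the guest refault. */
		read_unlock(&kvm->mmu_lock);
		return 0;
	}
	install_mapping(pgt, fault_ipa, pfn);	/* no invalidation can have raced */
	read_unlock(&kvm->mmu_lock);

Reading the sequence count before the (sleepable) pfn lookup and re-checking
it under mmu_lock guarantees that a mapping is never installed over a page
that an mmu notifier invalidated in the meantime.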
From patchwork Thu Aug 1 09:01:16 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750022
Date: Thu, 1 Aug 2024 10:01:16 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-10-tabba@google.com>
Subject: [RFC PATCH v2 09/10] KVM: arm64: arm64 has private memory support
 when config is enabled
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Implement kvm_arch_has_private_mem() on arm64, making it dependent on the
KVM_PRIVATE_MEM configuration option and on protected KVM being enabled.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 36b8e97bf49e..8f7d78ee9557 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1414,4 +1414,7 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 	 (pa + pi + pa3) == 1;					\
 })
 
+#define kvm_arch_has_private_mem(kvm)				\
+	(IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) && is_protected_kvm_enabled())
+
 #endif /* __ARM64_KVM_HOST_H__ */
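
As a hedged illustration of how such a predicate is typically consumed (the
helper below is hypothetical, not part of the patch): because IS_ENABLED()
folds to a compile-time constant, builds without CONFIG_KVM_PRIVATE_MEM
reduce the whole check to false and the compiler discards the private-memory
path entirely:

/* Hypothetical caller, for illustration only. */
static bool slot_faults_via_guest_memfd(struct kvm *kvm,
					struct kvm_memory_slot *slot)
{
	/* Compiles to 'return false' when CONFIG_KVM_PRIVATE_MEM=n. */
	return kvm_arch_has_private_mem(kvm) && kvm_slot_can_be_private(slot);
}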
From patchwork Thu Aug 1 09:01:17 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13750023
Date: Thu, 1 Aug 2024 10:01:17 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
Message-ID: <20240801090117.3841080-11-tabba@google.com>
Subject: [RFC PATCH v2 10/10] KVM: arm64: Enable private memory kconfig for
 arm64
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Now that the infrastructure is in place for arm64 to support guest private
memory, enable it in the arm64 kernel configuration.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 58f09370d17e..8b166c697930 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -37,6 +37,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_PRIVATE_MEM_MAPPABLE
 
 	help
 	  Support hosting virtualized guest machines.
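
For completeness, and assuming the KVM_PRIVATE_MEM_MAPPABLE symbol introduced
earlier in this series is selectable, the expected result is a .config
fragment along these lines once KVM is enabled on arm64 (a sketch, not
verified build output):

CONFIG_KVM=y
CONFIG_KVM_PRIVATE_MEM_MAPPABLE=y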