From patchwork Thu Jul 11 22:27:44 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731140
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 01/12] KVM: guest_memfd: return folio from __kvm_gmem_get_pfn()
Date: Thu, 11 Jul 2024 18:27:44 -0400
Message-ID: <20240711222755.57476-2-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

Right now this is simply more consistent and avoids use of pfn_to_page()
and put_page().  It will be put to more use in upcoming patches, to
ensure that the up-to-date flag is set at the very end of both the
kvm_gmem_get_pfn() and kvm_gmem_populate() flows.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/guest_memfd.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 1c509c351261..522e1b28e7ae 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -541,34 +541,34 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 	fput(file);
 }
 
-static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
-			      gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare)
+static struct folio *
+__kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
+		   gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare)
 {
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem = file->private_data;
 	struct folio *folio;
 	struct page *page;
-	int r;
 
 	if (file != slot->gmem.file) {
 		WARN_ON_ONCE(slot->gmem.file);
-		return -EFAULT;
+		return ERR_PTR(-EFAULT);
 	}
 
 	gmem = file->private_data;
 	if (xa_load(&gmem->bindings, index) != slot) {
 		WARN_ON_ONCE(xa_load(&gmem->bindings, index));
-		return -EIO;
+		return ERR_PTR(-EIO);
 	}
 
 	folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
 	if (IS_ERR(folio))
-		return PTR_ERR(folio);
+		return folio;
 
 	if (folio_test_hwpoison(folio)) {
 		folio_unlock(folio);
 		folio_put(folio);
-		return -EHWPOISON;
+		return ERR_PTR(-EHWPOISON);
 	}
 
 	page = folio_file_page(folio, index);
@@ -577,25 +577,25 @@ static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 	if (max_order)
 		*max_order = 0;
 
-	r = 0;
-
 	folio_unlock(folio);
-
-	return r;
+	return folio;
 }
 
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
 {
 	struct file *file = kvm_gmem_get_file(slot);
-	int r;
+	struct folio *folio;
 
 	if (!file)
 		return -EFAULT;
 
-	r = __kvm_gmem_get_pfn(file, slot, gfn, pfn, max_order, true);
+	folio = __kvm_gmem_get_pfn(file, slot, gfn, pfn, max_order, true);
 	fput(file);
-	return r;
+	if (IS_ERR(folio))
+		return PTR_ERR(folio);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
@@ -625,6 +625,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 	npages = min_t(ulong, slot->npages - (start_gfn - slot->base_gfn), npages);
 
 	for (i = 0; i < npages; i += (1 << max_order)) {
+		struct folio *folio;
 		gfn_t gfn = start_gfn + i;
 		kvm_pfn_t pfn;
 
@@ -633,9 +634,11 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 			break;
 		}
 
-		ret = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order, false);
-		if (ret)
+		folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order, false);
+		if (IS_ERR(folio)) {
+			ret = PTR_ERR(folio);
 			break;
+		}
 
 		if (!IS_ALIGNED(gfn, (1 << max_order)) ||
 		    (npages - i) < (1 << max_order))
@@ -644,7 +647,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		p = src ? src + i * PAGE_SIZE : NULL;
 		ret = post_populate(kvm, gfn, pfn, p, max_order, opaque);
 
-		put_page(pfn_to_page(pfn));
+		folio_put(folio);
 		if (ret)
 			break;
 	}
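For readers who do not live in the kernel tree, the return convention that
__kvm_gmem_get_pfn() switches to here is the standard ERR_PTR()/IS_ERR()/PTR_ERR()
idiom from <linux/err.h>: a small negative errno is encoded in an otherwise
invalid pointer value, so a single return value can carry either a valid folio
or an error.  Below is a minimal userspace sketch of the idiom; the macros are
simplified stand-ins written only for illustration, not the kernel's
implementation, and "struct folio"/get_folio() here are toy placeholders.

#include <stdio.h>
#include <errno.h>

/* Simplified stand-ins for the kernel's ERR_PTR()/IS_ERR()/PTR_ERR(). */
#define MAX_ERRNO	4095
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct folio { int id; };

/* Return a folio on success, or an errno encoded in the pointer. */
static struct folio *get_folio(int want_error)
{
	static struct folio f = { .id = 42 };

	if (want_error)
		return ERR_PTR(-EIO);
	return &f;
}

int main(void)
{
	struct folio *folio = get_folio(1);

	if (IS_ERR(folio))
		printf("error: %ld\n", PTR_ERR(folio));	/* typically -5 (-EIO) */
	else
		printf("folio id: %d\n", folio->id);
	return 0;
}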
From patchwork Thu Jul 11 22:27:45 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731139
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 02/12] KVM: guest_memfd: delay folio_mark_uptodate() until after successful preparation
Date: Thu, 11 Jul 2024 18:27:45 -0400
Message-ID: <20240711222755.57476-3-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

As it is now, the up-to-date flag is not too useful; it tells guest_memfd
not to overwrite the contents of a folio, but it does not say that the
page is ready to be mapped into the guest.  For encrypted guests, mapping
a private page requires that the "preparation" phase has succeeded, and
at the same time the same page cannot be prepared twice.

So, ensure that folio_mark_uptodate() is only called on a prepared page.
If kvm_gmem_prepare_folio() or the post_populate callback fail, the folio
will not be marked up-to-date; it's not a problem to call clear_highpage()
again on such a page prior to the next preparation attempt.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/guest_memfd.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 522e1b28e7ae..1ea632dbae57 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -73,8 +73,6 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool
 		for (i = 0; i < nr_pages; i++)
 			clear_highpage(folio_page(folio, i));
-
-		folio_mark_uptodate(folio);
 	}
 
 	if (prepare) {
@@ -84,6 +82,8 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool
 			folio_put(folio);
 			return ERR_PTR(r);
 		}
+
+		folio_mark_uptodate(folio);
 	}
 
 	/*
@@ -646,6 +646,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		p = src ? src + i * PAGE_SIZE : NULL;
 		ret = post_populate(kvm, gfn, pfn, p, max_order, opaque);
 
+		if (!ret)
+			folio_mark_uptodate(folio);
 		folio_put(folio);
 		if (ret)
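The ordering this patch establishes can be summarized as: zero the folio, run
the preparation step, and mark the folio up-to-date only if preparation
succeeded, so that a failed attempt can simply be retried from scratch.  A
small illustrative sketch of that control flow follows; it is plain userspace
C, not kernel code, and prepare()/fake_folio are made-up stand-ins for
kvm_gmem_prepare_folio()/post_populate() and struct folio.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define PAGE_SIZE 4096

struct fake_folio {
	bool uptodate;
	unsigned char data[PAGE_SIZE];
};

/* Stand-in for the arch/firmware preparation step; may fail. */
static int prepare(struct fake_folio *f, bool fail)
{
	(void)f;
	return fail ? -EIO : 0;
}

static int get_prepared_folio(struct fake_folio *f, bool fail)
{
	int r;

	if (f->uptodate)			/* already cleared and prepared */
		return 0;

	memset(f->data, 0, sizeof(f->data));	/* clear_highpage() equivalent */
	r = prepare(f, fail);
	if (r)
		return r;			/* not marked up-to-date: retried later */

	f->uptodate = true;			/* folio_mark_uptodate() only on success */
	return 0;
}

int main(void)
{
	struct fake_folio f = { 0 };

	printf("first attempt (fails): %d, uptodate=%d\n",
	       get_prepared_folio(&f, true), f.uptodate);
	printf("second attempt: %d, uptodate=%d\n",
	       get_prepared_folio(&f, false), f.uptodate);
	return 0;
}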
From patchwork Thu Jul 11 22:27:46 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731142
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 03/12] KVM: guest_memfd: do not go through struct page
Date: Thu, 11 Jul 2024 18:27:46 -0400
Message-ID: <20240711222755.57476-4-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

We have a perfectly usable folio, use it to retrieve the pfn and order.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/guest_memfd.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 1ea632dbae57..5221b584288f 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -13,6 +13,18 @@ struct kvm_gmem {
 	struct list_head entry;
 };
 
+/**
+ * folio_file_pfn - like folio_file_page, but return a pfn.
+ * @folio: The folio which contains this index.
+ * @index: The index we want to look up.
+ *
+ * Return: The pfn for this index.
+ */
+static inline kvm_pfn_t folio_file_pfn(struct folio *folio, pgoff_t index)
+{
+	return folio_pfn(folio) + (index & (folio_nr_pages(folio) - 1));
+}
+
 static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
 {
 #ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
@@ -22,7 +34,6 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
 	list_for_each_entry(gmem, gmem_list, entry) {
 		struct kvm_memory_slot *slot;
 		struct kvm *kvm = gmem->kvm;
-		struct page *page;
 		kvm_pfn_t pfn;
 		gfn_t gfn;
 		int rc;
@@ -34,13 +45,12 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
 		if (!slot)
 			continue;
 
-		page = folio_file_page(folio, index);
-		pfn = page_to_pfn(page);
+		pfn = folio_file_pfn(folio, index);
 		gfn = slot->base_gfn + index - slot->gmem.pgoff;
-		rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, compound_order(compound_head(page)));
+		rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio));
 		if (rc) {
-			pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx GFN %llx PFN %llx error %d.\n",
-					    index, gfn, pfn, rc);
+			pr_warn_ratelimited("gmem: Failed to prepare folio for GFN %llx PFN %llx error %d.\n",
+					    gfn, pfn, rc);
 			return rc;
 		}
 	}
@@ -548,7 +558,6 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem = file->private_data;
 	struct folio *folio;
-	struct page *page;
 
 	if (file != slot->gmem.file) {
 		WARN_ON_ONCE(slot->gmem.file);
@@ -571,9 +580,7 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 		return ERR_PTR(-EHWPOISON);
 	}
 
-	page = folio_file_page(folio, index);
-
-	*pfn = page_to_pfn(page);
+	*pfn = folio_file_pfn(folio, index);
 
 	if (max_order)
 		*max_order = 0;
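The arithmetic in the new folio_file_pfn() helper is worth a second look:
folio sizes are powers of two, so masking the page-cache index with
(folio_nr_pages() - 1) yields the page's offset inside the folio, which is then
added to the folio's base pfn.  Here is a standalone sketch of the same
calculation with plain integers; all the numbers are hypothetical and the
program is only meant to show the masking, not to mirror kernel types.

#include <assert.h>
#include <stdio.h>

/*
 * Mimics: folio_pfn(folio) + (index & (folio_nr_pages(folio) - 1))
 * for a hypothetical 16-page (order-4) folio whose first page is pfn 0x1200
 * and which backs file indices 0x30..0x3f.
 */
int main(void)
{
	unsigned long folio_base_pfn = 0x1200;
	unsigned long folio_nr_pages = 16;	/* always a power of two */
	unsigned long index = 0x35;		/* page cache index being looked up */

	unsigned long offset_in_folio = index & (folio_nr_pages - 1);	/* 0x5 */
	unsigned long pfn = folio_base_pfn + offset_in_folio;		/* 0x1205 */

	assert(pfn == 0x1205);
	printf("index 0x%lx -> pfn 0x%lx\n", index, pfn);
	return 0;
}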
From patchwork Thu Jul 11 22:27:47 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731141
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 04/12] KVM: rename CONFIG_HAVE_KVM_GMEM_* to CONFIG_HAVE_KVM_ARCH_GMEM_*
Date: Thu, 11 Jul 2024 18:27:47 -0400
Message-ID: <20240711222755.57476-5-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>
Add "ARCH" to the symbols; shortly, the "prepare" phase will include both
the arch-independent step to clear out contents left in the page by the
host, and the arch-dependent step enabled by CONFIG_HAVE_KVM_GMEM_PREPARE.
For consistency do the same for CONFIG_HAVE_KVM_GMEM_INVALIDATE as well.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 arch/x86/kvm/Kconfig     | 4 ++--
 arch/x86/kvm/x86.c       | 4 ++--
 include/linux/kvm_host.h | 4 ++--
 virt/kvm/Kconfig         | 4 ++--
 virt/kvm/guest_memfd.c   | 6 +++---
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 4287a8071a3a..472a1537b7a9 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -141,8 +141,8 @@ config KVM_AMD_SEV
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
 	select KVM_GENERIC_PRIVATE_MEM
-	select HAVE_KVM_GMEM_PREPARE
-	select HAVE_KVM_GMEM_INVALIDATE
+	select HAVE_KVM_ARCH_GMEM_PREPARE
+	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
 	  Provides support for launching Encrypted VMs (SEV) and Encrypted VMs
 	  with Encrypted State (SEV-ES) on AMD processors.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6968eadd418..a1c85591f92c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13603,7 +13603,7 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
 
-#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 bool kvm_arch_gmem_prepare_needed(struct kvm *kvm)
 {
 	return kvm->arch.vm_type == KVM_X86_SNP_VM;
@@ -13615,7 +13615,7 @@ int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_ord
 }
 #endif
 
-#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 {
 	static_call_cond(kvm_x86_gmem_invalidate)(start, end);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c3c922bf077f..eb8404e9aa03 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2441,7 +2441,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 }
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
-#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 bool kvm_arch_gmem_prepare_needed(struct kvm *kvm);
 #endif
@@ -2473,7 +2473,7 @@ typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque);
 
-#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
 #endif
 
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index b14e14cdbfb9..fd6a3010afa8 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -113,10 +113,10 @@ config KVM_GENERIC_PRIVATE_MEM
 	select KVM_PRIVATE_MEM
 	bool
 
-config HAVE_KVM_GMEM_PREPARE
+config HAVE_KVM_ARCH_GMEM_PREPARE
 	bool
 	depends on KVM_PRIVATE_MEM
 
-config HAVE_KVM_GMEM_INVALIDATE
+config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_PRIVATE_MEM
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 5221b584288f..76139332f2f3 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -27,7 +27,7 @@ static inline kvm_pfn_t folio_file_pfn(struct folio *folio, pgoff_t index)
 
 static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
 {
-#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
 	struct kvm_gmem *gmem;
 
@@ -353,7 +353,7 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol
 	return MF_DELAYED;
 }
 
-#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 static void kvm_gmem_free_folio(struct folio *folio)
 {
 	struct page *page = folio_page(folio, 0);
@@ -368,7 +368,7 @@ static const struct address_space_operations kvm_gmem_aops = {
 	.dirty_folio = noop_dirty_folio,
 	.migrate_folio = kvm_gmem_migrate_folio,
 	.error_remove_folio = kvm_gmem_error_folio,
-#ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
+#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 	.free_folio = kvm_gmem_free_folio,
 #endif
 };
From patchwork Thu Jul 11 22:27:48 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731143
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 05/12] KVM: guest_memfd: return locked folio from __kvm_gmem_get_pfn
Date: Thu, 11 Jul 2024 18:27:48 -0400
Message-ID: <20240711222755.57476-6-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

Allow testing the up-to-date flag in the caller without taking the lock
again.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/guest_memfd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 76139332f2f3..9271aba9b7b3 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -59,6 +59,7 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
 	return 0;
 }
 
+/* Returns a locked folio on success.  */
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
 {
 	struct folio *folio;
@@ -551,6 +552,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 	fput(file);
 }
 
+/* Returns a locked folio on success.  */
 static struct folio *
 __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 		   gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare)
@@ -584,7 +586,6 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 	if (max_order)
 		*max_order = 0;
 
-	folio_unlock(folio);
 	return folio;
 }
 
@@ -602,6 +603,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
 
+	folio_unlock(folio);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
@@ -647,6 +649,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 			break;
 		}
 
+		folio_unlock(folio);
 		if (!IS_ALIGNED(gfn, (1 << max_order)) ||
 		    (npages - i) < (1 << max_order))
 			max_order = 0;
From patchwork Thu Jul 11 22:27:49 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731145
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 06/12] KVM: guest_memfd: delay kvm_gmem_prepare_folio() until the memory is passed to the guest
Date: Thu, 11 Jul 2024 18:27:49 -0400
Message-ID: <20240711222755.57476-7-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

Initializing the contents of the folio on fallocate() is unnecessarily
restrictive.  It means that the page is registered with the firmware and
then it cannot be touched anymore.  In particular, this loses the
possibility of using fallocate() to pre-allocate the page for SEV-SNP
guests, because kvm_arch_gmem_prepare() then fails.

It's only when the guest actually accesses the page (and therefore
kvm_gmem_get_pfn() is called) that the page must be cleared of any stale
host data and registered with the firmware.  The up-to-date flag is clear
if this has to be done (i.e. it is the first access and kvm_gmem_populate()
has not been called).

All in all, there are enough differences between kvm_gmem_get_pfn() and
kvm_gmem_populate() that it's better to separate the two flows completely.
Extract the bulk of kvm_gmem_get_folio(), which takes a folio and ends up
setting its up-to-date flag, to a new function kvm_gmem_prepare_folio();
these steps are now done only by the non-__-prefixed kvm_gmem_get_pfn().
As a bonus, __kvm_gmem_get_pfn() loses its ugly "bool prepare" argument.

The previous semantics, where fallocate() could be used to prepare the
pages in advance of running the guest, can be accessed with
KVM_PRE_FAULT_MEMORY.

For now, accessing a page in one VM will attempt to call
kvm_arch_gmem_prepare() in all of those that have bound the guest_memfd.
Cleaning this up is left to a separate patch.
Suggested-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
---
 virt/kvm/guest_memfd.c | 107 ++++++++++++++++++++++++-----------------
 1 file changed, 63 insertions(+), 44 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 9271aba9b7b3..f637327ad8e1 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -25,7 +25,7 @@ static inline kvm_pfn_t folio_file_pfn(struct folio *folio, pgoff_t index)
 	return folio_pfn(folio) + (index & (folio_nr_pages(folio) - 1));
 }
 
-static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+static int __kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
 {
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
@@ -59,49 +59,60 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
 	return 0;
 }
 
-/* Returns a locked folio on success.  */
-static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
+/* Process @folio, which contains @gfn (which in turn is contained in @slot),
+ * so that the guest can use it.  On successful return the guest sees a zero
+ * page so as to avoid leaking host data and the up-to-date flag is set.
+ */
+static int kvm_gmem_prepare_folio(struct file *file, struct kvm_memory_slot *slot,
+				  gfn_t gfn, struct folio *folio)
 {
-	struct folio *folio;
+	unsigned long nr_pages, i;
+	pgoff_t index;
+	int r;
 
-	/* TODO: Support huge pages. */
-	folio = filemap_grab_folio(inode->i_mapping, index);
-	if (IS_ERR(folio))
-		return folio;
+	if (folio_test_uptodate(folio))
+		return 0;
+
+	nr_pages = folio_nr_pages(folio);
+	for (i = 0; i < nr_pages; i++)
+		clear_highpage(folio_page(folio, i));
 
 	/*
-	 * Use the up-to-date flag to track whether or not the memory has been
-	 * zeroed before being handed off to the guest.  There is no backing
-	 * storage for the memory, so the folio will remain up-to-date until
-	 * it's removed.
+	 * Right now the folio order is always going to be zero,
+	 * but the code is ready for huge folios, the only
+	 * assumption being that the base pgoff of memslots is
+	 * naturally aligned according to the folio's order.
+	 * The desired order will be passed when creating the
+	 * guest_memfd and checked when creating memslots.
 	 *
-	 * TODO: Skip clearing pages when trusted firmware will do it when
-	 * assigning memory to the guest.
+	 * By making the base pgoff of the memslot naturally aligned
+	 * to the folio's order, huge folios are safe to prepare, no
+	 * matter what page size is used to map memory into the guest.
 	 */
-	if (!folio_test_uptodate(folio)) {
-		unsigned long nr_pages = folio_nr_pages(folio);
-		unsigned long i;
-
-		for (i = 0; i < nr_pages; i++)
-			clear_highpage(folio_page(folio, i));
-	}
-
-	if (prepare) {
-		int r = kvm_gmem_prepare_folio(inode, index, folio);
-		if (r < 0) {
-			folio_unlock(folio);
-			folio_put(folio);
-			return ERR_PTR(r);
-		}
+	WARN_ON(!IS_ALIGNED(slot->gmem.pgoff, 1 << folio_order(folio)));
+	index = gfn - slot->base_gfn + slot->gmem.pgoff;
+	index = ALIGN_DOWN(index, 1 << folio_order(folio));
+	r = __kvm_gmem_prepare_folio(file_inode(file), index, folio);
+	if (!r)
 		folio_mark_uptodate(folio);
-	}
-	/*
-	 * Ignore accessed, referenced, and dirty flags.  The memory is
-	 * unevictable and there is no storage to write back to.
-	 */
-	return folio;
+	return r;
+}
+
+/*
+ * Returns a locked folio on success.  The caller is responsible for
+ * setting the up-to-date flag before the memory is mapped into the guest.
+ * There is no backing storage for the memory, so the folio will remain
+ * up-to-date until it's removed.
+ *
+ * Ignore accessed, referenced, and dirty flags.  The memory is
+ * unevictable and there is no storage to write back to.
+ */
+static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
+{
+	/* TODO: Support huge pages. */
+	return filemap_grab_folio(inode->i_mapping, index);
 }
 
 static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
@@ -201,7 +212,7 @@ static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
 			break;
 		}
 
-		folio = kvm_gmem_get_folio(inode, index, true);
+		folio = kvm_gmem_get_folio(inode, index);
 		if (IS_ERR(folio)) {
 			r = PTR_ERR(folio);
 			break;
@@ -555,7 +566,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 /* Returns a locked folio on success. */
 static struct folio *
 __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
-		   gfn_t gfn, kvm_pfn_t *pfn, int *max_order, bool prepare)
+		   gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
 {
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem = file->private_data;
@@ -572,7 +583,7 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 		return ERR_PTR(-EIO);
 	}
 
-	folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
+	folio = kvm_gmem_get_folio(file_inode(file), index);
 	if (IS_ERR(folio))
 		return folio;
 
@@ -594,17 +605,25 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 {
 	struct file *file = kvm_gmem_get_file(slot);
 	struct folio *folio;
+	int r = 0;
 
 	if (!file)
 		return -EFAULT;
 
-	folio = __kvm_gmem_get_pfn(file, slot, gfn, pfn, max_order, true);
-	fput(file);
-	if (IS_ERR(folio))
-		return PTR_ERR(folio);
+	folio = __kvm_gmem_get_pfn(file, slot, gfn, pfn, max_order);
+	if (IS_ERR(folio)) {
+		r = PTR_ERR(folio);
+		goto out;
+	}
 
+	r = kvm_gmem_prepare_folio(file, slot, gfn, folio);
 	folio_unlock(folio);
-	return 0;
+	if (r < 0)
+		folio_put(folio);
+
+out:
+	fput(file);
+	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
@@ -643,7 +662,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 			break;
 		}
 
-		folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order, false);
+		folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order);
 		if (IS_ERR(folio)) {
 			ret = PTR_ERR(folio);
 			break;
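The index arithmetic in the new kvm_gmem_prepare_folio() rounds the gfn-derived
index down to the first page of its folio, relying on the memslot's base pgoff
being naturally aligned to the folio order, as the comment in the patch spells
out.  Below is a standalone sketch of that rounding with made-up numbers; it is
plain C, ALIGN_DOWN() is re-implemented here only for illustration, and the
gfn/pgoff values are hypothetical.

#include <assert.h>
#include <stdio.h>

/* Simplified version of the kernel's ALIGN_DOWN() for power-of-two alignments. */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	/* Hypothetical order-4 (16-page) folio and memslot layout. */
	unsigned long folio_order = 4;
	unsigned long base_gfn = 0x1000;	/* slot->base_gfn */
	unsigned long pgoff = 0x20;		/* slot->gmem.pgoff, 16-page aligned */
	unsigned long gfn = 0x1037;		/* faulting gfn inside the slot */

	/* Index of the faulting page in the guest_memfd... */
	unsigned long index = gfn - base_gfn + pgoff;			/* 0x57 */
	/* ...rounded down to the first page of its folio. */
	unsigned long folio_start = ALIGN_DOWN(index, 1UL << folio_order);	/* 0x50 */

	assert(folio_start == 0x50);
	printf("index 0x%lx starts its folio at index 0x%lx\n", index, folio_start);
	return 0;
}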
From patchwork Thu Jul 11 22:27:50 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731148
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 07/12] KVM: guest_memfd: make kvm_gmem_prepare_folio() operate on a single struct kvm
Date: Thu, 11 Jul 2024 18:27:50 -0400
Message-ID: <20240711222755.57476-8-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

This is now possible because preparation is done by kvm_gmem_get_pfn()
instead of fallocate().  In practice this is not a limitation, because
even though guest_memfd can be bound to multiple struct kvm, for hardware
implementations of confidential computing only one guest (identified by
an ASID on SEV-SNP, or an HKID on TDX) will be able to access it.

In the case of intra-host migration (not implemented yet for SEV-SNP,
but we can use SEV-ES as an idea of how it will work), the new struct
kvm inherits the same ASID and preparation need not be repeated.
Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/guest_memfd.c | 47 ++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 29 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index f637327ad8e1..f4d82719ec19 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -25,37 +25,27 @@ static inline kvm_pfn_t folio_file_pfn(struct folio *folio, pgoff_t index)
 	return folio_pfn(folio) + (index & (folio_nr_pages(folio) - 1));
 }
 
-static int __kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
+				    pgoff_t index, struct folio *folio)
 {
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
-	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
-	struct kvm_gmem *gmem;
+	kvm_pfn_t pfn;
+	gfn_t gfn;
+	int rc;
 
-	list_for_each_entry(gmem, gmem_list, entry) {
-		struct kvm_memory_slot *slot;
-		struct kvm *kvm = gmem->kvm;
-		kvm_pfn_t pfn;
-		gfn_t gfn;
-		int rc;
+	if (!kvm_arch_gmem_prepare_needed(kvm))
+		return 0;
 
-		if (!kvm_arch_gmem_prepare_needed(kvm))
-			continue;
-
-		slot = xa_load(&gmem->bindings, index);
-		if (!slot)
-			continue;
-
-		pfn = folio_file_pfn(folio, index);
-		gfn = slot->base_gfn + index - slot->gmem.pgoff;
-		rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio));
-		if (rc) {
-			pr_warn_ratelimited("gmem: Failed to prepare folio for GFN %llx PFN %llx error %d.\n",
-					    gfn, pfn, rc);
-			return rc;
-		}
+	pfn = folio_file_pfn(folio, index);
+	gfn = slot->base_gfn + index - slot->gmem.pgoff;
+	rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio));
+	if (rc) {
+		pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx GFN %llx PFN %llx error %d.\n",
+				    index, gfn, pfn, rc);
+		return rc;
 	}
-
 #endif
+
 	return 0;
 }
 
@@ -63,7 +53,7 @@ static int __kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct f
  * so that the guest can use it.  On successful return the guest sees a zero
  * page so as to avoid leaking host data and the up-to-date flag is set.
  */
-static int kvm_gmem_prepare_folio(struct file *file, struct kvm_memory_slot *slot,
+static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
 				  gfn_t gfn, struct folio *folio)
 {
 	unsigned long nr_pages, i;
@@ -92,8 +82,7 @@ static int kvm_gmem_prepare_folio(struct file *file, struct kvm_memory_slot *slo
 	WARN_ON(!IS_ALIGNED(slot->gmem.pgoff, 1 << folio_order(folio)));
 	index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	index = ALIGN_DOWN(index, 1 << folio_order(folio));
-
-	r = __kvm_gmem_prepare_folio(file_inode(file), index, folio);
+	r = __kvm_gmem_prepare_folio(kvm, slot, index, folio);
 	if (!r)
 		folio_mark_uptodate(folio);
 
@@ -616,7 +605,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		goto out;
 	}
 
-	r = kvm_gmem_prepare_folio(file, slot, gfn, folio);
+	r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
 	folio_unlock(folio);
 	if (r < 0)
 		folio_put(folio);
(version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-108-Uf8_FIPxNs-tNonAAHTBGQ-1; Thu, 11 Jul 2024 18:28:04 -0400 X-MC-Unique: Uf8_FIPxNs-tNonAAHTBGQ-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 193631956095; Thu, 11 Jul 2024 22:28:03 +0000 (UTC) Received: from virtlab1023.lab.eng.rdu2.redhat.com (virtlab1023.lab.eng.rdu2.redhat.com [10.8.1.187]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 325111956046; Thu, 11 Jul 2024 22:28:02 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: seanjc@google.com, michael.roth@amd.com Subject: [PATCH 08/12] KVM: remove kvm_arch_gmem_prepare_needed() Date: Thu, 11 Jul 2024 18:27:51 -0400 Message-ID: <20240711222755.57476-9-pbonzini@redhat.com> In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com> References: <20240711222755.57476-1-pbonzini@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 It is enough to return 0 if a guest need not do any preparation. This is in fact how sev_gmem_prepare() works for non-SNP guests, and it extends naturally to Intel hosts: the x86 callback for gmem_prepare is optional and returns 0 if not defined. Signed-off-by: Paolo Bonzini Reviewed-by: Michael Roth --- arch/x86/kvm/x86.c | 5 ----- include/linux/kvm_host.h | 1 - virt/kvm/guest_memfd.c | 13 +++---------- 3 files changed, 3 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a1c85591f92c..4f58423c6148 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -13604,11 +13604,6 @@ bool kvm_arch_no_poll(struct kvm_vcpu *vcpu) EXPORT_SYMBOL_GPL(kvm_arch_no_poll); #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE -bool kvm_arch_gmem_prepare_needed(struct kvm *kvm) -{ - return kvm->arch.vm_type == KVM_X86_SNP_VM; -} - int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order) { return static_call(kvm_x86_gmem_prepare)(kvm, pfn, gfn, max_order); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index eb8404e9aa03..f6e11991442d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2443,7 +2443,6 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm, #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order); -bool kvm_arch_gmem_prepare_needed(struct kvm *kvm); #endif /** diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index f4d82719ec19..509360eefea5 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -29,16 +29,9 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo pgoff_t index, struct folio *folio) { #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE - kvm_pfn_t pfn; - gfn_t gfn; - int rc; - - if (!kvm_arch_gmem_prepare_needed(kvm)) - return 0; - - pfn = folio_file_pfn(folio, index); - gfn = slot->base_gfn + index - slot->gmem.pgoff; - rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio)); + kvm_pfn_t pfn = folio_file_pfn(folio, index); + gfn_t gfn = slot->base_gfn + index - slot->gmem.pgoff; + int rc = 
kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio)); if (rc) { pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx GFN %llx PFN %llx error %d.\n", index, gfn, pfn, rc);

From patchwork Thu Jul 11 22:27:52 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731144
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 09/12] KVM: guest_memfd: move check for already-populated page to common code
Date: Thu, 11 Jul 2024 18:27:52 -0400
Message-ID: <20240711222755.57476-10-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

Do not allow populating the same page twice with startup data. In the case of SEV-SNP, for example, the firmware does not allow it anyway, since the launch-update operation is only possible on pages that are still shared in the RMP. Even if it worked, kvm_gmem_populate()'s callback is meant to have side effects such as updating launch measurements, and updating the same page twice is unlikely to have the desired results.

Races between calls to the ioctl are not possible because kvm_gmem_populate() holds slots_lock and the VM should not be running. But again, even if this worked on other confidential computing technology, it doesn't matter to guest_memfd.c whether the double population is an intentional attempt to do something fishy or just missing synchronization in userspace. One of the racers wins, and the page is initialized by either kvm_gmem_prepare_folio() or kvm_gmem_populate().

Out of paranoia, adjust sev_gmem_post_populate() as well to use the same errno that kvm_gmem_populate() is using.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 arch/x86/kvm/svm/sev.c | 2 +-
 virt/kvm/guest_memfd.c | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index df8818759698..397ef9e70182 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -2213,7 +2213,7 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf if (ret || assigned) { pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is initial shared state, ret: %d assigned: %d\n", __func__, gfn, ret, assigned); - ret = -EINVAL; + ret = ret ?
-EINVAL : -EEXIST; goto err; } diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 509360eefea5..266810bb91c9 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -650,6 +650,13 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long break; } + if (folio_test_uptodate(folio)) { + folio_unlock(folio); + folio_put(folio); + ret = -EEXIST; + break; + } + folio_unlock(folio); if (!IS_ALIGNED(gfn, (1 << max_order)) || (npages - i) < (1 << max_order))

From patchwork Thu Jul 11 22:27:53 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731149
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 10/12] KVM: cleanup and add shortcuts to kvm_range_has_memory_attributes()
Date: Thu, 11 Jul 2024 18:27:53 -0400
Message-ID: <20240711222755.57476-11-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

Use a guard to simplify early returns, and add two more easy shortcuts. If the requested attributes are invalid, the attributes xarray will never show them as set. And if testing a single page, kvm_get_memory_attributes() is more efficient.

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 virt/kvm/kvm_main.c | 42 ++++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index f817ec66c85f..8ab9d8ff7b74 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2392,6 +2392,14 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, #endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES +static u64 kvm_supported_mem_attributes(struct kvm *kvm) +{ + if (!kvm || kvm_arch_has_private_mem(kvm)) + return KVM_MEMORY_ATTRIBUTE_PRIVATE; + + return 0; +} + /* * Returns true if _all_ gfns in the range [@start, @end) have attributes * matching @attrs.
@@ -2400,40 +2408,30 @@ bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, unsigned long attrs) { XA_STATE(xas, &kvm->mem_attr_array, start); + unsigned long mask = kvm_supported_mem_attributes(kvm);; unsigned long index; - bool has_attrs; void *entry; - rcu_read_lock(); + if (attrs & ~mask) + return false; - if (!attrs) { - has_attrs = !xas_find(&xas, end - 1); - goto out; - } + if (end == start + 1) + return kvm_get_memory_attributes(kvm, start) == attrs; + + guard(rcu)(); + if (!attrs) + return !xas_find(&xas, end - 1); - has_attrs = true; for (index = start; index < end; index++) { do { entry = xas_next(&xas); } while (xas_retry(&xas, entry)); - if (xas.xa_index != index || xa_to_value(entry) != attrs) { - has_attrs = false; - break; - } + if (xas.xa_index != index || xa_to_value(entry) != attrs) return false; } -out: - rcu_read_unlock(); - return has_attrs; -} - -static u64 kvm_supported_mem_attributes(struct kvm *kvm) -{ - if (!kvm || kvm_arch_has_private_mem(kvm)) - return KVM_MEMORY_ATTRIBUTE_PRIVATE; - - return 0; + return true; } static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,

From patchwork Thu Jul 11 22:27:54 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731147
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 11/12] KVM: extend kvm_range_has_memory_attributes() to check subset of attributes
Date: Thu, 11 Jul 2024 18:27:54 -0400
Message-ID: <20240711222755.57476-12-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

While currently there is no other attribute than KVM_MEMORY_ATTRIBUTE_PRIVATE, KVM code such as kvm_mem_is_private() is written as if other attributes may exist. Allow using kvm_range_has_memory_attributes() as a multi-page version of kvm_mem_is_private(), without it breaking later when more attributes are introduced.
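As a rough, self-contained illustration of the subset semantics (not kernel code: range_has_attrs(), attrs_of[] and ATTR_PRIVATE below are made-up stand-ins for kvm_range_has_memory_attributes(), the per-gfn xarray and KVM_MEMORY_ATTRIBUTE_PRIVATE), only the attribute bits selected by the mask are compared against the expected values:

#include <stdbool.h>
#include <stdio.h>

#define ATTR_PRIVATE (1UL << 3)        /* mirrors KVM_MEMORY_ATTRIBUTE_PRIVATE */

static unsigned long attrs_of[8];      /* toy per-gfn attribute store */

/*
 * Returns true if, for every gfn in [start, end), the bits selected by
 * @mask have exactly the values given in @attrs.
 */
static bool range_has_attrs(unsigned long start, unsigned long end,
                            unsigned long mask, unsigned long attrs)
{
        for (unsigned long gfn = start; gfn < end; gfn++)
                if ((attrs_of[gfn] & mask) != attrs)
                        return false;
        return true;
}

int main(void)
{
        attrs_of[2] = attrs_of[3] = ATTR_PRIVATE;

        printf("%d\n", range_has_attrs(2, 4, ATTR_PRIVATE, ATTR_PRIVATE)); /* 1 */
        printf("%d\n", range_has_attrs(1, 4, ATTR_PRIVATE, ATTR_PRIVATE)); /* 0: gfn 1 is not private */
        printf("%d\n", range_has_attrs(1, 4, ATTR_PRIVATE, 0));            /* 0: gfn 2 is private */
        return 0;
}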
Signed-off-by: Paolo Bonzini Reviewed-by: Michael Roth --- arch/x86/kvm/mmu/mmu.c | 2 +- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 13 +++++++------ 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4e0e9963066f..f6b7391fe438 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7524,7 +7524,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot, const unsigned long end = start + KVM_PAGES_PER_HPAGE(level); if (level == PG_LEVEL_2M) - return kvm_range_has_memory_attributes(kvm, start, end, attrs); + return kvm_range_has_memory_attributes(kvm, start, end, ~0, attrs); for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) { if (hugepage_test_mixed(slot, gfn, level - 1) || diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f6e11991442d..456dbdf7aaaf 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2410,7 +2410,7 @@ static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn } bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, - unsigned long attrs); + unsigned long mask, unsigned long attrs); bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 8ab9d8ff7b74..62e5d9b0ae83 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2402,21 +2402,21 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm) /* * Returns true if _all_ gfns in the range [@start, @end) have attributes - * matching @attrs. + * such that the bits in @mask match @attrs. */ bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, - unsigned long attrs) + unsigned long mask, unsigned long attrs) { XA_STATE(xas, &kvm->mem_attr_array, start); - unsigned long mask = kvm_supported_mem_attributes(kvm);; unsigned long index; void *entry; + mask &= kvm_supported_mem_attributes(kvm); if (attrs & ~mask) return false; if (end == start + 1) - return kvm_get_memory_attributes(kvm, start) == attrs; + return (kvm_get_memory_attributes(kvm, start) & mask) == attrs; guard(rcu)(); if (!attrs) @@ -2427,7 +2427,8 @@ bool kvm_range_has_memory_attributes(struct kvm *kvm, gfn_t start, gfn_t end, entry = xas_next(&xas); } while (xas_retry(&xas, entry)); - if (xas.xa_index != index || xa_to_value(entry) != attrs) + if (xas.xa_index != index || + (xa_to_value(entry) & mask) != attrs) return false; } @@ -2526,7 +2527,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, mutex_lock(&kvm->slots_lock); /* Nothing to do if the entire range as the desired attributes. 
*/ - if (kvm_range_has_memory_attributes(kvm, start, end, attributes)) + if (kvm_range_has_memory_attributes(kvm, start, end, ~0, attributes)) goto out_unlock; /*

From patchwork Thu Jul 11 22:27:55 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13731150
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com
Subject: [PATCH 12/12] KVM: guest_memfd: let kvm_gmem_populate() operate only on private gfns
Date: Thu, 11 Jul 2024 18:27:55 -0400
Message-ID: <20240711222755.57476-13-pbonzini@redhat.com>
In-Reply-To: <20240711222755.57476-1-pbonzini@redhat.com>
References: <20240711222755.57476-1-pbonzini@redhat.com>

This check is currently performed by sev_gmem_post_populate(), but it applies to all callers of kvm_gmem_populate(): the point of the function is that the memory is being encrypted and some work has to be done on all the gfns in order to encrypt them. Therefore, check the KVM_MEMORY_ATTRIBUTE_PRIVATE attribute prior to invoking the callback, and stop the operation if a shared page is encountered.

Because CONFIG_KVM_PRIVATE_MEM in principle does not require attributes, this makes kvm_gmem_populate() depend on CONFIG_KVM_GENERIC_PRIVATE_MEM (which does require them).

Signed-off-by: Paolo Bonzini
Reviewed-by: Michael Roth
---
 arch/x86/kvm/svm/sev.c   |  7 -------
 include/linux/kvm_host.h |  2 ++
 virt/kvm/guest_memfd.c   | 12 ++++++++++++
 3 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 397ef9e70182..5a93e554cbb6 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -2202,13 +2202,6 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf bool assigned; int level; - if (!kvm_mem_is_private(kvm, gfn)) { - pr_debug("%s: Failed to ensure GFN 0x%llx has private memory attribute set\n", - __func__, gfn); - ret = -EINVAL; - goto err; - } - ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level); if (ret || assigned) { pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is initial shared state, ret: %d assigned: %d\n", diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 456dbdf7aaaf..f7ba665652f3 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2445,6 +2445,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm, int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order); #endif +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM /** * kvm_gmem_populate() - Populate/prepare a GPA range with guest data * @@ -2471,6 +2472,7 @@ typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages, kvm_gmem_populate_cb post_populate, void *opaque); +#endif #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end); diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 266810bb91c9..7e2c9274fd16 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -609,6 +609,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, } EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn); +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages, kvm_gmem_populate_cb post_populate, void *opaque) { @@ -662,11 +663,21 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long (npages - i) < (1 << max_order)) max_order
= 0; + ret = -EINVAL; + while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order), + KVM_MEMORY_ATTRIBUTE_PRIVATE, + KVM_MEMORY_ATTRIBUTE_PRIVATE)) { + if (!max_order) + goto put_folio_and_exit; + max_order--; + } + p = src ? src + i * PAGE_SIZE : NULL; ret = post_populate(kvm, gfn, pfn, p, max_order, opaque); if (!ret) folio_mark_uptodate(folio); +put_folio_and_exit: folio_put(folio); if (ret) break; @@ -678,3 +689,4 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long return ret && !i ? ret : i; } EXPORT_SYMBOL_GPL(kvm_gmem_populate); +#endif
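For readers following the new control flow, here is a self-contained sketch (illustration only, not kernel code) of the order-shrinking loop that kvm_gmem_populate() now performs before invoking the callback; is_private(), range_is_private() and pick_order() are hypothetical stand-ins for the attribute lookup done via kvm_range_has_memory_attributes():

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-gfn PRIVATE attribute lookup. */
static bool is_private(unsigned long gfn)
{
        return gfn != 5;                /* pretend only gfn 5 is shared */
}

static bool range_is_private(unsigned long gfn, int order)
{
        for (unsigned long i = 0; i < (1UL << order); i++)
                if (!is_private(gfn + i))
                        return false;
        return true;
}

/*
 * Mirror of the new loop: shrink the mapping order until every gfn in
 * [gfn, gfn + (1 << order)) is private; fail with -EINVAL if even the
 * single base page is shared.
 */
static int pick_order(unsigned long gfn, int max_order)
{
        while (!range_is_private(gfn, max_order)) {
                if (!max_order)
                        return -EINVAL;
                max_order--;
        }
        return max_order;
}

int main(void)
{
        printf("gfn 0, max_order 2 -> %d\n", pick_order(0, 2)); /* stays 2 */
        printf("gfn 4, max_order 2 -> %d\n", pick_order(4, 2)); /* shrinks to 0: gfn 5 is shared */
        printf("gfn 5, max_order 0 -> %d\n", pick_order(5, 0)); /* -EINVAL */
        return 0;
}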