From patchwork Thu Apr 4 18:50:24 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Bonzini <pbonzini@redhat.com>
X-Patchwork-Id: 13618139
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com, isaku.yamahata@intel.com
Subject: [PATCH 02/11] KVM: guest_memfd: Use AS_INACCESSIBLE when creating
 guest_memfd inode
Date: Thu, 4 Apr 2024 14:50:24 -0400
Message-ID: <20240404185034.3184582-3-pbonzini@redhat.com>
In-Reply-To: <20240404185034.3184582-1-pbonzini@redhat.com>
References: <20240404185034.3184582-1-pbonzini@redhat.com>
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org
MIME-Version: 1.0

From: Michael Roth <michael.roth@amd.com>

truncate_inode_pages_range() may attempt to zero pages before truncating
them, and this will occur before arch-specific invalidations can be
triggered via the .invalidate_folio/.free_folio hooks in kvm_gmem_aops.
For AMD SEV-SNP this would result in an RMP #PF being generated by the
hardware, which is currently treated as fatal (and even if it were
specifically allowed for, it would not result in anything other than
garbage being written to guest pages due to encryption). On Intel TDX
this would also result in undesirable behavior.

Set the AS_INACCESSIBLE flag to prevent the MM from attempting unexpected
accesses of this sort during operations like truncation.

This may also in some cases yield a decent performance improvement for
guest_memfd userspace implementations that hole-punch ranges immediately
after private->shared conversions via KVM_SET_MEMORY_ATTRIBUTES, since the
current implementation of truncate_inode_pages_range() always ends up
zeroing an entire 4K range if it is backed by a 2M folio.

Link: https://lore.kernel.org/lkml/ZR9LYhpxTaTk6PJX@google.com/
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-ID: <20240329212444.395559-6-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Vlastimil Babka
---
 virt/kvm/guest_memfd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0f4e0cf4f158..5a929536ecf2 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -357,6 +357,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	inode->i_private = (void *)(unsigned long)flags;
 	inode->i_op = &kvm_gmem_iops;
 	inode->i_mapping->a_ops = &kvm_gmem_aops;
+	inode->i_mapping->flags |= AS_INACCESSIBLE;
 	inode->i_mode |= S_IFREG;
 	inode->i_size = size;
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
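
For readers following along, a minimal sketch of the pattern this flag is
meant to enable is below: the MM side consults the mapping's AS_INACCESSIBLE
state before writing to folio contents (for example when zeroing a partially
truncated folio). This is illustrative only and not part of this patch; the
helpers gmem_mapping_is_inaccessible() and gmem_zero_partial_folio() are
hypothetical names, and the check simply mirrors the mask-style update in the
hunk above. The real consumer of the flag is the mm-side change referenced by
the Link above.

#include <linux/highmem.h>
#include <linux/pagemap.h>

/* Hypothetical helper: true if the MM must not touch this mapping's data. */
static inline bool gmem_mapping_is_inaccessible(struct address_space *mapping)
{
	/* Mirrors how the hunk above sets the flag in mapping->flags. */
	return mapping->flags & AS_INACCESSIBLE;
}

/* Zero a partial folio only when its backing memory may be accessed. */
static void gmem_zero_partial_folio(struct folio *folio, size_t offset,
				    size_t length)
{
	if (folio->mapping && gmem_mapping_is_inaccessible(folio->mapping))
		return;	/* encrypted/confidential guest memory: skip zeroing */

	folio_zero_range(folio, offset, length);
}

With guest_memfd marking its mapping this way at inode creation, a hole punch
issued right after a private->shared conversion truncates the range without
the kernel writing zeroes through a mapping the hardware considers private.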