From patchwork Thu Oct 10 18:23:03 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830659
Message-ID: <20241010182427.1434605-2-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:03 -0700
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao, David Matlack,
    David Stevens, Andrew Jones
Subject: [PATCH v13 01/85] KVM: Drop KVM_ERR_PTR_BAD_PAGE and instead
 return NULL to indicate an error

Remove KVM_ERR_PTR_BAD_PAGE and instead return NULL, as "bad page" is
just a leftover bit of weirdness from days of old when KVM stuffed a
"bad" page into the guest instead of actually handling missing pages.
See commit cea7bb21280e ("KVM: MMU: Make gfn_to_page() always safe").
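For illustration only (not part of the patch): with NULL as the error
value, the caller-side pattern collapses to a plain pointer check.  The
error code below is a placeholder, as the appropriate value is
caller-specific.

	struct page *page = gfn_to_page(kvm, gfn);

	if (!page)		/* replaces the old is_error_page() check */
		return -EFAULT;	/* placeholder; real callers pick their own errno */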
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_pr.c          |  2 +-
 arch/powerpc/kvm/book3s_xive_native.c |  2 +-
 arch/s390/kvm/vsie.c                  |  2 +-
 arch/x86/kvm/lapic.c                  |  2 +-
 include/linux/kvm_host.h              |  7 -------
 virt/kvm/kvm_main.c                   | 15 ++++++---------
 6 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 7b8ae509328f..d7721297b9b6 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -645,7 +645,7 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 	int i;
 
 	hpage = gfn_to_page(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-	if (is_error_page(hpage))
+	if (!hpage)
 		return;
 
 	hpage_offset = pte->raddr & ~PAGE_MASK;
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index 6e2ebbd8aaac..d9bf1bc3ff61 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -654,7 +654,7 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
 	}
 
 	page = gfn_to_page(kvm, gfn);
-	if (is_error_page(page)) {
+	if (!page) {
 		srcu_read_unlock(&kvm->srcu, srcu_idx);
 		pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
 		return -EINVAL;
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 89cafea4c41f..763a070f5955 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -661,7 +661,7 @@ static int pin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t *hpa)
 	struct page *page;
 
 	page = gfn_to_page(kvm, gpa_to_gfn(gpa));
-	if (is_error_page(page))
+	if (!page)
 		return -EINVAL;
 	*hpa = (hpa_t)page_to_phys(page) + (gpa & ~PAGE_MASK);
 	return 0;
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 2098dc689088..20526e4d6c62 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2664,7 +2664,7 @@ int kvm_alloc_apic_access_page(struct kvm *kvm)
 	}
 
 	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-	if (is_error_page(page)) {
+	if (!page) {
 		ret = -EFAULT;
 		goto out;
 	}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index db567d26f7b9..ee186a1fbaad 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -153,13 +153,6 @@ static inline bool kvm_is_error_gpa(gpa_t gpa)
 	return gpa == INVALID_GPA;
 }
 
-#define KVM_ERR_PTR_BAD_PAGE	(ERR_PTR(-ENOENT))
-
-static inline bool is_error_page(struct page *page)
-{
-	return IS_ERR(page);
-}
-
 #define KVM_REQUEST_MASK	GENMASK(7,0)
 #define KVM_REQUEST_NO_WAKEUP	BIT(8)
 #define KVM_REQUEST_WAIT	BIT(9)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 05cbb2548d99..4b659a649dfa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3078,19 +3078,14 @@ EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
  */
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
-	struct page *page;
 	kvm_pfn_t pfn;
 
 	pfn = gfn_to_pfn(kvm, gfn);
 	if (is_error_noslot_pfn(pfn))
-		return KVM_ERR_PTR_BAD_PAGE;
+		return NULL;
 
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return KVM_ERR_PTR_BAD_PAGE;
-
-	return page;
+	return kvm_pfn_to_refcounted_page(pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
@@ -3184,7 +3179,8 @@ static void kvm_set_page_accessed(struct page *page)
 
 void kvm_release_page_clean(struct page *page)
 {
-	WARN_ON(is_error_page(page));
+	if (WARN_ON(!page))
+		return;
 
 	kvm_set_page_accessed(page);
 	put_page(page);
@@ -3208,7 +3204,8 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
-	WARN_ON(is_error_page(page));
+	if (WARN_ON(!page))
+		return;
 
 	kvm_set_page_dirty(page);
 	kvm_release_page_clean(page);

From patchwork Thu Oct 10 18:23:04 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830660
Message-ID: <20241010182427.1434605-3-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:04 -0700
From: Sean Christopherson
Subject: [PATCH v13 02/85] KVM: Allow calling kvm_release_page_{clean,dirty}()
 on a NULL page pointer

Allow passing a NULL @page to kvm_release_page_{clean,dirty}(); there's
no tangible benefit to forcing the callers to pre-check @page, and it
ends up generating a lot of duplicate boilerplate code.
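An illustrative before/after for a hypothetical error path (not taken
from the patch), showing the boilerplate this eliminates:

	/* Before: every caller had to guard the release itself. */
	if (page)
		kvm_release_page_clean(page);

	/* After: the NULL check lives inside kvm_release_page_clean(). */
	kvm_release_page_clean(page);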
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4b659a649dfa..2032292df0b0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3179,7 +3179,7 @@ static void kvm_set_page_accessed(struct page *page)
 
 void kvm_release_page_clean(struct page *page)
 {
-	if (WARN_ON(!page))
+	if (!page)
 		return;
 
 	kvm_set_page_accessed(page);
@@ -3204,7 +3204,7 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
-	if (WARN_ON(!page))
+	if (!page)
 		return;
 
 	kvm_set_page_dirty(page);

From patchwork Thu Oct 10 18:23:05 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830661
Message-ID: <20241010182427.1434605-4-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:05 -0700
From: Sean Christopherson
Subject: [PATCH v13 03/85] KVM: Add kvm_release_page_unused() API to put
 pages that KVM never consumes

Add an API to release an unused page, i.e. to put a page without marking
it accessed or dirty.  The API will be used when KVM faults in a page but
bails before installing the guest mapping (and other similar flows).
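A hypothetical fault-path sketch of the intended usage; the mmu_seq
retry handling and the -EAGAIN placeholder are illustrative, not lifted
from a real caller:

	page = gfn_to_page(kvm, gfn);
	if (!page)
		return -EFAULT;

	if (mmu_invalidate_retry(kvm, mmu_seq)) {
		/*
		 * The page was never mapped into the guest, so don't
		 * mark it accessed or dirty on the way out.
		 */
		kvm_release_page_unused(page);
		return -EAGAIN;	/* placeholder for the caller's retry code */
	}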
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ee186a1fbaad..ab4485b2bddc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1216,6 +1216,15 @@ unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot,
 				      gfn_t gfn, bool *writable);
+
+static inline void kvm_release_page_unused(struct page *page)
+{
+	if (!page)
+		return;
+
+	put_page(page);
+}
+
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);

From patchwork Thu Oct 10 18:23:06 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830662
Message-ID: <20241010182427.1434605-5-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:06 -0700
From: Sean Christopherson
Subject: [PATCH v13 04/85] KVM: x86/mmu: Skip the "try unsync" path iff
 the old SPTE was a leaf SPTE

Apply make_spte()'s optimization to skip trying to unsync shadow pages
if and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct
MMUs are always writable, i.e. could trigger a false positive and
incorrectly lead to KVM creating a SPTE without write-protecting or
marking shadow pages unsync.

This bug only affects the TDP MMU, as the shadow MMU only overwrites a
shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be
unsync).  Specifically, mmu_set_spte() drops any non-leaf SPTEs *before*
calling make_spte(), whereas the TDP MMU can do a direct replacement of
a page table with the leaf SPTE.

Opportunistically update the comment to explain why skipping the unsync
stuff is safe, as opposed to simply saying "it's someone else's
problem".
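Condensed, the entire functional change is the extra leaf check (a
restatement of the hunk below, not additional code):

	/*
	 * A writable non-leaf SPTE proves nothing: non-leaf SPTEs in
	 * direct MMUs are *always* writable.  Only a writable *leaf*
	 * SPTE guarantees the relevant shadow pages are already unsync.
	 */
	if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
		goto out;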
Cc: stable@vger.kernel.org
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8f7eb3ad88fc..5521608077ec 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
 		/*
-		 * Optimization: for pte sync, if spte was writable the hash
-		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-		 * Same reasoning can be applied to dirty page accounting.
+		 * When overwriting an existing leaf SPTE, and the old SPTE was
+		 * writable, skip trying to unsync shadow pages as any relevant
+		 * shadow pages must already be unsync, i.e. the hash lookup is
+		 * unnecessary (and expensive).
+		 *
+		 * The same reasoning applies to dirty page/folio accounting;
+		 * KVM will mark the folio dirty using the old SPTE, thus
+		 * there's no need to immediately mark the new SPTE as dirty.
+		 *
+		 * Note, both cases rely on KVM not changing PFNs without first
+		 * zapping the old SPTE, which is guaranteed by both the shadow
+		 * MMU and the TDP MMU.
 		 */
-		if (is_writable_pte(old_spte))
+		if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
 			goto out;
 
 		/*

From patchwork Thu Oct 10 18:23:07 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830663
Message-ID: <20241010182427.1434605-6-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:07 -0700
From: Sean Christopherson
Subject: [PATCH v13 05/85] KVM: x86/mmu: Don't overwrite shadow-present MMU
 SPTEs when prefaulting

Treat attempts to prefetch/prefault MMU SPTEs as spurious if there's an
existing shadow-present SPTE, as overwriting a SPTE that may have been
created by a "real" fault is at best confusing, and at worst potentially
harmful.  E.g. mmu_try_to_unsync_pages() doesn't unsync when prefetching,
which creates a scenario where KVM could try to replace a Writable SPTE
with a !Writable SPTE, as sp->unsync is checked prior to acquiring
mmu_unsync_pages_lock.

Note, this applies to three of the four flavors of "prefetch" in KVM:

 - KVM_PRE_FAULT_MEMORY
 - Async #PF (host or PV)
 - Prefetching

The fourth flavor, SPTE synchronization, i.e. FNAME(sync_spte), _only_
overwrites shadow-present SPTEs when calling make_spte().  But SPTE
synchronization specifically uses mmu_spte_update(), and so naturally
avoids the @prefetch check in mmu_set_spte().

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 3 +++
 arch/x86/kvm/mmu/tdp_mmu.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a9a23e058555..a8c64069aa89 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2919,6 +2919,9 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	if (is_shadow_present_pte(*sptep)) {
+		if (prefetch)
+			return RET_PF_SPURIOUS;
+
 		/*
 		 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
 		 * the parent of the now unreachable PTE.
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3b996c1fdaab..3c6583468742 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1026,6 +1026,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
 		return RET_PF_RETRY;
 
+	if (fault->prefetch && is_shadow_present_pte(iter->old_spte))
+		return RET_PF_SPURIOUS;
+
 	if (unlikely(!fault->slot))
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
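An illustrative interleaving of the race being closed (simplified; the
vCPU split is hypothetical):

	/*
	 * vCPU0: real write fault creates a Writable SPTE and unsyncs
	 *	  the relevant shadow page.
	 * vCPU1: prefetch of the same gfn reaches make_spte() with
	 *	  prefetch=true; mmu_try_to_unsync_pages() skips the
	 *	  unsync, so the prefetch could overwrite vCPU0's
	 *	  Writable SPTE with a !Writable SPTE.
	 *
	 * The TDP MMU check added above closes the window by bailing:
	 */
	if (fault->prefetch && is_shadow_present_pte(iter->old_spte))
		return RET_PF_SPURIOUS;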
From patchwork Thu Oct 10 18:23:08 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830664
Message-ID: <20241010182427.1434605-7-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:08 -0700
From: Sean Christopherson
Subject: [PATCH v13 06/85] KVM: x86/mmu: Invert @can_unsync and renamed to
 @synchronizing

Invert the polarity of "can_unsync" and rename the parameter to
"synchronizing" to allow a future change to set the Accessed bit if KVM
is synchronizing an existing SPTE.  Querying "can_unsync" in that case
is nonsensical, as the fact that KVM can't unsync SPTEs doesn't provide
any justification for setting the Accessed bit.
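In shorthand, the polarity flip at the call sites looks like this
(illustrative; see the diff below for the real changes):

	/* Fault paths create new SPTEs, i.e. are never synchronizing. */
	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep,
			   prefetch, /*synchronizing=*/false,
			   host_writable, &spte);

	/* FNAME(sync_spte) is the one true synchronization path. */
	make_spte(vcpu, sp, slot, pte_access, gfn, spte_to_pfn(spte),
		  spte, /*prefetch=*/true, /*synchronizing=*/true,
		  host_writable, &spte);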
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 12 ++++++------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/spte.c         |  4 ++--
 arch/x86/kvm/mmu/spte.h         |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  4 ++--
 6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8c64069aa89..0f21d6f76cab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2795,7 +2795,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
  * be write-protected.
  */
 int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
-			    gfn_t gfn, bool can_unsync, bool prefetch)
+			    gfn_t gfn, bool synchronizing, bool prefetch)
 {
 	struct kvm_mmu_page *sp;
 	bool locked = false;
@@ -2810,12 +2810,12 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 
 	/*
 	 * The page is not write-tracked, mark existing shadow pages unsync
-	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
-	 * that case, KVM must complete emulation of the guest TLB flush before
-	 * allowing shadow pages to become unsync (writable by the guest).
+	 * unless KVM is synchronizing an unsync SP.  In that case, KVM must
+	 * complete emulation of the guest TLB flush before allowing shadow
+	 * pages to become unsync (writable by the guest).
 	 */
 	for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) {
-		if (!can_unsync)
+		if (synchronizing)
 			return -EPERM;
 
 		if (sp->unsync)
@@ -2941,7 +2941,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, &spte);
+			   false, host_writable, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c98827840e07..4da83544c4e1 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,7 @@ static inline gfn_t gfn_round_for_level(gfn_t gfn, int level)
 }
 
 int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
-			    gfn_t gfn, bool can_unsync, bool prefetch);
+			    gfn_t gfn, bool synchronizing, bool prefetch);
 
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ae7d39ff2d07..6e7bd8921c6f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -963,7 +963,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	host_writable = spte & shadow_host_writable_mask;
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	make_spte(vcpu, sp, slot, pte_access, gfn,
-		  spte_to_pfn(spte), spte, true, false,
+		  spte_to_pfn(spte), spte, true, true,
 		  host_writable, &spte);
 
 	return mmu_spte_update(sptep, spte);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 5521608077ec..0e47fea1a2d9 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -157,7 +157,7 @@ bool spte_has_volatile_bits(u64 spte)
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
+	       u64 old_spte, bool prefetch, bool synchronizing,
 	       bool host_writable, u64 *new_spte)
 {
 	int level = sp->role.level;
@@ -248,7 +248,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		 * e.g. it's write-tracked (upper-level SPs) or has one or more
 		 * shadow pages and unsync'ing pages is not allowed.
 		 */
-		if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
+		if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, synchronizing, prefetch)) {
 			wrprot = true;
 			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 2cb816ea2430..c81cac9358e0 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -499,7 +499,7 @@ bool spte_has_volatile_bits(u64 spte);
 bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-	       u64 old_spte, bool prefetch, bool can_unsync,
+	       u64 old_spte, bool prefetch, bool synchronizing,
 	       bool host_writable, u64 *new_spte);
 u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
 			      union kvm_mmu_page_role role, int index);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3c6583468742..76bca7a726c1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1033,8 +1033,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+				   fault->pfn, iter->old_spte, fault->prefetch,
+				   false, fault->map_writable, &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;

From patchwork Thu Oct 10 18:23:09 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830752
Message-ID: <20241010182427.1434605-8-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Date: Thu, 10 Oct 2024 11:23:09 -0700
From: Sean Christopherson
Subject: [PATCH v13 07/85] KVM: x86/mmu: Mark new SPTE as Accessed when
 synchronizing existing SPTE

Set the Accessed bit when making a "new" SPTE during SPTE
synchronization, as _clearing_ the Accessed bit is counter-productive,
and even if the Accessed bit wasn't set in the old SPTE, odds are very
good the guest will access the page in the near future, as the most
common case where KVM synchronizes a shadow-present SPTE is when the
guest is making the gPTE read-only for Copy-on-Write (CoW).

Preserving the Accessed bit will allow dropping the logic that
propagates the Accessed bit to the underlying struct page when
overwriting an existing SPTE, without undue risk of regressing page
aging.

Note, KVM's current behavior is very deliberate, as SPTE synchronization
was the only "speculative" access type as of commit 947da5383069 ("KVM:
MMU: Set the accessed bit on non-speculative shadow ptes").  But, much
has changed since 2008, and more changes are on the horizon.

Spurious clearing of the Accessed (and Dirty) bits was mitigated by
commit e6722d9211b2 ("KVM: x86/mmu: Reduce the update to the spte in
FNAME(sync_spte)"), which changed FNAME(sync_spte) to only overwrite
SPTEs if the protections are actually changing.  I.e. KVM is already
preserving Accessed information for SPTEs that aren't dropping
protections.

And with the aforementioned future change to NOT mark the page/folio as
accessed, KVM's SPTEs will become the "source of truth" so to speak, in
which case clearing the Accessed bit outside of page aging becomes very
undesirable.

Suggested-by: Yan Zhao
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 0e47fea1a2d9..618059b30b8b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -178,7 +178,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= SPTE_TDP_AD_WRPROT_ONLY;
 
 	spte |= shadow_present_mask;
-	if (!prefetch)
+	if (!prefetch || synchronizing)
 		spte |= spte_shadow_accessed_mask(spte);
 
 	/*
@@ -259,7 +259,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= spte_shadow_dirty_mask(spte);
 
 out:
-	if (prefetch)
+	if (prefetch && !synchronizing)
 		spte = mark_spte_for_access_track(spte);
 
 	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
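The net Accessed-bit policy in make_spte(), condensed from the two
hunks above (a restatement, not additional code):

	/*
	 * Mark the SPTE Accessed unless this is a true prefetch; SPTE
	 * synchronization now counts as a (likely imminent) access.
	 */
	if (!prefetch || synchronizing)
		spte |= spte_shadow_accessed_mask(spte);

	/* ... and only true prefetches remain access-tracked. */
	if (prefetch && !synchronizing)
		spte = mark_spte_for_access_track(spte);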
header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Kbknugbz" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-7e6af43d0c5so1107917a12.3 for ; Thu, 10 Oct 2024 11:25:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1728584709; x=1729189509; darn=vger.kernel.org; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:reply-to:from:to:cc:subject:date :message-id:reply-to; bh=e3d7DBw/sg5cuh+IKhcM0BkpGN0EJ0d/2DHG844pD6k=; b=KbknugbzdNUXpNoKARsIRLpx8PRqlV/FblB8LCRY3jXCEL8YQWp2+5e1SPcnpU1anA /Un1XGepnDYvRE8uMn1M8sNXDKThkOYuLMTMY4dvsf3kPUPmzir5ncOXQFn7+bv1p4hy H2jRB4kxAnETZzMSlESoEbGU/tyiL4jS3pm3WzUHXkieibovGZ1NPgTsV7+P+Cesmr1y Fs2xH6iupE8taJ7d1TWULgnebCuHG6dRZWs4+QPnv+cXQLB2p6C1WBwO4XLulqfa6dO3 cUBm6987ZwMn/Cpfa5BrC69Oqg5CqfZ6FFkwpWMqFmwwGgxbzBJ359J9aVrp/AaSSVAN 8DSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1728584709; x=1729189509; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:reply-to:x-gm-message-state:from:to :cc:subject:date:message-id:reply-to; bh=e3d7DBw/sg5cuh+IKhcM0BkpGN0EJ0d/2DHG844pD6k=; b=u3ol48iM6E5zLnO/ta/nJEx8Fy1kYNLaq21HFPfoZJVI37A+/3EJNYeQnUS9OFmqPr TUgJ2h4DgMMD0uywPYHfDJ/Toxfz12cqis7W4RrTrSJHPdPmUem+IOrMT4oMg0jdEEXN kv3lYgzFx2dMVDVBccPeGjPZy6McEbL3FL9BEH3mbYq4CZM3QvTL1luqHH1DJSfIgrdZ DoglPpbTrhCvkhKFy5gSjgAg0bi9mEh/Prh0MEr/45lVb4VWgHRmXCh9So6CfG3mh92c 2qJ6qGftdqfWtdLSDeRsH6XhXYCj4ehH5gsf5y0/5/4HvGcC1OeOJgp65CWWIcIAGYH3 jMvw== X-Gm-Message-State: AOJu0YxijHi/jcg6Lnhu78oPD45jn72DvXvO8PrCIwHf+YjoigDFheuV 03YguRl7tMGwuK5meRngxAXI04vflT6+FjXRh1QI6O0qVSO6PFqHB7AvIgeWnsKot48HwKscOuj AcQ== X-Google-Smtp-Source: AGHT+IFvOj1YmixVIm2giqrxikolLV70YR7/zWjxlJNmhfeSXdlwSACqKd64r4Is1alzH3q275ZQuugRXZo= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:9d:3983:ac13:c240]) (user=seanjc job=sendgmr) by 2002:a17:90b:153:b0:2e2:af66:c33e with SMTP id 98e67ed59e1d1-2e2f0ae73f3mr37a91.1.1728584708343; Thu, 10 Oct 2024 11:25:08 -0700 (PDT) Reply-To: Sean Christopherson Date: Thu, 10 Oct 2024 11:23:10 -0700 In-Reply-To: <20241010182427.1434605-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241010182427.1434605-1-seanjc@google.com> X-Mailer: git-send-email 2.47.0.rc1.288.g06298d1525-goog Message-ID: <20241010182427.1434605-9-seanjc@google.com> Subject: [PATCH v13 08/85] KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying From: Sean Christopherson To: Paolo Bonzini , Marc Zyngier , Oliver Upton , Tianrui Zhao , Bibo Mao , Huacai Chen , Michael Ellerman , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Claudio Imbrenda , Sean Christopherson Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, " =?utf-8?q?Alex_Benn=C3=A9e?= " , Yan Zhao , David Matlack , David Stevens , Andrew Jones Mark pages/folios dirty when creating SPTEs to map PFNs into the guest, not when zapping or modifying SPTEs, as marking 
folios dirty when zapping or modifying SPTEs can be extremely inefficient.
E.g. when KVM is zapping collapsible SPTEs to reconstitute a hugepage after
disabling dirty logging, KVM will mark every 4KiB pfn as dirty, even though
_at least_ 512 pfns are guaranteed to be in a single folio (the SPTE couldn't
potentially be huge if that weren't the case). The problem only becomes
worse for 1GiB HugeTLB pages, as KVM can mark a single folio dirty 512*512
times.

Marking a folio dirty when mapping is functionally safe as KVM drops all
relevant SPTEs in response to an mmu_notifier invalidation, i.e. ensures
that the guest can't dirty a folio after access has been removed.

And because KVM already marks folios dirty when zapping/modifying SPTEs for
KVM reasons, i.e. not in response to an mmu_notifier invalidation, there is
no danger of "prematurely" marking a folio dirty. E.g. if a filesystem
cleans a folio without first removing write access, then there already exist
races where KVM could mark a folio dirty before remote TLBs are flushed,
i.e. before guest writes are guaranteed to stop.

Furthermore, x86 is literally the only architecture that marks folios dirty
on the backend; every other KVM architecture marks folios dirty at map time.

x86's unique behavior likely stems from the fact that x86's MMU predates
mmu_notifiers. Long, long ago, before mmu_notifiers were added, marking
pages dirty when zapping SPTEs was logical, and perhaps even necessary, as
KVM held references to pages, i.e. kept a page's refcount elevated while the
page was mapped into the guest. At the time, KVM's rmap_remove() simply did:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);
	else
		kvm_release_pfn_clean(pfn);

i.e. dropped the refcount and marked the page dirty at the same time. After
mmu_notifiers were introduced, commit acb66dd051d0 ("KVM: MMU: don't hold
pagecount reference for mapped sptes pages") removed the refcount logic, but
kept the dirty logic, i.e. converted the above to:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);

And for KVM x86, that's essentially how things have stayed over the last
~15 years, without anyone revisiting *why* KVM marks pages/folios dirty at
zap/modification time, e.g. the behavior was blindly carried forward to the
TDP MMU.

Practically speaking, the only downside to marking a folio dirty during
mapping is that KVM could trigger writeback of memory that was never
actually written. Except that can't actually happen if KVM marks folios
dirty if and only if a writable SPTE is created (as done here), because KVM
always marks writable SPTEs as dirty during make_spte(). See commit
9b51a63024bd ("KVM: MMU: Explicitly set D-bit for writable spte."), circa
2015.

Note, KVM's access tracking logic for prefetched SPTEs is a bit odd. If a
guest PTE is dirty and writable, KVM will create a writable SPTE, but then
mark the SPTE for access tracking. Which isn't wrong, just a bit odd, as it
results in _more_ precise dirty tracking for MMUs _without_ A/D bits.

To keep things simple, mark the folio dirty before access tracking comes
into play, as an access-tracked SPTE can be restored in the fast page fault
path, i.e. without holding mmu_lock. While writing SPTEs and accessing
memslots outside of mmu_lock is safe, marking a folio dirty is not. E.g. if
the fast path gets interrupted _just_ after setting a SPTE, the primary MMU
could theoretically invalidate and free a folio before KVM marks it dirty.
Unlike the shadow MMU, which waits for CPUs to respond to an IPI, the TDP
MMU only guarantees the page tables themselves won't be freed (via RCU).

Opportunistically update a few stale comments.

Cc: David Matlack
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 30 ++++--------------------------
 arch/x86/kvm/mmu/paging_tmpl.h |  6 +++---
 arch/x86/kvm/mmu/spte.c        | 20 ++++++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c     | 12 ------------
 4 files changed, 25 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0f21d6f76cab..1ae823ebd12b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -547,10 +547,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 	}

-	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
+	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
 		flush = true;
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-	}

 	return flush;
 }
@@ -593,9 +591,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);

-	if (is_dirty_spte(old_spte))
-		kvm_set_pfn_dirty(pfn);
-
 	return old_spte;
 }
@@ -1250,16 +1245,6 @@ static bool spte_clear_dirty(u64 *sptep)
 	return mmu_spte_update(sptep, spte);
 }

-static bool spte_wrprot_for_clear_dirty(u64 *sptep)
-{
-	bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
-					       (unsigned long *)sptep);
-	if (was_writable && !spte_ad_enabled(*sptep))
-		kvm_set_pfn_dirty(spte_to_pfn(*sptep));
-
-	return was_writable;
-}
-
 /*
  * Gets the GFN ready for another round of dirty logging by clearing the
  * - D bit on ad-enabled SPTEs, and
@@ -1275,7 +1260,8 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	for_each_rmap_spte(rmap_head, &iter, sptep)
 		if (spte_ad_need_write_protect(*sptep))
-			flush |= spte_wrprot_for_clear_dirty(sptep);
+			flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
+						    (unsigned long *)sptep);
 		else
 			flush |= spte_clear_dirty(sptep);
@@ -1628,14 +1614,6 @@ static bool kvm_rmap_age_gfn_range(struct kvm *kvm,
 			clear_bit((ffs(shadow_accessed_mask) - 1),
 				  (unsigned long *)sptep);
 		} else {
-			/*
-			 * Capture the dirty status of the page, so that
-			 * it doesn't get lost when the SPTE is marked
-			 * for access tracking.
-			 */
-			if (is_writable_pte(spte))
-				kvm_set_pfn_dirty(spte_to_pfn(spte));
-
 			spte = mark_spte_for_access_track(spte);
 			mmu_spte_update_no_track(sptep, spte);
 		}
@@ -3415,7 +3393,7 @@ static bool fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu,
 	 * harm.  This also avoids the TLB flush needed after setting dirty bit
 	 * so non-PML cases won't be impacted.
 	 *
-	 * Compare with set_spte where instead shadow_dirty_mask is set.
+	 * Compare with make_spte() where instead shadow_dirty_mask is set.
 	 */
 	if (!try_cmpxchg64(sptep, &old_spte, new_spte))
 		return false;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 6e7bd8921c6f..fbaae040218b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -892,9 +892,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 /*
  * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is
- * safe because:
- * - The spte has a reference to the struct page, so the pfn for a given gfn
- *   can't change unless all sptes pointing to it are nuked first.
+ * safe because SPTEs are protected by mmu_notifiers and memslot generations, so
+ * the pfn for a given gfn can't change unless all SPTEs pointing to the gfn are
+ * nuked first.
  *
  * Returns
  * < 0: failed to sync spte
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 618059b30b8b..8e8d6ee79c8b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -232,8 +232,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		 * unnecessary (and expensive).
 		 *
 		 * The same reasoning applies to dirty page/folio accounting;
-		 * KVM will mark the folio dirty using the old SPTE, thus
-		 * there's no need to immediately mark the new SPTE as dirty.
+		 * KVM marked the folio dirty when the old SPTE was created,
+		 * thus there's no need to mark the folio dirty again.
 		 *
 		 * Note, both cases rely on KVM not changing PFNs without first
 		 * zapping the old SPTE, which is guaranteed by both the shadow
@@ -266,12 +266,28 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
 		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));

+	/*
+	 * Mark the memslot dirty *after* modifying it for access tracking.
+	 * Unlike folios, memslots can be safely marked dirty out of mmu_lock,
+	 * i.e. in the fast page fault handler.
+	 */
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
 		WARN_ON_ONCE(level > PG_LEVEL_4K);
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}

+	/*
+	 * If the page that KVM got from the primary MMU is writable, i.e. if
+	 * it's host-writable, mark the page/folio dirty.  As alluded to above,
+	 * folios can't be safely marked dirty in the fast page fault handler,
+	 * and so KVM must (somewhat) speculatively mark the folio dirty even
+	 * though it isn't guaranteed to be written as KVM won't mark the folio
+	 * dirty if/when the SPTE is made writable.
+	 */
+	if (host_writable)
+		kvm_set_pfn_dirty(pfn);
+
 	*new_spte = spte;
 	return wrprot;
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 76bca7a726c1..517b384473c1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,10 +511,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (is_leaf != was_leaf)
 		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);

-	if (was_leaf && is_dirty_spte(old_spte) &&
-	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
 	 * the paging structure.  Note the WARN on the PFN changing without the
@@ -1249,13 +1245,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 					     iter->level);
 		new_spte = iter->old_spte & ~shadow_accessed_mask;
 	} else {
-		/*
-		 * Capture the dirty status of the page, so that it doesn't get
-		 * lost when the SPTE is marked for access tracking.
-		 */
-		if (is_writable_pte(iter->old_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
 		new_spte = mark_spte_for_access_track(iter->old_spte);
 		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
 							iter->old_spte, new_spte,
@@ -1596,7 +1585,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
 					       iter.old_spte,
 					       iter.old_spte & ~dbit);
-		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
 	}

 	rcu_read_unlock();
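To make the cost claim in the commit message concrete, here is a quick
standalone calculation in plain C; nothing KVM-specific is involved, the
page sizes are the only inputs:

    #include <stdio.h>

    int main(void)
    {
            const long spte = 4096;          /* 4KiB mapping granularity */
            const long huge_2m = 2L << 20;   /* one 2MiB folio */
            const long huge_1g = 1L << 30;   /* one 1GiB HugeTLB folio */

            /* Redundant kvm_set_pfn_dirty() calls when zapping at 4KiB: */
            printf("2MiB folio: %ld calls\n", huge_2m / spte);  /* 512 */
            printf("1GiB folio: %ld calls\n", huge_1g / spte);  /* 262144 */
            return 0;
    }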
From patchwork Thu Oct 10 18:23:11 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830754
Date: Thu, 10 Oct 2024 11:23:11 -0700
Message-ID: <20241010182427.1434605-10-seanjc@google.com>
Subject: [PATCH v13 09/85] KVM: x86/mmu: Mark page/folio accessed only when zapping leaf SPTEs
From: Sean Christopherson

Now that KVM doesn't clobber Accessed bits of shadow-present SPTEs, e.g.
when prefetching, mark folios as accessed only when zapping leaf SPTEs,
which is a rough heuristic for "only in response to an mmu_notifier
invalidation". Page aging and LRUs are tolerant of false negatives, i.e.
KVM doesn't need to be precise for correctness, and re-marking folios as
accessed when zapping entire roots or when zapping collapsible SPTEs is
expensive and adds very little value.

E.g. when a VM is dying, all of its memory is being freed; marking folios
accessed at that time provides no known value. Similarly, because KVM marks
folios as accessed when creating SPTEs, marking all folios as accessed when
userspace happens to delete a memslot doesn't add value. The folio was
marked accessed when the old SPTE was created, and will be marked accessed
yet again if a vCPU accesses the pfn again after reloading a new root.
Zapping collapsible SPTEs is a similar story; marking folios accessed just
because userspace disabled dirty logging is a side effect of KVM behavior,
not a deliberate goal.

As an intermediate step, a.k.a. bisection point, towards *never* marking
folios accessed when dropping SPTEs, mark folios accessed when the primary
MMU might be invalidating mappings, as such zappings are not KVM initiated,
i.e. might actually be related to page aging and LRU activity.
Note, x86 is the only KVM architecture that "double dips"; every other arch
marks pfns as accessed only when mapping into the guest, not when mapping
into the guest _and_ when removing from the guest.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/locking.rst | 76 +++++++++++++++---------------
 arch/x86/kvm/mmu/mmu.c             |  4 +-
 arch/x86/kvm/mmu/tdp_mmu.c         |  7 ++-
 3 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 20a9a37d1cdd..3d8bf40ca448 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -147,49 +147,51 @@ Then, we can ensure the dirty bitmaps is correctly set for a gfn.

 2) Dirty bit tracking

-In the origin code, the spte can be fast updated (non-atomically) if the
+In the original code, the spte can be fast updated (non-atomically) if the
 spte is read-only and the Accessed bit has already been set since the
 Accessed bit and Dirty bit can not be lost.

 But it is not true after fast page fault since the spte can be marked
 writable between reading spte and updating spte. Like below case:

-+------------------------------------------------------------------------+
-| At the beginning::                                                     |
-|                                                                        |
-|         spte.W = 0                                                     |
-|         spte.Accessed = 1                                              |
-+------------------------------------+-----------------------------------+
-| CPU 0:                             | CPU 1:                            |
-+------------------------------------+-----------------------------------+
-| In mmu_spte_clear_track_bits()::   |                                   |
-|                                    |                                   |
-|  old_spte = *spte;                 |                                   |
-|                                    |                                   |
-|                                    |                                   |
-|  /* 'if' condition is satisfied. */|                                   |
-|  if (old_spte.Accessed == 1 &&     |                                   |
-|      old_spte.W == 0)              |                                   |
-|     spte = 0ull;                   |                                   |
-+------------------------------------+-----------------------------------+
-|                                    | on fast page fault path::         |
-|                                    |                                   |
-|                                    |    spte.W = 1                     |
-|                                    |                                   |
-|                                    | memory write on the spte::        |
-|                                    |                                   |
-|                                    |    spte.Dirty = 1                 |
-+------------------------------------+-----------------------------------+
-|  ::                                |                                   |
-|                                    |                                   |
-|   else                             |                                   |
-|     old_spte = xchg(spte, 0ull)    |                                   |
-|   if (old_spte.Accessed == 1)      |                                   |
-|      kvm_set_pfn_accessed(spte.pfn);|                                  |
-|   if (old_spte.Dirty == 1)         |                                   |
-|      kvm_set_pfn_dirty(spte.pfn);  |                                   |
-|      OOPS!!!                       |                                   |
-+------------------------------------+-----------------------------------+
++-------------------------------------------------------------------------+
+| At the beginning::                                                      |
+|                                                                         |
+|         spte.W = 0                                                      |
+|         spte.Accessed = 1                                               |
++-------------------------------------+-----------------------------------+
+| CPU 0:                              | CPU 1:                            |
++-------------------------------------+-----------------------------------+
+| In mmu_spte_update()::              |                                   |
+|                                     |                                   |
+|  old_spte = *spte;                  |                                   |
+|                                     |                                   |
+|                                     |                                   |
+|  /* 'if' condition is satisfied. */ |                                   |
+|  if (old_spte.Accessed == 1 &&      |                                   |
+|      old_spte.W == 0)               |                                   |
+|     spte = new_spte;                |                                   |
++-------------------------------------+-----------------------------------+
+|                                     | on fast page fault path::         |
+|                                     |                                   |
+|                                     |    spte.W = 1                     |
+|                                     |                                   |
+|                                     | memory write on the spte::        |
+|                                     |                                   |
+|                                     |    spte.Dirty = 1                 |
++-------------------------------------+-----------------------------------+
+|  ::                                 |                                   |
+|                                     |                                   |
+|   else                              |                                   |
+|     old_spte = xchg(spte, new_spte);|                                   |
+|   if (old_spte.Accessed &&          |                                   |
+|       !new_spte.Accessed)           |                                   |
+|     flush = true;                   |                                   |
+|   if (old_spte.Dirty &&             |                                   |
+|       !new_spte.Dirty)              |                                   |
+|     flush = true;                   |                                   |
+|     OOPS!!!                         |                                   |
++-------------------------------------+-----------------------------------+

 The Dirty bit is lost in this case.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1ae823ebd12b..04228a7da69a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -542,10 +542,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	 * to guarantee consistency between TLB and page tables.
 	 */

-	if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
+	if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte))
 		flush = true;
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
-	}

 	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
 		flush = true;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 517b384473c1..8aa0d7a7602b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -520,10 +520,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (was_present && !was_leaf &&
 	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
-
-	if (was_leaf && is_accessed_spte(old_spte) &&
-	    (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }

 static inline int __must_check __tdp_mmu_set_spte_atomic(struct tdp_iter *iter,
@@ -865,6 +861,9 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);

+		if (is_accessed_spte(iter.old_spte))
+			kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
+
 		/*
 		 * Zappings SPTEs in invalid roots doesn't require a TLB flush,
 		 * see kvm_tdp_mmu_zap_invalidated_roots() for details.
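The heuristic above fits in a few lines; the following standalone C sketch
models only the policy (the enum, function, and strings are illustrative,
not KVM's types):

    #include <stdbool.h>
    #include <stdio.h>

    enum zap_reason { ZAP_MMU_NOTIFIER, ZAP_ROOT_TEARDOWN, ZAP_COLLAPSIBLE };

    /* Only leaf zaps, which approximate mmu_notifier invalidations,
     * propagate the Accessed bit to the primary MMU. */
    static void zap_leaf(bool accessed, enum zap_reason why)
    {
            if (why == ZAP_MMU_NOTIFIER && accessed)
                    puts("mark pfn accessed");
            else
                    puts("skip: aging info was already captured at map time");
    }

    int main(void)
    {
            zap_leaf(true, ZAP_MMU_NOTIFIER);   /* relevant to page aging */
            zap_leaf(true, ZAP_ROOT_TEARDOWN);  /* VM death: no known value */
            return 0;
    }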
From patchwork Thu Oct 10 18:23:12 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830755
Date: Thu, 10 Oct 2024 11:23:12 -0700
Message-ID: <20241010182427.1434605-11-seanjc@google.com>
Subject: [PATCH v13 10/85] KVM: x86/mmu: Use gfn_to_page_many_atomic() when prefetching indirect PTEs
From: Sean Christopherson

Use gfn_to_page_many_atomic() instead of gfn_to_pfn_memslot_atomic() when
prefetching indirect PTEs (direct_pte_prefetch_many() already uses the
"to page" APIs). Functionally, the two are subtly equivalent, as the
"to pfn" API short-circuits hva_to_pfn() if hva_to_pfn_fast() fails, i.e. is
just a wrapper for get_user_page_fast_only()/get_user_pages_fast_only().
Switching to the "to page" API will allow dropping the @atomic parameter
from the entire hva_to_pfn() callchain.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fbaae040218b..36b2607280f0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -535,8 +535,8 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 {
 	struct kvm_memory_slot *slot;
 	unsigned pte_access;
+	struct page *page;
 	gfn_t gfn;
-	kvm_pfn_t pfn;

 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
@@ -549,12 +549,11 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (!slot)
 		return false;

-	pfn = gfn_to_pfn_memslot_atomic(slot, gfn);
-	if (is_error_pfn(pfn))
+	if (gfn_to_page_many_atomic(slot, gfn, &page, 1) != 1)
 		return false;

-	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL);
-	kvm_release_pfn_clean(pfn);
+	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
+	kvm_release_page_clean(page);

 	return true;
 }
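The equivalence claim can be illustrated with a standalone analogue: both
helpers bottom out in one fast-only lookup, the "to page" flavor merely
converts the result. fast_only_lookup() below is a stand-in for
get_user_pages_fast_only(), not a real API:

    #include <stdio.h>

    struct page { unsigned long pfn; };

    /* Stand-in for get_user_pages_fast_only(): the one lookup both the
     * "to pfn" and "to page" helpers described above reduce to. */
    static int fast_only_lookup(struct page **out)
    {
            static struct page p = { .pfn = 42 };
            *out = &p;
            return 1;
    }

    /* "to page" style, as in the patched prefetch_gpte(): grab the page,
     * derive the pfn, release the page when done. */
    static long prefetch_one(void)
    {
            struct page *page;

            if (fast_only_lookup(&page) != 1)
                    return -1;
            return (long)page->pfn;  /* page_to_pfn() analogue */
    }

    int main(void)
    {
            printf("pfn = %ld\n", prefetch_one());
            return 0;
    }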
From patchwork Thu Oct 10 18:23:13 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830756
Date: Thu, 10 Oct 2024 11:23:13 -0700
Message-ID: <20241010182427.1434605-12-seanjc@google.com>
Subject: [PATCH v13 11/85] KVM: Rename gfn_to_page_many_atomic() to kvm_prefetch_pages()
From: Sean Christopherson

Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
communicate its true purpose, as the "atomic" aspect is essentially a side
effect of the fact that x86 uses the API while holding mmu_lock. E.g. even
if mmu_lock weren't held, KVM wouldn't want to fault-in pages, as the goal
is to opportunistically grab surrounding pages that have already been
accessed and/or dirtied by the host, and to do so quickly.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 include/linux/kvm_host.h       | 4 ++--
 virt/kvm/kvm_main.c            | 6 +++---
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 04228a7da69a..5fe45ab0e818 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2958,7 +2958,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	if (!slot)
 		return -1;

-	ret = gfn_to_page_many_atomic(slot, gfn, pages, end - start);
+	ret = kvm_prefetch_pages(slot, gfn, pages, end - start);
 	if (ret <= 0)
 		return -1;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 36b2607280f0..143b7e9f26dc 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -549,7 +549,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (!slot)
 		return false;

-	if (gfn_to_page_many_atomic(slot, gfn, &page, 1) != 1)
+	if (kvm_prefetch_pages(slot, gfn, &page, 1) != 1)
 		return false;

 	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ab4485b2bddc..56e7cde8c8b8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1207,8 +1207,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);

-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages);
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages);

 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2032292df0b0..957b4a6c9254 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3053,8 +3053,8 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);

-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages)
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages)
 {
 	unsigned long addr;
 	gfn_t entry = 0;
@@ -3068,7 +3068,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
 }
-EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
+EXPORT_SYMBOL_GPL(kvm_prefetch_pages);

 /*
  * Do not use this helper unless you are absolutely certain the gfn _must_ be
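The calling convention the new name is meant to evoke is "grab what's
resident, never fault anything in, and bail if you can't get everything".
A hedged standalone sketch of that contract, with dummy types in place of
KVM's:

    #include <stdio.h>

    typedef unsigned long gfn_t;          /* dummy stand-ins for KVM types */
    struct page { int id; };

    /* Fast-only grab of up to nr pages; never faults anything in. */
    static int kvm_prefetch_pages_model(gfn_t gfn, struct page **pages, int nr)
    {
            (void)gfn;
            for (int i = 0; i < nr; i++)
                    pages[i] = NULL;      /* pretend nothing was resident */
            return 0;                     /* fewer than nr: caller must bail */
    }

    int main(void)
    {
            struct page *page;

            /* Single-page caller, mirroring FNAME(prefetch_gpte): */
            if (kvm_prefetch_pages_model(1, &page, 1) != 1)
                    puts("not resident; skip the prefetch rather than fault");
            return 0;
    }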
From patchwork Thu Oct 10 18:23:14 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830757
Date: Thu, 10 Oct 2024 11:23:14 -0700
Message-ID: <20241010182427.1434605-13-seanjc@google.com>
Subject: [PATCH v13 12/85] KVM: Drop @atomic param from gfn=>pfn and hva=>pfn APIs
From:
Sean Christopherson

Drop @atomic from the myriad "to_pfn" APIs now that all callers pass
"false", and remove a comment blurb about KVM running only the "GUP fast"
part in atomic context.

No functional change intended.

Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/locking.rst     |  4 +--
 arch/arm64/kvm/mmu.c                   |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/x86/kvm/mmu/mmu.c                 | 12 ++++----
 include/linux/kvm_host.h               |  4 +--
 virt/kvm/kvm_main.c                    | 39 ++++++--------------------
 virt/kvm/kvm_mm.h                      |  4 +--
 virt/kvm/pfncache.c                    |  2 +-
 9 files changed, 23 insertions(+), 48 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 3d8bf40ca448..f463ac42ac7a 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -135,8 +135,8 @@ We dirty-log for gfn1, that means gfn2 is lost in dirty-bitmap.
 For direct sp, we can easily avoid it since the spte of direct sp is fixed
 to gfn.  For indirect sp, we disabled fast page fault for simplicity.

-A solution for indirect sp could be to pin the gfn, for example via
-kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg.  After the pinning:
+A solution for indirect sp could be to pin the gfn before the cmpxchg.  After
+the pinning:

 - We have held the refcount of pfn; that means the pfn can not be freed and
   be reused for another gfn.
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a509b63bd4dd..a6e62cc9015c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1569,7 +1569,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);

-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 				   write_fault, &writable, NULL);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 1b51b1c4713b..8cd02ca4b1b8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -613,7 +613,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 		write_ok = true;
 	} else {
 		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 					   writing, &write_ok, NULL);
 		if (is_error_noslot_pfn(pfn))
 			return -EFAULT;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 408d98f8a514..26a969e935e3 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -852,7 +852,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	unsigned long pfn;

 	/* Call KVM generic code to do the slow-path check */
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
+	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
 				   writing, upgrade_p, NULL);
 	if (is_error_noslot_pfn(pfn))
 		return -EFAULT;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5fe45ab0e818..0e235f276ee5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4380,9 +4380,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return kvm_faultin_pfn_private(vcpu, fault);

 	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, false,
-					  &async, fault->write,
-					  &fault->map_writable, &fault->hva);
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, &async,
+					  fault->write, &fault->map_writable,
+					  &fault->hva);
 	if (!async)
 		return RET_PF_CONTINUE; /* *pfn has correct page already */
@@ -4402,9 +4402,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
	 */
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
-					  NULL, fault->write,
-					  &fault->map_writable, &fault->hva);
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, NULL,
+					  fault->write, &fault->map_writable,
+					  &fault->hva);
 	return RET_PF_CONTINUE;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 56e7cde8c8b8..2faafc7a56ae 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1232,9 +1232,8 @@ kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
+			       bool interruptible, bool *async,
 			       bool write_fault, bool *writable, hva_t *hva);

 void kvm_release_pfn_clean(kvm_pfn_t pfn);
@@ -1315,7 +1314,6 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn);

 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
-kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 957b4a6c9254..0bc077213d3e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2756,8 +2756,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)

 /*
  * The fast path to get the writable pfn which will be stored in @pfn,
- * true indicates success, otherwise false is returned. It's also the
- * only part that runs if we can in atomic context.
+ * true indicates success, otherwise false is returned.
  */
 static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 			    bool *writable, kvm_pfn_t *pfn)
@@ -2922,7 +2921,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 /*
  * Pin guest page in memory and return its pfn.
  * @addr: host virtual address which maps memory to the guest
- * @atomic: whether this function is forbidden from sleeping
  * @interruptible: whether the process can be interrupted by non-fatal signals
  * @async: whether this function need to wait IO complete if the
  *         host page is not in the memory
@@ -2934,22 +2932,16 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
 */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	int npages, r;

-	/* we can do it either atomically or asynchronously, not both */
-	BUG_ON(atomic && async);
-
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;

-	if (atomic)
-		return KVM_PFN_ERR_FAULT;
-
 	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
 				 writable, &pfn);
 	if (npages == 1)
@@ -2986,7 +2978,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 }

 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
+			       bool interruptible, bool *async,
 			       bool write_fault, bool *writable, hva_t *hva)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
@@ -3008,39 +3000,24 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 		writable = NULL;
 	}

-	return hva_to_pfn(addr, atomic, interruptible, async, write_fault,
-			  writable);
+	return hva_to_pfn(addr, interruptible, async, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);

 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    NULL, write_fault, writable, NULL);
+	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
+				    write_fault, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);

 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true,
-				    NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);

-kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
-{
-	return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true,
-				    NULL, NULL);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
-
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..a3fa86f60d6c 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,8 @@
 #define KVM_MMU_UNLOCK(kvm)		spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */

-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+		     bool write_fault, bool *writable);

 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index f0039efb9e1e..58c706a610e5 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -198,7 +198,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	}

 	/* We always request a writeable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL);
+	new_pfn = hva_to_pfn(gpc->uhva, false, NULL, true, NULL);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
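With the parameter gone, the control flow of hva_to_pfn() reduces to "try
the fast path, else take the sleepable slow path". Paraphrased as a
standalone skeleton (illustrative names only, not the kernel source):

    #include <stdbool.h>
    #include <stdio.h>

    static bool fast_lookup(long *pfn) { *pfn = -1; return false; }
    static long slow_lookup(void)      { return 42; } /* may block on I/O */

    /* Skeleton of the post-patch shape: no atomic early-out remains. */
    static long lookup_pfn(void)
    {
            long pfn;

            if (fast_lookup(&pfn))  /* GUP-fast analogue */
                    return pfn;
            return slow_lookup();   /* always allowed to sleep now */
    }

    int main(void)
    {
            printf("pfn = %ld\n", lookup_pfn());
            return 0;
    }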
From patchwork Thu Oct 10 18:23:15 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830758
Date: Thu, 10 Oct 2024 11:23:15 -0700
Message-ID: <20241010182427.1434605-14-seanjc@google.com>
Subject: [PATCH v13 13/85] KVM: Annotate that all paths in hva_to_pfn() might sleep
From: Sean Christopherson

Now that hva_to_pfn() no longer supports being called in atomic context,
move the might_sleep() annotation from hva_to_pfn_slow() to hva_to_pfn().

Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0bc077213d3e..17acc75990a5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2804,8 +2804,6 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	struct page *page;
 	int npages;

-	might_sleep();
-
 	if (writable)
 		*writable = write_fault;

@@ -2939,6 +2937,8 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 	kvm_pfn_t pfn;
 	int npages, r;

+	might_sleep();
+
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
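The move follows a general rule: once every path through a function may
sleep, assert that at the entry point so even callers that would have hit
only the fast path are caught. A standalone analogue using assert() in
place of might_sleep() (all names here are illustrative):

    #include <assert.h>
    #include <stdbool.h>

    static bool may_sleep_here = true;  /* models the scheduler's check */

    static long fast_path(void) { return 1; }
    static long slow_path(void) { return 2; }  /* can actually sleep */

    static long lookup(void)
    {
            long ret;

            /* Annotate up front, like the moved might_sleep()... */
            assert(may_sleep_here);

            ret = fast_path();
            if (ret)
                    return ret;
            /* ...rather than only inside the path that really sleeps. */
            return slow_path();
    }

    int main(void) { return lookup() == 1 ? 0 : 1; }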
From patchwork Thu Oct 10 18:23:16 2024
Date: Thu, 10 Oct 2024 11:23:16 -0700
Message-ID: <20241010182427.1434605-15-seanjc@google.com>
Subject: [PATCH v13 14/85] KVM: Return ERR_SIGPENDING from hva_to_pfn() if GUP returns -EAGAIN
From: Sean Christopherson

Treat an -EAGAIN return from GUP the same as -EINTR and immediately report
to the caller that a signal is pending.  GUP only returns -EAGAIN if the
_initial_ mmap_read_lock_killable() fails, which in turn only fails if a
signal is pending.

Note, rwsem_down_read_slowpath() actually returns -EINTR, so GUP is really
just making life harder than it needs to be.  And the call to
mmap_read_lock_killable() in the retry path returns its -errno verbatim,
i.e. GUP (and thus KVM) is already handling locking failure this way, but
only some of the time.

Suggested-by: Paolo Bonzini
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 17acc75990a5..ebba5d22db2d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2946,7 +2946,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 				 writable, &pfn);
 	if (npages == 1)
 		return pfn;
-	if (npages == -EINTR)
+	if (npages == -EINTR || npages == -EAGAIN)
 		return KVM_PFN_ERR_SIGPENDING;
 
 	mmap_read_lock(current->mm);
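
On the consumer side, KVM_PFN_ERR_SIGPENDING is what lets a fault handler
turn the pending signal into a clean exit to userspace. A hedged sketch of
that pattern, modeled on x86's error-pfn handling (simplified; the
surrounding function is elided):

	if (is_sigpending_pfn(fault->pfn)) {
		/* Flag the signal exit and let the vCPU return to userspace. */
		kvm_handle_signal_exit(vcpu);
		return -EINTR;
	}
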
From patchwork Thu Oct 10 18:23:17 2024
Date: Thu, 10 Oct 2024 11:23:17 -0700
Message-ID: <20241010182427.1434605-16-seanjc@google.com>
Subject: [PATCH v13 15/85] KVM: Drop extra GUP (via check_user_page_hwpoison()) to detect poisoned page
From: Sean Christopherson

Remove check_user_page_hwpoison() as it's effectively dead code.  Prior to
commit 234b239bea39 ("kvm: Faults which trigger IO release the mmap_sem"),
hva_to_pfn_slow() wasn't actually a slow path in all cases, i.e. would do
get_user_pages_fast() without ever doing slow GUP with FOLL_HWPOISON.

Now that hva_to_pfn_slow() is a straight shot to get_user_pages_unlocked(),
and unconditionally passes FOLL_HWPOISON, it is impossible for hva_to_pfn()
to get an -errno that needs to be morphed to -EHWPOISON.

There are essentially four cases in KVM:

 - npages == 0, then FOLL_NOWAIT, a.k.a. @async, must be true, and thus
   check_user_page_hwpoison() will not be called
 - npages == 1 || npages == -EHWPOISON, all good
 - npages == -EINTR || npages == -EAGAIN, bail early, all good
 - everything else, including -EFAULT, can go down the vma_lookup() path,
   as npages < 0 means KVM went through hva_to_pfn_slow() which passes
   FOLL_HWPOISON

Suggested-by: Paolo Bonzini
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ebba5d22db2d..87f81e74cbc0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2746,14 +2746,6 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 	return gfn_to_hva_memslot_prot(slot, gfn, writable);
 }
 
-static inline int check_user_page_hwpoison(unsigned long addr)
-{
-	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
-
-	rc = get_user_pages(addr, 1, flags, NULL);
-	return rc == -EHWPOISON;
-}
-
 /*
  * The fast path to get the writable pfn which will be stored in @pfn,
  * true indicates success, otherwise false is returned.
@@ -2948,14 +2940,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 		return pfn;
 	if (npages == -EINTR || npages == -EAGAIN)
 		return KVM_PFN_ERR_SIGPENDING;
+	if (npages == -EHWPOISON)
+		return KVM_PFN_ERR_HWPOISON;
 
 	mmap_read_lock(current->mm);
-	if (npages == -EHWPOISON ||
-	    (!async && check_user_page_hwpoison(addr))) {
-		pfn = KVM_PFN_ERR_HWPOISON;
-		goto exit;
-	}
-
 retry:
 	vma = vma_lookup(current->mm, addr);
 
@@ -2972,7 +2960,6 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 			*async = true;
 		pfn = KVM_PFN_ERR_FAULT;
 	}
-exit:
 	mmap_read_unlock(current->mm);
 	return pfn;
 }
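
The four cases above map one-to-one onto the post-patch control flow in
hva_to_pfn(); condensed from the diff, the error triage now reads:

	if (npages == 1)
		return pfn;				/* resolved */
	if (npages == -EINTR || npages == -EAGAIN)
		return KVM_PFN_ERR_SIGPENDING;		/* signal pending, bail early */
	if (npages == -EHWPOISON)
		return KVM_PFN_ERR_HWPOISON;		/* slow GUP already used FOLL_HWPOISON */

	/*
	 * Anything else (npages == 0, -EFAULT, ...) falls through to the
	 * mmap_read_lock() + vma_lookup() path below.
	 */
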
From patchwork Thu Oct 10 18:23:18 2024
Date: Thu, 10 Oct 2024 11:23:18 -0700
Message-ID: <20241010182427.1434605-17-seanjc@google.com>
Subject: [PATCH v13 16/85] KVM: Replace "async" pointer in gfn=>pfn with "no_wait" and error code
From: Sean Christopherson

From: David Stevens

Add a pfn error code to communicate that hva_to_pfn() failed because I/O
was needed and disallowed, and convert @async to a constant @no_wait
boolean.
This will allow eliminating the @no_wait param by having callers pass in
FOLL_NOWAIT along with other FOLL_* flags.

Tested-by: Alex Bennée
Signed-off-by: David Stevens
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c   | 18 +++++++++++-------
 include/linux/kvm_host.h |  3 ++-
 virt/kvm/kvm_main.c      | 27 ++++++++++++++-------------
 virt/kvm/kvm_mm.h        |  2 +-
 virt/kvm/pfncache.c      |  4 ++--
 5 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0e235f276ee5..fa8f3fb7c14b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4374,17 +4374,21 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	bool async;
-
 	if (fault->is_private)
 		return kvm_faultin_pfn_private(vcpu, fault);
 
-	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, &async,
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
 					  fault->write, &fault->map_writable,
 					  &fault->hva);
-	if (!async)
-		return RET_PF_CONTINUE; /* *pfn has correct page already */
+
+	/*
+	 * If resolving the page failed because I/O is needed to fault-in the
+	 * page, then either set up an asynchronous #PF to do the I/O, or if
+	 * doing an async #PF isn't possible, retry with I/O allowed.  All
+	 * other failures are terminal, i.e. retrying won't help.
+	 */
+	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+		return RET_PF_CONTINUE;
 
 	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4402,7 +4406,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, NULL,
+	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true,
 					  fault->write, &fault->map_writable,
 					  &fault->hva);
 	return RET_PF_CONTINUE;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2faafc7a56ae..071a0a1f1c60 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -97,6 +97,7 @@
 #define KVM_PFN_ERR_HWPOISON	(KVM_PFN_ERR_MASK + 1)
 #define KVM_PFN_ERR_RO_FAULT	(KVM_PFN_ERR_MASK + 2)
 #define KVM_PFN_ERR_SIGPENDING	(KVM_PFN_ERR_MASK + 3)
+#define KVM_PFN_ERR_NEEDS_IO	(KVM_PFN_ERR_MASK + 4)
 
 /*
  * error pfns indicate that the gfn is in slot but faild to
@@ -1233,7 +1234,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool *async,
+			       bool interruptible, bool no_wait,
 			       bool write_fault, bool *writable, hva_t *hva);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 87f81e74cbc0..dd5839abef6c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2778,7 +2778,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
  */
-static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
+static int hva_to_pfn_slow(unsigned long addr, bool no_wait, bool write_fault,
 			   bool interruptible, bool *writable, kvm_pfn_t *pfn)
 {
 	/*
@@ -2801,7 +2801,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 
 	if (write_fault)
 		flags |= FOLL_WRITE;
-	if (async)
+	if (no_wait)
 		flags |= FOLL_NOWAIT;
 	if (interruptible)
 		flags |= FOLL_INTERRUPTIBLE;
@@ -2912,8 +2912,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * Pin guest page in memory and return its pfn.
  * @addr: host virtual address which maps memory to the guest
  * @interruptible: whether the process can be interrupted by non-fatal signals
- * @async: whether this function need to wait IO complete if the
- *	    host page is not in the memory
+ * @no_wait: whether or not this function need to wait IO complete if the
+ *	      host page is not in the memory
  * @write_fault: whether we should get a writable host page
  * @writable: whether it allows to map a writable host page for !@write_fault
  *
@@ -2922,7 +2922,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 		     bool write_fault, bool *writable)
 {
 	struct vm_area_struct *vma;
@@ -2934,7 +2934,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
 
-	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
+	npages = hva_to_pfn_slow(addr, no_wait, write_fault, interruptible,
 				 writable, &pfn);
 	if (npages == 1)
 		return pfn;
@@ -2956,16 +2956,17 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
 		if (r < 0)
 			pfn = KVM_PFN_ERR_FAULT;
 	} else {
-		if (async && vma_is_valid(vma, write_fault))
-			*async = true;
-		pfn = KVM_PFN_ERR_FAULT;
+		if (no_wait && vma_is_valid(vma, write_fault))
+			pfn = KVM_PFN_ERR_NEEDS_IO;
+		else
+			pfn = KVM_PFN_ERR_FAULT;
 	}
 	mmap_read_unlock(current->mm);
 	return pfn;
 }
 
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool *async,
+			       bool interruptible, bool no_wait,
 			       bool write_fault, bool *writable, hva_t *hva)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
@@ -2987,21 +2988,21 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 		writable = NULL;
 	}
 
-	return hva_to_pfn(addr, interruptible, async, write_fault, writable);
+	return hva_to_pfn(addr, interruptible, no_wait, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
+	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
 				    write_fault, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index a3fa86f60d6c..51f3fee4ca3f 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,7 +20,7 @@
 #define KVM_MMU_UNLOCK(kvm)		spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
+kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 		     bool write_fault, bool *writable);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 58c706a610e5..32dc61f48c81 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -197,8 +197,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		cond_resched();
 	}
 
-	/* We always request a writeable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, NULL, true, NULL);
+	/* We always request a writable mapping */
+	new_pfn = hva_to_pfn(gpc->uhva, false, false, true, NULL);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
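
The net effect of the new error code is that the async-#PF decision becomes
a plain return-value check instead of an out-param handshake; schematically,
condensed from the mmu.c hunk above:

	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false,
					  /*no_wait=*/true, fault->write,
					  &fault->map_writable, &fault->hva);

	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
		return RET_PF_CONTINUE;	/* resolved, or a terminal error */

	/* I/O is required: queue an async #PF, or retry with waiting allowed. */
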
From patchwork Thu Oct 10 18:23:19 2024
Date: Thu, 10 Oct 2024 11:23:19 -0700
Message-ID: <20241010182427.1434605-18-seanjc@google.com>
Subject: [PATCH v13 17/85] KVM: x86/mmu: Drop kvm_page_fault.hva, i.e. don't track intermediate hva
From: Sean Christopherson

Remove kvm_page_fault.hva as it is never read, only written.  This will
allow removing the @hva param from __gfn_to_pfn_memslot().

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 5 ++---
 arch/x86/kvm/mmu/mmu_internal.h | 2 --
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fa8f3fb7c14b..c67228b46bd5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3294,7 +3294,6 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 	fault->slot = NULL;
 	fault->pfn = KVM_PFN_NOSLOT;
 	fault->map_writable = false;
-	fault->hva = KVM_HVA_ERR_BAD;
 
 	/*
 	 * If MMIO caching is disabled, emulate immediately without
@@ -4379,7 +4378,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
 					  fault->write, &fault->map_writable,
-					  &fault->hva);
+					  NULL);
 
 	/*
 	 * If resolving the page failed because I/O is needed to fault-in the
@@ -4408,7 +4407,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 */
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true,
 					  fault->write, &fault->map_writable,
-					  &fault->hva);
+					  NULL);
 	return RET_PF_CONTINUE;
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4da83544c4e1..633aedec3c2e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -238,7 +238,6 @@ struct kvm_page_fault {
 	/* Outputs of kvm_faultin_pfn.  */
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
-	hva_t hva;
 	bool map_writable;
 
 	/*
@@ -313,7 +312,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.is_private = err & PFERR_PRIVATE_ACCESS,
 
 		.pfn = KVM_PFN_ERR_FAULT,
-		.hva = KVM_HVA_ERR_BAD,
 	};
 	int r;
From patchwork Thu Oct 10 18:23:20 2024
Date: Thu, 10 Oct 2024 11:23:20 -0700
Message-ID: <20241010182427.1434605-19-seanjc@google.com>
Subject: [PATCH v13 18/85] KVM: Drop unused "hva" pointer from __gfn_to_pfn_memslot()
From: Sean Christopherson

Drop @hva from __gfn_to_pfn_memslot() now that all callers pass NULL.

No functional change intended.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c                   | 2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c    | 2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +-
 arch/x86/kvm/mmu/mmu.c                 | 6 ++----
 include/linux/kvm_host.h               | 2 +-
 virt/kvm/kvm_main.c                    | 9 +++------
 6 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a6e62cc9015c..dd221587fcca 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1570,7 +1570,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmap_read_unlock(current->mm);
 
 	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   write_fault, &writable, NULL);
+				   write_fault, &writable);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8cd02ca4b1b8..2f1d58984b41 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -614,7 +614,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 	} else {
 		/* Call KVM generic code to do the slow-path check */
 		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, &write_ok, NULL);
+					   writing, &write_ok);
 		if (is_error_noslot_pfn(pfn))
 			return -EFAULT;
 		page = NULL;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 26a969e935e3..8304b6f8fe45 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -853,7 +853,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 
 	/* Call KVM generic code to do the slow-path check */
 	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   writing, upgrade_p, NULL);
+				   writing, upgrade_p);
 	if (is_error_noslot_pfn(pfn))
 		return -EFAULT;
 	page = NULL;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c67228b46bd5..28f2b842d6ca 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4377,8 +4377,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return kvm_faultin_pfn_private(vcpu, fault);
 
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
-					  fault->write, &fault->map_writable,
-					  NULL);
+					  fault->write, &fault->map_writable);
 
 	/*
 	 * If resolving the page failed because I/O is needed to fault-in the
@@ -4406,8 +4405,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true,
-					  fault->write, &fault->map_writable,
-					  NULL);
+					  fault->write, &fault->map_writable);
 	return RET_PF_CONTINUE;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 071a0a1f1c60..cbc7b9c04c14 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1235,7 +1235,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable, hva_t *hva);
+			       bool write_fault, bool *writable);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index dd5839abef6c..10071f31b2ca 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2967,13 +2967,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable, hva_t *hva)
+			       bool write_fault, bool *writable)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
 
-	if (hva)
-		*hva = addr;
-
 	if (kvm_is_error_hva(addr)) {
 		if (writable)
 			*writable = false;
@@ -2996,13 +2993,13 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
 	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    write_fault, writable, NULL);
+				    write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
From patchwork Thu Oct 10 18:23:21 2024
Date: Thu, 10 Oct 2024 11:23:21 -0700
Message-ID: <20241010182427.1434605-20-seanjc@google.com>
Subject: [PATCH v13 19/85] KVM: Introduce kvm_follow_pfn() to eventually replace "gfn_to_pfn" APIs
From: Sean Christopherson

From: David Stevens

Introduce kvm_follow_pfn() to eventually supplant the various "gfn_to_pfn"
APIs, albeit by adding more wrappers.  The primary motivation of the new
helper is to pass a structure instead of an ever changing set of parameters,
e.g. so that tweaking the behavior, inputs, and/or outputs of the "to pfn"
helpers doesn't require churning half of KVM.

In the more distant future, the APIs exposed to arch code could also follow
suit, e.g. by adding something akin to x86's "struct kvm_page_fault" when
faulting in guest memory.  But for now, the goal is purely to clean up KVM's
"internal" MMU code.

As part of the conversion, replace the write_fault, interruptible, and
no-wait boolean flags with FOLL_WRITE, FOLL_INTERRUPTIBLE, and FOLL_NOWAIT
respectively.  Collecting the various FOLL_* flags into a single field will
again ease the pain of passing new flags.

Tested-by: Alex Bennée
Signed-off-by: David Stevens
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 162 +++++++++++++++++++++++---------------
 virt/kvm/kvm_mm.h   |  20 +++++-
 virt/kvm/pfncache.c |   9 ++-
 3 files changed, 109 insertions(+), 82 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 10071f31b2ca..52629ac26119 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2750,8 +2750,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
  * The fast path to get the writable pfn which will be stored in @pfn,
  * true indicates success, otherwise false is returned.
  */
-static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-			    bool *writable, kvm_pfn_t *pfn)
+static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	struct page *page[1];
 
@@ -2760,14 +2759,13 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 	 * or the caller allows to map a writable pfn for a read fault
 	 * request.
 	 */
-	if (!(write_fault || writable))
+	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
 		return false;
 
-	if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
 		*pfn = page_to_pfn(page[0]);
-
-		if (writable)
-			*writable = true;
+		if (kfp->map_writable)
+			*kfp->map_writable = true;
 		return true;
 	}
 
@@ -2778,8 +2776,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
  */
-static int hva_to_pfn_slow(unsigned long addr, bool no_wait, bool write_fault,
-			   bool interruptible, bool *writable, kvm_pfn_t *pfn)
+static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	/*
 	 * When a VCPU accesses a page that is not mapped into the secondary
@@ -2792,34 +2789,30 @@ static int hva_to_pfn_slow(unsigned long addr, bool no_wait, bool write_fault,
 	 * Note that get_user_page_fast_only() and FOLL_WRITE for now
 	 * implicitly honor NUMA hinting faults and don't need this flag.
 	 */
-	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT;
-	struct page *page;
+	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT | kfp->flags;
+	struct page *page, *wpage;
 	int npages;
 
-	if (writable)
-		*writable = write_fault;
-
-	if (write_fault)
-		flags |= FOLL_WRITE;
-	if (no_wait)
-		flags |= FOLL_NOWAIT;
-	if (interruptible)
-		flags |= FOLL_INTERRUPTIBLE;
-
-	npages = get_user_pages_unlocked(addr, 1, &page, flags);
+	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
 	if (npages != 1)
 		return npages;
 
+	if (!kfp->map_writable)
+		goto out;
+
+	if (kfp->flags & FOLL_WRITE) {
+		*kfp->map_writable = true;
+		goto out;
+	}
+
 	/* map read fault as writable if possible */
-	if (unlikely(!write_fault) && writable) {
-		struct page *wpage;
-
-		if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
-			*writable = true;
-			put_page(page);
-			page = wpage;
-		}
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
+		*kfp->map_writable = true;
+		put_page(page);
+		page = wpage;
 	}
+
+out:
 	*pfn = page_to_pfn(page);
 	return npages;
 }
@@ -2846,10 +2839,10 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn)
 }
 
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
-			       unsigned long addr, bool write_fault,
-			       bool *writable, kvm_pfn_t *p_pfn)
+			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
-	struct follow_pfnmap_args args = { .vma = vma, .address = addr };
+	struct follow_pfnmap_args args = { .vma = vma, .address = kfp->hva };
+	bool write_fault = kfp->flags & FOLL_WRITE;
 	kvm_pfn_t pfn;
 	int r;
 
@@ -2860,7 +2853,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		 * not call the fault handler, so do it here.
 		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current->mm, addr,
+		r = fixup_user_fault(current->mm, kfp->hva,
 				     (write_fault ? FAULT_FLAG_WRITE : 0),
 				     &unlocked);
 		if (unlocked)
@@ -2878,8 +2871,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (writable)
-		*writable = args.writable;
+	if (kfp->map_writable)
+		*kfp->map_writable = args.writable;
 	pfn = args.pfn;
 
 	/*
@@ -2908,22 +2901,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	return r;
 }
 
-/*
- * Pin guest page in memory and return its pfn.
- * @addr: host virtual address which maps memory to the guest
- * @interruptible: whether the process can be interrupted by non-fatal signals
- * @no_wait: whether or not this function need to wait IO complete if the
- *	      host page is not in the memory
- * @write_fault: whether we should get a writable host page
- * @writable: whether it allows to map a writable host page for !@write_fault
- *
- * The function will map a writable host page for these two cases:
- * 1): @write_fault = true
- * 2): @write_fault = false && @writable, @writable will tell the caller
- *     whether the mapping is writable.
- */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
-		     bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
@@ -2931,11 +2909,10 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 
 	might_sleep();
 
-	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+	if (hva_to_pfn_fast(kfp, &pfn))
 		return pfn;
 
-	npages = hva_to_pfn_slow(addr, no_wait, write_fault, interruptible,
-				 writable, &pfn);
+	npages = hva_to_pfn_slow(kfp, &pfn);
 	if (npages == 1)
 		return pfn;
 	if (npages == -EINTR || npages == -EAGAIN)
@@ -2945,18 +2922,19 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 
 	mmap_read_lock(current->mm);
 retry:
-	vma = vma_lookup(current->mm, addr);
+	vma = vma_lookup(current->mm, kfp->hva);
 
 	if (vma == NULL)
 		pfn = KVM_PFN_ERR_FAULT;
 	else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
-		r = hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn);
+		r = hva_to_pfn_remapped(vma, kfp, &pfn);
 		if (r == -EAGAIN)
 			goto retry;
 		if (r < 0)
 			pfn = KVM_PFN_ERR_FAULT;
 	} else {
-		if (no_wait && vma_is_valid(vma, write_fault))
+		if ((kfp->flags & FOLL_NOWAIT) &&
+		    vma_is_valid(vma, kfp->flags & FOLL_WRITE))
 			pfn = KVM_PFN_ERR_NEEDS_IO;
 		else
 			pfn = KVM_PFN_ERR_FAULT;
@@ -2965,41 +2943,69 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
 	return pfn;
 }
 
+static kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
+{
+	kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL,
+				     kfp->flags & FOLL_WRITE);
+
+	if (kfp->hva == KVM_HVA_ERR_RO_BAD)
+		return KVM_PFN_ERR_RO_FAULT;
+
+	if (kvm_is_error_hva(kfp->hva))
+		return KVM_PFN_NOSLOT;
+
+	if (memslot_is_readonly(kfp->slot) && kfp->map_writable) {
+		*kfp->map_writable = false;
+		kfp->map_writable = NULL;
+	}
+
+	return hva_to_pfn(kfp);
+}
+
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 			       bool interruptible, bool no_wait,
 			       bool write_fault, bool *writable)
 {
-	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
-
-	if (kvm_is_error_hva(addr)) {
-		if (writable)
-			*writable = false;
-
-		return addr == KVM_HVA_ERR_RO_BAD ? KVM_PFN_ERR_RO_FAULT :
-						    KVM_PFN_NOSLOT;
-	}
-
-	/* Do not map writable pfn in the readonly memslot. */
-	if (writable && memslot_is_readonly(slot)) {
-		*writable = false;
-		writable = NULL;
-	}
-
-	return hva_to_pfn(addr, interruptible, no_wait, write_fault, writable);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.map_writable = writable,
+	};
+
+	if (write_fault)
+		kfp.flags |= FOLL_WRITE;
+	if (no_wait)
+		kfp.flags |= FOLL_NOWAIT;
+	if (interruptible)
+		kfp.flags |= FOLL_INTERRUPTIBLE;
+
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    write_fault, writable);
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(kvm, gfn),
+		.gfn = gfn,
+		.flags = write_fault ? FOLL_WRITE : 0,
+		.map_writable = writable,
+	};
+
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, true, NULL);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+	};
+
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 51f3fee4ca3f..d5a215958f06 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,24 @@
 #define KVM_MMU_UNLOCK(kvm)		spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool no_wait,
-		     bool write_fault, bool *writable);
+
+struct kvm_follow_pfn {
+	const struct kvm_memory_slot *slot;
+	const gfn_t gfn;
+
+	unsigned long hva;
+
+	/* FOLL_* flags modifying lookup behavior, e.g. FOLL_WRITE. */
+	unsigned int flags;
+
+	/*
+	 * If non-NULL, try to get a writable mapping even for a read fault.
+	 * Set to true if a writable mapping was obtained.
+	 */
+	bool *map_writable;
+};
+
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 32dc61f48c81..067daf9ad6ef 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -159,6 +159,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
+	struct kvm_follow_pfn kfp = {
+		.slot = gpc->memslot,
+		.gfn = gpa_to_gfn(gpc->gpa),
+		.flags = FOLL_WRITE,
+		.hva = gpc->uhva,
+	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
 
@@ -197,8 +203,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		cond_resched();
 	}
 
-	/* We always request a writable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, false, true, NULL);
+	new_pfn = hva_to_pfn(&kfp);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
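
The parameter-struct idiom is the payoff of the series: adding a new knob
becomes adding a field (or OR-ing in another FOLL_* flag) instead of
re-plumbing every wrapper. A sketch of how an internal caller composes a
lookup, mirroring the converted wrappers above (the field values shown are
illustrative):

	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		/* Caller intent is expressed purely as FOLL_* flags... */
		.flags = FOLL_WRITE | FOLL_INTERRUPTIBLE,
		/* ...and writability tracking is opted into via a pointer. */
		.map_writable = &writable,
	};

	pfn = kvm_follow_pfn(&kfp);

Because the flags field shares core mm's FOLL_* namespace, hva_to_pfn_slow()
can simply OR in its own defaults (FOLL_HWPOISON, FOLL_HONOR_NUMA_FAULT)
rather than translating a pile of booleans.
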
From patchwork Thu Oct 10 18:23:22 2024
X-Patchwork-Id: 13830765
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:22 -0700
Message-ID: <20241010182427.1434605-21-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 20/85] KVM: Remove pointless sanity check on @map param to kvm_vcpu_(un)map()

Drop kvm_vcpu_{,un}map()'s useless checks on @map being non-NULL.  The map
is 100% kernel-controlled; any caller that passes a NULL pointer is broken
and needs to be fixed, i.e. a crash due to a NULL pointer dereference is
desirable (though obviously not as desirable as not having a bug in the
first place).

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 52629ac26119..c7691bc40389 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3071,9 +3071,6 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 	void *hva = NULL;
 	struct page *page = KVM_UNMAPPED_PAGE;
 
-	if (!map)
-		return -EINVAL;
-
 	pfn = gfn_to_pfn(vcpu->kvm, gfn);
 	if (is_error_noslot_pfn(pfn))
 		return -EINVAL;
@@ -3101,9 +3098,6 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 		    bool dirty)
 {
-	if (!map)
-		return;
-
 	if (!map->hva)
 		return;
From patchwork Thu Oct 10 18:23:23 2024
X-Patchwork-Id: 13830766
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:23 -0700
Message-ID: <20241010182427.1434605-22-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 21/85] KVM: Explicitly initialize all fields at the start of kvm_vcpu_map()

Explicitly initialize the entire kvm_host_map structure when mapping a
pfn, as some callers declare their struct on the stack, i.e. don't
zero-initialize the struct, which makes the map->hva check in
kvm_vcpu_unmap() *very* suspect.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 40 ++++++++++++++++------------------------
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c7691bc40389..f1c9a781315c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3067,32 +3067,24 @@ void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
 
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	kvm_pfn_t pfn;
-	void *hva = NULL;
-	struct page *page = KVM_UNMAPPED_PAGE;
-
-	pfn = gfn_to_pfn(vcpu->kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
-		return -EINVAL;
-
-	if (pfn_valid(pfn)) {
-		page = pfn_to_page(pfn);
-		hva = kmap(page);
-#ifdef CONFIG_HAS_IOMEM
-	} else {
-		hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
-#endif
-	}
-
-	if (!hva)
-		return -EFAULT;
-
-	map->page = page;
-	map->hva = hva;
-	map->pfn = pfn;
+	map->page = KVM_UNMAPPED_PAGE;
+	map->hva = NULL;
 	map->gfn = gfn;
 
-	return 0;
+	map->pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	if (is_error_noslot_pfn(map->pfn))
+		return -EINVAL;
+
+	if (pfn_valid(map->pfn)) {
+		map->page = pfn_to_page(map->pfn);
+		map->hva = kmap(map->page);
+#ifdef CONFIG_HAS_IOMEM
+	} else {
+		map->hva = memremap(pfn_to_hpa(map->pfn), PAGE_SIZE, MEMREMAP_WB);
+#endif
+	}
+
+	return map->hva ? 0 : -EFAULT;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
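To make the failure mode concrete, here is a hedged sketch of the caller pattern that motivated the change; example_use_gfn() is hypothetical and not from the series.

static void example_use_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_host_map map;	/* on-stack, NOT zero-initialized */
	int r;

	r = kvm_vcpu_map(vcpu, gfn, &map);
	if (!r) {
		/* ... read/write guest memory through map.hva ... */
	}

	/*
	 * Before this patch, a failed kvm_vcpu_map() could return without
	 * writing to 'map' at all, so an unconditional unmap on a shared
	 * exit path would test uninitialized stack garbage in map.hva.
	 * After it, map.hva is always NULL on failure and the unmap below
	 * degenerates to a safe no-op.
	 */
	kvm_vcpu_unmap(vcpu, &map, !r);
}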
From patchwork Thu Oct 10 18:23:24 2024
X-Patchwork-Id: 13830767
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:24 -0700
Message-ID: <20241010182427.1434605-23-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 22/85] KVM: Use NULL for struct page pointer to indicate mremapped memory

Drop yet another unnecessary magic page value from KVM, as there's zero
reason to use a poisoned pointer to indicate "no page".  If KVM uses a
NULL page pointer, the kernel will explode just as quickly as if KVM uses
a poisoned pointer.  Never mind the fact that such usage would be a
blatant and egregious KVM bug.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 4 ----
 virt/kvm/kvm_main.c      | 4 ++--
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cbc7b9c04c14..e3c01cbbc41a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -273,16 +273,12 @@ enum {
 	READING_SHADOW_PAGE_TABLES,
 };
 
-#define KVM_UNMAPPED_PAGE	((void *) 0x500 + POISON_POINTER_DELTA)
-
 struct kvm_host_map {
 	/*
 	 * Only valid if the 'pfn' is managed by the host kernel (i.e. There is
 	 * a 'struct page' for it. When using mem= kernel parameter some memory
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
-	 * If 'pfn' is not managed by the host kernel, this field is
-	 * initialized to KVM_UNMAPPED_PAGE.
	 */
 	struct page *page;
 	void *hva;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f1c9a781315c..7acb1a8af2e4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3067,7 +3067,7 @@ void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
 
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	map->page = KVM_UNMAPPED_PAGE;
+	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
 
@@ -3093,7 +3093,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 	if (!map->hva)
 		return;
 
-	if (map->page != KVM_UNMAPPED_PAGE)
+	if (map->page)
 		kunmap(map->page);
 #ifdef CONFIG_HAS_IOMEM
 	else
From patchwork Thu Oct 10 18:23:25 2024
X-Patchwork-Id: 13830768
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:25 -0700
Message-ID: <20241010182427.1434605-24-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 23/85] KVM: nVMX: Rely on kvm_vcpu_unmap() to track validity of eVMCS mapping

Remove the explicit evmptr12 validity check when deciding whether or not
to unmap the eVMCS pointer, and instead rely on kvm_vcpu_unmap() to play
nice with a NULL map->hva, i.e. to do nothing if the map is invalid.

Note, vmx->nested.hv_evmcs_map is zero-allocated along with the rest of
vcpu_vmx, i.e. the map starts out invalid/NULL.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index a8e7bc04d9bf..e94a25373a59 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -231,11 +231,8 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (nested_vmx_is_evmptr12_valid(vmx)) {
-		kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
-		vmx->nested.hv_evmcs = NULL;
-	}
-
+	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
+	vmx->nested.hv_evmcs = NULL;
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
 
 	if (hv_vcpu) {
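The invariant being relied on can be shown with a small sketch; illustrative only, example_teardown() is a hypothetical function. A zero-initialized map has hva == NULL, and kvm_vcpu_unmap() bails on !map->hva, so teardown paths may unmap unconditionally.

static void example_teardown(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_host_map map = {};	/* zero-allocated => map.hva == NULL */

	kvm_vcpu_unmap(vcpu, &map, true);	/* no-op, nothing is mapped */

	if (!kvm_vcpu_map(vcpu, gfn, &map))
		kvm_vcpu_unmap(vcpu, &map, true);	/* real unmap */
}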
From patchwork Thu Oct 10 18:23:26 2024
X-Patchwork-Id: 13830769
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:26 -0700
Message-ID: <20241010182427.1434605-25-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 24/85] KVM: nVMX: Drop pointless msr_bitmap_map field from struct nested_vmx

Remove nested_vmx.msr_bitmap_map and instead use an on-stack structure in
the one function that uses the map, nested_vmx_prepare_msr_bitmap().

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 8 ++++----
 arch/x86/kvm/vmx/vmx.h    | 2 --
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e94a25373a59..fb37658b62c9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -621,7 +621,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	int msr;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
-	struct kvm_host_map *map = &vmx->nested.msr_bitmap_map;
+	struct kvm_host_map msr_bitmap_map;
 
 	/* Nothing to do if the MSR bitmap is not in use. */
 	if (!cpu_has_vmx_msr_bitmap() ||
@@ -644,10 +644,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map))
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &msr_bitmap_map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)map->hva;
+	msr_bitmap_l1 = (unsigned long *)msr_bitmap_map.hva;
 
 	/*
 	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
@@ -711,7 +711,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
+	kvm_vcpu_unmap(vcpu, &msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 2325f773a20b..40303b43da6c 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -200,8 +200,6 @@ struct nested_vmx {
 	struct kvm_host_map virtual_apic_map;
 	struct kvm_host_map pi_desc_map;
 
-	struct kvm_host_map msr_bitmap_map;
-
 	struct pi_desc *pi_desc;
 	bool pi_pending;
 	u16 posted_intr_nv;
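The on-stack pattern adopted above generalizes to any short-lived mapping: map, operate, unmap before returning, with no long-lived state in a per-vCPU structure. A hedged sketch follows; the function name and gpa are hypothetical, not from the patch.

static bool example_write_guest_page(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	struct kvm_host_map map;
	u32 *data;

	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
		return false;

	data = (u32 *)map.hva;
	data[0] = 0;	/* ... operate on the mapped guest page ... */

	/* Pass dirty=true because the page was written. */
	kvm_vcpu_unmap(vcpu, &map, true);
	return true;
}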
From patchwork Thu Oct 10 18:23:27 2024
X-Patchwork-Id: 13830770
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:27 -0700
Message-ID: <20241010182427.1434605-26-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 25/85] KVM: nVMX: Add helper to put (unmap) vmcs12 pages

Add a helper to dedup unmapping the vmcs12 pages.  This will reduce the
amount of churn when a future patch refactors the kvm_vcpu_unmap() API.

No functional change intended.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index fb37658b62c9..81865db18e12 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -314,6 +314,21 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
 	vcpu->arch.regs_dirty = 0;
 }
 
+static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	/*
+	 * Unpin physical memory we referred to in the vmcs02.  The APIC access
+	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
+	 * and if it is written, the contents are irrelevant.
+	 */
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	vmx->nested.pi_desc = NULL;
+}
+
 /*
  * Free whatever needs to be freed from vmx->nested when L1 goes down, or
  * just stops using VMX.
@@ -346,15 +361,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	vmx->nested.cached_vmcs12 = NULL;
 	kfree(vmx->nested.cached_shadow_vmcs12);
 	vmx->nested.cached_shadow_vmcs12 = NULL;
-	/*
-	 * Unpin physical memory we referred to in the vmcs02. The APIC access
-	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
-	 * and if it is written, the contents are irrelevant.
-	 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
-	vmx->nested.pi_desc = NULL;
+
+	nested_put_vmcs12_pages(vcpu);
 
 	kvm_mmu_free_roots(vcpu->kvm, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
 
@@ -5010,11 +5018,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 		vmx_update_cpu_dirty_logging(vcpu);
 	}
 
-	/* Unpin physical memory we referred to in vmcs02 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
-	vmx->nested.pi_desc = NULL;
+	nested_put_vmcs12_pages(vcpu);
 
 	if (vmx->nested.reload_vmcs01_apic_access_page) {
 		vmx->nested.reload_vmcs01_apic_access_page = false;
From patchwork Thu Oct 10 18:23:28 2024
X-Patchwork-Id: 13830771
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:28 -0700
Message-ID: <20241010182427.1434605-27-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 26/85] KVM: Use plain "struct page" pointer instead of single-entry array

Use a single pointer instead of a single-entry array for the struct page
pointer in hva_to_pfn_fast().  Using an array makes the code unnecessarily
annoying to read and update.

No functional change intended.

Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7acb1a8af2e4..d3e48fcc4fb0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2752,7 +2752,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
  */
 static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
-	struct page *page[1];
+	struct page *page;
 
 	/*
 	 * Fast pin a writable pfn only if it is a write fault request
@@ -2762,8 +2762,8 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
 		return false;
 
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
-		*pfn = page_to_pfn(page[0]);
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
+		*pfn = page_to_pfn(page);
 		if (kfp->map_writable)
 			*kfp->map_writable = true;
 		return true;
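For reference, get_user_page_fast_only() (declared in include/linux/mm.h) pins exactly one page and stores it through the supplied pointer, which is why passing '&page' suffices here. A minimal sketch with a hypothetical helper:

static bool example_pin_one_page(unsigned long hva)
{
	struct page *page;

	if (!get_user_page_fast_only(hva, FOLL_WRITE, &page))
		return false;

	/* ... use the pinned page ... */
	put_page(page);	/* drop the reference taken by GUP */
	return true;
}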
From patchwork Thu Oct 10 18:23:29 2024
X-Patchwork-Id: 13830772
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:29 -0700
Message-ID: <20241010182427.1434605-28-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 27/85] KVM: Provide refcounted page as output field in struct kvm_follow_pfn

Add kvm_follow_pfn.refcounted_page as an output for the "to pfn" APIs to
"return" the struct page that is associated with the returned pfn (if KVM
acquired a reference to the page).  This will eventually allow removing
KVM's hacky kvm_pfn_to_refcounted_page() code, which is error-prone and
can't detect pfns that are valid, but aren't (currently) refcounted.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 99 +++++++++++++++++++++------------------------
 virt/kvm/kvm_mm.h   |  9 +++++
 2 files changed, 56 insertions(+), 52 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d3e48fcc4fb0..e29f78ed6f48 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2746,6 +2746,46 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 	return gfn_to_hva_memslot_prot(slot, gfn, writable);
 }
 
+static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
+				 struct follow_pfnmap_args *map, bool writable)
+{
+	kvm_pfn_t pfn;
+
+	WARN_ON_ONCE(!!page == !!map);
+
+	if (kfp->map_writable)
+		*kfp->map_writable = writable;
+
+	/*
+	 * FIXME: Remove this once KVM no longer blindly calls put_page() on
+	 * every pfn that points at a struct page.
+	 *
+	 * Get a reference for follow_pte() pfns if they happen to point at a
+	 * struct page, as KVM will ultimately call kvm_release_pfn_clean() on
+	 * the returned pfn, i.e. KVM expects to have a reference.
+	 *
+	 * Certain IO or PFNMAP mappings can be backed with valid struct pages,
+	 * but be allocated without refcounting, e.g. tail pages of
+	 * non-compound higher order allocations.  Grabbing and putting a
+	 * reference to such pages would cause KVM to prematurely free a page
+	 * it doesn't own (KVM gets and puts the one and only reference).
+	 * Don't allow those pages until the FIXME is resolved.
+	 */
+	if (map) {
+		pfn = map->pfn;
+		page = kvm_pfn_to_refcounted_page(pfn);
+		if (page && !get_page_unless_zero(page))
+			return KVM_PFN_ERR_FAULT;
+	} else {
+		pfn = page_to_pfn(page);
+	}
+
+	if (kfp->refcounted_page)
+		*kfp->refcounted_page = page;
+
+	return pfn;
+}
+
 /*
  * The fast path to get the writable pfn which will be stored in @pfn,
  * true indicates success, otherwise false is returned.
@@ -2763,9 +2803,7 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 		return false;
 
 	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
-		*pfn = page_to_pfn(page);
-		if (kfp->map_writable)
-			*kfp->map_writable = true;
+		*pfn = kvm_resolve_pfn(kfp, page, NULL, true);
 		return true;
 	}
 
@@ -2797,23 +2835,15 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	if (npages != 1)
 		return npages;
 
-	if (!kfp->map_writable)
-		goto out;
-
-	if (kfp->flags & FOLL_WRITE) {
-		*kfp->map_writable = true;
-		goto out;
-	}
-
 	/* map read fault as writable if possible */
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
-		*kfp->map_writable = true;
+	if (!(flags & FOLL_WRITE) && kfp->map_writable &&
+	    get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
 		put_page(page);
 		page = wpage;
+		flags |= FOLL_WRITE;
 	}
 
-out:
-	*pfn = page_to_pfn(page);
+	*pfn = kvm_resolve_pfn(kfp, page, NULL, flags & FOLL_WRITE);
 	return npages;
 }
 
@@ -2828,22 +2858,11 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
 	return true;
 }
 
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
-	struct page *page = kvm_pfn_to_refcounted_page(pfn);
-
-	if (!page)
-		return 1;
-
-	return get_page_unless_zero(page);
-}
-
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
 	struct follow_pfnmap_args args = { .vma = vma, .address = kfp->hva };
 	bool write_fault = kfp->flags & FOLL_WRITE;
-	kvm_pfn_t pfn;
 	int r;
 
 	r = follow_pfnmap_start(&args);
@@ -2867,37 +2886,13 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	}
 
 	if (write_fault && !args.writable) {
-		pfn = KVM_PFN_ERR_RO_FAULT;
+		*p_pfn = KVM_PFN_ERR_RO_FAULT;
 		goto out;
 	}
 
-	if (kfp->map_writable)
-		*kfp->map_writable = args.writable;
-	pfn = args.pfn;
-
-	/*
-	 * Get a reference here because callers of *hva_to_pfn* and
-	 * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
-	 * returned pfn. This is only needed if the VMA has VM_MIXEDMAP
-	 * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will
-	 * simply do nothing for reserved pfns.
-	 *
-	 * Whoever called remap_pfn_range is also going to call e.g.
-	 * unmap_mapping_range before the underlying pages are freed,
-	 * causing a call to our MMU notifier.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid
-	 * struct pages, but be allocated without refcounting e.g.,
-	 * tail pages of non-compound higher order allocations, which
-	 * would then underflow the refcount when the caller does the
-	 * required put_page. Don't allow those pages here.
-	 */
-	if (!kvm_try_get_pfn(pfn))
-		r = -EFAULT;
+	*p_pfn = kvm_resolve_pfn(kfp, NULL, &args, args.writable);
 
 out:
 	follow_pfnmap_end(&args);
-	*p_pfn = pfn;
-
 	return r;
 }
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index d5a215958f06..d3ac1ba8ba66 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -35,6 +35,15 @@ struct kvm_follow_pfn {
 	 * Set to true if a writable mapping was obtained.
 	 */
 	bool *map_writable;
+
+	/*
+	 * Optional output.  Set to a valid "struct page" if the returned pfn
+	 * is for a refcounted or pinned struct page, NULL if the returned pfn
+	 * has no struct page or if the struct page is not being refcounted
+	 * (e.g. tail pages of non-compound higher order allocations from
+	 * IO/PFNMAP mappings).
+	 */
+	struct page **refcounted_page;
 };
 
 kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp);
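A hedged sketch of a consumer of the new output field follows; example_get_pfn() is hypothetical and assumes visibility of the static kvm_follow_pfn() in kvm_main.c. The caller learns whether a reference was taken and, if so, must eventually release it, e.g. via kvm_release_page_dirty() or kvm_release_page_clean().

static kvm_pfn_t example_get_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
				 struct page **refcounted_page)
{
	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_WRITE,
		/* Output: NULL on return if no page reference was acquired. */
		.refcounted_page = refcounted_page,
	};

	return kvm_follow_pfn(&kfp);
}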
From patchwork Thu Oct 10 18:23:30 2024
X-Patchwork-Id: 13830773
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:30 -0700
Message-ID: <20241010182427.1434605-29-seanjc@google.com>
Subject: [PATCH v13 28/85] KVM: Move kvm_{set,release}_page_{clean,dirty}() helpers up in kvm_main.c

Hoist the kvm_{set,release}_page_{clean,dirty}() APIs further up in
kvm_main.c so that they can be used by the kvm_follow_pfn family of APIs.

No functional change intended.

Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 82 ++++++++++++++++++++++-----------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e29f78ed6f48..6cdbd0516d58 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2746,6 +2746,47 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 	return gfn_to_hva_memslot_prot(slot, gfn, writable);
 }
 
+static bool kvm_is_ad_tracked_page(struct page *page)
+{
+	/*
+	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
+	 * touched (e.g. set dirty) except by its owner".
+	 */
+	return !PageReserved(page);
+}
+
+static void kvm_set_page_dirty(struct page *page)
+{
+	if (kvm_is_ad_tracked_page(page))
+		SetPageDirty(page);
+}
+
+static void kvm_set_page_accessed(struct page *page)
+{
+	if (kvm_is_ad_tracked_page(page))
+		mark_page_accessed(page);
+}
+
+void kvm_release_page_clean(struct page *page)
+{
+	if (!page)
+		return;
+
+	kvm_set_page_accessed(page);
+	put_page(page);
+}
+EXPORT_SYMBOL_GPL(kvm_release_page_clean);
+
+void kvm_release_page_dirty(struct page *page)
+{
+	if (!page)
+		return;
+
+	kvm_set_page_dirty(page);
+	kvm_release_page_clean(page);
+}
+EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
+
 static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 				 struct follow_pfnmap_args *map, bool writable)
 {
@@ -3105,37 +3146,6 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
-static bool kvm_is_ad_tracked_page(struct page *page)
-{
-	/*
-	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
-	 * touched (e.g. set dirty) except by its owner".
-	 */
-	return !PageReserved(page);
-}
-
-static void kvm_set_page_dirty(struct page *page)
-{
-	if (kvm_is_ad_tracked_page(page))
-		SetPageDirty(page);
-}
-
-static void kvm_set_page_accessed(struct page *page)
-{
-	if (kvm_is_ad_tracked_page(page))
-		mark_page_accessed(page);
-}
-
-void kvm_release_page_clean(struct page *page)
-{
-	if (!page)
-		return;
-
-	kvm_set_page_accessed(page);
-	put_page(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_page_clean);
-
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
 	struct page *page;
@@ -3151,16 +3161,6 @@ void kvm_release_pfn_clean(kvm_pfn_t pfn)
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
-void kvm_release_page_dirty(struct page *page)
-{
-	if (!page)
-		return;
-
-	kvm_set_page_dirty(page);
-	kvm_release_page_clean(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
-
 void kvm_release_pfn_dirty(kvm_pfn_t pfn)
 {
 	struct page *page;
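As a quick editorial sketch of the contract these helpers encode (not part of
the patch; "example_put_page" is hypothetical): accessed/dirty state is only
updated for pages that are A/D-tracked, i.e. not PG_reserved, and the "dirty"
variant is the "clean" variant plus SetPageDirty():

    static void example_put_page(struct page *page, bool written)
    {
            if (written)
                    kvm_release_page_dirty(page);  /* SetPageDirty() iff A/D-tracked */
            else
                    kvm_release_page_clean(page);  /* mark_page_accessed() + put_page() */
    }

Both helpers are safe no-ops when page is NULL, which later patches rely on.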
From patchwork Thu Oct 10 18:23:31 2024
X-Patchwork-Id: 13830774
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:31 -0700
Message-ID: <20241010182427.1434605-30-seanjc@google.com>
Subject: [PATCH v13 29/85] KVM: pfncache: Precisely track refcounted pages

Track refcounted struct page memory using
kvm_follow_pfn.refcounted_page instead of relying on
kvm_release_pfn_clean() to correctly detect that the pfn is associated
with a struct page.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/pfncache.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 067daf9ad6ef..728d2c1b488a 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -159,11 +159,14 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
+	struct page *page;
+
 	struct kvm_follow_pfn kfp = {
 		.slot = gpc->memslot,
 		.gfn = gpa_to_gfn(gpc->gpa),
 		.flags = FOLL_WRITE,
 		.hva = gpc->uhva,
+		.refcounted_page = &page,
 	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
@@ -198,7 +201,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 			if (new_khva != old_khva)
 				gpc_unmap(new_pfn, new_khva);
 
-			kvm_release_pfn_clean(new_pfn);
+			kvm_release_page_unused(page);
 
 			cond_resched();
 		}
@@ -218,7 +221,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 			new_khva = gpc_map(new_pfn);
 
 		if (!new_khva) {
-			kvm_release_pfn_clean(new_pfn);
+			kvm_release_page_unused(page);
 			goto out_error;
 		}
 
@@ -236,11 +239,11 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	gpc->khva = new_khva + offset_in_page(gpc->uhva);
 
 	/*
	 * Put the reference to the _new_ page.  The page is now tracked by the
 	 * cache and can be safely migrated, swapped, etc... as the cache will
 	 * invalidate any mappings in response to relevant mmu_notifier events.
 	 */
-	kvm_release_pfn_clean(new_pfn);
+	kvm_release_page_clean(page);
 
 	return 0;
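The before/after in miniature (an editorial sketch, not code from the patch;
"example_put" is hypothetical, the two release helpers are real): previously
the cache had to re-derive the struct page from the bare pfn at release time,
whereas it can now release exactly what the lookup reported:

    static void example_put(kvm_pfn_t new_pfn, struct page *page)
    {
            /*
             * Before: re-derive the page from the pfn and trust the
             * kvm_pfn_to_refcounted_page() heuristics:
             *
             *         kvm_release_pfn_clean(new_pfn);
             *
             * After: put exactly the reference the lookup took; page is
             * NULL for non-refcounted pfns and the helper no-ops on NULL.
             */
            kvm_release_page_clean(page);
    }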
From patchwork Thu Oct 10 18:23:32 2024
X-Patchwork-Id: 13830775
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:32 -0700
Message-ID: <20241010182427.1434605-31-seanjc@google.com>
Subject: [PATCH v13 30/85] KVM: Migrate kvm_vcpu_map() to kvm_follow_pfn()

From: David Stevens

Migrate kvm_vcpu_map() to kvm_follow_pfn(), and have it track whether or
not the map holds a refcounted struct page.  Precisely tracking struct
page references will eventually allow removing kvm_pfn_to_refcounted_page()
and its various wrappers.
Signed-off-by: David Stevens
[sean: use a pointer instead of a boolean]
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 26 ++++++++++++++++----------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e3c01cbbc41a..02ab3a657aa6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -280,6 +280,7 @@ struct kvm_host_map {
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
 	 */
+	struct page *refcounted_page;
 	struct page *page;
 	void *hva;
 	kvm_pfn_t pfn;
@@ -1238,7 +1239,6 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_accessed(kvm_pfn_t pfn);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty);
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6cdbd0516d58..b1c1b7e4f33a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3093,21 +3093,21 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
-{
-	if (dirty)
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
-}
-
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(vcpu->kvm, gfn),
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.refcounted_page = &map->refcounted_page,
+	};
+
+	map->refcounted_page = NULL;
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
 
-	map->pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	map->pfn = kvm_follow_pfn(&kfp);
 	if (is_error_noslot_pfn(map->pfn))
 		return -EINVAL;
 
@@ -3139,10 +3139,16 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (dirty)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
-	kvm_release_pfn(map->pfn, dirty);
+	if (map->refcounted_page) {
+		if (dirty)
+			kvm_release_page_dirty(map->refcounted_page);
+		else
+			kvm_release_page_clean(map->refcounted_page);
+	}
 
 	map->hva = NULL;
 	map->page = NULL;
+	map->refcounted_page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
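For illustration, a typical map/use/unmap cycle after this patch looks as
follows (editorial sketch assuming the series is applied; "example_patch_gfn"
is hypothetical, the map/unmap calls use the signatures shown in the diff):

    static int example_patch_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, u32 val)
    {
            struct kvm_host_map map;

            if (kvm_vcpu_map(vcpu, gfn, &map))
                    return -EFAULT;

            *(u32 *)map.hva = val;  /* write through the kernel mapping */

            /*
             * Unmap now releases map.refcounted_page (if any) instead of
             * guessing from map.pfn; "true" == the map was written, so the
             * page is marked dirty on release.
             */
            kvm_vcpu_unmap(vcpu, &map, true);
            return 0;
    }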
From patchwork Thu Oct 10 18:23:33 2024
X-Patchwork-Id: 13830776
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:33 -0700
Message-ID: <20241010182427.1434605-32-seanjc@google.com>
Subject: [PATCH v13 31/85] KVM: Pin (as in FOLL_PIN) pages during kvm_vcpu_map()
Pin, as in FOLL_PIN, pages when mapping them for direct access by KVM.
As per Documentation/core-api/pin_user_pages.rst, writing to a page that
was gotten via FOLL_GET is explicitly disallowed.

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

Unfortunately, FOLL_PIN is a "private" flag, and so kvm_follow_pfn must
use a one-off bool instead of being able to piggyback the "flags" field.

Link: https://lwn.net/Articles/930667
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 54 +++++++++++++++++++++++++++++-----------
 virt/kvm/kvm_mm.h        |  7 ++++++
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 02ab3a657aa6..8739b905d85b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -280,7 +280,7 @@ struct kvm_host_map {
 	 * can be used as guest memory but they are not managed by host
 	 * kernel).
 	 */
-	struct page *refcounted_page;
+	struct page *pinned_page;
 	struct page *page;
 	void *hva;
 	kvm_pfn_t pfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b1c1b7e4f33a..40a59526d466 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2814,9 +2814,12 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	 */
 	if (map) {
 		pfn = map->pfn;
-		page = kvm_pfn_to_refcounted_page(pfn);
-		if (page && !get_page_unless_zero(page))
-			return KVM_PFN_ERR_FAULT;
+
+		if (!kfp->pin) {
+			page = kvm_pfn_to_refcounted_page(pfn);
+			if (page && !get_page_unless_zero(page))
+				return KVM_PFN_ERR_FAULT;
+		}
 	} else {
 		pfn = page_to_pfn(page);
 	}
@@ -2834,16 +2837,24 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	struct page *page;
+	bool r;
 
 	/*
-	 * Fast pin a writable pfn only if it is a write fault request
-	 * or the caller allows to map a writable pfn for a read fault
-	 * request.
+	 * Try the fast-only path when the caller wants to pin/get the page for
+	 * writing.  If the caller only wants to read the page, KVM must go
+	 * down the full, slow path in order to avoid racing an operation that
+	 * breaks Copy-on-Write (CoW), e.g. so that KVM doesn't end up pointing
+	 * at the old, read-only page while mm/ points at a new, writable page.
 	 */
 	if (!((kfp->flags & FOLL_WRITE) || kfp->map_writable))
 		return false;
 
-	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page)) {
+	if (kfp->pin)
+		r = pin_user_pages_fast(kfp->hva, 1, FOLL_WRITE, &page) == 1;
+	else
+		r = get_user_page_fast_only(kfp->hva, FOLL_WRITE, &page);
+
+	if (r) {
 		*pfn = kvm_resolve_pfn(kfp, page, NULL, true);
 		return true;
 	}
@@ -2872,10 +2883,21 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	struct page *page, *wpage;
 	int npages;
 
-	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	if (kfp->pin)
+		npages = pin_user_pages_unlocked(kfp->hva, 1, &page, flags);
+	else
+		npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
 	if (npages != 1)
 		return npages;
 
+	/*
+	 * Pinning is mutually exclusive with opportunistically mapping a read
+	 * fault as writable, as KVM should never pin pages when mapping memory
+	 * into the guest (pinning is only for direct accesses from KVM).
+	 */
+	if (WARN_ON_ONCE(kfp->map_writable && kfp->pin))
+		goto out;
+
 	/* map read fault as writable if possible */
 	if (!(flags & FOLL_WRITE) && kfp->map_writable &&
 	    get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
@@ -2884,6 +2906,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 		flags |= FOLL_WRITE;
 	}
 
+out:
 	*pfn = kvm_resolve_pfn(kfp, page, NULL, flags & FOLL_WRITE);
 	return npages;
 }
@@ -3099,10 +3122,11 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
 		.gfn = gfn,
 		.flags = FOLL_WRITE,
-		.refcounted_page = &map->refcounted_page,
+		.refcounted_page = &map->pinned_page,
+		.pin = true,
 	};
 
-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
@@ -3139,16 +3163,16 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	if (dirty)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
-	if (map->refcounted_page) {
+	if (map->pinned_page) {
 		if (dirty)
-			kvm_release_page_dirty(map->refcounted_page);
-		else
-			kvm_release_page_clean(map->refcounted_page);
+			kvm_set_page_dirty(map->pinned_page);
+		kvm_set_page_accessed(map->pinned_page);
+		unpin_user_page(map->pinned_page);
 	}
 
 	map->hva = NULL;
 	map->page = NULL;
-	map->refcounted_page = NULL;
+	map->pinned_page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index d3ac1ba8ba66..acef3f5c582a 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -30,6 +30,13 @@ struct kvm_follow_pfn {
 	/* FOLL_* flags modifying lookup behavior, e.g. FOLL_WRITE. */
 	unsigned int flags;
 
+	/*
+	 * Pin the page (effectively FOLL_PIN, which is an mm/ internal flag).
+	 * The page *must* be pinned if KVM will write to the page via a kernel
+	 * mapping, e.g. via kmap(), mremap(), etc.
+	 */
+	bool pin;
+
 	/*
 	 * If non-NULL, try to get a writable mapping even for a read fault.
 	 * Set to true if a writable mapping was obtained.
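Outside of KVM, the pin/write/unpin discipline the patch adopts boils down to
the following editorial sketch using the generic mm/ APIs (error handling
trimmed; "example_pin_and_write" is hypothetical; requires <linux/mm.h> and
<linux/highmem.h>):

    static int example_pin_and_write(unsigned long uaddr, u8 val)
    {
            struct page *page;
            void *kaddr;

            /* FOLL_PIN under the hood; tells mm/ the page may be written. */
            if (pin_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
                    return -EFAULT;

            kaddr = kmap_local_page(page);
            memcpy(kaddr, &val, sizeof(val));
            kunmap_local(kaddr);

            /* Pins are released with unpin_user_page(), never put_page(). */
            unpin_user_page(page);
            return 0;
    }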
From patchwork Thu Oct 10 18:23:34 2024
X-Patchwork-Id: 13830777
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:34 -0700
Message-ID: <20241010182427.1434605-33-seanjc@google.com>
Subject: [PATCH v13 32/85] KVM: nVMX: Mark vmcs12's APIC access page dirty when unmapping

Mark the APIC access page as dirty when unmapping it from KVM.  The fact
that the page _shouldn't_ be written doesn't guarantee the page _won't_
be written.  And while the contents are likely irrelevant, the values
_are_ visible to the guest, i.e. dropping writes would be visible to the
guest (though obviously highly unlikely to be problematic in practice).

Marking the map dirty will allow specifying the write vs. read-only when
*mapping* the memory, which in turn will allow creating read-only maps.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 81865db18e12..ff83b56fe2fa 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -318,12 +318,7 @@ static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	/*
-	 * Unpin physical memory we referred to in the vmcs02.  The APIC access
-	 * page's backing page (yeah, confusing) shouldn't actually be accessed,
-	 * and if it is written, the contents are irrelevant.
-	 */
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, false);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, true);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
 	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
 	vmx->nested.pi_desc = NULL;

From patchwork Thu Oct 10 18:23:35 2024
X-Patchwork-Id: 13830778
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:35 -0700
Message-ID: <20241010182427.1434605-34-seanjc@google.com>
Subject: [PATCH v13 33/85] KVM: Pass in write/dirty to kvm_vcpu_map(), not kvm_vcpu_unmap()

Now that all kvm_vcpu_{,un}map() users pass "true" for @dirty, have them
pass "true" as a @writable param to kvm_vcpu_map(), and thus create a
read-only mapping when possible.

Note, creating read-only mappings can be theoretically slower, as they
don't play nice with fast GUP due to the need to break CoW before mapping
the underlying PFN.  But practically speaking, creating a mapping isn't a
super hot path, and getting a writable mapping for reading is weird and
confusing.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c |  4 ++--
 arch/x86/kvm/svm/sev.c    |  2 +-
 arch/x86/kvm/svm/svm.c    |  8 ++++----
 arch/x86/kvm/vmx/nested.c | 16 ++++++++--------
 include/linux/kvm_host.h  | 20 ++++++++++++++++++--
 virt/kvm/kvm_main.c       | 12 +++++++-----
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d5314cb7dff4..9f9478bdecfc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -922,7 +922,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	nested_svm_vmexit(svm);
 
 out:
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	return ret;
 }
@@ -1126,7 +1126,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 				       vmcb12->control.exit_int_info_err,
 				       KVM_ISA_SVM);
 
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	nested_svm_transition_tlb_flush(vcpu);
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 0b851ef937f2..4557ff3804ae 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3468,7 +3468,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
 
 	sev_es_sync_to_ghcb(svm);
 
-	kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map, true);
+	kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map);
 	svm->sev_es.ghcb = NULL;
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9df3e1e5ae81..c1e29307826b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2299,7 +2299,7 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
 		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
 	}
 
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	return ret;
 }
@@ -4714,7 +4714,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 	svm_copy_vmrun_state(map_save.hva + 0x400,
 			     &svm->vmcb01.ptr->save);
 
-	kvm_vcpu_unmap(vcpu, &map_save, true);
+	kvm_vcpu_unmap(vcpu, &map_save);
 	return 0;
 }
 
@@ -4774,9 +4774,9 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	svm->nested.nested_run_pending = 1;
 
 unmap_save:
-	kvm_vcpu_unmap(vcpu, &map_save, true);
+	kvm_vcpu_unmap(vcpu, &map_save);
unmap_map:
-	kvm_vcpu_unmap(vcpu, &map, true);
+	kvm_vcpu_unmap(vcpu, &map);
 	return ret;
 }
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ff83b56fe2fa..259fe445e695 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -231,7 +231,7 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map);
 	vmx->nested.hv_evmcs = NULL;
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
 
@@ -318,9 +318,9 @@ static void nested_put_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
-	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map, true);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.apic_access_page_map);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map);
+	kvm_vcpu_unmap(vcpu, &vmx->nested.pi_desc_map);
 	vmx->nested.pi_desc = NULL;
 }
 
@@ -624,7 +624,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	int msr;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
-	struct kvm_host_map msr_bitmap_map;
+	struct kvm_host_map map;
 
 	/* Nothing to do if the MSR bitmap is not in use. */
 	if (!cpu_has_vmx_msr_bitmap() ||
@@ -647,10 +647,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &msr_bitmap_map))
+	if (kvm_vcpu_map_readonly(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)msr_bitmap_map.hva;
+	msr_bitmap_l1 = (unsigned long *)map.hva;
 
 	/*
 	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
@@ -714,7 +714,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
-	kvm_vcpu_unmap(vcpu, &msr_bitmap_map, false);
+	kvm_vcpu_unmap(vcpu, &map);
 
 	vmx->nested.force_msr_bitmap_recalc = false;
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8739b905d85b..9263375d0362 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -285,6 +285,7 @@ struct kvm_host_map {
 	void *hva;
 	kvm_pfn_t pfn;
 	kvm_pfn_t gfn;
+	bool writable;
 };
 
 /*
@@ -1312,8 +1313,23 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
-int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
+
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
+		   bool writable);
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
+
+static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gpa, map, true);
+}
+
+static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa,
+					struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gpa, map, false);
+}
+
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 40a59526d466..080740f65061 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3116,7 +3116,8 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
-int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
+		   bool writable)
 {
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
@@ -3130,6 +3131,7 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 	map->page = NULL;
 	map->hva = NULL;
 	map->gfn = gfn;
+	map->writable = writable;
 
 	map->pfn = kvm_follow_pfn(&kfp);
 	if (is_error_noslot_pfn(map->pfn))
@@ -3146,9 +3148,9 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 
 	return map->hva ? 0 : -EFAULT;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_map);
+EXPORT_SYMBOL_GPL(__kvm_vcpu_map);
 
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 {
 	if (!map->hva)
 		return;
@@ -3160,11 +3162,11 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
 	memunmap(map->hva);
 #endif
 
-	if (dirty)
+	if (map->writable)
 		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
 
 	if (map->pinned_page) {
-		if (dirty)
+		if (map->writable)
 			kvm_set_page_dirty(map->pinned_page);
 		kvm_set_page_accessed(map->pinned_page);
 		unpin_user_page(map->pinned_page);
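Put together, the calling convention after this patch is (editorial sketch;
"example_access" and its gpa usage are hypothetical, the map/unmap functions
are the ones declared in the kvm_host.h hunk above):

    static void example_access(struct kvm_vcpu *vcpu, gpa_t gpa)
    {
            struct kvm_host_map map;

            /* Read-only access, e.g. scanning an L1 MSR bitmap: */
            if (!kvm_vcpu_map_readonly(vcpu, gpa_to_gfn(gpa), &map)) {
                    /* ... read via map.hva ... */
                    kvm_vcpu_unmap(vcpu, &map);  /* map.writable == false: no dirtying */
            }

            /* Writable access: */
            if (!kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map)) {
                    /* ... write via map.hva ... */
                    kvm_vcpu_unmap(vcpu, &map);  /* marks the gfn and page dirty */
            }
    }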
From patchwork Thu Oct 10 18:23:36 2024
X-Patchwork-Id: 13830779
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:36 -0700
Message-ID: <20241010182427.1434605-35-seanjc@google.com>
Subject: [PATCH v13 34/85] KVM: Get writable mapping for __kvm_vcpu_map() only when necessary

When creating a memory map for read, don't request a writable pfn from
the primary MMU.  While creating read-only mappings can be theoretically
slower, as they don't play nice with fast GUP due to the need to break
CoW before mapping the underlying PFN, practically speaking, creating a
mapping isn't a super hot path, and getting a writable mapping for
reading is weird and confusing.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 080740f65061..b845e9252633 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3122,7 +3122,7 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(vcpu->kvm, gfn),
 		.gfn = gfn,
-		.flags = FOLL_WRITE,
+		.flags = writable ? FOLL_WRITE : 0,
 		.refcounted_page = &map->pinned_page,
 		.pin = true,
 	};

From patchwork Thu Oct 10 18:23:37 2024
X-Patchwork-Id: 13830780
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b845e9252633..6dcb4f0eed3e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -94,6 +94,13 @@ unsigned int halt_poll_ns_shrink = 2;
 module_param(halt_poll_ns_shrink, uint, 0644);
 EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
+/*
+ * Allow direct access (from KVM or the CPU) without MMU notifier protection
+ * to unpinned pages.
+ */
+static bool allow_unsafe_mappings;
+module_param(allow_unsafe_mappings, bool, 0444);
+
 /*
  * Ordering of locks:
  *
@@ -2811,6 +2818,9 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	 * reference to such pages would cause KVM to prematurely free a page
 	 * it doesn't own (KVM gets and puts the one and only reference).
 	 * Don't allow those pages until the FIXME is resolved.
+	 *
+	 * Don't grab a reference for pins, callers that pin pages are required
+	 * to check refcounted_page, i.e. must not blindly release the pfn.
 	 */
 	if (map) {
 		pfn = map->pfn;
@@ -2929,6 +2939,14 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	bool write_fault = kfp->flags & FOLL_WRITE;
 	int r;
 
+	/*
+	 * Remapped memory cannot be pinned in any meaningful sense.  Bail if
+	 * the caller wants to pin the page, i.e. access the page outside of
+	 * MMU notifier protection, and unsafe mappings are disallowed.
+	 */
+	if (kfp->pin && !allow_unsafe_mappings)
+		return -EINVAL;
+
 	r = follow_pfnmap_start(&args);
 	if (r) {
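For reference, allow_unsafe_mappings is read-only at runtime (0444), so it
can only be set when the kvm module loads, e.g. kvm.allow_unsafe_mappings=1
on the kernel command line.  A minimal sketch of the same off-by-default
gating pattern, under the assumption that the module param is the only
policy input:

#include <linux/module.h>

static bool allow_unsafe_mappings;
module_param(allow_unsafe_mappings, bool, 0444);	/* load-time only */

/* Refuse to "pin" memory that can't truly be pinned, absent an opt-in. */
static int try_pin(bool pin_requested, bool pinnable)
{
	if (pin_requested && !pinnable && !allow_unsafe_mappings)
		return -EINVAL;
	return 0;
}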
From patchwork Thu Oct 10 18:23:38 2024
Subject: [PATCH v13 36/85] KVM: x86: Don't fault-in APIC access page during initial allocation
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:38 -0700
Message-ID: <20241010182427.1434605-37-seanjc@google.com>

Drop the gfn_to_page() lookup when installing KVM's internal memslot for
the APIC access page, as KVM doesn't need to immediately fault-in the page
now that the page isn't pinned.  In the extremely unlikely event the kernel
can't allocate a 4KiB page, KVM can just as easily return -EFAULT on the
future page fault.
Suggested-by: Paolo Bonzini
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/lapic.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 20526e4d6c62..65412640cfc7 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2647,7 +2647,6 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
 
 int kvm_alloc_apic_access_page(struct kvm *kvm)
 {
-	struct page *page;
 	void __user *hva;
 	int ret = 0;
 
@@ -2663,17 +2662,6 @@ int kvm_alloc_apic_access_page(struct kvm *kvm)
 		goto out;
 	}
 
-	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
-	if (!page) {
-		ret = -EFAULT;
-		goto out;
-	}
-
-	/*
-	 * Do not pin the page in memory, so that memory hot-unplug
-	 * is able to migrate it.
-	 */
-	put_page(page);
 	kvm->arch.apic_access_memslot_enabled = true;
 out:
 	mutex_unlock(&kvm->slots_lock);
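For reference, a sketch of the resulting allocation path; the elided middle
and the final return are inferred from the surrounding context rather than
quoted from the diff:

int kvm_alloc_apic_access_page(struct kvm *kvm)
{
	void __user *hva;
	int ret = 0;

	/* ... bail if already enabled, install the internal memslot ... */

	kvm->arch.apic_access_memslot_enabled = true;
out:
	mutex_unlock(&kvm->slots_lock);
	return ret;
}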
From patchwork Thu Oct 10 18:23:39 2024
Subject: [PATCH v13 37/85] KVM: x86/mmu: Add "mmu" prefix fault-in helpers to free up generic names
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:39 -0700
Message-ID: <20241010182427.1434605-38-seanjc@google.com>

Prefix x86's faultin_pfn helpers with "mmu" so that the mmu-less names can
be used by common KVM for similar APIs.

No functional change intended.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 19 ++++++++++---------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 28f2b842d6ca..e451e1b9a55a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4347,8 +4347,8 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 	return max_level;
 }
 
-static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
-				   struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
+				       struct kvm_page_fault *fault)
 {
 	int max_order, r;
 
@@ -4371,10 +4371,11 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	return RET_PF_CONTINUE;
 }
 
-static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
+				 struct kvm_page_fault *fault)
 {
 	if (fault->is_private)
-		return kvm_faultin_pfn_private(vcpu, fault);
+		return kvm_mmu_faultin_pfn_private(vcpu, fault);
 
 	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
 					  fault->write, &fault->map_writable);
@@ -4409,8 +4410,8 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	return RET_PF_CONTINUE;
 }
 
-static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-			   unsigned int access)
+static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
+			       struct kvm_page_fault *fault, unsigned int access)
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	int ret;
@@ -4493,7 +4494,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn))
 		return RET_PF_RETRY;
 
-	ret = __kvm_faultin_pfn(vcpu, fault);
+	ret = __kvm_mmu_faultin_pfn(vcpu, fault);
 	if (ret != RET_PF_CONTINUE)
 		return ret;
 
@@ -4570,7 +4571,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+	r = kvm_mmu_faultin_pfn(vcpu, fault, ACC_ALL);
 	if (r != RET_PF_CONTINUE)
 		return r;
 
@@ -4661,7 +4662,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+	r = kvm_mmu_faultin_pfn(vcpu, fault, ACC_ALL);
 	if (r != RET_PF_CONTINUE)
 		return r;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 633aedec3c2e..59e600f6ff9d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -235,7 +235,7 @@ struct kvm_page_fault {
 	/* The memslot containing gfn. May be NULL. */
 	struct kvm_memory_slot *slot;
 
-	/* Outputs of kvm_faultin_pfn. */
+	/* Outputs of kvm_mmu_faultin_pfn(). */
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
 	bool map_writable;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 143b7e9f26dc..9bd3d6f5db91 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -812,7 +812,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, walker.pte_access);
+	r = kvm_mmu_faultin_pfn(vcpu, fault, walker.pte_access);
 	if (r != RET_PF_CONTINUE)
 		return r;
From patchwork Thu Oct 10 18:23:40 2024
Subject: [PATCH v13 38/85] KVM: x86/mmu: Put direct prefetched pages via kvm_release_page_clean()
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:40 -0700
Message-ID: <20241010182427.1434605-39-seanjc@google.com>

Use kvm_release_page_clean() to put prefetched pages instead of calling
put_page() directly.  This will allow de-duplicating the prefetch code
between indirect and direct MMUs.

Note, there's a small functional change as kvm_release_page_clean() marks
the page/folio as accessed.  While it's not strictly guaranteed that the
guest will access the page, KVM won't intercept guest accesses, i.e. won't
mark the page accessed if it _is_ accessed by the guest (unless A/D bits
are disabled, but running without A/D bits is effectively limited to
pre-HSW Intel CPUs).
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e451e1b9a55a..62924f95a398 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2965,7 +2965,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	for (i = 0; i < ret; i++, gfn++, start++) {
 		mmu_set_spte(vcpu, slot, start, access, gfn,
 			     page_to_pfn(pages[i]), NULL);
-		put_page(pages[i]);
+		kvm_release_page_clean(pages[i]);
 	}
 
 	return 0;
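The behavioral delta is the accessed update: kvm_release_page_clean() marks
the page accessed before dropping its reference, whereas a bare put_page()
does not.  Roughly (an approximation of the helper, not code quoted from
this series):

void kvm_release_page_clean(struct page *page)
{
	if (!page)
		return;

	kvm_set_page_accessed(page);	/* the delta vs. a bare put_page() */
	put_page(page);
}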
From patchwork Thu Oct 10 18:23:41 2024
Subject: [PATCH v13 39/85] KVM: x86/mmu: Add common helper to handle prefetching SPTEs
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:41 -0700
Message-ID: <20241010182427.1434605-40-seanjc@google.com>

Deduplicate the prefetching code for indirect and direct MMUs.  The core
logic is the same, the only difference is that indirect MMUs need to
prefetch SPTEs one-at-a-time, as contiguous guest virtual addresses aren't
guaranteed to yield contiguous guest physical addresses.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 40 +++++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h | 13 +----------
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 62924f95a398..65d3a602eb2c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2943,32 +2943,41 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	return ret;
 }
 
-static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
-				    struct kvm_mmu_page *sp,
-				    u64 *start, u64 *end)
+static bool kvm_mmu_prefetch_sptes(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *sptep,
+				   int nr_pages, unsigned int access)
 {
 	struct page *pages[PTE_PREFETCH_NUM];
 	struct kvm_memory_slot *slot;
-	unsigned int access = sp->role.access;
-	int i, ret;
-	gfn_t gfn;
+	int i;
+
+	if (WARN_ON_ONCE(nr_pages > PTE_PREFETCH_NUM))
+		return false;
 
-	gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
 	if (!slot)
-		return -1;
+		return false;
 
-	ret = kvm_prefetch_pages(slot, gfn, pages, end - start);
-	if (ret <= 0)
-		return -1;
+	nr_pages = kvm_prefetch_pages(slot, gfn, pages, nr_pages);
+	if (nr_pages <= 0)
+		return false;
 
-	for (i = 0; i < ret; i++, gfn++, start++) {
-		mmu_set_spte(vcpu, slot, start, access, gfn,
+	for (i = 0; i < nr_pages; i++, gfn++, sptep++) {
+		mmu_set_spte(vcpu, slot, sptep, access, gfn,
 			     page_to_pfn(pages[i]), NULL);
 		kvm_release_page_clean(pages[i]);
 	}
 
-	return 0;
+	return true;
+}
+
+static bool direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
+				     struct kvm_mmu_page *sp,
+				     u64 *start, u64 *end)
+{
+	gfn_t gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
+	unsigned int access = sp->role.access;
+
+	return kvm_mmu_prefetch_sptes(vcpu, gfn, start, end - start, access);
 }
 
 static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
@@ -2986,8 +2995,9 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
 		if (is_shadow_present_pte(*spte) || spte == sptep) {
 			if (!start)
 				continue;
-			if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
+			if (!direct_pte_prefetch_many(vcpu, sp, start, spte))
 				return;
+
 			start = NULL;
 		} else if (!start)
 			start = spte;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 9bd3d6f5db91..a476a5428017 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -533,9 +533,7 @@ static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		     u64 *spte, pt_element_t gpte)
 {
-	struct kvm_memory_slot *slot;
 	unsigned pte_access;
-	struct page *page;
 	gfn_t gfn;
 
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
@@ -545,16 +543,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
-	if (!slot)
-		return false;
-
-	if (kvm_prefetch_pages(slot, gfn, &page, 1) != 1)
-		return false;
-
-	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
-	kvm_release_page_clean(page);
-	return true;
+	return kvm_mmu_prefetch_sptes(vcpu, gfn, spte, 1, pte_access);
 }
 
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
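The two call shapes of the new helper, pulled from the diff above: the
direct MMU covers a contiguous run of SPTEs in one call, while the indirect
(shadow) MMU prefetches exactly one SPTE per guest PTE, since contiguous
guest virtual addresses need not map contiguous guest physical addresses.

	/* Direct MMU: one call for the contiguous run [start, end). */
	kvm_mmu_prefetch_sptes(vcpu, gfn, start, end - start, access);

	/* Indirect MMU: one gpte at a time. */
	kvm_mmu_prefetch_sptes(vcpu, gfn, spte, 1, pte_access);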
From patchwork Thu Oct 10 18:23:42 2024
Subject: [PATCH v13 40/85] KVM: x86/mmu: Add helper to "finish" handling a guest page fault
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:42 -0700
Message-ID: <20241010182427.1434605-41-seanjc@google.com>

Add a helper to finish/complete the handling of a guest page fault, e.g.
to mark the pages accessed and put any held references.  In the near
future, this will allow improving the logic without having to copy+paste
changes into all page fault paths.  And in the less near future, will
allow sharing the "finish" API across all architectures.

No functional change intended.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 12 +++++++++---
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 65d3a602eb2c..31a6ae41a6f4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4357,6 +4357,12 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 	return max_level;
 }
 
+static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
+				      struct kvm_page_fault *fault, int r)
+{
+	kvm_release_pfn_clean(fault->pfn);
+}
+
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 				       struct kvm_page_fault *fault)
 {
@@ -4522,7 +4528,7 @@ static int kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * mmu_lock is acquired.
 	 */
 	if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn)) {
-		kvm_release_pfn_clean(fault->pfn);
+		kvm_mmu_finish_page_fault(vcpu, fault, RET_PF_RETRY);
 		return RET_PF_RETRY;
 	}
 
@@ -4598,8 +4604,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = direct_map(vcpu, fault);
 
 out_unlock:
+	kvm_mmu_finish_page_fault(vcpu, fault, r);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
@@ -4685,8 +4691,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
+	kvm_mmu_finish_page_fault(vcpu, fault, r);
 	read_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 #endif
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index a476a5428017..35d0c3f1a789 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -836,8 +836,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = FNAME(fetch)(vcpu, fault, &walker);
 
 out_unlock:
+	kvm_mmu_finish_page_fault(vcpu, fault, r);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
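Every fault path now ends with the same shape, and the ordering matters:
the finish hook runs before mmu_lock is dropped, which is what lets the
next patch move dirty-marking (which must be serialized against writeback)
into the helper.  The recurring pattern from the diff above:

out_unlock:
	kvm_mmu_finish_page_fault(vcpu, fault, r);	/* while mmu_lock is held */
	write_unlock(&vcpu->kvm->mmu_lock);		/* read_unlock() for the TDP MMU */
	return r;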
From patchwork Thu Oct 10 18:23:43 2024
Subject: [PATCH v13 41/85] KVM: x86/mmu: Mark pages/folios dirty at the origin of make_spte()
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:43 -0700
Message-ID: <20241010182427.1434605-42-seanjc@google.com>

Move the marking of folios dirty from make_spte() out to its callers,
which have access to the _struct page_, not just the underlying pfn.  Once
all architectures follow suit, this will allow removing KVM's ugly hack
where KVM elevates the refcount of VM_MIXEDMAP pfns that happen to be
struct page memory.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 30 ++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h |  5 +++++
 arch/x86/kvm/mmu/spte.c        | 11 -----------
 3 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 31a6ae41a6f4..f730870887dd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2964,7 +2964,17 @@ static bool kvm_mmu_prefetch_sptes(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *sptep,
 	for (i = 0; i < nr_pages; i++, gfn++, sptep++) {
 		mmu_set_spte(vcpu, slot, sptep, access, gfn,
 			     page_to_pfn(pages[i]), NULL);
-		kvm_release_page_clean(pages[i]);
+
+		/*
+		 * KVM always prefetches writable pages from the primary MMU,
+		 * and KVM can make its SPTE writable in the fast page fault
+		 * handler, without notifying the primary MMU.  Mark pages/
+		 * folios dirty now to ensure file data is written back if it
+		 * ends up being written by the guest.  Because KVM's
+		 * prefetching GUPs writable PTEs, the probability of
+		 * unnecessary writeback is extremely low.
+		 */
+		kvm_release_page_dirty(pages[i]);
 	}
 
 	return true;
@@ -4360,7 +4370,23 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 				      struct kvm_page_fault *fault, int r)
 {
-	kvm_release_pfn_clean(fault->pfn);
+	lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) ||
+			    r == RET_PF_RETRY);
+
+	/*
+	 * If the page that KVM got from the *primary MMU* is writable, and KVM
+	 * installed or reused a SPTE, mark the page/folio dirty.  Note, this
+	 * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if
+	 * the GFN is write-protected.  Folios can't be safely marked dirty
+	 * outside of mmu_lock as doing so could race with writeback on the
+	 * folio.  As a result, KVM can't mark folios dirty in the fast page
+	 * fault handler, and so KVM must (somewhat) speculatively mark the
+	 * folio dirty if KVM could locklessly make the SPTE writable.
+	 */
+	if (!fault->map_writable || r == RET_PF_RETRY)
+		kvm_release_pfn_clean(fault->pfn);
+	else
+		kvm_release_pfn_dirty(fault->pfn);
 }
 
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 35d0c3f1a789..f4711674c47b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -954,6 +954,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 			 spte_to_pfn(spte), spte, true, true,
 			 host_writable, &spte);
 
+	/*
+	 * There is no need to mark the pfn dirty, as the new protections must
+	 * be a subset of the old protections, i.e. synchronizing a SPTE cannot
+	 * change the SPTE from read-only to writable.
+	 */
 	return mmu_spte_update(sptep, spte);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8e8d6ee79c8b..f1a50a78badb 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -277,17 +277,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
-	/*
-	 * If the page that KVM got from the primary MMU is writable, i.e. if
-	 * it's host-writable, mark the page/folio dirty.  As alluded to above,
-	 * folios can't be safely marked dirty in the fast page fault handler,
-	 * and so KVM must (somewhat) speculatively mark the folio dirty even
-	 * though it isn't guaranteed to be written as KVM won't mark the folio
-	 * dirty if/when the SPTE is made writable.
-	 */
-	if (host_writable)
-		kvm_set_pfn_dirty(pfn);
-
 	*new_spte = spte;
 	return wrprot;
 }
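Isolating the release decision added above: the pfn is released dirty only
when the primary MMU mapping was writable and the fault wasn't retried,
i.e. only when KVM could have installed, reused, or locklessly created a
writable SPTE.

	/* From kvm_mmu_finish_page_fault() in the diff above. */
	if (!fault->map_writable || r == RET_PF_RETRY)
		kvm_release_pfn_clean(fault->pfn);
	else
		kvm_release_pfn_dirty(fault->pfn);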
From patchwork Thu Oct 10 18:23:44 2024
Subject: [PATCH v13 42/85] KVM: Move declarations of memslot accessors up in kvm_host.h
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:44 -0700
Message-ID: <20241010182427.1434605-43-seanjc@google.com>

Move the memslot lookup helpers further up in kvm_host.h so that they can
be used by inlined "to pfn" wrappers.

No functional change intended.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9263375d0362..346bfef14e5a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1168,6 +1168,10 @@ static inline bool kvm_memslot_iter_is_valid(struct kvm_memslot_iter *iter, gfn_
 	     kvm_memslot_iter_is_valid(iter, end);			\
 	     kvm_memslot_iter_next(iter))
 
+struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
+struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
+struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+
 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
  * - create a new memory slot
@@ -1303,15 +1307,13 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 })
 
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
-struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
-struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
-struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
From patchwork Thu Oct 10 18:23:45 2024
Subject: [PATCH v13 43/85] KVM: Add kvm_faultin_pfn() to specifically service guest page faults
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:45 -0700
Message-ID: <20241010182427.1434605-44-seanjc@google.com>

Add a new dedicated API, kvm_faultin_pfn(), for servicing guest page
faults, i.e. for getting pages/pfns that will be mapped into the guest via
an mmu_notifier-protected KVM MMU.  Keep struct kvm_follow_pfn buried in
internal code, as having __kvm_faultin_pfn() take "out" params is actually
cleaner for several architectures, e.g. it allows the caller to have its
own "page fault" structure without having to marshal data to/from
kvm_follow_pfn.

Long term, common KVM would ideally provide a kvm_page_fault structure, a
la x86's struct of the same name.  But all architectures need to be
converted to a common API before that can happen.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 12 ++++++++++++
 virt/kvm/kvm_main.c      | 22 ++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 346bfef14e5a..3b9afb40e935 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1231,6 +1231,18 @@ static inline void kvm_release_page_unused(struct page *page)
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
+			    unsigned int foll, bool *writable,
+			    struct page **refcounted_page);
From patchwork Thu Oct 10 18:23:45 2024
X-Patchwork-Id: 13830788
Subject: [PATCH v13 43/85] KVM: Add kvm_faultin_pfn() to specifically service guest page faults
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:45 -0700
Message-ID: <20241010182427.1434605-44-seanjc@google.com>

Add a new dedicated API, kvm_faultin_pfn(), for servicing guest page
faults, i.e. for getting pages/pfns that will be mapped into the guest
via an mmu_notifier-protected KVM MMU.  Keep struct kvm_follow_pfn
buried in internal code, as having __kvm_faultin_pfn() take "out" params
is actually cleaner for several architectures, e.g. it allows the caller
to have its own "page fault" structure without having to marshal data
to/from kvm_follow_pfn.

Long term, common KVM would ideally provide a kvm_page_fault structure,
a la x86's struct of the same name.  But all architectures need to be
converted to a common API before that can happen.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 12 ++++++++++++
 virt/kvm/kvm_main.c      | 22 ++++++++++++++++++++++
 2 files changed, 34 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 346bfef14e5a..3b9afb40e935 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1231,6 +1231,18 @@ static inline void kvm_release_page_unused(struct page *page)
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
+			    unsigned int foll, bool *writable,
+			    struct page **refcounted_page);
+
+static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
+					bool write, bool *writable,
+					struct page **refcounted_page)
+{
+	return __kvm_faultin_pfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn,
+				 write ? FOLL_WRITE : 0, writable, refcounted_page);
+}
+
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6dcb4f0eed3e..696d5e429b3e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3098,6 +3098,28 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
+kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
+			    unsigned int foll, bool *writable,
+			    struct page **refcounted_page)
+{
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = foll,
+		.map_writable = writable,
+		.refcounted_page = refcounted_page,
+	};
+
+	if (WARN_ON_ONCE(!writable || !refcounted_page))
+		return KVM_PFN_ERR_FAULT;
+
+	*writable = false;
+	*refcounted_page = NULL;
+
+	return kvm_follow_pfn(&kfp);
+}
+EXPORT_SYMBOL_GPL(__kvm_faultin_pfn);
+
 int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 		       struct page **pages, int nr_pages)
 {
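For illustration, a minimal sketch of how an architecture's page fault
path might consume the new API.  The function below is hypothetical;
only kvm_faultin_pfn(), is_error_noslot_pfn() and
kvm_release_page_clean() are the real helpers:

/*
 * Hypothetical consumer, for illustration only: fault in the pfn backing
 * @gfn for write, install the translation, then drop the page reference
 * that kvm_faultin_pfn() handed back.
 */
static int example_fault_in(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct page *refcounted_page = NULL;
	bool writable = false;
	kvm_pfn_t pfn;

	pfn = kvm_faultin_pfn(vcpu, gfn, true, &writable, &refcounted_page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/* ... install the gfn => pfn translation under mmu_lock ... */

	if (refcounted_page)
		kvm_release_page_clean(refcounted_page);
	return 0;
}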
From patchwork Thu Oct 10 18:23:46 2024
X-Patchwork-Id: 13830789
Subject: [PATCH v13 44/85] KVM: x86/mmu: Convert page fault paths to kvm_faultin_pfn()
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:46 -0700
Message-ID: <20241010182427.1434605-45-seanjc@google.com>

Convert KVM x86 to use the recently introduced __kvm_faultin_pfn().
Opportunistically capture the refcounted_page grabbed by KVM for use in
future changes.

No functional change intended.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 14 ++++++++++----
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f730870887dd..2e2076287aaf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4416,11 +4416,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 				 struct kvm_page_fault *fault)
 {
+	unsigned int foll = fault->write ? FOLL_WRITE : 0;
+
 	if (fault->is_private)
 		return kvm_mmu_faultin_pfn_private(vcpu, fault);
 
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, false, true,
-					  fault->write, &fault->map_writable);
+	foll |= FOLL_NOWAIT;
+	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
+				       &fault->map_writable, &fault->refcounted_page);
 
 	/*
 	 * If resolving the page failed because I/O is needed to fault-in the
@@ -4447,8 +4450,11 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
 	 */
-	fault->pfn = __gfn_to_pfn_memslot(fault->slot, fault->gfn, true, true,
-					  fault->write, &fault->map_writable);
+	foll |= FOLL_INTERRUPTIBLE;
+	foll &= ~FOLL_NOWAIT;
+	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
+				       &fault->map_writable, &fault->refcounted_page);
+
 	return RET_PF_CONTINUE;
 }

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 59e600f6ff9d..fabbea504a69 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -238,6 +238,7 @@ struct kvm_page_fault {
 	/* Outputs of kvm_mmu_faultin_pfn(). */
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
+	struct page *refcounted_page;
 	bool map_writable;
 
 	/*
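Condensing the two hunks above, the faultin flow now composes gup flags
in two phases; a sketch, with the error propagation between the two
attempts elided (see the hunks for the real logic):

/*
 * Phase 1: try without sleeping (FOLL_NOWAIT) so that an async #PF can
 * be set up if resolving the page requires I/O.
 *
 *	foll = fault->write ? FOLL_WRITE : 0;
 *	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn,
 *				       foll | FOLL_NOWAIT, ...);
 *
 * Phase 2: if the fault must be resolved synchronously, retry with a
 * sleepable, signal-aware attempt (FOLL_INTERRUPTIBLE, no FOLL_NOWAIT).
 *
 *	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn,
 *				       foll | FOLL_INTERRUPTIBLE, ...);
 */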
From patchwork Thu Oct 10 18:23:47 2024
X-Patchwork-Id: 13830790
Subject: [PATCH v13 45/85] KVM: guest_memfd: Pass index, not gfn, to __kvm_gmem_get_pfn()
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:47 -0700
Message-ID: <20241010182427.1434605-46-seanjc@google.com>

Refactor guest_memfd usage of __kvm_gmem_get_pfn() to pass the index
into the guest_memfd file instead of the gfn, i.e. resolve the index
based on the slot+gfn in the caller instead of in __kvm_gmem_get_pfn().
This will allow kvm_gmem_get_pfn() to retrieve and return the specific
"struct page", which requires the index into the folio, without redoing
the index calculation multiple times (which isn't costly, just hard to
follow).

Opportunistically add a kvm_gmem_get_index() helper to make the
copy+pasted code easier to understand.
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_memfd.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8f079a61a56d..8a878e57c5d4 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -302,6 +302,11 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
 	return get_file_active(&slot->gmem.file);
 }
 
+static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return gfn - slot->base_gfn + slot->gmem.pgoff;
+}
+
 static struct file_operations kvm_gmem_fops = {
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
@@ -551,12 +556,11 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 }
 
 /* Returns a locked folio on success.  */
-static struct folio *
-__kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
-		   gfn_t gfn, kvm_pfn_t *pfn, bool *is_prepared,
-		   int *max_order)
+static struct folio *__kvm_gmem_get_pfn(struct file *file,
+					struct kvm_memory_slot *slot,
+					pgoff_t index, kvm_pfn_t *pfn,
+					bool *is_prepared, int *max_order)
 {
-	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem = file->private_data;
 	struct folio *folio;
 
@@ -592,6 +596,7 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
 {
+	pgoff_t index = kvm_gmem_get_index(slot, gfn);
 	struct file *file = kvm_gmem_get_file(slot);
 	struct folio *folio;
 	bool is_prepared = false;
@@ -600,7 +605,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	if (!file)
 		return -EFAULT;
 
-	folio = __kvm_gmem_get_pfn(file, slot, gfn, pfn, &is_prepared, max_order);
+	folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &is_prepared, max_order);
 	if (IS_ERR(folio)) {
 		r = PTR_ERR(folio);
 		goto out;
@@ -648,6 +653,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 	for (i = 0; i < npages; i += (1 << max_order)) {
 		struct folio *folio;
 		gfn_t gfn = start_gfn + i;
+		pgoff_t index = kvm_gmem_get_index(slot, gfn);
 		bool is_prepared = false;
 		kvm_pfn_t pfn;
 
@@ -656,7 +662,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 			break;
 		}
 
-		folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &is_prepared, &max_order);
+		folio = __kvm_gmem_get_pfn(file, slot, index, &pfn, &is_prepared, &max_order);
 		if (IS_ERR(folio)) {
 			ret = PTR_ERR(folio);
 			break;
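A worked example of the index math in kvm_gmem_get_index(), with
made-up values for illustration:

/*
 * Hypothetical binding: a memslot bound to a gmem range at file offset
 * 16, covering guest frames starting at base_gfn 0x1000.
 *
 *	slot->base_gfn   = 0x1000
 *	slot->gmem.pgoff = 16
 *	gfn              = 0x1005
 *
 *	index = gfn - slot->base_gfn + slot->gmem.pgoff
 *	      = 0x1005 - 0x1000 + 16
 *	      = 21
 */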
From patchwork Thu Oct 10 18:23:48 2024
X-Patchwork-Id: 13830791
Subject: [PATCH v13 46/85] KVM: guest_memfd: Provide "struct page" as output from kvm_gmem_get_pfn()
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:48 -0700
Message-ID: <20241010182427.1434605-47-seanjc@google.com>
Provide the "struct page" associated with a guest_memfd pfn as an output
from __kvm_gmem_get_pfn() so that KVM guest page fault handlers can
directly put the page instead of having to rely on
kvm_pfn_to_refcounted_page().

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c   |  2 +-
 arch/x86/kvm/svm/sev.c   | 10 ++++++----
 include/linux/kvm_host.h |  6 ++++--
 virt/kvm/guest_memfd.c   |  8 ++++++--
 4 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2e2076287aaf..a038cde74f0d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4400,7 +4400,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &max_order);
+			     &fault->refcounted_page, &max_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4557ff3804ae..c6c852485900 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3849,6 +3849,7 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 	if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) {
 		gfn_t gfn = gpa_to_gfn(svm->sev_es.snp_vmsa_gpa);
 		struct kvm_memory_slot *slot;
+		struct page *page;
 		kvm_pfn_t pfn;
 
 		slot = gfn_to_memslot(vcpu->kvm, gfn);
@@ -3859,7 +3860,7 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 		 * The new VMSA will be private memory guest memory, so
 		 * retrieve the PFN from the gmem backend.
 		 */
-		if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, NULL))
+		if (kvm_gmem_get_pfn(vcpu->kvm, slot, gfn, &pfn, &page, NULL))
 			return -EINVAL;
 
 		/*
@@ -3888,7 +3889,7 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 		 * changes then care should be taken to ensure
 		 * svm->sev_es.vmsa is pinned through some other means.
 		 */
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_clean(page);
 	}
 
 	/*
@@ -4688,6 +4689,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 	struct kvm_memory_slot *slot;
 	struct kvm *kvm = vcpu->kvm;
 	int order, rmp_level, ret;
+	struct page *page;
 	bool assigned;
 	kvm_pfn_t pfn;
 	gfn_t gfn;
@@ -4714,7 +4716,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 		return;
 	}
 
-	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &order);
+	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &page, &order);
 	if (ret) {
 		pr_warn_ratelimited("SEV: Unexpected RMP fault, no backing page for private GPA 0x%llx\n",
 				    gpa);
@@ -4772,7 +4774,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 out:
 	trace_kvm_rmp_fault(vcpu, gpa, pfn, error_code, rmp_level, ret);
 out_no_trace:
-	put_page(pfn_to_page(pfn));
+	kvm_release_page_unused(page);
 }
 
 static bool is_pfn_range_shared(kvm_pfn_t start, kvm_pfn_t end)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3b9afb40e935..504483d35197 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2490,11 +2490,13 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
-				   kvm_pfn_t *pfn, int *max_order)
+				   kvm_pfn_t *pfn, struct page **page,
+				   int *max_order)
 {
 	KVM_BUG_ON(1, kvm);
 	return -EIO;

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8a878e57c5d4..47a9f68f7b24 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -594,7 +594,8 @@ static struct folio *__kvm_gmem_get_pfn(struct file *file,
 }
 
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order)
 {
 	pgoff_t index = kvm_gmem_get_index(slot, gfn);
 	struct file *file = kvm_gmem_get_file(slot);
@@ -615,7 +616,10 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
 
 	folio_unlock(folio);
-	if (r < 0)
+
+	if (!r)
+		*page = folio_file_page(folio, index);
+	else
 		folio_put(folio);
 
 out:
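For illustration, a sketch of a hypothetical caller under the new
contract; kvm_gmem_get_pfn() and kvm_release_page_clean() are the real
APIs, the surrounding function is not:

/*
 * Hypothetical consumer: with this change the backing page comes
 * straight out of kvm_gmem_get_pfn(), so the caller can release it
 * directly instead of round-tripping through the pfn.
 */
static int example_use_gmem_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
				gfn_t gfn)
{
	struct page *page;
	kvm_pfn_t pfn;
	int max_order;
	int r;

	r = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &page, &max_order);
	if (r)
		return r;

	/* ... consume the pfn, e.g. map it into the guest ... */

	kvm_release_page_clean(page);
	return 0;
}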
From patchwork Thu Oct 10 18:23:49 2024
X-Patchwork-Id: 13830792
Subject: [PATCH v13 47/85] KVM: x86/mmu: Put refcounted pages instead of blindly releasing pfns
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:49 -0700
Message-ID: <20241010182427.1434605-48-seanjc@google.com>
Now that all x86 page fault paths precisely track refcounted pages, use
kvm_page_fault.refcounted_page to put references to struct page memory
when finishing page faults.  This is a baby step towards eliminating
kvm_pfn_to_refcounted_page().

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a038cde74f0d..f9b7e3a7370f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4373,6 +4373,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 	lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) ||
 			    r == RET_PF_RETRY);
 
+	if (!fault->refcounted_page)
+		return;
+
 	/*
 	 * If the page that KVM got from the *primary MMU* is writable, and KVM
 	 * installed or reused a SPTE, mark the page/folio dirty.  Note, this
@@ -4384,9 +4387,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 	 * folio dirty if KVM could locklessly make the SPTE writable.
 	 */
 	if (!fault->map_writable || r == RET_PF_RETRY)
-		kvm_release_pfn_clean(fault->pfn);
+		kvm_release_page_clean(fault->refcounted_page);
 	else
-		kvm_release_pfn_dirty(fault->pfn);
+		kvm_release_page_dirty(fault->refcounted_page);
 }
 
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
From patchwork Thu Oct 10 18:23:50 2024
X-Patchwork-Id: 13830793
Subject: [PATCH v13 48/85] KVM: x86/mmu: Don't mark unused faultin pages as accessed
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:50 -0700
Message-ID: <20241010182427.1434605-49-seanjc@google.com>

When finishing guest page faults, don't mark pages as accessed if KVM
is resuming the guest _without_ installing a mapping, i.e. if the page
isn't being used.
While it's possible that marking the page accessed could avoid minor
thrashing due to reclaiming a page that the guest is about to access,
it's far more likely that the gfn=>pfn mapping was invalidated, e.g. due
to a memslot change, or because the corresponding VMA is being modified.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f9b7e3a7370f..e14b84d2f55b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4386,7 +4386,9 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 	 * fault handler, and so KVM must (somewhat) speculatively mark the
 	 * folio dirty if KVM could locklessly make the SPTE writable.
 	 */
-	if (!fault->map_writable || r == RET_PF_RETRY)
+	if (r == RET_PF_RETRY)
+		kvm_release_page_unused(fault->refcounted_page);
+	else if (!fault->map_writable)
 		kvm_release_page_clean(fault->refcounted_page);
 	else
 		kvm_release_page_dirty(fault->refcounted_page);
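For reference, the resulting release policy can be summarized as a
sketch of the three-way branch above:

/*
 * How the backing page is released at the end of a fault after this
 * change:
 *
 *	r == RET_PF_RETRY (no mapping installed) -> kvm_release_page_unused()
 *	mapping installed, !fault->map_writable  -> kvm_release_page_clean()
 *	mapping installed,  fault->map_writable  -> kvm_release_page_dirty()
 */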
From patchwork Thu Oct 10 18:23:51 2024
X-Patchwork-Id: 13830794
Subject: [PATCH v13 49/85] KVM: Move x86's API to release a faultin page to common KVM
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:51 -0700
Message-ID: <20241010182427.1434605-50-seanjc@google.com>

Move KVM x86's helper that "finishes" the faultin process to common KVM
so that the logic can be shared across all architectures.  Note, not all
architectures implement a fast page fault path, but the gist of the
comment applies to all architectures.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c   | 24 ++----------------------
 include/linux/kvm_host.h | 26 ++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e14b84d2f55b..5acdaf3b1007 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4370,28 +4370,8 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 				      struct kvm_page_fault *fault, int r)
 {
-	lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) ||
-			    r == RET_PF_RETRY);
-
-	if (!fault->refcounted_page)
-		return;
-
-	/*
-	 * If the page that KVM got from the *primary MMU* is writable, and KVM
-	 * installed or reused a SPTE, mark the page/folio dirty.  Note, this
-	 * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if
-	 * the GFN is write-protected.  Folios can't be safely marked dirty
-	 * outside of mmu_lock as doing so could race with writeback on the
-	 * folio.  As a result, KVM can't mark folios dirty in the fast page
-	 * fault handler, and so KVM must (somewhat) speculatively mark the
-	 * folio dirty if KVM could locklessly make the SPTE writable.
-	 */
-	if (r == RET_PF_RETRY)
-		kvm_release_page_unused(fault->refcounted_page);
-	else if (!fault->map_writable)
-		kvm_release_page_clean(fault->refcounted_page);
-	else
-		kvm_release_page_dirty(fault->refcounted_page);
+	kvm_release_faultin_page(vcpu->kvm, fault->refcounted_page,
+				 r == RET_PF_RETRY, fault->map_writable);
 }
 
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 504483d35197..9f7682ece4a1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1231,6 +1231,32 @@ static inline void kvm_release_page_unused(struct page *page)
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+static inline void kvm_release_faultin_page(struct kvm *kvm, struct page *page,
+					    bool unused, bool dirty)
+{
+	lockdep_assert_once(lockdep_is_held(&kvm->mmu_lock) || unused);
+
+	if (!page)
+		return;
+
+	/*
+	 * If the page that KVM got from the *primary MMU* is writable, and KVM
+	 * installed or reused a SPTE, mark the page/folio dirty.  Note, this
+	 * may mark a folio dirty even if KVM created a read-only SPTE, e.g. if
+	 * the GFN is write-protected.  Folios can't be safely marked dirty
+	 * outside of mmu_lock as doing so could race with writeback on the
+	 * folio.  As a result, KVM can't mark folios dirty in the fast page
+	 * fault handler, and so KVM must (somewhat) speculatively mark the
+	 * folio dirty if KVM could locklessly make the SPTE writable.
+	 */
+	if (unused)
+		kvm_release_page_unused(page);
+	else if (dirty)
+		kvm_release_page_dirty(page);
+	else
+		kvm_release_page_clean(page);
+}
+
 kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
 			    unsigned int foll, bool *writable,
 			    struct page **refcounted_page);
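For illustration, a sketch of how a non-x86 architecture could adopt the
common helper once converted to the faultin API; the structure and
function below are hypothetical stand-ins for per-arch state:

/* Hypothetical arch glue, mirroring x86's kvm_mmu_finish_page_fault(). */
struct arch_page_fault {
	struct page *refcounted_page;
	bool map_writable;
};

static void arch_finish_fault(struct kvm_vcpu *vcpu,
			      struct arch_page_fault *fault, bool retry)
{
	/* "retry" == true means no mapping was installed, page goes unused. */
	kvm_release_faultin_page(vcpu->kvm, fault->refcounted_page,
				 retry, fault->map_writable);
}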
From patchwork Thu Oct 10 18:23:52 2024
X-Patchwork-Id: 13830795
Subject: [PATCH v13 50/85] KVM: VMX: Hold mmu_lock until page is released when updating APIC access page
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:23:52 -0700
Message-ID: <20241010182427.1434605-51-seanjc@google.com>

Hold mmu_lock across kvm_release_pfn_clean() when refreshing the APIC
access page address to ensure that KVM doesn't mark a page/folio as
accessed after it has been unmapped.  Practically speaking marking a
folio accessed is benign in this scenario, as KVM does hold a reference
(it's really just marking folios dirty that is problematic), but there's
no reason not to be paranoid (moving the APIC access page isn't a hot
path), and no reason to be different from other mmu_notifier-protected
flows in KVM.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1a4438358c5e..851be0820e04 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6832,25 +6832,22 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 		return;
 
 	read_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn))
 		kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
-		read_unlock(&vcpu->kvm->mmu_lock);
-		goto out;
-	}
+	else
+		vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
 
-	vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
-	read_unlock(&vcpu->kvm->mmu_lock);
-
-	/*
-	 * No need for a manual TLB flush at this point, KVM has already done a
-	 * flush if there were SPTEs pointing at the previous page.
-	 */
-out:
 	/*
 	 * Do not pin apic access page in memory, the MMU notifier
 	 * will call us again if it is migrated or swapped out.
 	 */
 	kvm_release_pfn_clean(pfn);
+
+	/*
+	 * No need for a manual TLB flush at this point, KVM has already done a
+	 * flush if there were SPTEs pointing at the previous page.
+	 */
+	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
 void vmx_hwapic_isr_update(int max_isr)
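The window being closed, sketched as a timeline (editorial illustration, not code from the patch):

	/* vCPU thread				mmu_notifier thread
	 * -----------				-------------------
	 * read_unlock(&kvm->mmu_lock);
	 *					invalidate range, unmap pfn,
	 *					page migrated / swapped out
	 * kvm_release_pfn_clean(pfn);	<-- marks a stale folio accessed
	 *
	 * Releasing the page before read_unlock() forces the invalidation
	 * to wait, so the accessed update always hits a live mapping.
	 */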
From patchwork Thu Oct 10 18:23:53 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830796
Date: Thu, 10 Oct 2024 11:23:53 -0700
Message-ID: <20241010182427.1434605-52-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 51/85] KVM: VMX: Use __kvm_faultin_pfn() to get APIC access page/pfn
From: Sean Christopherson

Use __kvm_faultin_pfn() to get the APIC access page so that KVM can
precisely release the refcounted page, i.e. to remove yet another user
of kvm_pfn_to_refcounted_page().  While the path isn't handling a guest
page fault, the semantics are effectively the same; KVM just happens to
be mapping the pfn into a VMCS field instead of a secondary MMU.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 851be0820e04..44cc25dfebba 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6790,8 +6790,10 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	struct kvm_memory_slot *slot;
+	struct page *refcounted_page;
 	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
+	bool writable;
 
 	/* Defer reload until vmcs01 is the current VMCS. */
 	if (is_guest_mode(vcpu)) {
@@ -6827,7 +6829,7 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 	 * controls the APIC-access page memslot, and only deletes the memslot
 	 * if APICv is permanently inhibited, i.e. the memslot won't reappear.
 	 */
-	pfn = gfn_to_pfn_memslot(slot, gfn);
+	pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &writable, &refcounted_page);
 	if (is_error_noslot_pfn(pfn))
 		return;
 
@@ -6838,10 +6840,13 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 		vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
 
 	/*
-	 * Do not pin apic access page in memory, the MMU notifier
-	 * will call us again if it is migrated or swapped out.
+	 * Do not pin the APIC access page in memory so that it can be freely
+	 * migrated, the MMU notifier will call us again if it is migrated or
+	 * swapped out.  KVM backs the memslot with anonymous memory, the pfn
+	 * should always point at a refcounted page (if the pfn is valid).
 	 */
-	kvm_release_pfn_clean(pfn);
+	if (!WARN_ON_ONCE(!refcounted_page))
+		kvm_release_page_clean(refcounted_page);
 
 	/*
 	 * No need for a manual TLB flush at this point, KVM has already done a
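One detail worth calling out: the writable out-parameter of __kvm_faultin_pfn() is optional. The VMX caller above passes &writable and simply ignores the result, while the e500 conversion later in this series passes NULL outright; both forms below are taken from this series' diffs:

	/* Result available but unused (as in the VMX caller above). */
	pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &writable, &refcounted_page);

	/* No interest in writability at all (as in the e500 conversion). */
	pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);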
From patchwork Thu Oct 10 18:23:54 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830797
Date: Thu, 10 Oct 2024 11:23:54 -0700
Message-ID: <20241010182427.1434605-53-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 52/85] KVM: PPC: e500: Mark "struct page" dirty in kvmppc_e500_shadow_map()
From: Sean Christopherson

Mark the underlying page as dirty in kvmppc_e500_ref_setup()'s sole
caller, kvmppc_e500_shadow_map(), which will allow converting e500 to
__kvm_faultin_pfn() + kvm_release_faultin_page() without having to do
a weird dance between ref_setup() and shadow_map().

Opportunistically drop the redundant kvm_set_pfn_accessed(), as
shadow_map() puts the page via kvm_release_pfn_clean().
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/e500_mmu_host.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c664fdec75b1..5c2adfd19e12 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
 	return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
 }
 
-static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 				struct kvm_book3e_206_tlb_entry *gtlbe,
 				kvm_pfn_t pfn, unsigned int wimg)
 {
@@ -252,11 +252,7 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 	/* Use guest supplied MAS2_G and MAS2_E */
 	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
 
-	/* Mark the page accessed */
-	kvm_set_pfn_accessed(pfn);
-
-	if (tlbe_is_writable(gtlbe))
-		kvm_set_pfn_dirty(pfn);
+	return tlbe_is_writable(gtlbe);
 }
 
 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
@@ -337,6 +333,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	unsigned int wimg = 0;
 	pgd_t *pgdir;
 	unsigned long flags;
+	bool writable = false;
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
@@ -490,7 +487,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 			goto out;
 		}
 	}
-	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+	writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe);
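The net effect of the refactor, condensed from the diff above (comments are editorial):

	/* Before: ref_setup() updated accessed/dirty state itself. */
	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);

	/* After: ref_setup() only reports writability; the caller,
	 * which owns the page's lifecycle, marks it dirty. */
	writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
	if (writable)
		kvm_set_pfn_dirty(pfn);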
From patchwork Thu Oct 10 18:23:55 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830798
Date: Thu, 10 Oct 2024 11:23:55 -0700
Message-ID: <20241010182427.1434605-54-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 53/85] KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest
memory so that shadow_map() can convert to kvm_release_faultin_page()
without tripping its lockdep assertion on mmu_lock being held.  Marking
pages accessed outside of mmu_lock is ok (not great, but safe), but
marking pages _dirty_ outside of mmu_lock can make filesystems unhappy.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/e500_mmu_host.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 5c2adfd19e12..334dd96f8081 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -498,11 +498,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	kvmppc_mmu_flush_icache(pfn);
 
 out:
-	spin_unlock(&kvm->mmu_lock);
-
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);
-
+	spin_unlock(&kvm->mmu_lock);
 	return ret;
 }
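The resulting ordering in shadow_map(), with the rationale spelled out (annotation added; the code itself matches the diff):

	out:
		/*
		 * Drop the refcount, which also marks the folio accessed,
		 * while mmu_lock is still held; once the lock is dropped,
		 * an mmu_notifier invalidation is free to unmap the page.
		 */
		kvm_release_pfn_clean(pfn);
		spin_unlock(&kvm->mmu_lock);
		return ret;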
From patchwork Thu Oct 10 18:23:56 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830799
Date: Thu, 10 Oct 2024 11:23:56 -0700
Message-ID: <20241010182427.1434605-55-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 54/85] KVM: PPC: e500: Use __kvm_faultin_pfn() to handle page faults
From: Sean Christopherson

Convert PPC e500 to use __kvm_faultin_pfn()+kvm_release_faultin_page(),
and continue the inexorable march towards the demise of
kvm_pfn_to_refcounted_page().
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/e500_mmu_host.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 334dd96f8081..e5a145b578a4 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -322,6 +322,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 {
 	struct kvm_memory_slot *slot;
 	unsigned long pfn = 0; /* silence GCC warning */
+	struct page *page = NULL;
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
@@ -443,7 +444,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 
 	if (likely(!pfnmap)) {
 		tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
-		pfn = gfn_to_pfn_memslot(slot, gfn);
+		pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
 		if (is_error_noslot_pfn(pfn)) {
 			if (printk_ratelimit())
 				pr_err("%s: real page not found for gfn %lx\n",
@@ -488,8 +489,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 		}
 	}
 	writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
-	if (writable)
-		kvm_set_pfn_dirty(pfn);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ref, gvaddr, stlbe);
 
@@ -498,8 +497,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	kvmppc_mmu_flush_icache(pfn);
 
 out:
-	/* Drop refcount on page, so that mmu notifiers can clear it */
-	kvm_release_pfn_clean(pfn);
+	kvm_release_faultin_page(kvm, page, !!ret, writable);
 	spin_unlock(&kvm->mmu_lock);
 	return ret;
 }
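How the two boolean arguments map onto the kvm_release_faultin_page() helper shown at the top of this excerpt (annotation added for clarity):

	/*
	 * unused == !!ret:    the fault failed, no TLB entry was created,
	 *                     so release the page without touching its
	 *                     accessed/dirty state.
	 * dirty  == writable: a writable mapping was installed, so mark
	 *                     the folio dirty; otherwise just accessed.
	 */
	kvm_release_faultin_page(kvm, page, !!ret, writable);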
From patchwork Thu Oct 10 18:23:57 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830800
Date: Thu, 10 Oct 2024 11:23:57 -0700
Message-ID: <20241010182427.1434605-56-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 55/85] KVM: arm64: Mark "struct page" pfns accessed/dirty before dropping mmu_lock
From: Sean Christopherson

Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking
a page/folio dirty after it has been written back can make some
filesystems unhappy (backing KVM guests with such filesystem files is
uncommon, and the race is minuscule, hence the lack of complaints).

While scary sounding, practically speaking the worst case scenario is
that KVM would trigger this WARN in filemap_unaccount_folio():

	/*
	 * At this point folio must be either written or cleaned by
	 * truncate.  Dirty folio here signals a bug and loss of
	 * unwritten data - on ordinary filesystems.
	 *
	 * But it's harmless on in-memory filesystems like tmpfs; and can
	 * occur when a driver which did get_user_pages() sets page dirty
	 * before putting it, while the inode is being finally evicted.
	 *
	 * Below fixes dirty accounting after removing the folio entirely
	 * but leaves the dirty flag set: it has no effect for truncated
	 * folio and anyway will be cleared before returning folio to
	 * buddy allocator.
	 */
	if (WARN_ON_ONCE(folio_test_dirty(folio) &&
			 mapping_can_writeback(mapping)))
		folio_account_cleaned(folio, inode_to_wb(mapping->host));

KVM won't actually write memory because the stage-2 mappings are
protected by the mmu_notifier, i.e. there is no risk of loss of data,
even if the VM were backed by memory that needs writeback.  See the
link below for additional details.

This will also allow converting arm64 to kvm_release_faultin_page(),
which requires that mmu_lock be held (for the aforementioned reason).

Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index dd221587fcca..ecc6c2b56c43 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1692,15 +1692,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 out_unlock:
+	if (writable && !ret)
+		kvm_release_pfn_dirty(pfn);
+	else
+		kvm_release_pfn_clean(pfn);
+
 	read_unlock(&kvm->mmu_lock);
 
 	/* Mark the page dirty only if the fault is handled successfully */
-	if (writable && !ret) {
-		kvm_set_pfn_dirty(pfn);
+	if (writable && !ret)
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
-	}
 
-	kvm_release_pfn_clean(pfn);
 	return ret != -EAGAIN ? ret : 0;
 }
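Note the split this creates: the folio's accessed/dirty state is now updated strictly under mmu_lock, while the dirty-log update, which never touches the folio, deliberately stays outside it (condensed from the diff, comments editorial):

	out_unlock:
		if (writable && !ret)
			kvm_release_pfn_dirty(pfn);	/* folio state: under mmu_lock */
		else
			kvm_release_pfn_clean(pfn);
		read_unlock(&kvm->mmu_lock);

		/* Dirty log only, no folio access, so no lock needed. */
		if (writable && !ret)
			mark_page_dirty_in_slot(kvm, memslot, gfn);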
From patchwork Thu Oct 10 18:23:58 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830801
Date: Thu, 10 Oct 2024 11:23:58 -0700
Message-ID: <20241010182427.1434605-57-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 56/85] KVM: arm64: Use __kvm_faultin_pfn() to handle memory aborts
From: Sean Christopherson

Convert arm64 to use __kvm_faultin_pfn()+kvm_release_faultin_page().
Three down, six to go.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ecc6c2b56c43..4054356c9712 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1439,6 +1439,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
+	struct page *page;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1560,7 +1561,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to
+	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
 	 * acquiring kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
@@ -1569,8 +1570,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   write_fault, &writable);
+	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
+				&writable, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
@@ -1583,7 +1584,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * If the page was identified as device early by looking at
 	 * the VMA flags, vma_pagesize is already representing the
 	 * largest quantity we can map.  If instead it was mapped
-	 * via gfn_to_pfn_prot(), vma_pagesize is set to PAGE_SIZE
+	 * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
 	 * and must not be upgraded.
 	 *
 	 * In both cases, we don't let transparent_hugepage_adjust()
@@ -1692,11 +1693,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 out_unlock:
-	if (writable && !ret)
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
-
+	kvm_release_faultin_page(kvm, page, !!ret, writable);
 	read_unlock(&kvm->mmu_lock);
 
 	/* Mark the page dirty only if the fault is handled successfully */
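One subtlety in this conversion: user_mem_abort() squashes -EAGAIN to 0 for its caller, yet passes !!ret to the release helper, so a retried fault is still released as "unused" (annotation added):

	/*
	 * ret != 0 covers both hard failures and -EAGAIN retries; in
	 * either case no stage-2 PTE was installed, so the page must
	 * not be marked accessed or dirty, even though the function
	 * may ultimately return 0.
	 */
	kvm_release_faultin_page(kvm, page, !!ret, writable);
	read_unlock(&kvm->mmu_lock);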
From patchwork Thu Oct 10 18:23:59 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830802
Date: Thu, 10 Oct 2024 11:23:59 -0700
Message-ID: <20241010182427.1434605-58-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 57/85] KVM: RISC-V: Mark "struct page" pfns dirty iff a stage-2 PTE is installed
From: Sean Christopherson

Don't mark pages dirty if KVM bails from the page fault handler without
installing a stage-2 mapping, i.e. if the page is guaranteed to not be
written by the guest.

In addition to being a (very) minor fix, this paves the way for
converting RISC-V to use kvm_release_faultin_page().
Reviewed-by: Andrew Jones
Acked-by: Anup Patel
Signed-off-by: Sean Christopherson
---
 arch/riscv/kvm/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index b63650f9b966..06aa5a0d056d 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -669,7 +669,6 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		goto out_unlock;
 
 	if (writable) {
-		kvm_set_pfn_dirty(hfn);
 		mark_page_dirty(kvm, gfn);
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
 				      vma_pagesize, false, true);
@@ -682,6 +681,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		kvm_err("Failed to map in G-stage\n");
 
 out_unlock:
+	if ((!ret || ret == -EEXIST) && writable)
+		kvm_set_pfn_dirty(hfn);
+
 	spin_unlock(&kvm->mmu_lock);
 	kvm_set_pfn_accessed(hfn);
 	kvm_release_pfn_clean(hfn);
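The -EEXIST case deserves a note; the reading below is inferred from the diff rather than stated in the changelog: if the map call finds a stage-2 PTE already installed (e.g. by a racing vCPU), the guest can write through that existing mapping, so it has to count as a successful install for dirty tracking:

	out_unlock:
		/* A pre-existing writable PTE is as good as a new one. */
		if ((!ret || ret == -EEXIST) && writable)
			kvm_set_pfn_dirty(hfn);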
From patchwork Thu Oct 10 18:24:00 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830803
Date: Thu, 10 Oct 2024 11:24:00 -0700
Message-ID: <20241010182427.1434605-59-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 58/85] KVM: RISC-V: Mark "struct page" pfns accessed before dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest
memory so that RISC-V can convert to kvm_release_faultin_page() without
tripping its lockdep assertion on mmu_lock being held.  Marking pages
accessed outside of mmu_lock is ok (not great, but safe), but marking
pages _dirty_ outside of mmu_lock can make filesystems unhappy (see the
link below).  Do both under mmu_lock to minimize the chances of doing
the wrong thing in the future.
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Reviewed-by: Andrew Jones
Acked-by: Anup Patel
Signed-off-by: Sean Christopherson
---
 arch/riscv/kvm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 06aa5a0d056d..2e9aee518142 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -682,11 +682,11 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 
 out_unlock:
 	if ((!ret || ret == -EEXIST) && writable)
-		kvm_set_pfn_dirty(hfn);
+		kvm_release_pfn_dirty(hfn);
+	else
+		kvm_release_pfn_clean(hfn);
 
 	spin_unlock(&kvm->mmu_lock);
-	kvm_set_pfn_accessed(hfn);
-	kvm_release_pfn_clean(hfn);
 	return ret;
 }
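After this change both folio updates happen before the unlock; kvm_release_pfn_dirty() marks the folio dirty on release, and kvm_release_pfn_clean(), despite the name, still marks it accessed (annotation added):

	out_unlock:
		if ((!ret || ret == -EEXIST) && writable)
			kvm_release_pfn_dirty(hfn);	/* dirty (and accessed) + put */
		else
			kvm_release_pfn_clean(hfn);	/* accessed + put */

		spin_unlock(&kvm->mmu_lock);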
From patchwork Thu Oct 10 18:24:01 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830804
Date: Thu, 10 Oct 2024 11:24:01 -0700
Message-ID: <20241010182427.1434605-60-seanjc@google.com>
Subject: [PATCH v13 59/85] KVM: RISC-V: Use kvm_faultin_pfn() when mapping pfns into the guest
From: Sean Christopherson

Convert RISC-V to __kvm_faultin_pfn()+kvm_release_faultin_page(), which
are new APIs to consolidate arch code and provide consistent behavior
across all KVM architectures.

Opportunistically fix a s/priort/prior typo in the related comment.

Reviewed-by: Andrew Jones
Acked-by: Anup Patel
Signed-off-by: Sean Christopherson
---
 arch/riscv/kvm/mmu.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 2e9aee518142..e11ad1b616f3 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -601,6 +601,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	bool logging = (memslot->dirty_bitmap &&
 			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
 	unsigned long vma_pagesize, mmu_seq;
+	struct page *page;
 
 	/* We need minimum second+third level pages */
 	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
@@ -631,7 +632,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or gfn_to_pfn_prot() become stale priort to acquiring
+	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to acquiring
 	 * kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
@@ -647,7 +648,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writable);
+	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
 	if (hfn == KVM_PFN_ERR_HWPOISON) {
 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
 				vma_pageshift, current);
@@ -681,11 +682,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		kvm_err("Failed to map in G-stage\n");
 
 out_unlock:
-	if ((!ret || ret == -EEXIST) && writable)
-		kvm_release_pfn_dirty(hfn);
-	else
-		kvm_release_pfn_clean(hfn);
-
+	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
 	spin_unlock(&kvm->mmu_lock);
 	return ret;
 }
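With both RISC-V patches applied, the fault path reduces to the shape
below.  This is a condensed, hedged sketch (gstage_map_sketch() is
hypothetical and the actual mapping step is elided); the kvm_faultin_pfn()
and kvm_release_faultin_page() signatures are taken verbatim from the
diffs:

static int gstage_map_sketch(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_write)
{
	struct kvm *kvm = vcpu->kvm;
	struct page *page;
	bool writable;
	kvm_pfn_t hfn;
	int ret;

	/* Fault in the pfn; on success a reference to @page is held. */
	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
	if (is_error_noslot_pfn(hfn))
		return -EFAULT;

	spin_lock(&kvm->mmu_lock);

	ret = 0;	/* ... install the G-stage mapping; may be -EEXIST ... */

	/*
	 * Single release point, still under mmu_lock: on success this marks
	 * the page accessed (and dirty if @writable) and puts the reference;
	 * on failure it puts the reference without touching the folio.
	 */
	kvm_release_faultin_page(kvm, page, ret && ret != -EEXIST, writable);
	spin_unlock(&kvm->mmu_lock);
	return ret;
}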
From patchwork Thu Oct 10 18:24:02 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830805
Date: Thu, 10 Oct 2024 11:24:02 -0700
Message-ID: <20241010182427.1434605-61-seanjc@google.com>
Subject: [PATCH v13 60/85] KVM: PPC: Use __kvm_faultin_pfn() to handle page faults on Book3s HV
From: Sean Christopherson

Replace Book3s HV's homebrewed fault-in logic with __kvm_faultin_pfn(),
which functionally does pretty much the exact same thing.

Note, when the code was written, KVM indeed didn't do fast GUP without
"!atomic && !async", but that has long since changed (KVM tries fast GUP
for all writable mappings).

Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c | 25 ++++---------------------
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 2f1d58984b41..f305395cf26e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -603,27 +603,10 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 		write_ok = writing;
 	hva = gfn_to_hva_memslot(memslot, gfn);
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
-	if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		write_ok = true;
-	} else {
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, &write_ok);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
-	}
+	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
+				&write_ok, &page);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
 
 	/*
 	 * Read the PTE from the process' radix tree and use that
From patchwork Thu Oct 10 18:24:03 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830806
Date: Thu, 10 Oct 2024 11:24:03 -0700
Message-ID: <20241010182427.1434605-62-seanjc@google.com>
Subject: [PATCH v13 61/85] KVM: PPC: Use __kvm_faultin_pfn() to handle page faults on Book3s Radix
From: Sean Christopherson

Replace Book3s Radix's homebrewed (read: copy+pasted) fault-in logic with
__kvm_faultin_pfn(), which functionally does pretty much the exact same
thing.

Note, when the code was written, KVM indeed didn't do fast GUP without
"!atomic && !async", but that has long since changed (KVM tries fast GUP
for all writable mappings).

Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 29 +++++---------------------
 1 file changed, 5 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 8304b6f8fe45..14891d0a3b73 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -829,40 +829,21 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	unsigned long mmu_seq;
 	unsigned long hva, gfn = gpa >> PAGE_SHIFT;
 	bool upgrade_write = false;
-	bool *upgrade_p = &upgrade_write;
 	pte_t pte, *ptep;
 	unsigned int shift, level;
 	int ret;
 	bool large_enable;
+	kvm_pfn_t pfn;
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
-	/*
-	 * Do a fast check first, since __gfn_to_pfn_memslot doesn't
-	 * do it with !atomic && !async, which is how we call it.
-	 * We always ask for write permission since the common case
-	 * is that the page is writable.
-	 */
 	hva = gfn_to_hva_memslot(memslot, gfn);
-	if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
-		upgrade_write = true;
-	} else {
-		unsigned long pfn;
-
-		/* Call KVM generic code to do the slow-path check */
-		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, upgrade_p);
-		if (is_error_noslot_pfn(pfn))
-			return -EFAULT;
-		page = NULL;
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageReserved(page))
-				page = NULL;
-		}
-	}
+	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
+				&upgrade_write, &page);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
 
 	/*
 	 * Read the PTE from the process' radix tree and use that
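The net effect of the two Book3s conversions is easiest to see side by
side.  A condensed before/after: the "before" is taken from the removed
lines, the "after" from the added ones, and only the __kvm_faultin_pfn()
call itself is verbatim; the rest is trimmed for illustration:

	/* Before: open-coded fast GUP with a slow-path fallback. */
	if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
		write_ok = true;
	} else {
		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
					   writing, &write_ok);
		/* ... map error/reserved pfns back to a struct page ... */
	}

	/* After: one call.  KVM already attempts fast GUP internally for
	 * writable mappings, @write_ok reports whether a writable mapping
	 * was created, and @page is the refcounted page, or NULL. */
	pfn = __kvm_faultin_pfn(memslot, gfn, writing ? FOLL_WRITE : 0,
				&write_ok, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;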
From patchwork Thu Oct 10 18:24:04 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830807
Date: Thu, 10 Oct 2024 11:24:04 -0700
Message-ID: <20241010182427.1434605-63-seanjc@google.com>
Subject: [PATCH v13 62/85] KVM: PPC: Drop unused @kvm_ro param from kvmppc_book3s_instantiate_page()
From: Sean Christopherson

Drop @kvm_ro from kvmppc_book3s_instantiate_page() as it is now only
written, and never read.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/include/asm/kvm_book3s.h  | 2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 ++----
 arch/powerpc/kvm/book3s_hv_nested.c    | 4 +---
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 10618622d7ef..3d289dbe3982 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -203,7 +203,7 @@ extern bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested,
 extern int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 			unsigned long gpa,
 			struct kvm_memory_slot *memslot,
-			bool writing, bool kvm_ro,
+			bool writing,
 			pte_t *inserted_pte, unsigned int *levelp);
 extern int kvmppc_init_vm_radix(struct kvm *kvm);
 extern void kvmppc_free_radix(struct kvm *kvm);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 14891d0a3b73..b3e6e73d6a08 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -821,7 +821,7 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested, bool writing,
 int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 				   unsigned long gpa,
 				   struct kvm_memory_slot *memslot,
-				   bool writing, bool kvm_ro,
+				   bool writing,
 				   pte_t *inserted_pte, unsigned int *levelp)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -931,7 +931,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 	struct kvm_memory_slot *memslot;
 	long ret;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 
 	/* Check for unusual errors */
 	if (dsisr & DSISR_UNSUPP_MMU) {
@@ -984,7 +983,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 					       ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* Failed to set the reference/change bits */
@@ -1002,7 +1000,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 
 	/* Try to insert a pte */
 	ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot, writing,
-					     kvm_ro, NULL, NULL);
+					     NULL, NULL);
 
 	if (ret == 0 || ret == -EAGAIN)
 		ret = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 05f5220960c6..771173509617 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1527,7 +1527,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	unsigned long n_gpa, gpa, gfn, perm = 0UL;
 	unsigned int shift, l1_shift, level;
 	bool writing = !!(dsisr & DSISR_ISSTORE);
-	bool kvm_ro = false;
 	long int ret;
 
 	if (!gp->l1_gr_to_hr) {
@@ -1607,7 +1606,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 					       ea, DSISR_ISSTORE | DSISR_PROTFAULT);
 			return RESUME_GUEST;
 		}
-		kvm_ro = true;
 	}
 
 	/* 2. Find the host pte for this L1 guest real address */
@@ -1629,7 +1627,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
 		/* No suitable pte found -> try to insert a mapping */
 		ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot,
-					writing, kvm_ro, &pte, &level);
+					writing, &pte, &level);
 		if (ret == -EAGAIN)
 			return RESUME_GUEST;
 		else if (ret)
From patchwork Thu Oct 10 18:24:05 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830808
Date: Thu, 10 Oct 2024 11:24:05 -0700
Message-ID: <20241010182427.1434605-64-seanjc@google.com>
Subject: [PATCH v13 63/85] KVM: PPC: Book3S: Mark "struct page" pfns dirty/accessed after installing PTE
From: Sean Christopherson

Mark pages/folios dirty/accessed after installing a PTE, and more
specifically after acquiring mmu_lock and checking for an mmu_notifier
invalidation.  Marking a page/folio dirty after it has been written back
can make some filesystems unhappy (backing KVM guests with such filesystem
files is uncommon, and the race is minuscule, hence the lack of
complaints).  See the link below for details.

This will also allow converting Book3S to kvm_release_faultin_page(),
which requires that mmu_lock be held (for the aforementioned reason).
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index bc6a381b5346..d0e4f7bbdc3d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -121,13 +121,10 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	vpn = hpt_vpn(orig_pte->eaddr, map->host_vsid, MMU_SEGSIZE_256M);
 
-	kvm_set_pfn_accessed(pfn);
 	if (!orig_pte->may_write || !writable)
 		rflags |= PP_RXRX;
-	else {
+	else
 		mark_page_dirty(vcpu->kvm, gfn);
-		kvm_set_pfn_dirty(pfn);
-	}
 
 	if (!orig_pte->may_execute)
 		rflags |= HPTE_R_N;
@@ -202,8 +199,11 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	}
 
 out_unlock:
+	if (!orig_pte->may_write || !writable)
+		kvm_release_pfn_clean(pfn);
+	else
+		kvm_release_pfn_dirty(pfn);
 	spin_unlock(&kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
 	if (cpte)
 		kvmppc_mmu_hpte_cache_free(cpte);
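In the abstract, the ordering this patch establishes (and that the
LoongArch patches later in the series mirror) looks like the sketch below.
map_page_sketch() is hypothetical; it assumes the mmu_invalidate_seq
snapshot (with smp_rmb()) was taken before faulting in the pfn, and that
mmu_invalidate_retry() is the usual check paired with that snapshot:

static int map_page_sketch(struct kvm *kvm, kvm_pfn_t pfn,
			   unsigned long mmu_seq, bool wrote)
{
	int ret = 0;

	spin_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq)) {
		ret = -EAGAIN;	/* raced with an invalidation; retry the fault */
		goto out_unlock;
	}

	/* ... install the PTE ... */

	/*
	 * Only now is it safe to dirty the folio: the check above proves the
	 * mapping is still mmu_notifier-protected, so writeback can't have
	 * already completed for this range.
	 */
	if (wrote)
		kvm_set_pfn_dirty(pfn);
out_unlock:
	spin_unlock(&kvm->mmu_lock);
	return ret;
}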
From patchwork Thu Oct 10 18:24:06 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830809
Date: Thu, 10 Oct 2024 11:24:06 -0700
Message-ID: <20241010182427.1434605-65-seanjc@google.com>
Subject: [PATCH v13 64/85] KVM: PPC: Use kvm_faultin_pfn() to handle page faults on Book3s PR
From: Sean Christopherson

Convert Book3S PR to __kvm_faultin_pfn()+kvm_release_faultin_page(), which
are new APIs to consolidate arch code and provide consistent behavior
across all KVM architectures.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/include/asm/kvm_book3s.h | 2 +-
 arch/powerpc/kvm/book3s.c             | 7 ++++---
 arch/powerpc/kvm/book3s_32_mmu_host.c | 7 ++++---
 arch/powerpc/kvm/book3s_64_mmu_host.c | 10 +++++-----
 4 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 3d289dbe3982..e1ff291ba891 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -235,7 +235,7 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
 extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
 extern int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu);
 extern kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa,
-			bool writing, bool *writable);
+			bool writing, bool *writable, struct page **page);
 extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
 			unsigned long *rmap, long pte_index, int realmode);
 extern void kvmppc_update_dirty_map(const struct kvm_memory_slot *memslot,
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ff6c38373957..d79c5d1098c0 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -422,7 +422,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvmppc_core_prepare_to_enter);
 
 kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing,
-			bool *writable)
+			bool *writable, struct page **page)
 {
 	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
@@ -437,13 +437,14 @@ kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa, bool writing,
 		kvm_pfn_t pfn;
 
 		pfn = (kvm_pfn_t)virt_to_phys((void*)shared_page) >> PAGE_SHIFT;
-		get_page(pfn_to_page(pfn));
+		*page = pfn_to_page(pfn);
+		get_page(*page);
 		if (writable)
 			*writable = true;
 		return pfn;
 	}
 
-	return gfn_to_pfn_prot(vcpu->kvm, gfn, writing, writable);
+	return kvm_faultin_pfn(vcpu, gfn, writing, writable, page);
 }
 EXPORT_SYMBOL_GPL(kvmppc_gpa_to_pfn);
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 4b3a8d80cfa3..5b7212edbb13 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -130,6 +130,7 @@ extern char etext[];
 int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 			bool iswrite)
 {
+	struct page *page;
 	kvm_pfn_t hpaddr;
 	u64 vpn;
 	u64 vsid;
@@ -145,7 +146,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	bool writable;
 
 	/* Get host physical address for gpa */
-	hpaddr = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
+	hpaddr = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable, &page);
 	if (is_error_noslot_pfn(hpaddr)) {
 		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
 				 orig_pte->raddr);
@@ -232,7 +233,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	pte = kvmppc_mmu_hpte_cache_next(vcpu);
 	if (!pte) {
-		kvm_release_pfn_clean(hpaddr >> PAGE_SHIFT);
+		kvm_release_page_unused(page);
 		r = -EAGAIN;
 		goto out;
 	}
@@ -250,7 +251,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 
 	kvmppc_mmu_hpte_cache_map(vcpu, pte);
 
-	kvm_release_pfn_clean(hpaddr >> PAGE_SHIFT);
+	kvm_release_page_clean(page);
 out:
 	return r;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index d0e4f7bbdc3d..be20aee6fd7d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -88,13 +88,14 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	struct hpte_cache *cpte;
 	unsigned long gfn = orig_pte->raddr >> PAGE_SHIFT;
 	unsigned long pfn;
+	struct page *page;
 
 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
 	/* Get host physical address for gpa */
-	pfn = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable);
+	pfn = kvmppc_gpa_to_pfn(vcpu, orig_pte->raddr, iswrite, &writable, &page);
 	if (is_error_noslot_pfn(pfn)) {
 		printk(KERN_INFO "Couldn't get guest page for gpa %lx!\n",
 				 orig_pte->raddr);
@@ -199,10 +200,9 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	}
 
 out_unlock:
-	if (!orig_pte->may_write || !writable)
-		kvm_release_pfn_clean(pfn);
-	else
-		kvm_release_pfn_dirty(pfn);
+	/* FIXME: Don't unconditionally pass unused=false. */
+	kvm_release_faultin_page(kvm, page, false,
+				 orig_pte->may_write && writable);
 	spin_unlock(&kvm->mmu_lock);
 	if (cpte)
 		kvmppc_mmu_hpte_cache_free(cpte);
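Reading the kvm_release_faultin_page() calls across this series, the
arguments line up as follows.  This is inferred from the call sites (the
FIXME'd call above and the RISC-V conversion), not from kerneldoc, so
treat it as a hedged reading:

	/*
	 * kvm_release_faultin_page(kvm, page, unused, dirty);
	 *
	 * @unused: true if the fault was aborted and @page never made it
	 *          into the guest page tables, e.g. "ret && ret != -EEXIST"
	 *          in the RISC-V conversion.  The FIXME above exists because
	 *          Book3S PR reaches out_unlock even when installing the
	 *          HPTE failed, yet always passes false.
	 * @dirty:  true if the guest may have written through the new
	 *          mapping, e.g. "orig_pte->may_write && writable" here.
	 */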
From patchwork Thu Oct 10 18:24:07 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830810
Date: Thu, 10 Oct 2024 11:24:07 -0700
Message-ID: <20241010182427.1434605-66-seanjc@google.com>
Subject: [PATCH v13 65/85] KVM: LoongArch: Mark "struct page" pfns dirty only in "slow" page fault path
From: Sean Christopherson

Mark pages/folios dirty only in the slow page fault path, i.e. only when
mmu_lock is held and the operation is mmu_notifier-protected, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).  See the link below
for details.
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Reviewed-by: Bibo Mao
Signed-off-by: Sean Christopherson
---
 arch/loongarch/kvm/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 28681dfb4b85..cc2a5f289b14 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -608,13 +608,13 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 		if (kvm_pte_young(changed))
 			kvm_set_pfn_accessed(pfn);
 
-		if (kvm_pte_dirty(changed)) {
-			mark_page_dirty(kvm, gfn);
-			kvm_set_pfn_dirty(pfn);
-		}
 		if (page)
 			put_page(page);
 	}
+
+	if (kvm_pte_dirty(changed))
+		mark_page_dirty(kvm, gfn);
+
 	return ret;
 out:
 	spin_unlock(&kvm->mmu_lock);
@@ -915,12 +915,14 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	else
 		++kvm->stat.pages;
 	kvm_set_pte(ptep, new_pte);
-	spin_unlock(&kvm->mmu_lock);
 
-	if (prot_bits & _PAGE_DIRTY) {
-		mark_page_dirty_in_slot(kvm, memslot, gfn);
+	if (writeable)
 		kvm_set_pfn_dirty(pfn);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	if (prot_bits & _PAGE_DIRTY)
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 
 	kvm_release_pfn_clean(pfn);
 out:
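The LoongArch diff juggles two distinct notions of "dirty", which is worth
spelling out.  A hedged summary, based only on how the two calls are used
in this series:

	/* Folio state: tells the mm/filesystem layers that the page
	 * contents changed.  Per this series, must happen while mmu_lock
	 * still protects the mapping. */
	kvm_set_pfn_dirty(pfn);

	/* KVM's dirty log (bitmap/ring), consumed by userspace for things
	 * like live migration.  Doesn't touch the folio, so it is safe to
	 * call after mmu_lock has been dropped. */
	mark_page_dirty_in_slot(kvm, memslot, gfn);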
From patchwork Thu Oct 10 18:24:08 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830811
Date: Thu, 10 Oct 2024 11:24:08 -0700
Message-ID: <20241010182427.1434605-67-seanjc@google.com>
Subject: [PATCH v13 66/85] KVM: LoongArch: Mark "struct page" pfns accessed only in "slow" page fault path
From: Sean Christopherson

Mark pages accessed only in the slow path, before dropping mmu_lock when
faulting in guest memory so that LoongArch can convert to
kvm_release_faultin_page() without tripping its lockdep assertion on
mmu_lock being held.
Reviewed-by: Bibo Mao
Signed-off-by: Sean Christopherson
---
 arch/loongarch/kvm/mmu.c | 20 ++------------------
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index cc2a5f289b14..ed43504c5c7e 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -552,12 +552,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 {
 	int ret = 0;
-	kvm_pfn_t pfn = 0;
 	kvm_pte_t *ptep, changed, new;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_memory_slot *slot;
-	struct page *page;
 
 	spin_lock(&kvm->mmu_lock);
 
@@ -570,8 +568,6 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 	/* Track access to pages marked old */
 	new = kvm_pte_mkyoung(*ptep);
-	/* call kvm_set_pfn_accessed() after unlock */
-
 	if (write && !kvm_pte_dirty(new)) {
 		if (!kvm_pte_write(new)) {
 			ret = -EFAULT;
@@ -595,23 +591,11 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 	}
 
 	changed = new ^ (*ptep);
-	if (changed) {
+	if (changed)
 		kvm_set_pte(ptep, new);
-		pfn = kvm_pte_pfn(new);
-		page = kvm_pfn_to_refcounted_page(pfn);
-		if (page)
-			get_page(page);
-	}
+
 	spin_unlock(&kvm->mmu_lock);
 
-	if (changed) {
-		if (kvm_pte_young(changed))
-			kvm_set_pfn_accessed(pfn);
-
-		if (page)
-			put_page(page);
-	}
-
 	if (kvm_pte_dirty(changed))
 		mark_page_dirty(kvm, gfn);
From patchwork Thu Oct 10 18:24:09 2024
X-Patchwork-Id: 13830812
Date: Thu, 10 Oct 2024 11:24:09 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-68-seanjc@google.com>
Subject: [PATCH v13 67/85] KVM: LoongArch: Mark "struct page" pfn accessed before dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest memory
so that LoongArch can convert to kvm_release_faultin_page() without
tripping its lockdep assertion on mmu_lock being held.
Reviewed-by: Bibo Mao
Signed-off-by: Sean Christopherson
---
 arch/loongarch/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index ed43504c5c7e..7066cafcce64 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -902,13 +902,13 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	if (writeable)
 		kvm_set_pfn_dirty(pfn);
+	kvm_release_pfn_clean(pfn);
 
 	spin_unlock(&kvm->mmu_lock);
 
 	if (prot_bits & _PAGE_DIRTY)
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
 
-	kvm_release_pfn_clean(pfn);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return err;
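To make the ordering concrete, the tail of kvm_map_page() after this change
looks roughly as follows (a condensed sketch assembled from the hunk above,
using its locals; illustrative, not the verbatim kernel code):

	spin_lock(&kvm->mmu_lock);

	/* ... install the new PTE ... */

	/* Dirty and release the page while mmu_lock is still held. */
	if (writeable)
		kvm_set_pfn_dirty(pfn);
	kvm_release_pfn_clean(pfn);

	spin_unlock(&kvm->mmu_lock);

	/* Dirty logging needs only the gfn, not the page. */
	if (prot_bits & _PAGE_DIRTY)
		mark_page_dirty_in_slot(kvm, memslot, gfn);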
From patchwork Thu Oct 10 18:24:10 2024
X-Patchwork-Id: 13830813
Date: Thu, 10 Oct 2024 11:24:10 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-69-seanjc@google.com>
Subject: [PATCH v13 68/85] KVM: LoongArch: Use kvm_faultin_pfn() to map pfns into the guest
From: Sean Christopherson

Convert LoongArch to kvm_faultin_pfn()+kvm_release_faultin_page(), which
are new APIs to consolidate arch code and provide consistent behavior
across all KVM architectures.

Signed-off-by: Sean Christopherson
---
 arch/loongarch/kvm/mmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 7066cafcce64..4d203294767c 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -780,6 +780,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_memory_slot *memslot;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct page *page;
 
 	/* Try the fast path to handle old / clean pages */
 	srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -807,7 +808,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	mmu_seq = kvm->mmu_invalidate_seq;
 	/*
 	 * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in
-	 * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+	 * kvm_faultin_pfn() (which calls get_user_pages()), so that we don't
	 * risk the page we get a reference to getting unmapped before we have a
	 * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
	 *
@@ -819,7 +820,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	smp_rmb();
 
 	/* Slow path - ask KVM core whether we can access this GPA */
-	pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
+	pfn = kvm_faultin_pfn(vcpu, gfn, write, &writeable, &page);
 	if (is_error_noslot_pfn(pfn)) {
 		err = -EFAULT;
 		goto out;
@@ -831,10 +832,10 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 		/*
 		 * This can happen when mappings are changed asynchronously, but
 		 * also synchronously if a COW is triggered by
-		 * gfn_to_pfn_prot().
+		 * kvm_faultin_pfn().
 		 */
 		spin_unlock(&kvm->mmu_lock);
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_unused(page);
 		if (retry_no > 100) {
 			retry_no = 0;
 			schedule();
@@ -900,10 +901,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	++kvm->stat.pages;
 	kvm_set_pte(ptep, new_pte);
 
-	if (writeable)
-		kvm_set_pfn_dirty(pfn);
-	kvm_release_pfn_clean(pfn);
-
+	kvm_release_faultin_page(kvm, page, false, writeable);
 	spin_unlock(&kvm->mmu_lock);
 
 	if (prot_bits & _PAGE_DIRTY)
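With patches 67 and 68 applied, LoongArch lands on the shape the rest of the
series replicates per architecture. A minimal sketch of that common flow,
using only APIs visible in the diffs (the mmu_seq snapshot, the retry label,
and the PTE installation step are placeholders):

	struct page *page;
	bool writeable;
	kvm_pfn_t pfn;

	/* Resolve the gfn outside of mmu_lock; may fault/GUP internally. */
	pfn = kvm_faultin_pfn(vcpu, gfn, write, &writeable, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	spin_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq)) {
		/* Lost a race with an invalidation; the page was never mapped. */
		spin_unlock(&kvm->mmu_lock);
		kvm_release_page_unused(page);
		goto retry;
	}

	/* ... install the PTE ... */

	/*
	 * Release under mmu_lock; the final argument says whether to mark
	 * the page dirty, mirroring the kvm_set_pfn_dirty() call it replaces.
	 */
	kvm_release_faultin_page(kvm, page, false, writeable);
	spin_unlock(&kvm->mmu_lock);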
From patchwork Thu Oct 10 18:24:11 2024
X-Patchwork-Id: 13830814
Date: Thu, 10 Oct 2024 11:24:11 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-70-seanjc@google.com>
Subject: [PATCH v13 69/85] KVM: MIPS: Mark "struct page" pfns dirty only in "slow" page fault path
From: Sean Christopherson

Mark pages/folios dirty only in the slow page fault path, i.e. only when
mmu_lock is held and the operation is mmu_notifier-protected, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).  See the link below
for details.
Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index c17157e700c0..4da9ce4eb54d 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -514,7 +514,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 		set_pte(ptep, pte_mkdirty(*ptep));
 		pfn = pte_pfn(*ptep);
 		mark_page_dirty(kvm, gfn);
-		kvm_set_pfn_dirty(pfn);
 	}
 
 	if (out_entry)
@@ -628,7 +627,6 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 		if (write_fault) {
 			prot_bits |= __WRITEABLE;
 			mark_page_dirty(kvm, gfn);
-			kvm_set_pfn_dirty(pfn);
 		}
 	}
 	entry = pfn_pte(pfn, __pgprot(prot_bits));
@@ -642,6 +640,9 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	if (out_buddy)
 		*out_buddy = *ptep_buddy(ptep);
 
+	if (writeable)
+		kvm_set_pfn_dirty(pfn);
+
 	spin_unlock(&kvm->mmu_lock);
 	kvm_release_pfn_clean(pfn);
 	kvm_set_pfn_accessed(pfn);
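The net effect is that only the mmu_notifier-protected slow path touches the
struct page; a schematic contrast, condensed from the hunks above with the
locals as named there (illustrative, not verbatim):

	/* Fast path: update the stage-2 PTE and the dirty bitmap only. */
	set_pte(ptep, pte_mkdirty(*ptep));
	mark_page_dirty(kvm, gfn);

	/*
	 * Slow path: the pfn was just faulted in under mmu_notifier
	 * protection, so dirtying the underlying page/folio is safe here.
	 */
	if (writeable)
		kvm_set_pfn_dirty(pfn);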
From patchwork Thu Oct 10 18:24:12 2024
X-Patchwork-Id: 13830815
Date: Thu, 10 Oct 2024 11:24:12 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-71-seanjc@google.com>
Subject: [PATCH v13 70/85] KVM: MIPS: Mark "struct page" pfns accessed only in "slow" page fault path
From: Sean Christopherson

Mark pages accessed only in the slow page fault path in order to remove
an unnecessary user of kvm_pfn_to_refcounted_page().  Marking pages
accessed in the primary MMU during KVM page fault handling isn't harmful,
but it's largely pointless and likely a waste of cycles since the primary
MMU will call into KVM via mmu_notifiers when aging pages.  I.e. KVM
participates in a "pull" model, so there's no need to also "push" updates.
Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mmu.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 4da9ce4eb54d..f1e4b618ec6d 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -484,8 +484,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 	struct kvm *kvm = vcpu->kvm;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	pte_t *ptep;
-	kvm_pfn_t pfn = 0;	/* silence bogus GCC warning */
-	bool pfn_valid = false;
 	int ret = 0;
 
 	spin_lock(&kvm->mmu_lock);
@@ -498,12 +496,9 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 	}
 
 	/* Track access to pages marked old */
-	if (!pte_young(*ptep)) {
+	if (!pte_young(*ptep))
 		set_pte(ptep, pte_mkyoung(*ptep));
-		pfn = pte_pfn(*ptep);
-		pfn_valid = true;
-		/* call kvm_set_pfn_accessed() after unlock */
-	}
+
 	if (write_fault && !pte_dirty(*ptep)) {
 		if (!pte_write(*ptep)) {
 			ret = -EFAULT;
@@ -512,7 +507,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 		/* Track dirtying of writeable pages */
 		set_pte(ptep, pte_mkdirty(*ptep));
-		pfn = pte_pfn(*ptep);
 		mark_page_dirty(kvm, gfn);
 	}
 
@@ -523,8 +517,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 out:
 	spin_unlock(&kvm->mmu_lock);
-	if (pfn_valid)
-		kvm_set_pfn_accessed(pfn);
 	return ret;
 }
From patchwork Thu Oct 10 18:24:13 2024
X-Patchwork-Id: 13830816
Date: Thu, 10 Oct 2024 11:24:13 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-72-seanjc@google.com>
Subject: [PATCH v13 71/85] KVM: MIPS: Mark "struct page" pfns accessed prior to dropping mmu_lock
From: Sean Christopherson

Mark pages accessed before dropping mmu_lock when faulting in guest memory
so that MIPS can convert to kvm_release_faultin_page() without tripping
its lockdep assertion on mmu_lock being held.
Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mmu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f1e4b618ec6d..69463ab24d97 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -634,10 +634,9 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 	if (writeable)
 		kvm_set_pfn_dirty(pfn);
-
-	spin_unlock(&kvm->mmu_lock);
 	kvm_release_pfn_clean(pfn);
-	kvm_set_pfn_accessed(pfn);
+
+	spin_unlock(&kvm->mmu_lock);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return err;
From patchwork Thu Oct 10 18:24:14 2024
X-Patchwork-Id: 13830817
Date: Thu, 10 Oct 2024 11:24:14 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-73-seanjc@google.com>
Subject: [PATCH v13 72/85] KVM: MIPS: Use kvm_faultin_pfn() to map pfns into the guest
From: Sean Christopherson

Convert MIPS to kvm_faultin_pfn()+kvm_release_faultin_page(), which are
new APIs to consolidate arch code and provide consistent behavior across
all KVM architectures.

Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 69463ab24d97..d2c3b6b41f18 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -557,6 +557,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	bool writeable;
 	unsigned long prot_bits;
 	unsigned long mmu_seq;
+	struct page *page;
 
 	/* Try the fast path to handle old / clean pages */
 	srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -578,7 +579,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	mmu_seq = kvm->mmu_invalidate_seq;
 	/*
 	 * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads
-	 * in gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+	 * in kvm_faultin_pfn() (which calls get_user_pages()), so that we don't
	 * risk the page we get a reference to getting unmapped before we have a
	 * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
	 *
@@ -590,7 +591,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 	smp_rmb();
 
 	/* Slow path - ask KVM core whether we can access this GPA */
-	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writeable);
+	pfn = kvm_faultin_pfn(vcpu, gfn, write_fault, &writeable, &page);
 	if (is_error_noslot_pfn(pfn)) {
 		err = -EFAULT;
 		goto out;
@@ -602,10 +603,10 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 		/*
 		 * This can happen when mappings are changed asynchronously, but
 		 * also synchronously if a COW is triggered by
-		 * gfn_to_pfn_prot().
+		 * kvm_faultin_pfn().
 		 */
 		spin_unlock(&kvm->mmu_lock);
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_unused(page);
 		goto retry;
 	}
 
@@ -632,10 +633,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	if (out_buddy)
 		*out_buddy = *ptep_buddy(ptep);
 
-	if (writeable)
-		kvm_set_pfn_dirty(pfn);
-	kvm_release_pfn_clean(pfn);
-
+	kvm_release_faultin_page(kvm, page, false, writeable);
 	spin_unlock(&kvm->mmu_lock);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
From patchwork Thu Oct 10 18:24:15 2024
X-Patchwork-Id: 13830818
Date: Thu, 10 Oct 2024 11:24:15 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-74-seanjc@google.com>
Subject: [PATCH v13 73/85] KVM: PPC: Remove extra get_page() to fix page refcount leak
From: Sean Christopherson

Don't manually do get_page() when patching dcbz, as gfn_to_page() gifts
the caller a reference.  I.e. doing get_page() will leak the page due to
not putting all references.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_pr.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d7721297b9b6..cd7ab6d85090 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -652,7 +652,6 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 	hpage_offset &= ~0xFFFULL;
 	hpage_offset /= 4;
 
-	get_page(hpage);
 	page = kmap_atomic(hpage);
 
 	/* patch dcbz into reserved instruction, so we trap */
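Reduced to reference counts, the buggy flow looked like this (a hypothetical
condensation of kvmppc_patch_dcbz(), not the verbatim code):

	hpage = gfn_to_page(vcpu->kvm, gfn);	/* refcount +1, gifted to the caller */
	if (!hpage)
		return;

	get_page(hpage);			/* refcount +1 again -- the bug */

	/* ... kmap and patch the dcbz instructions ... */

	put_page(hpage);			/* refcount -1: the gifted reference is never dropped */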
From patchwork Thu Oct 10 18:24:16 2024
X-Patchwork-Id: 13830819
Date: Thu, 10 Oct 2024 11:24:16 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-75-seanjc@google.com>
Subject: [PATCH v13 74/85] KVM: PPC: Use kvm_vcpu_map() to map guest memory to patch dcbz instructions
From: Sean Christopherson

Use kvm_vcpu_map() when patching dcbz in guest memory, as a regular GUP
isn't technically sufficient when writing to data in the target pages.
As per Documentation/core-api/pin_user_pages.rst:

  Correct (uses FOLL_PIN calls):
      pin_user_pages()
      write to the data within the pages
      unpin_user_pages()

  INCORRECT (uses FOLL_GET calls):
      get_user_pages()
      write to the data within the pages
      put_page()

As a happy bonus, using kvm_vcpu_{,un}map() takes care of creating a
mapping and marking the page dirty.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_pr.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index cd7ab6d85090..83bcdc80ce51 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -639,28 +639,27 @@ static void kvmppc_set_pvr_pr(struct kvm_vcpu *vcpu, u32 pvr)
  */
 static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 {
-	struct page *hpage;
+	struct kvm_host_map map;
 	u64 hpage_offset;
 	u32 *page;
-	int i;
+	int i, r;
 
-	hpage = gfn_to_page(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
-	if (!hpage)
+	r = kvm_vcpu_map(vcpu, pte->raddr >> PAGE_SHIFT, &map);
+	if (r)
 		return;
 
 	hpage_offset = pte->raddr & ~PAGE_MASK;
 	hpage_offset &= ~0xFFFULL;
 	hpage_offset /= 4;
 
-	page = kmap_atomic(hpage);
+	page = map.hva;
 
 	/* patch dcbz into reserved instruction, so we trap */
 	for (i=hpage_offset; i < hpage_offset + (HW_PAGE_SIZE / 4); i++)
 		if ((be32_to_cpu(page[i]) & 0xff0007ff) == INS_DCBZ)
 			page[i] &= cpu_to_be32(0xfffffff7);
 
-	kunmap_atomic(page);
-	put_page(hpage);
+	kvm_vcpu_unmap(vcpu, &map);
 }
 
 static bool kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
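For reference, the map/patch/unmap flow now follows the usual kvm_host_map
pattern; a minimal sketch using only the calls visible in the diff, where
gfn stands in for pte->raddr >> PAGE_SHIFT:

	struct kvm_host_map map;

	/* Map the guest frame; on success, map.hva points at the data. */
	if (kvm_vcpu_map(vcpu, gfn, &map))
		return;

	/* ... write the instructions through map.hva ... */

	/* Unmap; per the changelog, this also marks the page dirty. */
	kvm_vcpu_unmap(vcpu, &map);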
From patchwork Thu Oct 10 18:24:17 2024
X-Patchwork-Id: 13830820
Date: Thu, 10 Oct 2024 11:24:17 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-76-seanjc@google.com>
Subject: [PATCH v13 75/85] KVM: Convert gfn_to_page() to use kvm_follow_pfn()
From: Sean Christopherson

Convert gfn_to_page() to the new kvm_follow_pfn() internal API, which will
eventually allow removing gfn_to_pfn() and kvm_pfn_to_refcounted_page().
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 696d5e429b3e..1782242a4800 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3145,14 +3145,16 @@ EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
  */
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
-	kvm_pfn_t pfn;
+	struct page *refcounted_page = NULL;
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(kvm, gfn),
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.refcounted_page = &refcounted_page,
+	};
 
-	pfn = gfn_to_pfn(kvm, gfn);
-
-	if (is_error_noslot_pfn(pfn))
-		return NULL;
-
-	return kvm_pfn_to_refcounted_page(pfn);
+	(void)kvm_follow_pfn(&kfp);
+	return refcounted_page;
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
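The caller-visible contract is unchanged by the conversion: gfn_to_page()
returns either NULL or a page whose refcount was elevated on the caller's
behalf. A minimal usage sketch:

	struct page *page = gfn_to_page(kvm, gfn);

	if (!page)		/* no memslot, or the gfn isn't backed by a refcounted page */
		return -EFAULT;

	/* ... access the page contents ... */

	put_page(page);		/* drop the reference gfn_to_page() took */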
From patchwork Thu Oct 10 18:24:18 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:18 -0700
Subject: [PATCH v13 76/85] KVM: Add support for read-only usage of gfn_to_page()
Message-ID: <20241010182427.1434605-77-seanjc@google.com>

Rework gfn_to_page() to support read-only accesses so that it can be used by
arm64 to get MTE tags out of guest memory.

Opportunistically rewrite the comment to be even more stern about using
gfn_to_page(), as there are very few scenarios where requiring a struct page
is actually the right thing to do (though there are such scenarios).  Add a
FIXME to call out that KVM probably should be pinning pages, not just getting
pages.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h |  7 ++++++-
 virt/kvm/kvm_main.c      | 15 ++++++++-------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9f7682ece4a1..af928b59b2ab 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1213,7 +1213,12 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 		       struct page **pages, int nr_pages);
 
-struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
+struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write);
+static inline struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
+{
+	return __gfn_to_page(kvm, gfn, true);
+}
+
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1782242a4800..8f8b2cd01189 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3138,25 +3138,26 @@ int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
 EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
 
 /*
- * Do not use this helper unless you are absolutely certain the gfn _must_ be
- * backed by 'struct page'.  A valid example is if the backing memslot is
- * controlled by KVM.  Note, if the returned page is valid, it's refcount has
- * been elevated by gfn_to_pfn().
+ * Don't use this API unless you are absolutely, positively certain that KVM
+ * needs to get a struct page, e.g. to pin the page for firmware DMA.
+ *
+ * FIXME: Users of this API likely need to FOLL_PIN the page, not just elevate
+ * its refcount.
  */
-struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
+struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write)
 {
 	struct page *refcounted_page = NULL;
 	struct kvm_follow_pfn kfp = {
 		.slot = gfn_to_memslot(kvm, gfn),
 		.gfn = gfn,
-		.flags = FOLL_WRITE,
+		.flags = write ? FOLL_WRITE : 0,
 		.refcounted_page = &refcounted_page,
 	};
 
 	(void)kvm_follow_pfn(&kfp);
 	return refcounted_page;
 }
-EXPORT_SYMBOL_GPL(gfn_to_page);
+EXPORT_SYMBOL_GPL(__gfn_to_page);
 
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 		   bool writable)
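A sketch of the read-only flavor this enables (hypothetical caller; passing
'false' means KVM neither requires nor requests a writable mapping):

	struct page *page = __gfn_to_page(kvm, gfn, false);

	if (!page)
		return -EFAULT;

	/* ... read-only access to the page's contents ... */

	kvm_release_page_clean(page);	/* the page was not dirtied */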
From patchwork Thu Oct 10 18:24:19 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:19 -0700
Subject: [PATCH v13 77/85] KVM: arm64: Use __gfn_to_page() when copying MTE tags to/from userspace
Message-ID: <20241010182427.1434605-78-seanjc@google.com>

Use __gfn_to_page() instead of gfn_to_pfn_prot() when copying MTE tags
between guest and userspace.  This will eventually allow removing
gfn_to_pfn_prot(), gfn_to_pfn(), kvm_pfn_to_refcounted_page(), and related
APIs.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/guest.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 962f985977c2..4cd7ffa76794 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1051,20 +1051,18 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 	}
 
 	while (length > 0) {
-		kvm_pfn_t pfn = gfn_to_pfn_prot(kvm, gfn, write, NULL);
+		struct page *page = __gfn_to_page(kvm, gfn, write);
 		void *maddr;
 		unsigned long num_tags;
-		struct page *page;
 
-		if (is_error_noslot_pfn(pfn)) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		page = pfn_to_online_page(pfn);
 		if (!page) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		if (!pfn_to_online_page(page_to_pfn(page))) {
 			/* Reject ZONE_DEVICE memory */
-			kvm_release_pfn_clean(pfn);
+			kvm_release_page_unused(page);
 			ret = -EFAULT;
 			goto out;
 		}
@@ -1078,7 +1076,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			/* No tags in memory, so write zeros */
 			num_tags = MTE_GRANULES_PER_PAGE -
 				clear_user(tags, MTE_GRANULES_PER_PAGE);
-			kvm_release_pfn_clean(pfn);
+			kvm_release_page_clean(page);
 		} else {
 			/*
 			 * Only locking to serialise with a concurrent
@@ -1093,8 +1091,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			if (num_tags != MTE_GRANULES_PER_PAGE)
 				mte_clear_page_tags(maddr);
 			set_page_mte_tagged(page);
-
-			kvm_release_pfn_dirty(pfn);
+			kvm_release_page_dirty(page);
 		}
 
 		if (num_tags != MTE_GRANULES_PER_PAGE) {
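Condensed from the hunks above, with the function's goto-based error handling
flattened into returns for readability, the acquisition pattern the arm64 MTE
path now follows is:

	struct page *page = __gfn_to_page(kvm, gfn, write);

	if (!page)
		return -EFAULT;

	if (!pfn_to_online_page(page_to_pfn(page))) {
		/* Reject ZONE_DEVICE memory */
		kvm_release_page_unused(page);
		return -EFAULT;
	}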
From patchwork Thu Oct 10 18:24:20 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:20 -0700
Subject: [PATCH v13 78/85] KVM: PPC: Explicitly require struct page memory for Ultravisor sharing
Message-ID: <20241010182427.1434605-79-seanjc@google.com>

Explicitly require "struct page" memory when sharing memory between guest
and host via an Ultravisor.  Given the number of pfn_to_page() calls in the
code, it's safe to assume that KVM already requires that the pfn returned by
gfn_to_pfn() is backed by struct page, i.e. this is likely a bug fix, not a
reduction in KVM capabilities.

Switching to gfn_to_page() will eventually allow removing gfn_to_pfn() and
kvm_pfn_to_refcounted_page().
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 92f33115144b..3a6592a31a10 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -879,9 +879,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 {
 	int ret = H_PARAMETER;
-	struct page *uvmem_page;
+	struct page *page, *uvmem_page;
 	struct kvmppc_uvmem_page_pvt *pvt;
-	unsigned long pfn;
 	unsigned long gfn = gpa >> page_shift;
 	int srcu_idx;
 	unsigned long uvmem_pfn;
@@ -901,8 +900,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 retry:
 	mutex_unlock(&kvm->arch.uvmem_lock);
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		goto out;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
@@ -911,16 +910,16 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 		pvt = uvmem_page->zone_device_data;
 		pvt->skip_page_out = true;
 		pvt->remove_gfn = false; /* it continues to be a valid GFN */
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_unused(page);
 		goto retry;
 	}
 
-	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
+	if (!uv_page_in(kvm->arch.lpid, page_to_pfn(page) << page_shift, gpa, 0,
 			page_shift)) {
 		kvmppc_gfn_shared(gfn, kvm);
 		ret = H_SUCCESS;
 	}
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
@@ -1083,21 +1082,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 
 int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 {
-	unsigned long pfn;
+	struct page *page;
 	int ret = U_SUCCESS;
 
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		return -EFAULT;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
 	if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL))
 		goto out;
 
-	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT,
-			 0, PAGE_SHIFT);
+	ret = uv_page_in(kvm->arch.lpid, page_to_pfn(page) << PAGE_SHIFT,
+			 gfn << PAGE_SHIFT, 0, PAGE_SHIFT);
 out:
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
 }
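The resulting PPC pattern, shown here in condensed form as a sketch of the
code above (not an additional change): acquire a refcounted page, derive the
real address the Ultravisor ABI wants via page_to_pfn(), and release the page
when done.

	struct page *page = gfn_to_page(kvm, gfn);

	if (!page)
		return -EFAULT;

	/* The Ultravisor ABI takes a real address, derived from the page. */
	ret = uv_page_in(kvm->arch.lpid, page_to_pfn(page) << PAGE_SHIFT,
			 gfn << PAGE_SHIFT, 0, PAGE_SHIFT);

	kvm_release_page_clean(page);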
From patchwork Thu Oct 10 18:24:21 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:21 -0700
Subject: [PATCH v13 79/85] KVM: Drop gfn_to_pfn() APIs now that all users are gone
Message-ID: <20241010182427.1434605-80-seanjc@google.com>

Drop gfn_to_pfn() and all its variants now that all users are gone.

No functional change intended.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 11 --------
 virt/kvm/kvm_main.c      | 59 ----------------------------------------
 2 files changed, 70 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index af928b59b2ab..4a1eaa40a215 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1274,14 +1274,6 @@ static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				write ? FOLL_WRITE : 0, writable, refcounted_page);
 }
 
-kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
-kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
-			  bool *writable);
-kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable);
-
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
@@ -1356,9 +1348,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 			     gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
-
 int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
 		   bool writable);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8f8b2cd01189..b2c8d429442d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3039,65 +3039,6 @@ static kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
 	return hva_to_pfn(kfp);
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool interruptible, bool no_wait,
-			       bool write_fault, bool *writable)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = slot,
-		.gfn = gfn,
-		.map_writable = writable,
-	};
-
-	if (write_fault)
-		kfp.flags |= FOLL_WRITE;
-	if (no_wait)
-		kfp.flags |= FOLL_NOWAIT;
-	if (interruptible)
-		kfp.flags |= FOLL_INTERRUPTIBLE;
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
-
-kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
-			  bool *writable)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = gfn_to_memslot(kvm, gfn),
-		.gfn = gfn,
-		.flags = write_fault ? FOLL_WRITE : 0,
-		.map_writable = writable,
-	};
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
-
-kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
-{
-	struct kvm_follow_pfn kfp = {
-		.slot = slot,
-		.gfn = gfn,
-		.flags = FOLL_WRITE,
-	};
-
-	return kvm_follow_pfn(&kfp);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
-
-kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn);
-
-kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
-{
-	return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
-}
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
-
 kvm_pfn_t __kvm_faultin_pfn(const struct kvm_memory_slot *slot, gfn_t gfn,
 			    unsigned int foll, bool *writable,
 			    struct page **refcounted_page)
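For callers that still need a bare pfn, the intended replacement is the
fault-in API visible in the surrounding context. A rough conversion sketch
(hypothetical caller; kvm_faultin_pfn()'s parameter order is inferred from
the hunk context above):

	struct page *refcounted_page = NULL;
	bool writable;
	kvm_pfn_t pfn;

	/* Was: pfn = gfn_to_pfn_prot(kvm, gfn, true, &writable); */
	pfn = kvm_faultin_pfn(vcpu, gfn, true, &writable, &refcounted_page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/* ... use the pfn, e.g. install it in the secondary MMU ... */

	if (refcounted_page)
		kvm_release_page_clean(refcounted_page);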
From patchwork Thu Oct 10 18:24:22 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:22 -0700
Subject: [PATCH v13 80/85] KVM: s390: Use kvm_release_page_dirty() to unpin "struct page" memory
Message-ID: <20241010182427.1434605-81-seanjc@google.com>

Use kvm_release_page_dirty() when unpinning guest pages, as the pfn was
retrieved via pin_guest_page(), i.e. is guaranteed to be backed by struct
page memory.  This will allow dropping kvm_release_pfn_dirty() and friends.

Signed-off-by: Sean Christopherson
---
 arch/s390/kvm/vsie.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 763a070f5955..e1fdf83879cf 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -670,7 +670,7 @@ static int pin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t *hpa)
 
 /* Unpins a page previously pinned via pin_guest_page, marking it as dirty.
  */
 static void unpin_guest_page(struct kvm *kvm, gpa_t gpa, hpa_t hpa)
 {
-	kvm_release_pfn_dirty(hpa >> PAGE_SHIFT);
+	kvm_release_page_dirty(pfn_to_page(hpa >> PAGE_SHIFT));
 	/* mark the page always as dirty for migration */
 	mark_page_dirty(kvm, gpa_to_gfn(gpa));
 }
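The same one-liner applies anywhere a pfn is known to come from pinned,
struct-page-backed memory (a sketch; this is safe only under that guarantee,
which pin_guest_page() provides here):

	/* Valid only because the pfn is guaranteed to have a struct page. */
	kvm_release_page_dirty(pfn_to_page(pfn));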
From patchwork Thu Oct 10 18:24:23 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:23 -0700
Subject: [PATCH v13 81/85] KVM: Make kvm_follow_pfn.refcounted_page a required field
Message-ID: <20241010182427.1434605-82-seanjc@google.com>

Now that the legacy gfn_to_pfn() APIs are gone, and all callers of
hva_to_pfn() pass in a refcounted_page pointer, make it a required field to
ensure all future usage in KVM plays nice.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b2c8d429442d..a483da96f4be 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2834,8 +2834,7 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 		pfn = page_to_pfn(page);
 	}
 
-	if (kfp->refcounted_page)
-		*kfp->refcounted_page = page;
+	*kfp->refcounted_page = page;
 
 	return pfn;
 }
@@ -2986,6 +2985,9 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 
 	might_sleep();
 
+	if (WARN_ON_ONCE(!kfp->refcounted_page))
+		return KVM_PFN_ERR_FAULT;
+
 	if (hva_to_pfn_fast(kfp, &pfn))
 		return pfn;
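Put differently, every internal user must now supply somewhere to stash the
refcounted page, even if it only wants the pfn. A sketch of a conforming call
(hypothetical internal caller, mirroring the structure shown in this series):

	struct page *refcounted_page = NULL;
	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_WRITE,
		.refcounted_page = &refcounted_page,	/* now mandatory */
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

	/* Leaving .refcounted_page NULL now trips the WARN_ON_ONCE() added
	 * to hva_to_pfn() above and yields KVM_PFN_ERR_FAULT. */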
From patchwork Thu Oct 10 18:24:24 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:24 -0700
Subject: [PATCH v13 82/85] KVM: x86/mmu: Don't mark "struct page" accessed when zapping SPTEs
Message-ID: <20241010182427.1434605-83-seanjc@google.com>

Don't mark pages/folios as accessed in the primary MMU when zapping SPTEs,
as doing so relies on kvm_pfn_to_refcounted_page(), and generally speaking
is unnecessary and wasteful.  KVM participates in page aging via
mmu_notifiers, so there's no need to push "accessed" updates to the primary
MMU.

And if KVM zaps a SPTE in response to an mmu_notifier, marking it accessed
_after_ the primary MMU has decided to zap the page is likely to go
unnoticed, i.e. odds are good that, if the page is being zapped for reclaim,
the page will be swapped out regardless of whether or not KVM marks the page
accessed.

Dropping x86's use of kvm_set_pfn_accessed() also paves the way for removing
kvm_pfn_to_refcounted_page() and all its users.
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 17 -----------------
 arch/x86/kvm/mmu/tdp_mmu.c |  3 ---
 2 files changed, 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5acdaf3b1007..55eeca931e23 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -559,10 +559,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  */
 static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
-	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
 	int level = sptep_to_sp(sptep)->role.level;
-	struct page *page;
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
@@ -574,21 +572,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 		return old_spte;
 
 	kvm_update_page_stats(kvm, level, -1);
-
-	pfn = spte_to_pfn(old_spte);
-
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
-
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
-
 	return old_spte;
 }

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8aa0d7a7602b..91caa73a905b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -861,9 +861,6 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 
-		if (is_accessed_spte(iter.old_spte))
-			kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
-
 		/*
 		 * Zappings SPTEs in invalid roots doesn't require a TLB flush,
 		 * see kvm_tdp_mmu_zap_invalidated_roots() for details.
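With the accessed-bit plumbing gone, the only bookkeeping left when clearing
a present SPTE is the page-stats update, i.e. as visible in the mmu.c hunk
above, the helper's tail is now simply:

	kvm_update_page_stats(kvm, level, -1);
	return old_spte;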
From patchwork Thu Oct 10 18:24:25 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:25 -0700
Subject: [PATCH v13 83/85] KVM: arm64: Don't mark "struct page" accessed when making SPTE young
Message-ID: <20241010182427.1434605-84-seanjc@google.com>

Don't mark pages/folios as accessed in the primary MMU when making a SPTE
young in KVM's secondary MMU, as doing so relies on
kvm_pfn_to_refcounted_page(), and generally speaking is unnecessary and
wasteful.  KVM participates in page aging via mmu_notifiers, so there's no
need to push "accessed" updates to the primary MMU.

Dropping use of kvm_set_pfn_accessed() also paves the way for removing
kvm_pfn_to_refcounted_page() and all its users.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +---
 arch/arm64/kvm/hyp/pgtable.c         | 7 ++-----
 arch/arm64/kvm/mmu.c                 | 6 +-----
 3 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 03f4c3d7839c..aab04097b505 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -674,10 +674,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
- *
- * Return: The old page-table entry prior to setting the flag, 0 on failure.
  */
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11bcebac908..40bd55966540 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,19 +1245,16 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 				 NULL, NULL, 0);
 }
 
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
 {
-	kvm_pte_t pte = 0;
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1,
 				       KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       &pte, NULL,
+				       NULL, NULL,
 				       KVM_PGTABLE_WALK_HANDLE_FAULT |
 				       KVM_PGTABLE_WALK_SHARED);
 	if (!ret)
 		dsb(ishst);
-
-	return pte;
 }
 
 struct stage2_age_data {

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 4054356c9712..e2ae9005e333 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1706,18 +1706,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
-	kvm_pte_t pte;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
 	read_unlock(&vcpu->kvm->mmu_lock);
-
-	if (kvm_pte_valid(pte))
-		kvm_set_pfn_accessed(kvm_pte_to_pfn(pte));
 }
 
 /**
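The arm64 access-fault handler correspondingly shrinks to a pure stage-2
operation; its post-patch form, condensed from the mmu.c hunk above (the
local 'mmu' variable is inlined here for brevity):

	static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
	{
		trace_kvm_access_fault(fault_ipa);

		read_lock(&vcpu->kvm->mmu_lock);
		kvm_pgtable_stage2_mkyoung(vcpu->arch.hw_mmu->pgt, fault_ipa);
		read_unlock(&vcpu->kvm->mmu_lock);
	}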
From patchwork Thu Oct 10 18:24:26 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830829
X-Mailing-List: kvm@vger.kernel.org
Reply-To: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:26 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-85-seanjc@google.com>
Subject: [PATCH v13 84/85] KVM: Drop APIs that manipulate "struct page" via pfns
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao, David Matlack,
 David Stevens, Andrew Jones

Remove all kvm_{release,set}_pfn_*() APIs now that all users are gone.

No functional change intended.
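(A hedged sketch of the replacement pattern, for illustration only;
example_put_page() is hypothetical, while kvm_release_page_clean() and
kvm_release_page_dirty() are the real page-based APIs the deleted wrappers
forwarded to.)

	/*
	 * KVM now tracks the backing struct page directly, e.g. via the
	 * refcounted_page out-parameter of kvm_faultin_pfn(), so a release
	 * operates on the page instead of guessing from a bare pfn.
	 */
	static void example_put_page(struct page *refcounted_page, bool dirty)
	{
		/* NULL means the memory isn't refcounted; nothing to put. */
		if (!refcounted_page)
			return;

		if (dirty)
			kvm_release_page_dirty(refcounted_page);
		else
			kvm_release_page_clean(refcounted_page);
	}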
Reviewed-by: Alex Bennée
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h |  5 ----
 virt/kvm/kvm_main.c      | 55 ----------------------------------------
 2 files changed, 60 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4a1eaa40a215..d045f8310a48 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1274,11 +1274,6 @@ static inline kvm_pfn_t kvm_faultin_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				  write ? FOLL_WRITE : 0, writable, refcounted_page);
 }
 
-void kvm_release_pfn_clean(kvm_pfn_t pfn);
-void kvm_release_pfn_dirty(kvm_pfn_t pfn);
-void kvm_set_pfn_dirty(kvm_pfn_t pfn);
-void kvm_set_pfn_accessed(kvm_pfn_t pfn);
-
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a483da96f4be..396ca14f18f3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3164,61 +3164,6 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
-void kvm_release_pfn_clean(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (is_error_noslot_pfn(pfn))
-		return;
-
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_clean(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
-
-void kvm_release_pfn_dirty(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (is_error_noslot_pfn(pfn))
-		return;
-
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_dirty(page);
-}
-EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
-
-/*
- * Note, checking for an error/noslot pfn is the caller's responsibility when
- * directly marking a page dirty/accessed.  Unlike the "release" helpers, the
- * "set" helpers are not to be used when the pfn might point at garbage.
- */
-void kvm_set_pfn_dirty(kvm_pfn_t pfn)
-{
-	if (WARN_ON(is_error_noslot_pfn(pfn)))
-		return;
-
-	if (pfn_valid(pfn))
-		kvm_set_page_dirty(pfn_to_page(pfn));
-}
-EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
-
-void kvm_set_pfn_accessed(kvm_pfn_t pfn)
-{
-	if (WARN_ON(is_error_noslot_pfn(pfn)))
-		return;
-
-	if (pfn_valid(pfn))
-		kvm_set_page_accessed(pfn_to_page(pfn));
-}
-EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
-
 static int next_segment(unsigned long len, int offset)
 {
 	if (len > PAGE_SIZE - offset)
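(A similarly hedged sketch for the "set" side; example_mark_page() is
hypothetical. With the pfn variants gone, dirty/accessed updates are made
on a struct page the caller has already resolved, so the error/noslot-pfn
filtering called out in the deleted comment goes away with the wrappers.)

	static void example_mark_page(struct page *page, bool dirty)
	{
		/* The caller already holds a validated page; no pfn checks. */
		kvm_set_page_accessed(page);
		if (dirty)
			kvm_set_page_dirty(page);
	}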
From patchwork Thu Oct 10 18:24:27 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830830
X-Mailing-List: kvm@vger.kernel.org
Reply-To: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:27 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-86-seanjc@google.com>
Subject: [PATCH v13 85/85] KVM: Don't grab reference on VM_MIXEDMAP pfns that
 have a "struct page"
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao, David Matlack,
 David Stevens, Andrew Jones

Now that KVM no longer relies on an ugly heuristic to find its struct page
references, i.e. now that KVM can't get false positives on VM_MIXEDMAP
pfns, remove KVM's hack to elevate the refcount for pfns that happen to
have a valid struct page.

In addition to removing a long-standing wart in KVM, this allows KVM to
map non-refcounted struct page memory into the guest, e.g. for exposing
GPU TTM buffers to KVM guests.
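(To make the hazard concrete, a hypothetical sketch reconstructed from the
FIXME comment deleted below: a tail page of a non-compound higher-order
allocation is a valid struct page whose refcount is not used, so
manufacturing a reference on it ends badly.)

	struct page *page = pfn_to_page(pfn);	/* pfn_valid(), yet not refcounted */

	get_page(page);		/* creates the page's one and only reference */
	/* ... the pfn is mapped into the guest and later released ... */
	put_page(page);		/* refcount hits zero: frees memory KVM never owned */

This is why the old code went through get_page_unless_zero() and rejected
such pages outright; now that KVM takes references only where it
demonstrably owns them, this class of memory can be mapped into the guest
safely.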
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h |  3 --
 virt/kvm/kvm_main.c      | 75 ++--------------------------------------
 2 files changed, 2 insertions(+), 76 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d045f8310a48..02f0206fd2dc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1730,9 +1730,6 @@ void kvm_arch_sync_events(struct kvm *kvm);
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 
-struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn);
-bool kvm_is_zone_device_page(struct page *page);
-
 struct kvm_irq_ack_notifier {
 	struct hlist_node link;
 	unsigned gsi;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 396ca14f18f3..b1b10dc408a0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -160,52 +160,6 @@ __weak void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
 {
 }
 
-bool kvm_is_zone_device_page(struct page *page)
-{
-	/*
-	 * The metadata used by is_zone_device_page() to determine whether or
-	 * not a page is ZONE_DEVICE is guaranteed to be valid if and only if
-	 * the device has been pinned, e.g. by get_user_pages().  WARN if the
-	 * page_count() is zero to help detect bad usage of this helper.
-	 */
-	if (WARN_ON_ONCE(!page_count(page)))
-		return false;
-
-	return is_zone_device_page(page);
-}
-
-/*
- * Returns a 'struct page' if the pfn is "valid" and backed by a refcounted
- * page, NULL otherwise.  Note, the list of refcounted PG_reserved page types
- * is likely incomplete, it has been compiled purely through people wanting to
- * back guest with a certain type of memory and encountering issues.
- */
-struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
-{
-	struct page *page;
-
-	if (!pfn_valid(pfn))
-		return NULL;
-
-	page = pfn_to_page(pfn);
-	if (!PageReserved(page))
-		return page;
-
-	/* The ZERO_PAGE(s) is marked PG_reserved, but is refcounted. */
-	if (is_zero_pfn(pfn))
-		return page;
-
-	/*
-	 * ZONE_DEVICE pages currently set PG_reserved, but from a refcounting
-	 * perspective they are "normal" pages, albeit with slightly different
-	 * usage rules.
-	 */
-	if (kvm_is_zone_device_page(page))
-		return page;
-
-	return NULL;
-}
-
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
  */
@@ -2804,35 +2758,10 @@ static kvm_pfn_t kvm_resolve_pfn(struct kvm_follow_pfn *kfp, struct page *page,
 	if (kfp->map_writable)
 		*kfp->map_writable = writable;
 
-	/*
-	 * FIXME: Remove this once KVM no longer blindly calls put_page() on
-	 * every pfn that points at a struct page.
-	 *
-	 * Get a reference for follow_pte() pfns if they happen to point at a
-	 * struct page, as KVM will ultimately call kvm_release_pfn_clean() on
-	 * the returned pfn, i.e. KVM expects to have a reference.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid struct pages,
-	 * but be allocated without refcounting, e.g. tail pages of
-	 * non-compound higher order allocations.  Grabbing and putting a
-	 * reference to such pages would cause KVM to prematurely free a page
-	 * it doesn't own (KVM gets and puts the one and only reference).
-	 * Don't allow those pages until the FIXME is resolved.
-	 *
-	 * Don't grab a reference for pins, callers that pin pages are required
-	 * to check refcounted_page, i.e. must not blindly release the pfn.
-	 */
-	if (map) {
+	if (map)
 		pfn = map->pfn;
-
-		if (!kfp->pin) {
-			page = kvm_pfn_to_refcounted_page(pfn);
-			if (page && !get_page_unless_zero(page))
-				return KVM_PFN_ERR_FAULT;
-		}
-	} else {
+	else
 		pfn = page_to_pfn(page);
-	}
 
 	*kfp->refcounted_page = page;
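(For reference, a condensed and hypothetical rendering of what the tail of
kvm_resolve_pfn() reduces to after this hunk; the local variable names
follow the diff, but the surrounding function is paraphrased from the
series rather than quoted.)

	if (kfp->map_writable)
		*kfp->map_writable = writable;

	/*
	 * Take the pfn from whichever lookup resolved it; the only refcount
	 * bookkeeping left is reporting the refcounted page, if any, back to
	 * the caller.
	 */
	pfn = map ? map->pfn : page_to_pfn(page);
	*kfp->refcounted_page = page;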