From patchwork Wed Jun 22 21:36:54 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 12891502
From: Peter Xu <peterx@redhat.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Andrew Morton, David Hildenbrand,
    "Dr . David Alan Gilbert", Andrea Arcangeli, Linux MM Mailing List,
    Sean Christopherson
Subject: [PATCH 2/4] kvm: Merge "atomic" and "write" in __gfn_to_pfn_memslot()
Date: Wed, 22 Jun 2022 17:36:54 -0400
Message-Id: <20220622213656.81546-3-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220622213656.81546-1-peterx@redhat.com>
References: <20220622213656.81546-1-peterx@redhat.com>

Merge two boolean parameters into a bitmask flag called kvm_gtp_flag_t
for __gfn_to_pfn_memslot().  This cleans up the parameter list, and also
prepares for a new boolean to be added to __gfn_to_pfn_memslot().
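
A minimal stand-alone sketch of the pattern, for reference only (not part
of the patch): BIT(), __bitwise and __force are stubbed out below so the
example compiles in plain userspace C; in the kernel they come from the
usual headers, where __bitwise/__force drive sparse type checking.

#include <stdio.h>
#include <stdbool.h>

/* Userspace stand-ins for the kernel definitions (assumption: no sparse). */
#define BIT(n)         (1U << (n))
#define __bitwise
#define __force

typedef unsigned int __bitwise kvm_gtp_flag_t;

#define KVM_GTP_WRITE  ((__force kvm_gtp_flag_t) BIT(0))
#define KVM_GTP_ATOMIC ((__force kvm_gtp_flag_t) BIT(1))

/* Callee side: unpack the mask back into the old booleans. */
static void hva_to_pfn_demo(kvm_gtp_flag_t gtp_flags)
{
        bool write_fault = gtp_flags & KVM_GTP_WRITE;
        bool atomic = gtp_flags & KVM_GTP_ATOMIC;

        printf("write=%d atomic=%d\n", write_fault, atomic);
}

int main(void)
{
        bool write_fault = true;

        /* Caller side: the translation used at the converted call sites. */
        hva_to_pfn_demo(write_fault ? KVM_GTP_WRITE : 0);
        hva_to_pfn_demo(KVM_GTP_WRITE | KVM_GTP_ATOMIC);
        return 0;
}
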
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/kvm/mmu.c                   |  5 ++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  5 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  5 ++--
 arch/x86/kvm/mmu/mmu.c                 | 10 +++----
 include/linux/kvm_host.h               |  9 ++++++-
 virt/kvm/kvm_main.c                    | 37 +++++++++++++++-----------
 virt/kvm/kvm_mm.h                      |  6 +++--
 virt/kvm/pfncache.c                    |  2 +-
 8 files changed, 49 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f5651a05b6a8..ce1edb512b4e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1204,8 +1204,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
          */
         smp_rmb();
 
-        pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-                                   write_fault, &writable, NULL);
+        pfn = __gfn_to_pfn_memslot(memslot, gfn,
+                                   write_fault ? KVM_GTP_WRITE : 0,
+                                   NULL, &writable, NULL);
         if (pfn == KVM_PFN_ERR_HWPOISON) {
                 kvm_send_hwpoison_signal(hva, vma_shift);
                 return 0;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 514fd45c1994..e2769d58dd87 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -598,8 +598,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
                 write_ok = true;
         } else {
                 /* Call KVM generic code to do the slow-path check */
-                pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-                                           writing, &write_ok, NULL);
+                pfn = __gfn_to_pfn_memslot(memslot, gfn,
+                                           writing ? KVM_GTP_WRITE : 0,
+                                           NULL, &write_ok, NULL);
                 if (is_error_noslot_pfn(pfn))
                         return -EFAULT;
                 page = NULL;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 42851c32ff3b..232b17c75b83 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -845,8 +845,9 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
         unsigned long pfn;
 
         /* Call KVM generic code to do the slow-path check */
-        pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-                                   writing, upgrade_p, NULL);
+        pfn = __gfn_to_pfn_memslot(memslot, gfn,
+                                   writing ? KVM_GTP_WRITE : 0,
+                                   NULL, upgrade_p, NULL);
         if (is_error_noslot_pfn(pfn))
                 return -EFAULT;
         page = NULL;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f4653688fa6d..e92f1ab63d6a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3968,6 +3968,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 
 static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+        kvm_gtp_flag_t flags = fault->write ? KVM_GTP_WRITE : 0;
         struct kvm_memory_slot *slot = fault->slot;
         bool async;
 
@@ -3999,8 +4000,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
         }
 
         async = false;
-        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
-                                          fault->write, &fault->map_writable,
+        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags,
+                                          &async, &fault->map_writable,
                                           &fault->hva);
         if (!async)
                 return RET_PF_CONTINUE; /* *pfn has correct page already */
@@ -4016,9 +4017,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
                 }
         }
 
-        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL,
-                                          fault->write, &fault->map_writable,
-                                          &fault->hva);
+        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags, NULL,
+                                          &fault->map_writable, &fault->hva);
         return RET_PF_CONTINUE;
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c20f2d55840c..b646b6fcaec6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1146,8 +1146,15 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
                       bool *writable);
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn);
+
+/* gfn_to_pfn (gtp) flags */
+typedef unsigned int __bitwise kvm_gtp_flag_t;
+
+#define KVM_GTP_WRITE  ((__force kvm_gtp_flag_t) BIT(0))
+#define KVM_GTP_ATOMIC ((__force kvm_gtp_flag_t) BIT(1))
+
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-                               bool atomic, bool *async, bool write_fault,
+                               kvm_gtp_flag_t gtp_flags, bool *async,
                                bool *writable, hva_t *hva);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 64ec2222a196..952400b42ee9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2444,9 +2444,11 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
  */
-static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
+static int hva_to_pfn_slow(unsigned long addr, bool *async,
+                           kvm_gtp_flag_t gtp_flags,
                            bool *writable, kvm_pfn_t *pfn)
 {
+        bool write_fault = gtp_flags & KVM_GTP_WRITE;
         unsigned int flags = FOLL_HWPOISON;
         struct page *page;
         int npages = 0;
@@ -2565,20 +2567,22 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 /*
  * Pin guest page in memory and return its pfn.
  * @addr: host virtual address which maps memory to the guest
- * @atomic: whether this function can sleep
+ * @gtp_flags: kvm_gtp_flag_t flags (atomic, write, ..)
  * @async: whether this function need to wait IO complete if the
  *         host page is not in the memory
- * @write_fault: whether we should get a writable host page
  * @writable: whether it allows to map a writable host page for !@write_fault
  *
- * The function will map a writable host page for these two cases:
+ * The function will map a writable (KVM_GTP_WRITE set) host page for these
+ * two cases:
  * 1): @write_fault = true
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-                     bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(unsigned long addr, kvm_gtp_flag_t gtp_flags, bool *async,
+                     bool *writable)
 {
+        bool write_fault = gtp_flags & KVM_GTP_WRITE;
+        bool atomic = gtp_flags & KVM_GTP_ATOMIC;
         struct vm_area_struct *vma;
         kvm_pfn_t pfn = 0;
         int npages, r;
@@ -2592,7 +2596,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
         if (atomic)
                 return KVM_PFN_ERR_FAULT;
 
-        npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn);
+        npages = hva_to_pfn_slow(addr, async, gtp_flags, writable, &pfn);
         if (npages == 1)
                 return pfn;
 
@@ -2625,10 +2629,11 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 }
 
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-                               bool atomic, bool *async, bool write_fault,
+                               kvm_gtp_flag_t gtp_flags, bool *async,
                                bool *writable, hva_t *hva)
 {
-        unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
+        unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL,
+                                               gtp_flags & KVM_GTP_WRITE);
 
         if (hva)
                 *hva = addr;
@@ -2651,28 +2656,30 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
                 writable = NULL;
         }
 
-        return hva_to_pfn(addr, atomic, async, write_fault,
-                          writable);
+        return hva_to_pfn(addr, gtp_flags, async, writable);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
                           bool *writable)
 {
-        return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
-                                    write_fault, writable, NULL);
+        return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn,
+                                    write_fault ? KVM_GTP_WRITE : 0,
+                                    NULL, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-        return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
+        return __gfn_to_pfn_memslot(slot, gfn, KVM_GTP_WRITE,
+                                    NULL, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-        return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL);
+        return __gfn_to_pfn_memslot(slot, gfn, KVM_GTP_WRITE | KVM_GTP_ATOMIC,
+                                    NULL, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 41da467d99c9..1c870911eb48 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -3,6 +3,8 @@
 #ifndef __KVM_MM_H__
 #define __KVM_MM_H__ 1
 
+#include <linux/kvm_host.h>
+
 /*
  * Architectures can choose whether to use an rwlock or spinlock
  * for the mmu_lock.  These macros, for use in common code
@@ -24,8 +26,8 @@
 #define KVM_MMU_READ_UNLOCK(kvm)        spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-                     bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(unsigned long addr, kvm_gtp_flag_t gtp_flags, bool *async,
+                     bool *writable);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index dd84676615f1..0f9f6b5d2fbb 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -123,7 +123,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, unsigned long uhva)
         smp_rmb();
 
         /* We always request a writeable mapping */
-        new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
+        new_pfn = hva_to_pfn(uhva, KVM_GTP_WRITE, NULL, NULL);
         if (is_error_noslot_pfn(new_pfn))
                 break;
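
Extending the mask later should need only a new bit plus one unpacking
line, which is the payoff of the merge; as a hedged sketch in the style
of the patch (KVM_GTP_FOO is illustrative, not a flag from this series):

#define KVM_GTP_FOO ((__force kvm_gtp_flag_t) BIT(2))

        /* in hva_to_pfn(), next to the existing unpacking: */
        bool foo = gtp_flags & KVM_GTP_FOO;

Call sites that do not pass the new flag stay untouched, unlike with the
old scheme where every caller had to grow another boolean argument.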