From patchwork Mon Feb 3 23:09:09 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11363547
Date: Mon, 3 Feb 2020 15:09:09 -0800
Message-Id: <20200203230911.39755-1-bgardon@google.com>
Subject: [PATCH 1/3] kvm: mmu: Replace unsigned with unsigned int for PTE access
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Oliver Upton, Ben Gardon

There are several functions which pass an access permission mask for SPTEs
as an unsigned. This works, but checkpatch complains about it. Switch the
occurrences of unsigned to unsigned int to satisfy checkpatch.

No functional change expected. Tested by running kvm-unit-tests on an Intel
Haswell machine. This commit introduced no new failures.
This commit can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2358

Signed-off-by: Ben Gardon
Reviewed-by: Oliver Upton
Reviewed-by: Vitaly Kuznetsov
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 84eeb61d06aa3..a9c593dec49bf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -452,7 +452,7 @@ static u64 get_mmio_spte_generation(u64 spte)
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
-			   unsigned access)
+			   unsigned int access)
 {
 	u64 gen = kvm_vcpu_memslots(vcpu)->generation & MMIO_SPTE_GEN_MASK;
 	u64 mask = generation_mmio_spte_mask(gen);
@@ -484,7 +484,7 @@ static unsigned get_mmio_spte_access(u64 spte)
 }
 
 static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
-			  kvm_pfn_t pfn, unsigned access)
+			  kvm_pfn_t pfn, unsigned int access)
 {
 	if (unlikely(is_noslot_pfn(pfn))) {
 		mark_mmio_spte(vcpu, sptep, gfn, access);
@@ -2475,7 +2475,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     gva_t gaddr,
 					     unsigned level,
 					     int direct,
-					     unsigned access)
+					     unsigned int access)
 {
 	union kvm_mmu_page_role role;
 	unsigned quadrant;
@@ -2990,7 +2990,7 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned pte_access, int level,
+		    unsigned int pte_access, int level,
 		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
 		    bool can_unsync, bool host_writable)
 {
@@ -3081,9 +3081,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	return ret;
 }
 
-static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
-			int write_fault, int level, gfn_t gfn, kvm_pfn_t pfn,
-			bool speculative, bool host_writable)
+static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+			unsigned int pte_access, int write_fault, int level,
+			gfn_t gfn, kvm_pfn_t pfn, bool speculative,
+			bool host_writable)
 {
 	int was_rmapped = 0;
 	int rmap_count;
@@ -3165,7 +3166,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 {
 	struct page *pages[PTE_PREFETCH_NUM];
 	struct kvm_memory_slot *slot;
-	unsigned access = sp->role.access;
+	unsigned int access = sp->role.access;
 	int i, ret;
 	gfn_t gfn;
 
@@ -3400,7 +3401,8 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 }
 
 static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
-				kvm_pfn_t pfn, unsigned access, int *ret_val)
+				kvm_pfn_t pfn, unsigned int access,
+				int *ret_val)
 {
 	/* The pfn is invalid, report the error! */
 	if (unlikely(is_error_pfn(pfn))) {
@@ -4005,7 +4007,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 
 	if (is_mmio_spte(spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
-		unsigned access = get_mmio_spte_access(spte);
+		unsigned int access = get_mmio_spte_access(spte);
 
 		if (!check_mmio_spte(vcpu, spte))
 			return RET_PF_INVALID;
@@ -4349,7 +4351,7 @@ static void inject_page_fault(struct kvm_vcpu *vcpu,
 }
 
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
-			   unsigned access, int *nr_present)
+			   unsigned int access, int *nr_present)
 {
 	if (unlikely(is_mmio_spte(*sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
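
For context, the checkpatch warning in question is "Prefer 'unsigned int' to bare
use of 'unsigned'", and the change is purely cosmetic because both spellings name
the same C type. A minimal, self-contained illustration (not kernel code; the
helper names below are made up):

#include <assert.h>

/* "unsigned" and "unsigned int" are the same type, so only the spelling
 * changes; checkpatch.pl flags the first form and accepts the second. */
static int spte_access_old(unsigned access)     { return (int)access; }
static int spte_access_new(unsigned int access) { return (int)access; }

int main(void)
{
	_Static_assert(sizeof(unsigned) == sizeof(unsigned int),
		       "same type, same size");
	assert(spte_access_old(3u) == spte_access_new(3u));
	return 0;
}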
From patchwork Mon Feb 3 23:09:10 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11363549
Date: Mon, 3 Feb 2020 15:09:10 -0800
Message-Id: <20200203230911.39755-2-bgardon@google.com>
In-Reply-To: <20200203230911.39755-1-bgardon@google.com>
References: <20200203230911.39755-1-bgardon@google.com>
Subject: [PATCH 2/3] kvm: mmu: Separate generating and setting mmio ptes
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Oliver Upton, Ben Gardon

Separate the function for generating MMIO page table entries from the
function that inserts them into the paging structure. This refactoring will
facilitate changes to the MMU synchronization model to use atomic
compare/exchanges (which are not guaranteed to succeed) instead of a
monolithic MMU lock.

No functional change expected. Tested by running kvm-unit-tests on an Intel
Haswell machine. This commit introduced no new failures.

This commit can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2359

Signed-off-by: Ben Gardon
Reviewed-by: Oliver Upton
Reviewed-by: Peter Shier
---
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a9c593dec49bf..b81010d0edae1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -451,9 +451,9 @@ static u64 get_mmio_spte_generation(u64 spte)
 	return gen;
 }
 
-static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
-			   unsigned int access)
+static u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 {
+	u64 gen = kvm_vcpu_memslots(vcpu)->generation & MMIO_SPTE_GEN_MASK;
 	u64 mask = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
@@ -464,6 +464,17 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 	mask |= (gpa & shadow_nonpresent_or_rsvd_mask) <<
 		shadow_nonpresent_or_rsvd_mask_len;
 
+	return mask;
+}
+
+static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
+			   unsigned int access)
+{
+	u64 mask = make_mmio_spte(vcpu, gfn, access);
+	unsigned int gen = get_mmio_spte_generation(mask);
+
+	access = mask & ACC_ALL;
+
 	trace_mark_mmio_spte(sptep, gfn, access, gen);
 	mmu_spte_set(sptep, mask);
 }
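
The value of the split is easiest to see next to the lock-free direction the
commit message alludes to: once SPTE construction is a pure function with no
side effects, the result can be installed with a compare-and-exchange that
simply fails under contention. The sketch below is plain C11, not KVM code;
compute_mmio_spte() and try_set_mmio_spte() are hypothetical stand-ins for
make_mmio_spte() and a future cmpxchg-based setter, and the SPTE encoding is
a toy.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for make_mmio_spte(): a pure function of its inputs. */
static uint64_t compute_mmio_spte(uint64_t gfn, unsigned int access)
{
	/* Simplified encoding: frame number in the high bits, access bits low. */
	return (gfn << 12) | (access & 0x7);
}

/* Hypothetical cmpxchg-based installer: returns false if *sptep changed
 * since old_spte was read, which is the "not guaranteed to succeed" case
 * the commit message mentions; the caller would retry or re-walk. */
static bool try_set_mmio_spte(_Atomic uint64_t *sptep, uint64_t old_spte,
			      uint64_t gfn, unsigned int access)
{
	uint64_t new_spte = compute_mmio_spte(gfn, access);

	return atomic_compare_exchange_strong(sptep, &old_spte, new_spte);
}

int main(void)
{
	_Atomic uint64_t spte = 0;

	if (try_set_mmio_spte(&spte, 0, 0x1234, 0x3))
		printf("installed spte %#llx\n", (unsigned long long)spte);
	return 0;
}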
From patchwork Mon Feb 3 23:09:11 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11363553
Date: Mon, 3 Feb 2020 15:09:11 -0800
Message-Id: <20200203230911.39755-3-bgardon@google.com>
In-Reply-To: <20200203230911.39755-1-bgardon@google.com>
References: <20200203230911.39755-1-bgardon@google.com>
Subject: [PATCH 3/3] kvm: mmu: Separate pte generation from set_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Oliver Upton, Ben Gardon

Separate the function for generating leaf page table entries from the
function that inserts them into the paging structure. This refactoring will
facilitate changes to the MMU synchronization model to use atomic
compare/exchanges (which are not guaranteed to succeed) instead of a
monolithic MMU lock.

No functional change expected. Tested by running kvm-unit-tests on an Intel
Haswell machine. This commit introduced no new failures.
This commit can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2360

Signed-off-by: Ben Gardon
Reviewed-by: Peter Shier
---
 arch/x86/kvm/mmu/mmu.c | 52 +++++++++++++++++++++++++++---------------
 1 file changed, 34 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b81010d0edae1..9239ad5265dc6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3000,20 +3000,14 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 #define SET_SPTE_WRITE_PROTECTED_PT	BIT(0)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 
-static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned int pte_access, int level,
-		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
-		    bool can_unsync, bool host_writable)
+static u64 make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
+		     gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
+		     bool can_unsync, bool host_writable, bool ad_disabled,
+		     int *ret)
 {
 	u64 spte = 0;
-	int ret = 0;
-	struct kvm_mmu_page *sp;
-
-	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
-		return 0;
 
-	sp = page_header(__pa(sptep));
-	if (sp_ad_disabled(sp))
+	if (ad_disabled)
 		spte |= SPTE_AD_DISABLED_MASK;
 	else if (kvm_vcpu_ad_need_write_protect(vcpu))
 		spte |= SPTE_AD_WRPROT_ONLY_MASK;
@@ -3066,27 +3060,49 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * is responsibility of mmu_get_page / kvm_sync_page.
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
-		if (!can_unsync && is_writable_pte(*sptep))
-			goto set_pte;
+		if (!can_unsync && is_writable_pte(old_spte))
+			return spte;
 
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
-			ret |= SET_SPTE_WRITE_PROTECTED_PT;
+			*ret |= SET_SPTE_WRITE_PROTECTED_PT;
 			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
 		}
 	}
 
-	if (pte_access & ACC_WRITE_MASK) {
-		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+	if (pte_access & ACC_WRITE_MASK)
 		spte |= spte_shadow_dirty_mask(spte);
-	}
 
 	if (speculative)
 		spte = mark_spte_for_access_track(spte);
 
-set_pte:
+	return spte;
+}
+
+static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+		    unsigned int pte_access, int level,
+		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
+		    bool can_unsync, bool host_writable)
+{
+	u64 spte = 0;
+	struct kvm_mmu_page *sp;
+	int ret = 0;
+
+	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
+		return 0;
+
+	sp = page_header(__pa(sptep));
+
+	spte = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
+			 can_unsync, host_writable, sp_ad_disabled(sp), &ret);
+	if (!spte)
+		return 0;
+
+	if (spte & PT_WRITABLE_MASK)
+		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+
 	if (mmu_spte_update(sptep, spte))
 		ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 	return ret;
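
As with the previous patch, the point of a pure make_spte()-style helper is
easiest to see next to the kind of retry loop a cmpxchg-based MMU could build
on top of it: compute the value, report side conditions through an
out-parameter, then publish the result atomically. The sketch below is plain
C11 with invented names (build_leaf_spte(), install_leaf_spte()) and a toy
SPTE layout; it illustrates the pattern only and is not KVM code.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NEED_REMOTE_TLB_FLUSH 0x2  /* stand-in for SET_SPTE_NEED_REMOTE_TLB_FLUSH */

/* Toy stand-in for make_spte(): pure computation, with side conditions
 * reported through *flags instead of being acted on directly. */
static uint64_t build_leaf_spte(uint64_t pfn, unsigned int pte_access, int *flags)
{
	uint64_t spte = (pfn << 12) | 0x1;      /* toy "present" bit */

	if (pte_access & 0x2)                   /* toy "writable" access bit */
		spte |= 0x2;
	*flags |= NEED_REMOTE_TLB_FLUSH;        /* the caller decides what to do */
	return spte;
}

/* Retry loop a cmpxchg-based MMU could use once generation is separated out:
 * recompute and retry whenever another thread changed the entry under us. */
static int install_leaf_spte(_Atomic uint64_t *sptep, uint64_t pfn,
			     unsigned int pte_access)
{
	int flags = 0;
	uint64_t old_spte = atomic_load(sptep);
	uint64_t new_spte;

	do {
		new_spte = build_leaf_spte(pfn, pte_access, &flags);
	} while (!atomic_compare_exchange_weak(sptep, &old_spte, new_spte));

	return flags;
}

int main(void)
{
	_Atomic uint64_t spte = 0;
	int flags = install_leaf_spte(&spte, 0x1234, 0x2);

	printf("spte=%#llx flags=%#x\n", (unsigned long long)spte, (unsigned)flags);
	return 0;
}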