From patchwork Wed Jun 27 21:59:04 2018
X-Patchwork-Submitter: Junaid Shahid
X-Patchwork-Id: 10492723
Date: Wed, 27 Jun 2018 14:59:04 -0700
In-Reply-To: <20180627215921.231329-1-junaids@google.com>
Message-Id: <20180627215921.231329-2-junaids@google.com>
References: <20180627215921.231329-1-junaids@google.com>
X-Mailer: git-send-email 2.18.0.rc2.346.g013aa6912e-goog
Subject: [PATCH v3 01/18] kvm: x86: Make sync_page() flush remote TLBs once only
From: Junaid Shahid <junaids@google.com>
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, andreslc@google.com, jmattson@google.com,
 liran.alon@oracle.com, sean.j.christopherson@intel.com
X-Mailing-List: kvm@vger.kernel.org

sync_page() calls set_spte() from a loop across a page table. It would
work better if set_spte() left the TLB flushing to its callers, so that
sync_page() can aggregate into a single call.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu.c         | 16 ++++++++++++----
 arch/x86/kvm/paging_tmpl.h | 12 ++++++++----
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d594690d8b95..75bc73e23df0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2724,6 +2724,10 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 	return true;
 }
 
+/* Bits which may be returned by set_spte() */
+#define SET_SPTE_WRITE_PROTECTED_PT	BIT(0)
+#define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
+
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		    unsigned pte_access, int level,
 		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
@@ -2800,7 +2804,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
-			ret = 1;
+			ret |= SET_SPTE_WRITE_PROTECTED_PT;
 			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
 		}
@@ -2816,7 +2820,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
 set_pte:
 	if (mmu_spte_update(sptep, spte))
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 done:
 	return ret;
 }
@@ -2827,6 +2831,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
 {
 	int was_rmapped = 0;
 	int rmap_count;
+	int set_spte_ret;
 	int ret = RET_PF_RETRY;
 
 	pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
@@ -2854,12 +2859,15 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
 			was_rmapped = 1;
 	}
 
-	if (set_spte(vcpu, sptep, pte_access, level, gfn, pfn, speculative,
-	      true, host_writable)) {
+	set_spte_ret = set_spte(vcpu, sptep, pte_access, level, gfn, pfn,
+				speculative, true, host_writable);
+	if (set_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) {
 		if (write_fault)
 			ret = RET_PF_EMULATE;
 		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 	}
+	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
+		kvm_flush_remote_tlbs(vcpu->kvm);
 
 	if (unlikely(is_mmio_spte(*sptep)))
 		ret = RET_PF_EMULATE;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6288e9d7068e..fc5fadf5b46a 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -968,6 +968,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	int i, nr_present = 0;
 	bool host_writable;
 	gpa_t first_pte_gpa;
+	int set_spte_ret = 0;
 
 	/* direct kvm_mmu_page can not be unsync. */
 	BUG_ON(sp->role.direct);
@@ -1024,12 +1025,15 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
 
-		set_spte(vcpu, &sp->spt[i], pte_access,
-			 PT_PAGE_TABLE_LEVEL, gfn,
-			 spte_to_pfn(sp->spt[i]), true, false,
-			 host_writable);
+		set_spte_ret |= set_spte(vcpu, &sp->spt[i],
+					 pte_access, PT_PAGE_TABLE_LEVEL,
+					 gfn, spte_to_pfn(sp->spt[i]),
+					 true, false, host_writable);
 	}
 
+	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
+		kvm_flush_remote_tlbs(vcpu->kvm);
+
 	return nr_present;
 }