From patchwork Fri Oct 14 18:21:27 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 9377277
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, yang.zhang.wz@gmail.com, feng.wu@intel.com, mst@redhat.com
Subject: [PATCH 1/5] KVM: x86: avoid atomic operations on APICv vmentry
Date: Fri, 14 Oct 2016 20:21:27 +0200
Message-Id: <1476469291-5039-2-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1476469291-5039-1-git-send-email-pbonzini@redhat.com>
References: <1476469291-5039-1-git-send-email-pbonzini@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

On some benchmarks (e.g. netperf with ioeventfd disabled), APICv
posted interrupts turn out to be slower than interrupt injection via
KVM_REQ_EVENT.  This patch slightly optimizes the IRR update, avoiding
expensive atomic operations in the common case where PI.ON=0 at
vmentry or the PIR vector is mostly zero.  This saves at least 20
cycles (1%) per vmexit, as measured by kvm-unit-tests' inl_from_qemu
test (20 runs):

          | enable_apicv=1  | enable_apicv=0
          | mean     stdev  | mean     stdev
----------|-----------------|------------------
before    | 5826     32.65  | 5765     47.09
after     | 5809     43.42  | 5777     77.02

Of course, any change in the right column is just a placebo effect. :)
The savings are bigger if interrupts are frequent.

Signed-off-by: Paolo Bonzini
Reviewed-by: Radim Krčmář
---
 arch/x86/kvm/lapic.c | 6 ++++--
 arch/x86/kvm/vmx.c   | 9 ++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 23b99f305382..63a442aefc12 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -342,9 +342,11 @@ void __kvm_apic_update_irr(u32 *pir, void *regs)
 	u32 i, pir_val;
 
 	for (i = 0; i <= 7; i++) {
-		pir_val = xchg(&pir[i], 0);
-		if (pir_val)
+		pir_val = READ_ONCE(pir[i]);
+		if (pir_val) {
+			pir_val = xchg(&pir[i], 0);
 			*((u32 *)(regs + APIC_IRR + i * 0x10)) |= pir_val;
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(__kvm_apic_update_irr);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2577183b40d9..7c79d6c6b6ed 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -521,6 +521,12 @@ static inline void pi_set_sn(struct pi_desc *pi_desc)
 		  (unsigned long *)&pi_desc->control);
 }
 
+static inline void pi_clear_on(struct pi_desc *pi_desc)
+{
+	clear_bit(POSTED_INTR_ON,
+		  (unsigned long *)&pi_desc->control);
+}
+
 static inline int pi_test_on(struct pi_desc *pi_desc)
 {
 	return test_bit(POSTED_INTR_ON,
@@ -4854,9 +4860,10 @@ static void vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (!pi_test_and_clear_on(&vmx->pi_desc))
+	if (!pi_test_on(&vmx->pi_desc))
 		return;
 
+	pi_clear_on(&vmx->pi_desc);
 	kvm_apic_update_irr(vcpu, vmx->pi_desc.pir);
 }
 
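(Editor's illustration, not part of the patch: the change applies a general
"cheap plain read first, atomic exchange only if nonzero" pattern, both at the
descriptor level with pi_test_on()/pi_clear_on() and per PIR word with
READ_ONCE()/xchg().  Below is a minimal standalone userspace C sketch of that
pattern; the names pending, irr and drain_pending are hypothetical stand-ins
for the kernel's pir/IRR handling.)

/* Illustrative sketch only, not from the kernel tree.  It mimics the
 * "plain load, then atomic exchange only when the word is nonzero"
 * pattern used by __kvm_apic_update_irr() above. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_WORDS 8

static _Atomic uint32_t pending[NR_WORDS];   /* producer side sets bits here */
static uint32_t irr[NR_WORDS];               /* consumer side accumulates here */

static void drain_pending(void)
{
	unsigned int i;

	for (i = 0; i < NR_WORDS; i++) {
		/* Cheap relaxed load first: most words are usually zero,
		 * so the expensive atomic read-modify-write is skipped. */
		uint32_t val = atomic_load_explicit(&pending[i],
						    memory_order_relaxed);
		if (!val)
			continue;
		/* Re-read and clear atomically so that bits set by the
		 * producer after the plain load are not lost. */
		val = atomic_exchange(&pending[i], 0);
		irr[i] |= val;
	}
}

int main(void)
{
	atomic_fetch_or(&pending[3], 0x80u);   /* simulate one posted bit */
	drain_pending();
	printf("irr[3] = 0x%x\n", (unsigned int)irr[3]);   /* prints 0x80 */
	return 0;
}

(The same idea explains the vmx.c hunk: pi_test_on() is the cheap check that
skips the whole sync when nothing is posted, and pi_clear_on() is only issued
once we know there is work to do.)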