From patchwork Thu Jul 25 15:35:33 2019
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 11059091
From: Marc Zyngier
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org
Cc: Marc Zyngier, Julien Thierry, James Morse, Suzuki K Poulose,
 Christoffer Dall, Eric Auger, Andre Przywara, Zenghui Yu,
 "Raslan, KarimAllah", "Saidi, Ali"
Subject: [PATCH v3 00/10] KVM: arm/arm64: vgic: ITS translation cache
Date: Thu, 25 Jul 2019 16:35:33 +0100
Message-Id: <20190725153543.24386-1-maz@kernel.org>

It recently became apparent[1] that our LPI injection path is not as
efficient as it could be when injecting interrupts coming from a VFIO
assigned device.

Although the proposed patch wasn't 100% correct, it outlined at least
two issues:

(1) Injecting an LPI from VFIO always results in a context switch to a
    worker thread: no good

(2) We have no way of amortising the cost of translating a DID+EID pair
    to an LPI number

The reason for (1) is that we may sleep when translating an LPI, so we
do need process context. A way to fix that is to implement a small LPI
translation cache that could be looked up from an atomic context. It
would also solve (2).

This is what this small series proposes. It implements a very basic
LRU cache of pre-translated LPIs, which gets used to implement
kvm_arch_set_irq_inatomic. The size of the cache is currently
hard-coded at 16 times the number of vcpus, a number I have picked
under the influence of Ali Saidi. If that's not enough for you, blame
me, though.
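To make this more concrete, here is a minimal sketch of what such a
cache can look like. This is *not* the code from the series: struct
lpi_cache_entry and lpi_cache_get() are names made up for this
illustration, and reference counting, cache population and eviction
are all omitted. The point is simply that the lookup only ever takes
a spinlock, which is what makes it usable from a context that cannot
sleep:

/*
 * Illustrative sketch only, not the actual patches: the entry layout
 * and helper name are invented for this cover letter.
 */
struct lpi_cache_entry {
	struct list_head	entry;
	u32			devid;		/* DID */
	u32			eventid;	/* EID */
	struct vgic_irq		*irq;		/* pre-translated LPI */
};

static struct vgic_irq *lpi_cache_get(struct kvm *kvm, u32 devid,
				      u32 eventid)
{
	struct vgic_dist *dist = &kvm->arch.vgic;
	struct lpi_cache_entry *cte;
	struct vgic_irq *irq = NULL;
	unsigned long flags;

	/* A spinlock, not a mutex: safe to take from atomic context */
	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);

	list_for_each_entry(cte, &dist->lpi_translation_cache, entry) {
		if (cte->devid != devid || cte->eventid != eventid)
			continue;

		/* LRU: a hit moves to the head, eviction picks the tail */
		list_move(&cte->entry, &dist->lpi_translation_cache);
		irq = cte->irq;
		break;
	}

	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);

	return irq;
}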
Does it work? Well, it doesn't crash, and is thus perfect.

More seriously, I don't really have a way to benchmark it directly, so
my observations are only indirect:

On a TX2 system, I run a 4 vcpu VM with an Ethernet interface passed
to it directly. From the host, I inject interrupts using debugfs. In
parallel, I look at the number of context switches and the number of
interrupts on the host. Without this series, I get the same number for
both IRQs and context switches (about half a million of each per
second is pretty easy to reach). With this series, the number of
context switches drops to something pretty small (in the low 2k),
while the number of interrupts stays the same.

Yes, this is a pretty rubbish benchmark, what did you expect? ;-)

Andre did run some benchmarks of his own, with some rather positive
results[2]. So I'm putting this out for people with real workloads to
try out and report what they see.

[1] https://lore.kernel.org/lkml/1552833373-19828-1-git-send-email-yuzenghui@huawei.com/
[2] https://www.spinics.net/lists/arm-kernel/msg742655.html

* From v2:
  - Added invalidation on turning the ITS off
  - Added invalidation on MAPC with V=0
  - Added Rb's from Eric

* From v1:
  - Fixed a race on allocation, where the same LPI could be cached
    multiple times
  - Now invalidate the cache on vgic teardown, avoiding memory leaks
  - Changed the patch split slightly, general reshuffling
  - Small cleanups here and there
  - Rebased on 5.2-rc4

Marc Zyngier (10):
  KVM: arm/arm64: vgic: Add LPI translation cache definition
  KVM: arm/arm64: vgic: Add __vgic_put_lpi_locked primitive
  KVM: arm/arm64: vgic-its: Add MSI-LPI translation cache invalidation
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on
    specific commands
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on
    disabling LPIs
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on ITS
    disable
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on
    vgic teardown
  KVM: arm/arm64: vgic-its: Cache successful MSI->LPI translation
  KVM: arm/arm64: vgic-its: Check the LPI translation cache on MSI
    injection
  KVM: arm/arm64: vgic-irqfd: Implement kvm_arch_set_irq_inatomic

 include/kvm/arm_vgic.h           |   3 +
 virt/kvm/arm/vgic/vgic-init.c    |   5 +
 virt/kvm/arm/vgic/vgic-irqfd.c   |  36 +++++-
 virt/kvm/arm/vgic/vgic-its.c     | 207 +++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic-mmio-v3.c |   4 +-
 virt/kvm/arm/vgic/vgic.c         |  26 ++--
 virt/kvm/arm/vgic/vgic.h         |   5 +
 7 files changed, 270 insertions(+), 16 deletions(-)

Tested-by: Andre Przywara
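As an appendix of sorts, the last patch of the series boils down to
something like the sketch below (again illustrative only, reusing the
made-up lpi_cache_get() helper from earlier; the real patches differ
in detail). On a cache hit the LPI is made pending without any context
switch; on a miss we return -EWOULDBLOCK, and the irqfd code falls
back to the existing schedulable path, which performs the full
translation and can populate the cache:

int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
			      struct kvm *kvm, int irq_source_id,
			      int level, bool line_status)
{
	struct vgic_irq *irq;
	unsigned long flags;

	if (!vgic_has_its(kvm) || e->type != KVM_IRQ_ROUTING_MSI)
		return -EWOULDBLOCK;

	/* Cache miss: defer to the sleeping injection path */
	irq = lpi_cache_get(kvm, e->msi.devid, e->msi.data);
	if (!irq)
		return -EWOULDBLOCK;

	/* Cache hit: make the LPI pending right here */
	raw_spin_lock_irqsave(&irq->irq_lock, flags);
	irq->pending_latch = true;
	vgic_queue_irq_unlock(kvm, irq, flags);

	return 0;
}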