From patchwork Thu Jul 2 13:17:19 2015
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, christoffer.dall@linaro.org, marc.zyngier@arm.com,
    alex.williamson@redhat.com, pbonzini@redhat.com, avi.kivity@gmail.com,
    mtosatti@redhat.com, feng.wu@intel.com, joro@8bytes.org,
    b.reynal@virtualopensystems.com
Cc: linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [RFC 09/17] bypass: IRQ bypass manager proto by Alex
Date: Thu, 2 Jul 2015 15:17:19 +0200
Message-Id: <1435843047-6327-10-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1435843047-6327-1-git-send-email-eric.auger@linaro.org>
References: <1435843047-6327-1-git-send-email-eric.auger@linaro.org>

From: Alex Williamson <alex.williamson@redhat.com>

There are plenty of details to be filled in, but I think the basics look
something like the code below.
The IRQ bypass manager just defines a pair of structures, one for
interrupt producers and one for interrupt consumers.  I'm certain that
we'll need more callbacks than I've defined below, but figuring out what
those should be for the best abstraction is the hardest part of this idea.

The manager provides both registration and de-registration interfaces
for both types of objects and keeps lists for each, protected by a lock.
The manager doesn't even really need to know what the match token is,
but I assume for our purposes it will be an eventfd_ctx.

On the vfio side, the producer struct would be embedded in the
vfio_pci_irq_ctx struct.  KVM would probably embed the consumer struct
in _irqfd.  As I've coded below, the IRQ bypass manager calls the
consumer callbacks, so the producer struct would need fields or
callbacks to provide the consumer the info it needs.  AIUI, for the
Posted Interrupts model, VFIO only needs to provide data to the
consumer.  For IRQ Forwarding, I think the producer needs to be informed
when bypass is active so it can model the incoming interrupt as edge vs.
level.

I've prototyped the base IRQ bypass manager here as statically built-in,
but I don't see any reason it couldn't be a module that's loaded by
dependency when either vfio-pci or kvm-intel is loaded (or other
producer/consumer objects).

Is this a reasonable starting point to craft the additional fields and
callbacks, and the interaction of who calls whom, that we need to
support Posted Interrupts and IRQ Forwarding?  Is the AMD version of
this still alive?  Thanks,

Alex

---
 arch/x86/kvm/Kconfig              |   1 +
 drivers/vfio/pci/Kconfig          |   1 +
 drivers/vfio/pci/vfio_pci_intrs.c |   6 ++
 include/linux/irqbypass.h         |  23 +++++++
 kernel/irq/Kconfig                |   3 +
 kernel/irq/Makefile               |   1 +
 kernel/irq/bypass.c               | 116 ++++++++++++++++++++++++++++++++++++++
 virt/kvm/eventfd.c                |   4 ++
 8 files changed, 155 insertions(+)
 create mode 100644 include/linux/irqbypass.h
 create mode 100644 kernel/irq/bypass.c

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index d8a1d56..86d0d77 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -61,6 +61,7 @@ config KVM_INTEL
 	depends on KVM
 	# for perf_guest_get_msrs():
 	depends on CPU_SUP_INTEL
+	select IRQ_BYPASS_MANAGER
 	---help---
 	  Provides support for KVM on Intel processors equipped with the VT
 	  extensions.
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 579d83b..02912f1 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -2,6 +2,7 @@ config VFIO_PCI
 	tristate "VFIO support for PCI devices"
 	depends on VFIO && PCI && EVENTFD
 	select VFIO_VIRQFD
+	select IRQ_BYPASS_MANAGER
 	help
 	  Support for the PCI VFIO bus driver.  This is required to make
 	  use of PCI drivers using the VFIO framework.
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 1f577b4..4e053be 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -181,6 +181,7 @@ static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 
 	if (vdev->ctx[0].trigger) {
 		free_irq(pdev->irq, vdev);
+		/* irq_bypass_unregister_producer(); */
 		kfree(vdev->ctx[0].name);
 		eventfd_ctx_put(vdev->ctx[0].trigger);
 		vdev->ctx[0].trigger = NULL;
@@ -214,6 +215,8 @@ static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 		return ret;
 	}
 
+	/* irq_bypass_register_producer(); */
+
 	/*
 	 * INTx disable will stick across the new irq setup,
 	 * disable_irq won't.
@@ -319,6 +322,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 
 	if (vdev->ctx[vector].trigger) {
 		free_irq(irq, vdev->ctx[vector].trigger);
+		/* irq_bypass_unregister_producer(); */
 		kfree(vdev->ctx[vector].name);
 		eventfd_ctx_put(vdev->ctx[vector].trigger);
 		vdev->ctx[vector].trigger = NULL;
@@ -360,6 +364,8 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 		return ret;
 	}
 
+	/* irq_bypass_register_producer(); */
+
 	vdev->ctx[vector].trigger = trigger;
 
 	return 0;
diff --git a/include/linux/irqbypass.h b/include/linux/irqbypass.h
new file mode 100644
index 0000000..718508e
--- /dev/null
+++ b/include/linux/irqbypass.h
@@ -0,0 +1,23 @@
+#ifndef IRQBYPASS_H
+#define IRQBYPASS_H
+
+#include <linux/list.h>
+
+struct irq_bypass_producer {
+	struct list_head node;
+	void *token;
+	/* TBD */
+};
+
+struct irq_bypass_consumer {
+	struct list_head node;
+	void *token;
+	void (*add_producer)(struct irq_bypass_producer *);
+	void (*del_producer)(struct irq_bypass_producer *);
+};
+
+int irq_bypass_register_producer(struct irq_bypass_producer *);
+void irq_bypass_unregister_producer(struct irq_bypass_producer *);
+int irq_bypass_register_consumer(struct irq_bypass_consumer *);
+void irq_bypass_unregister_consumer(struct irq_bypass_consumer *);
+#endif /* IRQBYPASS_H */
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 9a76e3b..4502cdc 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -100,4 +100,7 @@ config SPARSE_IRQ
 
 	  If you don't know what to do here, say N.
 
+config IRQ_BYPASS_MANAGER
+	bool
+
 endmenu
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index d121235..a30ed77 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_GENERIC_PENDING_IRQ) += migration.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
+obj-$(CONFIG_IRQ_BYPASS_MANAGER) += bypass.o
diff --git a/kernel/irq/bypass.c b/kernel/irq/bypass.c
new file mode 100644
index 0000000..5d0f92b
--- /dev/null
+++ b/kernel/irq/bypass.c
@@ -0,0 +1,116 @@
+/*
+ * IRQ offload/bypass manager
+ *
+ * Various virtualization hardware acceleration techniques allow bypassing
+ * or offloading interrupts received from devices around the host kernel.
+ * Posted Interrupts on Intel VT-d systems can allow interrupts to be
+ * received directly by a virtual machine.  ARM IRQ Forwarding can allow
+ * level triggered device interrupts to be de-asserted directly by the VM.
+ * This manager allows interrupt producers and consumers to find each other
+ * to enable this sort of bypass.
+ */
+
+#include <linux/irqbypass.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+
+static LIST_HEAD(producers);
+static LIST_HEAD(consumers);
+static DEFINE_MUTEX(lock);
+
+int irq_bypass_register_producer(struct irq_bypass_producer *producer)
+{
+	struct irq_bypass_producer *tmp;
+	struct irq_bypass_consumer *consumer;
+	int ret = 0;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(tmp, &producers, node) {
+		if (tmp->token == producer->token) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+	}
+
+	list_add(&producer->node, &producers);
+
+	list_for_each_entry(consumer, &consumers, node) {
+		if (consumer->token == producer->token) {
+			consumer->add_producer(producer);
+			break;
+		}
+	}
+unlock:
+	mutex_unlock(&lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(irq_bypass_register_producer);
+
+void irq_bypass_unregister_producer(struct irq_bypass_producer *producer)
+{
+	struct irq_bypass_consumer *consumer;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(consumer, &consumers, node) {
+		if (consumer->token == producer->token) {
+			consumer->del_producer(producer);
+			break;
+		}
+	}
+
+	list_del(&producer->node);
+
+	mutex_unlock(&lock);
+}
+EXPORT_SYMBOL_GPL(irq_bypass_unregister_producer);
+
+int irq_bypass_register_consumer(struct irq_bypass_consumer *consumer)
+{
+	struct irq_bypass_consumer *tmp;
+	struct irq_bypass_producer *producer;
+	int ret = 0;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(tmp, &consumers, node) {
+		if (tmp->token == consumer->token) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+	}
+
+	list_add(&consumer->node, &consumers);
+
+	list_for_each_entry(producer, &producers, node) {
+		if (producer->token == consumer->token) {
+			consumer->add_producer(producer);
+			break;
+		}
+	}
+unlock:
+	mutex_unlock(&lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(irq_bypass_register_consumer);
+
+void irq_bypass_unregister_consumer(struct irq_bypass_consumer *consumer)
+{
+	struct irq_bypass_producer *producer;
+
+	mutex_lock(&lock);
+
+	list_for_each_entry(producer, &producers, node) {
+		if (producer->token == consumer->token) {
+			consumer->del_producer(producer);
+			break;
+		}
+	}
+
+	list_del(&consumer->node);
+
+	mutex_unlock(&lock);
+}
+EXPORT_SYMBOL_GPL(irq_bypass_unregister_consumer);
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 9ff4193..f3da161 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -429,6 +429,8 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
 	 */
 	fdput(f);
 
+	/* irq_bypass_register_consumer(); */
+
 	return 0;
 
 fail:
@@ -528,6 +530,8 @@ kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd *args)
 	struct _irqfd *irqfd, *tmp;
 	struct eventfd_ctx *eventfd;
 
+	/* irq_bypass_unregister_consumer() */
+
 	eventfd = eventfd_ctx_fdget(args->fd);
 	if (IS_ERR(eventfd))
 		return PTR_ERR(eventfd);
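
[Editor's sketch, not part of the patch] For readers following the commented-out
irq_bypass_register_producer()/irq_bypass_unregister_producer() hooks in
vfio_pci_intrs.c above, a minimal producer-side registration against the
irqbypass.h interface proposed here could look like the following.  Everything
except the irq_bypass_* API is hypothetical: struct my_irq_ctx stands in for
vfio_pci_irq_ctx, and the producer struct in this prototype carries nothing
beyond the match token.

#include <linux/eventfd.h>
#include <linux/irqbypass.h>

/* Hypothetical per-interrupt context; stands in for vfio_pci_irq_ctx. */
struct my_irq_ctx {
	struct eventfd_ctx *trigger;		/* eventfd signaled on interrupt */
	struct irq_bypass_producer producer;	/* embedded producer object */
};

static int my_setup_bypass(struct my_irq_ctx *ctx)
{
	/* the eventfd is the match token shared with the consumer side */
	ctx->producer.token = ctx->trigger;

	/* returns -EINVAL if a producer with this token is already registered */
	return irq_bypass_register_producer(&ctx->producer);
}

static void my_teardown_bypass(struct my_irq_ctx *ctx)
{
	irq_bypass_unregister_producer(&ctx->producer);
}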
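
[Editor's sketch, not part of the patch] Likewise, a consumer such as KVM's
irqfd might embed struct irq_bypass_consumer and supply the two callbacks the
manager invokes when a producer with a matching token registers or
unregisters.  Again, only the irq_bypass_* interface comes from the patch;
struct my_irqfd and the callback bodies are illustrative placeholders for what
the series intends (e.g. programming Posted Interrupts in add_producer,
reverting to normal eventfd injection in del_producer).

#include <linux/eventfd.h>
#include <linux/irqbypass.h>

/* Hypothetical irqfd-like consumer; stands in for KVM's struct _irqfd. */
struct my_irqfd {
	struct eventfd_ctx *eventfd;		/* token shared with the producer */
	struct irq_bypass_consumer consumer;	/* embedded consumer object */
};

/* Called by the manager when a producer with a matching token registers. */
static void my_add_producer(struct irq_bypass_producer *prod)
{
	/* e.g. switch this interrupt to Posted Interrupts / IRQ Forwarding */
}

/* Called by the manager before a matching producer unregisters. */
static void my_del_producer(struct irq_bypass_producer *prod)
{
	/* e.g. fall back to the normal eventfd injection path */
}

static int my_irqfd_enable_bypass(struct my_irqfd *irqfd)
{
	irqfd->consumer.token = irqfd->eventfd;
	irqfd->consumer.add_producer = my_add_producer;
	irqfd->consumer.del_producer = my_del_producer;

	/* returns -EINVAL if a consumer with this token is already registered */
	return irq_bypass_register_consumer(&irqfd->consumer);
}

Note that with the prototype as posted, registration order does not matter:
each register path scans the opposite list for a matching token and fires
add_producer() whichever side shows up second.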