From patchwork Fri Mar 3 15:16:11 2023
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 13158937
List-Id: Xen developer discussion
Message-ID: <07866eaf6354dd43d87cffb6eebf101716845b66.camel@infradead.org>
Subject: IRQ affinity not working on Xen pci-platform device
From: David Woodhouse
To: Thomas Gleixner, linux-kernel, xen-devel
Date: Fri, 03 Mar 2023 15:16:11 +0000

I added the 'xen_no_vector_callback' kernel parameter a while back
(commit b36b0fe96af) to ensure we could test that more for Linux
guests.

Most of my testing at the time was done with just two CPUs, and I
happened to just test it with four. It fails, because the IRQ isn't
actually affine to CPU0.

I tried making it work anyway (in line with the comment in
platform-pci.c which says that it shouldn't matter if it *runs* on
CPU0 as long as it processes events *for* CPU0). That didn't seem to
work.

If I put the irq_set_affinity() call *before* the request_irq(), that
does actually work. But it's setting affinity on an IRQ it doesn't
even own yet.

Test hacks below; this is testable with today's QEMU master (yay!)
and:

 qemu-system-x86_64 -display none -serial mon:stdio -smp 4 \
    -accel kvm,xen-version=0x4000a,kernel-irqchip=split \
    -kernel ~/git/linux/arch/x86/boot//bzImage \
    -append "console=ttyS0,115200 xen_no_vector_callback"

...

[    0.577173] ACPI: \_SB_.LNKC: Enabled at IRQ 11
[    0.578149] The affinity mask was 0-3
[    0.579081] The affinity mask is 0-3 and the handler is on 2
[    0.580288] The affinity mask is 0 and the handler is on 2

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index c7715f8bd452..e3d159f1eb86 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1712,11 +1712,12 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 
 static int __xen_evtchn_do_upcall(void)
 {
-	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+	struct vcpu_info *vcpu_info = per_cpu(xen_vcpu, 0);
 	int ret = vcpu_info->evtchn_upcall_pending ? IRQ_HANDLED : IRQ_NONE;
-	int cpu = smp_processor_id();
+	int cpu = 0;//smp_processor_id();
 	struct evtchn_loop_ctrl ctrl = { 0 };
 
+	WARN_ON_ONCE(smp_processor_id() != 0);
 	read_lock(&evtchn_rwlock);
 
 	do {
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index fcc819131572..647991211633 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -64,6 +64,16 @@ static uint64_t get_callback_via(struct pci_dev *pdev)
 
 static irqreturn_t do_hvm_evtchn_intr(int irq, void *dev_id)
 {
+	struct pci_dev *pdev = dev_id;
+
+	if (unlikely(smp_processor_id())) {
+		const struct cpumask *mask = irq_get_affinity_mask(pdev->irq);
+		if (mask)
+			printk("The affinity mask is %*pbl and the handler is on %d\n",
+			       cpumask_pr_args(mask), smp_processor_id());
+		return IRQ_NONE;
+	}
+
 	return xen_hvm_evtchn_do_upcall();
 }
 
@@ -132,6 +142,12 @@ static int platform_pci_probe(struct pci_dev *pdev,
 		dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
 		goto out;
 	}
+
+	const struct cpumask *mask = irq_get_affinity_mask(pdev->irq);
+	if (mask)
+		printk("The affinity mask was %*pbl\n",
+		       cpumask_pr_args(mask));
+
 	/*
 	 * It doesn't strictly *have* to run on CPU0 but it sure
 	 * as hell better process the event channel ports delivered