From patchwork Wed Jun 28 23:22:04 2017
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 9815633
X-Patchwork-Delegate: bhelgaas@google.com
From: Stephen Hemminger
To: kys@microsoft.com, bhelgaas@google.com
Cc: linux-pci@vger.kernel.org, devel@linuxdriverproject.org,
    Stephen Hemminger
Subject: [PATCH] hv: fix MSI affinity when device requests all possible CPUs
Date: Wed, 28 Jun 2017 16:22:04 -0700
Message-Id: <20170628232204.15227-1-sthemmin@microsoft.com>
X-Mailer: git-send-email 2.11.0

When an Intel 10G device (ixgbevf) is passed through to a Hyper-V guest
with SR-IOV, the driver requests affinity with all possible CPUs
(0-239), even though those CPUs are not online (and never will be).
Because of this, the device is unable to get its MSI interrupts set up
correctly.

This was caused by the change in 4.12 that converts any affinity mask
covering 32 or more CPUs into CPU_AFFINITY_ALL; the host then reports
an error, since the requested affinity is larger than the set of
online CPUs. Previously (up to 4.12-rc1), this worked because only
online CPUs were put in the mask passed to the host.

This patch applies only to 4.12. The driver in linux-next needs a
different fix because of the changes to the PCI host protocol version.

Fixes: 433fcf6b7b31 ("PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs")
Signed-off-by: Stephen Hemminger
---
 drivers/pci/host/pci-hyperv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 84936383e269..3cadfcca3ae9 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -900,10 +900,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
	 * processors because Hyper-V only supports 64 in a guest.
	 */
	affinity = irq_data_get_affinity_mask(data);
+	cpumask_and(affinity, affinity, cpu_online_mask);
+
	if (cpumask_weight(affinity) >= 32) {
		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
	} else {
-		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+		for_each_cpu(cpu, affinity) {
			int_pkt->int_desc.cpu_mask |=
				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
		}
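
To illustrate the effect of the change, here is a minimal userspace
sketch of the mask-then-weight logic (not the driver code itself): a
uint64_t stands in for struct cpumask, the VP number is assumed equal
to the CPU number, and __builtin_popcountll (GCC/Clang) stands in for
cpumask_weight(). It only mirrors the control flow of the fixed
hv_compose_msi_msg():

	#include <stdint.h>
	#include <stdio.h>

	#define CPU_AFFINITY_ALL (~0ULL)

	/* cpumask_weight() analogue */
	static int weight(uint64_t mask)
	{
		return __builtin_popcountll(mask);
	}

	static uint64_t compose_cpu_mask(uint64_t affinity, uint64_t online)
	{
		uint64_t cpu_mask = 0;
		int cpu;

		/* The fix: restrict the requested affinity to online
		 * CPUs *before* deciding on CPU_AFFINITY_ALL
		 * (cpumask_and() analogue). */
		affinity &= online;

		if (weight(affinity) >= 32)
			return CPU_AFFINITY_ALL;

		/* for_each_cpu() analogue over the masked affinity;
		 * VP number == CPU number in this sketch. */
		for (cpu = 0; cpu < 64; cpu++)
			if (affinity & (1ULL << cpu))
				cpu_mask |= 1ULL << cpu;

		return cpu_mask;
	}

	int main(void)
	{
		uint64_t all_possible = ~0ULL; /* driver asks for CPUs 0-63 */
		uint64_t online = 0xffULL;     /* only CPUs 0-7 are online  */

		/* Prints "cpu_mask = 0xff": a precise 8-CPU mask.
		 * Without the affinity &= online step, the weight of the
		 * all-possible mask is >= 32 and CPU_AFFINITY_ALL would be
		 * sent even though only 8 CPUs exist. */
		printf("cpu_mask = 0x%llx\n",
		       (unsigned long long)compose_cpu_mask(all_possible,
							    online));
		return 0;
	}

With the masking applied first, a request for all possible CPUs on a
guest with few online CPUs falls through to the per-CPU loop and the
host receives a mask it can honor.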