From patchwork Wed Jan 4 22:41:06 2017
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 9497969
From: Keith Busch
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
    Jens Axboe, Jens Axboe, Christoph Hellwig, Thomas Gleixner
Cc: Marc Merlin, Keith Busch
Subject: [PATCH 1/6] irq/affinity: Assign all online CPUs to vectors
Date: Wed, 4 Jan 2017 17:41:06 -0500
Message-Id: <1483569671-1462-2-git-send-email-keith.busch@intel.com>
In-Reply-To: <1483569671-1462-1-git-send-email-keith.busch@intel.com>
References: <1483569671-1462-1-git-send-email-keith.busch@intel.com>
List-ID: linux-block@vger.kernel.org

This patch makes sure all online CPUs are assigned to vectors in cases
where the nodes don't have the same number of online CPUs. The
calculation of how many vectors need to be assigned now accounts for
the number of CPUs on a particular node during each round of
assignment, so that every online CPU gets a vector even when the CPUs
don't divide evenly, with the extras calculated accordingly. Since we
attempt to divide the vectors evenly among the nodes, this may still
leave vectors unused if some nodes have fewer CPUs than the nodes set
up before them, but at least every online CPU will be assigned to
something.
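Below is a standalone user-space sketch of the per-round recalculation
described above; the four-node 1/1/8/8 topology and the affv value are
made-up example inputs for illustration, not anything taken from the
patch or a real machine.

#include <stdio.h>

#define MIN(a, b)	((a) < (b) ? (a) : (b))
#define NUM_NODES	4

int main(void)
{
	/* Hypothetical uneven topology: online CPUs per node */
	int node_cpus[NUM_NODES] = { 1, 1, 8, 8 };
	int nodes = NUM_NODES;			/* nodes still to service */
	int affv = 8;				/* vectors available for spreading */
	int curvec = 0;				/* next vector to assign */
	int vecs_per_node = affv / nodes;	/* initial even split */

	for (int n = 0; n < NUM_NODES; n++) {
		/* Never hand a node more vectors than it has CPUs */
		int vecs_to_assign = MIN(vecs_per_node, node_cpus[n]);

		printf("node %d: %d cpus -> vectors %d..%d\n",
		       n, node_cpus[n], curvec, curvec + vecs_to_assign - 1);
		curvec += vecs_to_assign;
		if (curvec >= affv)
			break;
		/* Re-spread the leftover vectors across the remaining nodes */
		vecs_per_node = (affv - curvec) / --nodes;
	}
	return 0;
}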
Signed-off-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Christoph Hellwig
---
 kernel/irq/affinity.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 4544b11..b25dce0 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -96,17 +96,19 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 	/* Spread the vectors per node */
 	vecs_per_node = affv / nodes;
-	/* Account for rounding errors */
-	extra_vecs = affv - (nodes * vecs_per_node);
 
 	for_each_node_mask(n, nodemsk) {
-		int ncpus, v, vecs_to_assign = vecs_per_node;
+		int ncpus, v, vecs_to_assign;
 
 		/* Get the cpus on this node which are in the mask */
 		cpumask_and(nmsk, cpu_online_mask, cpumask_of_node(n));
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
+		vecs_to_assign = min(vecs_per_node, ncpus);
+
+		/* Account for rounding errors */
+		extra_vecs = ncpus - vecs_to_assign;
 
 		for (v = 0; curvec < last_affv && v < vecs_to_assign;
 		     curvec++, v++) {
@@ -123,6 +125,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 		if (curvec >= last_affv)
 			break;
+		vecs_per_node = (affv - curvec) / --nodes;
 	}
 
 done:
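Run against the hypothetical 1/1/8/8 topology, the sketch above prints:

  node 0: 1 cpus -> vectors 0..0
  node 1: 1 cpus -> vectors 1..1
  node 2: 8 cpus -> vectors 2..4
  node 3: 8 cpus -> vectors 5..7

Without the per-round recalculation, each node would have been handed
the initial vecs_per_node of two vectors, so the two single-CPU nodes
would each have consumed a vector they cannot use while the eight-CPU
nodes were left under-served.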