From patchwork Mon Jul 19 09:57:25 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12385187
From: Ming Lei <ming.lei@redhat.com>
To: Thomas Gleixner, Jens Axboe
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 Christoph Hellwig, Ming Lei
Subject: [RFC PATCH 3/7] genirq/affinity: don't pass irq_affinity_desc array
 to irq_build_affinity_masks
Date: Mon, 19 Jul 2021 17:57:25 +0800
Message-Id: <20210719095729.834332-4-ming.lei@redhat.com>
In-Reply-To: <20210719095729.834332-1-ming.lei@redhat.com>
References: <20210719095729.834332-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Prepare for abstracting irq_build_affinity_masks() into a public helper
that groups all CPUs evenly into several groups. Instead of passing an
irq_affinity_desc array to irq_build_affinity_masks(), return a cpumask
array, storing each assigned group in one element of the array.
This helps to provide a generic interface for grouping all CPUs evenly
from a NUMA and CPU locality viewpoint. The cost is one extra allocation
in irq_build_affinity_masks(), which should be fine since the allocation
uses GFP_KERNEL and irq_build_affinity_masks() is called infrequently.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 34 ++++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index c8da750f393b..98343a6c0aff 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -249,7 +249,7 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 				      cpumask_var_t *node_to_cpumask,
 				      const struct cpumask *cpu_mask,
 				      struct cpumask *nmsk,
-				      struct irq_affinity_desc *masks)
+				      struct cpumask *masks)
 {
 	unsigned int i, n, nodes, cpus_per_vec, extra_vecs, done = 0;
 	unsigned int last_affv = numvecs;
@@ -268,7 +268,7 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
+			cpumask_or(&masks[curvec], &masks[curvec],
 				   node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				curvec = 0;
@@ -320,7 +320,7 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 			 */
 			if (curvec >= last_affv)
 				curvec = 0;
-			irq_spread_init_one(&masks[curvec].mask, nmsk,
+			irq_spread_init_one(&masks[curvec], nmsk,
 					    cpus_per_vec);
 		}
 		done += nv->nvectors;
@@ -334,16 +334,16 @@ static int __irq_build_affinity_masks(unsigned int startvec,
  * 1) spread present CPU on these vectors
  * 2) spread other possible CPUs on these vectors
  */
-static int irq_build_affinity_masks(unsigned int numvecs,
-				    struct irq_affinity_desc *masks)
+static struct cpumask *irq_build_affinity_masks(unsigned int numvecs)
 {
 	unsigned int curvec = 0, nr_present = 0, nr_others = 0;
 	cpumask_var_t *node_to_cpumask;
 	cpumask_var_t nmsk, npresmsk;
 	int ret = -ENOMEM;
+	struct cpumask *masks = NULL;
 
 	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return ret;
+		return NULL;
 
 	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
 		goto fail_nmsk;
@@ -352,6 +352,10 @@ static int irq_build_affinity_masks(unsigned int numvecs,
 	if (!node_to_cpumask)
 		goto fail_npresmsk;
 
+	masks = kcalloc(numvecs, sizeof(*masks), GFP_KERNEL);
+	if (!masks)
+		goto fail_node_to_cpumask;
+
 	/* Stabilize the cpumasks */
 	get_online_cpus();
 	build_node_to_cpumask(node_to_cpumask);
@@ -385,6 +389,7 @@ static int irq_build_affinity_masks(unsigned int numvecs,
 	if (ret >= 0)
 		WARN_ON(nr_present + nr_others < numvecs);
 
+ fail_node_to_cpumask:
 	free_node_to_cpumask(node_to_cpumask);
 
  fail_npresmsk:
@@ -392,7 +397,11 @@ static int irq_build_affinity_masks(unsigned int numvecs,
 
 fail_nmsk:
 	free_cpumask_var(nmsk);
-	return ret < 0 ? ret : 0;
+	if (ret < 0) {
+		kfree(masks);
+		return NULL;
+	}
+	return masks;
 }
 
 static void default_calc_sets(struct irq_affinity *affd, unsigned int affvecs)
@@ -463,13 +472,18 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 	 */
 	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
 		unsigned int this_vecs = affd->set_size[i];
-		int ret;
+		int j;
+		struct cpumask *result = irq_build_affinity_masks(this_vecs);
 
-		ret = irq_build_affinity_masks(this_vecs, &masks[curvec]);
-		if (ret) {
+		if (!result) {
 			kfree(masks);
 			return NULL;
 		}
+
+		for (j = 0; j < this_vecs; j++)
+			cpumask_copy(&masks[curvec + j].mask, &result[j]);
+		kfree(result);
+
 		curvec += this_vecs;
 		usedvecs += this_vecs;
 	}
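
[Editor's note: for illustration, below is a minimal user-space sketch of
the allocate-and-return convention this patch moves to: the builder
allocates one mask per group and returns the array (or NULL on failure),
and the caller copies each group into its own descriptor and frees the
intermediate array. All names here (group_mask_t, build_group_masks,
struct vec_desc) are hypothetical stand-ins for the kernel's cpumask and
irq_affinity_desc types, not part of the patch itself.]

#include <stdio.h>
#include <stdlib.h>

typedef unsigned long group_mask_t;	/* stand-in for struct cpumask */

struct vec_desc {			/* stand-in for irq_affinity_desc */
	group_mask_t mask;
};

/*
 * Builder: allocates and returns one mask per vector, or NULL on
 * failure; mirrors irq_build_affinity_masks() returning a cpumask
 * array instead of filling a caller-provided descriptor array.
 */
static group_mask_t *build_group_masks(unsigned int numvecs)
{
	group_mask_t *masks = calloc(numvecs, sizeof(*masks));
	unsigned int i;

	if (!masks)
		return NULL;
	for (i = 0; i < numvecs; i++)	/* fake "spread": one bit per group */
		masks[i] = 1UL << (i % (8 * sizeof(group_mask_t)));
	return masks;
}

int main(void)
{
	enum { NVECS = 4 };
	struct vec_desc descs[NVECS] = { { 0 } };
	group_mask_t *result = build_group_masks(NVECS);
	unsigned int j;

	if (!result)
		return 1;

	/* Caller copies each group into its descriptor, then frees the
	 * intermediate array, as irq_create_affinity_masks() does above. */
	for (j = 0; j < NVECS; j++)
		descs[j].mask = result[j];
	free(result);

	for (j = 0; j < NVECS; j++)
		printf("vec %u: mask 0x%lx\n", j, descs[j].mask);
	return 0;
}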