From patchwork Thu Apr 13 08:20:23 2017
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 9678949
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Date: Thu, 13 Apr 2017 10:20:23 +0200 (CEST)
From: Thomas Gleixner
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Sebastian Siewior, Benjamin Herrenschmidt,
    "David S. Miller", Fenghua Yu, Herbert Xu, Lai Jiangshan, Len Brown,
    Michael Ellerman, "Rafael J. Wysocki", Tejun Heo, Tony Luck,
    Viresh Kumar, linux-crypto@vger.kernel.org
Subject: [patch V2 13/13] crypto: n2 - Replace racy task affinity logic
In-Reply-To: <20170412201043.231299672@linutronix.de>
References: <20170412200726.941336635@linutronix.de>
 <20170412201043.231299672@linutronix.de>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)

spu_queue_register() needs to invoke setup functions on a particular CPU.
This is achieved by temporarily setting the affinity of the calling user
space thread to the requested CPU and resetting it to the original affinity
afterwards.

That's racy against CPU hotplug and against concurrent affinity settings for
that thread, and can result in the code executing on the wrong CPU and in the
newly set affinity being overwritten.

Replace this with work_on_cpu_safe(), which guarantees to run the code on the
requested CPU or to fail if that CPU is offline.

Signed-off-by: Thomas Gleixner
Acked-by: Herbert Xu
Cc: "David S. Miller"
Cc: linux-crypto@vger.kernel.org
Acked-by: David S. Miller
---
V2: Fixup build-bot complaints

 drivers/crypto/n2_core.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -65,6 +65,11 @@ struct spu_queue {
 	struct list_head	list;
 };
 
+struct spu_qreg {
+	struct spu_queue	*queue;
+	unsigned long		type;
+};
+
 static struct spu_queue **cpu_to_cwq;
 static struct spu_queue **cpu_to_mau;
 
@@ -1631,31 +1636,27 @@ static void queue_cache_destroy(void)
 	kmem_cache_destroy(queue_cache[HV_NCS_QTYPE_CWQ - 1]);
 }
 
-static int spu_queue_register(struct spu_queue *p, unsigned long q_type)
+static long spu_queue_register_workfn(void *arg)
 {
-	cpumask_var_t old_allowed;
+	struct spu_qreg *qr = arg;
+	struct spu_queue *p = qr->queue;
+	unsigned long q_type = qr->type;
 	unsigned long hv_ret;
 
-	if (cpumask_empty(&p->sharing))
-		return -EINVAL;
-
-	if (!alloc_cpumask_var(&old_allowed, GFP_KERNEL))
-		return -ENOMEM;
-
-	cpumask_copy(old_allowed, &current->cpus_allowed);
-
-	set_cpus_allowed_ptr(current, &p->sharing);
-
 	hv_ret = sun4v_ncs_qconf(q_type, __pa(p->q),
 				 CWQ_NUM_ENTRIES, &p->qhandle);
 	if (!hv_ret)
 		sun4v_ncs_sethead_marker(p->qhandle, 0);
 
-	set_cpus_allowed_ptr(current, old_allowed);
+	return hv_ret ? -EINVAL : 0;
+}
 
-	free_cpumask_var(old_allowed);
+static int spu_queue_register(struct spu_queue *p, unsigned long q_type)
+{
+	int cpu = cpumask_any_and(&p->sharing, cpu_online_mask);
+	struct spu_qreg qr = { .queue = p, .type = q_type };
 
-	return (hv_ret ? -EINVAL : 0);
+	return work_on_cpu_safe(cpu, spu_queue_register_workfn, &qr);
 }
 
 static int spu_queue_setup(struct spu_queue *p)
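
For readers who want to reuse the approach, the conversion follows a generic
pattern: package the per-CPU work into a callback and hand it to
work_on_cpu_safe() instead of rewriting the calling task's affinity. The
sketch below is illustrative only and is not part of the patch; struct
example_arg, example_workfn() and run_setup_on_cpu() are made-up names, while
work_on_cpu_safe(), cpumask_any_and() and cpu_online_mask are the kernel
interfaces used above.

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

struct example_arg {
	unsigned long type;		/* whatever the callback needs */
};

/* Executed on the chosen CPU in a CPU-bound kworker; it may sleep. */
static long example_workfn(void *arg)
{
	struct example_arg *ea = arg;

	/* ... perform the per-CPU setup using ea->type here ... */
	return ea->type ? 0 : -EINVAL;	/* placeholder result */
}

static int run_setup_on_cpu(const struct cpumask *mask, unsigned long type)
{
	struct example_arg ea = { .type = type };
	unsigned int cpu = cpumask_any_and(mask, cpu_online_mask);

	/* No online CPU left in the mask. */
	if (cpu >= nr_cpu_ids)
		return -ENODEV;

	/*
	 * work_on_cpu_safe() takes the CPU hotplug lock, runs the
	 * callback on @cpu via a bound kworker and waits for it to
	 * finish, returning -ENODEV if @cpu is no longer online.
	 */
	return work_on_cpu_safe(cpu, example_workfn, &ea);
}

Unlike the open-coded set_cpus_allowed_ptr() dance, this cannot leak a
modified affinity back to user space and cannot end up running on the wrong
CPU when a hotplug operation or a concurrent sched_setaffinity() call races
with it.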