From patchwork Thu Oct 3 09:09:06 2019
From: Daniel Wagner <dwagner@suse.de>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Andrew Morton <akpm@linux-foundation.org>,
    Daniel Wagner <dwagner@suse.de>,
    Uladzislau Rezki <urezki@gmail.com>
Subject: [PATCH] mm: vmalloc: Use the vmap_area_lock to protect ne_fit_preload_node
Date: Thu, 3 Oct 2019 11:09:06 +0200
Message-Id: <20191003090906.1261-1-dwagner@suse.de>

Replace the preempt_disable()/preempt_enable() pair around ne_fit_preload_node
with the vmap_area_lock spinlock. Calling spin_lock() with preemption disabled
is illegal for -rt, where spinlocks are sleeping locks. Furthermore, enabling
preemption while still inside the spin_lock()-protected section doesn't really
make sense.
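To illustrate why the old ordering is a problem, here is a minimal user-space
model (purely illustrative; the helpers below only mimic what PREEMPT_RT does
and are not the kernel API): on -rt, spin_lock() maps to a sleeping lock, and
taking a sleeping lock inside a preempt-disabled (atomic) section is a bug.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the -rt sleeping spinlock. */
    static pthread_mutex_t vmap_area_lock = PTHREAD_MUTEX_INITIALIZER;
    /* Stand-in for "preempt_count() != 0", i.e. atomic context. */
    static __thread bool in_atomic;

    static void preempt_disable(void) { in_atomic = true; }
    static void preempt_enable(void)  { in_atomic = false; }

    static void rt_spin_lock(pthread_mutex_t *lock)
    {
        /* On PREEMPT_RT a spinlock may sleep; sleeping in atomic context is a bug. */
        if (in_atomic)
            fprintf(stderr, "BUG: sleeping lock taken with preemption disabled\n");
        pthread_mutex_lock(lock);
    }

    static void rt_spin_unlock(pthread_mutex_t *lock)
    {
        pthread_mutex_unlock(lock);
    }

    int main(void)
    {
        /* Old ordering (before the patch): lock taken inside a preempt-disabled section. */
        preempt_disable();
        rt_spin_lock(&vmap_area_lock);   /* prints the BUG message in this model */
        preempt_enable();                /* preemption re-enabled while the lock is held */
        rt_spin_unlock(&vmap_area_lock);

        /* New ordering (after the patch): the lock alone provides the protection. */
        rt_spin_lock(&vmap_area_lock);
        rt_spin_unlock(&vmap_area_lock);

        return 0;
    }

With the ordering from this patch, the second half of main() runs without
tripping the check: the lock itself protects the per-CPU preload pointer, so
no explicit preempt_disable() section is needed.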
Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 08c134aa7ff3..0d1175673583 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1091,11 +1091,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Even if it fails we do not really care about that. Just proceed
 	 * as it is. "overflow" path will refill the cache we allocate from.
 	 */
-	preempt_disable();
+	spin_lock(&vmap_area_lock);
 	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
+		spin_unlock(&vmap_area_lock);
 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+		spin_lock(&vmap_area_lock);
 
 		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
 			if (pva)
@@ -1103,9 +1103,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		}
 	}
 
-	spin_lock(&vmap_area_lock);
-	preempt_enable();
-
 	/*
 	 * If an allocation fails, the "vend" address is
 	 * returned. Therefore trigger the overflow path.
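Purely as a reading aid (not part of the patch): with the two hunks applied,
the preload block in alloc_vmap_area() reads roughly as follows. The single
source line that falls between the hunks is not shown in the diff and is only
marked here; this is a fragment of kernel code, not something that builds on
its own.

	/*
	 * Even if it fails we do not really care about that. Just proceed
	 * as it is. "overflow" path will refill the cache we allocate from.
	 */
	spin_lock(&vmap_area_lock);
	if (!__this_cpu_read(ne_fit_preload_node)) {
		spin_unlock(&vmap_area_lock);
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
		spin_lock(&vmap_area_lock);

		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
			if (pva)
				; /* the one line between the hunks is elided here */
		}
	}

	/*
	 * If an allocation fails, the "vend" address is
	 * returned. Therefore trigger the overflow path.
	 */

The net effect is that vmap_area_lock now covers the ne_fit_preload_node
access directly instead of a separate preempt_disable() section, which is
what makes this path safe on -rt.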