From patchwork Fri Jul 10 01:56:40 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin, x86@kernel.org, Mathieu Desnoyers, Arnd Bergmann,
 Peter Zijlstra, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-mm@kvack.org, Anton Blanchard, Remis Lima Baima
Subject: [RFC PATCH 1/7] asm-generic: add generic MMU versions of mmu context functions
Date: Fri, 10 Jul 2020 11:56:40 +1000
Message-Id: <20200710015646.2020871-2-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

Many of these are no-ops on many architectures, so extend mmu_context.h
to cover MMU and NOMMU, and split the NOMMU bits out to nommu_context.h.

Cc: Arnd Bergmann
Cc: Remis Lima Baima
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/microblaze/include/asm/mmu_context.h |  2 +-
 arch/sh/include/asm/mmu_context.h         |  2 +-
 include/asm-generic/mmu_context.h         | 57 +++++++++++++++++------
 include/asm-generic/nommu_context.h       | 19 ++++++++
 4 files changed, 64 insertions(+), 16 deletions(-)
 create mode 100644 include/asm-generic/nommu_context.h

diff --git a/arch/microblaze/include/asm/mmu_context.h b/arch/microblaze/include/asm/mmu_context.h
index f74f9da07fdc..34004efb3def 100644
--- a/arch/microblaze/include/asm/mmu_context.h
+++ b/arch/microblaze/include/asm/mmu_context.h
@@ -2,5 +2,5 @@
 #ifdef CONFIG_MMU
 # include <asm/mmu_context_mm.h>
 #else
-# include <asm-generic/mmu_context.h>
+# include <asm-generic/nommu_context.h>
 #endif

diff --git a/arch/sh/include/asm/mmu_context.h b/arch/sh/include/asm/mmu_context.h
index 48e67d544d53..9470d17c71c2 100644
--- a/arch/sh/include/asm/mmu_context.h
+++ b/arch/sh/include/asm/mmu_context.h
@@ -134,7 +134,7 @@ static inline void switch_mm(struct mm_struct *prev,
 #define set_TTB(pgd)			do { } while (0)
 #define get_TTB()			(0)
 
-#include <asm-generic/mmu_context.h>
+#include <asm-generic/nommu_context.h>
 
 #endif /* CONFIG_MMU */

diff --git a/include/asm-generic/mmu_context.h b/include/asm-generic/mmu_context.h
index 6be9106fb6fb..86cea80a50df 100644
--- a/include/asm-generic/mmu_context.h
+++ b/include/asm-generic/mmu_context.h
@@ -3,44 +3,73 @@
 #define __ASM_GENERIC_MMU_CONTEXT_H
 
 /*
- * Generic hooks for NOMMU architectures, which do not need to do
- * anything special here.
+ * Generic hooks to implement no-op functionality.
 */
 
-#include <asm-generic/mm_hooks.h>
-
 struct task_struct;
 struct mm_struct;
 
+/*
+ * enter_lazy_tlb - Called when "tsk" is about to enter lazy TLB mode.
+ *
+ * @mm:  the currently active mm context which is becoming lazy
+ * @tsk: task which is entering lazy tlb
+ *
+ * tsk->mm will be NULL
+ */
+#ifndef enter_lazy_tlb
 static inline void enter_lazy_tlb(struct mm_struct *mm,
 			struct task_struct *tsk)
 {
 }
+#endif
 
+/**
+ * init_new_context - Initialize context of a new mm_struct.
+ * @tsk: task struct for the mm
+ * @mm:  the new mm struct
+ */
+#ifndef init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 			struct mm_struct *mm)
 {
 	return 0;
 }
+#endif
 
+/**
+ * destroy_context - Undo init_new_context when the mm is going away
+ * @mm: old mm struct
+ */
+#ifndef destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 }
+#endif
 
-static inline void deactivate_mm(struct task_struct *task,
-			struct mm_struct *mm)
-{
-}
-
-static inline void switch_mm(struct mm_struct *prev,
-			struct mm_struct *next,
-			struct task_struct *tsk)
+/**
+ * activate_mm - called after exec switches the current task to a new mm, to switch to it
+ * @prev_mm: previous mm of this task
+ * @next_mm: new mm
+ */
+#ifndef activate_mm
+static inline void activate_mm(struct mm_struct *prev_mm,
+			struct mm_struct *next_mm)
 {
+	switch_mm(prev_mm, next_mm, current);
 }
+#endif
 
-static inline void activate_mm(struct mm_struct *prev_mm,
-			struct mm_struct *next_mm)
+/**
+ * deactivate_mm - called when an mm is released after exit or exec switches away from it
+ * @tsk: the task
+ * @mm:  the old mm
+ */
+#ifndef deactivate_mm
+static inline void deactivate_mm(struct task_struct *tsk,
+			struct mm_struct *mm)
 {
 }
+#endif
 
 #endif /* __ASM_GENERIC_MMU_CONTEXT_H */

diff --git a/include/asm-generic/nommu_context.h b/include/asm-generic/nommu_context.h
new file mode 100644
index 000000000000..72b8d8b1d81e
--- /dev/null
+++ b/include/asm-generic/nommu_context.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_GENERIC_NOMMU_H
+#define __ASM_GENERIC_NOMMU_H
+
+/*
+ * Generic hooks for NOMMU architectures, which do not need to do
+ * anything special here.
+ */
+
+#include <asm-generic/mm_hooks.h>
+#include <asm-generic/mmu_context.h>
+
+static inline void switch_mm(struct mm_struct *prev,
+			struct mm_struct *next,
+			struct task_struct *tsk)
+{
+}
+
+#endif /* __ASM_GENERIC_NOMMU_H */
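The override protocol this patch introduces is worth spelling out: an
architecture that implements one of these hooks defines a macro with the
same name before including the generic header, and the #ifndef guard then
skips the generic fallback. Below is a minimal standalone sketch of that
pattern (ordinary userspace C added for illustration; the names mirror the
kernel hooks but nothing here is kernel code):

    #include <stdio.h>

    /* "arch header": provide a real implementation of one hook and
     * advertise it with a same-name macro, as the patch does with
     * "#define init_new_context init_new_context". */
    #define init_new_context init_new_context
    static inline int init_new_context(void)
    {
    	return 42;	/* stands in for arch-specific context setup */
    }

    /* "generic header": each fallback is compiled only when the arch
     * did not define the corresponding macro. */
    #ifndef init_new_context
    static inline int init_new_context(void)
    {
    	return 0;
    }
    #endif

    #ifndef destroy_context
    static inline void destroy_context(void)
    {
    }
    #endif

    int main(void)
    {
    	/* Prints 42: the arch override wins over the generic fallback. */
    	printf("init_new_context() = %d\n", init_new_context());
    	destroy_context();	/* resolves to the generic no-op */
    	return 0;
    }

A self-referential macro like this does not recurse during expansion, so
the function keeps its natural name while remaining visible to #ifndef.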
From patchwork Fri Jul 10 01:56:41 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin, x86@kernel.org, Mathieu Desnoyers, Arnd Bergmann,
 Peter Zijlstra, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 2/7] arch: use asm-generic mmu context for no-op implementations
Date: Fri, 10 Jul 2020 11:56:41 +1000
Message-Id: <20200710015646.2020871-3-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

This patch bunches all architectures together; if the general idea is
accepted I will split them out individually. Some architectures could go
further, e.g. by consolidating switch_mm and activate_mm, but I only did
the more obvious conversions.
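To make the conversion pattern concrete before the per-architecture diffs,
here is what a hypothetical minimal architecture header looks like after
this series. "foo" and flush_foo_asid() are invented for illustration and
do not correspond to any file touched by the patch:

    /* Hypothetical arch/foo/include/asm/mmu_context.h */
    #ifndef __ASM_FOO_MMU_CONTEXT_H
    #define __ASM_FOO_MMU_CONTEXT_H

    #include <asm-generic/mm_hooks.h>

    struct mm_struct;
    struct task_struct;

    void flush_foo_asid(struct mm_struct *mm);	/* invented helper */

    /* The one hook this arch really needs. The generic header supplies
     * no switch_mm fallback, so no #define is required for it. */
    static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
    			     struct task_struct *tsk)
    {
    	if (prev != next)
    		flush_foo_asid(next);
    }

    /* enter_lazy_tlb, init_new_context, destroy_context and deactivate_mm
     * become no-ops, and activate_mm falls back to
     * switch_mm(prev, next, current), all supplied by: */
    #include <asm-generic/mmu_context.h>

    #endif /* __ASM_FOO_MMU_CONTEXT_H */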
---
 arch/alpha/include/asm/mmu_context.h         | 12 ++---
 arch/arc/include/asm/mmu_context.h           | 16 +++----
 arch/arm/include/asm/mmu_context.h           | 26 ++---------
 arch/arm64/include/asm/mmu_context.h         |  7 ++-
 arch/csky/include/asm/mmu_context.h          |  8 ++--
 arch/hexagon/include/asm/mmu_context.h       | 33 +++-----------
 arch/ia64/include/asm/mmu_context.h          | 17 ++-----
 arch/m68k/include/asm/mmu_context.h          | 47 ++++----------------
 arch/microblaze/include/asm/mmu_context_mm.h |  8 ++--
 arch/microblaze/include/asm/processor.h      |  3 --
 arch/mips/include/asm/mmu_context.h          | 11 ++---
 arch/nds32/include/asm/mmu_context.h         | 10 +----
 arch/nios2/include/asm/mmu_context.h         | 21 ++-------
 arch/nios2/mm/mmu_context.c                  |  1 +
 arch/openrisc/include/asm/mmu_context.h      |  8 ++--
 arch/openrisc/mm/tlb.c                       |  2 +
 arch/parisc/include/asm/mmu_context.h        | 12 ++---
 arch/powerpc/include/asm/mmu_context.h       | 22 +++------
 arch/riscv/include/asm/mmu_context.h         | 22 +--------
 arch/s390/include/asm/mmu_context.h          |  9 ++--
 arch/sh/include/asm/mmu_context.h            |  5 +--
 arch/sh/include/asm/mmu_context_32.h         |  9 ----
 arch/sparc/include/asm/mmu_context_32.h      | 10 ++---
 arch/sparc/include/asm/mmu_context_64.h      | 10 ++---
 arch/um/include/asm/mmu_context.h            | 12 +++--
 arch/unicore32/include/asm/mmu_context.h     | 24 ++--------
 arch/x86/include/asm/mmu_context.h           |  6 +++
 arch/xtensa/include/asm/mmu_context.h        | 11 ++---
 arch/xtensa/include/asm/nommu_context.h      | 26 +----------
 29 files changed, 106 insertions(+), 302 deletions(-)

diff --git a/arch/alpha/include/asm/mmu_context.h b/arch/alpha/include/asm/mmu_context.h
index 6d7d9bc1b4b8..4eea7c616992 100644
--- a/arch/alpha/include/asm/mmu_context.h
+++ b/arch/alpha/include/asm/mmu_context.h
@@ -214,8 +214,6 @@ ev4_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
 	tbiap();
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
 #ifdef CONFIG_ALPHA_GENERIC
 # define switch_mm(a,b,c)	alpha_mv.mv_switch_mm((a),(b),(c))
 # define activate_mm(x,y)	alpha_mv.mv_activate_mm((x),(y))
@@ -229,6 +227,7 @@ ev4_activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
 # endif
 #endif
 
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -242,12 +241,7 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	return 0;
 }
 
-extern inline void
-destroy_context(struct mm_struct *mm)
-{
-	/* Nothing to do.  */
-}
-
+#define enter_lazy_tlb enter_lazy_tlb
 static inline void
 enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
@@ -255,6 +249,8 @@ enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 		= ((unsigned long)mm->pgd - IDENT_ADDR) >> PAGE_SHIFT;
 }
 
+#include <asm-generic/mmu_context.h>
+
 #ifdef __MMU_EXTERN_INLINE
 #undef __EXTERN_INLINE
 #undef __MMU_EXTERN_INLINE

diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h
index 3a5e6a5b9ed6..586d31902a99 100644
--- a/arch/arc/include/asm/mmu_context.h
+++ b/arch/arc/include/asm/mmu_context.h
@@ -102,6 +102,7 @@ static inline void get_new_mmu_context(struct mm_struct *mm)
  * Initialize the context related info for a new mm_struct
  * instance.
  */
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -113,6 +114,7 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	return 0;
 }
 
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	unsigned long flags;
@@ -153,13 +155,12 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 }
 
 /*
- * Called at the time of execve() to get a new ASID
- * Note the subtlety here: get_new_mmu_context() behaves differently here
- * vs. in switch_mm(). Here it always returns a new ASID, because mm has
- * an unallocated "initial" value, while in latter, it moves to a new ASID,
- * only if it was unallocated
+ * activate_mm defaults to switch_mm and is called at the time of execve() to
+ * get a new ASID Note the subtlety here: get_new_mmu_context() behaves
+ * differently here vs. in switch_mm(). Here it always returns a new ASID,
+ * because mm has an unallocated "initial" value, while in latter, it moves to
+ * a new ASID, only if it was unallocated
 */
-#define activate_mm(prev, next)		switch_mm(prev, next, NULL)
 
 /* it seemed that deactivate_mm( ) is a reasonable place to do book-keeping
 * for retiring-mm. However destroy_context( ) still needs to do that because
@@ -168,8 +169,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 * there is a good chance that task gets sched-out/in, making it's ASID valid
 * again (this teased me for a whole day).
 */
-#define deactivate_mm(tsk, mm)   do { } while (0)
-#define enter_lazy_tlb(mm, tsk)
+
+#include <asm-generic/mmu_context.h>
 
 #endif /* __ASM_ARC_MMU_CONTEXT_H */

diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index f99ed524fe41..84e58956fcab 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -26,6 +26,8 @@ void __check_vmalloc_seq(struct mm_struct *mm);
 #ifdef CONFIG_CPU_HAS_ASID
 
 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
+
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -92,32 +94,10 @@ static inline void finish_arch_post_lock_switch(void)
 
 #endif	/* CONFIG_MMU */
 
-static inline int
-init_new_context(struct task_struct *tsk, struct mm_struct *mm)
-{
-	return 0;
-}
-
-
 #endif	/* CONFIG_CPU_HAS_ASID */
 
-#define destroy_context(mm)		do { } while(0)
 #define activate_mm(prev,next)		switch_mm(prev, next, NULL)
 
-/*
- * This is called when "tsk" is about to enter lazy TLB mode.
- *
- * mm:  describes the currently active mm context
- * tsk: task which is entering lazy tlb
- * cpu: cpu number which is entering lazy tlb
- *
- * tsk->mm will be NULL
- */
-static inline void
-enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /*
  * This is the actual mm switch as far as the scheduler
  * is concerned.  No registers are touched.  We avoid
@@ -149,6 +129,6 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 #endif
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
+#include <asm-generic/mmu_context.h>
 
 #endif

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index b0bd9b55594c..0f5e351f586a 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -174,7 +174,6 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
  * Setting a reserved TTBR0 or EPD0 would work, but it all gets ugly when you
  * take CPU migration into account.
  */
-#define destroy_context(mm)		do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
 #define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
@@ -202,6 +201,7 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
 }
 #endif
 
+#define enter_lazy_tlb enter_lazy_tlb
 static inline void
 enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
@@ -244,12 +244,11 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	update_saved_ttbr0(tsk, next);
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-#define activate_mm(prev,next)	switch_mm(prev, next, current)
-
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* !__ASSEMBLY__ */
 #endif /* !__ASM_MMU_CONTEXT_H */

diff --git a/arch/csky/include/asm/mmu_context.h b/arch/csky/include/asm/mmu_context.h
index abdf1f1cb6ec..b227d29393a8 100644
--- a/arch/csky/include/asm/mmu_context.h
+++ b/arch/csky/include/asm/mmu_context.h
@@ -24,11 +24,6 @@
 #define cpu_asid(mm)		(atomic64_read(&mm->context.asid) & ASID_MASK)
 
 #define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.asid, 0); 0; })
-#define activate_mm(prev,next)		switch_mm(prev, next, current)
-
-#define destroy_context(mm)		do {} while (0)
-#define enter_lazy_tlb(mm, tsk)		do {} while (0)
-#define deactivate_mm(tsk, mm)		do {} while (0)
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
@@ -46,4 +41,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 
 	flush_icache_deferred(next);
 }
+
+#include <asm-generic/mmu_context.h>
+
 #endif /* __ASM_CSKY_MMU_CONTEXT_H */

diff --git a/arch/hexagon/include/asm/mmu_context.h b/arch/hexagon/include/asm/mmu_context.h
index cdc4adc0300a..81947764c47d 100644
--- a/arch/hexagon/include/asm/mmu_context.h
+++ b/arch/hexagon/include/asm/mmu_context.h
@@ -15,39 +15,13 @@
 #include <asm/pgalloc.h>
 #include <asm/mem-layout.h>
 
-static inline void destroy_context(struct mm_struct *mm)
-{
-}
-
 /*
  * VM port hides all TLB management, so "lazy TLB" isn't very
  * meaningful.  Even for ports to architectures with visble TLBs,
  * this is almost invariably a null function.
+ *
+ * mm->context is set up by pgd_alloc, so no init_new_context required.
 */
-static inline void enter_lazy_tlb(struct mm_struct *mm,
-	struct task_struct *tsk)
-{
-}
-
-/*
- * Architecture-specific actions, if any, for memory map deactivation.
- */
-static inline void deactivate_mm(struct task_struct *tsk,
-	struct mm_struct *mm)
-{
-}
-
-/**
- * init_new_context - initialize context related info for new mm_struct instance
- * @tsk: pointer to a task struct
- * @mm: pointer to a new mm struct
- */
-static inline int init_new_context(struct task_struct *tsk,
-					struct mm_struct *mm)
-{
-	/* mm->context is set up by pgd_alloc */
-	return 0;
-}
 
 /*
  * Switch active mm context
@@ -74,6 +48,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 /*
  * Activate new memory map for task
 */
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 {
 	unsigned long flags;
@@ -86,4 +61,6 @@ static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 /* Generic hooks for arch_dup_mmap and arch_exit_mmap */
 #include <asm-generic/mm_hooks.h>
 
+#include <asm-generic/mmu_context.h>
+
 #endif

diff --git a/arch/ia64/include/asm/mmu_context.h b/arch/ia64/include/asm/mmu_context.h
index 2da0e2eb036b..87a0d5bc11ef 100644
--- a/arch/ia64/include/asm/mmu_context.h
+++ b/arch/ia64/include/asm/mmu_context.h
@@ -49,11 +49,6 @@ DECLARE_PER_CPU(u8, ia64_need_tlb_flush);
 extern void mmu_context_init (void);
 extern void wrap_mmu_context (struct mm_struct *mm);
 
-static inline void
-enter_lazy_tlb (struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /*
  * When the context counter wraps around all TLBs need to be flushed because
  * an old context number might have been reused. This is signalled by the
@@ -116,6 +111,7 @@ get_mmu_context (struct mm_struct *mm)
  * Initialize context number to some sane value.  MM is guaranteed to be a
  * brand-new address-space, so no TLB flushing is needed, ever.
 */
+#define init_new_context init_new_context
 static inline int
 init_new_context (struct task_struct *p, struct mm_struct *mm)
 {
@@ -123,12 +119,6 @@ init_new_context (struct task_struct *p, struct mm_struct *mm)
 	return 0;
 }
 
-static inline void
-destroy_context (struct mm_struct *mm)
-{
-	/* Nothing to do. */
-}
-
 static inline void
 reload_context (nv_mm_context_t context)
 {
@@ -178,11 +168,10 @@ activate_context (struct mm_struct *mm)
 	} while (unlikely(context != mm->context));
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
 /*
  * Switch from address space PREV to address space NEXT.
 */
+#define activate_mm activate_mm
 static inline void
 activate_mm (struct mm_struct *prev, struct mm_struct *next)
 {
@@ -196,5 +185,7 @@ activate_mm (struct mm_struct *prev, struct mm_struct *next)
 
 #define switch_mm(prev_mm,next_mm,next_task)	activate_mm(prev_mm, next_mm)
 
+#include <asm-generic/mmu_context.h>
+
 # endif /* ! __ASSEMBLY__ */
 #endif /* _ASM_IA64_MMU_CONTEXT_H */

diff --git a/arch/m68k/include/asm/mmu_context.h b/arch/m68k/include/asm/mmu_context.h
index cac9f289d1f6..56ae27322178 100644
--- a/arch/m68k/include/asm/mmu_context.h
+++ b/arch/m68k/include/asm/mmu_context.h
@@ -5,10 +5,6 @@
 #include <asm-generic/mm_hooks.h>
 #include <linux/mm_types.h>
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 #ifdef CONFIG_MMU
 
 #if defined(CONFIG_COLDFIRE)
@@ -58,6 +54,7 @@ static inline void get_mmu_context(struct mm_struct *mm)
 /*
  * We're finished using the context for an address space.
 */
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	if (mm->context != NO_CONTEXT) {
@@ -79,19 +76,6 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	set_context(tsk->mm->context, next->pgd);
 }
 
-/*
- * After we have set current->mm to a new value, this activates
- * the context for the new mm so we see the new mappings.
- */
-static inline void activate_mm(struct mm_struct *active_mm,
-	struct mm_struct *mm)
-{
-	get_mmu_context(mm);
-	set_context(mm->context, mm->pgd);
-}
-
-#define deactivate_mm(tsk, mm) do { } while (0)
-
 #define prepare_arch_switch(next) load_ksp_mmu(next)
 
 static inline void load_ksp_mmu(struct task_struct *task)
@@ -176,6 +160,7 @@ extern unsigned long get_free_context(struct mm_struct *mm);
 extern void clear_context(unsigned long context);
 
 /* set the context for a new task to unmapped */
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 				   struct mm_struct *mm)
 {
@@ -210,8 +195,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	activate_context(tsk->mm);
 }
 
-#define deactivate_mm(tsk, mm) do { } while (0)
-
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev_mm,
 			       struct mm_struct *next_mm)
 {
@@ -224,6 +208,7 @@ static inline void activate_mm(struct mm_struct *prev_mm,
 #include <asm/setup.h>
 #include <asm/page.h>
 
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 				   struct mm_struct *mm)
 {
@@ -231,8 +216,6 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
-#define destroy_context(mm) do { } while(0)
-
 static inline void switch_mm_0230(struct mm_struct *mm)
 {
 	unsigned long crp[2] = {
@@ -300,8 +283,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, str
 	}
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev_mm,
 			       struct mm_struct *next_mm)
 {
@@ -315,24 +297,11 @@ static inline void activate_mm(struct mm_struct *prev_mm,
 
 #endif
 
-#else /* !CONFIG_MMU */
+#include <asm-generic/mmu_context.h>
 
-static inline int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
-{
-	return 0;
-}
-
-
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk)
-{
-}
-
-#define destroy_context(mm)	do { } while (0)
-#define deactivate_mm(tsk,mm)	do { } while (0)
+#else /* !CONFIG_MMU */
 
-static inline void activate_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
-{
-}
+#include <asm-generic/nommu_context.h>
 
 #endif /* CONFIG_MMU */
 #endif /* __M68K_MMU_CONTEXT_H */

diff --git a/arch/microblaze/include/asm/mmu_context_mm.h b/arch/microblaze/include/asm/mmu_context_mm.h
index a1c7dd48454c..c2c77f708455 100644
--- a/arch/microblaze/include/asm/mmu_context_mm.h
+++ b/arch/microblaze/include/asm/mmu_context_mm.h
@@ -33,10 +33,6 @@
  * to represent all kernel pages as shared among all contexts.
 */
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 # define NO_CONTEXT	256
 # define LAST_CONTEXT	255
 # define FIRST_CONTEXT	1
@@ -105,6 +101,7 @@ static inline void get_mmu_context(struct mm_struct *mm)
 /*
  * We're finished using the context for an address space.
 */
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	if (mm->context != NO_CONTEXT) {
@@ -126,6 +123,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * After we have set current->mm to a new value, this activates
  * the context for the new mm so we see the new mappings.
 */
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *active_mm,
 			struct mm_struct *mm)
 {
@@ -136,5 +134,7 @@ static inline void activate_mm(struct mm_struct *active_mm,
 
 extern void mmu_context_init(void);
 
+#include <asm-generic/mmu_context.h>
+
 # endif /* __KERNEL__ */
 #endif /* _ASM_MICROBLAZE_MMU_CONTEXT_H */

diff --git a/arch/microblaze/include/asm/processor.h b/arch/microblaze/include/asm/processor.h
index 1ff5a82b76b6..616211871a6e 100644
--- a/arch/microblaze/include/asm/processor.h
+++ b/arch/microblaze/include/asm/processor.h
@@ -122,9 +122,6 @@ unsigned long get_wchan(struct task_struct *p);
 #  define KSTK_EIP(task)	(task_pc(task))
 #  define KSTK_ESP(task)	(task_sp(task))
 
-/* FIXME */
-#  define deactivate_mm(tsk, mm)	do { } while (0)
-
 #  define STACK_TOP	TASK_SIZE
 #  define STACK_TOP_MAX	STACK_TOP

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index cddead91acd4..ed9f2d748f63 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -124,10 +124,6 @@ static inline void set_cpu_context(unsigned int cpu,
 #define cpu_asid(cpu, mm) \
 	(cpu_context((cpu), (mm)) & cpu_asid_mask(&cpu_data[cpu]))
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 extern void get_new_mmu_context(struct mm_struct *mm);
 extern void check_mmu_context(struct mm_struct *mm);
 extern void check_switch_mmu_context(struct mm_struct *mm);
@@ -136,6 +132,7 @@ extern void check_switch_mmu_context(struct mm_struct *mm);
 * Initialize the context related info for a new mm_struct
 * instance.
 */
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -180,14 +177,12 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 * Destroy context related info for an mm_struct that is about
 * to be put to rest.
 */
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	dsemul_mm_cleanup(mm);
 }
 
-#define activate_mm(prev, next)	switch_mm(prev, next, current)
-#define deactivate_mm(tsk, mm)	do { } while (0)
-
 static inline void
 drop_mmu_context(struct mm_struct *mm)
 {
@@ -237,4 +232,6 @@ drop_mmu_context(struct mm_struct *mm)
 	local_irq_restore(flags);
 }
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* _ASM_MMU_CONTEXT_H */

diff --git a/arch/nds32/include/asm/mmu_context.h b/arch/nds32/include/asm/mmu_context.h
index b8fd3d189fdc..c651bc8cacdc 100644
--- a/arch/nds32/include/asm/mmu_context.h
+++ b/arch/nds32/include/asm/mmu_context.h
@@ -9,6 +9,7 @@
 #include <asm/proc-fns.h>
 #include <asm-generic/mm_hooks.h>
 
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -16,8 +17,6 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	return 0;
 }
 
-#define destroy_context(mm)	do { } while(0)
-
 #define CID_BITS	9
 extern spinlock_t cid_lock;
 extern unsigned int cpu_last_cid;
@@ -47,10 +46,6 @@ static inline void check_context(struct mm_struct *mm)
 		__new_context(mm);
 }
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 			     struct task_struct *tsk)
 {
@@ -62,7 +57,6 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	}
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-#define activate_mm(prev,next)	switch_mm(prev, next, NULL)
+#include <asm-generic/mmu_context.h>
 
 #endif

diff --git a/arch/nios2/include/asm/mmu_context.h b/arch/nios2/include/asm/mmu_context.h
index 78ab3dacf579..4f99ed09b5a7 100644
--- a/arch/nios2/include/asm/mmu_context.h
+++ b/arch/nios2/include/asm/mmu_context.h
@@ -26,16 +26,13 @@ extern unsigned long get_pid_from_context(mm_context_t *ctx);
 */
 extern pgd_t *pgd_current;
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /*
 * Initialize the context related info for a new mm_struct instance.
 *
 * Set all new contexts to 0, that way the generation will never match
 * the currently running generation when this context is switched in.
 */
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 					struct mm_struct *mm)
 {
@@ -43,26 +40,16 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
-/*
- * Destroy context related info for an mm_struct that is about
- * to be put to rest.
- */
-static inline void destroy_context(struct mm_struct *mm)
-{
-}
-
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		struct task_struct *tsk);
 
-static inline void deactivate_mm(struct task_struct *tsk,
-				struct mm_struct *mm)
-{
-}
-
 /*
 * After we have set current->mm to a new value, this activates
 * the context for the new mm so we see the new mappings.
 */
+#define activate_mm activate_mm
 void activate_mm(struct mm_struct *prev, struct mm_struct *next);
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* _ASM_NIOS2_MMU_CONTEXT_H */

diff --git a/arch/nios2/mm/mmu_context.c b/arch/nios2/mm/mmu_context.c
index 45d6b9c58d67..d77aa542deb2 100644
--- a/arch/nios2/mm/mmu_context.c
+++ b/arch/nios2/mm/mmu_context.c
@@ -103,6 +103,7 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 * After we have set current->mm to a new value, this activates
 * the context for the new mm so we see the new mappings.
 */
+#define activate_mm activate_mm
 void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 {
 	next->context = get_new_context();

diff --git a/arch/openrisc/include/asm/mmu_context.h b/arch/openrisc/include/asm/mmu_context.h
index ced577542e29..a6702384c77d 100644
--- a/arch/openrisc/include/asm/mmu_context.h
+++ b/arch/openrisc/include/asm/mmu_context.h
@@ -17,13 +17,13 @@
 
 #include <asm-generic/mm_hooks.h>
 
+#define init_new_context init_new_context
 extern int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
+#define destroy_context destroy_context
 extern void destroy_context(struct mm_struct *mm);
 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		      struct task_struct *tsk);
 
-#define deactivate_mm(tsk, mm)	do { } while (0)
-
 #define activate_mm(prev, next) switch_mm((prev), (next), NULL)
 
 /* current active pgd - this is similar to other processors pgd
@@ -32,8 +32,6 @@ extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 
 extern volatile pgd_t *current_pgd[]; /* defined in arch/openrisc/mm/fault.c */
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
+#include <asm-generic/mmu_context.h>
 
 #endif

diff --git a/arch/openrisc/mm/tlb.c b/arch/openrisc/mm/tlb.c
index 4b680aed8f5f..821aab4cf3be 100644
--- a/arch/openrisc/mm/tlb.c
+++ b/arch/openrisc/mm/tlb.c
@@ -159,6 +159,7 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 * instance.
 */
 
+#define init_new_context init_new_context
 int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
 	mm->context = NO_CONTEXT;
@@ -170,6 +171,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 * drops it.
 */
 
+#define destroy_context destroy_context
 void destroy_context(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);

diff --git a/arch/parisc/include/asm/mmu_context.h b/arch/parisc/include/asm/mmu_context.h
index 07b89c74abeb..71f8a3679b83 100644
--- a/arch/parisc/include/asm/mmu_context.h
+++ b/arch/parisc/include/asm/mmu_context.h
@@ -8,16 +8,13 @@
 #include <linux/atomic.h>
 #include <asm-generic/mm_hooks.h>
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /* on PA-RISC, we actually have enough contexts to justify an allocator
 * for them.  prumpf */
 
 extern unsigned long alloc_sid(void);
 extern void free_sid(unsigned long);
 
+#define init_new_context init_new_context
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -27,6 +24,7 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	return 0;
 }
 
+#define destroy_context destroy_context
 static inline void
 destroy_context(struct mm_struct *mm)
 {
@@ -72,8 +70,7 @@ static inline void switch_mm(struct mm_struct *prev,
 }
 #define switch_mm_irqs_off switch_mm_irqs_off
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 {
 	/*
@@ -91,4 +88,7 @@ static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 
 	switch_mm(prev,next,current);
 }
+
+#include <asm-generic/mmu_context.h>
+
 #endif

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 1a474f6b1992..242bd987247b 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -14,7 +14,9 @@
 /*
 * Most if the context management is out of line
 */
+#define init_new_context init_new_context
 extern int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
+#define destroy_context destroy_context
 extern void destroy_context(struct mm_struct *mm);
 #ifdef CONFIG_SPAPR_TCE_IOMMU
 struct mm_iommu_table_group_mem_t;
@@ -237,27 +239,15 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 }
 #define switch_mm_irqs_off switch_mm_irqs_off
 
-
-#define deactivate_mm(tsk,mm)			do { } while (0)
-
-/*
- * After we have set current->mm to a new value, this activates
- * the context for the new mm so we see the new mappings.
- */
-static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
-{
-	switch_mm(prev, next, current);
-}
-
-/* We don't currently use enter_lazy_tlb() for anything */
+#ifdef CONFIG_PPC_BOOK3E_64
+#define enter_lazy_tlb enter_lazy_tlb
 static inline void enter_lazy_tlb(struct mm_struct *mm,
 				  struct task_struct *tsk)
 {
 	/* 64-bit Book3E keeps track of current PGD in the PACA */
-#ifdef CONFIG_PPC_BOOK3E_64
 	get_paca()->pgd = NULL;
-#endif
 }
+#endif
 
 extern void arch_exit_mmap(struct mm_struct *mm);
 
@@ -300,5 +290,7 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
 	return 0;
 }
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_POWERPC_MMU_CONTEXT_H */

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index 67c463812e2d..250defa06f3a 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -13,34 +13,16 @@
 #include <linux/mm.h>
 #include <linux/sched.h>
 
-static inline void enter_lazy_tlb(struct mm_struct *mm,
-	struct task_struct *task)
-{
-}
-
-/* Initialize context-related info for a new mm_struct */
-static inline int init_new_context(struct task_struct *task,
-	struct mm_struct *mm)
-{
-	return 0;
-}
-
-static inline void destroy_context(struct mm_struct *mm)
-{
-}
-
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	struct task_struct *task);
 
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev,
 			       struct mm_struct *next)
 {
 	switch_mm(prev, next, NULL);
 }
 
-static inline void deactivate_mm(struct task_struct *task,
-	struct mm_struct *mm)
-{
-}
+#include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_RISCV_MMU_CONTEXT_H */

diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index c9f3d8a52756..66f9cf0a07e3 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -15,6 +15,7 @@
 #include <asm/ctl_reg.h>
 #include <asm-generic/mm_hooks.h>
 
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 				   struct mm_struct *mm)
 {
@@ -69,8 +70,6 @@ static inline int init_new_context(struct task_struct *tsk,
 	return 0;
 }
 
-#define destroy_context(mm)             do { } while (0)
-
 static inline void set_user_asce(struct mm_struct *mm)
 {
 	S390_lowcore.user_asce = mm->context.asce;
@@ -125,9 +124,7 @@ static inline void finish_arch_post_lock_switch(void)
 	set_fs(current->thread.mm_segment);
 }
 
-#define enter_lazy_tlb(mm,tsk)	do { } while (0)
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev,
                                struct mm_struct *next)
 {
@@ -136,4 +133,6 @@ static inline void activate_mm(struct mm_struct *prev,
 	set_user_asce(next);
 }
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* __S390_MMU_CONTEXT_H */

diff --git a/arch/sh/include/asm/mmu_context.h b/arch/sh/include/asm/mmu_context.h
index 9470d17c71c2..ce40147d4a7d 100644
--- a/arch/sh/include/asm/mmu_context.h
+++ b/arch/sh/include/asm/mmu_context.h
@@ -85,6 +85,7 @@ static inline void get_mmu_context(struct mm_struct *mm, unsigned int cpu)
 * Initialize the context related info for a new mm_struct
 * instance.
 */
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 				   struct mm_struct *mm)
 {
@@ -121,9 +122,7 @@ static inline void switch_mm(struct mm_struct *prev,
 		activate_context(next, cpu);
 }
 
-#define activate_mm(prev, next)		switch_mm((prev),(next),NULL)
-#define deactivate_mm(tsk,mm)		do { } while (0)
-#define enter_lazy_tlb(mm,tsk)		do { } while (0)
+#include <asm-generic/mmu_context.h>
 
 #else

diff --git a/arch/sh/include/asm/mmu_context_32.h b/arch/sh/include/asm/mmu_context_32.h
index 71bf12ef1f65..bc5034fa6249 100644
--- a/arch/sh/include/asm/mmu_context_32.h
+++ b/arch/sh/include/asm/mmu_context_32.h
@@ -2,15 +2,6 @@
 #ifndef __ASM_SH_MMU_CONTEXT_32_H
 #define __ASM_SH_MMU_CONTEXT_32_H
 
-/*
- * Destroy context related info for an mm_struct that is about
- * to be put to rest.
- */
-static inline void destroy_context(struct mm_struct *mm)
-{
-	/* Do nothing */
-}
-
 #ifdef CONFIG_CPU_HAS_PTEAEX
 static inline void set_asid(unsigned long asid)
 {

diff --git a/arch/sparc/include/asm/mmu_context_32.h b/arch/sparc/include/asm/mmu_context_32.h
index 7ddcb8badf70..509043f81560 100644
--- a/arch/sparc/include/asm/mmu_context_32.h
+++ b/arch/sparc/include/asm/mmu_context_32.h
@@ -6,13 +6,10 @@
 
 #include <asm-generic/mm_hooks.h>
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /* Initialize a new mmu context.  This is invoked when a new
 * address space instance (unique or shared) is instantiated.
 */
+#define init_new_context init_new_context
 int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
 
 /* Destroy a dead context.  This occurs when mmput drops the
@@ -20,17 +17,18 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
 * all the page tables have been flushed.  Our job is to destroy
 * any remaining processor-specific state.
 */
+#define destroy_context destroy_context
 void destroy_context(struct mm_struct *mm);
 
 /* Switch the current MM context. */
 void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm,
 	       struct task_struct *tsk);
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
 /* Activate a new MM instance for the current task. */
 #define activate_mm(active_mm, mm) switch_mm((active_mm), (mm), NULL)
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* !(__ASSEMBLY__) */
 #endif /* !(__SPARC_MMU_CONTEXT_H) */

diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 312fcee8df2b..7a8380c63aab 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -16,17 +16,16 @@
 #include <asm-generic/mm_hooks.h>
 #include <asm/percpu.h>
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 extern spinlock_t ctx_alloc_lock;
 extern unsigned long tlb_context_cache;
 extern unsigned long mmu_context_bmap[];
 
 DECLARE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm);
 void get_new_mmu_context(struct mm_struct *mm);
+
+#define init_new_context init_new_context
 int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
+#define destroy_context destroy_context
 void destroy_context(struct mm_struct *mm);
 
 void __tsb_context_switch(unsigned long pgd_pa,
@@ -136,7 +135,6 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
 	spin_unlock_irqrestore(&mm->context.lock, flags);
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
 
 #define __HAVE_ARCH_START_CONTEXT_SWITCH
@@ -187,6 +185,8 @@ static inline void finish_arch_post_lock_switch(void)
 	}
 }
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* !(__ASSEMBLY__) */
 #endif /* !(__SPARC64_MMU_CONTEXT_H) */

diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 17ddd4edf875..f8a100770691 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -37,10 +37,9 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 * end asm-generic/mm_hooks.h functions
 */
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-
 extern void force_flush_all(void);
 
+#define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 {
 	/*
@@ -66,13 +65,12 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	}
 }
 
-static inline void enter_lazy_tlb(struct mm_struct *mm,
-		struct task_struct *tsk)
-{
-}
-
+#define init_new_context init_new_context
 extern int init_new_context(struct task_struct *task, struct mm_struct *mm);
 
+#define destroy_context destroy_context
 extern void destroy_context(struct mm_struct *mm);
 
+#include <asm-generic/mmu_context.h>
+
 #endif

diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
index 388c0c811c68..e1751cb5439c 100644
--- a/arch/unicore32/include/asm/mmu_context.h
+++ b/arch/unicore32/include/asm/mmu_context.h
@@ -18,24 +18,6 @@
 #include <asm/cacheflush.h>
 #include <asm/cpu-single.h>
 
-#define init_new_context(tsk, mm)	0
-
-#define destroy_context(mm)		do { } while (0)
-
-/*
- * This is called when "tsk" is about to enter lazy TLB mode.
- *
- * mm:  describes the currently active mm context
- * tsk: task which is entering lazy tlb
- * cpu: cpu number which is entering lazy tlb
- *
- * tsk->mm will be NULL
- */
-static inline void
-enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
 /*
 * This is the actual mm switch as far as the scheduler
 * is concerned.  No registers are touched.  We avoid
@@ -52,9 +34,6 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		cpu_switch_mm(next->pgd, next);
 }
 
-#define deactivate_mm(tsk,mm)	do { } while (0)
-#define activate_mm(prev,next)	switch_mm(prev, next, NULL)
-
 /*
 * We are inserting a "fake" vma for the user-accessible vector page so
 * gdb and friends can get to it through ptrace and /proc/<pid>/mem.
@@ -95,4 +74,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 	/* by default, allow everything */
 	return true;
 }
+
+#include <asm-generic/mmu_context.h>
+
 #endif

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 47562147e70b..255750548433 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -92,12 +92,14 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 }
 #endif
 
+#define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
 
 /*
 * Init a new mm.  Used on mm copies, like at fork()
 * and on mm's that are brand-new, like at execve().
 */
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 				   struct mm_struct *mm)
 {
@@ -117,6 +119,8 @@ static inline int init_new_context(struct task_struct *tsk,
 	init_new_context_ldt(mm);
 	return 0;
 }
+
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	destroy_context_ldt(mm);
@@ -215,4 +219,6 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 
 unsigned long __get_current_cr3_fast(void);
 
+#include <asm-generic/mmu_context.h>
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */

diff --git a/arch/xtensa/include/asm/mmu_context.h b/arch/xtensa/include/asm/mmu_context.h
index 74923ef3b228..e337ba9686e9 100644
--- a/arch/xtensa/include/asm/mmu_context.h
+++ b/arch/xtensa/include/asm/mmu_context.h
@@ -111,6 +111,7 @@ static inline void activate_context(struct mm_struct *mm, unsigned int cpu)
 * to -1 says the process has never run on any core.
 */
 
+#define init_new_context init_new_context
 static inline int init_new_context(struct task_struct *tsk,
 		struct mm_struct *mm)
 {
@@ -136,24 +137,18 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	activate_context(next, cpu);
 }
 
-#define activate_mm(prev, next)	switch_mm((prev), (next), NULL)
-#define deactivate_mm(tsk, mm)	do { } while (0)
-
 /*
 * Destroy context related info for an mm_struct that is about
 * to be put to rest.
 */
+#define destroy_context destroy_context
 static inline void destroy_context(struct mm_struct *mm)
 {
 	invalidate_page_directory();
 }
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-	/* Nothing to do. */
-
-}
+#include <asm-generic/mmu_context.h>
 
 #endif /* CONFIG_MMU */
 #endif /* _XTENSA_MMU_CONTEXT_H */

diff --git a/arch/xtensa/include/asm/nommu_context.h b/arch/xtensa/include/asm/nommu_context.h
index 37251b2ef871..7c9d1918dc41 100644
--- a/arch/xtensa/include/asm/nommu_context.h
+++ b/arch/xtensa/include/asm/nommu_context.h
@@ -7,28 +7,4 @@ static inline void init_kio(void)
 {
 }
 
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
-static inline int init_new_context(struct task_struct *tsk,struct mm_struct *mm)
-{
-	return 0;
-}
-
-static inline void destroy_context(struct mm_struct *mm)
-{
-}
-
-static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
-{
-}
-
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
-			     struct task_struct *tsk)
-{
-}
-
-static inline void deactivate_mm(struct task_struct *tsk, struct mm_struct *mm)
-{
-}
+#include <asm-generic/nommu_context.h>
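Patch 2's changelog mentions that some architectures could go further by
consolidating switch_mm and activate_mm. That consolidation usually amounts
to deleting the arch's own activate_mm define and letting the fallback from
patch 1 supply it. A hypothetical sketch of such a follow-up change (not
part of this series):

    /* Before: the arch carries its own trivial wrapper. */
    #define activate_mm(prev, next)	switch_mm(prev, next, NULL)

    /* After: drop the define; the #ifndef activate_mm fallback in
     * asm-generic/mmu_context.h expands to
     *	switch_mm(prev_mm, next_mm, current)
     * which is equivalent whenever the arch's switch_mm either ignores
     * the task argument or treats NULL and current the same way. */
    #include <asm-generic/mmu_context.h>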
From patchwork Fri Jul 10 01:56:42 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, x86@kernel.org, Mathieu Desnoyers,
 Arnd Bergmann, Peter Zijlstra, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 3/7] mm: introduce exit_lazy_tlb
Date: Fri, 10 Jul 2020 11:56:42 +1000
Message-Id: <20200710015646.2020871-4-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 fs/exec.c                         |  5 +++--
 include/asm-generic/mmu_context.h | 20 ++++++++++++++++++++
 kernel/kthread.c                  |  1 +
 kernel/sched/core.c               |  2 ++
 4 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index e6e8a9a70327..e2ab71e88293 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1117,9 +1117,10 @@ static int exec_mmap(struct mm_struct *mm)
         setmax_mm_hiwater_rss(&tsk->signal->maxrss, old_mm);
         mm_update_next_owner(old_mm);
         mmput(old_mm);
-        return 0;
+    } else {
+        exit_lazy_tlb(active_mm, tsk);
+        mmdrop(active_mm);
     }
-    mmdrop(active_mm);
     return 0;
 }
diff --git a/include/asm-generic/mmu_context.h b/include/asm-generic/mmu_context.h
index 86cea80a50df..3fc4c3879b79 100644
--- a/include/asm-generic/mmu_context.h
+++ b/include/asm-generic/mmu_context.h
@@ -24,6 +24,26 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
 }
 #endif
 
+/*
+ * exit_lazy_tlb - Called after switching away from a lazy TLB mode mm.
+ *
+ * @mm: the lazy mm context that was switched away from
+ * @tsk: the task that was switched to the non-lazy mm
+ *
+ * tsk->mm will not be NULL.
+ *
+ * Note this is not symmetrical to enter_lazy_tlb: it is not called when
+ * tasks switch into the lazy mm, but after the lazy mm becomes non-lazy
+ * (either switched to a different mm or the owner of the mm returns).
+ */
+#ifndef exit_lazy_tlb
+static inline void exit_lazy_tlb(struct mm_struct *mm,
+            struct task_struct *tsk)
+{
+}
+#endif
+
 /**
  * init_new_context - Initialize context of a new mm_struct.
  * @tsk: task struct for the mm
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 132f84a5fde3..e813d92f2eab 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1253,6 +1253,7 @@ void kthread_use_mm(struct mm_struct *mm)
 
     if (active_mm != mm)
         mmdrop(active_mm);
+    exit_lazy_tlb(active_mm, tsk);
 
     to_kthread(tsk)->oldfs = get_fs();
     set_fs(USER_DS);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca5db40392d4..debc917bc69b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3439,6 +3439,8 @@ context_switch(struct rq *rq, struct task_struct *prev,
         switch_mm_irqs_off(prev->active_mm, next->mm, next);
 
         if (!prev->mm) {                        // from kernel
+            exit_lazy_tlb(prev->active_mm, next);
+
             /* will mmdrop() in finish_task_switch(). */
             rq->prev_mm = prev->active_mm;
             prev->active_mm = NULL;
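To make the new hook's timing concrete, here is a condensed walk-through of the user -> kthread -> user case the hunks above handle. This is an illustrative sketch assembled from the call sites; the wrapper function, its parameters, and the omission of locking and refcounting are all simplifications, not scheduler code:

static void lazy_tlb_walkthrough(struct task_struct *user_a,
                                 struct task_struct *kthread,
                                 struct task_struct *user_b)
{
    /* user A -> kthread: keep A's mm loaded and mark the CPU lazy */
    kthread->active_mm = user_a->active_mm;
    enter_lazy_tlb(kthread->active_mm, kthread);

    /* kthread -> user B: the lazy mm is switched away from... */
    switch_mm_irqs_off(kthread->active_mm, user_b->mm, user_b);

    /*
     * ...so the mm is no longer lazy, and the new hook runs with
     * user_b->mm != NULL, matching the kernel-doc comment above.
     */
    exit_lazy_tlb(kthread->active_mm, user_b);
}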
From patchwork Fri Jul 10 01:56:43 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, x86@kernel.org, Mathieu Desnoyers,
 Arnd Bergmann, Peter Zijlstra, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 4/7] x86: use exit_lazy_tlb rather than
 membarrier_mm_sync_core_before_usermode
Date: Fri, 10 Jul 2020 11:56:43 +1000
Message-Id: <20200710015646.2020871-5-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

And get rid of the generic sync_core_before_usermode facility.

This helper is the wrong way around, I think. The idea that membarrier
state requires a core sync before returning to user is the easy part
that does not need hiding behind membarrier calls. The gap in core
synchronization due to x86's sysret/sysexit and lazy tlb mode is the
tricky detail that is better put in x86 lazy tlb code.

Consider: if an arch did not synchronize the core in switch_mm either,
then membarrier_mm_sync_core_before_usermode would be in the wrong
place, but arch-specific mmu context functions would still be the right
place. There is also an exit_lazy_tlb case that is not covered by this
call, which could be a bug (a kthread uses the membarrier process's mm,
then the CPU switches back to the process without a switch_mm or lazy
mm switch).

This makes the lazy tlb code a bit more modular.
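For context, the user-visible contract being preserved here is the SYNC_CORE flavour of membarrier(2). A minimal, runnable userspace sketch of how a process (say, a JIT that rewrites code pages) uses it; error handling trimmed, and the wrapper function is just illustrative:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

static int membarrier(int cmd, unsigned int flags)
{
    return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
    /* Register once, e.g. before the JIT starts patching code... */
    membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0);

    /*
     * ...then, after rewriting code, force every thread of this
     * process to execute a core-serializing instruction before it
     * next returns to user mode.
     */
    membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0);
    return 0;
}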
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 .../membarrier-sync-core/arch-support.txt |  6 +++-
 arch/x86/include/asm/mmu_context.h        | 35 +++++++++++++++++++
 arch/x86/include/asm/sync_core.h          | 28 ---------------
 include/linux/sched/mm.h                  | 14 --------
 include/linux/sync_core.h                 | 21 -----------
 kernel/cpu.c                              |  4 ++-
 kernel/kthread.c                          |  2 +-
 kernel/sched/core.c                       | 16 ++++-----
 8 files changed, 51 insertions(+), 75 deletions(-)
 delete mode 100644 arch/x86/include/asm/sync_core.h
 delete mode 100644 include/linux/sync_core.h

diff --git a/Documentation/features/sched/membarrier-sync-core/arch-support.txt b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
index 52ad74a25f54..bd43fb1f5986 100644
--- a/Documentation/features/sched/membarrier-sync-core/arch-support.txt
+++ b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
@@ -5,6 +5,10 @@
 #
 # Architecture requirements
 #
+# If your architecture returns to user-space through non-core-serializing
+# instructions, you need to ensure these are done in switch_mm and
+# exit_lazy_tlb (if lazy tlb switching is implemented).
+#
 # * arm/arm64/powerpc
 #
 # Rely on implicit context synchronization as a result of exception return
@@ -24,7 +28,7 @@
 # instead on write_cr3() performed by switch_mm() to provide core serialization
 # after changing the current mm, and deal with the special case of kthread ->
 # uthread (temporarily keeping current mm into active_mm) by issuing a
-# sync_core_before_usermode() in that specific case.
+# serializing instruction in exit_lazy_tlb() in that specific case.
 #
    -----------------------
    |         arch |status|
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 255750548433..5263863a9be8 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -95,6 +96,40 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 #define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
 
+#ifdef CONFIG_MEMBARRIER
+/*
+ * Ensure that a core serializing instruction is issued before returning
+ * to user-mode, if a SYNC_CORE was requested. x86 implements return to
+ * user-space through sysexit, sysretl, and sysretq, which are not core
+ * serializing.
+ *
+ * See the membarrier comment in finish_task_switch as to why this is done
+ * in exit_lazy_tlb.
+ */
+#define exit_lazy_tlb exit_lazy_tlb
+static inline void exit_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+    /* Switching mm is serializing with write_cr3 */
+    if (tsk->mm != mm)
+        return;
+
+    if (likely(!(atomic_read(&mm->membarrier_state) &
+                 MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
+        return;
+
+    /* With PTI, we unconditionally serialize before running user code. */
+    if (static_cpu_has(X86_FEATURE_PTI))
+        return;
+    /*
+     * Return from interrupt and NMI is done through iret, which is core
+     * serializing.
+     */
+    if (in_irq() || in_nmi())
+        return;
+    sync_core();
+}
+#endif
+
 /*
  * Init a new mm. Used on mm copies, like at fork()
  * and on mm's that are brand-new, like at execve().
diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
deleted file mode 100644
index c67caafd3381..000000000000
--- a/arch/x86/include/asm/sync_core.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_SYNC_CORE_H
-#define _ASM_X86_SYNC_CORE_H
-
-#include
-#include
-#include
-
-/*
- * Ensure that a core serializing instruction is issued before returning
- * to user-mode. x86 implements return to user-space through sysexit,
- * sysrel, and sysretq, which are not core serializing.
- */
-static inline void sync_core_before_usermode(void)
-{
-    /* With PTI, we unconditionally serialize before running user code. */
-    if (static_cpu_has(X86_FEATURE_PTI))
-        return;
-    /*
-     * Return from interrupt and NMI is done through iret, which is core
-     * serializing.
-     */
-    if (in_irq() || in_nmi())
-        return;
-    sync_core();
-}
-
-#endif /* _ASM_X86_SYNC_CORE_H */
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 480a4d1b7dd8..9b026264b445 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -7,7 +7,6 @@
 #include
 #include
 #include
-#include
 
 /*
  * Routines for handling mm_structs
@@ -364,16 +363,6 @@ enum {
 #include
 #endif
 
-static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
-{
-    if (current->mm != mm)
-        return;
-    if (likely(!(atomic_read(&mm->membarrier_state) &
-                 MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
-        return;
-    sync_core_before_usermode();
-}
-
 extern void membarrier_exec_mmap(struct mm_struct *mm);
 
 #else
@@ -387,9 +376,6 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 static inline void membarrier_exec_mmap(struct mm_struct *mm)
 {
 }
-static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
-{
-}
 #endif
 
 #endif /* _LINUX_SCHED_MM_H */
diff --git a/include/linux/sync_core.h b/include/linux/sync_core.h
deleted file mode 100644
index 013da4b8b327..000000000000
--- a/include/linux/sync_core.h
+++ /dev/null
@@ -1,21 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_SYNC_CORE_H
-#define _LINUX_SYNC_CORE_H
-
-#ifdef CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
-#include
-#else
-/*
- * This is a dummy sync_core_before_usermode() implementation that can be used
- * on all architectures which return to user-space through core serializing
- * instructions.
- * If your architecture returns to user-space through non-core-serializing
- * instructions, you need to write your own functions.
- */
-static inline void sync_core_before_usermode(void)
-{
-}
-#endif
-
-#endif /* _LINUX_SYNC_CORE_H */
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6ff2578ecf17..134688d79589 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -572,7 +572,9 @@ static int finish_cpu(unsigned int cpu)
 
     /*
      * idle_task_exit() will have switched to &init_mm, now
-     * clean up any remaining active_mm state.
+     * clean up any remaining active_mm state. exit_lazy_tlb()
+     * is not called here; if an arch does any accounting in
+     * these functions it would have to be added.
     */
    if (mm != &init_mm)
        idle->active_mm = &init_mm;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index e813d92f2eab..6f93c649aa97 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1251,9 +1251,9 @@ void kthread_use_mm(struct mm_struct *mm)
     finish_arch_post_lock_switch();
 #endif
 
+    exit_lazy_tlb(active_mm, tsk);
     if (active_mm != mm)
         mmdrop(active_mm);
-    exit_lazy_tlb(active_mm, tsk);
 
     to_kthread(tsk)->oldfs = get_fs();
     set_fs(USER_DS);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index debc917bc69b..31e22c79826c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3294,22 +3294,19 @@ static struct rq *finish_task_switch(struct task_struct *prev)
     kcov_finish_switch(current);
 
     fire_sched_in_preempt_notifiers(current);
+
     /*
      * When switching through a kernel thread, the loop in
      * membarrier_{private,global}_expedited() may have observed that
      * kernel thread and not issued an IPI. It is therefore possible to
      * schedule between user->kernel->user threads without passing though
-     * switch_mm(). Membarrier requires a barrier after storing to
-     * rq->curr, before returning to userspace, so provide them here:
-     *
-     * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-     *   provided by mmdrop(),
-     * - a sync_core for SYNC_CORE.
+     * switch_mm(). Membarrier requires a full barrier after storing to
+     * rq->curr, before returning to userspace, for
+     * {PRIVATE,GLOBAL}_EXPEDITED. This is implicitly provided by mmdrop().
      */
-    if (mm) {
-        membarrier_mm_sync_core_before_usermode(mm);
+    if (mm)
         mmdrop(mm);
-    }
+
     if (unlikely(prev_state == TASK_DEAD)) {
         if (prev->sched_class->task_dead)
             prev->sched_class->task_dead(prev);
@@ -6292,6 +6289,7 @@ void idle_task_exit(void)
     BUG_ON(current != this_rq()->idle);
 
     if (mm != &init_mm) {
+        /* enter_lazy_tlb is not done because we're about to go down */
        switch_mm(mm, &init_mm, current);
        finish_arch_post_lock_switch();
    }
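Note the reordering in the kthread_use_mm() hunk: the hook now runs before the reference on active_mm is dropped. That ordering matters once exit_lazy_tlb() may dereference the mm it is handed, as the x86 version above does via mm->membarrier_state. A sketch of the constraint (use_mm_tail() is a hypothetical name for the tail of kthread_use_mm(), not kernel code):

static void use_mm_tail(struct mm_struct *mm, struct mm_struct *active_mm,
                        struct task_struct *tsk)
{
    /* Hook first: it may still dereference active_mm. */
    exit_lazy_tlb(active_mm, tsk);

    /* Only then drop the reference that keeps active_mm alive. */
    if (active_mm != mm)
        mmdrop(active_mm);
}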
From patchwork Fri Jul 10 01:56:44 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, x86@kernel.org, Mathieu Desnoyers,
 Arnd Bergmann, Peter Zijlstra, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 5/7] lazy tlb: introduce lazy mm refcount helper functions
Date: Fri, 10 Jul 2020 11:56:44 +1000
Message-Id: <20200710015646.2020871-6-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

Add explicit _lazy_tlb annotated functions for lazy mm refcounting.
This makes the lazy mm references more visible, and allows the
refcounting scheme to be removed if it is not used.
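The helper names encode a pairing rule: a reference that exists only because an mm is some CPU's lazy active_mm is taken and dropped through the _lazy_tlb variants, while ordinary references keep using mmgrab()/mmdrop() (the kthread_unuse_mm() hunk below converts one kind of reference into the other). A sketch of the rule, with hypothetical helper names:

static void make_lazy(struct task_struct *tsk, struct mm_struct *mm)
{
    mmgrab_lazy_tlb(mm);    /* reference exists only for lazy use */
    tsk->active_mm = mm;
    enter_lazy_tlb(mm, tsk);
}

static void unmake_lazy(struct task_struct *tsk, struct mm_struct *mm)
{
    exit_lazy_tlb(mm, tsk);
    mmdrop_lazy_tlb(mm);    /* pairs with the mmgrab_lazy_tlb() above */
}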
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/smp.c            |  2 +-
 arch/powerpc/mm/book3s64/radix_tlb.c |  4 ++--
 fs/exec.c                            |  2 +-
 include/linux/sched/mm.h             | 17 +++++++++++++++++
 kernel/cpu.c                         |  2 +-
 kernel/exit.c                        |  2 +-
 kernel/kthread.c                     | 11 +++++++----
 kernel/sched/core.c                  | 13 +++++++------
 8 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 73199470c265..ad95812d2a3f 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1253,7 +1253,7 @@ void start_secondary(void *unused)
     unsigned int cpu = smp_processor_id();
     struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
 
-    mmgrab(&init_mm);
+    mmgrab(&init_mm); /* XXX: where is the mmput for this? */
     current->active_mm = &init_mm;
 
     smp_store_cpu_info(cpu);
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index b5cc9b23cf02..52730629b3eb 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -652,10 +652,10 @@ static void do_exit_flush_lazy_tlb(void *arg)
          * Must be a kernel thread because sender is single-threaded.
          */
        BUG_ON(current->mm);
-       mmgrab(&init_mm);
+       mmgrab_lazy_tlb(&init_mm);
        switch_mm(mm, &init_mm, current);
        current->active_mm = &init_mm;
-       mmdrop(mm);
+       mmdrop_lazy_tlb(mm);
    }
    _tlbiel_pid(pid, RIC_FLUSH_ALL);
 }
diff --git a/fs/exec.c b/fs/exec.c
index e2ab71e88293..3a01b2751ea9 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1119,7 +1119,7 @@ static int exec_mmap(struct mm_struct *mm)
         mmput(old_mm);
     } else {
         exit_lazy_tlb(active_mm, tsk);
-        mmdrop(active_mm);
+        mmdrop_lazy_tlb(active_mm);
     }
     return 0;
 }
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 9b026264b445..110d4ad21de6 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -50,6 +50,23 @@ static inline void mmdrop(struct mm_struct *mm)
 
 void mmdrop(struct mm_struct *mm);
 
+/* Helpers for lazy TLB mm refcounting */
+static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
+{
+    mmgrab(mm);
+}
+
+static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
+{
+    mmdrop(mm);
+}
+
+static inline void mmdrop_lazy_tlb_smp_mb(struct mm_struct *mm)
+{
+    /* This depends on mmdrop providing a full smp_mb() */
+    mmdrop(mm);
+}
+
 /*
  * This has to be called after a get_task_mm()/mmget_not_zero()
  * followed by taking the mmap_lock for writing before modifying the
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 134688d79589..ff9fcbc4e76b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -578,7 +578,7 @@ static int finish_cpu(unsigned int cpu)
     */
    if (mm != &init_mm)
        idle->active_mm = &init_mm;
-   mmdrop(mm);
+   mmdrop_lazy_tlb(mm);
    return 0;
 }
diff --git a/kernel/exit.c b/kernel/exit.c
index 727150f28103..d535da9fd2f8 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -470,7 +470,7 @@ static void exit_mm(void)
        __set_current_state(TASK_RUNNING);
        mmap_read_lock(mm);
    }
-   mmgrab(mm);
+   mmgrab_lazy_tlb(mm);
    BUG_ON(mm != current->active_mm);
    /* more a memory barrier than a real lock */
    task_lock(current);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 6f93c649aa97..a7133cc2ddaf 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1238,12 +1238,12 @@ void kthread_use_mm(struct mm_struct *mm)
     WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
     WARN_ON_ONCE(tsk->mm);
 
+    mmgrab(mm);
+
     task_lock(tsk);
     active_mm = tsk->active_mm;
-    if (active_mm != mm) {
-        mmgrab(mm);
+    if (active_mm != mm)
         tsk->active_mm = mm;
-    }
     tsk->mm = mm;
     switch_mm(active_mm, mm, tsk);
     task_unlock(tsk);
@@ -1253,7 +1253,7 @@ void kthread_use_mm(struct mm_struct *mm)
 
     exit_lazy_tlb(active_mm, tsk);
     if (active_mm != mm)
-        mmdrop(active_mm);
+        mmdrop_lazy_tlb(active_mm);
 
     to_kthread(tsk)->oldfs = get_fs();
     set_fs(USER_DS);
@@ -1276,9 +1276,12 @@ void kthread_unuse_mm(struct mm_struct *mm)
     task_lock(tsk);
     sync_mm_rss(mm);
     tsk->mm = NULL;
+    mmgrab_lazy_tlb(mm);
     /* active_mm is still 'mm' */
     enter_lazy_tlb(mm, tsk);
     task_unlock(tsk);
+
+    mmdrop(mm);
 }
 EXPORT_SYMBOL_GPL(kthread_unuse_mm);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 31e22c79826c..d19f2f517f6c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3302,10 +3302,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
      * schedule between user->kernel->user threads without passing though
      * switch_mm(). Membarrier requires a full barrier after storing to
      * rq->curr, before returning to userspace, for
-     * {PRIVATE,GLOBAL}_EXPEDITED. This is implicitly provided by mmdrop().
+     * {PRIVATE,GLOBAL}_EXPEDITED. This is implicitly provided by
+     * mmdrop_lazy_tlb_smp_mb().
      */
     if (mm)
-        mmdrop(mm);
+        mmdrop_lazy_tlb_smp_mb(mm);
 
     if (unlikely(prev_state == TASK_DEAD)) {
         if (prev->sched_class->task_dead)
@@ -3410,9 +3411,9 @@ context_switch(struct rq *rq, struct task_struct *prev,
     /*
      * kernel -> kernel   lazy + transfer active
-     *   user -> kernel   lazy + mmgrab() active
+     *   user -> kernel   lazy + mmgrab_lazy_tlb() active
      *
-     * kernel ->   user   switch + mmdrop() active
+     * kernel ->   user   switch + mmdrop_lazy_tlb() active
      *   user ->   user   switch
      */
     if (!next->mm) {                                // to kernel
@@ -3420,7 +3421,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
         next->active_mm = prev->active_mm;
         if (prev->mm)                           // from user
-            mmgrab(prev->active_mm);
+            mmgrab_lazy_tlb(prev->active_mm);
         else
             prev->active_mm = NULL;
     } else {                                        // to user
@@ -3438,7 +3439,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
         if (!prev->mm) {                        // from kernel
             exit_lazy_tlb(prev->active_mm, next);
 
-            /* will mmdrop() in finish_task_switch(). */
+            /* will mmdrop_lazy_tlb() in finish_task_switch(). */
             rq->prev_mm = prev->active_mm;
             prev->active_mm = NULL;
         }
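The _smp_mb variant names an ordering contract rather than an implementation: finish_task_switch() needs a full memory barrier after the store to rq->curr whether or not a reference count is actually decremented. A sketch of that contract (this is what the configurable version in the next patch spells out; the condition is a placeholder, not kernel code):

static inline void mmdrop_lazy_tlb_smp_mb_sketch(struct mm_struct *mm)
{
    if (refcounting_enabled)    /* placeholder condition */
        mmdrop(mm);     /* atomic_dec_and_test() implies full smp_mb() */
    else
        smp_mb();       /* no refcount: supply the barrier alone */
}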
From patchwork Fri Jul 10 01:56:45 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, x86@kernel.org, Mathieu Desnoyers,
 Arnd Bergmann, Peter Zijlstra, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 6/7] lazy tlb: allow lazy tlb mm switching to be configurable
Date: Fri, 10 Jul 2020 11:56:45 +1000
Message-Id: <20200710015646.2020871-7-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

NOMMU systems could easily go without this and save a bit of code and
the mm refcounting, because their mm switch is a no-op. I haven't
flipped them over yet because I haven't audited all the arch code to
convert it over to using the _lazy_tlb refcounting.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/Kconfig             |  7 +++++
 include/linux/sched/mm.h | 12 ++++++---
 kernel/sched/core.c      | 55 +++++++++++++++++++++++++++-------------
 kernel/sched/sched.h     |  4 ++-
 4 files changed, 55 insertions(+), 23 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8cc35dc556c7..2daf8fe6146a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -411,6 +411,13 @@ config MMU_GATHER_NO_GATHER
    bool
    depends on MMU_GATHER_TABLE_FREE
 
+# Would like to make this depend on MMU, because there is little use for
+# lazy mm switching with NOMMU, but the NOMMU architecture code has to be
+# audited first.
+config MMU_LAZY_TLB
+   def_bool y
+   help
+     Enable "lazy TLB" mmu context switching for kernel threads.
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
    bool
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 110d4ad21de6..2c2b20e2ccc7 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -53,18 +53,22 @@ void mmdrop(struct mm_struct *mm);
 /* Helpers for lazy TLB mm refcounting */
 static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
 {
-    mmgrab(mm);
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+        mmgrab(mm);
 }
 
 static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
 {
-    mmdrop(mm);
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+        mmdrop(mm);
 }
 
 static inline void mmdrop_lazy_tlb_smp_mb(struct mm_struct *mm)
 {
-    /* This depends on mmdrop providing a full smp_mb() */
-    mmdrop(mm);
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+        mmdrop(mm); /* This depends on mmdrop providing a full smp_mb() */
+    else
+        smp_mb();
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d19f2f517f6c..14b4fae6f6e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3253,7 +3253,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
    __releases(rq->lock)
 {
    struct rq *rq = this_rq();
-   struct mm_struct *mm = rq->prev_mm;
+   struct mm_struct *mm = NULL;
    long prev_state;
 
    /*
@@ -3272,7 +3272,10 @@ static struct rq *finish_task_switch(struct task_struct *prev)
              current->comm, current->pid, preempt_count()))
        preempt_count_set(FORK_PREEMPT_COUNT);
 
-   rq->prev_mm = NULL;
+#ifdef CONFIG_MMU_LAZY_TLB
+   mm = rq->prev_lazy_mm;
+   rq->prev_lazy_mm = NULL;
+#endif
 
    /*
     * A task struct has one reference for the use as "current".
@@ -3393,22 +3396,11 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
    calculate_sigpending();
 }
 
-/*
- * context_switch - switch to the new MM and the new thread's register state.
- */
-static __always_inline struct rq *
-context_switch(struct rq *rq, struct task_struct *prev,
-          struct task_struct *next, struct rq_flags *rf)
+static __always_inline void
+context_switch_mm(struct rq *rq, struct task_struct *prev,
+          struct task_struct *next)
 {
-   prepare_task_switch(rq, prev, next);
-
-   /*
-    * For paravirt, this is coupled with an exit in switch_to to
-    * combine the page table reload and the switch backend into
-    * one hypercall.
-    */
-   arch_start_context_switch(prev);
-
+#ifdef CONFIG_MMU_LAZY_TLB
    /*
     * kernel -> kernel   lazy + transfer active
     *   user -> kernel   lazy + mmgrab_lazy_tlb() active
@@ -3440,10 +3432,37 @@ context_switch(struct rq *rq, struct task_struct *prev,
            exit_lazy_tlb(prev->active_mm, next);
 
            /* will mmdrop_lazy_tlb() in finish_task_switch(). */
-           rq->prev_mm = prev->active_mm;
+           rq->prev_lazy_mm = prev->active_mm;
            prev->active_mm = NULL;
        }
    }
+#else
+   if (!next->mm)
+       next->active_mm = &init_mm;
+   membarrier_switch_mm(rq, prev->active_mm, next->active_mm);
+   switch_mm_irqs_off(prev->active_mm, next->active_mm, next);
+   if (!prev->mm)
+       prev->active_mm = NULL;
+#endif
+}
+
+/*
+ * context_switch - switch to the new MM and the new thread's register state.
+ */
+static __always_inline struct rq *
+context_switch(struct rq *rq, struct task_struct *prev,
+          struct task_struct *next, struct rq_flags *rf)
+{
+   prepare_task_switch(rq, prev, next);
+
+   /*
+    * For paravirt, this is coupled with an exit in switch_to to
+    * combine the page table reload and the switch backend into
+    * one hypercall.
+    */
+   arch_start_context_switch(prev);
+
+   context_switch_mm(rq, prev, next);
 
    rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 877fb08eb1b0..b196dd885d33 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -929,7 +929,9 @@ struct rq {
    struct task_struct  *idle;
    struct task_struct  *stop;
    unsigned long       next_balance;
-   struct mm_struct    *prev_mm;
+#ifdef CONFIG_MMU_LAZY_TLB
+   struct mm_struct    *prev_lazy_mm;
+#endif
 
    unsigned int        clock_update_flags;
    u64         clock;
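A small design note on the IS_ENABLED() style used in the helpers above: unlike #ifdef, both arms are always parsed and type-checked regardless of the config setting, and the dead arm is removed by constant folding. A hypothetical illustration (grab_if_configured() is not a kernel function):

static inline void grab_if_configured(struct mm_struct *mm)
{
    /* always type-checked; compiles to nothing when the option is off */
    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
        mmgrab(mm);
}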
From patchwork Fri Jul 10 01:56:46 2020
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-arch@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>, x86@kernel.org, Mathieu Desnoyers,
 Arnd Bergmann, Peter Zijlstra, linux-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, Anton Blanchard
Subject: [RFC PATCH 7/7] lazy tlb: shoot lazies, a non-refcounting lazy tlb option
Date: Fri, 10 Jul 2020 11:56:46 +1000
Message-Id: <20200710015646.2020871-8-npiggin@gmail.com>
In-Reply-To: <20200710015646.2020871-1-npiggin@gmail.com>
References: <20200710015646.2020871-1-npiggin@gmail.com>

On big systems, the mm refcount can become highly contended when doing
a lot of context switching with threaded applications (particularly
switching between the idle thread and an application thread).

Abandoning lazy tlb slows switching down quite a bit in the important
user->idle->user cases, so instead implement a non-refcounted scheme
that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
any remaining lazy ones.

On a 16-socket 192-core POWER8 system, a context switching benchmark
with as many software threads as CPUs (so each switch will go in and
out of idle), upstream can achieve a rate of about 1 million context
switches per second. After this patch it goes up to 118 million.
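The benchmark setup is not spelled out in the patch. For readers who want to reproduce the flavour of the workload, here is a minimal pipe ping-pong sketch in userspace C, where each round trip costs two context switches and each blocking read sends the CPU through idle; this is an assumption about the workload, not the author's actual benchmark:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int ab[2], ba[2];

static void *partner(void *arg)
{
    char c;

    (void)arg;
    while (read(ab[0], &c, 1) == 1) {
        if (write(ba[1], &c, 1) != 1)
            break;
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    char c = 0;
    long i, n = 1000000;

    if (pipe(ab) || pipe(ba))
        exit(1);
    pthread_create(&t, NULL, partner, NULL);
    for (i = 0; i < n; i++) {
        /* each round trip schedules both threads in and out */
        write(ab[1], &c, 1);
        read(ba[0], &c, 1);
    }
    close(ab[1]);       /* partner's read() returns 0 and it exits */
    pthread_join(t, NULL);
    printf("%ld round trips\n", n);
    return 0;
}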
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/Kconfig             | 16 ++++++++++++++++
 arch/powerpc/Kconfig     |  1 +
 include/linux/sched/mm.h |  6 +++---
 kernel/fork.c            | 39 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 59 insertions(+), 3 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 2daf8fe6146a..edf69437a971 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -418,6 +418,22 @@ config MMU_LAZY_TLB
    help
      Enable "lazy TLB" mmu context switching for kernel threads.
 
+config MMU_LAZY_TLB_REFCOUNT
+   def_bool y
+   depends on MMU_LAZY_TLB
+   depends on !MMU_LAZY_TLB_SHOOTDOWN
+
+config MMU_LAZY_TLB_SHOOTDOWN
+   bool
+   depends on MMU_LAZY_TLB
+   help
+     Instead of refcounting the "lazy tlb" mm struct, which can cause
+     contention with multi-threaded apps on large multiprocessor systems,
+     this option causes __mmdrop to IPI all CPUs in the mm_cpumask and
+     switch to init_mm if they were using the to-be-freed mm as the lazy
+     tlb. Architectures which do not track all possible lazy tlb CPUs in
+     mm_cpumask can not use this (without modification).
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
    bool
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 920c4e3ca4ef..24ac85c868db 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -225,6 +225,7 @@ config PPC
    select HAVE_PERF_USER_STACK_DUMP
    select MMU_GATHER_RCU_TABLE_FREE
    select MMU_GATHER_PAGE_SIZE
+   select MMU_LAZY_TLB_SHOOTDOWN
    select HAVE_REGS_AND_STACK_ACCESS_API
    select HAVE_RELIABLE_STACKTRACE if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
    select HAVE_SYSCALL_TRACEPOINTS
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 2c2b20e2ccc7..1067af8039bd 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -53,19 +53,19 @@ void mmdrop(struct mm_struct *mm);
 /* Helpers for lazy TLB mm refcounting */
 static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
 {
-    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
         mmgrab(mm);
 }
 
 static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
 {
-    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
         mmdrop(mm);
 }
 
 static inline void mmdrop_lazy_tlb_smp_mb(struct mm_struct *mm)
 {
-    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB))
+    if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
         mmdrop(mm); /* This depends on mmdrop providing a full smp_mb() */
     else
         smp_mb();
 }
diff --git a/kernel/fork.c b/kernel/fork.c
index 142b23645d82..da0fba9e6079 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -685,6 +685,40 @@ static void check_mm(struct mm_struct *mm)
 #define allocate_mm()  (kmem_cache_alloc(mm_cachep, GFP_KERNEL))
 #define free_mm(mm)    (kmem_cache_free(mm_cachep, (mm)))
 
+static void do_shoot_lazy_tlb(void *arg)
+{
+   struct mm_struct *mm = arg;
+
+   if (current->active_mm == mm) {
+       BUG_ON(current->mm);
+       switch_mm(mm, &init_mm, current);
+       current->active_mm = &init_mm;
+   }
+}
+
+static void do_check_lazy_tlb(void *arg)
+{
+   struct mm_struct *mm = arg;
+
+   BUG_ON(current->active_mm == mm);
+}
+
+static void shoot_lazy_tlbs(struct mm_struct *mm)
+{
+   if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
+       smp_call_function_many(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
+       do_shoot_lazy_tlb(mm);
+   }
+}
+
+static void check_lazy_tlbs(struct mm_struct *mm)
+{
+   if (IS_ENABLED(CONFIG_DEBUG_VM)) {
+       smp_call_function(do_check_lazy_tlb, (void *)mm, 1);
+       do_check_lazy_tlb(mm);
+   }
+}
+
 /*
  * Called when the last reference to the mm
  * is dropped: either by a lazy thread or by
@@ -695,6 +729,11 @@ void __mmdrop(struct mm_struct *mm)
    BUG_ON(mm == &init_mm);
    WARN_ON_ONCE(mm == current->mm);
    WARN_ON_ONCE(mm == current->active_mm);
+
+   /* Ensure no CPUs are using this as their lazy tlb mm */
+   shoot_lazy_tlbs(mm);
+   check_lazy_tlbs(mm);
+
    mm_free_pgd(mm);
    destroy_context(mm);
    mmu_notifier_subscriptions_destroy(mm);