From patchwork Wed Jan 10 14:55:30 2024
X-Patchwork-Submitter: Andrea Parri
X-Patchwork-Id: 13516248
From: Andrea Parri <parri.andrea@gmail.com>
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    mathieu.desnoyers@efficios.com, paulmck@kernel.org, corbet@lwn.net
Cc: mmaas@google.com, hboehm@google.com, striker@us.ibm.com,
    charlie@rivosinc.com, rehn@rivosinc.com, linux-riscv@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    Andrea Parri <parri.andrea@gmail.com>
Subject: [PATCH v3 1/4] membarrier: riscv: Add full memory barrier in switch_mm()
Date: Wed, 10 Jan 2024 15:55:30 +0100
Message-Id: <20240110145533.60234-2-parri.andrea@gmail.com>
In-Reply-To: <20240110145533.60234-1-parri.andrea@gmail.com>
References: <20240110145533.60234-1-parri.andrea@gmail.com>

The membarrier system call requires a full memory barrier after storing
to rq->curr, before going back to user-space.  The barrier is only
needed when switching between processes: the barrier is implied by
mmdrop() when switching from kernel to userspace, and it's not needed
when switching from userspace to kernel.

Rely on the feature/mechanism ARCH_HAS_MEMBARRIER_CALLBACKS and on the
primitive membarrier_arch_switch_mm(), already adopted by the PowerPC
architecture, to insert the required barrier.

Fixes: fab957c11efe2f ("RISC-V: Atomic and Locking Code")
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
---
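As background for reviewers, here is a minimal user-space sketch of the
call sequence this barrier serves.  It is illustrative only, not part of
the patch; it uses the documented commands from <linux/membarrier.h>,
and error handling is kept minimal:

#define _GNU_SOURCE
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, unsigned int flags, int cpu_id)
{
	return syscall(__NR_membarrier, cmd, flags, cpu_id);
}

int main(void)
{
	/* A process must register before using the expedited command. */
	if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0, 0))
		return 1;

	/*
	 * Guarantees that all running threads of this process observe a
	 * full memory barrier before this call returns.  On the kernel
	 * side, that guarantee depends on a full barrier after each
	 * store to rq->curr -- the barrier this patch adds on RISC-V.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0))
		return 1;

	puts("expedited membarrier done");
	return 0;
}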
McKenney" L: linux-kernel@vger.kernel.org S: Supported -F: arch/powerpc/include/asm/membarrier.h +F: arch/*/include/asm/membarrier.h F: include/uapi/linux/membarrier.h F: kernel/sched/membarrier.c diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index cd4c9a204d08c..33d9ea5fa392f 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -27,6 +27,7 @@ config RISCV select ARCH_HAS_GCOV_PROFILE_ALL select ARCH_HAS_GIGANTIC_PAGE select ARCH_HAS_KCOV + select ARCH_HAS_MEMBARRIER_CALLBACKS select ARCH_HAS_MMIOWB select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PMEM_API diff --git a/arch/riscv/include/asm/membarrier.h b/arch/riscv/include/asm/membarrier.h new file mode 100644 index 0000000000000..6c016ebb5020a --- /dev/null +++ b/arch/riscv/include/asm/membarrier.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef _ASM_RISCV_MEMBARRIER_H +#define _ASM_RISCV_MEMBARRIER_H + +static inline void membarrier_arch_switch_mm(struct mm_struct *prev, + struct mm_struct *next, + struct task_struct *tsk) +{ + /* + * Only need the full barrier when switching between processes. + * Barrier when switching from kernel to userspace is not + * required here, given that it is implied by mmdrop(). Barrier + * when switching from userspace to kernel is not needed after + * store to rq->curr. + */ + if (IS_ENABLED(CONFIG_SMP) && + likely(!(atomic_read(&next->membarrier_state) & + (MEMBARRIER_STATE_PRIVATE_EXPEDITED | + MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev)) + return; + + /* + * The membarrier system call requires a full memory barrier + * after storing to rq->curr, before going back to user-space. + * Matches a full barrier in the proximity of the membarrier + * system call entry. + */ + smp_mb(); +} + +#endif /* _ASM_RISCV_MEMBARRIER_H */ diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c index 217fd4de61342..ba8eb3944687c 100644 --- a/arch/riscv/mm/context.c +++ b/arch/riscv/mm/context.c @@ -323,6 +323,8 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next, if (unlikely(prev == next)) return; + membarrier_arch_switch_mm(prev, next, task); + /* * Mark the current MM context as inactive, and the next as * active. This is at least used by the icache flushing diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a708d225c28e8..711dc753f7216 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6670,8 +6670,9 @@ static void __sched notrace __schedule(unsigned int sched_mode) * * Here are the schemes providing that barrier on the * various architectures: - * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC. - * switch_mm() rely on membarrier_arch_switch_mm() on PowerPC. + * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC, + * RISC-V. switch_mm() relies on membarrier_arch_switch_mm() + * on PowerPC and on RISC-V. * - finish_lock_switch() for weakly-ordered * architectures where spin_unlock is a full barrier, * - switch_to() for arm64 (weakly-ordered, spin_unlock