From patchwork Mon Oct 14 03:58:54 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: maobibo
X-Patchwork-Id: 13834135
From: Bibo Mao <maobibo@loongson.cn>
To: Huacai Chen, Andrey Ryabinin, Andrew Morton
Cc: David Hildenbrand, Barry Song, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org
Subject: [PATCH v2 2/3] LoongArch: Add barrier between set_pte and memory access
Date: Mon, 14 Oct 2024 11:58:54 +0800
Message-Id: <20241014035855.1119220-3-maobibo@loongson.cn>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20241014035855.1119220-1-maobibo@loongson.cn>
References: <20241014035855.1119220-1-maobibo@loongson.cn>
It is possible to return a spurious fault if memory is accessed right
after the pte is set. For the user address space this is not a problem:
the pte is set in kernel space while the memory is accessed from user
space, so there is ample time for synchronization and no barrier is
needed. For the kernel address space, however, memory may be accessed
immediately after the pte is set. Here flush_cache_vmap() /
flush_cache_vmap_early() are used for synchronization.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
---
 arch/loongarch/include/asm/cacheflush.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index f8754d08a31a..53be231319ef 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -42,12 +42,24 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_dup_mm(mm)				do { } while (0)
 #define flush_cache_range(vma, start, end)		do { } while (0)
 #define flush_cache_page(vma, vmaddr, pfn)		do { } while (0)
-#define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)
 
+/*
+ * It is possible for a kernel virtual mapping access to return a spurious
+ * fault if it's accessed right after the pte is set. The page fault handler
+ * does not expect this type of fault. flush_cache_vmap is not exactly the
+ * right place to put this, but it seems to work well enough.
+ */
+static inline void flush_cache_vmap(unsigned long start, unsigned long end)
+{
+	smp_mb();
+}
+#define flush_cache_vmap flush_cache_vmap
+#define flush_cache_vmap_early flush_cache_vmap
+
 #define cache_op(op, addr)		\
 __asm__ __volatile__(			\
 	"	cacop	%0, %1	\n"	\