From patchwork Tue Jul 2 13:21:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13719648
Date: Tue, 2 Jul 2024 13:21:38 +0000
In-Reply-To: <20240702132139.3332013-1-yosryahmed@google.com>
References: <20240702132139.3332013-1-yosryahmed@google.com>
X-Mailer: git-send-email 2.45.2.803.g4e1b14247a-goog
Message-ID: <20240702132139.3332013-3-yosryahmed@google.com>
Subject: [RESEND PATCH v3 2/3] x86/mm: Fix LAM inconsistency during context switch
From: Yosry Ahmed
To: x86@kernel.org
Shutemov" , Rick Edgecombe , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yosry Ahmed X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 8870BA001F X-Stat-Signature: gkcswt3bimnae7fao8shzq49w4s6zk4c X-Rspam-User: X-HE-Tag: 1719926507-786884 X-HE-Meta: U2FsdGVkX18maGm5w0Yq2FR+QMbOssQemsoxNIEcRf5t6kYpXJYjqUvXKsaFnc8Bsn/GSTcwAh5TbZSqXse9dcRdBuaACUcjWqSV6ALuwTrVppo1tiI05lmLn74JdY3u0OXn6thdaMSASYumUTdBzy6tzrKNsy88oU24LC3o8Jn4cNYmhSXYy9VTXQ5qxQy1tQixRJGxzInF20PyB7Uqv5/myTF+MGo2zGydDB53ecodyDm/Qm4wJEGIbEL7iTLE3fsK8eN9x2JKdXBYTxQxxi+JyY13AIMrkPGuEA+1PsXPJouH6wvWpBxWNJcHQBS5g+1qxSM2WCWfCRJlu/tHLLlhOyC63LfRrjDwQ33AdGf2KlzYWKlYePgXGNHZgZRxX+14uoMknB9nhyf5KwxeBnqddx6Ttwh2JS2HMdKMqBLtyXfl84+a3ajKloWR+6E6C+YT/j872IkR6m7kFXh8TrTFlYBXCEln79uCmreDetsKPJPj/9/VO0kdmz9L+T4GHdXM1lb1TFWWrdXyOJt/rvMd6hs1P+NHyQl+2eMNnwyNdYcK6dpt0bdaWy6u8pCibEO0pTN27k8pU38mPuIsdjDsG7yrmXmeZyzU4Uvxt/YhDDtMN6Cs79vCpsY3yd/1FOwBjRORjdpbGKLYT9MgOIvdA8M+kuNdn3nUCuhNkqrc7DCHIzk3EzkEASzwj4I0GRcr19Bb5szxQA0z0fbig9TBTFBEUJPQF1drCgEuBIMBB4g8xq2rxuAvVyuCneRcmqR9+Ldcr3zxPMabVt8FAkW4o96EEKCgrkvpUAlaZsb3LfXOz0WEz8j4e32VoqXsWyjsBQv/h7Pnrxqu9uA7uLjenfi0iBapn+ir3CLkN+n4MGH4wokxp0vx7NYhtKkFGAnY9gPBevrgLtGFeV8WDhXtke++XdkkkBFN3KDkAMcQ/t+DBlEWCUXuD199TnV2ncOmFMp92CldO+zEPf9 AbOeYhqa cZ+6nhAzsDFkLj+4mE8V7t+KfQBu3CwjY6AVcayhT83Khj59oiRFZxfx/XVr6LDFCOdYWNeu0415xaC7UWj7GJRnI3Xr15VChe7lkMsqOSrhBMTUHEtY7pZM2gNSi/qQuOx99mHAPRvLXS7KlrPTVvtnxD1EyjtL1OzkD/ksq/FsGIh7qEkrH7T0KHAw2Co6PlEDF/1Px18BNoexfwfGXEc7Qjn/63Tyb+REkI6JHDNUtZ9HZr4swnmsiyIgWMllu//fGbut8r1yE2AF3pqA/+/1PIVlOOkiR6dbJSAHJmn5RMeF0cY7w3xPvqjKwClHBI1giTBaYnvsQ0khsl4fPGqK0+x35fJy9MfNCC1JNgKZMsLcSIiVRaJVvq0ySxIDrjWtOWb6OjolU7hY5tpFpWqxM0jsMweZi5g5RSPMfNEkbCxVred4TdK2HYMB7AlNUO1yMIwKwQXakoQAA8kKLCH+jkxdUfjJ5ZqDzkkcLxi7QbnhYFN900aVXBLOAITLbWA5w93aEGNPWo4S3Rk6xViy0lXXVPrZ8Rnsl8468Vyu/ajZ0Pt8wroZ+NkXuMfNTwy6+ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: LAM can only be enabled when a process is single-threaded. But _kernel_ threads can temporarily use a single-threaded process's mm. That means that a context-switching kernel thread can race and observe the mm's LAM metadata (mm->context.lam_cr3_mask) change. The context switch code does two logical things with that metadata: populate CR3 and populate 'cpu_tlbstate.lam'. If it hits this race, 'cpu_tlbstate.lam' and CR3 can end up out of sync. This de-synchronization is currently harmless. But it is confusing and might lead to warnings or real bugs. Update set_tlbstate_lam_mode() to take in the LAM mask and untag mask instead of an mm_struct pointer, and while we are at it, rename it to cpu_tlbstate_update_lam(). This should also make it clearer that we are updating cpu_tlbstate. In switch_mm_irqs_off(), read the LAM mask once and use it for both the cpu_tlbstate update and the CR3 update. Reviewed-by: Kirill A. 
Reviewed-by: Kirill A. Shutemov
Change-Id: I8bcf94bbf28ebdbbe75e3939e712246a029f84b6
Signed-off-by: Yosry Ahmed
---
 arch/x86/include/asm/mmu_context.h | 8 +++++++-
 arch/x86/include/asm/tlbflush.h    | 9 ++++-----
 arch/x86/kernel/process_64.c       | 6 ++++--
 arch/x86/mm/tlb.c                  | 8 +++++---
 4 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 8dac45a2c7fcf..19091ebb86338 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -88,7 +88,13 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 #ifdef CONFIG_ADDRESS_MASKING
 static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
 {
-	return mm->context.lam_cr3_mask;
+	/*
+	 * When switch_mm_irqs_off() is called for a kthread, it may race with
+	 * LAM enablement. switch_mm_irqs_off() uses the LAM mask to do two
+	 * things: populate CR3 and populate 'cpu_tlbstate.lam'. Make sure it
+	 * reads a single value for both.
+	 */
+	return READ_ONCE(mm->context.lam_cr3_mask);
 }
 
 static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4d..69e79fff41b80 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -399,11 +399,10 @@ static inline u64 tlbstate_lam_cr3_mask(void)
 	return lam << X86_CR3_LAM_U57_BIT;
 }
 
-static inline void set_tlbstate_lam_mode(struct mm_struct *mm)
+static inline void cpu_tlbstate_update_lam(unsigned long lam, u64 untag_mask)
 {
-	this_cpu_write(cpu_tlbstate.lam,
-		       mm->context.lam_cr3_mask >> X86_CR3_LAM_U57_BIT);
-	this_cpu_write(tlbstate_untag_mask, mm->context.untag_mask);
+	this_cpu_write(cpu_tlbstate.lam, lam >> X86_CR3_LAM_U57_BIT);
+	this_cpu_write(tlbstate_untag_mask, untag_mask);
 }
 
 #else
@@ -413,7 +412,7 @@ static inline u64 tlbstate_lam_cr3_mask(void)
 	return 0;
 }
 
-static inline void set_tlbstate_lam_mode(struct mm_struct *mm)
+static inline void cpu_tlbstate_update_lam(unsigned long lam, u64 untag_mask)
 {
 }
 #endif
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index e1ce0dfd24258..26a853328f2d4 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -801,10 +801,12 @@ static long prctl_map_vdso(const struct vdso_image *image, unsigned long addr)
 static void enable_lam_func(void *__mm)
 {
 	struct mm_struct *mm = __mm;
+	unsigned long lam;
 
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == mm) {
-		write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
-		set_tlbstate_lam_mode(mm);
+		lam = mm_lam_cr3_mask(mm);
+		write_cr3(__read_cr3() | lam);
+		cpu_tlbstate_update_lam(lam, mm_untag_mask(mm));
 	}
 }
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index a041d2ecd8380..1fe9ba33c5805 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -632,7 +633,6 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	}
 
 	new_lam = mm_lam_cr3_mask(next);
-	set_tlbstate_lam_mode(next);
 	if (need_flush) {
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 		this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
@@ -651,6 +651,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 
 	this_cpu_write(cpu_tlbstate.loaded_mm, next);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
+	cpu_tlbstate_update_lam(new_lam, mm_untag_mask(next));
 
 	if (next != prev) {
 		cr4_update_pce_mm(next);
@@ -697,6 +698,7 @@ void initialize_tlbstate_and_flush(void)
 	int i;
 	struct mm_struct *mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u64 tlb_gen = atomic64_read(&init_mm.context.tlb_gen);
+	unsigned long lam = mm_lam_cr3_mask(mm);
 	unsigned long cr3 = __read_cr3();
 
 	/* Assert that CR3 already references the right mm. */
@@ -704,7 +706,7 @@ void initialize_tlbstate_and_flush(void)
 
 	/* LAM expected to be disabled */
 	WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
-	WARN_ON(mm_lam_cr3_mask(mm));
+	WARN_ON(lam);
 
 	/*
	 * Assert that CR4.PCIDE is set if needed.  (CR4.PCIDE initialization
@@ -723,7 +725,7 @@ void initialize_tlbstate_and_flush(void)
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
 	this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, tlb_gen);
-	set_tlbstate_lam_mode(mm);
+	cpu_tlbstate_update_lam(lam, mm_untag_mask(mm));
 
 	for (i = 1; i < TLB_NR_DYN_ASIDS; i++)
 		this_cpu_write(cpu_tlbstate.ctxs[i].ctx_id, 0);
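
A side note on the '>> X86_CR3_LAM_U57_BIT' in cpu_tlbstate_update_lam()
above: 'cpu_tlbstate.lam' stores the LAM mask shifted down from its CR3
position, and tlbstate_lam_cr3_mask() shifts it back up.  A stand-alone
sketch of that round trip (assuming the upstream bit position, LAM_U57 =
CR3 bit 62; check processor-flags.h for the authoritative value):

  #include <stdio.h>

  #define X86_CR3_LAM_U57_BIT	62	/* assumed upstream value */
  #define X86_CR3_LAM_U57	(1UL << X86_CR3_LAM_U57_BIT)

  int main(void)
  {
          unsigned long lam_cr3_mask = X86_CR3_LAM_U57;	/* CR3-ready form */
          /* compact form kept in the per-CPU 'cpu_tlbstate.lam' */
          unsigned char lam = lam_cr3_mask >> X86_CR3_LAM_U57_BIT;

          printf("cr3 form: %#lx, tlbstate form: %u\n", lam_cr3_mask, lam);
          /* tlbstate_lam_cr3_mask() reverses the shift */
          printf("back to cr3 form: %#lx\n",
                 (unsigned long)lam << X86_CR3_LAM_U57_BIT);
          return 0;
  }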