From patchwork Fri Jul 12 17:00:30 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13731995
Date: Fri, 12 Jul 2024 17:00:30 +0000
In-Reply-To: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
References: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
X-Mailer: b4 0.14-dev
Message-ID: <20240712-asi-rfc-24-v1-12-144b319a40d8@google.com>
Subject: [PATCH 12/26] mm: asi: asi_exit() on PF, skip handling if address is accessible
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Sean Christopherson,
 Paolo Bonzini, Alexandre Chartre, Liran Alon, Jan Setje-Eilers,
 Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton, Mel Gorman,
 Lorenzo Stoakes, David Hildenbrand, Vlastimil Babka, Michal Hocko,
 Khalid Aziz, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Steven Rostedt, Valentin Schneider, Paul Turner, Reiji Watanabe,
 Junaid Shahid, Ofir Weisse, Yosry Ahmed, Patrick Bellasi, KP Singh,
 Alexandra Sandulescu, Matteo Rizzo, Jann Horn
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kvm@vger.kernel.org, Brendan Jackman

From: Ofir Weisse

On a page fault, first call asi_exit(), then check whether the faulting
address is accessible now that the exit has run. We do this by
refactoring spurious_kernel_fault() into two parts:

1. Verify that the error code value is something that could arise from a
   lazy TLB update.

2. Walk the page table and verify permissions. This part is now called
   kernel_access_ok().

We also define PTE_PRESENT() and PMD_PRESENT(), which are suitable for
checking userspace pages. For the purpose of spurious faults,
pte_present() and pmd_present() are only good for kernelspace pages:
for userspace mappings, those macros can return true even when the
hardware present bit (_PAGE_PRESENT) is 0.
Signed-off-by: Ofir Weisse
Signed-off-by: Brendan Jackman
---
 arch/x86/mm/fault.c | 119 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 104 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index bba4e020dd64..e0bc5006c371 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -942,7 +942,7 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)address);
 }
 
-static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
+static __always_inline int kernel_protection_ok(unsigned long error_code, pte_t *pte)
 {
 	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
 		return 0;
@@ -953,6 +953,9 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
 	return 1;
 }
 
+static inline_or_noinstr int kernel_access_ok(
+	unsigned long error_code, unsigned long address, pgd_t *pgd);
+
 /*
  * Handle a spurious fault caused by a stale TLB entry.
  *
@@ -978,11 +981,6 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 {
 	pgd_t *pgd;
-	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-	int ret;
 
 	/*
 	 * Only writes to RO or instruction fetches from NX may cause
@@ -998,6 +996,50 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	pgd = init_mm.pgd + pgd_index(address);
+	return kernel_access_ok(error_code, address, pgd);
+}
+NOKPROBE_SYMBOL(spurious_kernel_fault);
+
+/*
+ * For kernel addresses, pte_present and pmd_present are sufficient for
+ * kernel_access_ok. For user addresses these functions will return true
+ * even though the pte is not actually accessible by hardware (i.e. _PAGE_PRESENT
+ * is not set). This happens in cases where the pages are physically present in
+ * memory, but they are not made accessible to hardware as they need software
+ * handling first:
+ *
+ * - ptes/pmds with _PAGE_PROTNONE need autonuma balancing (see pte_protnone(),
+ *   change_prot_numa(), and do_numa_page()).
+ *
+ * - pmds with _PAGE_PSE & !_PAGE_PRESENT are undergoing splitting (see
+ *   split_huge_page()).
+ *
+ * Here, we care about whether the hardware can actually access the page right
+ * now.
+ *
+ * These issues aren't currently present for PUD, but we also have a custom
+ * PUD_PRESENT for a layer of future-proofing.
+ */
+#define PUD_PRESENT(pud) (pud_flags(pud) & _PAGE_PRESENT)
+#define PMD_PRESENT(pmd) (pmd_flags(pmd) & _PAGE_PRESENT)
+#define PTE_PRESENT(pte) (pte_flags(pte) & _PAGE_PRESENT)
+
+/*
+ * Check if an access by the kernel would cause a page fault. The access is
+ * described by a page fault error code (whether it was a write/instruction
+ * fetch) and address. This doesn't check for types of faults that are not
+ * expected to affect the kernel, e.g. PKU. The address can be user or kernel
+ * space; if it is a user address, we assume the access would happen via the
+ * uaccess API.
+ */
+static inline_or_noinstr int
+kernel_access_ok(unsigned long error_code, unsigned long address, pgd_t *pgd)
+{
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
 	if (!pgd_present(*pgd))
 		return 0;
 
@@ -1006,27 +1048,27 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	if (p4d_leaf(*p4d))
-		return spurious_kernel_fault_check(error_code, (pte_t *) p4d);
+		return kernel_protection_ok(error_code, (pte_t *) p4d);
 
 	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
+	if (!PUD_PRESENT(*pud))
 		return 0;
 
 	if (pud_leaf(*pud))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pud);
+		return kernel_protection_ok(error_code, (pte_t *) pud);
 
 	pmd = pmd_offset(pud, address);
-	if (!pmd_present(*pmd))
+	if (!PMD_PRESENT(*pmd))
 		return 0;
 
 	if (pmd_leaf(*pmd))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+		return kernel_protection_ok(error_code, (pte_t *) pmd);
 
 	pte = pte_offset_kernel(pmd, address);
-	if (!pte_present(*pte))
+	if (!PTE_PRESENT(*pte))
 		return 0;
 
-	ret = spurious_kernel_fault_check(error_code, pte);
+	ret = kernel_protection_ok(error_code, pte);
 	if (!ret)
 		return 0;
 
@@ -1034,12 +1076,11 @@
 	 * Make sure we have permissions in PMD.
 	 * If not, then there's a bug in the page tables:
 	 */
-	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+	ret = kernel_protection_ok(error_code, (pte_t *) pmd);
 	WARN_ONCE(!ret, "PMD has incorrect permission bits\n");
 
 	return ret;
 }
-NOKPROBE_SYMBOL(spurious_kernel_fault);
 
 int show_unhandled_signals = 1;
 
@@ -1483,6 +1524,29 @@ handle_page_fault(struct pt_regs *regs, unsigned long error_code,
 	}
 }
 
+static __always_inline void warn_if_bad_asi_pf(
+	unsigned long error_code, unsigned long address)
+{
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+	struct asi *target;
+
+	/*
+	 * It's a bug to access sensitive data from the "critical section", i.e.
+	 * on the path between asi_enter and asi_relax, where untrusted code
+	 * gets run. #PF in this state sees asi_intr_nest_depth() as 1 because
+	 * #PF increments it. We can't think of a better way to determine if
+	 * this has happened than to check the ASI pagetables, hence we can't
+	 * really have this check in non-debug builds unfortunately.
+	 */
+	VM_WARN_ONCE(
+		(target = asi_get_target(current)) != NULL &&
+		asi_intr_nest_depth() == 1 &&
+		!kernel_access_ok(error_code, address, asi_pgd(target)),
+		"ASI-sensitive data access from critical section, addr=%px error_code=%lx class=%s",
+		(void *) address, error_code, target->class->name);
+#endif
+}
+
 DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 {
 	irqentry_state_t state;
@@ -1490,6 +1554,31 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) :
 							  read_cr2();
 
+	if (static_asi_enabled() && !user_mode(regs)) {
+		pgd_t *pgd;
+
+		/* Can be a NOP even for ASI faults, because of NMIs */
+		asi_exit();
+
+		/*
+		 * handle_page_fault() might oops if we run it for a kernel
+		 * address. This might be the case if we got here due to an ASI
+		 * fault. We avoid this case by checking whether the address is
+		 * now, after asi_exit(), accessible by hardware. If it is,
+		 * there's nothing to do. Note that this is a bit of a shotgun;
+		 * we can also bail early from user-address faults here that
+		 * weren't actually caused by ASI. So we might want to move this
+		 * logic later in the handler. In particular, we might be losing
+		 * some stats here. However, for now this keeps ASI page faults
+		 * nice and fast.
+		 */
+		pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
+		if (kernel_access_ok(error_code, address, pgd)) {
+			warn_if_bad_asi_pf(error_code, address);
+			return;
+		}
+	}
+
 	prefetchw(&current->mm->mmap_lock);
 
 	/*