From patchwork Tue Sep 12 14:16:51 2023
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13381870
Date: Tue, 12 Sep 2023 14:16:51 +0000
Message-ID: <20230912141549.278777-124-ardb@google.com>
In-Reply-To: <20230912141549.278777-63-ardb@google.com>
References: <20230912141549.278777-63-ardb@google.com>
Subject: [PATCH v4 61/61] arm64: mm: add support for WXN memory translation
 attribute
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
 Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook, Joey Gouly

From: Ard Biesheuvel <ardb@kernel.org>

The AArch64 virtual memory system supports a global WXN control, which
can be enabled to make all writable mappings implicitly no-exec. This is
a useful hardening feature, as it prevents mistakes in managing page
table permissions from being exploited to attack the system.

When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is
completely under our control, and has been cleaned up to allow WXN to
be enabled from boot onwards. EL0 is not under our control, but given
that widely deployed security features such as SELinux or PaX already
limit the ability of user space to create mappings that are writable
and executable at the same time, the impact of enabling this for EL0 is
expected to be limited. (For this reason, common user space libraries
that have a legitimate need for manipulating executable code already
carry fallbacks such as [0].)

If enabled at compile time, the feature can still be disabled at boot
if needed, by passing arm64.nowxn on the kernel command line.

[0] https://github.com/libffi/libffi/blob/master/src/closures.c#L440
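As a minimal sketch (illustrative only, not part of the patch), the
effect is observable from user space: with CONFIG_ARM64_WXN=y and no
arm64.nowxn override, a request for a writable+executable anonymous
mapping is rejected by the arch_validate_* hooks added below. The exact
errno depends on the kernel's validation path.

/* Illustrative user-space test, not part of the patch. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		/* Expected under WXN: the kernel refuses W+X mappings */
		printf("W+X mmap rejected: %s\n", strerror(errno));
		return 0;
	}
	printf("W+X mmap succeeded; WXN is disabled or not built in\n");
	return munmap(p, 4096);
}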
Signed-off-by: Ard Biesheuvel
Reviewed-by: Kees Cook
---
 arch/arm64/Kconfig                    | 11 ++++++
 arch/arm64/include/asm/cpufeature.h   | 10 ++++++
 arch/arm64/include/asm/mman.h         | 36 ++++++++++++++++++++
 arch/arm64/include/asm/mmu_context.h  | 30 +++++++++++++++-
 arch/arm64/kernel/pi/idreg-override.c |  4 ++-
 arch/arm64/kernel/pi/map_kernel.c     | 24 +++++++++++++
 arch/arm64/mm/proc.S                  |  6 ++++
 7 files changed, 119 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 24ce9be834c1..3aa50f42ed5c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1572,6 +1572,17 @@ config RODATA_FULL_DEFAULT_ENABLED
 	  This requires the linear region to be mapped down to pages,
 	  which may adversely affect performance in some cases.
 
+config ARM64_WXN
+	bool "Enable WXN attribute so all writable mappings are non-exec"
+	help
+	  Set the WXN bit in the SCTLR system register so that all writable
+	  mappings are treated as if the PXN/UXN bit is set as well.
+	  If this is set to Y, it can still be disabled at runtime by
+	  passing 'arm64.nowxn' on the kernel command line.
+
+	  This should only be set if no software needs to be supported
+	  that relies on being able to execute from writable mappings.
+
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	help
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 5eed4a12c625..484ce8d1d3d4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -18,6 +18,7 @@
 #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR	0
 #define ARM64_SW_FEATURE_OVERRIDE_HVHE		4
 #define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF	8
+#define ARM64_SW_FEATURE_OVERRIDE_NOWXN		12
 
 #ifndef __ASSEMBLY__
 
@@ -932,6 +933,15 @@ static inline bool kaslr_disabled_cmdline(void)
 	return false;
 }
 
+static inline bool arm64_wxn_enabled(void)
+{
+	if (!IS_ENABLED(CONFIG_ARM64_WXN) ||
+	    cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_NOWXN))
+		return false;
+	return true;
+}
+
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);
 
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a6154..6d4940342ba7 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -35,11 +35,40 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
 }
 #define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
 
+static inline bool arm64_check_wx_prot(unsigned long prot,
+				       struct task_struct *tsk)
+{
+	/*
+	 * When we are running with SCTLR_ELx.WXN==1, writable mappings are
+	 * implicitly non-executable. This means we should reject such mappings
+	 * when user space attempts to create them using mmap() or mprotect().
+	 */
+	if (arm64_wxn_enabled() &&
+	    ((prot & (PROT_WRITE | PROT_EXEC)) == (PROT_WRITE | PROT_EXEC))) {
+		/*
+		 * User space libraries such as libffi carry elaborate
+		 * heuristics to decide whether it is worth it to even attempt
+		 * to create writable executable mappings, as PaX or selinux
+		 * enabled systems will outright reject it. They will usually
+		 * fall back to something else (e.g., two separate shared
+		 * mmap()s of a temporary file) on failure.
+		 */
+		pr_info_ratelimited(
+			"process %s (%d) attempted to create PROT_WRITE+PROT_EXEC mapping\n",
+			tsk->comm, tsk->pid);
+		return false;
+	}
+	return true;
+}
+
 static inline bool arch_validate_prot(unsigned long prot,
 	unsigned long addr __always_unused)
 {
 	unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
 
+	if (!arm64_check_wx_prot(prot, current))
+		return false;
+
 	if (system_supports_bti())
 		supported |= PROT_BTI;
 
@@ -50,6 +79,13 @@ static inline bool arch_validate_prot(unsigned long prot,
 }
 #define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
 
+static inline bool arch_validate_mmap_prot(unsigned long prot,
+					   unsigned long addr)
+{
+	return arm64_check_wx_prot(prot, current);
+}
+#define arch_validate_mmap_prot arch_validate_mmap_prot
+
 static inline bool arch_validate_flags(unsigned long vm_flags)
 {
 	if (!system_supports_mte())
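The comment above alludes to the fallback used by libffi and friends:
back the code with a temporary file and map it twice, once writable and
once executable, so that no single mapping is ever W+X. A rough sketch
of that technique follows (simplified from the approach at [0]; names
and error handling are illustrative only, and on arm64 the caller must
also flush the instruction cache, e.g. with __builtin___clear_cache(),
after writing code through the rw view).

/* Illustrative sketch, not part of the patch. */
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

struct dual_map {
	void *rw;	/* write code through this mapping */
	void *rx;	/* call it through this one */
};

static int dual_map_alloc(struct dual_map *m, size_t len)
{
	char path[] = "/tmp/jitXXXXXX";
	int fd = mkstemp(path);

	if (fd < 0)
		return -1;
	unlink(path);			/* file lives on only via the fd */
	if (ftruncate(fd, len) < 0) {
		close(fd);
		return -1;
	}
	m->rw = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	m->rx = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
	close(fd);
	return (m->rw == MAP_FAILED || m->rx == MAP_FAILED) ? -1 : 0;
}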
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 4eeb460d7ed6..35cac8ae7d54 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -20,13 +20,41 @@
 #include <asm/cpufeature.h>
 #include <asm/daifflags.h>
 #include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
 #include <asm/cputype.h>
 #include <asm/sysreg.h>
 #include <asm/tlbflush.h>
 
 extern bool rodata_full;
 
+static inline int arch_dup_mmap(struct mm_struct *oldmm,
+				struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			      unsigned long start, unsigned long end)
+{
+}
+
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+					     bool write, bool execute,
+					     bool foreign)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WXN) && execute &&
+	    (vma->vm_flags & (VM_WRITE | VM_EXEC)) == (VM_WRITE | VM_EXEC)) {
+		pr_warn_ratelimited(
+			"process %s (%d) attempted to execute from writable memory\n",
+			current->comm, current->pid);
+		/* disallow unless the nowxn override is set */
+		return !arm64_wxn_enabled();
+	}
+	return true;
+}
+
 static inline void contextidr_thread_switch(struct task_struct *next)
 {
 	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 062aeae68936..1912aae12de3 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -193,6 +193,7 @@ static const struct ftr_set_desc sw_features __prel64_initconst = {
 		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
 		FIELD("hvhe", ARM64_SW_FEATURE_OVERRIDE_HVHE, hvhe_filter),
 		FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, NULL),
+		FIELD("nowxn", ARM64_SW_FEATURE_OVERRIDE_NOWXN, NULL),
 		{}
 	},
 };
@@ -227,8 +228,9 @@ static const struct {
 	{ "arm64.nomops",	"id_aa64isar2.mops=0" },
 	{ "arm64.nomte",	"id_aa64pfr1.mte=0" },
 	{ "nokaslr",		"arm64_sw.nokaslr=1" },
-	{ "rodata=off",		"arm64_sw.rodataoff=1" },
+	{ "rodata=off",		"arm64_sw.rodataoff=1 arm64_sw.nowxn=1" },
 	{ "arm64.nolva",	"id_aa64mmfr2.varange=0" },
+	{ "arm64.nowxn",	"arm64_sw.nowxn=1" },
 };
 
 static int __init parse_hexdigit(const char *p, u64 *v)
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 591394befa5f..be7caf07bfa7 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -133,6 +133,25 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level)
 	idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }
 
+static void noinline __section(".idmap.text") disable_wxn(void)
+{
+	u64 sctlr = read_sysreg(sctlr_el1) & ~SCTLR_ELx_WXN;
+
+	/*
+	 * We cannot safely clear the WXN bit while the MMU and caches are on,
+	 * so turn the MMU off, flush the TLBs and turn it on again but with
+	 * the WXN bit cleared this time.
+	 */
+	asm("	msr	sctlr_el1, %0	;"
+	    "	isb			;"
+	    "	tlbi	vmalle1		;"
+	    "	dsb	nsh		;"
+	    "	isb			;"
+	    "	msr	sctlr_el1, %1	;"
+	    "	isb			;"
+	    ::	"r"(sctlr & ~SCTLR_ELx_M), "r"(sctlr));
+}
+
 static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
 {
 	u64 sctlr = read_sysreg(sctlr_el1);
@@ -230,6 +249,11 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 	if (va_bits > VA_BITS_MIN)
 		sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(va_bits));
 
+	if (IS_ENABLED(CONFIG_ARM64_WXN) &&
+	    cpuid_feature_extract_unsigned_field(arm64_sw_feature_override.val,
+						 ARM64_SW_FEATURE_OVERRIDE_NOWXN))
+		disable_wxn();
+
 	/*
 	 * The virtual KASLR displacement modulo 2MiB is decided by the
 	 * physical placement of the image, as otherwise, we might not be able
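For readers unfamiliar with the override plumbing used by disable_wxn()
above: arm64_sw_feature_override.val packs one 4-bit field per software
override, at the bit offsets defined in cpufeature.h earlier in this
patch. A small stand-alone illustration (extract_field() is a stand-in
for cpuid_feature_extract_unsigned_field(), not the kernel's
implementation):

/* Illustration only, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

#define OVERRIDE_NOKASLR	0
#define OVERRIDE_HVHE		4
#define OVERRIDE_RODATA_OFF	8
#define OVERRIDE_NOWXN		12

/* each override field is 4 bits wide, hence the 0xf mask */
static unsigned int extract_field(uint64_t val, int shift)
{
	return (val >> shift) & 0xf;
}

int main(void)
{
	/* 'rodata=off' expands to arm64_sw.rodataoff=1 arm64_sw.nowxn=1,
	 * so both fields end up set in the override value.
	 */
	uint64_t val = (1ULL << OVERRIDE_RODATA_OFF) | (1ULL << OVERRIDE_NOWXN);

	printf("nowxn=%u rodataoff=%u\n",
	       extract_field(val, OVERRIDE_NOWXN),
	       extract_field(val, OVERRIDE_RODATA_OFF));
	return 0;
}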
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index f32e4b087ef8..cda34e96f5f5 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -547,6 +547,12 @@ alternative_else_nop_endif
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
+#ifdef CONFIG_ARM64_WXN
+	ldr_l	x1, arm64_sw_feature_override + FTR_OVR_VAL_OFFSET
+	tst	x1, #0xf << ARM64_SW_FEATURE_OVERRIDE_NOWXN
+	orr	x1, x0, #SCTLR_ELx_WXN
+	csel	x0, x0, x1, ne
+#endif
 	ret					// return to head.S
 
 	.unreq	mair
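The assembly above folds the override check into SCTLR_EL1 preparation:
unless the nowxn field is non-zero, the WXN bit is enabled together
with the MMU (the csel keeps the WXN-less value when the tst result is
non-zero, condition 'ne'). An illustrative C rendering, not part of the
patch — prepare_sctlr_el1 is a made-up name, while u64, IS_ENABLED()
and the constants are as used elsewhere in the series:

/* Illustrative C equivalent of the proc.S hunk above. */
u64 prepare_sctlr_el1(u64 sw_feature_override_val)
{
	u64 sctlr = INIT_SCTLR_EL1_MMU_ON;

	if (IS_ENABLED(CONFIG_ARM64_WXN) &&
	    !((sw_feature_override_val >> ARM64_SW_FEATURE_OVERRIDE_NOWXN) & 0xf))
		sctlr |= SCTLR_ELx_WXN;	/* no nowxn override: enable WXN */

	return sctlr;
}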