From patchwork Thu Mar 2 15:00:31 2017
X-Patchwork-Submitter: Hoeun Ryu
X-Patchwork-Id: 9600349
From: Hoeun Ryu
To: kernel-hardening@lists.openwall.com
Cc: linux-kernel@vger.kernel.org, Kees Cook, Mark Rutland, Andy Lutomirski,
    Emese Revfy, Russell King, PaX Team, x86@kernel.org, Hoeun Ryu
Date: Fri, 3 Mar 2017 00:00:31 +0900
Message-Id: <1488466831-13918-1-git-send-email-hoeun.ryu@gmail.com>
X-Mailer: git-send-email 2.7.4
Subject: [kernel-hardening] [RFC] arm64: support HAVE_ARCH_RARE_WRITE

This RFC is a quick and dirty arm64 implementation of Kees Cook's RFC for
the rare_write infrastructure [1].

It follows Mark Rutland's suggestion: a special user-space mm that maps only
the __start_rodata..__end_rodata region with RW permissions is prepared
during early boot (paging_init()), and __arch_rare_write_map() switches to
that mm [2]. Because the mm holding the RW mapping is a user-space mm, a new
arch-specific __arch_rare_write_ptr() is needed to convert an RO address
into its RW alias (CONFIG_HAVE_ARCH_RARE_WRITE_PTR is added); in Kees's RFC,
__rare_write_ptr() is generic across all architectures. All such writes must
therefore be instrumented with __rare_write().

One caveat for arm64 is CONFIG_ARM64_SW_TTBR0_PAN. Because
__arch_rare_write_map() installs the special user mm into TTBR0, a usercopy
inside an __arch_rare_write_map()/unmap() pair breaks rare_write:
uaccess_enable() replaces the special mm, so the RW alias is no longer
valid. More generally, any usercopy between __arch_rare_write_map() and
__arch_rare_write_unmap() is problematic, since __arch_rare_write_map()
replaces current->mm and we lose the address space of the `current` process.

It passes LKDTM's rare-write test.
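For illustration only (not part of this patch), here is a rough sketch of
what a write site looks like with the rare_write API from [1]; the variable
and function names below are made up:

  /* Hypothetical example: 'allowed_flags' is marked __wr_rare (i.e.
   * __ro_after_init), so it is read-only after boot and may only be
   * updated via __rare_write(), which maps the RW alias of the rodata
   * region, writes through __rare_write_ptr() (__arch_rare_write_ptr()
   * on arm64) and unmaps the alias again.  Per the caveat above, no
   * usercopy may happen between __arch_rare_write_map() and
   * __arch_rare_write_unmap().
   */
  static unsigned long allowed_flags __wr_rare = 0x1;

  static void set_allowed_flags(unsigned long new)
  {
          __rare_write(allowed_flags, new);
  }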
[1] : http://www.openwall.com/lists/kernel-hardening/2017/02/27/5
[2] : https://lkml.org/lkml/2017/2/22/254

Signed-off-by: Hoeun Ryu
---
 arch/Kconfig                     |  4 ++
 arch/arm64/Kconfig               |  2 +
 arch/arm64/include/asm/pgtable.h | 12 ++++++
 arch/arm64/mm/mmu.c              | 90 ++++++++++++++++++++++++++++++++++++++++
 include/linux/compiler.h         |  6 ++-
 5 files changed, 113 insertions(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index b1bae4c..0d7b82d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -902,4 +902,8 @@ config HAVE_ARCH_RARE_WRITE
           - the routines must validate expected state (e.g. when enabling
             writes, BUG() if writes are already be enabled).
 
+config HAVE_ARCH_RARE_WRITE_PTR
+        def_bool n
+        help
+
 source "kernel/gcov/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 896eba6..e6845ca 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -70,6 +70,8 @@ config ARM64
         select HAVE_ARCH_SECCOMP_FILTER
         select HAVE_ARCH_TRACEHOOK
         select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+        select HAVE_ARCH_RARE_WRITE
+        select HAVE_ARCH_RARE_WRITE_PTR
         select HAVE_ARM_SMCCC
         select HAVE_EBPF_JIT
         select HAVE_C_RECORDMCOUNT
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0eef606..0d4974d 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -731,6 +731,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define kc_vaddr_to_offset(v)	((v) & ~VA_START)
 #define kc_offset_to_vaddr(o)	((o) | VA_START)
 
+extern unsigned long __rare_write_rw_alias_start;
+
+#define __arch_rare_write_ptr(__var) ({				\
+	unsigned long __addr = (unsigned long)&__var;		\
+	__addr -= (unsigned long)__start_rodata;		\
+	__addr += __rare_write_rw_alias_start;			\
+	(typeof(__var) *)__addr;				\
+})
+
+unsigned long __arch_rare_write_map(void);
+unsigned long __arch_rare_write_unmap(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b805c01..cf5d3dd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -504,6 +504,94 @@ static void __init map_kernel(pgd_t *pgd)
 	kasan_copy_shadow(pgd);
 }
 
+struct mm_struct rare_write_mm = {
+	.mm_rb		= RB_ROOT,
+	.mm_users	= ATOMIC_INIT(2),
+	.mm_count	= ATOMIC_INIT(1),
+	.mmap_sem	= __RWSEM_INITIALIZER(rare_write_mm.mmap_sem),
+	.page_table_lock= __SPIN_LOCK_UNLOCKED(rare_write_mm.page_table_lock),
+	.mmlist		= LIST_HEAD_INIT(rare_write_mm.mmlist),
+};
+
+#ifdef CONFIG_ARM64_PTDUMP_DEBUGFS
+#include <asm/ptdump.h>
+
+static struct ptdump_info rare_write_ptdump_info = {
+	.mm		= &rare_write_mm,
+	.markers	= (struct addr_marker[]){
+		{ 0, "rare-write start" },
+		{ TASK_SIZE_64, "rare-write end" }
+	},
+	.base_addr	= 0,
+};
+
+static int __init ptdump_init(void)
+{
+	return ptdump_debugfs_register(&rare_write_ptdump_info,
+				       "rare_write_page_tables");
+}
+device_initcall(ptdump_init);
+
+#endif
+
+unsigned long __rare_write_rw_alias_start = TASK_SIZE_64 / 4;
+
+__always_inline unsigned long __arch_rare_write_map(void)
+{
+	struct mm_struct *mm = &rare_write_mm;
+
+	preempt_disable();
+
+	__switch_mm(mm);
+
+	if (system_uses_ttbr0_pan()) {
+		update_saved_ttbr0(current, mm);
+		cpu_switch_mm(mm->pgd, mm);
+	}
+
+	return 0;
+}
+
+__always_inline unsigned long __arch_rare_write_unmap(void)
+{
+	struct mm_struct *mm = current->active_mm;
+
+	__switch_mm(mm);
+
+	if (system_uses_ttbr0_pan()) {
+		cpu_set_reserved_ttbr0();
+		if (mm != &init_mm)
+			update_saved_ttbr0(current, mm);
+	}
+
+	preempt_enable_no_resched();
+
+	return 0;
+}
+
+void __init rare_write_init(void)
+{
+	phys_addr_t pgd_phys = early_pgtable_alloc();
+	pgd_t *pgd = pgd_set_fixmap(pgd_phys);
+	phys_addr_t pa_start = __pa_symbol(__start_rodata);
+	unsigned long size = __end_rodata - __start_rodata;
+
+	BUG_ON(!pgd);
+	BUG_ON(!PAGE_ALIGNED(pa_start));
+	BUG_ON(!PAGE_ALIGNED(size));
+	BUG_ON(__rare_write_rw_alias_start + size > TASK_SIZE_64);
+
+	rare_write_mm.pgd = (pgd_t *)__phys_to_virt(pgd_phys);
+	init_new_context(NULL, &rare_write_mm);
+
+	__create_pgd_mapping(pgd,
+			     pa_start, __rare_write_rw_alias_start, size,
+			     __pgprot(pgprot_val(PAGE_KERNEL) | PTE_NG),
+			     early_pgtable_alloc, debug_pagealloc_enabled());
+
+	pgd_clear_fixmap();
+}
+
 /*
  * paging_init() sets up the page tables, initialises the zone memory
  * maps and sets up the zero page.
@@ -537,6 +625,8 @@ void __init paging_init(void)
 	 */
 	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
 		      SWAPPER_DIR_SIZE - PAGE_SIZE);
+
+	rare_write_init();
 }
 
 /*
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index c8c684c..a610ef2 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -355,7 +355,11 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 # define __wr_rare __ro_after_init
 # define __wr_rare_type const
 # define __rare_write_type(v) typeof((typeof(v))0)
-# define __rare_write_ptr(v) ((__rare_write_type(v) *)&(v))
+# ifndef CONFIG_HAVE_ARCH_RARE_WRITE_PTR
+# define __rare_write_ptr(v) ((__rare_write_type(v) *)&(v))
+# else
+# define __rare_write_ptr(v) __arch_rare_write_ptr(v)
+# endif
 # define __rare_write(__var, __val) ({			\
 	__rare_write_type(__var) *__rw_var;		\
 							\