From patchwork Tue Sep 29 18:35:08 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11806527
From: Alexander Popov
Subject: [PATCH RFC v2 1/6] mm: Extract SLAB_QUARANTINE from KASAN
Date: Tue, 29 Sep 2020 21:35:08 +0300
Message-Id: <20200929183513.380760-2-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Heap spraying is an exploitation technique that aims to put controlled bytes at a predetermined memory location on the heap. Heap spraying for exploiting use-after-free in the Linux kernel relies on the fact that on kmalloc(), the slab allocator returns the address of the memory that was recently freed.
Allocating a kernel object with the same size and controlled contents allows overwriting the vulnerable freed object. Let's extract slab freelist quarantine from KASAN functionality and call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap spraying technique for exploiting use-after-free vulnerabilities in the kernel code. If this feature is enabled, freed allocations are stored in the quarantine queue where they wait for actual freeing. So they can't be instantly reallocated and overwritten by use-after-free exploits. N.B. Heap spraying for out-of-bounds exploitation is another technique, heap quarantine doesn't break it. Signed-off-by: Alexander Popov --- include/linux/kasan.h | 107 ++++++++++++++++++++----------------- include/linux/slab_def.h | 2 +- include/linux/slub_def.h | 2 +- init/Kconfig | 13 +++++ mm/Makefile | 3 +- mm/kasan/Makefile | 2 + mm/kasan/kasan.h | 75 +++++++++++++------------- mm/kasan/quarantine.c | 2 + mm/kasan/slab_quarantine.c | 106 ++++++++++++++++++++++++++++++++++++ mm/slub.c | 2 +- 10 files changed, 225 insertions(+), 89 deletions(-) create mode 100644 mm/kasan/slab_quarantine.c diff --git a/include/linux/kasan.h b/include/linux/kasan.h index 087fba34b209..b837216f760c 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -42,32 +42,14 @@ void kasan_unpoison_task_stack(struct task_struct *task); void kasan_alloc_pages(struct page *page, unsigned int order); void kasan_free_pages(struct page *page, unsigned int order); -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, - slab_flags_t *flags); - void kasan_poison_slab(struct page *page); void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); void kasan_poison_object_data(struct kmem_cache *cache, void *object); void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, const void *object); -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, - gfp_t flags); void kasan_kfree_large(void *ptr, unsigned long ip); void kasan_poison_kfree(void *ptr, unsigned long ip); -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object, - size_t size, gfp_t flags); -void * __must_check kasan_krealloc(const void *object, size_t new_size, - gfp_t flags); - -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object, - gfp_t flags); -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); - -struct kasan_cache { - int alloc_meta_offset; - int free_meta_offset; -}; /* * These functions provide a special case to support backing module @@ -107,10 +89,6 @@ static inline void kasan_disable_current(void) {} static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} static inline void kasan_free_pages(struct page *page, unsigned int order) {} -static inline void kasan_cache_create(struct kmem_cache *cache, - unsigned int *size, - slab_flags_t *flags) {} - static inline void kasan_poison_slab(struct page *page) {} static inline void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) {} @@ -122,17 +100,65 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache, return (void *)object; } +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} +static inline void kasan_free_shadow(const struct vm_struct *vm) {} +static inline void kasan_remove_zero_shadow(void *start, unsigned long size) {} +static inline void kasan_unpoison_slab(const void *ptr) {} + +static inline int 
kasan_module_alloc(void *addr, size_t size) +{ + return 0; +} + +static inline int kasan_add_zero_shadow(void *start, unsigned long size) +{ + return 0; +} + +static inline size_t kasan_metadata_size(struct kmem_cache *cache) +{ + return 0; +} + +#endif /* CONFIG_KASAN */ + +struct kasan_cache { + int alloc_meta_offset; + int free_meta_offset; +}; + +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE) + +void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, + slab_flags_t *flags); +void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags); +void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object, + size_t size, gfp_t flags); +void * __must_check kasan_krealloc(const void *object, size_t new_size, + gfp_t flags); +void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags); +bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); + +#else /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */ + +static inline void kasan_cache_create(struct kmem_cache *cache, + unsigned int *size, + slab_flags_t *flags) {} + static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) { return ptr; } -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} + static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size, gfp_t flags) { return (void *)object; } + static inline void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags) { @@ -144,43 +170,28 @@ static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, { return object; } + static inline bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip) { return false; } - -static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } -static inline void kasan_free_shadow(const struct vm_struct *vm) {} - -static inline int kasan_add_zero_shadow(void *start, unsigned long size) -{ - return 0; -} -static inline void kasan_remove_zero_shadow(void *start, - unsigned long size) -{} - -static inline void kasan_unpoison_slab(const void *ptr) { } -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } - -#endif /* CONFIG_KASAN */ +#endif /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */ #ifdef CONFIG_KASAN_GENERIC - #define KASAN_SHADOW_INIT 0 - -void kasan_cache_shrink(struct kmem_cache *cache); -void kasan_cache_shutdown(struct kmem_cache *cache); void kasan_record_aux_stack(void *ptr); - #else /* CONFIG_KASAN_GENERIC */ +static inline void kasan_record_aux_stack(void *ptr) {} +#endif /* CONFIG_KASAN_GENERIC */ +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE) +void kasan_cache_shrink(struct kmem_cache *cache); +void kasan_cache_shutdown(struct kmem_cache *cache); +#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */ static inline void kasan_cache_shrink(struct kmem_cache *cache) {} static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} -static inline void kasan_record_aux_stack(void *ptr) {} - -#endif /* CONFIG_KASAN_GENERIC */ +#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */ #ifdef CONFIG_KASAN_SW_TAGS diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index 9eb430c163c2..fc7548f27512 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -72,7 +72,7 @@ struct kmem_cache { int obj_offset; #endif /* CONFIG_DEBUG_SLAB */ -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN) || 
defined(CONFIG_SLAB_QUARANTINE) struct kasan_cache kasan_info; #endif diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index 1be0ed5befa1..71020cee9fd2 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -124,7 +124,7 @@ struct kmem_cache { unsigned int *random_seq; #endif -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE) struct kasan_cache kasan_info; #endif diff --git a/init/Kconfig b/init/Kconfig index d6a0b31b13dc..358c8ce818f4 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1931,6 +1931,19 @@ config SLAB_FREELIST_HARDENED sanity-checking than others. This option is most effective with CONFIG_SLUB. +config SLAB_QUARANTINE + bool "Enable slab freelist quarantine" + depends on !KASAN && (SLAB || SLUB) + help + Enable slab freelist quarantine to delay reusing of freed slab + objects. If this feature is enabled, freed objects are stored + in the quarantine queue where they wait for actual freeing. + So they can't be instantly reallocated and overwritten by + use-after-free exploits. In other words, this feature mitigates + heap spraying technique for exploiting use-after-free + vulnerabilities in the kernel code. + KASAN also employs this feature for use-after-free detection. + config SHUFFLE_PAGE_ALLOCATOR bool "Page allocator randomization" default SLAB_FREELIST_RANDOM && ACPI_NUMA diff --git a/mm/Makefile b/mm/Makefile index d5649f1c12c0..c052bc616a88 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -52,7 +52,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \ mm_init.o percpu.o slab_common.o \ compaction.o vmacache.o \ interval_tree.o list_lru.o workingset.o \ - debug.o gup.o $(mmu-y) + debug.o gup.o kasan/ $(mmu-y) # Give 'page_alloc' its own module-parameter namespace page-alloc-y := page_alloc.o @@ -80,7 +80,6 @@ obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o -obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_MEMTEST) += memtest.o diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile index 370d970e5ab5..f6367d56a4d0 100644 --- a/mm/kasan/Makefile +++ b/mm/kasan/Makefile @@ -32,3 +32,5 @@ CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME) obj-$(CONFIG_KASAN) := common.o init.o report.o obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o + +obj-$(CONFIG_SLAB_QUARANTINE) += slab_quarantine.o quarantine.o diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index ac499456740f..6692177177a2 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -5,6 +5,43 @@ #include #include +struct qlist_node { + struct qlist_node *next; +}; + +struct kasan_track { + pid_t pid; + depot_stack_handle_t stack; +}; + +struct kasan_free_meta { + /* This field is used while the object is in the quarantine. + * Otherwise it might be used for the allocator freelist. 
+ */ + struct qlist_node quarantine_link; +#ifdef CONFIG_KASAN_GENERIC + struct kasan_track free_track; +#endif +}; + +struct kasan_free_meta *get_free_info(struct kmem_cache *cache, + const void *object); + +#if defined(CONFIG_KASAN_GENERIC) && \ + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \ + defined(CONFIG_SLAB_QUARANTINE) +void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); +void quarantine_reduce(void); +void quarantine_remove_cache(struct kmem_cache *cache); +#else +static inline void quarantine_put(struct kasan_free_meta *info, + struct kmem_cache *cache) { } +static inline void quarantine_reduce(void) { } +static inline void quarantine_remove_cache(struct kmem_cache *cache) { } +#endif + +#ifdef CONFIG_KASAN + #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) @@ -87,17 +124,8 @@ struct kasan_global { #endif }; -/** - * Structures to keep alloc and free tracks * - */ - #define KASAN_STACK_DEPTH 64 -struct kasan_track { - u32 pid; - depot_stack_handle_t stack; -}; - #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY #define KASAN_NR_FREE_STACKS 5 #else @@ -121,23 +149,8 @@ struct kasan_alloc_meta { #endif }; -struct qlist_node { - struct qlist_node *next; -}; -struct kasan_free_meta { - /* This field is used while the object is in the quarantine. - * Otherwise it might be used for the allocator freelist. - */ - struct qlist_node quarantine_link; -#ifdef CONFIG_KASAN_GENERIC - struct kasan_track free_track; -#endif -}; - struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, const void *object); -struct kasan_free_meta *get_free_info(struct kmem_cache *cache, - const void *object); static inline const void *kasan_shadow_to_mem(const void *shadow_addr) { @@ -178,18 +191,6 @@ void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag); struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, void *object, u8 tag); -#if defined(CONFIG_KASAN_GENERIC) && \ - (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) -void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); -void quarantine_reduce(void); -void quarantine_remove_cache(struct kmem_cache *cache); -#else -static inline void quarantine_put(struct kasan_free_meta *info, - struct kmem_cache *cache) { } -static inline void quarantine_reduce(void) { } -static inline void quarantine_remove_cache(struct kmem_cache *cache) { } -#endif - #ifdef CONFIG_KASAN_SW_TAGS void print_tags(u8 addr_tag, const void *addr); @@ -296,4 +297,6 @@ void __hwasan_storeN_noabort(unsigned long addr, size_t size); void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size); +#endif /* CONFIG_KASAN */ + #endif diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c index 4c5375810449..61666263c53e 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -145,7 +145,9 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache) if (IS_ENABLED(CONFIG_SLAB)) local_irq_save(flags); +#ifdef CONFIG_KASAN *(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE; +#endif ___cache_free(cache, object, _THIS_IP_); if (IS_ENABLED(CONFIG_SLAB)) diff --git a/mm/kasan/slab_quarantine.c b/mm/kasan/slab_quarantine.c new file mode 100644 index 000000000000..493c994ff87b --- /dev/null +++ b/mm/kasan/slab_quarantine.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * The layer providing KASAN slab quarantine separately without the + * main KASAN functionality. 
+ * + * Author: Alexander Popov + * + * This feature breaks widespread heap spraying technique used for + * exploiting use-after-free vulnerabilities in the kernel code. + * + * Heap spraying is an exploitation technique that aims to put controlled + * bytes at a predetermined memory location on the heap. Heap spraying for + * exploiting use-after-free in the Linux kernel relies on the fact that on + * kmalloc(), the slab allocator returns the address of the memory that was + * recently freed. Allocating a kernel object with the same size and + * controlled contents allows overwriting the vulnerable freed object. + * + * If freed allocations are stored in the quarantine queue where they wait + * for actual freeing, they can't be instantly reallocated and overwritten + * by use-after-free exploits. + * + * N.B. Heap spraying for out-of-bounds exploitation is another technique, + * heap quarantine doesn't break it. + */ + +#include +#include +#include +#include +#include "../slab.h" +#include "kasan.h" + +void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, + slab_flags_t *flags) +{ + cache->kasan_info.alloc_meta_offset = 0; + + if (WARN_ON(*size + sizeof(struct kasan_free_meta) > KMALLOC_MAX_SIZE)) { + cache->kasan_info.free_meta_offset = 0; + return; + } + + if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || + cache->object_size < sizeof(struct kasan_free_meta)) { + cache->kasan_info.free_meta_offset = *size; + *size += sizeof(struct kasan_free_meta); + } + + *flags |= SLAB_KASAN; +} + +struct kasan_free_meta *get_free_info(struct kmem_cache *cache, + const void *object) +{ + BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); + return (void *)object + cache->kasan_info.free_meta_offset; +} + +bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) +{ + quarantine_put(get_free_info(cache, object), cache); + return true; +} + +static void *reduce_helper(const void *ptr, gfp_t flags) +{ + if (gfpflags_allow_blocking(flags)) + quarantine_reduce(); + + return (void *)ptr; +} + +void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) +{ + return reduce_helper(ptr, flags); +} + +void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags) +{ + return reduce_helper(object, flags); +} + +void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object, + gfp_t flags) +{ + return reduce_helper(object, flags); +} + +void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, + size_t size, gfp_t flags) +{ + return reduce_helper(object, flags); +} +EXPORT_SYMBOL(kasan_kmalloc); + +void kasan_cache_shrink(struct kmem_cache *cache) +{ + quarantine_remove_cache(cache); +} + +void kasan_cache_shutdown(struct kmem_cache *cache) +{ + if (!__kmem_cache_empty(cache)) + quarantine_remove_cache(cache); +} diff --git a/mm/slub.c b/mm/slub.c index d4177aecedf6..6e276ed7606c 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3143,7 +3143,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, do_slab_free(s, page, head, tail, cnt, addr); } -#ifdef CONFIG_KASAN_GENERIC +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE) void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) { do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr); From patchwork Tue Sep 29 18:35:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Popov X-Patchwork-Id: 11806533 
From: Alexander Popov
Subject: [PATCH RFC v2 2/6] mm/slab: Perform init_on_free earlier
Date: Tue, 29 Sep 2020 21:35:09 +0300
Message-Id: <20200929183513.380760-3-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Currently with CONFIG_SLAB, init_on_free happens too late, and heap objects go into the heap quarantine still holding their data. Let's move the memory clearing before the kasan_slab_free() call to fix that.
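As an illustration of why this ordering matters, here is a minimal userspace C sketch (not kernel code; the toy quarantine and every name in it are invented for this example). An object that is erased before being parked in the quarantine has nothing left to leak through a dangling pointer while it waits there:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define Q_SLOTS 4

  static void *quarantine[Q_SLOTS];
  static int q_tail;

  /* Erase first, then park the object instead of freeing it right away. */
  static void quarantined_free(void *obj, size_t size)
  {
          memset(obj, 0, size);      /* the step this patch moves earlier */
          free(quarantine[q_tail]);  /* evict the oldest entry; free(NULL) is a no-op */
          quarantine[q_tail] = obj;
          q_tail = (q_tail + 1) % Q_SLOTS;
  }

  int main(void)
  {
          char *p = malloc(16);

          if (!p)
                  return 1;
          strcpy(p, "secret");
          quarantined_free(p, 16);
          /*
           * The object is still parked in the toy quarantine, so this read
           * is still legal here; thanks to the early memset it observes a
           * zero instead of the stale "secret" contents.
           */
          printf("first byte after free: %d\n", p[0]);
          return 0;
  }

In the unpatched CONFIG_SLAB path, the clearing happened only in ___cache_free(), after kasan_slab_free() had already diverted the object into the quarantine, so objects sat there dirty.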
Signed-off-by: Alexander Popov
Reviewed-by: Alexander Potapenko
---

 mm/slab.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 3160dff6fd76..5140203c5b76 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3414,6 +3414,9 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp, unsigned long caller) { + if (unlikely(slab_want_init_on_free(cachep))) + memset(objp, 0, cachep->object_size); + /* Put the object into the quarantine, don't touch it for now. */ if (kasan_slab_free(cachep, objp, _RET_IP_)) return; @@ -3432,8 +3435,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp, struct array_cache *ac = cpu_cache_get(cachep); check_irq_off(); - if (unlikely(slab_want_init_on_free(cachep))) - memset(objp, 0, cachep->object_size); kmemleak_free_recursive(objp, cachep->flags); objp = cache_free_debugcheck(cachep, objp, caller); memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);

From patchwork Tue Sep 29 18:35:10 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11806535
From: Alexander Popov
Subject: [PATCH RFC v2 3/6] mm: Integrate SLAB_QUARANTINE with init_on_free
Date: Tue, 29 Sep 2020 21:35:10 +0300
Message-Id: <20200929183513.380760-4-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Having a slab quarantine without memory erasing is harmful. If the quarantined objects are not cleaned and still contain data, then: 1. they remain useful for use-after-free exploitation, 2. there is no chance to detect a use-after-free access. So we want the quarantined objects to be erased. Enable init_on_free, which cleans objects before placing them into the quarantine. CONFIG_PAGE_POISONING should be disabled, since it cuts off init_on_free.

Signed-off-by: Alexander Popov
---

 init/Kconfig | 3 ++- mm/page_alloc.c | 22 ++++++++++++++++++++++ 2 files changed, 24 insertions(+), 1 deletion(-) diff --git a/init/Kconfig b/init/Kconfig index 358c8ce818f4..cd4cee71fd4e 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1933,7 +1933,8 @@ config SLAB_FREELIST_HARDENED config SLAB_QUARANTINE bool "Enable slab freelist quarantine" - depends on !KASAN && (SLAB || SLUB) + depends on !KASAN && (SLAB || SLUB) && !PAGE_POISONING + select INIT_ON_FREE_DEFAULT_ON help Enable slab freelist quarantine to delay reusing of freed slab objects. If this feature is enabled, freed objects are stored diff --git a/mm/page_alloc.c b/mm/page_alloc.c index fab5e97dc9ca..f67118e88500 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -168,6 +168,27 @@ static int __init early_init_on_alloc(char *buf) } early_param("init_on_alloc", early_init_on_alloc); +#ifdef CONFIG_SLAB_QUARANTINE +static int __init early_init_on_free(char *buf) +{ + /* + * Having slab quarantine without memory erasing is harmful. + * If the quarantined objects are not cleaned and contain data, then: + * 1. they will be useful for use-after-free exploitation, + * 2. use-after-free access may not be detected. + * So we want the quarantined objects to be erased. + * + * Enable init_on_free that cleans objects before placing them into + * the quarantine. CONFIG_PAGE_POISONING should be disabled since it + * cuts off init_on_free.
+ */ + BUILD_BUG_ON(!IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON)); + BUILD_BUG_ON(IS_ENABLED(CONFIG_PAGE_POISONING)); + pr_info("mem auto-init: init_on_free is on for CONFIG_SLAB_QUARANTINE\n"); + + return 0; +} +#else /* CONFIG_SLAB_QUARANTINE */ static int __init early_init_on_free(char *buf) { int ret; @@ -184,6 +205,7 @@ static int __init early_init_on_free(char *buf) static_branch_disable(&init_on_free); return ret; } +#endif /* CONFIG_SLAB_QUARANTINE */ early_param("init_on_free", early_init_on_free); /*

From patchwork Tue Sep 29 18:35:11 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11806537
From: Alexander Popov
Subject: [PATCH RFC v2 4/6] mm: Implement slab quarantine randomization
Date: Tue, 29 Sep 2020 21:35:11 +0300
Message-Id: <20200929183513.380760-5-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Randomization is very important for the security properties of the slab quarantine. Without it, the number of kmalloc()+kfree() calls needed before the vulnerable object is overwritten stays almost the same from run to run. Such predictability enables stable use-after-free exploitation, and we should not allow that.

This commit contains very compact and hackish changes that introduce the quarantine randomization. At first, all quarantine batches are filled with objects. Then, during quarantine reduction, we free a randomly chosen half of the objects from a randomly chosen batch. The randomized quarantine thus releases a freed object back to the allocator at an unpredictable moment, which is harmful to the heap spraying technique employed by use-after-free exploits.

Signed-off-by: Alexander Popov
---

 mm/kasan/quarantine.c | 79 +++++++++++++++++++++++++++++++++++++------ 1 file changed, 69 insertions(+), 10 deletions(-) diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c index 61666263c53e..4ce100605086 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -29,6 +29,7 @@ #include #include #include +#include #include "../slab.h" #include "kasan.h" @@ -89,8 +90,13 @@ static void qlist_move_all(struct qlist_head *from, struct qlist_head *to) } #define QUARANTINE_PERCPU_SIZE (1 << 20) + +#ifdef CONFIG_KASAN #define QUARANTINE_BATCHES \ (1024 > 4 * CONFIG_NR_CPUS ? 1024 : 4 * CONFIG_NR_CPUS) +#else +#define QUARANTINE_BATCHES 128 +#endif /* * The object quarantine consists of per-cpu queues and a global queue, @@ -110,10 +116,7 @@ DEFINE_STATIC_SRCU(remove_cache_srcu); /* Maximum size of the global queue. */ static unsigned long quarantine_max_size; -/* - * Target size of a batch in global_quarantine. - * Usually equal to QUARANTINE_PERCPU_SIZE unless we have too much RAM. - */ +/* Target size of a batch in global_quarantine. */ static unsigned long quarantine_batch_size; /* @@ -191,7 +194,12 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) q = this_cpu_ptr(&cpu_quarantine); qlist_put(q, &info->quarantine_link, cache->size); +#ifdef CONFIG_KASAN if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) { +#else + if (unlikely(q->bytes > min_t(size_t, QUARANTINE_PERCPU_SIZE, + READ_ONCE(quarantine_batch_size)))) { +#endif qlist_move_all(q, &temp); raw_spin_lock(&quarantine_lock); @@ -204,7 +212,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) new_tail = quarantine_tail + 1; if (new_tail == QUARANTINE_BATCHES) new_tail = 0; - if (new_tail != quarantine_head) + if (new_tail != quarantine_head || !IS_ENABLED(CONFIG_KASAN)) quarantine_tail = new_tail; } raw_spin_unlock(&quarantine_lock); @@ -213,12 +221,43 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) local_irq_restore(flags); } +static void qlist_move_random(struct qlist_head *from, struct qlist_head *to) +{ + struct qlist_node *curr; + + if (unlikely(qlist_empty(from))) + return; + + curr = from->head; + qlist_init(from); + while (curr) { + struct qlist_node *next = curr->next; + struct kmem_cache *obj_cache = qlink_to_cache(curr); + int rnd = get_random_int(); + + /* + * Hackish quarantine randomization, part 2: + * move only 1/2 of objects to the destination list. + * TODO: use random bits sparingly for better performance.
+ */ + if (rnd % 2 == 0) + qlist_put(to, curr, obj_cache->size); + else + qlist_put(from, curr, obj_cache->size); + + curr = next; + } +} + void quarantine_reduce(void) { - size_t total_size, new_quarantine_size, percpu_quarantines; + size_t total_size; unsigned long flags; int srcu_idx; struct qlist_head to_free = QLIST_INIT; +#ifdef CONFIG_KASAN + size_t new_quarantine_size, percpu_quarantines; +#endif if (likely(READ_ONCE(quarantine_size) <= READ_ONCE(quarantine_max_size))) @@ -236,12 +275,12 @@ void quarantine_reduce(void) srcu_idx = srcu_read_lock(&remove_cache_srcu); raw_spin_lock_irqsave(&quarantine_lock, flags); - /* - * Update quarantine size in case of hotplug. Allocate a fraction of - * the installed memory to quarantine minus per-cpu queue limits. - */ + /* Update quarantine size in case of hotplug */ total_size = (totalram_pages() << PAGE_SHIFT) / QUARANTINE_FRACTION; + +#ifdef CONFIG_KASAN + /* Subtract per-cpu queue limits from total quarantine size */ percpu_quarantines = QUARANTINE_PERCPU_SIZE * num_online_cpus(); new_quarantine_size = (total_size < percpu_quarantines) ? 0 : total_size - percpu_quarantines; @@ -257,6 +296,26 @@ void quarantine_reduce(void) if (quarantine_head == QUARANTINE_BATCHES) quarantine_head = 0; } +#else /* CONFIG_KASAN */ + /* + * Don't subtract per-cpu queue limits from total quarantine + * size to consume all quarantine slots. + */ + WRITE_ONCE(quarantine_max_size, total_size); + WRITE_ONCE(quarantine_batch_size, total_size / QUARANTINE_BATCHES); + + /* + * Hackish quarantine randomization, part 1: + * pick a random batch for reducing. + */ + if (likely(quarantine_size > quarantine_max_size)) { + do { + quarantine_head = get_random_int() % QUARANTINE_BATCHES; + } while (quarantine_head == quarantine_tail); + qlist_move_random(&global_quarantine[quarantine_head], &to_free); + WRITE_ONCE(quarantine_size, quarantine_size - to_free.bytes); + } +#endif raw_spin_unlock_irqrestore(&quarantine_lock, flags);

From patchwork Tue Sep 29 18:35:12 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11806539
From: Alexander Popov
Subject: [PATCH RFC v2 5/6] lkdtm: Add heap quarantine tests
Date: Tue, 29 Sep 2020 21:35:12 +0300
Message-Id: <20200929183513.380760-6-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Add tests for CONFIG_SLAB_QUARANTINE.

The HEAP_SPRAY test aims to reallocate a recently freed heap object. It allocates and frees an object from a separate kmem_cache and then allocates 400000 similar objects from it, i.e. it performs the original heap spraying technique for use-after-free exploitation. If CONFIG_SLAB_QUARANTINE is disabled, the freed object is instantly reallocated and overwritten, which is required for a successful attack.

The PUSH_THROUGH_QUARANTINE test allocates and frees an object from a separate kmem_cache and then performs kmem_cache_alloc()+kmem_cache_free() 400000 times. This test pushes the object through the heap quarantine and reallocates it after it returns to the allocator freelist. If CONFIG_SLAB_QUARANTINE is enabled, this test should show that the randomized quarantine releases the freed object at an unpredictable moment, which makes use-after-free exploitation much harder.
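For reference, assuming the standard LKDTM debugfs interface (CONFIG_LKDTM enabled and debugfs mounted), these tests can be triggered at runtime by writing their names to the provoke-crash file, with the results reported via pr_info() in the kernel log:

  echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
  echo PUSH_THROUGH_QUARANTINE > /sys/kernel/debug/provoke-crash/DIRECT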
Signed-off-by: Alexander Popov --- drivers/misc/lkdtm/core.c | 2 + drivers/misc/lkdtm/heap.c | 110 +++++++++++++++++++++++++++++++++++++ drivers/misc/lkdtm/lkdtm.h | 2 + 3 files changed, 114 insertions(+) diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c index a5e344df9166..6be5ca49ae6b 100644 --- a/drivers/misc/lkdtm/core.c +++ b/drivers/misc/lkdtm/core.c @@ -126,6 +126,8 @@ static const struct crashtype crashtypes[] = { CRASHTYPE(SLAB_FREE_DOUBLE), CRASHTYPE(SLAB_FREE_CROSS), CRASHTYPE(SLAB_FREE_PAGE), + CRASHTYPE(HEAP_SPRAY), + CRASHTYPE(PUSH_THROUGH_QUARANTINE), CRASHTYPE(SOFTLOCKUP), CRASHTYPE(HARDLOCKUP), CRASHTYPE(SPINLOCKUP), diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c index 1323bc16f113..f666a08d9462 100644 --- a/drivers/misc/lkdtm/heap.c +++ b/drivers/misc/lkdtm/heap.c @@ -10,6 +10,7 @@ static struct kmem_cache *double_free_cache; static struct kmem_cache *a_cache; static struct kmem_cache *b_cache; +static struct kmem_cache *spray_cache; /* * This tries to stay within the next largest power-of-2 kmalloc cache @@ -204,6 +205,112 @@ static void ctor_a(void *region) { } static void ctor_b(void *region) { } +static void ctor_spray(void *region) +{ } + +#define SPRAY_LENGTH 400000 +#define SPRAY_ITEM_SIZE 333 + +void lkdtm_HEAP_SPRAY(void) +{ + int *addr; + int **spray_addrs = NULL; + unsigned long i = 0; + + addr = kmem_cache_alloc(spray_cache, GFP_KERNEL); + if (!addr) { + pr_info("Can't allocate memory in spray_cache cache\n"); + return; + } + + memset(addr, 0xA5, SPRAY_ITEM_SIZE); + kmem_cache_free(spray_cache, addr); + pr_info("Allocated and freed spray_cache object %p of size %d\n", + addr, SPRAY_ITEM_SIZE); + + spray_addrs = kcalloc(SPRAY_LENGTH, sizeof(int *), GFP_KERNEL); + if (!spray_addrs) { + pr_info("Unable to allocate memory for spray_addrs\n"); + return; + } + + pr_info("Original heap spraying: allocate %d objects of size %d...\n", + SPRAY_LENGTH, SPRAY_ITEM_SIZE); + for (i = 0; i < SPRAY_LENGTH; i++) { + spray_addrs[i] = kmem_cache_alloc(spray_cache, GFP_KERNEL); + if (!spray_addrs[i]) { + pr_info("Can't allocate memory in spray_cache cache\n"); + break; + } + + memset(spray_addrs[i], 0x42, SPRAY_ITEM_SIZE); + + if (spray_addrs[i] == addr) { + pr_info("FAIL: attempt %lu: freed object is reallocated\n", i); + break; + } + } + + if (i == SPRAY_LENGTH) + pr_info("OK: original heap spraying hasn't succeed\n"); + + for (i = 0; i < SPRAY_LENGTH; i++) { + if (spray_addrs[i]) + kmem_cache_free(spray_cache, spray_addrs[i]); + } + + kfree(spray_addrs); +} + +/* + * Pushing an object through the quarantine requires both allocating and + * freeing memory. Objects are released from the quarantine on new memory + * allocations, but only when the quarantine size is over the limit. + * And the quarantine size grows on new memory freeing. + * + * This test should show that the randomized quarantine will release the + * freed object at an unpredictable moment. 
+ */ +void lkdtm_PUSH_THROUGH_QUARANTINE(void) +{ + int *addr; + int *push_addr; + unsigned long i; + + addr = kmem_cache_alloc(spray_cache, GFP_KERNEL); + if (!addr) { + pr_info("Can't allocate memory in spray_cache cache\n"); + return; + } + + memset(addr, 0xA5, SPRAY_ITEM_SIZE); + kmem_cache_free(spray_cache, addr); + pr_info("Allocated and freed spray_cache object %p of size %d\n", + addr, SPRAY_ITEM_SIZE); + + pr_info("Push through quarantine: allocate and free %d objects of size %d...\n", + SPRAY_LENGTH, SPRAY_ITEM_SIZE); + for (i = 0; i < SPRAY_LENGTH; i++) { + push_addr = kmem_cache_alloc(spray_cache, GFP_KERNEL); + if (!push_addr) { + pr_info("Can't allocate memory in spray_cache cache\n"); + break; + } + + memset(push_addr, 0x42, SPRAY_ITEM_SIZE); + kmem_cache_free(spray_cache, push_addr); + + if (push_addr == addr) { + pr_info("Target object is reallocated at attempt %lu\n", i); + break; + } + } + + if (i == SPRAY_LENGTH) { + pr_info("Target object is NOT reallocated in %d attempts\n", + SPRAY_LENGTH); + } +} void __init lkdtm_heap_init(void) { @@ -211,6 +318,8 @@ 64, 0, 0, ctor_double_free); a_cache = kmem_cache_create("lkdtm-heap-a", 64, 0, 0, ctor_a); b_cache = kmem_cache_create("lkdtm-heap-b", 64, 0, 0, ctor_b); + spray_cache = kmem_cache_create("lkdtm-heap-spray", + SPRAY_ITEM_SIZE, 0, 0, ctor_spray); } void __exit lkdtm_heap_exit(void) { @@ -218,4 +327,5 @@ kmem_cache_destroy(double_free_cache); kmem_cache_destroy(a_cache); kmem_cache_destroy(b_cache); + kmem_cache_destroy(spray_cache); } diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h index 8878538b2c13..d6b4b0708359 100644 --- a/drivers/misc/lkdtm/lkdtm.h +++ b/drivers/misc/lkdtm/lkdtm.h @@ -45,6 +45,8 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void); void lkdtm_SLAB_FREE_DOUBLE(void); void lkdtm_SLAB_FREE_CROSS(void); void lkdtm_SLAB_FREE_PAGE(void); +void lkdtm_HEAP_SPRAY(void); +void lkdtm_PUSH_THROUGH_QUARANTINE(void); /* lkdtm_perms.c */ void __init lkdtm_perms_init(void);

From patchwork Tue Sep 29 18:35:13 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11806541
From: Alexander Popov
Subject: [PATCH RFC v2 6/6] mm: Add heap quarantine verbose debugging (not for merge)
Date: Tue, 29 Sep 2020 21:35:13 +0300
Message-Id: <20200929183513.380760-7-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Add verbose debugging for a deeper understanding of the heap quarantine's inner workings (this patch is not intended for merging).

Signed-off-by: Alexander Popov
---

 mm/kasan/quarantine.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c index 4ce100605086..98cd6e963755 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -203,6 +203,12 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) qlist_move_all(q, &temp); raw_spin_lock(&quarantine_lock); + + pr_info("quarantine: PUT %zu to tail batch %d, whole sz %zu, batch sz %lu\n", + temp.bytes, quarantine_tail, + READ_ONCE(quarantine_size), + READ_ONCE(quarantine_batch_size)); + WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes); qlist_move_all(&temp, &global_quarantine[quarantine_tail]); if (global_quarantine[quarantine_tail].bytes >= @@ -313,7 +319,22 @@ void quarantine_reduce(void) quarantine_head = get_random_int() % QUARANTINE_BATCHES; } while (quarantine_head == quarantine_tail); qlist_move_random(&global_quarantine[quarantine_head], &to_free); + pr_info("quarantine: whole sz exceed max by %lu, REDUCE head batch %d by %zu, leave %zu\n", + quarantine_size - quarantine_max_size, + quarantine_head, to_free.bytes, + global_quarantine[quarantine_head].bytes); WRITE_ONCE(quarantine_size, quarantine_size - to_free.bytes); + + if (quarantine_head == 0) { + unsigned long i; + + pr_info("quarantine: data level in batches:"); + for (i = 0; i < QUARANTINE_BATCHES; i++) { + pr_info(" %lu - %lu%%\n", + i, global_quarantine[i].bytes * + 100 / quarantine_batch_size); + } + } } #endif
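Taken together, patches 4 and 6 describe a quarantine that fills its batches in order but drains them at random. The following minimal userspace C sketch (illustrative only; the batch layout and all names are invented and far simpler than the kernel's qlist machinery) models that reduction step: a random batch is picked, and each of its entries is released with probability 1/2, so an attacker cannot predict when a particular freed object returns to the allocator:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define BATCHES 8
  #define PER_BATCH 16

  static int batch_fill[BATCHES];  /* number of quarantined objects per batch */

  /* Model of the randomized reduction: drain roughly half of one random batch. */
  static void quarantine_reduce_toy(void)
  {
          int victim = rand() % BATCHES;
          int released = 0;
          int i;

          for (i = 0; i < batch_fill[victim]; i++)
                  released += rand() & 1;  /* each entry leaves with p = 1/2 */
          batch_fill[victim] -= released;
          printf("reduce: batch %d released %d, %d still quarantined\n",
                 victim, released, batch_fill[victim]);
  }

  int main(void)
  {
          int i;

          srand((unsigned int)time(NULL));
          for (i = 0; i < BATCHES; i++)
                  batch_fill[i] = PER_BATCH;

          /* Run a few reductions: which objects get released is unpredictable. */
          for (i = 0; i < 4; i++)
                  quarantine_reduce_toy();
          return 0;
  }

This is exactly the behavior the PUSH_THROUGH_QUARANTINE test from patch 5 is meant to expose, and the pr_info() instrumentation from patch 6 lets one watch the batches fill and drain in the kernel log.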