From patchwork Thu Aug 13 15:19:21 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11712589
From: Alexander Popov
To: Kees Cook, Jann Horn, Will Deacon, Andrey Ryabinin, Alexander Potapenko,
    Dmitry Vyukov, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton, Masahiro Yamada, Masami Hiramatsu,
    Steven Rostedt, Peter Zijlstra, Krzysztof Kozlowski, Patrick Bellasi,
    David Howells, Eric Biederman, Johannes Weiner, Laura Abbott,
    Arnd Bergmann, Greg Kroah-Hartman, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
    linux-kernel@vger.kernel.org, Alexander Popov
Cc: notify@kernel.org
Subject: [PATCH RFC 1/2] mm: Extract SLAB_QUARANTINE from KASAN
Date: Thu, 13 Aug 2020 18:19:21 +0300
Message-Id: <20200813151922.1093791-2-alex.popov@linux.com>
In-Reply-To: <20200813151922.1093791-1-alex.popov@linux.com>
References: <20200813151922.1093791-1-alex.popov@linux.com>

Heap spraying is an exploitation technique that aims to put controlled
bytes at a predetermined memory location on the heap. Heap spraying for
exploiting use-after-free in the Linux kernel relies on the fact that on
kmalloc(), the slab allocator returns the address of the memory that was
recently freed. Allocating a kernel object with the same size and
controlled contents allows overwriting the vulnerable freed object.

Let's extract the slab freelist quarantine from the KASAN functionality
and call it CONFIG_SLAB_QUARANTINE. This feature breaks the widespread
heap spraying technique used for exploiting use-after-free
vulnerabilities in the kernel code. If this feature is enabled, freed
allocations are stored in the quarantine and can't be instantly
reallocated and overwritten by an exploit performing heap spraying.
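For illustration only (this sketch is not part of the patch; the object
size, cache and function name are arbitrary), the reuse pattern that heap
spraying relies on, and that the quarantine delays, looks roughly like this:

  #include <linux/slab.h>
  #include <linux/string.h>

  /* Hypothetical sketch: reclaiming a recently freed slab object. */
  static void uaf_reuse_sketch(void)
  {
  	char *victim, *spray;

  	victim = kmalloc(128, GFP_KERNEL);	/* vulnerable object */
  	kfree(victim);				/* buggy code keeps using 'victim' */

  	spray = kmalloc(128, GFP_KERNEL);	/* attacker-shaped allocation */
  	if (spray == victim)			/* likely without a quarantine */
  		memset(spray, 0x41, 128);	/* controlled bytes at the stale address */

  	kfree(spray);
  }

With CONFIG_SLAB_QUARANTINE the freed object is not handed back to the next
same-size allocation, so the reuse check above is very unlikely to hit.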
Signed-off-by: Alexander Popov
---
 include/linux/kasan.h      | 107 ++++++++++++++++++++-----------------
 include/linux/slab_def.h   |   2 +-
 include/linux/slub_def.h   |   2 +-
 init/Kconfig               |  11 ++++
 mm/Makefile                |   3 +-
 mm/kasan/Makefile          |   2 +
 mm/kasan/kasan.h           |  75 +++++++++++++-------------
 mm/kasan/quarantine.c      |   2 +
 mm/kasan/slab_quarantine.c |  99 ++++++++++++++++++++++++++++++++++
 mm/slub.c                  |   2 +-
 10 files changed, 216 insertions(+), 89 deletions(-)
 create mode 100644 mm/kasan/slab_quarantine.c

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 087fba34b209..b837216f760c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -42,32 +42,14 @@ void kasan_unpoison_task_stack(struct task_struct *task);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
-			slab_flags_t *flags);
-
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 					const void *object);
 
-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
-					gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
-				  size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
-				   gfp_t flags);
-
-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
-				     gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
-
-struct kasan_cache {
-	int alloc_meta_offset;
-	int free_meta_offset;
-};
 
 /*
  * These functions provide a special case to support backing module
@@ -107,10 +89,6 @@ static inline void kasan_disable_current(void) {}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
-static inline void kasan_cache_create(struct kmem_cache *cache,
-				      unsigned int *size,
-				      slab_flags_t *flags) {}
-
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
 				void *object) {}
@@ -122,17 +100,65 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline void kasan_remove_zero_shadow(void *start, unsigned long size) {}
+static inline void kasan_unpoison_slab(const void *ptr) {}
+
+static inline int kasan_module_alloc(void *addr, size_t size)
+{
+	return 0;
+}
+
+static inline int kasan_add_zero_shadow(void *start, unsigned long size)
+{
+	return 0;
+}
+
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	return 0;
+}
+
+#endif /* CONFIG_KASAN */
+
+struct kasan_cache {
+	int alloc_meta_offset;
+	int free_meta_offset;
+};
+
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			slab_flags_t *flags);
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+					gfp_t flags);
+void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
+				  size_t size, gfp_t flags);
+void * __must_check kasan_krealloc(const void *object, size_t new_size,
+				   gfp_t flags);
+void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
+				     gfp_t flags);
+bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+
+#else /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
+
+static inline void kasan_cache_create(struct kmem_cache *cache,
+				      unsigned int *size,
+				      slab_flags_t *flags) {}
+
 static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
 {
 	return ptr;
 }
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+
 static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
 				size_t size, gfp_t flags)
 {
 	return (void *)object;
 }
+
 static inline void *kasan_krealloc(const void *object, size_t new_size,
 				 gfp_t flags)
 {
@@ -144,43 +170,28 @@ static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
 {
 	return object;
 }
+
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 				   unsigned long ip)
 {
 	return false;
 }
-
-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
-
-static inline int kasan_add_zero_shadow(void *start, unsigned long size)
-{
-	return 0;
-}
-static inline void kasan_remove_zero_shadow(void *start,
-					unsigned long size)
-{}
-
-static inline void kasan_unpoison_slab(const void *ptr) { }
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
-
-#endif /* CONFIG_KASAN */
+#endif /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
 
 #ifdef CONFIG_KASAN_GENERIC
-
 #define KASAN_SHADOW_INIT 0
-
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
 void kasan_record_aux_stack(void *ptr);
-
 #else /* CONFIG_KASAN_GENERIC */
+static inline void kasan_record_aux_stack(void *ptr) {}
+#endif /* CONFIG_KASAN_GENERIC */
 
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
+void kasan_cache_shrink(struct kmem_cache *cache);
+void kasan_cache_shutdown(struct kmem_cache *cache);
+#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
-static inline void kasan_record_aux_stack(void *ptr) {}
-
-#endif /* CONFIG_KASAN_GENERIC */
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
 
 #ifdef CONFIG_KASAN_SW_TAGS
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9eb430c163c2..fc7548f27512 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -72,7 +72,7 @@ struct kmem_cache {
 	int obj_offset;
 #endif /* CONFIG_DEBUG_SLAB */
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
 	struct kasan_cache kasan_info;
 #endif
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 1be0ed5befa1..71020cee9fd2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -124,7 +124,7 @@ struct kmem_cache {
 	unsigned int *random_seq;
 #endif
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
 	struct kasan_cache kasan_info;
 #endif
diff --git a/init/Kconfig b/init/Kconfig
index d6a0b31b13dc..de5aa061762f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1931,6 +1931,17 @@ config SLAB_FREELIST_HARDENED
 	  sanity-checking than others. This option is most effective with
 	  CONFIG_SLUB.
 
+config SLAB_QUARANTINE
+	bool "Enable slab freelist quarantine"
+	depends on !KASAN && (SLAB || SLUB)
+	help
+	  Enable slab freelist quarantine to break heap spraying technique
+	  used for exploiting use-after-free vulnerabilities in the kernel
+	  code. If this feature is enabled, freed allocations are stored
+	  in the quarantine and can't be instantly reallocated and
+	  overwritten by the exploit performing heap spraying.
+	  This feature is a part of KASAN functionality.
+
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
 	default SLAB_FREELIST_RANDOM && ACPI_NUMA
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..c052bc616a88 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -52,7 +52,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o percpu.o slab_common.o \
 			   compaction.o vmacache.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o gup.o $(mmu-y)
+			   debug.o gup.o kasan/ $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
@@ -80,7 +80,6 @@ obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
-obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 370d970e5ab5..f6367d56a4d0 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -32,3 +32,5 @@ CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME)
 obj-$(CONFIG_KASAN) := common.o init.o report.o
 obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
 obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o
+
+obj-$(CONFIG_SLAB_QUARANTINE) += slab_quarantine.o quarantine.o
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ac499456740f..979c5600db8c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,43 @@
 #include
 #include
 
+struct qlist_node {
+	struct qlist_node *next;
+};
+
+struct kasan_track {
+	u32 pid;
+	depot_stack_handle_t stack;
+};
+
+struct kasan_free_meta {
+	/* This field is used while the object is in the quarantine.
+	 * Otherwise it might be used for the allocator freelist.
+	 */
+	struct qlist_node quarantine_link;
+#ifdef CONFIG_KASAN_GENERIC
+	struct kasan_track free_track;
+#endif
+};
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+					const void *object);
+
+#if defined(CONFIG_KASAN_GENERIC) && \
+	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \
+	defined(CONFIG_SLAB_QUARANTINE)
+void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_reduce(void);
+void quarantine_remove_cache(struct kmem_cache *cache);
+#else
+static inline void quarantine_put(struct kasan_free_meta *info,
+					struct kmem_cache *cache) { }
+static inline void quarantine_reduce(void) { }
+static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
+#endif
+
+#ifdef CONFIG_KASAN
+
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
@@ -87,17 +124,8 @@ struct kasan_global {
 #endif
 };
 
-/**
- * Structures to keep alloc and free tracks *
- */
-
 #define KASAN_STACK_DEPTH 64
 
-struct kasan_track {
-	u32 pid;
-	depot_stack_handle_t stack;
-};
-
 #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
 #define KASAN_NR_FREE_STACKS 5
 #else
@@ -121,23 +149,8 @@ struct kasan_alloc_meta {
 #endif
 };
 
-struct qlist_node {
-	struct qlist_node *next;
-};
-struct kasan_free_meta {
-	/* This field is used while the object is in the quarantine.
-	 * Otherwise it might be used for the allocator freelist.
-	 */
-	struct qlist_node quarantine_link;
-#ifdef CONFIG_KASAN_GENERIC
-	struct kasan_track free_track;
-#endif
-};
-
 struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 					const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
-					const void *object);
 
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
@@ -178,18 +191,6 @@ void kasan_set_free_info(struct kmem_cache *cache,
 			void *object, u8 tag);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 			void *object, u8 tag);
 
-#if defined(CONFIG_KASAN_GENERIC) && \
-	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
-void quarantine_reduce(void);
-void quarantine_remove_cache(struct kmem_cache *cache);
-#else
-static inline void quarantine_put(struct kasan_free_meta *info,
-					struct kmem_cache *cache) { }
-static inline void quarantine_reduce(void) { }
-static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
-#endif
-
 #ifdef CONFIG_KASAN_SW_TAGS
 
 void print_tags(u8 addr_tag, const void *addr);
@@ -296,4 +297,6 @@ void __hwasan_storeN_noabort(unsigned long addr, size_t size);
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size);
 
+#endif /* CONFIG_KASAN */
+
 #endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 4c5375810449..61666263c53e 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -145,7 +145,9 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
 	if (IS_ENABLED(CONFIG_SLAB))
 		local_irq_save(flags);
 
+#ifdef CONFIG_KASAN
 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
+#endif
 	___cache_free(cache, object, _THIS_IP_);
 
 	if (IS_ENABLED(CONFIG_SLAB))
diff --git a/mm/kasan/slab_quarantine.c b/mm/kasan/slab_quarantine.c
new file mode 100644
index 000000000000..5764aa7ad253
--- /dev/null
+++ b/mm/kasan/slab_quarantine.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * The layer providing KASAN slab quarantine separately without the
+ * main KASAN functionality.
+ *
+ * Author: Alexander Popov
+ *
+ * This feature breaks widespread heap spraying technique used for
+ * exploiting use-after-free vulnerabilities in the kernel code.
+ *
+ * Heap spraying is an exploitation technique that aims to put controlled
+ * bytes at a predetermined memory location on the heap. Heap spraying for
+ * exploiting use-after-free in the Linux kernel relies on the fact that on
+ * kmalloc(), the slab allocator returns the address of the memory that was
+ * recently freed. Allocating a kernel object with the same size and
+ * controlled contents allows overwriting the vulnerable freed object.
+ *
+ * If freed allocations are stored in the quarantine, they can't be
+ * instantly reallocated and overwritten by the exploit performing
+ * heap spraying.
+ */
+
+#include
+#include
+#include
+#include
+#include "../slab.h"
+#include "kasan.h"
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			slab_flags_t *flags)
+{
+	cache->kasan_info.alloc_meta_offset = 0;
+
+	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta)) {
+		cache->kasan_info.free_meta_offset = *size;
+		*size += sizeof(struct kasan_free_meta);
+		BUG_ON(*size > KMALLOC_MAX_SIZE);
+	}
+
+	*flags |= SLAB_KASAN;
+}
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+					const void *object)
+{
+	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
+bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+{
+	quarantine_put(get_free_info(cache, object), cache);
+	return true;
+}
+
+static void *reduce_helper(const void *ptr, gfp_t flags)
+{
+	if (gfpflags_allow_blocking(flags))
+		quarantine_reduce();
+
+	return (void *)ptr;
+}
+
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+					gfp_t flags)
+{
+	return reduce_helper(ptr, flags);
+}
+
+void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+					gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
+					size_t size, gfp_t flags)
+{
+	return reduce_helper(object, flags);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_cache_shrink(struct kmem_cache *cache)
+{
+	quarantine_remove_cache(cache);
+}
+
+void kasan_cache_shutdown(struct kmem_cache *cache)
+{
+	if (!__kmem_cache_empty(cache))
+		quarantine_remove_cache(cache);
+}
diff --git a/mm/slub.c b/mm/slub.c
index 68c02b2eecd9..8d6620effa3c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3143,7 +3143,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 	do_slab_free(s, page, head, tail, cnt, addr);
 }
 
-#ifdef CONFIG_KASAN_GENERIC
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
 	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);

From patchwork Thu Aug 13 15:19:22 2020
X-Patchwork-Submitter: Alexander Popov
X-Patchwork-Id: 11712593
From: Alexander Popov
To: Kees Cook, Jann Horn, Will Deacon, Andrey Ryabinin, Alexander Potapenko,
    Dmitry Vyukov, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton, Masahiro Yamada, Masami Hiramatsu,
    Steven Rostedt, Peter Zijlstra, Krzysztof Kozlowski, Patrick Bellasi,
    David Howells, Eric Biederman, Johannes Weiner, Laura Abbott,
    Arnd Bergmann, Greg Kroah-Hartman, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
    linux-kernel@vger.kernel.org, Alexander Popov
Cc: notify@kernel.org
Subject: [PATCH RFC 2/2] lkdtm: Add heap spraying test
Date: Thu, 13 Aug 2020 18:19:22 +0300
Message-Id: <20200813151922.1093791-3-alex.popov@linux.com>
In-Reply-To: <20200813151922.1093791-1-alex.popov@linux.com>
References: <20200813151922.1093791-1-alex.popov@linux.com>

Add a simple test for CONFIG_SLAB_QUARANTINE. It performs heap spraying
that aims to reallocate the recently freed heap object. This technique
is used for exploiting use-after-free vulnerabilities in the kernel code.
The test shows that CONFIG_SLAB_QUARANTINE breaks this heap spraying
exploitation technique.

Signed-off-by: Alexander Popov
---
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/heap.c  | 40 ++++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 42 insertions(+)

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index a5e344df9166..78b7669c35eb 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -126,6 +126,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(SLAB_FREE_DOUBLE),
 	CRASHTYPE(SLAB_FREE_CROSS),
 	CRASHTYPE(SLAB_FREE_PAGE),
+	CRASHTYPE(HEAP_SPRAY),
 	CRASHTYPE(SOFTLOCKUP),
 	CRASHTYPE(HARDLOCKUP),
 	CRASHTYPE(SPINLOCKUP),
diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
index 1323bc16f113..a72a241e314a 100644
--- a/drivers/misc/lkdtm/heap.c
+++ b/drivers/misc/lkdtm/heap.c
@@ -205,6 +205,46 @@ static void ctor_a(void *region)
 static void ctor_b(void *region)
 { }
 
+#define HEAP_SPRAY_SIZE 128
+
+void lkdtm_HEAP_SPRAY(void)
+{
+	int *addr;
+	int *spray_addrs[HEAP_SPRAY_SIZE] = { 0 };
+	unsigned long i = 0;
+
+	addr = kmem_cache_alloc(a_cache, GFP_KERNEL);
+	if (!addr) {
+		pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
+		return;
+	}
+
+	*addr = 0x31337;
+	kmem_cache_free(a_cache, addr);
+
+	pr_info("Performing heap spraying...\n");
+	for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
+		spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
+		*spray_addrs[i] = 0x31337;
+		pr_info("attempt %lu: spray alloc addr %p vs freed addr %p\n",
+			i, spray_addrs[i], addr);
+		if (spray_addrs[i] == addr) {
+			pr_info("freed addr is reallocated!\n");
+			break;
+		}
+	}
+
+	if (i < HEAP_SPRAY_SIZE)
+		pr_info("FAIL! Heap spraying succeeded :(\n");
+	else
+		pr_info("OK! Heap spraying hasn't succeeded :)\n");
+
+	for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
+		if (spray_addrs[i])
+			kmem_cache_free(a_cache, spray_addrs[i]);
+	}
+}
+
 void __init lkdtm_heap_init(void)
 {
 	double_free_cache = kmem_cache_create("lkdtm-heap-double_free",
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 8878538b2c13..dfafb4ae6f3a 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -45,6 +45,7 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void);
 void lkdtm_SLAB_FREE_DOUBLE(void);
 void lkdtm_SLAB_FREE_CROSS(void);
 void lkdtm_SLAB_FREE_PAGE(void);
+void lkdtm_HEAP_SPRAY(void);
 
 /* lkdtm_perms.c */
 void __init lkdtm_perms_init(void);
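
A note on usage (assuming LKDTM's standard debugfs trigger interface; adjust
the path if debugfs is mounted elsewhere): with the series applied, the test
can be run and checked roughly like this:

  # trigger the new crashpoint and inspect the result
  echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
  dmesg | tail

With CONFIG_SLAB_QUARANTINE=y the expected outcome is the "OK!" message,
since the freed object is held in the quarantine instead of being handed
back to one of the spraying allocations.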