From patchwork Wed Jun 20 17:39:59 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 10478271
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	"Eric W. Biederman", Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
	Arnd Bergmann, "Kirill A. Shutemov", Greg Kroah-Hartman,
	Kate Stewart, Mike Rapoport, kasan-dev@googlegroups.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org,
	linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Andrey Konovalov
Subject: [PATCH v3 13/17] khwasan: add hooks implementation
Date: Wed, 20 Jun 2018 19:39:59 +0200
X-Mailer: git-send-email 2.18.0.rc1.244.gcf134e6275-goog
Shutemov" , Greg Kroah-Hartman , Kate Stewart , Mike Rapoport , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Jann Horn , Mark Brand , Chintan Pandya , Andrey Konovalov Subject: [PATCH v3 13/17] khwasan: add hooks implementation Date: Wed, 20 Jun 2018 19:39:59 +0200 Message-Id: X-Mailer: git-send-email 2.18.0.rc1.244.gcf134e6275-goog In-Reply-To: References: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP This commit adds KHWASAN specific hooks implementation and adjusts common KASAN and KHWASAN ones. 1. When a new slab cache is created, KHWASAN rounds up the size of the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16). 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory, that corresponds to this object to this tag, and embeds this tag value into the top byte of the returned pointer. 3. On each kfree KHWASAN poisons the shadow memory with a random tag to allow detection of use-after-free bugs. The rest of the logic of the hook implementation is very much similar to the one provided by KASAN. KHWASAN saves allocation and free stack metadata to the slab object the same was KASAN does this. Signed-off-by: Andrey Konovalov --- mm/kasan/common.c | 83 +++++++++++++++++++++++++++++++++++----------- mm/kasan/khwasan.c | 40 ++++++++++++++++++++++ 2 files changed, 103 insertions(+), 20 deletions(-) diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 656baa8984c7..1e96ca050c75 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) { void *shadow_start, *shadow_end; + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + shadow_start = kasan_mem_to_shadow(address); shadow_end = kasan_mem_to_shadow(address + size); @@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) void kasan_unpoison_shadow(const void *address, size_t size) { - kasan_poison_shadow(address, size, 0); + u8 tag = get_tag(address); + + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + + kasan_poison_shadow(address, size, tag); if (size & KASAN_SHADOW_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); - *shadow = size & KASAN_SHADOW_MASK; + + if (IS_ENABLED(CONFIG_KASAN_HW)) + *shadow = tag; + else + *shadow = size & KASAN_SHADOW_MASK; } } @@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark) void kasan_alloc_pages(struct page *page, unsigned int order) { - if (likely(!PageHighMem(page))) - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); + if (unlikely(PageHighMem(page))) + return; + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); } void kasan_free_pages(struct page *page, unsigned int order) @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) { unsigned int orig_size = *size; + unsigned int redzone_size = 0; int redzone_adjust; /* Add alloc meta. 

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 83 +++++++++++++++++++++++++++++++++++-----------
 mm/kasan/khwasan.c | 40 ++++++++++++++++++++++
 2 files changed, 103 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 656baa8984c7..1e96ca050c75 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
-	kasan_poison_shadow(address, size, 0);
+	u8 tag = get_tag(address);
+
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
+	kasan_poison_shadow(address, size, tag);
 
 	if (size & KASAN_SHADOW_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-		*shadow = size & KASAN_SHADOW_MASK;
+
+		if (IS_ENABLED(CONFIG_KASAN_HW))
+			*shadow = tag;
+		else
+			*shadow = size & KASAN_SHADOW_MASK;
 	}
 }
 
@@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
-	if (likely(!PageHighMem(page)))
-		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+	if (unlikely(PageHighMem(page)))
+		return;
+	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
@@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags)
 {
 	unsigned int orig_size = *size;
+	unsigned int redzone_size = 0;
 	int redzone_adjust;
 
 	/* Add alloc meta. */
@@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta))) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
 	}
 
-	redzone_adjust = optimal_redzone(cache->object_size) -
-		(*size - cache->object_size);
+	redzone_size = optimal_redzone(cache->object_size);
+	redzone_adjust = redzone_size - (*size - cache->object_size);
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
 	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-			max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+			max(*size, cache->object_size + redzone_size));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
+	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
 	*flags |= SLAB_KASAN;
 }
 
@@ -325,18 +341,41 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	object = kasan_kmalloc(cache, object, cache->object_size, flags);
+	if (IS_ENABLED(CONFIG_KASAN_HW) && unlikely(cache->ctor)) {
+		/*
+		 * Cache constructor might use object's pointer value to
+		 * initialize some of its fields.
+		 */
+		cache->ctor(object);
+	}
+	return object;
+}
+
+static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
+{
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		return shadow_byte < 0 ||
+			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
+	else
+		return tag != (u8)shadow_byte;
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 			      unsigned long ip, bool quarantine)
 {
 	s8 shadow_byte;
+	u8 tag;
+	void *tagged_object;
 	unsigned long rounded_up_size;
 
+	tag = get_tag(object);
+	tagged_object = object;
+	object = reset_tag(object);
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
-		kasan_report_invalid_free(object, ip);
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
@@ -345,20 +384,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
-	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
-		kasan_report_invalid_free(object, ip);
+	if (shadow_invalid(tag, shadow_byte)) {
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
 	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
 
-	if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+			unlikely(!(cache->flags & SLAB_KASAN)))
 		return false;
 
 	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	quarantine_put(get_free_info(cache, object), cache);
-	return true;
+
+	return IS_ENABLED(CONFIG_KASAN_GENERIC);
 }
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -371,6 +412,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
@@ -383,14 +425,15 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_SHADOW_SCALE_SIZE);
 
-	kasan_unpoison_shadow(object, size);
+	tag = random_tag();
+	kasan_unpoison_shadow(set_tag(object, tag), size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 
-	return (void *)object;
+	return set_tag(object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -440,7 +483,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (ptr != page_address(page)) {
+		if (reset_tag(ptr) != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -453,7 +496,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (ptr != page_address(virt_to_head_page(ptr)))
+	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
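
To make the common.c changes above concrete, here is a hypothetical
fragment (invented for illustration, not from this patch) showing how
the alloc/free tag round-trip catches a use-after-free; the report
itself is raised by check_memory_region() in khwasan.c below:

	static void khwasan_uaf_example(void)
	{
		char *p = kmalloc(64, GFP_KERNEL);
		/* kasan_kmalloc() picked a random tag, say 0xab: the top
		 * byte of p is 0xab and the object's shadow bytes are all
		 * set to 0xab. */

		kfree(p);
		/* __kasan_slab_free() re-poisons the object's shadow (per
		 * point 3 of the commit message), so it no longer matches
		 * the stale 0xab tag carried in p. */

		p[0] = 0;
		/* The instrumentation compares p's tag against the shadow,
		 * sees a mismatch, and reports the use-after-free. */
	}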
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index d34679b8f8c7..fd1725022794 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -88,15 +88,52 @@ void *khwasan_reset_tag(const void *addr)
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((const void *)addr);
+
+	/* Ignore accesses for pointers tagged with 0xff (native kernel
+	 * pointer tag) to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (it
+	 * gets set to KHWASAN_TAG_KERNEL (0xff)).
+	 */
+	if (tag == KHWASAN_TAG_KERNEL)
+		return;
+
+	untagged_addr = reset_tag((const void *)addr);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			kasan_report(addr, size, write, ret_ip);
+			return;
+		}
+	}
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -108,15 +145,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
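
For reference, the validity rule that the check_memory_region() loop
above implements can be modeled in a few lines of standalone C
(userspace model with invented names; one shadow byte covers a 16-byte
granule, matching KASAN_SHADOW_SCALE_SIZE in this series):

	#include <stdint.h>
	#include <stddef.h>

	#define SHADOW_SCALE 16

	/* An access of 'size' bytes is valid only if every shadow byte
	 * covering the accessed range equals the pointer's tag. 'shadow'
	 * stands in for the kasan_mem_to_shadow() mapping, indexed by
	 * granule number. */
	static int access_valid(const uint8_t *shadow, uintptr_t untagged_addr,
				size_t size, uint8_t tag)
	{
		uintptr_t first = untagged_addr / SHADOW_SCALE;
		uintptr_t last = (untagged_addr + size - 1) / SHADOW_SCALE;

		for (uintptr_t i = first; i <= last; i++)
			if (shadow[i] != tag)
				return 0;	/* -> kasan_report() */
		return 1;
	}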