From patchwork Tue May 8 17:20:59 2018
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 10386643
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Jonathan Corbet,
 Catalin Marinas, Will Deacon, Christopher Li, Christoph Lameter,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
 Masahiro Yamada, Michal Marek, Andrey Konovalov, Mark Rutland,
 Nick Desaulniers, Yury Norov, Marc Zyngier, Kristina Martsenko,
 Suzuki K Poulose, Punit Agrawal, Dave Martin, Ard Biesheuvel,
 James Morse, Michael Weiser, Julien Thierry, Tyler Baicar,
 "Eric W. Biederman", Thomas Gleixner, Ingo Molnar, Kees Cook,
 Sandipan Das, David Woodhouse, Paul Lawrence, Herbert Xu,
 Josh Poimboeuf, Geert Uytterhoeven, Tom Lendacky, Arnd Bergmann,
 Dan Williams, Michal Hocko, Jan Kara, Ross Zwisler, Jérôme Glisse,
 Matthew Wilcox, "Kirill A. Shutemov", Souptick Joarder, Hugh Dickins,
 Davidlohr Bueso, Greg Kroah-Hartman, Philippe Ombredanne, Kate Stewart,
 Laura Abbott, Boris Brezillon, Vlastimil Babka, Pintu Agarwal,
 Doug Berger, Anshuman Khandual, Mike Rapoport, Mel Gorman,
 Pavel Tatashin, Tetsuo Handa, kasan-dev@googlegroups.com,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org,
 linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
 Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
 Jann Horn, Mark Brand, Chintan Pandya
Subject: [PATCH v1 13/16] khwasan: add hooks implementation
Date: Tue, 8 May 2018 19:20:59 +0200
Message-Id: <5dddd7d6f18927de291e7b09e1ff45190dd6d361.1525798754.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.17.0.441.gb46fe60e1d-goog

This commit adds the KHWASAN-specific hooks implementation and adjusts
the common KASAN and KHWASAN ones.

1. When a new slab cache is created, KHWASAN rounds up the size of the
   objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

2. On each kmalloc, KHWASAN generates a random tag, sets the shadow
   memory that corresponds to this object to this tag, and embeds this
   tag value into the top byte of the returned pointer.

3. On each kfree, KHWASAN poisons the shadow memory with a random tag to
   allow detection of use-after-free bugs.

The rest of the logic of the hook implementation is very similar to the
one provided by KASAN. KHWASAN saves allocation and free stack metadata
to the slab object the same way KASAN does.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 73 ++++++++++++++++++++++++++++++++++++----------
 mm/kasan/kasan.h   |  8 +++++
 mm/kasan/khwasan.c | 40 +++++++++++++++++++++++++
 3 files changed, 105 insertions(+), 16 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 0c1159feaf5e..0654bf97257b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -148,11 +151,15 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
-	kasan_poison_shadow(address, size, 0);
+	kasan_poison_shadow(address, size, get_tag(address));
 
 	if (size & KASAN_SHADOW_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-		*shadow = size & KASAN_SHADOW_MASK;
+
+		if (IS_ENABLED(CONFIG_KASAN_HW))
+			*shadow = get_tag(address);
+		else
+			*shadow = size & KASAN_SHADOW_MASK;
 	}
 }
 
@@ -216,6 +223,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags)
 {
 	unsigned int orig_size = *size;
+	unsigned int redzone_size = 0;
 	int redzone_adjust;
 
 	/* Add alloc meta. */
@@ -223,20 +231,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta))) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
 	}
 
-	redzone_adjust = optimal_redzone(cache->object_size) -
-		(*size - cache->object_size);
+	redzone_size = optimal_redzone(cache->object_size);
+	redzone_adjust = redzone_size - (*size - cache->object_size);
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
 	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-		      max(*size, cache->object_size +
-			  optimal_redzone(cache->object_size)));
+		      max(*size, cache->object_size + redzone_size));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -306,18 +314,30 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	object = kasan_kmalloc(cache, object, cache->object_size, flags);
+	if (IS_ENABLED(CONFIG_KASAN_HW) && unlikely(cache->ctor)) {
+		/*
+		 * Cache constructor might use object's pointer value to
+		 * initialize some of its fields.
+		 */
+		cache->ctor(object);
+	}
+	return object;
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 			      unsigned long ip, bool quarantine)
 {
 	s8 shadow_byte;
+	u8 tag;
 	unsigned long rounded_up_size;
 
+	tag = get_tag(object);
+	object = reset_tag(object);
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
-		kasan_report_invalid_free(object, ip);
+		kasan_report_invalid_free(set_tag(object, tag), ip);
 		return true;
 	}
 
@@ -326,20 +346,29 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
+#ifdef CONFIG_KASAN_GENERIC
 	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
 		kasan_report_invalid_free(object, ip);
 		return true;
 	}
+#else
+	if (tag != (u8)shadow_byte) {
+		kasan_report_invalid_free(set_tag(object, tag), ip);
+		return true;
+	}
+#endif
 
 	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
 
-	if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+			unlikely(!(cache->flags & SLAB_KASAN)))
 		return false;
 
 	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	quarantine_put(get_free_info(cache, object), cache);
-	return true;
+
+	return IS_ENABLED(CONFIG_KASAN_GENERIC);
 }
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -352,6 +381,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
@@ -364,14 +394,19 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 			KASAN_SHADOW_SCALE_SIZE);
 
+#ifdef CONFIG_KASAN_GENERIC
 	kasan_unpoison_shadow(object, size);
+#else
+	tag = random_tag();
+	kasan_poison_shadow(object, redzone_start - (unsigned long)object, tag);
+#endif
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 
-	return (void *)object;
+	return set_tag(object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -380,6 +415,7 @@ void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	struct page *page;
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
@@ -392,11 +428,16 @@ void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 			KASAN_SHADOW_SCALE_SIZE);
 	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
 
+#ifdef CONFIG_KASAN_GENERIC
 	kasan_unpoison_shadow(ptr, size);
+#else
+	tag = random_tag();
+	kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
+#endif
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
 
-	return (void *)ptr;
+	return set_tag(ptr, tag);
 }
 
 void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
@@ -421,7 +462,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (ptr != page_address(page)) {
+		if (reset_tag(ptr) != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -434,7 +475,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (ptr != page_address(virt_to_head_page(ptr)))
+	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 620941d1e84f..06b70d296411 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,10 +12,18 @@
 #define KHWASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
 #define KHWASAN_TAG_MAX		0xFD /* maximum value for random tags */
 
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_FREE_PAGE		0xFF /* page was freed */
 #define KASAN_PAGE_REDZONE	0xFE /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE	0xFC /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE	0xFB /* object was freed (kmem_cache_free/kfree) */
+#else
+#define KASAN_FREE_PAGE		KHWASAN_TAG_INVALID
+#define KASAN_PAGE_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_FREE	KHWASAN_TAG_INVALID
+#endif
+
 #define KASAN_GLOBAL_REDZONE	0xFA /* redzone for global variable */
 
 /*
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 4e253c1e4d35..b4919ef74741 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -89,15 +89,52 @@ void *khwasan_reset_tag(const void *addr)
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((const void *)addr);
+
+	/* Ignore accesses for pointers tagged with 0xff (native kernel
+	 * pointer tag) to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (gets
+	 * set to KHWASAN_TAG_KERNEL (0xFF)).
+	 */
+	if (tag == KHWASAN_TAG_KERNEL)
+		return;
+
+	untagged_addr = reset_tag((const void *)addr);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			kasan_report(addr, size, write, ret_ip);
+			return;
+		}
+	}
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -109,15 +146,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
}
EXPORT_SYMBOL(__hwasan_tag_memory);