From patchwork Wed Aug 29 11:35:11 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10579955
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
	Arnd Bergmann, Kirill A. Shutemov,
Shutemov" , Greg Kroah-Hartman , Kate Stewart , Mike Rapoport , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Jann Horn , Mark Brand , Chintan Pandya , Vishwath Mohan , Andrey Konovalov Subject: [PATCH v6 07/18] khwasan: add tag related helper functions Date: Wed, 29 Aug 2018 13:35:11 +0200 Message-Id: <6cd298a90d02068969713f2fd440eae21227467b.1535462971.git.andreyknvl@google.com> X-Mailer: git-send-email 2.19.0.rc0.228.g281dcd1b4d0-goog In-Reply-To: References: MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP This commit adds a few helper functions, that are meant to be used to work with tags embedded in the top byte of kernel pointers: to set, to get or to reset (set to 0xff) the top byte. Signed-off-by: Andrey Konovalov --- arch/arm64/mm/kasan_init.c | 2 ++ include/linux/kasan.h | 29 +++++++++++++++++ mm/kasan/kasan.h | 55 ++++++++++++++++++++++++++++++++ mm/kasan/khwasan.c | 65 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 151 insertions(+) diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index 7a31e8ccbad2..e7f37c0b7e14 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -250,6 +250,8 @@ void __init kasan_init(void) memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE); cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); + khwasan_init(); + /* At this point kasan is fully initialized. 
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 7a31e8ccbad2..e7f37c0b7e14 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -250,6 +250,8 @@ void __init kasan_init(void)
 	memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	khwasan_init();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1c31bb089154..1f852244e739 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -166,6 +166,35 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xFF
 
+void khwasan_init(void);
+
+void *khwasan_reset_tag(const void *addr);
+
+void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr);
+void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
+			      const void *addr);
+
+#else /* CONFIG_KASAN_HW */
+
+static inline void khwasan_init(void) { }
+
+static inline void *khwasan_reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline void *khwasan_preset_slub_tag(struct kmem_cache *cache,
+					    const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline void *khwasan_preset_slab_tag(struct kmem_cache *cache,
+					    unsigned int idx, const void *addr)
+{
+	return (void *)addr;
+}
+
 #endif /* CONFIG_KASAN_HW */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 19b950eaccff..a7cc27d96608 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -8,6 +8,10 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KHWASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
+#define KHWASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
+#define KHWASAN_TAG_MAX		0xFD /* maximum value for random tags */
+
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
@@ -126,6 +130,57 @@ static inline void quarantine_reduce(void) { }
 static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
 #endif
 
+#ifdef CONFIG_KASAN_HW
+
+#define KHWASAN_TAG_SHIFT 56
+#define KHWASAN_TAG_MASK (0xFFUL << KHWASAN_TAG_SHIFT)
+
+u8 random_tag(void);
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+	u64 a = (u64)addr;
+
+	a &= ~KHWASAN_TAG_MASK;
+	a |= ((u64)tag << KHWASAN_TAG_SHIFT);
+
+	return (void *)a;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+	return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
+}
+
+static inline void *reset_tag(const void *addr)
+{
+	return set_tag(addr, KHWASAN_TAG_KERNEL);
+}
+
+#else /* CONFIG_KASAN_HW */
+
+static inline u8 random_tag(void)
+{
+	return 0;
+}
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+	return (void *)addr;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+	return 0;
+}
+
+static inline void *reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+#endif /* CONFIG_KASAN_HW */
+
 /*
  * Exported functions for interfaces called from assembly or from generated
  * code. Declarations here to avoid warning about missing declarations.
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index e2c3a7f7fd1f..9d91bf3c8246 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -38,6 +38,71 @@
 #include "kasan.h"
 #include "../slab.h"
 
+static DEFINE_PER_CPU(u32, prng_state);
+
+void khwasan_init(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(prng_state, cpu) = get_random_u32();
+}
+
+/*
+ * If a preemption happens between this_cpu_read and this_cpu_write, the only
+ * side effect is that a few objects allocated in different contexts will get
+ * the same tag. Since KHWASAN is meant to be a probabilistic bug-detection
+ * debug feature, this does not have a significant negative impact.
+ *
+ * Ideally the tags would use strong randomness, to prevent any attempt to
+ * predict them during explicit exploitation attempts. But strong randomness
+ * is expensive, so we made an intentional trade-off and use a PRNG. This
+ * non-atomic RMW sequence in fact has a positive effect, since interrupts
+ * that randomly skew the PRNG at unpredictable points only add randomness.
+ */
+u8 random_tag(void)
+{
+	u32 state = this_cpu_read(prng_state);
+
+	state = 1664525 * state + 1013904223;
+	this_cpu_write(prng_state, state);
+
+	return (u8)(state % (KHWASAN_TAG_MAX + 1));
+}
+
+void *khwasan_reset_tag(const void *addr)
+{
+	return reset_tag(addr);
+}
+
+void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr)
+{
+	/*
+	 * Since it's desirable to only call object constructors once during
+	 * slab allocation, we preassign tags to all such objects.
+	 * Also preassign tags for SLAB_TYPESAFE_BY_RCU slabs to avoid
+	 * use-after-free reports.
+	 */
+	if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU)
+		return set_tag(addr, random_tag());
+	return (void *)addr;
+}
+
+void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
+			      const void *addr)
+{
+	/*
+	 * See the comment in khwasan_preset_slub_tag.
+	 * For the SLAB allocator we can't preassign tags randomly since the
+	 * freelist is stored as an array of indexes instead of a linked
+	 * list. Assign tags based on object indexes, so that objects that
+	 * are next to each other get different tags.
+	 */
+	if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU)
+		return set_tag(addr, (u8)idx);
+	return (void *)addr;
+}
+
 void check_memory_region(unsigned long addr, size_t size, bool write,
 			 unsigned long ret_ip)
 {