From patchwork Fri May 25 14:40:31 2018
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 10427685
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Jonathan Corbet,
    Catalin Marinas, Will Deacon, Christopher Li, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
    Masahiro Yamada, Michal Marek, Andrey Konovalov, Mark Rutland,
    Nick Desaulniers, Yury Norov, Marc Zyngier, Kristina Martsenko,
    Suzuki K Poulose, Punit Agrawal, Dave Martin, Ard Biesheuvel,
    James Morse, Michael Weiser, Julien Thierry, Tyler Baicar,
Biederman" , Thomas Gleixner , Ingo Molnar , Kees Cook , Sandipan Das , David Woodhouse , Paul Lawrence , Herbert Xu , Josh Poimboeuf , Geert Uytterhoeven , Tom Lendacky , Arnd Bergmann , Dan Williams , Michal Hocko , Jan Kara , Ross Zwisler , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Matthew Wilcox , "Kirill A . Shutemov" , Souptick Joarder , Hugh Dickins , Davidlohr Bueso , Greg Kroah-Hartman , Philippe Ombredanne , Kate Stewart , Laura Abbott , Boris Brezillon , Vlastimil Babka , Pintu Agarwal , Doug Berger , Anshuman Khandual , Mike Rapoport , Mel Gorman , Pavel Tatashin , Tetsuo Handa , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Kees Cook , Jann Horn , Mark Brand , Chintan Pandya Subject: [PATCH v2 15/16] khwasan, mm, arm64: tag non slab memory allocated via pagealloc Date: Fri, 25 May 2018 16:40:31 +0200 Message-Id: <8b5903df7e0d2f4df5b94b8f4a7b26e081baf17c.1527259068.git.andreyknvl@google.com> X-Mailer: git-send-email 2.17.0.921.gf22659ad46-goog In-Reply-To: References: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP KWHASAN doesn't check memory accesses through pointers tagged with 0xff. When page_address is used to get pointer to memory that corresponds to some page, the tag of the resulting pointer gets set to 0xff, even though the allocated memory might have been tagged differently. For slab pages it's impossible to recover the correct tag to return from page_address, since the page might contain multiple slab objects tagged with different values, and we can't know in advance which one of them is going to get accessed. For non slab pages however, we can recover the tag in page_address, since the whole page was marked with the same tag. This patch adds tagging to non slab memory allocated with pagealloc. To set the tag of the pointer returned from page_address, the tag gets stored to page->flags when the memory gets allocated. 
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/memory.h   | 10 ++++++++++
 include/linux/mm.h                | 29 +++++++++++++++++++++++++++++
 include/linux/page-flags-layout.h | 10 ++++++++++
 mm/cma.c                          | 11 +++++++++++
 mm/kasan/common.c                 | 15 +++++++++++++--
 mm/page_alloc.c                   |  1 +
 6 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index e9e054dfb1fc..3352a65b8312 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -305,7 +305,17 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
+#ifndef CONFIG_KASAN_HW
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#else
+#define page_to_virt(page)	({					\
+	unsigned long __addr =						\
+		((__page_to_voff(page)) | PAGE_OFFSET);			\
+	__addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));		\
+	((void *)__addr);						\
+})
+#endif
+
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c6fa9a255dbf..b79cf6443151 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -770,6 +770,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -780,6 +781,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT	(LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT	(KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -802,6 +804,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1021,6 +1024,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_HW
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+	page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7ec86bf31ce4..8dbad17664c2 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -82,6 +82,16 @@
 #define LAST_CPUPID_WIDTH 0
 #endif
 
+#ifdef CONFIG_KASAN_HW
+#define KASAN_TAG_WIDTH 8
+#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+	> BITS_PER_LONG - NR_PAGEFLAGS
+#error "KASAN: not enough bits in page flags for tag"
+#endif
+#else
+#define KASAN_TAG_WIDTH 0
+#endif
+
 /*
  * We are going to use the flags for the page to node mapping if its in
  * there. This includes the case where there is no node, so it is implicit.
diff --git a/mm/cma.c b/mm/cma.c
index aa40e6c7b042..893f0dff39c0 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -468,6 +468,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	unsigned long pfn = -1;
 	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+	size_t i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
 
@@ -527,6 +528,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 
 	trace_cma_alloc(pfn, page, count, align);
 
+	/*
+	 * CMA can allocate multiple page blocks, which results in different
+	 * blocks being marked with different tags. Reset the tags to ignore
+	 * those page blocks.
+	 */
+	if (page) {
+		for (i = 0; i < count; i++)
+			page_kasan_tag_reset(page + i);
+	}
+
 	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8123a61b7e0f..041367528470 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -212,8 +212,15 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
+	u8 tag;
+	unsigned long i;
+
 	if (unlikely(PageHighMem(page)))
 		return;
+
+	tag = random_tag();
+	for (i = 0; i < (1 << order); i++)
+		page_kasan_tag_set(page + i, tag);
 	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
@@ -311,6 +318,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 
 void kasan_poison_slab(struct page *page)
 {
+	unsigned long i;
+
+	for (i = 0; i < (1 << compound_order(page)); i++)
+		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page),
 			PAGE_SIZE << compound_order(page),
 			KASAN_KMALLOC_REDZONE);
@@ -483,7 +494,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (reset_tag(ptr) != page_address(page)) {
+		if (ptr != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -496,7 +507,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
+	if (ptr != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 905db9d7962f..54df9c852c6e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1179,6 +1179,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
+	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
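
To make the allocation-time flow in the mm/kasan/common.c hunk above
concrete: kasan_alloc_pages picks one random tag and stamps it on every
page of the 2^order block, so that recovering the tag from any page of the
block gives the same value, while paths that cannot keep one tag per block
(slab pages, CMA areas) reset pages to the ignore-all tag 0xff. The model
below is plain userspace C with simplified stand-ins (model_page,
model_random_tag, the TAG_* constants); it is a sketch of the scheme, not
kernel code.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT   48          /* assumed spare byte in the flags word */
#define TAG_MASK    0xffUL
#define TAG_INVALID 0xffu       /* 0xff: accesses through it aren't checked */

struct model_page { unsigned long flags; };

static uint8_t model_random_tag(void)
{
	/* The kernel uses its own cheap PRNG; rand() is just for the model.
	 * This model avoids 0xff so the tag stays distinguishable. */
	return (uint8_t)(rand() & 0xfe);
}

static void model_tag_set(struct model_page *page, uint8_t tag)
{
	page->flags &= ~(TAG_MASK << TAG_SHIFT);
	page->flags |= (unsigned long)tag << TAG_SHIFT;
}

static uint8_t model_tag_get(const struct model_page *page)
{
	return (uint8_t)((page->flags >> TAG_SHIFT) & TAG_MASK);
}

/* Mirrors kasan_alloc_pages(): one tag for the whole 2^order block. */
static void model_alloc_pages(struct model_page *pages, unsigned int order)
{
	uint8_t tag = model_random_tag();

	for (unsigned long i = 0; i < (1UL << order); i++)
		model_tag_set(&pages[i], tag);
}

/* Mirrors page_kasan_tag_reset(), as used for slab and CMA pages. */
static void model_tag_reset(struct model_page *page)
{
	model_tag_set(page, TAG_INVALID);
}

int main(void)
{
	struct model_page pages[8] = { { 0 } };

	model_alloc_pages(pages, 3);
	/* Every page of the order-3 block reports the same tag. */
	for (int i = 0; i < 8; i++)
		printf("page %d tag: 0x%02x\n", i, model_tag_get(&pages[i]));

	model_tag_reset(&pages[0]); /* e.g. the CMA path in the diff */
	printf("after reset: 0x%02x\n", model_tag_get(&pages[0]));
	return 0;
}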