From patchwork Tue Nov 6 17:30:34 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10670983
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
 Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
 Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
 Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
 Arnd Bergmann,
Shutemov" , Greg Kroah-Hartman , Kate Stewart , Mike Rapoport , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Jann Horn , Mark Brand , Chintan Pandya , Vishwath Mohan , Andrey Konovalov Subject: [PATCH v10 19/22] kasan, mm, arm64: tag non slab memory allocated via pagealloc Date: Tue, 6 Nov 2018 18:30:34 +0100 Message-Id: <34f2d93fec145f7903944fca2e99c4a435eb1192.1541525354.git.andreyknvl@google.com> X-Mailer: git-send-email 2.19.1.930.g4563a0d9d0-goog In-Reply-To: References: MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Tag-based KASAN doesn't check memory accesses through pointers tagged with 0xff. When page_address is used to get pointer to memory that corresponds to some page, the tag of the resulting pointer gets set to 0xff, even though the allocated memory might have been tagged differently. For slab pages it's impossible to recover the correct tag to return from page_address, since the page might contain multiple slab objects tagged with different values, and we can't know in advance which one of them is going to get accessed. For non slab pages however, we can recover the tag in page_address, since the whole page was marked with the same tag. This patch adds tagging to non slab memory allocated with pagealloc. To set the tag of the pointer returned from page_address, the tag gets stored to page->flags when the memory gets allocated. 
Reviewed-by: Andrey Ryabinin
Reviewed-by: Dmitry Vyukov
Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/memory.h   |  9 ++++++++-
 include/linux/mm.h                | 29 +++++++++++++++++++++++++++++
 include/linux/page-flags-layout.h | 10 ++++++++++
 mm/cma.c                          | 11 +++++++++++
 mm/kasan/common.c                 | 15 +++++++++++++--
 mm/page_alloc.c                   |  1 +
 mm/slab.c                         |  2 +-
 7 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3226a0218b0b..b7108161732e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -98,6 +98,7 @@
 					 KASAN_TAG_SHIFTED(tag))
 #define KASAN_RESET_TAG(addr)	KASAN_SET_TAG(addr, 0xff)
 #else
+#define KASAN_SET_TAG(addr, tag)	addr
 #define KASAN_RESET_TAG(addr)	addr
 #endif
 
@@ -309,7 +310,13 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
-#define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#define page_to_virt(page)	({					\
+	unsigned long __addr =						\
+		((__page_to_voff(page)) | PAGE_OFFSET);			\
+	__addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));		\
+	((void *)__addr);						\
+})
+
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fcf9cc9d535f..03c37e25ee10 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -804,6 +804,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -814,6 +815,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT	(LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT	(KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -836,6 +838,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1101,6 +1104,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_SW_TAGS
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+	page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7ec86bf31ce4..1dda31825ec4 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -82,6 +82,16 @@
 #define LAST_CPUPID_WIDTH 0
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_TAG_WIDTH 8
+#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+	> BITS_PER_LONG - NR_PAGEFLAGS
+#error "KASAN: not enough bits in page flags for tag"
+#endif
+#else
+#define KASAN_TAG_WIDTH 0
+#endif
+
 /*
  * We are going to use the flags for the page to node mapping if its in
  * there.  This includes the case where there is no node, so it is implicit.
diff --git a/mm/cma.c b/mm/cma.c
index 4cb76121a3ab..c7b39dd3b4f6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -407,6 +407,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	unsigned long pfn = -1;
 	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+	size_t i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
 
@@ -466,6 +467,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 
 	trace_cma_alloc(pfn, page, count, align);
 
+	/*
+	 * CMA can allocate multiple page blocks, which results in different
+	 * blocks being marked with different tags. Reset the tags to ignore
+	 * those page blocks.
+	 */
+	if (page) {
+		for (i = 0; i < count; i++)
+			page_kasan_tag_reset(page + i);
+	}
+
 	if (ret && !no_warn) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 27f0cae336c9..195ca385cf7a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -220,8 +220,15 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
+	u8 tag;
+	unsigned long i;
+
 	if (unlikely(PageHighMem(page)))
 		return;
+
+	tag = random_tag();
+	for (i = 0; i < (1 << order); i++)
+		page_kasan_tag_set(page + i, tag);
 	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
@@ -319,6 +326,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 
 void kasan_poison_slab(struct page *page)
 {
+	unsigned long i;
+
+	for (i = 0; i < (1 << compound_order(page)); i++)
+		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page),
 			PAGE_SIZE << compound_order(page),
 			KASAN_KMALLOC_REDZONE);
@@ -517,7 +528,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (reset_tag(ptr) != page_address(page)) {
+		if (ptr != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -530,7 +541,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
+	if (ptr != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a919ba5cb3c8..ed6dc8f18c01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1183,6 +1183,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
+	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
diff --git a/mm/slab.c b/mm/slab.c
index d2f827316dfc..d747433ecdbb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2357,7 +2357,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 	void *freelist;
 	void *addr = page_address(page);
 
-	page->s_mem = addr + colour_off;
+	page->s_mem = kasan_reset_tag(addr) + colour_off;
 	page->active = 0;
 
 	if (OBJFREELIST_SLAB(cachep))
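For reference, the round trip that lets page_to_virt recover the tag can be
modelled outside the kernel as in the sketch below. The shift and width
constants are simplified stand-ins (the real KASAN_TAG_PGSHIFT is derived
from the SECTIONS/NODES/ZONES/LAST_CPUPID layout in page-flags-layout.h), and
a 64-bit unsigned long is assumed; the fake_* helpers only mirror what
page_kasan_tag_set() and page_kasan_tag() do.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real page->flags layout (64-bit long assumed). */
#define KASAN_TAG_WIDTH   8
#define KASAN_TAG_PGSHIFT 48
#define KASAN_TAG_MASK    ((1UL << KASAN_TAG_WIDTH) - 1)

struct fake_page {
	unsigned long flags;
};

/* Mirrors page_kasan_tag_set(): store the tag in a spare byte of flags. */
static void fake_tag_set(struct fake_page *page, uint8_t tag)
{
	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
	page->flags |= ((unsigned long)tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
}

/* Mirrors page_kasan_tag(): read the tag back when mapping page to address. */
static uint8_t fake_tag_get(const struct fake_page *page)
{
	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
}

int main(void)
{
	struct fake_page page = { .flags = 0 };

	fake_tag_set(&page, 0x2a);	/* what kasan_alloc_pages does per page */
	printf("recovered tag: 0x%02x\n", fake_tag_get(&page));	/* what page_to_virt reads */
	return 0;
}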