From patchwork Wed Aug 29 11:35:20 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10579995
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	"Eric W. Biederman", Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
	Arnd Bergmann, "Kirill A. Shutemov", Greg Kroah-Hartman, Kate Stewart,
	Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-sparse@vger.kernel.org, linux-mm@kvack.org,
	linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
	Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v6 16/18] khwasan, mm, arm64: tag non slab memory allocated via pagealloc
Date: Wed, 29 Aug 2018 13:35:20 +0200
X-Mailer: git-send-email 2.19.0.rc0.228.g281dcd1b4d0-goog

KHWASAN doesn't check memory accesses through pointers tagged with 0xff.
When page_address is used to get a pointer to memory that corresponds to
some page, the tag of the resulting pointer gets set to 0xff, even though
the allocated memory might have been tagged differently.
For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non slab pages however, we can recover the tag
in page_address, since the whole page was marked with the same tag.

This patch adds tagging to non slab memory allocated with pagealloc. To
set the tag of the pointer returned from page_address, the tag gets
stored to page->flags when the memory gets allocated.

Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/memory.h   | 10 ++++++++++
 include/linux/mm.h                | 29 +++++++++++++++++++++++++++++
 include/linux/page-flags-layout.h | 10 ++++++++++
 mm/cma.c                          | 11 +++++++++++
 mm/kasan/common.c                 | 17 +++++++++++++++--
 mm/page_alloc.c                   |  1 +
 6 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f5e2953b7009..ea7f928aba31 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -312,7 +312,17 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
+#ifndef CONFIG_KASAN_HW
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#else
+#define page_to_virt(page)	({					\
+	unsigned long __addr =						\
+		((__page_to_voff(page)) | PAGE_OFFSET);			\
+	__addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));		\
+	((void *)__addr);						\
+})
+#endif
+
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..a1e7c590d925 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -804,6 +804,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -814,6 +815,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT	(LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT	(KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -836,6 +838,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1081,6 +1084,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_HW
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+	page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7ec86bf31ce4..8dbad17664c2 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -82,6 +82,16 @@
 #define LAST_CPUPID_WIDTH 0
 #endif
 
+#ifdef CONFIG_KASAN_HW
+#define KASAN_TAG_WIDTH 8
+#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+	> BITS_PER_LONG - NR_PAGEFLAGS
+#error "KASAN: not enough bits in page flags for tag"
+#endif
+#else
+#define KASAN_TAG_WIDTH 0
+#endif
+
 /*
  * We are going to use the flags for the page to node mapping if its in
  * there.  This includes the case where there is no node, so it is implicit.
diff --git a/mm/cma.c b/mm/cma.c
index 4cb76121a3ab..c7b39dd3b4f6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -407,6 +407,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	unsigned long pfn = -1;
 	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+	size_t i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
 
@@ -466,6 +467,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 
 	trace_cma_alloc(pfn, page, count, align);
 
+	/*
+	 * CMA can allocate multiple page blocks, which results in different
+	 * blocks being marked with different tags. Reset the tags to ignore
+	 * those page blocks.
+	 */
+	if (page) {
+		for (i = 0; i < count; i++)
+			page_kasan_tag_reset(page + i);
+	}
+
 	if (ret && !no_warn) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 938229b26f3a..e5648f4218eb 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -212,8 +212,15 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
+	u8 tag;
+	unsigned long i;
+
 	if (unlikely(PageHighMem(page)))
 		return;
+
+	tag = random_tag();
+	for (i = 0; i < (1 << order); i++)
+		page_kasan_tag_set(page + i, tag);
 	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
@@ -311,6 +318,12 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 
 void kasan_poison_slab(struct page *page)
 {
+	unsigned long i;
+
+	if (IS_ENABLED(CONFIG_SLAB))
+		page->s_mem = reset_tag(page->s_mem);
+	for (i = 0; i < (1 << compound_order(page)); i++)
+		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page),
 			PAGE_SIZE << compound_order(page),
 			KASAN_KMALLOC_REDZONE);
@@ -484,7 +497,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 	page = virt_to_head_page(ptr);
 	if (unlikely(!PageSlab(page))) {
-		if (reset_tag(ptr) != page_address(page)) {
+		if (ptr != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -497,7 +510,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
+	if (ptr != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e75865d58ba7..eb5627f89853 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1177,6 +1177,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
+	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL