From patchwork Tue Jun 26 13:15:25 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10488935
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
    Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
    Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
    Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart,
    Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Andrey Konovalov
Subject: [PATCH v4 15/17] khwasan, mm, arm64: tag non slab memory allocated
 via pagealloc
Date: Tue, 26 Jun 2018 15:15:25 +0200

KHWASAN doesn't check memory accesses through pointers tagged with 0xff.
When page_address is used to get a pointer to the memory that corresponds
to some page, the tag of the resulting pointer gets set to 0xff, even
though the allocated memory might have been tagged differently.

For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non slab pages, however, we can recover the
tag in page_address, since the whole page was marked with the same tag.

This patch adds tagging to non slab memory allocated with pagealloc. To
set the tag of the pointer returned from page_address, the tag gets
stored to page->flags when the memory gets allocated.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/memory.h   | 10 ++++++++++
 include/linux/mm.h                | 29 +++++++++++++++++++++++++++++
 include/linux/page-flags-layout.h | 10 ++++++++++
 mm/cma.c                          | 11 +++++++++++
 mm/kasan/common.c                 | 15 +++++++++++++--
 mm/page_alloc.c                   |  1 +
 6 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index e9e054dfb1fc..3352a65b8312 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -305,7 +305,17 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
+#ifndef CONFIG_KASAN_HW
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#else
+#define page_to_virt(page)	({					\
+	unsigned long __addr =						\
+		((__page_to_voff(page)) | PAGE_OFFSET);			\
+	__addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));		\
+	((void *)__addr);						\
+})
+#endif
+
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
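A side note on what the tagged page_to_virt() above computes: on arm64,
the Top Byte Ignore feature lets loads and stores disregard bits 63:56 of
a virtual address, so KHWASAN can keep an 8-bit tag there. The snippet
below is a stand-alone user-space sketch of that folding, not kernel
code; set_tag(), get_tag() and TAG_SHIFT are hypothetical stand-ins for
the series' KASAN_SET_TAG helper, which is defined in an earlier patch.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	56
#define TAG_MASK	(0xffUL << TAG_SHIFT)

/* Hypothetical stand-in for KASAN_SET_TAG: place the tag in the top byte. */
static uint64_t set_tag(uint64_t addr, uint8_t tag)
{
	return (addr & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT);
}

static uint8_t get_tag(uint64_t addr)
{
	return addr >> TAG_SHIFT;
}

int main(void)
{
	/* A linear-map style address; TBI makes the top byte ignorable. */
	uint64_t addr = 0xffff000012345000UL;
	uint64_t tagged = set_tag(addr, 0x2a);

	printf("tagged: 0x%016" PRIx64 " (tag 0x%02x)\n", tagged, get_tag(tagged));
	/* 0xff is the match-all tag that page_kasan_tag_reset() restores. */
	printf("reset:  0x%016" PRIx64 "\n", set_tag(tagged, 0xff));
	return 0;
}

Since KHWASAN skips checks on pointers tagged 0xff, resetting a page's
stored tag to 0xff effectively opts accesses through page_address out of
tag checking, which the cma.c and kasan_poison_slab() hunks below rely on.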
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a0fbb9ffe380..46afadf4f48c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -784,6 +784,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -794,6 +795,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT	(LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT	(KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -816,6 +818,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1070,6 +1073,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_HW
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+	page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7ec86bf31ce4..8dbad17664c2 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -82,6 +82,16 @@
 #define LAST_CPUPID_WIDTH 0
 #endif
 
+#ifdef CONFIG_KASAN_HW
+#define KASAN_TAG_WIDTH 8
+#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+	> BITS_PER_LONG - NR_PAGEFLAGS
+#error "KASAN: not enough bits in page flags for tag"
+#endif
+#else
+#define KASAN_TAG_WIDTH 0
+#endif
+
 /*
  * We are going to use the flags for the page to node mapping if its in
  * there.  This includes the case where there is no node, so it is implicit.
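Before the cma.c change, a note on the packing above: page_kasan_tag()
and page_kasan_tag_set() carve an 8-bit field out of the high bits of
page->flags, next to the zone/node/section/last_cpupid fields, and the
#error in page-flags-layout.h guards against that field colliding with
the real page flag bits. Here is a minimal user-space sketch of the same
read-modify-write pattern; it assumes a 64-bit unsigned long, and
TAG_PGSHIFT is a made-up position (the real KASAN_TAG_PGSHIFT is computed
from the neighbouring field widths):

#include <assert.h>
#include <stdint.h>

#define TAG_WIDTH	8
#define TAG_PGSHIFT	48	/* hypothetical; real value is derived */
#define TAG_MASK	((1UL << TAG_WIDTH) - 1)

static uint8_t flags_tag_get(unsigned long flags)
{
	return (flags >> TAG_PGSHIFT) & TAG_MASK;
}

static void flags_tag_set(unsigned long *flags, uint8_t tag)
{
	*flags &= ~(TAG_MASK << TAG_PGSHIFT);	/* clear the old tag field */
	*flags |= ((unsigned long)tag & TAG_MASK) << TAG_PGSHIFT;
}

int main(void)
{
	unsigned long flags = 0xa51UL;	/* pre-existing low flag bits */

	flags_tag_set(&flags, 0x2a);
	assert(flags_tag_get(flags) == 0x2a);
	/* the neighbouring bits stay intact */
	assert((flags & ~(TAG_MASK << TAG_PGSHIFT)) == 0xa51UL);

	flags_tag_set(&flags, 0xff);	/* the reset / match-all value */
	assert(flags_tag_get(flags) == 0xff);
	return 0;
}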
diff --git a/mm/cma.c b/mm/cma.c
index 5809bbe360d7..fdad7ad0d9c4 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -407,6 +407,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	unsigned long pfn = -1;
 	unsigned long start = 0;
 	unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+	size_t i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
 
@@ -466,6 +467,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 
 	trace_cma_alloc(pfn, page, count, align);
 
+	/*
+	 * CMA can allocate multiple page blocks, which results in different
+	 * blocks being marked with different tags. Reset the tags to ignore
+	 * those page blocks.
+	 */
+	if (page) {
+		for (i = 0; i < count; i++)
+			page_kasan_tag_reset(page + i);
+	}
+
 	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 1e96ca050c75..6cf7dec0b765 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -212,8 +212,15 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
+	u8 tag;
+	unsigned long i;
+
 	if (unlikely(PageHighMem(page)))
 		return;
+
+	tag = random_tag();
+	for (i = 0; i < (1 << order); i++)
+		page_kasan_tag_set(page + i, tag);
 	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
@@ -311,6 +318,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 
 void kasan_poison_slab(struct page *page)
 {
+	unsigned long i;
+
+	for (i = 0; i < (1 << compound_order(page)); i++)
+		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page),
 			PAGE_SIZE << compound_order(page),
 			KASAN_KMALLOC_REDZONE);
@@ -483,7 +494,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (reset_tag(ptr) != page_address(page)) {
+		if (ptr != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -496,7 +507,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
+	if (ptr != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100f1e63..266e86323d73 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1176,6 +1176,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
+	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
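To tie the pieces together, here is a hedged end-to-end simulation in
plain user-space C (not kernel code) of the flow the patch sets up:
kasan_alloc_pages() stamps every page of a 2^order block with one random
tag, and the tagged page_to_virt() later folds that stored tag back into
the returned pointer. All names below are toy stand-ins for the kernel's.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT	56

/* Toy stand-in for struct page: only the stored tag matters here. */
struct page { uint8_t tag; };

static struct page pages[16];

/* Stand-in for the series' random_tag(); 0xff stays reserved as match-all. */
static uint8_t random_tag(void)
{
	return rand() % 0xff;
}

/* Mirrors kasan_alloc_pages(): one tag for the whole 2^order block. */
static void alloc_pages_tagged(struct page *page, unsigned int order)
{
	uint8_t tag = random_tag();
	unsigned long i;

	for (i = 0; i < (1UL << order); i++)
		page[i].tag = tag;
}

/* Mirrors the tagged page_to_virt(): fold the stored tag into the address. */
static void *page_address_tagged(const struct page *page)
{
	uintptr_t addr = 0x400000000000UL + (uintptr_t)(page - pages) * 4096;

	return (void *)((addr & ~(0xffUL << TAG_SHIFT)) |
			((uintptr_t)page->tag << TAG_SHIFT));
}

int main(void)
{
	int i;

	alloc_pages_tagged(&pages[0], 2);	/* a 4-page block, one shared tag */
	for (i = 0; i < 4; i++)
		printf("page %d -> %p\n", i, page_address_tagged(&pages[i]));
	return 0;
}

Because every page of the block carries the same tag, page_address on any
of them reproduces the pointer tag the allocation was checked under —
which is exactly why slab pages (multiple objects, multiple tags per
page) have to be reset to 0xff in kasan_poison_slab() instead.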