From patchwork Wed Jul  9 11:30:02 2014
From: Andrey Ryabinin <a.ryabinin@samsung.com>
X-Patchwork-Id: 4514191
To: linux-kernel@vger.kernel.org
Cc: Michal Marek, Christoph Lameter, x86@kernel.org, Russell King,
 Andrew Morton, linux-kbuild@vger.kernel.org, Joonsoo Kim, David Rientjes,
 linux-mm@kvack.org, Pekka Enberg, Konstantin Serebryany, Yuri Gribov,
 Dmitry Vyukov, Sasha Levin, Andrey Konovalov, Thomas Gleixner,
 Alexey Preobrazhensky, Ingo Molnar, Konstantin Khlebnikov,
 linux-arm-kernel@lists.infradead.org
Subject: [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on
 alloc and free paths
Date: Wed, 09 Jul 2014 15:30:02 +0400
Message-id: <1404905415-9046-9-git-send-email-a.ryabinin@samsung.com>
In-reply-to: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>

Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region, and to mark freed
pages as inaccessible.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/Makefile           |  2 ++
 mm/kasan/kasan.c      | 18 ++++++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  4 ++++
 6 files changed, 38 insertions(+)
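The mechanics behind these hooks, as a plain-C sketch rather than the
code in this series: page-sized regions always cover whole shadow
bytes, so (un)poisoning a run of pages reduces to a memset() of the
shadow. The helper names below are illustrative only, and
KASAN_SHADOW_OFFSET stands in for wherever the shadow region is
actually mapped:

	/* One shadow byte tracks 2^KASAN_SHADOW_SCALE_SHIFT == 8 bytes. */
	static inline u8 *mem_to_shadow_sketch(unsigned long addr)
	{
		return (u8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
				+ KASAN_SHADOW_OFFSET);
	}

	/*
	 * (PAGE_SIZE << order) bytes map to a contiguous run of shadow
	 * bytes: 0 marks an 8-byte granule fully accessible,
	 * KASAN_FREE_PAGE (0xFF) marks it freed so that any later
	 * access to it is reported.
	 */
	static void poison_page_run_sketch(struct page *page,
					   unsigned int order, u8 val)
	{
		unsigned long addr = (unsigned long)page_address(page);

		memset(mem_to_shadow_sketch(addr), val,
		       (PAGE_SIZE << order) >> KASAN_SHADOW_SCALE_SHIFT);
	}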
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7efc3eb..4adc0a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,9 @@ void kasan_disable_local(void);
 void kasan_alloc_shadow(void);
 void kasan_init_shadow(void);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
 static inline void kasan_init_shadow(void) {}
 static inline void kasan_alloc_shadow(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/Makefile b/mm/Makefile
index dbe9a22..6a9c3f8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,8 @@
 # Makefile for the linux memory manager.
 #
 
+KASAN_SANITIZE_page_alloc.o := n
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o madvise.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e2cd345..109478e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
 	}
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (likely(page && !PageHighMem(page)))
+		unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (unlikely(!kasan_initialized))
+		return;
+
+	if (likely(!PageHighMem(page)))
+		poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
 void *kasan_memcpy(void *dst, const void *src, size_t len)
 {
 	if (unlikely(len == 0))
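With kasan_free_pages() filling a freed page's shadow with
KASAN_FREE_PAGE, any later instrumented access to that page is caught.
A minimal sketch of the bug class this detects (illustrative only, not
part of the patch):

	static void page_uaf_demo(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, 0);
		char *p = page_address(page);	/* shadow is unpoisoned */

		__free_pages(page, 0);		/* shadow bytes -> 0xFF */
		p[0] = 1;	/* instrumented: reported as use after free */
	}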
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 711ae4f..be9597e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_SHADOW_GAP        0xF9  /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2430e05..6ef9e57 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "buffer overflow";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_page(info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c9eeec..67833d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -61,6 +61,7 @@
 #include 
 #include 
 #include 
+#include <linux/kasan.h>
 
 #include 
 #include 
@@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -2807,6 +2809,7 @@ out:
 	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
 		goto retry_cpuset;
 
+	kasan_alloc_pages(page, order);
 	return page;
 }
 EXPORT_SYMBOL(__alloc_pages_nodemask);
@@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	if (end != outer_end)
 		free_contig_range(end, outer_end - end);
 
+	kasan_alloc_pages(pfn_to_page(start), end - start);
 done:
 	undo_isolate_page_range(pfn_max_align_down(start),
 				pfn_max_align_up(end), migratetype);
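One caveat worth noting about the alloc_contig_range() hunk above:
kasan_alloc_pages() treats its second argument as an order and
unpoisons PAGE_SIZE << order bytes, while end - start there is a page
count, not an order. A size-based variant along these lines (the name
kasan_unpoison_page_range() is hypothetical, not part of this patch)
would match what that call site actually provides:

	/* Hypothetical helper: unpoison nr_pages pages starting at page. */
	static inline void kasan_unpoison_page_range(struct page *page,
						     unsigned long nr_pages)
	{
		if (likely(page && !PageHighMem(page)))
			unpoison_shadow(page_address(page),
					nr_pages << PAGE_SHIFT);
	}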