From patchwork Wed Jul 9 11:30:07 2014
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 4514201
From: Andrey Ryabinin
To: linux-kernel@vger.kernel.org
Subject: [RFC/PATCH RESEND -next 13/21] mm: slub: add allocation size field to struct kmem_cache
Date: Wed, 09 Jul 2014 15:30:07 +0400
Message-id: <1404905415-9046-14-git-send-email-a.ryabinin@samsung.com>
In-reply-to: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
Cc: Michal Marek, Christoph Lameter, x86@kernel.org, Russell King, Andrew Morton, linux-kbuild@vger.kernel.org, Andrey Ryabinin, Joonsoo Kim, David Rientjes, linux-mm@kvack.org, Pekka Enberg, Konstantin Serebryany, Yuri Gribov, Dmitry Vyukov, Sasha Levin, Andrey Konovalov, Thomas Gleixner, Alexey Preobrazhensky,
    Ingo Molnar, Konstantin Khlebnikov, linux-arm-kernel@lists.infradead.org

When the caller creates a new kmem_cache, the requested object size is stored in alloc_size. Later, alloc_size will be used by the kernel address sanitizer to mark the first alloc_size bytes of a slab object as accessible and the rest of the object as a redzone.

Signed-off-by: Andrey Ryabinin
---
 include/linux/slub_def.h |  5 +++++
 mm/slab.h                | 10 ++++++++++
 mm/slab_common.c         |  2 ++
 mm/slub.c                |  1 +
 4 files changed, 18 insertions(+)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..b8b8154 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -68,6 +68,11 @@ struct kmem_cache {
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
 	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+
+#ifdef CONFIG_KASAN
+	int alloc_size;		/* actual allocation size kmem_cache_create */
+#endif
+
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slab.h b/mm/slab.h
index 912af7f..cb2e776 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,6 +260,16 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
 }
 #endif
 
+#ifdef CONFIG_KASAN
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size)
+{
+	s->alloc_size = size;
+}
+#else
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size) { }
+#endif
+
+
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
 	struct page *page = virt_to_head_page(obj);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8df59b09..f5b52f0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -147,6 +147,7 @@ do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
 	s->name = name;
 	s->object_size = object_size;
 	s->size = size;
+	kasan_set_alloc_size(s, object_size);
 	s->align = align;
 	s->ctor = ctor;
 
@@ -409,6 +410,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 	s->name = name;
 	s->size = s->object_size = size;
+	kasan_set_alloc_size(s, size);
 	s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
 
 	err = __kmem_cache_create(s, flags);
diff --git a/mm/slub.c b/mm/slub.c
index 3bdd9ac..6ddedf9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3724,6 +3724,7 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
 	 * the complete object on kzalloc.
 	 */
 	s->object_size = max(s->object_size, (int)size);
+	kasan_set_alloc_size(s, max(s->alloc_size, (int)size));
 	s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
 
 	for_each_memcg_cache_index(i) {
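To illustrate the intent of alloc_size, here is a minimal userspace sketch of how the sanitizer could mark the first alloc_size bytes of an object accessible and the remainder as redzone. The struct layout, the 0xFC shadow value, and the poison_object() helper are illustrative assumptions for this sketch, not the real KASAN implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the real struct kmem_cache (assumption). */
struct kmem_cache {
	int object_size;	/* rounded-up size of an object in the cache */
	int alloc_size;		/* size originally requested by the caller */
};

static void kasan_set_alloc_size(struct kmem_cache *s, size_t size)
{
	s->alloc_size = (int)size;
}

/*
 * Mark the first alloc_size bytes of an object as accessible (0x00)
 * and the remaining bytes as redzone (0xFC here; the real KASAN
 * shadow encoding differs). Returns the number of redzone bytes.
 */
static size_t poison_object(const struct kmem_cache *s, unsigned char *shadow)
{
	size_t i, redzone = 0;

	for (i = 0; i < (size_t)s->object_size; i++) {
		if (i < (size_t)s->alloc_size) {
			shadow[i] = 0x00;
		} else {
			shadow[i] = 0xFC;
			redzone++;
		}
	}
	return redzone;
}
```

A cache created for 50-byte objects may round object_size up to 64; with alloc_size recorded, the sanitizer can flag any access to bytes 50..63 as out of bounds instead of silently allowing it.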