From patchwork Tue May 8 17:20:47 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10386601
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Jonathan Corbet,
	Catalin Marinas, Will Deacon, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Andrey Konovalov, Mark Rutland,
	Nick Desaulniers, Yury Norov, Marc Zyngier, Kristina Martsenko,
	Suzuki K Poulose, Punit Agrawal, Dave Martin, Ard Biesheuvel,
	James Morse, Michael Weiser, Julien Thierry, Tyler Baicar,
	Eric W. Biederman, Thomas Gleixner, Ingo Molnar, Kees Cook,
	Sandipan Das, David Woodhouse, Paul Lawrence, Herbert Xu,
	Josh Poimboeuf, Geert Uytterhoeven, Tom Lendacky, Arnd Bergmann,
	Dan Williams, Michal Hocko, Jan Kara, Ross Zwisler, Jérôme Glisse,
	Matthew Wilcox, Kirill A. Shutemov, Souptick Joarder, Hugh Dickins,
	Davidlohr Bueso, Greg Kroah-Hartman, Philippe Ombredanne,
	Kate Stewart, Laura Abbott, Boris Brezillon, Vlastimil Babka,
	Pintu Agarwal, Doug Berger, Anshuman Khandual, Mike Rapoport,
	Mel Gorman, Pavel Tatashin, Tetsuo Handa,
	kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-sparse@vger.kernel.org, linux-mm@kvack.org,
	linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand, Chintan Pandya
Subject: [PATCH v1 01/16] khwasan, mm: change kasan hooks signatures
Date: Tue, 8 May 2018 19:20:47 +0200
Message-Id: <427db6b29eaf61d77cb485e9e0a393d34741e498.1525798753.git.andreyknvl@google.com>

KHWASAN will change the value of the top byte of pointers returned from the
kernel allocation functions (such as kmalloc). This patch updates the KASAN
hook signatures and their usage in the SLAB and SLUB code to reflect that:
the hooks now return the (potentially tagged) pointer, and their callers
must use the returned value.
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 34 +++++++++++++++++++++++-----------
 mm/kasan/kasan.c      | 24 ++++++++++++++----------
 mm/slab.c             | 12 ++++++------
 mm/slab.h             |  2 +-
 mm/slab_common.c      |  4 ++--
 mm/slub.c             | 16 ++++++++--------
 6 files changed, 54 insertions(+), 38 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index de784fd11d12..cbdc54543803 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -53,14 +53,14 @@ void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
		  gfp_t flags);
-void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
+void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
-void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+void *kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
 bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
 
 struct kasan_cache {
@@ -105,16 +105,28 @@ static inline void kasan_poison_object_data(struct kmem_cache *cache,
 static inline void kasan_init_slab_obj(struct kmem_cache *cache,
				const void *object) {}
 
-static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
+static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+{
+	return ptr;
+}
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags) {}
-static inline void kasan_krealloc(const void *object, size_t new_size,
-				 gfp_t flags) {}
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size, gfp_t flags)
+{
+	return (void *)object;
+}
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+				 gfp_t flags)
+{
+	return (void *)object;
+}
 
-static inline void kasan_slab_alloc(struct kmem_cache *s, void *object,
-				   gfp_t flags) {}
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+				   gfp_t flags)
+{
+	return object;
+}
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
				   unsigned long ip)
 {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index bc0e68f7dc75..2f5e52d72c2b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -485,9 +485,9 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
	__memset(alloc_info, 0, sizeof(*alloc_info));
 }
 
-void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
+void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	kasan_kmalloc(cache, object, cache->object_size, flags);
+	return kasan_kmalloc(cache, object, cache->object_size, flags);
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
@@ -528,7 +528,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
	return __kasan_slab_free(cache, object, ip, true);
 }
 
-void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
		   gfp_t flags)
 {
	unsigned long redzone_start;
@@ -538,7 +538,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
		quarantine_reduce();
 
	if (unlikely(object == NULL))
-		return;
+		return NULL;
 
	redzone_start = round_up((unsigned long)(object + size),
				KASAN_SHADOW_SCALE_SIZE);
@@ -551,10 +551,12 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 
	if (cache->flags & SLAB_KASAN)
		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+
+	return (void *)object;
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 {
	struct page *page;
	unsigned long redzone_start;
@@ -564,7 +566,7 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
		quarantine_reduce();
 
	if (unlikely(ptr == NULL))
-		return;
+		return NULL;
 
	page = virt_to_page(ptr);
	redzone_start = round_up((unsigned long)(ptr + size),
@@ -574,21 +576,23 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
	kasan_unpoison_shadow(ptr, size);
	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
		KASAN_PAGE_REDZONE);
+
+	return (void *)ptr;
 }
 
-void kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
	struct page *page;
 
	if (unlikely(object == ZERO_SIZE_PTR))
-		return;
+		return ZERO_SIZE_PTR;
 
	page = virt_to_head_page(object);
 
	if (unlikely(!PageSlab(page)))
-		kasan_kmalloc_large(object, size, flags);
+		return kasan_kmalloc_large(object, size, flags);
	else
-		kasan_kmalloc(page->slab_cache, object, size, flags);
+		return kasan_kmalloc(page->slab_cache, object, size, flags);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
diff --git a/mm/slab.c b/mm/slab.c
index 2f308253c3d7..5f2aeeb18d3c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3551,7 +3551,7 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
	void *ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
	trace_kmem_cache_alloc(_RET_IP_, ret,
			       cachep->object_size, cachep->size, flags);
 
@@ -3617,7 +3617,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 
	ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
	trace_kmalloc(_RET_IP_, ret,
		      size, cachep->size, flags);
	return ret;
@@ -3641,7 +3641,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
	trace_kmem_cache_alloc_node(_RET_IP_, ret,
				    cachep->object_size, cachep->size,
				    flags, nodeid);
@@ -3660,7 +3660,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 
	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
	trace_kmalloc_node(_RET_IP_, ret,
			   size, cachep->size,
			   flags, nodeid);
@@ -3679,7 +3679,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
		return cachep;
	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 
	return ret;
 }
@@ -3715,7 +3715,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
		return cachep;
	ret = slab_alloc(cachep, flags, caller);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 
	trace_kmalloc(caller, ret, size,
		      cachep->size, flags);
diff --git a/mm/slab.h b/mm/slab.h
index 68bdf498da3b..15ef6a0d9c16 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -441,7 +441,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
		kmemleak_alloc_recursive(object, s->object_size, 1,
					 s->flags, flags);
-		kasan_slab_alloc(s, object, flags);
+		p[i] = kasan_slab_alloc(s, object, flags);
	}
 
	if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 98dcdc352062..0582004351c4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1148,7 +1148,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
	page = alloc_pages(flags, order);
	ret = page ? page_address(page) : NULL;
	kmemleak_alloc(ret, size, 1, flags);
-	kasan_kmalloc_large(ret, size, flags);
+	ret = kasan_kmalloc_large(ret, size, flags);
	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1426,7 +1426,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 
	ks = ksize(p);
 
	if (ks >= new_size) {
-		kasan_krealloc((void *)p, new_size, flags);
+		p = kasan_krealloc((void *)p, new_size, flags);
		return (void *)p;
	}
diff --git a/mm/slub.c b/mm/slub.c
index 44aa7847324a..4fcd1442a761 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1351,10 +1351,10 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
+static inline void kmalloc_large_node_hook(void **ptr, size_t size, gfp_t flags)
 {
-	kmemleak_alloc(ptr, size, 1, flags);
-	kasan_kmalloc_large(ptr, size, flags);
+	kmemleak_alloc(*ptr, size, 1, flags);
+	*ptr = kasan_kmalloc_large(*ptr, size, flags);
 }
 
 static __always_inline void kfree_hook(void *x)
@@ -2765,7 +2765,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2793,7 +2793,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
	trace_kmalloc_node(_RET_IP_, ret,
			   size, s->size, gfpflags, node);
 
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3786,7 +3786,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 
	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
	return ret;
 }
@@ -3803,7 +3803,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
	if (page)
		ptr = page_address(page);
 
-	kmalloc_large_node_hook(ptr, size, flags);
+	kmalloc_large_node_hook(&ptr, size, flags);
	return ptr;
 }
 
@@ -3831,7 +3831,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
	return ret;
 }