From patchwork Fri Feb 5 15:39:02 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12070369
Date: Fri, 5 Feb 2021 16:39:02 +0100
Message-Id: <4aad006aea0dcf7cd24e0cac13026dc8b93a0961.1612538932.git.andreyknvl@google.com>
Subject: [PATCH v2 01/12] kasan, mm: don't save alloc stacks twice
From: Andrey Konovalov
To: Andrew Morton, Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
	Alexander Potapenko, Marco Elver
Cc: Will Deacon, Andrey Ryabinin, Peter Collingbourne, Evgenii Stepanov,
	Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrey Konovalov

Currently KASAN saves allocation stacks in both kasan_slab_alloc() and
kasan_kmalloc() annotations. This patch changes KASAN to save allocation
stacks for slab objects from kmalloc caches in kasan_kmalloc() only, and
stacks for other slab objects in kasan_slab_alloc() only.

This change requires ____kasan_kmalloc() to know whether the object
belongs to a kmalloc cache. This is implemented by adding a flag field
to the kasan_info structure. That flag is only set for kmalloc caches
via a new kasan_cache_create_kmalloc() annotation.
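As a rough stand-alone illustration (plain user-space C with made-up names,
not kernel code): a kmalloc'ed object is annotated by both kasan_slab_alloc()
and kasan_kmalloc(), and the new flag makes only the latter record the stack,
while objects from other caches keep getting their stack from
kasan_slab_alloc():

/* Model of the "save the stack once" decision; names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct cache_info { bool is_kmalloc; };

/* Mirrors the check added to set_alloc_info(): skip the save when the
 * cache is a kmalloc cache but the caller is the slab_alloc hook. */
static void save_alloc_stack(struct cache_info *info, bool from_kmalloc_hook)
{
	if (info->is_kmalloc && !from_kmalloc_hook)
		return;
	printf("alloc stack saved (from_kmalloc_hook=%d)\n", from_kmalloc_hook);
}

int main(void)
{
	struct cache_info kmalloc_cache = { .is_kmalloc = true };
	struct cache_info named_cache   = { .is_kmalloc = false };

	/* kmalloc() path: both hooks run, only the kmalloc hook saves. */
	save_alloc_stack(&kmalloc_cache, false); /* slab_alloc hook: skipped */
	save_alloc_stack(&kmalloc_cache, true);  /* kmalloc hook: saved once */

	/* kmem_cache_alloc() path: only the slab_alloc hook runs and saves. */
	save_alloc_stack(&named_cache, false);

	return 0;
}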
Reviewed-by: Marco Elver
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h |  9 +++++++++
 mm/kasan/common.c     | 18 ++++++++++++++----
 mm/slab_common.c      |  1 +
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6d8f3227c264..2d5de4092185 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -83,6 +83,7 @@ static inline void kasan_disable_current(void) {}
 struct kasan_cache {
 	int alloc_meta_offset;
 	int free_meta_offset;
+	bool is_kmalloc;
 };
 
 #ifdef CONFIG_KASAN_HW_TAGS
@@ -143,6 +144,13 @@ static __always_inline void kasan_cache_create(struct kmem_cache *cache,
 		__kasan_cache_create(cache, size, flags);
 }
 
+void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
+static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
+{
+	if (kasan_enabled())
+		__kasan_cache_create_kmalloc(cache);
+}
+
 size_t __kasan_metadata_size(struct kmem_cache *cache);
 static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
 {
@@ -278,6 +286,7 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
+static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
 static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index fe852f3cfa42..bfdf5464f4ef 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -210,6 +210,11 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	*size = optimal_size;
 }
 
+void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
+{
+	cache->kasan_info.is_kmalloc = true;
+}
+
 size_t __kasan_metadata_size(struct kmem_cache *cache)
 {
 	if (!kasan_stack_collection_enabled())
@@ -394,17 +399,22 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 	}
 }
 
-static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+static void set_alloc_info(struct kmem_cache *cache, void *object,
+				gfp_t flags, bool is_kmalloc)
 {
 	struct kasan_alloc_meta *alloc_meta;
 
+	/* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
+	if (cache->kasan_info.is_kmalloc && !is_kmalloc)
+		return;
+
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (alloc_meta)
 		kasan_set_track(&alloc_meta->alloc_track, flags);
 }
 
 static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
-				size_t size, gfp_t flags, bool keep_tag)
+				size_t size, gfp_t flags, bool is_kmalloc)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
@@ -423,7 +433,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				KASAN_GRANULE_SIZE);
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_GRANULE_SIZE);
-	tag = assign_tag(cache, object, false, keep_tag);
+	tag = assign_tag(cache, object, false, is_kmalloc);
 
 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
 	kasan_unpoison(set_tag(object, tag), size);
@@ -431,7 +441,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
 			KASAN_KMALLOC_REDZONE);
 
 	if (kasan_stack_collection_enabled())
-		set_alloc_info(cache, (void *)object, flags);
+		set_alloc_info(cache, (void *)object, flags, is_kmalloc);
 
 	return set_tag(object, tag);
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9aa3d2fe4c55..39d1a8ff9bb8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -647,6 +647,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name,
 		panic("Out of memory when creating slab %s\n", name);
 
 	create_boot_cache(s, name, size, flags, useroffset, usersize);
+	kasan_cache_create_kmalloc(s);
 	list_add(&s->list, &slab_caches);
 	s->refcount = 1;
 	return s;