From patchwork Thu Oct 22 13:19:08 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11851355
Date: Thu, 22 Oct 2020 15:19:08 +0200
X-Mailer: git-send-email 2.29.0.rc1.297.gfa9743e501-goog
Subject: [PATCH RFC v2 16/21] kasan: optimize poisoning in kmalloc and krealloc
From: Andrey Konovalov
To: Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov,
	Alexander Potapenko, Marco Elver
Cc: Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne,
	Serban Constantinescu, Andrey Ryabinin, Elena Petrova,
	Branislav Rankov, Kevin Brodsky, Andrew Morton,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrey Konovalov

Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
to unpoison the object data again; only the redzone needs to be poisoned.

This requires changing the KASAN annotation for the early SLUB cache to
kasan_slab_alloc(). Otherwise, kasan_kmalloc() doesn't untag the object.
This doesn't introduce any functional changes, as kmem_cache_node->object_size
is equal to sizeof(struct kmem_cache_node).

Similarly for kasan_krealloc(): since it's called after ksize(), which has
already unpoisoned the object, there's no need to do it again.
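To make the ordering that this optimization relies on easier to follow, here is
a small compilable toy model. The toy_* names and stub bodies are invented for
this illustration only; they are not the kernel implementation, and the call
chains they mimic (slab_alloc() reaching kasan_slab_alloc(), and __do_krealloc()
calling ksize() before kasan_krealloc()) are assumed from the description above.

/* Toy model of the allocation-path ordering (illustration only, not kernel code). */
#include <stdbool.h>
#include <stdio.h>

static void unpoison(const char *who)       { printf("%s: unpoison object\n", who); }
static void poison_redzone(const char *who) { printf("%s: poison redzone\n", who); }

/* Stand-in for kasan_slab_alloc(), reached via slab_alloc(): unpoisons the object. */
static void toy_kasan_slab_alloc(void) { unpoison("kasan_slab_alloc"); }

/* Stand-in for ksize(), which unpoisons the object before krealloc() proceeds. */
static void toy_ksize(void) { unpoison("ksize"); }

/*
 * Stand-in for kasan_kmalloc()/kasan_krealloc() after this patch:
 * the object is unpoisoned only when the tag is not kept.
 */
static void toy_kasan_kmalloc(bool keep_tag)
{
	if (!keep_tag)
		unpoison("kasan_kmalloc");
	poison_redzone("kasan_kmalloc");
}

int main(void)
{
	/* kmalloc() path: kasan_slab_alloc() has already unpoisoned the object. */
	toy_kasan_slab_alloc();
	toy_kasan_kmalloc(true);

	/* krealloc() path: ksize() has already unpoisoned the object. */
	toy_ksize();
	toy_kasan_kmalloc(true);

	return 0;
}

In this sketch each path unpoisons the object exactly once; before this patch,
kasan_kmalloc()/kasan_krealloc() would have unpoisoned it a second time.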
Signed-off-by: Andrey Konovalov
Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
---
 mm/kasan/common.c | 31 +++++++++++++++++++++----------
 mm/slub.c         |  3 +--
 2 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c5ec60e1a4d2..a581937c2a44 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
 		tag = assign_tag(cache, object, false, keep_tag);
 
-	/* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
-	kasan_unpoison_memory(set_tag(object, tag), size);
+	/*
+	 * Don't unpoison the object when keeping the tag. Tag is kept for:
+	 * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
+	 * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
+	 * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
+	 */
+	if (!keep_tag)
+		kasan_unpoison_memory(set_tag(object, tag), size);
 	kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
@@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
 }
 EXPORT_SYMBOL(__kasan_kmalloc);
 
-void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
-						gfp_t flags)
+static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
+						size_t size, gfp_t flags, bool realloc)
 {
-	struct page *page;
 	unsigned long redzone_start;
 	unsigned long redzone_end;
 
@@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
 	if (unlikely(ptr == NULL))
 		return NULL;
 
-	page = virt_to_page(ptr);
-	redzone_start = round_up((unsigned long)(ptr + size),
-				KASAN_GRANULE_SIZE);
+	redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
 	redzone_end = (unsigned long)ptr + page_size(page);
 
-	kasan_unpoison_memory(ptr, size);
+	/* ksize() in __do_krealloc() already unpoisoned the memory. */
+	if (!realloc)
+		kasan_unpoison_memory(ptr, size);
 	kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
 
 	return (void *)ptr;
 }
 
+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
+						gfp_t flags)
+{
+	return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
+}
+
 void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
 	struct page *page;
@@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
 	page = virt_to_head_page(object);
 
 	if (unlikely(!PageSlab(page)))
-		return __kasan_kmalloc_large(object, size, flags);
+		return ____kasan_kmalloc_large(page, object, size, flags, true);
 	else
 		return ____kasan_kmalloc(page->slab_cache, object, size,
 						flags, true);
diff --git a/mm/slub.c b/mm/slub.c
index 1d3f2355df3b..afb035b0bf2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
-	n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
-		      GFP_KERNEL);
+	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
 	page->freelist = get_freepointer(kmem_cache_node, n);
 	page->inuse = 1;
 	page->frozen = 0;