From patchwork Tue Nov 10 22:20:23 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11895675
Date: Tue, 10 Nov 2020 23:20:23 +0100
Message-Id: <936c0c198145b663e031527c49a6895bd21ac3a0.1605046662.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.29.2.222.g5d2a92d10f8-goog
Subject: [PATCH v2 19/20] kasan, mm: allow cache merging with no metadata
From: Andrey Konovalov
To: Dmitry Vyukov, Alexander Potapenko, Marco Elver
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, Evgenii Stepanov,
    Andrey Ryabinin, Branislav Rankov, Kevin Brodsky, Andrew Morton,
    kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
List-ID: <linux-mm.kvack.org>

Cache merging is disabled with KASAN because KASAN puts its metadata
right after the allocated object. When the merged caches have slightly
different sizes, the metadata ends up in different places, which KASAN
doesn't support.

It might be possible to adjust the metadata allocation algorithm and
make it friendly to the cache merging code. Instead, this change takes a
simpler approach and allows merging caches when no metadata is present,
which is the case for hardware tag-based KASAN with kasan.mode=prod.
Signed-off-by: Andrey Konovalov
Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba
---
 include/linux/kasan.h | 26 ++++++++++++++++++++++++--
 mm/kasan/common.c     | 11 +++++++++++
 mm/slab_common.c      | 11 ++++++++---
 3 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 534ab3e2935a..c754eca356f7 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -81,17 +81,35 @@ struct kasan_cache {
 };
 
 #ifdef CONFIG_KASAN_HW_TAGS
+
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
+
 static inline kasan_enabled(void)
 {
 	return static_branch_likely(&kasan_flag_enabled);
 }
-#else
+
+slab_flags_t __kasan_never_merge(slab_flags_t flags);
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_enabled())
+		return __kasan_never_merge(flags);
+	return flags;
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
 static inline kasan_enabled(void)
 {
 	return true;
 }
-#endif
+
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
+
+#endif /* CONFIG_KASAN_HW_TAGS */
 
 void __kasan_alloc_pages(struct page *page, unsigned int order);
 static inline void kasan_alloc_pages(struct page *page, unsigned int order)
@@ -240,6 +258,10 @@ static inline kasan_enabled(void)
 {
 	return false;
 }
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 940b42231069..25b18c145b06 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -81,6 +81,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 }
 #endif /* CONFIG_KASAN_STACK */
 
+/*
+ * Only allow cache merging when stack collection is disabled and no metadata
+ * is present.
+ */
+slab_flags_t __kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_stack_collection_enabled())
+		return flags;
+	return flags & ~SLAB_KASAN;
+}
+
 void __kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	u8 tag;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f1b0c4a22f08..3042ee8ea9ce 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -49,12 +50,16 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);
 
 /*
- * Set of flags that will prevent slab merging
+ * Set of flags that will prevent slab merging.
+ * Use slab_never_merge() instead.
  */
 #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
 		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
 		SLAB_FAILSLAB | SLAB_KASAN)
 
+/* KASAN allows merging in some configurations and will remove SLAB_KASAN. */
+#define slab_never_merge() (kasan_never_merge(SLAB_NEVER_MERGE))
+
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
 
@@ -164,7 +169,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
  */
 int slab_unmergeable(struct kmem_cache *s)
 {
-	if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+	if (slab_nomerge || (s->flags & slab_never_merge()))
 		return 1;
 
 	if (s->ctor)
@@ -198,7 +203,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	size = ALIGN(size, align);
 	flags = kmem_cache_flags(size, flags, name, NULL);
 
-	if (flags & SLAB_NEVER_MERGE)
+	if (flags & slab_never_merge())
 		return NULL;
 
 	list_for_each_entry_reverse(s, &slab_caches, list) {
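
[Editor's note] The sketch below is an editor-added, userspace-only illustration and is not
part of the patch: it simulates how kasan_never_merge() strips SLAB_KASAN out of the
never-merge mask when KASAN collects no per-object metadata, so a cache flagged only with
SLAB_KASAN becomes mergeable while a cache with a genuine debug flag such as SLAB_RED_ZONE
stays unmergeable. The flag values and the kasan_stack_collection_enabled() stub are
simplified stand-ins for the kernel definitions.

/*
 * Illustrative simulation of the flag filtering introduced by this patch.
 * Compile as ordinary userspace C; nothing here is kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int slab_flags_t;

/* Hypothetical bit values standing in for the kernel's SLAB_* flags. */
#define SLAB_RED_ZONE	0x01U
#define SLAB_POISON	0x02U
#define SLAB_KASAN	0x04U

#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_KASAN)

/* Stub: pretend we run hardware tag-based KASAN with kasan.mode=prod. */
static bool kasan_stack_collection_enabled(void)
{
	return false;
}

/* Mirrors the logic of __kasan_never_merge() from the patch. */
static slab_flags_t kasan_never_merge(slab_flags_t flags)
{
	if (kasan_stack_collection_enabled())
		return flags;		/* metadata present: keep SLAB_KASAN */
	return flags & ~SLAB_KASAN;	/* no metadata: allow merging */
}

#define slab_never_merge() (kasan_never_merge(SLAB_NEVER_MERGE))

/* Simplified stand-in for slab_unmergeable(): nonzero means "never merge". */
static bool slab_unmergeable(slab_flags_t cache_flags)
{
	return cache_flags & slab_never_merge();
}

int main(void)
{
	slab_flags_t kasan_only = SLAB_KASAN;
	slab_flags_t kasan_and_red_zone = SLAB_KASAN | SLAB_RED_ZONE;

	printf("SLAB_KASAN only          -> unmergeable: %d\n",
	       slab_unmergeable(kasan_only));
	printf("SLAB_KASAN|SLAB_RED_ZONE -> unmergeable: %d\n",
	       slab_unmergeable(kasan_and_red_zone));
	return 0;
}

With the stub returning false (modeling kasan.mode=prod), the first cache prints
"unmergeable: 0" and the second "unmergeable: 1": SLAB_KASAN alone no longer blocks merging,
while real debug features still do, which is the behavior the patch aims for.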