From patchwork Tue May 11 23:54:26 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12252403
Date: Tue, 11 May 2021 16:54:26 -0700
Subject: [PATCH v2 3/3] kasan: allow freed user page poisoning to be disabled with HW tags
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas,
 Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org

Poisoning freed pages protects against kernel use-after-free. The
likelihood of such a bug involving kernel pages is significantly higher
than that for user pages. At the same time, poisoning freed pages can
impose a significant performance cost, which cannot always be justified
for user pages given the lower probability of finding a bug. Therefore,
make it possible to configure the kernel to disable freed user page
poisoning when using HW tags via the new kasan.skip_user_poison_on_free
command line option.

Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9
---
 include/linux/gfp.h            | 13 ++++++++++---
 include/linux/page-flags.h     |  9 +++++++++
 include/trace/events/mmflags.h |  9 ++++++++-
 mm/kasan/hw_tags.c             | 10 ++++++++++
 mm/page_alloc.c                | 12 +++++++-----
 5 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 68ba237365dc..9a77e5660b07 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,8 +54,9 @@ struct vm_area_struct;
 #define ___GFP_THISNODE 0x200000u
 #define ___GFP_ACCOUNT 0x400000u
 #define ___GFP_ZEROTAGS 0x800000u
+#define ___GFP_SKIP_KASAN_POISON 0x1000000u
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP 0x1000000u
+#define ___GFP_NOLOCKDEP 0x2000000u
 #else
 #define ___GFP_NOLOCKDEP 0
 #endif
@@ -233,17 +234,22 @@ struct vm_area_struct;
  *
  * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
  * __GFP_ZERO is set.
+ *
+ * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
+ * on deallocation. Typically used for userspace pages. Currently only has an
+ * effect in HW tags mode, and only if a command line option is set.
  */
 #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP ((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
 #define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
@@ -320,7 +326,8 @@ struct vm_area_struct;
 #define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)
 #define GFP_NOIO (__GFP_RECLAIM)
 #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)
-#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
+#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | \
+ __GFP_HARDWALL | __GFP_SKIP_KASAN_POISON)
 #define GFP_DMA __GFP_DMA
 #define GFP_DMA32 __GFP_DMA32
 #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 04a34c08e0a6..40e2c5000585 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -137,6 +137,9 @@ enum pageflags {
 #endif
 #ifdef CONFIG_64BIT
 	PG_arch_2,
+#endif
+#ifdef CONFIG_KASAN_HW_TAGS
+	PG_skip_kasan_poison,
 #endif
 	__NR_PAGEFLAGS,
 
@@ -443,6 +446,12 @@ TESTCLEARFLAG(Young, young, PF_ANY)
 PAGEFLAG(Idle, idle, PF_ANY)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD)
+#else
+PAGEFLAG_FALSE(SkipKASanPoison)
+#endif
+
 /*
  * PageReported() is used to track reported free pages within the Buddy
  * allocator. We can use the non-atomic version of the test and set
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 629c7a0eaff2..390270e00a1d 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -85,6 +85,12 @@
 #define IF_HAVE_PG_ARCH_2(flag,string)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) ,{1UL << flag, string}
+#else
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string)
+#endif
+
 #define __def_pageflag_names \
 	{1UL << PG_locked, "locked" }, \
 	{1UL << PG_waiters, "waiters" }, \
@@ -112,7 +118,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \
 IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \
 IF_HAVE_PG_IDLE(PG_young, "young" ) \
 IF_HAVE_PG_IDLE(PG_idle, "idle" ) \
-IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" )
+IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \
+IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags) \
 	(flags) ? __print_flags(flags, "|", \
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 34362c8d0955..954d5c2f7683 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -238,10 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	return &alloc_meta->free_track[0];
 }
 
+static bool skip_user_poison_on_free;
+static int __init skip_user_poison_on_free_param(char *buf)
+{
+	return kstrtobool(buf, &skip_user_poison_on_free);
+}
+early_param("kasan.skip_user_poison_on_free", skip_user_poison_on_free_param);
+
 void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
 {
 	bool init = !want_init_on_free() && want_init_on_alloc(flags);
 
+	if (skip_user_poison_on_free && (flags & __GFP_SKIP_KASAN_POISON))
+		SetPageSkipKASanPoison(page);
+
 	if (flags & __GFP_ZEROTAGS) {
 		int i;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 24e6f668ef73..2c3ac15ddd54 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -394,11 +394,12 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
 	return static_branch_unlikely(&deferred_pages) ||
 		(!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-		 (fpi_flags & FPI_SKIP_KASAN_POISON));
+		 (fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+		PageSkipKASanPoison(page);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -449,10 +450,11 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
 	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-		(fpi_flags & FPI_SKIP_KASAN_POISON));
+		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+		PageSkipKASanPoison(page);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1244,7 +1246,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
-	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
+	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
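
Not part of the patch: below is a standalone userspace sketch of the decision
logic the hooks above implement, under the assumption that the kernel was
booted with kasan.skip_user_poison_on_free=1 (the value is parsed with
kstrtobool, so =y or =on should work as well) on a CONFIG_KASAN_HW_TAGS
kernel. The names fake_page, alloc_hook and skip_poison_on_free are invented
for illustration; only the flag/option semantics mirror the patch, and the
deferred_pages and FPI_SKIP_KASAN_POISON cases that the real
should_skip_kasan_poison() also handles are not modelled.

/* Standalone illustration, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

/* stands in for __GFP_SKIP_KASAN_POISON (bit 24 in the patch) */
#define GFP_SKIP_KASAN_POISON (1u << 24)

/* true iff booted with kasan.skip_user_poison_on_free=1 (assumed here) */
static bool skip_user_poison_on_free = true;

struct fake_page {
	bool skip_kasan_poison; /* stands in for the PG_skip_kasan_poison flag */
};

/* mirrors the new hunk in kasan_alloc_pages(): record at allocation time
 * whether this page may skip poisoning when it is freed */
static void alloc_hook(struct fake_page *page, unsigned int gfp_flags)
{
	if (skip_user_poison_on_free && (gfp_flags & GFP_SKIP_KASAN_POISON))
		page->skip_kasan_poison = true;
}

/* mirrors the PageSkipKASanPoison() check added to should_skip_kasan_poison() */
static bool skip_poison_on_free(const struct fake_page *page)
{
	return page->skip_kasan_poison;
}

int main(void)
{
	struct fake_page user_page = { false };   /* e.g. a GFP_USER allocation */
	struct fake_page kernel_page = { false }; /* e.g. a GFP_KERNEL allocation */

	alloc_hook(&user_page, GFP_SKIP_KASAN_POISON);
	alloc_hook(&kernel_page, 0);

	printf("user page poisoned on free:   %s\n",
	       skip_poison_on_free(&user_page) ? "no" : "yes");
	printf("kernel page poisoned on free: %s\n",
	       skip_poison_on_free(&kernel_page) ? "no" : "yes");
	return 0;
}

With the option left unset, skip_user_poison_on_free stays false, the page
flag is never set, and freed user pages keep being poisoned exactly as before
the patch.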