From patchwork Tue Sep 21 10:10:10 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12507481
Date: Tue, 21 Sep 2021 12:10:10 +0200
Message-Id: <20210921101014.1938382-1-elver@google.com>
Subject: [PATCH v2 1/5] stacktrace: move filter_irq_stacks() to kernel/stacktrace.c
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

filter_irq_stacks() has little to do with the stackdepot implementation,
except that stackdepot users (such as KASAN) typically use it to trim a
stack trace before saving it. However, filter_irq_stacks() itself is not
useful without a stack trace as obtained by stack_trace_save() and
friends.

Therefore, move filter_irq_stacks() to kernel/stacktrace.c, so that new
users of filter_irq_stacks() do not have to start depending on
STACKDEPOT only for filter_irq_stacks().

Signed-off-by: Marco Elver
Acked-by: Dmitry Vyukov
---
v2:
* New patch.
---
 include/linux/stackdepot.h |  2 --
 include/linux/stacktrace.h |  1 +
 kernel/stacktrace.c        | 30 ++++++++++++++++++++++++++++++
 lib/stackdepot.c           | 24 ------------------------
 4 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 6bb4bc1a5f54..22919a94ca19 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -19,8 +19,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
-unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
-
 #ifdef CONFIG_STACKDEPOT
 int stack_depot_init(void);
 #else
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 9edecb494e9e..bef158815e83 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -21,6 +21,7 @@ unsigned int stack_trace_save_tsk(struct task_struct *task,
 unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
 				   unsigned int size, unsigned int skipnr);
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
 
 /* Internal interfaces. Do not use in generic code */
 #ifdef CONFIG_ARCH_STACKWALK
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index 9f8117c7cfdd..9c625257023d 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -13,6 +13,7 @@
 #include <linux/export.h>
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
+#include <linux/interrupt.h>
 
 /**
  * stack_trace_print - Print the entries in the stack trace
@@ -373,3 +374,32 @@ unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
 
 #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
 #endif /* !CONFIG_ARCH_STACKWALK */
+
+static inline bool in_irqentry_text(unsigned long ptr)
+{
+	return (ptr >= (unsigned long)&__irqentry_text_start &&
+		ptr < (unsigned long)&__irqentry_text_end) ||
+	       (ptr >= (unsigned long)&__softirqentry_text_start &&
+		ptr < (unsigned long)&__softirqentry_text_end);
+}
+
+/**
+ * filter_irq_stacks - Find first IRQ stack entry in trace
+ * @entries:	Pointer to stack trace array
+ * @nr_entries:	Number of entries in the storage array
+ *
+ * Return: Number of trace entries until IRQ stack starts.
+ */
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr_entries; i++) {
+		if (in_irqentry_text(entries[i])) {
+			/* Include the irqentry function into the stack. */
+			return i + 1;
+		}
+	}
+	return nr_entries;
+}
+EXPORT_SYMBOL_GPL(filter_irq_stacks);
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 0a2e417f83cb..e90f0f19e77f 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -20,7 +20,6 @@
  */
 
 #include <linux/gfp.h>
-#include <linux/interrupt.h>
 #include <linux/jhash.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
@@ -341,26 +340,3 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 	return retval;
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
-
-static inline int in_irqentry_text(unsigned long ptr)
-{
-	return (ptr >= (unsigned long)&__irqentry_text_start &&
-		ptr < (unsigned long)&__irqentry_text_end) ||
-	       (ptr >= (unsigned long)&__softirqentry_text_start &&
-		ptr < (unsigned long)&__softirqentry_text_end);
-}
-
-unsigned int filter_irq_stacks(unsigned long *entries,
-			       unsigned int nr_entries)
-{
-	unsigned int i;
-
-	for (i = 0; i < nr_entries; i++) {
-		if (in_irqentry_text(entries[i])) {
-			/* Include the irqentry function into the stack. */
-			return i + 1;
-		}
-	}
-	return nr_entries;
-}
-EXPORT_SYMBOL_GPL(filter_irq_stacks);
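
[Editorial note, illustration only, not part of the patch: a typical caller
first unwinds with stack_trace_save() and then trims everything below the
first IRQ entry with filter_irq_stacks() before saving or hashing the trace.
A minimal sketch, with an arbitrary depth of 64:]

	#include <linux/stacktrace.h>

	static void save_trimmed_trace(void)
	{
		unsigned long entries[64];
		unsigned int nr_entries;

		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
		/* Keep entries up to and including the first irqentry function. */
		nr_entries = filter_irq_stacks(entries, nr_entries);
		/* ... e.g. hand (entries, nr_entries) to stack_depot_save() ... */
	}

[After this patch, such a user only needs CONFIG_STACKTRACE, not STACKDEPOT.]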
From patchwork Tue Sep 21 10:10:11 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12507483
Date: Tue, 21 Sep 2021 12:10:11 +0200
In-Reply-To: <20210921101014.1938382-1-elver@google.com>
Message-Id: <20210921101014.1938382-2-elver@google.com>
References: <20210921101014.1938382-1-elver@google.com>
Subject: [PATCH v2 2/5] kfence: count unexpectedly skipped allocations
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Maintain counters for allocations that are skipped because they are
incompatible (oversized, incompatible gfp flags) or because there is no
capacity left. These allow computing the fraction of allocations that
could not be serviced by KFENCE, which we expect to be rare.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
---
v2:
* Do not count deadlock-avoidance skips.
---
 mm/kfence/core.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 7a97db8bc8e7..249d75b7e5ee 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -112,6 +112,8 @@ enum kfence_counter_id {
 	KFENCE_COUNTER_FREES,
 	KFENCE_COUNTER_ZOMBIES,
 	KFENCE_COUNTER_BUGS,
+	KFENCE_COUNTER_SKIP_INCOMPAT,
+	KFENCE_COUNTER_SKIP_CAPACITY,
 	KFENCE_COUNTER_COUNT,
 };
 static atomic_long_t counters[KFENCE_COUNTER_COUNT];
@@ -121,6 +123,8 @@ static const char *const counter_names[] = {
 	[KFENCE_COUNTER_FREES]		= "total frees",
 	[KFENCE_COUNTER_ZOMBIES]	= "zombie allocations",
 	[KFENCE_COUNTER_BUGS]		= "total bugs",
+	[KFENCE_COUNTER_SKIP_INCOMPAT]	= "skipped allocations (incompatible)",
+	[KFENCE_COUNTER_SKIP_CAPACITY]	= "skipped allocations (capacity)",
 };
 static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
 
@@ -271,8 +275,10 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 		list_del_init(&meta->list);
 	}
 	raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
-	if (!meta)
+	if (!meta) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_CAPACITY]);
 		return NULL;
+	}
 
 	if (unlikely(!raw_spin_trylock_irqsave(&meta->lock, flags))) {
 		/*
@@ -740,8 +746,10 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	 * Perform size check before switching kfence_allocation_gate, so that
 	 * we don't disable KFENCE without making an allocation.
 	 */
-	if (size > PAGE_SIZE)
+	if (size > PAGE_SIZE) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
 		return NULL;
+	}
 
 	/*
 	 * Skip allocations from non-default zones, including DMA. We cannot
@@ -749,8 +757,10 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	 * properties (e.g. reside in DMAable memory).
 	 */
 	if ((flags & GFP_ZONEMASK) ||
-	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
+	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
 		return NULL;
+	}
 
 	/*
 	 * allocation_gate only needs to become non-zero, so it doesn't make
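
[Editorial note, illustration only, not part of the patch: the new counters
show up in KFENCE's debugfs statistics alongside the existing ones. A
userspace sketch that estimates the skipped fraction, assuming debugfs is
mounted at /sys/kernel/debug and the "name: value" line format of
/sys/kernel/debug/kfence/stats:]

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[128];
		long total = 0, skipped = 0, val;
		FILE *f = fopen("/sys/kernel/debug/kfence/stats", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f)) {
			if (sscanf(line, "total allocations: %ld", &val) == 1)
				total = val;	/* "total allocations" counter */
			else if (strstr(line, "skipped allocations") &&
				 sscanf(strchr(line, ':') + 1, " %ld", &val) == 1)
				skipped += val;	/* sum all "skipped ..." counters */
		}
		fclose(f);
		if (total + skipped)
			printf("skipped: %.1f%%\n",
			       100.0 * skipped / (total + skipped));
		return 0;
	}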
From patchwork Tue Sep 21 10:10:12 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12507485
Date: Tue, 21 Sep 2021 12:10:12 +0200
In-Reply-To: <20210921101014.1938382-1-elver@google.com>
Message-Id: <20210921101014.1938382-3-elver@google.com>
References: <20210921101014.1938382-1-elver@google.com>
Subject: [PATCH v2 3/5] kfence: move saving stack trace of allocations into __kfence_alloc()
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Move the saving of the stack trace of allocations into __kfence_alloc(),
so that the stack entries array can be used outside of
kfence_guarded_alloc(), and we avoid potentially unwinding the stack
multiple times.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
---
v2:
* New patch.
---
 mm/kfence/core.c | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 249d75b7e5ee..db01814f8ff0 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -187,19 +187,26 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
  * Update the object's metadata state, including updating the alloc/free stacks
  * depending on the state transition.
  */
-static noinline void metadata_update_state(struct kfence_metadata *meta,
-					   enum kfence_object_state next)
+static noinline void
+metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next,
+		      unsigned long *stack_entries, size_t num_stack_entries)
 {
 	struct kfence_track *track =
 		next == KFENCE_OBJECT_FREED ? &meta->free_track : &meta->alloc_track;
 
 	lockdep_assert_held(&meta->lock);
 
-	/*
-	 * Skip over 1 (this) functions; noinline ensures we do not accidentally
-	 * skip over the caller by never inlining.
-	 */
-	track->num_stack_entries = stack_trace_save(track->stack_entries, KFENCE_STACK_DEPTH, 1);
+	if (stack_entries) {
+		memcpy(track->stack_entries, stack_entries,
+		       num_stack_entries * sizeof(stack_entries[0]));
+	} else {
+		/*
+		 * Skip over 1 (this) function; noinline ensures we do not
+		 * accidentally skip over the caller by never inlining.
+		 */
+		num_stack_entries = stack_trace_save(track->stack_entries, KFENCE_STACK_DEPTH, 1);
+	}
+	track->num_stack_entries = num_stack_entries;
 
 	track->pid = task_pid_nr(current);
 	track->cpu = raw_smp_processor_id();
 	track->ts_nsec = local_clock(); /* Same source as printk timestamps. */
@@ -261,7 +268,8 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 	}
 }
 
-static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
+static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp,
+				  unsigned long *stack_entries, size_t num_stack_entries)
 {
 	struct kfence_metadata *meta = NULL;
 	unsigned long flags;
@@ -320,7 +328,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	addr = (void *)meta->addr;
 
 	/* Update remaining metadata. */
-	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED);
+	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED, stack_entries, num_stack_entries);
 	/* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
@@ -400,7 +408,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 		memzero_explicit(addr, meta->size);
 
 	/* Mark the object as freed. */
-	metadata_update_state(meta, KFENCE_OBJECT_FREED);
+	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
@@ -742,6 +750,9 @@ void kfence_shutdown_cache(struct kmem_cache *s)
 
 void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
+	unsigned long stack_entries[KFENCE_STACK_DEPTH];
+	size_t num_stack_entries;
+
 	/*
 	 * Perform size check before switching kfence_allocation_gate, so that
 	 * we don't disable KFENCE without making an allocation.
@@ -786,7 +797,9 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	if (!READ_ONCE(kfence_enabled))
 		return NULL;
 
-	return kfence_guarded_alloc(s, size, flags);
+	num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 0);
+
+	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries);
 }
 
 size_t kfence_ksize(const void *addr)
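
[Editorial note, illustration only, not part of the patch: the shape of this
change is the common "unwind once, reuse everywhere" pattern, where the
outermost entry point captures the trace and callees receive the buffer
instead of unwinding again. consume_a()/consume_b() below are hypothetical
names, not KFENCE functions:]

	unsigned long entries[64];	/* KFENCE uses KFENCE_STACK_DEPTH */
	size_t n;

	n = stack_trace_save(entries, ARRAY_SIZE(entries), 0);	/* unwind once */
	consume_a(entries, n);	/* hypothetical consumer #1, e.g. metadata copy */
	consume_b(entries, n);	/* hypothetical consumer #2, e.g. a stack hash */

[This matters because unwinding is comparatively expensive, and the next
patch adds a second consumer of the same trace.]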
From patchwork Tue Sep 21 10:10:13 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12507487
Date: Tue, 21 Sep 2021 12:10:13 +0200
In-Reply-To: <20210921101014.1938382-1-elver@google.com>
Message-Id: <20210921101014.1938382-4-elver@google.com>
References: <20210921101014.1938382-1-elver@google.com>
Subject: [PATCH v2 4/5] kfence: limit currently covered allocations when pool nearly full
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

One of KFENCE's main design principles is that with increasing uptime,
allocation coverage increases sufficiently to detect previously
undetected bugs. We have observed that frequent long-lived allocations
of the same source (e.g. pagecache) tend to permanently fill up the
KFENCE pool with increasing system uptime, thus breaking the above
requirement. The workaround thus far had been increasing the sample
interval and/or increasing the KFENCE pool size, but neither is a
reliable solution.

To ensure diverse coverage of allocations, limit currently covered
allocations of the same source once pool utilization reaches 75%
(configurable via `kfence.skip_covered_thresh`) or above. The effect is
that reasonable allocation coverage is retained when the pool is close
to full. A side-effect is that this also limits frequent long-lived
allocations of the same source filling up the pool permanently.

Uniqueness of an allocation, for coverage purposes, is based on its
(partial) allocation stack trace (the source). A Counting Bloom filter
is used to check if an allocation is covered; if the allocation is
currently covered, it is skipped by KFENCE.

Testing was done using:

  (a) a synthetic workload that performs frequent long-lived allocations
      (default config values; sample_interval=1; num_objects=63), and

  (b) normal desktop workloads on an otherwise idle machine where the
      problem was first reported after a few days of uptime (default
      config values).

In both test cases the sampled allocation rate no longer drops to zero
at any point. In the case of (b) we observe (after 2 days of uptime)
15% unique allocations in the pool, 77% pool utilization, and 20%
"skipped allocations (covered)".

Signed-off-by: Marco Elver
---
v2:
* Switch to Counting Bloom filter to guarantee currently covered
  allocations being skipped.
* Use a module param for skip_covered threshold.
* Use kfence pool address as hash entropy.
* Use filter_irq_stacks().
---
 mm/kfence/core.c   | 113 ++++++++++++++++++++++++++++++++++++++++++++-
 mm/kfence/kfence.h |   2 +
 2 files changed, 113 insertions(+), 2 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index db01814f8ff0..9b3fb30f24c3 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -11,11 +11,13 @@
 #include <linux/bug.h>
 #include <linux/debugfs.h>
 #include <linux/irq_work.h>
+#include <linux/jhash.h>
 #include <linux/kcsan-checks.h>
 #include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/list.h>
 #include <linux/lockdep.h>
+#include <linux/log2.h>
 #include <linux/memblock.h>
 #include <linux/moduleparam.h>
 #include <linux/random.h>
@@ -82,6 +84,10 @@ static const struct kernel_param_ops sample_interval_param_ops = {
 };
 module_param_cb(sample_interval, &sample_interval_param_ops, &kfence_sample_interval, 0600);
 
+/* Pool usage% threshold when currently covered allocations are skipped. */
+static unsigned long kfence_skip_covered_thresh __read_mostly = 75;
+module_param_named(skip_covered_thresh, kfence_skip_covered_thresh, ulong, 0644);
+
 /* The pool of pages used for guard pages and objects. */
 char *__kfence_pool __ro_after_init;
 EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
@@ -105,6 +111,25 @@ DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
 /* Gates the allocation, ensuring only one succeeds in a given period. */
 atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
 
+/*
+ * A Counting Bloom filter of allocation coverage: limits currently covered
+ * allocations of the same source filling up the pool.
+ *
+ * Assuming a range of 15%-85% unique allocations in the pool at any point in
+ * time, the below parameters provide a probability of 0.02-0.33 for false
+ * positive hits respectively:
+ *
+ *	P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)))^HNUM
+ */
+#define ALLOC_COVERED_HNUM	2
+#define ALLOC_COVERED_SIZE	(1 << (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2))
+#define ALLOC_COVERED_HNEXT(h)	(1664525 * (h) + 1013904223)
+#define ALLOC_COVERED_MASK	(ALLOC_COVERED_SIZE - 1)
+static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
+
+/* Stack depth used to determine uniqueness of an allocation. */
+#define UNIQUE_ALLOC_STACK_DEPTH 8UL
+
 /* Statistics counters for debugfs. */
 enum kfence_counter_id {
 	KFENCE_COUNTER_ALLOCATED,
@@ -114,6 +139,7 @@ enum kfence_counter_id {
 	KFENCE_COUNTER_BUGS,
 	KFENCE_COUNTER_SKIP_INCOMPAT,
 	KFENCE_COUNTER_SKIP_CAPACITY,
+	KFENCE_COUNTER_SKIP_COVERED,
 	KFENCE_COUNTER_COUNT,
 };
 static atomic_long_t counters[KFENCE_COUNTER_COUNT];
@@ -125,11 +151,66 @@ static const char *const counter_names[] = {
 	[KFENCE_COUNTER_BUGS]		= "total bugs",
 	[KFENCE_COUNTER_SKIP_INCOMPAT]	= "skipped allocations (incompatible)",
 	[KFENCE_COUNTER_SKIP_CAPACITY]	= "skipped allocations (capacity)",
+	[KFENCE_COUNTER_SKIP_COVERED]	= "skipped allocations (covered)",
 };
 static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
 
 /* === Internals ============================================================ */
 
+static inline bool should_skip_covered(void)
+{
+	unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
+
+	return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
+}
+
+static u32 get_alloc_stack_hash(unsigned long *stack_entries, size_t num_entries)
+{
+	/* Some randomness across reboots / different machines. */
+	u32 seed = (u32)((unsigned long)__kfence_pool >> (BITS_PER_LONG - 32));
+
+	num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
+	num_entries = filter_irq_stacks(stack_entries, num_entries);
+	return jhash(stack_entries, num_entries * sizeof(stack_entries[0]), seed);
+}
+
+/*
+ * Adds (or subtracts) count @val for allocation stack trace hash
+ * @alloc_stack_hash from Counting Bloom filter.
+ */
+static void alloc_covered_add(u32 alloc_stack_hash, int val)
+{
+	int i;
+
+	if (!alloc_stack_hash)
+		return;
+
+	for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
+		atomic_add(val, &alloc_covered[alloc_stack_hash & ALLOC_COVERED_MASK]);
+		alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
+	}
+}
+
+/*
+ * Returns true if the allocation stack trace hash @alloc_stack_hash is
+ * currently contained (non-zero count) in Counting Bloom filter.
+ */
+static bool alloc_covered_contains(u32 alloc_stack_hash)
+{
+	int i;
+
+	if (!alloc_stack_hash)
+		return false;
+
+	for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
+		if (!atomic_read(&alloc_covered[alloc_stack_hash & ALLOC_COVERED_MASK]))
+			return false;
+		alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
+	}
+
+	return true;
+}
+
 static bool kfence_protect(unsigned long addr)
 {
 	return !KFENCE_WARN_ON(!kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), true));
@@ -269,7 +350,8 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 	}
 }
 
 static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp,
-				  unsigned long *stack_entries, size_t num_stack_entries)
+				  unsigned long *stack_entries, size_t num_stack_entries,
+				  u32 alloc_stack_hash)
 {
 	struct kfence_metadata *meta = NULL;
 	unsigned long flags;
@@ -332,6 +414,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	/* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
+	meta->alloc_stack_hash = alloc_stack_hash;
+
 	for_each_canary(meta, set_canary_byte);
 
 	/* Set required struct page fields. */
@@ -344,6 +428,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
+	alloc_covered_add(alloc_stack_hash, 1);
+
 	/* Memory initialization. */
 
 	/*
@@ -368,6 +454,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie)
 {
 	struct kcsan_scoped_access assert_page_exclusive;
+	u32 alloc_stack_hash;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&meta->lock, flags);
@@ -410,8 +497,13 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	/* Mark the object as freed. */
 	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 
+	alloc_stack_hash = meta->alloc_stack_hash;
+	meta->alloc_stack_hash = 0;
+
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
+	alloc_covered_add(alloc_stack_hash, -1);
+
 	/* Protect to detect use-after-frees. */
 	kfence_protect((unsigned long)addr);
 
@@ -752,6 +844,7 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
 	unsigned long stack_entries[KFENCE_STACK_DEPTH];
 	size_t num_stack_entries;
+	u32 alloc_stack_hash;
 
 	/*
 	 * Perform size check before switching kfence_allocation_gate, so that
@@ -799,7 +892,23 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 
 	num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 0);
 
-	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries);
+	/*
+	 * Do expensive check for coverage of allocation in slow-path after
+	 * allocation_gate has already become non-zero, even though it might
+	 * mean not making any allocation within a given sample interval.
+	 *
+	 * This ensures reasonable allocation coverage when the pool is almost
+	 * full, including avoiding long-lived allocations of the same source
+	 * filling up the pool (e.g. pagecache allocations).
+	 */
+	alloc_stack_hash = get_alloc_stack_hash(stack_entries, num_stack_entries);
+	if (should_skip_covered() && alloc_covered_contains(alloc_stack_hash)) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_COVERED]);
+		return NULL;
+	}
+
+	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries,
+				    alloc_stack_hash);
 }
 
 size_t kfence_ksize(const void *addr)
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index c1f23c61e5f9..2a2d5de9d379 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -87,6 +87,8 @@ struct kfence_metadata {
 	/* Allocation and free stack information. */
 	struct kfence_track alloc_track;
 	struct kfence_track free_track;
+	/* For updating alloc_covered on frees. */
+	u32 alloc_stack_hash;
 };
 
 extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
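
[Editorial note, illustration only, not part of the patch: a Counting Bloom
filter replaces the bit array of a classic Bloom filter with counters, so
elements can be removed again. With HNUM probe positions derived from one
hash via an LCG step, as this patch does, a minimal standalone sketch of the
idea looks like:]

	#include <stdbool.h>
	#include <stdint.h>

	#define HNUM 2
	#define SIZE 1024	/* power of 2; the patch uses 4 * NUM_OBJECTS */
	#define MASK (SIZE - 1)
	#define HNEXT(h) (1664525u * (h) + 1013904223u)	/* LCG, as in the patch */

	static int cbf[SIZE];

	static void cbf_add(uint32_t h, int val)	/* +1 on insert, -1 on remove */
	{
		for (int i = 0; i < HNUM; i++) {
			cbf[h & MASK] += val;
			h = HNEXT(h);
		}
	}

	static bool cbf_contains(uint32_t h)
	{
		for (int i = 0; i < HNUM; i++) {
			if (!cbf[h & MASK])
				return false;	/* any zero slot => definitely absent */
			h = HNEXT(h);
		}
		return true;		/* all slots non-zero => probably present */
	}

[Because insert and remove are symmetric, deletions never introduce false
negatives; false positives from hash collisions remain possible, per the
probability bound quoted in the comment above.]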
From patchwork Tue Sep 21 10:10:14 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12507489
Date: Tue, 21 Sep 2021 12:10:14 +0200
In-Reply-To: <20210921101014.1938382-1-elver@google.com>
Message-Id: <20210921101014.1938382-5-elver@google.com>
References: <20210921101014.1938382-1-elver@google.com>
Subject: [PATCH v2 5/5] kfence: add note to documentation about skipping covered allocations
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Add a note briefly mentioning the new policy of skipping currently
covered allocations if the pool is close to full. Since this has a
notable impact on KFENCE's bug-detection ability on systems with large
uptimes, the feature is worth pointing out.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
---
v2:
* Rewrite.
---
 Documentation/dev-tools/kfence.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/Documentation/dev-tools/kfence.rst b/Documentation/dev-tools/kfence.rst
index 0fbe3308bf37..d45f952986ae 100644
--- a/Documentation/dev-tools/kfence.rst
+++ b/Documentation/dev-tools/kfence.rst
@@ -269,6 +269,17 @@ tail of KFENCE's freelist, so that the least recently freed objects are reused
 first, and the chances of detecting use-after-frees of recently freed objects
 is increased.
 
+If pool utilization reaches 75% (default) or above, to reduce the risk of the
+pool eventually being fully occupied by allocated objects yet ensure diverse
+coverage of allocations, KFENCE limits currently covered allocations of the
+same source from further filling up the pool. The "source" of an allocation is
+based on its partial allocation stack trace. A side-effect is that this also
+limits frequent long-lived allocations (e.g. pagecache) of the same source
+filling up the pool permanently, which is the most common risk for the pool
+becoming full and the sampled allocation rate dropping to zero. The threshold
+at which to start limiting currently covered allocations can be configured via
+the boot parameter ``kfence.skip_covered_thresh`` (pool usage%).
+
 Interface
 ---------
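
[Editorial note, illustration only: the values below are hypothetical. To
only start limiting covered allocations at 90% pool usage, while keeping the
default 100 ms sample interval, one would boot with:]

	kfence.sample_interval=100 kfence.skip_covered_thresh=90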