From patchwork Tue Feb 18 19:59:23 2025
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 13980762
Message-ID: <20250218200023.055162048@goodmis.org>
User-Agent: quilt/0.68
Date: Tue, 18 Feb 2025 14:59:23 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-kbuild@vger.kernel.org, bpf, linux-arm-kernel@lists.infradead.org,
    linux-s390@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
    Peter Zijlstra, Linus Torvalds, Masahiro Yamada, Nathan Chancellor,
    Nicolas Schier, Zheng Yejian, Martin Kelly, Christophe Leroy,
    Josh Poimboeuf, Heiko Carstens, Catalin Marinas, Will Deacon,
    Vasily Gorbik, Alexander Gordeev
Subject: [PATCH v5 5/6] ftrace: Update the mcount_loc check of skipped entries
References: <20250218195918.255228630@goodmis.org>

From: Steven Rostedt

Now that weak functions turn into skipped entries, update the check to
make sure that the amount that was allocated would fit both the entries
that were used as well as those that were skipped.

Signed-off-by: Steven Rostedt (Google)
---
Changes since v4: https://lore.kernel.org/20250217153453.792481985@goodmis.org

- Initialized variable "remaining" to zero

 kernel/trace/ftrace.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e3f89924f603..e657013424aa 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7111,7 +7111,28 @@ static int ftrace_process_locs(struct module *mod,
 
 	/* We should have used all pages unless we skipped some */
 	if (pg_unuse) {
-		WARN_ON(!skipped);
+		unsigned long pg_remaining, remaining = 0;
+		unsigned long skip;
+
+		/* Count the number of entries unused and compare it to skipped. */
+		pg_remaining = (ENTRIES_PER_PAGE << pg->order) - pg->index;
+
+		if (!WARN(skipped < pg_remaining, "Extra allocated pages for ftrace")) {
+
+			skip = skipped - pg_remaining;
+
+			for (pg = pg_unuse; pg; pg = pg->next)
+				remaining += 1 << pg->order;
+
+			skip = DIV_ROUND_UP(skip, ENTRIES_PER_PAGE);
+
+			/*
+			 * Check to see if the number of pages remaining would
+			 * just fit the number of entries skipped.
+			 */
+			WARN(skip != remaining, "Extra allocated pages for ftrace: %lu with %lu skipped",
+			     remaining, skipped);
+		}
 
 		/* Need to synchronize with ftrace_location_range() */
 		synchronize_rcu();
		ftrace_free_pages(pg_unuse);