From patchwork Sun Mar 5 15:55:30 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13160175
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu, Mukesh Ojha
Subject: [PATCH v4 1/3] ring_buffer: Change some static functions to void
Date: Sun, 5 Mar 2023 16:55:30 +0100
Message-Id: <20230305155532.5549-2-ubizjak@gmail.com>
In-Reply-To: <20230305155532.5549-1-ubizjak@gmail.com>
References: <20230305155532.5549-1-ubizjak@gmail.com>
The results of some static functions are not used. Change the return
type of these functions to void and remove the unnecessary return
statements.

No functional change intended.

Cc: Steven Rostedt
Signed-off-by: Uros Bizjak
Reviewed-by: Masami Hiramatsu
Reviewed-by: Mukesh Ojha
---
 kernel/trace/ring_buffer.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index af50d931b020..05fdc92554df 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1569,15 +1569,12 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 }
 
-static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+static void rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
 			  struct buffer_page *bpage)
 {
 	unsigned long val = (unsigned long)bpage;
 
-	if (RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK))
-		return 1;
-
-	return 0;
+	RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK);
 }
 
 /**
@@ -1587,30 +1584,28 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
  * As a safety measure we check to make sure the data pages have not
  * been corrupted.
  */
-static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *head = rb_list_head(cpu_buffer->pages);
 	struct list_head *tmp;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->next)->prev) != head))
-		return -1;
+		return;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->prev)->next) != head))
-		return -1;
+		return;
 
 	for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
-			return -1;
+			return;
 
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
-			return -1;
+			return;
 	}
-
-	return 0;
 }
 
 static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
@@ -4500,7 +4495,6 @@ rb_update_read_stamp(struct ring_buffer_per_cpu *cpu_buffer,
 	default:
 		RB_WARN_ON(cpu_buffer, 1);
 	}
-	return;
 }
 
 static void
@@ -4531,7 +4525,6 @@ rb_update_iter_read_stamp(struct ring_buffer_iter *iter,
 	default:
 		RB_WARN_ON(iter->cpu_buffer, 1);
 	}
-	return;
 }
 
 static struct buffer_page *
@@ -4946,7 +4939,6 @@ rb_reader_unlock(struct ring_buffer_per_cpu *cpu_buffer, bool locked)
 {
 	if (likely(locked))
 		raw_spin_unlock(&cpu_buffer->reader_lock);
-	return;
 }
 
 /**
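For context, a minimal standalone sketch of the transformation this
patch applies; check_value() and its mask are hypothetical names for
illustration, not taken from ring_buffer.c:

/* Before: a status code is computed, but no caller ever checks it. */
static int check_value(unsigned long val)
{
	if (WARN_ON(val & 3UL))
		return 1;

	return 0;
}

/* After: the function only warns, so void is the honest return type.
 * WARN_ON() still evaluates and reports the condition; only the unused
 * status propagation is dropped.
 */
static void check_value(unsigned long val)
{
	WARN_ON(val & 3UL);
}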
From patchwork Sun Mar 5 15:55:31 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13160176
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu, Mukesh Ojha
Subject: [PATCH v4 2/3] ring_buffer: Change some static functions to bool
Date: Sun, 5 Mar 2023 16:55:31 +0100
Message-Id: <20230305155532.5549-3-ubizjak@gmail.com>
In-Reply-To: <20230305155532.5549-1-ubizjak@gmail.com>
References: <20230305155532.5549-1-ubizjak@gmail.com>

The return values of some functions are of boolean type. Change the
return type of these functions to bool and adjust their return values
accordingly. Also change the type of some internal variables to bool.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
Reviewed-by: Mukesh Ojha
---
v3: Rearrange variable declarations.
v4: Change ret in rb_get_reader_page.
    Rearrange variable declarations.
---
 kernel/trace/ring_buffer.c | 47 +++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 24 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 05fdc92554df..5235037f83d3 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -163,7 +163,7 @@ enum {
 #define extended_time(event) \
 	(event->type_len >= RINGBUF_TYPE_TIME_EXTEND)
 
-static inline int rb_null_event(struct ring_buffer_event *event)
+static inline bool rb_null_event(struct ring_buffer_event *event)
 {
 	return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta;
 }
@@ -367,11 +367,9 @@ static void free_buffer_page(struct buffer_page *bpage)
 /*
  * We need to fit the time_stamp delta into 27 bits.
  */
-static inline int test_time_stamp(u64 delta)
+static inline bool test_time_stamp(u64 delta)
 {
-	if (delta & TS_DELTA_TEST)
-		return 1;
-	return 0;
+	return !!(delta & TS_DELTA_TEST);
 }
 
 #define BUF_PAGE_SIZE (PAGE_SIZE - BUF_PAGE_HDR_SIZE)
@@ -700,7 +698,7 @@ rb_time_read_cmpxchg(local_t *l, unsigned long expect, unsigned long set)
 	return ret == expect;
 }
 
-static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
+static bool rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 {
 	unsigned long cnt, top, bottom, msb;
 	unsigned long cnt2, top2, bottom2, msb2;
@@ -1490,7 +1488,7 @@ rb_set_head_page(struct ring_buffer_per_cpu *cpu_buffer)
 	return NULL;
 }
 
-static int rb_head_page_replace(struct buffer_page *old,
+static bool rb_head_page_replace(struct buffer_page *old,
 				struct buffer_page *new)
 {
 	unsigned long *ptr = (unsigned long *)&old->list.prev->next;
@@ -1917,7 +1915,7 @@ static inline unsigned long rb_page_write(struct buffer_page *bpage)
 	return local_read(&bpage->write) & RB_WRITE_MASK;
 }
 
-static int
+static bool
 rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 {
 	struct list_head *tail_page, *to_remove, *next_page;
@@ -2030,12 +2028,13 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	return nr_removed == 0;
 }
 
-static int
+static bool
 rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *pages = &cpu_buffer->new_pages;
-	int retries, success;
 	unsigned long flags;
+	bool success;
+	int retries;
 
 	/* Can be called at early boot up, where interrupts must not been enabled */
 	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
@@ -2054,7 +2053,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	 * spinning.
 	 */
 	retries = 10;
-	success = 0;
+	success = false;
 	while (retries--) {
 		struct list_head *head_page, *prev_page, *r;
 		struct list_head *last_page, *first_page;
@@ -2083,7 +2082,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 			 * pointer to point to end of list
 			 */
 			head_page->prev = last_page;
-			success = 1;
+			success = true;
 			break;
 		}
 	}
@@ -2111,7 +2110,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 
 static void rb_update_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	int success;
+	bool success;
 
 	if (cpu_buffer->nr_pages_to_update > 0)
 		success = rb_insert_pages(cpu_buffer);
@@ -2994,7 +2993,7 @@ static u64 rb_time_delta(struct ring_buffer_event *event)
 	}
 }
 
-static inline int
+static inline bool
 rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		  struct ring_buffer_event *event)
 {
@@ -3015,7 +3014,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 	delta = rb_time_delta(event);
 
 	if (!rb_time_read(&cpu_buffer->write_stamp, &write_stamp))
-		return 0;
+		return false;
 
 	/* Make sure the write stamp is read before testing the location */
 	barrier();
@@ -3028,7 +3027,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		/* Something came in, can't discard */
 		if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
 				     write_stamp, write_stamp - delta))
-			return 0;
+			return false;
 
 		/*
 		 * It's possible that the event time delta is zero
@@ -3061,12 +3060,12 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		if (index == old_index) {
 			/* update counters */
 			local_sub(event_length, &cpu_buffer->entries_bytes);
-			return 1;
+			return true;
 		}
 	}
 
 	/* could not discard */
-	return 0;
+	return false;
 }
 
 static void rb_start_commit(struct ring_buffer_per_cpu *cpu_buffer)
@@ -3281,7 +3280,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
  * Note: The TRANSITION bit only handles a single transition between context.
  */
-static __always_inline int
+static __always_inline bool
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	unsigned int val = cpu_buffer->current_context;
@@ -3298,14 +3297,14 @@ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 		bit = RB_CTX_TRANSITION;
 		if (val & (1 << (bit + cpu_buffer->nest))) {
 			do_ring_buffer_record_recursion();
-			return 1;
+			return true;
 		}
 	}
 
 	val |= (1 << (bit + cpu_buffer->nest));
 	cpu_buffer->current_context = val;
 
-	return 0;
+	return false;
 }
 
 static __always_inline void
@@ -4534,7 +4533,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 	unsigned long overwrite;
 	unsigned long flags;
 	int nr_loops = 0;
-	int ret;
+	bool ret;
 
 	local_irq_save(flags);
 	arch_spin_lock(&cpu_buffer->lock);
@@ -5409,8 +5408,8 @@ bool ring_buffer_empty(struct trace_buffer *buffer)
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
 	bool dolock;
+	bool ret;
 	int cpu;
-	int ret;
 
 	/* yes this is racy, but if you don't like the race, lock the buffer */
 	for_each_buffer_cpu(buffer, cpu) {
@@ -5439,7 +5438,7 @@ bool ring_buffer_empty_cpu(struct trace_buffer *buffer, int cpu)
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
 	bool dolock;
-	int ret;
+	bool ret;
 
 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
 		return true;
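A side note on the test_time_stamp() hunk above: once the return type
is bool, the !! is not strictly required, since conversion to _Bool
already normalizes any non-zero value to true. A minimal sketch of the
idiom, with hypothetical names:

/* The double negation makes the flag test read as an explicit
 * boolean, and the result stays correct even if the return type is
 * later widened back to a plain integer type.
 */
static inline bool mask_test(u64 delta, u64 mask)
{
	return !!(delta & mask);
}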
From patchwork Sun Mar 5 15:55:32 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13160177
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v4 3/3] ring_buffer: Use try_cmpxchg instead of cmpxchg
Date: Sun, 5 Mar 2023 16:55:32 +0100
Message-Id: <20230305155532.5549-4-ubizjak@gmail.com>
In-Reply-To: <20230305155532.5549-1-ubizjak@gmail.com>
References: <20230305155532.5549-1-ubizjak@gmail.com>

Use try_cmpxchg instead of cmpxchg(*ptr, old, new) == old. The x86
CMPXCHG instruction returns success in the ZF flag, so this change
saves a compare after the cmpxchg (and the related move instruction in
front of it). Also, try_cmpxchg implicitly assigns the old *ptr value
to "old" when the cmpxchg fails; there is no need to re-read the value
in the loop.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
Acked-by: Mukesh Ojha
---
v2: Convert only loops with cmpxchg.
---
 kernel/trace/ring_buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 5235037f83d3..d17345b522f4 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4061,10 +4061,10 @@ void ring_buffer_record_off(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd | RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_off);
 
@@ -4084,10 +4084,10 @@ void ring_buffer_record_on(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd & ~RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_on);
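To make the shape of the conversion explicit, a standalone sketch;
set_flag_cmpxchg() and set_flag_try_cmpxchg() are hypothetical helpers,
not part of the patch. atomic_try_cmpxchg(v, &old, new) returns true
when *v matched old and was replaced by new; on failure it stores the
value it actually observed back into old, which is why the loop body no
longer needs an atomic_read():

/* Before: re-read the value and compare the cmpxchg result against
 * 'old' on every iteration.
 */
static void set_flag_cmpxchg(atomic_t *v, int flag)
{
	int old;

	do {
		old = atomic_read(v);
	} while (atomic_cmpxchg(v, old, old | flag) != old);
}

/* After: read once up front; a failed atomic_try_cmpxchg() refreshes
 * 'old' with the current value, so only the new value is recomputed.
 */
static void set_flag_try_cmpxchg(atomic_t *v, int flag)
{
	int old = atomic_read(v);

	do {
		/* 'old' already holds the latest observed value here */
	} while (!atomic_try_cmpxchg(v, &old, old | flag));
}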