From patchwork Thu Mar 2 16:41:27 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13157649
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v2 1/3] ring_buffer: Change some static functions to void
Date: Thu, 2 Mar 2023 17:41:27 +0100
Message-Id: <20230302164129.4862-2-ubizjak@gmail.com>
In-Reply-To: <20230302164129.4862-1-ubizjak@gmail.com>
References: <20230302164129.4862-1-ubizjak@gmail.com>
List-ID: linux-trace-kernel@vger.kernel.org

The results of some static functions are not used. Change the type of
these functions to void and remove the unnecessary returns.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
---
 kernel/trace/ring_buffer.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index af50d931b020..05fdc92554df 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1569,15 +1569,12 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 }
 
-static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+static void rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
 			  struct buffer_page *bpage)
 {
 	unsigned long val = (unsigned long)bpage;
 
-	if (RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK))
-		return 1;
-
-	return 0;
+	RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK);
 }
 
 /**
@@ -1587,30 +1584,28 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
  * As a safety measure we check to make sure the data pages have not
  * been corrupted.
  */
-static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *head = rb_list_head(cpu_buffer->pages);
 	struct list_head *tmp;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->next)->prev) != head))
-		return -1;
+		return;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->prev)->next) != head))
-		return -1;
+		return;
 
 	for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
-			return -1;
+			return;
 
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
-			return -1;
+			return;
 	}
-
-	return 0;
 }
 
 static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
@@ -4500,7 +4495,6 @@ rb_update_read_stamp(struct ring_buffer_per_cpu *cpu_buffer,
 	default:
 		RB_WARN_ON(cpu_buffer, 1);
 	}
-	return;
 }
 
 static void
@@ -4531,7 +4525,6 @@ rb_update_iter_read_stamp(struct ring_buffer_iter *iter,
 	default:
 		RB_WARN_ON(iter->cpu_buffer, 1);
 	}
-	return;
 }
 
 static struct buffer_page *
@@ -4946,7 +4939,6 @@ rb_reader_unlock(struct ring_buffer_per_cpu *cpu_buffer, bool locked)
 {
 	if (likely(locked))
 		raw_spin_unlock(&cpu_buffer->reader_lock);
-	return;
 }
 
 /**
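For readers skimming the archive, the conversion above leans on the fact
that RB_WARN_ON, like the kernel's WARN_ON family, both reports a failed
check and evaluates to the checked condition, so once no caller consumes
the result it can stand alone as a statement. Below is a minimal
user-space sketch of that macro shape; warn_on_once and the
check_alignment helpers are illustrative names, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative analogue of RB_WARN_ON(): print a warning the first time
 * the condition is true, and evaluate to the condition itself.
 * (Uses the GNU statement-expression extension, as the kernel does.) */
#define warn_on_once(cond)						\
	({								\
		static bool __warned;					\
		bool __c = !!(cond);					\
		if (__c && !__warned) {					\
			__warned = true;				\
			fprintf(stderr, "warning: %s\n", #cond);	\
		}							\
		__c;							\
	})

/* Pre-patch style: the result is propagated to the caller. */
static int check_alignment_old(const void *p)
{
	if (warn_on_once((unsigned long)p & 3UL))
		return 1;
	return 0;
}

/* Post-patch style: no caller looks at the result, so the macro is
 * invoked purely for its warning side effect. */
static void check_alignment_new(const void *p)
{
	warn_on_once((unsigned long)p & 3UL);
}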
From patchwork Thu Mar 2 16:41:28 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13157650
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v2 2/3] ring_buffer: Change some static functions to bool
Date: Thu, 2 Mar 2023 17:41:28 +0100
Message-Id: <20230302164129.4862-3-ubizjak@gmail.com>
In-Reply-To: <20230302164129.4862-1-ubizjak@gmail.com>
References: <20230302164129.4862-1-ubizjak@gmail.com>
List-ID: linux-trace-kernel@vger.kernel.org

The return values of some functions are of boolean type. Change the
type of these functions to bool and adjust their return values
accordingly. Also change the type of some internal variables to bool.

No functional change intended.
Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
---
 kernel/trace/ring_buffer.c | 47 ++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 25 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 05fdc92554df..4188af7d4cfe 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -163,7 +163,7 @@ enum {
 #define extended_time(event) \
 	(event->type_len >= RINGBUF_TYPE_TIME_EXTEND)
 
-static inline int rb_null_event(struct ring_buffer_event *event)
+static inline bool rb_null_event(struct ring_buffer_event *event)
 {
 	return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta;
 }
@@ -367,11 +367,9 @@ static void free_buffer_page(struct buffer_page *bpage)
 /*
  * We need to fit the time_stamp delta into 27 bits.
  */
-static inline int test_time_stamp(u64 delta)
+static inline bool test_time_stamp(u64 delta)
 {
-	if (delta & TS_DELTA_TEST)
-		return 1;
-	return 0;
+	return !!(delta & TS_DELTA_TEST);
 }
 
 #define BUF_PAGE_SIZE (PAGE_SIZE - BUF_PAGE_HDR_SIZE)
@@ -700,7 +698,7 @@ rb_time_read_cmpxchg(local_t *l, unsigned long expect, unsigned long set)
 	return ret == expect;
 }
 
-static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
+static bool rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 {
 	unsigned long cnt, top, bottom, msb;
 	unsigned long cnt2, top2, bottom2, msb2;
@@ -1490,7 +1488,7 @@ rb_set_head_page(struct ring_buffer_per_cpu *cpu_buffer)
 	return NULL;
 }
 
-static int rb_head_page_replace(struct buffer_page *old,
+static bool rb_head_page_replace(struct buffer_page *old,
 				struct buffer_page *new)
 {
 	unsigned long *ptr = (unsigned long *)&old->list.prev->next;
@@ -1917,7 +1915,7 @@ static inline unsigned long rb_page_write(struct buffer_page *bpage)
 	return local_read(&bpage->write) & RB_WRITE_MASK;
 }
 
-static int
+static bool
 rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 {
 	struct list_head *tail_page, *to_remove, *next_page;
@@ -2030,12 +2028,13 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	return nr_removed == 0;
 }
 
-static int
+static bool
 rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *pages = &cpu_buffer->new_pages;
-	int retries, success;
+	int retries;
 	unsigned long flags;
+	bool success;
 
 	/* Can be called at early boot up, where interrupts must not been enabled */
 	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
@@ -2054,7 +2053,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	 * spinning.
 	 */
 	retries = 10;
-	success = 0;
+	success = false;
 	while (retries--) {
 		struct list_head *head_page, *prev_page, *r;
 		struct list_head *last_page, *first_page;
@@ -2083,7 +2082,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 			 * pointer to point to end of list
 			 */
 			head_page->prev = last_page;
-			success = 1;
+			success = true;
 			break;
 		}
 	}
@@ -2111,7 +2110,7 @@
 
 static void rb_update_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	int success;
+	bool success;
 
 	if (cpu_buffer->nr_pages_to_update > 0)
 		success = rb_insert_pages(cpu_buffer);
@@ -2994,7 +2993,7 @@ static u64 rb_time_delta(struct ring_buffer_event *event)
 	}
 }
 
-static inline int
+static inline bool
 rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		  struct ring_buffer_event *event)
 {
@@ -3015,7 +3014,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 	delta = rb_time_delta(event);
 
 	if (!rb_time_read(&cpu_buffer->write_stamp, &write_stamp))
-		return 0;
+		return false;
 
 	/* Make sure the write stamp is read before testing the location */
 	barrier();
@@ -3028,7 +3027,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		/* Something came in, can't discard */
 		if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
 				       write_stamp, write_stamp - delta))
-			return 0;
+			return false;
 
 		/*
 		 * It's possible that the event time delta is zero
@@ -3061,12 +3060,12 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		if (index == old_index) {
 			/* update counters */
 			local_sub(event_length, &cpu_buffer->entries_bytes);
-			return 1;
+			return true;
 		}
 	}
 
 	/* could not discard */
-	return 0;
+	return false;
 }
 
 static void rb_start_commit(struct ring_buffer_per_cpu *cpu_buffer)
@@ -3281,7 +3280,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
  * Note: The TRANSITION bit only handles a single transition between context.
  */
-static __always_inline int
+static __always_inline bool
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	unsigned int val = cpu_buffer->current_context;
@@ -3298,14 +3297,14 @@ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 		bit = RB_CTX_TRANSITION;
 		if (val & (1 << (bit + cpu_buffer->nest))) {
 			do_ring_buffer_record_recursion();
-			return 1;
+			return true;
 		}
 	}
 
 	val |= (1 << (bit + cpu_buffer->nest));
 	cpu_buffer->current_context = val;
 
-	return 0;
+	return false;
 }
 
 static __always_inline void
@@ -5408,9 +5407,8 @@ bool ring_buffer_empty(struct trace_buffer *buffer)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
-	bool dolock;
+	bool dolock, ret;
 	int cpu;
-	int ret;
 
 	/* yes this is racy, but if you don't like the race, lock the buffer */
 	for_each_buffer_cpu(buffer, cpu) {
@@ -5438,8 +5436,7 @@ bool ring_buffer_empty_cpu(struct trace_buffer *buffer, int cpu)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
-	bool dolock;
-	int ret;
+	bool dolock, ret;
 
 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
 		return true;
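A side note on the test_time_stamp() hunk in this patch: with a bool
return type, C already normalizes any nonzero scalar to true, so the
double negation is purely documentary. A small stand-alone sketch, using
an illustrative mask rather than the kernel's TS_DELTA_TEST definition:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for TS_DELTA_TEST: bits set iff a delta does
 * not fit into 27 bits (the kernel derives its mask from TS_SHIFT). */
#define DELTA_TEST_MASK (~((1ULL << 27) - 1))

static inline bool delta_too_big(uint64_t delta)
{
	/* The !! is not required for correctness (converting a scalar to
	 * bool already yields 0 or 1), but it documents that a bit mask
	 * is being collapsed into a truth value. */
	return !!(delta & DELTA_TEST_MASK);
}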
From patchwork Thu Mar 2 16:41:29 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13157651
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v2 3/3] ring_buffer: Use try_cmpxchg instead of cmpxchg
Date: Thu, 2 Mar 2023 17:41:29 +0100
Message-Id: <20230302164129.4862-4-ubizjak@gmail.com>
In-Reply-To: <20230302164129.4862-1-ubizjak@gmail.com>
References: <20230302164129.4862-1-ubizjak@gmail.com>
List-ID: linux-trace-kernel@vger.kernel.org

Use try_cmpxchg instead of the cmpxchg(*ptr, old, new) == old pattern.
The x86 CMPXCHG instruction returns success in the ZF flag, so this
change saves a compare after cmpxchg (and the related move instruction
in front of cmpxchg). Also, try_cmpxchg implicitly assigns the old
value of *ptr to "old" when cmpxchg fails, so there is no need to
re-read the value in the loop.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
---
v2: Convert only loops with cmpxchg.
---
 kernel/trace/ring_buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4188af7d4cfe..9a6ba5824cf2 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4061,10 +4061,10 @@ void ring_buffer_record_off(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd | RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_off);
 
@@ -4084,10 +4084,10 @@ void ring_buffer_record_on(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd & ~RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_on);
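To make the saving concrete, here is a user-space C11 sketch of the two
loop shapes; C11's atomic_compare_exchange_* already has try_cmpxchg
semantics (it returns success and, on failure, stores the observed value
back into the expected argument). The cmpxchg_demo helper and the
set_flag_* functions are illustrative, not kernel code.

#include <stdatomic.h>
#include <stdbool.h>

/* Emulates the kernel's atomic_cmpxchg(): returns the value observed
 * at *v, whether or not the exchange succeeded. */
static unsigned int cmpxchg_demo(atomic_uint *v, unsigned int old,
				 unsigned int new)
{
	atomic_compare_exchange_strong(v, &old, new);
	return old;	/* on failure, C11 stored the current value here */
}

/* Pre-patch shape: re-read inside the loop, then compare the returned
 * value against the expected one. */
static void set_flag_old(atomic_uint *v, unsigned int flag)
{
	unsigned int rd, new_rd;

	do {
		rd = atomic_load(v);
		new_rd = rd | flag;
	} while (cmpxchg_demo(v, rd, new_rd) != rd);
}

/* Post-patch shape: one load up front; on failure the observed value
 * is written back into rd, so the loop needs neither a re-read nor an
 * extra compare. */
static void set_flag_new(atomic_uint *v, unsigned int flag)
{
	unsigned int rd = atomic_load(v);
	unsigned int new_rd;

	do {
		new_rd = rd | flag;
	} while (!atomic_compare_exchange_weak(v, &rd, new_rd));
}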