From patchwork Fri Mar 3 15:17:04 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13158941
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v3 1/3] ring_buffer: Change some static functions to void
Date: Fri, 3 Mar 2023 16:17:04 +0100
Message-Id: <20230303151706.57851-2-ubizjak@gmail.com>
In-Reply-To: <20230303151706.57851-1-ubizjak@gmail.com>
References: <20230303151706.57851-1-ubizjak@gmail.com>

The results of some static functions are not used. Change the type of
these functions to void and remove the unnecessary returns.

No functional change intended.

Cc: Steven Rostedt
Signed-off-by: Uros Bizjak
Reviewed-by: Masami Hiramatsu
Reviewed-by: Mukesh Ojha
---
 kernel/trace/ring_buffer.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index af50d931b020..05fdc92554df 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1569,15 +1569,12 @@ static void rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 }
 
-static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+static void rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
 			  struct buffer_page *bpage)
 {
 	unsigned long val = (unsigned long)bpage;
 
-	if (RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK))
-		return 1;
-
-	return 0;
+	RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK);
 }
 
 /**
@@ -1587,30 +1584,28 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
  * As a safety measure we check to make sure the data pages have not
  * been corrupted.
  */
-static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *head = rb_list_head(cpu_buffer->pages);
 	struct list_head *tmp;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->next)->prev) != head))
-		return -1;
+		return;
 
 	if (RB_WARN_ON(cpu_buffer,
 			rb_list_head(rb_list_head(head->prev)->next) != head))
-		return -1;
+		return;
 
 	for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
-			return -1;
+			return;
 
 		if (RB_WARN_ON(cpu_buffer,
 				rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
-			return -1;
+			return;
 	}
-
-	return 0;
 }
 
 static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
@@ -4500,7 +4495,6 @@ rb_update_read_stamp(struct ring_buffer_per_cpu *cpu_buffer,
 	default:
 		RB_WARN_ON(cpu_buffer, 1);
 	}
-	return;
 }
 
 static void
@@ -4531,7 +4525,6 @@ rb_update_iter_read_stamp(struct ring_buffer_iter *iter,
 	default:
 		RB_WARN_ON(iter->cpu_buffer, 1);
 	}
-	return;
 }
 
 static struct buffer_page *
@@ -4946,7 +4939,6 @@ rb_reader_unlock(struct ring_buffer_per_cpu *cpu_buffer, bool locked)
 {
 	if (likely(locked))
 		raw_spin_unlock(&cpu_buffer->reader_lock);
-	return;
 }
 
 /**
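For readers outside the kernel tree, the shape of this conversion can be
reproduced in a few lines of plain C. This is a minimal sketch with
hypothetical names (WARN_IF() stands in for RB_WARN_ON(), which in the real
code also shuts the ring buffer down); it is not the patched code itself:

#include <stdio.h>

#define WARN_IF(cond) \
	((cond) ? (fprintf(stderr, "warning: %s\n", #cond), 1) : 0)

/* Before: an int result that no caller ever looked at. */
static int check_flag_bits_int(unsigned long val)
{
	if (WARN_IF(val & 3UL))
		return 1;

	return 0;
}

/* After: the warning side effect stays, the dead return value goes. */
static void check_flag_bits(unsigned long val)
{
	WARN_IF(val & 3UL);
}

int main(void)
{
	check_flag_bits_int(8);	/* result was always discarded anyway */
	check_flag_bits(5);	/* warns; nothing to return */

	return 0;
}
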
From patchwork Fri Mar 3 15:17:05 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13158940
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v3 2/3] ring_buffer: Change some static functions to bool
Date: Fri, 3 Mar 2023 16:17:05 +0100
Message-Id: <20230303151706.57851-3-ubizjak@gmail.com>
In-Reply-To: <20230303151706.57851-1-ubizjak@gmail.com>
References: <20230303151706.57851-1-ubizjak@gmail.com>

The return values of some functions are of boolean type. Change the
type of these functions to bool and adjust their return values. Also
change the type of some internal variables to bool.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
Reviewed-by: Mukesh Ojha
---
v3: Rearrange variable declarations.
---
 kernel/trace/ring_buffer.c | 47 ++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 25 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 05fdc92554df..71df857242b4 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -163,7 +163,7 @@ enum {
 #define extended_time(event) \
 	(event->type_len >= RINGBUF_TYPE_TIME_EXTEND)
 
-static inline int rb_null_event(struct ring_buffer_event *event)
+static inline bool rb_null_event(struct ring_buffer_event *event)
 {
 	return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta;
 }
@@ -367,11 +367,9 @@ static void free_buffer_page(struct buffer_page *bpage)
 /*
  * We need to fit the time_stamp delta into 27 bits.
  */
-static inline int test_time_stamp(u64 delta)
+static inline bool test_time_stamp(u64 delta)
 {
-	if (delta & TS_DELTA_TEST)
-		return 1;
-	return 0;
+	return !!(delta & TS_DELTA_TEST);
 }
 
 #define BUF_PAGE_SIZE (PAGE_SIZE - BUF_PAGE_HDR_SIZE)
@@ -700,7 +698,7 @@ rb_time_read_cmpxchg(local_t *l, unsigned long expect, unsigned long set)
 	return ret == expect;
 }
 
-static int rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
+static bool rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
 {
 	unsigned long cnt, top, bottom, msb;
 	unsigned long cnt2, top2, bottom2, msb2;
@@ -1490,7 +1488,7 @@ rb_set_head_page(struct ring_buffer_per_cpu *cpu_buffer)
 	return NULL;
 }
 
-static int rb_head_page_replace(struct buffer_page *old,
+static bool rb_head_page_replace(struct buffer_page *old,
 				struct buffer_page *new)
 {
 	unsigned long *ptr = (unsigned long *)&old->list.prev->next;
@@ -1917,7 +1915,7 @@ static inline unsigned long rb_page_write(struct buffer_page *bpage)
 	return local_read(&bpage->write) & RB_WRITE_MASK;
 }
 
-static int
+static bool
 rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 {
 	struct list_head *tail_page, *to_remove, *next_page;
@@ -2030,12 +2028,13 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	return nr_removed == 0;
 }
 
-static int
+static bool
 rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct list_head *pages = &cpu_buffer->new_pages;
-	int retries, success;
 	unsigned long flags;
+	bool success;
+	int retries;
 
 	/* Can be called at early boot up, where interrupts must not been enabled */
 	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
@@ -2054,7 +2053,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	 * spinning.
 	 */
 	retries = 10;
-	success = 0;
+	success = false;
 	while (retries--) {
 		struct list_head *head_page, *prev_page, *r;
 		struct list_head *last_page, *first_page;
@@ -2083,7 +2082,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 			 * pointer to point to end of list
 			 */
 			head_page->prev = last_page;
-			success = 1;
+			success = true;
 			break;
 		}
 	}
@@ -2111,7 +2110,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 
 static void rb_update_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	int success;
+	bool success;
 
 	if (cpu_buffer->nr_pages_to_update > 0)
 		success = rb_insert_pages(cpu_buffer);
@@ -2994,7 +2993,7 @@ static u64 rb_time_delta(struct ring_buffer_event *event)
 	}
 }
 
-static inline int
+static inline bool
 rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		  struct ring_buffer_event *event)
 {
@@ -3015,7 +3014,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 	delta = rb_time_delta(event);
 
 	if (!rb_time_read(&cpu_buffer->write_stamp, &write_stamp))
-		return 0;
+		return false;
 
 	/* Make sure the write stamp is read before testing the location */
 	barrier();
@@ -3028,7 +3027,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		/* Something came in, can't discard */
 		if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
 				     write_stamp, write_stamp - delta))
-			return 0;
+			return false;
 
 		/*
 		 * It's possible that the event time delta is zero
@@ -3061,12 +3060,12 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		if (index == old_index) {
 			/* update counters */
 			local_sub(event_length, &cpu_buffer->entries_bytes);
-			return 1;
+			return true;
 		}
 	}
 
 	/* could not discard */
-	return 0;
+	return false;
 }
 
 static void rb_start_commit(struct ring_buffer_per_cpu *cpu_buffer)
@@ -3281,7 +3280,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
  * Note: The TRANSITION bit only handles a single transition between context.
  */
-static __always_inline int
+static __always_inline bool
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	unsigned int val = cpu_buffer->current_context;
@@ -3298,14 +3297,14 @@ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 		bit = RB_CTX_TRANSITION;
 		if (val & (1 << (bit + cpu_buffer->nest))) {
 			do_ring_buffer_record_recursion();
-			return 1;
+			return true;
 		}
 	}
 
 	val |= (1 << (bit + cpu_buffer->nest));
 	cpu_buffer->current_context = val;
 
-	return 0;
+	return false;
 }
 
 static __always_inline void
@@ -5408,9 +5407,8 @@ bool ring_buffer_empty(struct trace_buffer *buffer)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
-	bool dolock;
+	bool dolock, ret;
 	int cpu;
-	int ret;
 
 	/* yes this is racy, but if you don't like the race, lock the buffer */
 	for_each_buffer_cpu(buffer, cpu) {
@@ -5438,8 +5436,7 @@ bool ring_buffer_empty_cpu(struct trace_buffer *buffer, int cpu)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags;
-	bool dolock;
-	int ret;
+	bool dolock, ret;
 
 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
 		return true;

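The int-to-bool change can likewise be sketched in standalone C11. The names
mirror test_time_stamp() from the diff above, but the integer type and the
TS_DELTA_TEST definition here are illustrative stand-ins, not the kernel's
own definitions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TS_SHIFT	27
/* stand-in: mask of the bits a 27-bit delta must not set */
#define TS_DELTA_TEST	(~((1ULL << TS_SHIFT) - 1))

/* Before: an int carrying a truth value. */
static inline int test_time_stamp_int(uint64_t delta)
{
	if (delta & TS_DELTA_TEST)
		return 1;
	return 0;
}

/* After: the signature documents that only true/false comes back;
 * !! normalizes the nonzero mask result to 1/0. */
static inline bool test_time_stamp(uint64_t delta)
{
	return !!(delta & TS_DELTA_TEST);
}

int main(void)
{
	printf("%d %d\n",
	       test_time_stamp_int(1ULL << 30),	/* 1: does not fit in 27 bits */
	       test_time_stamp(123));		/* 0: fits */
	return 0;
}
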
From patchwork Fri Mar 3 15:17:06 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 13158939
From: Uros Bizjak
To: linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Uros Bizjak, Steven Rostedt, Masami Hiramatsu
Subject: [PATCH v3 3/3] ring_buffer: Use try_cmpxchg instead of cmpxchg
Date: Fri, 3 Mar 2023 16:17:06 +0100
Message-Id: <20230303151706.57851-4-ubizjak@gmail.com>
In-Reply-To: <20230303151706.57851-1-ubizjak@gmail.com>
References: <20230303151706.57851-1-ubizjak@gmail.com>

Use try_cmpxchg instead of the cmpxchg(*ptr, old, new) == old pattern.
The x86 CMPXCHG instruction returns success in the ZF flag, so this
change saves a compare after cmpxchg (and the related move instruction
in front of cmpxchg). Also, try_cmpxchg implicitly assigns the old
*ptr value to "old" when cmpxchg fails, so there is no need to re-read
the value in the loop.

No functional change intended.

Cc: Steven Rostedt
Cc: Masami Hiramatsu
Signed-off-by: Uros Bizjak
---
v2: Convert only loops with cmpxchg.
---
 kernel/trace/ring_buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 71df857242b4..3bfc2e8a3da4 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4061,10 +4061,10 @@ void ring_buffer_record_off(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd | RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_off);
 
@@ -4084,10 +4084,10 @@ void ring_buffer_record_on(struct trace_buffer *buffer)
 	unsigned int rd;
 	unsigned int new_rd;
 
+	rd = atomic_read(&buffer->record_disabled);
 	do {
-		rd = atomic_read(&buffer->record_disabled);
 		new_rd = rd & ~RB_BUFFER_OFF;
-	} while (atomic_cmpxchg(&buffer->record_disabled, rd, new_rd) != rd);
+	} while (!atomic_try_cmpxchg(&buffer->record_disabled, &rd, new_rd));
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_on);
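The loop transformation can be sketched with C11 atomics:
atomic_compare_exchange_strong() has the failure semantics the commit
message relies on (on failure it refreshes the expected value from memory,
like the kernel's atomic_try_cmpxchg()). cmpxchg_uint() below is a
hypothetical helper emulating the old-style cmpxchg return convention, and
the RB_BUFFER_OFF value is only a stand-in for the kernel's definition:

#include <stdatomic.h>
#include <stdio.h>

#define RB_BUFFER_OFF	(1U << 20)	/* stand-in bit, not the kernel's */

static atomic_uint record_disabled = 0;

/* Emulate old-style cmpxchg: return the value found at *p; the caller
 * infers success by comparing that value with "old". */
static unsigned int cmpxchg_uint(atomic_uint *p, unsigned int old,
				 unsigned int new)
{
	unsigned int expected = old;

	atomic_compare_exchange_strong(p, &expected, new);
	return expected;	/* the prior value of *p */
}

/* Before: re-read inside the loop, then compare cmpxchg's result. */
static void record_off_cmpxchg(void)
{
	unsigned int rd, new_rd;

	do {
		rd = atomic_load(&record_disabled);
		new_rd = rd | RB_BUFFER_OFF;
	} while (cmpxchg_uint(&record_disabled, rd, new_rd) != rd);
}

/* After: one load before the loop; a failed compare-exchange refreshes
 * rd by itself, so nothing is re-read by hand. */
static void record_off_try_cmpxchg(void)
{
	unsigned int rd = atomic_load(&record_disabled);
	unsigned int new_rd;

	do {
		new_rd = rd | RB_BUFFER_OFF;
	} while (!atomic_compare_exchange_strong(&record_disabled, &rd, new_rd));
}

int main(void)
{
	record_off_cmpxchg();
	record_off_try_cmpxchg();
	printf("record_disabled = 0x%x\n", atomic_load(&record_disabled));
	return 0;
}

Both versions leave the bit set; the try_cmpxchg form simply drops the
redundant re-read and compare from the retry path, which is exactly the
saving the commit message describes on x86.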