From patchwork Fri Dec 8 17:24:47 2023
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 13485726
From: Adrian Hunter
To: Peter Zijlstra
Cc: Ingo Molnar, Mark Rutland, Alexander Shishkin, Heiko Carstens,
    Thomas Richter, Hendrik Brueckner, Suzuki K Poulose, Mike Leach,
    James Clark, coresight@lists.linaro.org,
    linux-arm-kernel@lists.infradead.org, Yicong Yang, Jonathan Cameron,
    Will Deacon, Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim,
    Ian Rogers, linux-kernel@vger.kernel.org,
    linux-perf-users@vger.kernel.org
Subject: [PATCH RFC V2 2/4] perf/x86/intel/pt: Add support for pause / resume
Date: Fri, 8 Dec 2023 19:24:47 +0200
Message-Id: <20231208172449.35444-3-adrian.hunter@intel.com>
In-Reply-To: <20231208172449.35444-1-adrian.hunter@intel.com>
References: <20231208172449.35444-1-adrian.hunter@intel.com>

Prevent tracing from starting if aux_paused.

Implement support for PERF_EF_PAUSE / PERF_EF_RESUME: when aux_paused,
stop tracing; when not aux_paused, start tracing only if it isn't
currently meant to be stopped.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 arch/x86/events/intel/pt.c | 63 ++++++++++++++++++++++++++++++++++++--
 arch/x86/events/intel/pt.h |  4 +++
 2 files changed, 64 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index 42a55794004a..692b51849d1c 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -418,6 +418,9 @@ static void pt_config_start(struct perf_event *event)
 	struct pt *pt = this_cpu_ptr(&pt_ctx);
 	u64 ctl = event->hw.config;
 
+	if (READ_ONCE(event->aux_paused))
+		return;
+
 	ctl |= RTIT_CTL_TRACEEN;
 	if (READ_ONCE(pt->vmx_on))
 		perf_aux_output_flag(&pt->handle, PERF_AUX_FLAG_PARTIAL);
@@ -534,7 +537,20 @@ static void pt_config(struct perf_event *event)
 		reg |= (event->attr.config & PT_CONFIG_MASK);
 
 	event->hw.config = reg;
+
+	/*
+	 * Allow resume before starting so as not to overwrite a value set by a
+	 * PMI.
+	 */
+	WRITE_ONCE(pt->resume_allowed, 1);
+
 	pt_config_start(event);
+
+	/*
+	 * Allow pause after starting so its pt_config_stop() doesn't race with
+	 * pt_config_start().
+	 */
+	WRITE_ONCE(pt->pause_allowed, 1);
 }
 
 static void pt_config_stop(struct perf_event *event)
@@ -1507,6 +1523,7 @@ void intel_pt_interrupt(void)
 	buf = perf_aux_output_begin(&pt->handle, event);
 	if (!buf) {
 		event->hw.state = PERF_HES_STOPPED;
+		pt->resume_allowed = 0;
 		return;
 	}
 
@@ -1515,6 +1532,7 @@ void intel_pt_interrupt(void)
 	ret = pt_buffer_reset_markers(buf, &pt->handle);
 	if (ret) {
 		perf_aux_output_end(&pt->handle, 0);
+		pt->resume_allowed = 0;
 		return;
 	}
 
@@ -1569,6 +1587,26 @@ static void pt_event_start(struct perf_event *event, int mode)
 	struct pt *pt = this_cpu_ptr(&pt_ctx);
 	struct pt_buffer *buf;
 
+	if (mode & PERF_EF_RESUME) {
+		if (READ_ONCE(pt->resume_allowed)) {
+			u64 status;
+
+			/*
+			 * Only if the trace is not active and the error and
+			 * stopped bits are clear, is it safe to start, but a
+			 * PMI might have just cleared these, so resume_allowed
+			 * must be checked again also.
+			 */
+			rdmsrl(MSR_IA32_RTIT_STATUS, status);
+			if (!(status & (RTIT_STATUS_TRIGGEREN |
+					RTIT_STATUS_ERROR |
+					RTIT_STATUS_STOPPED)) &&
+			    READ_ONCE(pt->resume_allowed))
+				pt_config_start(event);
+		}
+		return;
+	}
+
 	buf = perf_aux_output_begin(&pt->handle, event);
 	if (!buf)
 		goto fail_stop;
@@ -1597,6 +1635,16 @@ static void pt_event_stop(struct perf_event *event, int mode)
 {
 	struct pt *pt = this_cpu_ptr(&pt_ctx);
 
+	if (mode & PERF_EF_PAUSE) {
+		if (READ_ONCE(pt->pause_allowed))
+			pt_config_stop(event);
+		return;
+	}
+
+	/* Protect against racing */
+	WRITE_ONCE(pt->pause_allowed, 0);
+	WRITE_ONCE(pt->resume_allowed, 0);
+
 	/*
 	 * Protect against the PMI racing with disabling wrmsr,
 	 * see comment in intel_pt_interrupt().
@@ -1655,8 +1703,12 @@ static long pt_event_snapshot_aux(struct perf_event *event,
 	/*
 	 * Here, handle_nmi tells us if the tracing is on
 	 */
-	if (READ_ONCE(pt->handle_nmi))
+	if (READ_ONCE(pt->handle_nmi)) {
+		/* Protect against racing */
+		WRITE_ONCE(pt->pause_allowed, 0);
+		WRITE_ONCE(pt->resume_allowed, 0);
 		pt_config_stop(event);
+	}
 
 	pt_read_offset(buf);
 	pt_update_head(pt);
@@ -1673,8 +1725,11 @@ static long pt_event_snapshot_aux(struct perf_event *event,
 	 * Compiler barrier not needed as we couldn't have been
 	 * preempted by anything that touches pt->handle_nmi.
 	 */
-	if (pt->handle_nmi)
+	if (pt->handle_nmi) {
+		WRITE_ONCE(pt->resume_allowed, 1);
 		pt_config_start(event);
+		WRITE_ONCE(pt->pause_allowed, 1);
+	}
 
 	return ret;
 }
@@ -1790,7 +1845,9 @@ static __init int pt_init(void)
 	if (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries))
 		pt_pmu.pmu.capabilities = PERF_PMU_CAP_AUX_NO_SG;
 
-	pt_pmu.pmu.capabilities |= PERF_PMU_CAP_EXCLUSIVE | PERF_PMU_CAP_ITRACE;
+	pt_pmu.pmu.capabilities |= PERF_PMU_CAP_EXCLUSIVE |
+				   PERF_PMU_CAP_ITRACE |
+				   PERF_PMU_CAP_AUX_PAUSE;
 	pt_pmu.pmu.attr_groups = pt_attr_groups;
 	pt_pmu.pmu.task_ctx_nr = perf_sw_context;
 	pt_pmu.pmu.event_init = pt_event_init;
diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
index 96906a62aacd..b9527205e028 100644
--- a/arch/x86/events/intel/pt.h
+++ b/arch/x86/events/intel/pt.h
@@ -117,6 +117,8 @@ struct pt_filters {
  * @filters:		last configured filters
  * @handle_nmi:		do handle PT PMI on this cpu, there's an active event
  * @vmx_on:		1 if VMX is ON on this cpu
+ * @pause_allowed:	PERF_EF_PAUSE is allowed to stop tracing
+ * @resume_allowed:	PERF_EF_RESUME is allowed to start tracing
  * @output_base:	cached RTIT_OUTPUT_BASE MSR value
  * @output_mask:	cached RTIT_OUTPUT_MASK MSR value
  */
@@ -125,6 +127,8 @@ struct pt {
 	struct pt_filters	filters;
 	int			handle_nmi;
 	int			vmx_on;
+	int			pause_allowed;
+	int			resume_allowed;
 	u64			output_base;
 	u64			output_mask;
 };
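
A note for readers following the handshake: below is a minimal, single-threaded
userspace sketch of how pause_allowed / resume_allowed gate the PERF_EF_PAUSE /
PERF_EF_RESUME paths relative to a full start / stop. The PERF_EF_* values, the
pt_model fields and the model_* helpers are illustrative assumptions, not kernel
API, and the sketch deliberately drops the READ_ONCE()/WRITE_ONCE() annotations,
the RTIT_STATUS MSR read and the second resume_allowed check that the real code
needs to close the race against the PT PMI.

/*
 * Minimal, single-threaded userspace model of the pause/resume gating
 * in the patch above.  Illustration only: the PERF_EF_* values, struct
 * fields and model_* helpers are stand-ins, not kernel API.
 */
#include <stdbool.h>
#include <stdio.h>

#define PERF_EF_PAUSE	0x08	/* assumed value, for illustration only */
#define PERF_EF_RESUME	0x10	/* assumed value, for illustration only */

struct pt_model {
	bool aux_paused;	/* mirrors event->aux_paused */
	bool pause_allowed;	/* PERF_EF_PAUSE may stop tracing */
	bool resume_allowed;	/* PERF_EF_RESUME may start tracing */
	bool tracing;		/* stands in for TriggerEn / TraceEn */
	bool error_or_stopped;	/* stands in for RTIT_STATUS Error/Stopped */
};

/* Like pt_config_start(): refuse to start while the event is aux_paused */
static void model_config_start(struct pt_model *pt)
{
	if (pt->aux_paused)
		return;
	pt->tracing = true;
}

static void model_config_stop(struct pt_model *pt)
{
	pt->tracing = false;
}

/* Like pt_event_start(): the PERF_EF_RESUME path only resumes when safe */
static void model_start(struct pt_model *pt, int mode)
{
	if (mode & PERF_EF_RESUME) {
		if (pt->resume_allowed &&
		    !pt->tracing && !pt->error_or_stopped)
			model_config_start(pt);
		return;
	}
	/* Full start: open the resume gate before, the pause gate after */
	pt->resume_allowed = true;
	model_config_start(pt);
	pt->pause_allowed = true;
}

/* Like pt_event_stop(): a full stop closes both gates first */
static void model_stop(struct pt_model *pt, int mode)
{
	if (mode & PERF_EF_PAUSE) {
		if (pt->pause_allowed)
			model_config_stop(pt);
		return;
	}
	pt->pause_allowed = false;
	pt->resume_allowed = false;
	model_config_stop(pt);
}

int main(void)
{
	struct pt_model pt = { 0 };

	model_start(&pt, 0);			/* normal start: tracing on */
	model_stop(&pt, PERF_EF_PAUSE);		/* pause: tracing off, gates stay open */
	model_start(&pt, PERF_EF_RESUME);	/* resume: tracing on again */
	model_stop(&pt, 0);			/* full stop: both gates closed */
	model_start(&pt, PERF_EF_RESUME);	/* ignored: resume gate is closed */

	printf("tracing=%d pause_allowed=%d resume_allowed=%d\n",
	       pt.tracing, pt.pause_allowed, pt.resume_allowed);
	return 0;
}

Running the sketch ends with tracing off and both gates closed, which matches
the state a full pt_event_stop() leaves behind, so a stray PERF_EF_RESUME
afterwards is ignored.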