From patchwork Wed May 4 11:28:20 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 1/9] arm64: io: Use asm-generic high level MMIO accessors
Date: Wed, 4 May 2022 16:58:20 +0530

Remove the custom arm64 MMIO accessors read{b,w,l,q} and their relaxed
versions in favour of the asm-generic defined accessors. Also define one
set of IO barriers (the ar/bw variants) used by the asm-generic code, so
that it picks up the arm64-specific barrier implementations.
Suggested-by: Arnd Bergmann
Signed-off-by: Sai Prakash Ranjan
Acked-by: Catalin Marinas
Reviewed-by: Arnd Bergmann
---
 arch/arm64/include/asm/io.h | 41 ++++++++-----------------------------
 1 file changed, 8 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 7fd836bea7eb..1b436810d779 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -91,7 +91,7 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
 }
 
 /* IO barriers */
-#define __iormb(v)	\
+#define __io_ar(v)	\
 ({	\
 	unsigned long tmp;	\
 	\
@@ -108,39 +108,14 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
 		     : "memory");	\
 })
 
-#define __io_par(v)	__iormb(v)
-#define __iowmb()	dma_wmb()
-#define __iomb()	dma_mb()
-
-/*
- * Relaxed I/O memory access primitives. These follow the Device memory
- * ordering rules but do not guarantee any ordering relative to Normal memory
- * accesses.
- */
-#define readb_relaxed(c)	({ u8 __r = __raw_readb(c); __r; })
-#define readw_relaxed(c)	({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
-#define readl_relaxed(c)	({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
-#define readq_relaxed(c)	({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+#define __io_bw()	dma_wmb()
+#define __io_br(v)
+#define __io_aw(v)
 
-#define writeb_relaxed(v,c)	((void)__raw_writeb((v),(c)))
-#define writew_relaxed(v,c)	((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
-#define writel_relaxed(v,c)	((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
-#define writeq_relaxed(v,c)	((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
-
-/*
- * I/O memory access primitives. Reads are ordered relative to any
- * following Normal memory access. Writes are ordered relative to any prior
- * Normal memory access.
- */
-#define readb(c)	({ u8 __v = readb_relaxed(c); __iormb(__v); __v; })
-#define readw(c)	({ u16 __v = readw_relaxed(c); __iormb(__v); __v; })
-#define readl(c)	({ u32 __v = readl_relaxed(c); __iormb(__v); __v; })
-#define readq(c)	({ u64 __v = readq_relaxed(c); __iormb(__v); __v; })
-
-#define writeb(v,c)	({ __iowmb(); writeb_relaxed((v),(c)); })
-#define writew(v,c)	({ __iowmb(); writew_relaxed((v),(c)); })
-#define writel(v,c)	({ __iowmb(); writel_relaxed((v),(c)); })
-#define writeq(v,c)	({ __iowmb(); writeq_relaxed((v),(c)); })
+/* arm64-specific, don't use in portable drivers */
+#define __iormb(v)	__io_ar(v)
+#define __iowmb()	__io_bw()
+#define __iomb()	dma_mb()
 
 /*
  * I/O port access primitives.
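For reference, the asm-generic accessors that replace the removed arm64 macros
are built around four barrier hooks: __io_br()/__io_ar() around reads and
__io_bw()/__io_aw() around writes, which is why this patch only has to provide
__io_ar() and __io_bw(). The sketch below is a simplified illustration of that
shape (hypothetical generic_readl()/generic_writel() names, not the literal
include/asm-generic/io.h code, which appears with tracing added in patch 7 of
this series):

/*
 * Simplified sketch of the asm-generic accessor structure: each hook
 * defaults to a no-op unless the architecture overrides it.
 */
static inline u32 generic_readl(const volatile void __iomem *addr)
{
	u32 val;

	__io_br();		/* barrier before read, default: no-op */
	val = __le32_to_cpu((__le32 __force)__raw_readl(addr));
	__io_ar(val);		/* barrier after read, arm64: read barrier */
	return val;
}

static inline void generic_writel(u32 value, volatile void __iomem *addr)
{
	__io_bw();		/* barrier before write, arm64: dma_wmb() */
	__raw_writel((u32 __force)__cpu_to_le32(value), addr);
	__io_aw();		/* barrier after write, default: no-op */
}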
From patchwork Wed May 4 11:28:21 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 2/9] coresight: etm4x: Use asm-generic IO memory barriers
Date: Wed, 4 May 2022 16:58:21 +0530
Message-ID: <0d76de0ecc0aa7cb01fd8b8863a8e567abd4410b.1651663123.git.quic_saipraka@quicinc.com>

Per the discussion in [1], it was decided to move to the architecture
independent/asm-generic IO memory barriers so that there is just one set of
them, and to deprecate the use of the arm64-specific IO memory barriers in
driver code. So replace the current usage of __iormb()/__iowmb() in drivers
with __io_ar()/__io_bw().
[1] https://lore.kernel.org/lkml/CAK8P3a0L2tLeF1Q0+0ijUxhGNaw+Z0fyPC1oW6_ELQfn0=i4iw@mail.gmail.com/

Cc: Mathieu Poirier
Cc: Suzuki K Poulose
Signed-off-by: Sai Prakash Ranjan
Reviewed-by: Arnd Bergmann
Reviewed-by: Suzuki K Poulose
---
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 8 ++++----
 drivers/hwtracing/coresight/coresight-etm4x.h      | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 7f416a12000e..81c0faf45b28 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -98,7 +98,7 @@ u64 etm4x_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
 	}
 
 	if (!_relaxed)
-		__iormb(res);	/* Imitate the !relaxed I/O helpers */
+		__io_ar(res);	/* Imitate the !relaxed I/O helpers */
 
 	return res;
 }
@@ -106,7 +106,7 @@ u64 etm4x_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
 void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
 {
 	if (!_relaxed)
-		__iowmb();	/* Imitate the !relaxed I/O helpers */
+		__io_bw();	/* Imitate the !relaxed I/O helpers */
 
 	if (!_64bit)
 		val &= GENMASK(31, 0);
@@ -130,7 +130,7 @@ static u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
 	}
 
 	if (!_relaxed)
-		__iormb(res);	/* Imitate the !relaxed I/O helpers */
+		__io_ar(res);	/* Imitate the !relaxed I/O helpers */
 
 	return res;
 }
@@ -138,7 +138,7 @@ static u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
 static void ete_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
 {
 	if (!_relaxed)
-		__iowmb();	/* Imitate the !relaxed I/O helpers */
+		__io_bw();	/* Imitate the !relaxed I/O helpers */
 
 	if (!_64bit)
 		val &= GENMASK(31, 0);
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 3c4d69b096ca..f54698731582 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -448,14 +448,14 @@
 #define etm4x_read32(csa, offset)	\
 	({	\
 		u32 __val = etm4x_relaxed_read32((csa), (offset));	\
-		__iormb(__val);	\
+		__io_ar(__val);	\
 		__val;	\
 	 })
 
 #define etm4x_read64(csa, offset)	\
 	({	\
 		u64 __val = etm4x_relaxed_read64((csa), (offset));	\
-		__iormb(__val);	\
+		__io_ar(__val);	\
 		__val;	\
 	 })
 
@@ -479,13 +479,13 @@
 #define etm4x_write32(csa, val, offset)	\
 	do {	\
-		__iowmb();	\
+		__io_bw();	\
 		etm4x_relaxed_write32((csa), (val), (offset));	\
 	} while (0)
 
 #define etm4x_write64(csa, val, offset)	\
 	do {	\
-		__iowmb();	\
+		__io_bw();	\
 		etm4x_relaxed_write64((csa), (val), (offset));	\
 	} while (0)

From patchwork Wed May 4 11:28:22 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 3/9] irqchip/tegra: Fix overflow implicit truncation warnings
Date: Wed, 4 May 2022 16:58:22 +0530
Message-ID: <19756648814ae293263bfefacd340a16507fab05.1651663123.git.quic_saipraka@quicinc.com>

Fix the -Woverflow warnings in the tegra irqchip driver that result from
moving the arm64 custom MMIO accessor macros to asm-generic function
implementations, which adds type checking of the value argument and so
uncovers these overflow warnings:
drivers/irqchip/irq-tegra.c: In function ‘tegra_ictlr_suspend’:
drivers/irqchip/irq-tegra.c:151:18: warning: large integer implicitly truncated to unsigned type [-Woverflow]
  writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
                 ^

Cc: Marc Zyngier
Suggested-by: Marc Zyngier
Reviewed-by: Arnd Bergmann
Signed-off-by: Sai Prakash Ranjan
---
 drivers/irqchip/irq-tegra.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/irqchip/irq-tegra.c b/drivers/irqchip/irq-tegra.c
index e1f771c72fc4..ad3e2c1b3c87 100644
--- a/drivers/irqchip/irq-tegra.c
+++ b/drivers/irqchip/irq-tegra.c
@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
 		lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
 
 		/* Disable COP interrupts */
-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
 
 		/* Disable CPU interrupts */
-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
 
 		/* Enable the wakeup sources of ictlr */
 		writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
 		writel_relaxed(lic->cpu_iep[i],
 			       ictlr + ICTLR_CPU_IEP_CLASS);
 
-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
 		writel_relaxed(lic->cpu_ier[i], ictlr + ICTLR_CPU_IER_SET);
 		writel_relaxed(lic->cop_iep[i],
 			       ictlr + ICTLR_COP_IEP_CLASS);
 
-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
 		writel_relaxed(lic->cop_ier[i], ictlr + ICTLR_COP_IER_SET);
 	}
 
@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
 		lic->base[i] = base;
 
 		/* Disable all interrupts */
-		writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
 		/* All interrupts target IRQ */
 		writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);

From patchwork Wed May 4 11:28:23 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 4/9] drm/meson: Fix overflow implicit truncation warnings
Date: Wed, 4 May 2022 16:58:23 +0530
Message-ID: <6064ec7d0f21ccaee91b02da72b4a9dc8fda7a79.1651663123.git.quic_saipraka@quicinc.com>

Fix the -Woverflow warnings in the drm/meson driver that result from moving
the arm64 custom MMIO accessor macros to asm-generic function
implementations, which adds type checking of the value argument and so
uncovers these overflow warnings:

drivers/gpu/drm/meson/meson_viu.c: In function ‘meson_viu_init’:
drivers/gpu/drm/meson/meson_registers.h:1826:48: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
 #define VIU_OSD_BLEND_REORDER(dest, src)	((src) << (dest * 4))
                                                 ^
drivers/gpu/drm/meson/meson_viu.c:472:18: note: in expansion of macro ‘VIU_OSD_BLEND_REORDER’
  writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
                 ^~~~~~~~~~~~~~~~~~~~~

Cc: Arnd Bergmann
Cc: Neil Armstrong
Reported-by: kernel test robot
Signed-off-by: Sai Prakash Ranjan
Reviewed-by: Arnd Bergmann
---
 drivers/gpu/drm/meson/meson_viu.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
index 259f3e6bec90..bb7e109534de 100644
--- a/drivers/gpu/drm/meson/meson_viu.c
+++ b/drivers/gpu/drm/meson/meson_viu.c
@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
 		       priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
 
 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
-		writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
-			       VIU_OSD_BLEND_REORDER(1, 0) |
-			       VIU_OSD_BLEND_REORDER(2, 0) |
-			       VIU_OSD_BLEND_REORDER(3, 0) |
-			       VIU_OSD_BLEND_DIN_EN(1) |
-			       VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
-			       VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
-			       VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
-			       VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
-			       VIU_OSD_BLEND_HOLD_LINES(4),
-			       priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
+		u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
+			  (u32)VIU_OSD_BLEND_REORDER(1, 0) |
+			  (u32)VIU_OSD_BLEND_REORDER(2, 0) |
+			  (u32)VIU_OSD_BLEND_REORDER(3, 0) |
+			  (u32)VIU_OSD_BLEND_DIN_EN(1) |
+			  (u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
+			  (u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
+			  (u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
+			  (u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
+			  (u32)VIU_OSD_BLEND_HOLD_LINES(4);
+		writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
 
 		writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
 			       priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));
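The warnings fixed here and in the previous patch come from the accessors now
being real functions with typed parameters: the old arm64 writel_relaxed()
macro cast its value argument, so a 64-bit ~0ul constant was silently
truncated, whereas the asm-generic static inline function makes the compiler
check the conversion to u32. A minimal stand-alone illustration of the same
effect (hypothetical fake_writel_relaxed(), not kernel code; the exact
diagnostic wording depends on the GCC version):

#include <stdint.h>

/* Stand-in for an asm-generic style accessor: a function with a typed
 * 32-bit parameter instead of a macro that casts its argument. */
static void fake_writel_relaxed(uint32_t value, volatile uint32_t *addr)
{
	*addr = value;
}

int main(void)
{
	volatile uint32_t reg = 0;

	/* On 64-bit, ~0ul is 0xffffffffffffffff; converting it to a u32
	 * parameter changes the value, which GCC can flag via -Woverflow. */
	fake_writel_relaxed(~0ul, &reg);

	/* GENMASK(31, 0) evaluates to 0xffffffff, and explicit u32 casts
	 * keep the constant within 32 bits, so no warning is emitted. */
	fake_writel_relaxed(0xffffffffUL, &reg);
	fake_writel_relaxed((uint32_t)~0ul, &reg);

	return 0;
}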
From patchwork Wed May 4 11:28:24 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 5/9] lib: Add register read/write tracing support
Date: Wed, 4 May 2022 16:58:24 +0530
Message-ID: <9827bae40f6f319f294d06859c9e3c7442f067f2.1651663123.git.quic_saipraka@quicinc.com>

From: Prasad Sodagudi

The generic MMIO read/write accessors, i.e. __raw_{read,write}{b,l,w,q},
are typically used to read from and write to memory-mapped registers, and
can cause hangs or undefined behaviour in a few cases, for example:

* If the access to the register space is unclocked, e.g. an access to
  multimedia (MM) block registers while the MM clocks are off.

* If the register space is protected and not set to be accessible from the
  non-secure world, e.g. only EL3 (EL: exception level) access is allowed
  and any EL2/EL1 access is forbidden.

* If an xPU (memory/register protection unit) is controlling access to
  certain memory/register space for specific clients.
and more...

Such cases usually result in instant reboots, SErrors, or NoC/interconnect
hangs, and tracing these register accesses can be very helpful for debugging
such issues both during initial development and in later stages.

So use ftrace trace events to log such MMIO register accesses, which
provides a rich feature set such as early enablement of trace events,
filtering capability, dumping ftrace logs on the console, and many more.

Sample output:

rwmmio_write: __qcom_geni_serial_console_write+0x160/0x1e0 width=32 val=0xa0d5d addr=0xfffffbfffdbff700
rwmmio_post_write: __qcom_geni_serial_console_write+0x160/0x1e0 width=32 val=0xa0d5d addr=0xfffffbfffdbff700
rwmmio_read: qcom_geni_serial_poll_bit+0x94/0x138 width=32 addr=0xfffffbfffdbff610
rwmmio_post_read: qcom_geni_serial_poll_bit+0x94/0x138 width=32 val=0x0 addr=0xfffffbfffdbff610

Signed-off-by: Prasad Sodagudi
Co-developed-by: Sai Prakash Ranjan
Signed-off-by: Sai Prakash Ranjan
---
 arch/Kconfig                  |  3 ++
 arch/arm64/Kconfig            |  1 +
 include/trace/events/rwmmio.h | 97 +++++++++++++++++++++++++++++++++++
 lib/Kconfig                   |  7 +++
 lib/Makefile                  |  2 +
 lib/trace_readwrite.c         | 47 +++++++++++++++++
 6 files changed, 157 insertions(+)
 create mode 100644 include/trace/events/rwmmio.h
 create mode 100644 lib/trace_readwrite.c

diff --git a/arch/Kconfig b/arch/Kconfig
index 31c4fdc4a4ba..5e7aa17ed609 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1369,6 +1369,9 @@ config ARCH_HAS_ELFCORE_COMPAT
 config ARCH_HAS_PARANOID_L1D_FLUSH
 	bool
 
+config ARCH_HAVE_TRACE_MMIO_ACCESS
+	bool
+
 config DYNAMIC_SIGFRAME
 	bool
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 20ea89d9ac2f..926e1a252b6f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -48,6 +48,7 @@ config ARM64
 	select ARCH_HAS_ZONE_DMA_SET if EXPERT
 	select ARCH_HAVE_ELF_PROT
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+	select ARCH_HAVE_TRACE_MMIO_ACCESS
 	select ARCH_INLINE_READ_LOCK if !PREEMPTION
 	select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
 	select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
diff --git a/include/trace/events/rwmmio.h b/include/trace/events/rwmmio.h
new file mode 100644
index 000000000000..82edee9bf716
--- /dev/null
+++ b/include/trace/events/rwmmio.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM rwmmio + +#if !defined(_TRACE_RWMMIO_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_RWMMIO_H + +#include + +DECLARE_EVENT_CLASS(rwmmio_rw_template, + + TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr), + + TP_ARGS(caller, val, width, addr), + + TP_STRUCT__entry( + __field(unsigned long, caller) + __field(unsigned long, addr) + __field(u64, val) + __field(u8, width) + ), + + TP_fast_assign( + __entry->caller = caller; + __entry->val = val; + __entry->addr = (unsigned long)(void *)addr; + __entry->width = width; + ), + + TP_printk("%pS width=%d val=%#llx addr=%#lx", + (void *)(unsigned long)__entry->caller, __entry->width, + __entry->val, __entry->addr) +); + +DEFINE_EVENT(rwmmio_rw_template, rwmmio_write, + TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr), + TP_ARGS(caller, val, width, addr) +); + +DEFINE_EVENT(rwmmio_rw_template, rwmmio_post_write, + TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr), + TP_ARGS(caller, val, width, addr) +); + +TRACE_EVENT(rwmmio_read, + + TP_PROTO(unsigned long caller, u8 width, const volatile void __iomem *addr), + + TP_ARGS(caller, width, addr), + + TP_STRUCT__entry( + __field(unsigned long, caller) + __field(unsigned long, addr) + __field(u8, width) + ), + + TP_fast_assign( + __entry->caller = caller; + __entry->addr = (unsigned long)(void *)addr; + __entry->width = width; + ), + + TP_printk("%pS width=%d addr=%#lx", + (void *)(unsigned long)__entry->caller, __entry->width, __entry->addr) +); + +TRACE_EVENT(rwmmio_post_read, + + TP_PROTO(unsigned long caller, u64 val, u8 width, const volatile void __iomem *addr), + + TP_ARGS(caller, val, width, addr), + + TP_STRUCT__entry( + __field(unsigned long, caller) + __field(unsigned long, addr) + __field(u64, val) + __field(u8, width) + ), + + TP_fast_assign( + __entry->caller = caller; + __entry->val = val; + __entry->addr = (unsigned long)(void *)addr; + __entry->width = width; + ), + + TP_printk("%pS width=%d val=%#llx addr=%#lx", + (void *)(unsigned long)__entry->caller, __entry->width, + __entry->val, __entry->addr) +); + +#endif /* _TRACE_RWMMIO_H */ + +#include diff --git a/lib/Kconfig b/lib/Kconfig index 087e06b4cdfd..5e2fd075724f 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -118,6 +118,13 @@ config INDIRECT_IOMEM_FALLBACK mmio accesses when the IO memory address is not a registered emulated region. +config TRACE_MMIO_ACCESS + bool "Register read/write tracing" + depends on TRACING && ARCH_HAVE_TRACE_MMIO_ACCESS + help + Create tracepoints for MMIO read/write operations. These trace events + can be used for logging all MMIO read/write operations. + source "lib/crypto/Kconfig" config CRC_CCITT diff --git a/lib/Makefile b/lib/Makefile index 6b9ffc1bd1ee..3df7d24e65d2 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -151,6 +151,8 @@ lib-y += logic_pio.o lib-$(CONFIG_INDIRECT_IOMEM) += logic_iomem.o +obj-$(CONFIG_TRACE_MMIO_ACCESS) += trace_readwrite.o + obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o obj-$(CONFIG_BTREE) += btree.o diff --git a/lib/trace_readwrite.c b/lib/trace_readwrite.c new file mode 100644 index 000000000000..88637038b30c --- /dev/null +++ b/lib/trace_readwrite.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Register read and write tracepoints + * + * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */
+
+#include
+#include
+#include
+
+#define CREATE_TRACE_POINTS
+#include
+
+#ifdef CONFIG_TRACE_MMIO_ACCESS
+void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
+		    unsigned long caller_addr)
+{
+	trace_rwmmio_write(caller_addr, val, width, addr);
+}
+EXPORT_SYMBOL_GPL(log_write_mmio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_write);
+
+void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
+			 unsigned long caller_addr)
+{
+	trace_rwmmio_post_write(caller_addr, val, width, addr);
+}
+EXPORT_SYMBOL_GPL(log_post_write_mmio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_post_write);
+
+void log_read_mmio(u8 width, const volatile void __iomem *addr,
+		   unsigned long caller_addr)
+{
+	trace_rwmmio_read(caller_addr, width, addr);
+}
+EXPORT_SYMBOL_GPL(log_read_mmio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_read);
+
+void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr,
+			unsigned long caller_addr)
+{
+	trace_rwmmio_post_read(caller_addr, val, width, addr);
+}
+EXPORT_SYMBOL_GPL(log_post_read_mmio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_post_read);
+#endif /* CONFIG_TRACE_MMIO_ACCESS */
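Besides the ftrace interface, the EXPORT_TRACEPOINT_SYMBOL_GPL() exports above
also allow a GPL in-kernel consumer to attach a probe to the rwmmio
tracepoints directly. A rough sketch of such a consumer (hypothetical module,
error handling trimmed), assuming the standard tracepoint probe convention
where the registered callback receives the TP_PROTO arguments preceded by an
opaque void *data pointer:

#include <linux/module.h>
#include <linux/tracepoint.h>
#include <trace/events/rwmmio.h>

/* Called for every rwmmio_write event while the probe is registered. */
static void rwmmio_write_probe(void *data, unsigned long caller, u64 val,
			       u8 width, volatile void __iomem *addr)
{
	pr_debug("mmio write from %pS: width=%d val=%#llx\n",
		 (void *)caller, width, val);
}

static int __init rwmmio_probe_init(void)
{
	return register_trace_rwmmio_write(rwmmio_write_probe, NULL);
}

static void __exit rwmmio_probe_exit(void)
{
	unregister_trace_rwmmio_write(rwmmio_write_probe, NULL);
	tracepoint_synchronize_unregister();
}

module_init(rwmmio_probe_init);
module_exit(rwmmio_probe_exit);
MODULE_LICENSE("GPL");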
From patchwork Wed May 4 11:28:25 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 6/9] KVM: arm64: Add a flag to disable MMIO trace for nVHE KVM
Date: Wed, 4 May 2022 16:58:25 +0530
Message-ID: <1517201c3424214edd2d5385eeb10ac106b67d1c.1651663123.git.quic_saipraka@quicinc.com>

Add a generic flag (__DISABLE_TRACE_MMIO__) to disable MMIO tracing in
nVHE KVM, since the tracepoint and MMIO logging symbols should not be
visible there and there is no way to execute them in that context. The
flag can also be used to disable MMIO tracing for specific drivers.

Signed-off-by: Sai Prakash Ranjan
---
 arch/arm64/kvm/hyp/nvhe/Makefile | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index f9fe4dc21b1f..87d22a18b7a5 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -4,7 +4,12 @@
 #
 
 asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
-ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
+
+# Tracepoint and MMIO logging symbols should not be visible at nVHE KVM as
+# there is no way to execute them and any such MMIO access from nVHE KVM
+# will explode instantly (Words of Marc Zyngier). So introduce a generic flag
+# __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
+ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
 
 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include

From patchwork Wed May 4 11:28:26 2022
Received: from nalasex01a.na.qualcomm.com (10.47.209.196) by nasanex01c.na.qualcomm.com (10.47.97.222) with Microsoft SMTP Server (version=TLS1_2,
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.22; Wed, 4 May 2022 04:29:30 -0700 Received: from blr-ubuntu-253.qualcomm.com (10.80.80.8) by nalasex01a.na.qualcomm.com (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.22; Wed, 4 May 2022 04:29:26 -0700 From: Sai Prakash Ranjan To: , , CC: , , , , , , , , Sai Prakash Ranjan Subject: [PATCHv14 7/9] asm-generic/io: Add logging support for MMIO accessors Date: Wed, 4 May 2022 16:58:26 +0530 Message-ID: X-Mailer: git-send-email 2.33.1 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nalasex01a.na.qualcomm.com (10.47.209.196) Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add logging support for MMIO high level accessors such as read{b,w,l,q} and their relaxed versions to aid in debugging unexpected crashes/hangs caused by the corresponding MMIO operation. Signed-off-by: Sai Prakash Ranjan --- include/asm-generic/io.h | 91 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 87 insertions(+), 4 deletions(-) diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h index 7ce93aaf69f8..9c5114335a2d 100644 --- a/include/asm-generic/io.h +++ b/include/asm-generic/io.h @@ -10,6 +10,7 @@ #include /* I/O is all done through memory accesses */ #include /* for memset() and memcpy() */ #include +#include #ifdef CONFIG_GENERIC_IOMAP #include @@ -61,6 +62,44 @@ #define __io_par(v) __io_ar(v) #endif +/* + * "__DISABLE_TRACE_MMIO__" flag can be used to disable MMIO tracing for + * specific kernel drivers in case of excessive/unwanted logging. + * + * Usage: Add a #define flag at the beginning of the driver file. + * Ex: #define __DISABLE_TRACE_MMIO__ + * #include <...> + * ... + */ +#if IS_ENABLED(CONFIG_TRACE_MMIO_ACCESS) && !(defined(__DISABLE_TRACE_MMIO__)) +#include + +DECLARE_TRACEPOINT(rwmmio_write); +DECLARE_TRACEPOINT(rwmmio_post_write); +DECLARE_TRACEPOINT(rwmmio_read); +DECLARE_TRACEPOINT(rwmmio_post_read); + +void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr, + unsigned long caller_addr); +void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr, + unsigned long caller_addr); +void log_read_mmio(u8 width, const volatile void __iomem *addr, + unsigned long caller_addr); +void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr, + unsigned long caller_addr); + +#else + +static inline void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr, + unsigned long caller_addr) {} +static inline void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr, + unsigned long caller_addr) {} +static inline void log_read_mmio(u8 width, const volatile void __iomem *addr, + unsigned long caller_addr) {} +static inline void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr, + unsigned long caller_addr) {} + +#endif /* CONFIG_TRACE_MMIO_ACCESS */ /* * __raw_{read,write}{b,w,l,q}() access memory in native endianness. 
@@ -149,9 +188,11 @@ static inline u8 readb(const volatile void __iomem *addr) { u8 val; + log_read_mmio(8, addr, _THIS_IP_); __io_br(); val = __raw_readb(addr); __io_ar(val); + log_post_read_mmio(val, 8, addr, _THIS_IP_); return val; } #endif @@ -162,9 +203,11 @@ static inline u16 readw(const volatile void __iomem *addr) { u16 val; + log_read_mmio(16, addr, _THIS_IP_); __io_br(); val = __le16_to_cpu((__le16 __force)__raw_readw(addr)); __io_ar(val); + log_post_read_mmio(val, 16, addr, _THIS_IP_); return val; } #endif @@ -175,9 +218,11 @@ static inline u32 readl(const volatile void __iomem *addr) { u32 val; + log_read_mmio(32, addr, _THIS_IP_); __io_br(); val = __le32_to_cpu((__le32 __force)__raw_readl(addr)); __io_ar(val); + log_post_read_mmio(val, 32, addr, _THIS_IP_); return val; } #endif @@ -189,9 +234,11 @@ static inline u64 readq(const volatile void __iomem *addr) { u64 val; + log_read_mmio(64, addr, _THIS_IP_); __io_br(); val = __le64_to_cpu(__raw_readq(addr)); __io_ar(val); + log_post_read_mmio(val, 64, addr, _THIS_IP_); return val; } #endif @@ -201,9 +248,11 @@ static inline u64 readq(const volatile void __iomem *addr) #define writeb writeb static inline void writeb(u8 value, volatile void __iomem *addr) { + log_write_mmio(value, 8, addr, _THIS_IP_); __io_bw(); __raw_writeb(value, addr); __io_aw(); + log_post_write_mmio(value, 8, addr, _THIS_IP_); } #endif @@ -211,9 +260,11 @@ static inline void writeb(u8 value, volatile void __iomem *addr) #define writew writew static inline void writew(u16 value, volatile void __iomem *addr) { + log_write_mmio(value, 16, addr, _THIS_IP_); __io_bw(); __raw_writew((u16 __force)cpu_to_le16(value), addr); __io_aw(); + log_post_write_mmio(value, 16, addr, _THIS_IP_); } #endif @@ -221,9 +272,11 @@ static inline void writew(u16 value, volatile void __iomem *addr) #define writel writel static inline void writel(u32 value, volatile void __iomem *addr) { + log_write_mmio(value, 32, addr, _THIS_IP_); __io_bw(); __raw_writel((u32 __force)__cpu_to_le32(value), addr); __io_aw(); + log_post_write_mmio(value, 32, addr, _THIS_IP_); } #endif @@ -232,9 +285,11 @@ static inline void writel(u32 value, volatile void __iomem *addr) #define writeq writeq static inline void writeq(u64 value, volatile void __iomem *addr) { + log_write_mmio(value, 64, addr, _THIS_IP_); __io_bw(); __raw_writeq(__cpu_to_le64(value), addr); __io_aw(); + log_post_write_mmio(value, 64, addr, _THIS_IP_); } #endif #endif /* CONFIG_64BIT */ @@ -248,7 +303,12 @@ static inline void writeq(u64 value, volatile void __iomem *addr) #define readb_relaxed readb_relaxed static inline u8 readb_relaxed(const volatile void __iomem *addr) { - return __raw_readb(addr); + u8 val; + + log_read_mmio(8, addr, _THIS_IP_); + val = __raw_readb(addr); + log_post_read_mmio(val, 8, addr, _THIS_IP_); + return val; } #endif @@ -256,7 +316,12 @@ static inline u8 readb_relaxed(const volatile void __iomem *addr) #define readw_relaxed readw_relaxed static inline u16 readw_relaxed(const volatile void __iomem *addr) { - return __le16_to_cpu(__raw_readw(addr)); + u16 val; + + log_read_mmio(16, addr, _THIS_IP_); + val = __le16_to_cpu(__raw_readw(addr)); + log_post_read_mmio(val, 16, addr, _THIS_IP_); + return val; } #endif @@ -264,7 +329,12 @@ static inline u16 readw_relaxed(const volatile void __iomem *addr) #define readl_relaxed readl_relaxed static inline u32 readl_relaxed(const volatile void __iomem *addr) { - return __le32_to_cpu(__raw_readl(addr)); + u32 val; + + log_read_mmio(32, addr, _THIS_IP_); + val = 
__le32_to_cpu(__raw_readl(addr)); + log_post_read_mmio(val, 32, addr, _THIS_IP_); + return val; } #endif @@ -272,7 +342,12 @@ static inline u32 readl_relaxed(const volatile void __iomem *addr) #define readq_relaxed readq_relaxed static inline u64 readq_relaxed(const volatile void __iomem *addr) { - return __le64_to_cpu(__raw_readq(addr)); + u64 val; + + log_read_mmio(64, addr, _THIS_IP_); + val = __le64_to_cpu(__raw_readq(addr)); + log_post_read_mmio(val, 64, addr, _THIS_IP_); + return val; } #endif @@ -280,7 +355,9 @@ static inline u64 readq_relaxed(const volatile void __iomem *addr) #define writeb_relaxed writeb_relaxed static inline void writeb_relaxed(u8 value, volatile void __iomem *addr) { + log_write_mmio(value, 8, addr, _THIS_IP_); __raw_writeb(value, addr); + log_post_write_mmio(value, 8, addr, _THIS_IP_); } #endif @@ -288,7 +365,9 @@ static inline void writeb_relaxed(u8 value, volatile void __iomem *addr) #define writew_relaxed writew_relaxed static inline void writew_relaxed(u16 value, volatile void __iomem *addr) { + log_write_mmio(value, 16, addr, _THIS_IP_); __raw_writew(cpu_to_le16(value), addr); + log_post_write_mmio(value, 16, addr, _THIS_IP_); } #endif @@ -296,7 +375,9 @@ static inline void writew_relaxed(u16 value, volatile void __iomem *addr) #define writel_relaxed writel_relaxed static inline void writel_relaxed(u32 value, volatile void __iomem *addr) { + log_write_mmio(value, 32, addr, _THIS_IP_); __raw_writel(__cpu_to_le32(value), addr); + log_post_write_mmio(value, 32, addr, _THIS_IP_); } #endif @@ -304,7 +385,9 @@ static inline void writel_relaxed(u32 value, volatile void __iomem *addr) #define writeq_relaxed writeq_relaxed static inline void writeq_relaxed(u64 value, volatile void __iomem *addr) { + log_write_mmio(value, 64, addr, _THIS_IP_); __raw_writeq(__cpu_to_le64(value), addr); + log_post_write_mmio(value, 64, addr, _THIS_IP_); } #endif From patchwork Wed May 4 11:28:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sai Prakash Ranjan X-Patchwork-Id: 12837695 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AD0AC433EF for ; Wed, 4 May 2022 11:29:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235669AbiEDLdS (ORCPT ); Wed, 4 May 2022 07:33:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43262 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348843AbiEDLdR (ORCPT ); Wed, 4 May 2022 07:33:17 -0400 Received: from alexa-out-sd-01.qualcomm.com (alexa-out-sd-01.qualcomm.com [199.106.114.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 288672B1B0; Wed, 4 May 2022 04:29:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1651663777; x=1683199777; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a/+b7pV2Bo1HIRQnpM4Y15NOBUEBq9pVgDGvjKkqQtY=; b=oazpa+FQV9Cs6DPBWGKjY+BjMAxIeBu0YPw5ttqao/fCeqpmo3yDKxsi X+Ei4I95qUwaER0801mpkVFFx/AF2fVYFVgBRxRwltiAx/uBrIyYV0XOL 3RK9YG/4zlf5RPL1iXz+9Ujl9wlqb7s+ogjTAswXTJjxdur+seZND7dNE A=; Received: from unknown (HELO ironmsg05-sd.qualcomm.com) ([10.53.140.145]) by alexa-out-sd-01.qualcomm.com with ESMTP; 04 May 2022 04:29:36 -0700 X-QCInternal: smtphost 
From: Sai Prakash Ranjan
Subject: [PATCHv14 8/9] serial: qcom_geni_serial: Disable MMIO tracing for geni serial
Date: Wed, 4 May 2022 16:58:27 +0530
Message-ID: <282f2d8fb795e8b1961693ed4184b8136c3520db.1651663123.git.quic_saipraka@quicinc.com>

Disable MMIO tracing for the geni serial driver to prevent excessive
logging. Any access over the serial console involves a large number of TX
and RX register accesses (and a few others), so the MMIO read/write trace
events from this driver generate a lot of unwanted noise because of the
high frequency of these operations, and tracing them is of little use.

Given that we want to enable these trace events on development devices
(though perhaps not on production devices), where performance also matters
and we must not regress other components by wasting CPU cycles and memory
on collecting these traces, it makes more sense to disable them in such
drivers.

Another reason to disable these traces is to prevent recursive tracing:
dumping a trace buffer that contains MMIO trace events to the serial
console would itself generate further MMIO traces.

Cc: Bjorn Andersson
Signed-off-by: Sai Prakash Ranjan
---
 drivers/tty/serial/qcom_geni_serial.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
index 1543a6028856..75272585c3c3 100644
--- a/drivers/tty/serial/qcom_geni_serial.c
+++ b/drivers/tty/serial/qcom_geni_serial.c
@@ -1,6 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
 // Copyright (c) 2017-2018, The Linux foundation. All rights reserved.
 
+/* Disable MMIO tracing to prevent excessive logging of unwanted MMIO traces */
+#define __DISABLE_TRACE_MMIO__
+
 #include
 #include
 #include

From patchwork Wed May 4 11:28:28 2022
From: Sai Prakash Ranjan
Subject: [PATCHv14 9/9] soc: qcom: geni: Disable MMIO tracing for GENI SE
Date: Wed, 4 May 2022 16:58:28 +0530

Disable MMIO tracing for the GENI SE driver to prevent excessive logging.
Any access over the serial console involves a large number of TX and RX
register accesses (and a few others), so the MMIO read/write trace events
from these drivers generate a lot of unwanted noise because of the high
frequency of these operations, and tracing them is of little use.
Given that we want to enable these trace events on development devices
(though perhaps not on production devices), where performance also matters
and we must not regress other components by wasting CPU cycles and memory
on collecting these traces, it makes more sense to disable them in such
drivers.

Another reason to disable these traces is to prevent recursive tracing:
dumping a trace buffer that contains MMIO trace events to the serial
console would itself generate further MMIO traces.

Cc: Bjorn Andersson
Signed-off-by: Sai Prakash Ranjan
---
 drivers/soc/qcom/qcom-geni-se.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 28a8c0dda66c..a0ceeede450f 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -1,6 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
 // Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
 
+/* Disable MMIO tracing to prevent excessive logging of unwanted MMIO traces */
+#define __DISABLE_TRACE_MMIO__
+
 #include
 #include
 #include
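The opt-out used in these last two patches is not specific to the geni
drivers: per the usage note added to asm-generic/io.h in patch 7, any driver
that would otherwise flood the trace buffer can define __DISABLE_TRACE_MMIO__
before its first include, so the MMIO accessors seen by that translation unit
are compiled without the log_*_mmio() hooks. A minimal sketch with a
hypothetical driver (not part of this series):

// SPDX-License-Identifier: GPL-2.0
/* Opt this file out of MMIO tracing before any header is included, so
 * asm-generic/io.h sees the flag when it is pulled in. */
#define __DISABLE_TRACE_MMIO__

#include <linux/io.h>

/* Hypothetical high-frequency status poll that would otherwise emit an
 * rwmmio_read/rwmmio_post_read event pair on every call. */
static u32 example_poll_status(void __iomem *base)
{
	return readl_relaxed(base);
}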