From patchwork Sun Sep 5 03:21:44 2021
X-Patchwork-Submitter: Leo Yan
X-Patchwork-Id: 12476017
From: Leo Yan
To: Mathieu Poirier, Suzuki K Poulose, Mike Leach, Robin Murphy,
    Alexander Shishkin, coresight@lists.linaro.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Leo Yan
Subject: [PATCH v4] coresight: tmc-etr: Speed up for bounce buffer in flat mode
Date: Sun, 5 Sep 2021 11:21:44 +0800
Message-Id: <20210905032144.966766-1-leo.yan@linaro.org>

The AUX bounce buffer is allocated with dma_alloc_coherent(); the
low-level architecture code, e.g. for Arm64, maps the memory with the
attribute "Normal non-cacheable", as can be seen from the definition of
pgprot_dmacoherent() in arch/arm64/include/asm/pgtable.h.  Because the
mapping is non-cacheable, later accesses to the AUX bounce buffer are
inefficient: every load instruction must fetch its data from DRAM.
This patch switches the allocation to dma_alloc_noncoherent(), so the
driver can access the memory through a cacheable mapping; load
instructions can then fetch data from cache lines rather than always
reading from DRAM, which improves memory performance.  With the
cacheable mapping in place, the driver calls dma_sync_single_for_cpu()
to invalidate the relevant cache lines before reading the bounce
buffer, so it cannot see stale trace data.

Measuring the duration of tmc_update_etr_buffer() with the ftrace
function_graph tracer shows a significant performance improvement for
copying 4MiB of data from the bounce buffer:

  # echo tmc_etr_get_data_flat_buf > set_graph_notrace  // avoid noise
  # echo tmc_update_etr_buffer > set_graph_function
  # echo function_graph > current_tracer

  before:

  # CPU  DURATION                  FUNCTION CALLS
  # |     |   |                     |   |   |   |
  2)               |  tmc_update_etr_buffer() {
  ...
  2) # 8148.320 us |  }

  after:

  # CPU  DURATION                  FUNCTION CALLS
  # |     |   |                     |   |   |   |
  2)               |  tmc_update_etr_buffer() {
  ...
  2) # 2525.420 us |  }

Signed-off-by: Leo Yan
Reviewed-by: Suzuki K Poulose
---
Changes from v3:
  Refined change to use dma_alloc_noncoherent()/dma_free_noncoherent()
  (Robin Murphy);
  Retested functionality and performance on Juno-r2 board.

Changes from v2:
  Sync the entire buffer in one go when the tracing wraps around
  (Suzuki);
  Add Suzuki's review tag.
 .../hwtracing/coresight/coresight-tmc-etr.c | 26 ++++++++++++++++---
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
index acdb59e0e661..a049b525a274 100644
--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
+++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
@@ -609,8 +609,9 @@ static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata,
 	if (!flat_buf)
 		return -ENOMEM;
 
-	flat_buf->vaddr = dma_alloc_coherent(real_dev, etr_buf->size,
-					     &flat_buf->daddr, GFP_KERNEL);
+	flat_buf->vaddr = dma_alloc_noncoherent(real_dev, etr_buf->size,
+						&flat_buf->daddr,
+						DMA_FROM_DEVICE, GFP_KERNEL);
 	if (!flat_buf->vaddr) {
 		kfree(flat_buf);
 		return -ENOMEM;
@@ -631,14 +632,18 @@ static void tmc_etr_free_flat_buf(struct etr_buf *etr_buf)
 	if (flat_buf && flat_buf->daddr) {
 		struct device *real_dev = flat_buf->dev->parent;
 
-		dma_free_coherent(real_dev, flat_buf->size,
-				  flat_buf->vaddr, flat_buf->daddr);
+		dma_free_noncoherent(real_dev, etr_buf->size,
+				     flat_buf->vaddr, flat_buf->daddr,
+				     DMA_FROM_DEVICE);
 	}
 	kfree(flat_buf);
 }
 
 static void tmc_etr_sync_flat_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp)
 {
+	struct etr_flat_buf *flat_buf = etr_buf->private;
+	struct device *real_dev = flat_buf->dev->parent;
+
 	/*
 	 * Adjust the buffer to point to the beginning of the trace data
 	 * and update the available trace data.
@@ -648,6 +653,19 @@ static void tmc_etr_sync_flat_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp)
 		etr_buf->len = etr_buf->size;
 	else
 		etr_buf->len = rwp - rrp;
+
+	/*
+	 * The driver always starts tracing at the beginning of the buffer,
+	 * the only reason why we would get a wrap around is when the buffer
+	 * is full.  Sync the entire buffer in one go for this case.
+	 */
+	if (etr_buf->offset + etr_buf->len > etr_buf->size)
+		dma_sync_single_for_cpu(real_dev, flat_buf->daddr,
+					etr_buf->size, DMA_FROM_DEVICE);
+	else
+		dma_sync_single_for_cpu(real_dev,
+					flat_buf->daddr + etr_buf->offset,
+					etr_buf->len, DMA_FROM_DEVICE);
 }
 
 static ssize_t tmc_etr_get_data_flat_buf(struct etr_buf *etr_buf,