From patchwork Mon Sep 17 13:59:58 2012
X-Patchwork-Submitter: Rob Herring
X-Patchwork-Id: 1467571
From: Rob Herring
To: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas, Russell King, Rob Herring
Subject: [PATCH v2] ARM: l2x0: make background cache ops optional for clean and flush range
Date: Mon, 17 Sep 2012 08:59:58 -0500
Message-Id: <1347890398-22088-1-git-send-email-robherring2@gmail.com>
In-Reply-To: <1347306334-781-1-git-send-email-robherring2@gmail.com>
References: <1347306334-781-1-git-send-email-robherring2@gmail.com>
From: Rob Herring

All but background ops are atomic on the pl310, so a spinlock is not
needed for a cache sync if background operations are not used. Using
background ops was an optimization for flushing large buffers, but that
is not needed on platforms where I/O is coherent and/or whose cache is
larger than the buffers they are likely to flush at once. The cache sync
spinlock is taken on every readl/writel and can be a bottleneck for code
paths with frequent register accesses.

The default behaviour is unchanged. Platforms can opt in to using only
atomic cache ops by adding "arm,use-atomic-ops" to the pl310 device-tree
node. It is assumed that the remaining background ops are only used in
non-SMP code paths.

Signed-off-by: Rob Herring
Cc: Catalin Marinas
---
 Documentation/devicetree/bindings/arm/l2cc.txt |  3 +++
 arch/arm/mm/cache-l2x0.c                       | 18 +++++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/arm/l2cc.txt b/Documentation/devicetree/bindings/arm/l2cc.txt
index 7ca5216..907c066 100644
--- a/Documentation/devicetree/bindings/arm/l2cc.txt
+++ b/Documentation/devicetree/bindings/arm/l2cc.txt
@@ -29,6 +29,9 @@ Optional properties:
   filter. Addresses in the filter window are directed to the M1 port. Other
   addresses will go to the M0 port.
 - interrupts : 1 combined interrupt.
+- arm,use-atomic-ops : If present only use atomic cache flush operations and
+  don't use background operations except for non-SMP safe locations (boot and
+  shutdown).
 
 Example:
 
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index 2a8e380..e3b2ac2 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -33,6 +33,7 @@ static DEFINE_RAW_SPINLOCK(l2x0_lock);
 static u32 l2x0_way_mask;	/* Bitmask of active ways */
 static u32 l2x0_size;
 static unsigned long sync_reg_offset = L2X0_CACHE_SYNC;
+static bool use_background_ops = true;
 
 struct l2x0_regs l2x0_saved_regs;
 
@@ -130,6 +131,11 @@ static void l2x0_cache_sync(void)
 	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
+static void l2x0_cache_sync_nolock(void)
+{
+	cache_sync();
+}
+
 static void __l2x0_flush_all(void)
 {
 	debug_writel(0x03);
@@ -219,7 +225,7 @@ static void l2x0_clean_range(unsigned long start, unsigned long end)
 	void __iomem *base = l2x0_base;
 	unsigned long flags;
 
-	if ((end - start) >= l2x0_size) {
+	if (use_background_ops && ((end - start) >= l2x0_size)) {
 		l2x0_clean_all();
 		return;
 	}
@@ -249,7 +255,7 @@ static void l2x0_flush_range(unsigned long start, unsigned long end)
 	void __iomem *base = l2x0_base;
 	unsigned long flags;
 
-	if ((end - start) >= l2x0_size) {
+	if (use_background_ops && ((end - start) >= l2x0_size)) {
 		l2x0_flush_all();
 		return;
 	}
@@ -379,7 +385,8 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
 	outer_cache.inv_range = l2x0_inv_range;
 	outer_cache.clean_range = l2x0_clean_range;
 	outer_cache.flush_range = l2x0_flush_range;
-	outer_cache.sync = l2x0_cache_sync;
+	if (!outer_cache.sync)
+		outer_cache.sync = l2x0_cache_sync;
 	outer_cache.flush_all = l2x0_flush_all;
 	outer_cache.inv_all = l2x0_inv_all;
 	outer_cache.disable = l2x0_disable;
@@ -456,6 +463,11 @@ static void __init pl310_of_setup(const struct device_node *np,
 		writel_relaxed((filter[0] & ~(SZ_1M - 1)) | L2X0_ADDR_FILTER_EN,
 			       l2x0_base + L2X0_ADDR_FILTER_START);
 	}
+
+	if (of_property_read_bool(np, "arm,use-atomic-ops")) {
+		use_background_ops = false;
+		outer_cache.sync = l2x0_cache_sync_nolock;
+	}
 }
 
 static void __init pl310_save(void)
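
For reference (not part of the patch), opting in from a board's device tree
only means adding the new boolean property to the existing pl310 node. A
minimal sketch in the style of the l2cc.txt example; the unit address, reg
and interrupt values below are illustrative, not taken from the patch:

	L2: cache-controller@fff12000 {
		compatible = "arm,pl310-cache";
		reg = <0xfff12000 0x1000>;	/* controller base, illustrative */
		cache-unified;
		cache-level = <2>;
		interrupts = <45>;		/* combined interrupt, illustrative */
		arm,use-atomic-ops;		/* opt in to atomic-only maintenance ops */
	};

With the property present, pl310_of_setup() clears use_background_ops and
installs l2x0_cache_sync_nolock, so clean/flush range calls never fall back
to the background *_all operations and cache sync no longer takes l2x0_lock.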