From patchwork Thu Nov 21 13:40:47 2013
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 3218561
From: Hiroshi Doyu
To: , , , , , ,
Cc: mark.rutland@arm.com, devicetree@vger.kernel.org,
 lorenzo.pieralisi@arm.com, linux-kernel@vger.kernel.org,
 iommu@lists.linux-foundation.org, galak@codeaurora.org,
 linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 Hiroshi Doyu
Subject: [PATCHv6 11/13] iommu/tegra: smmu: Rename hwgrp -> swgroups
Date: Thu, 21 Nov 2013 15:40:47 +0200
Message-ID: <1385041249-7705-12-git-send-email-hdoyu@nvidia.com>
In-Reply-To: <1385041249-7705-1-git-send-email-hdoyu@nvidia.com>
References: <1385041249-7705-1-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.8.1.5

Use the correct term for SWGROUP-related variables and macros. A
"swgroup" is a collection of "memory clients", where a "memory client"
usually represents a hardware accelerator (HWA) such as the GPU. A
struct device can sometimes belong to multiple swgroups, which is why
the plural form "swgroups" is used here. "swgroups" is also the term
used in the Tegra TRM, so rename to match the TRM.

Signed-off-by: Hiroshi Doyu
---
v4: New for v4
---
 drivers/iommu/tegra-smmu.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 76356db..1544f7c 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -179,12 +179,12 @@ enum {
 
 #define NUM_SMMU_REG_BANKS	3
 
-#define smmu_client_enable_hwgrp(c, m)	smmu_client_set_hwgrp(c, m, 1)
-#define smmu_client_disable_hwgrp(c)	smmu_client_set_hwgrp(c, 0, 0)
-#define __smmu_client_enable_hwgrp(c, m) __smmu_client_set_hwgrp(c, m, 1)
-#define __smmu_client_disable_hwgrp(c)	__smmu_client_set_hwgrp(c, 0, 0)
+#define smmu_client_enable_swgroups(c, m) smmu_client_set_swgroups(c, m, 1)
+#define smmu_client_disable_swgroups(c)	smmu_client_set_swgroups(c, 0, 0)
+#define __smmu_client_enable_swgroups(c, m) __smmu_client_set_swgroups(c, m, 1)
+#define __smmu_client_disable_swgroups(c) __smmu_client_set_swgroups(c, 0, 0)
 
-#define HWGRP_ASID_REG(x)	((x) * sizeof(u32) + SMMU_ASID_BASE)
+#define SWGROUPS_ASID_REG(x)	((x) * sizeof(u32) + SMMU_ASID_BASE)
 
 /*
  * Per client for address space
@@ -195,7 +195,7 @@ struct smmu_client {
 	struct device		*dev;
 	struct list_head	list;
 	struct smmu_as		*as;
-	unsigned long		hwgrp[2];
+	unsigned long		swgroups[2];
 };
 
 /*
@@ -377,7 +377,7 @@ static int register_smmu_client(struct smmu_device *smmu,
 
 	client->dev = dev;
 	client->of_node = dev->of_node;
-	memcpy(client->hwgrp, swgroups, sizeof(u64));
+	memcpy(client->swgroups, swgroups, sizeof(u64));
 	return insert_smmu_client(smmu, client);
 }
 
@@ -403,7 +403,7 @@ static int smmu_of_get_swgroups(struct device *dev, unsigned long *swgroups)
 	return -ENODEV;
 }
 
-static int __smmu_client_set_hwgrp(struct smmu_client *c,
+static int __smmu_client_set_swgroups(struct smmu_client *c,
 				   unsigned long *map, int on)
 {
 	int i;
@@ -412,10 +412,10 @@ static int __smmu_client_set_hwgrp(struct smmu_client *c,
 	struct smmu_device *smmu = as->smmu;
 
 	if (!on)
-		map = c->hwgrp;
+		map = c->swgroups;
 
 	for_each_set_bit(i, map, TEGRA_SWGROUP_MAX) {
-		offs = HWGRP_ASID_REG(i);
+		offs = SWGROUPS_ASID_REG(i);
 		val = smmu_read(smmu, offs);
 		if (on) {
 			if (val) {
@@ -425,7 +425,7 @@ static int __smmu_client_set_hwgrp(struct smmu_client *c,
 			}
 
 			val = mask;
-			memcpy(c->hwgrp, map, sizeof(u64));
+			memcpy(c->swgroups, map, sizeof(u64));
 		} else {
 			WARN_ON((val & mask) == mask);
 			val &= ~mask;
@@ -438,7 +438,7 @@ skip:
 	return 0;
 }
 
-static int smmu_client_set_hwgrp(struct smmu_client *c,
+static int smmu_client_set_swgroups(struct smmu_client *c,
 				 unsigned long *map, int on)
 {
 	int err;
@@ -447,7 +447,7 @@ static int smmu_client_set_hwgrp(struct smmu_client *c,
 	struct smmu_device *smmu = as->smmu;
 
 	spin_lock_irqsave(&smmu->lock, flags);
-	err = __smmu_client_set_hwgrp(c, map, on);
+	err = __smmu_client_set_swgroups(c, map, on);
 	spin_unlock_irqrestore(&smmu->lock, flags);
 	return err;
 }
@@ -487,7 +487,7 @@ static int smmu_setup_regs(struct smmu_device *smmu)
 		smmu_write(smmu, val, SMMU_PTB_DATA);
 
 		list_for_each_entry(c, &as->client, list)
-			__smmu_client_set_hwgrp(c, c->hwgrp, 1);
+			__smmu_client_set_swgroups(c, c->swgroups, 1);
 	}
 
 	smmu_write(smmu, smmu->translation_enable_0, SMMU_TRANSLATION_ENABLE_0);
@@ -815,7 +815,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 		return -ENOMEM;
 
 	client->as = as;
-	err = smmu_client_enable_hwgrp(client, client->hwgrp);
+	err = smmu_client_enable_swgroups(client, client->swgroups);
 	if (err)
 		return -EINVAL;
 
@@ -835,7 +835,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 	 * Reserve "page zero" for AVP vectors using a common dummy
 	 * page.
 	 */
-	if (test_bit(TEGRA_SWGROUP_AVPC, client->hwgrp)) {
+	if (test_bit(TEGRA_SWGROUP_AVPC, client->swgroups)) {
 		struct page *page;
 
 		page = as->smmu->avp_vector_page;
@@ -848,7 +848,7 @@ static int smmu_iommu_attach_dev(struct iommu_domain *domain,
 	return 0;
 
 err_client:
-	smmu_client_disable_hwgrp(client);
+	smmu_client_disable_swgroups(client);
 	spin_unlock(&as->client_lock);
 	return err;
 }
@@ -864,7 +864,7 @@ static void smmu_iommu_detach_dev(struct iommu_domain *domain,
 
 	list_for_each_entry(c, &as->client, list) {
 		if (c->dev == dev) {
-			smmu_client_disable_hwgrp(c);
+			smmu_client_disable_swgroups(c);
 			list_del(&c->list);
 			c->as = NULL;
 			dev_dbg(smmu->dev,
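
As a quick reference for reviewers who have not worked with this driver:
the sketch below is a minimal, standalone model (not driver code) of the
pattern the renamed helpers implement, i.e. each set bit in a client's
"swgroups" bitmap selects one per-swgroup ASID register through
SWGROUPS_ASID_REG(). Register I/O is simulated with a plain array, and
the bitmap width and register base are illustrative stand-ins rather
than the real Tegra values; only the bitmap-to-register walk mirrors the
code in the diff above.

/*
 * Standalone sketch (not driver code): models how a client's "swgroups"
 * bitmap drives the per-swgroup ASID registers, mirroring the renamed
 * SWGROUPS_ASID_REG()/__smmu_client_set_swgroups() pattern above.
 * Register I/O is simulated with a plain array; the bitmap width and
 * the register base below are illustrative stand-ins, not real values.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define TEGRA_SWGROUP_MAX	64		/* stand-in bitmap width */
#define SMMU_ASID_BASE		0x238		/* stand-in register base */
#define SWGROUPS_ASID_REG(x)	((x) * sizeof(uint32_t) + SMMU_ASID_BASE)

static uint32_t fake_regs[1024];		/* simulated register file */

static void smmu_write(uint32_t val, size_t offs)
{
	fake_regs[offs / sizeof(uint32_t)] = val;
}

/* Point every swgroup whose bit is set in the bitmap at the given ASID. */
static void set_swgroups(const uint64_t *swgroups, uint32_t asid)
{
	for (unsigned int i = 0; i < TEGRA_SWGROUP_MAX; i++) {
		if (swgroups[i / 64] & (1ULL << (i % 64))) {
			size_t offs = SWGROUPS_ASID_REG(i);

			smmu_write(asid, offs);
			printf("swgroup %2u -> ASID reg 0x%zx = %u\n",
			       i, offs, (unsigned int)asid);
		}
	}
}

int main(void)
{
	/* A client belonging to two swgroups, e.g. bits 3 and 17. */
	uint64_t swgroups[2] = { (1ULL << 3) | (1ULL << 17), 0 };

	set_swgroups(swgroups, 1);
	return 0;
}

Built as ordinary userspace C, it prints the two ASID register offsets
selected by the two set bits; the real driver performs the same walk
with for_each_set_bit() while holding smmu->lock.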