From patchwork Wed Dec 26 01:54:15 2012
X-Patchwork-Submitter: Cho KyongHo
X-Patchwork-Id: 1910461
From: Cho KyongHo
To: 'Linux ARM Kernel', 'Linux IOMMU', 'Linux Kernel', 'Linux Samsung SOC'
Subject: [PATCH v6 09/12] iommu/exynos: add support for System MMU 3.2 and 3.3
Date: Wed, 26 Dec 2012 10:54:15 +0900
Message-id: <003c01cde30b$e8ff6710$bafe3530$%cho@samsung.com>
Cc: 'Kukjin Kim', 'Hyunwoong Kim', 'Prathyush', 'Joerg Roedel', 'Subash Patel', 'Rahul Sharma'

Since System MMU 3.2 and 3.3 have more than two prefetch buffers, the
existing function for configuring prefetch buffers,
exynos_sysmmu_set_prefbuf(), cannot support them. This commit removes
exynos_sysmmu_set_prefbuf() and introduces a new interface,
exynos_sysmmu_set_pbuf(), which can pass information about more than
two buffers. It is safe to remove the existing function because no
device driver in the kernel calls it yet.
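For illustration, a client driver that owns two DMA buffers could hand
them to its System MMU through the new interface as below. This is only
a sketch: the device pointer and buffer addresses are hypothetical, and
it assumes struct sysmmu_prefbuf, the SYSMMU_PBUFCFG_* flags and the
prototype of exynos_sysmmu_set_pbuf() are made visible to client
drivers through a header, which this patch does not show.

        struct sysmmu_prefbuf pbufs[2] = {
                {       /* hypothetical source buffer, read by the device */
                        .base   = src_dma_addr,
                        .size   = SZ_1M,
                        .config = SYSMMU_PBUFCFG_READ | SYSMMU_PBUFCFG_PREFETCH,
                }, {    /* hypothetical destination buffer, written by the device */
                        .base   = dst_dma_addr,
                        .size   = SZ_1M,
                        .config = SYSMMU_PBUFCFG_WRITE | SYSMMU_PBUFCFG_PREFETCH,
                },
        };

        exynos_sysmmu_set_pbuf(dev, ARRAY_SIZE(pbufs), pbufs);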
Signed-off-by: KyongHo Cho
---
 drivers/iommu/exynos-iommu.c | 370 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 322 insertions(+), 48 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index c3c5b7b..a558c51 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -80,6 +80,14 @@
 #define CTRL_BLOCK      0x7
 #define CTRL_DISABLE    0x0
 
+#define CFG_LRU         0x1
+#define CFG_QOS(n)      ((n & 0xF) << 7)
+#define CFG_MASK        0x0150FFFF /* Selecting bit 0-15, 20, 22 and 24 */
+#define CFG_ACGEN       (1 << 24) /* System MMU 3.3 only */
+#define CFG_SYSSEL      (1 << 22) /* System MMU 3.2 only */
+#define CFG_FLPDCACHE   (1 << 20) /* System MMU 3.2+ only */
+#define CFG_SHAREABLE   (1 << 12) /* System MMU 3.x only */
+
 #define REG_MMU_CTRL    0x000
 #define REG_MMU_CFG     0x004
 #define REG_MMU_STATUS  0x008
@@ -88,6 +96,10 @@
 #define REG_PT_BASE_ADDR        0x014
 #define REG_INT_STATUS          0x018
 #define REG_INT_CLEAR           0x01C
+#define REG_PB_INFO             0x400
+#define REG_PB_LMM              0x404
+#define REG_PB_INDICATE         0x408
+#define REG_PB_CFG              0x40C
 
 #define REG_PAGE_FAULT_ADDR     0x024
 #define REG_AW_FAULT_ADDR       0x028
@@ -96,10 +108,12 @@
 #define REG_MMU_VERSION         0x034
 
-#define REG_PB0_SADDR           0x04C
-#define REG_PB0_EADDR           0x050
-#define REG_PB1_SADDR           0x054
-#define REG_PB1_EADDR           0x058
+#define MMU_MAJ_VER(reg)        (reg >> 28)
+#define MMU_MIN_VER(reg)        ((reg >> 21) & 0x7F)
+
+#define MAX_NUM_PBUF            6
+
+#define NUM_MINOR_OF_SYSMMU_V3  4
 
 static void *sysmmu_placeholder; /* Inidcate if a device is System MMU */
@@ -178,6 +192,19 @@ struct exynos_iommu_owner {
         spinlock_t lock; /* Lock to preserve consistency of System MMU */
 };
 
+#define SYSMMU_PBUFCFG_TLB_UPDATE       (1 << 16)
+#define SYSMMU_PBUFCFG_ASCENDING        (1 << 12)
+#define SYSMMU_PBUFCFG_DSECENDING       (0 << 12) /* default */
+#define SYSMMU_PBUFCFG_PREFETCH         (1 << 8)
+#define SYSMMU_PBUFCFG_WRITE            (1 << 4)
+#define SYSMMU_PBUFCFG_READ             (0 << 4) /* default */
+
+struct sysmmu_prefbuf {
+        unsigned long base;
+        unsigned long size;
+        unsigned long config;
+};
+
 struct sysmmu_drvdata {
         struct device *sysmmu;  /* System MMU's device descriptor */
         struct device *master;  /* Client device that needs System MMU */
@@ -185,6 +212,8 @@ struct sysmmu_drvdata {
         struct clk *clk;
         int activations;
         spinlock_t lock;
+        struct sysmmu_prefbuf pbufs[MAX_NUM_PBUF];
+        int num_pbufs;
         struct iommu_domain *domain;
         unsigned long pgtable;
         bool runtime_active;
@@ -210,6 +239,21 @@ static bool is_sysmmu_active(struct sysmmu_drvdata *data)
         return data->activations > 0;
 }
 
+static unsigned int __sysmmu_version(struct sysmmu_drvdata *drvdata,
+                                        int idx, unsigned int *minor)
+{
+        unsigned int major;
+
+        major = readl(drvdata->sfrbases[idx] + REG_MMU_VERSION);
+
+        if (minor)
+                *minor = MMU_MIN_VER(major);
+
+        major = MMU_MAJ_VER(major);
+
+        return major;
+}
+
 static void sysmmu_unblock(void __iomem *sfrbase)
 {
         __raw_writel(CTRL_ENABLE, sfrbase + REG_MMU_CTRL);
@@ -245,32 +289,235 @@ static void __sysmmu_tlb_invalidate_entry(void __iomem *sfrbase,
 static void __sysmmu_set_ptbase(void __iomem *sfrbase, unsigned long pgd)
 {
-        __raw_writel(0x1, sfrbase + REG_MMU_CFG); /* 16KB LV1, LRU */
         __raw_writel(pgd, sfrbase + REG_PT_BASE_ADDR);
 
         __sysmmu_tlb_invalidate(sfrbase);
 }
 
-static void __sysmmu_set_prefbuf(void __iomem *sfrbase, unsigned long base,
-                                                unsigned long size, int idx)
+static void __sysmmu_set_prefbuf(void __iomem *pbufbase, unsigned long base,
+                                                unsigned long size, int idx)
+{
+        __raw_writel(base, pbufbase + idx * 8);
+        __raw_writel(size - 1 + base, pbufbase + 4 + idx * 8);
+}
+
+/*
+ * Offset of prefetch buffer setting registers are different
+ * between SysMMU 3.1 and 3.2. 3.3 has a single prefetch buffer setting.
+ */
+static unsigned short
+        pbuf_offset[NUM_MINOR_OF_SYSMMU_V3] = {0x04C, 0x04C, 0x070, 0x410};
+
+/**
+ * __sysmmu_sort_prefbuf - sort the given @prefbuf in descending order.
+ * @prefbuf: array of buffer information
+ * @nbufs: number of elements of @prefbuf
+ * @check_size: whether to compare buffer sizes. See below description.
+ *
+ * return value is valid if @check_size is true. If the size of first buffer
+ * in @prefbuf is larger than or equal to the sum of the sizes of the other
+ * buffers, returns 1. If the size of the first buffer is smaller than the
+ * sum of other sizes, returns -1. Returns 0, otherwise.
+ */
+static int __sysmmu_sort_prefbuf(struct sysmmu_prefbuf prefbuf[],
+                                        int nbufs, bool check_size)
+{
+        int i;
+
+        for (i = 0; i < nbufs; i++) {
+                int j;
+                for (j = i + 1; j < nbufs; j++)
+                        if (prefbuf[i].size < prefbuf[j].size)
+                                swap(prefbuf[i], prefbuf[j]);
+        }
+
+        if (check_size) {
+                unsigned long sum = 0;
+                for (i = 1; i < nbufs; i++)
+                        sum += prefbuf[i].size;
+
+                if (prefbuf[0].size < sum)
+                        i = -1;
+                else if (prefbuf[0].size >= (sum * 2))
+                        i = 1;
+                else
+                        i = 0;
+        }
+
+        return i;
+}
+
+static void __exynos_sysmmu_set_pbuf_ver31(struct sysmmu_drvdata *data,
+                        int idx, int nbufs, struct sysmmu_prefbuf prefbuf[])
+{
+        unsigned long cfg =
+                __raw_readl(data->sfrbases[idx] + REG_MMU_CFG) & CFG_MASK;
+
+        if (nbufs > 1) {
+                unsigned long base = prefbuf[1].base;
+                unsigned long end = prefbuf[1].base + prefbuf[1].size;
+
+                /* merging buffers from the second to the last */
+                while (nbufs-- > 2) {
+                        base = min(base, prefbuf[nbufs - 1].base);
+                        end = max(end, prefbuf[nbufs - 1].base +
+                                        prefbuf[nbufs - 1].size);
+                }
+
+                /* Separate PB mode */
+                cfg |= 2 << 28;
+
+                __sysmmu_set_prefbuf(data->sfrbases[idx] + pbuf_offset[1],
+                                        base, end - base, 1);
+
+                data->num_pbufs = 2;
+                data->pbufs[0] = prefbuf[0];
+                data->pbufs[1] = prefbuf[1];
+
+        } else {
+                /* Combined PB mode */
+                cfg |= 3 << 28;
+                data->num_pbufs = 1;
+                data->pbufs[0] = prefbuf[0];
+        }
+
+        __raw_writel(cfg, data->sfrbases[idx] + REG_MMU_CFG);
+
+        __sysmmu_set_prefbuf(data->sfrbases[idx] + pbuf_offset[1],
+                                prefbuf[0].base, prefbuf[0].size, 0);
+}
+
+static void __exynos_sysmmu_set_pbuf_ver32(struct sysmmu_drvdata *data,
+                        int idx, int nbufs, struct sysmmu_prefbuf prefbuf[])
+{
+        int i;
+        unsigned long cfg =
+                __raw_readl(data->sfrbases[idx] + REG_MMU_CFG) & CFG_MASK;
+
+        cfg |= 7 << 16; /* enabling PB0 ~ PB2 */
+
+        /* This is common to all cases below */
+        data->pbufs[0] = prefbuf[0];
+
+        switch (nbufs) {
+        case 1:
+                /* Combined PB mode (0 ~ 2) */
+                cfg |= 1 << 19;
+                data->num_pbufs = 1;
+                break;
+        case 2:
+                /* Combined PB mode (0 ~ 1) */
+                cfg |= 1 << 21;
+                data->num_pbufs = 2;
+                data->pbufs[1] = prefbuf[1];
+                break;
+        case 3:
+                data->num_pbufs = 3;
+                data->pbufs[1] = prefbuf[1];
+                data->pbufs[2] = prefbuf[2];
+
+                __sysmmu_sort_prefbuf(data->pbufs, 3, false);
+                swap(data->pbufs[0], data->pbufs[2]);
+
+                break;
+        default:
+                data->pbufs[2].base = prefbuf[2].base;
+                /* data->size is used for end address temporarily */
+                data->pbufs[2].size = prefbuf[2].base + prefbuf[2].size;
+
+                /* Merging all buffers from the third to the last */
+                while (nbufs-- > 3) {
+                        data->pbufs[2].base = min(data->pbufs[2].base,
+                                                prefbuf[nbufs - 1].base);
+                        data->pbufs[2].size = max(data->pbufs[2].size,
+                                                prefbuf[nbufs - 1].base +
+                                                prefbuf[nbufs - 1].size);
+                }
+
+                data->num_pbufs = 3;
+                data->pbufs[1] = prefbuf[1];
+                data->pbufs[2].size = data->pbufs[2].size -
+                                        data->pbufs[2].base;
+        }
+
+        for (i = 0; i < data->num_pbufs; i++)
+                __sysmmu_set_prefbuf(data->sfrbases[idx] + pbuf_offset[2],
+                        data->pbufs[i].base, data->pbufs[i].size, i);
+
+        __raw_writel(cfg, data->sfrbases[idx] + REG_MMU_CFG);
+}
+
+static void __exynos_sysmmu_set_pbuf_ver33(struct sysmmu_drvdata *data,
+                        int idx, int nbufs, struct sysmmu_prefbuf prefbuf[])
 {
-        __raw_writel(base, sfrbase + REG_PB0_SADDR + idx * 8);
-        __raw_writel(size - 1 + base, sfrbase + REG_PB0_EADDR + idx * 8);
+        static char pbcfg[6][6] = {
+                {7, 7, 7, 7, 7, 7}, {7, 7, 7, 7, 7, 7}, {2, 2, 3, 7, 7, 7},
+                {1, 2, 3, 4, 7, 7}, {7, 7, 7, 7, 7, 7}, {2, 2, 3, 4, 5, 6}
+        };
+        int pbselect;
+        int cmp, i;
+        long num_pb = __raw_readl(data->sfrbases[idx] + REG_PB_INFO) & 0xFF;
+
+        if (nbufs > num_pb)
+                nbufs = num_pb; /* ignoring the rest of buffers */
+
+        for (i = 0; i < nbufs; i++)
+                data->pbufs[i] = prefbuf[i];
+        data->num_pbufs = nbufs;
+
+        cmp = __sysmmu_sort_prefbuf(data->pbufs, nbufs, true);
+
+        pbselect = num_pb - nbufs;
+        if (num_pb == 6) {
+                if ((nbufs == 3) && (cmp == 1))
+                        pbselect = 4;
+                else if (nbufs < 3)
+                        pbselect = 5;
+        } else if ((num_pb == 3) && (nbufs < 3)) {
+                pbselect = 1;
+        }
+
+        __raw_writel(pbselect, data->sfrbases[idx] + REG_PB_LMM);
+
+        /* Configure prefech buffers */
+        for (i = 0; i < nbufs; i++) {
+                __raw_writel(i, data->sfrbases[idx] + REG_PB_INDICATE);
+                __raw_writel(data->pbufs[i].config | 1,
+                                data->sfrbases[idx] + REG_PB_CFG);
+                __sysmmu_set_prefbuf(data->sfrbases[idx] + pbuf_offset[3],
+                        data->pbufs[i].base, data->pbufs[i].size, 0);
+        }
+
+        /* Disable prefetch buffers that is not set */
+        for (cmp = pbcfg[num_pb - 1][nbufs - 1] - nbufs; cmp > 0; cmp--) {
+                __raw_writel(cmp + nbufs - 1,
+                                data->sfrbases[idx] + REG_PB_INDICATE);
+                __raw_writel(0, data->sfrbases[idx] + REG_PB_CFG);
+        }
 }
 
-void exynos_sysmmu_set_prefbuf(struct device *dev,
-                                unsigned long base0, unsigned long size0,
-                                unsigned long base1, unsigned long size1)
+static void (*func_set_pbuf[NUM_MINOR_OF_SYSMMU_V3])
+        (struct sysmmu_drvdata *, int, int, struct sysmmu_prefbuf []) = {
+                __exynos_sysmmu_set_pbuf_ver31,
+                __exynos_sysmmu_set_pbuf_ver31,
+                __exynos_sysmmu_set_pbuf_ver32,
+                __exynos_sysmmu_set_pbuf_ver33,
+};
+
+void exynos_sysmmu_set_pbuf(struct device *dev, int nbufs,
+                                struct sysmmu_prefbuf prefbuf[])
 {
         struct device *sysmmu;
+        int nsfrs;
+
+        if (WARN_ON(nbufs < 1))
+                return;
 
         for_each_sysmmu(dev, sysmmu) {
-                int i;
                 unsigned long flags;
-                struct sysmmu_drvdata *data = dev_get_drvdata(sysmmu);
+                struct sysmmu_drvdata *data;
 
-                BUG_ON((base0 + size0) <= base0);
-                BUG_ON((size1 > 0) && ((base1 + size1) <= base1));
+                data = dev_get_drvdata(sysmmu);
 
                 spin_lock_irqsave(&data->lock, flags);
                 if (!is_sysmmu_active(data)) {
@@ -278,33 +525,35 @@ void exynos_sysmmu_set_prefbuf(struct device *dev,
                         continue;
                 }
 
-                for (i = 0; i < data->nsfrs; i++) {
-                        if ((readl(data->sfrbases[i] + REG_MMU_VERSION) >> 28)
-                                                                == 3) {
-                                if (!sysmmu_block(data->sfrbases[i]))
-                                        continue;
-
-                                if (size1 == 0) {
-                                        if (size0 <= SZ_128K) {
-                                                base1 = base0;
-                                                size1 = size0;
-                                        } else {
-                                                size1 = size0 -
-                                                        ALIGN(size0 / 2, SZ_64K);
-                                                size0 = size0 - size1;
-                                                base1 = base0 + size0;
-                                        }
-                                }
+                for (nsfrs = 0; nsfrs < data->nsfrs; nsfrs++) {
+                        unsigned int maj, min;
 
-                                __sysmmu_set_prefbuf(
-                                        data->sfrbases[i], base0, size0, 0);
-                                __sysmmu_set_prefbuf(
-                                        data->sfrbases[i], base1, size1, 1);
+                        maj = __sysmmu_version(data, nsfrs, &min);
+                        BUG_ON(min > 3);
+
+                        if (sysmmu_block(data->sfrbases[nsfrs])) {
+                                func_set_pbuf[min](data, nsfrs,
+                                                        nbufs, prefbuf);
+                                sysmmu_unblock(data->sfrbases[nsfrs]);
+                        }
+                } /* while (nsfrs < data->nsfrs) */
+                spin_unlock_irqrestore(&data->lock, flags);
+        }
+}
+
+static void __sysmmu_restore_state(struct sysmmu_drvdata *data)
+{
+        int i, min;
+
+        for (i = 0; i < data->nsfrs; i++) {
+                if (__sysmmu_version(data, i, &min) == 3) {
+                        if (sysmmu_block(data->sfrbases[i])) {
+                                func_set_pbuf[min](data, i,
+                                        data->num_pbufs, data->pbufs);
                                 sysmmu_unblock(data->sfrbases[i]);
                         }
                 }
-                spin_unlock_irqrestore(&data->lock, flags);
         }
 }
@@ -424,6 +673,32 @@ static bool __sysmmu_disable(struct sysmmu_drvdata *data)
         return disabled;
 }
 
+static void __sysmmu_init_config(struct sysmmu_drvdata *data, int idx)
+{
+        unsigned long cfg = CFG_LRU | CFG_QOS(15);
+        int maj, min = 0;
+
+        maj = __sysmmu_version(data, idx, &min);
+        if ((maj == 1) || (maj == 2))
+                goto set_cfg;
+
+        BUG_ON(maj != 3);
+
+        cfg |= CFG_SHAREABLE;
+        if (min < 2)
+                goto set_cfg;
+
+        BUG_ON(min > 3);
+
+        cfg |= CFG_FLPDCACHE;
+        cfg |= (min == 2) ? CFG_SYSSEL : CFG_ACGEN;
+
+        func_set_pbuf[min](data, idx, data->num_pbufs, data->pbufs);
+set_cfg:
+        cfg |= __raw_readl(data->sfrbases[idx] + REG_MMU_CFG) & ~CFG_MASK;
+        __raw_writel(cfg, data->sfrbases[idx] + REG_MMU_CFG);
+}
+
 static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data)
 {
         int i;
@@ -431,18 +706,12 @@ static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data)
         clk_enable(data->clk);
 
         for (i = 0; i < data->nsfrs; i++) {
-                unsigned long cfg = 1;
+                BUG_ON(__raw_readl(data->sfrbases[i] + REG_MMU_CTRL)
+                                                        & CTRL_ENABLE);
 
-                __sysmmu_set_ptbase(data->sfrbases[i], data->pgtable);
-
-                if ((readl(data->sfrbases[i] + REG_MMU_VERSION) >> 28) == 3) {
-                        /* System MMU version is 3.x */
-                        cfg |= (1 << 12) | (2 << 28);
-                        __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 0);
-                        __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 1);
-                }
+                __sysmmu_init_config(data, i);
 
-                __raw_writel(cfg, data->sfrbases[i] + REG_MMU_CFG);
+                __sysmmu_set_ptbase(data->sfrbases[i], data->pgtable);
 
                 __raw_writel(CTRL_ENABLE, data->sfrbases[i] + REG_MMU_CTRL);
         }
@@ -666,6 +935,10 @@ static int __init __sysmmu_setup(struct device *sysmmu,
         u32 master_inst_no = -1;
         int ret;
 
+        drvdata->pbufs[0].base = 0;
+        drvdata->pbufs[0].size = ~0;
+        drvdata->num_pbufs = 1;
+
         master_node = of_parse_phandle(sysmmu->of_node, "mmu-master", 0);
         if (!master_node && !of_property_read_string(
                         sysmmu->of_node, "mmu-master-compat", &compat)) {
@@ -818,6 +1091,7 @@ static int sysmmu_resume(struct device *dev)
         if (is_sysmmu_active(drvdata) &&
                 (!pm_runtime_enabled(dev) || drvdata->runtime_active)) {
                 __sysmmu_enable_nocount(drvdata);
+                __sysmmu_restore_state(drvdata);
         }
         spin_unlock_irqrestore(&drvdata->lock, flags);
         return 0;