From patchwork Thu Jul 17 16:10:10 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Russell King - ARM Linux
X-Patchwork-Id: 4576621
Date: Thu, 17 Jul 2014 17:10:10 +0100
From: Russell King - ARM Linux
To: Santosh Shilimkar
Cc: Thomas Petazzoni, Lior Amsalem, Jason Cooper, Arnd Bergmann,
	Andrew Lunn, Simon Guinot, Will Deacon, Christophe Vu-Brugier,
	Nadav Haklai, Ezequiel Garcia, Gregory Clement, Tawfik Bayouk,
	linux-arm-kernel@lists.infradead.org, Sebastian Hesselbarth
Subject: Re: [PATCH 0/3] ARM: mvebu: disable I/O coherency on !SMP
Message-ID: <20140717161010.GY21766@n2100.arm.linux.org.uk>
References: <1404318070-8503-1-git-send-email-thomas.petazzoni@free-electrons.com>
	<10673994.kkICTRHJxL@wuerfel>
	<20140717153401.GX21766@n2100.arm.linux.org.uk>
	<5105942.b7kcvNdlC8@wuerfel>
	<53C7F40A.800@ti.com>
In-Reply-To: <53C7F40A.800@ti.com>
User-Agent: Mutt/1.5.19 (2009-01-05)

On Thu, Jul 17, 2014 at 12:04:26PM -0400, Santosh Shilimkar wrote:
> Thanks for spotting it RMK. Will have a look at it.

I already have this for it, which includes a lot more comments in the
code (some in anticipation of the reply from the architecture people).
The first hunk alone should fix the spotted problem.

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ab14b79b03f0..917aabd6b2dc 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1406,8 +1406,8 @@ void __init early_paging_init(const struct machine_desc *mdesc,
 		return;
 
 	/* remap kernel code and data */
-	map_start = init_mm.start_code;
-	map_end   = init_mm.brk;
+	map_start = init_mm.start_code & PMD_MASK;
+	map_end   = ALIGN(init_mm.brk, PMD_SIZE);
 
 	/* get a handle on things... */
 	pgd0 = pgd_offset_k(0);
@@ -1434,23 +1434,60 @@ void __init early_paging_init(const struct machine_desc *mdesc,
 	dsb(ishst);
 	isb();
 
-	/* remap level 1 table */
+	/*
+	 * WARNING: This code is not architecturally compliant: we modify
+	 * the mappings in-place, indeed while they are in use by this
+	 * very same code.
+	 *
+	 * Even modifying the mappings in a separate page table does
+	 * not resolve this.
+	 *
+	 * The architecture strongly recommends that when a mapping is
+	 * changed, that it is changed by first going via an invalid
+	 * mapping and back to the new mapping.  This is to ensure
+	 * that no TLB conflicts (caused by the TLB having more than
+	 * one TLB entry match a translation) can occur.
+	 */
+
+	/*
+	 * Remap level 1 table.  This changes the physical addresses
+	 * used to refer to the level 2 page tables to the high
+	 * physical address alias, leaving everything else the same.
+	 */
 	for (i = 0; i < PTRS_PER_PGD; pud0++, i++) {
 		set_pud(pud0, __pud(__pa(pmd0) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
 		pmd0 += PTRS_PER_PMD;
 	}
 
-	/* remap pmds for kernel mapping */
-	phys = __pa(map_start) & PMD_MASK;
+	/*
+	 * Remap the level 2 table, pointing the mappings at the high
+	 * physical address alias of these pages.
+	 */
+	phys = __pa(map_start);
 	do {
 		*pmdk++ = __pmd(phys | pmdprot);
 		phys += PMD_SIZE;
 	} while (phys < map_end);
 
+	/*
+	 * Ensure that the above updates are flushed out of the cache.
+	 * This is not strictly correct; on a system where the caches
+	 * are coherent with each other, but the MMU page table walks
+	 * may not be coherent, flush_cache_all() may be a no-op, and
+	 * this will fail.
+	 */
 	flush_cache_all();
+
+	/*
+	 * Re-write the TTBR values to point them at the high physical
+	 * alias of the page tables.  We expect __va() will work on
+	 * cpu_get_pgd(), which returns the value of TTBR0.
+	 */
 	cpu_switch_mm(pgd0, &init_mm);
 	cpu_set_ttbr(1, __pa(pgd0) + TTBR1_OFFSET);
+
+	/* Finally flush any stale TLB values. */
 	local_flush_bp_all();
 	local_flush_tlb_all();
 }