From patchwork Wed Oct 9 15:35:07 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 3010031
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: linaro-kernel@lists.linaro.org, linux@arm.linux.org.uk, patches@linaro.org,
 catalin.marinas@arm.com, rob.herring@calxeda.com, dsaxena@linaro.org,
 broonie@kernel.org, hoffman@marvell.com, Steve Capper
Subject: [RFC PATCH 2/2] ARM: mm: Transparent huge page support for non-LPAE systems.
Date: Wed, 9 Oct 2013 16:35:07 +0100
Message-Id: <1381332907-10179-3-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1381332907-10179-1-git-send-email-steve.capper@linaro.org>
References: <1381332907-10179-1-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
List-Id: linux-arm-kernel.lists.infradead.org
Much of the required code for THP has been implemented in the earlier
non-LPAE HugeTLB patch. One more domain bit is used (to store whether or
not the THP is splitting). Some THP helper functions are defined, and
pmd_page is re-defined such that it distinguishes between page tables
and sections.

Signed-off-by: Steve Capper
---
 arch/arm/Kconfig                      |  2 +-
 arch/arm/include/asm/pgtable-2level.h | 47 +++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/pgtable-3level.h |  1 +
 arch/arm/include/asm/pgtable.h        |  2 --
 4 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index fe6eeae..6b53969 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1755,7 +1755,7 @@ config SYS_SUPPORTS_HUGETLBFS

 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	def_bool y
-	depends on ARM_LPAE
+	depends on SYS_SUPPORTS_HUGETLBFS

 config ARCH_WANT_GENERAL_HUGETLB
 	def_bool y
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 29ace75..a48eddc 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -217,6 +217,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 #define PMD_DSECT_PROT_NONE	(_AT(pmdval_t, 1) << 5)
 #define PMD_DSECT_DIRTY		(_AT(pmdval_t, 1) << 6)
 #define PMD_DSECT_AF		(_AT(pmdval_t, 1) << 7)
+#define PMD_DSECT_SPLITTING	(_AT(pmdval_t, 1) << 8)

 #define PMD_BIT_FUNC(fn,op) \
 static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
@@ -304,6 +305,52 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 		pmdret;						\
 	})

+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_trans_splitting(pmd)	(pmd_val(pmd) & PMD_DSECT_SPLITTING)
+#define pmd_trans_huge(pmd)		(pmd_large(pmd))
+#else
+static inline int pmd_trans_huge(pmd_t pmd);
+#endif
+
+PMD_BIT_FUNC(mksplitting, |= PMD_DSECT_SPLITTING);
+#define pmd_mkhuge(pmd)		(__pmd((pmd_val(pmd) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT))
+
+static inline unsigned long pmd_pfn(pmd_t pmd)
+{
+	/*
+	 * for a section, we need to mask off more of the pmd
+	 * before looking up the pfn.
+	 *
+	 * pmd_pfn only gets sections from the thp code.
+	 */
+	if (pmd_trans_huge(pmd))
+		return __phys_to_pfn(pmd_val(pmd) & HPAGE_MASK);
+	else
+		return __phys_to_pfn(pmd_val(pmd) & PHYS_MASK);
+}
+
+#define pfn_pmd(pfn,prot)	pmd_modify(__pmd(__pfn_to_phys(pfn)),prot);
+#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot);
+
+static inline int has_transparent_hugepage(void)
+{
+	return 1;
+}
+
+static inline struct page *pmd_page(pmd_t pmd)
+{
+	/*
+	 * for a section, we need to mask off more of the pmd
+	 * before looking up the page as it is a section descriptor.
+	 *
+	 * pmd_page only gets sections from the thp code.
+	 */
+	if (pmd_trans_huge(pmd))
+		return (phys_to_page(pmd_val(pmd) & HPAGE_MASK));
+
+	return phys_to_page(pmd_val(pmd) & PHYS_MASK);
+}
+
 #endif /* __ASSEMBLY__ */

 #endif /* _ASM_PGTABLE_2LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 67a0e06..9c38f29 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -234,6 +234,7 @@ PMD_BIT_FUNC(mkyoung,   |= PMD_SECT_AF);

 /* represent a notpresent pmd by zero, this is used by pmdp_invalidate */
 #define pmd_mknotpresent(pmd)	(__pmd(0))
+#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))

 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index cf77a59..66a1417 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -189,8 +189,6 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
 }

-#define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
-
 #ifndef CONFIG_HIGHPTE
 #define __pte_map(pmd)		pmd_page_vaddr(*(pmd))
 #define __pte_unmap(pte)	do { } while (0)