From patchwork Thu May 23 17:07:58 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2608621
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 11/11] ARM64: mm: THP support.
Date: Thu, 23 May 2013 18:07:58 +0100
Message-Id: <1369328878-11706-12-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, patches@linaro.org, Catalin Marinas, Will Deacon,
	Michal Hocko, Ken Chen, Mel Gorman

Bring Transparent HugePage support to ARM64. The size of a transparent huge
page depends on the normal page size. A transparent huge page is always
represented as a pmd.

If PAGE_SIZE is 4KB, THPs are 2MB.
If PAGE_SIZE is 64KB, THPs are 512MB.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas
---
 arch/arm64/Kconfig                     |  3 ++
 arch/arm64/include/asm/pgtable-hwdef.h |  4 +++
 arch/arm64/include/asm/pgtable.h       | 55 ++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/tlb.h           |  6 ++++
 arch/arm64/include/asm/tlbflush.h      |  2 ++
 5 files changed, 70 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 10607d6..308a556 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -189,6 +189,9 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANT_HUGE_PMD_SHARE
 	def_bool y if !ARM64_64K_PAGES
 
+config HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	def_bool y
+
 source "mm/Kconfig"
 
 config FORCE_MAX_ZONEORDER
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e6e0a0d..63c9d0d 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -42,6 +42,10 @@
 /*
  * Section
  */
+#define PMD_SECT_VALID		(_AT(pmdval_t, 1) << 0)
+#define PMD_SECT_PROT_NONE	(_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_USER		(_AT(pmdval_t, 1) << 6)		/* AP[1] */
+#define PMD_SECT_RDONLY		(_AT(pmdval_t, 1) << 7)		/* AP[2] */
 #define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
 #define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
 #define PMD_SECT_NG		(_AT(pmdval_t, 1) << 11)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7476896..33a0d31 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -188,6 +188,61 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_PTE_SPECIAL
 
 /*
+ * Software PMD bits for THP
+ */
+
+#define PMD_SECT_DIRTY		(_AT(pmdval_t, 1) << 55)
+#define PMD_SECT_SPLITTING	(_AT(pmdval_t, 1) << 57)
+
+/*
+ * THP definitions.
+ */
+#define pmd_young(pmd)		(pmd_val(pmd) & PMD_SECT_AF)
+
+#define __HAVE_ARCH_PMD_WRITE
+#define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
+#define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
+#endif
+
+#define PMD_BIT_FUNC(fn,op) \
+static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
+
+PMD_BIT_FUNC(wrprotect,	|= PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkold,	&= ~PMD_SECT_AF);
+PMD_BIT_FUNC(mksplitting, |= PMD_SECT_SPLITTING);
+PMD_BIT_FUNC(mkwrite,	&= ~PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkdirty,	|= PMD_SECT_DIRTY);
+PMD_BIT_FUNC(mkyoung,	|= PMD_SECT_AF);
+PMD_BIT_FUNC(mknotpresent, &= ~PMD_TYPE_MASK);
+
+#define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
+
+#define pmd_pfn(pmd)		(((pmd_val(pmd) & PMD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
+#define pfn_pmd(pfn,prot)	(__pmd(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
+#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
+
+#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
+
+static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+	const pmdval_t mask = PMD_SECT_USER | PMD_SECT_PXN | PMD_SECT_UXN |
+			      PMD_SECT_RDONLY | PMD_SECT_PROT_NONE |
+			      PMD_SECT_VALID;
+	pmd_val(pmd) = (pmd_val(pmd) & ~mask) | (pgprot_val(newprot) & mask);
+	return pmd;
+}
+
+#define set_pmd_at(mm, addr, pmdp, pmd)	set_pmd(pmdp, pmd)
+
+static inline int has_transparent_hugepage(void)
+{
+	return 1;
+}
+
+/*
  * Mark the prot value as uncacheable and unbufferable.
  */
 #define pgprot_noncached(prot) \
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 654f096..46b3beb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -187,4 +187,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 
 #define tlb_migrate_finish(mm)	do { } while (0)
 
+static inline void
+tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
+{
+	tlb_add_flush(tlb, addr);
+}
+
 #endif
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 122d632..8b48203 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -117,6 +117,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 	dsb();
 }
 
+#define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
+
 #endif
 
 #endif
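
A quick sanity check of the sizes quoted in the commit message. The following
userspace sketch is illustrative only and not part of the patch; it assumes
the ARM64 8-byte page-table descriptor, so a PMD-level block entry covers one
full last-level table worth of base pages, i.e. PAGE_SIZE * (PAGE_SIZE / 8):

	/* thp_size.c -- illustrative only, not part of the patch. */
	#include <stdio.h>

	static unsigned long long thp_size(unsigned long page_size)
	{
		/* 8-byte descriptors: entries per page-sized table */
		unsigned long entries = page_size / 8;

		return (unsigned long long)page_size * entries;
	}

	int main(void)
	{
		printf("PAGE_SIZE 4KB  -> THP %llu MB\n", thp_size(4096) >> 20);	/* 2 MB */
		printf("PAGE_SIZE 64KB -> THP %llu MB\n", thp_size(65536) >> 20);	/* 512 MB */
		return 0;
	}

This is also why pmd_trans_huge() in the patch only has to test PMD_TABLE_BIT:
a non-empty PMD with that bit clear is a block (section) entry, so the single
descriptor maps the whole 2MB or 512MB region rather than pointing to a
next-level table.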