From patchwork Tue Dec 23 15:37:14 2014
X-Patchwork-Submitter: Jungseok Lee
X-Patchwork-Id: 5534041
Subject: Re: [PATCH] arm64: mm: Add pgd_page to support RCU fast_gup
From: Jungseok Lee
To: Catalin Marinas
Cc: Steve Capper, Will Deacon, "linux-arm-kernel@lists.infradead.org"
Date: Wed, 24 Dec 2014 00:37:14 +0900
Message-Id: <0341C428-A85D-4085-9F75-893576D42A65@gmail.com>
In-Reply-To: <20141222162823.GB15240@e104818-lin.cambridge.arm.com>
References: <11B98DF8-2B98-414B-88E5-14C97ADAA60E@gmail.com>
 <20141221105633.GJ23242@arm.com>
 <20141222123051.GC20269@e104818-lin.cambridge.arm.com>
 <374BBC79-DD7B-4FEC-A1B3-DD6D14ED30A8@gmail.com>
 <20141222153111.GA15240@e104818-lin.cambridge.arm.com>
 <20141222162823.GB15240@e104818-lin.cambridge.arm.com>

On Dec 23, 2014, at 1:28 AM, Catalin Marinas
wrote:

> On Mon, Dec 22, 2014 at 03:31:12PM +0000, Catalin Marinas wrote:
>> On Mon, Dec 22, 2014 at 03:11:37PM +0000, Jungseok Lee wrote:
>>> On Dec 22, 2014, at 9:30 PM, Catalin Marinas wrote:
>>>> On Sun, Dec 21, 2014 at 10:56:33AM +0000, Will Deacon wrote:
>>>>> On Sat, Dec 20, 2014 at 12:49:40AM +0000, Jungseok Lee wrote:
>>>>>> This patch adds pgd_page definition in order to keep supporting
>>>>>> HAVE_GENERIC_RCU_GUP configuration. In addition, it changes pud_page
>>>>>> expression to align with pmd_page for readability.
>>>>>>
>>>>>> An introduction of pgd_page resolves the following build breakage
>>>>>> under 4KB + 4Level memory management combo.
>>>>>>
>>>>>> mm/gup.c: In function 'gup_huge_pgd':
>>>>>> mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
>>>>>>   head = pgd_page(orig);
>>>>>>   ^
>>>>>> mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
>>>>>>   head = pgd_page(orig);
>>>>>>
>>>>>> Cc: Catalin Marinas
>>>>>> Cc: Will Deacon
>>>>>> Cc: Steve Capper
>>>>>> Signed-off-by: Jungseok Lee
>>>>>> ---
>>>>>>  arch/arm64/include/asm/pgtable.h | 4 +++-
>>>>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>>>> index df22314..a1fe927 100644
>>>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>>>> @@ -401,7 +401,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
>>>>>>  	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
>>>>>>  }
>>>>>>
>>>>>> -#define pud_page(pud)		pmd_page(pud_pmd(pud))
>>>>>> +#define pud_page(pud)		pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))
>>>>>
>>>>> It would be cleaner to use pud_pfn here, otherwise we end up passing a
>>>>> physical address with the lower attributes present to __phys_to_pfn, which
>>>>> "knows" to shift them away.
>>>>
>>>> It looks like we did the same with pmd_page() but not with pte_page(),
>>>> the latter using pte_pfn().
>>>>
>>>> It's not a problem with lower attributes because they are within the
>>>> first 12 bits, so the shifting gets rid of them. For pmd/pud/pgd, bits
>>>> above 12 and up to the actual address must be 0.
>>>>
>>>> I agree with changing all of them to use pte/pmd/pud/pgd_pfn but I'm
>>>> inclined to merge this patch now to fix the kernel build and we'll look
>>>> at the clean-up after Christmas.
>>>
>>> Either way, I'm fine.
>>
>> Unless you post an updated patch quickly enough ;)
>
> Actually, don't worry, I'll fold Will's suggestions into your patch. I
> found another problem with pmd_page being defined twice.

Yeah. One of them should be removed.

I tried to rework the patch with Will's suggestion, but I've failed to
figure out the full story of pmd_pfn and pmd_page. It looks like pmd_page
cannot be written using pmd_pfn unless pmd_pfn is changed, since pmd_pfn
masks its input value with PMD_MASK but pmd_page does not. Additionally,
I'm not sure whether pmd_pfn with PMD_MASK works well for a user process
page table.

So, my change to pmd_pfn and pmd_page is as follows. Please correct me
if I am wrong.

-----8<-----

Thanks for a kind response, Catalin and Will!
Best Regards
Jungseok Lee

--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -294,11 +294,11 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 
 #define pmd_mkhuge(pmd)		(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT))
 
-#define pmd_pfn(pmd)		(((pmd_val(pmd) & PMD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
+#define pmd_pfn(pmd)		((pmd_val(pmd) & PHYS_MASK) >> PAGE_SHIFT)
 #define pfn_pmd(pfn,prot)	(__pmd(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
 #define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
 
-#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
+#define pmd_page(pmd)		pfn_to_page(pmd_pfn(pmd))
 
 #define pud_write(pud)		pte_write(pud_pte(pud))
 #define pud_pfn(pud)		(((pud_val(pud) & PUD_MASK) & PHYS_MASK) >> PAGE_SHIFT)