From patchwork Wed Aug 2 09:48:59 2017
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 9876477
From: Punit Agrawal
To: will.deacon@arm.com, catalin.marinas@arm.com
Cc: mark.rutland@arm.com, David Woods, Steve Capper, Punit Agrawal,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 4/9] arm64: hugetlb: Add break-before-make logic for
 contiguous entries
Date: Wed, 2 Aug 2017 10:48:59 +0100
Message-Id: <20170802094904.27749-5-punit.agrawal@arm.com>
In-Reply-To: <20170802094904.27749-1-punit.agrawal@arm.com>
References: <20170802094904.27749-1-punit.agrawal@arm.com>

From: Steve Capper

It has become apparent that one has to take special care
when modifying attributes of memory mappings that employ the
contiguous bit.

Both the requirement and the architecturally correct "Break-Before-Make"
technique of updating contiguous entries can be found described in:
ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
page D4-1762.

The huge pte accessors currently replace the attributes of contiguous
pte entries in place, and thus can, on certain platforms, lead to TLB
conflict aborts or even erroneous results returned from TLB lookups.

This patch adds a helper function get_clear_flush() that clears a
contiguous entry and returns the head pte (whilst taking care to retain
dirty bit information that could have been modified by DBM). A TLB
invalidate is then performed to ensure that there is no possibility of
multiple TLB entries being present for the same region.

Cc: David Woods
Signed-off-by: Steve Capper
(Fixed indentation and some comments cleanup)
Signed-off-by: Punit Agrawal
Reviewed-by: Mark Rutland
---
 arch/arm64/mm/hugetlbpage.c | 81 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 64 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 08deed7c71f0..f2c976464f39 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -68,6 +68,47 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
 	return CONT_PTES;
 }
 
+/*
+ * Changing some bits of contiguous entries requires us to follow a
+ * Break-Before-Make approach, breaking the whole contiguous set
+ * before we can change any entries.  See ARM DDI 0487A.k_iss10775,
+ * "Misprogramming of the Contiguous bit", page D4-1762.
+ *
+ * This helper performs the break step.
+ */
+static pte_t get_clear_flush(struct mm_struct *mm,
+			     unsigned long addr,
+			     pte_t *ptep,
+			     unsigned long pgsize,
+			     unsigned long ncontig)
+{
+	unsigned long i, saddr = addr;
+	struct vm_area_struct vma = { .vm_mm = mm };
+	pte_t orig_pte = huge_ptep_get(ptep);
+
+	/*
+	 * If we already have a faulting entry then we don't need
+	 * to break before make (there won't be a tlb entry cached).
+	 */
+	if (!pte_present(orig_pte))
+		return orig_pte;
+
+	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+		pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+
+		/*
+		 * If HW_AFDBM is enabled, then the HW could turn on
+		 * the dirty bit for any page in the set, so check
+		 * them all.  All hugetlb entries are already young.
+		 */
+		if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM) && pte_dirty(pte))
+			orig_pte = pte_mkdirty(orig_pte);
+	}
+
+	flush_tlb_range(&vma, saddr, addr);
+	return orig_pte;
+}
+
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		    pte_t *ptep, pte_t pte)
 {
@@ -93,6 +134,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
+	get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
 		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
 			 pte_val(pfn_pte(pfn, hugeprot)));
@@ -194,7 +237,7 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep)
 {
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize;
 	pte_t orig_pte = huge_ptep_get(ptep);
 
@@ -202,17 +245,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 		return ptep_get_and_clear(mm, addr, ptep);
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-		/*
-		 * If HW_AFDBM is enabled, then the HW could
-		 * turn on the dirty bit for any of the page
-		 * in the set, so check them all.
-		 */
-		if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
-			orig_pte = pte_mkdirty(orig_pte);
-	}
-	return orig_pte;
+
+	return get_clear_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
 int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -222,6 +256,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	int ncontig, i, changed = 0;
 	size_t pgsize = 0;
 	unsigned long pfn = pte_pfn(pte), dpfn;
+	pte_t orig_pte;
 	pgprot_t hugeprot;
 
 	if (!pte_cont(pte))
@@ -231,10 +266,12 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
-		changed |= ptep_set_access_flags(vma, addr, ptep,
-						 pfn_pte(pfn, hugeprot), dirty);
-	}
+	orig_pte = get_clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
+	if (!pte_same(orig_pte, pte))
+		changed = 1;
+
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
 
 	return changed;
 }
@@ -244,6 +281,9 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 {
 	int ncontig, i;
 	size_t pgsize;
+	pte_t pte = pte_wrprotect(huge_ptep_get(ptep)), orig_pte;
+	unsigned long pfn = pte_pfn(pte), dpfn;
+	pgprot_t hugeprot;
 
 	if (!pte_cont(*ptep)) {
 		ptep_set_wrprotect(mm, addr, ptep);
@@ -251,8 +291,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	}
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
-		ptep_set_wrprotect(mm, addr, ptep);
+	dpfn = pgsize >> PAGE_SHIFT;
+
+	orig_pte = get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+	if (pte_dirty(orig_pte))
+		pte = pte_mkdirty(pte);
+
+	hugeprot = pte_pgprot(pte);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 }
 
 void huge_ptep_clear_flush(struct vm_area_struct *vma,