From patchwork Tue Oct 16 13:13:43 2018
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 10643541
From: Nicholas Piggin <npiggin@gmail.com>
To: Andrew Morton
Cc: Nicholas Piggin, Linus Torvalds, linux-mm, linux-arch, Linux Kernel Mailing List, ppc-dev, Ley Foon Tan
Subject: [PATCH v2 5/5] mm: optimise pte dirty/accessed bit setting by demand based pte insertion
Date: Tue, 16 Oct 2018 23:13:43 +1000
Message-Id: <20181016131343.20556-6-npiggin@gmail.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20181016131343.20556-1-npiggin@gmail.com>
References: <20181016131343.20556-1-npiggin@gmail.com>

Similarly to the previous patch, this optimises the setting of the dirty/accessed bits in ptes: where the fault already tells us the page is being accessed (and, for a write fault, written), insert the pte with those bits set up front, avoiding the cost of hardware having to set them later.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/huge_memory.c | 12 ++++++++----
 mm/memory.c      |  9 ++++++---
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1f43265204d4..38c2cd3b4879 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1197,6 +1197,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
 		pte_t entry;
 		entry = mk_pte(pages[i], vma->vm_page_prot);
+		entry = pte_mkyoung(entry);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
@@ -2067,7 +2068,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *page;
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
-	bool young, write, soft_dirty, pmd_migration = false;
+	bool young, write, dirty, soft_dirty, pmd_migration = false;
 	unsigned long addr;
 	int i;
@@ -2145,7 +2146,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	page = pmd_page(old_pmd);
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
+	dirty = pmd_dirty(old_pmd);
+	if (dirty)
 		SetPageDirty(page);
 	write = pmd_write(old_pmd);
 	young = pmd_young(old_pmd);
@@ -2176,8 +2178,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = maybe_mkwrite(entry, vma);
 			if (!write)
 				entry = pte_wrprotect(entry);
-			if (!young)
-				entry = pte_mkold(entry);
+			if (young)
+				entry = pte_mkyoung(entry);
+			if (dirty)
+				entry = pte_mkdirty(entry);
 			if (soft_dirty)
 				entry = pte_mksoft_dirty(entry);
 		}
diff --git a/mm/memory.c b/mm/memory.c
index 9e314339a0bd..f907ea7a6303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1804,10 +1804,9 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 		entry = pte_mkspecial(pfn_t_pte(pfn, prot));

 out_mkwrite:
-	if (mkwrite) {
-		entry = pte_mkyoung(entry);
+	entry = pte_mkyoung(entry);
+	if (mkwrite)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-	}

 	set_pte_at(mm, addr, pte, entry);
 	update_mmu_cache(vma, addr, pte); /* XXX: why not for insert_page? */
@@ -2534,6 +2533,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	}
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = mk_pte(new_page, vma->vm_page_prot);
+	entry = pte_mkyoung(entry);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	/*
 	 * Clear the pte entry and flush it first, before updating the
@@ -3043,6 +3043,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
 	pte = mk_pte(page, vma->vm_page_prot);
+	pte = pte_mkyoung(pte);
 	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 		vmf->flags &= ~FAULT_FLAG_WRITE;
@@ -3185,6 +3186,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	__SetPageUptodate(page);

 	entry = mk_pte(page, vma->vm_page_prot);
+	entry = pte_mkyoung(entry);
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
@@ -3453,6 +3455,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 	flush_icache_page(vma, page);
 	entry = mk_pte(page, vma->vm_page_prot);
+	entry = pte_mkyoung(entry);
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	/* copy-on-write page */