From patchwork Tue Dec 11 05:12:54 2018
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 10723053
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Andrea Arcangeli, Andrew Morton, "Kirill A.
Shutemov", Matthew Wilcox, Michal Hocko, Dave Jiang,
	"Aneesh Kumar K.V", Souptick Joarder, Konstantin Khlebnikov,
	linux-mm@kvack.org
Subject: [PATCH v2] mm: thp: fix flags for pmd migration when split
Date: Tue, 11 Dec 2018 13:12:54 +0800
Message-Id: <20181211051254.16633-1-peterx@redhat.com>

When splitting a huge migrating PMD, we'll transfer all the existing
PMD bits and apply them again onto the small PTEs.  However we are
fetching the bits unconditionally via pmd_soft_dirty(), pmd_write()
or pmd_young(), while they don't make sense at all when the PMD is a
migration entry.  Fix them up by making the fetches conditional.

Note that if my understanding of the problem is correct, then without
this patch there is a chance of losing some of the dirty bits in the
migrating pmd pages (on x86_64 we're fetching bit 11, which is part
of the swap offset, instead of bit 2), and it could potentially
corrupt the memory of a userspace program which depends on the dirty
bit.

CC: Andrea Arcangeli
CC: Andrew Morton
CC: "Kirill A. Shutemov"
CC: Matthew Wilcox
CC: Michal Hocko
CC: Dave Jiang
CC: "Aneesh Kumar K.V"
CC: Souptick Joarder
CC: Konstantin Khlebnikov
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Peter Xu
Reviewed-by: Zi Yan
---
v2:
- fix it up for young/write/dirty bits too [Konstantin]
---
 mm/huge_memory.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2d19e4fe854..b00941b3d342 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2157,11 +2157,16 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	page = pmd_page(old_pmd);
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
+	if (unlikely(pmd_migration)) {
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+		young = write = false;
+	} else {
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.