From patchwork Mon Feb 15 16:13:36 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088881
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v11 01/14] mm/gup: don't pin migrated cma pages in movable zone
Date: Mon, 15 Feb 2021 11:13:36 -0500
Message-Id: <20210215161349.246722-2-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

In order not to fragment CMA, the pinned pages are migrated. However,
they are migrated to ZONE_MOVABLE, which also must not contain pinned
pages. Remove __GFP_MOVABLE so that pages can be migrated to zones
where pinning is allowed.

Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index e4c224cd9661..df92170e3730 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1556,7 +1556,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};

 check_again:
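The one-line change above is about where the migration target may come
from: with __GFP_MOVABLE in the mask, the allocator could satisfy the
migration with a ZONE_MOVABLE page, recreating the original problem. A
toy user-space sketch of that decision (invented flag values and a
two-zone model, not the kernel's real gfp machinery):

#include <stdio.h>

/* Stand-ins for GFP_USER, __GFP_NOWARN and __GFP_MOVABLE. */
#define GFP_USER    0x1u
#define GFP_NOWARN  0x2u
#define GFP_MOVABLE 0x4u

/* Toy zone model: only __GFP_MOVABLE permits ZONE_MOVABLE. */
static const char *highest_allowed_zone(unsigned int gfp)
{
	return (gfp & GFP_MOVABLE) ? "ZONE_MOVABLE" : "ZONE_NORMAL";
}

int main(void)
{
	unsigned int old_mask = GFP_USER | GFP_MOVABLE | GFP_NOWARN;
	unsigned int new_mask = GFP_USER | GFP_NOWARN;

	/* The old mask could hand back a ZONE_MOVABLE page, defeating
	 * the point of migrating away from CMA before a long-term pin. */
	printf("old mask -> %s\n", highest_allowed_zone(old_mask));
	printf("new mask -> %s\n", highest_allowed_zone(new_mask));
	return 0;
}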
From patchwork Mon Feb 15 16:13:37 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088883
From: Pavel Tatashin
Subject: [PATCH v11 02/14] mm/gup: check every subpage of a compound page during isolation
Date: Mon, 15 Feb 2021 11:13:37 -0500
Message-Id: <20210215161349.246722-3-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

When pages are isolated in check_and_migrate_cma_pages(), we skip a
compound number of pages at a time. However, as Jason noted, it is not
necessarily correct that pages[i] corresponds to the pages that we
skipped. This is because the addresses in this range may have been
split by split_huge_pmd()/split_huge_pud(), and these functions do not
update the compound page metadata.

The problem can be reproduced if something like this occurs:

1. User faulted huge pages.
2. split_huge_pmd() was called for some reason.
3. User has unmapped some sub-pages in the range.
4. User tries to long-term pin the addresses.

The resulting pages[i] might end up containing pages that are not
aligned to the compound page size.
Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")
Reported-by: Jason Gunthorpe
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index df92170e3730..11ca49f3f11d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1549,26 +1549,23 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					  unsigned int gup_flags)
 {
 	unsigned long i;
-	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
+	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};

 check_again:
-	for (i = 0; i < nr_pages;) {
-
-		struct page *head = compound_head(pages[i]);
-
-		/*
-		 * gup may start from a tail page. Advance step by the left
-		 * part.
-		 */
-		step = compound_nr(head) - (pages[i] - head);
+	prev_head = NULL;
+	for (i = 0; i < nr_pages; i++) {
+		head = compound_head(pages[i]);
+		if (head == prev_head)
+			continue;
+		prev_head = head;
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
@@ -1592,8 +1589,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 				}
 			}
 		}
-
-		i += step;
 	}

 	if (!list_empty(&cma_page_list)) {
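The loop above no longer trusts compound_nr() to skip ahead; it visits
every entry and deduplicates consecutive entries that share a head
page. A small user-space sketch of that walk, with a toy struct page
whose head pointer stands in for compound_head():

#include <stdio.h>

struct page {
	struct page *head;	/* head page of the compound page */
	int id;
};

static struct page *compound_head(struct page *p)
{
	return p->head;
}

int main(void)
{
	struct page h0 = { &h0, 0 };	/* head page */
	struct page t1 = { &h0, 1 };	/* tail page of h0 */
	struct page h2 = { &h2, 2 };	/* an order-0 page */
	struct page *pages[] = { &h0, &t1, &h2, &t1 };
	struct page *prev_head = NULL;

	for (unsigned long i = 0; i < 4; i++) {
		struct page *head = compound_head(pages[i]);

		if (head == prev_head)
			continue;	/* same compound page as last entry */
		prev_head = head;
		printf("checking head page %d\n", head->id);
	}
	return 0;
}

Note that only consecutive duplicates are skipped (this prints heads 0,
2, then 0 again), which is safe: re-checking a head page is at worst
redundant, while skipping by a stale compound_nr() could miss pages
entirely.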
From patchwork Mon Feb 15 16:13:38 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088887
From: Pavel Tatashin
Subject: [PATCH v11 03/14] mm/gup: return an error on migration failure
Date: Mon, 15 Feb 2021 11:13:38 -0500
Message-Id: <20210215161349.246722-4-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

When a migration failure occurs, we still pin the pages, which means
that we may pin CMA movable pages, which should never be the case.
Instead, return an error without pinning pages when a migration
failure happens. There is no need to retry migrating, because
migrate_pages() already retries 10 times.

Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 11ca49f3f11d..2d0292980b1d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1550,7 +1550,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 {
 	unsigned long i;
 	bool drain_allow = true;
-	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1601,17 +1600,15 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);

-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
-			/*
-			 * some of the pages failed migration. Do get_user_pages
-			 * without migration.
-			 */
-			migrate_allow = false;
-
+		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
+				    MR_CONTIG_RANGE);
+		if (ret) {
 			if (!list_empty(&cma_page_list))
 				putback_movable_pages(&cma_page_list);
+			return ret > 0 ? -ENOMEM : ret;
 		}
+
 		/*
 		 * We did migrate all the pages, Try to get the page references
 		 * again migrating any new CMA pages which we failed to isolate
@@ -1621,7 +1618,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 						   pages, vmas, NULL,
 						   gup_flags);

-		if ((ret > 0) && migrate_allow) {
+		if (ret > 0) {
 			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
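The new failure convention: migrate_pages() returns the number of pages
it could not migrate (positive) or a negative errno, and either outcome
now aborts the pin. A tiny user-space model of how that return value is
mapped for the caller (hypothetical wrapper, not the kernel function):

#include <errno.h>
#include <stdio.h>

/* Mirror of the "return ret > 0 ? -ENOMEM : ret;" logic above. */
static long map_migrate_result(long ret)
{
	if (ret == 0)
		return 0;		/* all pages migrated; re-pin them */
	return ret > 0 ? -ENOMEM : ret;	/* fail the pin instead of pinning */
}

int main(void)
{
	printf("%ld\n", map_migrate_result(0));	      /* 0: success */
	printf("%ld\n", map_migrate_result(3));	      /* 3 stuck -> -ENOMEM */
	printf("%ld\n", map_migrate_result(-EAGAIN)); /* errno passthrough */
	return 0;
}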
From patchwork Mon Feb 15 16:13:39 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088885
From: Pavel Tatashin
Subject: [PATCH v11 04/14] mm/gup: check for isolation errors
Date: Mon, 15 Feb 2021 11:13:39 -0500
Message-Id: <20210215161349.246722-5-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

It is still possible that we pin movable CMA pages if there are
isolation errors and cma_page_list stays empty when we check again.
Check for isolation errors and return success only when there are no
isolation errors and cma_page_list is empty after checking. Because
isolation errors are transient, we retry indefinitely.

Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 60 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2d0292980b1d..be57836ba90f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1548,8 +1548,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					  struct vm_area_struct **vmas,
 					  unsigned int gup_flags)
 {
-	unsigned long i;
-	bool drain_allow = true;
+	unsigned long i, isolation_error_count;
+	bool drain_allow;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1560,6 +1560,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 check_again:
 	prev_head = NULL;
+	isolation_error_count = 0;
+	drain_allow = true;
 	for (i = 0; i < nr_pages; i++) {
 		head = compound_head(pages[i]);
 		if (head == prev_head)
@@ -1571,25 +1573,35 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 * of the CMA zone if possible.
 		 */
 		if (is_migrate_cma_page(head)) {
-			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
-			else {
+			if (PageHuge(head)) {
+				if (!isolate_huge_page(head, &cma_page_list))
+					isolation_error_count++;
+			} else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
 				}

-				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
-					mod_node_page_state(page_pgdat(head),
-							    NR_ISOLATED_ANON +
-							    page_is_file_lru(head),
-							    thp_nr_pages(head));
+				if (isolate_lru_page(head)) {
+					isolation_error_count++;
+					continue;
 				}
+				list_add_tail(&head->lru, &cma_page_list);
+				mod_node_page_state(page_pgdat(head),
+						    NR_ISOLATED_ANON +
+						    page_is_file_lru(head),
+						    thp_nr_pages(head));
 			}
 		}
 	}

+	/*
+	 * If list is empty, and no isolation errors, means that all pages are
+	 * in the correct zone.
+	 */
+	if (list_empty(&cma_page_list) && !isolation_error_count)
+		return ret;
+
 	if (!list_empty(&cma_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
@@ -1609,23 +1621,19 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			return ret > 0 ? -ENOMEM : ret;
 		}

-		/*
-		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
-		 */
-		ret = __get_user_pages_locked(mm, start, nr_pages,
-					      pages, vmas, NULL,
-					      gup_flags);
-
-		if (ret > 0) {
-			nr_pages = ret;
-			drain_allow = true;
-			goto check_again;
-		}
+		/* We unpinned pages before migration, pin them again */
+		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					      NULL, gup_flags);
+		if (ret <= 0)
+			return ret;
+		nr_pages = ret;
 	}

-	return ret;
+	/*
+	 * check again because pages were unpinned, and we also might have
+	 * had isolation errors and need more pages to migrate.
+	 */
+	goto check_again;
 }
 #else
 static long check_and_migrate_cma_pages(struct mm_struct *mm,
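The retry logic now has a single success condition: nothing was put on
the migration list AND no isolation error occurred in this pass;
anything else unpins, migrates what was isolated, and goes around
again. A user-space sketch of that control flow, with two hypothetical
helpers faking one transient isolation failure:

#include <stdio.h>

static int pass;

/* One page sits in the wrong zone on passes 0 and 1, none afterwards. */
static int have_movable_page(void) { return pass < 2; }

/* Its isolation fails transiently on the first pass only. */
static int isolation_fails(void)   { return pass == 0; }

int main(void)
{
	for (;; pass++) {
		int isolation_error_count = 0;
		int list_empty = 1;

		if (have_movable_page()) {
			if (isolation_fails())
				isolation_error_count++;
			else
				list_empty = 0;	/* isolated for migration */
		}
		/* The new exit condition from the patch. */
		if (list_empty && !isolation_error_count) {
			printf("pin succeeded on pass %d\n", pass);
			return 0;
		}
		/* otherwise: unpin, migrate isolated pages, check_again */
	}
}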
From patchwork Mon Feb 15 16:13:40 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088943
From: Pavel Tatashin
Subject: [PATCH v11 05/14] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
Date: Mon, 15 Feb 2021 11:13:40 -0500
Message-Id: <20210215161349.246722-6-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
return pages that might belong to the CMA region. This is currently
used for long-term gup to make sure that such pins are not going to be
done on any CMA pages.

When PF_MEMALLOC_NOCMA was introduced, we did not realize that it
focuses on CMA pages too narrowly and that there is a larger class of
pages that need the same treatment. The MOVABLE zone cannot contain
any long-term pins either, so it makes sense to reuse and redefine
this flag for that use case as well. Rename the flag to
PF_MEMALLOC_PIN, which defines an allocation context that can only get
pages suitable for long-term pins.

Also rename memalloc_nocma_save()/memalloc_nocma_restore() to
memalloc_pin_save()/memalloc_pin_restore() and make the new functions
common.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 include/linux/sched.h    |  2 +-
 include/linux/sched/mm.h | 21 +++++----------------
 mm/gup.c                 |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/page_alloc.c          |  4 ++--
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e3a5eeec509..0fbb03bb776e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1568,7 +1568,7 @@ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000	/* Early kill for mce process policy */
-#define PF_MEMALLOC_NOCMA	0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
+#define PF_MEMALLOC_PIN		0x10000000	/* Allocation context constrained to zones which allow long term pinning. */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
 #define PF_SUSPEND_TASK		0x80000000	/* This thread called freeze_processes() and should not be frozen */
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 1ae08b8462a4..5f4dd3274734 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -270,29 +270,18 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }

-#ifdef CONFIG_CMA
-static inline unsigned int memalloc_nocma_save(void)
+static inline unsigned int memalloc_pin_save(void)
 {
-	unsigned int flags = current->flags & PF_MEMALLOC_NOCMA;
+	unsigned int flags = current->flags & PF_MEMALLOC_PIN;

-	current->flags |= PF_MEMALLOC_NOCMA;
+	current->flags |= PF_MEMALLOC_PIN;
 	return flags;
 }

-static inline void memalloc_nocma_restore(unsigned int flags)
+static inline void memalloc_pin_restore(unsigned int flags)
 {
-	current->flags = (current->flags & ~PF_MEMALLOC_NOCMA) | flags;
+	current->flags = (current->flags & ~PF_MEMALLOC_PIN) | flags;
 }
-#else
-static inline unsigned int memalloc_nocma_save(void)
-{
-	return 0;
-}
-
-static inline void memalloc_nocma_restore(unsigned int flags)
-{
-}
-#endif

 #ifdef CONFIG_MEMCG
 DECLARE_PER_CPU(struct mem_cgroup *, int_active_memcg);
diff --git a/mm/gup.c b/mm/gup.c
index be57836ba90f..9af6faf1b2b3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1662,7 +1662,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 	long rc;

 	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_nocma_save();
+		flags = memalloc_pin_save();

 	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL,
 				     gup_flags);
@@ -1671,7 +1671,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		if (rc > 0)
 			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 							 vmas, gup_flags);
-		memalloc_nocma_restore(flags);
+		memalloc_pin_restore(flags);
 	}
 	return rc;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4bdb58ab14cb..861de87daf07 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1049,10 +1049,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
-	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+	bool pin = !!(current->flags & PF_MEMALLOC_PIN);

 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (nocma && is_migrate_cma_page(page))
+		if (pin && is_migrate_cma_page(page))
 			continue;

 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d5b6f7..e4b1eda87827 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3813,8 +3813,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 #ifdef CONFIG_CMA
 	unsigned int pflags = current->flags;

-	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (!(pflags & PF_MEMALLOC_PIN) &&
+			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 #endif
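The save/restore pair keeps nesting safe: the saved bits are whatever
PF_MEMALLOC_PIN state the task already had, and restore puts exactly
that back. A stand-alone user-space model of the two helpers from this
patch, with task_flags standing in for current->flags:

#include <stdio.h>

#define PF_MEMALLOC_PIN 0x10000000u

static unsigned int task_flags;	/* stand-in for current->flags */

static unsigned int memalloc_pin_save(void)
{
	unsigned int flags = task_flags & PF_MEMALLOC_PIN;

	task_flags |= PF_MEMALLOC_PIN;
	return flags;
}

static void memalloc_pin_restore(unsigned int flags)
{
	task_flags = (task_flags & ~PF_MEMALLOC_PIN) | flags;
}

int main(void)
{
	unsigned int flags = memalloc_pin_save();

	/* ... a long-term gup would run here, with allocations
	 * constrained to pinnable zones ... */
	printf("inside:  pin=%d\n", !!(task_flags & PF_MEMALLOC_PIN));

	memalloc_pin_restore(flags);
	printf("outside: pin=%d\n", !!(task_flags & PF_MEMALLOC_PIN));
	return 0;
}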
From patchwork Mon Feb 15 16:13:41 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088889
From: Pavel Tatashin
Subject: [PATCH v11 06/14] mm: apply per-task gfp constraints in fast path
Date: Mon, 15 Feb 2021 11:13:41 -0500
Message-Id: <20210215161349.246722-7-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

Function current_gfp_context() is called after the fast path. However,
soon we will add more constraints that will also limit zones based on
context. Move this call into the fast path and apply the correct
constraints for all allocations.

Also update .reclaim_idx based on the value returned by
current_gfp_context(), because it will soon modify the allowed zones.

Note: with this patch we will do one extra current->flags load during
the fast path, but we already load current->flags in the fast path:

  __alloc_pages_nodemask()
   prepare_alloc_pages()
    current_alloc_flags(gfp_mask, *alloc_flags);

Later, when we add the zone-constraining logic to
current_gfp_context(), we will be able to remove the current->flags
load from current_alloc_flags() and therefore return the fast path to
the current performance level.

Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/page_alloc.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e4b1eda87827..f6058e8f3105 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4981,6 +4981,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}

 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4996,13 +5003,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;

-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;

 	/*
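The ordering is the whole point: the scoped-context filter now runs
once, up front, so the fast path and the slow path both allocate with
the same, already-constrained mask. A simplified user-space model with
a single invented NOFS-style constraint (not the kernel's real flag
values):

#include <stdio.h>

#define GFP_FS  0x1u
#define GFP_IO  0x2u
#define PF_NOFS 0x4u	/* stand-in for PF_MEMALLOC_NOFS */

/* Model of current_gfp_context(): strip flags the task scope forbids. */
static unsigned int current_gfp_context(unsigned int gfp, unsigned int pflags)
{
	if (pflags & PF_NOFS)
		gfp &= ~GFP_FS;
	return gfp;
}

int main(void)
{
	unsigned int pflags = PF_NOFS;		/* task is in a NOFS scope */
	unsigned int gfp_mask = GFP_FS | GFP_IO;

	/* Applied once, before the fast path, as in the patch. */
	gfp_mask = current_gfp_context(gfp_mask, pflags);

	printf("both paths allocate with FS=%u IO=%u\n",
	       !!(gfp_mask & GFP_FS), !!(gfp_mask & GFP_IO));
	return 0;
}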
From patchwork Mon Feb 15 16:13:42 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088945
From: Pavel Tatashin
Subject: [PATCH v11 07/14] mm: honor PF_MEMALLOC_PIN for all movable pages
Date: Mon, 15 Feb 2021 11:13:42 -0500
Message-Id: <20210215161349.246722-8-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_PIN is only honored for CMA pages; extend this flag to
work for any allocations from ZONE_MOVABLE by removing __GFP_MOVABLE
from gfp_mask when this flag is passed in the current context.

Add is_pinnable_page(), which returns true if a page may be long-term
pinned. A pinnable page is neither in ZONE_MOVABLE nor of MIGRATE_CMA
type.

Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 include/linux/mm.h       | 18 ++++++++++++++++++
 include/linux/sched/mm.h |  6 +++++-
 mm/hugetlb.c             |  2 +-
 mm/page_alloc.c          | 20 +++++++++-----------
 4 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ecdf8a8cd6ae..7f56d8d62148 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1116,6 +1116,11 @@ static inline bool is_zone_device_page(const struct page *page)
 }
 #endif

+static inline bool is_zone_movable_page(const struct page *page)
+{
+	return page_zonenum(page) == ZONE_MOVABLE;
+}
+
 #ifdef CONFIG_DEV_PAGEMAP_OPS
 void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
@@ -1487,6 +1492,19 @@ static inline unsigned long page_to_section(const struct page *page)
 }
 #endif

+/* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
+#ifdef CONFIG_MIGRATION
+static inline bool is_pinnable_page(struct page *page)
+{
+	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+}
+#else
+static inline bool is_pinnable_page(struct page *page)
+{
+	return true;
+}
+#endif
+
 static inline void set_page_zone(struct page *page, enum zone_type zone)
 {
 	page->flags &= ~(ZONES_MASK << ZONES_PGSHIFT);
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 5f4dd3274734..a55277b0d475 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -150,12 +150,13 @@ static inline bool in_vfork(struct task_struct *tsk)
  * Applies per-task gfp context to the given allocation flags.
  * PF_MEMALLOC_NOIO implies GFP_NOIO
  * PF_MEMALLOC_NOFS implies GFP_NOFS
+ * PF_MEMALLOC_PIN  implies !GFP_MOVABLE
  */
 static inline gfp_t current_gfp_context(gfp_t flags)
 {
 	unsigned int pflags = READ_ONCE(current->flags);

-	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS))) {
+	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN))) {
 		/*
 		 * NOIO implies both NOIO and NOFS and it is a weaker context
 		 * so always make sure it makes precedence
@@ -164,6 +165,9 @@ static inline gfp_t current_gfp_context(gfp_t flags)
 			flags &= ~(__GFP_IO | __GFP_FS);
 		else if (pflags & PF_MEMALLOC_NOFS)
 			flags &= ~__GFP_FS;
+
+		if (pflags & PF_MEMALLOC_PIN)
+			flags &= ~__GFP_MOVABLE;
 	}
 	return flags;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 861de87daf07..d1bcf5ed8df2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1052,7 +1052,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);

 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (pin && is_migrate_cma_page(page))
+		if (pin && !is_pinnable_page(page))
 			continue;

 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f6058e8f3105..ed38a3ccb9eb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3807,16 +3807,13 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 	return alloc_flags;
 }

-static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
-					unsigned int alloc_flags)
+/* Must be called after current_gfp_context() which can change gfp_mask */
+static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
+						  unsigned int alloc_flags)
 {
 #ifdef CONFIG_CMA
-	unsigned int pflags = current->flags;
-
-	if (!(pflags & PF_MEMALLOC_PIN) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
-
 #endif
 	return alloc_flags;
 }
@@ -4472,7 +4469,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;

-	alloc_flags = current_alloc_flags(gfp_mask, alloc_flags);
+	alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, alloc_flags);

 	return alloc_flags;
 }
@@ -4774,7 +4771,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
 	if (reserve_flags)
-		alloc_flags = current_alloc_flags(gfp_mask, reserve_flags);
+		alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, reserve_flags);

@@ -4943,7 +4940,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;

-	*alloc_flags = current_alloc_flags(gfp_mask, *alloc_flags);
+	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);

 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -4985,7 +4982,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
 	 * resp. GFP_NOIO which has to be inherited for all allocation requests
 	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
+	 * memalloc_no{fs,io}_{save,restore}. And PF_MEMALLOC_PIN which ensures
+	 * movable zones are not used during allocation.
 	 */
 	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
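is_pinnable_page() reduces to one predicate: not in ZONE_MOVABLE and
not a CMA page. A toy user-space model with invented struct fields (the
real test uses page_zonenum() and the pageblock migratetype):

#include <stdbool.h>
#include <stdio.h>

enum zone { ZONE_NORMAL, ZONE_MOVABLE };

struct page {
	enum zone zone;
	bool cma;	/* stand-in for is_migrate_cma_page() */
};

static bool is_pinnable_page(const struct page *page)
{
	return page->zone != ZONE_MOVABLE && !page->cma;
}

int main(void)
{
	struct page normal  = { ZONE_NORMAL,  false };
	struct page movable = { ZONE_MOVABLE, false };
	struct page cma     = { ZONE_NORMAL,  true  };

	printf("normal=%d movable=%d cma=%d\n",
	       is_pinnable_page(&normal),
	       is_pinnable_page(&movable),
	       is_pinnable_page(&cma));	/* prints: 1 0 0 */
	return 0;
}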
From patchwork Mon Feb 15 16:13:43 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12088941
From: Pavel Tatashin
Subject: [PATCH v11 08/14] mm/gup: do not migrate zero page
Date: Mon, 15 Feb 2021 11:13:43 -0500
Message-Id: <20210215161349.246722-9-pasha.tatashin@soleen.com>
In-Reply-To: <20210215161349.246722-1-pasha.tatashin@soleen.com>

On some platforms ZERO_PAGE(0) might end up in a movable zone. Do not
migrate the zero page in gup during long-term pinning, as migration of
the zero page is not allowed.

For example, in x86 QEMU with 16G of memory and the kernelcore=5G
parameter, I see the following:

Boot #1:
  zero_pfn       0x48a8d
  zero_pfn zone: ZONE_DMA32

Boot #2:
  zero_pfn       0x20168d
  zero_pfn zone: ZONE_MOVABLE

On x86, empty_zero_page is declared in .bss and, depending on the
loader, may end up in different physical locations across boots.

Also, move the is_zero_pfn() and my_zero_pfn() functions under
CONFIG_MMU, because the zero_pfn they use is declared in memory.c,
which is compiled with CONFIG_MMU.

Signed-off-by: Pavel Tatashin
---
 include/linux/mm.h      |  3 ++-
 include/linux/mmzone.h  |  4 ++++
 include/linux/pgtable.h | 12 ++++++++++++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7f56d8d62148..3c75df55ed00 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1496,7 +1496,8 @@ static inline unsigned long page_to_section(const struct page *page)
 #ifdef CONFIG_MIGRATION
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
+		is_zero_pfn(page_to_pfn(page));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..c56f508be031 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -407,6 +407,10 @@ enum zone_type {
 	 *    techniques might use alloc_contig_range() to hide previously
 	 *    exposed pages from the buddy again (e.g., to implement some sort
 	 *    of memory unplug in virtio-mem).
+	 * 6. ZERO_PAGE(0), kernelcore/movablecore setups might create
+	 *    situations where ZERO_PAGE(0) which is allocated differently
+	 *    on different platforms may end up in a movable zone. ZERO_PAGE(0)
+	 *    cannot be migrated.
 	 *
 	 * In general, no unmovable allocations that degrade memory offlining
 	 * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
 	 * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8fcdfa52eb4b..7c6cba3d80f0 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1115,6 +1115,7 @@ extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 extern void untrack_pfn_moved(struct vm_area_struct *vma);
 #endif
 
+#ifdef CONFIG_MMU
 #ifdef __HAVE_COLOR_ZERO_PAGE
 static inline int is_zero_pfn(unsigned long pfn)
 {
@@ -1138,6 +1139,17 @@ static inline unsigned long my_zero_pfn(unsigned long addr)
 	return zero_pfn;
 }
 #endif
+#else
+static inline int is_zero_pfn(unsigned long pfn)
+{
+	return 0;
+}
+
+static inline unsigned long my_zero_pfn(unsigned long addr)
+{
+	return 0;
+}
+#endif /* CONFIG_MMU */
 
 #ifdef CONFIG_MMU
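To see why the zero page needs the special case in is_pinnable_page() above,
here is a minimal user-space mock of the predicate (a sketch only: the three
booleans stand in for is_zone_movable_page(), is_migrate_cma_page() and
is_zero_pfn(); none of this is kernel code):

#include <stdbool.h>
#include <stdio.h>

static bool movable, cma, zero;	/* stand-ins for the real helpers */

static bool is_pinnable(void)
{
	/* Movable/CMA pages are normally not pinnable, but the zero page
	 * can never be migrated, so it counts as pinnable regardless of
	 * the zone it landed in. */
	return !(movable || cma) || zero;
}

int main(void)
{
	movable = true;

	zero = false;
	printf("movable page pinnable?      %d\n", is_pinnable());	/* 0 */
	zero = true;
	printf("movable ZERO_PAGE pinnable? %d\n", is_pinnable());	/* 1 */
	return 0;
}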
From patchwork Mon Feb 15 16:13:44 2021
From: Pavel Tatashin
Subject: [PATCH v11 09/14] mm/gup: migrate pinned pages out of movable zone
Date: Mon, 15 Feb 2021 11:13:44 -0500
Message-Id: <20210215161349.246722-10-pasha.tatashin@soleen.com>

We should not pin pages in ZONE_MOVABLE. Currently, the only pages that
are migrated away before pinning are movable CMA pages. Generalize the
function that migrates CMA pages to migrate all movable pages, and use
is_pinnable_page() to check which pages need to be migrated.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         |  9 ++++-
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 67 +++++++++++++++++-----------------
 4 files changed, 44 insertions(+), 36 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 4594838a0f7c..aae5ef0b3ba1 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c56f508be031..e8ccd4eab75e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -387,8 +387,13 @@ enum zone_type {
 	 * to increase the number of THP/huge pages. Notable special cases are:
 	 *
 	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 *    essentially turn such pages unmovable. Therefore, we do not allow
+	 *    pinning long-term pages in ZONE_MOVABLE. When pages are pinned and
+	 *    faulted, they come from the right zone right away. However, it is
+	 *    still possible that the address space already has pages in
+	 *    ZONE_MOVABLE at the time when pages are pinned (i.e. the user has
+	 *    touched that memory before pinning). In such a case we migrate
+	 *    them to a different zone. When migration fails - pinning fails.
 	 * 2. memblock allocations: kernelcore/movablecore setups might create
 	 *    situations where ZONE_MOVABLE contains unmovable allocations
 	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace
diff --git a/mm/gup.c b/mm/gup.c
index 9af6faf1b2b3..da6d370fe551 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -88,11 +88,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 		int orig_refs = refs;
 
 		/*
-		 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-		 * path, so fail and let the caller fall back to the slow path.
+		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+		 * right zone, so fail and let the caller fall back to the slow
+		 * path.
 		 */
-		if (unlikely(flags & FOLL_LONGTERM) &&
-		    is_migrate_cma_page(page))
+		if (unlikely((flags & FOLL_LONGTERM) &&
+			     !is_pinnable_page(page)))
 			return NULL;
 
 		/*
@@ -1540,17 +1541,17 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+#ifdef CONFIG_MIGRATION
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i, isolation_error_count;
 	bool drain_allow;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
@@ -1568,13 +1569,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			continue;
 		prev_head = head;
 		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
 		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head)) {
-				if (!isolate_huge_page(head, &cma_page_list))
+				if (!isolate_huge_page(head, &movable_page_list))
 					isolation_error_count++;
 			} else {
 				if (!PageLRU(head) && drain_allow) {
@@ -1586,7 +1586,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					isolation_error_count++;
 					continue;
 				}
-				list_add_tail(&head->lru, &cma_page_list);
+				list_add_tail(&head->lru, &movable_page_list);
 				mod_node_page_state(page_pgdat(head),
 						    NR_ISOLATED_ANON +
 						    page_is_file_lru(head),
@@ -1599,10 +1599,10 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	 * If list is empty, and no isolation errors, means that all pages are
 	 * in the correct zone.
 	 */
-	if (list_empty(&cma_page_list) && !isolation_error_count)
+	if (list_empty(&movable_page_list) && !isolation_error_count)
 		return ret;
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1612,12 +1612,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+		ret = migrate_pages(&movable_page_list, alloc_migration_target,
 				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
-				    MR_CONTIG_RANGE);
+				    MR_LONGTERM_PIN);
 		if (ret) {
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 			return ret > 0 ? -ENOMEM : ret;
 		}
@@ -1636,16 +1636,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	goto check_again;
 }
 #else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	return nr_pages;
 }
-#endif /* CONFIG_CMA */
+#endif /* CONFIG_MIGRATION */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1669,8 +1669,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 
 	if (gup_flags & FOLL_LONGTERM) {
 		if (rc > 0)
-			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-							 vmas, gup_flags);
+			rc = check_and_migrate_movable_pages(mm, start, rc,
+							     pages, vmas,
+							     gup_flags);
 		memalloc_pin_restore(flags);
 	}
 	return rc;
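From a driver's point of view, the effect of this patch is that a
FOLL_LONGTERM pin transparently migrates movable pages first. A hedged sketch
of that usage follows; pin_buffer_sketch() is a made-up name, while
pin_user_pages_fast() and unpin_user_pages() are the real gup API:

#include <linux/mm.h>

static int pin_buffer_sketch(unsigned long uaddr, int nr_pages,
			     struct page **pages)
{
	int pinned;

	/* With FOLL_LONGTERM, any page sitting in ZONE_MOVABLE or CMA is
	 * migrated out before being pinned; if migration fails, so does
	 * the pin (e.g. -ENOMEM), rather than pinning an unmovable page
	 * inside a movable zone. */
	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;

	/* ... set up long-lived DMA against pages[0..pinned) ... */

	unpin_user_pages(pages, pinned);
	return 0;
}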
From patchwork Mon Feb 15 16:13:45 2021
From: Pavel Tatashin
Subject: [PATCH v11 10/14] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning
Date: Mon, 15 Feb 2021 11:13:45 -0500
Message-Id: <20210215161349.246722-11-pasha.tatashin@soleen.com>

Document the special handling of page pinning when ZONE_MOVABLE is
present.

Signed-off-by: Pavel Tatashin
Suggested-by: David Hildenbrand
Acked-by: Michal Hocko
---
 Documentation/admin-guide/mm/memory-hotplug.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b..c6618f99f765 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -357,6 +357,15 @@ creates ZONE_MOVABLE as following.
    Unfortunately, there is no information to show which memory block belongs
    to ZONE_MOVABLE. This is TBD.
 
+.. note::
+   Techniques that rely on long-term pinnings of memory (especially, RDMA and
+   vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
+   hot remove. Pinned pages cannot reside on ZONE_MOVABLE, to guarantee that
+   memory can still get hot removed - be aware that pinning can fail even if
+   there is plenty of free memory in ZONE_MOVABLE. In addition, using
+   ZONE_MOVABLE might make page pinning more expensive, because pages have to
+   be migrated off that zone first.
+
 .. _memory_hotplug_how_to_offline_memory:
 
 How to offline memory
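A quick way to tell whether the note above applies to a given machine is to
look for a populated ZONE_MOVABLE. A small, illustrative user-space check
(not part of the patch; it only greps /proc/zoneinfo):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/zoneinfo", "r");

	if (!f) {
		perror("/proc/zoneinfo");
		return 1;
	}
	/* Zone headers look like "Node 0, zone   Movable". */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "zone") && strstr(line, "Movable"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}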
From patchwork Mon Feb 15 16:13:46 2021
From: Pavel Tatashin
Subject: [PATCH v11 11/14] mm/gup: change index type to long as it counts pages
Date: Mon, 15 Feb 2021 11:13:46 -0500
Message-Id: <20210215161349.246722-12-pasha.tatashin@soleen.com>

In __get_user_pages_locked(), i counts the number of pages, so it
should be long: long is used everywhere else to hold page counts, and
32 bits becomes increasingly small for values proportional to a page
count.

Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index da6d370fe551..fab20b934030 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1472,7 +1472,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 {
 	struct vm_area_struct *vma;
 	unsigned long vm_flags;
-	int i;
+	long i;
 
 	/* calculate required read or write permissions.
 	 * If FOLL_FORCE is set, we only require the "MAY" flags.
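For a sense of scale: assuming 4 KiB pages, a signed 32-bit counter already
overflows for a single 8 TiB mapping, which is why long is the safer type for
anything proportional to a page count. A tiny stand-alone check:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;				/* assumed */
	unsigned long nr_pages = (8UL << 40) / page_size;	/* 8 TiB */

	printf("pages in 8 TiB: %lu\n", nr_pages);	/* 2147483648 */
	printf("INT_MAX:        %d\n", INT_MAX);	/* 2147483647 */
	return 0;
}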
From patchwork Mon Feb 15 16:13:47 2021
From: Pavel Tatashin
Subject: [PATCH v11 12/14] mm/gup: longterm pin migration cleanup
Date: Mon, 15 Feb 2021 11:13:47 -0500
Message-Id: <20210215161349.246722-13-pasha.tatashin@soleen.com>

When pages are longterm pinned, we must migrate them out of the movable
zone. The function that migrates them contains a hidden retry loop
implemented with goto: it retries on isolation failures and after a
successful migration. Make this code clearer by moving the loop to the
caller.

Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 93 ++++++++++++++++++++++----------------------------
 1 file changed, 37 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fab20b934030..905d550abb91 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1542,27 +1542,28 @@ struct page *get_dump_page(unsigned long addr)
 #endif /* CONFIG_ELF_CORE */
 
 #ifdef CONFIG_MIGRATION
-static long check_and_migrate_movable_pages(struct mm_struct *mm,
-					    unsigned long start,
-					    unsigned long nr_pages,
+/*
+ * Check whether all pages are pinnable, if so return number of pages.  If some
+ * pages are not pinnable, migrate them, and unpin all pages. Return zero if
+ * pages were migrated, or if some pages were not successfully isolated.
+ * Return negative error if migration fails.
+ */
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages,
-					    struct vm_area_struct **vmas,
 					    unsigned int gup_flags)
 {
-	unsigned long i, isolation_error_count;
-	bool drain_allow;
+	unsigned long i;
+	unsigned long isolation_error_count = 0;
+	bool drain_allow = true;
 	LIST_HEAD(movable_page_list);
-	long ret = nr_pages;
-	struct page *prev_head, *head;
+	long ret = 0;
+	struct page *prev_head = NULL;
+	struct page *head;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
-check_again:
-	prev_head = NULL;
-	isolation_error_count = 0;
-	drain_allow = true;
 	for (i = 0; i < nr_pages; i++) {
 		head = compound_head(pages[i]);
 		if (head == prev_head)
@@ -1600,47 +1601,27 @@ static long check_and_migrate_movable_pages(struct mm_struct *mm,
 	 * in the correct zone.
 	 */
 	if (list_empty(&movable_page_list) && !isolation_error_count)
-		return ret;
+		return nr_pages;
 
+	if (gup_flags & FOLL_PIN) {
+		unpin_user_pages(pages, nr_pages);
+	} else {
+		for (i = 0; i < nr_pages; i++)
+			put_page(pages[i]);
+	}
 	if (!list_empty(&movable_page_list)) {
-		/*
-		 * drop the above get_user_pages reference.
-		 */
-		if (gup_flags & FOLL_PIN)
-			unpin_user_pages(pages, nr_pages);
-		else
-			for (i = 0; i < nr_pages; i++)
-				put_page(pages[i]);
-
 		ret = migrate_pages(&movable_page_list, alloc_migration_target,
 				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				    MR_LONGTERM_PIN);
-		if (ret) {
-			if (!list_empty(&movable_page_list))
-				putback_movable_pages(&movable_page_list);
-			return ret > 0 ? -ENOMEM : ret;
-		}
-
-		/* We unpinned pages before migration, pin them again */
-		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
-					      NULL, gup_flags);
-		if (ret <= 0)
-			return ret;
-		nr_pages = ret;
+		if (ret && !list_empty(&movable_page_list))
+			putback_movable_pages(&movable_page_list);
 	}
 
-	/*
-	 * check again because pages were unpinned, and we also might have
-	 * had isolation errors and need more pages to migrate.
-	 */
-	goto check_again;
+	return ret > 0 ? -ENOMEM : ret;
 }
 #else
-static long check_and_migrate_movable_pages(struct mm_struct *mm,
-					    unsigned long start,
-					    unsigned long nr_pages,
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages,
-					    struct vm_area_struct **vmas,
 					    unsigned int gup_flags)
 {
 	return nr_pages;
@@ -1658,22 +1639,22 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 				  struct vm_area_struct **vmas,
 				  unsigned int gup_flags)
 {
-	unsigned long flags = 0;
+	unsigned int flags;
 	long rc;
 
-	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_pin_save();
-
-	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL,
-				     gup_flags);
+	if (!(gup_flags & FOLL_LONGTERM))
+		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					       NULL, gup_flags);
+	flags = memalloc_pin_save();
+	do {
+		rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					     NULL, gup_flags);
+		if (rc <= 0)
+			break;
+		rc = check_and_migrate_movable_pages(rc, pages, gup_flags);
+	} while (!rc);
+	memalloc_pin_restore(flags);
 
-	if (gup_flags & FOLL_LONGTERM) {
-		if (rc > 0)
-			rc = check_and_migrate_movable_pages(mm, start, rc,
-							     pages, vmas,
-							     gup_flags);
-		memalloc_pin_restore(flags);
-	}
 	return rc;
 }
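The resulting control flow - worker returns the page count on success, zero
to request another pin-and-check round, negative on error, with the retry
loop in the caller - can be modeled in a few lines of plain C. This is a toy
model under made-up names, not kernel code:

#include <stdio.h>

static long attempts;

/* Pretend the first two rounds hit isolation/migration retries. */
static long check_and_migrate(long n)
{
	return ++attempts < 3 ? 0 : n;
}

static long longterm_pin(long n)
{
	long rc;

	do {
		rc = n;				/* stands in for re-pinning */
		if (rc <= 0)
			break;
		rc = check_and_migrate(rc);	/* 0 means "retry" */
	} while (!rc);

	return rc;
}

int main(void)
{
	long pinned = longterm_pin(8);

	printf("pinned %ld pages after %ld rounds\n", pinned, attempts);
	return 0;
}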
From patchwork Mon Feb 15 16:13:48 2021
From: Pavel Tatashin
Subject: [PATCH v11 13/14] selftests/vm: gup_test: fix test flag
Date: Mon, 15 Feb 2021 11:13:48 -0500
Message-Id: <20210215161349.246722-14-pasha.tatashin@soleen.com>

In gup_test, both gup_flags and test_flags use the same flags field.
This is broken. Further, in the actual gup_test.c all the passed
gup_flags are erased and unconditionally replaced with FOLL_WRITE.

This means that test_flags are ignored, and code like this always
performs the pin dump test:

155 	if (gup->flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
156 		nr = pin_user_pages(addr, nr, gup->flags,
157 				    pages + i, NULL);
158 	else
159 		nr = get_user_pages(addr, nr, gup->flags,
160 				    pages + i, NULL);
161 	break;

Add a new test_flags field, to allow raw gup_flags to work. Add a new
subcommand for DUMP_USER_PAGES_TEST to specify that the pin test should
be performed.
Remove unconditional overwriting of gup_flags via FOLL_WRITE, but
preserve the previous behaviour where FOLL_WRITE was the default flag,
and add a new option "-W" to unset FOLL_WRITE. Rename flags to
gup_flags.

With the fix, dump works like this:

root@virtme:/# gup_test -c
---- page #0, starting from user virt addr: 0x7f8acb9e4000
page:00000000d3d2ee27 refcount:2 mapcount:1 mapping:0000000000000000 index:0x0 pfn:0x100bcf
anon flags: 0x300000000080016(referenced|uptodate|lru|swapbacked)
raw: 0300000000080016 ffffd0e204021608 ffffd0e208df2e88 ffff8ea04243ec61
raw: 0000000000000000 0000000000000000 0000000200000000 0000000000000000
page dumped because: gup_test: dump_pages() test
DUMP_USER_PAGES_TEST: done

root@virtme:/# gup_test -c -p
---- page #0, starting from user virt addr: 0x7fd19701b000
page:00000000baed3c7d refcount:1025 mapcount:1 mapping:0000000000000000 index:0x0 pfn:0x108008
anon flags: 0x300000000080014(uptodate|lru|swapbacked)
raw: 0300000000080014 ffffd0e204200188 ffffd0e205e09088 ffff8ea04243ee71
raw: 0000000000000000 0000000000000000 0000040100000000 0000000000000000
page dumped because: gup_test: dump_pages() test
DUMP_USER_PAGES_TEST: done

The refcount shows the difference between the pin and no-pin cases.
Also change the type of nr from int to long, as it counts pages.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 mm/gup_test.c                         | 23 ++++++++++-------------
 mm/gup_test.h                         |  3 ++-
 tools/testing/selftests/vm/gup_test.c | 15 +++++++++++----
 3 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/mm/gup_test.c b/mm/gup_test.c
index e3cf78e5873e..a6ed1c877679 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -94,7 +94,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 {
 	ktime_t start_time, end_time;
 	unsigned long i, nr_pages, addr, next;
-	int nr;
+	long nr;
 	struct page **pages;
 	int ret = 0;
 	bool needs_mmap_lock =
@@ -126,37 +126,34 @@ static int __gup_test_ioctl(unsigned int cmd,
 			nr = (next - addr) / PAGE_SIZE;
 		}
 
-		/* Filter out most gup flags: only allow a tiny subset here: */
-		gup->flags &= FOLL_WRITE;
-
 		switch (cmd) {
 		case GUP_FAST_BENCHMARK:
-			nr = get_user_pages_fast(addr, nr, gup->flags,
+			nr = get_user_pages_fast(addr, nr, gup->gup_flags,
 						 pages + i);
 			break;
 		case GUP_BASIC_TEST:
-			nr = get_user_pages(addr, nr, gup->flags, pages + i,
+			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i,
 					    NULL);
 			break;
 		case PIN_FAST_BENCHMARK:
-			nr = pin_user_pages_fast(addr, nr, gup->flags,
+			nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
 						 pages + i);
 			break;
 		case PIN_BASIC_TEST:
-			nr = pin_user_pages(addr, nr, gup->flags, pages + i,
+			nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i,
 					    NULL);
 			break;
 		case PIN_LONGTERM_BENCHMARK:
 			nr = pin_user_pages(addr, nr,
-					    gup->flags | FOLL_LONGTERM,
+					    gup->gup_flags | FOLL_LONGTERM,
 					    pages + i, NULL);
 			break;
 		case DUMP_USER_PAGES_TEST:
-			if (gup->flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
-				nr = pin_user_pages(addr, nr, gup->flags,
+			if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
+				nr = pin_user_pages(addr, nr, gup->gup_flags,
 						    pages + i, NULL);
 			else
-				nr = get_user_pages(addr, nr, gup->flags,
+				nr = get_user_pages(addr, nr, gup->gup_flags,
 						    pages + i, NULL);
 			break;
 		default:
@@ -187,7 +184,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 
 	start_time = ktime_get();
 
-	put_back_pages(cmd, pages, nr_pages, gup->flags);
+	put_back_pages(cmd, pages, nr_pages, gup->test_flags);
 
 	end_time = ktime_get();
 	gup->put_delta_usec = ktime_us_delta(end_time, start_time);
diff --git a/mm/gup_test.h b/mm/gup_test.h
index 90a6713d50eb..887ac1d5f5bc 100644
--- a/mm/gup_test.h
+++ b/mm/gup_test.h
@@ -21,7 +21,8 @@ struct gup_test {
 	__u64 addr;
 	__u64 size;
 	__u32 nr_pages_per_call;
-	__u32 flags;
+	__u32 gup_flags;
+	__u32 test_flags;
 	/*
 	 * Each non-zero entry is the number of the page (1-based: first page is
 	 * page 1, so that zero entries mean "do nothing") from the .addr base.
diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 6c6336dd3b7f..943cc2608dc2 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -37,13 +37,13 @@ int main(int argc, char **argv)
 {
 	struct gup_test gup = { 0 };
 	unsigned long size = 128 * MB;
-	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 0;
+	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
 	unsigned long cmd = GUP_FAST_BENCHMARK;
 	int flags = MAP_PRIVATE;
 	char *file = "/dev/zero";
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwSH")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHp")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -65,9 +65,13 @@ int main(int argc, char **argv)
 			 */
 			gup.which_pages[0] = 1;
 			break;
+		case 'p':
+			/* works only with DUMP_USER_PAGES_TEST */
+			gup.test_flags |= GUP_TEST_FLAG_DUMP_PAGES_USE_PIN;
+			break;
 		case 'F':
 			/* strtol, so you can pass flags in hex form */
-			gup.flags = strtol(optarg, 0, 0);
+			gup.gup_flags = strtol(optarg, 0, 0);
 			break;
 		case 'm':
 			size = atoi(optarg) * MB;
@@ -93,6 +97,9 @@ int main(int argc, char **argv)
 		case 'w':
 			write = 1;
 			break;
+		case 'W':
+			write = 0;
+			break;
 		case 'f':
 			file = optarg;
 			break;
@@ -140,7 +147,7 @@ int main(int argc, char **argv)
 	gup.nr_pages_per_call = nr_pages;
 
 	if (write)
-		gup.flags |= FOLL_WRITE;
+		gup.gup_flags |= FOLL_WRITE;
 
 	fd = open("/sys/kernel/debug/gup_test", O_RDWR);
 	if (fd == -1) {
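The root cause is easy to reproduce in miniature: when two flag namespaces
share one field, a bit from one namespace can alias a bit from the other. A
toy illustration (the flag names and values below are made up, not the
kernel's):

#include <stdio.h>

#define MY_FOLL_WRITE	0x01	/* "gup" namespace */
#define MY_TEST_USE_PIN	0x01	/* "test" namespace, same bit! */

int main(void)
{
	unsigned int flags = 0;

	flags |= MY_FOLL_WRITE;	/* caller only asked for a write pin */

	/* The shared field now also answers "yes" to a test-flag query: */
	if (flags & MY_TEST_USE_PIN)
		printf("test flag wrongly observed as set\n");

	return 0;
}

Splitting the field into gup_flags and test_flags, as the patch does, removes
the aliasing by construction.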
From patchwork Mon Feb 15 16:13:49 2021
From: Pavel Tatashin
Subject: [PATCH v11 14/14] selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages
Date: Mon, 15 Feb 2021 11:13:49 -0500
Message-Id: <20210215161349.246722-15-pasha.tatashin@soleen.com>

When pages are pinned, they can be faulted in from userland and
migrated, or they can be faulted in directly from the kernel without
migration. In either case, the pinned pages must end up being pinnable
(not movable).

Add a new test to gup_test to help verify that the gup/pup
(get_user_pages() / pin_user_pages()) behavior with respect to pinnable
and movable pages is reasonable and correct. Specifically, provide a
way to:

1) Verify that only "pinnable" pages are pinned. This is checked
   automatically for you.

2) Verify that gup/pup performance is reasonable. This requires
   comparing benchmarks between doing gup/pup on pages that have been
   pre-faulted in from user space, vs. doing gup/pup on pages that are
   not faulted in until gup/pup time (via FOLL_TOUCH). This decision
   is controlled with the new -z command line option (a user-space
   sketch of the pre-fault case follows below).
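As a rough stand-alone sketch of the pre-fault half of that comparison,
assuming 4 KiB pages and the selftest's default 128 MB buffer (the kernel
half, -z/FOLL_TOUCH, has no user-space equivalent, since the pages are
faulted in during gup itself):

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	size_t size = 128UL << 20;	/* 128 MB, gup_test's default */
	long page = sysconf(_SC_PAGESIZE);
	struct timespec t0, t1;
	char *p;

	p = mmap(NULL, size, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t off = 0; off < size; off += page)
		p[off] = 0;	/* fault each page in from user space */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("pre-faulted %zu pages in %.3f ms\n", size / page,
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	munmap(p, size);
	return 0;
}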
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 mm/gup_test.c                         |  6 ++++++
 tools/testing/selftests/vm/gup_test.c | 23 +++++++++++++++++++----
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/gup_test.c b/mm/gup_test.c
index a6ed1c877679..d974dec19e1c 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -52,6 +52,12 @@ static void verify_dma_pinned(unsigned int cmd, struct page **pages,
 
 				dump_page(page, "gup_test failure");
 				break;
+			} else if (cmd == PIN_LONGTERM_BENCHMARK &&
+				   WARN(!is_pinnable_page(page),
+					"pages[%lu] is NOT pinnable but pinned\n",
+					i)) {
+				dump_page(page, "gup_test failure");
+				break;
 			}
 		}
 		break;
diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 943cc2608dc2..1e662d59c502 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -13,6 +13,7 @@
 
 /* Just the flags we need, copied from mm.h: */
 #define FOLL_WRITE	0x01	/* check pte is writable */
+#define FOLL_TOUCH	0x02	/* mark page accessed */
 
 static char *cmd_to_str(unsigned long cmd)
 {
@@ -39,11 +40,11 @@ int main(int argc, char **argv)
 	unsigned long size = 128 * MB;
 	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
 	unsigned long cmd = GUP_FAST_BENCHMARK;
-	int flags = MAP_PRIVATE;
+	int flags = MAP_PRIVATE, touch = 0;
 	char *file = "/dev/zero";
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHp")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHpz")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -110,6 +111,10 @@ int main(int argc, char **argv)
 		case 'H':
 			flags |= (MAP_HUGETLB | MAP_ANONYMOUS);
 			break;
+		case 'z':
+			/* fault pages in gup, do not fault in userland */
+			touch = 1;
+			break;
 		default:
 			return -1;
 		}
@@ -167,8 +172,18 @@ int main(int argc, char **argv)
 	else if (thp == 0)
 		madvise(p, size, MADV_NOHUGEPAGE);
 
-	for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
-		p[0] = 0;
+	/*
+	 * FOLL_TOUCH, in gup_test, is used as an either/or case: either
+	 * fault pages in from the kernel via FOLL_TOUCH, or fault them
+	 * in here, from user space. This allows comparison of performance
+	 * between those two cases.
+	 */
+	if (touch) {
+		gup.gup_flags |= FOLL_TOUCH;
+	} else {
+		for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
+			p[0] = 0;
+	}
 
 	/* Only report timing information on the *_BENCHMARK commands: */
 	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||