From patchwork Mon Feb 1 15:38:14 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059301
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 01/14] mm/gup: don't pin migrated cma pages in movable zone
Date: Mon, 1 Feb 2021 10:38:14 -0500
Message-Id: <20210201153827.444374-2-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

To avoid fragmenting CMA, pinned pages are migrated. However, they are
currently migrated to ZONE_MOVABLE, which also should not contain
pinned pages. Remove __GFP_MOVABLE, so pages can be migrated to zones
where pinning is allowed.

Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3e086b073624..24f25b1e9103 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1563,7 +1563,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:
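For context: mtc.gfp_mask is what alloc_migration_target() uses when
allocating the destination page for each migrated pin. The sketch below
is illustrative only (the function name and the reduced logic are made
up here, simplified from mm/migrate.c), but it shows why dropping
__GFP_MOVABLE is enough to steer migration targets away from
ZONE_MOVABLE:

/*
 * Illustrative sketch, not the kernel implementation: with
 * __GFP_MOVABLE removed from mtc->gfp_mask, gfp_zone() never selects
 * ZONE_MOVABLE, so destination pages land in pinnable zones.
 */
static struct page *alloc_migration_target_sketch(struct page *page,
						  unsigned long private)
{
	struct migration_target_control *mtc = (void *)private;
	int nid = mtc->nid;

	if (nid == NUMA_NO_NODE)
		nid = page_to_nid(page);	/* prefer the source node */

	return __alloc_pages_node(nid, mtc->gfp_mask, 0);
}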
From patchwork Mon Feb 1 15:38:15 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059303
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 02/14] mm/gup: check every subpage of a compound page during isolation
Date: Mon, 1 Feb 2021 10:38:15 -0500
Message-Id: <20210201153827.444374-3-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

When pages are isolated in check_and_migrate_movable_pages(), we skip a
compound number of pages at a time. However, as Jason noted, it is not
necessarily correct that pages[i] corresponds to the pages that we
skipped: the addresses in this range may have gone through
split_huge_pmd()/split_huge_pud(), and these functions do not update
the compound page metadata.

The problem can be reproduced if something like this occurs:

1. User faulted huge pages.
2. split_huge_pmd() was called for some reason.
3. User has unmapped some sub-pages in the range.
4. User tries to long-term pin the addresses.

The resulting pages[i] might end up containing pages which are not
aligned to the compound page size.
Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")
Reported-by: Jason Gunthorpe
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 24f25b1e9103..16f10d5a9eb6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1556,26 +1556,23 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					  unsigned int gup_flags)
 {
 	unsigned long i;
-	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
+	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:
-	for (i = 0; i < nr_pages;) {
-
-		struct page *head = compound_head(pages[i]);
-
-		/*
-		 * gup may start from a tail page. Advance step by the left
-		 * part.
-		 */
-		step = compound_nr(head) - (pages[i] - head);
+	prev_head = NULL;
+	for (i = 0; i < nr_pages; i++) {
+		head = compound_head(pages[i]);
+		if (head == prev_head)
+			continue;
+		prev_head = head;
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
@@ -1599,8 +1596,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			}
 		}
 	}
-
-		i += step;
 	}
 
 	if (!list_empty(&cma_page_list)) {
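The essence of the fix is to stop doing arithmetic on compound-page
geometry and instead de-duplicate against the previously seen head
page. A self-contained sketch of the pattern (the helper name here is
made up for illustration):

/*
 * Visit each distinct compound head exactly once, without assuming
 * pages[] is aligned to compound-page boundaries.
 */
static void for_each_distinct_head(struct page **pages,
				   unsigned long nr_pages)
{
	struct page *prev_head = NULL;
	struct page *head;
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		head = compound_head(pages[i]);
		if (head == prev_head)
			continue;	/* same compound page as last entry */
		prev_head = head;
		/* process 'head' exactly once here */
	}
}

This stays correct even if pages[] starts at a tail page or a huge
mapping was split mid-range, because no alignment is assumed.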
From patchwork Mon Feb 1 15:38:16 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059365
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 03/14] mm/gup: return an error on migration failure
Date: Mon, 1 Feb 2021 10:38:16 -0500
Message-Id: <20210201153827.444374-4-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

When a migration failure occurs, we still pin the pages, which means
that we may pin CMA movable pages which should never be the case.
Instead, return an error without pinning pages when migration fails.

There is no need to retry migrating, because migrate_pages() already
retries 10 times.

Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 16f10d5a9eb6..88ce41f41543 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1557,7 +1557,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 {
 	unsigned long i;
 	bool drain_allow = true;
-	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1608,17 +1607,15 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
-			/*
-			 * some of the pages failed migration. Do get_user_pages
-			 * without migration.
-			 */
-			migrate_allow = false;
-
+		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
+				    MR_CONTIG_RANGE);
+		if (ret) {
 			if (!list_empty(&cma_page_list))
 				putback_movable_pages(&cma_page_list);
+			return ret > 0 ? -ENOMEM : ret;
 		}
+
 		/*
 		 * We did migrate all the pages, Try to get the page references
 		 * again migrating any new CMA pages which we failed to isolate
@@ -1628,7 +1625,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 						   pages, vmas, NULL,
 						   gup_flags);
 
-		if ((ret > 0) && migrate_allow) {
+		if (ret > 0) {
 			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
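To restate the return-value convention relied on above (a sketch only;
the wrapper name migrate_or_fail() is hypothetical): migrate_pages()
returns 0 on full success, a negative errno on hard failure, or a
positive count of pages that could not be migrated, and any non-zero
result must now fail the pin:

static long migrate_or_fail(struct list_head *page_list,
			    struct migration_target_control *mtc)
{
	long ret;

	ret = migrate_pages(page_list, alloc_migration_target, NULL,
			    (unsigned long)mtc, MIGRATE_SYNC,
			    MR_CONTIG_RANGE);
	if (ret) {
		/* leftover pages go back to the LRU before bailing out */
		if (!list_empty(page_list))
			putback_movable_pages(page_list);
		return ret > 0 ? -ENOMEM : ret;	/* never pin unmigrated pages */
	}
	return 0;
}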
From patchwork Mon Feb 1 15:38:17 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059363
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 04/14] mm/gup: check for isolation errors
Date: Mon, 1 Feb 2021 10:38:17 -0500
Message-Id: <20210201153827.444374-5-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

It is still possible to pin movable CMA pages if there are isolation
errors and cma_page_list stays empty when we check again. Check for
isolation errors, and return success only when there are no isolation
errors and cma_page_list is empty after checking. Because isolation
errors are transient, we retry indefinitely.

Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 60 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 88ce41f41543..7ecca2d66dff 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1555,8 +1555,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					  struct vm_area_struct **vmas,
 					  unsigned int gup_flags)
 {
-	unsigned long i;
-	bool drain_allow = true;
+	unsigned long i, isolation_error_count;
+	bool drain_allow;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
@@ -1567,6 +1567,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 
 check_again:
 	prev_head = NULL;
+	isolation_error_count = 0;
+	drain_allow = true;
 	for (i = 0; i < nr_pages; i++) {
 		head = compound_head(pages[i]);
 		if (head == prev_head)
@@ -1578,25 +1580,35 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 * of the CMA zone if possible.
 		 */
 		if (is_migrate_cma_page(head)) {
-			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
-			else {
+			if (PageHuge(head)) {
+				if (!isolate_huge_page(head, &cma_page_list))
+					isolation_error_count++;
+			} else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
 				}
 
-				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
-					mod_node_page_state(page_pgdat(head),
-							    NR_ISOLATED_ANON +
-							    page_is_file_lru(head),
-							    thp_nr_pages(head));
+				if (isolate_lru_page(head)) {
+					isolation_error_count++;
+					continue;
 				}
+				list_add_tail(&head->lru, &cma_page_list);
+				mod_node_page_state(page_pgdat(head),
+						    NR_ISOLATED_ANON +
+						    page_is_file_lru(head),
+						    thp_nr_pages(head));
 			}
 		}
 	}
 
+	/*
+	 * If list is empty, and no isolation errors, means that all pages are
+	 * in the correct zone.
+	 */
+	if (list_empty(&cma_page_list) && !isolation_error_count)
+		return ret;
+
 	if (!list_empty(&cma_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
@@ -1616,23 +1628,19 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			return ret > 0 ? -ENOMEM : ret;
 		}
 
-		/*
-		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
-		 */
-		ret = __get_user_pages_locked(mm, start, nr_pages,
-					      pages, vmas, NULL,
-					      gup_flags);
-
-		if (ret > 0) {
-			nr_pages = ret;
-			drain_allow = true;
-			goto check_again;
-		}
+		/* We unpinned pages before migration, pin them again */
+		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					      NULL, gup_flags);
+		if (ret <= 0)
+			return ret;
+		nr_pages = ret;
 	}
 
-	return ret;
+	/*
+	 * check again because pages were unpinned, and we also might have
+	 * had isolation errors and need more pages to migrate.
+	 */
+	goto check_again;
 }
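The resulting control flow is easiest to see stripped of kernel detail.
Below is a hypothetical user-space model of the loop; is_pinnable(),
try_isolate(), migrate_all(), unpin_all() and repin_all() are stand-ins
for the kernel helpers, not real APIs:

#include <stdbool.h>

bool is_pinnable(void *page);
bool try_isolate(void *page);
long migrate_all(void);
void unpin_all(void **pages, long nr);
long repin_all(void **pages, long nr);

long pin_longterm_model(void **pages, long nr_pages)
{
	for (;;) {
		long isolation_errors = 0, isolated = 0;

		for (long i = 0; i < nr_pages; i++) {
			if (is_pinnable(pages[i]))
				continue;
			if (try_isolate(pages[i]))
				isolated++;
			else
				isolation_errors++;	/* transient: retry */
		}
		if (!isolated && !isolation_errors)
			return nr_pages;	/* everything is pinnable */

		unpin_all(pages, nr_pages);
		if (isolated) {
			long err = migrate_all();
			if (err)
				return err;	/* hard failure, nothing pinned */
		}
		nr_pages = repin_all(pages, nr_pages);
		if (nr_pages <= 0)
			return nr_pages;
	}
}

Success is declared only on a pass that finds nothing to isolate and no
isolation errors; every other outcome loops or fails cleanly.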
From patchwork Mon Feb 1 15:38:18 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059361
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 05/14] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
Date: Mon, 1 Feb 2021 10:38:18 -0500
Message-Id: <20210201153827.444374-6-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
return pages that might belong to the CMA region. This is currently
used for long-term gup to make sure that such pins are not going to be
done on any CMA pages.

When PF_MEMALLOC_NOCMA was introduced, we did not realize that it
focuses too narrowly on CMA pages and that there is a larger class of
pages that needs the same treatment: the MOVABLE zone cannot contain
any long-term pins either, so it makes sense to reuse and redefine this
flag for that use case as well. Rename the flag to PF_MEMALLOC_PIN,
which defines an allocation context that can only get pages suitable
for long-term pins.

Also rename memalloc_nocma_save()/memalloc_nocma_restore() to
memalloc_pin_save()/memalloc_pin_restore() and make the new functions
common.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 include/linux/sched.h    |  2 +-
 include/linux/sched/mm.h | 21 +++++----------------
 mm/gup.c                 |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/page_alloc.c          |  4 ++--
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 763b15dd6a61..2589ee67b55c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1576,7 +1576,7 @@ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000	/* Early kill for mce process policy */
-#define PF_MEMALLOC_NOCMA	0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
+#define PF_MEMALLOC_PIN		0x10000000	/* Allocation context constrained to zones which allow long term pinning. */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
 #define PF_SUSPEND_TASK		0x80000000	/* This thread called freeze_processes() and should not be frozen */
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 1ae08b8462a4..5f4dd3274734 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -270,29 +270,18 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }
 
-#ifdef CONFIG_CMA
-static inline unsigned int memalloc_nocma_save(void)
+static inline unsigned int memalloc_pin_save(void)
 {
-	unsigned int flags = current->flags & PF_MEMALLOC_NOCMA;
+	unsigned int flags = current->flags & PF_MEMALLOC_PIN;
 
-	current->flags |= PF_MEMALLOC_NOCMA;
+	current->flags |= PF_MEMALLOC_PIN;
 	return flags;
 }
 
-static inline void memalloc_nocma_restore(unsigned int flags)
+static inline void memalloc_pin_restore(unsigned int flags)
 {
-	current->flags = (current->flags & ~PF_MEMALLOC_NOCMA) | flags;
+	current->flags = (current->flags & ~PF_MEMALLOC_PIN) | flags;
 }
-#else
-static inline unsigned int memalloc_nocma_save(void)
-{
-	return 0;
-}
-
-static inline void memalloc_nocma_restore(unsigned int flags)
-{
-}
-#endif
 
 #ifdef CONFIG_MEMCG
 DECLARE_PER_CPU(struct mem_cgroup *, int_active_memcg);
diff --git a/mm/gup.c b/mm/gup.c
index 7ecca2d66dff..857b273e32ac 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1669,7 +1669,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 	long rc;
 
 	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_nocma_save();
+		flags = memalloc_pin_save();
 
 	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				     NULL, gup_flags);
@@ -1678,7 +1678,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		if (rc > 0)
 			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 							 vmas, gup_flags);
-		memalloc_nocma_restore(flags);
+		memalloc_pin_restore(flags);
 	}
 	return rc;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a3e4fa2c5e94..8499ec73c3c7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1044,10 +1044,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
-	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (nocma && is_migrate_cma_page(page))
+		if (pin && is_migrate_cma_page(page))
 			continue;
 
 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6446778cbc6b..39f46fe122b7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3814,8 +3814,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 #ifdef CONFIG_CMA
 	unsigned int pflags = current->flags;
 
-	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (!(pflags & PF_MEMALLOC_PIN) &&
+			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 #endif
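Usage stays symmetrical with the other memalloc scopes. The sketch
below mirrors what __gup_longterm_locked() does internally for
FOLL_LONGTERM; writing it by hand like this is purely illustrative,
since pin_user_pages() with FOLL_LONGTERM already brackets the region
for you:

static long longterm_pin_sketch(unsigned long start,
				unsigned long nr_pages,
				struct page **pages)
{
	unsigned int flags = memalloc_pin_save();
	long rc;

	/* allocations in this scope must avoid CMA pageblocks */
	rc = pin_user_pages(start, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
			    pages, NULL);

	memalloc_pin_restore(flags);
	return rc;
}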
From patchwork Mon Feb 1 15:38:19 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059355
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 06/14] mm: apply per-task gfp constraints in fast path
Date: Mon, 1 Feb 2021 10:38:19 -0500
Message-Id: <20210201153827.444374-7-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

current_gfp_context() is called only after the fast path. However, we
will soon add more constraints that also limit zones based on context.
Move this call into the fast path, and apply the correct constraints to
all allocations.

Also update .reclaim_idx based on the value returned by
current_gfp_context(), because it will soon modify the allowed zones.

Note: with this patch we do one extra current->flags load during the
fast path, but we already load current->flags in the fast path:

__alloc_pages_nodemask()
 prepare_alloc_pages()
  current_alloc_flags(gfp_mask, *alloc_flags);

Later, when we add the zone constraint logic to current_gfp_context(),
we will be able to remove the current->flags load from
current_alloc_flags() and therefore return the fast path to the current
performance level.

Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/page_alloc.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 39f46fe122b7..a068e8295931 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4982,6 +4982,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}
 
 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4997,13 +5004,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;
 
 	/*
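Conceptually, the ordering after this patch looks like the sketch below
(names abbreviated; this is not the literal allocator code, and the
real __alloc_pages_nodemask() applies the scoping itself): the scoped
mask is computed once, before the first attempt, so the fast path and
the slow path see the same constraints.

static struct page *alloc_pages_sketch(gfp_t gfp_mask,
				       unsigned int order)
{
	/* apply NOFS/NOIO (and, later in this series, PIN) task scoping */
	gfp_mask = current_gfp_context(gfp_mask);

	/* the fast path (get_page_from_freelist) and the slow path both
	 * run with the already-constrained mask from here on */
	return __alloc_pages_nodemask(gfp_mask, order, numa_node_id(),
				      NULL);
}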
From patchwork Mon Feb 1 15:38:20 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059305
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 07/14] mm: honor PF_MEMALLOC_PIN for all movable pages
Date: Mon, 1 Feb 2021 10:38:20 -0500
Message-Id: <20210201153827.444374-8-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_PIN is only honored for CMA pages; extend this flag to work
for any allocation from ZONE_MOVABLE by removing __GFP_MOVABLE from
gfp_mask when this flag is set in the current context.

Add is_pinnable_page() to return true if a page is pinnable. A pinnable
page is not in ZONE_MOVABLE and not of MIGRATE_CMA type.

Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 include/linux/mm.h       | 11 +++++++++++
 include/linux/sched/mm.h |  6 +++++-
 mm/hugetlb.c             |  2 +-
 mm/page_alloc.c          | 20 +++++++++-----------
 4 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fee43eb43309..db228aa8d9f7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1122,6 +1122,17 @@ static inline bool is_zone_device_page(const struct page *page)
 }
 #endif
 
+static inline bool is_zone_movable_page(const struct page *page)
+{
+	return page_zonenum(page) == ZONE_MOVABLE;
+}
+
+/* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
+static inline bool is_pinnable_page(struct page *page)
+{
+	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+}
+
 #ifdef CONFIG_DEV_PAGEMAP_OPS
 void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 5f4dd3274734..a55277b0d475 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -150,12 +150,13 @@ static inline bool in_vfork(struct task_struct *tsk)
  * Applies per-task gfp context to the given allocation flags.
  * PF_MEMALLOC_NOIO implies GFP_NOIO
  * PF_MEMALLOC_NOFS implies GFP_NOFS
+ * PF_MEMALLOC_PIN  implies !GFP_MOVABLE
  */
 static inline gfp_t current_gfp_context(gfp_t flags)
 {
 	unsigned int pflags = READ_ONCE(current->flags);
 
-	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS))) {
+	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN))) {
 		/*
 		 * NOIO implies both NOIO and NOFS and it is a weaker context
 		 * so always make sure it makes precedence
@@ -164,6 +165,9 @@ static inline gfp_t current_gfp_context(gfp_t flags)
 			flags &= ~(__GFP_IO | __GFP_FS);
 		else if (pflags & PF_MEMALLOC_NOFS)
 			flags &= ~__GFP_FS;
+
+		if (pflags & PF_MEMALLOC_PIN)
+			flags &= ~__GFP_MOVABLE;
 	}
 	return flags;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8499ec73c3c7..32261c957ddf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1047,7 +1047,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (pin && is_migrate_cma_page(page))
+		if (pin && !is_pinnable_page(page))
 			continue;
 
 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a068e8295931..ad3ed3ec4dd5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3808,16 +3808,13 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 	return alloc_flags;
 }
 
-static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
-					unsigned int alloc_flags)
+/* Must be called after current_gfp_context() which can change gfp_mask */
+static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
+						  unsigned int alloc_flags)
 {
 #ifdef CONFIG_CMA
-	unsigned int pflags = current->flags;
-
-	if (!(pflags & PF_MEMALLOC_PIN) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
-
 #endif
 	return alloc_flags;
 }
@@ -4473,7 +4470,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;
 
-	alloc_flags = current_alloc_flags(gfp_mask, alloc_flags);
+	alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, alloc_flags);
 
 	return alloc_flags;
 }
@@ -4775,7 +4772,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
 	if (reserve_flags)
-		alloc_flags = current_alloc_flags(gfp_mask, reserve_flags);
+		alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, reserve_flags);
 
 	/*
 	 * Reset the nodemask and zonelist iterators if memory policies can be
@@ -4944,7 +4941,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;
 
-	*alloc_flags = current_alloc_flags(gfp_mask, *alloc_flags);
+	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);
 
 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -4986,7 +4983,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
 	 * resp. GFP_NOIO which has to be inherited for all allocation requests
 	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
+	 * memalloc_no{fs,io}_{save,restore}. And PF_MEMALLOC_PIN which ensures
+	 * movable zones are not used during allocation.
 	 */
 	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
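Reduced to its core, the new behavior of current_gfp_context() inside a
pinned scope is (an illustrative reduction, not the full function):

static gfp_t apply_pin_scope(gfp_t flags, unsigned int pflags)
{
	if (pflags & PF_MEMALLOC_PIN)
		flags &= ~__GFP_MOVABLE;	/* never satisfy from ZONE_MOVABLE */
	return flags;
}

Combined with is_pinnable_page(), allocation and pinning now agree on
the same definition of a pinnable zone.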
From patchwork Mon Feb 1 15:38:21 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059351
From: Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v9 08/14] mm/gup: do not migrate zero page
Date: Mon, 1 Feb 2021 10:38:21 -0500
Message-Id: <20210201153827.444374-9-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>

On some platforms ZERO_PAGE(0) might end up in a movable zone. Do not
migrate the zero page in gup during long-term pinning, as migration of
the zero page is not allowed.

For example, in x86 QEMU with 16G of memory and the kernelcore=5G
parameter, I see the following:

Boot#1: zero_pfn  0x48a8d	zero_pfn zone: ZONE_DMA32
Boot#2: zero_pfn 0x20168d	zero_pfn zone: ZONE_MOVABLE

On x86, empty_zero_page is declared in .bss and, depending on the
loader, may end up at different physical locations on different boots.

Also, move the is_zero_pfn() and my_zero_pfn() functions under
CONFIG_MMU, because the zero_pfn they use is declared in memory.c,
which is compiled only with CONFIG_MMU.

Signed-off-by: Pavel Tatashin
---
 include/linux/mm.h      | 3 ++-
 include/linux/mmzone.h  | 4 ++++
 include/linux/pgtable.h | 3 +--
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index db228aa8d9f7..67716df9fe1f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1130,7 +1130,8 @@ static inline bool is_zone_movable_page(const struct page *page)
 /* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
+		is_zero_pfn(page_to_pfn(page));
 }
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 87a7f9e2d1c2..aacbed98a1ed 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -427,6 +427,10 @@ enum zone_type {
 	 *    techniques might use alloc_contig_range() to hide previously
 	 *    exposed pages from the buddy again (e.g., to implement some sort
 	 *    of memory unplug in virtio-mem).
+	 * 6. ZERO_PAGE(0), kernelcore/movablecore setups might create
+	 *    situations where ZERO_PAGE(0) which is allocated differently
+	 *    on different platforms may end up in a movable zone. ZERO_PAGE(0)
+	 *    cannot be migrated.
 	 *
 	 * In general, no unmovable allocations that degrade memory offlining
 	 * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1d3087753426..bad0f417adb3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1118,6 +1118,7 @@ extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 extern void untrack_pfn_moved(struct vm_area_struct *vma);
 #endif
 
+#ifdef CONFIG_MMU
 #ifdef __HAVE_COLOR_ZERO_PAGE
 static inline int is_zero_pfn(unsigned long pfn)
 {
@@ -1142,8 +1143,6 @@ static inline unsigned long my_zero_pfn(unsigned long addr)
 }
 #endif
 
-#ifdef CONFIG_MMU
-
 #ifndef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_trans_huge(pmd_t pmd)
 {
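Restated in plain form, the predicate after this change reads as below
(an illustrative rewrite, not the patch text): the zero page is exempt
because it is shared, never migrated, and its physical placement is an
accident of where the loader put .bss.

static bool is_pinnable_sketch(struct page *page)
{
	if (is_zero_pfn(page_to_pfn(page)))
		return true;	/* shared zero page is never migrated */
	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
}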
From patchwork Mon Feb 1 15:38:22 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059309
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 09/14] mm/gup: migrate pinned pages out of movable zone
Date: Mon, 1 Feb 2021 10:38:22 -0500
Message-Id: <20210201153827.444374-10-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

We should not pin pages in ZONE_MOVABLE. Currently, CMA pages are the only
movable pages we avoid pinning in place. Generalize the function that migrates
CMA pages so that it migrates all movable pages, and use is_pinnable_page() to
check which pages need to be migrated.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         |  9 ++++-
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 67 +++++++++++++++++-----------------
 4 files changed, 44 insertions(+), 36 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..fdf65f23acec 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aacbed98a1ed..9771edb2f560 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -407,8 +407,13 @@ enum zone_type {
	 * to increase the number of THP/huge pages. Notable special cases are:
	 *
	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 *    essentially turn such pages unmovable. Therefore, we do not allow
+	 *    pinning long-term pages in ZONE_MOVABLE. When pages are pinned and
+	 *    faulted, they come from the right zone right away. However, it is
+	 *    still possible that address space already has pages in
+	 *    ZONE_MOVABLE at the time when pages are pinned (i.e. user has
+	 *    touched that memory before pinning). In such case we migrate them
+	 *    to a different zone. When migration fails - pinning fails.
	 * 2. memblock allocations: kernelcore/movablecore setups might create
	 *    situations where ZONE_MOVABLE contains unmovable allocations
	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace
diff --git a/mm/gup.c b/mm/gup.c
index 857b273e32ac..df29825305f8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -89,11 +89,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 	int orig_refs = refs;
 
	/*
-	 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-	 * path, so fail and let the caller fall back to the slow path.
+	 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+	 * right zone, so fail and let the caller fall back to the slow
+	 * path.
	 */
-	if (unlikely(flags & FOLL_LONGTERM) &&
-	    is_migrate_cma_page(page))
+	if (unlikely((flags & FOLL_LONGTERM) &&
+		     !is_pinnable_page(page)))
 		return NULL;
 
	/*
@@ -1547,17 +1548,17 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+#ifdef CONFIG_MIGRATION
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i, isolation_error_count;
 	bool drain_allow;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
@@ -1575,13 +1576,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			continue;
 		prev_head = head;
		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head)) {
-				if (!isolate_huge_page(head, &cma_page_list))
+				if (!isolate_huge_page(head, &movable_page_list))
 					isolation_error_count++;
 			} else {
 				if (!PageLRU(head) && drain_allow) {
@@ -1593,7 +1593,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					isolation_error_count++;
 					continue;
 				}
-				list_add_tail(&head->lru, &cma_page_list);
+				list_add_tail(&head->lru, &movable_page_list);
 				mod_node_page_state(page_pgdat(head),
 						    NR_ISOLATED_ANON +
 						    page_is_file_lru(head),
@@ -1606,10 +1606,10 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
	 * If list is empty, and no isolation errors, means that all pages are
	 * in the correct zone.
	 */
-	if (list_empty(&cma_page_list) && !isolation_error_count)
+	if (list_empty(&movable_page_list) && !isolation_error_count)
 		return ret;
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
		/*
		 * drop the above get_user_pages reference.
		 */
@@ -1619,12 +1619,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		ret = migrate_pages(&cma_page_list, alloc_migration_target,
+		ret = migrate_pages(&movable_page_list, alloc_migration_target,
 				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
-				    MR_CONTIG_RANGE);
+				    MR_LONGTERM_PIN);
 		if (ret) {
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 			return ret > 0 ? -ENOMEM : ret;
 		}
 
@@ -1643,16 +1643,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	goto check_again;
 }
 #else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	return nr_pages;
 }
-#endif /* CONFIG_CMA */
+#endif /* CONFIG_MIGRATION */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1676,8 +1676,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 
 	if (gup_flags & FOLL_LONGTERM) {
 		if (rc > 0)
-			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-							 vmas, gup_flags);
+			rc = check_and_migrate_movable_pages(mm, start, rc,
+							     pages, vmas,
+							     gup_flags);
 		memalloc_pin_restore(flags);
 	}
 	return rc;
}
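To summarize the policy this patch implements, here is a self-contained model in
plain C (an editorial sketch; every name below is invented for illustration and
none of it is kernel code): pinnable pages are pinned as-is, movable pages must
first be migrated, and the pin fails when migration fails.

#include <stdbool.h>
#include <stdio.h>

enum zone { NORMAL, MOVABLE };

struct toy_page {
	enum zone zone;
	bool cma;
	bool migration_would_succeed;	/* stand-in for migrate_pages() */
};

/* Mirrors the spirit of is_pinnable_page(): neither ZONE_MOVABLE nor CMA. */
static bool pinnable(const struct toy_page *p)
{
	return p->zone != MOVABLE && !p->cma;
}

static int longterm_pin(struct toy_page *p)
{
	if (pinnable(p))
		return 0;			/* pin in place */
	if (!p->migration_would_succeed)
		return -1;			/* -ENOMEM in the kernel */
	p->zone = NORMAL;			/* migrated to a pinnable zone */
	p->cma = false;
	return 0;
}

int main(void)
{
	struct toy_page movable = { MOVABLE, false, true };
	struct toy_page stuck   = { MOVABLE, false, false };

	printf("movable page: %s\n", longterm_pin(&movable) ? "pin fails" : "pinned after migration");
	printf("stuck page:   %s\n", longterm_pin(&stuck) ? "pin fails" : "pinned");
	return 0;
}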
From patchwork Mon Feb 1 15:38:23 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059353
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 10/14] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning
Date: Mon, 1 Feb 2021 10:38:23 -0500
Message-Id: <20210201153827.444374-11-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

Document the special handling of page pinning when ZONE_MOVABLE is present.

Signed-off-by: Pavel Tatashin
Suggested-by: David Hildenbrand
Acked-by: Michal Hocko
---
 Documentation/admin-guide/mm/memory-hotplug.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b..c6618f99f765 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -357,6 +357,15 @@ creates ZONE_MOVABLE as following.
 Unfortunately, there is no information to show which memory block belongs
 to ZONE_MOVABLE. This is TBD.
 
+.. note::
+   Techniques that rely on long-term pinnings of memory (especially, RDMA and
+   vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
+   hot remove. Pinned pages cannot reside on ZONE_MOVABLE, to guarantee that
+   memory can still get hot removed - be aware that pinning can fail even if
+   there is plenty of free memory in ZONE_MOVABLE. In addition, using
+   ZONE_MOVABLE might make page pinning more expensive, because pages have to be
+   migrated off that zone first.
+
 .. _memory_hotplug_how_to_offline_memory:
 
 How to offline memory

From patchwork Mon Feb 1 15:38:24 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059307
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 11/14] mm/gup: change index type to long as it counts pages
Date: Mon, 1 Feb 2021 10:38:24 -0500
Message-Id: <20210201153827.444374-12-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

In __get_user_pages_locked(), the index i counts pages and therefore should be
a long: long is used everywhere else to hold page counts, and a 32-bit int is
increasingly too small for values proportional to the number of pages.

Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index df29825305f8..f98af75dab0f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1479,7 +1479,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 {
 	struct vm_area_struct *vma;
 	unsigned long vm_flags;
-	int i;
+	long i;
 
	/* calculate required read or write permissions.
	 * If FOLL_FORCE is set, we only require the "MAY" flags.
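For a sense of scale, a quick editorial sketch (assuming 4 KiB pages and a
typical LP64 target): the page count of a 16 TiB range already overflows a
32-bit int, while a long holds it comfortably.

#include <stdio.h>

int main(void)
{
	unsigned long bytes = 16UL << 40;	/* a 16 TiB range */
	unsigned long pages = bytes >> 12;	/* 4 KiB pages: 2^32 of them */

	/* 2^32 does not fit in a 32-bit int; the conversion truncates. */
	int  as_int  = (int)pages;
	long as_long = (long)pages;

	printf("pages = %lu, as int = %d, as long = %ld\n",
	       pages, as_int, as_long);
	return 0;
}

On such a system the int result truncates (typically to 0), which is exactly
the class of bug the type change avoids.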
From patchwork Mon Feb 1 15:38:25 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059359
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 12/14] mm/gup: longterm pin migration cleanup
Date: Mon, 1 Feb 2021 10:38:25 -0500
Message-Id: <20210201153827.444374-13-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

When pages are long-term pinned, we must migrate them out of the movable zone.
The function that migrates them contains a hidden loop built around a goto: it
retries on isolation failures, and again after a successful migration. Clean
this up by moving the loop into the caller.

Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 93 ++++++++++++++++++++++----------------------------------
 1 file changed, 37 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f98af75dab0f..fabfe2a5c627 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1549,27 +1549,28 @@ struct page *get_dump_page(unsigned long addr)
 #endif /* CONFIG_ELF_CORE */
 
 #ifdef CONFIG_MIGRATION
-static long check_and_migrate_movable_pages(struct mm_struct *mm,
-					    unsigned long start,
-					    unsigned long nr_pages,
+/*
+ * Check whether all pages are pinnable, if so return number of pages.  If some
+ * pages are not pinnable, migrate them, and unpin all pages. Return zero if
+ * pages were migrated, or if some pages were not successfully isolated.
+ * Return negative error if migration fails.
+ */
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages,
-					    struct vm_area_struct **vmas,
 					    unsigned int gup_flags)
 {
-	unsigned long i, isolation_error_count;
-	bool drain_allow;
+	unsigned long i;
+	unsigned long isolation_error_count = 0;
+	bool drain_allow = true;
 	LIST_HEAD(movable_page_list);
-	long ret = nr_pages;
-	struct page *prev_head, *head;
+	long ret = 0;
+	struct page *prev_head = NULL;
+	struct page *head;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
-check_again:
-	prev_head = NULL;
-	isolation_error_count = 0;
-	drain_allow = true;
 	for (i = 0; i < nr_pages; i++) {
 		head = compound_head(pages[i]);
 		if (head == prev_head)
@@ -1607,47 +1608,27 @@ static long check_and_migrate_movable_pages(struct mm_struct *mm,
	 * in the correct zone.
	 */
 	if (list_empty(&movable_page_list) && !isolation_error_count)
-		return ret;
+		return nr_pages;
 
+	if (gup_flags & FOLL_PIN) {
+		unpin_user_pages(pages, nr_pages);
+	} else {
+		for (i = 0; i < nr_pages; i++)
+			put_page(pages[i]);
+	}
 	if (!list_empty(&movable_page_list)) {
-		/*
-		 * drop the above get_user_pages reference.
-		 */
-		if (gup_flags & FOLL_PIN)
-			unpin_user_pages(pages, nr_pages);
-		else
-			for (i = 0; i < nr_pages; i++)
-				put_page(pages[i]);
-
 		ret = migrate_pages(&movable_page_list, alloc_migration_target,
 				    NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				    MR_LONGTERM_PIN);
-		if (ret) {
-			if (!list_empty(&movable_page_list))
-				putback_movable_pages(&movable_page_list);
-			return ret > 0 ? -ENOMEM : ret;
-		}
-
-		/* We unpinned pages before migration, pin them again */
-		ret = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
-					      NULL, gup_flags);
-		if (ret <= 0)
-			return ret;
-		nr_pages = ret;
+		if (ret && !list_empty(&movable_page_list))
+			putback_movable_pages(&movable_page_list);
 	}
 
-	/*
-	 * check again because pages were unpinned, and we also might have
-	 * had isolation errors and need more pages to migrate.
-	 */
-	goto check_again;
+	return ret > 0 ? -ENOMEM : ret;
 }
 #else
-static long check_and_migrate_movable_pages(struct mm_struct *mm,
-					    unsigned long start,
-					    unsigned long nr_pages,
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages,
-					    struct vm_area_struct **vmas,
 					    unsigned int gup_flags)
 {
 	return nr_pages;
@@ -1665,22 +1646,22 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 				  struct vm_area_struct **vmas,
 				  unsigned int gup_flags)
 {
-	unsigned long flags = 0;
+	unsigned int flags;
 	long rc;
 
-	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_pin_save();
-
-	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas, NULL,
-				     gup_flags);
+	if (!(gup_flags & FOLL_LONGTERM))
+		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					       NULL, gup_flags);
+	flags = memalloc_pin_save();
+	do {
+		rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
+					     NULL, gup_flags);
+		if (rc <= 0)
+			break;
+		rc = check_and_migrate_movable_pages(rc, pages, gup_flags);
+	} while (!rc);
+	memalloc_pin_restore(flags);
 
-	if (gup_flags & FOLL_LONGTERM) {
-		if (rc > 0)
-			rc = check_and_migrate_movable_pages(mm, start, rc,
-							     pages, vmas,
-							     gup_flags);
-		memalloc_pin_restore(flags);
-	}
 	return rc;
 }
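The resulting control flow is easy to model in isolation. A minimal editorial
sketch of the caller-owned retry loop (the two stand-in functions below are
illustrative, not the kernel implementations): the checker either accepts the
pages (> 0), requests a retry after migrating (0), or fails (< 0).

#include <stdio.h>

/* Stand-in for __get_user_pages_locked(): always pins nr pages. */
static long pin_pages(long nr)
{
	return nr;
}

/* Stand-in for check_and_migrate_movable_pages(): the first call migrates
 * (returns 0 to request a retry), the second call accepts the pages. */
static long check_pages(long nr)
{
	static int calls;
	return calls++ ? nr : 0;
}

static long longterm_pin(long nr_pages)
{
	long rc;

	do {
		rc = pin_pages(nr_pages);
		if (rc <= 0)
			break;			/* pin error, give up */
		rc = check_pages(rc);		/* 0 means: migrated, retry */
	} while (!rc);

	return rc;
}

int main(void)
{
	printf("pinned %ld pages after one retry\n", longterm_pin(512));
	return 0;
}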
From patchwork Mon Feb 1 15:38:26 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059357
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 13/14] selftests/vm: gup_test: fix test flag
Date: Mon, 1 Feb 2021 10:38:26 -0500
Message-Id: <20210201153827.444374-14-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

In gup_test, both gup_flags and test_flags use the same flags field. This is
broken. Further, in the actual gup_test.c, all of the passed gup_flags are
erased and unconditionally replaced with FOLL_WRITE, which means that
test_flags are ignored and code like this always performs the pin dump test:

155 		if (gup->flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
156 			nr = pin_user_pages(addr, nr, gup->flags,
157 					    pages + i, NULL);
158 		else
159 			nr = get_user_pages(addr, nr, gup->flags,
160 					    pages + i, NULL);
161 		break;

Add a new test_flags field to allow raw gup_flags to work, and add a new
subcommand for DUMP_USER_PAGES_TEST to specify that the pin test should be
performed.

Remove the unconditional overwriting of gup_flags via FOLL_WRITE, but preserve
the previous behaviour where FOLL_WRITE was the default flag, and add a new
option "-W" to unset FOLL_WRITE. Rename flags to gup_flags.

With the fix, dump works like this:

root@virtme:/# gup_test -c
---- page #0, starting from user virt addr: 0x7f8acb9e4000
page:00000000d3d2ee27 refcount:2 mapcount:1 mapping:0000000000000000 index:0x0 pfn:0x100bcf
anon flags: 0x300000000080016(referenced|uptodate|lru|swapbacked)
raw: 0300000000080016 ffffd0e204021608 ffffd0e208df2e88 ffff8ea04243ec61
raw: 0000000000000000 0000000000000000 0000000200000000 0000000000000000
page dumped because: gup_test: dump_pages() test
DUMP_USER_PAGES_TEST: done

root@virtme:/# gup_test -c -p
---- page #0, starting from user virt addr: 0x7fd19701b000
page:00000000baed3c7d refcount:1025 mapcount:1 mapping:0000000000000000 index:0x0 pfn:0x108008
anon flags: 0x300000000080014(uptodate|lru|swapbacked)
raw: 0300000000080014 ffffd0e204200188 ffffd0e205e09088 ffff8ea04243ee71
raw: 0000000000000000 0000000000000000 0000040100000000 0000000000000000
page dumped because: gup_test: dump_pages() test
DUMP_USER_PAGES_TEST: done

The refcount shows the difference between the pin and no-pin cases. Also
change the type of nr from int to long, as it counts a number of pages.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 mm/gup_test.c                         | 23 ++++++++++-------------
 mm/gup_test.h                         |  3 ++-
 tools/testing/selftests/vm/gup_test.c | 15 +++++++++++----
 3 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/mm/gup_test.c b/mm/gup_test.c
index e3cf78e5873e..a6ed1c877679 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -94,7 +94,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 {
 	ktime_t start_time, end_time;
 	unsigned long i, nr_pages, addr, next;
-	int nr;
+	long nr;
 	struct page **pages;
 	int ret = 0;
 	bool needs_mmap_lock =
@@ -126,37 +126,34 @@ static int __gup_test_ioctl(unsigned int cmd,
 			nr = (next - addr) / PAGE_SIZE;
 		}
 
-		/* Filter out most gup flags: only allow a tiny subset here: */
-		gup->flags &= FOLL_WRITE;
-
 		switch (cmd) {
 		case GUP_FAST_BENCHMARK:
-			nr = get_user_pages_fast(addr, nr, gup->flags,
+			nr = get_user_pages_fast(addr, nr, gup->gup_flags,
 						 pages + i);
 			break;
 		case GUP_BASIC_TEST:
-			nr = get_user_pages(addr, nr, gup->flags, pages + i,
+			nr = get_user_pages(addr, nr, gup->gup_flags, pages + i,
 					    NULL);
 			break;
 		case PIN_FAST_BENCHMARK:
-			nr = pin_user_pages_fast(addr, nr, gup->flags,
+			nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
 						 pages + i);
 			break;
 		case PIN_BASIC_TEST:
-			nr = pin_user_pages(addr, nr, gup->flags, pages + i,
+			nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i,
 					    NULL);
 			break;
 		case PIN_LONGTERM_BENCHMARK:
 			nr = pin_user_pages(addr, nr,
-					    gup->flags | FOLL_LONGTERM,
+					    gup->gup_flags | FOLL_LONGTERM,
 					    pages + i, NULL);
 			break;
 		case DUMP_USER_PAGES_TEST:
-			if (gup->flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
-				nr = pin_user_pages(addr, nr, gup->flags,
+			if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
+				nr = pin_user_pages(addr, nr, gup->gup_flags,
 						    pages + i, NULL);
 			else
-				nr = get_user_pages(addr, nr, gup->flags,
+				nr = get_user_pages(addr, nr, gup->gup_flags,
 						    pages + i, NULL);
 			break;
 		default:
@@ -187,7 +184,7 @@ static int __gup_test_ioctl(unsigned int cmd,
 
 	start_time = ktime_get();
 
-	put_back_pages(cmd, pages, nr_pages, gup->flags);
+	put_back_pages(cmd, pages, nr_pages, gup->test_flags);
 
 	end_time = ktime_get();
 	gup->put_delta_usec = ktime_us_delta(end_time, start_time);
diff --git a/mm/gup_test.h b/mm/gup_test.h
index 90a6713d50eb..887ac1d5f5bc 100644
--- a/mm/gup_test.h
+++ b/mm/gup_test.h
@@ -21,7 +21,8 @@ struct gup_test {
 	__u64 addr;
 	__u64 size;
 	__u32 nr_pages_per_call;
-	__u32 flags;
+	__u32 gup_flags;
+	__u32 test_flags;
	/*
	 * Each non-zero entry is the number of the page (1-based: first page is
	 * page 1, so that zero entries mean "do nothing") from the .addr base.
diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 6c6336dd3b7f..943cc2608dc2 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -37,13 +37,13 @@ int main(int argc, char **argv)
 {
 	struct gup_test gup = { 0 };
 	unsigned long size = 128 * MB;
-	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 0;
+	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
 	unsigned long cmd = GUP_FAST_BENCHMARK;
 	int flags = MAP_PRIVATE;
 	char *file = "/dev/zero";
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwSH")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHp")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -65,9 +65,13 @@ int main(int argc, char **argv)
			 */
 			gup.which_pages[0] = 1;
 			break;
+		case 'p':
+			/* works only with DUMP_USER_PAGES_TEST */
+			gup.test_flags |= GUP_TEST_FLAG_DUMP_PAGES_USE_PIN;
+			break;
 		case 'F':
 			/* strtol, so you can pass flags in hex form */
-			gup.flags = strtol(optarg, 0, 0);
+			gup.gup_flags = strtol(optarg, 0, 0);
 			break;
 		case 'm':
 			size = atoi(optarg) * MB;
@@ -93,6 +97,9 @@ int main(int argc, char **argv)
 		case 'w':
 			write = 1;
 			break;
+		case 'W':
+			write = 0;
+			break;
 		case 'f':
 			file = optarg;
 			break;
@@ -140,7 +147,7 @@ int main(int argc, char **argv)
 	gup.nr_pages_per_call = nr_pages;
 	if (write)
-		gup.flags |= FOLL_WRITE;
+		gup.gup_flags |= FOLL_WRITE;
 
 	fd = open("/sys/kernel/debug/gup_test", O_RDWR);
 	if (fd == -1) {
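The underlying problem is a plain bit collision in the shared field. A small
editorial demonstration (FOLL_WRITE really is 0x01; the test flag is assumed
here to share bit 0, which is what the always-taken pin-dump branch implies):

#include <stdio.h>

#define FOLL_WRITE			0x01	/* real gup flag value */
#define TEST_FLAG_DUMP_PAGES_USE_PIN	0x01	/* assumed to collide on bit 0 */

int main(void)
{
	/* The caller asked for an ordinary writable gup, not a pin test. */
	unsigned int flags = FOLL_WRITE;

	/* With one shared field, the test-flag check cannot tell them apart. */
	if (flags & TEST_FLAG_DUMP_PAGES_USE_PIN)
		puts("bogus: pin-dump path taken because bit 0 collides");

	/* Separate fields, as in the patch, make the intent unambiguous. */
	struct { unsigned int gup_flags, test_flags; } gup = { FOLL_WRITE, 0 };

	if (!(gup.test_flags & TEST_FLAG_DUMP_PAGES_USE_PIN))
		puts("fixed: gup_flags no longer leak into test decisions");

	return 0;
}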
From patchwork Mon Feb 1 15:38:27 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12059349
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org, ira.weiny@intel.com, linux-kselftest@vger.kernel.org, jmorris@namei.org
Subject: [PATCH v9 14/14] selftests/vm: gup_test: test faulting in kernel, and verify pinnable pages
Date: Mon, 1 Feb 2021 10:38:27 -0500
Message-Id: <20210201153827.444374-15-pasha.tatashin@soleen.com>
In-Reply-To: <20210201153827.444374-1-pasha.tatashin@soleen.com>
References: <20210201153827.444374-1-pasha.tatashin@soleen.com>

When pages are pinned, they can be faulted in from userland and migrated, or
they can be faulted in directly in the kernel without migration. In either
case, the pinned pages must end up being pinnable (not movable).

Add a new test to gup_test to help verify that the gup/pup (get_user_pages() /
pin_user_pages()) behavior with respect to pinnable and movable pages is
reasonable and correct. Specifically, provide a way to:

1) Verify that only "pinnable" pages are pinned. This is checked
   automatically for you.

2) Verify that gup/pup performance is reasonable. This requires comparing
   benchmarks between doing gup/pup on pages that have been pre-faulted in
   from user space and doing gup/pup on pages that are not faulted in until
   gup/pup time (via FOLL_TOUCH). This decision is controlled with the new
   -z command line option.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 mm/gup_test.c                         |  6 ++++++
 tools/testing/selftests/vm/gup_test.c | 23 +++++++++++++++++++----
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/mm/gup_test.c b/mm/gup_test.c
index a6ed1c877679..d974dec19e1c 100644
--- a/mm/gup_test.c
+++ b/mm/gup_test.c
@@ -52,6 +52,12 @@ static void verify_dma_pinned(unsigned int cmd, struct page **pages,
 
 				dump_page(page, "gup_test failure");
 				break;
+			} else if (cmd == PIN_LONGTERM_BENCHMARK &&
+				WARN(!is_pinnable_page(page),
+				     "pages[%lu] is NOT pinnable but pinned\n",
+				     i)) {
+				dump_page(page, "gup_test failure");
+				break;
 			}
 		}
 		break;
diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 943cc2608dc2..1e662d59c502 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -13,6 +13,7 @@
 
 /* Just the flags we need, copied from mm.h: */
 #define FOLL_WRITE	0x01	/* check pte is writable */
+#define FOLL_TOUCH	0x02	/* mark page accessed */
 
 static char *cmd_to_str(unsigned long cmd)
 {
@@ -39,11 +40,11 @@ int main(int argc, char **argv)
 	unsigned long size = 128 * MB;
 	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
 	unsigned long cmd = GUP_FAST_BENCHMARK;
-	int flags = MAP_PRIVATE;
+	int flags = MAP_PRIVATE, touch = 0;
 	char *file = "/dev/zero";
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHp")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHpz")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -110,6 +111,10 @@ int main(int argc, char **argv)
 		case 'H':
 			flags |= (MAP_HUGETLB | MAP_ANONYMOUS);
 			break;
+		case 'z':
+			/* fault pages in gup, do not fault in userland */
+			touch = 1;
+			break;
 		default:
 			return -1;
 		}
@@ -167,8 +172,18 @@ int main(int argc, char **argv)
 	else if (thp == 0)
 		madvise(p, size, MADV_NOHUGEPAGE);
 
-	for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
-		p[0] = 0;
+	/*
+	 * FOLL_TOUCH, in gup_test, is used as an either/or case: either
+	 * fault pages in from the kernel via FOLL_TOUCH, or fault them
+	 * in here, from user space. This allows comparison of performance
+	 * between those two cases.
+	 */
+	if (touch) {
+		gup.gup_flags |= FOLL_TOUCH;
+	} else {
+		for (; (unsigned long)p < gup.addr + size; p += PAGE_SIZE)
+			p[0] = 0;
+	}
 
	/* Only report timing information on the *_BENCHMARK commands: */
 	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||