From patchwork Fri Dec 11 20:21:35 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11969337
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com, sashal@kernel.org, tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com, jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de, willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com, linux-doc@vger.kernel.org
Subject: [PATCH v3 1/6] mm/gup: don't pin migrated cma pages in movable zone
Date: Fri, 11 Dec 2020 15:21:35 -0500
Message-Id: <20201211202140.396852-2-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

In order not to fragment CMA, the pinned pages are migrated. However,
they are migrated to ZONE_MOVABLE, which also should not have pinned
pages. Remove __GFP_MOVABLE, so pages can be migrated to zones where
pinning is allowed.

Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 0c866af5d96f..87452fcad048 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1565,7 +1565,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:

From patchwork Fri Dec 11 20:21:36 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11969339
From: Pavel Tatashin
Subject: [PATCH v3 2/6] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN
Date: Fri, 11 Dec 2020 15:21:36 -0500
Message-Id: <20201211202140.396852-3-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_NOCMA is used to guarantee that the allocator will not
return pages that might
belong to a CMA region. This is currently used for long-term gup to make
sure that such pins are not going to be done on any CMA pages.

When PF_MEMALLOC_NOCMA was introduced, we did not realize that it
focuses on CMA pages too much and that there is a larger class of pages
that needs the same treatment: the MOVABLE zone cannot contain any
long-term pins either, so it makes sense to reuse and redefine this
flag for that use case as well. Rename the flag to PF_MEMALLOC_PIN,
which defines an allocation context that can only get pages suitable
for long-term pins.

Also rename memalloc_nocma_save()/memalloc_nocma_restore() to
memalloc_pin_save()/memalloc_pin_restore() and make the new functions
common.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
 include/linux/sched.h    |  2 +-
 include/linux/sched/mm.h | 21 +++++----------------
 mm/gup.c                 |  4 ++--
 mm/hugetlb.c             |  4 ++--
 mm/page_alloc.c          |  4 ++--
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e5ad6d354b7b..f3226ef7134f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1576,7 +1576,7 @@ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000	/* Early kill for mce process policy */
-#define PF_MEMALLOC_NOCMA	0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
+#define PF_MEMALLOC_PIN		0x10000000	/* All allocation request will have _GFP_MOVABLE cleared */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
 #define PF_SUSPEND_TASK		0x80000000	/* This thread called freeze_processes() and should not be frozen */

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 1ae08b8462a4..5f4dd3274734 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -270,29 +270,18 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }
 
-#ifdef CONFIG_CMA
-static inline unsigned int memalloc_nocma_save(void)
+static inline unsigned int memalloc_pin_save(void)
 {
-	unsigned int flags = current->flags & PF_MEMALLOC_NOCMA;
+	unsigned int flags = current->flags & PF_MEMALLOC_PIN;
 
-	current->flags |= PF_MEMALLOC_NOCMA;
+	current->flags |= PF_MEMALLOC_PIN;
 	return flags;
 }
 
-static inline void memalloc_nocma_restore(unsigned int flags)
+static inline void memalloc_pin_restore(unsigned int flags)
 {
-	current->flags = (current->flags & ~PF_MEMALLOC_NOCMA) | flags;
+	current->flags = (current->flags & ~PF_MEMALLOC_PIN) | flags;
 }
-#else
-static inline unsigned int memalloc_nocma_save(void)
-{
-	return 0;
-}
-
-static inline void memalloc_nocma_restore(unsigned int flags)
-{
-}
-#endif
 
 #ifdef CONFIG_MEMCG
 DECLARE_PER_CPU(struct mem_cgroup *, int_active_memcg);

diff --git a/mm/gup.c b/mm/gup.c
index 87452fcad048..007060e66a48 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1671,7 +1671,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 	long rc;
 
 	if (gup_flags & FOLL_LONGTERM)
-		flags = memalloc_nocma_save();
+		flags = memalloc_pin_save();
 
 	rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 				     NULL, gup_flags);
@@ -1680,7 +1680,7 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		if (rc > 0)
 			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
 							 vmas, gup_flags);
-		memalloc_nocma_restore(flags);
+		memalloc_pin_restore(flags);
 	}
 	return rc;
 }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3bcc0bc7e02a..012246234eb5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1033,10 +1033,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
 	struct page *page;
-	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);
+	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (nocma && is_migrate_cma_page(page))
+		if (pin && is_migrate_cma_page(page))
 			continue;
 
 		if (PageHWPoison(page))

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 774542e1483e..ec05396a597b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3808,8 +3808,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 #ifdef CONFIG_CMA
 	unsigned int pflags = current->flags;
 
-	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (!(pflags & PF_MEMALLOC_PIN) &&
+			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 #endif

From patchwork Fri Dec 11 20:21:37 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11969341
From: Pavel Tatashin
Subject: [PATCH v3 3/6] mm: apply per-task gfp constraints in fast path
Date: Fri, 11 Dec 2020 15:21:37 -0500
Message-Id: <20201211202140.396852-4-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

Function current_gfp_context() is
called after the fast path. However, we will soon add more constraints
which will also limit zones based on context. Move this call into the
fast path, and apply the correct constraints for all allocations.

Also update .reclaim_idx based on the value returned by
current_gfp_context(), because it will soon modify the allowed zones.

Note: with this patch we do one extra current->flags load during the
fast path, but we already load current->flags in the fast path:

__alloc_pages_nodemask()
 prepare_alloc_pages()
  current_alloc_flags(gfp_mask, *alloc_flags);

Later, when we add the zone constraint logic to current_gfp_context(),
we will be able to remove the current->flags load from
current_alloc_flags() and therefore return the fast path to the current
performance level.

Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
---
 mm/page_alloc.c | 15 ++++++++-------
 mm/vmscan.c     | 10 ++++++----
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ec05396a597b..c2dea9ad0e98 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4976,6 +4976,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}
 
 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4991,13 +4998,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;
 
 	/*

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 469016222cdb..d9546f5897f4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3234,11 +3234,12 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
 unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 				gfp_t gfp_mask, nodemask_t *nodemask)
 {
+	gfp_t current_gfp_mask = current_gfp_context(gfp_mask);
 	unsigned long nr_reclaimed;
 	struct scan_control sc = {
 		.nr_to_reclaim = SWAP_CLUSTER_MAX,
-		.gfp_mask = current_gfp_context(gfp_mask),
-		.reclaim_idx = gfp_zone(gfp_mask),
+		.gfp_mask = current_gfp_mask,
+		.reclaim_idx = gfp_zone(current_gfp_mask),
 		.order = order,
 		.nodemask = nodemask,
 		.priority = DEF_PRIORITY,
@@ -4158,17 +4159,18 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 {
 	/* Minimum pages needed in order to stay on node */
 	const unsigned long nr_pages = 1 << order;
+	gfp_t current_gfp_mask = current_gfp_context(gfp_mask);
 	struct task_struct *p = current;
 	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
-		.gfp_mask = current_gfp_context(gfp_mask),
+		.gfp_mask = current_gfp_mask,
 		.order = order,
 		.priority = NODE_RECLAIM_PRIORITY,
 		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
 		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
 		.may_swap = 1,
-		.reclaim_idx = gfp_zone(gfp_mask),
+		.reclaim_idx = gfp_zone(current_gfp_mask),
 	};
 
 	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order,

From patchwork Fri Dec 11 20:21:38 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11969343
From: Pavel Tatashin
Subject: [PATCH v3 4/6] mm: honor PF_MEMALLOC_PIN for all movable pages
Date: Fri, 11 Dec 2020 15:21:38 -0500
Message-Id: <20201211202140.396852-5-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

PF_MEMALLOC_PIN is only honored for CMA pages; extend this flag to work
for any allocations from ZONE_MOVABLE by removing __GFP_MOVABLE from
gfp_mask when this flag is passed in the current context.

Add is_pinnable_page() to return true if the page is pinnable. A
pinnable page is not in ZONE_MOVABLE and not of MIGRATE_CMA type.
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 include/linux/mm.h       | 11 +++++++++++
 include/linux/sched/mm.h |  6 +++++-
 mm/hugetlb.c             |  2 +-
 mm/page_alloc.c          | 19 ++++++++-----------
 4 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5299b90a6c40..51b3090dd072 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1109,6 +1109,17 @@ static inline bool is_zone_device_page(const struct page *page)
 }
 #endif

+static inline bool is_zone_movable_page(const struct page *page)
+{
+	return page_zonenum(page) == ZONE_MOVABLE;
+}
+
+/* MIGRATE_CMA and ZONE_MOVABLE do not allow pin pages */
+static inline bool is_pinnable_page(struct page *page)
+{
+	return !is_zone_movable_page(page) && !is_migrate_cma_page(page);
+}
+
 #ifdef CONFIG_DEV_PAGEMAP_OPS
 void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 5f4dd3274734..a55277b0d475 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -150,12 +150,13 @@ static inline bool in_vfork(struct task_struct *tsk)
  * Applies per-task gfp context to the given allocation flags.
  * PF_MEMALLOC_NOIO implies GFP_NOIO
  * PF_MEMALLOC_NOFS implies GFP_NOFS
+ * PF_MEMALLOC_PIN  implies !GFP_MOVABLE
  */
 static inline gfp_t current_gfp_context(gfp_t flags)
 {
 	unsigned int pflags = READ_ONCE(current->flags);

-	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS))) {
+	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN))) {
 		/*
 		 * NOIO implies both NOIO and NOFS and it is a weaker context
 		 * so always make sure it makes precedence
@@ -164,6 +165,9 @@ static inline gfp_t current_gfp_context(gfp_t flags)
 			flags &= ~(__GFP_IO | __GFP_FS);
 		else if (pflags & PF_MEMALLOC_NOFS)
 			flags &= ~__GFP_FS;
+
+		if (pflags & PF_MEMALLOC_PIN)
+			flags &= ~__GFP_MOVABLE;
 	}
 	return flags;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 012246234eb5..b170ef2e04f5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1036,7 +1036,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);

 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (pin && is_migrate_cma_page(page))
+		if (pin && !is_pinnable_page(page))
 			continue;

 		if (PageHWPoison(page))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2dea9ad0e98..4d8e7f801c66 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3802,16 +3802,12 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 	return alloc_flags;
 }

-static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
-					unsigned int alloc_flags)
+static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
+					unsigned int alloc_flags)
 {
 #ifdef CONFIG_CMA
-	unsigned int pflags = current->flags;
-
-	if (!(pflags & PF_MEMALLOC_PIN) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
-
 #endif
 	return alloc_flags;
 }
@@ -4467,7 +4463,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
 		alloc_flags |= ALLOC_HARDER;

-	alloc_flags = current_alloc_flags(gfp_mask, alloc_flags);
+	alloc_flags = cma_alloc_flags(gfp_mask, alloc_flags);

 	return alloc_flags;
 }
@@ -4769,7 +4765,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,

 	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
 	if (reserve_flags)
-		alloc_flags = current_alloc_flags(gfp_mask, reserve_flags);
+		alloc_flags = cma_alloc_flags(gfp_mask, reserve_flags);

 	/*
 	 * Reset the nodemask and zonelist iterators if memory policies can be
@@ -4938,7 +4934,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;

-	*alloc_flags = current_alloc_flags(gfp_mask, *alloc_flags);
+	*alloc_flags = cma_alloc_flags(gfp_mask, *alloc_flags);

 	/* Dirty zone balancing only done in the fast path */
 	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);
@@ -4980,7 +4976,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
 	 * resp. GFP_NOIO which has to be inherited for all allocation requests
 	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
+	 * memalloc_no{fs,io}_{save,restore}. And PF_MEMALLOC_PIN which ensures
+	 * movable zones are not used during allocation.
	 */
	gfp_mask = current_gfp_context(gfp_mask);
	alloc_mask = gfp_mask;
From: Pavel Tatashin
Subject: [PATCH v3 5/6] mm/gup: migrate pinned pages out of movable zone
Date: Fri, 11 Dec 2020 15:21:39 -0500
Message-Id: <20201211202140.396852-6-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

We should not pin pages in ZONE_MOVABLE. Currently, only movable CMA
pages are excluded from pinning. Generalize the function that migrates
CMA pages so that it migrates all movable pages.
Use is_pinnable_page() to check which pages need to be migrated.

Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         | 11 ++++--
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 66 ++++++++++++++--------------------
 4 files changed, 38 insertions(+), 43 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 4594838a0f7c..aae5ef0b3ba1 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..25c0c13ba4b1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -386,9 +386,14 @@ enum zone_type {
	 * likely to succeed, and to locally limit unmovable allocations - e.g.,
	 * to increase the number of THP/huge pages. Notable special cases are:
	 *
-	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 * 1. Pinned pages: (long-term) pinning of movable pages is avoided
+	 *    when pages are pinned and faulted, but it is still possible that
+	 *    address space already has pages in ZONE_MOVABLE at the time when
+	 *    pages are pinned (i.e. user has touched that memory before
+	 *    pinning). In such case we try to migrate them to a different zone,
+	 *    but if migration fails the pages can still end up pinned in
+	 *    ZONE_MOVABLE. In such case, memory offlining might retry a long
+	 *    time and will only succeed once user application unpins pages.
	 * 2. memblock allocations: kernelcore/movablecore setups might create
	 *    situations where ZONE_MOVABLE contains unmovable allocations
	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")

 /*
  * First define the enums in the above macros to be exported to userspace
diff --git a/mm/gup.c b/mm/gup.c
index 007060e66a48..d5e9c459952e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -89,11 +89,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
		int orig_refs = refs;

		/*
-		 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-		 * path, so fail and let the caller fall back to the slow path.
+		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
+		 * right zone, so fail and let the caller fall back to the slow
+		 * path.
		 */
-		if (unlikely(flags & FOLL_LONGTERM) &&
-		    is_migrate_cma_page(page))
+		if (unlikely((flags & FOLL_LONGTERM) &&
+			     !is_pinnable_page(page)))
			return NULL;

		/*
@@ -1549,19 +1550,18 @@ struct page *get_dump_page(unsigned long addr)
 }
 #endif /* CONFIG_ELF_CORE */

-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
	unsigned long i;
	unsigned long step;
	bool drain_allow = true;
	bool migrate_allow = true;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
	long ret = nr_pages;
	struct migration_target_control mtc = {
		.nid = NUMA_NO_NODE,
@@ -1579,13 +1579,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
		 */
		step = compound_nr(head) - (pages[i] - head);
		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
+				isolate_huge_page(head, &movable_page_list);
			else {
				if (!PageLRU(head) && drain_allow) {
					lru_add_drain_all();
@@ -1593,7 +1592,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
				}

				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
+					list_add_tail(&head->lru, &movable_page_list);
					mod_node_page_state(page_pgdat(head),
							    NR_ISOLATED_ANON +
							    page_is_file_lru(head),
@@ -1605,7 +1604,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
		i += step;
	}

-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
		/*
		 * drop the above get_user_pages reference.
		 */
@@ -1615,7 +1614,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
		for (i = 0; i < nr_pages; i++)
			put_page(pages[i]);

-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+		if (migrate_pages(&movable_page_list, alloc_migration_target, NULL,
			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
			/*
			 * some of the pages failed migration. Do get_user_pages
@@ -1623,17 +1622,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
			 */
			migrate_allow = false;

-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
		}
		/*
		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
+		 * again migrating any pages which we failed to isolate earlier.
		 */
		ret = __get_user_pages_locked(mm, start, nr_pages,
-						pages, vmas, NULL,
-						gup_flags);
+					      pages, vmas, NULL,
+					      gup_flags);

		if ((ret > 0) && migrate_allow) {
			nr_pages = ret;
@@ -1644,17 +1642,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
	return ret;
 }
-#else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
-{
-	return nr_pages;
-}
-#endif /* CONFIG_CMA */

 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1678,8 +1665,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,

	if (gup_flags & FOLL_LONGTERM) {
		if (rc > 0)
-			rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-							 vmas, gup_flags);
+			rc = check_and_migrate_movable_pages(mm, start, rc,
+							     pages, vmas,
+							     gup_flags);
		memalloc_pin_restore(flags);
	}
	return rc;
From: Pavel Tatashin
Subject: [PATCH v3 6/6] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning
Date: Fri, 11 Dec 2020 15:21:40 -0500
Message-Id: <20201211202140.396852-7-pasha.tatashin@soleen.com>
In-Reply-To: <20201211202140.396852-1-pasha.tatashin@soleen.com>

Document the special handling of page pinning when ZONE_MOVABLE is present.

Signed-off-by: Pavel Tatashin
Suggested-by: David Hildenbrand
---
 Documentation/admin-guide/mm/memory-hotplug.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b..c6618f99f765 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -357,6 +357,15 @@ creates ZONE_MOVABLE as following.
 Unfortunately, there is no information to show which memory block belongs
 to ZONE_MOVABLE. This is TBD.

+.. note::
+   Techniques that rely on long-term pinnings of memory (especially, RDMA and
+   vfio) are fundamentally problematic with ZONE_MOVABLE and, therefore, memory
+   hot remove. Pinned pages cannot reside on ZONE_MOVABLE, to guarantee that
+   memory can still get hot removed - be aware that pinning can fail even if
+   there is plenty of free memory in ZONE_MOVABLE. In addition, using
+   ZONE_MOVABLE might make page pinning more expensive, because pages have to be
+   migrated off that zone first.
+
 .. _memory_hotplug_how_to_offline_memory:

 How to offline memory