From patchwork Sun Aug 11 04:17:03 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13759673
Date: Sat, 10 Aug 2024 22:17:03 -0600
Message-ID: <20240811041703.2775153-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1] mm/hugetlb_vmemmap: batch HVO work when demoting
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Muchun Song
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>

Batch the HVO work, including de-HVO of the source and HVO of the
destination hugeTLB folios, to speed up demotion.

After commit bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative
PFN walkers"), each request of HVO or de-HVO, batched or not, invokes
synchronize_rcu() once. For example, when not batched, demoting one 1GB
hugeTLB folio to 512 2MB hugeTLB folios invokes synchronize_rcu() 513
times (1 de-HVO plus 512 HVO requests), whereas when batched, only twice
(1 de-HVO plus 1 HVO request). The performance difference between the two
cases is significant, e.g.:

  echo 2048kB >/sys/kernel/mm/hugepages/hugepages-1048576kB/demote_size
  time echo 100 >/sys/kernel/mm/hugepages/hugepages-1048576kB/demote

Before this patch:
  real    8m58.158s
  user    0m0.009s
  sys     0m5.900s

After this patch:
  real    0m0.900s
  user    0m0.000s
  sys     0m0.851s

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Signed-off-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Muchun Song
---
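
Not part of the patch, just an illustration of the cost the batching
removes: a minimal sketch that assumes a struct hstate *dst, a struct
folio *folio cursor, and a list head folio_list holding the 512 2MB
folios produced from one 1GB folio. After commit bd225530a4c7, the
per-folio and per-list HVO entry points differ mainly in how many RCU
grace periods they wait for:

        /* Unbatched: one HVO request, hence one synchronize_rcu(), per folio. */
        list_for_each_entry(folio, &folio_list, lru)
                hugetlb_vmemmap_optimize_folio(dst, folio);     /* 512 grace periods */

        /* Batched: a single HVO request covers the whole list. */
        hugetlb_vmemmap_optimize_folios(dst, &folio_list);      /* 1 grace period */

The de-HVO side is symmetric: one hugetlb_vmemmap_restore_folios() call on
the source list replaces a per-folio hugetlb_vmemmap_restore_folio() loop,
which is what the new demote_free_hugetlb_folios() below relies on.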
 mm/hugetlb.c | 155 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 91 insertions(+), 64 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1fdd9eab240c..444cda461d1e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3921,100 +3921,123 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
         return 0;
 }
 
-static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
+static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
+                                       struct list_head *src_list)
 {
-        int i, nid = folio_nid(folio);
-        struct hstate *target_hstate;
-        struct page *subpage;
-        struct folio *inner_folio;
-        int rc = 0;
+        long rc;
+        struct folio *folio, *next;
+        LIST_HEAD(dst_list);
+        LIST_HEAD(ret_list);
 
-        target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
-
-        remove_hugetlb_folio(h, folio, false);
-        spin_unlock_irq(&hugetlb_lock);
-
-        /*
-         * If vmemmap already existed for folio, the remove routine above would
-         * have cleared the hugetlb folio flag.  Hence the folio is technically
-         * no longer a hugetlb folio.  hugetlb_vmemmap_restore_folio can only be
-         * passed hugetlb folios and will BUG otherwise.
-         */
-        if (folio_test_hugetlb(folio)) {
-                rc = hugetlb_vmemmap_restore_folio(h, folio);
-                if (rc) {
-                        /* Allocation of vmemmmap failed, we can not demote folio */
-                        spin_lock_irq(&hugetlb_lock);
-                        add_hugetlb_folio(h, folio, false);
-                        return rc;
-                }
-        }
-
-        /*
-         * Use destroy_compound_hugetlb_folio_for_demote for all huge page
-         * sizes as it will not ref count folios.
-         */
-        destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
+        rc = hugetlb_vmemmap_restore_folios(src, src_list, &ret_list);
+        list_splice_init(&ret_list, src_list);
 
         /*
          * Taking target hstate mutex synchronizes with set_max_huge_pages.
          * Without the mutex, pages added to target hstate could be marked
          * as surplus.
          *
-         * Note that we already hold h->resize_lock. To prevent deadlock,
+         * Note that we already hold src->resize_lock. To prevent deadlock,
          * use the convention of always taking larger size hstate mutex first.
          */
-        mutex_lock(&target_hstate->resize_lock);
-        for (i = 0; i < pages_per_huge_page(h);
-                                i += pages_per_huge_page(target_hstate)) {
-                subpage = folio_page(folio, i);
-                inner_folio = page_folio(subpage);
-                if (hstate_is_gigantic(target_hstate))
-                        prep_compound_gigantic_folio_for_demote(inner_folio,
-                                                        target_hstate->order);
-                else
-                        prep_compound_page(subpage, target_hstate->order);
-                folio_change_private(inner_folio, NULL);
-                prep_new_hugetlb_folio(target_hstate, inner_folio, nid);
-                free_huge_folio(inner_folio);
+        mutex_lock(&dst->resize_lock);
+
+        list_for_each_entry_safe(folio, next, src_list, lru) {
+                int i;
+
+                if (folio_test_hugetlb_vmemmap_optimized(folio))
+                        continue;
+
+                list_del(&folio->lru);
+                /*
+                 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
+                 * sizes as it will not ref count folios.
+                 */
+                destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(src));
+
+                for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
+                        struct page *page = folio_page(folio, i);
+
+                        if (hstate_is_gigantic(dst))
+                                prep_compound_gigantic_folio_for_demote(page_folio(page),
+                                                                        dst->order);
+                        else
+                                prep_compound_page(page, dst->order);
+                        set_page_private(page, 0);
+
+                        init_new_hugetlb_folio(dst, page_folio(page));
+                        list_add(&page->lru, &dst_list);
+                }
         }
-        mutex_unlock(&target_hstate->resize_lock);
 
-        spin_lock_irq(&hugetlb_lock);
+        prep_and_add_allocated_folios(dst, &dst_list);
 
-        /*
-         * Not absolutely necessary, but for consistency update max_huge_pages
-         * based on pool changes for the demoted page.
-         */
-        h->max_huge_pages--;
-        target_hstate->max_huge_pages +=
-                pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
+        mutex_unlock(&dst->resize_lock);
 
         return rc;
 }
 
-static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
+static long demote_pool_huge_page(struct hstate *src, nodemask_t *nodes_allowed,
+                                  unsigned long nr_to_demote)
         __must_hold(&hugetlb_lock)
 {
         int nr_nodes, node;
-        struct folio *folio;
+        struct hstate *dst;
+        long rc = 0;
+        long nr_demoted = 0;
 
         lockdep_assert_held(&hugetlb_lock);
 
         /* We should never get here if no demote order */
-        if (!h->demote_order) {
+        if (!src->demote_order) {
                 pr_warn("HugeTLB: NULL demote order passed to demote_pool_huge_page.\n");
                 return -EINVAL;         /* internal error */
         }
+        dst = size_to_hstate(PAGE_SIZE << src->demote_order);
 
-        for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
-                list_for_each_entry(folio, &h->hugepage_freelists[node], lru) {
+        for_each_node_mask_to_free(src, nr_nodes, node, nodes_allowed) {
+                LIST_HEAD(list);
+                struct folio *folio, *next;
+
+                list_for_each_entry_safe(folio, next, &src->hugepage_freelists[node], lru) {
                         if (folio_test_hwpoison(folio))
                                 continue;
-                        return demote_free_hugetlb_folio(h, folio);
+
+                        remove_hugetlb_folio(src, folio, false);
+                        list_add(&folio->lru, &list);
+
+                        if (++nr_demoted == nr_to_demote)
+                                break;
                 }
+
+                spin_unlock_irq(&hugetlb_lock);
+
+                rc = demote_free_hugetlb_folios(src, dst, &list);
+
+                spin_lock_irq(&hugetlb_lock);
+
+                list_for_each_entry_safe(folio, next, &list, lru) {
+                        list_del(&folio->lru);
+                        add_hugetlb_folio(src, folio, false);
+                        nr_demoted--;
+                }
+
+                if (rc < 0 || nr_demoted == nr_to_demote)
+                        break;
         }
 
+        /*
+         * Not absolutely necessary, but for consistency update max_huge_pages
+         * based on pool changes for the demoted page.
+         */
+        src->max_huge_pages -= nr_demoted;
+        dst->max_huge_pages += nr_demoted * (pages_per_huge_page(src) / pages_per_huge_page(dst));
+
+        if (rc < 0)
+                return rc;
+
+        if (nr_demoted)
+                return nr_demoted;
         /*
          * Only way to get here is if all pages on free lists are poisoned.
          * Return -EBUSY so that caller will not retry.
@@ -4249,6 +4272,8 @@ static ssize_t demote_store(struct kobject *kobj,
         spin_lock_irq(&hugetlb_lock);
 
         while (nr_demote) {
+                long rc;
+
                 /*
                  * Check for available pages to demote each time thorough the
                  * loop as demote_pool_huge_page will drop hugetlb_lock.
@@ -4261,11 +4286,13 @@ static ssize_t demote_store(struct kobject *kobj,
                 if (!nr_available)
                         break;
 
-                err = demote_pool_huge_page(h, n_mask);
-                if (err)
+                rc = demote_pool_huge_page(h, n_mask, nr_demote);
+                if (rc < 0) {
+                        err = rc;
                         break;
+                }
 
-                nr_demote--;
+                nr_demote -= rc;
         }
 
         spin_unlock_irq(&hugetlb_lock);
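
For the timing example in the commit message (echo 100 into demote, with a
1GB source and a 2MB destination size, so pages_per_huge_page(src) /
pages_per_huge_page(dst) == 512), and assuming all 100 folios are demoted
in a single demote_pool_huge_page() call, the pool accounting at the end of
that function works out to the following sketch (illustrative values only,
not part of the patch):

        src->max_huge_pages -= 100;             /* 1GB pool shrinks by the 100 folios demoted */
        dst->max_huge_pages += 100 * 512;       /* 2MB pool grows by 51200 folios */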