From patchwork Tue Sep 15 12:59:46 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11778485
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 23/24] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 15 Sep 2020 20:59:46 +0800
Message-Id: <20200915125947.26204-24-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

For a hugetlb page, there is more metadata to save in the struct page
than the head struct page can hold, so we have to reuse some tail
struct pages to store that metadata. In order to avoid conflicts caused
by subsequent use of more tail struct pages, gather these discrete
indexes of tail struct pages into a single enum. This makes it easier
to add a new tail page index later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 18 +++++++++---------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c56df0da7ae5..358550a53555 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+        SUBPAGE_INDEX_ACTIVE = 1,       /* reuse page flags of PG_private */
+        SUBPAGE_INDEX_TEMPORARY,        /* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+        SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+        SUBPAGE_INDEX_CGROUP_RSVD,      /* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+        SUBPAGE_INDEX_HWPOISON,         /* reuse page->private */
+#endif
+        NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
         spinlock_t lock;
         long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 
 #define HUGETLB_CGROUP_MIN_ORDER        2
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
         if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                 return NULL;
         if (rsvd)
-                return (struct hugetlb_cgroup *)page[3].private;
+                return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
         else
-                return (struct hugetlb_cgroup *)page[2].private;
+                return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
         if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                 return -1;
         if (rsvd)
-                page[3].private = (unsigned long)h_cg;
+                set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+                                 (unsigned long)h_cg);
         else
-                page[2].private = (unsigned long)h_cg;
+                set_page_private(page + SUBPAGE_INDEX_CGROUP,
+                                 (unsigned long)h_cg);
 
         return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3ca36e259b4e..e66c3f10c583 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1964,17 +1964,17 @@ static inline void flush_free_huge_page_work(void)
 
 static inline bool subpage_hwpoison(struct page *head, struct page *page)
 {
-        return page_private(head + 4) == page - head;
+        return page_private(head + SUBPAGE_INDEX_HWPOISON) == page - head;
 }
 
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
-        set_page_private(head + 4, page - head);
+        set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline void clear_subpage_hwpoison(struct page *head)
 {
-        set_page_private(head + 4, 0);
+        set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
 }
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
@@ -2114,20 +2114,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
         VM_BUG_ON_PAGE(!PageHuge(page), page);
-        return PageHead(page) && PagePrivate(&page[1]);
+        return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
         VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-        SetPagePrivate(&page[1]);
+        SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
         VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-        ClearPagePrivate(&page[1]);
+        ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -2139,17 +2139,17 @@ static inline bool PageHugeTemporary(struct page *page)
         if (!PageHuge(page))
                 return false;
 
-        return (unsigned long)page[2].mapping == -1U;
+        return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-        page[2].mapping = (void *)-1U;
+        page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-        page[2].mapping = NULL;
+        page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
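
To illustrate the indexing scheme the enum in this patch introduces, here is a
minimal userspace sketch (not kernel code): struct fake_page and the helper
names below are made up for the illustration, and only the SUBPAGE_INDEX_*
values mirror the patch. It shows how metadata that used to live at magic
tail-page offsets (page[1], page[2], head + 4) is reached through symbolic
indexes, so adding a new consumer only means appending an enumerator before
NR_USED_SUBPAGE.

#include <stdio.h>

/* Stand-in for the head struct page plus the tail struct pages of one huge page. */
struct fake_page {
        unsigned long private;
};

/* Same index values as the patch with both config options enabled. */
enum {
        SUBPAGE_INDEX_ACTIVE = 1,                       /* was page[1] */
        SUBPAGE_INDEX_TEMPORARY,                        /* was page[2].mapping */
        SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY, /* was page[2].private */
        SUBPAGE_INDEX_CGROUP_RSVD,                      /* was page[3].private */
        SUBPAGE_INDEX_HWPOISON,                         /* was head + 4 */
        NR_USED_SUBPAGE,
};

/* Illustrative stand-ins for set_page_private()/page_private(). */
static void set_subpage_private(struct fake_page *head, int idx, unsigned long val)
{
        head[idx].private = val;
}

static unsigned long subpage_private(struct fake_page *head, int idx)
{
        return head[idx].private;
}

int main(void)
{
        struct fake_page hpage[NR_USED_SUBPAGE] = { { 0 } };

        /* Equivalent of set_subpage_hwpoison(head, page) for the 3rd subpage. */
        set_subpage_private(hpage, SUBPAGE_INDEX_HWPOISON, 3);
        printf("hwpoison subpage offset = %lu\n",
               subpage_private(hpage, SUBPAGE_INDEX_HWPOISON));
        return 0;
}

The helpers only exist to show that every user goes through a named index; the
patch itself keeps using the existing page_private()/set_page_private()
interfaces, just with head + SUBPAGE_INDEX_* in place of a hardcoded offset.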