From patchwork Tue Nov 24 09:52:58 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11927825
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
    song.bao.hua@hisilicon.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 15/16] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 24 Nov 2020 17:52:58 +0800
Message-Id: <20201124095259.58755-16-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save than the head struct
page can hold, so we have to use tail struct pages to store it. To avoid
conflicts caused by subsequent uses of more tail struct pages, gather
these discrete indexes of the tail struct pages into a single enum. This
way it is easier to add a new tail page index later.
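To make the renaming concrete, below is a minimal, self-contained
userspace sketch of the scheme (illustrative only: struct mock_page and
the hpage array are hypothetical stand-ins for struct page and a
compound page, not kernel API). The named enum indexes replace the
magic offsets page[1], page[2], page[3] and head + 4 that the diff
below removes:

#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-in for struct page; only the field we need here. */
struct mock_page {
	unsigned long private;
};

/* Mirrors the enum this patch adds to include/linux/hugetlb.h. */
enum {
	SUBPAGE_INDEX_ACTIVE = 1,			/* was page[1] */
	SUBPAGE_INDEX_TEMPORARY,			/* was page[2].mapping */
	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,	/* was page[2].private */
	SUBPAGE_INDEX_CGROUP_RSVD,			/* was page[3].private */
	SUBPAGE_INDEX_HWPOISON,				/* was head + 4 */
	NR_USED_SUBPAGE,
};

int main(void)
{
	/* Model a compound page: one head page followed by tail pages. */
	struct mock_page hpage[8] = { { 0 } };

	/* Write metadata through a named tail page index... */
	hpage[SUBPAGE_INDEX_CGROUP_RSVD].private = 0xdeadUL;

	/* ...and read it back through the same name, no magic "3". */
	assert(hpage[SUBPAGE_INDEX_CGROUP_RSVD].private == 0xdeadUL);

	/* NR_USED_SUBPAGE keeps the count of used tail pages in one place. */
	printf("highest used subpage index: %d\n", NR_USED_SUBPAGE - 1);
	return 0;
}

Adding a new consumer of a tail page then only means inserting one
enumerator before NR_USED_SUBPAGE instead of auditing every hard-coded
subpage offset.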
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eed3dd3bd626..8a615ae2d233 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 15e2c1dd32ea..7700da372716 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1429,20 +1429,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1454,17 +1454,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 4bb35d87ae10..54c2ca0e0dbe 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -20,7 +20,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -35,7 +35,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
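
A note on bounds (an observation, not part of this patch): every
enumerator has to stay below the number of struct pages in the smallest
compound page the code touches; for the cgroup indexes that floor is
what HUGETLB_CGROUP_MIN_ORDER encodes (order 2, i.e. 4 struct pages).
If one wanted a compile-time guard, a sketch in that spirit could look
like the following (hypothetical placement next to the enum, not part
of this series):

/*
 * Hypothetical guard, not in this patch: the largest cgroup subpage
 * index (SUBPAGE_INDEX_CGROUP_RSVD == 3) must fit inside the
 * 1 << HUGETLB_CGROUP_MIN_ORDER == 4 struct pages that the minimum
 * trackable order guarantees.
 */
static_assert(SUBPAGE_INDEX_CGROUP_RSVD < (1 << HUGETLB_CGROUP_MIN_ORDER),
	      "cgroup subpage index outside the minimum hugetlb order");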