From patchwork Mon May 13 05:08:11 2024
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/7] mm: memory: extend finish_fault() to support large folio
Date: Mon, 13 May 2024 13:08:11 +0800
Message-Id: <131bcf31a07fade15a012ed5cdf7156d42a4c2fa.1715571279.git.baolin.wang@linux.alibaba.com>
Add support for establishing large folio mappings in finish_fault(), in preparation for supporting multi-size THP allocation of anonymous shmem pages in the following patches.

Signed-off-by: Baolin Wang
---
 mm/memory.c | 58 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 10 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index eea6e4984eae..f5ffe012556c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4747,9 +4747,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page;
+	struct folio *folio;
 	vm_fault_t ret;
 	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
 		      !(vma->vm_flags & VM_SHARED);
+	int type, nr_pages, i;
+	unsigned long addr = vmf->address;
 
 	/* Did we COW the page? */
 	if (is_cow)
@@ -4780,24 +4783,59 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			return VM_FAULT_OOM;
 	}
 
+	folio = page_folio(page);
+	nr_pages = folio_nr_pages(folio);
+
+	/*
+	 * Using per-page fault to maintain the uffd semantics, and same
+	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * inflating the RSS of the process.
+	 */
+	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+		nr_pages = 1;
+	} else if (nr_pages > 1) {
+		pgoff_t idx = folio_page_idx(folio, page);
+		/* The page offset of vmf->address within the VMA. */
+		pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
+
+		/*
+		 * Fallback to per-page fault in case the folio size in page
+		 * cache beyond the VMA limits.
+		 */
+		if (unlikely(vma_off < idx ||
+			     vma_off + (nr_pages - idx) > vma_pages(vma))) {
+			nr_pages = 1;
+		} else {
+			/* Now we can set mappings for the whole large folio.
+			 */
+			addr = vmf->address - idx * PAGE_SIZE;
+			page = &folio->page;
+		}
+	}
+
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				       vmf->address, &vmf->ptl);
+				       addr, &vmf->ptl);
 	if (!vmf->pte)
 		return VM_FAULT_NOPAGE;
 
 	/* Re-check under ptl */
-	if (likely(!vmf_pte_changed(vmf))) {
-		struct folio *folio = page_folio(page);
-		int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
-
-		set_pte_range(vmf, folio, page, 1, vmf->address);
-		add_mm_counter(vma->vm_mm, type, 1);
-		ret = 0;
-	} else {
-		update_mmu_tlb(vma, vmf->address, vmf->pte);
+	if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
+		update_mmu_tlb(vma, addr, vmf->pte);
+		ret = VM_FAULT_NOPAGE;
+		goto unlock;
+	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
+		for (i = 0; i < nr_pages; i++)
+			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
 		ret = VM_FAULT_NOPAGE;
+		goto unlock;
 	}
 
+	folio_ref_add(folio, nr_pages - 1);
+	set_pte_range(vmf, folio, page, nr_pages, addr);
+	type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
+	add_mm_counter(vma->vm_mm, type, nr_pages);
+	ret = 0;
+
+unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return ret;
 }

From patchwork Mon May 13 05:08:12 2024
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/7] mm: shmem: add an 'order' parameter for shmem_alloc_hugefolio()
Date: Mon, 13 May 2024 13:08:12 +0800
Add a new parameter to specify the huge page order for shmem_alloc_hugefolio(), in preparation for supporting mTHP.
Signed-off-by: Baolin Wang
---
 mm/shmem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index fa2a0ed97507..e4483c4596a8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1604,14 +1604,14 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 }
 
 static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
-		struct shmem_inode_info *info, pgoff_t index)
+		struct shmem_inode_info *info, pgoff_t index, int order)
 {
 	struct mempolicy *mpol;
 	pgoff_t ilx;
 	struct page *page;
 
-	mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
-	page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
+	mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
+	page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
 	mpol_cond_put(mpol);
 
 	return page_rmappable_folio(page);
@@ -1660,7 +1660,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 				index + HPAGE_PMD_NR - 1, XA_PRESENT))
 			return ERR_PTR(-E2BIG);
 
-		folio = shmem_alloc_hugefolio(gfp, info, index);
+		folio = shmem_alloc_hugefolio(gfp, info, index, HPAGE_PMD_ORDER);
 		if (!folio)
 			count_vm_event(THP_FILE_FALLBACK);
 	} else {

From patchwork Mon May 13 05:08:13 2024
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/7] mm: shmem: add THP validation for PMD-mapped THP related statistics
Date: Mon, 13 May 2024 13:08:13 +0800
In order to extend support for mTHP, validate that a folio is a PMD-mapped THP before updating the PMD-mapped-THP related statistics, to avoid statistical confusion once smaller sizes can be allocated.
Signed-off-by: Baolin Wang
Reviewed-by: Barry Song
---
 mm/shmem.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e4483c4596a8..a383ea9a89a5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1661,7 +1661,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 			return ERR_PTR(-E2BIG);
 
 		folio = shmem_alloc_hugefolio(gfp, info, index, HPAGE_PMD_ORDER);
-		if (!folio)
+		if (!folio && pages == HPAGE_PMD_NR)
 			count_vm_event(THP_FILE_FALLBACK);
 	} else {
 		pages = 1;
@@ -1679,7 +1679,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
 		if (xa_find(&mapping->i_pages, &index,
 				index + pages - 1, XA_PRESENT)) {
 			error = -EEXIST;
-		} else if (huge) {
+		} else if (pages == HPAGE_PMD_NR) {
 			count_vm_event(THP_FILE_FALLBACK);
 			count_vm_event(THP_FILE_FALLBACK_CHARGE);
 		}
@@ -2045,7 +2045,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		folio = shmem_alloc_and_add_folio(huge_gfp, inode, index,
 				fault_mm, true);
 		if (!IS_ERR(folio)) {
-			count_vm_event(THP_FILE_ALLOC);
+			if (folio_test_pmd_mappable(folio))
+				count_vm_event(THP_FILE_ALLOC);
 			goto alloced;
 		}
 		if (PTR_ERR(folio) == -EEXIST)

From patchwork Mon May 13 05:08:14 2024
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/7] mm: shmem: add multi-size THP sysfs interface for anonymous shmem
Date: Mon, 13 May 2024 13:08:14 +0800
Message-Id: <0307b7a2f16e49e0f752869e83682ef39614ea27.1715571279.git.baolin.wang@linux.alibaba.com>
To support the use of mTHP with anonymous shmem, add a new sysfs interface 'shmem_enabled' in the '/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/' directory for each mTHP size, to control whether shmem is enabled for that size. Its value is similar to the top-level 'shmem_enabled' and can be set to: "always", "inherit (to inherit the top-level setting)", "within_size", "advise", "never", "deny", "force". These values follow the same semantics as the top level, except that 'deny' is equivalent to 'never' and 'force' is equivalent to 'always', to keep compatibility.
By default, PMD-sized hugepages have enabled="inherit" and all other hugepage sizes have enabled="never" in '/sys/kernel/mm/transparent_hugepage/hugepages-xxkB/shmem_enabled'. In addition, if the top-level value is 'force', then only PMD-sized hugepages may have enabled="inherit"; otherwise the configuration fails, and vice versa. This means we now avoid using non-PMD-sized THP to override the global huge allocation.

Signed-off-by: Baolin Wang
---
 Documentation/admin-guide/mm/transhuge.rst | 29 +++++++
 include/linux/huge_mm.h                    | 10 +++
 mm/huge_memory.c                           | 11 +--
 mm/shmem.c                                 | 96 ++++++++++++++++++++++
 4 files changed, 138 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 076443cc10a6..a28496e15bdb 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -332,6 +332,35 @@ deny
 force
     Force the huge option on for all - very useful for testing;
 
+Anonymous shmem can also use "multi-size THP" (mTHP) by adding a new sysfs knob
+to control mTHP allocation:
+/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled.
+Its value for each mTHP is essentially consistent with the global setting, except
+for the addition of 'inherit' to ensure compatibility with the global settings.
+
+always
+    Attempt to allocate huge pages every time we need a new page;
+
+inherit
+    Inherit the top-level "shmem_enabled" value. By default, PMD-sized hugepages
+    have enabled="inherit" and all other hugepage sizes have enabled="never";
+
+never
+    Do not allocate huge pages;
+
+within_size
+    Only allocate huge page if it will be fully within i_size.
+    Also respect fadvise()/madvise() hints;
+
+advise
+    Only allocate huge pages if requested with fadvise()/madvise();
+
+deny
+    Has the same semantics as 'never'; now the mTHP allocation policy is only
+    used for anonymous shmem and does not override tmpfs.
+
+force
+    Has the same semantics as 'always'; now the mTHP allocation policy is only
+    used for anonymous shmem and does not override tmpfs.
+
 Need of application restart
 ===========================
 
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 017cee864080..1fce6fee7766 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -6,6 +6,7 @@
 #include
 
 #include /* only for vma_is_dax() */
+#include
 
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -63,6 +64,7 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf,
 				  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
+extern struct kobj_attribute thpsize_shmem_enabled_attr;
 
 /*
  * Mask of all large folio orders supported for anonymous THP; all orders up to
@@ -265,6 +267,14 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
 }
 
+struct thpsize {
+	struct kobject kobj;
+	struct list_head node;
+	int order;
+};
+
+#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
+
 enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9efb6fefc391..d3080a8843f2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -449,14 +449,6 @@ static void thpsize_release(struct kobject *kobj);
 static DEFINE_SPINLOCK(huge_anon_orders_lock);
 static LIST_HEAD(thpsize_list);
 
-struct thpsize {
-	struct kobject kobj;
-	struct list_head node;
-	int order;
-};
-
-#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
-
 static ssize_t thpsize_enabled_show(struct kobject *kobj,
 				    struct kobj_attribute *attr, char *buf)
 {
@@ -517,6 +509,9 @@ static struct kobj_attribute thpsize_enabled_attr =
 
 static struct attribute *thpsize_attrs[] = {
 	&thpsize_enabled_attr.attr,
+#ifdef CONFIG_SHMEM
CONFIG_SHMEM + &thpsize_shmem_enabled_attr.attr, +#endif NULL, }; diff --git a/mm/shmem.c b/mm/shmem.c index a383ea9a89a5..59cc26d44344 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -131,6 +131,14 @@ struct shmem_options { #define SHMEM_SEEN_QUOTA 32 }; +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +static unsigned long huge_anon_shmem_orders_always __read_mostly; +static unsigned long huge_anon_shmem_orders_madvise __read_mostly; +static unsigned long huge_anon_shmem_orders_inherit __read_mostly; +static unsigned long huge_anon_shmem_orders_within_size __read_mostly; +static DEFINE_SPINLOCK(huge_anon_shmem_orders_lock); +#endif + #ifdef CONFIG_TMPFS static unsigned long shmem_default_max_blocks(void) { @@ -4687,6 +4695,12 @@ void __init shmem_init(void) SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge; else shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */ + + /* + * Default to setting PMD-sized THP to inherit the global setting and + * disable all other multi-size THPs, when anonymous shmem uses mTHP. 
+ */ + huge_anon_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER); #endif return; @@ -4746,6 +4760,11 @@ static ssize_t shmem_enabled_store(struct kobject *kobj, huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY) return -EINVAL; + /* Do not override huge allocation policy with non-PMD sized mTHP */ + if (huge == SHMEM_HUGE_FORCE && + huge_anon_shmem_orders_inherit != BIT(HPAGE_PMD_ORDER)) + return -EINVAL; + shmem_huge = huge; if (shmem_huge > SHMEM_HUGE_DENY) SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge; @@ -4753,6 +4772,83 @@ static ssize_t shmem_enabled_store(struct kobject *kobj, } struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled); + +static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + int order = to_thpsize(kobj)->order; + const char *output; + + if (test_bit(order, &huge_anon_shmem_orders_always)) + output = "[always] inherit within_size advise never deny [force]"; + else if (test_bit(order, &huge_anon_shmem_orders_inherit)) + output = "always [inherit] within_size advise never deny force"; + else if (test_bit(order, &huge_anon_shmem_orders_within_size)) + output = "always inherit [within_size] advise never deny force"; + else if (test_bit(order, &huge_anon_shmem_orders_madvise)) + output = "always inherit within_size [advise] never deny force"; + else + output = "always inherit within_size advise [never] [deny] force"; + + return sysfs_emit(buf, "%s\n", output); +} + +static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) +{ + int order = to_thpsize(kobj)->order; + ssize_t ret = count; + + if (sysfs_streq(buf, "always") || sysfs_streq(buf, "force")) { + spin_lock(&huge_anon_shmem_orders_lock); + clear_bit(order, &huge_anon_shmem_orders_inherit); + clear_bit(order, &huge_anon_shmem_orders_madvise); + clear_bit(order, &huge_anon_shmem_orders_within_size); + set_bit(order, &huge_anon_shmem_orders_always); + 
spin_unlock(&huge_anon_shmem_orders_lock); + } else if (sysfs_streq(buf, "inherit")) { + /* Do not override huge allocation policy with non-PMD sized mTHP */ + if (shmem_huge == SHMEM_HUGE_FORCE && + order != HPAGE_PMD_ORDER) + return -EINVAL; + + spin_lock(&huge_anon_shmem_orders_lock); + clear_bit(order, &huge_anon_shmem_orders_always); + clear_bit(order, &huge_anon_shmem_orders_madvise); + clear_bit(order, &huge_anon_shmem_orders_within_size); + set_bit(order, &huge_anon_shmem_orders_inherit); + spin_unlock(&huge_anon_shmem_orders_lock); + } else if (sysfs_streq(buf, "within_size")) { + spin_lock(&huge_anon_shmem_orders_lock); + clear_bit(order, &huge_anon_shmem_orders_always); + clear_bit(order, &huge_anon_shmem_orders_inherit); + clear_bit(order, &huge_anon_shmem_orders_madvise); + set_bit(order, &huge_anon_shmem_orders_within_size); + spin_unlock(&huge_anon_shmem_orders_lock); + } else if (sysfs_streq(buf, "madvise")) { + spin_lock(&huge_anon_shmem_orders_lock); + clear_bit(order, &huge_anon_shmem_orders_always); + clear_bit(order, &huge_anon_shmem_orders_inherit); + clear_bit(order, &huge_anon_shmem_orders_within_size); + set_bit(order, &huge_anon_shmem_orders_madvise); + spin_unlock(&huge_anon_shmem_orders_lock); + } else if (sysfs_streq(buf, "never") || sysfs_streq(buf, "deny")) { + spin_lock(&huge_anon_shmem_orders_lock); + clear_bit(order, &huge_anon_shmem_orders_always); + clear_bit(order, &huge_anon_shmem_orders_inherit); + clear_bit(order, &huge_anon_shmem_orders_within_size); + clear_bit(order, &huge_anon_shmem_orders_madvise); + spin_unlock(&huge_anon_shmem_orders_lock); + } else { + ret = -EINVAL; + } + + return ret; +} + +struct kobj_attribute thpsize_shmem_enabled_attr = + __ATTR(shmem_enabled, 0644, thpsize_shmem_enabled_show, thpsize_shmem_enabled_store); #endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */ #else /* !CONFIG_SHMEM */ From patchwork Mon May 13 05:08:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Subject: [PATCH v2 5/7] mm: shmem: add mTHP support for anonymous shmem
Date: Mon, 13 May 2024 13:08:15 +0800

Commit 19eaf44954df adds multi-size THP (mTHP) for anonymous pages, that can allow THP to be configured through the sysfs interface
located at '/sys/kernel/mm/transparent_hugepage/hugepages-XXkB/enabled'. However, anonymous shmem pages ignore the anonymous mTHP rule configured through that interface and can only use PMD-mapped THP, which is not reasonable. Users expect the mTHP rule to apply to all anonymous pages, including anonymous shmem pages, so that they can enjoy the benefits of mTHP: lower allocation latency than PMD-mapped THP, less memory bloat than PMD-mapped THP, contiguous PTEs on the ARM architecture to reduce TLB misses, etc.

The primary strategy is similar to the one used for anonymous mTHP. Introduce a new interface '/sys/kernel/mm/transparent_hugepage/hugepages-XXkB/shmem_enabled', which accepts all the same values as the top-level '/sys/kernel/mm/transparent_hugepage/shmem_enabled', plus a new additional "inherit" option. By default all sizes are set to "never" except PMD size, which is set to "inherit". This ensures backward compatibility with the top-level shmem_enabled setting, while also allowing independent per-size control of anonymous shmem mTHP.
Signed-off-by: Baolin Wang --- include/linux/huge_mm.h | 10 +++ mm/shmem.c | 179 +++++++++++++++++++++++++++++++++------- 2 files changed, 161 insertions(+), 28 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 1fce6fee7766..b5339210268d 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -583,6 +583,16 @@ static inline bool thp_migration_supported(void) { return false; } + +static inline int highest_order(unsigned long orders) +{ + return 0; +} + +static inline int next_order(unsigned long *orders, int prev) +{ + return 0; +} #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ static inline int split_folio_to_list_to_order(struct folio *folio, diff --git a/mm/shmem.c b/mm/shmem.c index 59cc26d44344..b50ddf013e37 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1611,6 +1611,106 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp) return result; } +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +static unsigned long anon_shmem_allowable_huge_orders(struct inode *inode, + struct vm_area_struct *vma, pgoff_t index, + bool global_huge) +{ + unsigned long mask = READ_ONCE(huge_anon_shmem_orders_always); + unsigned long within_size_orders = READ_ONCE(huge_anon_shmem_orders_within_size); + unsigned long vm_flags = vma->vm_flags; + /* + * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that + * are enabled for this vma. + */ + unsigned long orders = BIT(PMD_ORDER + 1) - 1; + loff_t i_size; + int order; + + if ((vm_flags & VM_NOHUGEPAGE) || + test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) + return 0; + + /* If the hardware/firmware marked hugepage support disabled. */ + if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED)) + return 0; + + /* + * Following the 'deny' semantics of the top level, force the huge + * option off from all mounts. 
+ */ + if (shmem_huge == SHMEM_HUGE_DENY) + return 0; + /* + * Only allow inherit orders if the top-level value is 'force', which + * means non-PMD sized THP can not override 'huge' mount option now. + */ + if (shmem_huge == SHMEM_HUGE_FORCE) + return READ_ONCE(huge_anon_shmem_orders_inherit); + + /* Allow mTHP that will be fully within i_size. */ + order = highest_order(within_size_orders); + while (within_size_orders) { + index = round_up(index + 1, order); + i_size = round_up(i_size_read(inode), PAGE_SIZE); + if (i_size >> PAGE_SHIFT >= index) { + mask |= within_size_orders; + break; + } + + order = next_order(&within_size_orders, order); + } + + if (vm_flags & VM_HUGEPAGE) + mask |= READ_ONCE(huge_anon_shmem_orders_madvise); + + if (global_huge) + mask |= READ_ONCE(huge_anon_shmem_orders_inherit); + + return orders & mask; +} + +static unsigned long anon_shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf, + struct address_space *mapping, pgoff_t index, + unsigned long orders) +{ + struct vm_area_struct *vma = vmf->vma; + unsigned long pages; + int order; + + orders = thp_vma_suitable_orders(vma, vmf->address, orders); + if (!orders) + return 0; + + /* Find the highest order that can add into the page cache */ + order = highest_order(orders); + while (orders) { + pages = 1UL << order; + index = round_down(index, pages); + if (!xa_find(&mapping->i_pages, &index, + index + pages - 1, XA_PRESENT)) + break; + order = next_order(&orders, order); + } + + return orders; +} +#else +static unsigned long anon_shmem_allowable_huge_orders(struct inode *inode, + struct vm_area_struct *vma, pgoff_t index, + bool global_huge) +{ + return 0; +} + +static unsigned long anon_shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf, + struct address_space *mapping, pgoff_t index, + unsigned long orders) +{ + return 0; +} +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + static struct folio *shmem_alloc_hugefolio(gfp_t gfp, struct shmem_inode_info *info, pgoff_t 
index, int order) { @@ -1639,38 +1739,55 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, return (struct folio *)page; } -static struct folio *shmem_alloc_and_add_folio(gfp_t gfp, - struct inode *inode, pgoff_t index, - struct mm_struct *fault_mm, bool huge) +static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf, + gfp_t gfp, struct inode *inode, pgoff_t index, + struct mm_struct *fault_mm, bool huge, unsigned long orders) { struct address_space *mapping = inode->i_mapping; struct shmem_inode_info *info = SHMEM_I(inode); - struct folio *folio; + struct vm_area_struct *vma = vmf ? vmf->vma : NULL; + unsigned long suitable_orders; + struct folio *folio = NULL; long pages; - int error; + int error, order; if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) huge = false; - if (huge) { - pages = HPAGE_PMD_NR; - index = round_down(index, HPAGE_PMD_NR); + if (huge || orders > 0) { + if (vma && vma_is_anon_shmem(vma) && orders) { + suitable_orders = anon_shmem_suitable_orders(inode, vmf, + mapping, index, orders); + } else { + pages = HPAGE_PMD_NR; + suitable_orders = BIT(HPAGE_PMD_ORDER); + index = round_down(index, HPAGE_PMD_NR); - /* - * Check for conflict before waiting on a huge allocation. - * Conflict might be that a huge page has just been allocated - * and added to page cache by a racing thread, or that there - * is already at least one small page in the huge extent. - * Be careful to retry when appropriate, but not forever! - * Elsewhere -EEXIST would be the right code, but not here. - */ - if (xa_find(&mapping->i_pages, &index, + /* + * Check for conflict before waiting on a huge allocation. + * Conflict might be that a huge page has just been allocated + * and added to page cache by a racing thread, or that there + * is already at least one small page in the huge extent. + * Be careful to retry when appropriate, but not forever! + * Elsewhere -EEXIST would be the right code, but not here. 
+ */ + if (xa_find(&mapping->i_pages, &index, index + HPAGE_PMD_NR - 1, XA_PRESENT)) - return ERR_PTR(-E2BIG); + return ERR_PTR(-E2BIG); + } - folio = shmem_alloc_hugefolio(gfp, info, index, HPAGE_PMD_ORDER); - if (!folio && pages == HPAGE_PMD_NR) - count_vm_event(THP_FILE_FALLBACK); + order = highest_order(suitable_orders); + while (suitable_orders) { + pages = 1 << order; + index = round_down(index, pages); + folio = shmem_alloc_hugefolio(gfp, info, index, order); + if (folio) + goto allocated; + + if (pages == HPAGE_PMD_NR) + count_vm_event(THP_FILE_FALLBACK); + order = next_order(&suitable_orders, order); + } } else { pages = 1; folio = shmem_alloc_folio(gfp, info, index); @@ -1678,6 +1795,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp, if (!folio) return ERR_PTR(-ENOMEM); +allocated: __folio_set_locked(folio); __folio_set_swapbacked(folio); @@ -1972,7 +2090,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, struct mm_struct *fault_mm; struct folio *folio; int error; - bool alloced; + bool alloced, huge; + unsigned long orders = 0; if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping))) return -EINVAL; @@ -2044,14 +2163,18 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, return 0; } - if (shmem_is_huge(inode, index, false, fault_mm, - vma ? vma->vm_flags : 0)) { + huge = shmem_is_huge(inode, index, false, fault_mm, + vma ? vma->vm_flags : 0); + /* Find hugepage orders that are allowed for anonymous shmem. 
*/ + if (vma && vma_is_anon_shmem(vma)) + orders = anon_shmem_allowable_huge_orders(inode, vma, index, huge); + if (huge || orders > 0) { gfp_t huge_gfp; huge_gfp = vma_thp_gfp_mask(vma); huge_gfp = limit_gfp_mask(huge_gfp, gfp); - folio = shmem_alloc_and_add_folio(huge_gfp, - inode, index, fault_mm, true); + folio = shmem_alloc_and_add_folio(vmf, huge_gfp, + inode, index, fault_mm, true, orders); if (!IS_ERR(folio)) { if (folio_test_pmd_mappable(folio)) count_vm_event(THP_FILE_ALLOC); @@ -2061,7 +2184,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, goto repeat; } - folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false); + folio = shmem_alloc_and_add_folio(vmf, gfp, inode, index, fault_mm, false, 0); if (IS_ERR(folio)) { error = PTR_ERR(folio); if (error == -EEXIST) @@ -2072,7 +2195,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, alloced: alloced = true; - if (folio_test_pmd_mappable(folio) && + if (folio_test_large(folio) && DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) < folio_next_index(folio) - 1) { struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb); From patchwork Mon May 13 05:08:16 2024
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Subject: [PATCH v2 6/7] mm: shmem: add mTHP size alignment in shmem_get_unmapped_area
Date: Mon, 13 May 2024 13:08:16 +0800

Although the top-level hugepage allocation can be turned off, anonymous shmem can still use mTHP by configuring the sysfs interface located at '/sys/kernel/mm/transparent_hugepage/hugepage-XXkb/shmem_enabled'. Therefore, add alignment for mTHP size to provide a suitable alignment address in shmem_get_unmapped_area().
Signed-off-by: Baolin Wang Tested-by: Lance Yang --- mm/shmem.c | 36 +++++++++++++++++++++++++++--------- 1 file changed, 27 insertions(+), 9 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index b50ddf013e37..8b020ff09c72 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -2404,6 +2404,7 @@ unsigned long shmem_get_unmapped_area(struct file *file, unsigned long inflated_len; unsigned long inflated_addr; unsigned long inflated_offset; + unsigned long hpage_size; if (len > TASK_SIZE) return -ENOMEM; @@ -2422,8 +2423,6 @@ unsigned long shmem_get_unmapped_area(struct file *file, if (shmem_huge == SHMEM_HUGE_DENY) return addr; - if (len < HPAGE_PMD_SIZE) - return addr; if (flags & MAP_FIXED) return addr; /* @@ -2435,8 +2434,11 @@ unsigned long shmem_get_unmapped_area(struct file *file, if (uaddr == addr) return addr; + hpage_size = HPAGE_PMD_SIZE; if (shmem_huge != SHMEM_HUGE_FORCE) { struct super_block *sb; + unsigned long __maybe_unused hpage_orders; + int order = 0; if (file) { VM_BUG_ON(file->f_op != &shmem_file_operations); @@ -2449,18 +2451,34 @@ unsigned long shmem_get_unmapped_area(struct file *file, if (IS_ERR(shm_mnt)) return addr; sb = shm_mnt->mnt_sb; + + /* + * Find the highest mTHP order used for anonymous shmem to + * provide a suitable alignment address. 
+ */ +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + hpage_orders = READ_ONCE(huge_anon_shmem_orders_always); + hpage_orders |= READ_ONCE(huge_anon_shmem_orders_within_size); + hpage_orders |= READ_ONCE(huge_anon_shmem_orders_madvise); + hpage_orders |= READ_ONCE(huge_anon_shmem_orders_inherit); + order = highest_order(hpage_orders); + hpage_size = PAGE_SIZE << order; +#endif } - if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER) + if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER && !order) return addr; } - offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1); - if (offset && offset + len < 2 * HPAGE_PMD_SIZE) + if (len < hpage_size) + return addr; + + offset = (pgoff << PAGE_SHIFT) & (hpage_size - 1); + if (offset && offset + len < 2 * hpage_size) return addr; - if ((addr & (HPAGE_PMD_SIZE-1)) == offset) + if ((addr & (hpage_size - 1)) == offset) return addr; - inflated_len = len + HPAGE_PMD_SIZE - PAGE_SIZE; + inflated_len = len + hpage_size - PAGE_SIZE; if (inflated_len > TASK_SIZE) return addr; if (inflated_len < len) @@ -2473,10 +2491,10 @@ unsigned long shmem_get_unmapped_area(struct file *file, if (inflated_addr & ~PAGE_MASK) return addr; - inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1); + inflated_offset = inflated_addr & (hpage_size - 1); inflated_addr += offset - inflated_offset; if (inflated_offset > offset) - inflated_addr += HPAGE_PMD_SIZE; + inflated_addr += hpage_size; if (inflated_addr > TASK_SIZE - len) return addr; From patchwork Mon May 13 05:08:17 2024
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 7/7] mm: shmem: add mTHP counters for anonymous shmem
Date: Mon, 13 May 2024 13:08:17 +0800
X-Mailer: git-send-email 2.39.3
Add mTHP counters for anonymous shmem.
Signed-off-by: Baolin Wang
---
 include/linux/huge_mm.h |  3 +++
 mm/huge_memory.c        |  6 ++++++
 mm/shmem.c              | 18 +++++++++++++++---
 3 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b5339210268d..e162498fef82 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -281,6 +281,9 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
 	MTHP_STAT_ANON_SWPOUT,
 	MTHP_STAT_ANON_SWPOUT_FALLBACK,
+	MTHP_STAT_FILE_ALLOC,
+	MTHP_STAT_FILE_FALLBACK,
+	MTHP_STAT_FILE_FALLBACK_CHARGE,
 	__MTHP_STAT_COUNT
 };

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d3080a8843f2..fcda6ae604f6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -555,6 +555,9 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(anon_swpout, MTHP_STAT_ANON_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(anon_swpout_fallback, MTHP_STAT_ANON_SWPOUT_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
 
 static struct attribute *stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
@@ -562,6 +565,9 @@ static struct attribute *stats_attrs[] = {
 	&anon_fault_fallback_charge_attr.attr,
 	&anon_swpout_attr.attr,
 	&anon_swpout_fallback_attr.attr,
+	&file_alloc_attr.attr,
+	&file_fallback_attr.attr,
+	&file_fallback_charge_attr.attr,
 	NULL,
 };

diff --git a/mm/shmem.c b/mm/shmem.c
index 8b020ff09c72..fd2cb2e73a21 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1786,6 +1786,9 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 
 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
+#endif
 			order = next_order(&suitable_orders, order);
 		}
 	} else {
@@ -1805,9 +1808,15 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		if (xa_find(&mapping->i_pages, &index,
 				index + pages - 1, XA_PRESENT)) {
 			error = -EEXIST;
-		} else if (pages == HPAGE_PMD_NR) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		} else if (pages > 1) {
+			if (pages == HPAGE_PMD_NR) {
+				count_vm_event(THP_FILE_FALLBACK);
+				count_vm_event(THP_FILE_FALLBACK_CHARGE);
+			}
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK);
+			count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_FALLBACK_CHARGE);
+#endif
 		}
 		goto unlock;
 	}
@@ -2178,6 +2187,9 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	if (!IS_ERR(folio)) {
 		if (folio_test_pmd_mappable(folio))
 			count_vm_event(THP_FILE_ALLOC);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);
+#endif
 		goto alloced;
 	}
 	if (PTR_ERR(folio) == -EEXIST)
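[Not part of the patch, a usage sketch for reviewers: the DEFINE_MTHP_STAT_ATTR entries above should expose the new per-size file counters in sysfs next to the existing anon mTHP stats. The 64kB size below is an arbitrary example; substitute any mTHP size enabled on the system.]

```shell
# Where the new file_* counters are expected to appear once the patch
# is applied; each file holds a single per-order event count.
base=/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats
for stat in file_alloc file_fallback file_fallback_charge; do
    # Print the expected sysfs path; on a patched kernel, `cat` it instead.
    echo "$base/$stat"
done
```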