From patchwork Tue Jul 26 13:27:51 2022
X-Patchwork-Submitter: Liu Zixian
X-Patchwork-Id: 12929286
From: Liu Zixian
Subject: [PATCH v2] shmem: support huge_fault to avoid pmd split
Date: Tue, 26 Jul 2022 21:27:51 +0800
Message-ID: <20220726132751.1639-1-liuzixian4@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
Precedence: bulk
Transparent hugepages on tmpfs are useful for reducing TLB misses, but a
huge page gets split during a copy-on-write memory fault. This happens,
for example, when we mprotect() and rewrite a code segment (which is a
private file mapping) to hotpatch a running process. Users of the huge=
mount option prefer to keep huge pages after CoW. We can avoid the split
by adding a huge_fault handler.

Reported-by: kernel test robot
Signed-off-by: Liu Zixian
---
v2: removed redundant prep_transhuge_page
---
 mm/shmem.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index a6f565308..5074dff08 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2120,6 +2120,50 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static vm_fault_t shmem_huge_fault(struct vm_fault *vmf,
+				   enum page_entry_size pe_size)
+{
+	vm_fault_t ret = VM_FAULT_FALLBACK;
+	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+	struct page *old_page, *new_page;
+	gfp_t gfp_flags = GFP_HIGHUSER_MOVABLE | __GFP_COMP;
+
+	/* read or shared fault will not split huge pmd */
+	if (!(vmf->flags & FAULT_FLAG_WRITE) ||
+	    (vmf->vma->vm_flags & VM_SHARED))
+		return VM_FAULT_FALLBACK;
+	if (pe_size != PE_SIZE_PMD)
+		return VM_FAULT_FALLBACK;
+
+	if (pmd_none(*vmf->pmd)) {
+		if (shmem_fault(vmf) & VM_FAULT_ERROR)
+			goto out;
+		if (!PageTransHuge(vmf->page))
+			goto out;
+		old_page = vmf->page;
+	} else {
+		old_page = pmd_page(*vmf->pmd);
+		page_remove_rmap(old_page, vmf->vma, true);
+		pmdp_huge_clear_flush(vmf->vma, haddr, vmf->pmd);
+		add_mm_counter(vmf->vma->vm_mm, MM_SHMEMPAGES,
+			       -HPAGE_PMD_NR);
+	}
+
+	new_page = &vma_alloc_folio(gfp_flags, HPAGE_PMD_ORDER,
+				    vmf->vma, haddr, true)->page;
+	if (!new_page)
+		goto out;
+	copy_user_huge_page(new_page, old_page, haddr, vmf->vma,
+			    HPAGE_PMD_NR);
+	__SetPageUptodate(new_page);
+
+	ret = do_set_pmd(vmf, new_page);
+
+out:
+	if (vmf->page) {
+		unlock_page(vmf->page);
+		put_page(vmf->page);
+	}
+	return ret;
+}
+
 unsigned long shmem_get_unmapped_area(struct file *file,
 				      unsigned long uaddr, unsigned long len,
 				      unsigned long pgoff, unsigned long flags)
@@ -3884,6 +3928,7 @@ static const struct super_operations shmem_ops = {
 
 static const struct vm_operations_struct shmem_vm_ops = {
 	.fault = shmem_fault,
+	.huge_fault = shmem_huge_fault,
 	.map_pages = filemap_map_pages,
 #ifdef CONFIG_NUMA
 	.set_policy = shmem_set_policy,
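For reference, the userspace side of the scenario the commit message describes can be reproduced with a short sketch: privately map a tmpfs-backed file read-only (like a mapped code segment), mprotect() it writable, and write to it, which triggers the copy-on-write fault the patch handles. This is an illustrative reproducer only, not part of the patch; memfd_create() is used as a convenient tmpfs-backed file, and all names are made up for the example.

```c
/*
 * Illustrative reproducer for the hotpatch-style CoW scenario:
 * a private mapping of a tmpfs-backed file is made writable and
 * rewritten in place. Whether the PMD stays huge afterwards must
 * be checked on the kernel side (e.g. via /proc/self/smaps).
 */
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN (2UL * 1024 * 1024)	/* one PMD-sized region */

int trigger_private_cow(void)
{
	/* tmpfs-backed file standing in for the mapped binary */
	int fd = memfd_create("hotpatch-demo", 0);
	if (fd < 0 || ftruncate(fd, MAP_LEN) < 0)
		return -1;

	/* Map privately, read-only, like a code segment. */
	char *code = mmap(NULL, MAP_LEN, PROT_READ, MAP_PRIVATE, fd, 0);
	if (code == MAP_FAILED)
		return -1;

	/* Hotpatch step: make it writable, then rewrite in place. */
	if (mprotect(code, MAP_LEN, PROT_READ | PROT_WRITE) < 0)
		return -1;
	memset(code, 0x90, 4096);	/* write fault -> copy-on-write */

	/* The private copy changed ... */
	if (code[0] != (char)0x90)
		return -1;
	/* ... but the backing tmpfs file did not. */
	char byte;
	if (pread(fd, &byte, 1, 0) != 1 || byte != 0)
		return -1;

	munmap(code, MAP_LEN);
	close(fd);
	return 0;
}
```

Without the patch, the write fault above splits a huge PMD in the private mapping; with a huge_fault handler installed, the fault can be served with a fresh huge page instead.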