From patchwork Tue Jun 18 09:12:41 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13701999
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Muchun Song, "Matthew Wilcox (Oracle)", David Hildenbrand, Huang Ying, Kefeng Wang
Subject: [PATCH v2 3/4] mm: memory: improve copy_user_large_folio()
Date: Tue, 18 Jun 2024 17:12:41 +0800
Message-ID: <20240618091242.2140164-4-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240618091242.2140164-1-wangkefeng.wang@huawei.com>
References: <20240618091242.2140164-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0
Use nr_pages instead of pages_per_huge_page, and move the address alignment
from copy_user_large_folio() into the callers, since it is only needed when
we don't know which address will be accessed.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/hugetlb.c | 18 ++++++++----------
 mm/memory.c  | 11 ++++-------
 2 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 58d8703a1065..a41afeeb2188 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5488,9 +5488,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				ret = PTR_ERR(new_folio);
 				break;
 			}
-			ret = copy_user_large_folio(new_folio,
-						    pte_folio,
-						    addr, dst_vma);
+			ret = copy_user_large_folio(new_folio, pte_folio,
+						    ALIGN_DOWN(addr, sz), dst_vma);
 			folio_put(pte_folio);
 			if (ret) {
 				folio_put(new_folio);
@@ -6680,7 +6679,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	struct hstate *h = hstate_vma(dst_vma);
 	struct address_space *mapping = dst_vma->vm_file->f_mapping;
 	pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
-	unsigned long size;
+	unsigned long size = huge_page_size(h);
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	pte_t _dst_pte;
 	spinlock_t *ptl;
@@ -6699,8 +6698,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		}

 		_dst_pte = make_pte_marker(PTE_MARKER_POISONED);
-		set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte,
-				huge_page_size(h));
+		set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, size);

 		/* No need to invalidate - it was non-present before */
 		update_mmu_cache(dst_vma, dst_addr, dst_pte);
@@ -6774,7 +6772,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
+		ret = copy_user_large_folio(folio, *foliop,
+					    ALIGN_DOWN(dst_addr, size), dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
@@ -6801,9 +6800,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	 * Add shared, newly allocated pages to the page cache.
 	 */
 	if (vm_shared && !is_continue) {
-		size = i_size_read(mapping->host) >> huge_page_shift(h);
 		ret = -EFAULT;
-		if (idx >= size)
+		if (idx >= (i_size_read(mapping->host) >> huge_page_shift(h)))
 			goto out_release_nounlock;

 		/*
@@ -6860,7 +6858,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	if (wp_enabled)
 		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);

-	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, huge_page_size(h));
+	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, size);

 	hugetlb_count_add(pages_per_huge_page(h), dst_mm);

diff --git a/mm/memory.c b/mm/memory.c
index a48a790a2b5b..12115e45dc24 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6496,20 +6496,17 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint, struct vm_area_struct *vma)
 {
-	unsigned int pages_per_huge_page = folio_nr_pages(dst);
-	unsigned long addr = addr_hint &
-		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+	unsigned int nr_pages = folio_nr_pages(dst);
 	struct copy_subpage_arg arg = {
 		.dst = dst,
 		.src = src,
 		.vma = vma,
 	};

-	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES))
-		return copy_user_gigantic_page(dst, src, addr, vma,
-					       pages_per_huge_page);
+	if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
+		return copy_user_gigantic_page(dst, src, addr_hint, vma, nr_pages);

-	return process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
+	return process_huge_page(addr_hint, nr_pages, copy_subpage, &arg);
 }

 long copy_folio_from_user(struct folio *dst_folio,