From patchwork Fri May 24 05:28:43 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13672724
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: ,
Cc: Tony Luck, Miaohe Lin, , Matthew Wilcox, David Hildenbrand,
 Muchun Song, Benjamin LaHaise, , Zi Yan, Jiaqi Yan, Hugh Dickins,
 Vishal Moola, Alistair Popple, Kefeng Wang
Subject: [PATCH 5/5] mm: remove MIGRATE_SYNC_NO_COPY mode
Date: Fri, 24 May 2024 13:28:43 +0800
Message-ID: <20240524052843.182275-6-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240524052843.182275-1-wangkefeng.wang@huawei.com>
References: <20240524052843.182275-1-wangkefeng.wang@huawei.com>

Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced the MIGRATE_SYNC_NO_COPY mode to allow offloading the page copy
to a device DMA engine. The mode is only checked in __migrate_device_pages()
to decide whether or not to copy the old page, and it is only set from the
hmm code. Since the previous cleanup removed the only place that sets
MIGRATE_SYNC_NO_COPY, the now-unused mode can be removed.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Jane Chu
---
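A minimal sketch of the end state (illustrative only, not part of the diff
below; "example_migrate_folio" is a made-up name): once MIGRATE_SYNC_NO_COPY
is gone, an aops->migrate_folio() callback no longer needs a no-copy guard
and can unconditionally copy page contents and flags with the CPU, mirroring
the simplified aio and hugetlbfs paths in the diff.

#include <linux/fs.h>
#include <linux/migrate.h>

static int example_migrate_folio(struct address_space *mapping,
				 struct folio *dst, struct folio *src,
				 enum migrate_mode mode)
{
	/* No "if (mode == MIGRATE_SYNC_NO_COPY) return -EINVAL;" guard needed. */
	folio_migrate_copy(dst, src);	/* copy the page data, then the folio flags */
	return MIGRATEPAGE_SUCCESS;
}
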
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c      |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 57c9f7c077e6..07ff8bbdcd2a 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -410,17 +410,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown. */
 	spin_lock(&mapping->i_private_lock);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 412f295acebe..6df794ed4066 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1128,10 +1128,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
  */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 1d1cb832fdb4..e04b451c4289 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -671,10 +671,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
@@ -903,7 +900,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 		/* Only writeback folios in full synchronous migration */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			return -EBUSY;
@@ -1161,7 +1157,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			rc = -EBUSY;
@@ -1372,7 +1367,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 			goto out;
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			goto out;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b42d3545ca85..6e7967853477 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1752,14 +1752,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */