From patchwork Wed Apr 24 13:59:24 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13641918
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Tony Luck, Miaohe Lin, Naoya Horiguchi, Matthew Wilcox, David Hildenbrand,
    Muchun Song, Benjamin LaHaise, Zi Yan, Jiaqi Yan, Hugh Dickins,
    Vishal Moola, Kefeng Wang
Subject: [PATCH v2 05/10] mm: remove MIGRATE_SYNC_NO_COPY mode
Date: Wed, 24 Apr 2024 21:59:24 +0800
Message-ID: <20240424135929.2847185-6-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240424135929.2847185-1-wangkefeng.wang@huawei.com>
References: <20240424135929.2847185-1-wangkefeng.wang@huawei.com>
Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced MIGRATE_SYNC_NO_COPY so that the page copy could be offloaded
to a device DMA engine. The mode is only consulted in
__migrate_device_pages() to decide whether to copy the old page, and it
was only ever set by the HMM migration path. Since the previous cleanup
removed the last place that sets MIGRATE_SYNC_NO_COPY, remove the
now-unused mode.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c      |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 6ed5507cd330..dc7a10f2a6e2 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -410,17 +410,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown. */
 	spin_lock(&mapping->i_private_lock);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 412f295acebe..6df794ed4066 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1128,10 +1128,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
  */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
diff --git a/mm/migrate.c b/mm/migrate.c
index ce4142ac8565..6a9bb4af2595 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -697,10 +697,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
@@ -929,7 +926,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 	/* Only writeback folios in full synchronous migration */
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		return -EBUSY;
@@ -1187,7 +1183,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
 			break;
 		default:
 			rc = -EBUSY;
@@ -1398,7 +1393,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		goto out;
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		goto out;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b42d3545ca85..6e7967853477 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1752,14 +1752,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */