From patchwork Mon Jan 29 07:09:29 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13535055
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
 David Hildenbrand, Muchun Song, Benjamin LaHaise, Kefeng Wang
Subject: [PATCH rfc 4/9] mm: remove MIGRATE_SYNC_NO_COPY mode
Date: Mon, 29 Jan 2024 15:09:29 +0800
Message-ID: <20240129070934.3717659-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240129070934.3717659-1-wangkefeng.wang@huawei.com>
References: <20240129070934.3717659-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced the MIGRATE_SYNC_NO_COPY mode to allow the page copy to be
offloaded to a device DMA engine. The mode is only consulted by
__migrate_device_pages() to decide whether or not to copy the old page,
and it was only ever set on the HMM migration path. Since the previous
cleanup in this series removed the last place that sets
MIGRATE_SYNC_NO_COPY, the now-unused mode itself can be removed.
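For context: every migrate callback that must copy page contents under
its own lock followed the same convention, which this patch deletes from
each of them. A minimal sketch of that old pattern (example_migrate_folio
is a hypothetical callback name for illustration; the check and return
value mirror the aio, balloon and zsmalloc hunks below):

/*
 * Old convention: a callback whose copy must happen under a subsystem
 * lock cannot support the DMA-offload workflow, so it rejected
 * MIGRATE_SYNC_NO_COPY up front.
 */
static int example_migrate_folio(struct address_space *mapping,
				 struct folio *dst, struct folio *src,
				 enum migrate_mode mode)
{
	if (mode == MIGRATE_SYNC_NO_COPY)
		return -EINVAL;

	folio_migrate_copy(dst, src);	/* copy contents and flags */
	return MIGRATEPAGE_SUCCESS;
}

With the mode gone, these callbacks unconditionally take the
folio_migrate_copy() path.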
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c     |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index bb2ff48991f3..1d0ca2a2776d 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -409,17 +409,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown. */
 	spin_lock(&mapping->i_private_lock);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index ee13c2ca8ad2..52839ffdd9a1 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1129,10 +1129,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
 */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index a51ceebbe3b1..107965bbc852 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -668,10 +668,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
@@ -901,7 +898,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 	/* Only writeback folios in full synchronous migration */
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		return -EBUSY;
@@ -1159,7 +1155,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	 */
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		rc = -EBUSY;
@@ -1370,7 +1365,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		goto out;
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		goto out;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c937635e0ad1..b9ffe1a041ca 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1811,14 +1811,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */
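
After this patch enum migrate_mode is back to three values, and the
"full synchronous migration only" gates in mm/migrate.c keep their
single-case switch form. A sketch of the result, plus a hypothetical
helper (migrate_mode_is_full_sync is not part of this patch) showing
that those switches could now be flattened to a plain comparison:

/* Remaining modes once MIGRATE_SYNC_NO_COPY is dropped. */
enum migrate_mode {
	MIGRATE_ASYNC,		/* never block */
	MIGRATE_SYNC_LIGHT,	/* block, but not on ->writepage */
	MIGRATE_SYNC,		/* block when migrating pages */
};

/* Hypothetical follow-up helper: equivalent to the remaining
 * "case MIGRATE_SYNC: break; default: fail" switches above. */
static inline bool migrate_mode_is_full_sync(enum migrate_mode mode)
{
	return mode == MIGRATE_SYNC;
}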