From patchwork Wed Feb 7 13:22:03 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13548491
From: Tong Tiangen
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage()
Date: Wed, 7 Feb 2024 21:22:03 +0800
Message-ID: <20240207132204.1720444-5-tongtiangen@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240207132204.1720444-1-tongtiangen@huawei.com>
References: <20240207132204.1720444-1-tongtiangen@huawei.com>

The kernel already supports several scenarios that can tolerate memory
errors while copying a page [1-5], all of which are implemented via
copy_mc_[user]_highpage(). arm64 should support this mechanism as well.

Because of MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to select it.

Add a new helper, copy_mc_page(), which provides a page copy that is
safe against hardware memory errors. Its code logic is the same as
copy_page(); the main difference is that the ldp instructions in
copy_mc_page() carry the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE.
The shared logic is therefore extracted into copy_page_template.S.
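To make the calling convention concrete, here is a minimal sketch of how
a scenario such as the copy-on-write recovery in [1] consumes this
helper. The function name and the -EHWPOISON policy below are
illustrative only, not code from this patch:

	#include <linux/highmem.h>	/* copy_mc_user_highpage() */
	#include <linux/mm.h>

	static inline int cow_copy_one_page(struct page *dst, struct page *src,
					    unsigned long addr,
					    struct vm_area_struct *vma)
	{
		/*
		 * A non-zero return means the source page hit an uncorrected
		 * memory error; dst must not be mapped, and the caller should
		 * fail or retry the fault rather than consume poisoned data.
		 */
		if (copy_mc_user_highpage(dst, src, addr, vma))
			return -EHWPOISON;

		return 0;
	}
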
[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")

Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/mte.h        |  9 +++++
 arch/arm64/include/asm/page.h       | 10 ++++++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/copy_mc_page.S       | 37 +++++++++++++++++++
 arch/arm64/lib/copy_page.S          | 50 +++----------------------
 arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                | 29 +++++++++++++++
 arch/arm64/mm/copypage.c            | 45 +++++++++++++++++++++++
 include/linux/highmem.h             |  8 +++++
 9 files changed, 201 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif

 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o

+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o

 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..1e5fe6952869
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a page from src to dest (both are page aligned), memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if the copy succeeds, or -EFAULT if anything goes
+ *	     wrong while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 6a56d7cf309d..5499f507bb75 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,52 +17,12 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(__pi_copy_page)
-	ldp	x2, x3, [x1]
-	ldp	x4, x5, [x1, #16]
-	ldp	x6, x7, [x1, #32]
-	ldp	x8, x9, [x1, #48]
-	ldp	x10, x11, [x1, #64]
-	ldp	x12, x13, [x1, #80]
-	ldp	x14, x15, [x1, #96]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #256
-	add	x1, x1, #128
-1:
-	tst	x0, #(PAGE_SIZE - 1)
-
-	stnp	x2, x3, [x0, #-256]
-	ldp	x2, x3, [x1]
-	stnp	x4, x5, [x0, #16 - 256]
-	ldp	x4, x5, [x1, #16]
-	stnp	x6, x7, [x0, #32 - 256]
-	ldp	x6, x7, [x1, #32]
-	stnp	x8, x9, [x0, #48 - 256]
-	ldp	x8, x9, [x1, #48]
-	stnp	x10, x11, [x0, #64 - 256]
-	ldp	x10, x11, [x1, #64]
-	stnp	x12, x13, [x0, #80 - 256]
-	ldp	x12, x13, [x1, #80]
-	stnp	x14, x15, [x0, #96 - 256]
-	ldp	x14, x15, [x1, #96]
-	stnp	x16, x17, [x0, #112 - 256]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #128
-	add	x1, x1, #128
-
-	b.ne	1b
-
-	stnp	x2, x3, [x0, #-256]
-	stnp	x4, x5, [x0, #16 - 256]
-	stnp	x6, x7, [x0, #32 - 256]
-	stnp	x8, x9, [x0, #48 - 256]
-	stnp	x10, x11, [x0, #64 - 256]
-	stnp	x12, x13, [x0, #80 - 256]
-	stnp	x14, x15, [x0, #96 - 256]
-	stnp	x16, x17, [x0, #112 - 256]
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp	\reg1, \reg2, [\ptr, \val]
+	.endm
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
 	ret
 SYM_FUNC_END(__pi_copy_page)
 SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..b3ddec2c7a27
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+	ldp1	x2, x3, x1, #0
+	ldp1	x4, x5, x1, #16
+	ldp1	x6, x7, x1, #32
+	ldp1	x8, x9, x1, #48
+	ldp1	x10, x11, x1, #64
+	ldp1	x12, x13, x1, #80
+	ldp1	x14, x15, x1, #96
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+	stnp	x2, x3, [x0, #-256]
+	ldp1	x2, x3, x1, #0
+	stnp	x4, x5, [x0, #16 - 256]
+	ldp1	x4, x5, x1, #16
+	stnp	x6, x7, [x0, #32 - 256]
+	ldp1	x6, x7, x1, #32
+	stnp	x8, x9, [x0, #48 - 256]
+	ldp1	x8, x9, x1, #48
+	stnp	x10, x11, [x0, #64 - 256]
+	ldp1	x10, x11, x1, #64
+	stnp	x12, x13, [x0, #80 - 256]
+	ldp1	x12, x13, x1, #80
+	stnp	x14, x15, [x0, #96 - 256]
+	ldp1	x14, x15, x1, #96
+	stnp	x16, x17, [x0, #112 - 256]
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..50ef24318281 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination page, machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if the copy succeeds, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+KERNEL_ME_SAFE(2f, ldgm x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov	x0, #0
+	ret
+
+2:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..ff0d9ceea2a4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying the page or the MTE tags.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c5ca1a1fc4f5..a42470ca42f2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif

 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,

 	return ret ? -EFAULT : 0;
 }
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)

 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 		unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif

 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
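
A closing note on how the guards above resolve (an illustrative sketch,
not part of the patch): generic callers keep calling the same helper,
and the definition they bind to depends on the architecture. The
function below is hypothetical and exists only to make the contract
visible:

	#include <linux/highmem.h>
	#include <linux/printk.h>

	/*
	 * Which copy_mc_highpage() this binds to:
	 *  - arm64 with CONFIG_ARCH_HAS_COPY_MC: the out-of-line, MTE-aware
	 *    version added in arch/arm64/mm/copypage.c above;
	 *  - an arch defining copy_mc_to_kernel() without the arch override
	 *    (e.g. x86): the generic inline fallback in linux/highmem.h;
	 *  - all other architectures: the plain wrapper around
	 *    copy_highpage(), which always returns 0.
	 */
	static int copy_page_report_poison(struct page *dst, struct page *src)
	{
		int ret = copy_mc_highpage(dst, src);	/* 0 or -EFAULT */

		if (ret)
			pr_err("uncorrected memory error while copying page\n");
		return ret;
	}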