From patchwork Fri Jun 14 22:15:21 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699142

From: Shivank Garg
To: , ,
CC: , , , , , , Byungchul Park
Subject: [RFC PATCH 1/5] mm: separate move/undo doing on folio list from migrate_pages_batch()
Date: Sat, 15 Jun 2024 03:45:21 +0530
Message-ID: <20240614221525.19170-2-shivankg@amd.com>
In-Reply-To: <20240614221525.19170-1-shivankg@amd.com>
References: <20240614221525.19170-1-shivankg@amd.com>
X-Mailing-List: dmaengine@vger.kernel.org
From: Byungchul Park

Functionally, no change. This is a preparatory patch picked from the luf
(lazy unmap flush) patch series. It improves code organization and
readability for the steps involving migrate_folio_move(): the move and
undo loops that operate on the folio lists are split out of
migrate_pages_batch() into their own helpers.
Signed-off-by: Byungchul Park
Signed-off-by: Shivank Garg
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c27b1f8097d4..6c36c6e0a360 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1606,6 +1606,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch (rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1628,7 +1703,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1764,42 +1839,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-					folio, dst, mode,
-					reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1808,20 +1852,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }

From patchwork Fri Jun 14 22:15:22 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699143
From: Shivank Garg
To: , ,
CC: , , , , ,
Subject: [RFC PATCH 2/5] mm: add folios_copy() for copying pages in batch during migration
Date: Sat, 15 Jun 2024 03:45:22 +0530
Message-ID: <20240614221525.19170-3-shivankg@amd.com>
In-Reply-To: <20240614221525.19170-1-shivankg@amd.com>
References: <20240614221525.19170-1-shivankg@amd.com>
X-Mailing-List: dmaengine@vger.kernel.org

This patch introduces the folios_copy() function to copy the folio
contents from a list of src folios to a corresponding list of dst folios.
This is a preparatory patch for batch page migration offloading.
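For reference, a hypothetical caller sketch (not part of this patch; the
helper name is made up) that sanity-checks the lockstep pairing contract
described in the kernel-doc below before handing both lists to
folios_copy():

	/*
	 * Illustration only: the n-th folio on @dst_list receives the
	 * contents of the n-th folio on @src_list, so the caller must
	 * build both lists in the same order and keep them the same
	 * length. folios_copy() may sleep.
	 */
	static void pair_and_copy(struct list_head *dst_list,
				  struct list_head *src_list)
	{
		struct folio *src, *dst;

		/* Check the lockstep assumption before copying. */
		dst = list_first_entry(dst_list, struct folio, lru);
		list_for_each_entry(src, src_list, lru) {
			WARN_ON_ONCE(folio_nr_pages(dst) != folio_nr_pages(src));
			dst = list_next_entry(dst, lru);
		}

		folios_copy(dst_list, src_list);
	}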
Signed-off-by: Shivank Garg
---
 include/linux/mm.h |  1 +
 mm/util.c          | 22 ++++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..cd5f37ec72f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1300,6 +1300,7 @@ void put_pages_list(struct list_head *pages);
 
 void split_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
+void folios_copy(struct list_head *dst_list, struct list_head *src_list);
 
 unsigned long nr_free_buffer_pages(void);
 
diff --git a/mm/util.c b/mm/util.c
index 5a6a9802583b..3a278db28429 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -811,6 +811,28 @@ void folio_copy(struct folio *dst, struct folio *src)
 }
 EXPORT_SYMBOL(folio_copy);
 
+/**
+ * folios_copy - Copy the contents of list of folios.
+ * @dst_list: Folios to copy to.
+ * @src_list: Folios to copy from.
+ *
+ * The folio contents are copied from @src_list to @dst_list.
+ * Assume the caller has validated that lists are not empty and both lists
+ * have equal number of folios. This may sleep.
+ */
+void folios_copy(struct list_head *dst_list,
+		struct list_head *src_list)
+{
+	struct folio *src, *dst;
+
+	dst = list_first_entry(dst_list, struct folio, lru);
+	list_for_each_entry(src, src_list, lru) {
+		cond_resched();
+		folio_copy(dst, src);
+		dst = list_next_entry(dst, lru);
+	}
+}
+
 int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
 int sysctl_overcommit_ratio __read_mostly = 50;
 unsigned long sysctl_overcommit_kbytes __read_mostly;

From patchwork Fri Jun 14 22:15:23 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699144
From: Shivank Garg
To: , ,
CC: , , , , ,
Subject: [RFC PATCH 3/5] mm: add migrate_folios_batch_move to batch the folio move operations
Date: Sat, 15 Jun 2024 03:45:23 +0530
Message-ID: <20240614221525.19170-4-shivankg@amd.com>
In-Reply-To: <20240614221525.19170-1-shivankg@amd.com>
References: <20240614221525.19170-1-shivankg@amd.com>
X-Mailing-List: dmaengine@vger.kernel.org

This is a preparatory patch that enables batch copying of folios
undergoing migration. By batching the folio content copies, we can
efficiently utilize the capabilities of DMA hardware.

Currently, the folio move operation is performed individually for each
folio, in a sequential manner:

	for_each_folio() {
		Copy folio metadata like flags and mappings
		Copy the folio bytes from src to dst
		Update PTEs with new mappings
	}

With this patch, we transition to a batch processing approach as shown
below:

	for_each_folio() {
		Copy folio metadata like flags and mappings
	}

	Batch copy all pages from src to dst

	for_each_folio() {
		Update PTEs with new mappings
	}
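The batch-move helper added below records each successfully prepared
folio's anon_vma pointer and old page-state bits in a single unsigned
long (mig_info->private), mirroring what the existing
__migrate_folio_record()/__migrate_folio_extract() pair does with
dst->private. A minimal sketch of that packing, assuming PAGE_OLD_STATES
masks the low-order state bits left free by the pointer's alignment
(illustration only; these helper names are not part of the patch):

	/*
	 * Illustration only: pack an anon_vma pointer together with the
	 * PAGE_WAS_MAPPED/PAGE_WAS_MLOCKED bits into one unsigned long,
	 * and unpack it the way the new __migrate_folio_extract_private()
	 * does. Relies on struct anon_vma being aligned so its low bits
	 * are always zero.
	 */
	static unsigned long pack_old_state(struct anon_vma *anon_vma,
					    int old_page_state)
	{
		return (unsigned long)anon_vma | (old_page_state & PAGE_OLD_STATES);
	}

	static void unpack_old_state(unsigned long private,
				     int *old_page_state,
				     struct anon_vma **anon_vmap)
	{
		*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
		*old_page_state = private & PAGE_OLD_STATES;
	}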
Signed-off-by: Shivank Garg
---
 mm/migrate.c | 217 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 215 insertions(+), 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6c36c6e0a360..fce69a494742 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,6 +57,11 @@
 
 #include "internal.h"
 
+struct migrate_folio_info {
+	unsigned long private;
+	struct list_head list;
+};
+
 bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct folio *folio = folio_get_nontail_page(page);
@@ -1055,6 +1060,14 @@ static void __migrate_folio_extract(struct folio *dst,
 	dst->private = NULL;
 }
 
+static void __migrate_folio_extract_private(unsigned long private,
+		int *old_page_state,
+		struct anon_vma **anon_vmap)
+{
+	*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
+	*old_page_state = private & PAGE_OLD_STATES;
+}
+
 /* Restore the source folio to the original state upon failure */
 static void migrate_folio_undo_src(struct folio *src,
 		int page_was_mapped,
@@ -1658,6 +1671,201 @@ static void migrate_folios_move(struct list_head *src_folios,
 	}
 }
 
+static void migrate_folios_batch_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	int rc, nr_pages = 0, nr_mig_folios = 0;
+	int old_page_state = 0;
+	struct anon_vma *anon_vma = NULL;
+	bool is_lru;
+	int is_thp = 0;
+	struct migrate_folio_info *mig_info, *mig_info2;
+	LIST_HEAD(temp_src_folios);
+	LIST_HEAD(temp_dst_folios);
+	LIST_HEAD(mig_info_list);
+
+	if (mode != MIGRATE_ASYNC) {
+		*retry += 1;
+		return;
+	}
+
+	/*
+	 * Iterate over the list of locked src/dst folios to copy the metadata
+	 */
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		mig_info = kmalloc(sizeof(*mig_info), GFP_KERNEL);
+		if (!mig_info)
+			break;
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+		is_lru = !__folio_test_movable(folio);
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+
+		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+		VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
+
+		/*
+		 * Use MIGRATE_SYNC_NO_COPY mode in migrate_folio family functions
+		 * to copy the flags, mapping and some other ancillary information.
+		 * This does everything except the page copy. The actual page copy
+		 * is handled later in a batch manner.
+		 */
+		if (likely(is_lru)) {
+			struct address_space *mapping = folio_mapping(folio);
+
+			if (!mapping)
+				rc = migrate_folio(mapping, dst, folio, MIGRATE_SYNC_NO_COPY);
+			else if (mapping_unmovable(mapping))
+				rc = -EOPNOTSUPP;
+			else if (mapping->a_ops->migrate_folio)
+				rc = mapping->a_ops->migrate_folio(mapping, dst, folio,
+						MIGRATE_SYNC_NO_COPY);
+			else
+				rc = fallback_migrate_folio(mapping, dst, folio,
+						MIGRATE_SYNC_NO_COPY);
+		} else {
+			/*
+			 * Let CPU handle the non-LRU pages for initial review.
+			 * TODO: implement
+			 * Can we move non-MOVABLE LRU case and mapping_unmovable case
+			 * in unmap_and_move_huge_page and migrate_folio_unmap?
+			 */
+			rc = -EAGAIN;
+		}
+		/*
+		 * Turning back after successful migrate_folio may create
+		 * side-effects as dst mapping/index and xarray are updated.
+		 */
+
+		/*
+		 * -EAGAIN: Move src/dst folios to tmp lists for retry
+		 * Other Errno: Put src folio on ret_folios list, remove the dst folio
+		 * Success: Copy the folio bytes, restoring working pte, unlock and
+		 * decrement refcounter
+		 */
+		if (rc == -EAGAIN) {
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+
+			kfree(mig_info);
+			list_move_tail(&folio->lru, &temp_src_folios);
+			list_move_tail(&dst->lru, &temp_dst_folios);
+			__migrate_folio_record(dst, old_page_state, anon_vma);
+		} else if (rc != MIGRATEPAGE_SUCCESS) {
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+
+			kfree(mig_info);
+			list_del(&dst->lru);
+			migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+					anon_vma, true, ret_folios);
+			migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		} else { /* MIGRATEPAGE_SUCCESS */
+			nr_mig_folios++;
+			mig_info->private = (unsigned long)((void *)anon_vma + old_page_state);
+			list_add_tail(&mig_info->list, &mig_info_list);
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+
+	/* Exit if folio list for batch migration is empty */
+	if (!nr_mig_folios)
+		goto out;
+
+	/* Batch copy the folios */
+	folios_copy(dst_folios, src_folios);
+
+	/*
+	 * Iterate the folio lists to remove migration pte and restore them
+	 * as working pte. Unlock the folios, add/remove them to LRU lists (if
+	 * applicable) and release the src folios.
+	 */
+	mig_info = list_first_entry(&mig_info_list, struct migrate_folio_info, list);
+	mig_info2 = list_next_entry(mig_info, list);
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+		__migrate_folio_extract_private(mig_info->private, &old_page_state, &anon_vma);
+		list_del(&dst->lru);
+		if (__folio_test_movable(folio)) {
+			VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
+			/*
+			 * We clear PG_movable under page_lock so any compactor
+			 * cannot try to migrate this page.
+			 */
+			folio_clear_isolated(folio);
+		}
+
+		/*
+		 * Anonymous and movable src->mapping will be cleared by
+		 * free_pages_prepare so don't reset it here for keeping
+		 * the type to work PageAnon, for example.
+		 */
+		if (!folio_mapping_flags(folio))
+			folio->mapping = NULL;
+
+		if (likely(!folio_is_zone_device(dst)))
+			flush_dcache_folio(dst);
+
+		/*
+		 * Below few steps are only applicable for lru pages which is
+		 * ensured as we have removed the non-lru pages from our list.
+		 */
+		folio_add_lru(dst);
+		if (old_page_state & PAGE_WAS_MLOCKED)
+			lru_add_drain(); // can this step be optimized for batch?
+		if (old_page_state & PAGE_WAS_MAPPED)
+			remove_migration_ptes(folio, dst, false);
+
+		folio_unlock(dst);
+		set_page_owner_migrate_reason(&dst->page, reason);
+
+		/*
+		 * Decrease refcount of dst. It will not free the page because
+		 * new page owner increased refcounter.
+		 */
+		folio_put(dst);
+		/* Remove the source folio from the list */
+		list_del(&folio->lru);
+		/* Drop an anon_vma reference if we took one */
+		if (anon_vma)
+			put_anon_vma(anon_vma);
+		folio_unlock(folio);
+		migrate_folio_done(folio, reason);
+
+		/* Page migration successful, increase stat counter */
+		stats->nr_succeeded += nr_pages;
+		stats->nr_thp_succeeded += is_thp;
+
+		list_del(&mig_info->list);
+		kfree(mig_info);
+		mig_info = mig_info2;
+		mig_info2 = list_next_entry(mig_info, list);
+
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+out:
+	/* Add tmp folios back to the list to let CPU re-attempt migration. */
+	list_splice(&temp_src_folios, src_folios);
+	list_splice(&temp_dst_folios, dst_folios);
+}
+
 static void migrate_folios_undo(struct list_head *src_folios,
 		struct list_head *dst_folios,
 		free_folio_t put_new_folio, unsigned long private,
@@ -1833,13 +2041,18 @@ static int migrate_pages_batch(struct list_head *from,
 	/* Flush TLBs for all unmapped folios */
 	try_to_unmap_flush();
 
-	retry = 1;
+	retry = 0;
+	/* Batch move the unmapped folios */
+	migrate_folios_batch_move(&unmap_folios, &dst_folios, put_new_folio,
+			private, mode, reason, ret_folios, stats, &retry,
+			&thp_retry, &nr_failed, &nr_retry_pages);
+
 	for (pass = 0; pass < nr_pass && retry; pass++) {
 		retry = 0;
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		/* Move the unmapped folios */
+		/* Move the remaining unmapped folios */
 		migrate_folios_move(&unmap_folios, &dst_folios,
 				put_new_folio, private, mode, reason,
 				ret_folios, stats, &retry, &thp_retry,

From patchwork Fri Jun 14 22:15:24 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699145
From: Shivank Garg
To: , ,
CC: , , , , , , Mike Day
Subject: [RFC PATCH 4/5] mm: add support for DMA folio Migration
Date: Sat, 15 Jun 2024 03:45:24 +0530
Message-ID: <20240614221525.19170-5-shivankg@amd.com>
In-Reply-To: <20240614221525.19170-1-shivankg@amd.com>
References: <20240614221525.19170-1-shivankg@amd.com>
X-Mailing-List: dmaengine@vger.kernel.org

From: Mike Day

DMA drivers should implement the following callbacks to enable folio
migration offloading:

migrate_dma() - takes the lists of src and dst folios undergoing
migration and is responsible for transferring the page contents from
each src folio to its dst folio.

can_migrate_dma() - performs the necessary checks to decide whether
DMA migration is supported for the given src and dst folios.

The DMA driver should also include a mechanism to call start_offloading()
and stop_offloading() for enabling and disabling migration offload,
respectively.
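A minimal sketch of how a driver might hook into this interface
(hypothetical module; the example_* names are made up, and the
migrate_dma callback here simply performs a CPU copy with folio_copy()
where a real driver would program its DMA hardware):

	// SPDX-License-Identifier: GPL-2.0
	/* Hypothetical example module, not part of this series. */
	#include <linux/module.h>
	#include <linux/mm.h>
	#include <linux/migrate_dma.h>

	static bool example_can_migrate_dma(struct folio *dst, struct folio *src)
	{
		/* Placeholder policy: claim support for every folio pair. */
		return true;
	}

	static void example_migrate_dma(struct list_head *dst_list,
					struct list_head *src_list)
	{
		struct folio *src, *dst;

		/* Placeholder: a real driver would program its DMA engine here. */
		dst = list_first_entry(dst_list, struct folio, lru);
		list_for_each_entry(src, src_list, lru) {
			folio_copy(dst, src);
			dst = list_next_entry(dst, lru);
		}
	}

	static struct migrator example_migrator = {
		.name		 = "example",
		.migrate_dma	 = example_migrate_dma,
		.can_migrate_dma = example_can_migrate_dma,
		.owner		 = THIS_MODULE,
	};

	static int __init example_init(void)
	{
		/* Route migration copies through this module's callbacks. */
		start_offloading(&example_migrator);
		return 0;
	}

	static void __exit example_exit(void)
	{
		/* Restore the kernel's default CPU-copy path. */
		stop_offloading();
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");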
Signed-off-by: Mike Day
Signed-off-by: Shivank Garg
---
 include/linux/migrate_dma.h | 36 ++++++++++++++++++++++++++
 mm/Kconfig                  |  8 ++++++
 mm/Makefile                 |  1 +
 mm/migrate.c                | 40 +++++++++++++++++++++++++++--
 mm/migrate_dma.c            | 51 +++++++++++++++++++++++++++++++++++++
 5 files changed, 134 insertions(+), 2 deletions(-)
 create mode 100644 include/linux/migrate_dma.h
 create mode 100644 mm/migrate_dma.c

diff --git a/include/linux/migrate_dma.h b/include/linux/migrate_dma.h
new file mode 100644
index 000000000000..307b234450c3
--- /dev/null
+++ b/include/linux/migrate_dma.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _MIGRATE_DMA_H
+#define _MIGRATE_DMA_H
+#include
+
+#define MIGRATOR_NAME_LEN 32
+struct migrator {
+	char name[MIGRATOR_NAME_LEN];
+	void (*migrate_dma)(struct list_head *dst_list, struct list_head *src_list);
+	bool (*can_migrate_dma)(struct folio *dst, struct folio *src);
+	struct rcu_head srcu_head;
+	struct module *owner;
+};
+
+extern struct migrator migrator;
+extern struct mutex migrator_mut;
+extern struct srcu_struct mig_srcu;
+
+#ifdef CONFIG_DMA_MIGRATION
+void srcu_mig_cb(struct rcu_head *head);
+void dma_update_migrator(struct migrator *mig);
+unsigned char *get_active_migrator_name(void);
+bool can_dma_migrate(struct folio *dst, struct folio *src);
+void start_offloading(struct migrator *migrator);
+void stop_offloading(void);
+#else
+static inline void srcu_mig_cb(struct rcu_head *head) { };
+static inline void dma_update_migrator(struct migrator *mig) { };
+static inline unsigned char *get_active_migrator_name(void) { return NULL; };
+static inline bool can_dma_migrate(struct folio *dst, struct folio *src) {return true; };
+static inline void start_offloading(struct migrator *migrator) { };
+static inline void stop_offloading(void) { };
+#endif /* CONFIG_DMA_MIGRATION */
+
+#endif /* _MIGRATE_DMA_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index ffc3a2ba3a8c..e3ff6583fedb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -662,6 +662,14 @@ config MIGRATION
 config DEVICE_MIGRATION
 	def_bool MIGRATION && ZONE_DEVICE
 
+config DMA_MIGRATION
+	bool "Migrate Pages offloading copy to DMA"
+	def_bool n
+	depends on MIGRATION
+	help
+	  An interface allowing external modules or driver to offload
+	  page copying in page migration.
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	bool
diff --git a/mm/Makefile b/mm/Makefile
index e4b5b75aaec9..1e31fb79d700 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -87,6 +87,7 @@ obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
+obj-$(CONFIG_DMA_MIGRATION) += migrate_dma.o
 obj-$(CONFIG_NUMA) += memory-tiers.o
 obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
 obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
diff --git a/mm/migrate.c b/mm/migrate.c
index fce69a494742..db826e3862a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -656,6 +657,37 @@ void folio_migrate_copy(struct folio *newfolio, struct folio *folio)
 }
 EXPORT_SYMBOL(folio_migrate_copy);
 
+DEFINE_STATIC_CALL(_folios_copy, folios_copy);
+DEFINE_STATIC_CALL(_can_dma_migrate, can_dma_migrate);
+
+#ifdef CONFIG_DMA_MIGRATION
+void srcu_mig_cb(struct rcu_head *head)
+{
+	static_call_query(_folios_copy);
+}
+
+void dma_update_migrator(struct migrator *mig)
+{
+	int index;
+
+	mutex_lock(&migrator_mut);
+	index = srcu_read_lock(&mig_srcu);
+	strscpy(migrator.name, mig ? mig->name : "kernel", MIGRATOR_NAME_LEN);
+	static_call_update(_folios_copy, mig ? mig->migrate_dma : folios_copy);
+	static_call_update(_can_dma_migrate, mig ? mig->can_migrate_dma : can_dma_migrate);
+	if (READ_ONCE(migrator.owner))
+		module_put(migrator.owner);
+	xchg(&migrator.owner, mig ? mig->owner : NULL);
+	if (READ_ONCE(migrator.owner))
+		try_module_get(migrator.owner);
+	srcu_read_unlock(&mig_srcu, index);
+	mutex_unlock(&migrator_mut);
+	call_srcu(&mig_srcu, &migrator.srcu_head, srcu_mig_cb);
+	srcu_barrier(&mig_srcu);
+}
+
+#endif /* CONFIG_DMA_MIGRATION */
+
 /************************************************************
  *                    Migration functions
  ***********************************************************/
@@ -1686,6 +1718,7 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru;
 	int is_thp = 0;
+	bool can_migrate = true;
 	struct migrate_folio_info *mig_info, *mig_info2;
 	LIST_HEAD(temp_src_folios);
 	LIST_HEAD(temp_dst_folios);
@@ -1720,7 +1753,10 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
 		 * This does everything except the page copy. The actual page copy
 		 * is handled later in a batch manner.
 		 */
-		if (likely(is_lru)) {
+		can_migrate = static_call(_can_dma_migrate)(dst, folio);
+		if (unlikely(!can_migrate))
+			rc = -EAGAIN;
+		else if (likely(is_lru)) {
 			struct address_space *mapping = folio_mapping(folio);
 
 			if (!mapping)
@@ -1786,7 +1822,7 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
 		goto out;
 
 	/* Batch copy the folios */
-	folios_copy(dst_folios, src_folios);
+	static_call(_folios_copy)(dst_folios, src_folios);
 
 	/*
 	 * Iterate the folio lists to remove migration pte and restore them
diff --git a/mm/migrate_dma.c b/mm/migrate_dma.c
new file mode 100644
index 000000000000..c8b078fdff17
--- /dev/null
+++ b/mm/migrate_dma.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+#include
+
+atomic_t dispatch_to_dma = ATOMIC_INIT(0);
+EXPORT_SYMBOL_GPL(dispatch_to_dma);
+
+DEFINE_MUTEX(migrator_mut);
+DEFINE_SRCU(mig_srcu);
+
+struct migrator migrator = {
+	.name = "kernel",
+	.migrate_dma = folios_copy,
+	.can_migrate_dma = can_dma_migrate,
+	.srcu_head.func = srcu_mig_cb,
+	.owner = NULL,
+};
+
+bool can_dma_migrate(struct folio *dst, struct folio *src)
+{
+	return true;
+}
+EXPORT_SYMBOL_GPL(can_dma_migrate);
+
+void start_offloading(struct migrator *m)
+{
+	int offloading = 0;
+
+	pr_info("starting migration offload by %s\n", m->name);
+	dma_update_migrator(m);
+	atomic_try_cmpxchg(&dispatch_to_dma, &offloading, 1);
+}
+EXPORT_SYMBOL_GPL(start_offloading);
+
+void stop_offloading(void)
+{
+	int offloading = 1;
+
+	pr_info("stopping migration offload by %s\n", migrator.name);
+	dma_update_migrator(NULL);
+	atomic_try_cmpxchg(&dispatch_to_dma, &offloading, 0);
+}
+EXPORT_SYMBOL_GPL(stop_offloading);
+
+unsigned char *get_active_migrator_name(void)
+{
+	return migrator.name;
+}
+EXPORT_SYMBOL_GPL(get_active_migrator_name);

From patchwork Fri Jun 14 22:15:25 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699146
From patchwork Fri Jun 14 22:15:25 2024
X-Patchwork-Submitter: Shivank Garg
X-Patchwork-Id: 13699146
From: Shivank Garg
To: , ,
CC: , , , , ,
Subject: [RFC PATCH 5/5] dcbm: add dma core batch migrator for batch page offloading
Date: Sat, 15 Jun 2024 03:45:25 +0530
Message-ID: <20240614221525.19170-6-shivankg@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240614221525.19170-1-shivankg@amd.com>
References: <20240614221525.19170-1-shivankg@amd.com>
Precedence: bulk
X-Mailing-List: dmaengine@vger.kernel.org
MIME-Version: 1.0

This commit adds example code that shows how to leverage the mm layer's
migration offload support for batch page migration. The dcbm (DMA core batch
migrator) driver provides a generic interface that uses DMAEngine for
end-to-end testing of the batch page migration offload feature, which makes
the functionality easy to test and validate.
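[Editor's note] One way to exercise the path end to end (an illustration, not part of this series) is to enable offloading as shown next and then force a page migration from userspace with move_pages(2). The snippet below assumes libnuma headers are installed and that node 1 exists as a destination; it does not verify that the copy actually went through the DMA engine.

/* migrate_test.c - hypothetical test helper; build: gcc migrate_test.c -lnuma */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	long page_sz = sysconf(_SC_PAGESIZE);
	void *pages[1];
	int nodes[1] = { 1 };		/* assumed destination node */
	int status[1] = { -1 };
	void *buf = NULL;

	if (posix_memalign(&buf, page_sz, page_sz))
		return 1;
	memset(buf, 0xaa, page_sz);	/* fault the page in before migrating */

	pages[0] = buf;
	if (move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE))
		perror("move_pages");
	printf("page is now on node %d\n", status[0]);

	free(buf);
	return 0;
}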
Enable DCBM offload:
  echo 1 > /sys/kernel/dcbm/offloading

Disable DCBM offload:
  echo 0 > /sys/kernel/dcbm/offloading

Signed-off-by: Shivank Garg
---
 drivers/dma/Kconfig       |   2 +
 drivers/dma/Makefile      |   1 +
 drivers/dma/dcbm/Kconfig  |   7 ++
 drivers/dma/dcbm/Makefile |   1 +
 drivers/dma/dcbm/dcbm.c   | 229 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 240 insertions(+)
 create mode 100644 drivers/dma/dcbm/Kconfig
 create mode 100644 drivers/dma/dcbm/Makefile
 create mode 100644 drivers/dma/dcbm/dcbm.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index e928f2ca0f1e..376bd13d46f8 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -750,6 +750,8 @@ config XILINX_ZYNQMP_DPDMA
 # driver files
 source "drivers/dma/bestcomm/Kconfig"

+source "drivers/dma/dcbm/Kconfig"
+
 source "drivers/dma/mediatek/Kconfig"

 source "drivers/dma/ptdma/Kconfig"

diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index dfd40d14e408..7d67fc29bce2 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -22,6 +22,7 @@ obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
 obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
 obj-$(CONFIG_BCM_SBA_RAID) += bcm-sba-raid.o
+obj-$(CONFIG_DCBM_DMA) += dcbm/
 obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
 obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
 obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
diff --git a/drivers/dma/dcbm/Kconfig b/drivers/dma/dcbm/Kconfig
new file mode 100644
index 000000000000..e58eca03fb52
--- /dev/null
+++ b/drivers/dma/dcbm/Kconfig
@@ -0,0 +1,7 @@
+config DCBM_DMA
+	bool "DMA Core Batch Migrator"
+	depends on DMA_ENGINE
+	default n
+	help
+	  Interface driver for batch page migration offloading. Say Y
+	  if you want to try offloading with DMAEngine APIs.
diff --git a/drivers/dma/dcbm/Makefile b/drivers/dma/dcbm/Makefile
new file mode 100644
index 000000000000..56ba47cce0f1
--- /dev/null
+++ b/drivers/dma/dcbm/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_DCBM_DMA) += dcbm.o
diff --git a/drivers/dma/dcbm/dcbm.c b/drivers/dma/dcbm/dcbm.c
new file mode 100644
index 000000000000..dac87fa55327
--- /dev/null
+++ b/drivers/dma/dcbm/dcbm.c
@@ -0,0 +1,229 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * DMA batch-offloading interface driver
+ *
+ * Copyright (C) 2024 Advanced Micro Devices, Inc.
+ */
+
+/*
+ * This code exemplifies how to leverage the mm layer's migration offload
+ * support for batch page offloading using DMA Engine APIs.
+ * Developers can use this template to write an interface for custom hardware
+ * accelerators with specialized capabilities for batch page migration.
+ * This interface driver works end to end and can be used to test the patch
+ * series without special hardware, provided DMAEngine support is available.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static struct dma_chan *chan;
+static int is_dispatching;
+
+static void folios_copy_dma(struct list_head *dst_list, struct list_head *src_list);
+static bool can_migrate_dma(struct folio *dst, struct folio *src);
+
+static DEFINE_MUTEX(migratecfg_mutex);
+
+/* DMA Core Batch Migrator */
+struct migrator dmigrator = {
+	.name = "DCBM\0",
+	.migrate_dma = folios_copy_dma,
+	.can_migrate_dma = can_migrate_dma,
+	.owner = THIS_MODULE,
+};
+
+static ssize_t offloading_set(struct kobject *kobj, struct kobj_attribute *attr,
+			      const char *buf, size_t count)
+{
+	int ccode;
+	int action;
+	dma_cap_mask_t mask;
+
+	ccode = kstrtoint(buf, 0, &action);
+	if (ccode) {
+		pr_debug("(%s:) error parsing input %s\n", __func__, buf);
+		return ccode;
+	}
+
+	/*
+	 * action is 0: User wants to disable DMA offloading.
+	 * action is 1: User wants to enable DMA offloading.
+	 */
+	switch (action) {
+	case 0:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 1) {
+			stop_offloading();
+			dma_release_channel(chan);
+			is_dispatching = 0;
+		} else
+			pr_debug("migration offloading is already OFF\n");
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	case 1:
+		mutex_lock(&migratecfg_mutex);
+		if (is_dispatching == 0) {
+			dma_cap_zero(mask);
+			dma_cap_set(DMA_MEMCPY, mask);
+			chan = dma_request_channel(mask, NULL, NULL);
+			if (!chan) {
+				chan = ERR_PTR(-ENODEV);
+				pr_err("Error requesting DMA channel\n");
+				mutex_unlock(&migratecfg_mutex);
+				return -ENODEV;
+			}
+			start_offloading(&dmigrator);
+			is_dispatching = 1;
+		} else
+			pr_debug("migration offloading is already ON\n");
+		mutex_unlock(&migratecfg_mutex);
+		break;
+	default:
+		pr_debug("input should be zero or one, parsed as %d\n", action);
+	}
+	return sizeof(action);
+}
+
+static ssize_t offloading_show(struct kobject *kobj,
+			       struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%d\n", is_dispatching);
+}
+
+static bool can_migrate_dma(struct folio *dst, struct folio *src)
+{
+	if (folio_test_hugetlb(src) || folio_test_hugetlb(dst) ||
+	    folio_has_private(src) || folio_has_private(dst) ||
+	    (folio_nr_pages(src) != folio_nr_pages(dst)) ||
+	    folio_nr_pages(src) != 1)
+		return false;
+	return true;
+}
+
+static void folios_copy_dma(struct list_head *dst_list,
+			    struct list_head *src_list)
+{
+	int ret = 0;
+	struct folio *src, *dst;
+	struct dma_device *dev;
+	struct device *dma_dev;
+	static dma_cookie_t cookie;
+	struct dma_async_tx_descriptor *tx;
+	enum dma_status status;
+	enum dma_ctrl_flags flags = DMA_CTRL_ACK;
+	dma_addr_t srcdma_handle;
+	dma_addr_t dstdma_handle;
+
+
+	if (!chan) {
+		pr_err("error chan uninitialized\n");
+		goto fail;
+	}
+	dev = chan->device;
+	if (!dev) {
+		pr_err("error dev is NULL\n");
+		goto fail;
+	}
+	dma_dev = dmaengine_get_dma_device(chan);
+	if (!dma_dev) {
+		pr_err("error dma_dev is NULL\n");
+		goto fail;
+	}
+	dst = list_first_entry(dst_list, struct folio, lru);
+	list_for_each_entry(src, src_list, lru) {
+		srcdma_handle = dma_map_page(dma_dev, &src->page, 0, 4096, DMA_BIDIRECTIONAL);
+		ret = dma_mapping_error(dma_dev, srcdma_handle);
+		if (ret) {
+			pr_err("src mapping error\n");
+			goto fail1;
+		}
+		dstdma_handle = dma_map_page(dma_dev, &dst->page, 0, 4096, DMA_BIDIRECTIONAL);
+		ret = dma_mapping_error(dma_dev, dstdma_handle);
+		if (ret) {
+			pr_err("dst mapping error\n");
+			goto fail2;
+		}
+		tx = dev->device_prep_dma_memcpy(chan, dstdma_handle, srcdma_handle, 4096, flags);
+		if (!tx) {
+			ret = -EBUSY;
+			pr_err("prep_dma_error\n");
+			goto fail3;
+		}
+		cookie = tx->tx_submit(tx);
+		if (dma_submit_error(cookie)) {
+			ret = -EINVAL;
+			pr_err("dma_submit_error\n");
+			goto fail3;
+		}
+		status = dma_sync_wait(chan, cookie);
+		dmaengine_terminate_sync(chan);
+		if (status != DMA_COMPLETE) {
+			ret = -EINVAL;
+			pr_err("error while dma wait\n");
+			goto fail3;
+		}
+fail3:
+		dma_unmap_page(dma_dev, dstdma_handle, 4096, DMA_BIDIRECTIONAL);
+fail2:
+		dma_unmap_page(dma_dev, srcdma_handle, 4096, DMA_BIDIRECTIONAL);
+fail1:
+		if (ret)
+			folio_copy(dst, src);
+
+		dst = list_next_entry(dst, lru);
+	}
+fail:
+	folios_copy(dst_list, src_list);
+}
+
+static struct kobject *kobj_ref;
+static struct kobj_attribute offloading_attribute = __ATTR(offloading, 0664,
+					offloading_show, offloading_set);
+
+static int __init dma_module_init(void)
+{
+	int ret = 0;
+
+	kobj_ref = kobject_create_and_add("dcbm", kernel_kobj);
+	if (!kobj_ref)
+		return -ENOMEM;
+
+	ret = sysfs_create_file(kobj_ref, &offloading_attribute.attr);
+	if (ret)
+		goto out;
+
+	is_dispatching = 0;
+
+	return 0;
+out:
+	kobject_put(kobj_ref);
+	return ret;
+}
+
+static void __exit dma_module_exit(void)
+{
+	/* Stop the DMA offloading to unload the module */
+
+	//sysfs_remove_file(kobj, &offloading_show.attr);
+	kobject_put(kobj_ref);
+}
+
+module_init(dma_module_init);
+module_exit(dma_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Shivank Garg");
+MODULE_DESCRIPTION("DCBM"); /* DMA Core Batch Migrator */
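[Editor's note] As a small companion to the test flow sketched earlier, the state of the toggle created by dma_module_init() above can be read back from userspace before kicking off a migration run. This is an illustration only; the /sys/kernel/dcbm/offloading path comes from the commit message, and error handling is kept minimal.

/* check_offloading.c - hypothetical helper; reads the DCBM sysfs toggle */
#include <stdio.h>

int main(void)
{
	char state[8] = "";
	FILE *f = fopen("/sys/kernel/dcbm/offloading", "r");

	if (!f) {
		perror("/sys/kernel/dcbm/offloading");
		return 1;
	}
	if (fgets(state, sizeof(state), f))
		printf("dcbm offloading: %s", state);	/* prints 0 or 1 */
	fclose(f);
	return 0;
}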