From patchwork Fri Jun 24 02:53:02 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12893403
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Huang, Ying", Baolin Wang, Zi Yan, Yang Shi
Subject: [PATCH 0/7] migrate_pages(): fix several bugs in error path
Date: Fri, 24 Jun 2022 10:53:02 +0800
Message-Id: <20220624025309.1033400-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
From: "Huang, Ying" <ying.huang@intel.com>

While reviewing the code of migrate_pages() and building a test program
for it, I identified several bugs in its error path.  This series fixes
them.

Most patches were tested via:

- applying error-inject.patch to the Linux kernel,
- compiling test-migrate.c (with -lnuma), and
- running test-migrate.sh.

error-inject.patch, test-migrate.c, and test-migrate.sh are included
below.  It turns out that error injection is an important tool for
finding and fixing bugs in error paths.
Best Regards,
Huang, Ying

------------------------- error-inject.patch -------------------------
From 295ea21204f3f025a041fe39c68a2eaec8313c68 Mon Sep 17 00:00:00 2001
From: Huang Ying
Date: Tue, 21 Jun 2022 11:08:30 +0800
Subject: [PATCH] migrate_pages: error inject

---
 mm/migrate.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 399904015d23..87d47064ec6c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -337,6 +337,42 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 }
 #endif
 
+#define EI_MP_ENOSYS		0x0001
+#define EI_MP_THP_ENOMEM	0x0002
+#define EI_MP_NP_ENOMEM		0x0004
+#define EI_MP_EAGAIN		0x0008
+#define EI_MP_EOTHER		0x0010
+#define EI_MP_NOSPLIT		0x0020
+#define EI_MP_SPLIT_FAIL	0x0040
+#define EI_MP_EAGAIN_PERM	0x0080
+#define EI_MP_EBUSY		0x0100
+
+static unsigned int ei_migrate_pages;
+
+module_param(ei_migrate_pages, uint, 0644);
+
+static bool ei_thp_migration_supported(void)
+{
+	if (ei_migrate_pages & EI_MP_ENOSYS)
+		return false;
+	else
+		return thp_migration_supported();
+}
+
+static int ei_trylock_page(struct page *page)
+{
+	if (ei_migrate_pages & EI_MP_EAGAIN)
+		return 0;
+	return trylock_page(page);
+}
+
+static int ei_split_huge_page_to_list(struct page *page, struct list_head *list)
+{
+	if (ei_migrate_pages & EI_MP_SPLIT_FAIL)
+		return -EBUSY;
+	return split_huge_page_to_list(page, list);
+}
+
 static int expected_page_refs(struct address_space *mapping, struct page *page)
 {
 	int expected_count = 1;
@@ -368,6 +404,9 @@ int folio_migrate_mapping(struct address_space *mapping,
 	if (folio_ref_count(folio) != expected_count)
 		return -EAGAIN;
 
+	if (ei_migrate_pages & EI_MP_EAGAIN_PERM)
+		return -EAGAIN;
+
 	/* No turning back from here */
 	newfolio->index = folio->index;
 	newfolio->mapping = folio->mapping;
@@ -929,7 +968,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(page);
 
-	if (!trylock_page(page)) {
+	if (!ei_trylock_page(page)) {
 		if (!force || mode == MIGRATE_ASYNC)
 			goto out;
@@ -952,6 +991,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		lock_page(page);
 	}
 
+	if (ei_migrate_pages & EI_MP_EBUSY) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageWriteback(page)) {
 		/*
 		 * Only in the case of a full synchronous migration is it
@@ -1086,7 +1130,7 @@ static int unmap_and_move(new_page_t get_new_page,
 	int rc = MIGRATEPAGE_SUCCESS;
 	struct page *newpage = NULL;
 
-	if (!thp_migration_supported() && PageTransHuge(page))
+	if (!ei_thp_migration_supported() && PageTransHuge(page))
 		return -ENOSYS;
 
 	if (page_count(page) == 1) {
@@ -1102,6 +1146,11 @@ static int unmap_and_move(new_page_t get_new_page,
 		goto out;
 	}
 
+	if ((ei_migrate_pages & EI_MP_THP_ENOMEM) && PageTransHuge(page))
+		return -ENOMEM;
+	if ((ei_migrate_pages & EI_MP_NP_ENOMEM) && !PageTransHuge(page))
+		return -ENOMEM;
+
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
@@ -1305,7 +1354,7 @@ static inline int try_split_thp(struct page *page, struct list_head *split_pages
 	int rc;
 
 	lock_page(page);
-	rc = split_huge_page_to_list(page, split_pages);
+	rc = ei_split_huge_page_to_list(page, split_pages);
 	unlock_page(page);
 
 	return rc;
@@ -1358,6 +1407,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
 	bool no_subpage_counting = false;
 
+	if (ei_migrate_pages & EI_MP_NOSPLIT)
+		nosplit = true;
+
 	trace_mm_migrate_pages_start(mode, reason);
 
 thp_subpage_migration: