From patchwork Fri Jan 4 12:50:04 2019
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM <linux-mm@kvack.org>
Cc: David Rientjes, Andrea Arcangeli, Vlastimil Babka, ying.huang@intel.com,
 kirill@shutemov.name, Andrew Morton, Linux List Kernel Mailing, Mel Gorman
Subject: [PATCH 18/25] mm, compaction: Rework compact_should_abort as compact_check_resched
Date: Fri, 4 Jan 2019 12:50:04 +0000
Message-Id: <20190104125011.16071-19-mgorman@techsingularity.net>
In-Reply-To: <20190104125011.16071-1-mgorman@techsingularity.net>
References: <20190104125011.16071-1-mgorman@techsingularity.net>

With incremental changes, compact_should_abort no longer makes any documented
sense. Rename to compact_check_resched and update the associated comments.
There is no benefit other than reducing redundant code and making the intent
slightly clearer. It could potentially be merged with earlier patches, but
that would make the review slightly harder.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 61 ++++++++++++++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 38 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index be27e4fa1b40..1a41a2dbff24 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -398,6 +398,21 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+/*
+ * Aside from avoiding lock contention, compaction also periodically checks
+ * need_resched() and records async compaction as contended if necessary.
+ */
+static inline void compact_check_resched(struct compact_control *cc)
+{
+	/* async compaction aborts if contended */
+	if (need_resched()) {
+		if (cc->mode == MIGRATE_ASYNC)
+			cc->contended = true;
+
+		cond_resched();
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -426,33 +441,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-		cond_resched();
-	}
-
-	return false;
-}
-
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and either schedules in sync compaction or aborts async
- * compaction. This is similar to what compact_unlock_should_abort() does, but
- * is used where no lock is concerned.
- *
- * Returns false when no scheduling was needed, or sync compaction scheduled.
- * Returns true when async compaction should abort.
- */
-static inline bool compact_should_abort(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
+	compact_check_resched(cc);
 
 	return false;
 }
@@ -750,8 +739,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	if (compact_should_abort(cc))
-		return 0;
+	compact_check_resched(cc);
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1374,12 +1362,10 @@ static void isolate_freepages(struct compact_control *cc)
 				isolate_start_pfn = block_start_pfn) {
 		/*
 		 * This can iterate a massively long zone without finding any
-		 * suitable migration targets, so periodically check if we need
-		 * to schedule, or even abort async compaction.
+		 * suitable migration targets, so periodically check resched.
 		 */
-		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1673,11 +1659,10 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		/*
 		 * This can potentially iterate a massively long zone with
 		 * many pageblocks unsuitable, so periodically check if we
-		 * need to schedule, or even abort async compaction.
+		 * need to schedule.
 		 */
-		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);