From patchwork Wed Jul 19 11:29:58 2023
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13318079
From: Kemeng Shi
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: shikemeng@huaweicloud.com
Subject: [PATCH 1/4] mm/compaction: use "spinlock_t *" to record held lock in compact [un]lock functions
Date: Wed, 19 Jul 2023 19:29:58 +0800
Message-Id: <20230719113001.2023703-2-shikemeng@huaweicloud.com>
In-Reply-To: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
References: <20230719113001.2023703-1-shikemeng@huaweicloud.com>

Make compact_lock_irqsave and compact_unlock_should_abort use "spinlock_t *"
to record the held lock. This is a preparation for using
compact_unlock_should_abort in isolate_migratepages_block to remove repeated
code.
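
For illustration, a condensed sketch of the caller pattern this change moves
to, pieced together from the diff below (isolate_freepages_block; surrounding
code is elided, so the fragment is not compilable on its own, and the
"if (!locked)" acquisition is paraphrased from the existing function rather
than shown in this diff):

	spinlock_t *locked = NULL;	/* was: bool locked = false; */
	unsigned long flags = 0;

	/* Periodically drop the lock and bail out on a fatal signal. */
	if (!(blockpfn % COMPACT_CLUSTER_MAX)
	    && compact_unlock_should_abort(&locked, flags, cc))
		break;

	/* compact_lock_irqsave() now returns the lock it acquired. */
	if (!locked)
		locked = compact_lock_irqsave(&cc->zone->lock, &flags, cc);

	if (locked)
		spin_unlock_irqrestore(locked, flags);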

Signed-off-by: Kemeng Shi
---
 mm/compaction.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9641e2131901..dfef14d3ef78 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -523,22 +523,22 @@ static bool test_and_set_skip(struct compact_control *cc, struct page *page)
  * abort when the current block is finished regardless of success rate.
  * Sync compaction acquires the lock.
  *
- * Always returns true which makes it easier to track lock state in callers.
+ * Always returns lock which makes it easier to track lock state in callers.
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
+static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 						struct compact_control *cc)
 	__acquires(lock)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
 		if (spin_trylock_irqsave(lock, *flags))
-			return true;
+			return lock;
 
 		cc->contended = true;
 	}
 
 	spin_lock_irqsave(lock, *flags);
-	return true;
+	return lock;
 }
 
 /*
@@ -553,12 +553,12 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * Returns true if compaction should abort due to fatal signal pending.
  * Returns false when compaction can continue.
  */
-static bool compact_unlock_should_abort(spinlock_t *lock,
-		unsigned long flags, bool *locked, struct compact_control *cc)
+static bool compact_unlock_should_abort(spinlock_t **locked,
+		unsigned long flags, struct compact_control *cc)
 {
 	if (*locked) {
-		spin_unlock_irqrestore(lock, flags);
-		*locked = false;
+		spin_unlock_irqrestore(*locked, flags);
+		*locked = NULL;
 	}
 
 	if (fatal_signal_pending(current)) {
@@ -586,7 +586,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	int nr_scanned = 0, total_isolated = 0;
 	struct page *cursor;
 	unsigned long flags = 0;
-	bool locked = false;
+	spinlock_t *locked = NULL;
 	unsigned long blockpfn = *start_pfn;
 	unsigned int order;
 
@@ -607,8 +607,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 * pending.
 		 */
 		if (!(blockpfn % COMPACT_CLUSTER_MAX)
-		    && compact_unlock_should_abort(&cc->zone->lock, flags,
-								&locked, cc))
+		    && compact_unlock_should_abort(&locked, flags, cc))
 			break;
 
 		nr_scanned++;
@@ -673,7 +672,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	}
 
 	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
+		spin_unlock_irqrestore(locked, flags);
 
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),

From patchwork Wed Jul 19 11:29:59 2023
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13318078
From: Kemeng Shi
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: shikemeng@huaweicloud.com
Subject: [PATCH 2/4] mm/compaction: use "spinlock_t *" to record held lock in isolate_migratepages_block
Date: Wed, 19 Jul 2023 19:29:59 +0800
Message-Id: <20230719113001.2023703-3-shikemeng@huaweicloud.com>
In-Reply-To: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
References: <20230719113001.2023703-1-shikemeng@huaweicloud.com>

Use "spinlock_t *" instead of "struct lruvec *" to record the held lock in
isolate_migratepages_block. This is a preparation for using
compact_unlock_should_abort in isolate_migratepages_block to remove repeated
code.
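
As a quick illustration, a condensed fragment of isolate_migratepages_block
after this change, pieced together from the diff below (context elided, so
the fragment is not compilable on its own):

	spinlock_t *locked = NULL;	/* was: struct lruvec *locked = NULL; */

	/* If we already hold the lock, we can skip some rechecking */
	if (&lruvec->lru_lock != locked) {	/* was: lruvec != locked */
		if (locked)
			/* was: unlock_page_lruvec_irqrestore(locked, flags); */
			spin_unlock_irqrestore(locked, flags);

		locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
	}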

Signed-off-by: Kemeng Shi
---
 mm/compaction.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index dfef14d3ef78..638146a49e89 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -840,7 +840,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
-	struct lruvec *locked = NULL;
+	spinlock_t *locked = NULL;
 	struct folio *folio = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	struct address_space *mapping;
@@ -911,7 +911,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -946,7 +946,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		if (PageHuge(page) && cc->alloc_contig) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -1035,7 +1035,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
-					unlock_page_lruvec_irqrestore(locked, flags);
+					spin_unlock_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
@@ -1120,12 +1120,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
-		if (lruvec != locked) {
+		if (&lruvec->lru_lock != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
-			locked = lruvec;
+			locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 
 			lruvec_memcg_debug(lruvec, folio);
@@ -1188,7 +1187,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			spin_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		folio_put(folio);
@@ -1204,7 +1203,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (nr_isolated) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
@@ -1236,7 +1235,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		spin_unlock_irqrestore(locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);

From patchwork Wed Jul 19 11:30:01 2023
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 13318077
From: Kemeng Shi
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: shikemeng@huaweicloud.com
Subject: [PATCH 4/4] mm/compaction: add compact_unlock_irqrestore to remove repeat code
Date: Wed, 19 Jul 2023 19:30:01 +0800
Message-Id: <20230719113001.2023703-5-shikemeng@huaweicloud.com>
In-Reply-To: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
References: <20230719113001.2023703-1-shikemeng@huaweicloud.com>

Add compact_unlock_irqrestore to remove repeated code. This also makes the
compact lock function series complete, as compact_lock_irqsave and
compact_unlock_irqrestore can now be called in pairs.
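
For illustration, the paired usage this helper enables, condensed from the
diff below (context elided, so the fragment is not compilable on its own):

	spinlock_t *locked = NULL;
	unsigned long flags = 0;

	locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

	/* Drops the lock only if it is still held, and clears *locked. */
	compact_unlock_irqrestore(&locked, flags);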

Signed-off-by: Kemeng Shi
---
 mm/compaction.c | 43 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 27 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index c1dc821ac6e1..eb1d3d9a422c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -541,6 +541,14 @@ static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return lock;
 }
 
+static inline void compact_unlock_irqrestore(spinlock_t **locked, unsigned long flags)
+{
+	if (*locked) {
+		spin_unlock_irqrestore(*locked, flags);
+		*locked = NULL;
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -556,10 +564,7 @@ static spinlock_t *compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 static bool compact_unlock_should_abort(spinlock_t **locked,
 		unsigned long flags, struct compact_control *cc)
 {
-	if (*locked) {
-		spin_unlock_irqrestore(*locked, flags);
-		*locked = NULL;
-	}
+	compact_unlock_irqrestore(locked, flags);
 
 	if (fatal_signal_pending(current)) {
 		cc->contended = true;
@@ -671,8 +676,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	}
 
-	if (locked)
-		spin_unlock_irqrestore(locked, flags);
+	compact_unlock_irqrestore(&locked, flags);
 
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),
@@ -935,10 +939,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 
 		if (PageHuge(page) && cc->alloc_contig) {
-			if (locked) {
-				spin_unlock_irqrestore(locked, flags);
-				locked = NULL;
-			}
+			compact_unlock_irqrestore(&locked, flags);
 
 			ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
 
@@ -1024,10 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 */
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
-				if (locked) {
-					spin_unlock_irqrestore(locked, flags);
-					locked = NULL;
-				}
+				compact_unlock_irqrestore(&locked, flags);
 
 				if (isolate_movable_page(page, mode)) {
 					folio = page_folio(page);
@@ -1111,9 +1109,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (&lruvec->lru_lock != locked) {
-			if (locked)
-				spin_unlock_irqrestore(locked, flags);
-
+			compact_unlock_irqrestore(&locked, flags);
 			locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 
 			lruvec_memcg_debug(lruvec, folio);
@@ -1176,10 +1172,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
-		if (locked) {
-			spin_unlock_irqrestore(locked, flags);
-			locked = NULL;
-		}
+		compact_unlock_irqrestore(&locked, flags);
 		folio_put(folio);
 
 isolate_fail:
@@ -1192,10 +1185,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * page anyway.
 		 */
 		if (nr_isolated) {
-			if (locked) {
-				spin_unlock_irqrestore(locked, flags);
-				locked = NULL;
-			}
+			compact_unlock_irqrestore(&locked, flags);
 			putback_movable_pages(&cc->migratepages);
 			cc->nr_migratepages = 0;
 			nr_isolated = 0;
@@ -1224,8 +1214,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		folio = NULL;
 
 isolate_abort:
-	if (locked)
-		spin_unlock_irqrestore(locked, flags);
+	compact_unlock_irqrestore(&locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);