From patchwork Thu Aug 3 11:38:23 2023
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13339853
From: Ruan Jinjie <ruanjinjie@huawei.com>
To: Vitaly Wool, Miaohe Lin, Andrew Morton
Subject: [PATCH -next v2] mm/z3fold: use helper function put_z3fold_locked() and put_z3fold_locked_list()
Date: Thu, 3 Aug 2023 19:38:23 +0800
Message-ID: <20230803113824.886413-1-ruanjinjie@huawei.com>
This code is duplicated six times. Use the helper function put_z3fold_locked() to release a z3fold page instead of open-coding it, to help improve readability a bit, and add a put_z3fold_locked_list() helper to be consistent with it. No functional change intended.

Signed-off-by: Ruan Jinjie
Reviewed-by: Miaohe Lin
---
v2:
- Update the subject title.
- Add the put_z3fold_locked_list() helper.
---
 mm/z3fold.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index e84de91ecccb..7952adf9bede 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -480,6 +480,16 @@ static void release_z3fold_page_locked_list(struct kref *ref)
 	__release_z3fold_page(zhdr, true);
 }
 
+static inline int put_z3fold_locked(struct z3fold_header *zhdr)
+{
+	return kref_put(&zhdr->refcount, release_z3fold_page_locked);
+}
+
+static inline int put_z3fold_locked_list(struct z3fold_header *zhdr)
+{
+	return kref_put(&zhdr->refcount, release_z3fold_page_locked_list);
+}
+
 static void free_pages_work(struct work_struct *w)
 {
 	struct z3fold_pool *pool = container_of(w, struct z3fold_pool, work);
@@ -666,7 +676,7 @@ static struct z3fold_header *compact_single_buddy(struct z3fold_header *zhdr)
 	return new_zhdr;
 
 out_fail:
-	if (new_zhdr && !kref_put(&new_zhdr->refcount, release_z3fold_page_locked)) {
+	if (new_zhdr && !put_z3fold_locked(new_zhdr)) {
 		add_to_unbuddied(pool, new_zhdr);
 		z3fold_page_unlock(new_zhdr);
 	}
@@ -741,7 +751,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
 	list_del_init(&zhdr->buddy);
 	spin_unlock(&pool->lock);
 
-	if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
+	if (put_z3fold_locked(zhdr))
 		return;
 
 	if (test_bit(PAGE_STALE, &page->private) ||
@@ -752,7 +762,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
 
 	if (!zhdr->foreign_handles && buddy_single(zhdr) &&
 	    zhdr->mapped_count == 0 && compact_single_buddy(zhdr)) {
-		if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
+		if (!put_z3fold_locked(zhdr)) {
 			clear_bit(PAGE_CLAIMED, &page->private);
 			z3fold_page_unlock(zhdr);
 		}
@@ -878,7 +888,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 	return zhdr;
 
 out_fail:
-	if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
+	if (!put_z3fold_locked(zhdr)) {
 		add_to_unbuddied(pool, zhdr);
 		z3fold_page_unlock(zhdr);
 	}
@@ -1012,8 +1022,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 	if (zhdr) {
 		bud = get_free_buddy(zhdr, chunks);
 		if (bud == HEADLESS) {
-			if (!kref_put(&zhdr->refcount,
-				      release_z3fold_page_locked))
+			if (!put_z3fold_locked(zhdr))
 				z3fold_page_unlock(zhdr);
 			pr_err("No free chunks in unbuddied\n");
 			WARN_ON(1);
@@ -1129,7 +1138,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 
 	if (!page_claimed)
 		free_handle(handle, zhdr);
-	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list))
+	if (put_z3fold_locked_list(zhdr))
 		return;
 	if (page_claimed) {
 		/* the page has not been claimed by us */
@@ -1346,7 +1355,7 @@ static void z3fold_page_putback(struct page *page)
 	if (!list_empty(&zhdr->buddy))
 		list_del_init(&zhdr->buddy);
 	INIT_LIST_HEAD(&page->lru);
-	if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
+	if (put_z3fold_locked(zhdr))
 		return;
 	if (list_empty(&zhdr->buddy))
 		add_to_unbuddied(pool, zhdr);