From patchwork Tue Mar 30 10:15:26 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12172217
From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [RFC PATCH 10/15] mm: thp: make deferred split queue lock safe when the LRU pages reparented
Date: Tue, 30 Mar 2021 18:15:26 +0800
Message-Id: <20210330101531.82752-11-songmuchun@bytedance.com>
In-Reply-To: <20210330101531.82752-1-songmuchun@bytedance.com>
References: <20210330101531.82752-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Similar to the lruvec lock, we use the same approach to make the deferred
split queue lock safe when the LRU pages are reparented.
Signed-off-by: Muchun Song
---
 mm/huge_memory.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 186dc11e8992..434cc7283a64 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -496,6 +496,8 @@ static struct deferred_split *lock_split_queue(struct page *page)
 	struct deferred_split *queue;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+retry:
 	memcg = page_memcg(compound_head(page));
 	if (memcg)
 		queue = &memcg->deferred_split_queue;
@@ -503,6 +505,17 @@ static struct deferred_split *lock_split_queue(struct page *page)
 		queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
 
 	spin_lock(&queue->split_queue_lock);
+	if (unlikely(memcg != page_memcg(page))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled inside spin_lock, which itself serves as
+	 * an RCU read-side critical section.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 
@@ -512,6 +525,8 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page,
 	struct deferred_split *queue;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+retry:
 	memcg = page_memcg(compound_head(page));
 	if (memcg)
 		queue = &memcg->deferred_split_queue;
@@ -519,6 +534,17 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page,
 		queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
 
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+	if (unlikely(memcg != page_memcg(page))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled inside spin_lock, which itself serves as
+	 * an RCU read-side critical section.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 #else