From patchwork Wed Dec 9 14:51:50 2020
X-Patchwork-Submitter: Vitaly Wool
X-Patchwork-Id: 11961711
From: Vitaly Wool
To: linux-mm@kvack.org
Cc: lkml@vger.kernel.org, linux-rt-users@vger.kernel.org, Sebastian Andrzej Siewior, Mike Galbraith, akpm@linux-foundation.org, Vitaly Wool
Subject: [PATCH 2/3] z3fold: Remove preempt disabled sections for RT
Date: Wed, 9 Dec 2020 16:51:50 +0200
Message-Id: <20201209145151.18994-3-vitaly.wool@konsulko.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201209145151.18994-1-vitaly.wool@konsulko.com>
References: <20201209145151.18994-1-vitaly.wool@konsulko.com>

Replace get_cpu_ptr() with migrate_disable() + this_cpu_ptr() for the
per-CPU "unbuddied" lists, so that the per-CPU pointer stays stable
without disabling preemption.  On PREEMPT_RT the pool spinlock becomes
a sleeping lock and must not be taken inside a preempt-disabled
section.

Signed-off-by: Mike Galbraith
Signed-off-by: Vitaly Wool
---
 mm/z3fold.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 6c2325cd3fba..9fc1cc9630fe 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -610,14 +610,16 @@ static inline void add_to_unbuddied(struct z3fold_pool *pool,
 {
 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
 			zhdr->middle_chunks == 0) {
-		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
-
+		struct list_head *unbuddied;
 		int freechunks = num_free_chunks(zhdr);
+
+		migrate_disable();
+		unbuddied = this_cpu_ptr(pool->unbuddied);
 		spin_lock(&pool->lock);
 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
 		spin_unlock(&pool->lock);
 		zhdr->cpu = smp_processor_id();
-		put_cpu_ptr(pool->unbuddied);
+		migrate_enable();
 	}
 }
 
@@ -854,8 +856,9 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 	int chunks = size_to_chunks(size), i;
 
 lookup:
+	migrate_disable();
 	/* First, try to find an unbuddied z3fold page. */
-	unbuddied = get_cpu_ptr(pool->unbuddied);
+	unbuddied = this_cpu_ptr(pool->unbuddied);
 	for_each_unbuddied_list(i, chunks) {
 		struct list_head *l = &unbuddied[i];
 
@@ -873,7 +876,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 				!z3fold_page_trylock(zhdr)) {
 			spin_unlock(&pool->lock);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -887,7 +890,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 				test_bit(PAGE_CLAIMED, &page->private)) {
 			z3fold_page_unlock(zhdr);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -902,7 +905,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		kref_get(&zhdr->refcount);
 		break;
 	}
-	put_cpu_ptr(pool->unbuddied);
+	migrate_enable();
 
 	if (!zhdr) {
 		int cpu;
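
For readers outside the RT tree, here is the conversion sketched in
isolation (illustrative only, not part of this patch; example_lock,
example_list and the example_add_*() helpers are made-up names).
get_cpu_ptr() disables preemption around the per-CPU access, and on
PREEMPT_RT spin_lock() maps to a sleeping lock that must not be taken
with preemption disabled; migrate_disable() only pins the task to its
current CPU, so the per-CPU pointer stays stable while the task remains
preemptible and may block on the lock:

#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
/* Per-CPU list heads, assumed to be INIT_LIST_HEAD()-ed elsewhere. */
static DEFINE_PER_CPU(struct list_head, example_list);

/* Old pattern: per-CPU access with preemption disabled. */
static void example_add_old(struct list_head *item)
{
	struct list_head *head = get_cpu_ptr(&example_list); /* preemption off */

	spin_lock(&example_lock);	/* sleeping lock on RT: not allowed here */
	list_add(item, head);
	spin_unlock(&example_lock);
	put_cpu_ptr(&example_list);	/* preemption back on */
}

/* New pattern: pin the task to its CPU, stay preemptible. */
static void example_add_new(struct list_head *item)
{
	struct list_head *head;

	migrate_disable();		/* no CPU migration, preemption stays on */
	head = this_cpu_ptr(&example_list);
	spin_lock(&example_lock);	/* fine on RT: the task may sleep here */
	list_add(item, head);
	spin_unlock(&example_lock);
	migrate_enable();
}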