From patchwork Wed Dec 9 14:51:49 2020
X-Patchwork-Submitter: Vitaly Wool
X-Patchwork-Id: 11961709
From: Vitaly Wool
To: linux-mm@kvack.org
Cc: lkml@vger.kernel.org, linux-rt-users@vger.kernel.org,
 Sebastian Andrzej Siewior, Mike Galbraith, akpm@linux-foundation.org,
 Vitaly Wool, stable@kernel.org
Subject: [PATCH 1/3] z3fold: simplify freeing slots
Date: Wed, 9 Dec 2020 16:51:49 +0200
Message-Id: <20201209145151.18994-2-vitaly.wool@konsulko.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201209145151.18994-1-vitaly.wool@konsulko.com>
References: <20201209145151.18994-1-vitaly.wool@konsulko.com>

There used to be two places in the code where slots could be freed:
when freeing the last allocated handle from the slots, and when
releasing the z3fold header these slots are linked to. The logic for
deciding whether to free certain slots was complicated and error prone
in both functions, and it led to failures in the RT case.

To fix that, make free_handle() the single point of freeing slots.

Signed-off-by: Vitaly Wool
Tested-by: Mike Galbraith
Cc: stable@kernel.org
---
 mm/z3fold.c | 55 +++++++++++++----------------------------------------
 1 file changed, 13 insertions(+), 42 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 18feaa0bc537..6c2325cd3fba 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -90,7 +90,7 @@ struct z3fold_buddy_slots {
 	 * be enough slots to hold all possible variants
 	 */
 	unsigned long slot[BUDDY_MASK + 1];
-	unsigned long pool; /* back link + flags */
+	unsigned long pool; /* back link */
 	rwlock_t lock;
 };
 #define HANDLE_FLAG_MASK	(0x03)
@@ -181,13 +181,6 @@ enum z3fold_page_flags {
 	PAGE_CLAIMED, /* by either reclaim or free */
 };
 
-/*
- * handle flags, go under HANDLE_FLAG_MASK
- */
-enum z3fold_handle_flags {
-	HANDLES_ORPHANED = 0,
-};
-
 /*
  * Forward declarations
  */
@@ -303,10 +296,9 @@ static inline void put_z3fold_header(struct z3fold_header *zhdr)
 		z3fold_page_unlock(zhdr);
 }
 
-static inline void free_handle(unsigned long handle)
+static inline void free_handle(unsigned long handle, struct z3fold_header *zhdr)
 {
 	struct z3fold_buddy_slots *slots;
-	struct z3fold_header *zhdr;
 	int i;
 	bool is_free;
 
@@ -316,22 +308,13 @@ static inline void free_handle(unsigned long handle)
 	if (WARN_ON(*(unsigned long *)handle == 0))
 		return;
 
-	zhdr = handle_to_z3fold_header(handle);
 	slots = handle_to_slots(handle);
 	write_lock(&slots->lock);
 	*(unsigned long *)handle = 0;
-	if (zhdr->slots == slots) {
-		write_unlock(&slots->lock);
-		return; /* simple case, nothing else to do */
-	}
+	if (zhdr->slots != slots)
+		zhdr->foreign_handles--;
 
-	/* we are freeing a foreign handle if we are here */
-	zhdr->foreign_handles--;
 	is_free = true;
-	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
-		write_unlock(&slots->lock);
-		return;
-	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (slots->slot[i]) {
 			is_free = false;
 			break;
@@ -343,6 +326,8 @@ static inline void free_handle(unsigned long handle)
 	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
 
+		if (zhdr->slots == slots)
+			zhdr->slots = NULL;
 		kmem_cache_free(pool->c_handle, slots);
 	}
 }
@@ -525,8 +510,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 {
 	struct page *page = virt_to_page(zhdr);
 	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
-	bool is_free = true;
-	int i;
 
 	WARN_ON(!list_empty(&zhdr->buddy));
 	set_bit(PAGE_STALE, &page->private);
@@ -536,21 +519,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	list_del_init(&page->lru);
 	spin_unlock(&pool->lock);
 
-	/* If there are no foreign handles, free the handles array */
-	read_lock(&zhdr->slots->lock);
-	for (i = 0; i <= BUDDY_MASK; i++) {
-		if (zhdr->slots->slot[i]) {
-			is_free = false;
-			break;
-		}
-	}
-	if (!is_free)
-		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
-	read_unlock(&zhdr->slots->lock);
-
-	if (is_free)
-		kmem_cache_free(pool->c_handle, zhdr->slots);
-
 	if (locked)
 		z3fold_page_unlock(zhdr);
@@ -973,6 +941,9 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		}
 	}
 
+	if (zhdr && !zhdr->slots)
+		zhdr->slots = alloc_slots(pool,
+					can_sleep ? GFP_NOIO : GFP_ATOMIC);
 	return zhdr;
 }
@@ -1270,7 +1241,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 	}
 
 	if (!page_claimed)
-		free_handle(handle);
+		free_handle(handle, zhdr);
 	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
 		atomic64_dec(&pool->pages_nr);
 		return;
@@ -1429,19 +1400,19 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			ret = pool->ops->evict(pool, middle_handle);
 			if (ret)
 				goto next;
-			free_handle(middle_handle);
+			free_handle(middle_handle, zhdr);
 		}
 		if (first_handle) {
 			ret = pool->ops->evict(pool, first_handle);
 			if (ret)
 				goto next;
-			free_handle(first_handle);
+			free_handle(first_handle, zhdr);
 		}
 		if (last_handle) {
 			ret = pool->ops->evict(pool, last_handle);
 			if (ret)
 				goto next;
-			free_handle(last_handle);
+			free_handle(last_handle, zhdr);
 		}
 next:
 		if (test_bit(PAGE_HEADLESS, &page->private)) {
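
For readers following the change without the full file at hand, the sketch
below models the consolidated free path this patch introduces: free_handle()
becomes the only place where a slots array is released, it decrements
foreign_handles when the handle lives in another header's array, and it clears
zhdr->slots so the header's own array can be reattached later. This is a
simplified, self-contained user-space illustration, not the kernel code: the
stub struct definitions, the plain pointer used as a handle, calloc()/free()
standing in for the kmem cache, and the omission of the slots rwlock are all
simplifications; only the control flow mirrors the patched free_handle().

/*
 * User-space sketch of the consolidated free path (simplified; stub types
 * and free()/calloc() stand in for the z3fold structures, the slots rwlock
 * and kmem_cache_free()).
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define BUDDY_MASK	0x3

struct buddy_slots {
	unsigned long slot[BUDDY_MASK + 1];	/* encoded handles */
};

struct header {
	struct buddy_slots *slots;	/* the header's own slots array */
	int foreign_handles;		/* handles stored in other arrays */
};

/*
 * Single point of freeing slots: clear the handle's slot, account for
 * foreign handles, and release the array only once every slot in it is
 * empty.  The owning header drops its pointer so a fresh array can be
 * attached lazily on the next allocation.
 */
static void free_handle(unsigned long *handle, struct buddy_slots *slots,
			struct header *zhdr)
{
	bool is_free = true;
	int i;

	*handle = 0;
	if (zhdr->slots != slots)
		zhdr->foreign_handles--;

	for (i = 0; i <= BUDDY_MASK; i++) {
		if (slots->slot[i]) {
			is_free = false;
			break;
		}
	}

	if (is_free) {
		if (zhdr->slots == slots)
			zhdr->slots = NULL;
		free(slots);		/* kmem_cache_free() in the kernel */
	}
}

int main(void)
{
	struct buddy_slots *slots = calloc(1, sizeof(*slots));
	struct header zhdr = { .slots = slots, .foreign_handles = 0 };

	slots->slot[0] = 0xbeef;			/* one live handle */
	free_handle(&slots->slot[0], slots, &zhdr);	/* last handle: array freed */
	printf("own slots cleared: %s\n", zhdr.slots ? "no" : "yes");
	return 0;
}

The counterpart of clearing zhdr->slots is visible in the __z3fold_alloc()
hunk above: since a header can now outlive its own slots array, the allocator
reattaches one via alloc_slots() whenever it hands out a header whose slots
pointer is NULL.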