From patchwork Fri Nov 15 22:44:55 2024
X-Patchwork-Submitter: Joanne Koong
X-Patchwork-Id: 13877380
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm, kernel-team@meta.com
Subject: [PATCH v5 1/5] mm: add AS_WRITEBACK_INDETERMINATE mapping flag
Date: Fri, 15 Nov 2024 14:44:55 -0800
Message-ID: <20241115224459.427610-2-joannelkoong@gmail.com>
In-Reply-To: <20241115224459.427610-1-joannelkoong@gmail.com>
References: <20241115224459.427610-1-joannelkoong@gmail.com>

Add a new mapping flag AS_WRITEBACK_INDETERMINATE which filesystems may set to indicate that writing back to disk may take an indeterminate amount of time to complete. Extra caution should be taken when waiting on writeback for folios belonging to mappings where this flag is set.
Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 include/linux/pagemap.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 68a5f1ff3301..fcf7d4dd7e2b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -210,6 +210,7 @@ enum mapping_flags {
 	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying folio contents */
 	AS_INACCESSIBLE = 8,	/* Do not attempt direct R/W access to the mapping */
+	AS_WRITEBACK_INDETERMINATE = 9,	/* Use caution when waiting on writeback */
 	/* Bits 16-25 are used for FOLIO_ORDER */
 	AS_FOLIO_ORDER_BITS = 5,
 	AS_FOLIO_ORDER_MIN = 16,
@@ -335,6 +336,16 @@ static inline bool mapping_inaccessible(struct address_space *mapping)
 	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
+static inline void mapping_set_writeback_indeterminate(struct address_space *mapping)
+{
+	set_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
+static inline bool mapping_writeback_indeterminate(struct address_space *mapping)
+{
+	return test_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
 {
 	return mapping->gfp_mask;

From patchwork Fri Nov 15 22:44:56 2024
X-Patchwork-Submitter: Joanne Koong
X-Patchwork-Id: 13877381
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm, kernel-team@meta.com
Subject: [PATCH v5 2/5] mm: skip reclaiming folios in legacy memcg writeback indeterminate contexts
Date: Fri, 15 Nov 2024 14:44:56 -0800
Message-ID: <20241115224459.427610-3-joannelkoong@gmail.com>
In-Reply-To: <20241115224459.427610-1-joannelkoong@gmail.com>
References: <20241115224459.427610-1-joannelkoong@gmail.com>

Currently in shrink_folio_list(), reclaim for folios under writeback falls into 3 different cases:

1) Reclaim is encountering an excessive number of folios under writeback and this folio has both the writeback and reclaim flags set

2) Dirty throttling is enabled (this happens if reclaim through cgroup is not enabled, if reclaim through cgroupv2 memcg is enabled, or if reclaim is on the root cgroup), or if the folio is not marked for immediate reclaim, or if the caller does not have __GFP_FS (or __GFP_IO if it's going to swap) set

3) Legacy cgroupv1 encounters a folio that already has the reclaim flag set and the caller did not have __GFP_FS (or __GFP_IO if swap) set

In cases 1) and 2), we activate the folio and skip reclaiming it, while in case 3), we wait for writeback to finish on the folio and then try to reclaim the folio again. In case 3, we wait on writeback because cgroupv1 does not have dirty folio throttling; as such, this is a mitigation against the case where there are too many folios in writeback with nothing else to reclaim.
For filesystems where writeback may take an indeterminate amount of time to write to disk, this has the possibility of stalling reclaim.

In this commit, if legacy memcg encounters a folio with the reclaim flag set (i.e. case 3) and the folio belongs to a mapping that has the AS_WRITEBACK_INDETERMINATE flag set, the folio will be activated and skip reclaim (i.e. default to the behavior in case 2) instead.

Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749cdc110c74..37ce6b6dac06 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1129,8 +1129,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	 * 2) Global or new memcg reclaim encounters a folio that is
 	 *    not marked for immediate reclaim, or the caller does not
 	 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
-	 *    not to fs). In this case mark the folio for immediate
-	 *    reclaim and continue scanning.
+	 *    not to fs), or the writeback may take an indeterminate
+	 *    amount of time to complete. In this case mark the folio
+	 *    for immediate reclaim and continue scanning.
 	 *
 	 * Require may_enter_fs() because we would wait on fs, which
 	 * may not have submitted I/O yet. And the loop driver might
@@ -1155,6 +1156,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	 * takes to write them to disk.
 	 */
 	if (folio_test_writeback(folio)) {
+		mapping = folio_mapping(folio);
+
 		/* Case 1 above */
 		if (current_is_kswapd() &&
 		    folio_test_reclaim(folio) &&
@@ -1165,7 +1168,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		/* Case 2 above */
 		} else if (writeback_throttling_sane(sc) ||
 		    !folio_test_reclaim(folio) ||
-		    !may_enter_fs(folio, sc->gfp_mask)) {
+		    !may_enter_fs(folio, sc->gfp_mask) ||
+		    (mapping && mapping_writeback_indeterminate(mapping))) {
 			/*
 			 * This is slightly racy -
 			 * folio_end_writeback() might have

From patchwork Fri Nov 15 22:44:57 2024
X-Patchwork-Submitter: Joanne Koong
X-Patchwork-Id: 13877382
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm, kernel-team@meta.com
Subject: [PATCH v5 3/5] fs/writeback: in wait_sb_inodes(), skip wait for AS_WRITEBACK_INDETERMINATE mappings
Date: Fri, 15 Nov 2024 14:44:57 -0800
Message-ID: <20241115224459.427610-4-joannelkoong@gmail.com>
In-Reply-To: <20241115224459.427610-1-joannelkoong@gmail.com>
References: <20241115224459.427610-1-joannelkoong@gmail.com>

For filesystems with the AS_WRITEBACK_INDETERMINATE flag set, writeback operations may take an indeterminate amount of time to complete. For example, writing data back to disk in FUSE filesystems depends on the userspace server successfully completing writeback. In this commit, wait_sb_inodes() skips waiting on writeback if the inode's mapping has AS_WRITEBACK_INDETERMINATE set; otherwise, sync(2) may take an indeterminate amount of time to complete.
If the caller wishes to ensure the data for a mapping with the AS_WRITEBACK_INDETERMINATE flag set has actually been written back to disk, they should use fsync(2)/fdatasync(2) instead.

Signed-off-by: Joanne Koong
---
 fs/fs-writeback.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d8bec3c1bb1f..ad192db17ce4 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2659,6 +2659,9 @@ static void wait_sb_inodes(struct super_block *sb)
 		if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
 			continue;
 
+		if (mapping_writeback_indeterminate(mapping))
+			continue;
+
 		spin_unlock_irq(&sb->s_inode_wblist_lock);
 
 		spin_lock(&inode->i_lock);

From patchwork Fri Nov 15 22:44:58 2024
X-Patchwork-Submitter: Joanne Koong
X-Patchwork-Id: 13877383
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm, kernel-team@meta.com
Subject: [PATCH v5 4/5] mm/migrate: skip migrating folios under writeback with AS_WRITEBACK_INDETERMINATE mappings
Date: Fri, 15 Nov 2024 14:44:58 -0800
Message-ID: <20241115224459.427610-5-joannelkoong@gmail.com>
In-Reply-To: <20241115224459.427610-1-joannelkoong@gmail.com>
References: <20241115224459.427610-1-joannelkoong@gmail.com>

For migrations called in MIGRATE_SYNC mode, skip migrating the folio if it is under writeback and has the AS_WRITEBACK_INDETERMINATE flag set on its mapping. If the AS_WRITEBACK_INDETERMINATE flag is set on the mapping, the writeback may take an indeterminate amount of time to complete, and waits may get stuck.
Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 mm/migrate.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index df91248755e4..fe73284e5246 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1260,7 +1260,10 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-			break;
+			if (!src->mapping ||
+			    !mapping_writeback_indeterminate(src->mapping))
+				break;
+			fallthrough;
 		default:
 			rc = -EBUSY;
 			goto out;

From patchwork Fri Nov 15 22:44:59 2024
X-Patchwork-Submitter: Joanne Koong
X-Patchwork-Id: 13877384
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm, kernel-team@meta.com
Subject: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
Date: Fri, 15 Nov 2024 14:44:59 -0800
Message-ID: <20241115224459.427610-6-joannelkoong@gmail.com>
In-Reply-To: <20241115224459.427610-1-joannelkoong@gmail.com>
References: <20241115224459.427610-1-joannelkoong@gmail.com>

In the current FUSE writeback design (see commit 3be5a52b30aa ("fuse: support writable mmap")), a temp page is allocated for every dirty page to be written back, the contents of the dirty page are copied over to the temp page, and the temp page gets handed to the server to write back.

This is done so that writeback may be immediately cleared on the dirty page, and this in turn is done for two reasons:

a) in order to mitigate the following deadlock scenario that may arise if reclaim waits on writeback on the dirty page to complete:
   * a single-threaded FUSE server is in the middle of handling a request that needs a memory allocation
   * the memory allocation triggers direct reclaim
   * direct reclaim waits on a folio under writeback
   * the FUSE server can't write back the folio since it's stuck in direct reclaim

b) in order to unblock internal (e.g. sync, page compaction) waits on writeback without needing the server to complete writing back to disk, which may take an indeterminate amount of time.

With the recent change that added AS_WRITEBACK_INDETERMINATE and mitigates the situations described above, FUSE writeback no longer needs to use temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings. This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings and removes the temporary pages, the extra copying, and the internal rb tree.
fio benchmarks -- (using averages observed from 10 runs, throwing away outliers)

Setup:
sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount

fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
--numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount

        bs =  1k          4k            1M
Before     351 MiB/s   1818 MiB/s    1851 MiB/s
After      341 MiB/s   2246 MiB/s    2685 MiB/s
% diff      -3%          23%           45%

Signed-off-by: Joanne Koong
---
 fs/fuse/file.c | 339 +++----------------------------------------------
 1 file changed, 20 insertions(+), 319 deletions(-)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 88d0946b5bc9..56289ac58596 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -415,89 +415,11 @@ u64 fuse_lock_owner_id(struct fuse_conn *fc, fl_owner_t id)
 
 struct fuse_writepage_args {
 	struct fuse_io_args ia;
-	struct rb_node writepages_entry;
 	struct list_head queue_entry;
-	struct fuse_writepage_args *next;
 	struct inode *inode;
 	struct fuse_sync_bucket *bucket;
 };
 
-static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
-					pgoff_t idx_from, pgoff_t idx_to)
-{
-	struct rb_node *n;
-
-	n = fi->writepages.rb_node;
-
-	while (n) {
-		struct fuse_writepage_args *wpa;
-		pgoff_t curr_index;
-
-		wpa = rb_entry(n, struct fuse_writepage_args, writepages_entry);
-		WARN_ON(get_fuse_inode(wpa->inode) != fi);
-		curr_index = wpa->ia.write.in.offset >> PAGE_SHIFT;
-		if (idx_from >= curr_index + wpa->ia.ap.num_folios)
-			n = n->rb_right;
-		else if (idx_to < curr_index)
-			n = n->rb_left;
-		else
-			return wpa;
-	}
-	return NULL;
-}
-
-/*
- * Check if any page in a range is under writeback
- */
-static bool fuse_range_is_writeback(struct inode *inode, pgoff_t idx_from,
-				    pgoff_t idx_to)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-	bool found;
-
-	if (RB_EMPTY_ROOT(&fi->writepages))
-		return false;
-
-	spin_lock(&fi->lock);
-	found = fuse_find_writeback(fi, idx_from, idx_to);
-	spin_unlock(&fi->lock);
-
-	return found;
-}
-
-static inline bool fuse_page_is_writeback(struct inode *inode, pgoff_t index)
-{
-	return fuse_range_is_writeback(inode, index, index);
-}
-
-/*
- * Wait for page writeback to be completed.
- *
- * Since fuse doesn't rely on the VM writeback tracking, this has to
- * use some other means.
- */
-static void fuse_wait_on_page_writeback(struct inode *inode, pgoff_t index)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-
-	wait_event(fi->page_waitq, !fuse_page_is_writeback(inode, index));
-}
-
-static inline bool fuse_folio_is_writeback(struct inode *inode,
-					   struct folio *folio)
-{
-	pgoff_t last = folio_next_index(folio) - 1;
-	return fuse_range_is_writeback(inode, folio_index(folio), last);
-}
-
-static void fuse_wait_on_folio_writeback(struct inode *inode,
-					 struct folio *folio)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-
-	wait_event(fi->page_waitq, !fuse_folio_is_writeback(inode, folio));
-}
-
 /*
  * Wait for all pending writepages on the inode to finish.
  *
@@ -886,13 +808,6 @@ static int fuse_do_readfolio(struct file *file, struct folio *folio)
 	ssize_t res;
 	u64 attr_ver;
 
-	/*
-	 * With the temporary pages that are used to complete writeback, we can
-	 * have writeback that extends beyond the lifetime of the folio. So
-	 * make sure we read a properly synced folio.
-	 */
-	fuse_wait_on_folio_writeback(inode, folio);
-
 	attr_ver = fuse_get_attr_version(fm->fc);
 
 	/* Don't overflow end offset */
@@ -1003,17 +918,12 @@ static void fuse_send_readpages(struct fuse_io_args *ia, struct file *file)
 static void fuse_readahead(struct readahead_control *rac)
 {
 	struct inode *inode = rac->mapping->host;
-	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_conn *fc = get_fuse_conn(inode);
 	unsigned int max_pages, nr_pages;
-	pgoff_t first = readahead_index(rac);
-	pgoff_t last = first + readahead_count(rac) - 1;
 
 	if (fuse_is_bad(inode))
 		return;
 
-	wait_event(fi->page_waitq, !fuse_range_is_writeback(inode, first, last));
-
 	max_pages = min_t(unsigned int, fc->max_pages,
 			  fc->max_read / PAGE_SIZE);
 
@@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
 	int err;
 
 	for (i = 0; i < ap->num_folios; i++)
-		fuse_wait_on_folio_writeback(inode, ap->folios[i]);
+		folio_wait_writeback(ap->folios[i]);
 
 	fuse_write_args_fill(ia, ff, pos, count);
 	ia->write.in.flags = fuse_write_flags(iocb);
@@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 			return res;
 		}
 	}
-	if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
+	if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
 		if (!write)
 			inode_lock(inode);
 		fuse_sync_writes(inode);
@@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
 		fuse_sync_bucket_dec(wpa->bucket);
 
 	for (i = 0; i < ap->num_folios; i++)
-		folio_put(ap->folios[i]);
+		folio_end_writeback(ap->folios[i]);
 
 	fuse_file_put(wpa->ia.ff, false);
 
@@ -1838,7 +1748,7 @@ static void fuse_writepage_finish_stat(struct inode *inode, struct folio *folio)
 	struct backing_dev_info *bdi = inode_to_bdi(inode);
 
 	dec_wb_stat(&bdi->wb, WB_WRITEBACK);
-	node_stat_sub_folio(folio, NR_WRITEBACK_TEMP);
+	node_stat_sub_folio(folio, NR_WRITEBACK);
 	wb_writeout_inc(&bdi->wb);
 }
 
@@ -1861,7 +1771,6 @@ static void fuse_send_writepage(struct fuse_mount *fm,
 __releases(fi->lock)
 __acquires(fi->lock)
 {
-	struct fuse_writepage_args *aux, *next;
 	struct fuse_inode *fi = get_fuse_inode(wpa->inode);
 	struct fuse_write_in *inarg = &wpa->ia.write.in;
 	struct fuse_args *args = &wpa->ia.ap.args;
@@ -1898,19 +1807,8 @@ __acquires(fi->lock)
 
  out_free:
 	fi->writectr--;
-	rb_erase(&wpa->writepages_entry, &fi->writepages);
 	fuse_writepage_finish(wpa);
 	spin_unlock(&fi->lock);
-
-	/* After rb_erase() aux request list is private */
-	for (aux = wpa->next; aux; aux = next) {
-		next = aux->next;
-		aux->next = NULL;
-		fuse_writepage_finish_stat(aux->inode,
-					   aux->ia.ap.folios[0]);
-		fuse_writepage_free(aux);
-	}
-
 	fuse_writepage_free(wpa);
 	spin_lock(&fi->lock);
 }
@@ -1938,43 +1836,6 @@ __acquires(fi->lock)
 	}
 }
 
-static struct fuse_writepage_args *fuse_insert_writeback(struct rb_root *root,
-					struct fuse_writepage_args *wpa)
-{
-	pgoff_t idx_from = wpa->ia.write.in.offset >> PAGE_SHIFT;
-	pgoff_t idx_to = idx_from + wpa->ia.ap.num_folios - 1;
-	struct rb_node **p = &root->rb_node;
-	struct rb_node *parent = NULL;
-
-	WARN_ON(!wpa->ia.ap.num_folios);
-	while (*p) {
-		struct fuse_writepage_args *curr;
-		pgoff_t curr_index;
-
-		parent = *p;
-		curr = rb_entry(parent, struct fuse_writepage_args,
-				writepages_entry);
-		WARN_ON(curr->inode != wpa->inode);
-		curr_index = curr->ia.write.in.offset >> PAGE_SHIFT;
-
-		if (idx_from >= curr_index + curr->ia.ap.num_folios)
-			p = &(*p)->rb_right;
-		else if (idx_to < curr_index)
-			p = &(*p)->rb_left;
-		else
-			return curr;
-	}
-
-	rb_link_node(&wpa->writepages_entry, parent, p);
-	rb_insert_color(&wpa->writepages_entry, root);
-	return NULL;
-}
-
-static void tree_insert(struct rb_root *root, struct fuse_writepage_args *wpa)
-{
-	WARN_ON(fuse_insert_writeback(root, wpa));
-}
-
 static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
 			       int error)
 {
@@ -1994,41 +1855,6 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
 	if (!fc->writeback_cache)
 		fuse_invalidate_attr_mask(inode, FUSE_STATX_MODIFY);
 
 	spin_lock(&fi->lock);
-	rb_erase(&wpa->writepages_entry, &fi->writepages);
-	while (wpa->next) {
-		struct fuse_mount *fm = get_fuse_mount(inode);
-		struct fuse_write_in *inarg = &wpa->ia.write.in;
-		struct fuse_writepage_args *next = wpa->next;
-
-		wpa->next = next->next;
-		next->next = NULL;
-		tree_insert(&fi->writepages, next);
-
-		/*
-		 * Skip fuse_flush_writepages() to make it easy to crop requests
-		 * based on primary request size.
-		 *
-		 * 1st case (trivial): there are no concurrent activities using
-		 * fuse_set/release_nowrite.  Then we're on safe side because
-		 * fuse_flush_writepages() would call fuse_send_writepage()
-		 * anyway.
-		 *
-		 * 2nd case: someone called fuse_set_nowrite and it is waiting
-		 * now for completion of all in-flight requests.  This happens
-		 * rarely and no more than once per page, so this should be
-		 * okay.
-		 *
-		 * 3rd case: someone (e.g. fuse_do_setattr()) is in the middle
-		 * of fuse_set_nowrite..fuse_release_nowrite section.  The fact
-		 * that fuse_set_nowrite returned implies that all in-flight
-		 * requests were completed along with all of their secondary
-		 * requests.  Further primary requests are blocked by negative
-		 * writectr.  Hence there cannot be any in-flight requests and
-		 * no invocations of fuse_writepage_end() while we're in
-		 * fuse_set_nowrite..fuse_release_nowrite section.
-		 */
-		fuse_send_writepage(fm, next, inarg->offset + inarg->size);
-	}
 	fi->writectr--;
 	fuse_writepage_finish(wpa);
 	spin_unlock(&fi->lock);
@@ -2115,19 +1941,17 @@ static void fuse_writepage_add_to_bucket(struct fuse_conn *fc,
 }
 
 static void fuse_writepage_args_page_fill(struct fuse_writepage_args *wpa, struct folio *folio,
-					  struct folio *tmp_folio, uint32_t folio_index)
+					  uint32_t folio_index)
 {
 	struct inode *inode = folio->mapping->host;
 	struct fuse_args_pages *ap = &wpa->ia.ap;
 
-	folio_copy(tmp_folio, folio);
-
-	ap->folios[folio_index] = tmp_folio;
+	ap->folios[folio_index] = folio;
 	ap->descs[folio_index].offset = 0;
 	ap->descs[folio_index].length = PAGE_SIZE;
 
 	inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
-	node_stat_add_folio(tmp_folio, NR_WRITEBACK_TEMP);
+	node_stat_add_folio(folio, NR_WRITEBACK);
 }
 
 static struct fuse_writepage_args *fuse_writepage_args_setup(struct folio *folio,
@@ -2162,18 +1986,12 @@ static int fuse_writepage_locked(struct folio *folio)
 	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_writepage_args *wpa;
 	struct fuse_args_pages *ap;
-	struct folio *tmp_folio;
 	struct fuse_file *ff;
-	int error = -ENOMEM;
-
-	tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
-	if (!tmp_folio)
-		goto err;
+	int error = -EIO;
 
-	error = -EIO;
 	ff = fuse_write_file_get(fi);
 	if (!ff)
-		goto err_nofile;
+		goto err;
 
 	wpa = fuse_writepage_args_setup(folio, ff);
 	error = -ENOMEM;
@@ -2184,22 +2002,17 @@ static int fuse_writepage_locked(struct folio *folio)
 	ap->num_folios = 1;
 
 	folio_start_writeback(folio);
-	fuse_writepage_args_page_fill(wpa, folio, tmp_folio, 0);
+	fuse_writepage_args_page_fill(wpa, folio, 0);
 
 	spin_lock(&fi->lock);
-	tree_insert(&fi->writepages, wpa);
 	list_add_tail(&wpa->queue_entry, &fi->queued_writes);
 	fuse_flush_writepages(inode);
 	spin_unlock(&fi->lock);
 
-	folio_end_writeback(folio);
-
 	return 0;
 
 err_writepage_args:
 	fuse_file_put(ff, false);
-err_nofile:
-	folio_put(tmp_folio);
 err:
 	mapping_set_error(folio->mapping, error);
 	return error;
@@ -2209,7 +2022,6 @@ struct fuse_fill_wb_data {
 	struct fuse_writepage_args *wpa;
 	struct fuse_file *ff;
 	struct inode *inode;
-	struct folio **orig_folios;
 	unsigned int max_folios;
 };
 
@@ -2244,69 +2056,11 @@ static void fuse_writepages_send(struct fuse_fill_wb_data *data)
 	struct fuse_writepage_args *wpa = data->wpa;
 	struct inode *inode = data->inode;
 	struct fuse_inode *fi = get_fuse_inode(inode);
-	int num_folios = wpa->ia.ap.num_folios;
-	int i;
 
 	spin_lock(&fi->lock);
 	list_add_tail(&wpa->queue_entry, &fi->queued_writes);
 	fuse_flush_writepages(inode);
 	spin_unlock(&fi->lock);
-
-	for (i = 0; i < num_folios; i++)
-		folio_end_writeback(data->orig_folios[i]);
-}
-
-/*
- * Check under fi->lock if the page is under writeback, and insert it onto the
- * rb_tree if not. Otherwise iterate auxiliary write requests, to see if there's
- * one already added for a page at this offset. If there's none, then insert
- * this new request onto the auxiliary list, otherwise reuse the existing one by
- * swapping the new temp page with the old one.
- */
-static bool fuse_writepage_add(struct fuse_writepage_args *new_wpa,
-			       struct folio *folio)
-{
-	struct fuse_inode *fi = get_fuse_inode(new_wpa->inode);
-	struct fuse_writepage_args *tmp;
-	struct fuse_writepage_args *old_wpa;
-	struct fuse_args_pages *new_ap = &new_wpa->ia.ap;
-
-	WARN_ON(new_ap->num_folios != 0);
-	new_ap->num_folios = 1;
-
-	spin_lock(&fi->lock);
-	old_wpa = fuse_insert_writeback(&fi->writepages, new_wpa);
-	if (!old_wpa) {
-		spin_unlock(&fi->lock);
-		return true;
-	}
-
-	for (tmp = old_wpa->next; tmp; tmp = tmp->next) {
-		pgoff_t curr_index;
-
-		WARN_ON(tmp->inode != new_wpa->inode);
-		curr_index = tmp->ia.write.in.offset >> PAGE_SHIFT;
-		if (curr_index == folio->index) {
-			WARN_ON(tmp->ia.ap.num_folios != 1);
-			swap(tmp->ia.ap.folios[0], new_ap->folios[0]);
-			break;
-		}
-	}
-
-	if (!tmp) {
-		new_wpa->next = old_wpa->next;
-		old_wpa->next = new_wpa;
-	}
-
-	spin_unlock(&fi->lock);
-
-	if (tmp) {
-		fuse_writepage_finish_stat(new_wpa->inode,
-					   folio);
-		fuse_writepage_free(new_wpa);
-	}
-
-	return false;
 }
 
 static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
@@ -2315,15 +2069,6 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
 {
 	WARN_ON(!ap->num_folios);
 
-	/*
-	 * Being under writeback is unlikely but possible.  For example direct
-	 * read to an mmaped fuse file will set the page dirty twice; once when
-	 * the pages are faulted with get_user_pages(), and then after the read
-	 * completed.
-	 */
-	if (fuse_folio_is_writeback(data->inode, folio))
-		return true;
-
 	/* Reached max pages */
 	if (ap->num_folios == fc->max_pages)
 		return true;
 
@@ -2333,7 +2078,7 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
 		return true;
 
 	/* Discontinuity */
-	if (data->orig_folios[ap->num_folios - 1]->index + 1 != folio_index(folio))
+	if (ap->folios[ap->num_folios - 1]->index + 1 != folio_index(folio))
 		return true;
 
 	/* Need to grow the pages array?  If so, did the expansion fail? */
@@ -2352,7 +2097,6 @@ static int fuse_writepages_fill(struct folio *folio,
 	struct inode *inode = data->inode;
 	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_conn *fc = get_fuse_conn(inode);
-	struct folio *tmp_folio;
 	int err;
 
 	if (!data->ff) {
@@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
 		data->wpa = NULL;
 	}
 
-	err = -ENOMEM;
-	tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
-	if (!tmp_folio)
-		goto out_unlock;
-
-	/*
-	 * The page must not be redirtied until the writeout is completed
-	 * (i.e. userspace has sent a reply to the write request).  Otherwise
-	 * there could be more than one temporary page instance for each real
-	 * page.
-	 *
-	 * This is ensured by holding the page lock in page_mkwrite() while
-	 * checking fuse_page_is_writeback().  We already hold the page lock
-	 * since clear_page_dirty_for_io() and keep it held until we add the
-	 * request to the fi->writepages list and increment ap->num_folios.
-	 * After this fuse_page_is_writeback() will indicate that the page is
-	 * under writeback, so we can release the page lock.
-	 */
 	if (data->wpa == NULL) {
 		err = -ENOMEM;
 		wpa = fuse_writepage_args_setup(folio, data->ff);
-		if (!wpa) {
-			folio_put(tmp_folio);
+		if (!wpa)
 			goto out_unlock;
-		}
 		fuse_file_get(wpa->ia.ff);
 		data->max_folios = 1;
 		ap = &wpa->ia.ap;
 	}
 
 	folio_start_writeback(folio);
-	fuse_writepage_args_page_fill(wpa, folio, tmp_folio, ap->num_folios);
-	data->orig_folios[ap->num_folios] = folio;
+	fuse_writepage_args_page_fill(wpa, folio, ap->num_folios);
 
 	err = 0;
-	if (data->wpa) {
-		/*
-		 * Protected by fi->lock against concurrent access by
-		 * fuse_page_is_writeback().
-		 */
-		spin_lock(&fi->lock);
-		ap->num_folios++;
-		spin_unlock(&fi->lock);
-	} else if (fuse_writepage_add(wpa, folio)) {
+	ap->num_folios++;
+	if (!data->wpa)
 		data->wpa = wpa;
-	} else {
-		folio_end_writeback(folio);
-	}
 
 out_unlock:
 	folio_unlock(folio);
 
@@ -2441,13 +2154,6 @@ static int fuse_writepages(struct address_space *mapping,
 	data.wpa = NULL;
 	data.ff = NULL;
 
-	err = -ENOMEM;
-	data.orig_folios = kcalloc(fc->max_pages,
-				   sizeof(struct folio *),
-				   GFP_NOFS);
-	if (!data.orig_folios)
-		goto out;
-
 	err = write_cache_pages(mapping, wbc, fuse_writepages_fill, &data);
 	if (data.wpa) {
 		WARN_ON(!data.wpa->ia.ap.num_folios);
@@ -2456,7 +2162,6 @@ static int fuse_writepages(struct address_space *mapping,
 	if (data.ff)
 		fuse_file_put(data.ff, false);
 
-	kfree(data.orig_folios);
 out:
 	return err;
 }
@@ -2481,8 +2186,6 @@ static int fuse_write_begin(struct file *file, struct address_space *mapping,
 	if (IS_ERR(folio))
 		goto error;
 
-	fuse_wait_on_page_writeback(mapping->host, folio->index);
-
 	if (folio_test_uptodate(folio) || len >= folio_size(folio))
 		goto success;
 	/*
@@ -2545,13 +2248,9 @@ static int fuse_launder_folio(struct folio *folio)
 {
 	int err = 0;
 	if (folio_clear_dirty_for_io(folio)) {
-		struct inode *inode = folio->mapping->host;
-
-		/* Serialize with pending writeback for the same page */
-		fuse_wait_on_page_writeback(inode, folio->index);
 		err = fuse_writepage_locked(folio);
 		if (!err)
-			fuse_wait_on_page_writeback(inode, folio->index);
+			folio_wait_writeback(folio);
 	}
 	return err;
 }
@@ -2595,7 +2294,7 @@ static vm_fault_t fuse_page_mkwrite(struct vm_fault *vmf)
 		return VM_FAULT_NOPAGE;
 	}
 
-	fuse_wait_on_folio_writeback(inode, folio);
+	folio_wait_writeback(folio);
 	return VM_FAULT_LOCKED;
 }
 
@@ -3413,9 +3112,12 @@ static const struct address_space_operations fuse_file_aops = {
 void fuse_init_file_inode(struct inode *inode, unsigned int flags)
 {
 	struct fuse_inode *fi = get_fuse_inode(inode);
+	struct fuse_conn *fc = get_fuse_conn(inode);
 
 	inode->i_fop = &fuse_file_operations;
 	inode->i_data.a_ops = &fuse_file_aops;
+	if (fc->writeback_cache)
+		mapping_set_writeback_indeterminate(&inode->i_data);
 
 	INIT_LIST_HEAD(&fi->write_files);
 	INIT_LIST_HEAD(&fi->queued_writes);
@@ -3423,7 +3125,6 @@ void fuse_init_file_inode(struct inode *inode, unsigned int flags)
 	fi->iocachectr = 0;
 	init_waitqueue_head(&fi->page_waitq);
 	init_waitqueue_head(&fi->direct_io_waitq);
-	fi->writepages = RB_ROOT;
 
 	if (IS_ENABLED(CONFIG_FUSE_DAX))
 		fuse_dax_inode_init(inode, flags);