From patchwork Wed Nov 23 22:37:09 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054396
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 01/29] btrfs-progs: turn on more compiler warnings and use -Wall
Date: Wed, 23 Nov 2022 17:37:09 -0500

While converting some of our helpers to take new arguments I would miss some
call sites, because the build does not stop on warnings and the warning would
get lost in the scrollback.  Fix this by turning on the fancier compiler
checks and making any warning fail the build.

Signed-off-by: Josef Bacik
---
 Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Makefile b/Makefile
index c74a2ea9..1777a22e 100644
--- a/Makefile
+++ b/Makefile
@@ -94,6 +94,9 @@ CFLAGS = $(SUBST_CFLAGS) \
 	-D_XOPEN_SOURCE=700 \
 	-fno-strict-aliasing \
 	-fPIC \
+	-Wall \
+	-Wunused-but-set-parameter \
+	-Werror \
 	-I$(TOPDIR) \
 	$(CRYPTO_CFLAGS) \
 	-DCOMPRESSION_LZO=$(COMPRESSION_LZO) \
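
As an aside, a minimal hypothetical example (not code from this series) of the
class of mistake the new flags now reject: a parameter that is assigned but
never read afterwards, which -Wunused-but-set-parameter reports and -Werror
then promotes to a build failure.

	/* illustrative only, not taken from btrfs-progs */
	static int set_flags(int *flags, int new_flags)
	{
		new_flags = *flags | 0x1;	/* 'new_flags' is set but never used again */
		return 0;			/* presumably meant to write back *flags = new_flags */
	}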
From patchwork Wed Nov 23 22:37:10 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054398

From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 02/29] btrfs-progs: fix make clean to clean convert properly
Date: Wed, 23 Nov 2022 17:37:10 -0500

We were not clearing the .o files for btrfs-convert as we had the wrong
directory, which meant I missed a compile error that happened when I was
messing with kernel-shared.  Fix this by making sure we clear the .o files
for convert properly.

Signed-off-by: Josef Bacik
Reviewed-by: Anand Jain
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 1777a22e..475754e2 100644
--- a/Makefile
+++ b/Makefile
@@ -795,7 +795,7 @@ clean: $(CLEANDIRS)
 	kernel-lib/*.o kernel-lib/.deps/*.o.d \
 	kernel-shared/*.o kernel-shared/.deps/*.o.d \
 	image/*.o image/.deps/*.o.d \
-	convert/.deps/*.o convert/.deps/*.o.d \
+	convert/*.o convert/.deps/*.o.d \
 	mkfs/*.o mkfs/.deps/*.o.d check/*.o check/.deps/*.o.d \
 	cmds/*.o cmds/.deps/*.o.d common/*.o common/.deps/*.o.d \
 	crypto/*.o crypto/.deps/*.o.d \
From patchwork Wed Nov 23 22:37:11 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054399

From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 03/29] btrfs-progs: properly test for send_stream_version
Date: Wed, 23 Nov 2022 17:37:11 -0500

We want to notrun if this test fails, not if it succeeds.  Additionally we
want -s, as -q will still print an error if it gets ENOENT from the file
we're trying to grep.

Signed-off-by: Josef Bacik
Reviewed-by: Anand Jain
---
 tests/misc-tests/053-receive-write-encoded/test.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/misc-tests/053-receive-write-encoded/test.sh b/tests/misc-tests/053-receive-write-encoded/test.sh
index a3e97a73..74b745ca 100755
--- a/tests/misc-tests/053-receive-write-encoded/test.sh
+++ b/tests/misc-tests/053-receive-write-encoded/test.sh
@@ -11,7 +11,7 @@ check_prereq btrfs
 setup_root_helper
 prepare_test_dev
 
-if grep -q '1$' "/sys/fs/btrfs/features/send_stream_version"; then
+if ! grep -s '1$' "/sys/fs/btrfs/features/send_stream_version"; then
 	_not_run "kernel does not support send stream >1"
 	exit
 fi
From patchwork Wed Nov 23 22:37:12 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054400
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 04/29] btrfs-progs: use -std=gnu11
Date: Wed, 23 Nov 2022 17:37:12 -0500

The kernel switched to this recently; switch btrfs-progs to it as well to
avoid issues with syncing the kernel code.

Signed-off-by: Josef Bacik
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 475754e2..aae7d66a 100644
--- a/Makefile
+++ b/Makefile
@@ -401,7 +401,7 @@ ifdef C
 	grep -v __SIZE_TYPE__ > $(check_defs))
 	check = $(CHECKER)
 	check_echo = echo
-	CSTD = -std=gnu89
+	CSTD = -std=gnu11
 else
 	check = true
 	check_echo = true
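
As a small illustration (not taken from this series) of what the switch buys:
gnu11 accepts post-C89 constructs that gnu89 rejects, such as declarations in
the for-loop header, which makes it easier to carry kernel code over verbatim.

	/* hypothetical example; fails with -std=gnu89, builds with -std=gnu11 */
	#include <stdint.h>

	static uint64_t sum_sizes(const uint64_t *sizes, int nr)
	{
		uint64_t total = 0;

		for (int i = 0; i < nr; i++)	/* loop-scoped declaration needs C99 or later */
			total += sizes[i];
		return total;
	}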
From patchwork Wed Nov 23 22:37:13 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054401

From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 05/29] btrfs-progs: move btrfs_err_str into common/utils.h
Date: Wed, 23 Nov 2022 17:37:13 -0500
Message-Id: <06076dba53813bbcb59b3dd9c070a3eeb249551c.1669242804.git.josef@toxicpanda.com>

This doesn't really belong with the ioctl definitions, and when we sync the
ioctl definitions with the kernel this helper will go away, so adjust this
now.

Signed-off-by: Josef Bacik
Reviewed-by: Anand Jain
---
 common/utils.h | 32 ++++++++++++++++++++++++++++++++
 ioctl.h        | 32 --------------------------------
 2 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/common/utils.h b/common/utils.h
index 5189e352..87dceef5 100644
--- a/common/utils.h
+++ b/common/utils.h
@@ -117,4 +117,36 @@ int sysfs_open_fsid_file(int fd, const char *filename);
 int sysfs_read_file(int fd, char *buf, size_t size);
 int sysfs_open_fsid_dir(int fd, const char *dirname);
 
+/* An error code to error string mapping for the kernel
+* error codes
+*/
+static inline char *btrfs_err_str(enum btrfs_err_code err_code)
+{
+	switch (err_code) {
+	case BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET:
+		return "unable to go below two devices on raid1";
+	case BTRFS_ERROR_DEV_RAID1C3_MIN_NOT_MET:
+		return "unable to go below three devices on raid1c3";
+	case BTRFS_ERROR_DEV_RAID1C4_MIN_NOT_MET:
+		return "unable to go below four devices on raid1c4";
+	case BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET:
+		return "unable to go below four/two devices on raid10";
+	case BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET:
+		return "unable to go below two devices on raid5";
+	case BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET:
+		return "unable to go below three devices on raid6";
+	case BTRFS_ERROR_DEV_TGT_REPLACE:
+		return "unable to remove the dev_replace target dev";
+	case BTRFS_ERROR_DEV_MISSING_NOT_FOUND:
+		return "no missing devices found to remove";
+	case BTRFS_ERROR_DEV_ONLY_WRITABLE:
+		return "unable to remove the only writeable device";
+	case BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS:
+		return "add/delete/balance/replace/resize operation "
+			"in progress";
+	default:
+		return NULL;
+	}
+}
+
 #endif
diff --git a/ioctl.h b/ioctl.h
index f19695e3..0615054b 100644
--- a/ioctl.h
+++ b/ioctl.h
@@ -935,38 +935,6 @@ enum btrfs_err_code {
 	BTRFS_ERROR_DEV_RAID1C4_MIN_NOT_MET,
 };
 
-/* An error code to error string mapping for the kernel
-* error codes
-*/
-static inline char *btrfs_err_str(enum btrfs_err_code err_code)
-{
-	switch (err_code) {
-	case BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET:
-		return "unable to go below two devices on raid1";
-	case BTRFS_ERROR_DEV_RAID1C3_MIN_NOT_MET:
-		return "unable to go below three devices on raid1c3";
-	case BTRFS_ERROR_DEV_RAID1C4_MIN_NOT_MET:
-		return "unable to go below four devices on raid1c4";
-	case BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET:
-		return "unable to go below four/two devices on raid10";
-	case BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET:
-		return "unable to go below two devices on raid5";
-	case BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET:
-		return "unable to go below three devices on raid6";
-	case BTRFS_ERROR_DEV_TGT_REPLACE:
-		return "unable to remove the dev_replace target dev";
-	case BTRFS_ERROR_DEV_MISSING_NOT_FOUND:
-		return "no missing devices found to remove";
-	case BTRFS_ERROR_DEV_ONLY_WRITABLE:
-		return "unable to remove the only writeable device";
-	case BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS:
-		return "add/delete/balance/replace/resize operation "
-			"in progress";
-	default:
-		return NULL;
-	}
-}
-
 #define BTRFS_IOC_SNAP_CREATE _IOW(BTRFS_IOCTL_MAGIC, 1, \
 				   struct btrfs_ioctl_vol_args)
 #define BTRFS_IOC_DEFRAG _IOW(BTRFS_IOCTL_MAGIC, 2, \
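
A short sketch of how a caller might use the relocated helper; this is
illustrative only and not part of the patch (it assumes common/utils.h and
stdio.h are included).

	/* hypothetical caller, not from the series */
	static void report_dev_error(enum btrfs_err_code code)
	{
		char *msg = btrfs_err_str(code);

		if (msg)
			fprintf(stderr, "device operation refused: %s\n", msg);
		else
			fprintf(stderr, "device operation failed: unknown error %d\n", code);
	}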
"unable to go below four/two devices on raid10"; - case BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET: - return "unable to go below two devices on raid5"; - case BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET: - return "unable to go below three devices on raid6"; - case BTRFS_ERROR_DEV_TGT_REPLACE: - return "unable to remove the dev_replace target dev"; - case BTRFS_ERROR_DEV_MISSING_NOT_FOUND: - return "no missing devices found to remove"; - case BTRFS_ERROR_DEV_ONLY_WRITABLE: - return "unable to remove the only writeable device"; - case BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS: - return "add/delete/balance/replace/resize operation " - "in progress"; - default: - return NULL; - } -} - #define BTRFS_IOC_SNAP_CREATE _IOW(BTRFS_IOCTL_MAGIC, 1, \ struct btrfs_ioctl_vol_args) #define BTRFS_IOC_DEFRAG _IOW(BTRFS_IOCTL_MAGIC, 2, \ From patchwork Wed Nov 23 22:37:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054403 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5E58C4332F for ; Wed, 23 Nov 2022 22:38:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229751AbiKWWiP (ORCPT ); Wed, 23 Nov 2022 17:38:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54938 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229795AbiKWWhw (ORCPT ); Wed, 23 Nov 2022 17:37:52 -0500 Received: from mail-qk1-x734.google.com (mail-qk1-x734.google.com [IPv6:2607:f8b0:4864:20::734]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EBF5B6A for ; Wed, 23 Nov 2022 14:37:48 -0800 (PST) Received: by mail-qk1-x734.google.com with SMTP id g10so13481720qkl.6 for ; Wed, 23 Nov 2022 14:37:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=KvGxEHhgP5LqMrHFmrLZo8SnAvvX5vhiAgd+00vbEgc=; b=lt5u/OinkSEsJ0kAZ5YHQ7UGBv15DjWAiQWDYzwTn0WU7cgojznfn/uQJJ545zYW9a tbbdDT77cNahwLbplw04uAKSx0KijcWlVMjHRAk/MAYbjryQRX7U8oskcdFi/BRfo5Wg 4KqpHBkzxffYR551RC4xPt2EiNZT83qP45ecY8vsuGKI6ComDtQud8gXBx8CfXtt6Yoa 6GY6A2Oa4EnkkCDtlLn1KUlZ2vdpWBpWe8IffaV74VAHBUCGhMnVdmRmgU1GYigdLxIe 9coEhWZRnofBpq6IyLD2ml6KORiX9F3jWie6U7+Y0IQsYaLfWk47wQVZx+ssSrp/RQr0 GYOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KvGxEHhgP5LqMrHFmrLZo8SnAvvX5vhiAgd+00vbEgc=; b=mqu99OBPkbvsIS5PznwPupLdvZgSd/wHFyJGNqpEfUbuOtY4PaDhn4FHnxq6PUCgnM TYHRMaKImvByMs0nyXEVZ1yqjXeTPjHDwBqhr3IVPwzkH3opYTGVNX6x1IHrV+Y/MAqr uagAnbejTELu/bLOM++mZlEU/EmGhFOEoO57mq0qAY++iWgTCooULogSggq+oCj3H2S2 W3rPNLRyGnA+E7giBc46Tt4U4yA8db75mM+yBptyJH45jqGeixpXmkFaROQaXHWxuShp BTyoqy+FVlKkBtzE5qzRD1KC0566fw9o+bn1jAQu1VKMokkF7AKasURZZMjop1rtYNDU Utnw== X-Gm-Message-State: ANoB5pkkbZiA+D7HoJvzWLB1KmTAsetmTYKPT6KYCP4wmX+jgzQbWRHm lnlnVy+RigH7sxegJDGAGqgYYk+/c7z+1g== X-Google-Smtp-Source: AA0mqf4Wk9+l5UZY89pbp0bGF++vbo7OuK44ndjq0qXMU3j8ZmqT3ZcJ3fHExv7OjFd04NuVS+1iAA== X-Received: by 2002:a05:620a:370b:b0:6fa:1da0:2e7b with SMTP id 
de11-20020a05620a370b00b006fa1da02e7bmr10835170qkb.162.1669243067112; Wed, 23 Nov 2022 14:37:47 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id x13-20020a05620a448d00b006fa4ac86bfbsm12864516qkp.55.2022.11.23.14.37.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:46 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 06/29] btrfs-progs: rename qgroup items to match the kernel naming scheme Date: Wed, 23 Nov 2022 17:37:14 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We're going to sync the kernel source into btrfs-progs, and in the kernel we have all these qgroup fields named with short names instead of the full name, so rename referenced -> rfer compressed -> cmpr exclusive -> excl to match the kernel and update all the users, this will make the sync cleaner. Signed-off-by: Josef Bacik --- check/qgroup-verify.c | 60 ++++++++++++++--------------- cmds/qgroup.c | 57 +++++++++++++-------------- cmds/qgroup.h | 8 ++-- cmds/subvolume.c | 12 +++--- ioctl.h | 8 ++-- kernel-shared/ctree.h | 79 +++++++++++++++++++------------------- kernel-shared/print-tree.c | 18 ++++----- 7 files changed, 116 insertions(+), 126 deletions(-) diff --git a/check/qgroup-verify.c b/check/qgroup-verify.c index ab93d7e0..906fabcb 100644 --- a/check/qgroup-verify.c +++ b/check/qgroup-verify.c @@ -49,10 +49,10 @@ struct qgroup_count; static struct qgroup_count *find_count(u64 qgroupid); struct qgroup_info { - u64 referenced; - u64 referenced_compressed; - u64 exclusive; - u64 exclusive_compressed; + u64 rfer; + u64 rfer_cmpr; + u64 excl; + u64 excl_cmpr; }; struct qgroup_count { @@ -481,12 +481,12 @@ static int account_one_extent(struct ulist *roots, u64 bytenr, u64 num_bytes) nr_refs = group_get_cur_refcnt(count); if (nr_refs) { - count->info.referenced += num_bytes; - count->info.referenced_compressed += num_bytes; + count->info.rfer += num_bytes; + count->info.rfer_cmpr += num_bytes; if (nr_refs == nr_roots) { - count->info.exclusive += num_bytes; - count->info.exclusive_compressed += num_bytes; + count->info.excl += num_bytes; + count->info.excl_cmpr += num_bytes; } } #ifdef QGROUP_VERIFY_DEBUG @@ -494,7 +494,7 @@ static int account_one_extent(struct ulist *roots, u64 bytenr, u64 num_bytes) " excl %llu, refs %llu, roots %llu\n", bytenr, num_bytes, btrfs_qgroup_level(count->qgroupid), btrfs_qgroup_subvid(count->qgroupid), - count->info.referenced, count->info.exclusive, nr_refs, + count->info.rfer, count->info.excl, nr_refs, nr_roots); #endif } @@ -870,12 +870,10 @@ static struct qgroup_count *alloc_count(struct btrfs_disk_key *key, c->key = *key; item = &c->diskinfo; - item->referenced = btrfs_qgroup_info_referenced(leaf, disk); - item->referenced_compressed = - btrfs_qgroup_info_referenced_compressed(leaf, disk); - item->exclusive = btrfs_qgroup_info_exclusive(leaf, disk); - item->exclusive_compressed = - btrfs_qgroup_info_exclusive_compressed(leaf, disk); + item->rfer = btrfs_qgroup_info_rfer(leaf, disk); + item->rfer_cmpr = btrfs_qgroup_info_rfer_cmpr(leaf, disk); + item->excl = btrfs_qgroup_info_excl(leaf, disk); + item->excl_cmpr = btrfs_qgroup_info_excl_cmpr(leaf, disk); INIT_LIST_HEAD(&c->groups); INIT_LIST_HEAD(&c->members); INIT_LIST_HEAD(&c->bad_list); @@ -1286,8 +1284,8 @@ static int report_qgroup_difference(struct qgroup_count 
*count, int verbose) int is_different; struct qgroup_info *info = &count->info; struct qgroup_info *disk = &count->diskinfo; - long long excl_diff = info->exclusive - disk->exclusive; - long long ref_diff = info->referenced - disk->referenced; + long long excl_diff = info->excl - disk->excl; + long long ref_diff = info->rfer - disk->rfer; is_different = excl_diff || ref_diff; @@ -1297,16 +1295,16 @@ static int report_qgroup_difference(struct qgroup_count *count, int verbose) btrfs_qgroup_subvid(count->qgroupid), is_different ? "are different" : ""); - print_fields(info->referenced, info->referenced_compressed, + print_fields(info->rfer, info->rfer_cmpr, "our:", "referenced"); - print_fields(disk->referenced, disk->referenced_compressed, + print_fields(disk->rfer, disk->rfer_cmpr, "disk:", "referenced"); if (ref_diff) print_fields_signed(ref_diff, ref_diff, "diff:", "referenced"); - print_fields(info->exclusive, info->exclusive_compressed, + print_fields(info->excl, info->excl_cmpr, "our:", "exclusive"); - print_fields(disk->exclusive, disk->exclusive_compressed, + print_fields(disk->excl, disk->excl_cmpr, "disk:", "exclusive"); if (excl_diff) print_fields_signed(excl_diff, excl_diff, @@ -1388,8 +1386,8 @@ static bool is_bad_qgroup(struct qgroup_count *count) { struct qgroup_info *info = &count->info; struct qgroup_info *disk = &count->diskinfo; - s64 excl_diff = info->exclusive - disk->exclusive; - s64 ref_diff = info->referenced - disk->referenced; + s64 excl_diff = info->excl - disk->excl; + s64 ref_diff = info->rfer - disk->rfer; return (excl_diff || ref_diff); } @@ -1594,15 +1592,15 @@ static int repair_qgroup_info(struct btrfs_fs_info *info, btrfs_set_qgroup_info_generation(path.nodes[0], info_item, trans->transid); - btrfs_set_qgroup_info_referenced(path.nodes[0], info_item, - count->info.referenced); - btrfs_set_qgroup_info_referenced_compressed(path.nodes[0], info_item, - count->info.referenced_compressed); + btrfs_set_qgroup_info_rfer(path.nodes[0], info_item, + count->info.rfer); + btrfs_set_qgroup_info_rfer_cmpr(path.nodes[0], info_item, + count->info.rfer_cmpr); - btrfs_set_qgroup_info_exclusive(path.nodes[0], info_item, - count->info.exclusive); - btrfs_set_qgroup_info_exclusive_compressed(path.nodes[0], info_item, - count->info.exclusive_compressed); + btrfs_set_qgroup_info_excl(path.nodes[0], info_item, + count->info.excl); + btrfs_set_qgroup_info_excl_cmpr(path.nodes[0], info_item, + count->info.excl_cmpr); btrfs_mark_buffer_dirty(path.nodes[0]); diff --git a/cmds/qgroup.c b/cmds/qgroup.c index f841c9d4..1d794427 100644 --- a/cmds/qgroup.c +++ b/cmds/qgroup.c @@ -294,10 +294,10 @@ static void print_qgroup_column(struct btrfs_qgroup *qgroup, print_qgroup_column_add_blank(BTRFS_QGROUP_QGROUPID, len); break; case BTRFS_QGROUP_RFER: - len = print_u64(qgroup->info.referenced, unit_mode, max_len); + len = print_u64(qgroup->info.rfer, unit_mode, max_len); break; case BTRFS_QGROUP_EXCL: - len = print_u64(qgroup->info.exclusive, unit_mode, max_len); + len = print_u64(qgroup->info.excl, unit_mode, max_len); break; case BTRFS_QGROUP_PARENT: len = print_parent_column(qgroup); @@ -305,14 +305,14 @@ static void print_qgroup_column(struct btrfs_qgroup *qgroup, break; case BTRFS_QGROUP_MAX_RFER: if (qgroup->limit.flags & BTRFS_QGROUP_LIMIT_MAX_RFER) - len = print_u64(qgroup->limit.max_referenced, + len = print_u64(qgroup->limit.max_rfer, unit_mode, max_len); else len = printf("%*s", max_len, "none"); break; case BTRFS_QGROUP_MAX_EXCL: if (qgroup->limit.flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) 
- len = print_u64(qgroup->limit.max_exclusive, + len = print_u64(qgroup->limit.max_excl, unit_mode, max_len); else len = printf("%*s", max_len, "none"); @@ -412,9 +412,9 @@ static int comp_entry_with_rfer(struct btrfs_qgroup *entry1, { int ret; - if (entry1->info.referenced > entry2->info.referenced) + if (entry1->info.rfer > entry2->info.rfer) ret = 1; - else if (entry1->info.referenced < entry2->info.referenced) + else if (entry1->info.rfer < entry2->info.rfer) ret = -1; else ret = 0; @@ -428,9 +428,9 @@ static int comp_entry_with_excl(struct btrfs_qgroup *entry1, { int ret; - if (entry1->info.exclusive > entry2->info.exclusive) + if (entry1->info.excl > entry2->info.excl) ret = 1; - else if (entry1->info.exclusive < entry2->info.exclusive) + else if (entry1->info.excl < entry2->info.excl) ret = -1; else ret = 0; @@ -444,9 +444,9 @@ static int comp_entry_with_max_rfer(struct btrfs_qgroup *entry1, { int ret; - if (entry1->limit.max_referenced > entry2->limit.max_referenced) + if (entry1->limit.max_rfer > entry2->limit.max_rfer) ret = 1; - else if (entry1->limit.max_referenced < entry2->limit.max_referenced) + else if (entry1->limit.max_rfer < entry2->limit.max_rfer) ret = -1; else ret = 0; @@ -460,9 +460,9 @@ static int comp_entry_with_max_excl(struct btrfs_qgroup *entry1, { int ret; - if (entry1->limit.max_exclusive > entry2->limit.max_exclusive) + if (entry1->limit.max_excl > entry2->limit.max_excl) ret = 1; - else if (entry1->limit.max_exclusive < entry2->limit.max_exclusive) + else if (entry1->limit.max_excl < entry2->limit.max_excl) ret = -1; else ret = 0; @@ -696,12 +696,10 @@ static int update_qgroup_info(struct qgroup_lookup *qgroup_lookup, u64 qgroupid, return PTR_ERR(bq); bq->info.generation = btrfs_stack_qgroup_info_generation(info); - bq->info.referenced = btrfs_stack_qgroup_info_referenced(info); - bq->info.referenced_compressed = - btrfs_stack_qgroup_info_referenced_compressed(info); - bq->info.exclusive = btrfs_stack_qgroup_info_exclusive(info); - bq->info.exclusive_compressed = - btrfs_stack_qgroup_info_exclusive_compressed(info); + bq->info.rfer = btrfs_stack_qgroup_info_rfer(info); + bq->info.rfer_cmpr = btrfs_stack_qgroup_info_rfer_cmpr(info); + bq->info.excl = btrfs_stack_qgroup_info_excl(info); + bq->info.excl_cmpr = btrfs_stack_qgroup_info_excl_cmpr(info); return 0; } @@ -717,13 +715,10 @@ static int update_qgroup_limit(struct qgroup_lookup *qgroup_lookup, return PTR_ERR(bq); bq->limit.flags = btrfs_stack_qgroup_limit_flags(limit); - bq->limit.max_referenced = - btrfs_stack_qgroup_limit_max_referenced(limit); - bq->limit.max_exclusive = - btrfs_stack_qgroup_limit_max_exclusive(limit); - bq->limit.rsv_referenced = - btrfs_stack_qgroup_limit_rsv_referenced(limit); - bq->limit.rsv_exclusive = btrfs_stack_qgroup_limit_rsv_exclusive(limit); + bq->limit.max_rfer = btrfs_stack_qgroup_limit_max_rfer(limit); + bq->limit.max_excl = btrfs_stack_qgroup_limit_max_excl(limit); + bq->limit.rsv_rfer = btrfs_stack_qgroup_limit_rsv_rfer(limit); + bq->limit.rsv_excl = btrfs_stack_qgroup_limit_rsv_excl(limit); return 0; } @@ -1014,23 +1009,23 @@ static void __update_columns_max_len(struct btrfs_qgroup *bq, btrfs_qgroup_columns[column].max_len = len; break; case BTRFS_QGROUP_RFER: - len = strlen(pretty_size_mode(bq->info.referenced, unit_mode)); + len = strlen(pretty_size_mode(bq->info.rfer, unit_mode)); if (btrfs_qgroup_columns[column].max_len < len) btrfs_qgroup_columns[column].max_len = len; break; case BTRFS_QGROUP_EXCL: - len = strlen(pretty_size_mode(bq->info.exclusive, 
unit_mode)); + len = strlen(pretty_size_mode(bq->info.excl, unit_mode)); if (btrfs_qgroup_columns[column].max_len < len) btrfs_qgroup_columns[column].max_len = len; break; case BTRFS_QGROUP_MAX_RFER: - len = strlen(pretty_size_mode(bq->limit.max_referenced, + len = strlen(pretty_size_mode(bq->limit.max_rfer, unit_mode)); if (btrfs_qgroup_columns[column].max_len < len) btrfs_qgroup_columns[column].max_len = len; break; case BTRFS_QGROUP_MAX_EXCL: - len = strlen(pretty_size_mode(bq->limit.max_exclusive, + len = strlen(pretty_size_mode(bq->limit.max_excl, unit_mode)); if (btrfs_qgroup_columns[column].max_len < len) btrfs_qgroup_columns[column].max_len = len; @@ -1912,10 +1907,10 @@ static int cmd_qgroup_limit(const struct cmd_struct *cmd, int argc, char **argv) BTRFS_QGROUP_LIMIT_EXCL_CMPR; if (exclusive) { args.lim.flags |= BTRFS_QGROUP_LIMIT_MAX_EXCL; - args.lim.max_exclusive = size; + args.lim.max_excl = size; } else { args.lim.flags |= BTRFS_QGROUP_LIMIT_MAX_RFER; - args.lim.max_referenced = size; + args.lim.max_rfer = size; } if (argc - optind == 2) { diff --git a/cmds/qgroup.h b/cmds/qgroup.h index 69b8c11f..93e81e85 100644 --- a/cmds/qgroup.h +++ b/cmds/qgroup.h @@ -24,10 +24,10 @@ struct btrfs_qgroup_info { u64 generation; - u64 referenced; - u64 referenced_compressed; - u64 exclusive; - u64 exclusive_compressed; + u64 rfer; + u64 rfer_cmpr; + u64 excl; + u64 excl_cmpr; }; struct btrfs_qgroup_stats { diff --git a/cmds/subvolume.c b/cmds/subvolume.c index adbac908..a90147e2 100644 --- a/cmds/subvolume.c +++ b/cmds/subvolume.c @@ -1489,15 +1489,15 @@ static int cmd_subvol_show(const struct cmd_struct *cmd, int argc, char **argv) fflush(stdout); pr_verbose(LOG_DEFAULT, "\t Limit referenced:\t%s\n", - stats.limit.max_referenced == 0 ? "-" : - pretty_size_mode(stats.limit.max_referenced, unit_mode)); + stats.limit.max_rfer == 0 ? "-" : + pretty_size_mode(stats.limit.max_rfer, unit_mode)); pr_verbose(LOG_DEFAULT, "\t Limit exclusive:\t%s\n", - stats.limit.max_exclusive == 0 ? "-" : - pretty_size_mode(stats.limit.max_exclusive, unit_mode)); + stats.limit.max_excl == 0 ? 
"-" : + pretty_size_mode(stats.limit.max_excl, unit_mode)); pr_verbose(LOG_DEFAULT, "\t Usage referenced:\t%s\n", - pretty_size_mode(stats.info.referenced, unit_mode)); + pretty_size_mode(stats.info.rfer, unit_mode)); pr_verbose(LOG_DEFAULT, "\t Usage exclusive:\t%s\n", - pretty_size_mode(stats.info.exclusive, unit_mode)); + pretty_size_mode(stats.info.excl, unit_mode)); out: free(subvol_path); diff --git a/ioctl.h b/ioctl.h index 0615054b..21aaedde 100644 --- a/ioctl.h +++ b/ioctl.h @@ -71,10 +71,10 @@ BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args) == 4096); struct btrfs_qgroup_limit { __u64 flags; - __u64 max_referenced; - __u64 max_exclusive; - __u64 rsv_referenced; - __u64 rsv_exclusive; + __u64 max_rfer; + __u64 max_excl; + __u64 rsv_rfer; + __u64 rsv_excl; }; BUILD_ASSERT(sizeof(struct btrfs_qgroup_limit) == 40); diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 7a9fd1cb..4ade901a 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -1108,10 +1108,10 @@ struct btrfs_free_space_info { struct btrfs_qgroup_info_item { __le64 generation; - __le64 referenced; - __le64 referenced_compressed; - __le64 exclusive; - __le64 exclusive_compressed; + __le64 rfer; + __le64 rfer_cmpr; + __le64 excl; + __le64 excl_cmpr; } __attribute__ ((__packed__)); /* flags definition for qgroup limits */ @@ -1124,10 +1124,10 @@ struct btrfs_qgroup_info_item { struct btrfs_qgroup_limit_item { __le64 flags; - __le64 max_referenced; - __le64 max_exclusive; - __le64 rsv_referenced; - __le64 rsv_exclusive; + __le64 max_rfer; + __le64 max_excl; + __le64 rsv_rfer; + __le64 rsv_excl; } __attribute__ ((__packed__)); struct btrfs_space_info { @@ -2454,48 +2454,47 @@ BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_rescan, /* btrfs_qgroup_info_item */ BTRFS_SETGET_FUNCS(qgroup_info_generation, struct btrfs_qgroup_info_item, generation, 64); -BTRFS_SETGET_FUNCS(qgroup_info_referenced, struct btrfs_qgroup_info_item, - referenced, 64); -BTRFS_SETGET_FUNCS(qgroup_info_referenced_compressed, - struct btrfs_qgroup_info_item, referenced_compressed, 64); -BTRFS_SETGET_FUNCS(qgroup_info_exclusive, struct btrfs_qgroup_info_item, - exclusive, 64); -BTRFS_SETGET_FUNCS(qgroup_info_exclusive_compressed, - struct btrfs_qgroup_info_item, exclusive_compressed, 64); +BTRFS_SETGET_FUNCS(qgroup_info_rfer, struct btrfs_qgroup_info_item, + rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_info_rfer_cmpr, + struct btrfs_qgroup_info_item, rfer_cmpr, 64); +BTRFS_SETGET_FUNCS(qgroup_info_excl, struct btrfs_qgroup_info_item, excl, 64); +BTRFS_SETGET_FUNCS(qgroup_info_excl_cmpr, + struct btrfs_qgroup_info_item, excl_cmpr, 64); BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_generation, struct btrfs_qgroup_info_item, generation, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_referenced, - struct btrfs_qgroup_info_item, referenced, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_referenced_compressed, - struct btrfs_qgroup_info_item, referenced_compressed, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_exclusive, - struct btrfs_qgroup_info_item, exclusive, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_exclusive_compressed, - struct btrfs_qgroup_info_item, exclusive_compressed, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer, + struct btrfs_qgroup_info_item, rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer_cmpr, + struct btrfs_qgroup_info_item, rfer_cmpr, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl, + struct btrfs_qgroup_info_item, excl, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl_cmpr, + struct 
btrfs_qgroup_info_item, excl_cmpr, 64); /* btrfs_qgroup_limit_item */ BTRFS_SETGET_FUNCS(qgroup_limit_flags, struct btrfs_qgroup_limit_item, flags, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_max_referenced, struct btrfs_qgroup_limit_item, - max_referenced, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_max_exclusive, struct btrfs_qgroup_limit_item, - max_exclusive, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_rsv_referenced, struct btrfs_qgroup_limit_item, - rsv_referenced, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_rsv_exclusive, struct btrfs_qgroup_limit_item, - rsv_exclusive, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_max_rfer, struct btrfs_qgroup_limit_item, + max_rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_max_excl, struct btrfs_qgroup_limit_item, + max_excl, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_rsv_rfer, struct btrfs_qgroup_limit_item, + rsv_rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_rsv_excl, struct btrfs_qgroup_limit_item, + rsv_excl, 64); BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_flags, struct btrfs_qgroup_limit_item, flags, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_referenced, - struct btrfs_qgroup_limit_item, max_referenced, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_exclusive, - struct btrfs_qgroup_limit_item, max_exclusive, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_referenced, - struct btrfs_qgroup_limit_item, rsv_referenced, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_exclusive, - struct btrfs_qgroup_limit_item, rsv_exclusive, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_rfer, + struct btrfs_qgroup_limit_item, max_rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_excl, + struct btrfs_qgroup_limit_item, max_excl, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_rfer, + struct btrfs_qgroup_limit_item, rsv_rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_excl, + struct btrfs_qgroup_limit_item, rsv_excl, 64); /* btrfs_balance_item */ BTRFS_SETGET_FUNCS(balance_item_flags, struct btrfs_balance_item, flags, 64); diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c index 0e0404ab..2cf1b283 100644 --- a/kernel-shared/print-tree.c +++ b/kernel-shared/print-tree.c @@ -1088,12 +1088,10 @@ static void print_qgroup_info(struct extent_buffer *eb, int slot) "\t\treferenced %llu referenced_compressed %llu\n" "\t\texclusive %llu exclusive_compressed %llu\n", (unsigned long long)btrfs_qgroup_info_generation(eb, qg_info), - (unsigned long long)btrfs_qgroup_info_referenced(eb, qg_info), - (unsigned long long)btrfs_qgroup_info_referenced_compressed(eb, - qg_info), - (unsigned long long)btrfs_qgroup_info_exclusive(eb, qg_info), - (unsigned long long)btrfs_qgroup_info_exclusive_compressed(eb, - qg_info)); + (unsigned long long)btrfs_qgroup_info_rfer(eb, qg_info), + (unsigned long long)btrfs_qgroup_info_rfer_cmpr(eb, qg_info), + (unsigned long long)btrfs_qgroup_info_excl(eb, qg_info), + (unsigned long long)btrfs_qgroup_info_excl_cmpr(eb, qg_info)); } static void print_qgroup_limit(struct extent_buffer *eb, int slot) @@ -1105,10 +1103,10 @@ static void print_qgroup_limit(struct extent_buffer *eb, int slot) "\t\tmax_referenced %lld max_exclusive %lld\n" "\t\trsv_referenced %lld rsv_exclusive %lld\n", (unsigned long long)btrfs_qgroup_limit_flags(eb, qg_limit), - (long long)btrfs_qgroup_limit_max_referenced(eb, qg_limit), - (long long)btrfs_qgroup_limit_max_exclusive(eb, qg_limit), - (long long)btrfs_qgroup_limit_rsv_referenced(eb, qg_limit), - (long long)btrfs_qgroup_limit_rsv_exclusive(eb, qg_limit)); + (long 
long)btrfs_qgroup_limit_max_rfer(eb, qg_limit), + (long long)btrfs_qgroup_limit_max_excl(eb, qg_limit), + (long long)btrfs_qgroup_limit_rsv_rfer(eb, qg_limit), + (long long)btrfs_qgroup_limit_rsv_excl(eb, qg_limit)); } static void print_persistent_item(struct extent_buffer *eb, void *ptr,

From patchwork Wed Nov 23 22:37:15 2022
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 13054402
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id h4-20020a05620a400400b006eeb3165565sm13182080qko.80.2022.11.23.14.37.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:48 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 07/29] btrfs-progs: make btrfs_qgroup_level helper match the kernel Date: Wed, 23 Nov 2022 17:37:15 -0500 Message-Id: <976e6937a7bf48d19ecc5788a28955fdab0366f5.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We return __u16 in the kernel, as this is actually the size of btrfs_qgroup_level. Adjust the existing helpers and update all the callers to deal with the new size appropriately. This will make syncing the kernel code cleaner. Signed-off-by: Josef Bacik --- check/qgroup-verify.c | 6 +++--- cmds/qgroup.c | 16 ++++++++-------- kernel-shared/ctree.h | 2 +- kernel-shared/print-tree.c | 4 ++-- libbtrfs/ctree.h | 2 +- libbtrfsutil/btrfs_tree.h | 2 +- 6 files changed, 16 insertions(+), 16 deletions(-) diff --git a/check/qgroup-verify.c b/check/qgroup-verify.c index 906fabcb..d79f947f 100644 --- a/check/qgroup-verify.c +++ b/check/qgroup-verify.c @@ -1290,7 +1290,7 @@ static int report_qgroup_difference(struct qgroup_count *count, int verbose) is_different = excl_diff || ref_diff; if (verbose || (is_different && qgroup_printable(count))) { - printf("Counts for qgroup id: %llu/%llu %s\n", + printf("Counts for qgroup id: %u/%llu %s\n", btrfs_qgroup_level(count->qgroupid), btrfs_qgroup_subvid(count->qgroupid), is_different ? "are different" : ""); @@ -1564,7 +1564,7 @@ static int repair_qgroup_info(struct btrfs_fs_info *info, struct btrfs_key key; if (!silent) - printf("Repair qgroup %llu/%llu\n", + printf("Repair qgroup %u/%llu\n", btrfs_qgroup_level(count->qgroupid), btrfs_qgroup_subvid(count->qgroupid)); @@ -1578,7 +1578,7 @@ static int repair_qgroup_info(struct btrfs_fs_info *info, key.offset = count->qgroupid; ret = btrfs_search_slot(trans, root, &key, &path, 0, 1); if (ret) { - error("could not find disk item for qgroup %llu/%llu", + error("could not find disk item for qgroup %u/%llu", btrfs_qgroup_level(count->qgroupid), btrfs_qgroup_subvid(count->qgroupid)); if (ret > 0) diff --git a/cmds/qgroup.c b/cmds/qgroup.c index 1d794427..c6c15da5 100644 --- a/cmds/qgroup.c +++ b/cmds/qgroup.c @@ -233,7 +233,7 @@ static int print_parent_column(struct btrfs_qgroup *qgroup) int len = 0; list_for_each_entry(list, &qgroup->qgroups, next_qgroup) { - len += printf("%llu/%llu", + len += printf("%u/%llu", btrfs_qgroup_level(list->qgroup->qgroupid), btrfs_qgroup_subvid(list->qgroup->qgroupid)); if (!list_is_last(&list->next_qgroup, &qgroup->qgroups)) @@ -251,7 +251,7 @@ static int print_child_column(struct btrfs_qgroup *qgroup) int len = 0; list_for_each_entry(list, &qgroup->members, next_member) { - len += printf("%llu/%llu", + len += printf("%u/%llu", btrfs_qgroup_level(list->member->qgroupid), btrfs_qgroup_subvid(list->member->qgroupid)); if (!list_is_last(&list->next_member, &qgroup->members)) @@ -288,7 +288,7 @@ static void print_qgroup_column(struct btrfs_qgroup *qgroup, switch (column) { case BTRFS_QGROUP_QGROUPID: - len = printf("%llu/%llu", + len = printf("%u/%llu", btrfs_qgroup_level(qgroup->qgroupid), btrfs_qgroup_subvid(qgroup->qgroupid)); print_qgroup_column_add_blank(BTRFS_QGROUP_QGROUPID, len); @@ -732,7 +732,7 @@ static int update_qgroup_relation(struct 
qgroup_lookup *qgroup_lookup, child = qgroup_tree_search(qgroup_lookup, child_id); if (!child) { - error("cannot find the qgroup %llu/%llu", + error("cannot find the qgroup %u/%llu", btrfs_qgroup_level(child_id), btrfs_qgroup_subvid(child_id)); return -ENOENT; @@ -740,7 +740,7 @@ static int update_qgroup_relation(struct qgroup_lookup *qgroup_lookup, parent = qgroup_tree_search(qgroup_lookup, parent_id); if (!parent) { - error("cannot find the qgroup %llu/%llu", + error("cannot find the qgroup %u/%llu", btrfs_qgroup_level(parent_id), btrfs_qgroup_subvid(parent_id)); return -ENOENT; @@ -1001,7 +1001,7 @@ static void __update_columns_max_len(struct btrfs_qgroup *bq, switch (column) { case BTRFS_QGROUP_QGROUPID: - sprintf(tmp, "%llu/%llu", + sprintf(tmp, "%u/%llu", btrfs_qgroup_level(bq->qgroupid), btrfs_qgroup_subvid(bq->qgroupid)); len = strlen(tmp); @@ -1033,7 +1033,7 @@ static void __update_columns_max_len(struct btrfs_qgroup *bq, case BTRFS_QGROUP_PARENT: len = 0; list_for_each_entry(list, &bq->qgroups, next_qgroup) { - len += sprintf(tmp, "%llu/%llu", + len += sprintf(tmp, "%u/%llu", btrfs_qgroup_level(list->qgroup->qgroupid), btrfs_qgroup_subvid(list->qgroup->qgroupid)); if (!list_is_last(&list->next_qgroup, &bq->qgroups)) @@ -1045,7 +1045,7 @@ static void __update_columns_max_len(struct btrfs_qgroup *bq, case BTRFS_QGROUP_CHILD: len = 0; list_for_each_entry(list, &bq->members, next_member) { - len += sprintf(tmp, "%llu/%llu", + len += sprintf(tmp, "%u/%llu", btrfs_qgroup_level(list->member->qgroupid), btrfs_qgroup_subvid(list->member->qgroupid)); if (!list_is_last(&list->next_member, &bq->members)) diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 4ade901a..61eaab55 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -1071,7 +1071,7 @@ enum btrfs_raid_types { #define BTRFS_QGROUP_LEVEL_SHIFT 48 -static inline u64 btrfs_qgroup_level(u64 qgroupid) +static inline __u16 btrfs_qgroup_level(u64 qgroupid) { return qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT; } diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c index 2cf1b283..e08c72df 100644 --- a/kernel-shared/print-tree.c +++ b/kernel-shared/print-tree.c @@ -706,7 +706,7 @@ void print_objectid(FILE *stream, u64 objectid, u8 type) fprintf(stream, "%llu", (unsigned long long)objectid); return; case BTRFS_QGROUP_RELATION_KEY: - fprintf(stream, "%llu/%llu", btrfs_qgroup_level(objectid), + fprintf(stream, "%u/%llu", btrfs_qgroup_level(objectid), btrfs_qgroup_subvid(objectid)); return; case BTRFS_UUID_KEY_SUBVOL: @@ -815,7 +815,7 @@ void btrfs_print_key(struct btrfs_disk_key *disk_key) case BTRFS_QGROUP_RELATION_KEY: case BTRFS_QGROUP_INFO_KEY: case BTRFS_QGROUP_LIMIT_KEY: - printf(" %llu/%llu)", btrfs_qgroup_level(offset), + printf(" %u/%llu)", btrfs_qgroup_level(offset), btrfs_qgroup_subvid(offset)); break; case BTRFS_UUID_KEY_SUBVOL: diff --git a/libbtrfs/ctree.h b/libbtrfs/ctree.h index 69903f67..ed774ffa 100644 --- a/libbtrfs/ctree.h +++ b/libbtrfs/ctree.h @@ -1104,7 +1104,7 @@ enum btrfs_raid_types { #define BTRFS_QGROUP_LEVEL_SHIFT 48 -static inline u64 btrfs_qgroup_level(u64 qgroupid) +static inline __u16 btrfs_qgroup_level(u64 qgroupid) { return qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT; } diff --git a/libbtrfsutil/btrfs_tree.h b/libbtrfsutil/btrfs_tree.h index 1df9efd6..5e1609e0 100644 --- a/libbtrfsutil/btrfs_tree.h +++ b/libbtrfsutil/btrfs_tree.h @@ -908,7 +908,7 @@ struct btrfs_free_space_info { #define BTRFS_FREE_SPACE_USING_BITMAPS (1ULL << 0) #define BTRFS_QGROUP_LEVEL_SHIFT 48 -static __inline__ 
__u64 btrfs_qgroup_level(__u64 qgroupid) +static __inline__ __u16 btrfs_qgroup_level(__u64 qgroupid) { return qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT; }
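
To illustrate why __u16 is wide enough (this demo is not part of the patch):
the level lives in the bits above BTRFS_QGROUP_LEVEL_SHIFT (48), so at most
16 bits remain for it, and the printf format shrinks from %llu to %u.

	/* hypothetical demo mirroring the shifted qgroupid layout used above */
	#include <stdio.h>
	#include <stdint.h>

	#define BTRFS_QGROUP_LEVEL_SHIFT 48

	int main(void)
	{
		uint64_t qgroupid = ((uint64_t)1 << BTRFS_QGROUP_LEVEL_SHIFT) | 257;
		uint16_t level = qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT;	/* what btrfs_qgroup_level() returns */
		unsigned long long subvid = qgroupid & (((uint64_t)1 << BTRFS_QGROUP_LEVEL_SHIFT) - 1);

		printf("%u/%llu\n", level, subvid);	/* prints "1/257" */
		return 0;
	}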
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id j24-20020ac84418000000b003a50248b89esm10401985qtn.26.2022.11.23.14.37.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:49 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 08/29] btrfs-progs: move NO_RESULT definition into replace.c Date: Wed, 23 Nov 2022 17:37:16 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_RESULT is defined to make sure we differentiate internal errors from actual error codes that come back from the device replace ioctl. Take this out of ioctl.c and move it into replace.c. Signed-off-by: Josef Bacik --- cmds/replace.c | 2 ++ ioctl.h | 1 - 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/cmds/replace.c b/cmds/replace.c index 28e70b04..bdb74dff 100644 --- a/cmds/replace.c +++ b/cmds/replace.c @@ -45,6 +45,8 @@ static int print_replace_status(int fd, const char *path, int once); static char *time2string(char *buf, size_t s, __u64 t); static char *progress2string(char *buf, size_t s, int progress_1000); +/* Used to separate internal errors from actual dev replace ioctl results. */ +#define BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_RESULT -1 static const char *replace_dev_result2string(__u64 result) { diff --git a/ioctl.h b/ioctl.h index 21aaedde..686c1035 100644 --- a/ioctl.h +++ b/ioctl.h @@ -192,7 +192,6 @@ BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_status_params) == 48); #define BTRFS_IOCTL_DEV_REPLACE_CMD_START 0 #define BTRFS_IOCTL_DEV_REPLACE_CMD_STATUS 1 #define BTRFS_IOCTL_DEV_REPLACE_CMD_CANCEL 2 -#define BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_RESULT -1 #define BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR 0 #define BTRFS_IOCTL_DEV_REPLACE_RESULT_NOT_STARTED 1 #define BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED 2 From patchwork Wed Nov 23 22:37:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054405 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44D0CC433FE for ; Wed, 23 Nov 2022 22:38:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229808AbiKWWiT (ORCPT ); Wed, 23 Nov 2022 17:38:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55542 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229850AbiKWWhy (ORCPT ); Wed, 23 Nov 2022 17:37:54 -0500 Received: from mail-qv1-xf30.google.com (mail-qv1-xf30.google.com [IPv6:2607:f8b0:4864:20::f30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CC8123FBA3 for ; Wed, 23 Nov 2022 14:37:51 -0800 (PST) Received: by mail-qv1-xf30.google.com with SMTP id j6so13093241qvn.12 for ; Wed, 23 Nov 2022 14:37:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=FNSk3avLcnx+83jLUU0zUvRpy79WTW+kzQEUfBiacrs=; b=8EFRKsv4ucTiqgNR9PeWGS2bfLp9SeOdkKAmkXq9vr9IG3IW9/fU99zhUA+qgsNyiV rZkSsX8OO58Ux4zM/CIseJrmQ8XiMdM1G8kpjDohwVnM2ctU2w9x+2fQFgg2P8wjYr8n 
8c7oHz5ZmQTS8/j7dRRUmpLjsjT+v9KzRkcv39BypDmB8q0FbIeVH3QN6tYWJoleMyJD kEQ17hVAONBef3BJfEKR0SsBRfBEtb9+f15yem/jDYt7skU4ehDZikEFwYntdwIJDspK /6Fl23Xsn1WC2LQXoBclNscsV/ZlAhanVNS5fruu1MBMDd0UOT+hT9N/PTKQt3LLNwnW nqLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FNSk3avLcnx+83jLUU0zUvRpy79WTW+kzQEUfBiacrs=; b=vpnl0SLdN32XFrEUK5FsVmszVUDCcyU9/SpKl5aTO745LzpCUl6ixOc96iOL6rmt97 iIJLnMXKafj/jji1m8NLuemiQ/0T+ixobxbmKscgPtfknw/8HTmtEsGDZYk5cDl72Hrp gkTgNRYgejRYDs2TR9LJ0HKNBGVJUF0D4UjToPmspw8ApMaRWBKYQIoQlESAnO5c4/SE vDpaUdfE1z9ZhZv/VH3YY23v9xZChwHF2yU/bLH1nV8bX6NDOAaYdWjE5ob2d7KLDA34 Em6pBd/YlQ8G+IMXjZVsAPJ55AKdZfof6qCyfMnEPewoo25/8gcMT5AjUm5QX4yDN4D7 EDcQ== X-Gm-Message-State: ANoB5pm/siPTP/+FJzhDeoKSiEi3/vkYLUYqAFGudShGA7/i+X9Rcy/i uNDxXuIvN2fqa5qFynvmz7BrdBkGAUfWeA== X-Google-Smtp-Source: AA0mqf4wwHQA/FQGDPTwr88u0J43suBPdoMnaTz3O1S1BLn7ZtzngyDHhlwntQnju6HmXi1kQvcgvA== X-Received: by 2002:ad4:43eb:0:b0:4c6:90f0:cbe6 with SMTP id f11-20020ad443eb000000b004c690f0cbe6mr9500349qvu.116.1669243070675; Wed, 23 Nov 2022 14:37:50 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id w16-20020a05620a425000b006fc2f74ad12sm1666466qko.92.2022.11.23.14.37.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:50 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 09/29] btrfs-progs: rename BLOCK_* to IMAGE_BLOCK_* for metadump Date: Wed, 23 Nov 2022 17:37:17 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org When we sync the kernel we're going to pull in the fs.h dependency, which defines BLOCK_SIZE/BLOCK_MASK. Avoid this conflict by renaming the image definitions with the IMAGE_ prefix. 
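As a rough, self-contained sketch of the geometry involved (not part of the diff below; only SZ_1K, IMAGE_BLOCK_SIZE and IMAGE_BLOCK_MASK are taken from the patch, the image_block_align() helper is invented here for illustration), the rename keeps the same 1 KiB cluster-block layout and the usual power-of-two mask derivation, just under names that cannot collide with the kernel's BLOCK_SIZE/BLOCK_MASK from fs.h:

	/* Illustrative only -- mirrors the metadump padding arithmetic. */
	#include <stdio.h>

	#define SZ_1K			1024
	#define IMAGE_BLOCK_SIZE	SZ_1K
	#define IMAGE_BLOCK_MASK	(IMAGE_BLOCK_SIZE - 1)

	/* Round a byte offset up to the next image block boundary. */
	static unsigned long long image_block_align(unsigned long long bytenr)
	{
		if (bytenr & IMAGE_BLOCK_MASK)
			bytenr += IMAGE_BLOCK_SIZE - (bytenr & IMAGE_BLOCK_MASK);
		return bytenr;
	}

	int main(void)
	{
		/* 1500 is not block aligned, so it rounds up to 2048. */
		printf("%llu\n", image_block_align(1500));
		return 0;
	}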
Signed-off-by: Josef Bacik --- image/main.c | 42 +++++++++++++++++++++--------------------- image/metadump.h | 6 +++--- 2 files changed, 24 insertions(+), 24 deletions(-) diff --git a/image/main.c b/image/main.c index b1a0714a..c7bbb05d 100644 --- a/image/main.c +++ b/image/main.c @@ -94,7 +94,7 @@ struct metadump_struct { union { struct meta_cluster cluster; - char meta_cluster_bytes[BLOCK_SIZE]; + char meta_cluster_bytes[IMAGE_BLOCK_SIZE]; }; pthread_t threads[MAX_WORKER_THREADS]; @@ -519,7 +519,7 @@ static int metadump_init(struct metadump_struct *md, struct btrfs_root *root, static int write_zero(FILE *out, size_t size) { - static char zero[BLOCK_SIZE]; + static char zero[IMAGE_BLOCK_SIZE]; return fwrite(zero, size, 1, out); } @@ -563,14 +563,14 @@ static int write_buffers(struct metadump_struct *md, u64 *next) } header->nritems = cpu_to_le32(nritems); - ret = fwrite(&md->cluster, BLOCK_SIZE, 1, md->out); + ret = fwrite(&md->cluster, IMAGE_BLOCK_SIZE, 1, md->out); if (ret != 1) { error("unable to write out cluster: %m"); return -errno; } /* write buffers */ - bytenr += le64_to_cpu(header->bytenr) + BLOCK_SIZE; + bytenr += le64_to_cpu(header->bytenr) + IMAGE_BLOCK_SIZE; while (!list_empty(&md->ordered)) { async = list_entry(md->ordered.next, struct async_work, ordered); @@ -591,8 +591,8 @@ static int write_buffers(struct metadump_struct *md, u64 *next) } /* zero unused space in the last block */ - if (!err && bytenr & BLOCK_MASK) { - size_t size = BLOCK_SIZE - (bytenr & BLOCK_MASK); + if (!err && bytenr & IMAGE_BLOCK_MASK) { + size_t size = IMAGE_BLOCK_SIZE - (bytenr & IMAGE_BLOCK_MASK); bytenr += size; ret = write_zero(md->out, size); @@ -1613,7 +1613,7 @@ static void mdrestore_destroy(struct mdrestore_struct *mdres, int num_threads) static int detect_version(FILE *in) { struct meta_cluster *cluster; - u8 buf[BLOCK_SIZE]; + u8 buf[IMAGE_BLOCK_SIZE]; bool found = false; int i; int ret; @@ -1622,7 +1622,7 @@ static int detect_version(FILE *in) error("seek failed: %m"); return -errno; } - ret = fread(buf, BLOCK_SIZE, 1, in); + ret = fread(buf, IMAGE_BLOCK_SIZE, 1, in); if (!ret) { error("failed to read header"); return -EIO; @@ -1757,7 +1757,7 @@ static int add_cluster(struct meta_cluster *cluster, mdres->compress_method = header->compress; pthread_mutex_unlock(&mdres->mutex); - bytenr = le64_to_cpu(header->bytenr) + BLOCK_SIZE; + bytenr = le64_to_cpu(header->bytenr) + IMAGE_BLOCK_SIZE; nritems = le32_to_cpu(header->nritems); for (i = 0; i < nritems; i++) { item = &cluster->items[i]; @@ -1799,9 +1799,9 @@ static int add_cluster(struct meta_cluster *cluster, pthread_cond_signal(&mdres->cond); pthread_mutex_unlock(&mdres->mutex); } - if (bytenr & BLOCK_MASK) { - char buffer[BLOCK_MASK]; - size_t size = BLOCK_SIZE - (bytenr & BLOCK_MASK); + if (bytenr & IMAGE_BLOCK_MASK) { + char buffer[IMAGE_BLOCK_MASK]; + size_t size = IMAGE_BLOCK_SIZE - (bytenr & IMAGE_BLOCK_MASK); bytenr += size; ret = fread(buffer, size, 1, mdres->in); @@ -2011,7 +2011,7 @@ static int search_for_chunk_blocks(struct mdrestore_struct *mdres) u8 *buffer, *tmp = NULL; int ret = 0; - cluster = malloc(BLOCK_SIZE); + cluster = malloc(IMAGE_BLOCK_SIZE); if (!cluster) { error_msg(ERROR_MSG_MEMORY, NULL); return -ENOMEM; @@ -2043,7 +2043,7 @@ static int search_for_chunk_blocks(struct mdrestore_struct *mdres) goto out; } - ret = fread(cluster, BLOCK_SIZE, 1, mdres->in); + ret = fread(cluster, IMAGE_BLOCK_SIZE, 1, mdres->in); if (ret == 0) { if (feof(mdres->in)) goto out; @@ -2071,7 +2071,7 @@ static int 
search_for_chunk_blocks(struct mdrestore_struct *mdres) if (current_cluster > mdres->sys_chunk_end) goto out; - bytenr += BLOCK_SIZE; + bytenr += IMAGE_BLOCK_SIZE; nritems = le32_to_cpu(header->nritems); /* Search items for tree blocks in sys chunks */ @@ -2139,8 +2139,8 @@ static int search_for_chunk_blocks(struct mdrestore_struct *mdres) } bytenr += bufsize; } - if (bytenr & BLOCK_MASK) - bytenr += BLOCK_SIZE - (bytenr & BLOCK_MASK); + if (bytenr & IMAGE_BLOCK_MASK) + bytenr += IMAGE_BLOCK_SIZE - (bytenr & IMAGE_BLOCK_MASK); current_cluster = bytenr; } @@ -2251,7 +2251,7 @@ static int build_chunk_tree(struct mdrestore_struct *mdres, if (mdres->in == stdin) return 0; - ret = fread(cluster, BLOCK_SIZE, 1, mdres->in); + ret = fread(cluster, IMAGE_BLOCK_SIZE, 1, mdres->in); if (ret <= 0) { error("unable to read cluster: %m"); return -EIO; @@ -2265,7 +2265,7 @@ static int build_chunk_tree(struct mdrestore_struct *mdres, return -EIO; } - bytenr += BLOCK_SIZE; + bytenr += IMAGE_BLOCK_SIZE; mdres->compress_method = header->compress; nritems = le32_to_cpu(header->nritems); for (i = 0; i < nritems; i++) { @@ -2807,7 +2807,7 @@ static int restore_metadump(const char *input, FILE *out, int old_restore, } } - cluster = malloc(BLOCK_SIZE); + cluster = malloc(IMAGE_BLOCK_SIZE); if (!cluster) { error_msg(ERROR_MSG_MEMORY, NULL); ret = -ENOMEM; @@ -2837,7 +2837,7 @@ static int restore_metadump(const char *input, FILE *out, int old_restore, } while (!mdrestore.error) { - ret = fread(cluster, BLOCK_SIZE, 1, in); + ret = fread(cluster, IMAGE_BLOCK_SIZE, 1, in); if (!ret) break; diff --git a/image/metadump.h b/image/metadump.h index bcffbd47..1beab658 100644 --- a/image/metadump.h +++ b/image/metadump.h @@ -22,10 +22,10 @@ #include "kernel-lib/list.h" #include "kernel-shared/ctree.h" -#define BLOCK_SIZE SZ_1K -#define BLOCK_MASK (BLOCK_SIZE - 1) +#define IMAGE_BLOCK_SIZE SZ_1K +#define IMAGE_BLOCK_MASK (IMAGE_BLOCK_SIZE - 1) -#define ITEMS_PER_CLUSTER ((BLOCK_SIZE - sizeof(struct meta_cluster)) / \ +#define ITEMS_PER_CLUSTER ((IMAGE_BLOCK_SIZE - sizeof(struct meta_cluster)) / \ sizeof(struct meta_cluster_item)) #define COMPRESS_NONE 0 From patchwork Wed Nov 23 22:37:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054406 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C8DFC4332F for ; Wed, 23 Nov 2022 22:38:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229816AbiKWWiV (ORCPT ); Wed, 23 Nov 2022 17:38:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53648 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229750AbiKWWhz (ORCPT ); Wed, 23 Nov 2022 17:37:55 -0500 Received: from mail-qv1-xf35.google.com (mail-qv1-xf35.google.com [IPv6:2607:f8b0:4864:20::f35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8E24D42F46 for ; Wed, 23 Nov 2022 14:37:53 -0800 (PST) Received: by mail-qv1-xf35.google.com with SMTP id d18so9595415qvs.6 for ; Wed, 23 Nov 2022 14:37:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; 
bh=3N1E6gVteJkHBweW43ajyMRy92VMEYEPt/eL23IpKcM=; b=XurG+mRlrQTYbBUkRc8ASiDsyAOF2IeArbO2DFRSUQ3lrwUhYMFt15CW1Mi0rvIyN9 VIo/J2dQgxY+MmFWU33DBaehDn0X0Duog5FPyGXnUbbOcsBKhdPr42f40Qlv6q+rtO0h 86/LvhGoy5ucsllv+F6gAJbEmKqPXJl3/1v+I3Px0sylRrF2FwHb4hzpckxiA3+io1iF 6gqYFfYqb8SzDS0ghwr28MQt+FdI+OJ+KsKFf/vqq9HAe1HDJgB5WEJOPIH8XiJpeCax VfBLim+Hggpl1a6g1o62HSRiD2VCudNjqLQELxip6baJGXnHcG1XcMYEo+RnyMRwvemE cb+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3N1E6gVteJkHBweW43ajyMRy92VMEYEPt/eL23IpKcM=; b=GU+87q71O0G0WcO2tJNf5rDTJ2uvsllyK/CQPsXDXwVF0eXkoqdlfpk6dURuyX6DrV Cd+9P44GftAi0hOfknzoqMvdqffujy6iUMyK+INLLOjLo8O0DQx/PPTW2R+gKvWpSEiv aYdvywjAL8h3WiURykFdk/hhmeA81Z39ERJTBS2QGKSDFcWRkajZLo2oD4/55huIFNgl QMK5765PMk6+IkaE6QVw4oZO+0hynSmaaot3kDcOOzibWJ/6bjiYVFnbv8wXFVmSoXdh hZZEMcQAveSH06BlHYtCUTn4FK6tvupTOaK1AM2ArpQVuJnGGLl9UPjtSERguBn4fpOC JXNQ== X-Gm-Message-State: ANoB5pl9KPix/cXhKVzIgF3JLfdH4GNXqNyqZKPlj2KWfqE7wAA3GYfL Vv8hFqy7cqD3MPw7G1V/IYQl74zXkPCM3A== X-Google-Smtp-Source: AA0mqf6bsmsdPJ+UJpWt6TQHMwtd9d1YO75983gRLyL6vlObf/9ZLQuwfxsjAEVqAy18l1oWgAF9uQ== X-Received: by 2002:a0c:ff28:0:b0:4bb:798d:879c with SMTP id x8-20020a0cff28000000b004bb798d879cmr10851843qvt.7.1669243072196; Wed, 23 Nov 2022 14:37:52 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id fy11-20020a05622a5a0b00b003a4f435e381sm10585850qtb.18.2022.11.23.14.37.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:51 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 10/29] btrfs-progs: rename btrfs_item_end to btrfs_item_data_end Date: Wed, 23 Nov 2022 17:37:18 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This matches what we did in the kernel, btrfs_item_data_end is more inline with what the helper does, which is give us the offset of the end of the data portion of the item, not the offset of the end of the item itself. 
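To make the distinction concrete, a minimal standalone sketch of the arithmetic the renamed helper performs (not btrfs-progs code; struct demo_item and demo_item_data_end() are invented stand-ins for the real extent-buffer accessors): the end of an item's data region is its data offset plus its data size, which is not the same thing as the end of the item header itself.

	#include <stdio.h>

	/* Toy stand-in for the (offset, size) pair kept in a leaf item header. */
	struct demo_item {
		unsigned int offset;	/* where the item's data starts in the leaf */
		unsigned int size;	/* how many bytes of data the item holds */
	};

	/* Same computation as btrfs_item_data_end(): offset + size. */
	static unsigned int demo_item_data_end(const struct demo_item *item)
	{
		return item->offset + item->size;
	}

	int main(void)
	{
		struct demo_item item = { .offset = 3900, .size = 116 };

		/* Prints "data end: 4016". */
		printf("data end: %u\n", demo_item_data_end(&item));
		return 0;
	}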
Signed-off-by: Josef Bacik --- check/main.c | 12 ++++++------ kernel-shared/ctree.c | 12 ++++++------ kernel-shared/ctree.h | 2 +- 3 files changed, 13 insertions(+), 13 deletions(-) diff --git a/check/main.c b/check/main.c index 25b13ce1..4c8e6bdf 100644 --- a/check/main.c +++ b/check/main.c @@ -4393,9 +4393,9 @@ again: for (i = 0; i < btrfs_header_nritems(buf); i++) { unsigned int shift = 0, offset; - if (i == 0 && btrfs_item_end(buf, i) != + if (i == 0 && btrfs_item_data_end(buf, i) != BTRFS_LEAF_DATA_SIZE(gfs_info)) { - if (btrfs_item_end(buf, i) > + if (btrfs_item_data_end(buf, i) > BTRFS_LEAF_DATA_SIZE(gfs_info)) { ret = delete_bogus_item(root, path, buf, i); if (!ret) @@ -4406,10 +4406,10 @@ again: break; } shift = BTRFS_LEAF_DATA_SIZE(gfs_info) - - btrfs_item_end(buf, i); - } else if (i > 0 && btrfs_item_end(buf, i) != + btrfs_item_data_end(buf, i); + } else if (i > 0 && btrfs_item_data_end(buf, i) != btrfs_item_offset(buf, i - 1)) { - if (btrfs_item_end(buf, i) > + if (btrfs_item_data_end(buf, i) > btrfs_item_offset(buf, i - 1)) { ret = delete_bogus_item(root, path, buf, i); if (!ret) @@ -4419,7 +4419,7 @@ again: break; } shift = btrfs_item_offset(buf, i - 1) - - btrfs_item_end(buf, i); + btrfs_item_data_end(buf, i); } if (!shift) continue; diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c index 08c494af..d6ff0008 100644 --- a/kernel-shared/ctree.c +++ b/kernel-shared/ctree.c @@ -1938,7 +1938,7 @@ static int leaf_space_used(struct extent_buffer *l, int start, int nr) if (!nr) return 0; - data_len = btrfs_item_end(l, start); + data_len = btrfs_item_data_end(l, start); data_len = data_len - btrfs_item_offset(l, end); data_len += sizeof(struct btrfs_item) * nr; WARN_ON(data_len < 0); @@ -2066,7 +2066,7 @@ static int push_leaf_right(struct btrfs_trans_handle *trans, struct btrfs_root /* push left to right */ right_nritems = btrfs_header_nritems(right); - push_space = btrfs_item_end(left, left_nritems - push_items); + push_space = btrfs_item_data_end(left, left_nritems - push_items); push_space -= leaf_data_end(left); /* make room in the right data area */ @@ -2301,7 +2301,7 @@ static noinline int copy_for_split(struct btrfs_trans_handle *trans, nritems = nritems - mid; btrfs_set_header_nritems(right, nritems); - data_copy_size = btrfs_item_end(l, mid) - leaf_data_end(l); + data_copy_size = btrfs_item_data_end(l, mid) - leaf_data_end(l); copy_extent_buffer(right, l, btrfs_leaf_data(right), btrfs_item_nr_offset(l, mid), @@ -2313,7 +2313,7 @@ static noinline int copy_for_split(struct btrfs_trans_handle *trans, btrfs_leaf_data(l) + leaf_data_end(l), data_copy_size); rt_data_off = BTRFS_LEAF_DATA_SIZE(root->fs_info) - - btrfs_item_end(l, mid); + btrfs_item_data_end(l, mid); for (i = 0; i < nritems; i++) { u32 ioff = btrfs_item_offset(right, i); @@ -2734,7 +2734,7 @@ int btrfs_extend_item(struct btrfs_root *root, struct btrfs_path *path, BUG(); } slot = path->slots[0]; - old_data = btrfs_item_end(leaf, slot); + old_data = btrfs_item_data_end(leaf, slot); BUG_ON(slot < 0); if (slot >= nritems) { @@ -2823,7 +2823,7 @@ int btrfs_insert_empty_items(struct btrfs_trans_handle *trans, BUG_ON(slot < 0); if (slot < nritems) { - unsigned int old_data = btrfs_item_end(leaf, slot); + unsigned int old_data = btrfs_item_data_end(leaf, slot); if (old_data < data_end) { btrfs_print_leaf(leaf, BTRFS_PRINT_TREE_DEFAULT); diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 61eaab55..85ecc16b 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -2022,7 +2022,7 @@ static inline 
void btrfs_set_item_##member(struct extent_buffer *eb, \ BTRFS_ITEM_SETGET_FUNCS(size) BTRFS_ITEM_SETGET_FUNCS(offset) -static inline u32 btrfs_item_end(struct extent_buffer *eb, int nr) +static inline u32 btrfs_item_data_end(struct extent_buffer *eb, int nr) { return btrfs_item_offset(eb, nr) + btrfs_item_size(eb, nr); } From patchwork Wed Nov 23 22:37:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054412 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D572C47088 for ; Wed, 23 Nov 2022 22:38:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229821AbiKWWi1 (ORCPT ); Wed, 23 Nov 2022 17:38:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229775AbiKWWh7 (ORCPT ); Wed, 23 Nov 2022 17:37:59 -0500 Received: from mail-qv1-xf31.google.com (mail-qv1-xf31.google.com [IPv6:2607:f8b0:4864:20::f31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A00EDFD5 for ; Wed, 23 Nov 2022 14:37:55 -0800 (PST) Received: by mail-qv1-xf31.google.com with SMTP id e15so13113857qvo.4 for ; Wed, 23 Nov 2022 14:37:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=jWGuBS7VS7QGEDamENLBIghWRgR+O3GF/p7AU0D4DIU=; b=wlkZ5v+6qcraAnhR6OglCdNpqlGUXH9Ql5FUTG1r2oA+YMaJBYPSV+IfXuKqTlLLLy cD/ZnsJqKh28oEEqxAc0vndDHgNmMm4groaDeP9/KTLkze3sGmDyL2kWmGt4n4tOjdys XbBbbosVgvGEhPgeyyVIxq4ot0br+7a5vIUuM7q7Z/2Jxu9pRmVYK0qJmdXRrymhhLIE nIXHHXu/3v8TTIkGNvWabd78qC++IvpIecpbwhmQOeuA0pdyLyIHtfs9YO6K65K4wlbr m65R3SJEg2fz5Kg9qgMlIZyKAZDSYZUfGMXagEHS2yljEnPCTwTGfnwbVhRaP8sWObg8 fJiw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=jWGuBS7VS7QGEDamENLBIghWRgR+O3GF/p7AU0D4DIU=; b=Z0EBk+35TJvcL1jSh6LrkxmFI8SoceUj354aQZAp8y3Fstd8iFbzIFv/eEHgKl2IYy HJSp2QdMrAsax1pW04T0bBUpUj1ItNiy4UQqRQIV0brsKwg114voRUXk+lvvVYF0dJ9/ K/nJOdnfledA1nkbIvVJOGp7yEQt6c43z7IC9932dvyvn05wa15ltdR0cSRGOQQxuDXt +9Q3/q4Z+4R+5DlaMpha99KP8HFmpfKZdPifKnjKgQkIp2/sJdDBDUa0tzpG+J+i/EOS tr+znh1o59Jj8nX+COYjfbI3vCs+917NiINHs/JQ4Au6C7cIkznRhlJ15F9pCtIA/c4a BxGg== X-Gm-Message-State: ANoB5pmuaQTkj8EPHnJBpEbSFFWCb+QZYvPl2LmqH3chWPcZekizycPt mo/Jh/0c9XcXW8fRXCiE69FuWB5Z9+Z6QQ== X-Google-Smtp-Source: AA0mqf7RiGmpvVLt+paLrIkOhlXr1FYxmTD7fqrlhwuJcerYAMaqorlXVkwD1HEtEnYkhXWSY396kw== X-Received: by 2002:a05:6214:3b0f:b0:4c6:57f1:3507 with SMTP id nm15-20020a0562143b0f00b004c657f13507mr28541469qvb.95.1669243073664; Wed, 23 Nov 2022 14:37:53 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. 
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id p16-20020a05620a057000b006fb8239db65sm12112107qkp.43.2022.11.23.14.37.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:53 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 11/29] btrfs-progs: copy ioctl.h into libbtrfs Date: Wed, 23 Nov 2022 17:37:19 -0500 Message-Id: <32f587998b6525902e7febb4ae3894946b9209a4.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We're going to sync btrfs.h into btrfs-progs from the kernel, however libbtrfs still needs ioctl.h. To deal with this copy ioctl.h into libbtrfs, and update that code to use the local copy and update the libbtrfs headers list to use this copy. Signed-off-by: Josef Bacik --- Makefile | 2 +- libbtrfs/ctree.h | 2 +- libbtrfs/ioctl.h | 1073 +++++++++++++++++++++++++++++++++++++++++ libbtrfs/send-utils.c | 2 +- 4 files changed, 1076 insertions(+), 3 deletions(-) create mode 100644 libbtrfs/ioctl.h diff --git a/Makefile b/Makefile index aae7d66a..f3a7ce95 100644 --- a/Makefile +++ b/Makefile @@ -221,7 +221,7 @@ libbtrfs_objects = \ libbtrfs_headers = libbtrfs/send-stream.h libbtrfs/send-utils.h libbtrfs/send.h kernel-lib/rbtree.h \ kernel-lib/list.h kernel-lib/rbtree_types.h kerncompat.h \ - ioctl.h libbtrfs/ctree.h version.h + libbtrfs/ioctl.h libbtrfs/ctree.h version.h libbtrfsutil_major := $(shell sed -rn 's/^\#define BTRFS_UTIL_VERSION_MAJOR ([0-9])+$$/\1/p' libbtrfsutil/btrfsutil.h) libbtrfsutil_minor := $(shell sed -rn 's/^\#define BTRFS_UTIL_VERSION_MINOR ([0-9])+$$/\1/p' libbtrfsutil/btrfsutil.h) libbtrfsutil_patch := $(shell sed -rn 's/^\#define BTRFS_UTIL_VERSION_PATCH ([0-9])+$$/\1/p' libbtrfsutil/btrfsutil.h) diff --git a/libbtrfs/ctree.h b/libbtrfs/ctree.h index ed774ffa..5ae1a07d 100644 --- a/libbtrfs/ctree.h +++ b/libbtrfs/ctree.h @@ -25,7 +25,7 @@ #include "kernel-lib/list.h" #include "kernel-lib/rbtree.h" #include "kerncompat.h" -#include "ioctl.h" +#include "libbtrfs/ioctl.h" #else #include #include diff --git a/libbtrfs/ioctl.h b/libbtrfs/ioctl.h new file mode 100644 index 00000000..686c1035 --- /dev/null +++ b/libbtrfs/ioctl.h @@ -0,0 +1,1073 @@ +/* + * Copyright (C) 2007 Oracle. All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public + * License v2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write to the + * Free Software Foundation, Inc., 59 Temple Place - Suite 330, + * Boston, MA 021110-1307, USA. 
+ */ + +#ifndef __BTRFS_IOCTL_H__ +#define __BTRFS_IOCTL_H__ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include + +#ifndef __user +#define __user +#endif + +/* We don't want to include entire kerncompat.h */ +#ifndef BUILD_ASSERT +#define BUILD_ASSERT(x) +#endif + +#define BTRFS_IOCTL_MAGIC 0x94 +#define BTRFS_VOL_NAME_MAX 255 + +/* this should be 4k */ +#define BTRFS_PATH_NAME_MAX 4087 +struct btrfs_ioctl_vol_args { + __s64 fd; + char name[BTRFS_PATH_NAME_MAX + 1]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args) == 4096); + +#define BTRFS_DEVICE_PATH_NAME_MAX 1024 + +/* + * Obsolete since 5.15, functionality removed in kernel 5.7: + * BTRFS_SUBVOL_CREATE_ASYNC (1ULL << 0) + */ +#define BTRFS_SUBVOL_RDONLY (1ULL << 1) +#define BTRFS_SUBVOL_QGROUP_INHERIT (1ULL << 2) +#define BTRFS_DEVICE_SPEC_BY_ID (1ULL << 3) +#define BTRFS_SUBVOL_SPEC_BY_ID (1ULL << 4) + +#define BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED \ + (BTRFS_SUBVOL_RDONLY | \ + BTRFS_SUBVOL_QGROUP_INHERIT | \ + BTRFS_DEVICE_SPEC_BY_ID | \ + BTRFS_SUBVOL_SPEC_BY_ID) + +#define BTRFS_FSID_SIZE 16 +#define BTRFS_UUID_SIZE 16 + +#define BTRFS_QGROUP_INHERIT_SET_LIMITS (1ULL << 0) + +struct btrfs_qgroup_limit { + __u64 flags; + __u64 max_rfer; + __u64 max_excl; + __u64 rsv_rfer; + __u64 rsv_excl; +}; +BUILD_ASSERT(sizeof(struct btrfs_qgroup_limit) == 40); + +struct btrfs_qgroup_inherit { + __u64 flags; + __u64 num_qgroups; + __u64 num_ref_copies; + __u64 num_excl_copies; + struct btrfs_qgroup_limit lim; + __u64 qgroups[0]; +}; +BUILD_ASSERT(sizeof(struct btrfs_qgroup_inherit) == 72); + +struct btrfs_ioctl_qgroup_limit_args { + __u64 qgroupid; + struct btrfs_qgroup_limit lim; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_qgroup_limit_args) == 48); + +#define BTRFS_SUBVOL_NAME_MAX 4039 +struct btrfs_ioctl_vol_args_v2 { + __s64 fd; + __u64 transid; + __u64 flags; + union { + struct { + __u64 size; + struct btrfs_qgroup_inherit __user *qgroup_inherit; + }; + __u64 unused[4]; + }; + union { + char name[BTRFS_SUBVOL_NAME_MAX + 1]; + __u64 devid; + __u64 subvolid; + }; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args_v2) == 4096); + +/* + * structure to report errors and progress to userspace, either as a + * result of a finished scrub, a canceled scrub or a progress inquiry + */ +struct btrfs_scrub_progress { + __u64 data_extents_scrubbed; /* # of data extents scrubbed */ + __u64 tree_extents_scrubbed; /* # of tree extents scrubbed */ + __u64 data_bytes_scrubbed; /* # of data bytes scrubbed */ + __u64 tree_bytes_scrubbed; /* # of tree bytes scrubbed */ + __u64 read_errors; /* # of read errors encountered (EIO) */ + __u64 csum_errors; /* # of failed csum checks */ + __u64 verify_errors; /* # of occurrences, where the metadata + * of a tree block did not match the + * expected values, like generation or + * logical */ + __u64 no_csum; /* # of 4k data block for which no csum + * is present, probably the result of + * data written with nodatasum */ + __u64 csum_discards; /* # of csum for which no data was found + * in the extent tree. */ + __u64 super_errors; /* # of bad super blocks encountered */ + __u64 malloc_errors; /* # of internal kmalloc errors. These + * will likely cause an incomplete + * scrub */ + __u64 uncorrectable_errors; /* # of errors where either no intact + * copy was found or the writeback + * failed */ + __u64 corrected_errors; /* # of errors corrected */ + __u64 last_physical; /* last physical address scrubbed. 
In + * case a scrub was aborted, this can + * be used to restart the scrub */ + __u64 unverified_errors; /* # of occurrences where a read for a + * full (64k) bio failed, but the re- + * check succeeded for each 4k piece. + * Intermittent error. */ +}; + +#define BTRFS_SCRUB_READONLY 1 +struct btrfs_ioctl_scrub_args { + __u64 devid; /* in */ + __u64 start; /* in */ + __u64 end; /* in */ + __u64 flags; /* in */ + struct btrfs_scrub_progress progress; /* out */ + /* pad to 1k */ + __u64 unused[(1024-32-sizeof(struct btrfs_scrub_progress))/8]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_scrub_args) == 1024); + +#define BTRFS_IOCTL_DEV_REPLACE_CONT_READING_FROM_SRCDEV_MODE_ALWAYS 0 +#define BTRFS_IOCTL_DEV_REPLACE_CONT_READING_FROM_SRCDEV_MODE_AVOID 1 +struct btrfs_ioctl_dev_replace_start_params { + __u64 srcdevid; /* in, if 0, use srcdev_name instead */ + __u64 cont_reading_from_srcdev_mode; /* in, see #define + * above */ + __u8 srcdev_name[BTRFS_DEVICE_PATH_NAME_MAX + 1]; /* in */ + __u8 tgtdev_name[BTRFS_DEVICE_PATH_NAME_MAX + 1]; /* in */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_start_params) == 2072); + +#define BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED 0 +#define BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED 1 +#define BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED 2 +#define BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED 3 +#define BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED 4 +struct btrfs_ioctl_dev_replace_status_params { + __u64 replace_state; /* out, see #define above */ + __u64 progress_1000; /* out, 0 <= x <= 1000 */ + __u64 time_started; /* out, seconds since 1-Jan-1970 */ + __u64 time_stopped; /* out, seconds since 1-Jan-1970 */ + __u64 num_write_errors; /* out */ + __u64 num_uncorrectable_read_errors; /* out */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_status_params) == 48); + +#define BTRFS_IOCTL_DEV_REPLACE_CMD_START 0 +#define BTRFS_IOCTL_DEV_REPLACE_CMD_STATUS 1 +#define BTRFS_IOCTL_DEV_REPLACE_CMD_CANCEL 2 +#define BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR 0 +#define BTRFS_IOCTL_DEV_REPLACE_RESULT_NOT_STARTED 1 +#define BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED 2 +#define BTRFS_IOCTL_DEV_REPLACE_RESULT_SCRUB_INPROGRESS 3 +struct btrfs_ioctl_dev_replace_args { + __u64 cmd; /* in */ + __u64 result; /* out */ + + union { + struct btrfs_ioctl_dev_replace_start_params start; + struct btrfs_ioctl_dev_replace_status_params status; + }; /* in/out */ + + __u64 spare[64]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_args) == 2600); + +struct btrfs_ioctl_dev_info_args { + __u64 devid; /* in/out */ + __u8 uuid[BTRFS_UUID_SIZE]; /* in/out */ + __u64 bytes_used; /* out */ + __u64 total_bytes; /* out */ + __u64 unused[379]; /* pad to 4k */ + __u8 path[BTRFS_DEVICE_PATH_NAME_MAX]; /* out */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_info_args) == 4096); + +struct btrfs_ioctl_fs_info_args { + __u64 max_id; /* out */ + __u64 num_devices; /* out */ + __u8 fsid[BTRFS_FSID_SIZE]; /* out */ + __u32 nodesize; /* out */ + __u32 sectorsize; /* out */ + __u32 clone_alignment; /* out */ + __u32 reserved32; + __u64 reserved[122]; /* pad to 1k */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_fs_info_args) == 1024); + +struct btrfs_ioctl_feature_flags { + __u64 compat_flags; + __u64 compat_ro_flags; + __u64 incompat_flags; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_feature_flags) == 24); + +/* balance control ioctl modes */ +#define BTRFS_BALANCE_CTL_PAUSE 1 +#define BTRFS_BALANCE_CTL_CANCEL 2 +#define BTRFS_BALANCE_CTL_RESUME 3 + +/* + * this is packed, because it should be 
exactly the same as its disk + * byte order counterpart (struct btrfs_disk_balance_args) + */ +struct btrfs_balance_args { + __u64 profiles; + + /* + * usage filter + * BTRFS_BALANCE_ARGS_USAGE with a single value means '0..N' + * BTRFS_BALANCE_ARGS_USAGE_RANGE - range syntax, min..max + */ + union { + __u64 usage; + struct { + __u32 usage_min; + __u32 usage_max; + }; + }; + + __u64 devid; + __u64 pstart; + __u64 pend; + __u64 vstart; + __u64 vend; + + __u64 target; + + __u64 flags; + + /* + * BTRFS_BALANCE_ARGS_LIMIT with value 'limit' + * BTRFS_BALANCE_ARGS_LIMIT_RANGE - the extend version can use minimum + * and maximum + */ + union { + __u64 limit; /* limit number of processed chunks */ + struct { + __u32 limit_min; + __u32 limit_max; + }; + }; + __u32 stripes_min; + __u32 stripes_max; + __u64 unused[6]; +} __attribute__ ((__packed__)); + +/* report balance progress to userspace */ +struct btrfs_balance_progress { + __u64 expected; /* estimated # of chunks that will be + * relocated to fulfil the request */ + __u64 considered; /* # of chunks we have considered so far */ + __u64 completed; /* # of chunks relocated so far */ +}; + +#define BTRFS_BALANCE_STATE_RUNNING (1ULL << 0) +#define BTRFS_BALANCE_STATE_PAUSE_REQ (1ULL << 1) +#define BTRFS_BALANCE_STATE_CANCEL_REQ (1ULL << 2) + +struct btrfs_ioctl_balance_args { + __u64 flags; /* in/out */ + __u64 state; /* out */ + + struct btrfs_balance_args data; /* in/out */ + struct btrfs_balance_args meta; /* in/out */ + struct btrfs_balance_args sys; /* in/out */ + + struct btrfs_balance_progress stat; /* out */ + + __u64 unused[72]; /* pad to 1k */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_balance_args) == 1024); + +#define BTRFS_INO_LOOKUP_PATH_MAX 4080 +struct btrfs_ioctl_ino_lookup_args { + __u64 treeid; + __u64 objectid; + char name[BTRFS_INO_LOOKUP_PATH_MAX]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_lookup_args) == 4096); + +#define BTRFS_INO_LOOKUP_USER_PATH_MAX (4080 - BTRFS_VOL_NAME_MAX - 1) +struct btrfs_ioctl_ino_lookup_user_args { + /* in, inode number containing the subvolume of 'subvolid' */ + __u64 dirid; + /* in */ + __u64 treeid; + /* out, name of the subvolume of 'treeid' */ + char name[BTRFS_VOL_NAME_MAX + 1]; + /* + * out, constructed path from the directory with which the ioctl is + * called to dirid + */ + char path[BTRFS_INO_LOOKUP_USER_PATH_MAX]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_lookup_user_args) == 4096); + +struct btrfs_ioctl_search_key { + /* which root are we searching. 
0 is the tree of tree roots */ + __u64 tree_id; + + /* keys returned will be >= min and <= max */ + __u64 min_objectid; + __u64 max_objectid; + + /* keys returned will be >= min and <= max */ + __u64 min_offset; + __u64 max_offset; + + /* max and min transids to search for */ + __u64 min_transid; + __u64 max_transid; + + /* keys returned will be >= min and <= max */ + __u32 min_type; + __u32 max_type; + + /* + * how many items did userland ask for, and how many are we + * returning + */ + __u32 nr_items; + + /* align to 64 bits */ + __u32 unused; + + /* some extra for later */ + __u64 unused1; + __u64 unused2; + __u64 unused3; + __u64 unused4; +}; + +struct btrfs_ioctl_search_header { + __u64 transid; + __u64 objectid; + __u64 offset; + __u32 type; + __u32 len; +} __attribute__((may_alias)); + +#define BTRFS_SEARCH_ARGS_BUFSIZE (4096 - sizeof(struct btrfs_ioctl_search_key)) +/* + * the buf is an array of search headers where + * each header is followed by the actual item + * the type field is expanded to 32 bits for alignment + */ +struct btrfs_ioctl_search_args { + struct btrfs_ioctl_search_key key; + char buf[BTRFS_SEARCH_ARGS_BUFSIZE]; +}; + +/* + * Extended version of TREE_SEARCH ioctl that can return more than 4k of bytes. + * The allocated size of the buffer is set in buf_size. + */ +struct btrfs_ioctl_search_args_v2 { + struct btrfs_ioctl_search_key key; /* in/out - search parameters */ + __u64 buf_size; /* in - size of buffer + * out - on EOVERFLOW: needed size + * to store item */ + __u64 buf[0]; /* out - found items */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_search_args_v2) == 112); + +/* With a @src_length of zero, the range from @src_offset->EOF is cloned! */ +struct btrfs_ioctl_clone_range_args { + __s64 src_fd; + __u64 src_offset, src_length; + __u64 dest_offset; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_clone_range_args) == 32); + +/* flags for the defrag range ioctl */ +#define BTRFS_DEFRAG_RANGE_COMPRESS 1 +#define BTRFS_DEFRAG_RANGE_START_IO 2 + +#define BTRFS_SAME_DATA_DIFFERS 1 +/* For extent-same ioctl */ +struct btrfs_ioctl_same_extent_info { + __s64 fd; /* in - destination file */ + __u64 logical_offset; /* in - start of extent in destination */ + __u64 bytes_deduped; /* out - total # of bytes we were able + * to dedupe from this file */ + /* status of this dedupe operation: + * 0 if dedup succeeds + * < 0 for error + * == BTRFS_SAME_DATA_DIFFERS if data differs + */ + __s32 status; /* out - see above description */ + __u32 reserved; +}; + +struct btrfs_ioctl_same_args { + __u64 logical_offset; /* in - start of extent in source */ + __u64 length; /* in - length of extent */ + __u16 dest_count; /* in - total elements in info array */ + __u16 reserved1; + __u32 reserved2; + struct btrfs_ioctl_same_extent_info info[0]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_same_args) == 24); + +struct btrfs_ioctl_defrag_range_args { + /* start of the defrag operation */ + __u64 start; + + /* number of bytes to defrag, use (u64)-1 to say all */ + __u64 len; + + /* + * flags for the operation, which can include turning + * on compression for this one defrag + */ + __u64 flags; + + /* + * any extent bigger than this will be considered + * already defragged. Use 0 to take the kernel default + * Use 1 to say every single extent must be rewritten + */ + __u32 extent_thresh; + + /* + * which compression method to use if turning on compression + * for this defrag operation. 
If unspecified, zlib will + * be used + */ + __u32 compress_type; + + /* spare for later */ + __u32 unused[4]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_defrag_range_args) == 48); + +struct btrfs_ioctl_space_info { + __u64 flags; + __u64 total_bytes; + __u64 used_bytes; +}; + +struct btrfs_ioctl_space_args { + __u64 space_slots; + __u64 total_spaces; + struct btrfs_ioctl_space_info spaces[0]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_space_args) == 16); + +struct btrfs_data_container { + __u32 bytes_left; /* out -- bytes not needed to deliver output */ + __u32 bytes_missing; /* out -- additional bytes needed for result */ + __u32 elem_cnt; /* out */ + __u32 elem_missed; /* out */ + __u64 val[0]; /* out */ +}; + +struct btrfs_ioctl_ino_path_args { + __u64 inum; /* in */ + __u64 size; /* in */ + __u64 reserved[4]; + /* struct btrfs_data_container *fspath; out */ + __u64 fspath; /* out */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_path_args) == 56); + +struct btrfs_ioctl_logical_ino_args { + __u64 logical; /* in */ + __u64 size; /* in */ + __u64 reserved[3]; + __u64 flags; /* in */ + /* struct btrfs_data_container *inodes; out */ + __u64 inodes; +}; + +/* + * Return every ref to the extent, not just those containing logical block. + * Requires logical == extent bytenr. + */ +#define BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET (1ULL << 0) + +enum btrfs_dev_stat_values { + /* disk I/O failure stats */ + BTRFS_DEV_STAT_WRITE_ERRS, /* EIO or EREMOTEIO from lower layers */ + BTRFS_DEV_STAT_READ_ERRS, /* EIO or EREMOTEIO from lower layers */ + BTRFS_DEV_STAT_FLUSH_ERRS, /* EIO or EREMOTEIO from lower layers */ + + /* stats for indirect indications for I/O failures */ + BTRFS_DEV_STAT_CORRUPTION_ERRS, /* checksum error, bytenr error or + * contents is illegal: this is an + * indication that the block was damaged + * during read or write, or written to + * wrong location or read from wrong + * location */ + BTRFS_DEV_STAT_GENERATION_ERRS, /* an indication that blocks have not + * been written */ + + BTRFS_DEV_STAT_VALUES_MAX +}; + +/* Reset statistics after reading; needs SYS_ADMIN capability */ +#define BTRFS_DEV_STATS_RESET (1ULL << 0) + +struct btrfs_ioctl_get_dev_stats { + __u64 devid; /* in */ + __u64 nr_items; /* in/out */ + __u64 flags; /* in/out */ + + /* out values: */ + __u64 values[BTRFS_DEV_STAT_VALUES_MAX]; + + __u64 unused[128 - 2 - BTRFS_DEV_STAT_VALUES_MAX]; /* pad to 1k + 8B */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_get_dev_stats) == 1032); + +/* BTRFS_IOC_SNAP_CREATE is no longer used by the btrfs command */ +#define BTRFS_QUOTA_CTL_ENABLE 1 +#define BTRFS_QUOTA_CTL_DISABLE 2 +/* 3 has formerly been reserved for BTRFS_QUOTA_CTL_RESCAN */ +struct btrfs_ioctl_quota_ctl_args { + __u64 cmd; + __u64 status; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_quota_ctl_args) == 16); + +struct btrfs_ioctl_quota_rescan_args { + __u64 flags; + __u64 progress; + __u64 reserved[6]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_quota_rescan_args) == 64); + +struct btrfs_ioctl_qgroup_assign_args { + __u64 assign; + __u64 src; + __u64 dst; +}; + +struct btrfs_ioctl_qgroup_create_args { + __u64 create; + __u64 qgroupid; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_qgroup_create_args) == 16); + +struct btrfs_ioctl_timespec { + __u64 sec; + __u32 nsec; +}; + +struct btrfs_ioctl_received_subvol_args { + char uuid[BTRFS_UUID_SIZE]; /* in */ + __u64 stransid; /* in */ + __u64 rtransid; /* out */ + struct btrfs_ioctl_timespec stime; /* in */ + struct btrfs_ioctl_timespec rtime; /* out */ + __u64 flags; /* 
in */ + __u64 reserved[16]; /* in */ +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_received_subvol_args) == 200); + +/* + * If we have a 32-bit userspace and 64-bit kernel, then the UAPI + * structures are incorrect, as the timespec structure from userspace + * is 4 bytes too small. We define these alternatives here for backward + * compatibility, the kernel understands both values. + */ + +/* + * Structure size is different on 32bit and 64bit, has some padding if the + * structure is embedded. Packing makes sure the size is same on both, but will + * be misaligned on 64bit. + * + * NOTE: do not use in your code, this is for testing only + */ +struct btrfs_ioctl_timespec_32 { + __u64 sec; + __u32 nsec; +} __attribute__ ((__packed__)); + +struct btrfs_ioctl_received_subvol_args_32 { + char uuid[BTRFS_UUID_SIZE]; /* in */ + __u64 stransid; /* in */ + __u64 rtransid; /* out */ + struct btrfs_ioctl_timespec_32 stime; /* in */ + struct btrfs_ioctl_timespec_32 rtime; /* out */ + __u64 flags; /* in */ + __u64 reserved[16]; /* in */ +} __attribute__ ((__packed__)); +BUILD_ASSERT(sizeof(struct btrfs_ioctl_received_subvol_args_32) == 192); + +#define BTRFS_IOC_SET_RECEIVED_SUBVOL_32_COMPAT_DEFINED 1 + +/* + * Caller doesn't want file data in the send stream, even if the + * search of clone sources doesn't find an extent. UPDATE_EXTENT + * commands will be sent instead of WRITE commands. + */ +#define BTRFS_SEND_FLAG_NO_FILE_DATA 0x1 + +/* + * Do not add the leading stream header. Used when multiple snapshots + * are sent back to back. + */ +#define BTRFS_SEND_FLAG_OMIT_STREAM_HEADER 0x2 + +/* + * Omit the command at the end of the stream that indicated the end + * of the stream. This option is used when multiple snapshots are + * sent back to back. + */ +#define BTRFS_SEND_FLAG_OMIT_END_CMD 0x4 + +/* + * Read the protocol version in the structure + */ +#define BTRFS_SEND_FLAG_VERSION 0x8 + +/* + * Send compressed data using the ENCODED_WRITE command instead of decompressing + * the data and sending it with the WRITE command. This requires protocol + * version >= 2. + */ +#define BTRFS_SEND_FLAG_COMPRESSED 0x10 + +#define BTRFS_SEND_FLAG_MASK \ + (BTRFS_SEND_FLAG_NO_FILE_DATA | \ + BTRFS_SEND_FLAG_OMIT_STREAM_HEADER | \ + BTRFS_SEND_FLAG_OMIT_END_CMD | \ + BTRFS_SEND_FLAG_VERSION | \ + BTRFS_SEND_FLAG_COMPRESSED) + +struct btrfs_ioctl_send_args { + __s64 send_fd; /* in */ + __u64 clone_sources_count; /* in */ + __u64 __user *clone_sources; /* in */ + __u64 parent_root; /* in */ + __u64 flags; /* in */ + __u32 version; /* in */ + __u8 reserved[28]; /* in */ +}; +/* + * Size of structure depends on pointer width, was not caught in the early + * days. Kernel handles pointer width differences transparently. + */ +BUILD_ASSERT(sizeof(__u64 *) == 8 + ? sizeof(struct btrfs_ioctl_send_args) == 72 + : (sizeof(void *) == 4 + ? sizeof(struct btrfs_ioctl_send_args) == 68 + : 0)); + +/* + * Different pointer width leads to structure size change. Kernel should accept + * both ioctl values (derived from the structures) for backward compatibility. + * Size of this structure is same on 32bit and 64bit though. 
+ * + * NOTE: do not use in your code, this is for testing only + */ +struct btrfs_ioctl_send_args_64 { + __s64 send_fd; /* in */ + __u64 clone_sources_count; /* in */ + union { + __u64 __user *clone_sources; /* in */ + __u64 __clone_sources_alignment; + }; + __u64 parent_root; /* in */ + __u64 flags; /* in */ + __u64 reserved[4]; /* in */ +} __attribute__((packed)); +BUILD_ASSERT(sizeof(struct btrfs_ioctl_send_args_64) == 72); + +#define BTRFS_IOC_SEND_64_COMPAT_DEFINED 1 + +/* + * Information about a fs tree root. + * + * All items are filled by the ioctl + */ +struct btrfs_ioctl_get_subvol_info_args { + /* Id of this subvolume */ + __u64 treeid; + + /* Name of this subvolume, used to get the real name at mount point */ + char name[BTRFS_VOL_NAME_MAX + 1]; + + /* + * Id of the subvolume which contains this subvolume. + * Zero for top-level subvolume or a deleted subvolume. + */ + __u64 parent_id; + + /* + * Inode number of the directory which contains this subvolume. + * Zero for top-level subvolume or a deleted subvolume + */ + __u64 dirid; + + /* Latest transaction id of this subvolume */ + __u64 generation; + + /* Flags of this subvolume */ + __u64 flags; + + /* UUID of this subvolume */ + __u8 uuid[BTRFS_UUID_SIZE]; + + /* + * UUID of the subvolume of which this subvolume is a snapshot. + * All zero for a non-snapshot subvolume. + */ + __u8 parent_uuid[BTRFS_UUID_SIZE]; + + /* + * UUID of the subvolume from which this subvolume was received. + * All zero for non-received subvolume. + */ + __u8 received_uuid[BTRFS_UUID_SIZE]; + + /* Transaction id indicating when change/create/send/receive happened */ + __u64 ctransid; + __u64 otransid; + __u64 stransid; + __u64 rtransid; + /* Time corresponding to c/o/s/rtransid */ + struct btrfs_ioctl_timespec ctime; + struct btrfs_ioctl_timespec otime; + struct btrfs_ioctl_timespec stime; + struct btrfs_ioctl_timespec rtime; + + /* Must be zero */ + __u64 reserved[8]; +}; + +#define BTRFS_MAX_ROOTREF_BUFFER_NUM 255 +struct btrfs_ioctl_get_subvol_rootref_args { + /* in/out, minimum id of rootref's treeid to be searched */ + __u64 min_treeid; + + /* out */ + struct { + __u64 treeid; + __u64 dirid; + } rootref[BTRFS_MAX_ROOTREF_BUFFER_NUM]; + + /* out, number of found items */ + __u8 num_items; + __u8 align[7]; +}; +BUILD_ASSERT(sizeof(struct btrfs_ioctl_get_subvol_rootref_args) == 4096); + +/* + * Data and metadata for an encoded read or write. + * + * Encoded I/O bypasses any encoding automatically done by the filesystem (e.g., + * compression). This can be used to read the compressed contents of a file or + * write pre-compressed data directly to a file. + * + * BTRFS_IOC_ENCODED_READ and BTRFS_IOC_ENCODED_WRITE are essentially + * preadv/pwritev with additional metadata about how the data is encoded and the + * size of the unencoded data. + * + * BTRFS_IOC_ENCODED_READ fills the given iovecs with the encoded data, fills + * the metadata fields, and returns the size of the encoded data. It reads one + * extent per call. It can also read data which is not encoded. + * + * BTRFS_IOC_ENCODED_WRITE uses the metadata fields, writes the encoded data + * from the iovecs, and returns the size of the encoded data. Note that the + * encoded data is not validated when it is written; if it is not valid (e.g., + * it cannot be decompressed), then a subsequent read may return an error. + * + * Since the filesystem page cache contains decoded data, encoded I/O bypasses + * the page cache. Encoded I/O requires CAP_SYS_ADMIN. 
+ */ +struct btrfs_ioctl_encoded_io_args { + /* Input parameters for both reads and writes. */ + + /* + * iovecs containing encoded data. + * + * For reads, if the size of the encoded data is larger than the sum of + * iov[n].iov_len for 0 <= n < iovcnt, then the ioctl fails with + * ENOBUFS. + * + * For writes, the size of the encoded data is the sum of iov[n].iov_len + * for 0 <= n < iovcnt. This must be less than 128 KiB (this limit may + * increase in the future). This must also be less than or equal to + * unencoded_len. + */ + const struct iovec __user *iov; + /* Number of iovecs. */ + unsigned long iovcnt; + /* + * Offset in file. + * + * For writes, must be aligned to the sector size of the filesystem. + */ + __s64 offset; + /* Currently must be zero. */ + __u64 flags; + + /* + * For reads, the following members are output parameters that will + * contain the returned metadata for the encoded data. + * For writes, the following members must be set to the metadata for the + * encoded data. + */ + + /* + * Length of the data in the file. + * + * Must be less than or equal to unencoded_len - unencoded_offset. For + * writes, must be aligned to the sector size of the filesystem unless + * the data ends at or beyond the current end of the file. + */ + __u64 len; + /* + * Length of the unencoded (i.e., decrypted and decompressed) data. + * + * For writes, must be no more than 128 KiB (this limit may increase in + * the future). If the unencoded data is actually longer than + * unencoded_len, then it is truncated; if it is shorter, then it is + * extended with zeroes. + */ + __u64 unencoded_len; + /* + * Offset from the first byte of the unencoded data to the first byte of + * logical data in the file. + * + * Must be less than unencoded_len. + */ + __u64 unencoded_offset; + /* + * BTRFS_ENCODED_IO_COMPRESSION_* type. + * + * For writes, must not be BTRFS_ENCODED_IO_COMPRESSION_NONE. + */ + __u32 compression; + /* Currently always BTRFS_ENCODED_IO_ENCRYPTION_NONE. */ + __u32 encryption; + /* + * Reserved for future expansion. + * + * For reads, always returned as zero. Users should check for non-zero + * bytes. If there are any, then the kernel has a newer version of this + * structure with additional information that the user definition is + * missing. + * + * For writes, must be zeroed. + */ + __u8 reserved[64]; +}; + +/* Data is not compressed. */ +#define BTRFS_ENCODED_IO_COMPRESSION_NONE 0 +/* Data is compressed as a single zlib stream. */ +#define BTRFS_ENCODED_IO_COMPRESSION_ZLIB 1 +/* + * Data is compressed as a single zstd frame with the windowLog compression + * parameter set to no more than 17. + */ +#define BTRFS_ENCODED_IO_COMPRESSION_ZSTD 2 +/* + * Data is compressed sector by sector (using the sector size indicated by the + * name of the constant) with LZO1X and wrapped in the format documented in + * fs/btrfs/lzo.c. For writes, the compression sector size must match the + * filesystem sector size. + */ +#define BTRFS_ENCODED_IO_COMPRESSION_LZO_4K 3 +#define BTRFS_ENCODED_IO_COMPRESSION_LZO_8K 4 +#define BTRFS_ENCODED_IO_COMPRESSION_LZO_16K 5 +#define BTRFS_ENCODED_IO_COMPRESSION_LZO_32K 6 +#define BTRFS_ENCODED_IO_COMPRESSION_LZO_64K 7 +#define BTRFS_ENCODED_IO_COMPRESSION_TYPES 8 + +/* Data is not encrypted. 
*/ +#define BTRFS_ENCODED_IO_ENCRYPTION_NONE 0 +#define BTRFS_ENCODED_IO_ENCRYPTION_TYPES 1 + +/* Error codes as returned by the kernel */ +enum btrfs_err_code { + notused, + BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET, + BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET, + BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET, + BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET, + BTRFS_ERROR_DEV_TGT_REPLACE, + BTRFS_ERROR_DEV_MISSING_NOT_FOUND, + BTRFS_ERROR_DEV_ONLY_WRITABLE, + BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS, + BTRFS_ERROR_DEV_RAID1C3_MIN_NOT_MET, + BTRFS_ERROR_DEV_RAID1C4_MIN_NOT_MET, +}; + +#define BTRFS_IOC_SNAP_CREATE _IOW(BTRFS_IOCTL_MAGIC, 1, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_DEFRAG _IOW(BTRFS_IOCTL_MAGIC, 2, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_RESIZE _IOW(BTRFS_IOCTL_MAGIC, 3, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_SCAN_DEV _IOW(BTRFS_IOCTL_MAGIC, 4, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_FORGET_DEV _IOW(BTRFS_IOCTL_MAGIC, 5, \ + struct btrfs_ioctl_vol_args) +/* + * Removed in kernel since 4.17: + * BTRFS_IOC_TRANS_START _IO(BTRFS_IOCTL_MAGIC, 6) + * BTRFS_IOC_TRANS_END _IO(BTRFS_IOCTL_MAGIC, 7) + */ + +#define BTRFS_IOC_SYNC _IO(BTRFS_IOCTL_MAGIC, 8) + +#define BTRFS_IOC_CLONE _IOW(BTRFS_IOCTL_MAGIC, 9, int) +#define BTRFS_IOC_ADD_DEV _IOW(BTRFS_IOCTL_MAGIC, 10, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_RM_DEV _IOW(BTRFS_IOCTL_MAGIC, 11, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_BALANCE _IOW(BTRFS_IOCTL_MAGIC, 12, \ + struct btrfs_ioctl_vol_args) + +#define BTRFS_IOC_CLONE_RANGE _IOW(BTRFS_IOCTL_MAGIC, 13, \ + struct btrfs_ioctl_clone_range_args) + +#define BTRFS_IOC_SUBVOL_CREATE _IOW(BTRFS_IOCTL_MAGIC, 14, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_SNAP_DESTROY _IOW(BTRFS_IOCTL_MAGIC, 15, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_DEFRAG_RANGE _IOW(BTRFS_IOCTL_MAGIC, 16, \ + struct btrfs_ioctl_defrag_range_args) +#define BTRFS_IOC_TREE_SEARCH _IOWR(BTRFS_IOCTL_MAGIC, 17, \ + struct btrfs_ioctl_search_args) +#define BTRFS_IOC_TREE_SEARCH_V2 _IOWR(BTRFS_IOCTL_MAGIC, 17, \ + struct btrfs_ioctl_search_args_v2) +#define BTRFS_IOC_INO_LOOKUP _IOWR(BTRFS_IOCTL_MAGIC, 18, \ + struct btrfs_ioctl_ino_lookup_args) +#define BTRFS_IOC_DEFAULT_SUBVOL _IOW(BTRFS_IOCTL_MAGIC, 19, __u64) +#define BTRFS_IOC_SPACE_INFO _IOWR(BTRFS_IOCTL_MAGIC, 20, \ + struct btrfs_ioctl_space_args) +#define BTRFS_IOC_START_SYNC _IOR(BTRFS_IOCTL_MAGIC, 24, __u64) +#define BTRFS_IOC_WAIT_SYNC _IOW(BTRFS_IOCTL_MAGIC, 22, __u64) +#define BTRFS_IOC_SNAP_CREATE_V2 _IOW(BTRFS_IOCTL_MAGIC, 23, \ + struct btrfs_ioctl_vol_args_v2) +#define BTRFS_IOC_SUBVOL_CREATE_V2 _IOW(BTRFS_IOCTL_MAGIC, 24, \ + struct btrfs_ioctl_vol_args_v2) +#define BTRFS_IOC_SUBVOL_GETFLAGS _IOR(BTRFS_IOCTL_MAGIC, 25, __u64) +#define BTRFS_IOC_SUBVOL_SETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 26, __u64) +#define BTRFS_IOC_SCRUB _IOWR(BTRFS_IOCTL_MAGIC, 27, \ + struct btrfs_ioctl_scrub_args) +#define BTRFS_IOC_SCRUB_CANCEL _IO(BTRFS_IOCTL_MAGIC, 28) +#define BTRFS_IOC_SCRUB_PROGRESS _IOWR(BTRFS_IOCTL_MAGIC, 29, \ + struct btrfs_ioctl_scrub_args) +#define BTRFS_IOC_DEV_INFO _IOWR(BTRFS_IOCTL_MAGIC, 30, \ + struct btrfs_ioctl_dev_info_args) +#define BTRFS_IOC_FS_INFO _IOR(BTRFS_IOCTL_MAGIC, 31, \ + struct btrfs_ioctl_fs_info_args) +#define BTRFS_IOC_BALANCE_V2 _IOWR(BTRFS_IOCTL_MAGIC, 32, \ + struct btrfs_ioctl_balance_args) +#define BTRFS_IOC_BALANCE_CTL _IOW(BTRFS_IOCTL_MAGIC, 33, int) +#define BTRFS_IOC_BALANCE_PROGRESS _IOR(BTRFS_IOCTL_MAGIC, 34, \ + struct btrfs_ioctl_balance_args) +#define BTRFS_IOC_INO_PATHS 
_IOWR(BTRFS_IOCTL_MAGIC, 35, \ + struct btrfs_ioctl_ino_path_args) +#define BTRFS_IOC_LOGICAL_INO _IOWR(BTRFS_IOCTL_MAGIC, 36, \ + struct btrfs_ioctl_logical_ino_args) +#define BTRFS_IOC_SET_RECEIVED_SUBVOL _IOWR(BTRFS_IOCTL_MAGIC, 37, \ + struct btrfs_ioctl_received_subvol_args) + +#ifdef BTRFS_IOC_SET_RECEIVED_SUBVOL_32_COMPAT_DEFINED +#define BTRFS_IOC_SET_RECEIVED_SUBVOL_32 _IOWR(BTRFS_IOCTL_MAGIC, 37, \ + struct btrfs_ioctl_received_subvol_args_32) +#endif + +#ifdef BTRFS_IOC_SEND_64_COMPAT_DEFINED +#define BTRFS_IOC_SEND_64 _IOW(BTRFS_IOCTL_MAGIC, 38, \ + struct btrfs_ioctl_send_args_64) +#endif + +#define BTRFS_IOC_SEND _IOW(BTRFS_IOCTL_MAGIC, 38, struct btrfs_ioctl_send_args) +#define BTRFS_IOC_DEVICES_READY _IOR(BTRFS_IOCTL_MAGIC, 39, \ + struct btrfs_ioctl_vol_args) +#define BTRFS_IOC_QUOTA_CTL _IOWR(BTRFS_IOCTL_MAGIC, 40, \ + struct btrfs_ioctl_quota_ctl_args) +#define BTRFS_IOC_QGROUP_ASSIGN _IOW(BTRFS_IOCTL_MAGIC, 41, \ + struct btrfs_ioctl_qgroup_assign_args) +#define BTRFS_IOC_QGROUP_CREATE _IOW(BTRFS_IOCTL_MAGIC, 42, \ + struct btrfs_ioctl_qgroup_create_args) +#define BTRFS_IOC_QGROUP_LIMIT _IOR(BTRFS_IOCTL_MAGIC, 43, \ + struct btrfs_ioctl_qgroup_limit_args) +#define BTRFS_IOC_QUOTA_RESCAN _IOW(BTRFS_IOCTL_MAGIC, 44, \ + struct btrfs_ioctl_quota_rescan_args) +#define BTRFS_IOC_QUOTA_RESCAN_STATUS _IOR(BTRFS_IOCTL_MAGIC, 45, \ + struct btrfs_ioctl_quota_rescan_args) +#define BTRFS_IOC_QUOTA_RESCAN_WAIT _IO(BTRFS_IOCTL_MAGIC, 46) +#define BTRFS_IOC_GET_FSLABEL _IOR(BTRFS_IOCTL_MAGIC, 49, \ + char[BTRFS_LABEL_SIZE]) +#define BTRFS_IOC_SET_FSLABEL _IOW(BTRFS_IOCTL_MAGIC, 50, \ + char[BTRFS_LABEL_SIZE]) +#define BTRFS_IOC_GET_DEV_STATS _IOWR(BTRFS_IOCTL_MAGIC, 52, \ + struct btrfs_ioctl_get_dev_stats) +#define BTRFS_IOC_DEV_REPLACE _IOWR(BTRFS_IOCTL_MAGIC, 53, \ + struct btrfs_ioctl_dev_replace_args) +#define BTRFS_IOC_FILE_EXTENT_SAME _IOWR(BTRFS_IOCTL_MAGIC, 54, \ + struct btrfs_ioctl_same_args) +#define BTRFS_IOC_GET_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ + struct btrfs_ioctl_feature_flags) +#define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 57, \ + struct btrfs_ioctl_feature_flags[2]) +#define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ + struct btrfs_ioctl_feature_flags[3]) +#define BTRFS_IOC_RM_DEV_V2 _IOW(BTRFS_IOCTL_MAGIC, 58, \ + struct btrfs_ioctl_vol_args_v2) +#define BTRFS_IOC_LOGICAL_INO_V2 _IOWR(BTRFS_IOCTL_MAGIC, 59, \ + struct btrfs_ioctl_logical_ino_args) +#define BTRFS_IOC_GET_SUBVOL_INFO _IOR(BTRFS_IOCTL_MAGIC, 60, \ + struct btrfs_ioctl_get_subvol_info_args) +#define BTRFS_IOC_GET_SUBVOL_ROOTREF _IOWR(BTRFS_IOCTL_MAGIC, 61, \ + struct btrfs_ioctl_get_subvol_rootref_args) +#define BTRFS_IOC_INO_LOOKUP_USER _IOWR(BTRFS_IOCTL_MAGIC, 62, \ + struct btrfs_ioctl_ino_lookup_user_args) +#define BTRFS_IOC_SNAP_DESTROY_V2 _IOW(BTRFS_IOCTL_MAGIC, 63, \ + struct btrfs_ioctl_vol_args_v2) +#define BTRFS_IOC_ENCODED_READ _IOR(BTRFS_IOCTL_MAGIC, 64, \ + struct btrfs_ioctl_encoded_io_args) +#define BTRFS_IOC_ENCODED_WRITE _IOW(BTRFS_IOCTL_MAGIC, 64, \ + struct btrfs_ioctl_encoded_io_args) + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/libbtrfs/send-utils.c b/libbtrfs/send-utils.c index 9f7054b2..831ec0dc 100644 --- a/libbtrfs/send-utils.c +++ b/libbtrfs/send-utils.c @@ -27,7 +27,7 @@ #include "kernel-lib/rbtree.h" #include "libbtrfs/ctree.h" #include "libbtrfs/send-utils.h" -#include "ioctl.h" +#include "libbtrfs/ioctl.h" static int btrfs_subvolid_resolve_sub(int fd, char *path, size_t *path_len, u64 subvol_id); From patchwork Wed Nov 23 
22:37:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054410 Received: from localhost (cpe-174-109-170-245.nc.res.rr.com.
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id t20-20020a05620a451400b006ceb933a9fesm13384055qkp.81.2022.11.23.14.37.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:54 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 12/29] btrfs-progs: sync uapi/btrfs.h into btrfs-progs Date: Wed, 23 Nov 2022 17:37:20 -0500 Message-Id: <80ce230bd4a20f4a5a3d62db25f86b419da44414.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We want to keep this file locally as we want to be uptodate with upstream, so we can build btrfs-progs regardless of which kernel is currently installed. Sync this with the upstream version and put it in kernel-shared/uapi to maintain some semblance of where this file comes from. Signed-off-by: Josef Bacik --- btrfs-fragments.c | 2 +- btrfstune.c | 2 +- check/main.c | 2 +- cmds/balance.c | 2 +- cmds/device.c | 2 +- cmds/filesystem-usage.h | 2 +- cmds/filesystem.c | 2 +- cmds/inspect.c | 2 +- cmds/property.c | 2 +- cmds/qgroup.c | 2 +- cmds/qgroup.h | 2 +- cmds/quota.c | 2 +- cmds/receive.c | 2 +- cmds/replace.c | 2 +- cmds/rescue-chunk-recover.c | 2 +- cmds/scrub.c | 2 +- cmds/send.c | 2 +- cmds/subvolume-list.c | 2 +- cmds/subvolume.c | 2 +- common/device-scan.c | 2 +- common/device-scan.h | 2 +- common/fsfeatures.c | 2 +- common/send-stream.c | 2 +- common/send-utils.c | 2 +- common/utils.c | 2 +- common/utils.h | 2 +- convert/common.c | 2 +- image/main.c | 2 +- kernel-shared/ctree.h | 2 +- ioctl.h => kernel-shared/uapi/btrfs.h | 603 +++++++++++++++----------- mkfs/common.c | 2 +- tests/ioctl-test.c | 2 +- tests/library-test.c | 2 +- 33 files changed, 371 insertions(+), 296 deletions(-) rename ioctl.h => kernel-shared/uapi/btrfs.h (70%) diff --git a/btrfs-fragments.c b/btrfs-fragments.c index df8ad352..970b49e5 100644 --- a/btrfs-fragments.c +++ b/btrfs-fragments.c @@ -31,7 +31,7 @@ #include #include "kernel-shared/ctree.h" #include "common/utils.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int use_color; static void diff --git a/btrfstune.c b/btrfstune.c index afa4cc97..8dd32129 100644 --- a/btrfstune.c +++ b/btrfstune.c @@ -41,7 +41,7 @@ #include "common/string-utils.h" #include "common/help.h" #include "common/box.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static char *device; static int force = 0; diff --git a/check/main.c b/check/main.c index 4c8e6bdf..4d8d6882 100644 --- a/check/main.c +++ b/check/main.c @@ -61,7 +61,7 @@ #include "check/mode-lowmem.h" #include "check/qgroup-verify.h" #include "check/clear-cache.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" /* Global context variables */ struct btrfs_fs_info *gfs_info; diff --git a/cmds/balance.c b/cmds/balance.c index d7631cae..97590319 100644 --- a/cmds/balance.c +++ b/cmds/balance.c @@ -33,7 +33,7 @@ #include "common/messages.h" #include "common/help.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static const char * const balance_cmd_group_usage[] = { "btrfs balance [options] ", diff --git a/cmds/device.c b/cmds/device.c index 0b4afa71..92abd978 100644 --- a/cmds/device.c +++ b/cmds/device.c @@ -41,7 +41,7 @@ #include "cmds/commands.h" #include "cmds/filesystem-usage.h" #include "mkfs/common.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static const char * const 
device_cmd_group_usage[] = { "btrfs device []", diff --git a/cmds/filesystem-usage.h b/cmds/filesystem-usage.h index 902c3384..2c0db9dc 100644 --- a/cmds/filesystem-usage.h +++ b/cmds/filesystem-usage.h @@ -20,7 +20,7 @@ #define __CMDS_FI_USAGE_H__ #include "kerncompat.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" struct device_info { u64 devid; diff --git a/cmds/filesystem.c b/cmds/filesystem.c index ecd5cd8f..a0906b13 100644 --- a/cmds/filesystem.c +++ b/cmds/filesystem.c @@ -53,7 +53,7 @@ #include "common/filesystem-utils.h" #include "cmds/commands.h" #include "cmds/filesystem-usage.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" /* * for btrfs fi show, we maintain a hash of fsids we've already printed. diff --git a/cmds/inspect.c b/cmds/inspect.c index a3d6c0cb..5adb8049 100644 --- a/cmds/inspect.c +++ b/cmds/inspect.c @@ -36,7 +36,7 @@ #include "common/units.h" #include "common/string-utils.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static const char * const inspect_cmd_group_usage[] = { "btrfs inspect-internal ", diff --git a/cmds/property.c b/cmds/property.c index f2ba4962..608c2c9a 100644 --- a/cmds/property.c +++ b/cmds/property.c @@ -38,7 +38,7 @@ #include "common/filesystem-utils.h" #include "cmds/commands.h" #include "cmds/props.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define XATTR_BTRFS_PREFIX "btrfs." #define XATTR_BTRFS_PREFIX_LEN (sizeof(XATTR_BTRFS_PREFIX) - 1) diff --git a/cmds/qgroup.c b/cmds/qgroup.c index c6c15da5..b3fd7e9f 100644 --- a/cmds/qgroup.c +++ b/cmds/qgroup.c @@ -38,7 +38,7 @@ #include "common/messages.h" #include "cmds/commands.h" #include "cmds/qgroup.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define BTRFS_QGROUP_NFILTERS_INCREASE (2 * BTRFS_QGROUP_FILTER_MAX) #define BTRFS_QGROUP_NCOMPS_INCREASE (2 * BTRFS_QGROUP_COMP_MAX) diff --git a/cmds/qgroup.h b/cmds/qgroup.h index 93e81e85..20911f15 100644 --- a/cmds/qgroup.h +++ b/cmds/qgroup.h @@ -20,7 +20,7 @@ #define __CMDS_QGROUP_H__ #include "kerncompat.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" struct btrfs_qgroup_info { u64 generation; diff --git a/cmds/quota.c b/cmds/quota.c index 9e26d2cb..b68945f0 100644 --- a/cmds/quota.c +++ b/cmds/quota.c @@ -27,7 +27,7 @@ #include "common/open-utils.h" #include "common/messages.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static const char * const quota_cmd_group_usage[] = { "btrfs quota [options] ", diff --git a/cmds/receive.c b/cmds/receive.c index af3138d5..c774aebc 100644 --- a/cmds/receive.c +++ b/cmds/receive.c @@ -55,7 +55,7 @@ #include "common/string-utils.h" #include "cmds/commands.h" #include "cmds/receive-dump.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" struct btrfs_receive { diff --git a/cmds/replace.c b/cmds/replace.c index bdb74dff..077a9d04 100644 --- a/cmds/replace.c +++ b/cmds/replace.c @@ -39,7 +39,7 @@ #include "common/messages.h" #include "cmds/commands.h" #include "mkfs/common.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int print_replace_status(int fd, const char *path, int once); static char *time2string(char *buf, size_t s, __u64 t); diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c index e8d4b28f..a085f108 100644 --- a/cmds/rescue-chunk-recover.c +++ b/cmds/rescue-chunk-recover.c @@ -37,7 +37,7 @@ #include "common/utils.h" #include "cmds/rescue.h" #include "check/common.h" -#include "ioctl.h" +#include 
"kernel-shared/uapi/btrfs.h" struct recover_control { int verbose; diff --git a/cmds/scrub.c b/cmds/scrub.c index e6513b39..b606f2ff 100644 --- a/cmds/scrub.c +++ b/cmds/scrub.c @@ -51,7 +51,7 @@ #include "common/units.h" #include "common/help.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static unsigned unit_mode = UNITS_DEFAULT; diff --git a/cmds/send.c b/cmds/send.c index f238b581..c9caa09e 100644 --- a/cmds/send.c +++ b/cmds/send.c @@ -35,7 +35,7 @@ #include "common/string-utils.h" #include "common/messages.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define BTRFS_SEND_BUF_SIZE_V1 (SZ_64K) #define BTRFS_MAX_COMPRESSED (SZ_128K) diff --git a/cmds/subvolume-list.c b/cmds/subvolume-list.c index 2d42e927..6997d877 100644 --- a/cmds/subvolume-list.c +++ b/cmds/subvolume-list.c @@ -35,7 +35,7 @@ #include "common/string-utils.h" #include "common/utils.h" #include "cmds/commands.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" /* * Naming of options: diff --git a/cmds/subvolume.c b/cmds/subvolume.c index a90147e2..a3180f47 100644 --- a/cmds/subvolume.c +++ b/cmds/subvolume.c @@ -42,7 +42,7 @@ #include "common/units.h" #include "cmds/commands.h" #include "cmds/qgroup.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int wait_for_subvolume_cleaning(int fd, size_t count, uint64_t *ids, int sleep_interval) diff --git a/common/device-scan.c b/common/device-scan.c index 660382b2..f36ba95d 100644 --- a/common/device-scan.c +++ b/common/device-scan.c @@ -50,7 +50,7 @@ #include "common/defs.h" #include "common/open-utils.h" #include "common/units.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int btrfs_scan_done = 0; diff --git a/common/device-scan.h b/common/device-scan.h index 13a16e0a..f805b489 100644 --- a/common/device-scan.h +++ b/common/device-scan.h @@ -19,7 +19,7 @@ #include "kerncompat.h" #include -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define BTRFS_SCAN_MOUNTED (1ULL << 0) #define BTRFS_SCAN_LBLKID (1ULL << 1) diff --git a/common/fsfeatures.c b/common/fsfeatures.c index 169e47e9..18afdbab 100644 --- a/common/fsfeatures.c +++ b/common/fsfeatures.c @@ -29,7 +29,7 @@ #include "common/string-utils.h" #include "common/utils.h" #include "common/messages.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" /* * Insert a root item for temporary tree root diff --git a/common/send-stream.c b/common/send-stream.c index 72a25729..69b7af97 100644 --- a/common/send-stream.c +++ b/common/send-stream.c @@ -26,7 +26,7 @@ #include "crypto/crc32c.h" #include "common/send-stream.h" #include "common/messages.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" struct btrfs_send_attribute { u16 tlv_type; diff --git a/common/send-utils.c b/common/send-utils.c index 85c7f6ee..0ce437c1 100644 --- a/common/send-utils.c +++ b/common/send-utils.c @@ -28,7 +28,7 @@ #include "common/send-utils.h" #include "common/messages.h" #include "common/utils.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int btrfs_subvolid_resolve_sub(int fd, char *path, size_t *path_len, u64 subvol_id); diff --git a/common/utils.c b/common/utils.c index 2c359dcf..1ab232ea 100644 --- a/common/utils.c +++ b/common/utils.c @@ -38,7 +38,7 @@ #include "common/messages.h" #include "cmds/commands.h" #include "mkfs/common.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static int rand_seed_initialized = 0; static unsigned short rand_seed[3]; 
diff --git a/common/utils.h b/common/utils.h index 87dceef5..27c6ae2d 100644 --- a/common/utils.h +++ b/common/utils.h @@ -29,7 +29,7 @@ #include "common/internal.h" #include "common/messages.h" #include "common/fsfeatures.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" enum exclusive_operation { BTRFS_EXCLOP_NONE, diff --git a/convert/common.c b/convert/common.c index 1a85085f..228191b8 100644 --- a/convert/common.c +++ b/convert/common.c @@ -29,7 +29,7 @@ #include "common/messages.h" #include "mkfs/common.h" #include "convert/common.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define BTRFS_CONVERT_META_GROUP_SIZE SZ_32M diff --git a/image/main.c b/image/main.c index c7bbb05d..6a1bcd42 100644 --- a/image/main.c +++ b/image/main.c @@ -51,7 +51,7 @@ #include "common/string-utils.h" #include "image/metadump.h" #include "image/sanitize.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #define MAX_WORKER_THREADS (32) diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 85ecc16b..3f674484 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -25,7 +25,7 @@ #include "kerncompat.h" #include "common/extent-cache.h" #include "kernel-shared/extent_io.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" struct btrfs_root; struct btrfs_trans_handle; diff --git a/ioctl.h b/kernel-shared/uapi/btrfs.h similarity index 70% rename from ioctl.h rename to kernel-shared/uapi/btrfs.h index 686c1035..e694449c 100644 --- a/ioctl.h +++ b/kernel-shared/uapi/btrfs.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ /* * Copyright (C) 2007 Oracle. All rights reserved. * @@ -16,28 +17,15 @@ * Boston, MA 021110-1307, USA. */ -#ifndef __BTRFS_IOCTL_H__ -#define __BTRFS_IOCTL_H__ - -#ifdef __cplusplus -extern "C" { -#endif - -#include +#ifndef _UAPI_LINUX_BTRFS_H +#define _UAPI_LINUX_BTRFS_H +#include #include -#include - -#ifndef __user -#define __user -#endif - -/* We don't want to include entire kerncompat.h */ -#ifndef BUILD_ASSERT -#define BUILD_ASSERT(x) -#endif +#include #define BTRFS_IOCTL_MAGIC 0x94 #define BTRFS_VOL_NAME_MAX 255 +#define BTRFS_LABEL_SIZE 256 /* this should be 4k */ #define BTRFS_PATH_NAME_MAX 4087 @@ -45,18 +33,20 @@ struct btrfs_ioctl_vol_args { __s64 fd; char name[BTRFS_PATH_NAME_MAX + 1]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args) == 4096); -#define BTRFS_DEVICE_PATH_NAME_MAX 1024 +#define BTRFS_DEVICE_PATH_NAME_MAX 1024 +#define BTRFS_SUBVOL_NAME_MAX 4039 -/* - * Obsolete since 5.15, functionality removed in kernel 5.7: - * BTRFS_SUBVOL_CREATE_ASYNC (1ULL << 0) - */ +#ifndef __KERNEL__ +/* Deprecated since 5.7 */ +# define BTRFS_SUBVOL_CREATE_ASYNC (1ULL << 0) +#endif #define BTRFS_SUBVOL_RDONLY (1ULL << 1) #define BTRFS_SUBVOL_QGROUP_INHERIT (1ULL << 2) + #define BTRFS_DEVICE_SPEC_BY_ID (1ULL << 3) -#define BTRFS_SUBVOL_SPEC_BY_ID (1ULL << 4) + +#define BTRFS_SUBVOL_SPEC_BY_ID (1ULL << 4) #define BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED \ (BTRFS_SUBVOL_RDONLY | \ @@ -66,8 +56,21 @@ BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args) == 4096); #define BTRFS_FSID_SIZE 16 #define BTRFS_UUID_SIZE 16 +#define BTRFS_UUID_UNPARSED_SIZE 37 -#define BTRFS_QGROUP_INHERIT_SET_LIMITS (1ULL << 0) +/* + * flags definition for qgroup limits + * + * Used by: + * struct btrfs_qgroup_limit.flags + * struct btrfs_qgroup_limit_item.flags + */ +#define BTRFS_QGROUP_LIMIT_MAX_RFER (1ULL << 0) +#define BTRFS_QGROUP_LIMIT_MAX_EXCL (1ULL << 1) +#define BTRFS_QGROUP_LIMIT_RSV_RFER (1ULL << 2) +#define 
BTRFS_QGROUP_LIMIT_RSV_EXCL (1ULL << 3) +#define BTRFS_QGROUP_LIMIT_RFER_CMPR (1ULL << 4) +#define BTRFS_QGROUP_LIMIT_EXCL_CMPR (1ULL << 5) struct btrfs_qgroup_limit { __u64 flags; @@ -76,7 +79,14 @@ struct btrfs_qgroup_limit { __u64 rsv_rfer; __u64 rsv_excl; }; -BUILD_ASSERT(sizeof(struct btrfs_qgroup_limit) == 40); + +/* + * flags definition for qgroup inheritance + * + * Used by: + * struct btrfs_qgroup_inherit.flags + */ +#define BTRFS_QGROUP_INHERIT_SET_LIMITS (1ULL << 0) struct btrfs_qgroup_inherit { __u64 flags; @@ -84,17 +94,38 @@ struct btrfs_qgroup_inherit { __u64 num_ref_copies; __u64 num_excl_copies; struct btrfs_qgroup_limit lim; - __u64 qgroups[0]; + __u64 qgroups[]; }; -BUILD_ASSERT(sizeof(struct btrfs_qgroup_inherit) == 72); struct btrfs_ioctl_qgroup_limit_args { __u64 qgroupid; struct btrfs_qgroup_limit lim; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_qgroup_limit_args) == 48); -#define BTRFS_SUBVOL_NAME_MAX 4039 +/* + * Arguments for specification of subvolumes or devices, supporting by-name or + * by-id and flags + * + * The set of supported flags depends on the ioctl + * + * BTRFS_SUBVOL_RDONLY is also provided/consumed by the following ioctls: + * - BTRFS_IOC_SUBVOL_GETFLAGS + * - BTRFS_IOC_SUBVOL_SETFLAGS + */ + +/* Supported flags for BTRFS_IOC_RM_DEV_V2 */ +#define BTRFS_DEVICE_REMOVE_ARGS_MASK \ + (BTRFS_DEVICE_SPEC_BY_ID) + +/* Supported flags for BTRFS_IOC_SNAP_CREATE_V2 and BTRFS_IOC_SUBVOL_CREATE_V2 */ +#define BTRFS_SUBVOL_CREATE_ARGS_MASK \ + (BTRFS_SUBVOL_RDONLY | \ + BTRFS_SUBVOL_QGROUP_INHERIT) + +/* Supported flags for BTRFS_IOC_SNAP_DESTROY_V2 */ +#define BTRFS_SUBVOL_DELETE_ARGS_MASK \ + (BTRFS_SUBVOL_SPEC_BY_ID) + struct btrfs_ioctl_vol_args_v2 { __s64 fd; __u64 transid; @@ -102,7 +133,7 @@ struct btrfs_ioctl_vol_args_v2 { union { struct { __u64 size; - struct btrfs_qgroup_inherit __user *qgroup_inherit; + struct btrfs_qgroup_inherit *qgroup_inherit; }; __u64 unused[4]; }; @@ -112,7 +143,6 @@ struct btrfs_ioctl_vol_args_v2 { __u64 subvolid; }; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_vol_args_v2) == 4096); /* * structure to report errors and progress to userspace, either as a @@ -161,7 +191,6 @@ struct btrfs_ioctl_scrub_args { /* pad to 1k */ __u64 unused[(1024-32-sizeof(struct btrfs_scrub_progress))/8]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_scrub_args) == 1024); #define BTRFS_IOCTL_DEV_REPLACE_CONT_READING_FROM_SRCDEV_MODE_ALWAYS 0 #define BTRFS_IOCTL_DEV_REPLACE_CONT_READING_FROM_SRCDEV_MODE_AVOID 1 @@ -172,7 +201,6 @@ struct btrfs_ioctl_dev_replace_start_params { __u8 srcdev_name[BTRFS_DEVICE_PATH_NAME_MAX + 1]; /* in */ __u8 tgtdev_name[BTRFS_DEVICE_PATH_NAME_MAX + 1]; /* in */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_start_params) == 2072); #define BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED 0 #define BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED 1 @@ -187,7 +215,6 @@ struct btrfs_ioctl_dev_replace_status_params { __u64 num_write_errors; /* out */ __u64 num_uncorrectable_read_errors; /* out */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_status_params) == 48); #define BTRFS_IOCTL_DEV_REPLACE_CMD_START 0 #define BTRFS_IOCTL_DEV_REPLACE_CMD_STATUS 1 @@ -207,7 +234,6 @@ struct btrfs_ioctl_dev_replace_args { __u64 spare[64]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_dev_replace_args) == 2600); struct btrfs_ioctl_dev_info_args { __u64 devid; /* in/out */ @@ -217,7 +243,18 @@ struct btrfs_ioctl_dev_info_args { __u64 unused[379]; /* pad to 4k */ __u8 path[BTRFS_DEVICE_PATH_NAME_MAX]; /* out */ }; -BUILD_ASSERT(sizeof(struct 
btrfs_ioctl_dev_info_args) == 4096); + +/* + * Retrieve information about the filesystem + */ + +/* Request information about checksum type and size */ +#define BTRFS_FS_INFO_FLAG_CSUM_INFO (1 << 0) + +/* Request information about filesystem generation */ +#define BTRFS_FS_INFO_FLAG_GENERATION (1 << 1) +/* Request information about filesystem metadata UUID */ +#define BTRFS_FS_INFO_FLAG_METADATA_UUID (1 << 2) struct btrfs_ioctl_fs_info_args { __u64 max_id; /* out */ @@ -226,22 +263,70 @@ struct btrfs_ioctl_fs_info_args { __u32 nodesize; /* out */ __u32 sectorsize; /* out */ __u32 clone_alignment; /* out */ - __u32 reserved32; - __u64 reserved[122]; /* pad to 1k */ + /* See BTRFS_FS_INFO_FLAG_* */ + __u16 csum_type; /* out */ + __u16 csum_size; /* out */ + __u64 flags; /* in/out */ + __u64 generation; /* out */ + __u8 metadata_uuid[BTRFS_FSID_SIZE]; /* out */ + __u8 reserved[944]; /* pad to 1k */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_fs_info_args) == 1024); + +/* + * feature flags + * + * Used by: + * struct btrfs_ioctl_feature_flags + */ +#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE (1ULL << 0) +/* + * Older kernels (< 4.9) on big-endian systems produced broken free space tree + * bitmaps, and btrfs-progs also used to corrupt the free space tree (versions + * < 4.7.3). If this bit is clear, then the free space tree cannot be trusted. + * btrfs-progs can also intentionally clear this bit to ask the kernel to + * rebuild the free space tree, however this might not work on older kernels + * that do not know about this bit. If not sure, clear the cache manually on + * first mount when booting older kernel versions. + */ +#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE_VALID (1ULL << 1) +#define BTRFS_FEATURE_COMPAT_RO_VERITY (1ULL << 2) + +/* + * Put all block group items into a dedicated block group tree, greatly + * reducing mount time for large filesystem due to better locality. + */ +#define BTRFS_FEATURE_COMPAT_RO_BLOCK_GROUP_TREE (1ULL << 3) + +#define BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF (1ULL << 0) +#define BTRFS_FEATURE_INCOMPAT_DEFAULT_SUBVOL (1ULL << 1) +#define BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS (1ULL << 2) +#define BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO (1ULL << 3) +#define BTRFS_FEATURE_INCOMPAT_COMPRESS_ZSTD (1ULL << 4) + +/* + * older kernels tried to do bigger metadata blocks, but the + * code was pretty buggy. Lets not let them try anymore. 
+ */ +#define BTRFS_FEATURE_INCOMPAT_BIG_METADATA (1ULL << 5) + +#define BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF (1ULL << 6) +#define BTRFS_FEATURE_INCOMPAT_RAID56 (1ULL << 7) +#define BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA (1ULL << 8) +#define BTRFS_FEATURE_INCOMPAT_NO_HOLES (1ULL << 9) +#define BTRFS_FEATURE_INCOMPAT_METADATA_UUID (1ULL << 10) +#define BTRFS_FEATURE_INCOMPAT_RAID1C34 (1ULL << 11) +#define BTRFS_FEATURE_INCOMPAT_ZONED (1ULL << 12) +#define BTRFS_FEATURE_INCOMPAT_EXTENT_TREE_V2 (1ULL << 13) struct btrfs_ioctl_feature_flags { __u64 compat_flags; __u64 compat_ro_flags; __u64 incompat_flags; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_feature_flags) == 24); /* balance control ioctl modes */ #define BTRFS_BALANCE_CTL_PAUSE 1 #define BTRFS_BALANCE_CTL_CANCEL 2 -#define BTRFS_BALANCE_CTL_RESUME 3 /* * this is packed, because it should be exactly the same as its disk @@ -249,12 +334,6 @@ BUILD_ASSERT(sizeof(struct btrfs_ioctl_feature_flags) == 24); */ struct btrfs_balance_args { __u64 profiles; - - /* - * usage filter - * BTRFS_BALANCE_ARGS_USAGE with a single value means '0..N' - * BTRFS_BALANCE_ARGS_USAGE_RANGE - range syntax, min..max - */ union { __u64 usage; struct { @@ -262,7 +341,6 @@ struct btrfs_balance_args { __u32 usage_max; }; }; - __u64 devid; __u64 pstart; __u64 pend; @@ -285,19 +363,89 @@ struct btrfs_balance_args { __u32 limit_max; }; }; + + /* + * Process chunks that cross stripes_min..stripes_max devices, + * BTRFS_BALANCE_ARGS_STRIPES_RANGE + */ __u32 stripes_min; __u32 stripes_max; + __u64 unused[6]; } __attribute__ ((__packed__)); /* report balance progress to userspace */ struct btrfs_balance_progress { __u64 expected; /* estimated # of chunks that will be - * relocated to fulfil the request */ + * relocated to fulfill the request */ __u64 considered; /* # of chunks we have considered so far */ __u64 completed; /* # of chunks relocated so far */ }; +/* + * flags definition for balance + * + * Restriper's general type filter + * + * Used by: + * btrfs_ioctl_balance_args.flags + * btrfs_balance_control.flags (internal) + */ +#define BTRFS_BALANCE_DATA (1ULL << 0) +#define BTRFS_BALANCE_SYSTEM (1ULL << 1) +#define BTRFS_BALANCE_METADATA (1ULL << 2) + +#define BTRFS_BALANCE_TYPE_MASK (BTRFS_BALANCE_DATA | \ + BTRFS_BALANCE_SYSTEM | \ + BTRFS_BALANCE_METADATA) + +#define BTRFS_BALANCE_FORCE (1ULL << 3) +#define BTRFS_BALANCE_RESUME (1ULL << 4) + +/* + * flags definitions for per-type balance args + * + * Balance filters + * + * Used by: + * struct btrfs_balance_args + */ +#define BTRFS_BALANCE_ARGS_PROFILES (1ULL << 0) +#define BTRFS_BALANCE_ARGS_USAGE (1ULL << 1) +#define BTRFS_BALANCE_ARGS_DEVID (1ULL << 2) +#define BTRFS_BALANCE_ARGS_DRANGE (1ULL << 3) +#define BTRFS_BALANCE_ARGS_VRANGE (1ULL << 4) +#define BTRFS_BALANCE_ARGS_LIMIT (1ULL << 5) +#define BTRFS_BALANCE_ARGS_LIMIT_RANGE (1ULL << 6) +#define BTRFS_BALANCE_ARGS_STRIPES_RANGE (1ULL << 7) +#define BTRFS_BALANCE_ARGS_USAGE_RANGE (1ULL << 10) + +#define BTRFS_BALANCE_ARGS_MASK \ + (BTRFS_BALANCE_ARGS_PROFILES | \ + BTRFS_BALANCE_ARGS_USAGE | \ + BTRFS_BALANCE_ARGS_DEVID | \ + BTRFS_BALANCE_ARGS_DRANGE | \ + BTRFS_BALANCE_ARGS_VRANGE | \ + BTRFS_BALANCE_ARGS_LIMIT | \ + BTRFS_BALANCE_ARGS_LIMIT_RANGE | \ + BTRFS_BALANCE_ARGS_STRIPES_RANGE | \ + BTRFS_BALANCE_ARGS_USAGE_RANGE) + +/* + * Profile changing flags. When SOFT is set we won't relocate chunk if + * it already has the target profile (even though it may be + * half-filled). 
+ */ +#define BTRFS_BALANCE_ARGS_CONVERT (1ULL << 8) +#define BTRFS_BALANCE_ARGS_SOFT (1ULL << 9) + + +/* + * flags definition for balance state + * + * Used by: + * struct btrfs_ioctl_balance_args.state + */ #define BTRFS_BALANCE_STATE_RUNNING (1ULL << 0) #define BTRFS_BALANCE_STATE_PAUSE_REQ (1ULL << 1) #define BTRFS_BALANCE_STATE_CANCEL_REQ (1ULL << 2) @@ -314,7 +462,6 @@ struct btrfs_ioctl_balance_args { __u64 unused[72]; /* pad to 1k */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_balance_args) == 1024); #define BTRFS_INO_LOOKUP_PATH_MAX 4080 struct btrfs_ioctl_ino_lookup_args { @@ -322,9 +469,8 @@ struct btrfs_ioctl_ino_lookup_args { __u64 objectid; char name[BTRFS_INO_LOOKUP_PATH_MAX]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_lookup_args) == 4096); -#define BTRFS_INO_LOOKUP_USER_PATH_MAX (4080 - BTRFS_VOL_NAME_MAX - 1) +#define BTRFS_INO_LOOKUP_USER_PATH_MAX (4080 - BTRFS_VOL_NAME_MAX - 1) struct btrfs_ioctl_ino_lookup_user_args { /* in, inode number containing the subvolume of 'subvolid' */ __u64 dirid; @@ -338,33 +484,55 @@ struct btrfs_ioctl_ino_lookup_user_args { */ char path[BTRFS_INO_LOOKUP_USER_PATH_MAX]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_lookup_user_args) == 4096); +/* Search criteria for the btrfs SEARCH ioctl family. */ struct btrfs_ioctl_search_key { - /* which root are we searching. 0 is the tree of tree roots */ - __u64 tree_id; - - /* keys returned will be >= min and <= max */ - __u64 min_objectid; - __u64 max_objectid; - - /* keys returned will be >= min and <= max */ - __u64 min_offset; - __u64 max_offset; - - /* max and min transids to search for */ - __u64 min_transid; - __u64 max_transid; - - /* keys returned will be >= min and <= max */ - __u32 min_type; - __u32 max_type; + /* + * The tree we're searching in. 1 is the tree of tree roots, 2 is the + * extent tree, etc... + * + * A special tree_id value of 0 will cause a search in the subvolume + * tree that the inode which is passed to the ioctl is part of. + */ + __u64 tree_id; /* in */ /* - * how many items did userland ask for, and how many are we - * returning + * When doing a tree search, we're actually taking a slice from a + * linear search space of 136-bit keys. + * + * A full 136-bit tree key is composed as: + * (objectid << 72) + (type << 64) + offset + * + * The individual min and max values for objectid, type and offset + * define the min_key and max_key values for the search range. All + * metadata items with a key in the interval [min_key, max_key] will be + * returned. + * + * Additionally, we can filter the items returned on transaction id of + * the metadata block they're stored in by specifying a transid range. + * Be aware that this transaction id only denotes when the metadata + * page that currently contains the item got written the last time as + * result of a COW operation. The number does not have any meaning + * related to the transaction in which an individual item that is being + * returned was created or changed. */ - __u32 nr_items; + __u64 min_objectid; /* in */ + __u64 max_objectid; /* in */ + __u64 min_offset; /* in */ + __u64 max_offset; /* in */ + __u64 min_transid; /* in */ + __u64 max_transid; /* in */ + __u32 min_type; /* in */ + __u32 max_type; /* in */ + + /* + * input: The maximum amount of results desired. 
+ * output: The actual amount of items returned, restricted by any of: + * - reaching the upper bound of the search range + * - reaching the input nr_items amount of items + * - completely filling the supplied memory buffer + */ + __u32 nr_items; /* in/out */ /* align to 64 bits */ __u32 unused; @@ -382,7 +550,7 @@ struct btrfs_ioctl_search_header { __u64 offset; __u32 type; __u32 len; -} __attribute__((may_alias)); +}; #define BTRFS_SEARCH_ARGS_BUFSIZE (4096 - sizeof(struct btrfs_ioctl_search_key)) /* @@ -395,57 +563,28 @@ struct btrfs_ioctl_search_args { char buf[BTRFS_SEARCH_ARGS_BUFSIZE]; }; -/* - * Extended version of TREE_SEARCH ioctl that can return more than 4k of bytes. - * The allocated size of the buffer is set in buf_size. - */ struct btrfs_ioctl_search_args_v2 { - struct btrfs_ioctl_search_key key; /* in/out - search parameters */ - __u64 buf_size; /* in - size of buffer - * out - on EOVERFLOW: needed size - * to store item */ - __u64 buf[0]; /* out - found items */ + struct btrfs_ioctl_search_key key; /* in/out - search parameters */ + __u64 buf_size; /* in - size of buffer + * out - on EOVERFLOW: needed size + * to store item */ + __u64 buf[]; /* out - found items */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_search_args_v2) == 112); -/* With a @src_length of zero, the range from @src_offset->EOF is cloned! */ struct btrfs_ioctl_clone_range_args { - __s64 src_fd; - __u64 src_offset, src_length; - __u64 dest_offset; + __s64 src_fd; + __u64 src_offset, src_length; + __u64 dest_offset; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_clone_range_args) == 32); -/* flags for the defrag range ioctl */ +/* + * flags definition for the defrag range ioctl + * + * Used by: + * struct btrfs_ioctl_defrag_range_args.flags + */ #define BTRFS_DEFRAG_RANGE_COMPRESS 1 #define BTRFS_DEFRAG_RANGE_START_IO 2 - -#define BTRFS_SAME_DATA_DIFFERS 1 -/* For extent-same ioctl */ -struct btrfs_ioctl_same_extent_info { - __s64 fd; /* in - destination file */ - __u64 logical_offset; /* in - start of extent in destination */ - __u64 bytes_deduped; /* out - total # of bytes we were able - * to dedupe from this file */ - /* status of this dedupe operation: - * 0 if dedup succeeds - * < 0 for error - * == BTRFS_SAME_DATA_DIFFERS if data differs - */ - __s32 status; /* out - see above description */ - __u32 reserved; -}; - -struct btrfs_ioctl_same_args { - __u64 logical_offset; /* in - start of extent in source */ - __u64 length; /* in - length of extent */ - __u16 dest_count; /* in - total elements in info array */ - __u16 reserved1; - __u32 reserved2; - struct btrfs_ioctl_same_extent_info info[0]; -}; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_same_args) == 24); - struct btrfs_ioctl_defrag_range_args { /* start of the defrag operation */ __u64 start; @@ -476,7 +615,32 @@ struct btrfs_ioctl_defrag_range_args { /* spare for later */ __u32 unused[4]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_defrag_range_args) == 48); + + +#define BTRFS_SAME_DATA_DIFFERS 1 +/* For extent-same ioctl */ +struct btrfs_ioctl_same_extent_info { + __s64 fd; /* in - destination file */ + __u64 logical_offset; /* in - start of extent in destination */ + __u64 bytes_deduped; /* out - total # of bytes we were able + * to dedupe from this file */ + /* status of this dedupe operation: + * 0 if dedup succeeds + * < 0 for error + * == BTRFS_SAME_DATA_DIFFERS if data differs + */ + __s32 status; /* out - see above description */ + __u32 reserved; +}; + +struct btrfs_ioctl_same_args { + __u64 logical_offset; /* in - start of extent in source */ + 
__u64 length; /* in - length of extent */ + __u16 dest_count; /* in - total elements in info array */ + __u16 reserved1; + __u32 reserved2; + struct btrfs_ioctl_same_extent_info info[]; +}; struct btrfs_ioctl_space_info { __u64 flags; @@ -487,16 +651,15 @@ struct btrfs_ioctl_space_info { struct btrfs_ioctl_space_args { __u64 space_slots; __u64 total_spaces; - struct btrfs_ioctl_space_info spaces[0]; + struct btrfs_ioctl_space_info spaces[]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_space_args) == 16); struct btrfs_data_container { __u32 bytes_left; /* out -- bytes not needed to deliver output */ __u32 bytes_missing; /* out -- additional bytes needed for result */ __u32 elem_cnt; /* out */ __u32 elem_missed; /* out */ - __u64 val[0]; /* out */ + __u64 val[]; /* out */ }; struct btrfs_ioctl_ino_path_args { @@ -506,22 +669,18 @@ struct btrfs_ioctl_ino_path_args { /* struct btrfs_data_container *fspath; out */ __u64 fspath; /* out */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_ino_path_args) == 56); struct btrfs_ioctl_logical_ino_args { __u64 logical; /* in */ __u64 size; /* in */ - __u64 reserved[3]; - __u64 flags; /* in */ + __u64 reserved[3]; /* must be 0 for now */ + __u64 flags; /* in, v2 only */ /* struct btrfs_data_container *inodes; out */ __u64 inodes; }; - -/* - * Return every ref to the extent, not just those containing logical block. - * Requires logical == extent bytenr. - */ -#define BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET (1ULL << 0) +/* Return every ref to the extent, not just those containing logical block. + * Requires logical == extent bytenr. */ +#define BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET (1ULL << 0) enum btrfs_dev_stat_values { /* disk I/O failure stats */ @@ -553,26 +712,27 @@ struct btrfs_ioctl_get_dev_stats { /* out values: */ __u64 values[BTRFS_DEV_STAT_VALUES_MAX]; - __u64 unused[128 - 2 - BTRFS_DEV_STAT_VALUES_MAX]; /* pad to 1k + 8B */ + /* + * This pads the struct to 1032 bytes. It was originally meant to pad to + * 1024 bytes, but when adding the flags field, the padding calculation + * was not adjusted. + */ + __u64 unused[128 - 2 - BTRFS_DEV_STAT_VALUES_MAX]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_get_dev_stats) == 1032); -/* BTRFS_IOC_SNAP_CREATE is no longer used by the btrfs command */ #define BTRFS_QUOTA_CTL_ENABLE 1 #define BTRFS_QUOTA_CTL_DISABLE 2 -/* 3 has formerly been reserved for BTRFS_QUOTA_CTL_RESCAN */ +#define BTRFS_QUOTA_CTL_RESCAN__NOTUSED 3 struct btrfs_ioctl_quota_ctl_args { __u64 cmd; __u64 status; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_quota_ctl_args) == 16); struct btrfs_ioctl_quota_rescan_args { __u64 flags; __u64 progress; __u64 reserved[6]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_quota_rescan_args) == 64); struct btrfs_ioctl_qgroup_assign_args { __u64 assign; @@ -584,8 +744,6 @@ struct btrfs_ioctl_qgroup_create_args { __u64 create; __u64 qgroupid; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_qgroup_create_args) == 16); - struct btrfs_ioctl_timespec { __u64 sec; __u32 nsec; @@ -600,39 +758,6 @@ struct btrfs_ioctl_received_subvol_args { __u64 flags; /* in */ __u64 reserved[16]; /* in */ }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_received_subvol_args) == 200); - -/* - * If we have a 32-bit userspace and 64-bit kernel, then the UAPI - * structures are incorrect, as the timespec structure from userspace - * is 4 bytes too small. We define these alternatives here for backward - * compatibility, the kernel understands both values. 
- */ - -/* - * Structure size is different on 32bit and 64bit, has some padding if the - * structure is embedded. Packing makes sure the size is same on both, but will - * be misaligned on 64bit. - * - * NOTE: do not use in your code, this is for testing only - */ -struct btrfs_ioctl_timespec_32 { - __u64 sec; - __u32 nsec; -} __attribute__ ((__packed__)); - -struct btrfs_ioctl_received_subvol_args_32 { - char uuid[BTRFS_UUID_SIZE]; /* in */ - __u64 stransid; /* in */ - __u64 rtransid; /* out */ - struct btrfs_ioctl_timespec_32 stime; /* in */ - struct btrfs_ioctl_timespec_32 rtime; /* out */ - __u64 flags; /* in */ - __u64 reserved[16]; /* in */ -} __attribute__ ((__packed__)); -BUILD_ASSERT(sizeof(struct btrfs_ioctl_received_subvol_args_32) == 192); - -#define BTRFS_IOC_SET_RECEIVED_SUBVOL_32_COMPAT_DEFINED 1 /* * Caller doesn't want file data in the send stream, even if the @@ -676,43 +801,12 @@ BUILD_ASSERT(sizeof(struct btrfs_ioctl_received_subvol_args_32) == 192); struct btrfs_ioctl_send_args { __s64 send_fd; /* in */ __u64 clone_sources_count; /* in */ - __u64 __user *clone_sources; /* in */ + __u64 *clone_sources; /* in */ __u64 parent_root; /* in */ __u64 flags; /* in */ __u32 version; /* in */ - __u8 reserved[28]; /* in */ + __u8 reserved[28]; /* in */ }; -/* - * Size of structure depends on pointer width, was not caught in the early - * days. Kernel handles pointer width differences transparently. - */ -BUILD_ASSERT(sizeof(__u64 *) == 8 - ? sizeof(struct btrfs_ioctl_send_args) == 72 - : (sizeof(void *) == 4 - ? sizeof(struct btrfs_ioctl_send_args) == 68 - : 0)); - -/* - * Different pointer width leads to structure size change. Kernel should accept - * both ioctl values (derived from the structures) for backward compatibility. - * Size of this structure is same on 32bit and 64bit though. - * - * NOTE: do not use in your code, this is for testing only - */ -struct btrfs_ioctl_send_args_64 { - __s64 send_fd; /* in */ - __u64 clone_sources_count; /* in */ - union { - __u64 __user *clone_sources; /* in */ - __u64 __clone_sources_alignment; - }; - __u64 parent_root; /* in */ - __u64 flags; /* in */ - __u64 reserved[4]; /* in */ -} __attribute__((packed)); -BUILD_ASSERT(sizeof(struct btrfs_ioctl_send_args_64) == 72); - -#define BTRFS_IOC_SEND_64_COMPAT_DEFINED 1 /* * Information about a fs tree root. @@ -774,22 +868,21 @@ struct btrfs_ioctl_get_subvol_info_args { __u64 reserved[8]; }; -#define BTRFS_MAX_ROOTREF_BUFFER_NUM 255 +#define BTRFS_MAX_ROOTREF_BUFFER_NUM 255 struct btrfs_ioctl_get_subvol_rootref_args { - /* in/out, minimum id of rootref's treeid to be searched */ - __u64 min_treeid; + /* in/out, minimum id of rootref's treeid to be searched */ + __u64 min_treeid; - /* out */ - struct { - __u64 treeid; - __u64 dirid; - } rootref[BTRFS_MAX_ROOTREF_BUFFER_NUM]; + /* out */ + struct { + __u64 treeid; + __u64 dirid; + } rootref[BTRFS_MAX_ROOTREF_BUFFER_NUM]; - /* out, number of found items */ - __u8 num_items; - __u8 align[7]; + /* out, number of found items */ + __u8 num_items; + __u8 align[7]; }; -BUILD_ASSERT(sizeof(struct btrfs_ioctl_get_subvol_rootref_args) == 4096); /* * Data and metadata for an encoded read or write. @@ -829,7 +922,7 @@ struct btrfs_ioctl_encoded_io_args { * increase in the future). This must also be less than or equal to * unencoded_len. */ - const struct iovec __user *iov; + const struct iovec *iov; /* Number of iovecs. 
*/ unsigned long iovcnt; /* @@ -921,8 +1014,7 @@ struct btrfs_ioctl_encoded_io_args { /* Error codes as returned by the kernel */ enum btrfs_err_code { - notused, - BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET, + BTRFS_ERROR_DEV_RAID1_MIN_NOT_MET = 1, BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET, BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET, BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET, @@ -944,12 +1036,12 @@ enum btrfs_err_code { struct btrfs_ioctl_vol_args) #define BTRFS_IOC_FORGET_DEV _IOW(BTRFS_IOCTL_MAGIC, 5, \ struct btrfs_ioctl_vol_args) -/* - * Removed in kernel since 4.17: - * BTRFS_IOC_TRANS_START _IO(BTRFS_IOCTL_MAGIC, 6) - * BTRFS_IOC_TRANS_END _IO(BTRFS_IOCTL_MAGIC, 7) +/* trans start and trans end are dangerous, and only for + * use by applications that know how to avoid the + * resulting deadlocks */ - +#define BTRFS_IOC_TRANS_START _IO(BTRFS_IOCTL_MAGIC, 6) +#define BTRFS_IOC_TRANS_END _IO(BTRFS_IOCTL_MAGIC, 7) #define BTRFS_IOC_SYNC _IO(BTRFS_IOCTL_MAGIC, 8) #define BTRFS_IOC_CLONE _IOW(BTRFS_IOCTL_MAGIC, 9, int) @@ -961,18 +1053,18 @@ enum btrfs_err_code { struct btrfs_ioctl_vol_args) #define BTRFS_IOC_CLONE_RANGE _IOW(BTRFS_IOCTL_MAGIC, 13, \ - struct btrfs_ioctl_clone_range_args) + struct btrfs_ioctl_clone_range_args) #define BTRFS_IOC_SUBVOL_CREATE _IOW(BTRFS_IOCTL_MAGIC, 14, \ struct btrfs_ioctl_vol_args) #define BTRFS_IOC_SNAP_DESTROY _IOW(BTRFS_IOCTL_MAGIC, 15, \ - struct btrfs_ioctl_vol_args) + struct btrfs_ioctl_vol_args) #define BTRFS_IOC_DEFRAG_RANGE _IOW(BTRFS_IOCTL_MAGIC, 16, \ struct btrfs_ioctl_defrag_range_args) #define BTRFS_IOC_TREE_SEARCH _IOWR(BTRFS_IOCTL_MAGIC, 17, \ struct btrfs_ioctl_search_args) #define BTRFS_IOC_TREE_SEARCH_V2 _IOWR(BTRFS_IOCTL_MAGIC, 17, \ - struct btrfs_ioctl_search_args_v2) + struct btrfs_ioctl_search_args_v2) #define BTRFS_IOC_INO_LOOKUP _IOWR(BTRFS_IOCTL_MAGIC, 18, \ struct btrfs_ioctl_ino_lookup_args) #define BTRFS_IOC_DEFAULT_SUBVOL _IOW(BTRFS_IOCTL_MAGIC, 19, __u64) @@ -987,14 +1079,14 @@ enum btrfs_err_code { #define BTRFS_IOC_SUBVOL_GETFLAGS _IOR(BTRFS_IOCTL_MAGIC, 25, __u64) #define BTRFS_IOC_SUBVOL_SETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 26, __u64) #define BTRFS_IOC_SCRUB _IOWR(BTRFS_IOCTL_MAGIC, 27, \ - struct btrfs_ioctl_scrub_args) + struct btrfs_ioctl_scrub_args) #define BTRFS_IOC_SCRUB_CANCEL _IO(BTRFS_IOCTL_MAGIC, 28) #define BTRFS_IOC_SCRUB_PROGRESS _IOWR(BTRFS_IOCTL_MAGIC, 29, \ - struct btrfs_ioctl_scrub_args) + struct btrfs_ioctl_scrub_args) #define BTRFS_IOC_DEV_INFO _IOWR(BTRFS_IOCTL_MAGIC, 30, \ - struct btrfs_ioctl_dev_info_args) + struct btrfs_ioctl_dev_info_args) #define BTRFS_IOC_FS_INFO _IOR(BTRFS_IOCTL_MAGIC, 31, \ - struct btrfs_ioctl_fs_info_args) + struct btrfs_ioctl_fs_info_args) #define BTRFS_IOC_BALANCE_V2 _IOWR(BTRFS_IOCTL_MAGIC, 32, \ struct btrfs_ioctl_balance_args) #define BTRFS_IOC_BALANCE_CTL _IOW(BTRFS_IOCTL_MAGIC, 33, int) @@ -1006,37 +1098,24 @@ enum btrfs_err_code { struct btrfs_ioctl_logical_ino_args) #define BTRFS_IOC_SET_RECEIVED_SUBVOL _IOWR(BTRFS_IOCTL_MAGIC, 37, \ struct btrfs_ioctl_received_subvol_args) - -#ifdef BTRFS_IOC_SET_RECEIVED_SUBVOL_32_COMPAT_DEFINED -#define BTRFS_IOC_SET_RECEIVED_SUBVOL_32 _IOWR(BTRFS_IOCTL_MAGIC, 37, \ - struct btrfs_ioctl_received_subvol_args_32) -#endif - -#ifdef BTRFS_IOC_SEND_64_COMPAT_DEFINED -#define BTRFS_IOC_SEND_64 _IOW(BTRFS_IOCTL_MAGIC, 38, \ - struct btrfs_ioctl_send_args_64) -#endif - #define BTRFS_IOC_SEND _IOW(BTRFS_IOCTL_MAGIC, 38, struct btrfs_ioctl_send_args) #define BTRFS_IOC_DEVICES_READY _IOR(BTRFS_IOCTL_MAGIC, 39, \ struct btrfs_ioctl_vol_args) #define BTRFS_IOC_QUOTA_CTL 
_IOWR(BTRFS_IOCTL_MAGIC, 40, \ - struct btrfs_ioctl_quota_ctl_args) + struct btrfs_ioctl_quota_ctl_args) #define BTRFS_IOC_QGROUP_ASSIGN _IOW(BTRFS_IOCTL_MAGIC, 41, \ - struct btrfs_ioctl_qgroup_assign_args) + struct btrfs_ioctl_qgroup_assign_args) #define BTRFS_IOC_QGROUP_CREATE _IOW(BTRFS_IOCTL_MAGIC, 42, \ - struct btrfs_ioctl_qgroup_create_args) + struct btrfs_ioctl_qgroup_create_args) #define BTRFS_IOC_QGROUP_LIMIT _IOR(BTRFS_IOCTL_MAGIC, 43, \ - struct btrfs_ioctl_qgroup_limit_args) + struct btrfs_ioctl_qgroup_limit_args) #define BTRFS_IOC_QUOTA_RESCAN _IOW(BTRFS_IOCTL_MAGIC, 44, \ struct btrfs_ioctl_quota_rescan_args) #define BTRFS_IOC_QUOTA_RESCAN_STATUS _IOR(BTRFS_IOCTL_MAGIC, 45, \ struct btrfs_ioctl_quota_rescan_args) #define BTRFS_IOC_QUOTA_RESCAN_WAIT _IO(BTRFS_IOCTL_MAGIC, 46) -#define BTRFS_IOC_GET_FSLABEL _IOR(BTRFS_IOCTL_MAGIC, 49, \ - char[BTRFS_LABEL_SIZE]) -#define BTRFS_IOC_SET_FSLABEL _IOW(BTRFS_IOCTL_MAGIC, 50, \ - char[BTRFS_LABEL_SIZE]) +#define BTRFS_IOC_GET_FSLABEL FS_IOC_GETFSLABEL +#define BTRFS_IOC_SET_FSLABEL FS_IOC_SETFSLABEL #define BTRFS_IOC_GET_DEV_STATS _IOWR(BTRFS_IOCTL_MAGIC, 52, \ struct btrfs_ioctl_get_dev_stats) #define BTRFS_IOC_DEV_REPLACE _IOWR(BTRFS_IOCTL_MAGIC, 53, \ @@ -1044,15 +1123,15 @@ enum btrfs_err_code { #define BTRFS_IOC_FILE_EXTENT_SAME _IOWR(BTRFS_IOCTL_MAGIC, 54, \ struct btrfs_ioctl_same_args) #define BTRFS_IOC_GET_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ - struct btrfs_ioctl_feature_flags) + struct btrfs_ioctl_feature_flags) #define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 57, \ - struct btrfs_ioctl_feature_flags[2]) + struct btrfs_ioctl_feature_flags[2]) #define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ - struct btrfs_ioctl_feature_flags[3]) -#define BTRFS_IOC_RM_DEV_V2 _IOW(BTRFS_IOCTL_MAGIC, 58, \ + struct btrfs_ioctl_feature_flags[3]) +#define BTRFS_IOC_RM_DEV_V2 _IOW(BTRFS_IOCTL_MAGIC, 58, \ struct btrfs_ioctl_vol_args_v2) #define BTRFS_IOC_LOGICAL_INO_V2 _IOWR(BTRFS_IOCTL_MAGIC, 59, \ - struct btrfs_ioctl_logical_ino_args) + struct btrfs_ioctl_logical_ino_args) #define BTRFS_IOC_GET_SUBVOL_INFO _IOR(BTRFS_IOCTL_MAGIC, 60, \ struct btrfs_ioctl_get_subvol_info_args) #define BTRFS_IOC_GET_SUBVOL_ROOTREF _IOWR(BTRFS_IOCTL_MAGIC, 61, \ @@ -1060,14 +1139,10 @@ enum btrfs_err_code { #define BTRFS_IOC_INO_LOOKUP_USER _IOWR(BTRFS_IOCTL_MAGIC, 62, \ struct btrfs_ioctl_ino_lookup_user_args) #define BTRFS_IOC_SNAP_DESTROY_V2 _IOW(BTRFS_IOCTL_MAGIC, 63, \ - struct btrfs_ioctl_vol_args_v2) + struct btrfs_ioctl_vol_args_v2) #define BTRFS_IOC_ENCODED_READ _IOR(BTRFS_IOCTL_MAGIC, 64, \ struct btrfs_ioctl_encoded_io_args) #define BTRFS_IOC_ENCODED_WRITE _IOW(BTRFS_IOCTL_MAGIC, 64, \ struct btrfs_ioctl_encoded_io_args) -#ifdef __cplusplus -} -#endif - -#endif +#endif /* _UAPI_LINUX_BTRFS_H */ diff --git a/mkfs/common.c b/mkfs/common.c index d77688ba..70a0b353 100644 --- a/mkfs/common.c +++ b/mkfs/common.c @@ -38,7 +38,7 @@ #include "common/device-utils.h" #include "common/open-utils.h" #include "mkfs/common.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" static u64 reference_root_table[] = { [MKFS_ROOT_TREE] = BTRFS_ROOT_TREE_OBJECTID, diff --git a/tests/ioctl-test.c b/tests/ioctl-test.c index a8a120ac..8452684a 100644 --- a/tests/ioctl-test.c +++ b/tests/ioctl-test.c @@ -18,7 +18,7 @@ #include #include -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #include "kernel-shared/ctree.h" #define LIST_32_COMPAT \ diff --git a/tests/library-test.c b/tests/library-test.c index d2ac56ae..120731dc 100644 
--- a/tests/library-test.c +++ b/tests/library-test.c @@ -22,7 +22,7 @@ #include "kernel-lib/rbtree.h" #include "kernel-lib/list.h" #include "kernel-shared/ctree.h" -#include "ioctl.h" +#include "kernel-shared/uapi/btrfs.h" #include "kernel-shared/send.h" #include "common/send-stream.h" #include "common/send-utils.h" From patchwork Wed Nov 23 22:37:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054408 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7D3EC4332F for ; Wed, 23 Nov 2022 22:38:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229796AbiKWWiZ (ORCPT ); Wed, 23 Nov 2022 17:38:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55690 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229776AbiKWWh7 (ORCPT ); Wed, 23 Nov 2022 17:37:59 -0500 Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com [IPv6:2607:f8b0:4864:20::731]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A31EFCFD for ; Wed, 23 Nov 2022 14:37:57 -0800 (PST) Received: by mail-qk1-x731.google.com with SMTP id k2so13471779qkk.7 for ; Wed, 23 Nov 2022 14:37:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=BGTLGPNKt9rHwMmVBTUJWDbcQrsnFY5gBQrILtbVypo=; b=Y/6V3zB/iBve35MCQo0YbbSAlW0xn3JNqlAjqyjzxF7iYTFI9UhRsWwpcKABBWRQDY iuJ+vQyT5TWDkkF0dZcTSwvQQAhIWkgWZhNBPEfnu5bI6PIZ8XBs4/EmrkmYsvdUpJNK KIV9wLjTff1mpDWHkeZIMNYoSOy252P7zKeF5F74jrek7jqdLefiflh3OESLBfMw6GoS uF69KgDRbhYVtGDx4daAckQUNTNcynXLgRJFzYbT2Ygt38cwrt74gWjp/nBX4qo8abf+ PsfpHE2GaSC/8urVWdZYB3C2NYkXcjVGO1MvtUw6bNwD+dtQo0wHOBYmHOMMrh5hPlN/ 0lwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BGTLGPNKt9rHwMmVBTUJWDbcQrsnFY5gBQrILtbVypo=; b=QGzuA66jC09vMPod1l+vOYZS6DrfIq8qhgSk8OJuEiTMV1+kkTzcoH57acsXNLbfrM wAnKdPuwSPEWsdokI8KtjYCaXRs4HRtlNxqHYnX2V+8V53YCU1NOYwm9YYnpR0L1R6kF d+T5MfufFBlASqMRN+0Y0xaUxx3GwLFWCDekefrBLD79mJlj7Z+uRpm2N/EUuNIV6+tq 9e6DjEiM0Ev9TaB2izK89JyD5HMtwvE4FRHhWtU7lQGiJvKlfYcuJ4lznPT6B1/SNRuM hlPLiEZ6tIq6UW+CWtPrQgukcvqf7jegT7jSNhsVy6JqR5DwIouSB/xMOro+IFsIgf9v X9Ng== X-Gm-Message-State: ANoB5pkT+rhP4ZliWM3vS613Lbg+7ErFnkCnCi//IkdQ1sN+Vc8nZrNL G9vXX3NruSmwO25+cyLgdlX1GL4GGMAGKQ== X-Google-Smtp-Source: AA0mqf6TInCOPG7xHxUv3izbMMjXCBpxJo1vLt8u+DtqwUP30ZTL2yzMoz6vpRfQ+HaUQRAn+ymH1Q== X-Received: by 2002:a37:af05:0:b0:6fa:da64:4879 with SMTP id y5-20020a37af05000000b006fada644879mr14119882qke.312.1669243076432; Wed, 23 Nov 2022 14:37:56 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. 
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id u12-20020a05620a084c00b006ee949b8051sm12390637qku.51.2022.11.23.14.37.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:56 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 13/29] btrfs-progs: stop using btrfs_root_item_v0 Date: Wed, 23 Nov 2022 17:37:21 -0500 Message-Id: <9c33821261448abb06676c0cca57ebab56498807.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This isn't defined in the kernel, we simply check if the root item size is less than btrfs_root_item, so adjust the user of btrfs_root_item_v0 to make a similar check. Signed-off-by: Josef Bacik --- cmds/subvolume-list.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cmds/subvolume-list.c b/cmds/subvolume-list.c index 6997d877..1c734f50 100644 --- a/cmds/subvolume-list.c +++ b/cmds/subvolume-list.c @@ -870,8 +870,8 @@ static int list_subvol_search(int fd, struct rb_root *root_lookup) ri = (struct btrfs_root_item *)(args.buf + off); gen = btrfs_root_generation(ri); flags = btrfs_root_flags(ri); - if(sh.len > - sizeof(struct btrfs_root_item_v0)) { + if(sh.len < + sizeof(struct btrfs_root_item)) { otime = btrfs_stack_timespec_sec(&ri->otime); ogen = btrfs_root_otransid(ri); memcpy(uuid, ri->uuid, BTRFS_UUID_SIZE); From patchwork Wed Nov 23 22:37:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054415 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89708C4332F for ; Wed, 23 Nov 2022 22:38:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229803AbiKWWia (ORCPT ); Wed, 23 Nov 2022 17:38:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229757AbiKWWiB (ORCPT ); Wed, 23 Nov 2022 17:38:01 -0500 Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com [IPv6:2607:f8b0:4864:20::733]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0914E10FF for ; Wed, 23 Nov 2022 14:37:58 -0800 (PST) Received: by mail-qk1-x733.google.com with SMTP id d7so13485899qkk.3 for ; Wed, 23 Nov 2022 14:37:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=tMV+EvVW/9gxm7EbfBgQRdnBIwmlJ4bs05kZBZaXF4M=; b=IYdUtg8jKczxGTIxji1Zw/jOWMevrb6Vq4dJ1iWf3WZT//PD58uT9BOhiyFjqKg9SS 28G+Chdv+jnzWIvNNEpYenWR/zhQtedrolGHSV2OPbwl42Y0qdz8SPNxax8+r4rCc+AG BKyR1W9Nfr1TC6OVAvgCJIfUAQ+jr3LOAcBzp+gaJVWmnWKxUiNhMEYz/BKN7eaaM9XA 9ZV3o0wjmlgVcR7rCFlbUKdVSdAMkzBA3Rwdik4CPo3QZcAHaULKjp/HfLPfIvuHeBC4 PXJO5EU+DTzM28sAXW2f8UTmB77hic+jSjEZYIu9lwbOtKs4vbJacD6zm6UNEAGP7NCl +fPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=tMV+EvVW/9gxm7EbfBgQRdnBIwmlJ4bs05kZBZaXF4M=; b=Z0qGGtBUMvxqGU5NOIK6RuShOawB/b2xaWigBdy5Z0HuSeMOqnn5DBaO4FWtqgczx/ pwjlGYNQ4vfFoGFgg+Fx3pW7r72qLPKfRwupC6Mrx/dkaJ7ShNU4tWCqYRlsyOIK/GCp ajiXBeXQgvWSSd0/Q1LVuBguv3W8+ZrE5GVKyc/MDcImJwMlqNtJgtGXDs5uzOJ7246x 4NWPEbDBr2x7flU0azqpUp/2wJq9vCVXChKhTeXBl23Ln1yqpNDtwDNyLe55uReAQnBv c//lsUO6CF7XtY5GY3em0WQZwujUhh2USbeEz02HqPPLLv24rNj8BN7WSO9p40Qh6Ymf alhg== X-Gm-Message-State: ANoB5pn1avlfMXIa5JqdkAGbnzbUk1vVJyPfT5c7kKt+DkuV1uRw7km+ roK984XThB3+9iUnQCIvyWcfqPd7xRjQFg== X-Google-Smtp-Source: AA0mqf6j+gLsoPQZk7S7hGpn6J0Qssd+52M4WyAgFXnHOdQAy+b5LI1PIaNTdyE40/eNczRGNCvxyQ== X-Received: by 2002:a05:620a:215c:b0:6fa:937f:61d4 with SMTP id m28-20020a05620a215c00b006fa937f61d4mr12649245qkm.280.1669243077659; Wed, 23 Nov 2022 14:37:57 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id z26-20020ac875da000000b003a622111f2csm9982724qtq.86.2022.11.23.14.37.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:37:57 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 14/29] btrfs-progs: make the find extent buffer helpers take fs_info Date: Wed, 23 Nov 2022 17:37:22 -0500 Message-Id: <7473bfae6a54af4d74b0993556df96a4dd8508d1.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This is a cleanup patch to make syncing the btrfs kernel code into btrfs-progs easier. In btrfs-progs we have an extra cache in the extent_io_tree that's exclusively used for the extent buffer tracking. In order to untangle this dependency start passing around the fs_info to search for extent_buffers, and then have the helpers use the appropriate structure to find the extent buffer. 
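For illustration only (not part of the patch), this is roughly how a converted call site reads; the wrapper function below is hypothetical, while find_extent_buffer() and the extent_cache member are taken from the diff that follows:

    /* Hypothetical caller, shown only to illustrate the signature change. */
    struct extent_buffer *lookup_block(struct btrfs_fs_info *fs_info,
                                       u64 bytenr, u32 blocksize)
    {
            /*
             * Before: callers reached into the progs-only cache tree directly:
             *     find_extent_buffer(&fs_info->extent_cache, bytenr, blocksize);
             */

            /* After: pass the fs_info and let the helper pick the structure. */
            return find_extent_buffer(fs_info, bytenr, blocksize);
    }

Hiding the tree behind fs_info means the embedded cache can later be swapped out without touching any caller.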
Signed-off-by: Josef Bacik --- kernel-shared/disk-io.c | 3 +-- kernel-shared/extent_io.c | 6 ++++-- kernel-shared/extent_io.h | 4 ++-- kernel-shared/transaction.c | 4 ++-- 4 files changed, 9 insertions(+), 8 deletions(-) diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index 776758e9..ad4d0f4c 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -227,8 +227,7 @@ static int csum_tree_block(struct btrfs_fs_info *fs_info, struct extent_buffer *btrfs_find_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize) { - return find_extent_buffer(&fs_info->extent_cache, - bytenr, blocksize); + return find_extent_buffer(fs_info, bytenr, blocksize); } struct extent_buffer* btrfs_find_create_tree_block( diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index f112983a..bdfb2de6 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -682,9 +682,10 @@ void free_extent_buffer_nocache(struct extent_buffer *eb) free_extent_buffer_internal(eb, 1); } -struct extent_buffer *find_extent_buffer(struct extent_io_tree *tree, +struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize) { + struct extent_io_tree *tree = &fs_info->extent_cache; struct extent_buffer *eb = NULL; struct cache_extent *cache; @@ -698,9 +699,10 @@ struct extent_buffer *find_extent_buffer(struct extent_io_tree *tree, return eb; } -struct extent_buffer *find_first_extent_buffer(struct extent_io_tree *tree, +struct extent_buffer *find_first_extent_buffer(struct btrfs_fs_info *fs_info, u64 start) { + struct extent_io_tree *tree = &fs_info->extent_cache; struct extent_buffer *eb = NULL; struct cache_extent *cache; diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index ccdf768c..e4ae2dcd 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -129,9 +129,9 @@ static inline int extent_buffer_uptodate(struct extent_buffer *eb) int set_state_private(struct extent_io_tree *tree, u64 start, u64 xprivate); int get_state_private(struct extent_io_tree *tree, u64 start, u64 *xprivate); -struct extent_buffer *find_extent_buffer(struct extent_io_tree *tree, +struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize); -struct extent_buffer *find_first_extent_buffer(struct extent_io_tree *tree, +struct extent_buffer *find_first_extent_buffer(struct btrfs_fs_info *fs_info, u64 start); struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize); diff --git a/kernel-shared/transaction.c b/kernel-shared/transaction.c index 28b16848..c50abfca 100644 --- a/kernel-shared/transaction.c +++ b/kernel-shared/transaction.c @@ -150,7 +150,7 @@ again: goto again; while(start <= end) { - eb = find_first_extent_buffer(tree, start); + eb = find_first_extent_buffer(fs_info, start); BUG_ON(!eb || eb->start != start); ret = write_tree_block(trans, fs_info, eb); if (ret < 0) { @@ -180,7 +180,7 @@ cleanup: break; while (start <= end) { - eb = find_first_extent_buffer(tree, start); + eb = find_first_extent_buffer(fs_info, start); BUG_ON(!eb || eb->start != start); start += eb->len; clear_extent_buffer_dirty(eb); From patchwork Wed Nov 23 22:37:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054409 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 15/29] btrfs-progs: move dirty eb tracking to its own io_tree
Date: Wed, 23 Nov 2022 17:37:23 -0500
Message-Id: <66d6451a0c3d62cecfd2bcfc70c4b4c7f990ccc1.1669242804.git.josef@toxicpanda.com>

btrfs-progs has a cache tree embedded in the extent_io_tree in order to track extent buffers. We use the extent_io_tree part to track the dirty state and the cache tree to hold the extent buffers themselves. When we sync extent-io-tree.[ch] we'll lose this ability, so separate the dirty tracking out into its own extent_io_tree.
Subsequent patches will adjust the extent buffer lookup so it doesn't use the custom extent_io_tree thing. Signed-off-by: Josef Bacik --- kernel-shared/ctree.h | 1 + kernel-shared/disk-io.c | 2 ++ kernel-shared/extent_io.c | 4 ++-- kernel-shared/transaction.c | 2 +- 4 files changed, 6 insertions(+), 3 deletions(-) diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 3f674484..b9a58325 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -1218,6 +1218,7 @@ struct btrfs_fs_info { struct btrfs_root *log_root_tree; struct extent_io_tree extent_cache; + struct extent_io_tree dirty_buffers; struct extent_io_tree free_space_cache; struct extent_io_tree pinned_extents; struct extent_io_tree extent_ins; diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index ad4d0f4c..382d15f5 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -867,6 +867,7 @@ struct btrfs_fs_info *btrfs_new_fs_info(int writable, u64 sb_bytenr) goto free_all; extent_io_tree_init(&fs_info->extent_cache); + extent_io_tree_init(&fs_info->dirty_buffers); extent_io_tree_init(&fs_info->free_space_cache); extent_io_tree_init(&fs_info->pinned_extents); extent_io_tree_init(&fs_info->extent_ins); @@ -1350,6 +1351,7 @@ void btrfs_cleanup_all_caches(struct btrfs_fs_info *fs_info) free_extent_buffer(eb); } free_mapping_cache_tree(&fs_info->mapping_tree.cache_tree); + extent_io_tree_cleanup(&fs_info->dirty_buffers); extent_io_tree_cleanup(&fs_info->extent_cache); extent_io_tree_cleanup(&fs_info->free_space_cache); extent_io_tree_cleanup(&fs_info->pinned_extents); diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index bdfb2de6..4b6e0bee 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -1042,7 +1042,7 @@ out: int set_extent_buffer_dirty(struct extent_buffer *eb) { - struct extent_io_tree *tree = &eb->fs_info->extent_cache; + struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; if (!(eb->flags & EXTENT_DIRTY)) { eb->flags |= EXTENT_DIRTY; set_extent_dirty(tree, eb->start, eb->start + eb->len - 1); @@ -1053,7 +1053,7 @@ int set_extent_buffer_dirty(struct extent_buffer *eb) int clear_extent_buffer_dirty(struct extent_buffer *eb) { - struct extent_io_tree *tree = &eb->fs_info->extent_cache; + struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; if (eb->flags & EXTENT_DIRTY) { eb->flags &= ~EXTENT_DIRTY; clear_extent_dirty(tree, eb->start, eb->start + eb->len - 1); diff --git a/kernel-shared/transaction.c b/kernel-shared/transaction.c index c50abfca..c1364d69 100644 --- a/kernel-shared/transaction.c +++ b/kernel-shared/transaction.c @@ -136,7 +136,7 @@ int __commit_transaction(struct btrfs_trans_handle *trans, u64 end; struct btrfs_fs_info *fs_info = root->fs_info; struct extent_buffer *eb; - struct extent_io_tree *tree = &fs_info->extent_cache; + struct extent_io_tree *tree = &fs_info->dirty_buffers; int ret; while(1) { From patchwork Wed Nov 23 22:37:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054407 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDCF0C4167D for ; Wed, 23 Nov 2022 22:38:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229750AbiKWWiX (ORCPT ); Wed, 23 Nov 2022 17:38:23 -0500 Received: from 
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 16/29] btrfs-progs: do not pass io_tree into verify_parent_transid
Date: Wed, 23 Nov 2022 17:37:24 -0500
Message-Id: <5d9e5ce4f0a12d98f07b3bbb062246a85091cbd2.1669242804.git.josef@toxicpanda.com>

We do not use the io_tree in verify_parent_transid, so don't bother passing it in.
Signed-off-by: Josef Bacik --- kernel-shared/disk-io.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index 382d15f5..8c428ade 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -258,8 +258,7 @@ void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr, kfree(multi); } -static int verify_parent_transid(struct extent_io_tree *io_tree, - struct extent_buffer *eb, u64 parent_transid, +static int verify_parent_transid(struct extent_buffer *eb, u64 parent_transid, int ignore) { int ret; @@ -374,8 +373,7 @@ struct extent_buffer* read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr, ret = read_whole_eb(fs_info, eb, mirror_num); if (ret == 0 && csum_tree_block(fs_info, eb, 1) == 0 && check_tree_block(fs_info, eb) == 0 && - verify_parent_transid(&fs_info->extent_cache, eb, - parent_transid, ignore) == 0) { + verify_parent_transid(eb, parent_transid, ignore) == 0) { if (eb->flags & EXTENT_BAD_TRANSID && list_empty(&eb->recow)) { list_add_tail(&eb->recow, @@ -2273,8 +2271,7 @@ int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid) if (!ret) return ret; - ret = verify_parent_transid(&buf->fs_info->extent_cache, buf, - parent_transid, + ret = verify_parent_transid(buf, parent_transid, buf->fs_info->allow_transid_mismatch); return !ret; } From patchwork Wed Nov 23 22:37:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054413 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 254B3C46467 for ; Wed, 23 Nov 2022 22:38:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229788AbiKWWi3 (ORCPT ); Wed, 23 Nov 2022 17:38:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229764AbiKWWiE (ORCPT ); Wed, 23 Nov 2022 17:38:04 -0500 Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com [IPv6:2607:f8b0:4864:20::733]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 428C31759D for ; Wed, 23 Nov 2022 14:38:02 -0800 (PST) Received: by mail-qk1-x733.google.com with SMTP id d7so13485998qkk.3 for ; Wed, 23 Nov 2022 14:38:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=gxRflk/AZejra4oEnnwbIhPblcA7v3IrlRRNU2k7alA=; b=e2RJNJIwfShwemKcUsduoZqwuQaDQlH3agVQ7df78IkTiVXF1FMihnalcEMIbxyzL7 jL9dzlo/wcNNrg9qq++eBnrs9SA0DyF9CyFfpPT3CdKQPGbVnaJcQOY55bU45K/hXQmf vhA6lmhIj2/nBGlTYQwakBq+RZ8pU4zBZlDcL71fRZ9bNCMCQqSzA0HeQL1XsKs21Emw wOAmWgJl6IN2aeovDAFABAbdB5APi641AIa9eL3qBbOrNOsHiSpZ4rik2tpukNtFjtXe U5ZIPrRqPNYd6AAfu/q7MCLQqUXo11Zl9Rj0c9SrmcPHZt6j38fUwTCtfQAA8or1QfXx j3ow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gxRflk/AZejra4oEnnwbIhPblcA7v3IrlRRNU2k7alA=; b=wOIBJvl57KxPUSDKA+R9TULjRz7y66Jl+LATuNioiFrO57tRB6MiOsrKylLLQ2Rp3Q 
sh5u2RMmR2ReBJp8P9r4CElGujf7jIpCOb+z12q4OsC7BFBEBdKbPmm/Nfva9uqYRE4k a8Sk8lk22XYGsN7OafD1O7wfLu5mvZKbOOrJdO/ZTgg8Qws23y/+Y5ci4n4gcvTRRHK9 i4gWbqEu9ZTLCbtvaODdZNKAKqyZZpRJnFAVyY4I85bBa/6JNfeQriau2aSpoccHzEzq Bp2hSfoEhZvm4jdOSbxhRyB1gipBg0svy/i3ZcEE2Z2MyNR/XFANskOChItKX7B1nwxJ 1g5g== X-Gm-Message-State: ANoB5pnJDI67LHDFid+XZ4aABy5fgQj2ujIDRbJGOTIuhyS4ZqmOEZZr W5rlo27fBpT5ThFJF4KaMWiEwEU14tB8UQ== X-Google-Smtp-Source: AA0mqf7T2AMtVF9HjlI82Damm24l3D1GXOcKsTVS/q+KO3C+r3TDqL116UCkqXVHR4x0/6GGyLQitg== X-Received: by 2002:a05:620a:8c8:b0:6fb:cf37:a30e with SMTP id z8-20020a05620a08c800b006fbcf37a30emr24195079qkz.306.1669243081479; Wed, 23 Nov 2022 14:38:01 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id d12-20020ac8060c000000b0039d085a2571sm10446978qth.55.2022.11.23.14.38.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:01 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 17/29] btrfs-progs: move extent cache code directly into btrfs_fs_info Date: Wed, 23 Nov 2022 17:37:25 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We have some extra features in the btrfs-progs copy of the extent_io_tree that don't exist in the kernel. In order to make syncing easier simply move this functionality into btrfs_fs_info, that way we can sync in the new extent_io_tree code and not have to worry about breaking anything. Signed-off-by: Josef Bacik --- kernel-shared/ctree.h | 6 +++- kernel-shared/disk-io.c | 4 +-- kernel-shared/extent_io.c | 76 ++++++++++++++++++++++++++------------- kernel-shared/extent_io.h | 2 ++ 4 files changed, 60 insertions(+), 28 deletions(-) diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index b9a58325..d359753b 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -1217,7 +1217,11 @@ struct btrfs_fs_info { /* the log root tree is a directory of all the other log roots */ struct btrfs_root *log_root_tree; - struct extent_io_tree extent_cache; + struct cache_tree extent_cache; + u64 max_cache_size; + u64 cache_size; + struct list_head lru; + struct extent_io_tree dirty_buffers; struct extent_io_tree free_space_cache; struct extent_io_tree pinned_extents; diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index 8c428ade..c266f9c2 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -864,7 +864,7 @@ struct btrfs_fs_info *btrfs_new_fs_info(int writable, u64 sb_bytenr) !fs_info->block_group_root || !fs_info->super_copy) goto free_all; - extent_io_tree_init(&fs_info->extent_cache); + extent_buffer_init_cache(fs_info); extent_io_tree_init(&fs_info->dirty_buffers); extent_io_tree_init(&fs_info->free_space_cache); extent_io_tree_init(&fs_info->pinned_extents); @@ -1350,7 +1350,7 @@ void btrfs_cleanup_all_caches(struct btrfs_fs_info *fs_info) } free_mapping_cache_tree(&fs_info->mapping_tree.cache_tree); extent_io_tree_cleanup(&fs_info->dirty_buffers); - extent_io_tree_cleanup(&fs_info->extent_cache); + extent_buffer_free_cache(fs_info); extent_io_tree_cleanup(&fs_info->free_space_cache); extent_io_tree_cleanup(&fs_info->pinned_extents); extent_io_tree_cleanup(&fs_info->extent_ins); diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index 4b6e0bee..492857b0 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -34,13 +34,45 @@ #include 
"common/device-utils.h" #include "common/internal.h" +static void free_extent_buffer_final(struct extent_buffer *eb); + +void extent_buffer_init_cache(struct btrfs_fs_info *fs_info) +{ + fs_info->max_cache_size = total_memory() / 4; + fs_info->cache_size = 0; + INIT_LIST_HEAD(&fs_info->lru); +} + +void extent_buffer_free_cache(struct btrfs_fs_info *fs_info) +{ + struct extent_buffer *eb; + + while(!list_empty(&fs_info->lru)) { + eb = list_entry(fs_info->lru.next, struct extent_buffer, lru); + if (eb->refs) { + /* + * Reset extent buffer refs to 1, so the + * free_extent_buffer_nocache() can free it for sure. + */ + eb->refs = 1; + fprintf(stderr, + "extent buffer leak: start %llu len %u\n", + (unsigned long long)eb->start, eb->len); + free_extent_buffer_nocache(eb); + } else { + free_extent_buffer_final(eb); + } + } + + free_extent_cache_tree(&fs_info->extent_cache); + fs_info->cache_size = 0; +} + void extent_io_tree_init(struct extent_io_tree *tree) { cache_tree_init(&tree->state); cache_tree_init(&tree->cache); INIT_LIST_HEAD(&tree->lru); - tree->cache_size = 0; - tree->max_cache_size = (u64)total_memory() / 4; } static struct extent_state *alloc_extent_state(void) @@ -73,7 +105,6 @@ static void free_extent_state_func(struct cache_extent *cache) btrfs_free_extent_state(es); } -static void free_extent_buffer_final(struct extent_buffer *eb); void extent_io_tree_cleanup(struct extent_io_tree *tree) { struct extent_buffer *eb; @@ -644,11 +675,9 @@ static void free_extent_buffer_final(struct extent_buffer *eb) BUG_ON(eb->refs); list_del_init(&eb->lru); if (!(eb->flags & EXTENT_BUFFER_DUMMY)) { - struct extent_io_tree *tree = &eb->fs_info->extent_cache; - - remove_cache_extent(&tree->cache, &eb->cache_node); - BUG_ON(tree->cache_size < eb->len); - tree->cache_size -= eb->len; + remove_cache_extent(&eb->fs_info->extent_cache, &eb->cache_node); + BUG_ON(eb->fs_info->cache_size < eb->len); + eb->fs_info->cache_size -= eb->len; } free(eb); } @@ -685,15 +714,14 @@ void free_extent_buffer_nocache(struct extent_buffer *eb) struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize) { - struct extent_io_tree *tree = &fs_info->extent_cache; struct extent_buffer *eb = NULL; struct cache_extent *cache; - cache = lookup_cache_extent(&tree->cache, bytenr, blocksize); + cache = lookup_cache_extent(&fs_info->extent_cache, bytenr, blocksize); if (cache && cache->start == bytenr && cache->size == blocksize) { eb = container_of(cache, struct extent_buffer, cache_node); - list_move_tail(&eb->lru, &tree->lru); + list_move_tail(&eb->lru, &fs_info->lru); eb->refs++; } return eb; @@ -702,27 +730,26 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, struct extent_buffer *find_first_extent_buffer(struct btrfs_fs_info *fs_info, u64 start) { - struct extent_io_tree *tree = &fs_info->extent_cache; struct extent_buffer *eb = NULL; struct cache_extent *cache; - cache = search_cache_extent(&tree->cache, start); + cache = search_cache_extent(&fs_info->extent_cache, start); if (cache) { eb = container_of(cache, struct extent_buffer, cache_node); - list_move_tail(&eb->lru, &tree->lru); + list_move_tail(&eb->lru, &fs_info->lru); eb->refs++; } return eb; } -static void trim_extent_buffer_cache(struct extent_io_tree *tree) +static void trim_extent_buffer_cache(struct btrfs_fs_info *fs_info) { struct extent_buffer *eb, *tmp; - list_for_each_entry_safe(eb, tmp, &tree->lru, lru) { + list_for_each_entry_safe(eb, tmp, &fs_info->lru, lru) { if (eb->refs == 0) 
free_extent_buffer_final(eb); - if (tree->cache_size <= ((tree->max_cache_size * 9) / 10)) + if (fs_info->cache_size <= ((fs_info->max_cache_size * 9) / 10)) break; } } @@ -731,14 +758,13 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize) { struct extent_buffer *eb; - struct extent_io_tree *tree = &fs_info->extent_cache; struct cache_extent *cache; - cache = lookup_cache_extent(&tree->cache, bytenr, blocksize); + cache = lookup_cache_extent(&fs_info->extent_cache, bytenr, blocksize); if (cache && cache->start == bytenr && cache->size == blocksize) { eb = container_of(cache, struct extent_buffer, cache_node); - list_move_tail(&eb->lru, &tree->lru); + list_move_tail(&eb->lru, &fs_info->lru); eb->refs++; } else { int ret; @@ -751,15 +777,15 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, eb = __alloc_extent_buffer(fs_info, bytenr, blocksize); if (!eb) return NULL; - ret = insert_cache_extent(&tree->cache, &eb->cache_node); + ret = insert_cache_extent(&fs_info->extent_cache, &eb->cache_node); if (ret) { free(eb); return NULL; } - list_add_tail(&eb->lru, &tree->lru); - tree->cache_size += blocksize; - if (tree->cache_size >= tree->max_cache_size) - trim_extent_buffer_cache(tree); + list_add_tail(&eb->lru, &fs_info->lru); + fs_info->cache_size += blocksize; + if (fs_info->cache_size >= fs_info->max_cache_size) + trim_extent_buffer_cache(fs_info); } return eb; } diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index e4ae2dcd..1c7dbc51 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -165,5 +165,7 @@ void extent_buffer_bitmap_clear(struct extent_buffer *eb, unsigned long start, unsigned long pos, unsigned long len); void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start, unsigned long pos, unsigned long len); +void extent_buffer_init_cache(struct btrfs_fs_info *fs_info); +void extent_buffer_free_cache(struct btrfs_fs_info *fs_info); #endif From patchwork Wed Nov 23 22:37:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054411 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31D73C4332F for ; Wed, 23 Nov 2022 22:38:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229775AbiKWWi2 (ORCPT ); Wed, 23 Nov 2022 17:38:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55300 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229788AbiKWWiF (ORCPT ); Wed, 23 Nov 2022 17:38:05 -0500 Received: from mail-qt1-x831.google.com (mail-qt1-x831.google.com [IPv6:2607:f8b0:4864:20::831]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6A03183BF for ; Wed, 23 Nov 2022 14:38:03 -0800 (PST) Received: by mail-qt1-x831.google.com with SMTP id z6so136700qtv.5 for ; Wed, 23 Nov 2022 14:38:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=ubDtugAzgVt+qDmaQID8NTQFYmvWYfBQI4KbjPebtwY=; b=JW5Y8mbO3+lAAS1eghGv/nofv2wtGrWKTt0m1WZObXyZGTFk0mwF7kP0aRlKD37fVh 
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 18/29] btrfs-progs: delete state_private code
Date: Wed, 23 Nov 2022 17:37:26 -0500
Message-Id: <1a7227721b908e879cde8563edd4bbb0349ac4cf.1669242804.git.josef@toxicpanda.com>

We used to store random private things in extent_states, but we haven't done so for a while and there are no remaining users of this code, so simply delete it.
Signed-off-by: Josef Bacik --- kernel-shared/extent_io.c | 42 --------------------------------------- kernel-shared/extent_io.h | 2 -- 2 files changed, 44 deletions(-) diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index 492857b0..baaf7234 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -591,48 +591,6 @@ int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, return bitset; } -int set_state_private(struct extent_io_tree *tree, u64 start, u64 private) -{ - struct cache_extent *node; - struct extent_state *state; - int ret = 0; - - node = search_cache_extent(&tree->state, start); - if (!node) { - ret = -ENOENT; - goto out; - } - state = container_of(node, struct extent_state, cache_node); - if (state->start != start) { - ret = -ENOENT; - goto out; - } - state->xprivate = private; -out: - return ret; -} - -int get_state_private(struct extent_io_tree *tree, u64 start, u64 *private) -{ - struct cache_extent *node; - struct extent_state *state; - int ret = 0; - - node = search_cache_extent(&tree->state, start); - if (!node) { - ret = -ENOENT; - goto out; - } - state = container_of(node, struct extent_state, cache_node); - if (state->start != start) { - ret = -ENOENT; - goto out; - } - *private = state->xprivate; -out: - return ret; -} - static struct extent_buffer *__alloc_extent_buffer(struct btrfs_fs_info *info, u64 bytenr, u32 blocksize) { diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index 1c7dbc51..4529919a 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -127,8 +127,6 @@ static inline int extent_buffer_uptodate(struct extent_buffer *eb) return 0; } -int set_state_private(struct extent_io_tree *tree, u64 start, u64 xprivate); -int get_state_private(struct extent_io_tree *tree, u64 start, u64 *xprivate); struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info, u64 bytenr, u32 blocksize); struct extent_buffer *find_first_extent_buffer(struct btrfs_fs_info *fs_info, From patchwork Wed Nov 23 22:37:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054414 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB96CC433FE for ; Wed, 23 Nov 2022 22:38:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229486AbiKWWi3 (ORCPT ); Wed, 23 Nov 2022 17:38:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55390 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229803AbiKWWiG (ORCPT ); Wed, 23 Nov 2022 17:38:06 -0500 Received: from mail-qt1-x836.google.com (mail-qt1-x836.google.com [IPv6:2607:f8b0:4864:20::836]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DCE424BE0 for ; Wed, 23 Nov 2022 14:38:05 -0800 (PST) Received: by mail-qt1-x836.google.com with SMTP id jr19so132267qtb.7 for ; Wed, 23 Nov 2022 14:38:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=uR3lErDo7/kuATg8JdB5xvhjhL6I3cwUx15NLkCw2Ko=; b=HpcM90PyF7F67n+pELt8MTLW+yRxSxygnzboy1kslHW2AG3hxc1SkXhknmMYPESnmQ 
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v3 19/29] btrfs-progs: rename extent buffer flags to EXTENT_BUFFER_*
Date: Wed, 23 Nov 2022 17:37:27 -0500

We have been overloading the extent_state flags for use on the extent buffers as well. When we sync extent-io-tree.[ch] this will become impossible, so rename these flags to EXTENT_BUFFER_* and use those definitions instead of the extent_state definitions.
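As a rough sketch of how the two namespaces separate after this patch (report_eb_state() is a hypothetical debug helper; the EXTENT_BUFFER_* flags and the extent_buffer fields are taken from the patch, and the btrfs-progs header is assumed), extent buffers now test their own flag set rather than the shared extent_state EXTENT_* bits:

#include <stdio.h>
#include "kernel-shared/extent_io.h"

/* Hypothetical debug helper: report an extent buffer's state using the
 * EXTENT_BUFFER_* namespace introduced by this patch instead of the old
 * overloaded EXTENT_* extent_state bits. */
static void report_eb_state(const struct extent_buffer *eb)
{
	if (eb->flags & EXTENT_BUFFER_DIRTY)
		printf("eb %llu is dirty\n", (unsigned long long)eb->start);
	if (eb->flags & EXTENT_BUFFER_BAD_TRANSID)
		printf("eb %llu was read with a transid mismatch\n",
		       (unsigned long long)eb->start);
	if (!(eb->flags & EXTENT_BUFFER_UPTODATE))
		printf("eb %llu is not uptodate\n",
		       (unsigned long long)eb->start);
}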
Signed-off-by: Josef Bacik --- check/main.c | 2 +- check/mode-lowmem.c | 2 +- kernel-shared/ctree.c | 2 +- kernel-shared/disk-io.c | 4 ++-- kernel-shared/extent_io.c | 10 +++++----- kernel-shared/extent_io.h | 13 ++++++++----- 6 files changed, 18 insertions(+), 15 deletions(-) diff --git a/check/main.c b/check/main.c index 4d8d6882..4af6cd4e 100644 --- a/check/main.c +++ b/check/main.c @@ -3641,7 +3641,7 @@ static int check_fs_root(struct btrfs_root *root, super_generation + 1); generation_err = true; if (opt_check_repair) { - root->node->flags |= EXTENT_BAD_TRANSID; + root->node->flags |= EXTENT_BUFFER_BAD_TRANSID; ret = recow_extent_buffer(root, root->node); if (!ret) { printf("Reset generation for root %llu\n", diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index c62d8326..2cde3b63 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -5282,7 +5282,7 @@ static int check_btrfs_root(struct btrfs_root *root, int check_all) super_generation + 1); err |= INVALID_GENERATION; if (opt_check_repair) { - root->node->flags |= EXTENT_BAD_TRANSID; + root->node->flags |= EXTENT_BUFFER_BAD_TRANSID; ret = recow_extent_buffer(root, root->node); if (!ret) { printf("Reset generation for root %llu\n", diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c index d6ff0008..9b9fc9eb 100644 --- a/kernel-shared/ctree.c +++ b/kernel-shared/ctree.c @@ -473,7 +473,7 @@ int __btrfs_cow_block(struct btrfs_trans_handle *trans, write_extent_buffer(cow, root->fs_info->fs_devices->metadata_uuid, btrfs_header_fsid(), BTRFS_FSID_SIZE); - WARN_ON(!(buf->flags & EXTENT_BAD_TRANSID) && + WARN_ON(!(buf->flags & EXTENT_BUFFER_BAD_TRANSID) && btrfs_header_generation(buf) > trans->transid); update_ref_for_cow(trans, root, buf, cow); diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index c266f9c2..4050566a 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -276,7 +276,7 @@ static int verify_parent_transid(struct extent_buffer *eb, u64 parent_transid, (unsigned long long)parent_transid, (unsigned long long)btrfs_header_generation(eb)); if (ignore) { - eb->flags |= EXTENT_BAD_TRANSID; + eb->flags |= EXTENT_BUFFER_BAD_TRANSID; printk("Ignoring transid failure\n"); return 0; } @@ -374,7 +374,7 @@ struct extent_buffer* read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr, if (ret == 0 && csum_tree_block(fs_info, eb, 1) == 0 && check_tree_block(fs_info, eb) == 0 && verify_parent_transid(eb, parent_transid, ignore) == 0) { - if (eb->flags & EXTENT_BAD_TRANSID && + if (eb->flags & EXTENT_BUFFER_BAD_TRANSID && list_empty(&eb->recow)) { list_add_tail(&eb->recow, &fs_info->recow_ebs); diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index baaf7234..99191fe2 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -648,7 +648,7 @@ static void free_extent_buffer_internal(struct extent_buffer *eb, bool free_now) eb->refs--; BUG_ON(eb->refs < 0); if (eb->refs == 0) { - if (eb->flags & EXTENT_DIRTY) { + if (eb->flags & EXTENT_BUFFER_DIRTY) { warning( "dirty eb leak (aborted trans): start %llu len %u", eb->start, eb->len); @@ -1027,8 +1027,8 @@ out: int set_extent_buffer_dirty(struct extent_buffer *eb) { struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; - if (!(eb->flags & EXTENT_DIRTY)) { - eb->flags |= EXTENT_DIRTY; + if (!(eb->flags & EXTENT_BUFFER_DIRTY)) { + eb->flags |= EXTENT_BUFFER_DIRTY; set_extent_dirty(tree, eb->start, eb->start + eb->len - 1); extent_buffer_get(eb); } @@ -1038,8 +1038,8 @@ int set_extent_buffer_dirty(struct 
extent_buffer *eb) int clear_extent_buffer_dirty(struct extent_buffer *eb) { struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; - if (eb->flags & EXTENT_DIRTY) { - eb->flags &= ~EXTENT_DIRTY; + if (eb->flags & EXTENT_BUFFER_DIRTY) { + eb->flags &= ~EXTENT_BUFFER_DIRTY; clear_extent_dirty(tree, eb->start, eb->start + eb->len - 1); free_extent_buffer(eb); } diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index 4529919a..88fb6171 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -33,10 +33,13 @@ #define EXTENT_DEFRAG_DONE (1U << 7) #define EXTENT_BUFFER_FILLED (1U << 8) #define EXTENT_CSUM (1U << 9) -#define EXTENT_BAD_TRANSID (1U << 10) -#define EXTENT_BUFFER_DUMMY (1U << 11) #define EXTENT_IOBITS (EXTENT_LOCKED | EXTENT_WRITEBACK) +#define EXTENT_BUFFER_UPTODATE (1U << 0) +#define EXTENT_BUFFER_DIRTY (1U << 1) +#define EXTENT_BUFFER_BAD_TRANSID (1U << 2) +#define EXTENT_BUFFER_DUMMY (1U << 3) + #define BLOCK_GROUP_DATA (1U << 1) #define BLOCK_GROUP_METADATA (1U << 2) #define BLOCK_GROUP_SYSTEM (1U << 4) @@ -108,13 +111,13 @@ int set_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end); int clear_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end); static inline int set_extent_buffer_uptodate(struct extent_buffer *eb) { - eb->flags |= EXTENT_UPTODATE; + eb->flags |= EXTENT_BUFFER_UPTODATE; return 0; } static inline int clear_extent_buffer_uptodate(struct extent_buffer *eb) { - eb->flags &= ~EXTENT_UPTODATE; + eb->flags &= ~EXTENT_BUFFER_UPTODATE; return 0; } @@ -122,7 +125,7 @@ static inline int extent_buffer_uptodate(struct extent_buffer *eb) { if (!eb || IS_ERR(eb)) return 0; - if (eb->flags & EXTENT_UPTODATE) + if (eb->flags & EXTENT_BUFFER_UPTODATE) return 1; return 0; } From patchwork Wed Nov 23 22:37:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054418 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65A5FC3A59F for ; Wed, 23 Nov 2022 22:38:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229818AbiKWWic (ORCPT ); Wed, 23 Nov 2022 17:38:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55502 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229627AbiKWWiL (ORCPT ); Wed, 23 Nov 2022 17:38:11 -0500 Received: from mail-qt1-x830.google.com (mail-qt1-x830.google.com [IPv6:2607:f8b0:4864:20::830]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C085C742F4 for ; Wed, 23 Nov 2022 14:38:07 -0800 (PST) Received: by mail-qt1-x830.google.com with SMTP id e15so148563qts.1 for ; Wed, 23 Nov 2022 14:38:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=pQhn3vjssZPtS02pgq6ylL3R/Hc1KRmrpvq83sXSRSc=; b=Py2VWFRWIjNTK68WW7DIL3auPBA5OC5Tr5AGQCC2N7CH8MIwunEBum91Txz4KJWsF7 mkgrmlzI+ako6o39EvkLfAXonnYAhnTC00wMbm/bnD73d0CL2cpdQBri1jvnOOY6ezMD GWiNt15QIh2ziJNtrjx2KV2cRH+VWcbPoACegkkIj8DHp0MOqOSfQ6xMqsyYpukLe3nB XmjYAdyBOKNYecjtd+PaLf+aAMks2LHNi70wkTgPrQsj8YD/BQ326+wyJxrvvReo/mBB 
SAK1V0yNuz9aiBKPFzDh7xSq7QBBNsztPrfOoXIU2y9cReowvQjY2vhHjPOs+LihlMMA rFTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pQhn3vjssZPtS02pgq6ylL3R/Hc1KRmrpvq83sXSRSc=; b=05WLYfDzUc7TbmlrdpNPRrRbf/ntO1MBDPkWhDR0pr5ycRd5wfdxZDTs21GZNdu1Sf Vo43PFD7IC4NznTQKweb123kjCdWY+eFwwPglJWQqybyJIuEVECmymi45YxYsnd3PyLa hl6H3uEIeyHzCyBaTCUo/H/n7pTcZ8DWZ+BsufpI67I+HMYpJ2ZWkot2j71tj2UnQ/Ux X9J2bxGvRgbj6USf6kzc6Ep4bA3EcxLU2qpwM19eLbn1zfDFW1M4AyUtsAvQyawPBo1l AmM5lY8Pz+SDweBgoWVxMetbrH7HxjVTYhUVU9u0TnzMuOdsszZLJYKTZX8xM38F12AJ qICw== X-Gm-Message-State: ANoB5pkWO+RHYciW3Xak24dCyD9iV/+u4Rv1xb711Xuk1FB6YdvSi+cq rk8OsM8LT7wQwGI7B4+MHn2GjTklrzWPvQ== X-Google-Smtp-Source: AA0mqf7e2/1Oc8sfjxy4e7A5r2tSLhr4WSvH8gzI2WVeXmS7o/t1OZfi55ppeY14Bm3t301es/okeQ== X-Received: by 2002:ac8:51d3:0:b0:3a5:f916:1d35 with SMTP id d19-20020ac851d3000000b003a5f9161d35mr27805252qtn.435.1669243085442; Wed, 23 Nov 2022 14:38:05 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id x8-20020ac87a88000000b003a494b61e67sm10366681qtr.46.2022.11.23.14.38.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:05 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 20/29] btrfs-progs: sync ondisk definitions from the kernel Date: Wed, 23 Nov 2022 17:37:28 -0500 Message-Id: <09bba248e2dbf3d262238c8a75cb478229fd6988.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This pulls in the kernel's btrfs_tree.h, which now has all of the ondisk definitions. Include this into ctree.h, and then yank out all the duplicate code from ctree.h. Signed-off-by: Josef Bacik --- kernel-shared/ctree.h | 950 +---------------------- kernel-shared/uapi/btrfs_tree.h | 1259 +++++++++++++++++++++++++++++++ 2 files changed, 1260 insertions(+), 949 deletions(-) create mode 100644 kernel-shared/uapi/btrfs_tree.h diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index d359753b..6dfc3fde 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -26,11 +26,11 @@ #include "common/extent-cache.h" #include "kernel-shared/extent_io.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/uapi/btrfs_tree.h" struct btrfs_root; struct btrfs_trans_handle; struct btrfs_free_space_ctl; -#define BTRFS_MAGIC 0x4D5F53665248425FULL /* ascii _BHRfS_M, no null */ /* * Fake signature for an unfinalized filesystem, which only has barebone tree @@ -42,272 +42,10 @@ struct btrfs_free_space_ctl; #define BTRFS_MAX_MIRRORS 3 -#define BTRFS_MAX_LEVEL 8 - -/* holds pointers to all of the tree roots */ -#define BTRFS_ROOT_TREE_OBJECTID 1ULL - -/* stores information about which extents are in use, and reference counts */ -#define BTRFS_EXTENT_TREE_OBJECTID 2ULL - -/* - * chunk tree stores translations from logical -> physical block numbering - * the super block points to the chunk tree - */ -#define BTRFS_CHUNK_TREE_OBJECTID 3ULL - -/* - * stores information about which areas of a given device are in use. - * one per device. 
The tree of tree roots points to the device tree - */ -#define BTRFS_DEV_TREE_OBJECTID 4ULL - -/* one per subvolume, storing files and directories */ -#define BTRFS_FS_TREE_OBJECTID 5ULL - -/* directory objectid inside the root tree */ -#define BTRFS_ROOT_TREE_DIR_OBJECTID 6ULL -/* holds checksums of all the data extents */ -#define BTRFS_CSUM_TREE_OBJECTID 7ULL -#define BTRFS_QUOTA_TREE_OBJECTID 8ULL - -/* for storing items that use the BTRFS_UUID_KEY* */ -#define BTRFS_UUID_TREE_OBJECTID 9ULL - -/* tracks free space in block groups. */ -#define BTRFS_FREE_SPACE_TREE_OBJECTID 10ULL - -/* hold the block group items. */ -#define BTRFS_BLOCK_GROUP_TREE_OBJECTID 11ULL - -/* device stats in the device tree */ -#define BTRFS_DEV_STATS_OBJECTID 0ULL - -/* for storing balance parameters in the root tree */ -#define BTRFS_BALANCE_OBJECTID -4ULL - -/* orphan objectid for tracking unlinked/truncated files */ -#define BTRFS_ORPHAN_OBJECTID -5ULL - -/* does write ahead logging to speed up fsyncs */ -#define BTRFS_TREE_LOG_OBJECTID -6ULL -#define BTRFS_TREE_LOG_FIXUP_OBJECTID -7ULL - -/* space balancing */ -#define BTRFS_TREE_RELOC_OBJECTID -8ULL -#define BTRFS_DATA_RELOC_TREE_OBJECTID -9ULL - -/* - * extent checksums all have this objectid - * this allows them to share the logging tree - * for fsyncs - */ -#define BTRFS_EXTENT_CSUM_OBJECTID -10ULL - -/* For storing free space cache */ -#define BTRFS_FREE_SPACE_OBJECTID -11ULL - -/* - * The inode number assigned to the special inode for storing - * free ino cache - */ -#define BTRFS_FREE_INO_OBJECTID -12ULL - -/* dummy objectid represents multiple objectids */ -#define BTRFS_MULTIPLE_OBJECTIDS -255ULL - -/* - * All files have objectids in this range. - */ -#define BTRFS_FIRST_FREE_OBJECTID 256ULL -#define BTRFS_LAST_FREE_OBJECTID -256ULL -#define BTRFS_FIRST_CHUNK_TREE_OBJECTID 256ULL - - - -/* - * the device items go into the chunk tree. The key is in the form - * [ 1 BTRFS_DEV_ITEM_KEY device_id ] - */ -#define BTRFS_DEV_ITEMS_OBJECTID 1ULL - -#define BTRFS_EMPTY_SUBVOL_DIR_OBJECTID 2ULL - -/* - * the max metadata block size. This limit is somewhat artificial, - * but the memmove costs go through the roof for larger blocks. - */ -#define BTRFS_MAX_METADATA_BLOCKSIZE 65536 - -/* - * we can actually store much bigger names, but lets not confuse the rest - * of linux - */ -#define BTRFS_NAME_LEN 255 - -/* - * Theoretical limit is larger, but we keep this down to a sane - * value. That should limit greatly the possibility of collisions on - * inode ref items. - */ -#define BTRFS_LINK_MAX 65535U - -/* 32 bytes in various csum fields */ -#define BTRFS_CSUM_SIZE 32 - -/* csum types */ -enum btrfs_csum_type { - BTRFS_CSUM_TYPE_CRC32 = 0, - BTRFS_CSUM_TYPE_XXHASH = 1, - BTRFS_CSUM_TYPE_SHA256 = 2, - BTRFS_CSUM_TYPE_BLAKE2 = 3, -}; - -#define BTRFS_EMPTY_DIR_SIZE 0 - -#define BTRFS_FT_UNKNOWN 0 -#define BTRFS_FT_REG_FILE 1 -#define BTRFS_FT_DIR 2 -#define BTRFS_FT_CHRDEV 3 -#define BTRFS_FT_BLKDEV 4 -#define BTRFS_FT_FIFO 5 -#define BTRFS_FT_SOCK 6 -#define BTRFS_FT_SYMLINK 7 -#define BTRFS_FT_XATTR 8 -#define BTRFS_FT_MAX 9 - -#define BTRFS_ROOT_SUBVOL_RDONLY (1ULL << 0) - -/* - * the key defines the order in the tree, and so it also defines (optimal) - * block layout. objectid corresponds to the inode number. The flags - * tells us things about the object, and is a kind of stream selector. - * so for a given inode, keys with flags of 1 might refer to the inode - * data, flags of 2 may point to file data in the btree and flags == 3 - * may point to extents. 
- * - * offset is the starting byte offset for this key in the stream. - * - * btrfs_disk_key is in disk byte order. struct btrfs_key is always - * in cpu native order. Otherwise they are identical and their sizes - * should be the same (ie both packed) - */ -struct btrfs_disk_key { - __le64 objectid; - u8 type; - __le64 offset; -} __attribute__ ((__packed__)); - -struct btrfs_key { - u64 objectid; - u8 type; - u64 offset; -} __attribute__ ((__packed__)); - struct btrfs_mapping_tree { struct cache_tree cache_tree; }; -#define BTRFS_UUID_SIZE 16 -struct btrfs_dev_item { - /* the internal btrfs device id */ - __le64 devid; - - /* size of the device */ - __le64 total_bytes; - - /* bytes used */ - __le64 bytes_used; - - /* optimal io alignment for this device */ - __le32 io_align; - - /* optimal io width for this device */ - __le32 io_width; - - /* minimal io size for this device */ - __le32 sector_size; - - /* type and info about this device */ - __le64 type; - - /* expected generation for this device */ - __le64 generation; - - /* - * starting byte of this partition on the device, - * to allow for stripe alignment in the future - */ - __le64 start_offset; - - /* grouping information for allocation decisions */ - __le32 dev_group; - - /* seek speed 0-100 where 100 is fastest */ - u8 seek_speed; - - /* bandwidth 0-100 where 100 is fastest */ - u8 bandwidth; - - /* btrfs generated uuid for this device */ - u8 uuid[BTRFS_UUID_SIZE]; - - /* uuid of FS who owns this device */ - u8 fsid[BTRFS_UUID_SIZE]; -} __attribute__ ((__packed__)); - -struct btrfs_stripe { - __le64 devid; - __le64 offset; - u8 dev_uuid[BTRFS_UUID_SIZE]; -} __attribute__ ((__packed__)); - -struct btrfs_chunk { - /* size of this chunk in bytes */ - __le64 length; - - /* objectid of the root referencing this chunk */ - __le64 owner; - - __le64 stripe_len; - __le64 type; - - /* optimal io alignment for this chunk */ - __le32 io_align; - - /* optimal io width for this chunk */ - __le32 io_width; - - /* minimal io size for this chunk */ - __le32 sector_size; - - /* 2^16 stripes is quite a lot, a second limit is the size of a single - * item in the btree - */ - __le16 num_stripes; - - /* sub stripes only matter for raid10 */ - __le16 sub_stripes; - struct btrfs_stripe stripe; - /* additional stripes go here */ -} __attribute__ ((__packed__)); - -#define BTRFS_FREE_SPACE_EXTENT 1 -#define BTRFS_FREE_SPACE_BITMAP 2 - -struct btrfs_free_space_entry { - __le64 offset; - __le64 bytes; - u8 type; -} __attribute__ ((__packed__)); - -struct btrfs_free_space_header { - struct btrfs_disk_key location; - __le64 generation; - __le64 num_entries; - __le64 num_bitmaps; -} __attribute__ ((__packed__)); - static inline unsigned long btrfs_chunk_item_size(int num_stripes) { BUG_ON(num_stripes == 0); @@ -315,13 +53,6 @@ static inline unsigned long btrfs_chunk_item_size(int num_stripes) sizeof(struct btrfs_stripe) * (num_stripes - 1); } -#define BTRFS_HEADER_FLAG_WRITTEN (1ULL << 0) -#define BTRFS_HEADER_FLAG_RELOC (1ULL << 1) -#define BTRFS_SUPER_FLAG_SEEDING (1ULL << 32) -#define BTRFS_SUPER_FLAG_METADUMP (1ULL << 33) -#define BTRFS_SUPER_FLAG_METADUMP_V2 (1ULL << 34) -#define BTRFS_SUPER_FLAG_CHANGING_FSID (1ULL << 35) -#define BTRFS_SUPER_FLAG_CHANGING_FSID_V2 (1ULL << 36) #define BTRFS_SUPER_FLAG_CHANGING_CSUM (1ULL << 37) /* @@ -331,32 +62,6 @@ static inline unsigned long btrfs_chunk_item_size(int num_stripes) */ #define BTRFS_SUPER_FLAG_CHANGING_BG_TREE (1ULL << 38) -#define BTRFS_BACKREF_REV_MAX 256 -#define BTRFS_BACKREF_REV_SHIFT 56 -#define 
BTRFS_BACKREF_REV_MASK (((u64)BTRFS_BACKREF_REV_MAX - 1) << \ - BTRFS_BACKREF_REV_SHIFT) - -#define BTRFS_OLD_BACKREF_REV 0 -#define BTRFS_MIXED_BACKREF_REV 1 - -/* - * every tree block (leaf or node) starts with this header. - */ -struct btrfs_header { - /* these first four must match the super block */ - u8 csum[BTRFS_CSUM_SIZE]; - u8 fsid[BTRFS_FSID_SIZE]; /* FS specific uuid */ - __le64 bytenr; /* which block this node is supposed to live in */ - __le64 flags; - - /* allowed to be different from the super from here on down */ - u8 chunk_tree_uuid[BTRFS_UUID_SIZE]; - __le64 generation; - __le64 owner; - __le32 nritems; - u8 level; -} __attribute__ ((__packed__)); - static inline u32 __BTRFS_LEAF_DATA_SIZE(u32 nodesize) { return nodesize - sizeof(struct btrfs_header); @@ -364,160 +69,9 @@ static inline u32 __BTRFS_LEAF_DATA_SIZE(u32 nodesize) #define BTRFS_LEAF_DATA_SIZE(fs_info) (fs_info->leaf_data_size) -/* - * this is a very generous portion of the super block, giving us - * room to translate 14 chunks with 3 stripes each. - */ -#define BTRFS_SYSTEM_CHUNK_ARRAY_SIZE 2048 -#define BTRFS_LABEL_SIZE 256 - -/* - * just in case we somehow lose the roots and are not able to mount, - * we store an array of the roots from previous transactions - * in the super. - */ -#define BTRFS_NUM_BACKUP_ROOTS 4 -struct btrfs_root_backup { - __le64 tree_root; - __le64 tree_root_gen; - - __le64 chunk_root; - __le64 chunk_root_gen; - - __le64 extent_root; - __le64 extent_root_gen; - - __le64 fs_root; - __le64 fs_root_gen; - - __le64 dev_root; - __le64 dev_root_gen; - - __le64 csum_root; - __le64 csum_root_gen; - - __le64 total_bytes; - __le64 bytes_used; - __le64 num_devices; - /* future */ - __le64 unsed_64[4]; - - u8 tree_root_level; - u8 chunk_root_level; - u8 extent_root_level; - u8 fs_root_level; - u8 dev_root_level; - u8 csum_root_level; - /* future and to align */ - u8 unused_8[10]; -} __attribute__ ((__packed__)); - #define BTRFS_SUPER_INFO_OFFSET (65536) #define BTRFS_SUPER_INFO_SIZE (4096) -/* - * the super block basically lists the main trees of the FS - * it currently lacks any block count etc etc - */ -struct btrfs_super_block { - u8 csum[BTRFS_CSUM_SIZE]; - /* the first 3 fields must match struct btrfs_header */ - u8 fsid[BTRFS_FSID_SIZE]; /* FS specific uuid */ - __le64 bytenr; /* this block number */ - __le64 flags; - - /* allowed to be different from the btrfs_header from here own down */ - __le64 magic; - __le64 generation; - __le64 root; - __le64 chunk_root; - __le64 log_root; - - /* - * This has never been used and is 0 in all versions. We always use - * generation + 1 to read log tree root. 
- */ - __le64 __unused_log_root_transid; - __le64 total_bytes; - __le64 bytes_used; - __le64 root_dir_objectid; - __le64 num_devices; - __le32 sectorsize; - __le32 nodesize; - /* Unused and must be equal to nodesize */ - __le32 __unused_leafsize; - __le32 stripesize; - __le32 sys_chunk_array_size; - __le64 chunk_root_generation; - __le64 compat_flags; - __le64 compat_ro_flags; - __le64 incompat_flags; - __le16 csum_type; - u8 root_level; - u8 chunk_root_level; - u8 log_root_level; - struct btrfs_dev_item dev_item; - - char label[BTRFS_LABEL_SIZE]; - - __le64 cache_generation; - __le64 uuid_tree_generation; - - u8 metadata_uuid[BTRFS_FSID_SIZE]; - - __le64 nr_global_roots; - - __le64 reserved[27]; - u8 sys_chunk_array[BTRFS_SYSTEM_CHUNK_ARRAY_SIZE]; - struct btrfs_root_backup super_roots[BTRFS_NUM_BACKUP_ROOTS]; - /* Padded to 4096 bytes */ - u8 padding[565]; -} __attribute__ ((__packed__)); -BUILD_ASSERT(sizeof(struct btrfs_super_block) == BTRFS_SUPER_INFO_SIZE); - -/* - * Compat flags that we support. If any incompat flags are set other than the - * ones specified below then we will fail to mount - */ -#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE (1ULL << 0) -/* - * Older kernels on big-endian systems produced broken free space tree bitmaps, - * and btrfs-progs also used to corrupt the free space tree. If this bit is - * clear, then the free space tree cannot be trusted. btrfs-progs can also - * intentionally clear this bit to ask the kernel to rebuild the free space - * tree. - */ -#define BTRFS_FEATURE_COMPAT_RO_FREE_SPACE_TREE_VALID (1ULL << 1) -#define BTRFS_FEATURE_COMPAT_RO_VERITY (1ULL << 2) - -/* - * Save all block group items into a dedicated block group tree, to greatly - * reduce mount time for large fs. - */ -#define BTRFS_FEATURE_COMPAT_RO_BLOCK_GROUP_TREE (1ULL << 3) - -#define BTRFS_FEATURE_INCOMPAT_MIXED_BACKREF (1ULL << 0) -#define BTRFS_FEATURE_INCOMPAT_DEFAULT_SUBVOL (1ULL << 1) -#define BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS (1ULL << 2) -#define BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO (1ULL << 3) -#define BTRFS_FEATURE_INCOMPAT_COMPRESS_ZSTD (1ULL << 4) - -/* - * older kernels tried to do bigger metadata blocks, but the - * code was pretty buggy. Lets not let them try anymore. - */ -#define BTRFS_FEATURE_INCOMPAT_BIG_METADATA (1ULL << 5) -#define BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF (1ULL << 6) -#define BTRFS_FEATURE_INCOMPAT_RAID56 (1ULL << 7) -#define BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA (1ULL << 8) -#define BTRFS_FEATURE_INCOMPAT_NO_HOLES (1ULL << 9) -#define BTRFS_FEATURE_INCOMPAT_METADATA_UUID (1ULL << 10) -#define BTRFS_FEATURE_INCOMPAT_RAID1C34 (1ULL << 11) -#define BTRFS_FEATURE_INCOMPAT_ZONED (1ULL << 12) -#define BTRFS_FEATURE_INCOMPAT_EXTENT_TREE_V2 (1ULL << 13) - -#define BTRFS_FEATURE_COMPAT_SUPP 0ULL - /* * The FREE_SPACE_TREE and FREE_SPACE_TREE_VALID compat_ro bits must not be * added here until read-write support for the free space tree is implemented in @@ -562,43 +116,6 @@ BUILD_ASSERT(sizeof(struct btrfs_super_block) == BTRFS_SUPER_INFO_SIZE); BTRFS_FEATURE_INCOMPAT_ZONED) #endif -/* - * A leaf is full of items. offset and size tell us where to find - * the item in the leaf (relative to the start of the data area) - */ -struct btrfs_item { - struct btrfs_disk_key key; - __le32 offset; - __le32 size; -} __attribute__ ((__packed__)); - -/* - * leaves have an item area and a data area: - * [item0, item1....itemN] [free space] [dataN...data1, data0] - * - * The data is separate from the items to get the keys closer together - * during searches. 
- */ -struct btrfs_leaf { - struct btrfs_header header; - struct btrfs_item items[]; -} __attribute__ ((__packed__)); - -/* - * all non-leaf blocks are nodes, they hold only keys and pointers to - * other blocks - */ -struct btrfs_key_ptr { - struct btrfs_disk_key key; - __le64 blockptr; - __le64 generation; -} __attribute__ ((__packed__)); - -struct btrfs_node { - struct btrfs_header header; - struct btrfs_key_ptr ptrs[]; -} __attribute__ ((__packed__)); - /* * btrfs_paths remember the path taken from the root down to the leaf. * level 0 is always the leaf, and nodes[1...BTRFS_MAX_LEVEL] will point @@ -627,92 +144,11 @@ struct btrfs_path { u8 skip_check_block; }; -/* - * items in the extent btree are used to record the objectid of the - * owner of the block and the number of references - */ - -struct btrfs_extent_item { - __le64 refs; - __le64 generation; - __le64 flags; -} __attribute__ ((__packed__)); - -struct btrfs_extent_item_v0 { - __le32 refs; -} __attribute__ ((__packed__)); - #define BTRFS_MAX_EXTENT_ITEM_SIZE(r) \ ((BTRFS_LEAF_DATA_SIZE(r->fs_info) >> 4) - \ sizeof(struct btrfs_item)) #define BTRFS_MAX_EXTENT_SIZE 128UL * 1024 * 1024 -#define BTRFS_EXTENT_FLAG_DATA (1ULL << 0) -#define BTRFS_EXTENT_FLAG_TREE_BLOCK (1ULL << 1) - -/* following flags only apply to tree blocks */ - -/* use full backrefs for extent pointers in the block*/ -#define BTRFS_BLOCK_FLAG_FULL_BACKREF (1ULL << 8) - -struct btrfs_tree_block_info { - struct btrfs_disk_key key; - u8 level; -} __attribute__ ((__packed__)); - -struct btrfs_extent_data_ref { - __le64 root; - __le64 objectid; - __le64 offset; - __le32 count; -} __attribute__ ((__packed__)); - -struct btrfs_shared_data_ref { - __le32 count; -} __attribute__ ((__packed__)); - -struct btrfs_extent_inline_ref { - u8 type; - __le64 offset; -} __attribute__ ((__packed__)); - -struct btrfs_extent_ref_v0 { - __le64 root; - __le64 generation; - __le64 objectid; - __le32 count; -} __attribute__ ((__packed__)); - -/* dev extents record free space on individual devices. The owner - * field points back to the chunk allocation mapping tree that allocated - * the extent. 
The chunk tree uuid field is a way to double check the owner - */ -struct btrfs_dev_extent { - __le64 chunk_tree; - __le64 chunk_objectid; - __le64 chunk_offset; - __le64 length; - u8 chunk_tree_uuid[BTRFS_UUID_SIZE]; -} __attribute__ ((__packed__)); - -struct btrfs_inode_ref { - __le64 index; - __le16 name_len; - /* name goes here */ -} __attribute__ ((__packed__)); - -struct btrfs_inode_extref { - __le64 parent_objectid; - __le64 index; - __le16 name_len; - __u8 name[0]; /* name goes here */ -} __attribute__ ((__packed__)); - -struct btrfs_timespec { - __le64 sec; - __le32 nsec; -} __attribute__ ((__packed__)); - typedef enum { BTRFS_COMPRESS_NONE = 0, BTRFS_COMPRESS_ZLIB = 1, @@ -722,12 +158,6 @@ typedef enum { BTRFS_COMPRESS_LAST = 4, } btrfs_compression_type; -/* we don't understand any encryption methods right now */ -typedef enum { - BTRFS_ENCRYPTION_NONE = 0, - BTRFS_ENCRYPTION_LAST = 1, -} btrfs_encryption_type; - enum btrfs_tree_block_status { BTRFS_TREE_BLOCK_CLEAN, BTRFS_TREE_BLOCK_INVALID_NRITEMS, @@ -739,269 +169,6 @@ enum btrfs_tree_block_status { BTRFS_TREE_BLOCK_INVALID_BLOCKPTR, }; -struct btrfs_inode_item { - /* nfs style generation number */ - __le64 generation; - /* transid that last touched this inode */ - __le64 transid; - __le64 size; - __le64 nbytes; - __le64 block_group; - __le32 nlink; - __le32 uid; - __le32 gid; - __le32 mode; - __le64 rdev; - __le64 flags; - - /* modification sequence number for NFS */ - __le64 sequence; - - /* - * a little future expansion, for more than this we can - * just grow the inode item and version it - */ - __le64 reserved[4]; - struct btrfs_timespec atime; - struct btrfs_timespec ctime; - struct btrfs_timespec mtime; - struct btrfs_timespec otime; -} __attribute__ ((__packed__)); - -struct btrfs_dir_log_item { - __le64 end; -} __attribute__ ((__packed__)); - -struct btrfs_dir_item { - struct btrfs_disk_key location; - __le64 transid; - __le16 data_len; - __le16 name_len; - u8 type; -} __attribute__ ((__packed__)); - -struct btrfs_root_item_v0 { - struct btrfs_inode_item inode; - __le64 generation; - __le64 root_dirid; - __le64 bytenr; - __le64 byte_limit; - __le64 bytes_used; - __le64 last_snapshot; - __le64 flags; - __le32 refs; - struct btrfs_disk_key drop_progress; - u8 drop_level; - u8 level; -} __attribute__ ((__packed__)); - -struct btrfs_root_item { - struct btrfs_inode_item inode; - __le64 generation; - __le64 root_dirid; - __le64 bytenr; - __le64 byte_limit; - __le64 bytes_used; - __le64 last_snapshot; - __le64 flags; - __le32 refs; - struct btrfs_disk_key drop_progress; - u8 drop_level; - u8 level; - - /* - * The following fields appear after subvol_uuids+subvol_times - * were introduced. - */ - - /* - * This generation number is used to test if the new fields are valid - * and up to date while reading the root item. Every time the root item - * is written out, the "generation" field is copied into this field. If - * anyone ever mounted the fs with an older kernel, we will have - * mismatching generation values here and thus must invalidate the - * new fields. See btrfs_update_root and btrfs_find_last_root for - * details. - * the offset of generation_v2 is also used as the start for the memset - * when invalidating the fields. - */ - __le64 generation_v2; - u8 uuid[BTRFS_UUID_SIZE]; - u8 parent_uuid[BTRFS_UUID_SIZE]; - u8 received_uuid[BTRFS_UUID_SIZE]; - __le64 ctransid; /* updated when an inode changes */ - __le64 otransid; /* trans when created */ - __le64 stransid; /* trans when sent. 
non-zero for received subvol */ - __le64 rtransid; /* trans when received. non-zero for received subvol */ - struct btrfs_timespec ctime; - struct btrfs_timespec otime; - struct btrfs_timespec stime; - struct btrfs_timespec rtime; - - /* - * If we want to use a specific set of fst/checksum/extent roots for - * this root. - */ - __le64 global_tree_id; - __le64 reserved[7]; /* for future */ -} __attribute__ ((__packed__)); - -/* - * this is used for both forward and backward root refs - */ -struct btrfs_root_ref { - __le64 dirid; - __le64 sequence; - __le16 name_len; -} __attribute__ ((__packed__)); - -struct btrfs_disk_balance_args { - /* - * profiles to operate on, single is denoted by - * BTRFS_AVAIL_ALLOC_BIT_SINGLE - */ - __le64 profiles; - - /* - * usage filter - * BTRFS_BALANCE_ARGS_USAGE with a single value means '0..N' - * BTRFS_BALANCE_ARGS_USAGE_RANGE - range syntax, min..max - */ - union { - __le64 usage; - struct { - __le32 usage_min; - __le32 usage_max; - }; - }; - - /* devid filter */ - __le64 devid; - - /* devid subset filter [pstart..pend) */ - __le64 pstart; - __le64 pend; - - /* btrfs virtual address space subset filter [vstart..vend) */ - __le64 vstart; - __le64 vend; - - /* - * profile to convert to, single is denoted by - * BTRFS_AVAIL_ALLOC_BIT_SINGLE - */ - __le64 target; - - /* BTRFS_BALANCE_ARGS_* */ - __le64 flags; - - /* - * BTRFS_BALANCE_ARGS_LIMIT with value 'limit' - * BTRFS_BALANCE_ARGS_LIMIT_RANGE - the extend version can use minimum - * and maximum - */ - union { - __le64 limit; - struct { - __le32 limit_min; - __le32 limit_max; - }; - }; - - /* - * Process chunks that cross stripes_min..stripes_max devices, - * BTRFS_BALANCE_ARGS_STRIPES_RANGE - */ - __le32 stripes_min; - __le32 stripes_max; - - __le64 unused[6]; -} __attribute__ ((__packed__)); - -/* - * store balance parameters to disk so that balance can be properly - * resumed after crash or unmount - */ -struct btrfs_balance_item { - /* BTRFS_BALANCE_* */ - __le64 flags; - - struct btrfs_disk_balance_args data; - struct btrfs_disk_balance_args meta; - struct btrfs_disk_balance_args sys; - - __le64 unused[4]; -} __attribute__ ((__packed__)); - -#define BTRFS_FILE_EXTENT_INLINE 0 -#define BTRFS_FILE_EXTENT_REG 1 -#define BTRFS_FILE_EXTENT_PREALLOC 2 - -struct btrfs_file_extent_item { - /* - * transaction id that created this extent - */ - __le64 generation; - /* - * max number of bytes to hold this extent in ram - * when we split a compressed extent we can't know how big - * each of the resulting pieces will be. So, this is - * an upper limit on the size of the extent in ram instead of - * an exact limit. - */ - __le64 ram_bytes; - - /* - * 32 bits for the various ways we might encode the data, - * including compression and encryption. If any of these - * are set to something a given disk format doesn't understand - * it is treated like an incompat flag for reading and writing, - * but not for stat. - */ - u8 compression; - u8 encryption; - __le16 other_encoding; /* spare for later use */ - - /* are we inline data or a real extent? */ - u8 type; - - /* - * Disk space consumed by the data extent - * Data checksum is stored in csum tree, thus no bytenr/length takes - * csum into consideration. - * - * The inline extent data starts at this offset in the structure. - */ - __le64 disk_bytenr; - __le64 disk_num_bytes; - /* - * The logical offset in file blocks. - * this extent record is for. 
This allows a file extent to point - * into the middle of an existing extent on disk, sharing it - * between two snapshots (useful if some bytes in the middle of the - * extent have changed - */ - __le64 offset; - /* - * The logical number of file blocks. This always reflects the size - * uncompressed and without encoding. - */ - __le64 num_bytes; - -} __attribute__ ((__packed__)); - -struct btrfs_dev_stats_item { - /* - * grow this item struct at the end for future enhancements and keep - * the existing values unchanged - */ - __le64 values[BTRFS_DEV_STAT_VALUES_MAX]; -} __attribute__ ((__packed__)); - -struct btrfs_csum_item { - u8 csum; -} __attribute__ ((__packed__)); - /* * We don't want to overwrite 1M at the beginning of device, even though * there is our 1st superblock at 64k. Some possible reasons: @@ -1010,20 +177,6 @@ struct btrfs_csum_item { */ #define BTRFS_BLOCK_RESERVED_1M_FOR_SUPER ((u64)1 * 1024 * 1024) -#define BTRFS_BLOCK_GROUP_DATA (1ULL << 0) -#define BTRFS_BLOCK_GROUP_SYSTEM (1ULL << 1) -#define BTRFS_BLOCK_GROUP_METADATA (1ULL << 2) -#define BTRFS_BLOCK_GROUP_RAID0 (1ULL << 3) -#define BTRFS_BLOCK_GROUP_RAID1 (1ULL << 4) -#define BTRFS_BLOCK_GROUP_DUP (1ULL << 5) -#define BTRFS_BLOCK_GROUP_RAID10 (1ULL << 6) -#define BTRFS_BLOCK_GROUP_RAID5 (1ULL << 7) -#define BTRFS_BLOCK_GROUP_RAID6 (1ULL << 8) -#define BTRFS_BLOCK_GROUP_RAID1C3 (1ULL << 9) -#define BTRFS_BLOCK_GROUP_RAID1C4 (1ULL << 10) -#define BTRFS_BLOCK_GROUP_RESERVED (BTRFS_AVAIL_ALLOC_BIT_SINGLE | \ - BTRFS_SPACE_INFO_GLOBAL_RSV) - enum btrfs_raid_types { BTRFS_RAID_RAID10, BTRFS_RAID_RAID1, @@ -1037,32 +190,6 @@ enum btrfs_raid_types { BTRFS_NR_RAID_TYPES }; -#define BTRFS_BLOCK_GROUP_TYPE_MASK (BTRFS_BLOCK_GROUP_DATA | \ - BTRFS_BLOCK_GROUP_SYSTEM | \ - BTRFS_BLOCK_GROUP_METADATA) - -#define BTRFS_BLOCK_GROUP_PROFILE_MASK (BTRFS_BLOCK_GROUP_RAID0 | \ - BTRFS_BLOCK_GROUP_RAID1 | \ - BTRFS_BLOCK_GROUP_RAID5 | \ - BTRFS_BLOCK_GROUP_RAID6 | \ - BTRFS_BLOCK_GROUP_RAID1C3 | \ - BTRFS_BLOCK_GROUP_RAID1C4 | \ - BTRFS_BLOCK_GROUP_DUP | \ - BTRFS_BLOCK_GROUP_RAID10) - -#define BTRFS_BLOCK_GROUP_RAID56_MASK (BTRFS_BLOCK_GROUP_RAID5 | \ - BTRFS_BLOCK_GROUP_RAID6) - -#define BTRFS_BLOCK_GROUP_RAID1_MASK (BTRFS_BLOCK_GROUP_RAID1 | \ - BTRFS_BLOCK_GROUP_RAID1C3 | \ - BTRFS_BLOCK_GROUP_RAID1C4) - -/* used in struct btrfs_balance_args fields */ -#define BTRFS_AVAIL_ALLOC_BIT_SINGLE (1ULL << 48) - -#define BTRFS_EXTENDED_PROFILE_MASK (BTRFS_BLOCK_GROUP_PROFILE_MASK | \ - BTRFS_AVAIL_ALLOC_BIT_SINGLE) - /* * GLOBAL_RSV does not exist as a on-disk block group type and is used * internally for exporting info about global block reserve from space infos @@ -1071,65 +198,11 @@ enum btrfs_raid_types { #define BTRFS_QGROUP_LEVEL_SHIFT 48 -static inline __u16 btrfs_qgroup_level(u64 qgroupid) -{ - return qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT; -} - static inline u64 btrfs_qgroup_subvid(u64 qgroupid) { return qgroupid & ((1ULL << BTRFS_QGROUP_LEVEL_SHIFT) - 1); } -#define BTRFS_QGROUP_STATUS_FLAG_ON (1ULL << 0) -#define BTRFS_QGROUP_STATUS_FLAG_RESCAN (1ULL << 1) -#define BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT (1ULL << 2) - -struct btrfs_qgroup_status_item { - __le64 version; - __le64 generation; - __le64 flags; - __le64 rescan; /* progress during scanning */ -} __attribute__ ((__packed__)); - -#define BTRFS_QGROUP_STATUS_VERSION 1 -struct btrfs_block_group_item { - __le64 used; - __le64 chunk_objectid; - __le64 flags; -} __attribute__ ((__packed__)); - -struct btrfs_free_space_info { - __le32 extent_count; - __le32 flags; -} 
__attribute__ ((__packed__)); - -#define BTRFS_FREE_SPACE_USING_BITMAPS (1ULL << 0) - -struct btrfs_qgroup_info_item { - __le64 generation; - __le64 rfer; - __le64 rfer_cmpr; - __le64 excl; - __le64 excl_cmpr; -} __attribute__ ((__packed__)); - -/* flags definition for qgroup limits */ -#define BTRFS_QGROUP_LIMIT_MAX_RFER (1ULL << 0) -#define BTRFS_QGROUP_LIMIT_MAX_EXCL (1ULL << 1) -#define BTRFS_QGROUP_LIMIT_RSV_RFER (1ULL << 2) -#define BTRFS_QGROUP_LIMIT_RSV_EXCL (1ULL << 3) -#define BTRFS_QGROUP_LIMIT_RFER_CMPR (1ULL << 4) -#define BTRFS_QGROUP_LIMIT_EXCL_CMPR (1ULL << 5) - -struct btrfs_qgroup_limit_item { - __le64 flags; - __le64 max_rfer; - __le64 max_excl; - __le64 rsv_rfer; - __le64 rsv_excl; -} __attribute__ ((__packed__)); - struct btrfs_space_info { u64 flags; u64 total_bytes; @@ -1557,21 +630,6 @@ static inline u32 BTRFS_MAX_XATTR_SIZE(const struct btrfs_fs_info *info) * data in the FS */ #define BTRFS_STRING_ITEM_KEY 253 -/* - * Inode flags - */ -#define BTRFS_INODE_NODATASUM (1 << 0) -#define BTRFS_INODE_NODATACOW (1 << 1) -#define BTRFS_INODE_READONLY (1 << 2) -#define BTRFS_INODE_NOCOMPRESS (1 << 3) -#define BTRFS_INODE_PREALLOC (1 << 4) -#define BTRFS_INODE_SYNC (1 << 5) -#define BTRFS_INODE_IMMUTABLE (1 << 6) -#define BTRFS_INODE_APPEND (1 << 7) -#define BTRFS_INODE_NODUMP (1 << 8) -#define BTRFS_INODE_NOATIME (1 << 9) -#define BTRFS_INODE_DIRSYNC (1 << 10) -#define BTRFS_INODE_COMPRESS (1 << 11) #define read_eb_member(eb, ptr, type, member, result) ( \ read_extent_buffer(eb, (char *)(result), \ @@ -1941,12 +999,6 @@ static inline u32 btrfs_extent_inline_ref_size(int type) return 0; } -BTRFS_SETGET_FUNCS(ref_root_v0, struct btrfs_extent_ref_v0, root, 64); -BTRFS_SETGET_FUNCS(ref_generation_v0, struct btrfs_extent_ref_v0, - generation, 64); -BTRFS_SETGET_FUNCS(ref_objectid_v0, struct btrfs_extent_ref_v0, objectid, 64); -BTRFS_SETGET_FUNCS(ref_count_v0, struct btrfs_extent_ref_v0, count, 32); - /* struct btrfs_node */ BTRFS_SETGET_FUNCS(key_blockptr, struct btrfs_key_ptr, blockptr, 64); BTRFS_SETGET_FUNCS(key_generation, struct btrfs_key_ptr, generation, 64); diff --git a/kernel-shared/uapi/btrfs_tree.h b/kernel-shared/uapi/btrfs_tree.h new file mode 100644 index 00000000..42744d2b --- /dev/null +++ b/kernel-shared/uapi/btrfs_tree.h @@ -0,0 +1,1259 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +#ifndef _BTRFS_CTREE_H_ +#define _BTRFS_CTREE_H_ + +#include "btrfs.h" +#include +#ifdef __KERNEL__ +#include +#else +#include +#endif + +/* ASCII for _BHRfS_M, no terminating nul */ +#define BTRFS_MAGIC 0x4D5F53665248425FULL + +#define BTRFS_MAX_LEVEL 8 + +/* + * We can actually store much bigger names, but lets not confuse the rest of + * linux. + */ +#define BTRFS_NAME_LEN 255 + +/* + * Theoretical limit is larger, but we keep this down to a sane value. That + * should limit greatly the possibility of collisions on inode ref items. + */ +#define BTRFS_LINK_MAX 65535U + +/* + * This header contains the structure definitions and constants used + * by file system objects that can be retrieved using + * the BTRFS_IOC_SEARCH_TREE ioctl. That means basically anything that + * is needed to describe a leaf node's key or item contents. 
+ */ + +/* holds pointers to all of the tree roots */ +#define BTRFS_ROOT_TREE_OBJECTID 1ULL + +/* stores information about which extents are in use, and reference counts */ +#define BTRFS_EXTENT_TREE_OBJECTID 2ULL + +/* + * chunk tree stores translations from logical -> physical block numbering + * the super block points to the chunk tree + */ +#define BTRFS_CHUNK_TREE_OBJECTID 3ULL + +/* + * stores information about which areas of a given device are in use. + * one per device. The tree of tree roots points to the device tree + */ +#define BTRFS_DEV_TREE_OBJECTID 4ULL + +/* one per subvolume, storing files and directories */ +#define BTRFS_FS_TREE_OBJECTID 5ULL + +/* directory objectid inside the root tree */ +#define BTRFS_ROOT_TREE_DIR_OBJECTID 6ULL + +/* holds checksums of all the data extents */ +#define BTRFS_CSUM_TREE_OBJECTID 7ULL + +/* holds quota configuration and tracking */ +#define BTRFS_QUOTA_TREE_OBJECTID 8ULL + +/* for storing items that use the BTRFS_UUID_KEY* types */ +#define BTRFS_UUID_TREE_OBJECTID 9ULL + +/* tracks free space in block groups. */ +#define BTRFS_FREE_SPACE_TREE_OBJECTID 10ULL + +/* Holds the block group items for extent tree v2. */ +#define BTRFS_BLOCK_GROUP_TREE_OBJECTID 11ULL + +/* device stats in the device tree */ +#define BTRFS_DEV_STATS_OBJECTID 0ULL + +/* for storing balance parameters in the root tree */ +#define BTRFS_BALANCE_OBJECTID -4ULL + +/* orphan objectid for tracking unlinked/truncated files */ +#define BTRFS_ORPHAN_OBJECTID -5ULL + +/* does write ahead logging to speed up fsyncs */ +#define BTRFS_TREE_LOG_OBJECTID -6ULL +#define BTRFS_TREE_LOG_FIXUP_OBJECTID -7ULL + +/* for space balancing */ +#define BTRFS_TREE_RELOC_OBJECTID -8ULL +#define BTRFS_DATA_RELOC_TREE_OBJECTID -9ULL + +/* + * extent checksums all have this objectid + * this allows them to share the logging tree + * for fsyncs + */ +#define BTRFS_EXTENT_CSUM_OBJECTID -10ULL + +/* For storing free space cache */ +#define BTRFS_FREE_SPACE_OBJECTID -11ULL + +/* + * The inode number assigned to the special inode for storing + * free ino cache + */ +#define BTRFS_FREE_INO_OBJECTID -12ULL + +/* dummy objectid represents multiple objectids */ +#define BTRFS_MULTIPLE_OBJECTIDS -255ULL + +/* + * All files have objectids in this range. + */ +#define BTRFS_FIRST_FREE_OBJECTID 256ULL +#define BTRFS_LAST_FREE_OBJECTID -256ULL +#define BTRFS_FIRST_CHUNK_TREE_OBJECTID 256ULL + + +/* + * the device items go into the chunk tree. The key is in the form + * [ 1 BTRFS_DEV_ITEM_KEY device_id ] + */ +#define BTRFS_DEV_ITEMS_OBJECTID 1ULL + +#define BTRFS_BTREE_INODE_OBJECTID 1 + +#define BTRFS_EMPTY_SUBVOL_DIR_OBJECTID 2 + +#define BTRFS_DEV_REPLACE_DEVID 0ULL + +/* + * inode items have the data typically returned from stat and store other + * info about object characteristics. There is one for every file and dir in + * the FS + */ +#define BTRFS_INODE_ITEM_KEY 1 +#define BTRFS_INODE_REF_KEY 12 +#define BTRFS_INODE_EXTREF_KEY 13 +#define BTRFS_XATTR_ITEM_KEY 24 + +/* + * fs verity items are stored under two different key types on disk. + * The descriptor items: + * [ inode objectid, BTRFS_VERITY_DESC_ITEM_KEY, offset ] + * + * At offset 0, we store a btrfs_verity_descriptor_item which tracks the size + * of the descriptor item and some extra data for encryption. + * Starting at offset 1, these hold the generic fs verity descriptor. The + * latter are opaque to btrfs, we just read and write them as a blob for the + * higher level verity code. The most common descriptor size is 256 bytes. 
+ * + * The merkle tree items: + * [ inode objectid, BTRFS_VERITY_MERKLE_ITEM_KEY, offset ] + * + * These also start at offset 0, and correspond to the merkle tree bytes. When + * fsverity asks for page 0 of the merkle tree, we pull up one page starting at + * offset 0 for this key type. These are also opaque to btrfs, we're blindly + * storing whatever fsverity sends down. + */ +#define BTRFS_VERITY_DESC_ITEM_KEY 36 +#define BTRFS_VERITY_MERKLE_ITEM_KEY 37 + +#define BTRFS_ORPHAN_ITEM_KEY 48 +/* reserve 2-15 close to the inode for later flexibility */ + +/* + * dir items are the name -> inode pointers in a directory. There is one + * for every name in a directory. BTRFS_DIR_LOG_ITEM_KEY is no longer used + * but it's still defined here for documentation purposes and to help avoid + * having its numerical value reused in the future. + */ +#define BTRFS_DIR_LOG_ITEM_KEY 60 +#define BTRFS_DIR_LOG_INDEX_KEY 72 +#define BTRFS_DIR_ITEM_KEY 84 +#define BTRFS_DIR_INDEX_KEY 96 +/* + * extent data is for file data + */ +#define BTRFS_EXTENT_DATA_KEY 108 + +/* + * extent csums are stored in a separate tree and hold csums for + * an entire extent on disk. + */ +#define BTRFS_EXTENT_CSUM_KEY 128 + +/* + * root items point to tree roots. They are typically in the root + * tree used by the super block to find all the other trees + */ +#define BTRFS_ROOT_ITEM_KEY 132 + +/* + * root backrefs tie subvols and snapshots to the directory entries that + * reference them + */ +#define BTRFS_ROOT_BACKREF_KEY 144 + +/* + * root refs make a fast index for listing all of the snapshots and + * subvolumes referenced by a given root. They point directly to the + * directory item in the root that references the subvol + */ +#define BTRFS_ROOT_REF_KEY 156 + +/* + * extent items are in the extent map tree. These record which blocks + * are used, and how many references there are to each block + */ +#define BTRFS_EXTENT_ITEM_KEY 168 + +/* + * The same as the BTRFS_EXTENT_ITEM_KEY, except it's metadata we already know + * the length, so we save the level in key->offset instead of the length. + */ +#define BTRFS_METADATA_ITEM_KEY 169 + +#define BTRFS_TREE_BLOCK_REF_KEY 176 + +#define BTRFS_EXTENT_DATA_REF_KEY 178 + +#define BTRFS_EXTENT_REF_V0_KEY 180 + +#define BTRFS_SHARED_BLOCK_REF_KEY 182 + +#define BTRFS_SHARED_DATA_REF_KEY 184 + +/* + * block groups give us hints into the extent allocation trees. Which + * blocks are free etc etc + */ +#define BTRFS_BLOCK_GROUP_ITEM_KEY 192 + +/* + * Every block group is represented in the free space tree by a free space info + * item, which stores some accounting information. It is keyed on + * (block_group_start, FREE_SPACE_INFO, block_group_length). + */ +#define BTRFS_FREE_SPACE_INFO_KEY 198 + +/* + * A free space extent tracks an extent of space that is free in a block group. + * It is keyed on (start, FREE_SPACE_EXTENT, length). + */ +#define BTRFS_FREE_SPACE_EXTENT_KEY 199 + +/* + * When a block group becomes very fragmented, we convert it to use bitmaps + * instead of extents. A free space bitmap is keyed on + * (start, FREE_SPACE_BITMAP, length); the corresponding item is a bitmap with + * (length / sectorsize) bits. + */ +#define BTRFS_FREE_SPACE_BITMAP_KEY 200 + +#define BTRFS_DEV_EXTENT_KEY 204 +#define BTRFS_DEV_ITEM_KEY 216 +#define BTRFS_CHUNK_ITEM_KEY 228 + +/* + * Records the overall state of the qgroups. 
+ * There's only one instance of this key present, + * (0, BTRFS_QGROUP_STATUS_KEY, 0) + */ +#define BTRFS_QGROUP_STATUS_KEY 240 +/* + * Records the currently used space of the qgroup. + * One key per qgroup, (0, BTRFS_QGROUP_INFO_KEY, qgroupid). + */ +#define BTRFS_QGROUP_INFO_KEY 242 +/* + * Contains the user configured limits for the qgroup. + * One key per qgroup, (0, BTRFS_QGROUP_LIMIT_KEY, qgroupid). + */ +#define BTRFS_QGROUP_LIMIT_KEY 244 +/* + * Records the child-parent relationship of qgroups. For + * each relation, 2 keys are present: + * (childid, BTRFS_QGROUP_RELATION_KEY, parentid) + * (parentid, BTRFS_QGROUP_RELATION_KEY, childid) + */ +#define BTRFS_QGROUP_RELATION_KEY 246 + +/* + * Obsolete name, see BTRFS_TEMPORARY_ITEM_KEY. + */ +#define BTRFS_BALANCE_ITEM_KEY 248 + +/* + * The key type for tree items that are stored persistently, but do not need to + * exist for extended period of time. The items can exist in any tree. + * + * [subtype, BTRFS_TEMPORARY_ITEM_KEY, data] + * + * Existing items: + * + * - balance status item + * (BTRFS_BALANCE_OBJECTID, BTRFS_TEMPORARY_ITEM_KEY, 0) + */ +#define BTRFS_TEMPORARY_ITEM_KEY 248 + +/* + * Obsolete name, see BTRFS_PERSISTENT_ITEM_KEY + */ +#define BTRFS_DEV_STATS_KEY 249 + +/* + * The key type for tree items that are stored persistently and usually exist + * for a long period, eg. filesystem lifetime. The item kinds can be status + * information, stats or preference values. The item can exist in any tree. + * + * [subtype, BTRFS_PERSISTENT_ITEM_KEY, data] + * + * Existing items: + * + * - device statistics, store IO stats in the device tree, one key for all + * stats + * (BTRFS_DEV_STATS_OBJECTID, BTRFS_DEV_STATS_KEY, 0) + */ +#define BTRFS_PERSISTENT_ITEM_KEY 249 + +/* + * Persistently stores the device replace state in the device tree. + * The key is built like this: (0, BTRFS_DEV_REPLACE_KEY, 0). + */ +#define BTRFS_DEV_REPLACE_KEY 250 + +/* + * Stores items that allow to quickly map UUIDs to something else. + * These items are part of the filesystem UUID tree. + * The key is built like this: + * (UUID_upper_64_bits, BTRFS_UUID_KEY*, UUID_lower_64_bits). + */ +#if BTRFS_UUID_SIZE != 16 +#error "UUID items require BTRFS_UUID_SIZE == 16!" +#endif +#define BTRFS_UUID_KEY_SUBVOL 251 /* for UUIDs assigned to subvols */ +#define BTRFS_UUID_KEY_RECEIVED_SUBVOL 252 /* for UUIDs assigned to + * received subvols */ + +/* + * string items are for debugging. They just store a short string of + * data in the FS + */ +#define BTRFS_STRING_ITEM_KEY 253 + +/* Maximum metadata block size (nodesize) */ +#define BTRFS_MAX_METADATA_BLOCKSIZE 65536 + +/* 32 bytes in various csum fields */ +#define BTRFS_CSUM_SIZE 32 + +/* csum types */ +enum btrfs_csum_type { + BTRFS_CSUM_TYPE_CRC32 = 0, + BTRFS_CSUM_TYPE_XXHASH = 1, + BTRFS_CSUM_TYPE_SHA256 = 2, + BTRFS_CSUM_TYPE_BLAKE2 = 3, +}; + +/* + * flags definitions for directory entry item type + * + * Used by: + * struct btrfs_dir_item.type + * + * Values 0..7 must match common file type values in fs_types.h. 
+ */ +#define BTRFS_FT_UNKNOWN 0 +#define BTRFS_FT_REG_FILE 1 +#define BTRFS_FT_DIR 2 +#define BTRFS_FT_CHRDEV 3 +#define BTRFS_FT_BLKDEV 4 +#define BTRFS_FT_FIFO 5 +#define BTRFS_FT_SOCK 6 +#define BTRFS_FT_SYMLINK 7 +#define BTRFS_FT_XATTR 8 +#define BTRFS_FT_MAX 9 +/* Directory contains encrypted data */ +#define BTRFS_FT_ENCRYPTED 0x80 + +static inline __u8 btrfs_dir_flags_to_ftype(__u8 flags) +{ + return flags & ~BTRFS_FT_ENCRYPTED; +} + +/* + * Inode flags + */ +#define BTRFS_INODE_NODATASUM (1U << 0) +#define BTRFS_INODE_NODATACOW (1U << 1) +#define BTRFS_INODE_READONLY (1U << 2) +#define BTRFS_INODE_NOCOMPRESS (1U << 3) +#define BTRFS_INODE_PREALLOC (1U << 4) +#define BTRFS_INODE_SYNC (1U << 5) +#define BTRFS_INODE_IMMUTABLE (1U << 6) +#define BTRFS_INODE_APPEND (1U << 7) +#define BTRFS_INODE_NODUMP (1U << 8) +#define BTRFS_INODE_NOATIME (1U << 9) +#define BTRFS_INODE_DIRSYNC (1U << 10) +#define BTRFS_INODE_COMPRESS (1U << 11) + +#define BTRFS_INODE_ROOT_ITEM_INIT (1U << 31) + +#define BTRFS_INODE_FLAG_MASK \ + (BTRFS_INODE_NODATASUM | \ + BTRFS_INODE_NODATACOW | \ + BTRFS_INODE_READONLY | \ + BTRFS_INODE_NOCOMPRESS | \ + BTRFS_INODE_PREALLOC | \ + BTRFS_INODE_SYNC | \ + BTRFS_INODE_IMMUTABLE | \ + BTRFS_INODE_APPEND | \ + BTRFS_INODE_NODUMP | \ + BTRFS_INODE_NOATIME | \ + BTRFS_INODE_DIRSYNC | \ + BTRFS_INODE_COMPRESS | \ + BTRFS_INODE_ROOT_ITEM_INIT) + +#define BTRFS_INODE_RO_VERITY (1U << 0) + +#define BTRFS_INODE_RO_FLAG_MASK (BTRFS_INODE_RO_VERITY) + +/* + * The key defines the order in the tree, and so it also defines (optimal) + * block layout. + * + * objectid corresponds to the inode number. + * + * type tells us things about the object, and is a kind of stream selector. + * so for a given inode, keys with type of 1 might refer to the inode data, + * type of 2 may point to file data in the btree and type == 3 may point to + * extents. + * + * offset is the starting byte offset for this key in the stream. + * + * btrfs_disk_key is in disk byte order. struct btrfs_key is always + * in cpu native order. Otherwise they are identical and their sizes + * should be the same (ie both packed) + */ +struct btrfs_disk_key { + __le64 objectid; + __u8 type; + __le64 offset; +} __attribute__ ((__packed__)); + +struct btrfs_key { + __u64 objectid; + __u8 type; + __u64 offset; +} __attribute__ ((__packed__)); + +/* + * Every tree block (leaf or node) starts with this header. + */ +struct btrfs_header { + /* These first four must match the super block */ + __u8 csum[BTRFS_CSUM_SIZE]; + /* FS specific uuid */ + __u8 fsid[BTRFS_FSID_SIZE]; + /* Which block this node is supposed to live in */ + __le64 bytenr; + __le64 flags; + + /* Allowed to be different from the super from here on down */ + __u8 chunk_tree_uuid[BTRFS_UUID_SIZE]; + __le64 generation; + __le64 owner; + __le32 nritems; + __u8 level; +} __attribute__ ((__packed__)); + +/* + * This is a very generous portion of the super block, giving us room to + * translate 14 chunks with 3 stripes each. + */ +#define BTRFS_SYSTEM_CHUNK_ARRAY_SIZE 2048 + +/* + * Just in case we somehow lose the roots and are not able to mount, we store + * an array of the roots from previous transactions in the super. 
+ */ +#define BTRFS_NUM_BACKUP_ROOTS 4 +struct btrfs_root_backup { + __le64 tree_root; + __le64 tree_root_gen; + + __le64 chunk_root; + __le64 chunk_root_gen; + + __le64 extent_root; + __le64 extent_root_gen; + + __le64 fs_root; + __le64 fs_root_gen; + + __le64 dev_root; + __le64 dev_root_gen; + + __le64 csum_root; + __le64 csum_root_gen; + + __le64 total_bytes; + __le64 bytes_used; + __le64 num_devices; + /* future */ + __le64 unused_64[4]; + + __u8 tree_root_level; + __u8 chunk_root_level; + __u8 extent_root_level; + __u8 fs_root_level; + __u8 dev_root_level; + __u8 csum_root_level; + /* future and to align */ + __u8 unused_8[10]; +} __attribute__ ((__packed__)); + +/* + * A leaf is full of items. offset and size tell us where to find the item in + * the leaf (relative to the start of the data area) + */ +struct btrfs_item { + struct btrfs_disk_key key; + __le32 offset; + __le32 size; +} __attribute__ ((__packed__)); + +/* + * Leaves have an item area and a data area: + * [item0, item1....itemN] [free space] [dataN...data1, data0] + * + * The data is separate from the items to get the keys closer together during + * searches. + */ +struct btrfs_leaf { + struct btrfs_header header; + struct btrfs_item items[]; +} __attribute__ ((__packed__)); + +/* + * All non-leaf blocks are nodes, they hold only keys and pointers to other + * blocks. + */ +struct btrfs_key_ptr { + struct btrfs_disk_key key; + __le64 blockptr; + __le64 generation; +} __attribute__ ((__packed__)); + +struct btrfs_node { + struct btrfs_header header; + struct btrfs_key_ptr ptrs[]; +} __attribute__ ((__packed__)); + +struct btrfs_dev_item { + /* the internal btrfs device id */ + __le64 devid; + + /* size of the device */ + __le64 total_bytes; + + /* bytes used */ + __le64 bytes_used; + + /* optimal io alignment for this device */ + __le32 io_align; + + /* optimal io width for this device */ + __le32 io_width; + + /* minimal io size for this device */ + __le32 sector_size; + + /* type and info about this device */ + __le64 type; + + /* expected generation for this device */ + __le64 generation; + + /* + * starting byte of this partition on the device, + * to allow for stripe alignment in the future + */ + __le64 start_offset; + + /* grouping information for allocation decisions */ + __le32 dev_group; + + /* seek speed 0-100 where 100 is fastest */ + __u8 seek_speed; + + /* bandwidth 0-100 where 100 is fastest */ + __u8 bandwidth; + + /* btrfs generated uuid for this device */ + __u8 uuid[BTRFS_UUID_SIZE]; + + /* uuid of FS who owns this device */ + __u8 fsid[BTRFS_UUID_SIZE]; +} __attribute__ ((__packed__)); + +struct btrfs_stripe { + __le64 devid; + __le64 offset; + __u8 dev_uuid[BTRFS_UUID_SIZE]; +} __attribute__ ((__packed__)); + +struct btrfs_chunk { + /* size of this chunk in bytes */ + __le64 length; + + /* objectid of the root referencing this chunk */ + __le64 owner; + + __le64 stripe_len; + __le64 type; + + /* optimal io alignment for this chunk */ + __le32 io_align; + + /* optimal io width for this chunk */ + __le32 io_width; + + /* minimal io size for this chunk */ + __le32 sector_size; + + /* 2^16 stripes is quite a lot, a second limit is the size of a single + * item in the btree + */ + __le16 num_stripes; + + /* sub stripes only matter for raid10 */ + __le16 sub_stripes; + struct btrfs_stripe stripe; + /* additional stripes go here */ +} __attribute__ ((__packed__)); + +/* + * The super block basically lists the main trees of the FS. 
+ */ +struct btrfs_super_block { + /* The first 4 fields must match struct btrfs_header */ + __u8 csum[BTRFS_CSUM_SIZE]; + /* FS specific UUID, visible to user */ + __u8 fsid[BTRFS_FSID_SIZE]; + /* This block number */ + __le64 bytenr; + __le64 flags; + + /* Allowed to be different from the btrfs_header from here own down */ + __le64 magic; + __le64 generation; + __le64 root; + __le64 chunk_root; + __le64 log_root; + + /* + * This member has never been utilized since the very beginning, thus + * it's always 0 regardless of kernel version. We always use + * generation + 1 to read log tree root. So here we mark it deprecated. + */ + __le64 __unused_log_root_transid; + __le64 total_bytes; + __le64 bytes_used; + __le64 root_dir_objectid; + __le64 num_devices; + __le32 sectorsize; + __le32 nodesize; + __le32 __unused_leafsize; + __le32 stripesize; + __le32 sys_chunk_array_size; + __le64 chunk_root_generation; + __le64 compat_flags; + __le64 compat_ro_flags; + __le64 incompat_flags; + __le16 csum_type; + __u8 root_level; + __u8 chunk_root_level; + __u8 log_root_level; + struct btrfs_dev_item dev_item; + + char label[BTRFS_LABEL_SIZE]; + + __le64 cache_generation; + __le64 uuid_tree_generation; + + /* The UUID written into btree blocks */ + __u8 metadata_uuid[BTRFS_FSID_SIZE]; + + __u64 nr_global_roots; + + __le64 reserved[27]; + __u8 sys_chunk_array[BTRFS_SYSTEM_CHUNK_ARRAY_SIZE]; + struct btrfs_root_backup super_roots[BTRFS_NUM_BACKUP_ROOTS]; + + /* Padded to 4096 bytes */ + __u8 padding[565]; +} __attribute__ ((__packed__)); + +#define BTRFS_FREE_SPACE_EXTENT 1 +#define BTRFS_FREE_SPACE_BITMAP 2 + +struct btrfs_free_space_entry { + __le64 offset; + __le64 bytes; + __u8 type; +} __attribute__ ((__packed__)); + +struct btrfs_free_space_header { + struct btrfs_disk_key location; + __le64 generation; + __le64 num_entries; + __le64 num_bitmaps; +} __attribute__ ((__packed__)); + +#define BTRFS_HEADER_FLAG_WRITTEN (1ULL << 0) +#define BTRFS_HEADER_FLAG_RELOC (1ULL << 1) + +/* Super block flags */ +/* Errors detected */ +#define BTRFS_SUPER_FLAG_ERROR (1ULL << 2) + +#define BTRFS_SUPER_FLAG_SEEDING (1ULL << 32) +#define BTRFS_SUPER_FLAG_METADUMP (1ULL << 33) +#define BTRFS_SUPER_FLAG_METADUMP_V2 (1ULL << 34) +#define BTRFS_SUPER_FLAG_CHANGING_FSID (1ULL << 35) +#define BTRFS_SUPER_FLAG_CHANGING_FSID_V2 (1ULL << 36) + + +/* + * items in the extent btree are used to record the objectid of the + * owner of the block and the number of references + */ + +struct btrfs_extent_item { + __le64 refs; + __le64 generation; + __le64 flags; +} __attribute__ ((__packed__)); + +struct btrfs_extent_item_v0 { + __le32 refs; +} __attribute__ ((__packed__)); + + +#define BTRFS_EXTENT_FLAG_DATA (1ULL << 0) +#define BTRFS_EXTENT_FLAG_TREE_BLOCK (1ULL << 1) + +/* following flags only apply to tree blocks */ + +/* use full backrefs for extent pointers in the block */ +#define BTRFS_BLOCK_FLAG_FULL_BACKREF (1ULL << 8) + +#define BTRFS_BACKREF_REV_MAX 256 +#define BTRFS_BACKREF_REV_SHIFT 56 +#define BTRFS_BACKREF_REV_MASK (((u64)BTRFS_BACKREF_REV_MAX - 1) << \ + BTRFS_BACKREF_REV_SHIFT) + +#define BTRFS_OLD_BACKREF_REV 0 +#define BTRFS_MIXED_BACKREF_REV 1 + +/* + * this flag is only used internally by scrub and may be changed at any time + * it is only declared here to avoid collisions + */ +#define BTRFS_EXTENT_FLAG_SUPER (1ULL << 48) + +struct btrfs_tree_block_info { + struct btrfs_disk_key key; + __u8 level; +} __attribute__ ((__packed__)); + +struct btrfs_extent_data_ref { + __le64 root; + __le64 objectid; + __le64 
offset; + __le32 count; +} __attribute__ ((__packed__)); + +struct btrfs_shared_data_ref { + __le32 count; +} __attribute__ ((__packed__)); + +struct btrfs_extent_inline_ref { + __u8 type; + __le64 offset; +} __attribute__ ((__packed__)); + +/* dev extents record free space on individual devices. The owner + * field points back to the chunk allocation mapping tree that allocated + * the extent. The chunk tree uuid field is a way to double check the owner + */ +struct btrfs_dev_extent { + __le64 chunk_tree; + __le64 chunk_objectid; + __le64 chunk_offset; + __le64 length; + __u8 chunk_tree_uuid[BTRFS_UUID_SIZE]; +} __attribute__ ((__packed__)); + +struct btrfs_inode_ref { + __le64 index; + __le16 name_len; + /* name goes here */ +} __attribute__ ((__packed__)); + +struct btrfs_inode_extref { + __le64 parent_objectid; + __le64 index; + __le16 name_len; + __u8 name[]; + /* name goes here */ +} __attribute__ ((__packed__)); + +struct btrfs_timespec { + __le64 sec; + __le32 nsec; +} __attribute__ ((__packed__)); + +struct btrfs_inode_item { + /* nfs style generation number */ + __le64 generation; + /* transid that last touched this inode */ + __le64 transid; + __le64 size; + __le64 nbytes; + __le64 block_group; + __le32 nlink; + __le32 uid; + __le32 gid; + __le32 mode; + __le64 rdev; + __le64 flags; + + /* modification sequence number for NFS */ + __le64 sequence; + + /* + * a little future expansion, for more than this we can + * just grow the inode item and version it + */ + __le64 reserved[4]; + struct btrfs_timespec atime; + struct btrfs_timespec ctime; + struct btrfs_timespec mtime; + struct btrfs_timespec otime; +} __attribute__ ((__packed__)); + +struct btrfs_dir_log_item { + __le64 end; +} __attribute__ ((__packed__)); + +struct btrfs_dir_item { + struct btrfs_disk_key location; + __le64 transid; + __le16 data_len; + __le16 name_len; + __u8 type; +} __attribute__ ((__packed__)); + +#define BTRFS_ROOT_SUBVOL_RDONLY (1ULL << 0) + +/* + * Internal in-memory flag that a subvolume has been marked for deletion but + * still visible as a directory + */ +#define BTRFS_ROOT_SUBVOL_DEAD (1ULL << 48) + +struct btrfs_root_item { + struct btrfs_inode_item inode; + __le64 generation; + __le64 root_dirid; + __le64 bytenr; + __le64 byte_limit; + __le64 bytes_used; + __le64 last_snapshot; + __le64 flags; + __le32 refs; + struct btrfs_disk_key drop_progress; + __u8 drop_level; + __u8 level; + + /* + * The following fields appear after subvol_uuids+subvol_times + * were introduced. + */ + + /* + * This generation number is used to test if the new fields are valid + * and up to date while reading the root item. Every time the root item + * is written out, the "generation" field is copied into this field. If + * anyone ever mounted the fs with an older kernel, we will have + * mismatching generation values here and thus must invalidate the + * new fields. See btrfs_update_root and btrfs_find_last_root for + * details. + * the offset of generation_v2 is also used as the start for the memset + * when invalidating the fields. + */ + __le64 generation_v2; + __u8 uuid[BTRFS_UUID_SIZE]; + __u8 parent_uuid[BTRFS_UUID_SIZE]; + __u8 received_uuid[BTRFS_UUID_SIZE]; + __le64 ctransid; /* updated when an inode changes */ + __le64 otransid; /* trans when created */ + __le64 stransid; /* trans when sent. non-zero for received subvol */ + __le64 rtransid; /* trans when received. 
non-zero for received subvol */ + struct btrfs_timespec ctime; + struct btrfs_timespec otime; + struct btrfs_timespec stime; + struct btrfs_timespec rtime; + __le64 reserved[8]; /* for future */ +} __attribute__ ((__packed__)); + +/* + * Btrfs root item used to be smaller than current size. The old format ends + * at where member generation_v2 is. + */ +static inline __u32 btrfs_legacy_root_item_size(void) +{ + return offsetof(struct btrfs_root_item, generation_v2); +} + +/* + * this is used for both forward and backward root refs + */ +struct btrfs_root_ref { + __le64 dirid; + __le64 sequence; + __le16 name_len; +} __attribute__ ((__packed__)); + +struct btrfs_disk_balance_args { + /* + * profiles to operate on, single is denoted by + * BTRFS_AVAIL_ALLOC_BIT_SINGLE + */ + __le64 profiles; + + /* + * usage filter + * BTRFS_BALANCE_ARGS_USAGE with a single value means '0..N' + * BTRFS_BALANCE_ARGS_USAGE_RANGE - range syntax, min..max + */ + union { + __le64 usage; + struct { + __le32 usage_min; + __le32 usage_max; + }; + }; + + /* devid filter */ + __le64 devid; + + /* devid subset filter [pstart..pend) */ + __le64 pstart; + __le64 pend; + + /* btrfs virtual address space subset filter [vstart..vend) */ + __le64 vstart; + __le64 vend; + + /* + * profile to convert to, single is denoted by + * BTRFS_AVAIL_ALLOC_BIT_SINGLE + */ + __le64 target; + + /* BTRFS_BALANCE_ARGS_* */ + __le64 flags; + + /* + * BTRFS_BALANCE_ARGS_LIMIT with value 'limit' + * BTRFS_BALANCE_ARGS_LIMIT_RANGE - the extend version can use minimum + * and maximum + */ + union { + __le64 limit; + struct { + __le32 limit_min; + __le32 limit_max; + }; + }; + + /* + * Process chunks that cross stripes_min..stripes_max devices, + * BTRFS_BALANCE_ARGS_STRIPES_RANGE + */ + __le32 stripes_min; + __le32 stripes_max; + + __le64 unused[6]; +} __attribute__ ((__packed__)); + +/* + * store balance parameters to disk so that balance can be properly + * resumed after crash or unmount + */ +struct btrfs_balance_item { + /* BTRFS_BALANCE_* */ + __le64 flags; + + struct btrfs_disk_balance_args data; + struct btrfs_disk_balance_args meta; + struct btrfs_disk_balance_args sys; + + __le64 unused[4]; +} __attribute__ ((__packed__)); + +enum { + BTRFS_FILE_EXTENT_INLINE = 0, + BTRFS_FILE_EXTENT_REG = 1, + BTRFS_FILE_EXTENT_PREALLOC = 2, + BTRFS_NR_FILE_EXTENT_TYPES = 3, +}; + +struct btrfs_file_extent_item { + /* + * transaction id that created this extent + */ + __le64 generation; + /* + * max number of bytes to hold this extent in ram + * when we split a compressed extent we can't know how big + * each of the resulting pieces will be. So, this is + * an upper limit on the size of the extent in ram instead of + * an exact limit. + */ + __le64 ram_bytes; + + /* + * 32 bits for the various ways we might encode the data, + * including compression and encryption. If any of these + * are set to something a given disk format doesn't understand + * it is treated like an incompat flag for reading and writing, + * but not for stat. + */ + __u8 compression; + __u8 encryption; + __le16 other_encoding; /* spare for later use */ + + /* are we inline data or a real extent? */ + __u8 type; + + /* + * disk space consumed by the extent, checksum blocks are included + * in these numbers + * + * At this offset in the structure, the inline extent data start. + */ + __le64 disk_bytenr; + __le64 disk_num_bytes; + /* + * the logical offset in file blocks (no csums) + * this extent record is for. 
This allows a file extent to point + * into the middle of an existing extent on disk, sharing it + * between two snapshots (useful if some bytes in the middle of the + * extent have changed + */ + __le64 offset; + /* + * the logical number of file blocks (no csums included). This + * always reflects the size uncompressed and without encoding. + */ + __le64 num_bytes; + +} __attribute__ ((__packed__)); + +struct btrfs_csum_item { + __u8 csum; +} __attribute__ ((__packed__)); + +struct btrfs_dev_stats_item { + /* + * grow this item struct at the end for future enhancements and keep + * the existing values unchanged + */ + __le64 values[BTRFS_DEV_STAT_VALUES_MAX]; +} __attribute__ ((__packed__)); + +#define BTRFS_DEV_REPLACE_ITEM_CONT_READING_FROM_SRCDEV_MODE_ALWAYS 0 +#define BTRFS_DEV_REPLACE_ITEM_CONT_READING_FROM_SRCDEV_MODE_AVOID 1 + +struct btrfs_dev_replace_item { + /* + * grow this item struct at the end for future enhancements and keep + * the existing values unchanged + */ + __le64 src_devid; + __le64 cursor_left; + __le64 cursor_right; + __le64 cont_reading_from_srcdev_mode; + + __le64 replace_state; + __le64 time_started; + __le64 time_stopped; + __le64 num_write_errors; + __le64 num_uncorrectable_read_errors; +} __attribute__ ((__packed__)); + +/* different types of block groups (and chunks) */ +#define BTRFS_BLOCK_GROUP_DATA (1ULL << 0) +#define BTRFS_BLOCK_GROUP_SYSTEM (1ULL << 1) +#define BTRFS_BLOCK_GROUP_METADATA (1ULL << 2) +#define BTRFS_BLOCK_GROUP_RAID0 (1ULL << 3) +#define BTRFS_BLOCK_GROUP_RAID1 (1ULL << 4) +#define BTRFS_BLOCK_GROUP_DUP (1ULL << 5) +#define BTRFS_BLOCK_GROUP_RAID10 (1ULL << 6) +#define BTRFS_BLOCK_GROUP_RAID5 (1ULL << 7) +#define BTRFS_BLOCK_GROUP_RAID6 (1ULL << 8) +#define BTRFS_BLOCK_GROUP_RAID1C3 (1ULL << 9) +#define BTRFS_BLOCK_GROUP_RAID1C4 (1ULL << 10) +#define BTRFS_BLOCK_GROUP_RESERVED (BTRFS_AVAIL_ALLOC_BIT_SINGLE | \ + BTRFS_SPACE_INFO_GLOBAL_RSV) + +#define BTRFS_BLOCK_GROUP_TYPE_MASK (BTRFS_BLOCK_GROUP_DATA | \ + BTRFS_BLOCK_GROUP_SYSTEM | \ + BTRFS_BLOCK_GROUP_METADATA) + +#define BTRFS_BLOCK_GROUP_PROFILE_MASK (BTRFS_BLOCK_GROUP_RAID0 | \ + BTRFS_BLOCK_GROUP_RAID1 | \ + BTRFS_BLOCK_GROUP_RAID1C3 | \ + BTRFS_BLOCK_GROUP_RAID1C4 | \ + BTRFS_BLOCK_GROUP_RAID5 | \ + BTRFS_BLOCK_GROUP_RAID6 | \ + BTRFS_BLOCK_GROUP_DUP | \ + BTRFS_BLOCK_GROUP_RAID10) +#define BTRFS_BLOCK_GROUP_RAID56_MASK (BTRFS_BLOCK_GROUP_RAID5 | \ + BTRFS_BLOCK_GROUP_RAID6) + +#define BTRFS_BLOCK_GROUP_RAID1_MASK (BTRFS_BLOCK_GROUP_RAID1 | \ + BTRFS_BLOCK_GROUP_RAID1C3 | \ + BTRFS_BLOCK_GROUP_RAID1C4) + +/* + * We need a bit for restriper to be able to tell when chunks of type + * SINGLE are available. This "extended" profile format is used in + * fs_info->avail_*_alloc_bits (in-memory) and balance item fields + * (on-disk). The corresponding on-disk bit in chunk.type is reserved + * to avoid remappings between two formats in future. + */ +#define BTRFS_AVAIL_ALLOC_BIT_SINGLE (1ULL << 48) + +/* + * A fake block group type that is used to communicate global block reserve + * size to userspace via the SPACE_INFO ioctl. 
+ */ +#define BTRFS_SPACE_INFO_GLOBAL_RSV (1ULL << 49) + +#define BTRFS_EXTENDED_PROFILE_MASK (BTRFS_BLOCK_GROUP_PROFILE_MASK | \ + BTRFS_AVAIL_ALLOC_BIT_SINGLE) + +static inline __u64 chunk_to_extended(__u64 flags) +{ + if ((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0) + flags |= BTRFS_AVAIL_ALLOC_BIT_SINGLE; + + return flags; +} +static inline __u64 extended_to_chunk(__u64 flags) +{ + return flags & ~BTRFS_AVAIL_ALLOC_BIT_SINGLE; +} + +struct btrfs_block_group_item { + __le64 used; + __le64 chunk_objectid; + __le64 flags; +} __attribute__ ((__packed__)); + +struct btrfs_free_space_info { + __le32 extent_count; + __le32 flags; +} __attribute__ ((__packed__)); + +#define BTRFS_FREE_SPACE_USING_BITMAPS (1ULL << 0) + +#define BTRFS_QGROUP_LEVEL_SHIFT 48 +static inline __u16 btrfs_qgroup_level(__u64 qgroupid) +{ + return (__u16)(qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT); +} + +/* + * is subvolume quota turned on? + */ +#define BTRFS_QGROUP_STATUS_FLAG_ON (1ULL << 0) +/* + * RESCAN is set during the initialization phase + */ +#define BTRFS_QGROUP_STATUS_FLAG_RESCAN (1ULL << 1) +/* + * Some qgroup entries are known to be out of date, + * either because the configuration has changed in a way that + * makes a rescan necessary, or because the fs has been mounted + * with a non-qgroup-aware version. + * Turning qouta off and on again makes it inconsistent, too. + */ +#define BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT (1ULL << 2) + +#define BTRFS_QGROUP_STATUS_FLAGS_MASK (BTRFS_QGROUP_STATUS_FLAG_ON | \ + BTRFS_QGROUP_STATUS_FLAG_RESCAN | \ + BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) + +#define BTRFS_QGROUP_STATUS_VERSION 1 + +struct btrfs_qgroup_status_item { + __le64 version; + /* + * the generation is updated during every commit. As older + * versions of btrfs are not aware of qgroups, it will be + * possible to detect inconsistencies by checking the + * generation on mount time + */ + __le64 generation; + + /* flag definitions see above */ + __le64 flags; + + /* + * only used during scanning to record the progress + * of the scan. It contains a logical address + */ + __le64 rescan; +} __attribute__ ((__packed__)); + +struct btrfs_qgroup_info_item { + __le64 generation; + __le64 rfer; + __le64 rfer_cmpr; + __le64 excl; + __le64 excl_cmpr; +} __attribute__ ((__packed__)); + +struct btrfs_qgroup_limit_item { + /* + * only updated when any of the other values change + */ + __le64 flags; + __le64 max_rfer; + __le64 max_excl; + __le64 rsv_rfer; + __le64 rsv_excl; +} __attribute__ ((__packed__)); + +struct btrfs_verity_descriptor_item { + /* Size of the verity descriptor in bytes */ + __le64 size; + /* + * When we implement support for fscrypt, we will need to encrypt the + * Merkle tree for encrypted verity files. These 128 bits are for the + * eventual storage of an fscrypt initialization vector. 
+ */ + __le64 reserved[2]; + __u8 encryption; +} __attribute__ ((__packed__)); + +#endif /* _BTRFS_CTREE_H_ */ From patchwork Wed Nov 23 22:37:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054416 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C89FC4167D for ; Wed, 23 Nov 2022 22:38:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229825AbiKWWib (ORCPT ); Wed, 23 Nov 2022 17:38:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229814AbiKWWiK (ORCPT ); Wed, 23 Nov 2022 17:38:10 -0500 Received: from mail-qv1-xf2a.google.com (mail-qv1-xf2a.google.com [IPv6:2607:f8b0:4864:20::f2a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2249A18B05 for ; Wed, 23 Nov 2022 14:38:08 -0800 (PST) Received: by mail-qv1-xf2a.google.com with SMTP id i12so13134860qvs.2 for ; Wed, 23 Nov 2022 14:38:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=WxpJQL5JuIKuj/wv1S1YyFvElKre6MbeeF736GplbWk=; b=Gq4DXvUzAKFtl1odIjOZUi8FKLbGWuvx/1i8g/tqQchlHoPHXoEyMRRY8h+Ej8MDPL o8t3JuvU4xCk5cEOLhUMEARDgdhohu6i+b0o/IVQ27MOR92R9UVMv3pk2GLOpEujlRUw qEuKbJ9HEO3l/rUB1eP6JURxvhPriegz+oh5LFz3GiJ2+LWsQctsODGwJy91aF/4IpUe wXFF6qcDdDKtk2NidLKYdkT6JGvcWTGCBRKz+zyxd0DpqEnUC8cJUU/9LeZPGnax12jZ GI7RHSMXtrP0QnUQtcZdndhwAP+PDEl+LR8ylWQDNlwfJZQHG4Rg6CQMG70RPhxDWMon t1Kg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WxpJQL5JuIKuj/wv1S1YyFvElKre6MbeeF736GplbWk=; b=hF2hkxMAlWd9Qhx57EGfCWcz8zpj9Cl4Rxs8X8Ff+tGPUfUbgujIyVPuFey4nI4fvj cdYbgNjCmM+A0HCkN0E4UqdQ1jMKavw5JAoWJ5MM4QTKuNNCm4spi9t8cvW4uAijW4+9 rXjkCdoD3FicuFIOryAHuxPGdeOcNMQAF3iGrY4UsSkNEd+rBMu4tLy8tt2Ej42sOWQ1 l9Kj3VP42divombkEu4Ix+CU7POjcbCSxt9+E7lFiUz4h/wbfCf1yAQ1RJgkimFNc6kD YM3JVKDouwXQsDsFfmQ11CllCv3LgPBMaYW5at0h5luLq/IqFry6GDd3AC5geTv6nPBd vYIg== X-Gm-Message-State: ANoB5plmgezkCXrL8Fjw+uip1khsAUvGSO+LvkGg7ULkA09Zt2grBf/h 2j4lULCSpaQgZPBTHOJDTIXr9kDSz/LoRg== X-Google-Smtp-Source: AA0mqf4Rqloywcp4aNcvihE/kLB88aJCXdTGsxCIikqyQqI4U9rAhl2btAbRy8jmrO98/ZVZyO7aeg== X-Received: by 2002:a05:6214:3484:b0:4c6:ad8b:9a10 with SMTP id mr4-20020a056214348400b004c6ad8b9a10mr12436872qvb.76.1669243086803; Wed, 23 Nov 2022 14:38:06 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. 
[174.109.170.245]) by smtp.gmail.com with ESMTPSA id bj38-20020a05620a192600b006cbe3be300esm13135121qkb.12.2022.11.23.14.38.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:06 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 21/29] btrfs-progs: sync compression.h from the kernel Date: Wed, 23 Nov 2022 17:37:29 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This patch copies in compression.h from the kernel. This is relatively straightforward, we just have to drop the compression types definition from ctree.h, and update the image to use BTRFS_NR_COMPRESS_TYPES instead of BTRFS_COMPRESS_LAST, and add a few things to kerncompat.h to make everything build smoothly. Signed-off-by: Josef Bacik --- check/mode-common.c | 1 + check/mode-lowmem.c | 1 + cmds/filesystem.c | 1 + cmds/restore.c | 3 +- common/parse-utils.c | 1 + kerncompat.h | 21 ++++ kernel-shared/compression.h | 184 ++++++++++++++++++++++++++++++++++++ kernel-shared/ctree.h | 9 -- kernel-shared/file.c | 1 + kernel-shared/print-tree.c | 1 + 10 files changed, 213 insertions(+), 10 deletions(-) create mode 100644 kernel-shared/compression.h diff --git a/check/mode-common.c b/check/mode-common.c index 7a38eceb..c8ac235d 100644 --- a/check/mode-common.c +++ b/check/mode-common.c @@ -27,6 +27,7 @@ #include "kernel-shared/disk-io.h" #include "kernel-shared/volumes.h" #include "kernel-shared/backref.h" +#include "kernel-shared/compression.h" #include "common/internal.h" #include "common/messages.h" #include "common/utils.h" diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index 2cde3b63..10258d34 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -28,6 +28,7 @@ #include "kernel-shared/transaction.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/backref.h" +#include "kernel-shared/compression.h" #include "kernel-shared/volumes.h" #include "common/messages.h" #include "common/internal.h" diff --git a/cmds/filesystem.c b/cmds/filesystem.c index a0906b13..5ecd7871 100644 --- a/cmds/filesystem.c +++ b/cmds/filesystem.c @@ -35,6 +35,7 @@ #include "kernel-lib/list.h" #include "kernel-lib/sizes.h" #include "kernel-shared/ctree.h" +#include "kernel-shared/compression.h" #include "kernel-shared/volumes.h" #include "kernel-lib/list_sort.h" #include "kernel-shared/disk-io.h" diff --git a/cmds/restore.c b/cmds/restore.c index e9d3bdb8..19df6be2 100644 --- a/cmds/restore.c +++ b/cmds/restore.c @@ -43,6 +43,7 @@ #include "kernel-shared/print-tree.h" #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/compression.h" #include "common/utils.h" #include "common/help.h" #include "common/open-utils.h" @@ -718,7 +719,7 @@ static int copy_file(struct btrfs_root *root, int fd, struct btrfs_key *key, struct btrfs_file_extent_item); extent_type = btrfs_file_extent_type(leaf, fi); compression = btrfs_file_extent_compression(leaf, fi); - if (compression >= BTRFS_COMPRESS_LAST) { + if (compression >= BTRFS_NR_COMPRESS_TYPES) { warning("compression type %d not supported", compression); ret = -1; diff --git a/common/parse-utils.c b/common/parse-utils.c index 11ef2309..f16b7aac 100644 --- a/common/parse-utils.c +++ b/common/parse-utils.c @@ -25,6 +25,7 @@ #include #include "libbtrfsutil/btrfsutil.h" #include "kernel-shared/volumes.h" +#include "kernel-shared/compression.h" #include 
"common/parse-utils.h" #include "common/messages.h" #include "common/utils.h" diff --git a/kerncompat.h b/kerncompat.h index 15595500..dedcf5f0 100644 --- a/kerncompat.h +++ b/kerncompat.h @@ -192,6 +192,10 @@ struct mutex { unsigned long lock; }; +typedef struct spinlock_struct { + unsigned long lock; +} spinlock_t; + #define mutex_init(m) \ do { \ (m)->lock = 1; \ @@ -550,4 +554,21 @@ do { \ (x) = (val); \ } while (0) +typedef struct refcount_struct { + int refs; +} refcount_t; + +typedef u32 blk_status_t; +typedef u32 blk_opf_t; +typedef int atomic_t; + +struct work_struct { +}; + +typedef struct wait_queue_head_s { +} wait_queue_head_t; + +#define __init +#define __cold + #endif diff --git a/kernel-shared/compression.h b/kernel-shared/compression.h new file mode 100644 index 00000000..285f1a9d --- /dev/null +++ b/kernel-shared/compression.h @@ -0,0 +1,184 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2008 Oracle. All rights reserved. + */ + +#ifndef BTRFS_COMPRESSION_H +#define BTRFS_COMPRESSION_H + +#include "kerncompat.h" + +struct btrfs_inode; +struct address_space; +struct cgroup_subsys_state; + +/* + * We want to make sure that amount of RAM required to uncompress an extent is + * reasonable, so we limit the total size in ram of a compressed extent to + * 128k. This is a crucial number because it also controls how easily we can + * spread reads across cpus for decompression. + * + * We also want to make sure the amount of IO required to do a random read is + * reasonably small, so we limit the size of a compressed extent to 128k. + */ + +/* Maximum length of compressed data stored on disk */ +#define BTRFS_MAX_COMPRESSED (SZ_128K) + +/* Maximum size of data before compression */ +#define BTRFS_MAX_UNCOMPRESSED (SZ_128K) + +#define BTRFS_ZLIB_DEFAULT_LEVEL 3 + +struct compressed_bio { + /* Number of outstanding bios */ + refcount_t pending_ios; + + /* Number of compressed pages in the array */ + unsigned int nr_pages; + + /* the pages with the compressed data on them */ + struct page **compressed_pages; + + /* inode that owns this data */ + struct inode *inode; + + /* starting offset in the inode for our pages */ + u64 start; + + /* Number of bytes in the inode we're working on */ + unsigned int len; + + /* Number of bytes on disk */ + unsigned int compressed_len; + + /* The compression algorithm for this bio */ + u8 compress_type; + + /* Whether this is a write for writeback. 
*/ + bool writeback; + + /* IO errors */ + blk_status_t status; + + union { + /* For reads, this is the bio we are copying the data into */ + struct bio *orig_bio; + struct work_struct write_end_work; + }; +}; + +static inline unsigned int btrfs_compress_type(unsigned int type_level) +{ + return (type_level & 0xF); +} + +static inline unsigned int btrfs_compress_level(unsigned int type_level) +{ + return ((type_level & 0xF0) >> 4); +} + +int __init btrfs_init_compress(void); +void __cold btrfs_exit_compress(void); + +int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping, + u64 start, struct page **pages, + unsigned long *out_pages, + unsigned long *total_in, + unsigned long *total_out); +int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page, + unsigned long start_byte, size_t srclen, size_t destlen); +int btrfs_decompress_buf2page(const char *buf, u32 buf_len, + struct compressed_bio *cb, u32 decompressed); + +blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + unsigned int len, u64 disk_start, + unsigned int compressed_len, + struct page **compressed_pages, + unsigned int nr_pages, + blk_opf_t write_flags, + struct cgroup_subsys_state *blkcg_css, + bool writeback); +void btrfs_submit_compressed_read(struct inode *inode, struct bio *bio, + int mirror_num); + +unsigned int btrfs_compress_str2level(unsigned int type, const char *str); + +enum btrfs_compression_type { + BTRFS_COMPRESS_NONE = 0, + BTRFS_COMPRESS_ZLIB = 1, + BTRFS_COMPRESS_LZO = 2, + BTRFS_COMPRESS_ZSTD = 3, + BTRFS_NR_COMPRESS_TYPES = 4, +}; + +struct workspace_manager { + struct list_head idle_ws; + spinlock_t ws_lock; + /* Number of free workspaces */ + int free_ws; + /* Total number of allocated workspaces */ + atomic_t total_ws; + /* Waiters for a free workspace */ + wait_queue_head_t ws_wait; +}; + +struct list_head *btrfs_get_workspace(int type, unsigned int level); +void btrfs_put_workspace(int type, struct list_head *ws); + +struct btrfs_compress_op { + struct workspace_manager *workspace_manager; + /* Maximum level supported by the compression algorithm */ + unsigned int max_level; + unsigned int default_level; +}; + +/* The heuristic workspaces are managed via the 0th workspace manager */ +#define BTRFS_NR_WORKSPACE_MANAGERS BTRFS_NR_COMPRESS_TYPES + +extern const struct btrfs_compress_op btrfs_heuristic_compress; +extern const struct btrfs_compress_op btrfs_zlib_compress; +extern const struct btrfs_compress_op btrfs_lzo_compress; +extern const struct btrfs_compress_op btrfs_zstd_compress; + +const char* btrfs_compress_type2str(enum btrfs_compression_type type); +bool btrfs_compress_is_valid_type(const char *str, size_t len); + +int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end); + +int zlib_compress_pages(struct list_head *ws, struct address_space *mapping, + u64 start, struct page **pages, unsigned long *out_pages, + unsigned long *total_in, unsigned long *total_out); +int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb); +int zlib_decompress(struct list_head *ws, unsigned char *data_in, + struct page *dest_page, unsigned long start_byte, size_t srclen, + size_t destlen); +struct list_head *zlib_alloc_workspace(unsigned int level); +void zlib_free_workspace(struct list_head *ws); +struct list_head *zlib_get_workspace(unsigned int level); + +int lzo_compress_pages(struct list_head *ws, struct address_space *mapping, + u64 start, struct page **pages, unsigned long *out_pages, + unsigned long 
*total_in, unsigned long *total_out); +int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb); +int lzo_decompress(struct list_head *ws, unsigned char *data_in, + struct page *dest_page, unsigned long start_byte, size_t srclen, + size_t destlen); +struct list_head *lzo_alloc_workspace(unsigned int level); +void lzo_free_workspace(struct list_head *ws); + +int zstd_compress_pages(struct list_head *ws, struct address_space *mapping, + u64 start, struct page **pages, unsigned long *out_pages, + unsigned long *total_in, unsigned long *total_out); +int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb); +int zstd_decompress(struct list_head *ws, unsigned char *data_in, + struct page *dest_page, unsigned long start_byte, size_t srclen, + size_t destlen); +void zstd_init_workspace_manager(void); +void zstd_cleanup_workspace_manager(void); +struct list_head *zstd_alloc_workspace(unsigned int level); +void zstd_free_workspace(struct list_head *ws); +struct list_head *zstd_get_workspace(unsigned int level); +void zstd_put_workspace(struct list_head *ws); + +#endif diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 6dfc3fde..8e92bd4e 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -149,15 +149,6 @@ struct btrfs_path { sizeof(struct btrfs_item)) #define BTRFS_MAX_EXTENT_SIZE 128UL * 1024 * 1024 -typedef enum { - BTRFS_COMPRESS_NONE = 0, - BTRFS_COMPRESS_ZLIB = 1, - BTRFS_COMPRESS_LZO = 2, - BTRFS_COMPRESS_ZSTD = 3, - BTRFS_COMPRESS_TYPES = 3, - BTRFS_COMPRESS_LAST = 4, -} btrfs_compression_type; - enum btrfs_tree_block_status { BTRFS_TREE_BLOCK_CLEAN, BTRFS_TREE_BLOCK_INVALID_NRITEMS, diff --git a/kernel-shared/file.c b/kernel-shared/file.c index 59d82a1d..807ba477 100644 --- a/kernel-shared/file.c +++ b/kernel-shared/file.c @@ -21,6 +21,7 @@ #include "common/utils.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/transaction.h" +#include "compression.h" #include "kerncompat.h" /* diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c index e08c72df..e2f9f760 100644 --- a/kernel-shared/print-tree.c +++ b/kernel-shared/print-tree.c @@ -25,6 +25,7 @@ #include "kernel-shared/disk-io.h" #include "kernel-shared/print-tree.h" #include "kernel-shared/volumes.h" +#include "kernel-shared/compression.h" #include "common/utils.h" static void print_dir_item_type(struct extent_buffer *eb, From patchwork Wed Nov 23 22:37:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054421 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58DD8C4167D for ; Wed, 23 Nov 2022 22:38:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229815AbiKWWih (ORCPT ); Wed, 23 Nov 2022 17:38:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229471AbiKWWiO (ORCPT ); Wed, 23 Nov 2022 17:38:14 -0500 Received: from mail-qt1-x82b.google.com (mail-qt1-x82b.google.com [IPv6:2607:f8b0:4864:20::82b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E3527AF64 for ; Wed, 23 Nov 2022 14:38:11 -0800 (PST) Received: by mail-qt1-x82b.google.com with SMTP id c15so128966qtw.8 for ; Wed, 23 Nov 2022 14:38:11 -0800 (PST) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=meLHrAu6ext4zKwrkYJB4G1K1uSAKzXi3jfxgEgdiog=; b=QTJS3/ncejDrLRrlE8bLmd8z9bKlqGEZzRzO4n7cfAqEnEzsc6p/ZOsQyqs+wzGDlc /8gYvDSBV2mghcC6QCzezkMLawfSO7hSRloBj5w1w7+IhzWRcQq+w6yjwSNhEdJNfAq3 LdGxG+ESDNoLbzj6U458+jqgOdd0LVR77WSsnsXsE7sGdeVpgN/gdS5axymo1Rfjsn8q S1lOoJRMoEJdTT/WmEV3RFMa9v1/B9U5z27yq9p3jNoy2SxFq0RWlYvRzU5awofone4z CWyQUrhu/EezaKGXl6UYC5ENldHW71TZ+rAjPgu3jOAKZd54MyezCDCc7ODTA8YeSyPt WkaQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=meLHrAu6ext4zKwrkYJB4G1K1uSAKzXi3jfxgEgdiog=; b=2CylfbQg5j8xBB1e6kI2ftktOatKj+Uu2cAEOjSMhRS4K6/gIkKB8eiC4uqCmdFCce Ssx4WSh2P8y7rz11hYeNtkye8930PJVSzh93KbFMYyeLQYcMhYfQniOCuuyhcmfw67s/ XvSzFzt7EuyiUlJuvQvKSnEAseYXfJSDYA0KwH0co2c8WIDWUr7IlA7y28z7cWeXz9Us RtOVUH6XsDoxJJRF1fuTuD/xoRoDzAeN4KvnbPPuhuzlVDd2N2VDIttIcJsa5WoQvM6P rOD2N7FVOxzEPQ5MjeCDMzWyd4xtr8A3ohaBcijlRK7iE/UgKXGd9ShCnqNIog1NuUU6 wzuA== X-Gm-Message-State: ANoB5plaew4CRMNrmdybdXkKvFDkh19fT5x2giX5UKJFAbIHJ7r9eBcL ikkHnq4evR/IVF7iV9YxJBWwd4Cs1jxv3Q== X-Google-Smtp-Source: AA0mqf7UklyruWJ/uRV+fRFnD3XMFTO+ps6vZsUo9lfYHt2PwoCBxN9ucOuDBfS7aEqlapqww05kAw== X-Received: by 2002:ac8:7415:0:b0:3a4:a229:b974 with SMTP id p21-20020ac87415000000b003a4a229b974mr10412267qtq.255.1669243088180; Wed, 23 Nov 2022 14:38:08 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id d7-20020ac86147000000b003a5c60686b0sm10591503qtm.22.2022.11.23.14.38.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:07 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 22/29] btrfs-progs: sync messages.* from the kernel Date: Wed, 23 Nov 2022 17:37:30 -0500 Message-Id: <704447b05b6d4de3ac73e31d250bf03d486b0766.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org These are the printk helpers from the kernel. There were a few modifications, the hi-lights are - We do not have fs_info::fs_state, so that needed to be removed. - We do not have discard.h sync'ed yet, so that dependency was dropped. - Anything related to struct super_block was commented out. - The transaction abort had to be modified to fit with the current btrfs-progs code. Additionally there were kerncompat.h changes that needed to be made to handle the dependencies properly. Those are easier to spot. Any function that needed to be modified has a MODIFIED tag in the comment section with a list of things that were changed. 
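As a rough, standalone illustration (not part of the patch) of how the kerncompat.h shims let the kernel-style helpers work in userspace: with the KERN_* prefixes defined as empty strings and printk() routed to fprintf(stderr), a btrfs_err()-style macro expands to an ordinary variadic call. The demo_* names and the device_name field below are invented for this sketch and do not match the real structures.

/* messages-shim-demo.c - minimal sketch, build with: gcc -Wall messages-shim-demo.c */
#include <stdio.h>
#include <stdarg.h>

/* Userspace stand-ins in the spirit of the kerncompat.h changes. */
#define KERN_ERR ""
#define printk(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)

struct demo_fs_info {
	const char *device_name;	/* invented field, for the demo only */
};

/* Greatly simplified: no ratelimiting, no fs_state string, no log levels. */
static void demo_btrfs_printk(const struct demo_fs_info *fs_info,
			      const char *fmt, ...)
{
	va_list args;

	va_start(args, fmt);
	fprintf(stderr, "BTRFS (device %s): ",
		fs_info ? fs_info->device_name : "<unknown>");
	vfprintf(stderr, fmt, args);
	fputc('\n', stderr);
	va_end(args);
}

/* The empty KERN_ERR prefix concatenates away, so call sites look kernel-like. */
#define demo_btrfs_err(fs_info, fmt, ...) \
	demo_btrfs_printk(fs_info, KERN_ERR fmt, ##__VA_ARGS__)

int main(void)
{
	struct demo_fs_info fs_info = { .device_name = "/dev/sdx" };

	printk("plain printk maps straight to fprintf(stderr)\n");
	demo_btrfs_err(&fs_info, "forced readonly (error %d)", -30);
	return 0;
}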
Signed-off-by: Josef Bacik --- Makefile | 1 + btrfs-corrupt-block.c | 1 + btrfstune.c | 1 + check/clear-cache.c | 1 + check/main.c | 1 + check/mode-common.c | 1 + check/mode-lowmem.c | 1 + cmds/filesystem-du.c | 1 + cmds/filesystem-usage.c | 1 + cmds/qgroup.c | 1 + cmds/replace.c | 1 + cmds/rescue-chunk-recover.c | 1 + cmds/rescue.c | 1 + cmds/subvolume-list.c | 1 + common/units.c | 1 + convert/common.c | 1 + convert/main.c | 1 + convert/source-ext2.c | 1 + image/main.c | 1 + kerncompat.h | 44 ++-- kernel-shared/backref.c | 1 + kernel-shared/ctree.h | 2 + kernel-shared/delayed-ref.c | 1 + kernel-shared/extent_io.c | 1 + kernel-shared/free-space-tree.c | 1 + kernel-shared/messages.c | 372 ++++++++++++++++++++++++++++++++ kernel-shared/messages.h | 253 ++++++++++++++++++++++ kernel-shared/transaction.c | 5 - kernel-shared/transaction.h | 1 - kernel-shared/ulist.c | 1 + kernel-shared/zoned.h | 1 + libbtrfs/ctree.h | 1 + mkfs/main.c | 1 + 33 files changed, 678 insertions(+), 26 deletions(-) create mode 100644 kernel-shared/messages.c create mode 100644 kernel-shared/messages.h diff --git a/Makefile b/Makefile index f3a7ce95..3d209a20 100644 --- a/Makefile +++ b/Makefile @@ -166,6 +166,7 @@ objects = \ kernel-shared/free-space-tree.o \ kernel-shared/inode-item.o \ kernel-shared/inode.o \ + kernel-shared/messages.o \ kernel-shared/print-tree.o \ kernel-shared/root-tree.o \ kernel-shared/transaction.o \ diff --git a/btrfs-corrupt-block.c b/btrfs-corrupt-block.c index 33e3f85d..29915f47 100644 --- a/btrfs-corrupt-block.c +++ b/btrfs-corrupt-block.c @@ -28,6 +28,7 @@ #include "kernel-shared/disk-io.h" #include "kernel-shared/transaction.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/messages.h" #include "common/utils.h" #include "common/help.h" #include "common/extent-cache.h" diff --git a/btrfstune.c b/btrfstune.c index 8dd32129..0ad7275c 100644 --- a/btrfstune.c +++ b/btrfstune.c @@ -31,6 +31,7 @@ #include "kernel-shared/transaction.h" #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/messages.h" #include "common/defs.h" #include "common/utils.h" #include "common/extent-cache.h" diff --git a/check/clear-cache.c b/check/clear-cache.c index 0a3001a4..c4ee6b33 100644 --- a/check/clear-cache.c +++ b/check/clear-cache.c @@ -21,6 +21,7 @@ #include "kernel-shared/free-space-tree.h" #include "kernel-shared/volumes.h" #include "kernel-shared/transaction.h" +#include "kernel-shared/messages.h" #include "common/internal.h" #include "common/messages.h" #include "check/common.h" diff --git a/check/main.c b/check/main.c index 4af6cd4e..bce91451 100644 --- a/check/main.c +++ b/check/main.c @@ -41,6 +41,7 @@ #include "kernel-shared/free-space-tree.h" #include "kernel-shared/backref.h" #include "kernel-shared/ulist.h" +#include "kernel-shared/messages.h" #include "common/defs.h" #include "common/extent-cache.h" #include "common/internal.h" diff --git a/check/mode-common.c b/check/mode-common.c index c8ac235d..a49755da 100644 --- a/check/mode-common.c +++ b/check/mode-common.c @@ -28,6 +28,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/backref.h" #include "kernel-shared/compression.h" +#include "kernel-shared/messages.h" #include "common/internal.h" #include "common/messages.h" #include "common/utils.h" diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index 10258d34..2b91cffe 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -30,6 +30,7 @@ #include "kernel-shared/backref.h" #include "kernel-shared/compression.h" 
#include "kernel-shared/volumes.h" +#include "kernel-shared/messages.h" #include "common/messages.h" #include "common/internal.h" #include "common/utils.h" diff --git a/cmds/filesystem-du.c b/cmds/filesystem-du.c index 98cf75eb..e22135c6 100644 --- a/cmds/filesystem-du.c +++ b/cmds/filesystem-du.c @@ -32,6 +32,7 @@ #include "kernel-lib/rbtree_types.h" #include "kernel-lib/interval_tree_generic.h" #include "kernel-shared/ctree.h" +#include "kernel-shared/messages.h" #include "common/utils.h" #include "common/open-utils.h" #include "common/units.h" diff --git a/cmds/filesystem-usage.c b/cmds/filesystem-usage.c index 5810324f..09aa1405 100644 --- a/cmds/filesystem-usage.c +++ b/cmds/filesystem-usage.c @@ -31,6 +31,7 @@ #include "kernel-shared/ctree.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/volumes.h" +#include "kernel-shared/messages.h" #include "common/utils.h" #include "common/string-table.h" #include "common/open-utils.h" diff --git a/cmds/qgroup.c b/cmds/qgroup.c index b3fd7e9f..77932330 100644 --- a/cmds/qgroup.c +++ b/cmds/qgroup.c @@ -39,6 +39,7 @@ #include "cmds/commands.h" #include "cmds/qgroup.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/messages.h" #define BTRFS_QGROUP_NFILTERS_INCREASE (2 * BTRFS_QGROUP_FILTER_MAX) #define BTRFS_QGROUP_NCOMPS_INCREASE (2 * BTRFS_QGROUP_COMP_MAX) diff --git a/cmds/replace.c b/cmds/replace.c index 077a9d04..917874ab 100644 --- a/cmds/replace.c +++ b/cmds/replace.c @@ -40,6 +40,7 @@ #include "cmds/commands.h" #include "mkfs/common.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/messages.h" static int print_replace_status(int fd, const char *path, int once); static char *time2string(char *buf, size_t s, __u64 t); diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c index a085f108..460a7f2f 100644 --- a/cmds/rescue-chunk-recover.c +++ b/cmds/rescue-chunk-recover.c @@ -38,6 +38,7 @@ #include "cmds/rescue.h" #include "check/common.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/messages.h" struct recover_control { int verbose; diff --git a/cmds/rescue.c b/cmds/rescue.c index 2536ca70..c23bd989 100644 --- a/cmds/rescue.c +++ b/cmds/rescue.c @@ -28,6 +28,7 @@ #include "kernel-shared/transaction.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/messages.h" #include "common/messages.h" #include "common/utils.h" #include "common/help.h" diff --git a/cmds/subvolume-list.c b/cmds/subvolume-list.c index 1c734f50..e4bb5898 100644 --- a/cmds/subvolume-list.c +++ b/cmds/subvolume-list.c @@ -36,6 +36,7 @@ #include "common/utils.h" #include "cmds/commands.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/messages.h" /* * Naming of options: diff --git a/common/units.c b/common/units.c index 698dc1d0..5192b6a8 100644 --- a/common/units.c +++ b/common/units.c @@ -18,6 +18,7 @@ #include #include "common/units.h" #include "common/messages.h" +#include "kernel-shared/messages.h" /* * Note: this function uses a static per-thread buffer. 
Do not call this diff --git a/convert/common.c b/convert/common.c index 228191b8..af115d14 100644 --- a/convert/common.c +++ b/convert/common.c @@ -30,6 +30,7 @@ #include "mkfs/common.h" #include "convert/common.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/messages.h" #define BTRFS_CONVERT_META_GROUP_SIZE SZ_32M diff --git a/convert/main.c b/convert/main.c index c7be19f4..80b36697 100644 --- a/convert/main.c +++ b/convert/main.c @@ -99,6 +99,7 @@ #include "kernel-shared/disk-io.h" #include "kernel-shared/volumes.h" #include "kernel-shared/transaction.h" +#include "kernel-shared/messages.h" #include "crypto/crc32c.h" #include "common/defs.h" #include "common/extent-cache.h" diff --git a/convert/source-ext2.c b/convert/source-ext2.c index b0b865b9..a8b33317 100644 --- a/convert/source-ext2.c +++ b/convert/source-ext2.c @@ -27,6 +27,7 @@ #include #include "kernel-lib/sizes.h" #include "kernel-shared/transaction.h" +#include "kernel-shared/messages.h" #include "common/extent-cache.h" #include "common/messages.h" #include "convert/common.h" diff --git a/image/main.c b/image/main.c index 6a1bcd42..6bdb5d66 100644 --- a/image/main.c +++ b/image/main.c @@ -39,6 +39,7 @@ #include "kernel-shared/transaction.h" #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/messages.h" #include "crypto/crc32c.h" #include "common/internal.h" #include "common/messages.h" diff --git a/kerncompat.h b/kerncompat.h index dedcf5f0..59beb4f4 100644 --- a/kerncompat.h +++ b/kerncompat.h @@ -35,6 +35,8 @@ #include #include #include +#include +#include #include @@ -314,6 +316,12 @@ static inline int IS_ERR_OR_NULL(const void *ptr) #define printk(fmt, args...) fprintf(stderr, fmt, ##args) #define KERN_CRIT "" #define KERN_ERR "" +#define KERN_EMERG "" +#define KERN_ALERT "" +#define KERN_CRIT "" +#define KERN_NOTICE "" +#define KERN_INFO "" +#define KERN_WARNING "" /* * kmalloc/kfree @@ -329,26 +337,6 @@ static inline int IS_ERR_OR_NULL(const void *ptr) #define memalloc_nofs_save() (0) #define memalloc_nofs_restore(x) ((void)(x)) -#ifndef BTRFS_DISABLE_BACKTRACE -static inline void assert_trace(const char *assertion, const char *filename, - const char *func, unsigned line, long val) -{ - if (val) - return; - fprintf(stderr, - "%s:%d: %s: Assertion `%s` failed, value %ld\n", - filename, line, func, assertion, val); -#ifndef BTRFS_DISABLE_BACKTRACE - print_trace(); -#endif - abort(); - exit(1); -} -#define ASSERT(c) assert_trace(#c, __FILE__, __func__, __LINE__, (long)(c)) -#else -#define ASSERT(c) assert(c) -#endif - #define BUG_ON(c) bugon_trace(#c, __FILE__, __func__, __LINE__, (long)(c)) #define BUG() \ do { \ @@ -568,7 +556,23 @@ struct work_struct { typedef struct wait_queue_head_s { } wait_queue_head_t; +struct super_block { + char *s_id; +}; + +struct va_format { + const char *fmt; + va_list *va; +}; + #define __init #define __cold +#define __printf(a, b) __attribute__((__format__(printf, a, b))) + +static inline bool sb_rdonly(struct super_block *sb) +{ + return false; +} + #endif diff --git a/kernel-shared/backref.c b/kernel-shared/backref.c index 9c5a3895..897cd089 100644 --- a/kernel-shared/backref.c +++ b/kernel-shared/backref.c @@ -23,6 +23,7 @@ #include "kernel-shared/ulist.h" #include "kernel-shared/transaction.h" #include "common/internal.h" +#include "messages.h" #define pr_debug(...) 
do { } while (0) diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 8e92bd4e..ef770b4d 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -372,6 +372,8 @@ struct btrfs_fs_info { u64 zone_size; u64 zoned; }; + + struct super_block *sb; }; static inline bool btrfs_is_zoned(const struct btrfs_fs_info *fs_info) diff --git a/kernel-shared/delayed-ref.c b/kernel-shared/delayed-ref.c index 5b041ac6..f148b5f2 100644 --- a/kernel-shared/delayed-ref.c +++ b/kernel-shared/delayed-ref.c @@ -20,6 +20,7 @@ #include "kernel-shared/ctree.h" #include "kernel-shared/delayed-ref.h" #include "kernel-shared/transaction.h" +#include "messages.h" /* * delayed back reference update tracking. For subvolume trees diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index 99191fe2..7074b75f 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -33,6 +33,7 @@ #include "common/utils.h" #include "common/device-utils.h" #include "common/internal.h" +#include "messages.h" static void free_extent_buffer_final(struct extent_buffer *eb); diff --git a/kernel-shared/free-space-tree.c b/kernel-shared/free-space-tree.c index 656de3fa..4064b7cb 100644 --- a/kernel-shared/free-space-tree.c +++ b/kernel-shared/free-space-tree.c @@ -24,6 +24,7 @@ #include "kernel-shared/transaction.h" #include "kernel-lib/bitops.h" #include "common/internal.h" +#include "messages.h" static struct btrfs_root *btrfs_free_space_root(struct btrfs_fs_info *fs_info, struct btrfs_block_group *block_group) diff --git a/kernel-shared/messages.c b/kernel-shared/messages.c new file mode 100644 index 00000000..e8ba1df8 --- /dev/null +++ b/kernel-shared/messages.c @@ -0,0 +1,372 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "kerncompat.h" +#include "kernel-lib/bitops.h" +#include "ctree.h" +#include "messages.h" +#include "transaction.h" + +#ifdef CONFIG_PRINTK + +#define STATE_STRING_PREFACE ": state " +#define STATE_STRING_BUF_LEN (sizeof(STATE_STRING_PREFACE) + BTRFS_FS_STATE_COUNT) + +/* + * Characters to print to indicate error conditions or uncommon filesystem state. + * RO is not an error. + */ +static const char fs_state_chars[] = { + [BTRFS_FS_STATE_ERROR] = 'E', + [BTRFS_FS_STATE_REMOUNTING] = 'M', + [BTRFS_FS_STATE_RO] = 0, + [BTRFS_FS_STATE_TRANS_ABORTED] = 'A', + [BTRFS_FS_STATE_DEV_REPLACING] = 'R', + [BTRFS_FS_STATE_DUMMY_FS_INFO] = 0, + [BTRFS_FS_STATE_NO_CSUMS] = 'C', + [BTRFS_FS_STATE_LOG_CLEANUP_ERROR] = 'L', +}; + +static void btrfs_state_to_string(const struct btrfs_fs_info *info, char *buf) +{ + unsigned int bit; + bool states_printed = false; + unsigned long fs_state = READ_ONCE(info->fs_state); + char *curr = buf; + + memcpy(curr, STATE_STRING_PREFACE, sizeof(STATE_STRING_PREFACE)); + curr += sizeof(STATE_STRING_PREFACE) - 1; + + for_each_set_bit(bit, &fs_state, sizeof(fs_state)) { + WARN_ON_ONCE(bit >= BTRFS_FS_STATE_COUNT); + if ((bit < BTRFS_FS_STATE_COUNT) && fs_state_chars[bit]) { + *curr++ = fs_state_chars[bit]; + states_printed = true; + } + } + + /* If no states were printed, reset the buffer */ + if (!states_printed) + curr = buf; + + *curr++ = 0; +} +#endif + +/* + * Generally the error codes correspond to their respective errors, but there + * are a few special cases. + * + * EUCLEAN: Any sort of corruption that we encounter. The tree-checker for + * instance will return EUCLEAN if any of the blocks are corrupted in + * a way that is problematic. We want to reserve EUCLEAN for these + * sort of corruptions. 
+ * + * EROFS: If we check BTRFS_FS_STATE_ERROR and fail out with a return error, we + * need to use EROFS for this case. We will have no idea of the + * original failure, that will have been reported at the time we tripped + * over the error. Each subsequent error that doesn't have any context + * of the original error should use EROFS when handling BTRFS_FS_STATE_ERROR. + */ +const char * __attribute_const__ btrfs_decode_error(int error) +{ + char *errstr = "unknown"; + + switch (error) { + case -ENOENT: /* -2 */ + errstr = "No such entry"; + break; + case -EIO: /* -5 */ + errstr = "IO failure"; + break; + case -ENOMEM: /* -12*/ + errstr = "Out of memory"; + break; + case -EEXIST: /* -17 */ + errstr = "Object already exists"; + break; + case -ENOSPC: /* -28 */ + errstr = "No space left"; + break; + case -EROFS: /* -30 */ + errstr = "Readonly filesystem"; + break; + case -EOPNOTSUPP: /* -95 */ + errstr = "Operation not supported"; + break; + case -EUCLEAN: /* -117 */ + errstr = "Filesystem corrupted"; + break; + case -EDQUOT: /* -122 */ + errstr = "Quota exceeded"; + break; + } + + return errstr; +} + +/* + * __btrfs_handle_fs_error decodes expected errors from the caller and + * invokes the appropriate error response. + */ +__cold +void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function, + unsigned int line, int error, const char *fmt, ...) +{ + struct super_block *sb = fs_info->sb; +#ifdef CONFIG_PRINTK + char statestr[STATE_STRING_BUF_LEN]; + const char *errstr; +#endif + +#ifdef CONFIG_PRINTK_INDEX + printk_index_subsys_emit( + "BTRFS: error (device %s%s) in %s:%d: error=%d %s", KERN_CRIT, fmt); +#endif + + /* + * Special case: if the error is EROFS, and we're already under + * SB_RDONLY, then it is safe here. + */ + if (error == -EROFS && sb_rdonly(sb)) + return; + +#ifdef CONFIG_PRINTK + errstr = btrfs_decode_error(error); + btrfs_state_to_string(fs_info, statestr); + if (fmt) { + struct va_format vaf; + va_list args; + + va_start(args, fmt); + vaf.fmt = fmt; + vaf.va = &args; + + pr_crit("BTRFS: error (device %s%s) in %s:%d: error=%d %s (%pV)\n", + sb->s_id, statestr, function, line, error, errstr, &vaf); + va_end(args); + } else { + pr_crit("BTRFS: error (device %s%s) in %s:%d: error=%d %s\n", + sb->s_id, statestr, function, line, error, errstr); + } +#endif + + /* + * We don't have fs_info::fs_state yet, and the rest of this is more + * kernel related cleanup, so for now comment it out. + */ +#if 0 + /* + * Today we only save the error info to memory. Long term we'll also + * send it down to the disk. + */ + set_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state); + + /* Don't go through full error handling during mount. */ + if (!(sb->s_flags & SB_BORN)) + return; + + if (sb_rdonly(sb)) + return; + + btrfs_discard_stop(fs_info); + + /* Handle error by forcing the filesystem readonly. */ + btrfs_set_sb_rdonly(sb); +#endif + + btrfs_info(fs_info, "forced readonly"); + /* + * Note that a running device replace operation is not canceled here + * although there is no way to update the progress. It would add the + * risk of a deadlock, therefore the canceling is omitted. The only + * penalty is that some I/O remains active until the procedure + * completes. The next time when the filesystem is mounted writable + * again, the device replace operation continues. 
+ */ +} + +#ifdef CONFIG_PRINTK +static const char * const logtypes[] = { + "emergency", + "alert", + "critical", + "error", + "warning", + "notice", + "info", + "debug", +}; + +/* + * Use one ratelimit state per log level so that a flood of less important + * messages doesn't cause more important ones to be dropped. + */ +static struct ratelimit_state printk_limits[] = { + RATELIMIT_STATE_INIT(printk_limits[0], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[1], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[2], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[3], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[4], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[5], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[6], DEFAULT_RATELIMIT_INTERVAL, 100), + RATELIMIT_STATE_INIT(printk_limits[7], DEFAULT_RATELIMIT_INTERVAL, 100), +}; + +void __cold _btrfs_printk(const struct btrfs_fs_info *fs_info, const char *fmt, ...) +{ + char lvl[PRINTK_MAX_SINGLE_HEADER_LEN + 1] = "\0"; + struct va_format vaf; + va_list args; + int kern_level; + const char *type = logtypes[4]; + struct ratelimit_state *ratelimit = &printk_limits[4]; + +#ifdef CONFIG_PRINTK_INDEX + printk_index_subsys_emit("%sBTRFS %s (device %s): ", NULL, fmt); +#endif + + va_start(args, fmt); + + while ((kern_level = printk_get_level(fmt)) != 0) { + size_t size = printk_skip_level(fmt) - fmt; + + if (kern_level >= '0' && kern_level <= '7') { + memcpy(lvl, fmt, size); + lvl[size] = '\0'; + type = logtypes[kern_level - '0']; + ratelimit = &printk_limits[kern_level - '0']; + } + fmt += size; + } + + vaf.fmt = fmt; + vaf.va = &args; + + if (__ratelimit(ratelimit)) { + if (fs_info) { + char statestr[STATE_STRING_BUF_LEN]; + + btrfs_state_to_string(fs_info, statestr); + _printk("%sBTRFS %s (device %s%s): %pV\n", lvl, type, + fs_info->sb->s_id, statestr, &vaf); + } else { + _printk("%sBTRFS %s: %pV\n", lvl, type, &vaf); + } + } + + va_end(args); +} +#endif + +#ifdef CONFIG_BTRFS_ASSERT +void __cold btrfs_assertfail(const char *expr, const char *file, int line) +{ + pr_err("assertion failed: %s, in %s:%d\n", expr, file, line); + BUG(); +} +#endif + +void __cold btrfs_print_v0_err(struct btrfs_fs_info *fs_info) +{ + btrfs_err(fs_info, +"Unsupported V0 extent filesystem detected. Aborting. Please re-create your filesystem with a newer kernel"); +} + +#if BITS_PER_LONG == 32 +void __cold btrfs_warn_32bit_limit(struct btrfs_fs_info *fs_info) +{ + if (!test_and_set_bit(BTRFS_FS_32BIT_WARN, &fs_info->flags)) { + btrfs_warn(fs_info, "reaching 32bit limit for logical addresses"); + btrfs_warn(fs_info, +"due to page cache limit on 32bit systems, btrfs can't access metadata at or beyond %lluT", + BTRFS_32BIT_MAX_FILE_SIZE >> 40); + btrfs_warn(fs_info, + "please consider upgrading to 64bit kernel/hardware"); + } +} + +void __cold btrfs_err_32bit_limit(struct btrfs_fs_info *fs_info) +{ + if (!test_and_set_bit(BTRFS_FS_32BIT_ERROR, &fs_info->flags)) { + btrfs_err(fs_info, "reached 32bit limit for logical addresses"); + btrfs_err(fs_info, +"due to page cache limit on 32bit systems, metadata beyond %lluT can't be accessed", + BTRFS_32BIT_MAX_FILE_SIZE >> 40); + btrfs_err(fs_info, + "please consider upgrading to 64bit kernel/hardware"); + } +} +#endif + +/* + * We only mark the transaction aborted and then set the file system read-only. + * This will prevent new transactions from starting or trying to join this + * one. 
+ * + * This means that error recovery at the call site is limited to freeing + * any local memory allocations and passing the error code up without + * further cleanup. The transaction should complete as it normally would + * in the call path but will return -EIO. + * + * We'll complete the cleanup in btrfs_end_transaction and + * btrfs_commit_transaction. + * + * MODIFIED: + * - We do not have trans->aborted, change to fs_info->transaction_aborted. + * - We do not have btrfs_dump_space_info_for_trans_abort(). + * - We do not have transaction_wait, transaction_blocked_wait. + */ +__cold +void __btrfs_abort_transaction(struct btrfs_trans_handle *trans, + const char *function, + unsigned int line, int error, bool first_hit) +{ + struct btrfs_fs_info *fs_info = trans->fs_info; + + fs_info->transaction_aborted = error; +#if 0 + if (first_hit && error == -ENOSPC) + btrfs_dump_space_info_for_trans_abort(fs_info); + /* Wake up anybody who may be waiting on this transaction */ + wake_up(&fs_info->transaction_wait); + wake_up(&fs_info->transaction_blocked_wait); +#endif + __btrfs_handle_fs_error(fs_info, function, line, error, NULL); +} + +/* + * __btrfs_panic decodes unexpected, fatal errors from the caller, issues an + * alert, and either panics or BUGs, depending on mount options. + * + * MODIFIED: + * - We don't have btrfs_test_opt() yet, kill that and s_id. + */ +__cold +void __btrfs_panic(struct btrfs_fs_info *fs_info, const char *function, + unsigned int line, int error, const char *fmt, ...) +{ + const char *errstr; + struct va_format vaf = { .fmt = fmt }; + va_list args; +#if 0 + char *s_id = ""; + + if (fs_info) + s_id = fs_info->sb->s_id; +#endif + + va_start(args, fmt); + vaf.va = &args; + + errstr = btrfs_decode_error(error); +#if 0 + if (fs_info && (btrfs_test_opt(fs_info, PANIC_ON_FATAL_ERROR))) + panic(KERN_CRIT "BTRFS panic (device %s) in %s:%d: %pV (error=%d %s)\n", + s_id, function, line, &vaf, error, errstr); +#endif + + btrfs_crit(fs_info, "panic in %s:%d: %pV (error=%d %s)", + function, line, &vaf, error, errstr); + va_end(args); + /* Caller calls BUG() */ +} diff --git a/kernel-shared/messages.h b/kernel-shared/messages.h new file mode 100644 index 00000000..92fa124f --- /dev/null +++ b/kernel-shared/messages.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef BTRFS_MESSAGES_H +#define BTRFS_MESSAGES_H + +#include "kerncompat.h" +#include + +struct btrfs_fs_info; +struct btrfs_trans_handle; + +static inline __printf(2, 3) __cold +void btrfs_no_printk(const struct btrfs_fs_info *fs_info, const char *fmt, ...) +{ +} + +#ifdef CONFIG_PRINTK + +#define btrfs_printk(fs_info, fmt, args...) \ + _btrfs_printk(fs_info, fmt, ##args) + +__printf(2, 3) +__cold +void _btrfs_printk(const struct btrfs_fs_info *fs_info, const char *fmt, ...); + +#else + +#define btrfs_printk(fs_info, fmt, args...) \ + btrfs_no_printk(fs_info, fmt, ##args) +#endif + +#define btrfs_emerg(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_EMERG fmt, ##args) +#define btrfs_alert(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_ALERT fmt, ##args) +#define btrfs_crit(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_CRIT fmt, ##args) +#define btrfs_err(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_ERR fmt, ##args) +#define btrfs_warn(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_WARNING fmt, ##args) +#define btrfs_notice(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_NOTICE fmt, ##args) +#define btrfs_info(fs_info, fmt, args...) 
\ + btrfs_printk(fs_info, KERN_INFO fmt, ##args) + +/* + * Wrappers that use printk_in_rcu + */ +#define btrfs_emerg_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_EMERG fmt, ##args) +#define btrfs_alert_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_ALERT fmt, ##args) +#define btrfs_crit_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_CRIT fmt, ##args) +#define btrfs_err_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_ERR fmt, ##args) +#define btrfs_warn_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_WARNING fmt, ##args) +#define btrfs_notice_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_NOTICE fmt, ##args) +#define btrfs_info_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_INFO fmt, ##args) + +/* + * Wrappers that use a ratelimited printk_in_rcu + */ +#define btrfs_emerg_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_EMERG fmt, ##args) +#define btrfs_alert_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_ALERT fmt, ##args) +#define btrfs_crit_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_CRIT fmt, ##args) +#define btrfs_err_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_ERR fmt, ##args) +#define btrfs_warn_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_WARNING fmt, ##args) +#define btrfs_notice_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_NOTICE fmt, ##args) +#define btrfs_info_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_INFO fmt, ##args) + +/* + * Wrappers that use a ratelimited printk + */ +#define btrfs_emerg_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_EMERG fmt, ##args) +#define btrfs_alert_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_ALERT fmt, ##args) +#define btrfs_crit_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_CRIT fmt, ##args) +#define btrfs_err_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_ERR fmt, ##args) +#define btrfs_warn_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_WARNING fmt, ##args) +#define btrfs_notice_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_NOTICE fmt, ##args) +#define btrfs_info_rl(fs_info, fmt, args...) \ + btrfs_printk_ratelimited(fs_info, KERN_INFO fmt, ##args) + +#if defined(CONFIG_DYNAMIC_DEBUG) +#define btrfs_debug(fs_info, fmt, args...) \ + _dynamic_func_call_no_desc(fmt, btrfs_printk, \ + fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_in_rcu(fs_info, fmt, args...) \ + _dynamic_func_call_no_desc(fmt, btrfs_printk_in_rcu, \ + fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl_in_rcu(fs_info, fmt, args...) \ + _dynamic_func_call_no_desc(fmt, btrfs_printk_rl_in_rcu, \ + fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl(fs_info, fmt, args...) \ + _dynamic_func_call_no_desc(fmt, btrfs_printk_ratelimited, \ + fs_info, KERN_DEBUG fmt, ##args) +#elif defined(DEBUG) +#define btrfs_debug(fs_info, fmt, args...) \ + btrfs_printk(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_in_rcu(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_printk_rl_in_rcu(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl(fs_info, fmt, args...) 
\ + btrfs_printk_ratelimited(fs_info, KERN_DEBUG fmt, ##args) +#else +#define btrfs_debug(fs_info, fmt, args...) \ + btrfs_no_printk(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_in_rcu(fs_info, fmt, args...) \ + btrfs_no_printk_in_rcu(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl_in_rcu(fs_info, fmt, args...) \ + btrfs_no_printk_in_rcu(fs_info, KERN_DEBUG fmt, ##args) +#define btrfs_debug_rl(fs_info, fmt, args...) \ + btrfs_no_printk(fs_info, KERN_DEBUG fmt, ##args) +#endif + +#define btrfs_printk_in_rcu(fs_info, fmt, args...) \ +do { \ + rcu_read_lock(); \ + btrfs_printk(fs_info, fmt, ##args); \ + rcu_read_unlock(); \ +} while (0) + +#define btrfs_no_printk_in_rcu(fs_info, fmt, args...) \ +do { \ + rcu_read_lock(); \ + btrfs_no_printk(fs_info, fmt, ##args); \ + rcu_read_unlock(); \ +} while (0) + +#define btrfs_printk_ratelimited(fs_info, fmt, args...) \ +do { \ + static DEFINE_RATELIMIT_STATE(_rs, \ + DEFAULT_RATELIMIT_INTERVAL, \ + DEFAULT_RATELIMIT_BURST); \ + if (__ratelimit(&_rs)) \ + btrfs_printk(fs_info, fmt, ##args); \ +} while (0) + +#define btrfs_printk_rl_in_rcu(fs_info, fmt, args...) \ +do { \ + rcu_read_lock(); \ + btrfs_printk_ratelimited(fs_info, fmt, ##args); \ + rcu_read_unlock(); \ +} while (0) + +#ifdef CONFIG_BTRFS_ASSERT +void __cold btrfs_assertfail(const char *expr, const char *file, int line); + +#define ASSERT(expr) \ + (likely(expr) ? (void)0 : btrfs_assertfail(#expr, __FILE__, __LINE__)) +#else +#define ASSERT(expr) (void)(expr) +#endif + +void __cold btrfs_print_v0_err(struct btrfs_fs_info *fs_info); + +__printf(5, 6) +__cold +void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function, + unsigned int line, int error, const char *fmt, ...); + +const char * __attribute_const__ btrfs_decode_error(int error); + +__cold +void __btrfs_abort_transaction(struct btrfs_trans_handle *trans, + const char *function, + unsigned int line, int error, bool first_hit); + +/* + * Call btrfs_abort_transaction as early as possible when an error condition is + * detected, that way the exact line number is reported. + * + * MODIFIED: + * - We do not have fs_info->fs_state. + * - We do not have test_and_set_bit. + */ +#if 0 +#define btrfs_abort_transaction(trans, error) \ +do { \ + bool first = false; \ + /* Report first abort since mount */ \ + if (!test_and_set_bit(BTRFS_FS_STATE_TRANS_ABORTED, \ + &((trans)->fs_info->fs_state))) { \ + first = true; \ + if ((error) != -EIO && (error) != -EROFS) { \ + WARN(1, KERN_DEBUG \ + "BTRFS: Transaction aborted (error %d)\n", \ + (error)); \ + } else { \ + btrfs_debug((trans)->fs_info, \ + "Transaction aborted (error %d)", \ + (error)); \ + } \ + } \ + __btrfs_abort_transaction((trans), __func__, \ + __LINE__, (error), first); \ +} while (0) +#else +#define btrfs_abort_transaction(trans, error) \ + __btrfs_abort_transaction((trans), __func__, __LINE__, \ + (error), false) +#endif + +#define btrfs_handle_fs_error(fs_info, error, fmt, args...) \ + __btrfs_handle_fs_error((fs_info), __func__, __LINE__, \ + (error), fmt, ##args) + +__printf(5, 6) +__cold +void __btrfs_panic(struct btrfs_fs_info *fs_info, const char *function, + unsigned int line, int error, const char *fmt, ...); +/* + * If BTRFS_MOUNT_PANIC_ON_FATAL_ERROR is in mount_opt, __btrfs_panic + * will panic(). Otherwise we BUG() here. + */ +#define btrfs_panic(fs_info, error, fmt, args...) 
\ +do { \ + __btrfs_panic(fs_info, __func__, __LINE__, error, fmt, ##args); \ + BUG(); \ +} while (0) + +#if BITS_PER_LONG == 32 +#define BTRFS_32BIT_MAX_FILE_SIZE (((u64)ULONG_MAX + 1) << PAGE_SHIFT) +/* + * The warning threshold is 5/8th of the MAX_LFS_FILESIZE that limits the logical + * addresses of extents. + * + * For 4K page size it's about 10T, for 64K it's 160T. + */ +#define BTRFS_32BIT_EARLY_WARN_THRESHOLD (BTRFS_32BIT_MAX_FILE_SIZE * 5 / 8) +void btrfs_warn_32bit_limit(struct btrfs_fs_info *fs_info); +void btrfs_err_32bit_limit(struct btrfs_fs_info *fs_info); +#endif + +#endif diff --git a/kernel-shared/transaction.c b/kernel-shared/transaction.c index c1364d69..a3b67d8c 100644 --- a/kernel-shared/transaction.c +++ b/kernel-shared/transaction.c @@ -277,8 +277,3 @@ error: free(trans); return ret; } - -void btrfs_abort_transaction(struct btrfs_trans_handle *trans, int error) -{ - trans->fs_info->transaction_aborted = error; -} diff --git a/kernel-shared/transaction.h b/kernel-shared/transaction.h index 599cc954..27b27123 100644 --- a/kernel-shared/transaction.h +++ b/kernel-shared/transaction.h @@ -47,6 +47,5 @@ int commit_tree_roots(struct btrfs_trans_handle *trans, struct btrfs_fs_info *fs_info); int btrfs_commit_transaction(struct btrfs_trans_handle *trans, struct btrfs_root *root); -void btrfs_abort_transaction(struct btrfs_trans_handle *trans, int error); #endif diff --git a/kernel-shared/ulist.c b/kernel-shared/ulist.c index e193b02d..0cd4f74f 100644 --- a/kernel-shared/ulist.c +++ b/kernel-shared/ulist.c @@ -21,6 +21,7 @@ #include "kerncompat.h" #include "ulist.h" #include "kernel-shared/ctree.h" +#include "messages.h" /* * ulist is a generic data structure to hold a collection of unique u64 diff --git a/kernel-shared/zoned.h b/kernel-shared/zoned.h index cc0d6b6f..adbe144e 100644 --- a/kernel-shared/zoned.h +++ b/kernel-shared/zoned.h @@ -22,6 +22,7 @@ #include #include "kernel-shared/disk-io.h" #include "kernel-shared/volumes.h" +#include "messages.h" #ifdef BTRFS_ZONED #include diff --git a/libbtrfs/ctree.h b/libbtrfs/ctree.h index 5ae1a07d..4d4df6d3 100644 --- a/libbtrfs/ctree.h +++ b/libbtrfs/ctree.h @@ -26,6 +26,7 @@ #include "kernel-lib/rbtree.h" #include "kerncompat.h" #include "libbtrfs/ioctl.h" +#include "kernel-shared/messages.h" #else #include #include diff --git a/mkfs/main.c b/mkfs/main.c index df091b16..6d4ca540 100644 --- a/mkfs/main.c +++ b/mkfs/main.c @@ -37,6 +37,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/transaction.h" #include "kernel-shared/zoned.h" +#include "kernel-shared/messages.h" #include "crypto/crc32c.h" #include "common/defs.h" #include "common/internal.h" From patchwork Wed Nov 23 22:37:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054419 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C276C433FE for ; Wed, 23 Nov 2022 22:38:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229828AbiKWWie (ORCPT ); Wed, 23 Nov 2022 17:38:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54942 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229773AbiKWWiM (ORCPT ); Wed, 23 Nov 2022 17:38:12 -0500 Received: from mail-qk1-x731.google.com 
(mail-qk1-x731.google.com [IPv6:2607:f8b0:4864:20::731]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D5097A36B for ; Wed, 23 Nov 2022 14:38:10 -0800 (PST) Received: by mail-qk1-x731.google.com with SMTP id z1so13488252qkl.9 for ; Wed, 23 Nov 2022 14:38:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=0InXHCJtqmyPMKpy/Vu8sHtnq1JMsRpWdzO4KXBQUnw=; b=YDnFG4LkWAy7CFXooEN07uuJnqSg/nvplNMNG8F37l6lGOmZ/5c0VeJLqCyfuR8Mk9 Z/k22xYAezCbI8Loz4RmcTMq3HiOcbhe421PiJhXMnm+TJ2CrZZTGHf+LrKQF3QuoVvh QBL/mi1L68HFCR/g6NjwZYtt/B5lYV5I339L8PuUDAHcdKtyfrOrRQ4p6MxHD79OaqXb AbTrnqUiu4N8VqLhVgeWPDSCuqI8YmqSSsP6fQQ507B4uEn9rK5YYG0nW29x1xypXrid VF3uM17zGUniqn1QtslbFjJgqVe1LMfcD8uVlsu5WdpnyZkDcEX9amhJafDlpY4m4+mo fItg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0InXHCJtqmyPMKpy/Vu8sHtnq1JMsRpWdzO4KXBQUnw=; b=Ev19XvvpHJAhjOcrxtJ975V+34h2HPZO4vYN0YaVWsxyrL4uuvdsO5RvBT86MHIxOh tbBr/lw3IXLCwlILg/IhELCs+hWl8dBq0NysFsFoAUoprK77XKG3xuEZ8kc1k0KbB14F hAgkxeqe3zU+Aa0xc9Q5lG+XoYDNJcwEAgkQtBx6J3VKKahJR7lMzXIIOdEfvOU83wXl YuP3HFidaxjncuDJuQ9E7RzvQVHrtq3Aiz9nd7WuOHLB9bw9Z89YkfCH5fUeHmKYv21P EP6yM2teu8fQhoIVFHDd8IEJQSE3FkptwfRZ4RWWxHgfIjPvYZHmGsmQPQxp3JWNTqix XC7g== X-Gm-Message-State: ANoB5pmcaXB8bQ7aFK5hYQFWfNVqHswJKITwNE/LVqF7BBVR1MQ6d+JA 9sXYWDSViuHMUsS+HgVPrZ1x/sp1UX8Taw== X-Google-Smtp-Source: AA0mqf5zm4kPCKjstqIcpmYL6MCpC34Dd1RdhTXzJlebgcVG36tkSigyzWLkdLWnI13FISp0C2P31Q== X-Received: by 2002:ae9:f10c:0:b0:6ec:5496:4e17 with SMTP id k12-20020ae9f10c000000b006ec54964e17mr13360431qkg.559.1669243089369; Wed, 23 Nov 2022 14:38:09 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id q16-20020a37f710000000b006b95b0a714esm12692590qkj.17.2022.11.23.14.38.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:09 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 23/29] btrfs-progs: replace btrfs_leaf_data with btrfs_item_nr_offset Date: Wed, 23 Nov 2022 17:37:31 -0500 Message-Id: <649870cfc38fb82ca4cb34386ed26f2a44315375.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We're using btrfs_item_nr_offset(leaf, 0) to get the start of the leaf data in the kernel, we don't have btrfs_leaf_data. Replace all occurrences of btrfs_leaf_data() with btrfs_item_nr_offset(leaf, 0) in order to make syncing accessors.[ch] easier. ctree.c will be synced later, so this is simply an intermediate step. 
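To see why this substitution is mechanical: btrfs_leaf_data() returned the byte offset of the start of the item array inside a leaf, and btrfs_item_nr_offset(eb, nr) is that same offset plus nr * sizeof(struct btrfs_item), so the two expressions are identical when nr is 0. A small standalone sketch of that equivalence follows; the demo_* structures use simplified, made-up layouts and are not the real on-disk definitions.

/* leaf-offset-demo.c - minimal sketch, build with: gcc -Wall leaf-offset-demo.c */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for btrfs_header, btrfs_item and btrfs_leaf. */
struct demo_header { unsigned char csum[32]; unsigned char fsid[16]; };
struct demo_item { unsigned long long key; unsigned int offset; unsigned int size; };
struct demo_leaf { struct demo_header header; struct demo_item items[]; };

/* What the old btrfs_leaf_data() conceptually returned. */
static unsigned long demo_leaf_data(void)
{
	return offsetof(struct demo_leaf, items);
}

/* What btrfs_item_nr_offset(leaf, nr) computes for item number nr. */
static unsigned long demo_item_nr_offset(int nr)
{
	return offsetof(struct demo_leaf, items) + sizeof(struct demo_item) * nr;
}

int main(void)
{
	/* With nr == 0 both forms name the same byte offset. */
	assert(demo_leaf_data() == demo_item_nr_offset(0));
	printf("item area starts at byte %lu either way\n", demo_leaf_data());
	return 0;
}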
Signed-off-by: Josef Bacik --- btrfs-corrupt-block.c | 4 ++-- check/main.c | 4 ++-- image/main.c | 8 +++---- kernel-shared/ctree.c | 50 +++++++++++++++++++++---------------------- 4 files changed, 33 insertions(+), 33 deletions(-) diff --git a/btrfs-corrupt-block.c b/btrfs-corrupt-block.c index 29915f47..493cfc69 100644 --- a/btrfs-corrupt-block.c +++ b/btrfs-corrupt-block.c @@ -845,8 +845,8 @@ static void shift_items(struct btrfs_root *root, struct extent_buffer *eb) unsigned int data_end = btrfs_item_offset(eb, nritems - 1); /* Shift the item data up to and including slot back by shift space */ - memmove_extent_buffer(eb, btrfs_leaf_data(eb) + data_end - shift_space, - btrfs_leaf_data(eb) + data_end, + memmove_extent_buffer(eb, btrfs_item_nr_offset(eb, 0) + data_end - shift_space, + btrfs_item_nr_offset(eb, 0) + data_end, btrfs_item_offset(eb, slot - 1) - data_end); /* Now update the item pointers. */ diff --git a/check/main.c b/check/main.c index bce91451..c0863705 100644 --- a/check/main.c +++ b/check/main.c @@ -4429,8 +4429,8 @@ again: i, shift, (unsigned long long)buf->start); offset = btrfs_item_offset(buf, i); memmove_extent_buffer(buf, - btrfs_leaf_data(buf) + offset + shift, - btrfs_leaf_data(buf) + offset, + btrfs_item_nr_offset(buf, 0) + offset + shift, + btrfs_item_nr_offset(buf, 0) + offset, btrfs_item_size(buf, i)); btrfs_set_item_offset(buf, i, offset + shift); btrfs_mark_buffer_dirty(buf); diff --git a/image/main.c b/image/main.c index 6bdb5d66..5afc4b7c 100644 --- a/image/main.c +++ b/image/main.c @@ -323,7 +323,7 @@ static void zero_items(struct metadump_struct *md, u8 *dst, btrfs_item_key_to_cpu(src, &key, i); if (key.type == BTRFS_CSUM_ITEM_KEY) { size = btrfs_item_size(src, i); - memset(dst + btrfs_leaf_data(src) + + memset(dst + btrfs_item_nr_offset(src, 0) + btrfs_item_offset(src, i), 0, size); continue; } @@ -369,7 +369,7 @@ static void copy_buffer(struct metadump_struct *md, u8 *dst, size = sizeof(struct btrfs_header); memset(dst + size, 0, src->len - size); } else if (level == 0) { - size = btrfs_leaf_data(src) + + size = btrfs_item_nr_offset(src, 0) + btrfs_item_offset(src, nritems - 1) - btrfs_item_nr_offset(src, nritems); memset(dst + btrfs_item_nr_offset(src, nritems), 0, size); @@ -1248,8 +1248,8 @@ static void truncate_item(struct extent_buffer *eb, int slot, u32 new_size) btrfs_set_item_offset(eb, i, ioff + size_diff); } - memmove_extent_buffer(eb, btrfs_leaf_data(eb) + data_end + size_diff, - btrfs_leaf_data(eb) + data_end, + memmove_extent_buffer(eb, btrfs_item_nr_offset(eb, 0) + data_end + size_diff, + btrfs_item_nr_offset(eb, 0) + data_end, old_data_start + new_size - data_end); btrfs_set_item_size(eb, slot, new_size); } diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c index 9b9fc9eb..9f8bc9a5 100644 --- a/kernel-shared/ctree.c +++ b/kernel-shared/ctree.c @@ -2072,21 +2072,21 @@ static int push_leaf_right(struct btrfs_trans_handle *trans, struct btrfs_root /* make room in the right data area */ data_end = leaf_data_end(right); memmove_extent_buffer(right, - btrfs_leaf_data(right) + data_end - push_space, - btrfs_leaf_data(right) + data_end, + btrfs_item_nr_offset(right, 0) + data_end - push_space, + btrfs_item_nr_offset(right, 0) + data_end, BTRFS_LEAF_DATA_SIZE(root->fs_info) - data_end); /* copy from the left data area */ - copy_extent_buffer(right, left, btrfs_leaf_data(right) + + copy_extent_buffer(right, left, btrfs_item_nr_offset(right, 0) + BTRFS_LEAF_DATA_SIZE(root->fs_info) - push_space, - btrfs_leaf_data(left) + leaf_data_end(left), 
push_space); + btrfs_item_nr_offset(left, 0) + leaf_data_end(left), push_space); memmove_extent_buffer(right, btrfs_item_nr_offset(right, push_items), - btrfs_leaf_data(right), + btrfs_item_nr_offset(right, 0), right_nritems * sizeof(struct btrfs_item)); /* copy the items from left to right */ - copy_extent_buffer(right, left, btrfs_leaf_data(right), + copy_extent_buffer(right, left, btrfs_item_nr_offset(right, 0), btrfs_item_nr_offset(left, left_nritems - push_items), push_items * sizeof(struct btrfs_item)); @@ -2205,15 +2205,15 @@ static int push_leaf_left(struct btrfs_trans_handle *trans, struct btrfs_root /* push data from right to left */ copy_extent_buffer(left, right, btrfs_item_nr_offset(left, btrfs_header_nritems(left)), - btrfs_leaf_data(right), + btrfs_item_nr_offset(right, 0), push_items * sizeof(struct btrfs_item)); push_space = BTRFS_LEAF_DATA_SIZE(root->fs_info) - btrfs_item_offset(right, push_items -1); - copy_extent_buffer(left, right, btrfs_leaf_data(left) + + copy_extent_buffer(left, right, btrfs_item_nr_offset(left, 0) + leaf_data_end(left) - push_space, - btrfs_leaf_data(right) + + btrfs_item_nr_offset(right, 0) + btrfs_item_offset(right, push_items - 1), push_space); old_left_nritems = btrfs_header_nritems(left); @@ -2239,13 +2239,13 @@ static int push_leaf_left(struct btrfs_trans_handle *trans, struct btrfs_root if (push_items < right_nritems) { push_space = btrfs_item_offset(right, push_items - 1) - leaf_data_end(right); - memmove_extent_buffer(right, btrfs_leaf_data(right) + + memmove_extent_buffer(right, btrfs_item_nr_offset(right, 0) + BTRFS_LEAF_DATA_SIZE(root->fs_info) - push_space, - btrfs_leaf_data(right) + + btrfs_item_nr_offset(right, 0) + leaf_data_end(right), push_space); - memmove_extent_buffer(right, btrfs_leaf_data(right), + memmove_extent_buffer(right, btrfs_item_nr_offset(right, 0), btrfs_item_nr_offset(right, push_items), (btrfs_header_nritems(right) - push_items) * sizeof(struct btrfs_item)); @@ -2303,14 +2303,14 @@ static noinline int copy_for_split(struct btrfs_trans_handle *trans, btrfs_set_header_nritems(right, nritems); data_copy_size = btrfs_item_data_end(l, mid) - leaf_data_end(l); - copy_extent_buffer(right, l, btrfs_leaf_data(right), + copy_extent_buffer(right, l, btrfs_item_nr_offset(right, 0), btrfs_item_nr_offset(l, mid), nritems * sizeof(struct btrfs_item)); copy_extent_buffer(right, l, - btrfs_leaf_data(right) + + btrfs_item_nr_offset(right, 0) + BTRFS_LEAF_DATA_SIZE(root->fs_info) - data_copy_size, - btrfs_leaf_data(l) + leaf_data_end(l), data_copy_size); + btrfs_item_nr_offset(l, 0) + leaf_data_end(l), data_copy_size); rt_data_off = BTRFS_LEAF_DATA_SIZE(root->fs_info) - btrfs_item_data_end(l, mid); @@ -2662,8 +2662,8 @@ int btrfs_truncate_item(struct btrfs_path *path, u32 new_size, int from_end) /* shift the data */ if (from_end) { - memmove_extent_buffer(leaf, btrfs_leaf_data(leaf) + - data_end + size_diff, btrfs_leaf_data(leaf) + + memmove_extent_buffer(leaf, btrfs_item_nr_offset(leaf, 0) + + data_end + size_diff, btrfs_item_nr_offset(leaf, 0) + data_end, old_data_start + new_size - data_end); } else { struct btrfs_disk_key disk_key; @@ -2690,8 +2690,8 @@ int btrfs_truncate_item(struct btrfs_path *path, u32 new_size, int from_end) } } - memmove_extent_buffer(leaf, btrfs_leaf_data(leaf) + - data_end + size_diff, btrfs_leaf_data(leaf) + + memmove_extent_buffer(leaf, btrfs_item_nr_offset(leaf, 0) + + data_end + size_diff, btrfs_item_nr_offset(leaf, 0) + data_end, old_data_start - data_end); offset = btrfs_disk_key_offset(&disk_key); 
@@ -2754,8 +2754,8 @@ int btrfs_extend_item(struct btrfs_root *root, struct btrfs_path *path, } /* shift the data */ - memmove_extent_buffer(leaf, btrfs_leaf_data(leaf) + - data_end - data_size, btrfs_leaf_data(leaf) + + memmove_extent_buffer(leaf, btrfs_item_nr_offset(leaf, 0) + + data_end - data_size, btrfs_item_nr_offset(leaf, 0) + data_end, old_data - data_end); data_end = old_data; @@ -2848,8 +2848,8 @@ int btrfs_insert_empty_items(struct btrfs_trans_handle *trans, (nritems - slot) * sizeof(struct btrfs_item)); /* shift the data */ - memmove_extent_buffer(leaf, btrfs_leaf_data(leaf) + - data_end - total_data, btrfs_leaf_data(leaf) + + memmove_extent_buffer(leaf, btrfs_item_nr_offset(leaf, 0) + + data_end - total_data, btrfs_item_nr_offset(leaf, 0) + data_end, old_data - data_end); data_end = old_data; } @@ -3002,9 +3002,9 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root, if (slot + nr != nritems) { int data_end = leaf_data_end(leaf); - memmove_extent_buffer(leaf, btrfs_leaf_data(leaf) + + memmove_extent_buffer(leaf, btrfs_item_nr_offset(leaf, 0) + data_end + dsize, - btrfs_leaf_data(leaf) + data_end, + btrfs_item_nr_offset(leaf, 0) + data_end, last_off - data_end); for (i = slot + nr; i < nritems; i++) { From patchwork Wed Nov 23 22:37:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF4BCC433FE for ; Wed, 23 Nov 2022 22:38:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229814AbiKWWid (ORCPT ); Wed, 23 Nov 2022 17:38:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53712 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229774AbiKWWiM (ORCPT ); Wed, 23 Nov 2022 17:38:12 -0500 Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com [IPv6:2607:f8b0:4864:20::731]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69894183AC for ; Wed, 23 Nov 2022 14:38:11 -0800 (PST) Received: by mail-qk1-x731.google.com with SMTP id k2so13472091qkk.7 for ; Wed, 23 Nov 2022 14:38:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=BnXYrU8VIB8sygEzBVNCP/G5cdFprLNWi6nqJQSrz7g=; b=6l24NwFDlM6RQBxVKx4Qjfu0soFzG419scNriucgWaimEN21i+sSuOvND7F3q6oBlW waJoTs/wp3Cyqkt0wzSTLA6qQAZRiStn6XP3QdsQwCVP2mtnuZlpL+FLDJCi8Ku36oj/ PTKmfYq2HhNmo3Oe2b/yml9Cw4PBidFKQM7/XOM+loG/XzUXn5vSF8sGhNXWylLva+iu h7YJcTBFaMY7fbRANWaHX/rKVzf723DV2mO8RykaK7/RdA331NbP0aGLMIC7cKPPP38V K4/WBXUOsNMoBT39n/C9zH7u3sOF4QxultIey6JptDAGgOeA2/xwsYzyrqaJ0+yKdf6x IYWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BnXYrU8VIB8sygEzBVNCP/G5cdFprLNWi6nqJQSrz7g=; b=aVrctCGSloEfZ2aOBxv+RWc/9dJiHd3EnsU4OVIKnvsyCRlU97rf3lRwG7CBOGOq96 MXgysGcaE3v/j5XzQojrzOCF+EfmQL7d8OvsdpJc3pycIiGEzKSnRPABKumgHwx5MLmD 
uiJE25fiPW7lm0i/bghFnDzXQmNRO9xPiMPaUx6jddzoMGnS7dBfOxo9GTvt60VkDsrO tFCIpn6ID/pCmaAN2r8Nj1aKfFMBmfopqwbWlMfJhwlO8b10kf9bODKjSuh8wCu9attG yxlHe6VNrJcmEJMB+vr/OBFvoHdmEGn5HzyewWgeDmiPRqBdrPlC+3KqpN+Bs5aCVsA8 6GsA== X-Gm-Message-State: ANoB5pmwxt//ENxAFI7UF/cwhyQ5h6Pg2rdIfEpV18OP5gfltflPA2x5 v14IRs1ujYcnFU7Y3YjTh40G6mkzQnM7DA== X-Google-Smtp-Source: AA0mqf5jWFrsIdXlI5EXkPBr2uy5Z2/ubylbLZr3hx0JbC/R9KMMEpKvpgv2hPnwm2BsEfghOB+jew== X-Received: by 2002:a37:86c6:0:b0:6ee:96d8:962d with SMTP id i189-20020a3786c6000000b006ee96d8962dmr26251993qkd.209.1669243090786; Wed, 23 Nov 2022 14:38:10 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id m2-20020ac86882000000b00399ad646794sm10533049qtq.41.2022.11.23.14.38.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:10 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 24/29] btrfs-progs: don't use btrfs_header_csum helper Date: Wed, 23 Nov 2022 17:37:32 -0500 Message-Id: <354e57e57b2d4ada9ee8877d98dd5899813a2af5.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This is a useless helper, the csum is always the first thing in the header, simply read from offset 0. Signed-off-by: Josef Bacik --- cmds/rescue-chunk-recover.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c index 460a7f2f..e6f2b80e 100644 --- a/cmds/rescue-chunk-recover.c +++ b/cmds/rescue-chunk-recover.c @@ -94,8 +94,7 @@ static struct extent_record *btrfs_new_extent_record(struct extent_buffer *eb) rec->cache.start = btrfs_header_bytenr(eb); rec->cache.size = eb->len; rec->generation = btrfs_header_generation(eb); - read_extent_buffer(eb, rec->csum, (unsigned long)btrfs_header_csum(eb), - BTRFS_CSUM_SIZE); + read_extent_buffer(eb, rec->csum, 0, BTRFS_CSUM_SIZE); return rec; } From patchwork Wed Nov 23 22:37:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054420 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03543C4332F for ; Wed, 23 Nov 2022 22:38:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229746AbiKWWig (ORCPT ); Wed, 23 Nov 2022 17:38:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55156 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229815AbiKWWiO (ORCPT ); Wed, 23 Nov 2022 17:38:14 -0500 Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com [IPv6:2607:f8b0:4864:20::729]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 41C8D42F62 for ; Wed, 23 Nov 2022 14:38:13 -0800 (PST) Received: by mail-qk1-x729.google.com with SMTP id j26so6355254qki.10 for ; Wed, 23 Nov 2022 14:38:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=o03egyVaTEsE3CKcxFRLWKfxOgwMpxxdglBL5YDF05U=; 
From patchwork Wed Nov 23 22:37:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054420
From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 25/29] btrfs-progs: make write_extent_buffer take a const eb Date: Wed, 23 Nov 2022 17:37:33 -0500 Message-Id: X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org

This is what we do in the kernel, and while we're syncing individual files we're going to hit intermediate states where some callers pass a const eb but progs doesn't accept one. So adjust write_extent_buffer to take a const eb in order to make this less painful.
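A standalone sketch, not btrfs code, of the const pattern this patch adopts: the buffer handle is passed as const because none of its members are modified, while the data area it owns is still written, so only a cast on the data pointer is needed. The struct and function names below are made up for illustration.

#include <string.h>

struct buf {
        unsigned long len;
        char data[64];
};

/* Same shape as write_extent_buffer() after the patch: const handle,
 * cast away const only on the owned data area. */
static void write_buf(const struct buf *b, const void *src,
                      unsigned long start, unsigned long len)
{
        memcpy((char *)b->data + start, src, len);
}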
Signed-off-by: Josef Bacik --- kernel-shared/extent_io.c | 4 ++-- kernel-shared/extent_io.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index 7074b75f..6f97312b 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -1059,10 +1059,10 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dst, memcpy(dst, eb->data + start, len); } -void write_extent_buffer(struct extent_buffer *eb, const void *src, +void write_extent_buffer(const struct extent_buffer *eb, const void *src, unsigned long start, unsigned long len) { - memcpy(eb->data + start, src, len); + memcpy((void *)eb->data + start, src, len); } void copy_extent_buffer(struct extent_buffer *dst, struct extent_buffer *src, diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index 88fb6171..d824d467 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -145,7 +145,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv, unsigned long start, unsigned long len); void read_extent_buffer(const struct extent_buffer *eb, void *dst, unsigned long start, unsigned long len); -void write_extent_buffer(struct extent_buffer *eb, const void *src, +void write_extent_buffer(const struct extent_buffer *eb, const void *src, unsigned long start, unsigned long len); void copy_extent_buffer(struct extent_buffer *dst, struct extent_buffer *src, unsigned long dst_offset, unsigned long src_offset,
From patchwork Wed Nov 23 22:37:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054424
From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 26/29] btrfs-progs: sync accessors.[ch] from the kernel Date: Wed, 23 Nov 2022 17:37:34 -0500 Message-Id: <27adceccff3969d5112a4bfe4e46fa2d40f3ff86.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org

This syncs accessors.[ch] from the kernel. For the most part accessors.h will remain the same; there are just some helpers that need to be adjusted for eb->data instead of eb->pages. Additionally, accessors.c needed to be completely updated to deal with this as well. This is a set of files where we will likely only sync the header going forward, and leave the .c file in place as it needs to be specific to btrfs-progs.

This forced a few "unrelated" changes:
- Using btrfs_dir_ftype() instead of btrfs_dir_type(). This is due to the encryption changes, and was simpler to just do in this patch.
- Adjusting some of the print tree code to use the actual helpers and not the btrfs-progs ones.
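To illustrate the eb->data adjustment mentioned above: in progs the metadata block is one linear buffer rather than an array of pages, so a generated getter reduces to an unaligned little-endian load at a byte offset. A hand-expanded sketch of what BTRFS_SETGET_FUNCS(inode_size, struct btrfs_inode_item, size, 64) boils down to with the btrfs_get_64() implementation added in accessors.c below; the function name example_inode_size is made up, and get_unaligned_le64 comes from kerncompat.h:

static inline u64 example_inode_size(const struct extent_buffer *eb,
                                     const struct btrfs_inode_item *s)
{
        /* "s" is an offset into the buffer cast to a pointer type, so the
         * member's byte offset is the pointer value plus offsetof(). */
        unsigned long off = (unsigned long)s +
                            offsetof(struct btrfs_inode_item, size);

        return get_unaligned_le64(eb->data + off);
}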
Signed-off-by: Josef Bacik --- Makefile | 1 + check/main.c | 4 +- check/mode-common.c | 4 +- check/mode-lowmem.c | 6 +- cmds/restore.c | 2 +- kerncompat.h | 4 +- kernel-shared/accessors.c | 117 ++++ kernel-shared/accessors.h | 1087 ++++++++++++++++++++++++++++++++++++ kernel-shared/ctree.h | 885 +---------------------------- kernel-shared/dir-item.c | 8 +- kernel-shared/inode.c | 2 +- kernel-shared/print-tree.c | 16 +- libbtrfs/ctree.h | 14 + mkfs/common.c | 1 + 14 files changed, 1255 insertions(+), 896 deletions(-) create mode 100644 kernel-shared/accessors.c create mode 100644 kernel-shared/accessors.h diff --git a/Makefile b/Makefile index 3d209a20..d2738e44 100644 --- a/Makefile +++ b/Makefile @@ -153,6 +153,7 @@ objects = \ kernel-lib/raid56.o \ kernel-lib/rbtree.o \ kernel-lib/tables.o \ + kernel-shared/accessors.o \ kernel-shared/backref.o \ kernel-shared/ctree.o \ kernel-shared/delayed-ref.o \ diff --git a/check/main.c b/check/main.c index c0863705..5d83de64 100644 --- a/check/main.c +++ b/check/main.c @@ -1435,7 +1435,7 @@ static int process_dir_item(struct extent_buffer *eb, btrfs_dir_item_key_to_cpu(eb, di, &location); name_len = btrfs_dir_name_len(eb, di); data_len = btrfs_dir_data_len(eb, di); - filetype = btrfs_dir_type(eb, di); + filetype = btrfs_dir_ftype(eb, di); rec->found_size += name_len; if (cur + sizeof(*di) + name_len > total || @@ -2139,7 +2139,7 @@ static int add_missing_dir_index(struct btrfs_root *root, disk_key.offset = 0; btrfs_set_dir_item_key(leaf, dir_item, &disk_key); - btrfs_set_dir_type(leaf, dir_item, imode_to_type(rec->imode)); + btrfs_set_dir_flags(leaf, dir_item, imode_to_type(rec->imode)); btrfs_set_dir_data_len(leaf, dir_item, 0); btrfs_set_dir_name_len(leaf, dir_item, backref->namelen); name_ptr = (unsigned long)(dir_item + 1); diff --git a/check/mode-common.c b/check/mode-common.c index a49755da..a1d095f9 100644 --- a/check/mode-common.c +++ b/check/mode-common.c @@ -765,7 +765,7 @@ static int find_file_type_dir_index(struct btrfs_root *root, u64 ino, u64 dirid, if (location.objectid != ino || location.type != BTRFS_INODE_ITEM_KEY || location.offset != 0) goto out; - filetype = btrfs_dir_type(path.nodes[0], di); + filetype = btrfs_dir_ftype(path.nodes[0], di); if (filetype >= BTRFS_FT_MAX || filetype == BTRFS_FT_UNKNOWN) goto out; len = min_t(u32, BTRFS_NAME_LEN, @@ -824,7 +824,7 @@ static int find_file_type_dir_item(struct btrfs_root *root, u64 ino, u64 dirid, location.type != BTRFS_INODE_ITEM_KEY || location.offset != 0) continue; - filetype = btrfs_dir_type(path.nodes[0], di); + filetype = btrfs_dir_ftype(path.nodes[0], di); if (filetype >= BTRFS_FT_MAX || filetype == BTRFS_FT_UNKNOWN) continue; len = min_t(u32, BTRFS_NAME_LEN, diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index 2b91cffe..4b0c8b27 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -869,7 +869,7 @@ loop: location.offset != 0) goto next; - filetype = btrfs_dir_type(node, di); + filetype = btrfs_dir_ftype(node, di); if (file_type != filetype) goto next; @@ -967,7 +967,7 @@ static int find_dir_item(struct btrfs_root *root, struct btrfs_key *key, location.offset != location_key->offset) goto next; - filetype = btrfs_dir_type(node, di); + filetype = btrfs_dir_ftype(node, di); if (file_type != filetype) goto next; @@ -1760,7 +1760,7 @@ begin: (*size) += name_len; read_extent_buffer(node, namebuf, (unsigned long)(di + 1), len); - filetype = btrfs_dir_type(node, di); + filetype = btrfs_dir_ftype(node, di); if (di_key->type == BTRFS_DIR_ITEM_KEY && di_key->offset != 
btrfs_name_hash(namebuf, len)) { diff --git a/cmds/restore.c b/cmds/restore.c index 19df6be2..c328b075 100644 --- a/cmds/restore.c +++ b/cmds/restore.c @@ -993,7 +993,7 @@ static int search_dir(struct btrfs_root *root, struct btrfs_key *key, name_len = btrfs_dir_name_len(leaf, dir_item); read_extent_buffer(leaf, filename, name_ptr, name_len); filename[name_len] = '\0'; - type = btrfs_dir_type(leaf, dir_item); + type = btrfs_dir_ftype(leaf, dir_item); btrfs_dir_item_key_to_cpu(leaf, dir_item, &location); /* full path from root of btrfs being restored */ diff --git a/kerncompat.h b/kerncompat.h index 59beb4f4..c7d59eb8 100644 --- a/kerncompat.h +++ b/kerncompat.h @@ -499,9 +499,7 @@ struct __una_u16 { __le16 x; } __attribute__((__packed__)); struct __una_u32 { __le32 x; } __attribute__((__packed__)); struct __una_u64 { __le64 x; } __attribute__((__packed__)); -#define get_unaligned_le8(p) (*((u8 *)(p))) #define get_unaligned_8(p) (*((u8 *)(p))) -#define put_unaligned_le8(val,p) ((*((u8 *)(p))) = (val)) #define put_unaligned_8(val,p) ((*((u8 *)(p))) = (val)) #define get_unaligned_le16(p) le16_to_cpu(((const struct __una_u16 *)(p))->x) #define get_unaligned_16(p) (((const struct __una_u16 *)(p))->x) @@ -575,4 +573,6 @@ static inline bool sb_rdonly(struct super_block *sb) return false; } +#define unlikely(cond) (cond) + #endif diff --git a/kernel-shared/accessors.c b/kernel-shared/accessors.c new file mode 100644 index 00000000..06c976a6 --- /dev/null +++ b/kernel-shared/accessors.c @@ -0,0 +1,117 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2007 Oracle. All rights reserved. + */ + +#include "kerncompat.h" +#include "messages.h" +#include "ctree.h" +#include "accessors.h" + +static bool check_setget_bounds(const struct extent_buffer *eb, + const void *ptr, unsigned off, int size) +{ + const unsigned long member_offset = (unsigned long)ptr + off; + + if (unlikely(member_offset + size > eb->len)) { + btrfs_warn(eb->fs_info, + "bad eb member %s: ptr 0x%lx start %llu member offset %lu size %d", + (member_offset > eb->len ? "start" : "end"), + (unsigned long)ptr, eb->start, member_offset, size); + return false; + } + + return true; +} + +/* + * MODIFIED: + * - We don't have eb->pages. + */ +void btrfs_init_map_token(struct btrfs_map_token *token, struct extent_buffer *eb) +{ + token->eb = eb; + token->kaddr = eb->data; + token->offset = 0; +} + +/* + * MODIFIED: + * - We don't have eb->pages, simply wrap the set/get helpers. + */ + +/* + * Macro templates that define helpers to read/write extent buffer data of a + * given size, that are also used via ctree.h for access to item members by + * specialized helpers. + * + * Generic helpers: + * - btrfs_set_8 (for 8/16/32/64) + * - btrfs_get_8 (for 8/16/32/64) + * + * Generic helpers with a token (cached address of the most recently accessed + * page): + * - btrfs_set_token_8 (for 8/16/32/64) + * - btrfs_get_token_8 (for 8/16/32/64) + * + * The set/get functions handle data spanning two pages transparently, in case + * metadata block size is larger than page. Every pointer to metadata items is + * an offset into the extent buffer page array, cast to a specific type. This + * gives us all the type checking. + * + * The extent buffer pages stored in the array pages do not form a contiguous + * phyusical range, but the API functions assume the linear offset to the range + * from 0 to metadata node size. 
+ */ + +#define DEFINE_BTRFS_SETGET_BITS(bits) \ +u##bits btrfs_get_token_##bits(struct btrfs_map_token *token, \ + const void *ptr, unsigned long off) \ +{ \ + const unsigned long member_offset = (unsigned long)ptr + off; \ + const int size = sizeof(u##bits); \ + ASSERT(token); \ + ASSERT(token->kaddr); \ + ASSERT(check_setget_bounds(token->eb, ptr, off, size)); \ + return get_unaligned_le##bits(token->kaddr + member_offset); \ +} \ +u##bits btrfs_get_##bits(const struct extent_buffer *eb, \ + const void *ptr, unsigned long off) \ +{ \ + const unsigned long member_offset = (unsigned long)ptr + off; \ + const int size = sizeof(u##bits); \ + ASSERT(check_setget_bounds(eb, ptr, off, size)); \ + return get_unaligned_le##bits(eb->data + member_offset); \ +} \ +void btrfs_set_token_##bits(struct btrfs_map_token *token, \ + const void *ptr, unsigned long off, \ + u##bits val) \ +{ \ + unsigned long member_offset = (unsigned long)ptr + off; \ + const int size = sizeof(u##bits); \ + ASSERT(token); \ + ASSERT(token->kaddr); \ + ASSERT(check_setget_bounds(token->eb, ptr, off, size)); \ + put_unaligned_le##bits(val, token->kaddr + member_offset); \ +} \ +void btrfs_set_##bits(const struct extent_buffer *eb, void *ptr, \ + unsigned long off, u##bits val) \ +{ \ + unsigned long member_offset = (unsigned long)ptr + off; \ + const int size = sizeof(u##bits); \ + ASSERT(check_setget_bounds(eb, ptr, off, size)); \ + put_unaligned_le##bits(val, (void *)eb->data + member_offset); \ +} + +DEFINE_BTRFS_SETGET_BITS(8) +DEFINE_BTRFS_SETGET_BITS(16) +DEFINE_BTRFS_SETGET_BITS(32) +DEFINE_BTRFS_SETGET_BITS(64) + +void btrfs_node_key(const struct extent_buffer *eb, + struct btrfs_disk_key *disk_key, int nr) +{ + unsigned long ptr = btrfs_node_key_ptr_offset(eb, nr); + read_eb_member(eb, (struct btrfs_key_ptr *)ptr, + struct btrfs_key_ptr, key, disk_key); +} diff --git a/kernel-shared/accessors.h b/kernel-shared/accessors.h new file mode 100644 index 00000000..667dcbb8 --- /dev/null +++ b/kernel-shared/accessors.h @@ -0,0 +1,1087 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef BTRFS_ACCESSORS_H +#define BTRFS_ACCESSORS_H + +struct btrfs_map_token { + struct extent_buffer *eb; + char *kaddr; + unsigned long offset; +}; + +void btrfs_init_map_token(struct btrfs_map_token *token, struct extent_buffer *eb); + +/* + * Some macros to generate set/get functions for the struct fields. 
This + * assumes there is a lefoo_to_cpu for every type, so lets make a simple one + * for u8: + */ +#define le8_to_cpu(v) (v) +#define cpu_to_le8(v) (v) +#define __le8 u8 + +static inline u8 get_unaligned_le8(const void *p) +{ + return *(u8 *)p; +} + +static inline void put_unaligned_le8(u8 val, void *p) +{ + *(u8 *)p = val; +} + +#define read_eb_member(eb, ptr, type, member, result) (\ + read_extent_buffer(eb, (char *)(result), \ + ((unsigned long)(ptr)) + \ + offsetof(type, member), \ + sizeof(((type *)0)->member))) + +#define write_eb_member(eb, ptr, type, member, result) (\ + write_extent_buffer(eb, (char *)(result), \ + ((unsigned long)(ptr)) + \ + offsetof(type, member), \ + sizeof(((type *)0)->member))) + +#define DECLARE_BTRFS_SETGET_BITS(bits) \ +u##bits btrfs_get_token_##bits(struct btrfs_map_token *token, \ + const void *ptr, unsigned long off); \ +void btrfs_set_token_##bits(struct btrfs_map_token *token, \ + const void *ptr, unsigned long off, \ + u##bits val); \ +u##bits btrfs_get_##bits(const struct extent_buffer *eb, \ + const void *ptr, unsigned long off); \ +void btrfs_set_##bits(const struct extent_buffer *eb, void *ptr, \ + unsigned long off, u##bits val); + +DECLARE_BTRFS_SETGET_BITS(8) +DECLARE_BTRFS_SETGET_BITS(16) +DECLARE_BTRFS_SETGET_BITS(32) +DECLARE_BTRFS_SETGET_BITS(64) + +#define BTRFS_SETGET_FUNCS(name, type, member, bits) \ +static inline u##bits btrfs_##name(const struct extent_buffer *eb, \ + const type *s) \ +{ \ + static_assert(sizeof(u##bits) == sizeof(((type *)0))->member); \ + return btrfs_get_##bits(eb, s, offsetof(type, member)); \ +} \ +static inline void btrfs_set_##name(const struct extent_buffer *eb, type *s, \ + u##bits val) \ +{ \ + static_assert(sizeof(u##bits) == sizeof(((type *)0))->member); \ + btrfs_set_##bits(eb, s, offsetof(type, member), val); \ +} \ +static inline u##bits btrfs_token_##name(struct btrfs_map_token *token, \ + const type *s) \ +{ \ + static_assert(sizeof(u##bits) == sizeof(((type *)0))->member); \ + return btrfs_get_token_##bits(token, s, offsetof(type, member));\ +} \ +static inline void btrfs_set_token_##name(struct btrfs_map_token *token,\ + type *s, u##bits val) \ +{ \ + static_assert(sizeof(u##bits) == sizeof(((type *)0))->member); \ + btrfs_set_token_##bits(token, s, offsetof(type, member), val); \ +} + +/* + * MODIFIED: + * - We have eb->data, not eb->pages[0] + */ +#define BTRFS_SETGET_HEADER_FUNCS(name, type, member, bits) \ +static inline u##bits btrfs_##name(const struct extent_buffer *eb) \ +{ \ + const type *p = (type *)eb->data; \ + return get_unaligned_le##bits(&p->member); \ +} \ +static inline void btrfs_set_##name(const struct extent_buffer *eb, \ + u##bits val) \ +{ \ + type *p = (type *)eb->data; \ + put_unaligned_le##bits(val, &p->member); \ +} + +#define BTRFS_SETGET_STACK_FUNCS(name, type, member, bits) \ +static inline u##bits btrfs_##name(const type *s) \ +{ \ + return get_unaligned_le##bits(&s->member); \ +} \ +static inline void btrfs_set_##name(type *s, u##bits val) \ +{ \ + put_unaligned_le##bits(val, &s->member); \ +} + +static inline u64 btrfs_device_total_bytes(const struct extent_buffer *eb, + struct btrfs_dev_item *s) +{ + static_assert(sizeof(u64) == + sizeof(((struct btrfs_dev_item *)0))->total_bytes); + return btrfs_get_64(eb, s, offsetof(struct btrfs_dev_item, + total_bytes)); +} + +/* + * MODIFIED + * - Removed WARN_ON(!IS_ALIGNED(val, eb->fs_info->sectorsize)); + */ +static inline void btrfs_set_device_total_bytes(const struct extent_buffer *eb, + struct btrfs_dev_item *s, + u64 
val) +{ + static_assert(sizeof(u64) == + sizeof(((struct btrfs_dev_item *)0))->total_bytes); + btrfs_set_64(eb, s, offsetof(struct btrfs_dev_item, total_bytes), val); +} + +BTRFS_SETGET_FUNCS(device_type, struct btrfs_dev_item, type, 64); +BTRFS_SETGET_FUNCS(device_bytes_used, struct btrfs_dev_item, bytes_used, 64); +BTRFS_SETGET_FUNCS(device_io_align, struct btrfs_dev_item, io_align, 32); +BTRFS_SETGET_FUNCS(device_io_width, struct btrfs_dev_item, io_width, 32); +BTRFS_SETGET_FUNCS(device_start_offset, struct btrfs_dev_item, start_offset, 64); +BTRFS_SETGET_FUNCS(device_sector_size, struct btrfs_dev_item, sector_size, 32); +BTRFS_SETGET_FUNCS(device_id, struct btrfs_dev_item, devid, 64); +BTRFS_SETGET_FUNCS(device_group, struct btrfs_dev_item, dev_group, 32); +BTRFS_SETGET_FUNCS(device_seek_speed, struct btrfs_dev_item, seek_speed, 8); +BTRFS_SETGET_FUNCS(device_bandwidth, struct btrfs_dev_item, bandwidth, 8); +BTRFS_SETGET_FUNCS(device_generation, struct btrfs_dev_item, generation, 64); + +BTRFS_SETGET_STACK_FUNCS(stack_device_type, struct btrfs_dev_item, type, 64); +BTRFS_SETGET_STACK_FUNCS(stack_device_total_bytes, struct btrfs_dev_item, + total_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(stack_device_bytes_used, struct btrfs_dev_item, + bytes_used, 64); +BTRFS_SETGET_STACK_FUNCS(stack_device_io_align, struct btrfs_dev_item, + io_align, 32); +BTRFS_SETGET_STACK_FUNCS(stack_device_io_width, struct btrfs_dev_item, + io_width, 32); +BTRFS_SETGET_STACK_FUNCS(stack_device_sector_size, struct btrfs_dev_item, + sector_size, 32); +BTRFS_SETGET_STACK_FUNCS(stack_device_id, struct btrfs_dev_item, devid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_device_group, struct btrfs_dev_item, dev_group, 32); +BTRFS_SETGET_STACK_FUNCS(stack_device_seek_speed, struct btrfs_dev_item, + seek_speed, 8); +BTRFS_SETGET_STACK_FUNCS(stack_device_bandwidth, struct btrfs_dev_item, + bandwidth, 8); +BTRFS_SETGET_STACK_FUNCS(stack_device_generation, struct btrfs_dev_item, + generation, 64); + +static inline unsigned long btrfs_device_uuid(struct btrfs_dev_item *d) +{ + return (unsigned long)d + offsetof(struct btrfs_dev_item, uuid); +} + +static inline unsigned long btrfs_device_fsid(struct btrfs_dev_item *d) +{ + return (unsigned long)d + offsetof(struct btrfs_dev_item, fsid); +} + +BTRFS_SETGET_FUNCS(chunk_length, struct btrfs_chunk, length, 64); +BTRFS_SETGET_FUNCS(chunk_owner, struct btrfs_chunk, owner, 64); +BTRFS_SETGET_FUNCS(chunk_stripe_len, struct btrfs_chunk, stripe_len, 64); +BTRFS_SETGET_FUNCS(chunk_io_align, struct btrfs_chunk, io_align, 32); +BTRFS_SETGET_FUNCS(chunk_io_width, struct btrfs_chunk, io_width, 32); +BTRFS_SETGET_FUNCS(chunk_sector_size, struct btrfs_chunk, sector_size, 32); +BTRFS_SETGET_FUNCS(chunk_type, struct btrfs_chunk, type, 64); +BTRFS_SETGET_FUNCS(chunk_num_stripes, struct btrfs_chunk, num_stripes, 16); +BTRFS_SETGET_FUNCS(chunk_sub_stripes, struct btrfs_chunk, sub_stripes, 16); +BTRFS_SETGET_FUNCS(stripe_devid, struct btrfs_stripe, devid, 64); +BTRFS_SETGET_FUNCS(stripe_offset, struct btrfs_stripe, offset, 64); + +static inline char *btrfs_stripe_dev_uuid(struct btrfs_stripe *s) +{ + return (char *)s + offsetof(struct btrfs_stripe, dev_uuid); +} + +BTRFS_SETGET_STACK_FUNCS(stack_chunk_length, struct btrfs_chunk, length, 64); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_owner, struct btrfs_chunk, owner, 64); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_stripe_len, struct btrfs_chunk, + stripe_len, 64); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_io_align, struct btrfs_chunk, io_align, 32); 
+BTRFS_SETGET_STACK_FUNCS(stack_chunk_io_width, struct btrfs_chunk, io_width, 32); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_sector_size, struct btrfs_chunk, + sector_size, 32); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_type, struct btrfs_chunk, type, 64); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_num_stripes, struct btrfs_chunk, + num_stripes, 16); +BTRFS_SETGET_STACK_FUNCS(stack_chunk_sub_stripes, struct btrfs_chunk, + sub_stripes, 16); +BTRFS_SETGET_STACK_FUNCS(stack_stripe_devid, struct btrfs_stripe, devid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_stripe_offset, struct btrfs_stripe, offset, 64); + +static inline struct btrfs_stripe *btrfs_stripe_nr(struct btrfs_chunk *c, int nr) +{ + unsigned long offset = (unsigned long)c; + + offset += offsetof(struct btrfs_chunk, stripe); + offset += nr * sizeof(struct btrfs_stripe); + return (struct btrfs_stripe *)offset; +} + +static inline char *btrfs_stripe_dev_uuid_nr(struct btrfs_chunk *c, int nr) +{ + return btrfs_stripe_dev_uuid(btrfs_stripe_nr(c, nr)); +} + +static inline u64 btrfs_stripe_offset_nr(const struct extent_buffer *eb, + struct btrfs_chunk *c, int nr) +{ + return btrfs_stripe_offset(eb, btrfs_stripe_nr(c, nr)); +} + +static inline void btrfs_set_stripe_offset_nr(struct extent_buffer *eb, + struct btrfs_chunk *c, int nr, + u64 val) +{ + btrfs_set_stripe_offset(eb, btrfs_stripe_nr(c, nr), val); +} + +static inline u64 btrfs_stripe_devid_nr(const struct extent_buffer *eb, + struct btrfs_chunk *c, int nr) +{ + return btrfs_stripe_devid(eb, btrfs_stripe_nr(c, nr)); +} + +static inline void btrfs_set_stripe_devid_nr(struct extent_buffer *eb, + struct btrfs_chunk *c, int nr, + u64 val) +{ + btrfs_set_stripe_devid(eb, btrfs_stripe_nr(c, nr), val); +} + +/* struct btrfs_block_group_item */ +BTRFS_SETGET_STACK_FUNCS(stack_block_group_used, struct btrfs_block_group_item, + used, 64); +BTRFS_SETGET_FUNCS(block_group_used, struct btrfs_block_group_item, used, 64); +BTRFS_SETGET_STACK_FUNCS(stack_block_group_chunk_objectid, + struct btrfs_block_group_item, chunk_objectid, 64); + +BTRFS_SETGET_FUNCS(block_group_chunk_objectid, + struct btrfs_block_group_item, chunk_objectid, 64); +BTRFS_SETGET_FUNCS(block_group_flags, struct btrfs_block_group_item, flags, 64); +BTRFS_SETGET_STACK_FUNCS(stack_block_group_flags, + struct btrfs_block_group_item, flags, 64); + +/* struct btrfs_free_space_info */ +BTRFS_SETGET_FUNCS(free_space_extent_count, struct btrfs_free_space_info, + extent_count, 32); +BTRFS_SETGET_FUNCS(free_space_flags, struct btrfs_free_space_info, flags, 32); + +/* struct btrfs_inode_ref */ +BTRFS_SETGET_FUNCS(inode_ref_name_len, struct btrfs_inode_ref, name_len, 16); +BTRFS_SETGET_FUNCS(inode_ref_index, struct btrfs_inode_ref, index, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_ref_name_len, struct btrfs_inode_ref, name_len, 16); +BTRFS_SETGET_STACK_FUNCS(stack_inode_ref_index, struct btrfs_inode_ref, index, 64); + +/* struct btrfs_inode_extref */ +BTRFS_SETGET_FUNCS(inode_extref_parent, struct btrfs_inode_extref, + parent_objectid, 64); +BTRFS_SETGET_FUNCS(inode_extref_name_len, struct btrfs_inode_extref, + name_len, 16); +BTRFS_SETGET_FUNCS(inode_extref_index, struct btrfs_inode_extref, index, 64); + +/* struct btrfs_inode_item */ +BTRFS_SETGET_FUNCS(inode_generation, struct btrfs_inode_item, generation, 64); +BTRFS_SETGET_FUNCS(inode_sequence, struct btrfs_inode_item, sequence, 64); +BTRFS_SETGET_FUNCS(inode_transid, struct btrfs_inode_item, transid, 64); +BTRFS_SETGET_FUNCS(inode_size, struct btrfs_inode_item, size, 64); 
+BTRFS_SETGET_FUNCS(inode_nbytes, struct btrfs_inode_item, nbytes, 64); +BTRFS_SETGET_FUNCS(inode_block_group, struct btrfs_inode_item, block_group, 64); +BTRFS_SETGET_FUNCS(inode_nlink, struct btrfs_inode_item, nlink, 32); +BTRFS_SETGET_FUNCS(inode_uid, struct btrfs_inode_item, uid, 32); +BTRFS_SETGET_FUNCS(inode_gid, struct btrfs_inode_item, gid, 32); +BTRFS_SETGET_FUNCS(inode_mode, struct btrfs_inode_item, mode, 32); +BTRFS_SETGET_FUNCS(inode_rdev, struct btrfs_inode_item, rdev, 64); +BTRFS_SETGET_FUNCS(inode_flags, struct btrfs_inode_item, flags, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_generation, struct btrfs_inode_item, + generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_sequence, struct btrfs_inode_item, + sequence, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_transid, struct btrfs_inode_item, + transid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_size, struct btrfs_inode_item, size, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_nbytes, struct btrfs_inode_item, nbytes, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_block_group, struct btrfs_inode_item, + block_group, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_nlink, struct btrfs_inode_item, nlink, 32); +BTRFS_SETGET_STACK_FUNCS(stack_inode_uid, struct btrfs_inode_item, uid, 32); +BTRFS_SETGET_STACK_FUNCS(stack_inode_gid, struct btrfs_inode_item, gid, 32); +BTRFS_SETGET_STACK_FUNCS(stack_inode_mode, struct btrfs_inode_item, mode, 32); +BTRFS_SETGET_STACK_FUNCS(stack_inode_rdev, struct btrfs_inode_item, rdev, 64); +BTRFS_SETGET_STACK_FUNCS(stack_inode_flags, struct btrfs_inode_item, flags, 64); +BTRFS_SETGET_FUNCS(timespec_sec, struct btrfs_timespec, sec, 64); +BTRFS_SETGET_FUNCS(timespec_nsec, struct btrfs_timespec, nsec, 32); +BTRFS_SETGET_STACK_FUNCS(stack_timespec_sec, struct btrfs_timespec, sec, 64); +BTRFS_SETGET_STACK_FUNCS(stack_timespec_nsec, struct btrfs_timespec, nsec, 32); + +/* struct btrfs_dev_extent */ +BTRFS_SETGET_FUNCS(dev_extent_chunk_tree, struct btrfs_dev_extent, chunk_tree, 64); +BTRFS_SETGET_FUNCS(dev_extent_chunk_objectid, struct btrfs_dev_extent, + chunk_objectid, 64); +BTRFS_SETGET_FUNCS(dev_extent_chunk_offset, struct btrfs_dev_extent, + chunk_offset, 64); +BTRFS_SETGET_FUNCS(dev_extent_length, struct btrfs_dev_extent, length, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_extent_chunk_tree, struct btrfs_dev_extent, + chunk_tree, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_extent_chunk_objectid, struct btrfs_dev_extent, + chunk_objectid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_extent_chunk_offset, struct btrfs_dev_extent, + chunk_offset, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_extent_length, struct btrfs_dev_extent, length, 64); + +BTRFS_SETGET_FUNCS(extent_refs, struct btrfs_extent_item, refs, 64); +BTRFS_SETGET_FUNCS(extent_generation, struct btrfs_extent_item, generation, 64); +BTRFS_SETGET_FUNCS(extent_flags, struct btrfs_extent_item, flags, 64); + +BTRFS_SETGET_FUNCS(tree_block_level, struct btrfs_tree_block_info, level, 8); + +static inline void btrfs_tree_block_key(const struct extent_buffer *eb, + struct btrfs_tree_block_info *item, + struct btrfs_disk_key *key) +{ + read_eb_member(eb, item, struct btrfs_tree_block_info, key, key); +} + +static inline void btrfs_set_tree_block_key(const struct extent_buffer *eb, + struct btrfs_tree_block_info *item, + struct btrfs_disk_key *key) +{ + write_eb_member(eb, item, struct btrfs_tree_block_info, key, key); +} + +BTRFS_SETGET_FUNCS(extent_data_ref_root, struct btrfs_extent_data_ref, root, 64); +BTRFS_SETGET_FUNCS(extent_data_ref_objectid, struct 
btrfs_extent_data_ref, + objectid, 64); +BTRFS_SETGET_FUNCS(extent_data_ref_offset, struct btrfs_extent_data_ref, + offset, 64); +BTRFS_SETGET_FUNCS(extent_data_ref_count, struct btrfs_extent_data_ref, count, 32); + +BTRFS_SETGET_FUNCS(shared_data_ref_count, struct btrfs_shared_data_ref, count, 32); + +BTRFS_SETGET_FUNCS(extent_inline_ref_type, struct btrfs_extent_inline_ref, + type, 8); +BTRFS_SETGET_FUNCS(extent_inline_ref_offset, struct btrfs_extent_inline_ref, + offset, 64); + +static inline u32 btrfs_extent_inline_ref_size(int type) +{ + if (type == BTRFS_TREE_BLOCK_REF_KEY || + type == BTRFS_SHARED_BLOCK_REF_KEY) + return sizeof(struct btrfs_extent_inline_ref); + if (type == BTRFS_SHARED_DATA_REF_KEY) + return sizeof(struct btrfs_shared_data_ref) + + sizeof(struct btrfs_extent_inline_ref); + if (type == BTRFS_EXTENT_DATA_REF_KEY) + return sizeof(struct btrfs_extent_data_ref) + + offsetof(struct btrfs_extent_inline_ref, offset); + return 0; +} + +/* struct btrfs_node */ +BTRFS_SETGET_FUNCS(key_blockptr, struct btrfs_key_ptr, blockptr, 64); +BTRFS_SETGET_FUNCS(key_generation, struct btrfs_key_ptr, generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_key_blockptr, struct btrfs_key_ptr, blockptr, 64); +BTRFS_SETGET_STACK_FUNCS(stack_key_generation, struct btrfs_key_ptr, + generation, 64); + +static inline u64 btrfs_node_blockptr(const struct extent_buffer *eb, int nr) +{ + unsigned long ptr; + + ptr = offsetof(struct btrfs_node, ptrs) + + sizeof(struct btrfs_key_ptr) * nr; + return btrfs_key_blockptr(eb, (struct btrfs_key_ptr *)ptr); +} + +static inline void btrfs_set_node_blockptr(const struct extent_buffer *eb, + int nr, u64 val) +{ + unsigned long ptr; + + ptr = offsetof(struct btrfs_node, ptrs) + + sizeof(struct btrfs_key_ptr) * nr; + btrfs_set_key_blockptr(eb, (struct btrfs_key_ptr *)ptr, val); +} + +static inline u64 btrfs_node_ptr_generation(const struct extent_buffer *eb, int nr) +{ + unsigned long ptr; + + ptr = offsetof(struct btrfs_node, ptrs) + + sizeof(struct btrfs_key_ptr) * nr; + return btrfs_key_generation(eb, (struct btrfs_key_ptr *)ptr); +} + +static inline void btrfs_set_node_ptr_generation(const struct extent_buffer *eb, + int nr, u64 val) +{ + unsigned long ptr; + + ptr = offsetof(struct btrfs_node, ptrs) + + sizeof(struct btrfs_key_ptr) * nr; + btrfs_set_key_generation(eb, (struct btrfs_key_ptr *)ptr, val); +} + +static inline unsigned long btrfs_node_key_ptr_offset(const struct extent_buffer *eb, int nr) +{ + return offsetof(struct btrfs_node, ptrs) + + sizeof(struct btrfs_key_ptr) * nr; +} + +void btrfs_node_key(const struct extent_buffer *eb, + struct btrfs_disk_key *disk_key, int nr); + +static inline void btrfs_set_node_key(const struct extent_buffer *eb, + struct btrfs_disk_key *disk_key, int nr) +{ + unsigned long ptr; + + ptr = btrfs_node_key_ptr_offset(eb, nr); + write_eb_member(eb, (struct btrfs_key_ptr *)ptr, + struct btrfs_key_ptr, key, disk_key); +} + +/* struct btrfs_item */ +BTRFS_SETGET_FUNCS(raw_item_offset, struct btrfs_item, offset, 32); +BTRFS_SETGET_FUNCS(raw_item_size, struct btrfs_item, size, 32); +BTRFS_SETGET_STACK_FUNCS(stack_item_offset, struct btrfs_item, offset, 32); +BTRFS_SETGET_STACK_FUNCS(stack_item_size, struct btrfs_item, size, 32); + +static inline unsigned long btrfs_item_nr_offset(const struct extent_buffer *eb, int nr) +{ + return offsetof(struct btrfs_leaf, items) + + sizeof(struct btrfs_item) * nr; +} + +static inline struct btrfs_item *btrfs_item_nr(const struct extent_buffer *eb, int nr) +{ + return (struct btrfs_item 
*)btrfs_item_nr_offset(eb, nr); +} + +#define BTRFS_ITEM_SETGET_FUNCS(member) \ +static inline u32 btrfs_item_##member(const struct extent_buffer *eb, int slot) \ +{ \ + return btrfs_raw_item_##member(eb, btrfs_item_nr(eb, slot)); \ +} \ +static inline void btrfs_set_item_##member(const struct extent_buffer *eb, \ + int slot, u32 val) \ +{ \ + btrfs_set_raw_item_##member(eb, btrfs_item_nr(eb, slot), val); \ +} \ +static inline u32 btrfs_token_item_##member(struct btrfs_map_token *token, \ + int slot) \ +{ \ + struct btrfs_item *item = btrfs_item_nr(token->eb, slot); \ + return btrfs_token_raw_item_##member(token, item); \ +} \ +static inline void btrfs_set_token_item_##member(struct btrfs_map_token *token, \ + int slot, u32 val) \ +{ \ + struct btrfs_item *item = btrfs_item_nr(token->eb, slot); \ + btrfs_set_token_raw_item_##member(token, item, val); \ +} + +BTRFS_ITEM_SETGET_FUNCS(offset) +BTRFS_ITEM_SETGET_FUNCS(size); + +static inline u32 btrfs_item_data_end(const struct extent_buffer *eb, int nr) +{ + return btrfs_item_offset(eb, nr) + btrfs_item_size(eb, nr); +} + +static inline void btrfs_item_key(const struct extent_buffer *eb, + struct btrfs_disk_key *disk_key, int nr) +{ + struct btrfs_item *item = btrfs_item_nr(eb, nr); + + read_eb_member(eb, item, struct btrfs_item, key, disk_key); +} + +static inline void btrfs_set_item_key(struct extent_buffer *eb, + struct btrfs_disk_key *disk_key, int nr) +{ + struct btrfs_item *item = btrfs_item_nr(eb, nr); + + write_eb_member(eb, item, struct btrfs_item, key, disk_key); +} + +BTRFS_SETGET_FUNCS(dir_log_end, struct btrfs_dir_log_item, end, 64); + +/* struct btrfs_root_ref */ +BTRFS_SETGET_FUNCS(root_ref_dirid, struct btrfs_root_ref, dirid, 64); +BTRFS_SETGET_FUNCS(root_ref_sequence, struct btrfs_root_ref, sequence, 64); +BTRFS_SETGET_FUNCS(root_ref_name_len, struct btrfs_root_ref, name_len, 16); +BTRFS_SETGET_STACK_FUNCS(stack_root_ref_dirid, struct btrfs_root_ref, dirid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_root_ref_sequence, struct btrfs_root_ref, sequence, 64); +BTRFS_SETGET_STACK_FUNCS(stack_root_ref_name_len, struct btrfs_root_ref, name_len, 16); + +/* struct btrfs_dir_item */ +BTRFS_SETGET_FUNCS(dir_data_len, struct btrfs_dir_item, data_len, 16); +BTRFS_SETGET_FUNCS(dir_flags, struct btrfs_dir_item, type, 8); +BTRFS_SETGET_FUNCS(dir_name_len, struct btrfs_dir_item, name_len, 16); +BTRFS_SETGET_FUNCS(dir_transid, struct btrfs_dir_item, transid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dir_flags, struct btrfs_dir_item, type, 8); +BTRFS_SETGET_STACK_FUNCS(stack_dir_data_len, struct btrfs_dir_item, data_len, 16); +BTRFS_SETGET_STACK_FUNCS(stack_dir_name_len, struct btrfs_dir_item, name_len, 16); +BTRFS_SETGET_STACK_FUNCS(stack_dir_transid, struct btrfs_dir_item, transid, 64); + +static inline u8 btrfs_dir_ftype(const struct extent_buffer *eb, + const struct btrfs_dir_item *item) +{ + return btrfs_dir_flags_to_ftype(btrfs_dir_flags(eb, item)); +} + +static inline u8 btrfs_stack_dir_ftype(const struct btrfs_dir_item *item) +{ + return btrfs_dir_flags_to_ftype(btrfs_stack_dir_flags(item)); +} + +static inline void btrfs_dir_item_key(const struct extent_buffer *eb, + const struct btrfs_dir_item *item, + struct btrfs_disk_key *key) +{ + read_eb_member(eb, item, struct btrfs_dir_item, location, key); +} + +static inline void btrfs_set_dir_item_key(struct extent_buffer *eb, + struct btrfs_dir_item *item, + const struct btrfs_disk_key *key) +{ + write_eb_member(eb, item, struct btrfs_dir_item, location, key); +} + 
+BTRFS_SETGET_FUNCS(free_space_entries, struct btrfs_free_space_header, + num_entries, 64); +BTRFS_SETGET_FUNCS(free_space_bitmaps, struct btrfs_free_space_header, + num_bitmaps, 64); +BTRFS_SETGET_FUNCS(free_space_generation, struct btrfs_free_space_header, + generation, 64); + +static inline void btrfs_free_space_key(const struct extent_buffer *eb, + const struct btrfs_free_space_header *h, + struct btrfs_disk_key *key) +{ + read_eb_member(eb, h, struct btrfs_free_space_header, location, key); +} + +static inline void btrfs_set_free_space_key(struct extent_buffer *eb, + struct btrfs_free_space_header *h, + const struct btrfs_disk_key *key) +{ + write_eb_member(eb, h, struct btrfs_free_space_header, location, key); +} + +/* struct btrfs_disk_key */ +BTRFS_SETGET_STACK_FUNCS(disk_key_objectid, struct btrfs_disk_key, objectid, 64); +BTRFS_SETGET_STACK_FUNCS(disk_key_offset, struct btrfs_disk_key, offset, 64); +BTRFS_SETGET_STACK_FUNCS(disk_key_type, struct btrfs_disk_key, type, 8); + +#ifdef __LITTLE_ENDIAN + +/* + * Optimized helpers for little-endian architectures where CPU and on-disk + * structures have the same endianness and we can skip conversions. + */ + +static inline void btrfs_disk_key_to_cpu(struct btrfs_key *cpu_key, + const struct btrfs_disk_key *disk_key) +{ + memcpy(cpu_key, disk_key, sizeof(struct btrfs_key)); +} + +static inline void btrfs_cpu_key_to_disk(struct btrfs_disk_key *disk_key, + const struct btrfs_key *cpu_key) +{ + memcpy(disk_key, cpu_key, sizeof(struct btrfs_key)); +} + +static inline void btrfs_node_key_to_cpu(const struct extent_buffer *eb, + struct btrfs_key *cpu_key, int nr) +{ + struct btrfs_disk_key *disk_key = (struct btrfs_disk_key *)cpu_key; + + btrfs_node_key(eb, disk_key, nr); +} + +static inline void btrfs_item_key_to_cpu(const struct extent_buffer *eb, + struct btrfs_key *cpu_key, int nr) +{ + struct btrfs_disk_key *disk_key = (struct btrfs_disk_key *)cpu_key; + + btrfs_item_key(eb, disk_key, nr); +} + +static inline void btrfs_dir_item_key_to_cpu(const struct extent_buffer *eb, + const struct btrfs_dir_item *item, + struct btrfs_key *cpu_key) +{ + struct btrfs_disk_key *disk_key = (struct btrfs_disk_key *)cpu_key; + + btrfs_dir_item_key(eb, item, disk_key); +} + +#else + +static inline void btrfs_disk_key_to_cpu(struct btrfs_key *cpu, + const struct btrfs_disk_key *disk) +{ + cpu->offset = le64_to_cpu(disk->offset); + cpu->type = disk->type; + cpu->objectid = le64_to_cpu(disk->objectid); +} + +static inline void btrfs_cpu_key_to_disk(struct btrfs_disk_key *disk, + const struct btrfs_key *cpu) +{ + disk->offset = cpu_to_le64(cpu->offset); + disk->type = cpu->type; + disk->objectid = cpu_to_le64(cpu->objectid); +} + +static inline void btrfs_node_key_to_cpu(const struct extent_buffer *eb, + struct btrfs_key *key, int nr) +{ + struct btrfs_disk_key disk_key; + + btrfs_node_key(eb, &disk_key, nr); + btrfs_disk_key_to_cpu(key, &disk_key); +} + +static inline void btrfs_item_key_to_cpu(const struct extent_buffer *eb, + struct btrfs_key *key, int nr) +{ + struct btrfs_disk_key disk_key; + + btrfs_item_key(eb, &disk_key, nr); + btrfs_disk_key_to_cpu(key, &disk_key); +} + +static inline void btrfs_dir_item_key_to_cpu(const struct extent_buffer *eb, + const struct btrfs_dir_item *item, + struct btrfs_key *key) +{ + struct btrfs_disk_key disk_key; + + btrfs_dir_item_key(eb, item, &disk_key); + btrfs_disk_key_to_cpu(key, &disk_key); +} + +#endif + +/* struct btrfs_header */ +BTRFS_SETGET_HEADER_FUNCS(header_bytenr, struct btrfs_header, bytenr, 64); 
+BTRFS_SETGET_HEADER_FUNCS(header_generation, struct btrfs_header, generation, 64); +BTRFS_SETGET_HEADER_FUNCS(header_owner, struct btrfs_header, owner, 64); +BTRFS_SETGET_HEADER_FUNCS(header_nritems, struct btrfs_header, nritems, 32); +BTRFS_SETGET_HEADER_FUNCS(header_flags, struct btrfs_header, flags, 64); +BTRFS_SETGET_HEADER_FUNCS(header_level, struct btrfs_header, level, 8); +BTRFS_SETGET_STACK_FUNCS(stack_header_generation, struct btrfs_header, + generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_header_owner, struct btrfs_header, owner, 64); +BTRFS_SETGET_STACK_FUNCS(stack_header_nritems, struct btrfs_header, nritems, 32); +BTRFS_SETGET_STACK_FUNCS(stack_header_bytenr, struct btrfs_header, bytenr, 64); + +static inline int btrfs_header_flag(const struct extent_buffer *eb, u64 flag) +{ + return (btrfs_header_flags(eb) & flag) == flag; +} + +static inline void btrfs_set_header_flag(struct extent_buffer *eb, u64 flag) +{ + u64 flags = btrfs_header_flags(eb); + + btrfs_set_header_flags(eb, flags | flag); +} + +static inline void btrfs_clear_header_flag(struct extent_buffer *eb, u64 flag) +{ + u64 flags = btrfs_header_flags(eb); + + btrfs_set_header_flags(eb, flags & ~flag); +} + +static inline int btrfs_header_backref_rev(const struct extent_buffer *eb) +{ + u64 flags = btrfs_header_flags(eb); + + return flags >> BTRFS_BACKREF_REV_SHIFT; +} + +static inline void btrfs_set_header_backref_rev(struct extent_buffer *eb, int rev) +{ + u64 flags = btrfs_header_flags(eb); + + flags &= ~BTRFS_BACKREF_REV_MASK; + flags |= (u64)rev << BTRFS_BACKREF_REV_SHIFT; + btrfs_set_header_flags(eb, flags); +} + +static inline int btrfs_is_leaf(const struct extent_buffer *eb) +{ + return btrfs_header_level(eb) == 0; +} + +/* struct btrfs_root_item */ +BTRFS_SETGET_FUNCS(disk_root_generation, struct btrfs_root_item, generation, 64); +BTRFS_SETGET_FUNCS(disk_root_refs, struct btrfs_root_item, refs, 32); +BTRFS_SETGET_FUNCS(disk_root_bytenr, struct btrfs_root_item, bytenr, 64); +BTRFS_SETGET_FUNCS(disk_root_level, struct btrfs_root_item, level, 8); + +BTRFS_SETGET_STACK_FUNCS(root_generation, struct btrfs_root_item, generation, 64); +BTRFS_SETGET_STACK_FUNCS(root_bytenr, struct btrfs_root_item, bytenr, 64); +BTRFS_SETGET_STACK_FUNCS(root_drop_level, struct btrfs_root_item, drop_level, 8); +BTRFS_SETGET_STACK_FUNCS(root_level, struct btrfs_root_item, level, 8); +BTRFS_SETGET_STACK_FUNCS(root_dirid, struct btrfs_root_item, root_dirid, 64); +BTRFS_SETGET_STACK_FUNCS(root_refs, struct btrfs_root_item, refs, 32); +BTRFS_SETGET_STACK_FUNCS(root_flags, struct btrfs_root_item, flags, 64); +BTRFS_SETGET_STACK_FUNCS(root_used, struct btrfs_root_item, bytes_used, 64); +BTRFS_SETGET_STACK_FUNCS(root_limit, struct btrfs_root_item, byte_limit, 64); +BTRFS_SETGET_STACK_FUNCS(root_last_snapshot, struct btrfs_root_item, + last_snapshot, 64); +BTRFS_SETGET_STACK_FUNCS(root_generation_v2, struct btrfs_root_item, + generation_v2, 64); +BTRFS_SETGET_STACK_FUNCS(root_ctransid, struct btrfs_root_item, ctransid, 64); +BTRFS_SETGET_STACK_FUNCS(root_otransid, struct btrfs_root_item, otransid, 64); +BTRFS_SETGET_STACK_FUNCS(root_stransid, struct btrfs_root_item, stransid, 64); +BTRFS_SETGET_STACK_FUNCS(root_rtransid, struct btrfs_root_item, rtransid, 64); + +/* struct btrfs_root_backup */ +BTRFS_SETGET_STACK_FUNCS(backup_tree_root, struct btrfs_root_backup, + tree_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_tree_root_gen, struct btrfs_root_backup, + tree_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_tree_root_level, struct 
btrfs_root_backup, + tree_root_level, 8); + +BTRFS_SETGET_STACK_FUNCS(backup_chunk_root, struct btrfs_root_backup, + chunk_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_chunk_root_gen, struct btrfs_root_backup, + chunk_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_chunk_root_level, struct btrfs_root_backup, + chunk_root_level, 8); + +BTRFS_SETGET_STACK_FUNCS(backup_extent_root, struct btrfs_root_backup, + extent_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_extent_root_gen, struct btrfs_root_backup, + extent_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_extent_root_level, struct btrfs_root_backup, + extent_root_level, 8); + +BTRFS_SETGET_STACK_FUNCS(backup_fs_root, struct btrfs_root_backup, + fs_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_fs_root_gen, struct btrfs_root_backup, + fs_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_fs_root_level, struct btrfs_root_backup, + fs_root_level, 8); + +BTRFS_SETGET_STACK_FUNCS(backup_dev_root, struct btrfs_root_backup, + dev_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_dev_root_gen, struct btrfs_root_backup, + dev_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_dev_root_level, struct btrfs_root_backup, + dev_root_level, 8); + +BTRFS_SETGET_STACK_FUNCS(backup_csum_root, struct btrfs_root_backup, + csum_root, 64); +BTRFS_SETGET_STACK_FUNCS(backup_csum_root_gen, struct btrfs_root_backup, + csum_root_gen, 64); +BTRFS_SETGET_STACK_FUNCS(backup_csum_root_level, struct btrfs_root_backup, + csum_root_level, 8); +BTRFS_SETGET_STACK_FUNCS(backup_total_bytes, struct btrfs_root_backup, + total_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(backup_bytes_used, struct btrfs_root_backup, + bytes_used, 64); +BTRFS_SETGET_STACK_FUNCS(backup_num_devices, struct btrfs_root_backup, + num_devices, 64); + +/* struct btrfs_balance_item */ +BTRFS_SETGET_FUNCS(balance_flags, struct btrfs_balance_item, flags, 64); + +static inline void btrfs_balance_data(const struct extent_buffer *eb, + const struct btrfs_balance_item *bi, + struct btrfs_disk_balance_args *ba) +{ + read_eb_member(eb, bi, struct btrfs_balance_item, data, ba); +} + +static inline void btrfs_set_balance_data(struct extent_buffer *eb, + struct btrfs_balance_item *bi, + const struct btrfs_disk_balance_args *ba) +{ + write_eb_member(eb, bi, struct btrfs_balance_item, data, ba); +} + +static inline void btrfs_balance_meta(const struct extent_buffer *eb, + const struct btrfs_balance_item *bi, + struct btrfs_disk_balance_args *ba) +{ + read_eb_member(eb, bi, struct btrfs_balance_item, meta, ba); +} + +static inline void btrfs_set_balance_meta(struct extent_buffer *eb, + struct btrfs_balance_item *bi, + const struct btrfs_disk_balance_args *ba) +{ + write_eb_member(eb, bi, struct btrfs_balance_item, meta, ba); +} + +static inline void btrfs_balance_sys(const struct extent_buffer *eb, + const struct btrfs_balance_item *bi, + struct btrfs_disk_balance_args *ba) +{ + read_eb_member(eb, bi, struct btrfs_balance_item, sys, ba); +} + +static inline void btrfs_set_balance_sys(struct extent_buffer *eb, + struct btrfs_balance_item *bi, + const struct btrfs_disk_balance_args *ba) +{ + write_eb_member(eb, bi, struct btrfs_balance_item, sys, ba); +} + +static inline void btrfs_disk_balance_args_to_cpu(struct btrfs_balance_args *cpu, + const struct btrfs_disk_balance_args *disk) +{ + memset(cpu, 0, sizeof(*cpu)); + + cpu->profiles = le64_to_cpu(disk->profiles); + cpu->usage = le64_to_cpu(disk->usage); + cpu->devid = le64_to_cpu(disk->devid); + cpu->pstart = le64_to_cpu(disk->pstart); + cpu->pend = le64_to_cpu(disk->pend); + cpu->vstart = 
le64_to_cpu(disk->vstart); + cpu->vend = le64_to_cpu(disk->vend); + cpu->target = le64_to_cpu(disk->target); + cpu->flags = le64_to_cpu(disk->flags); + cpu->limit = le64_to_cpu(disk->limit); + cpu->stripes_min = le32_to_cpu(disk->stripes_min); + cpu->stripes_max = le32_to_cpu(disk->stripes_max); +} + +static inline void btrfs_cpu_balance_args_to_disk( + struct btrfs_disk_balance_args *disk, + const struct btrfs_balance_args *cpu) +{ + memset(disk, 0, sizeof(*disk)); + + disk->profiles = cpu_to_le64(cpu->profiles); + disk->usage = cpu_to_le64(cpu->usage); + disk->devid = cpu_to_le64(cpu->devid); + disk->pstart = cpu_to_le64(cpu->pstart); + disk->pend = cpu_to_le64(cpu->pend); + disk->vstart = cpu_to_le64(cpu->vstart); + disk->vend = cpu_to_le64(cpu->vend); + disk->target = cpu_to_le64(cpu->target); + disk->flags = cpu_to_le64(cpu->flags); + disk->limit = cpu_to_le64(cpu->limit); + disk->stripes_min = cpu_to_le32(cpu->stripes_min); + disk->stripes_max = cpu_to_le32(cpu->stripes_max); +} + +/* struct btrfs_super_block */ +BTRFS_SETGET_STACK_FUNCS(super_bytenr, struct btrfs_super_block, bytenr, 64); +BTRFS_SETGET_STACK_FUNCS(super_flags, struct btrfs_super_block, flags, 64); +BTRFS_SETGET_STACK_FUNCS(super_generation, struct btrfs_super_block, + generation, 64); +BTRFS_SETGET_STACK_FUNCS(super_root, struct btrfs_super_block, root, 64); +BTRFS_SETGET_STACK_FUNCS(super_sys_array_size, + struct btrfs_super_block, sys_chunk_array_size, 32); +BTRFS_SETGET_STACK_FUNCS(super_chunk_root_generation, + struct btrfs_super_block, chunk_root_generation, 64); +BTRFS_SETGET_STACK_FUNCS(super_root_level, struct btrfs_super_block, + root_level, 8); +BTRFS_SETGET_STACK_FUNCS(super_chunk_root, struct btrfs_super_block, + chunk_root, 64); +BTRFS_SETGET_STACK_FUNCS(super_chunk_root_level, struct btrfs_super_block, + chunk_root_level, 8); +BTRFS_SETGET_STACK_FUNCS(super_log_root, struct btrfs_super_block, log_root, 64); +BTRFS_SETGET_STACK_FUNCS(super_log_root_level, struct btrfs_super_block, + log_root_level, 8); +BTRFS_SETGET_STACK_FUNCS(super_total_bytes, struct btrfs_super_block, + total_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(super_bytes_used, struct btrfs_super_block, + bytes_used, 64); +BTRFS_SETGET_STACK_FUNCS(super_sectorsize, struct btrfs_super_block, + sectorsize, 32); +BTRFS_SETGET_STACK_FUNCS(super_nodesize, struct btrfs_super_block, + nodesize, 32); +BTRFS_SETGET_STACK_FUNCS(super_stripesize, struct btrfs_super_block, + stripesize, 32); +BTRFS_SETGET_STACK_FUNCS(super_root_dir, struct btrfs_super_block, + root_dir_objectid, 64); +BTRFS_SETGET_STACK_FUNCS(super_num_devices, struct btrfs_super_block, + num_devices, 64); +BTRFS_SETGET_STACK_FUNCS(super_compat_flags, struct btrfs_super_block, + compat_flags, 64); +BTRFS_SETGET_STACK_FUNCS(super_compat_ro_flags, struct btrfs_super_block, + compat_ro_flags, 64); +BTRFS_SETGET_STACK_FUNCS(super_incompat_flags, struct btrfs_super_block, + incompat_flags, 64); +BTRFS_SETGET_STACK_FUNCS(super_csum_type, struct btrfs_super_block, + csum_type, 16); +BTRFS_SETGET_STACK_FUNCS(super_cache_generation, struct btrfs_super_block, + cache_generation, 64); +BTRFS_SETGET_STACK_FUNCS(super_magic, struct btrfs_super_block, magic, 64); +BTRFS_SETGET_STACK_FUNCS(super_uuid_tree_generation, struct btrfs_super_block, + uuid_tree_generation, 64); +BTRFS_SETGET_STACK_FUNCS(super_nr_global_roots, struct btrfs_super_block, + nr_global_roots, 64); + +/* struct btrfs_file_extent_item */ +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_type, struct btrfs_file_extent_item, + type, 8); 
+BTRFS_SETGET_STACK_FUNCS(stack_file_extent_disk_bytenr, + struct btrfs_file_extent_item, disk_bytenr, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_offset, + struct btrfs_file_extent_item, offset, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_generation, + struct btrfs_file_extent_item, generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_num_bytes, + struct btrfs_file_extent_item, num_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_ram_bytes, + struct btrfs_file_extent_item, ram_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_disk_num_bytes, + struct btrfs_file_extent_item, disk_num_bytes, 64); +BTRFS_SETGET_STACK_FUNCS(stack_file_extent_compression, + struct btrfs_file_extent_item, compression, 8); + +BTRFS_SETGET_FUNCS(file_extent_type, struct btrfs_file_extent_item, type, 8); +BTRFS_SETGET_FUNCS(file_extent_disk_bytenr, struct btrfs_file_extent_item, + disk_bytenr, 64); +BTRFS_SETGET_FUNCS(file_extent_generation, struct btrfs_file_extent_item, + generation, 64); +BTRFS_SETGET_FUNCS(file_extent_disk_num_bytes, struct btrfs_file_extent_item, + disk_num_bytes, 64); +BTRFS_SETGET_FUNCS(file_extent_offset, struct btrfs_file_extent_item, + offset, 64); +BTRFS_SETGET_FUNCS(file_extent_num_bytes, struct btrfs_file_extent_item, + num_bytes, 64); +BTRFS_SETGET_FUNCS(file_extent_ram_bytes, struct btrfs_file_extent_item, + ram_bytes, 64); +BTRFS_SETGET_FUNCS(file_extent_compression, struct btrfs_file_extent_item, + compression, 8); +BTRFS_SETGET_FUNCS(file_extent_encryption, struct btrfs_file_extent_item, + encryption, 8); +BTRFS_SETGET_FUNCS(file_extent_other_encoding, struct btrfs_file_extent_item, + other_encoding, 16); + +/* btrfs_qgroup_status_item */ +BTRFS_SETGET_FUNCS(qgroup_status_generation, struct btrfs_qgroup_status_item, + generation, 64); +BTRFS_SETGET_FUNCS(qgroup_status_version, struct btrfs_qgroup_status_item, + version, 64); +BTRFS_SETGET_FUNCS(qgroup_status_flags, struct btrfs_qgroup_status_item, + flags, 64); +BTRFS_SETGET_FUNCS(qgroup_status_rescan, struct btrfs_qgroup_status_item, + rescan, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_generation, + struct btrfs_qgroup_status_item, generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_version, + struct btrfs_qgroup_status_item, version, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_flags, + struct btrfs_qgroup_status_item, flags, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_rescan, + struct btrfs_qgroup_status_item, rescan, 64); + +/* btrfs_qgroup_info_item */ +BTRFS_SETGET_FUNCS(qgroup_info_generation, struct btrfs_qgroup_info_item, + generation, 64); +BTRFS_SETGET_FUNCS(qgroup_info_rfer, struct btrfs_qgroup_info_item, rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_info_rfer_cmpr, struct btrfs_qgroup_info_item, + rfer_cmpr, 64); +BTRFS_SETGET_FUNCS(qgroup_info_excl, struct btrfs_qgroup_info_item, excl, 64); +BTRFS_SETGET_FUNCS(qgroup_info_excl_cmpr, struct btrfs_qgroup_info_item, + excl_cmpr, 64); + +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_generation, + struct btrfs_qgroup_info_item, generation, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer, struct btrfs_qgroup_info_item, + rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer_cmpr, + struct btrfs_qgroup_info_item, rfer_cmpr, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl, struct btrfs_qgroup_info_item, + excl, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl_cmpr, + struct btrfs_qgroup_info_item, excl_cmpr, 64); + +/* btrfs_qgroup_limit_item */ +BTRFS_SETGET_FUNCS(qgroup_limit_flags, struct 
btrfs_qgroup_limit_item, flags, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_max_rfer, struct btrfs_qgroup_limit_item, + max_rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_max_excl, struct btrfs_qgroup_limit_item, + max_excl, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_rsv_rfer, struct btrfs_qgroup_limit_item, + rsv_rfer, 64); +BTRFS_SETGET_FUNCS(qgroup_limit_rsv_excl, struct btrfs_qgroup_limit_item, + rsv_excl, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_flags, + struct btrfs_qgroup_limit_item, flags, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_rfer, + struct btrfs_qgroup_limit_item, max_rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_excl, + struct btrfs_qgroup_limit_item, max_excl, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_rfer, + struct btrfs_qgroup_limit_item, rsv_rfer, 64); +BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_excl, + struct btrfs_qgroup_limit_item, rsv_excl, 64); + +/* btrfs_dev_replace_item */ +BTRFS_SETGET_FUNCS(dev_replace_src_devid, + struct btrfs_dev_replace_item, src_devid, 64); +BTRFS_SETGET_FUNCS(dev_replace_cont_reading_from_srcdev_mode, + struct btrfs_dev_replace_item, cont_reading_from_srcdev_mode, + 64); +BTRFS_SETGET_FUNCS(dev_replace_replace_state, struct btrfs_dev_replace_item, + replace_state, 64); +BTRFS_SETGET_FUNCS(dev_replace_time_started, struct btrfs_dev_replace_item, + time_started, 64); +BTRFS_SETGET_FUNCS(dev_replace_time_stopped, struct btrfs_dev_replace_item, + time_stopped, 64); +BTRFS_SETGET_FUNCS(dev_replace_num_write_errors, struct btrfs_dev_replace_item, + num_write_errors, 64); +BTRFS_SETGET_FUNCS(dev_replace_num_uncorrectable_read_errors, + struct btrfs_dev_replace_item, num_uncorrectable_read_errors, + 64); +BTRFS_SETGET_FUNCS(dev_replace_cursor_left, struct btrfs_dev_replace_item, + cursor_left, 64); +BTRFS_SETGET_FUNCS(dev_replace_cursor_right, struct btrfs_dev_replace_item, + cursor_right, 64); + +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_src_devid, + struct btrfs_dev_replace_item, src_devid, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_cont_reading_from_srcdev_mode, + struct btrfs_dev_replace_item, + cont_reading_from_srcdev_mode, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_replace_state, + struct btrfs_dev_replace_item, replace_state, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_time_started, + struct btrfs_dev_replace_item, time_started, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_time_stopped, + struct btrfs_dev_replace_item, time_stopped, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_num_write_errors, + struct btrfs_dev_replace_item, num_write_errors, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_num_uncorrectable_read_errors, + struct btrfs_dev_replace_item, + num_uncorrectable_read_errors, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_cursor_left, + struct btrfs_dev_replace_item, cursor_left, 64); +BTRFS_SETGET_STACK_FUNCS(stack_dev_replace_cursor_right, + struct btrfs_dev_replace_item, cursor_right, 64); + +/* btrfs_verity_descriptor_item */ +BTRFS_SETGET_FUNCS(verity_descriptor_encryption, struct btrfs_verity_descriptor_item, + encryption, 8); +BTRFS_SETGET_FUNCS(verity_descriptor_size, struct btrfs_verity_descriptor_item, + size, 64); +BTRFS_SETGET_STACK_FUNCS(stack_verity_descriptor_encryption, + struct btrfs_verity_descriptor_item, encryption, 8); +BTRFS_SETGET_STACK_FUNCS(stack_verity_descriptor_size, + struct btrfs_verity_descriptor_item, size, 64); + +/* Cast into the data area of the leaf. 
*/ +#define btrfs_item_ptr(leaf, slot, type) \ + ((type *)(btrfs_item_nr_offset(leaf, 0) + btrfs_item_offset(leaf, slot))) + +#define btrfs_item_ptr_offset(leaf, slot) \ + ((unsigned long)(btrfs_item_nr_offset(leaf, 0) + btrfs_item_offset(leaf, slot))) + +#endif diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index ef770b4d..bcd426d3 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -27,6 +27,7 @@ #include "kernel-shared/extent_io.h" #include "kernel-shared/uapi/btrfs.h" #include "kernel-shared/uapi/btrfs_tree.h" +#include "accessors.h" struct btrfs_root; struct btrfs_trans_handle; @@ -624,254 +625,16 @@ static inline u32 BTRFS_MAX_XATTR_SIZE(const struct btrfs_fs_info *info) */ #define BTRFS_STRING_ITEM_KEY 253 -#define read_eb_member(eb, ptr, type, member, result) ( \ - read_extent_buffer(eb, (char *)(result), \ - ((unsigned long)(ptr)) + \ - offsetof(type, member), \ - sizeof(((type *)0)->member))) - -#define write_eb_member(eb, ptr, type, member, result) ( \ - write_extent_buffer(eb, (char *)(result), \ - ((unsigned long)(ptr)) + \ - offsetof(type, member), \ - sizeof(((type *)0)->member))) - -#define BTRFS_SETGET_HEADER_FUNCS(name, type, member, bits) \ -static inline u##bits btrfs_##name(const struct extent_buffer *eb) \ -{ \ - const struct btrfs_header *h = (struct btrfs_header *)eb->data; \ - return le##bits##_to_cpu(h->member); \ -} \ -static inline void btrfs_set_##name(struct extent_buffer *eb, \ - u##bits val) \ -{ \ - struct btrfs_header *h = (struct btrfs_header *)eb->data; \ - h->member = cpu_to_le##bits(val); \ -} - -#define BTRFS_SETGET_FUNCS(name, type, member, bits) \ -static inline u##bits btrfs_##name(const struct extent_buffer *eb, \ - const type *s) \ -{ \ - unsigned long offset = (unsigned long)s; \ - const type *p = (type *) (eb->data + offset); \ - return get_unaligned_le##bits(&p->member); \ -} \ -static inline void btrfs_set_##name(struct extent_buffer *eb, \ - type *s, u##bits val) \ -{ \ - unsigned long offset = (unsigned long)s; \ - type *p = (type *) (eb->data + offset); \ - put_unaligned_le##bits(val, &p->member); \ -} - -#define BTRFS_SETGET_STACK_FUNCS(name, type, member, bits) \ -static inline u##bits btrfs_##name(const type *s) \ -{ \ - return le##bits##_to_cpu(s->member); \ -} \ -static inline void btrfs_set_##name(type *s, u##bits val) \ -{ \ - s->member = cpu_to_le##bits(val); \ -} - -BTRFS_SETGET_FUNCS(device_type, struct btrfs_dev_item, type, 64); -BTRFS_SETGET_FUNCS(device_total_bytes, struct btrfs_dev_item, total_bytes, 64); -BTRFS_SETGET_FUNCS(device_bytes_used, struct btrfs_dev_item, bytes_used, 64); -BTRFS_SETGET_FUNCS(device_io_align, struct btrfs_dev_item, io_align, 32); -BTRFS_SETGET_FUNCS(device_io_width, struct btrfs_dev_item, io_width, 32); -BTRFS_SETGET_FUNCS(device_start_offset, struct btrfs_dev_item, - start_offset, 64); -BTRFS_SETGET_FUNCS(device_sector_size, struct btrfs_dev_item, sector_size, 32); -BTRFS_SETGET_FUNCS(device_id, struct btrfs_dev_item, devid, 64); -BTRFS_SETGET_FUNCS(device_group, struct btrfs_dev_item, dev_group, 32); -BTRFS_SETGET_FUNCS(device_seek_speed, struct btrfs_dev_item, seek_speed, 8); -BTRFS_SETGET_FUNCS(device_bandwidth, struct btrfs_dev_item, bandwidth, 8); -BTRFS_SETGET_FUNCS(device_generation, struct btrfs_dev_item, generation, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_device_type, struct btrfs_dev_item, type, 64); -BTRFS_SETGET_STACK_FUNCS(stack_device_total_bytes, struct btrfs_dev_item, - total_bytes, 64); -BTRFS_SETGET_STACK_FUNCS(stack_device_bytes_used, struct 
btrfs_dev_item, - bytes_used, 64); -BTRFS_SETGET_STACK_FUNCS(stack_device_io_align, struct btrfs_dev_item, - io_align, 32); -BTRFS_SETGET_STACK_FUNCS(stack_device_io_width, struct btrfs_dev_item, - io_width, 32); -BTRFS_SETGET_STACK_FUNCS(stack_device_sector_size, struct btrfs_dev_item, - sector_size, 32); -BTRFS_SETGET_STACK_FUNCS(stack_device_id, struct btrfs_dev_item, devid, 64); -BTRFS_SETGET_STACK_FUNCS(stack_device_group, struct btrfs_dev_item, - dev_group, 32); -BTRFS_SETGET_STACK_FUNCS(stack_device_seek_speed, struct btrfs_dev_item, - seek_speed, 8); -BTRFS_SETGET_STACK_FUNCS(stack_device_bandwidth, struct btrfs_dev_item, - bandwidth, 8); -BTRFS_SETGET_STACK_FUNCS(stack_device_generation, struct btrfs_dev_item, - generation, 64); - -static inline char *btrfs_device_uuid(struct btrfs_dev_item *d) +static inline unsigned long btrfs_header_fsid(void) { - return (char *)d + offsetof(struct btrfs_dev_item, uuid); + return offsetof(struct btrfs_header, fsid); } -static inline char *btrfs_device_fsid(struct btrfs_dev_item *d) +static inline unsigned long btrfs_header_chunk_tree_uuid(struct extent_buffer *eb) { - return (char *)d + offsetof(struct btrfs_dev_item, fsid); + return offsetof(struct btrfs_header, chunk_tree_uuid); } -BTRFS_SETGET_FUNCS(chunk_length, struct btrfs_chunk, length, 64); -BTRFS_SETGET_FUNCS(chunk_owner, struct btrfs_chunk, owner, 64); -BTRFS_SETGET_FUNCS(chunk_stripe_len, struct btrfs_chunk, stripe_len, 64); -BTRFS_SETGET_FUNCS(chunk_io_align, struct btrfs_chunk, io_align, 32); -BTRFS_SETGET_FUNCS(chunk_io_width, struct btrfs_chunk, io_width, 32); -BTRFS_SETGET_FUNCS(chunk_sector_size, struct btrfs_chunk, sector_size, 32); -BTRFS_SETGET_FUNCS(chunk_type, struct btrfs_chunk, type, 64); -BTRFS_SETGET_FUNCS(chunk_num_stripes, struct btrfs_chunk, num_stripes, 16); -BTRFS_SETGET_FUNCS(chunk_sub_stripes, struct btrfs_chunk, sub_stripes, 16); -BTRFS_SETGET_FUNCS(stripe_devid, struct btrfs_stripe, devid, 64); -BTRFS_SETGET_FUNCS(stripe_offset, struct btrfs_stripe, offset, 64); - -static inline char *btrfs_stripe_dev_uuid(struct btrfs_stripe *s) -{ - return (char *)s + offsetof(struct btrfs_stripe, dev_uuid); -} - -BTRFS_SETGET_STACK_FUNCS(stack_chunk_length, struct btrfs_chunk, length, 64); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_owner, struct btrfs_chunk, owner, 64); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_stripe_len, struct btrfs_chunk, - stripe_len, 64); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_io_align, struct btrfs_chunk, - io_align, 32); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_io_width, struct btrfs_chunk, - io_width, 32); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_sector_size, struct btrfs_chunk, - sector_size, 32); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_type, struct btrfs_chunk, type, 64); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_num_stripes, struct btrfs_chunk, - num_stripes, 16); -BTRFS_SETGET_STACK_FUNCS(stack_chunk_sub_stripes, struct btrfs_chunk, - sub_stripes, 16); -BTRFS_SETGET_STACK_FUNCS(stack_stripe_devid, struct btrfs_stripe, devid, 64); -BTRFS_SETGET_STACK_FUNCS(stack_stripe_offset, struct btrfs_stripe, offset, 64); - -static inline struct btrfs_stripe *btrfs_stripe_nr(struct btrfs_chunk *c, - int nr) -{ - unsigned long offset = (unsigned long)c; - offset += offsetof(struct btrfs_chunk, stripe); - offset += nr * sizeof(struct btrfs_stripe); - return (struct btrfs_stripe *)offset; -} - -static inline char *btrfs_stripe_dev_uuid_nr(struct btrfs_chunk *c, int nr) -{ - return btrfs_stripe_dev_uuid(btrfs_stripe_nr(c, nr)); -} - -static inline u64 btrfs_stripe_offset_nr(struct 
extent_buffer *eb, - struct btrfs_chunk *c, int nr) -{ - return btrfs_stripe_offset(eb, btrfs_stripe_nr(c, nr)); -} - -static inline void btrfs_set_stripe_offset_nr(struct extent_buffer *eb, - struct btrfs_chunk *c, int nr, - u64 val) -{ - btrfs_set_stripe_offset(eb, btrfs_stripe_nr(c, nr), val); -} - -static inline u64 btrfs_stripe_devid_nr(struct extent_buffer *eb, - struct btrfs_chunk *c, int nr) -{ - return btrfs_stripe_devid(eb, btrfs_stripe_nr(c, nr)); -} - -static inline void btrfs_set_stripe_devid_nr(struct extent_buffer *eb, - struct btrfs_chunk *c, int nr, - u64 val) -{ - btrfs_set_stripe_devid(eb, btrfs_stripe_nr(c, nr), val); -} - -/* struct btrfs_block_group_item */ -BTRFS_SETGET_STACK_FUNCS(stack_block_group_used, struct btrfs_block_group_item, - used, 64); -BTRFS_SETGET_FUNCS(block_group_used, struct btrfs_block_group_item, - used, 64); -BTRFS_SETGET_STACK_FUNCS(stack_block_group_chunk_objectid, - struct btrfs_block_group_item, chunk_objectid, 64); - -BTRFS_SETGET_FUNCS(block_group_chunk_objectid, - struct btrfs_block_group_item, chunk_objectid, 64); -BTRFS_SETGET_FUNCS(block_group_flags, - struct btrfs_block_group_item, flags, 64); -BTRFS_SETGET_STACK_FUNCS(stack_block_group_flags, - struct btrfs_block_group_item, flags, 64); - -/* extent tree v2 uses chunk_objectid for the global tree id. */ -BTRFS_SETGET_STACK_FUNCS(stack_block_group_global_tree_id, - struct btrfs_block_group_item, chunk_objectid, 64); -BTRFS_SETGET_FUNCS(block_group_global_tree_id, struct btrfs_block_group_item, - chunk_objectid, 64); - -/* struct btrfs_free_space_info */ -BTRFS_SETGET_FUNCS(free_space_extent_count, struct btrfs_free_space_info, - extent_count, 32); -BTRFS_SETGET_FUNCS(free_space_flags, struct btrfs_free_space_info, flags, 32); - -/* struct btrfs_inode_ref */ -BTRFS_SETGET_FUNCS(inode_ref_name_len, struct btrfs_inode_ref, name_len, 16); -BTRFS_SETGET_STACK_FUNCS(stack_inode_ref_name_len, struct btrfs_inode_ref, name_len, 16); -BTRFS_SETGET_FUNCS(inode_ref_index, struct btrfs_inode_ref, index, 64); - -/* struct btrfs_inode_extref */ -BTRFS_SETGET_FUNCS(inode_extref_parent, struct btrfs_inode_extref, - parent_objectid, 64); -BTRFS_SETGET_FUNCS(inode_extref_name_len, struct btrfs_inode_extref, - name_len, 16); -BTRFS_SETGET_FUNCS(inode_extref_index, struct btrfs_inode_extref, index, 64); - -/* struct btrfs_inode_item */ -BTRFS_SETGET_FUNCS(inode_generation, struct btrfs_inode_item, generation, 64); -BTRFS_SETGET_FUNCS(inode_sequence, struct btrfs_inode_item, sequence, 64); -BTRFS_SETGET_FUNCS(inode_transid, struct btrfs_inode_item, transid, 64); -BTRFS_SETGET_FUNCS(inode_size, struct btrfs_inode_item, size, 64); -BTRFS_SETGET_FUNCS(inode_nbytes, struct btrfs_inode_item, nbytes, 64); -BTRFS_SETGET_FUNCS(inode_block_group, struct btrfs_inode_item, block_group, 64); -BTRFS_SETGET_FUNCS(inode_nlink, struct btrfs_inode_item, nlink, 32); -BTRFS_SETGET_FUNCS(inode_uid, struct btrfs_inode_item, uid, 32); -BTRFS_SETGET_FUNCS(inode_gid, struct btrfs_inode_item, gid, 32); -BTRFS_SETGET_FUNCS(inode_mode, struct btrfs_inode_item, mode, 32); -BTRFS_SETGET_FUNCS(inode_rdev, struct btrfs_inode_item, rdev, 64); -BTRFS_SETGET_FUNCS(inode_flags, struct btrfs_inode_item, flags, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_inode_generation, - struct btrfs_inode_item, generation, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_sequence, - struct btrfs_inode_item, sequence, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_transid, - struct btrfs_inode_item, transid, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_size, - struct 
btrfs_inode_item, size, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_nbytes, - struct btrfs_inode_item, nbytes, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_block_group, - struct btrfs_inode_item, block_group, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_nlink, - struct btrfs_inode_item, nlink, 32); -BTRFS_SETGET_STACK_FUNCS(stack_inode_uid, - struct btrfs_inode_item, uid, 32); -BTRFS_SETGET_STACK_FUNCS(stack_inode_gid, - struct btrfs_inode_item, gid, 32); -BTRFS_SETGET_STACK_FUNCS(stack_inode_mode, - struct btrfs_inode_item, mode, 32); -BTRFS_SETGET_STACK_FUNCS(stack_inode_rdev, - struct btrfs_inode_item, rdev, 64); -BTRFS_SETGET_STACK_FUNCS(stack_inode_flags, - struct btrfs_inode_item, flags, 64); - static inline struct btrfs_timespec * btrfs_inode_atime(struct btrfs_inode_item *inode_item) { @@ -904,399 +667,6 @@ btrfs_inode_otime(struct btrfs_inode_item *inode_item) return (struct btrfs_timespec *)ptr; } -BTRFS_SETGET_FUNCS(timespec_sec, struct btrfs_timespec, sec, 64); -BTRFS_SETGET_FUNCS(timespec_nsec, struct btrfs_timespec, nsec, 32); -BTRFS_SETGET_STACK_FUNCS(stack_timespec_sec, struct btrfs_timespec, - sec, 64); -BTRFS_SETGET_STACK_FUNCS(stack_timespec_nsec, struct btrfs_timespec, - nsec, 32); - -/* struct btrfs_dev_extent */ -BTRFS_SETGET_FUNCS(dev_extent_chunk_tree, struct btrfs_dev_extent, - chunk_tree, 64); -BTRFS_SETGET_FUNCS(dev_extent_chunk_objectid, struct btrfs_dev_extent, - chunk_objectid, 64); -BTRFS_SETGET_FUNCS(dev_extent_chunk_offset, struct btrfs_dev_extent, - chunk_offset, 64); -BTRFS_SETGET_FUNCS(dev_extent_length, struct btrfs_dev_extent, length, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_dev_extent_length, struct btrfs_dev_extent, - length, 64); - -static inline u8 *btrfs_dev_extent_chunk_tree_uuid(struct btrfs_dev_extent *dev) -{ - unsigned long ptr = offsetof(struct btrfs_dev_extent, chunk_tree_uuid); - return (u8 *)((unsigned long)dev + ptr); -} - - -/* struct btrfs_extent_item */ -BTRFS_SETGET_FUNCS(extent_refs, struct btrfs_extent_item, refs, 64); -BTRFS_SETGET_STACK_FUNCS(stack_extent_refs, struct btrfs_extent_item, refs, 64); -BTRFS_SETGET_FUNCS(extent_generation, struct btrfs_extent_item, - generation, 64); -BTRFS_SETGET_FUNCS(extent_flags, struct btrfs_extent_item, flags, 64); -BTRFS_SETGET_STACK_FUNCS(stack_extent_flags, struct btrfs_extent_item, flags, 64); - -BTRFS_SETGET_FUNCS(extent_refs_v0, struct btrfs_extent_item_v0, refs, 32); - -BTRFS_SETGET_FUNCS(tree_block_level, struct btrfs_tree_block_info, level, 8); - -static inline void btrfs_tree_block_key(struct extent_buffer *eb, - struct btrfs_tree_block_info *item, - struct btrfs_disk_key *key) -{ - read_eb_member(eb, item, struct btrfs_tree_block_info, key, key); -} - -static inline void btrfs_set_tree_block_key(struct extent_buffer *eb, - struct btrfs_tree_block_info *item, - struct btrfs_disk_key *key) -{ - write_eb_member(eb, item, struct btrfs_tree_block_info, key, key); -} - -BTRFS_SETGET_FUNCS(extent_data_ref_root, struct btrfs_extent_data_ref, - root, 64); -BTRFS_SETGET_FUNCS(extent_data_ref_objectid, struct btrfs_extent_data_ref, - objectid, 64); -BTRFS_SETGET_FUNCS(extent_data_ref_offset, struct btrfs_extent_data_ref, - offset, 64); -BTRFS_SETGET_FUNCS(extent_data_ref_count, struct btrfs_extent_data_ref, - count, 32); - -BTRFS_SETGET_FUNCS(shared_data_ref_count, struct btrfs_shared_data_ref, - count, 32); - -BTRFS_SETGET_FUNCS(extent_inline_ref_type, struct btrfs_extent_inline_ref, - type, 8); -BTRFS_SETGET_FUNCS(extent_inline_ref_offset, struct btrfs_extent_inline_ref, - offset, 64); 
-BTRFS_SETGET_STACK_FUNCS(stack_extent_inline_ref_type, - struct btrfs_extent_inline_ref, type, 8); -BTRFS_SETGET_STACK_FUNCS(stack_extent_inline_ref_offset, - struct btrfs_extent_inline_ref, offset, 64); - -static inline u32 btrfs_extent_inline_ref_size(int type) -{ - if (type == BTRFS_TREE_BLOCK_REF_KEY || - type == BTRFS_SHARED_BLOCK_REF_KEY) - return sizeof(struct btrfs_extent_inline_ref); - if (type == BTRFS_SHARED_DATA_REF_KEY) - return sizeof(struct btrfs_shared_data_ref) + - sizeof(struct btrfs_extent_inline_ref); - if (type == BTRFS_EXTENT_DATA_REF_KEY) - return sizeof(struct btrfs_extent_data_ref) + - offsetof(struct btrfs_extent_inline_ref, offset); - BUG(); - return 0; -} - -/* struct btrfs_node */ -BTRFS_SETGET_FUNCS(key_blockptr, struct btrfs_key_ptr, blockptr, 64); -BTRFS_SETGET_FUNCS(key_generation, struct btrfs_key_ptr, generation, 64); - -static inline unsigned long btrfs_node_key_ptr_offset(const struct extent_buffer *eb, int nr) -{ - return offsetof(struct btrfs_node, ptrs) + - sizeof(struct btrfs_key_ptr) * nr; -} - -static inline struct btrfs_key_ptr *btrfs_node_key_ptr(const struct extent_buffer *eb, int nr) -{ - return (struct btrfs_key_ptr *)btrfs_node_key_ptr_offset(eb, nr); -} - -static inline u64 btrfs_node_blockptr(struct extent_buffer *eb, int nr) -{ - return btrfs_key_blockptr(eb, btrfs_node_key_ptr(eb, nr)); -} - -static inline void btrfs_set_node_blockptr(struct extent_buffer *eb, - int nr, u64 val) -{ - btrfs_set_key_blockptr(eb, btrfs_node_key_ptr(eb, nr), val); -} - -static inline u64 btrfs_node_ptr_generation(struct extent_buffer *eb, int nr) -{ - return btrfs_key_generation(eb, btrfs_node_key_ptr(eb, nr)); -} - -static inline void btrfs_set_node_ptr_generation(struct extent_buffer *eb, - int nr, u64 val) -{ - btrfs_set_key_generation(eb, btrfs_node_key_ptr(eb, nr), val); -} - -static inline void btrfs_node_key(struct extent_buffer *eb, - struct btrfs_disk_key *disk_key, int nr) -{ - read_eb_member(eb, btrfs_node_key_ptr(eb, nr), struct btrfs_key_ptr, - key, disk_key); -} - -static inline void btrfs_set_node_key(struct extent_buffer *eb, - struct btrfs_disk_key *disk_key, int nr) -{ - write_eb_member(eb, btrfs_node_key_ptr(eb, nr), struct btrfs_key_ptr, - key, disk_key); -} - -/* struct btrfs_item */ -BTRFS_SETGET_FUNCS(raw_item_offset, struct btrfs_item, offset, 32); -BTRFS_SETGET_FUNCS(raw_item_size, struct btrfs_item, size, 32); - -static inline unsigned long btrfs_item_nr_offset(const struct extent_buffer *eb, int nr) -{ - return offsetof(struct btrfs_leaf, items) + - sizeof(struct btrfs_item) * nr; -} - -static inline struct btrfs_item *btrfs_item_nr(const struct extent_buffer *eb, int nr) -{ - return (struct btrfs_item *)btrfs_item_nr_offset(eb, nr); -} - -#define BTRFS_ITEM_SETGET_FUNCS(member) \ -static inline u32 btrfs_item_##member(const struct extent_buffer *eb, int slot) \ -{ \ - return btrfs_raw_item_##member(eb, btrfs_item_nr(eb, slot)); \ -} \ -static inline void btrfs_set_item_##member(struct extent_buffer *eb, \ - int slot, u32 val) \ -{ \ - btrfs_set_raw_item_##member(eb, btrfs_item_nr(eb, slot), val); \ -} - -BTRFS_ITEM_SETGET_FUNCS(size) -BTRFS_ITEM_SETGET_FUNCS(offset) - -static inline u32 btrfs_item_data_end(struct extent_buffer *eb, int nr) -{ - return btrfs_item_offset(eb, nr) + btrfs_item_size(eb, nr); -} - -static inline void btrfs_item_key(struct extent_buffer *eb, - struct btrfs_disk_key *disk_key, int nr) -{ - struct btrfs_item *item = btrfs_item_nr(eb, nr); - read_eb_member(eb, item, struct btrfs_item, key, disk_key); -} - 
-static inline void btrfs_set_item_key(struct extent_buffer *eb, - struct btrfs_disk_key *disk_key, int nr) -{ - struct btrfs_item *item = btrfs_item_nr(eb, nr); - write_eb_member(eb, item, struct btrfs_item, key, disk_key); -} - -BTRFS_SETGET_FUNCS(dir_log_end, struct btrfs_dir_log_item, end, 64); - -/* - * struct btrfs_root_ref - */ -BTRFS_SETGET_FUNCS(root_ref_dirid, struct btrfs_root_ref, dirid, 64); -BTRFS_SETGET_FUNCS(root_ref_sequence, struct btrfs_root_ref, sequence, 64); -BTRFS_SETGET_FUNCS(root_ref_name_len, struct btrfs_root_ref, name_len, 16); - -BTRFS_SETGET_STACK_FUNCS(stack_root_ref_dirid, struct btrfs_root_ref, dirid, 64); -BTRFS_SETGET_STACK_FUNCS(stack_root_ref_sequence, struct btrfs_root_ref, sequence, 64); -BTRFS_SETGET_STACK_FUNCS(stack_root_ref_name_len, struct btrfs_root_ref, name_len, 16); - -/* struct btrfs_dir_item */ -BTRFS_SETGET_FUNCS(dir_data_len, struct btrfs_dir_item, data_len, 16); -BTRFS_SETGET_FUNCS(dir_type, struct btrfs_dir_item, type, 8); -BTRFS_SETGET_FUNCS(dir_name_len, struct btrfs_dir_item, name_len, 16); -BTRFS_SETGET_FUNCS(dir_transid, struct btrfs_dir_item, transid, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_dir_data_len, struct btrfs_dir_item, data_len, 16); -BTRFS_SETGET_STACK_FUNCS(stack_dir_type, struct btrfs_dir_item, type, 8); -BTRFS_SETGET_STACK_FUNCS(stack_dir_name_len, struct btrfs_dir_item, name_len, 16); -BTRFS_SETGET_STACK_FUNCS(stack_dir_transid, struct btrfs_dir_item, transid, 64); - -static inline void btrfs_dir_item_key(struct extent_buffer *eb, - struct btrfs_dir_item *item, - struct btrfs_disk_key *key) -{ - read_eb_member(eb, item, struct btrfs_dir_item, location, key); -} - -static inline void btrfs_set_dir_item_key(struct extent_buffer *eb, - struct btrfs_dir_item *item, - struct btrfs_disk_key *key) -{ - write_eb_member(eb, item, struct btrfs_dir_item, location, key); -} - -/* struct btrfs_free_space_header */ -BTRFS_SETGET_FUNCS(free_space_entries, struct btrfs_free_space_header, - num_entries, 64); -BTRFS_SETGET_FUNCS(free_space_bitmaps, struct btrfs_free_space_header, - num_bitmaps, 64); -BTRFS_SETGET_FUNCS(free_space_generation, struct btrfs_free_space_header, - generation, 64); - -static inline void btrfs_free_space_key(struct extent_buffer *eb, - struct btrfs_free_space_header *h, - struct btrfs_disk_key *key) -{ - read_eb_member(eb, h, struct btrfs_free_space_header, location, key); -} - -static inline void btrfs_set_free_space_key(struct extent_buffer *eb, - struct btrfs_free_space_header *h, - struct btrfs_disk_key *key) -{ - write_eb_member(eb, h, struct btrfs_free_space_header, location, key); -} - -/* struct btrfs_disk_key */ -BTRFS_SETGET_STACK_FUNCS(disk_key_objectid, struct btrfs_disk_key, - objectid, 64); -BTRFS_SETGET_STACK_FUNCS(disk_key_offset, struct btrfs_disk_key, offset, 64); -BTRFS_SETGET_STACK_FUNCS(disk_key_type, struct btrfs_disk_key, type, 8); - -static inline void btrfs_disk_key_to_cpu(struct btrfs_key *cpu, - struct btrfs_disk_key *disk) -{ - cpu->offset = le64_to_cpu(disk->offset); - cpu->type = disk->type; - cpu->objectid = le64_to_cpu(disk->objectid); -} - -static inline void btrfs_cpu_key_to_disk(struct btrfs_disk_key *disk, - const struct btrfs_key *cpu) -{ - disk->offset = cpu_to_le64(cpu->offset); - disk->type = cpu->type; - disk->objectid = cpu_to_le64(cpu->objectid); -} - -static inline void btrfs_node_key_to_cpu(struct extent_buffer *eb, - struct btrfs_key *key, int nr) -{ - struct btrfs_disk_key disk_key; - btrfs_node_key(eb, &disk_key, nr); - btrfs_disk_key_to_cpu(key, &disk_key); -} - 
-static inline void btrfs_item_key_to_cpu(struct extent_buffer *eb, - struct btrfs_key *key, int nr) -{ - struct btrfs_disk_key disk_key; - btrfs_item_key(eb, &disk_key, nr); - btrfs_disk_key_to_cpu(key, &disk_key); -} - -static inline void btrfs_dir_item_key_to_cpu(struct extent_buffer *eb, - struct btrfs_dir_item *item, - struct btrfs_key *key) -{ - struct btrfs_disk_key disk_key; - btrfs_dir_item_key(eb, item, &disk_key); - btrfs_disk_key_to_cpu(key, &disk_key); -} - -/* struct btrfs_header */ -BTRFS_SETGET_HEADER_FUNCS(header_bytenr, struct btrfs_header, bytenr, 64); -BTRFS_SETGET_HEADER_FUNCS(header_generation, struct btrfs_header, - generation, 64); -BTRFS_SETGET_HEADER_FUNCS(header_owner, struct btrfs_header, owner, 64); -BTRFS_SETGET_HEADER_FUNCS(header_nritems, struct btrfs_header, nritems, 32); -BTRFS_SETGET_HEADER_FUNCS(header_flags, struct btrfs_header, flags, 64); -BTRFS_SETGET_HEADER_FUNCS(header_level, struct btrfs_header, level, 8); -BTRFS_SETGET_STACK_FUNCS(stack_header_bytenr, struct btrfs_header, bytenr, 64); -BTRFS_SETGET_STACK_FUNCS(stack_header_nritems, struct btrfs_header, nritems, - 32); -BTRFS_SETGET_STACK_FUNCS(stack_header_owner, struct btrfs_header, owner, 64); -BTRFS_SETGET_STACK_FUNCS(stack_header_generation, struct btrfs_header, - generation, 64); - -static inline int btrfs_header_flag(struct extent_buffer *eb, u64 flag) -{ - return (btrfs_header_flags(eb) & flag) == flag; -} - -static inline int btrfs_set_header_flag(struct extent_buffer *eb, u64 flag) -{ - u64 flags = btrfs_header_flags(eb); - btrfs_set_header_flags(eb, flags | flag); - return (flags & flag) == flag; -} - -static inline int btrfs_clear_header_flag(struct extent_buffer *eb, u64 flag) -{ - u64 flags = btrfs_header_flags(eb); - btrfs_set_header_flags(eb, flags & ~flag); - return (flags & flag) == flag; -} - -static inline int btrfs_header_backref_rev(struct extent_buffer *eb) -{ - u64 flags = btrfs_header_flags(eb); - return flags >> BTRFS_BACKREF_REV_SHIFT; -} - -static inline void btrfs_set_header_backref_rev(struct extent_buffer *eb, - int rev) -{ - u64 flags = btrfs_header_flags(eb); - flags &= ~BTRFS_BACKREF_REV_MASK; - flags |= (u64)rev << BTRFS_BACKREF_REV_SHIFT; - btrfs_set_header_flags(eb, flags); -} - -static inline unsigned long btrfs_header_fsid(void) -{ - return offsetof(struct btrfs_header, fsid); -} - -static inline unsigned long btrfs_header_chunk_tree_uuid(struct extent_buffer *eb) -{ - return offsetof(struct btrfs_header, chunk_tree_uuid); -} - -static inline u8 *btrfs_header_csum(struct extent_buffer *eb) -{ - unsigned long ptr = offsetof(struct btrfs_header, csum); - return (u8 *)ptr; -} - -static inline int btrfs_is_leaf(struct extent_buffer *eb) -{ - return (btrfs_header_level(eb) == 0); -} - -/* struct btrfs_root_item */ -BTRFS_SETGET_FUNCS(disk_root_generation, struct btrfs_root_item, - generation, 64); -BTRFS_SETGET_FUNCS(disk_root_refs, struct btrfs_root_item, refs, 32); -BTRFS_SETGET_FUNCS(disk_root_bytenr, struct btrfs_root_item, bytenr, 64); -BTRFS_SETGET_FUNCS(disk_root_level, struct btrfs_root_item, level, 8); - -BTRFS_SETGET_STACK_FUNCS(root_generation, struct btrfs_root_item, - generation, 64); -BTRFS_SETGET_STACK_FUNCS(root_bytenr, struct btrfs_root_item, bytenr, 64); -BTRFS_SETGET_STACK_FUNCS(root_level, struct btrfs_root_item, level, 8); -BTRFS_SETGET_STACK_FUNCS(root_dirid, struct btrfs_root_item, root_dirid, 64); -BTRFS_SETGET_STACK_FUNCS(root_refs, struct btrfs_root_item, refs, 32); -BTRFS_SETGET_STACK_FUNCS(root_flags, struct btrfs_root_item, flags, 64); 
-BTRFS_SETGET_STACK_FUNCS(root_used, struct btrfs_root_item, bytes_used, 64); -BTRFS_SETGET_STACK_FUNCS(root_limit, struct btrfs_root_item, byte_limit, 64); -BTRFS_SETGET_STACK_FUNCS(root_last_snapshot, struct btrfs_root_item, - last_snapshot, 64); -BTRFS_SETGET_STACK_FUNCS(root_generation_v2, struct btrfs_root_item, - generation_v2, 64); -BTRFS_SETGET_STACK_FUNCS(root_ctransid, struct btrfs_root_item, - ctransid, 64); -BTRFS_SETGET_STACK_FUNCS(root_otransid, struct btrfs_root_item, - otransid, 64); -BTRFS_SETGET_STACK_FUNCS(root_stransid, struct btrfs_root_item, - stransid, 64); -BTRFS_SETGET_STACK_FUNCS(root_rtransid, struct btrfs_root_item, - rtransid, 64); - static inline struct btrfs_timespec* btrfs_root_ctime( struct btrfs_root_item *root_item) { @@ -1329,115 +699,12 @@ static inline struct btrfs_timespec* btrfs_root_rtime( return (struct btrfs_timespec *)ptr; } -/* struct btrfs_root_backup */ -BTRFS_SETGET_STACK_FUNCS(backup_tree_root, struct btrfs_root_backup, - tree_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_tree_root_gen, struct btrfs_root_backup, - tree_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_tree_root_level, struct btrfs_root_backup, - tree_root_level, 8); - -BTRFS_SETGET_STACK_FUNCS(backup_chunk_root, struct btrfs_root_backup, - chunk_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_chunk_root_gen, struct btrfs_root_backup, - chunk_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_chunk_root_level, struct btrfs_root_backup, - chunk_root_level, 8); - -BTRFS_SETGET_STACK_FUNCS(backup_extent_root, struct btrfs_root_backup, - extent_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_extent_root_gen, struct btrfs_root_backup, - extent_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_extent_root_level, struct btrfs_root_backup, - extent_root_level, 8); - -BTRFS_SETGET_STACK_FUNCS(backup_fs_root, struct btrfs_root_backup, - fs_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_fs_root_gen, struct btrfs_root_backup, - fs_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_fs_root_level, struct btrfs_root_backup, - fs_root_level, 8); - -BTRFS_SETGET_STACK_FUNCS(backup_dev_root, struct btrfs_root_backup, - dev_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_dev_root_gen, struct btrfs_root_backup, - dev_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_dev_root_level, struct btrfs_root_backup, - dev_root_level, 8); - -BTRFS_SETGET_STACK_FUNCS(backup_csum_root, struct btrfs_root_backup, - csum_root, 64); -BTRFS_SETGET_STACK_FUNCS(backup_csum_root_gen, struct btrfs_root_backup, - csum_root_gen, 64); -BTRFS_SETGET_STACK_FUNCS(backup_csum_root_level, struct btrfs_root_backup, - csum_root_level, 8); -BTRFS_SETGET_STACK_FUNCS(backup_total_bytes, struct btrfs_root_backup, - total_bytes, 64); -BTRFS_SETGET_STACK_FUNCS(backup_bytes_used, struct btrfs_root_backup, - bytes_used, 64); -BTRFS_SETGET_STACK_FUNCS(backup_num_devices, struct btrfs_root_backup, - num_devices, 64); - -/* struct btrfs_super_block */ - -BTRFS_SETGET_STACK_FUNCS(super_bytenr, struct btrfs_super_block, bytenr, 64); -BTRFS_SETGET_STACK_FUNCS(super_flags, struct btrfs_super_block, flags, 64); -BTRFS_SETGET_STACK_FUNCS(super_generation, struct btrfs_super_block, - generation, 64); -BTRFS_SETGET_STACK_FUNCS(super_root, struct btrfs_super_block, root, 64); -BTRFS_SETGET_STACK_FUNCS(super_sys_array_size, - struct btrfs_super_block, sys_chunk_array_size, 32); -BTRFS_SETGET_STACK_FUNCS(super_chunk_root_generation, - struct btrfs_super_block, chunk_root_generation, 64); -BTRFS_SETGET_STACK_FUNCS(super_root_level, struct btrfs_super_block, - root_level, 8); 
-BTRFS_SETGET_STACK_FUNCS(super_chunk_root, struct btrfs_super_block, - chunk_root, 64); -BTRFS_SETGET_STACK_FUNCS(super_chunk_root_level, struct btrfs_super_block, - chunk_root_level, 8); -BTRFS_SETGET_STACK_FUNCS(super_log_root, struct btrfs_super_block, - log_root, 64); -BTRFS_SETGET_STACK_FUNCS(super_log_root_level, struct btrfs_super_block, - log_root_level, 8); -BTRFS_SETGET_STACK_FUNCS(super_total_bytes, struct btrfs_super_block, - total_bytes, 64); -BTRFS_SETGET_STACK_FUNCS(super_bytes_used, struct btrfs_super_block, - bytes_used, 64); -BTRFS_SETGET_STACK_FUNCS(super_sectorsize, struct btrfs_super_block, - sectorsize, 32); -BTRFS_SETGET_STACK_FUNCS(super_nodesize, struct btrfs_super_block, - nodesize, 32); -BTRFS_SETGET_STACK_FUNCS(super_stripesize, struct btrfs_super_block, - stripesize, 32); -BTRFS_SETGET_STACK_FUNCS(super_root_dir, struct btrfs_super_block, - root_dir_objectid, 64); -BTRFS_SETGET_STACK_FUNCS(super_num_devices, struct btrfs_super_block, - num_devices, 64); -BTRFS_SETGET_STACK_FUNCS(super_compat_flags, struct btrfs_super_block, - compat_flags, 64); -BTRFS_SETGET_STACK_FUNCS(super_compat_ro_flags, struct btrfs_super_block, - compat_ro_flags, 64); -BTRFS_SETGET_STACK_FUNCS(super_incompat_flags, struct btrfs_super_block, - incompat_flags, 64); -BTRFS_SETGET_STACK_FUNCS(super_csum_type, struct btrfs_super_block, - csum_type, 16); -BTRFS_SETGET_STACK_FUNCS(super_cache_generation, struct btrfs_super_block, - cache_generation, 64); -BTRFS_SETGET_STACK_FUNCS(super_uuid_tree_generation, struct btrfs_super_block, - uuid_tree_generation, 64); -BTRFS_SETGET_STACK_FUNCS(super_magic, struct btrfs_super_block, magic, 64); -BTRFS_SETGET_STACK_FUNCS(super_nr_global_roots, struct btrfs_super_block, - nr_global_roots, 64); - -static inline unsigned long btrfs_leaf_data(struct extent_buffer *l) +static inline u8 *btrfs_dev_extent_chunk_tree_uuid(struct btrfs_dev_extent *dev) { - return offsetof(struct btrfs_leaf, items); + unsigned long ptr = offsetof(struct btrfs_dev_extent, chunk_tree_uuid); + return (u8 *)((unsigned long)dev + ptr); } -/* struct btrfs_file_extent_item */ -BTRFS_SETGET_FUNCS(file_extent_type, struct btrfs_file_extent_item, type, 8); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_type, struct btrfs_file_extent_item, type, 8); - static inline unsigned long btrfs_file_extent_inline_start(struct btrfs_file_extent_item *e) { @@ -1451,131 +718,6 @@ static inline u32 btrfs_file_extent_calc_inline_size(u32 datasize) return offsetof(struct btrfs_file_extent_item, disk_bytenr) + datasize; } -BTRFS_SETGET_FUNCS(file_extent_disk_bytenr, struct btrfs_file_extent_item, - disk_bytenr, 64); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_disk_bytenr, struct btrfs_file_extent_item, - disk_bytenr, 64); -BTRFS_SETGET_FUNCS(file_extent_generation, struct btrfs_file_extent_item, - generation, 64); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_generation, struct btrfs_file_extent_item, - generation, 64); -BTRFS_SETGET_FUNCS(file_extent_disk_num_bytes, struct btrfs_file_extent_item, - disk_num_bytes, 64); -BTRFS_SETGET_FUNCS(file_extent_offset, struct btrfs_file_extent_item, - offset, 64); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_offset, struct btrfs_file_extent_item, - offset, 64); -BTRFS_SETGET_FUNCS(file_extent_num_bytes, struct btrfs_file_extent_item, - num_bytes, 64); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_num_bytes, struct btrfs_file_extent_item, - num_bytes, 64); -BTRFS_SETGET_FUNCS(file_extent_ram_bytes, struct btrfs_file_extent_item, - ram_bytes, 64); 
-BTRFS_SETGET_STACK_FUNCS(stack_file_extent_ram_bytes, struct btrfs_file_extent_item, - ram_bytes, 64); -BTRFS_SETGET_FUNCS(file_extent_compression, struct btrfs_file_extent_item, - compression, 8); -BTRFS_SETGET_STACK_FUNCS(stack_file_extent_compression, struct btrfs_file_extent_item, - compression, 8); -BTRFS_SETGET_FUNCS(file_extent_encryption, struct btrfs_file_extent_item, - encryption, 8); -BTRFS_SETGET_FUNCS(file_extent_other_encoding, struct btrfs_file_extent_item, - other_encoding, 16); - -/* btrfs_qgroup_status_item */ -BTRFS_SETGET_FUNCS(qgroup_status_version, struct btrfs_qgroup_status_item, - version, 64); -BTRFS_SETGET_FUNCS(qgroup_status_generation, struct btrfs_qgroup_status_item, - generation, 64); -BTRFS_SETGET_FUNCS(qgroup_status_flags, struct btrfs_qgroup_status_item, - flags, 64); -BTRFS_SETGET_FUNCS(qgroup_status_rescan, struct btrfs_qgroup_status_item, - rescan, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_version, - struct btrfs_qgroup_status_item, version, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_generation, - struct btrfs_qgroup_status_item, generation, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_flags, - struct btrfs_qgroup_status_item, flags, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_status_rescan, - struct btrfs_qgroup_status_item, rescan, 64); - -/* btrfs_qgroup_info_item */ -BTRFS_SETGET_FUNCS(qgroup_info_generation, struct btrfs_qgroup_info_item, - generation, 64); -BTRFS_SETGET_FUNCS(qgroup_info_rfer, struct btrfs_qgroup_info_item, - rfer, 64); -BTRFS_SETGET_FUNCS(qgroup_info_rfer_cmpr, - struct btrfs_qgroup_info_item, rfer_cmpr, 64); -BTRFS_SETGET_FUNCS(qgroup_info_excl, struct btrfs_qgroup_info_item, excl, 64); -BTRFS_SETGET_FUNCS(qgroup_info_excl_cmpr, - struct btrfs_qgroup_info_item, excl_cmpr, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_generation, - struct btrfs_qgroup_info_item, generation, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer, - struct btrfs_qgroup_info_item, rfer, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_rfer_cmpr, - struct btrfs_qgroup_info_item, rfer_cmpr, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl, - struct btrfs_qgroup_info_item, excl, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_info_excl_cmpr, - struct btrfs_qgroup_info_item, excl_cmpr, 64); - -/* btrfs_qgroup_limit_item */ -BTRFS_SETGET_FUNCS(qgroup_limit_flags, struct btrfs_qgroup_limit_item, - flags, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_max_rfer, struct btrfs_qgroup_limit_item, - max_rfer, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_max_excl, struct btrfs_qgroup_limit_item, - max_excl, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_rsv_rfer, struct btrfs_qgroup_limit_item, - rsv_rfer, 64); -BTRFS_SETGET_FUNCS(qgroup_limit_rsv_excl, struct btrfs_qgroup_limit_item, - rsv_excl, 64); - -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_flags, - struct btrfs_qgroup_limit_item, flags, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_rfer, - struct btrfs_qgroup_limit_item, max_rfer, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_max_excl, - struct btrfs_qgroup_limit_item, max_excl, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_rfer, - struct btrfs_qgroup_limit_item, rsv_rfer, 64); -BTRFS_SETGET_STACK_FUNCS(stack_qgroup_limit_rsv_excl, - struct btrfs_qgroup_limit_item, rsv_excl, 64); - -/* btrfs_balance_item */ -BTRFS_SETGET_FUNCS(balance_item_flags, struct btrfs_balance_item, flags, 64); - -static inline struct btrfs_disk_balance_args* btrfs_balance_item_data( - struct extent_buffer *eb, struct btrfs_balance_item *bi) -{ - unsigned 
long offset = (unsigned long)bi; - struct btrfs_balance_item *p; - p = (struct btrfs_balance_item *)(eb->data + offset); - return &p->data; -} - -static inline struct btrfs_disk_balance_args* btrfs_balance_item_meta( - struct extent_buffer *eb, struct btrfs_balance_item *bi) -{ - unsigned long offset = (unsigned long)bi; - struct btrfs_balance_item *p; - p = (struct btrfs_balance_item *)(eb->data + offset); - return &p->meta; -} - -static inline struct btrfs_disk_balance_args* btrfs_balance_item_sys( - struct extent_buffer *eb, struct btrfs_balance_item *bi) -{ - unsigned long offset = (unsigned long)bi; - struct btrfs_balance_item *p; - p = (struct btrfs_balance_item *)(eb->data + offset); - return &p->sys; -} - static inline u64 btrfs_dev_stats_value(const struct extent_buffer *eb, const struct btrfs_dev_stats_item *ptr, int index) @@ -1584,7 +726,7 @@ static inline u64 btrfs_dev_stats_value(const struct extent_buffer *eb, read_extent_buffer(eb, &val, offsetof(struct btrfs_dev_stats_item, values) + - ((unsigned long)ptr) + (index * sizeof(u64)), + ((unsigned long)ptr) + (index * sizeof(u64)), sizeof(val)); return val; } @@ -1646,15 +788,6 @@ static inline int __btrfs_fs_compat_ro(struct btrfs_fs_info *fs_info, u64 flag) return !!(btrfs_super_compat_ro_flags(disk_super) & flag); } -/* helper function to cast into the data area of the leaf. */ -#define btrfs_item_ptr(leaf, slot, type) \ - ((type *)(btrfs_leaf_data(leaf) + \ - btrfs_item_offset(leaf, slot))) - -#define btrfs_item_ptr_offset(leaf, slot) \ - ((unsigned long)(btrfs_leaf_data(leaf) + \ - btrfs_item_offset(leaf, slot))) - u64 btrfs_name_hash(const char *name, int len); u64 btrfs_extref_hash(u64 parent_objectid, const char *name, int len); diff --git a/kernel-shared/dir-item.c b/kernel-shared/dir-item.c index 27dfb362..ef49441c 100644 --- a/kernel-shared/dir-item.c +++ b/kernel-shared/dir-item.c @@ -89,7 +89,7 @@ int btrfs_insert_xattr_item(struct btrfs_trans_handle *trans, leaf = path->nodes[0]; btrfs_cpu_key_to_disk(&disk_key, &location); btrfs_set_dir_item_key(leaf, dir_item, &disk_key); - btrfs_set_dir_type(leaf, dir_item, BTRFS_FT_XATTR); + btrfs_set_dir_flags(leaf, dir_item, BTRFS_FT_XATTR); btrfs_set_dir_name_len(leaf, dir_item, name_len); btrfs_set_dir_data_len(leaf, dir_item, data_len); name_ptr = (unsigned long)(dir_item + 1); @@ -141,7 +141,7 @@ int btrfs_insert_dir_item(struct btrfs_trans_handle *trans, struct btrfs_root leaf = path->nodes[0]; btrfs_cpu_key_to_disk(&disk_key, location); btrfs_set_dir_item_key(leaf, dir_item, &disk_key); - btrfs_set_dir_type(leaf, dir_item, type); + btrfs_set_dir_flags(leaf, dir_item, type); btrfs_set_dir_data_len(leaf, dir_item, 0); btrfs_set_dir_name_len(leaf, dir_item, name_len); name_ptr = (unsigned long)(dir_item + 1); @@ -170,7 +170,7 @@ insert: leaf = path->nodes[0]; btrfs_cpu_key_to_disk(&disk_key, location); btrfs_set_dir_item_key(leaf, dir_item, &disk_key); - btrfs_set_dir_type(leaf, dir_item, type); + btrfs_set_dir_flags(leaf, dir_item, type); btrfs_set_dir_data_len(leaf, dir_item, 0); btrfs_set_dir_name_len(leaf, dir_item, name_len); name_ptr = (unsigned long)(dir_item + 1); @@ -292,7 +292,7 @@ static int verify_dir_item(struct btrfs_root *root, struct btrfs_dir_item *dir_item) { u16 namelen = BTRFS_NAME_LEN; - u8 type = btrfs_dir_type(leaf, dir_item); + u8 type = btrfs_dir_ftype(leaf, dir_item); if (type == BTRFS_FT_XATTR) namelen = XATTR_NAME_MAX; diff --git a/kernel-shared/inode.c b/kernel-shared/inode.c index d1786c7a..1430cf33 100644 --- a/kernel-shared/inode.c +++ 
b/kernel-shared/inode.c @@ -548,7 +548,7 @@ int btrfs_mkdir(struct btrfs_trans_handle *trans, struct btrfs_root *root, */ btrfs_dir_item_key_to_cpu(path->nodes[0], dir_item, &found_key); ret_ino = found_key.objectid; - if (btrfs_dir_type(path->nodes[0], dir_item) != BTRFS_FT_DIR) + if (btrfs_dir_ftype(path->nodes[0], dir_item) != BTRFS_FT_DIR) ret = -EEXIST; goto out; } diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c index e2f9f760..cbd5152b 100644 --- a/kernel-shared/print-tree.c +++ b/kernel-shared/print-tree.c @@ -27,11 +27,12 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/compression.h" #include "common/utils.h" +#include "accessors.h" static void print_dir_item_type(struct extent_buffer *eb, struct btrfs_dir_item *di) { - u8 type = btrfs_dir_type(eb, di); + u8 type = btrfs_dir_ftype(eb, di); static const char* dir_item_str[] = { [BTRFS_FT_REG_FILE] = "FILE", [BTRFS_FT_DIR] = "DIR", @@ -959,15 +960,20 @@ static void print_disk_balance_args(struct btrfs_disk_balance_args *ba) static void print_balance_item(struct extent_buffer *eb, struct btrfs_balance_item *bi) { + struct btrfs_disk_balance_args ba; + printf("\t\tbalance status flags %llu\n", - btrfs_balance_item_flags(eb, bi)); + btrfs_balance_flags(eb, bi)); printf("\t\tDATA\n"); - print_disk_balance_args(btrfs_balance_item_data(eb, bi)); + btrfs_balance_data(eb, bi, &ba); + print_disk_balance_args(&ba); printf("\t\tMETADATA\n"); - print_disk_balance_args(btrfs_balance_item_meta(eb, bi)); + btrfs_balance_meta(eb, bi, &ba); + print_disk_balance_args(&ba); printf("\t\tSYSTEM\n"); - print_disk_balance_args(btrfs_balance_item_sys(eb, bi)); + btrfs_balance_sys(eb, bi, &ba); + print_disk_balance_args(&ba); } static void print_dev_stats(struct extent_buffer *eb, diff --git a/libbtrfs/ctree.h b/libbtrfs/ctree.h index 4d4df6d3..ffaa6613 100644 --- a/libbtrfs/ctree.h +++ b/libbtrfs/ctree.h @@ -39,6 +39,20 @@ struct btrfs_trans_handle; struct btrfs_free_space_ctl; #define BTRFS_MAGIC 0x4D5F53665248425FULL /* ascii _BHRfS_M, no null */ +#define le8_to_cpu(v) (v) +#define cpu_to_le8(v) (v) +#define __le8 u8 + +static inline u8 get_unaligned_le8(const void *p) +{ + return *(u8 *)p; +} + +static inline void put_unaligned_le8(u8 val, void *p) +{ + *(u8 *)p = val; +} + /* * Fake signature for an unfinalized filesystem, which only has barebone tree * structures (normally 6 near empty trees, on SINGLE meta/sys temporary chunks) diff --git a/mkfs/common.c b/mkfs/common.c index 70a0b353..597ef397 100644 --- a/mkfs/common.c +++ b/mkfs/common.c @@ -39,6 +39,7 @@ #include "common/open-utils.h" #include "mkfs/common.h" #include "kernel-shared/uapi/btrfs.h" +#include "kernel-shared/accessors.h" static u64 reference_root_table[] = { [MKFS_ROOT_TREE] = BTRFS_ROOT_TREE_OBJECTID, From patchwork Wed Nov 23 22:37:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054422 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F257C46467 for ; Wed, 23 Nov 2022 22:38:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229827AbiKWWij (ORCPT ); Wed, 23 Nov 2022 17:38:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP 
id S229582AbiKWWiR (ORCPT ); Wed, 23 Nov 2022 17:38:17 -0500 Received: from mail-qv1-xf30.google.com (mail-qv1-xf30.google.com [IPv6:2607:f8b0:4864:20::f30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B44C263C6 for ; Wed, 23 Nov 2022 14:38:15 -0800 (PST) Received: by mail-qv1-xf30.google.com with SMTP id j6so13093783qvn.12 for ; Wed, 23 Nov 2022 14:38:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=Ej1WP8ReC7wyRvlTThQtozwq0U6bL8CfE5jPXW/1Aa0=; b=3O08JoudVD9AG2RNKDjKEgi0f9W5CEGPw7wAM5FXuP+B60YqsmgErRU/JQGkEvjHNq pKdlrvVZcEjtN6wEDbbW44FLFGKqVDrqGp8jcQEwfg4pFkIIIZ1W6DqwZDie0bjnql+6 NNcn/JustgLIRvSkdZ/yEyje4cw9xmG3towl6IxVYwer2FBSgwRJ5fCRzqMDydhQwKnD qEn/G25mRBR5Dhu/dUBTtqS6znlixjRiZfkaQvUNi2zRPcSCIz82oQObySQY9eUHl0Me anC+9EAIpsACmeJ5p/Zy3VhdnI/RRdkkUycCELVZ38zsqRR0XInRRJySDtdFcJ9eE+2T /+Yw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ej1WP8ReC7wyRvlTThQtozwq0U6bL8CfE5jPXW/1Aa0=; b=Ao5i+Ns/I9r5n5MbFiQDRxJczgRzIRg8qBVA5qnxW629CCxjf9yhE8M18lvVxCueRj Q6JIxn6djtf1uioFJzbvsVBr9sce+0KeuSAiyigKrlCxgoV45RINPT+hGU+Dj7yG09L/ oIPm+oYZITWEbX0NBHSw7TIUcLY797aU5Kor1/T6QIbq7WJxzTh5dYgOA65ifXB2JuNU K2TifYN7JNdOVfr0VlVOx8qYZ9OkANxWqLODJDNuMGcHorLYeMnnbU+0ATcBrkyDccl9 Pdg7B2mMhSOc8ULy+6682X37Ux1mmt1qqr8nWIpiBC1vNvRE0KSpcwHB150/Qa0O/wva oj8A== X-Gm-Message-State: ANoB5pnQUNmLyYVPE/hsnU4t0kxyZKFzNQpbFyPgZLlUVGjHpgpz+WAN yihfIUJ9hWa+DyVHrOKRDP3EtkM8ps1QUw== X-Google-Smtp-Source: AA0mqf4P2exqz2CMPKu+jtHN7PUfCGPdcbemlh/2OUmE78CLl/uJkL+UzEoVJA4aHJPBa6g4yM2k7w== X-Received: by 2002:a05:6214:3482:b0:4c6:8cba:470b with SMTP id mr2-20020a056214348200b004c68cba470bmr27474876qvb.26.1669243094962; Wed, 23 Nov 2022 14:38:14 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id s19-20020a05620a29d300b006cfc9846594sm13129249qkp.93.2022.11.23.14.38.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:14 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 27/29] btrfs-progs: sync file-item.h into progs Date: Wed, 23 Nov 2022 17:37:35 -0500 Message-Id: <5fca793b64a40b07e7e8c4a70f41537c18dbd9dc.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This patch syncs file-item.h into btrfs-progs. This carries with it an API change for btrfs_del_csums, which takes a root argument in the kernel, so all callsites have been updated accordingly. I didn't sync file-item.c because it carries with it a bunch of bio related helpers which are difficult to adapt to the kernel. Additionally there's a few helpers in the local copy of file-item.c that aren't in the kernel that are required for different tools. This requires more cleanups in both the kernel and progs in order to sync file-item.c, so for now just do file-item.h in order to pull things out of ctree.h. 
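
As a rough illustration of the call-site change described above, the pattern used throughout this patch is that callers now resolve the checksum root for the given bytenr themselves (via btrfs_csum_root()) and pass it to btrfs_del_csums(), instead of btrfs_del_csums() looking it up internally. The wrapper function below is made up purely for illustration and assumes the usual kernel-shared declarations are in scope; only btrfs_csum_root() and the new btrfs_del_csums() signature come from this series:

	/*
	 * Illustrative sketch only, not part of the patch: shows how a
	 * caller adapts to the new btrfs_del_csums() signature.
	 */
	static int example_drop_csums(struct btrfs_trans_handle *trans,
				      u64 bytenr, u64 num_bytes)
	{
		struct btrfs_root *csum_root;

		/* Before this patch: btrfs_del_csums(trans, bytenr, num_bytes); */
		csum_root = btrfs_csum_root(trans->fs_info, bytenr);
		return btrfs_del_csums(trans, csum_root, bytenr, num_bytes);
	}
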
Signed-off-by: Josef Bacik --- btrfs-corrupt-block.c | 3 +- btrfstune.c | 1 + check/clear-cache.c | 7 ++- check/main.c | 1 + check/mode-common.c | 6 ++- check/mode-lowmem.c | 1 + cmds/inspect-tree-stats.c | 1 + cmds/restore.c | 1 + convert/main.c | 1 + convert/source-ext2.c | 1 + image/main.c | 1 + kernel-shared/ctree.h | 45 ------------------- kernel-shared/extent-tree.c | 7 ++- kernel-shared/file-item.c | 13 +++--- kernel-shared/file-item.h | 89 +++++++++++++++++++++++++++++++++++++ kernel-shared/file.c | 1 + kernel-shared/print-tree.c | 1 + mkfs/rootdir.c | 1 + 18 files changed, 125 insertions(+), 56 deletions(-) create mode 100644 kernel-shared/file-item.h diff --git a/btrfs-corrupt-block.c b/btrfs-corrupt-block.c index 493cfc69..56603bc8 100644 --- a/btrfs-corrupt-block.c +++ b/btrfs-corrupt-block.c @@ -29,6 +29,7 @@ #include "kernel-shared/transaction.h" #include "kernel-shared/extent_io.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/utils.h" #include "common/help.h" #include "common/extent-cache.h" @@ -1109,7 +1110,7 @@ static int delete_csum(struct btrfs_root *root, u64 bytenr, u64 bytes) return ret; } - ret = btrfs_del_csums(trans, bytenr, bytes); + ret = btrfs_del_csums(trans, root, bytenr, bytes); if (ret) error("error deleting csums %d", ret); btrfs_commit_transaction(trans, root); diff --git a/btrfstune.c b/btrfstune.c index 0ad7275c..c41d3838 100644 --- a/btrfstune.c +++ b/btrfstune.c @@ -32,6 +32,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/defs.h" #include "common/utils.h" #include "common/extent-cache.h" diff --git a/check/clear-cache.c b/check/clear-cache.c index c4ee6b33..1ea937dc 100644 --- a/check/clear-cache.c +++ b/check/clear-cache.c @@ -22,6 +22,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/transaction.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/internal.h" #include "common/messages.h" #include "check/common.h" @@ -463,6 +464,7 @@ int truncate_free_ino_items(struct btrfs_root *root) while (1) { struct extent_buffer *leaf; struct btrfs_file_extent_item *fi; + struct btrfs_root *csum_root; struct btrfs_key found_key; u8 found_type; @@ -521,7 +523,10 @@ int truncate_free_ino_items(struct btrfs_root *root) goto out; } - ret = btrfs_del_csums(trans, extent_disk_bytenr, + csum_root = btrfs_csum_root(trans->fs_info, + extent_disk_bytenr); + ret = btrfs_del_csums(trans, csum_root, + extent_disk_bytenr, extent_num_bytes); if (ret < 0) { btrfs_abort_transaction(trans, ret); diff --git a/check/main.c b/check/main.c index 5d83de64..96317c9c 100644 --- a/check/main.c +++ b/check/main.c @@ -42,6 +42,7 @@ #include "kernel-shared/backref.h" #include "kernel-shared/ulist.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/defs.h" #include "common/extent-cache.h" #include "common/internal.h" diff --git a/check/mode-common.c b/check/mode-common.c index a1d095f9..96ee311a 100644 --- a/check/mode-common.c +++ b/check/mode-common.c @@ -29,6 +29,7 @@ #include "kernel-shared/backref.h" #include "kernel-shared/compression.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/internal.h" #include "common/messages.h" #include "common/utils.h" @@ -1312,7 +1313,7 @@ static int fill_csum_tree_from_one_fs_root(struct btrfs_trans_handle *trans, if (type == 
BTRFS_FILE_EXTENT_PREALLOC) { start += btrfs_file_extent_offset(node, fi); len = btrfs_file_extent_num_bytes(node, fi); - ret = btrfs_del_csums(trans, start, len); + ret = btrfs_del_csums(trans, csum_root, start, len); if (ret < 0) goto out; } @@ -1474,7 +1475,8 @@ static int remove_csum_for_file_extent(u64 ino, u64 offset, u64 rootid, void *ct btrfs_release_path(&path); /* Now delete the csum for the preallocated or nodatasum range */ - ret = btrfs_del_csums(trans, disk_bytenr, disk_len); + root = btrfs_csum_root(fs_info, disk_bytenr); + ret = btrfs_del_csums(trans, root, disk_bytenr, disk_len); out: btrfs_release_path(&path); return ret; diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index 4b0c8b27..78ef0385 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -31,6 +31,7 @@ #include "kernel-shared/compression.h" #include "kernel-shared/volumes.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/messages.h" #include "common/internal.h" #include "common/utils.h" diff --git a/cmds/inspect-tree-stats.c b/cmds/inspect-tree-stats.c index 9ed3dabd..a6c0efbc 100644 --- a/cmds/inspect-tree-stats.c +++ b/cmds/inspect-tree-stats.c @@ -28,6 +28,7 @@ #include "kernel-shared/ctree.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/extent_io.h" +#include "kernel-shared/file-item.h" #include "common/utils.h" #include "common/help.h" #include "common/messages.h" diff --git a/cmds/restore.c b/cmds/restore.c index c328b075..8af26776 100644 --- a/cmds/restore.c +++ b/cmds/restore.c @@ -44,6 +44,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" #include "kernel-shared/compression.h" +#include "kernel-shared/file-item.h" #include "common/utils.h" #include "common/help.h" #include "common/open-utils.h" diff --git a/convert/main.c b/convert/main.c index 80b36697..1beb0b8e 100644 --- a/convert/main.c +++ b/convert/main.c @@ -100,6 +100,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/transaction.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "crypto/crc32c.h" #include "common/defs.h" #include "common/extent-cache.h" diff --git a/convert/source-ext2.c b/convert/source-ext2.c index a8b33317..05805495 100644 --- a/convert/source-ext2.c +++ b/convert/source-ext2.c @@ -28,6 +28,7 @@ #include "kernel-lib/sizes.h" #include "kernel-shared/transaction.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "common/extent-cache.h" #include "common/messages.h" #include "convert/common.h" diff --git a/image/main.c b/image/main.c index 5afc4b7c..a329a087 100644 --- a/image/main.c +++ b/image/main.c @@ -40,6 +40,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/extent_io.h" #include "kernel-shared/messages.h" +#include "kernel-shared/file-item.h" #include "crypto/crc32c.h" #include "common/internal.h" #include "common/messages.h" diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index bcd426d3..39e1748e 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -426,14 +426,6 @@ static inline u32 BTRFS_NODEPTRS_PER_EXTENT_BUFFER(const struct extent_buffer *e return BTRFS_LEAF_DATA_SIZE(eb->fs_info) / sizeof(struct btrfs_key_ptr); } -#define BTRFS_FILE_EXTENT_INLINE_DATA_START \ - (offsetof(struct btrfs_file_extent_item, disk_bytenr)) -static inline u32 BTRFS_MAX_INLINE_DATA_SIZE(const struct btrfs_fs_info *info) -{ - return BTRFS_MAX_ITEM_SIZE(info) - - BTRFS_FILE_EXTENT_INLINE_DATA_START; -} - static inline u32 
BTRFS_MAX_XATTR_SIZE(const struct btrfs_fs_info *info) { return BTRFS_MAX_ITEM_SIZE(info) - sizeof(struct btrfs_dir_item); @@ -705,19 +697,6 @@ static inline u8 *btrfs_dev_extent_chunk_tree_uuid(struct btrfs_dev_extent *dev) return (u8 *)((unsigned long)dev + ptr); } -static inline unsigned long btrfs_file_extent_inline_start(struct - btrfs_file_extent_item *e) -{ - unsigned long offset = (unsigned long)e; - offset += offsetof(struct btrfs_file_extent_item, disk_bytenr); - return offset; -} - -static inline u32 btrfs_file_extent_calc_inline_size(u32 datasize) -{ - return offsetof(struct btrfs_file_extent_item, disk_bytenr) + datasize; -} - static inline u64 btrfs_dev_stats_value(const struct extent_buffer *eb, const struct btrfs_dev_stats_item *ptr, int index) @@ -731,17 +710,6 @@ static inline u64 btrfs_dev_stats_value(const struct extent_buffer *eb, return val; } -/* - * this returns the number of bytes used by the item on disk, minus the - * size of any extent headers. If a file is compressed on disk, this is - * the compressed size - */ -static inline u32 btrfs_file_extent_inline_item_len(struct extent_buffer *eb, - int nr) -{ - return btrfs_item_size(eb, nr) - BTRFS_FILE_EXTENT_INLINE_DATA_START; -} - /* struct btrfs_ioctl_search_header */ static inline u64 btrfs_search_header_transid(struct btrfs_ioctl_search_header *sh) { @@ -1081,19 +1049,6 @@ int btrfs_del_inode_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root, const char *name, int name_len, u64 ino, u64 parent_ino, u64 *index); -/* file-item.c */ -int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len); -int btrfs_insert_file_extent(struct btrfs_trans_handle *trans, - struct btrfs_root *root, - u64 objectid, u64 pos, u64 offset, - u64 disk_num_bytes, - u64 num_bytes); -int btrfs_insert_inline_extent(struct btrfs_trans_handle *trans, - struct btrfs_root *root, u64 objectid, - u64 offset, const char *buffer, size_t size); -int btrfs_csum_file_block(struct btrfs_trans_handle *trans, u64 alloc_end, - u64 bytenr, char *data, size_t len); - /* uuid-tree.c, interface for mounted mounted filesystem */ int btrfs_lookup_uuid_subvol_item(int fd, const u8 *uuid, u64 *subvol_id); int btrfs_lookup_uuid_received_subvol_item(int fd, const u8 *uuid, diff --git a/kernel-shared/extent-tree.c b/kernel-shared/extent-tree.c index 0b0c40af..fda87ee1 100644 --- a/kernel-shared/extent-tree.c +++ b/kernel-shared/extent-tree.c @@ -33,6 +33,7 @@ #include "kernel-shared/free-space-tree.h" #include "kernel-shared/zoned.h" #include "common/utils.h" +#include "file-item.h" #define PENDING_EXTENT_INSERT 0 #define PENDING_EXTENT_DELETE 1 @@ -2109,7 +2110,11 @@ static int __free_extent(struct btrfs_trans_handle *trans, btrfs_release_path(path); if (is_data) { - ret = btrfs_del_csums(trans, bytenr, num_bytes); + struct btrfs_root *csum_root; + + csum_root = btrfs_csum_root(trans->fs_info, bytenr); + ret = btrfs_del_csums(trans, csum_root, bytenr, + num_bytes); BUG_ON(ret); } diff --git a/kernel-shared/file-item.c b/kernel-shared/file-item.c index 0a870495..9f8a3296 100644 --- a/kernel-shared/file-item.c +++ b/kernel-shared/file-item.c @@ -25,6 +25,7 @@ #include "kernel-shared/print-tree.h" #include "crypto/crc32c.h" #include "common/internal.h" +#include "file-item.h" #define MAX_CSUM_ITEMS(r, size) ((((BTRFS_LEAF_DATA_SIZE(r->fs_info) - \ sizeof(struct btrfs_item) * 2) / \ @@ -400,7 +401,8 @@ static noinline int truncate_one_csum(struct btrfs_root *root, * deletes the csum items from the csum tree for a given * range of bytes. 
*/ -int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len) +int btrfs_del_csums(struct btrfs_trans_handle *trans, struct btrfs_root *root, + u64 bytenr, u64 len) { struct btrfs_path *path; struct btrfs_key key; @@ -410,7 +412,6 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len) int ret; u16 csum_size = trans->fs_info->csum_size; int blocksize = trans->fs_info->sectorsize; - struct btrfs_root *csum_root = btrfs_csum_root(trans->fs_info, bytenr); path = btrfs_alloc_path(); if (!path) @@ -421,7 +422,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len) key.offset = end_byte - 1; key.type = BTRFS_EXTENT_CSUM_KEY; - ret = btrfs_search_slot(trans, csum_root, &key, path, -1, 1); + ret = btrfs_search_slot(trans, root, &key, path, -1, 1); if (ret > 0) { if (path->slots[0] == 0) goto out; @@ -448,7 +449,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len) /* delete the entire item, it is inside our range */ if (key.offset >= bytenr && csum_end <= end_byte) { - ret = btrfs_del_item(trans, csum_root, path); + ret = btrfs_del_item(trans, root, path); BUG_ON(ret); } else if (key.offset < bytenr && csum_end > end_byte) { unsigned long offset; @@ -488,13 +489,13 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans, u64 bytenr, u64 len) * btrfs_split_item returns -EAGAIN when the * item changed size or key */ - ret = btrfs_split_item(trans, csum_root, path, &key, + ret = btrfs_split_item(trans, root, path, &key, offset); BUG_ON(ret && ret != -EAGAIN); key.offset = end_byte - 1; } else { - ret = truncate_one_csum(csum_root, path, &key, bytenr, + ret = truncate_one_csum(root, path, &key, bytenr, len); BUG_ON(ret); } diff --git a/kernel-shared/file-item.h b/kernel-shared/file-item.h new file mode 100644 index 00000000..048e0be7 --- /dev/null +++ b/kernel-shared/file-item.h @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef BTRFS_FILE_ITEM_H +#define BTRFS_FILE_ITEM_H + +#include "kerncompat.h" +#include "accessors.h" + +struct bio; +struct inode; +struct btrfs_ordered_sum; +struct btrfs_inode; +struct extent_map; + +#define BTRFS_FILE_EXTENT_INLINE_DATA_START \ + (offsetof(struct btrfs_file_extent_item, disk_bytenr)) + +static inline u32 BTRFS_MAX_INLINE_DATA_SIZE(const struct btrfs_fs_info *info) +{ + return BTRFS_MAX_ITEM_SIZE(info) - + BTRFS_FILE_EXTENT_INLINE_DATA_START; +} + +/* + * Returns the number of bytes used by the item on disk, minus the size of any + * extent headers. If a file is compressed on disk, this is the compressed + * size. 
+ */ +static inline u32 btrfs_file_extent_inline_item_len( + const struct extent_buffer *eb, + int nr) +{ + return btrfs_item_size(eb, nr) - BTRFS_FILE_EXTENT_INLINE_DATA_START; +} + +static inline unsigned long btrfs_file_extent_inline_start( + const struct btrfs_file_extent_item *e) +{ + return (unsigned long)e + BTRFS_FILE_EXTENT_INLINE_DATA_START; +} + +static inline u32 btrfs_file_extent_calc_inline_size(u32 datasize) +{ + return BTRFS_FILE_EXTENT_INLINE_DATA_START + datasize; +} + +int btrfs_del_csums(struct btrfs_trans_handle *trans, + struct btrfs_root *root, u64 bytenr, u64 len); +blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst); +int btrfs_insert_hole_extent(struct btrfs_trans_handle *trans, + struct btrfs_root *root, u64 objectid, u64 pos, + u64 num_bytes); +int btrfs_lookup_file_extent(struct btrfs_trans_handle *trans, + struct btrfs_root *root, + struct btrfs_path *path, u64 objectid, + u64 bytenr, int mod); +int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans, + struct btrfs_root *root, + struct btrfs_ordered_sum *sums); +blk_status_t btrfs_csum_one_bio(struct btrfs_inode *inode, struct bio *bio, + u64 offset, bool one_ordered); +int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end, + struct list_head *list, int search_commit, + bool nowait); +void btrfs_extent_item_to_extent_map(struct btrfs_inode *inode, + const struct btrfs_path *path, + struct btrfs_file_extent_item *fi, + struct extent_map *em); +int btrfs_inode_clear_file_extent_range(struct btrfs_inode *inode, u64 start, + u64 len); +int btrfs_inode_set_file_extent_range(struct btrfs_inode *inode, u64 start, u64 len); +void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_size); +u64 btrfs_file_extent_end(const struct btrfs_path *path); + +/* + * MODIFIED: + * - This function doesn't exist in the kernel. 
+ */ +int btrfs_insert_file_extent(struct btrfs_trans_handle *trans, + struct btrfs_root *root, + u64 objectid, u64 pos, u64 offset, + u64 disk_num_bytes, u64 num_bytes); +int btrfs_csum_file_block(struct btrfs_trans_handle *trans, + u64 alloc_end, u64 bytenr, char *data, size_t len); +int btrfs_insert_inline_extent(struct btrfs_trans_handle *trans, + struct btrfs_root *root, u64 objectid, + u64 offset, const char *buffer, size_t size); + +#endif diff --git a/kernel-shared/file.c b/kernel-shared/file.c index 807ba477..6324f555 100644 --- a/kernel-shared/file.c +++ b/kernel-shared/file.c @@ -23,6 +23,7 @@ #include "kernel-shared/transaction.h" #include "compression.h" #include "kerncompat.h" +#include "file-item.h" /* * Get the first file extent that covers (part of) the given range diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c index cbd5152b..fa751912 100644 --- a/kernel-shared/print-tree.c +++ b/kernel-shared/print-tree.c @@ -28,6 +28,7 @@ #include "kernel-shared/compression.h" #include "common/utils.h" #include "accessors.h" +#include "file-item.h" static void print_dir_item_type(struct extent_buffer *eb, struct btrfs_dir_item *di) diff --git a/mkfs/rootdir.c b/mkfs/rootdir.c index 2cf6eef9..7e5d41a5 100644 --- a/mkfs/rootdir.c +++ b/mkfs/rootdir.c @@ -35,6 +35,7 @@ #include "kernel-shared/volumes.h" #include "kernel-shared/disk-io.h" #include "kernel-shared/transaction.h" +#include "kernel-shared/file-item.h" #include "common/internal.h" #include "common/messages.h" #include "common/path-utils.h" From patchwork Wed Nov 23 22:37:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054423 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 843FCC4332F for ; Wed, 23 Nov 2022 22:38:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229829AbiKWWil (ORCPT ); Wed, 23 Nov 2022 17:38:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229797AbiKWWiT (ORCPT ); Wed, 23 Nov 2022 17:38:19 -0500 Received: from mail-qt1-x831.google.com (mail-qt1-x831.google.com [IPv6:2607:f8b0:4864:20::831]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77316DFD3 for ; Wed, 23 Nov 2022 14:38:17 -0800 (PST) Received: by mail-qt1-x831.google.com with SMTP id l2so121075qtq.11 for ; Wed, 23 Nov 2022 14:38:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=Okak4IKYXIwwhjTwdFjL/rQ1AtP2yDXPgJnWa789H6g=; b=EOsIpFcah6BfZL5UUCoJ1K0YGEcK+qtTfDUOTPT5/M+KmP4p5Ng4ujsJiDWH5xNnh4 mwuwqUUvNN2L4/oldezz3VzFDKQeM6DYvYy+H9dgyY0gQ6ZZOf+GezobE26Mp/MD1l4r RSmaBXWF3CqZm4h75kL74349P1pu/1d/2k47iaJnQa0qrpQWPmVl85gzNd+LRcf2roG/ Dve1gxqGtA8tWQdot6mjorVSiIg4OX5xhHISyZ0INjRDazgK4mzxXoHuHqzNBLtRlCUT Za+0fR3ykPay1CY/Y5GgJvkdjB8rMtl6RuLgtSXmW0kk6SDLtwDZ2pXYrUh+hUYRjRJW z2Zw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=Okak4IKYXIwwhjTwdFjL/rQ1AtP2yDXPgJnWa789H6g=; b=uyOdf1MsAgMlsRIPNZm9yoFYI1v9jQZJ92roLJ6j4D5mUdcHFNmplTsgsH2cjEXFsG iiTgOJ659tyAUNG7ri1HvjAMKs/1PmdkN1i+be0kq7iMfwfbvuHWeWkdVgyooezvnzew DYmkpN+UsZRtZvX/XwPud8k9SXzgw2VQSSH4VcDI6wI/evLoVu3W+nij93CzZIYseN7n FE0s9wN34yFsNcKHf2Vkhg7B3iGGWVBdIW4cnHWKDlxcRHo+efb19AhkAc0cmFlOAUR0 BsmjTgpeafzka0m3zL+Zdqjij24+T2xsgHtW7TNb8P8Nxvd/o8dxN7QDn5mC3VKLFC+C j74A== X-Gm-Message-State: ANoB5pmhNBnWN0+KaSYbEJeVaGmyHukYaTzYlFmibNjhVOwegsAiBNKg 2FgF+jI/Aaone6DWMkXNH3MoE7AbLR/q2A== X-Google-Smtp-Source: AA0mqf4cK5WHYAsc/IDAgB0PiDy2gMqtPscO1wpqdIkm1b8FuB0AbBwZYaRg/U8K0m5IyZM2wD5gFA== X-Received: by 2002:a05:622a:5a14:b0:3a5:bf05:f0dd with SMTP id fy20-20020a05622a5a1400b003a5bf05f0ddmr10400161qtb.342.1669243096188; Wed, 23 Nov 2022 14:38:16 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id k7-20020ac80747000000b003a4d5fed8c3sm10300187qth.85.2022.11.23.14.38.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:15 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 28/29] btrfs-progs: sync async-thread.[ch] from the kernel Date: Wed, 23 Nov 2022 17:37:36 -0500 Message-Id: <3117806772362ba571917479723483ad7cb71eac.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org We won't actually use the async code in progs, however we call the helpers and such all over the normal code, so sync this into btrfs-progs to make syncing other parts of the kernel easier. Signed-off-by: Josef Bacik --- Makefile | 1 + common/internal.h | 4 + kerncompat.h | 81 ++++++++- kernel-lib/bitops.h | 12 ++ kernel-lib/trace.h | 29 +++ kernel-shared/async-thread.c | 339 +++++++++++++++++++++++++++++++++++ kernel-shared/async-thread.h | 46 +++++ 7 files changed, 511 insertions(+), 1 deletion(-) create mode 100644 kernel-lib/trace.h create mode 100644 kernel-shared/async-thread.c create mode 100644 kernel-shared/async-thread.h diff --git a/Makefile b/Makefile index d2738e44..6ae8c990 100644 --- a/Makefile +++ b/Makefile @@ -154,6 +154,7 @@ objects = \ kernel-lib/rbtree.o \ kernel-lib/tables.o \ kernel-shared/accessors.o \ + kernel-shared/async-thread.o \ kernel-shared/backref.o \ kernel-shared/ctree.o \ kernel-shared/delayed-ref.o \ diff --git a/common/internal.h b/common/internal.h index d5ea9986..81729964 100644 --- a/common/internal.h +++ b/common/internal.h @@ -39,4 +39,8 @@ #define max_t(type,x,y) \ ({ type __x = (x); type __y = (y); __x > __y ? 
__x: __y; }) +#define clamp_t(type, val, lo, hi) min_t(type, max_t(type, val, lo), hi) + +#define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi) + #endif diff --git a/kerncompat.h b/kerncompat.h index c7d59eb8..1ce7d2cc 100644 --- a/kerncompat.h +++ b/kerncompat.h @@ -218,6 +218,28 @@ static inline int mutex_is_locked(struct mutex *m) return (m->lock != 1); } +static inline void spin_lock_init(spinlock_t *lock) +{ + lock->lock = 0; +} + +static inline void spin_lock(spinlock_t *lock) +{ + lock->lock++; +} + +static inline void spin_unlock(spinlock_t *lock) +{ + lock->lock--; +} + +#define spin_lock_irqsave(_l, _f) do { _f = 0; spin_lock((_l)); } while (0) + +static inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) +{ + spin_unlock(lock); +} + #define cond_resched() do { } while (0) #define preempt_enable() do { } while (0) #define preempt_disable() do { } while (0) @@ -540,6 +562,9 @@ do { \ (x) = (val); \ } while (0) +#define smp_rmb() do {} while (0) +#define smp_mb__before_atomic() do {} while (0) + typedef struct refcount_struct { int refs; } refcount_t; @@ -548,9 +573,18 @@ typedef u32 blk_status_t; typedef u32 blk_opf_t; typedef int atomic_t; -struct work_struct { +struct work_struct; +typedef void (*work_func_t)(struct work_struct *work); + +struct workqueue_struct { }; +struct work_struct { + work_func_t func; +}; + +#define INIT_WORK(_w, _f) do { (_w)->func = (_f); } while (0) + typedef struct wait_queue_head_s { } wait_queue_head_t; @@ -565,6 +599,7 @@ struct va_format { #define __init #define __cold +#define __pure #define __printf(a, b) __attribute__((__format__(printf, a, b))) @@ -575,4 +610,48 @@ static inline bool sb_rdonly(struct super_block *sb) #define unlikely(cond) (cond) +static inline void atomic_set(atomic_t *a, int val) +{ + *a = val; +} + +static inline int atomic_read(const atomic_t *a) +{ + return *a; +} + +static inline void atomic_inc(atomic_t *a) +{ + (*a)++; +} + +static inline void atomic_dec(atomic_t *a) +{ + (*a)--; +} + +static inline struct workqueue_struct *alloc_workqueue(const char *name, + unsigned long flags, + int max_active, ...) 
+{ + return (struct workqueue_struct *)5; +} + +static inline void destroy_workqueue(struct workqueue_struct *wq) +{ +} + +static inline void flush_workqueue(struct workqueue_struct *wq) +{ +} + +static inline void workqueue_set_max_active(struct workqueue_struct *wq, + int max_active) +{ +} + +static inline void queue_work(struct workqueue_struct *wq, struct work_struct *work) +{ +} + #endif diff --git a/kernel-lib/bitops.h b/kernel-lib/bitops.h index e0b85215..b9bf3b38 100644 --- a/kernel-lib/bitops.h +++ b/kernel-lib/bitops.h @@ -12,6 +12,8 @@ #define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) #define BITS_TO_U64(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u64)) #define BITS_TO_U32(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u32)) +#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG)) +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) #define for_each_set_bit(bit, addr, size) \ for ((bit) = find_first_bit((addr), (size)); \ @@ -34,6 +36,16 @@ static inline void clear_bit(int nr, unsigned long *addr) addr[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG)); } +static inline bool test_and_set_bit(unsigned long nr, unsigned long *addr) +{ + unsigned long mask = BIT_MASK(nr); + unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr); + unsigned long old = *p; + + *p = old | mask; + return (old & mask) != 0; +} + /** * hweightN - returns the hamming weight of a N-bit word * @x: the word to weigh diff --git a/kernel-lib/trace.h b/kernel-lib/trace.h new file mode 100644 index 00000000..086bcd10 --- /dev/null +++ b/kernel-lib/trace.h @@ -0,0 +1,29 @@ +#ifndef __PROGS_TRACE_H__ +#define __PROGS_TRACE_H__ + +static inline void trace_btrfs_workqueue_alloc(void *ret, const char *name) +{ +} + +static inline void trace_btrfs_ordered_sched(struct btrfs_work *work) +{ +} + +static inline void trace_btrfs_all_work_done(struct btrfs_fs_info *fs_info, + struct btrfs_work *work) +{ +} + +static inline void trace_btrfs_work_sched(struct btrfs_work *work) +{ +} + +static inline void trace_btrfs_work_queued(struct btrfs_work *work) +{ +} + +static inline void trace_btrfs_workqueue_destroy(void *wq) +{ +} + +#endif /* __PROGS_TRACE_H__ */ diff --git a/kernel-shared/async-thread.c b/kernel-shared/async-thread.c new file mode 100644 index 00000000..811668da --- /dev/null +++ b/kernel-shared/async-thread.c @@ -0,0 +1,339 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2007 Oracle. All rights reserved. + * Copyright (C) 2014 Fujitsu. All rights reserved. 
+ */ + +#include "kerncompat.h" +#include "async-thread.h" +#include "ctree.h" +#include "kernel-lib/trace.h" +#include "kernel-lib/bitops.h" + +enum { + WORK_DONE_BIT, + WORK_ORDER_DONE_BIT, +}; + +#define NO_THRESHOLD (-1) +#define DFT_THRESHOLD (32) + +struct btrfs_workqueue { + struct workqueue_struct *normal_wq; + + /* File system this workqueue services */ + struct btrfs_fs_info *fs_info; + + /* List head pointing to ordered work list */ + struct list_head ordered_list; + + /* Spinlock for ordered_list */ + spinlock_t list_lock; + + /* Thresholding related variants */ + atomic_t pending; + + /* Up limit of concurrency workers */ + int limit_active; + + /* Current number of concurrency workers */ + int current_active; + + /* Threshold to change current_active */ + int thresh; + unsigned int count; + spinlock_t thres_lock; +}; + +struct btrfs_fs_info * __pure btrfs_workqueue_owner(const struct btrfs_workqueue *wq) +{ + return wq->fs_info; +} + +struct btrfs_fs_info * __pure btrfs_work_owner(const struct btrfs_work *work) +{ + return work->wq->fs_info; +} + +bool btrfs_workqueue_normal_congested(const struct btrfs_workqueue *wq) +{ + /* + * We could compare wq->pending with num_online_cpus() + * to support "thresh == NO_THRESHOLD" case, but it requires + * moving up atomic_inc/dec in thresh_queue/exec_hook. Let's + * postpone it until someone needs the support of that case. + */ + if (wq->thresh == NO_THRESHOLD) + return false; + + return atomic_read(&wq->pending) > wq->thresh * 2; +} + +struct btrfs_workqueue *btrfs_alloc_workqueue(struct btrfs_fs_info *fs_info, + const char *name, unsigned int flags, + int limit_active, int thresh) +{ + struct btrfs_workqueue *ret = kzalloc(sizeof(*ret), GFP_KERNEL); + + if (!ret) + return NULL; + + ret->fs_info = fs_info; + ret->limit_active = limit_active; + atomic_set(&ret->pending, 0); + if (thresh == 0) + thresh = DFT_THRESHOLD; + /* For low threshold, disabling threshold is a better choice */ + if (thresh < DFT_THRESHOLD) { + ret->current_active = limit_active; + ret->thresh = NO_THRESHOLD; + } else { + /* + * For threshold-able wq, let its concurrency grow on demand. + * Use minimal max_active at alloc time to reduce resource + * usage. + */ + ret->current_active = 1; + ret->thresh = thresh; + } + + ret->normal_wq = alloc_workqueue("btrfs-%s", flags, ret->current_active, + name); + if (!ret->normal_wq) { + kfree(ret); + return NULL; + } + + INIT_LIST_HEAD(&ret->ordered_list); + spin_lock_init(&ret->list_lock); + spin_lock_init(&ret->thres_lock); + trace_btrfs_workqueue_alloc(ret, name); + return ret; +} + +/* + * Hook for threshold which will be called in btrfs_queue_work. + * This hook WILL be called in IRQ handler context, + * so workqueue_set_max_active MUST NOT be called in this hook + */ +static inline void thresh_queue_hook(struct btrfs_workqueue *wq) +{ + if (wq->thresh == NO_THRESHOLD) + return; + atomic_inc(&wq->pending); +} + +/* + * Hook for threshold which will be called before executing the work, + * This hook is called in kthread content. + * So workqueue_set_max_active is called here. + */ +static inline void thresh_exec_hook(struct btrfs_workqueue *wq) +{ + int new_current_active; + long pending; + int need_change = 0; + + if (wq->thresh == NO_THRESHOLD) + return; + + atomic_dec(&wq->pending); + spin_lock(&wq->thres_lock); + /* + * Use wq->count to limit the calling frequency of + * workqueue_set_max_active. 
+ */ + wq->count++; + wq->count %= (wq->thresh / 4); + if (!wq->count) + goto out; + new_current_active = wq->current_active; + + /* + * pending may be changed later, but it's OK since we really + * don't need it so accurate to calculate new_max_active. + */ + pending = atomic_read(&wq->pending); + if (pending > wq->thresh) + new_current_active++; + if (pending < wq->thresh / 2) + new_current_active--; + new_current_active = clamp_val(new_current_active, 1, wq->limit_active); + if (new_current_active != wq->current_active) { + need_change = 1; + wq->current_active = new_current_active; + } +out: + spin_unlock(&wq->thres_lock); + + if (need_change) { + workqueue_set_max_active(wq->normal_wq, wq->current_active); + } +} + +static void run_ordered_work(struct btrfs_workqueue *wq, + struct btrfs_work *self) +{ + struct list_head *list = &wq->ordered_list; + struct btrfs_work *work; + spinlock_t *lock = &wq->list_lock; + unsigned long flags; + bool free_self = false; + + while (1) { + spin_lock_irqsave(lock, flags); + if (list_empty(list)) + break; + work = list_entry(list->next, struct btrfs_work, + ordered_list); + if (!test_bit(WORK_DONE_BIT, &work->flags)) + break; + /* + * Orders all subsequent loads after reading WORK_DONE_BIT, + * paired with the smp_mb__before_atomic in btrfs_work_helper + * this guarantees that the ordered function will see all + * updates from ordinary work function. + */ + smp_rmb(); + + /* + * we are going to call the ordered done function, but + * we leave the work item on the list as a barrier so + * that later work items that are done don't have their + * functions called before this one returns + */ + if (test_and_set_bit(WORK_ORDER_DONE_BIT, &work->flags)) + break; + trace_btrfs_ordered_sched(work); + spin_unlock_irqrestore(lock, flags); + work->ordered_func(work); + + /* now take the lock again and drop our item from the list */ + spin_lock_irqsave(lock, flags); + list_del(&work->ordered_list); + spin_unlock_irqrestore(lock, flags); + + if (work == self) { + /* + * This is the work item that the worker is currently + * executing. + * + * The kernel workqueue code guarantees non-reentrancy + * of work items. I.e., if a work item with the same + * address and work function is queued twice, the second + * execution is blocked until the first one finishes. A + * work item may be freed and recycled with the same + * work function; the workqueue code assumes that the + * original work item cannot depend on the recycled work + * item in that case (see find_worker_executing_work()). + * + * Note that different types of Btrfs work can depend on + * each other, and one type of work on one Btrfs + * filesystem may even depend on the same type of work + * on another Btrfs filesystem via, e.g., a loop device. + * Therefore, we must not allow the current work item to + * be recycled until we are really done, otherwise we + * break the above assumption and can deadlock. + */ + free_self = true; + } else { + /* + * We don't want to call the ordered free functions with + * the lock held. + */ + work->ordered_free(work); + /* NB: work must not be dereferenced past this point. */ + trace_btrfs_all_work_done(wq->fs_info, work); + } + } + spin_unlock_irqrestore(lock, flags); + + if (free_self) { + self->ordered_free(self); + /* NB: self must not be dereferenced past this point. 
*/ + trace_btrfs_all_work_done(wq->fs_info, self); + } +} + +static void btrfs_work_helper(struct work_struct *normal_work) +{ + struct btrfs_work *work = container_of(normal_work, struct btrfs_work, + normal_work); + struct btrfs_workqueue *wq = work->wq; + int need_order = 0; + + /* + * We should not touch things inside work in the following cases: + * 1) after work->func() if it has no ordered_free + * Since the struct is freed in work->func(). + * 2) after setting WORK_DONE_BIT + * The work may be freed in other threads almost instantly. + * So we save the needed things here. + */ + if (work->ordered_func) + need_order = 1; + + trace_btrfs_work_sched(work); + thresh_exec_hook(wq); + work->func(work); + if (need_order) { + /* + * Ensures all memory accesses done in the work function are + * ordered before setting the WORK_DONE_BIT. Ensuring the thread + * which is going to executed the ordered work sees them. + * Pairs with the smp_rmb in run_ordered_work. + */ + smp_mb__before_atomic(); + set_bit(WORK_DONE_BIT, &work->flags); + run_ordered_work(wq, work); + } else { + /* NB: work must not be dereferenced past this point. */ + trace_btrfs_all_work_done(wq->fs_info, work); + } +} + +void btrfs_init_work(struct btrfs_work *work, btrfs_func_t func, + btrfs_func_t ordered_func, btrfs_func_t ordered_free) +{ + work->func = func; + work->ordered_func = ordered_func; + work->ordered_free = ordered_free; + INIT_WORK(&work->normal_work, btrfs_work_helper); + INIT_LIST_HEAD(&work->ordered_list); + work->flags = 0; +} + +void btrfs_queue_work(struct btrfs_workqueue *wq, struct btrfs_work *work) +{ + unsigned long flags; + + work->wq = wq; + thresh_queue_hook(wq); + if (work->ordered_func) { + spin_lock_irqsave(&wq->list_lock, flags); + list_add_tail(&work->ordered_list, &wq->ordered_list); + spin_unlock_irqrestore(&wq->list_lock, flags); + } + trace_btrfs_work_queued(work); + queue_work(wq->normal_wq, &work->normal_work); +} + +void btrfs_destroy_workqueue(struct btrfs_workqueue *wq) +{ + if (!wq) + return; + destroy_workqueue(wq->normal_wq); + trace_btrfs_workqueue_destroy(wq); + kfree(wq); +} + +void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int limit_active) +{ + if (wq) + wq->limit_active = limit_active; +} + +void btrfs_flush_workqueue(struct btrfs_workqueue *wq) +{ + flush_workqueue(wq->normal_wq); +} diff --git a/kernel-shared/async-thread.h b/kernel-shared/async-thread.h new file mode 100644 index 00000000..90657605 --- /dev/null +++ b/kernel-shared/async-thread.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2007 Oracle. All rights reserved. + * Copyright (C) 2014 Fujitsu. All rights reserved. 
+ */ + +#ifndef BTRFS_ASYNC_THREAD_H +#define BTRFS_ASYNC_THREAD_H + +#include "kerncompat.h" +#include "kernel-lib/list.h" + +struct btrfs_fs_info; +struct btrfs_workqueue; +struct btrfs_work; +typedef void (*btrfs_func_t)(struct btrfs_work *arg); + +struct btrfs_work { + btrfs_func_t func; + btrfs_func_t ordered_func; + btrfs_func_t ordered_free; + + /* Don't touch things below */ + struct work_struct normal_work; + struct list_head ordered_list; + struct btrfs_workqueue *wq; + unsigned long flags; +}; + +struct btrfs_workqueue *btrfs_alloc_workqueue(struct btrfs_fs_info *fs_info, + const char *name, + unsigned int flags, + int limit_active, + int thresh); +void btrfs_init_work(struct btrfs_work *work, btrfs_func_t func, + btrfs_func_t ordered_func, btrfs_func_t ordered_free); +void btrfs_queue_work(struct btrfs_workqueue *wq, + struct btrfs_work *work); +void btrfs_destroy_workqueue(struct btrfs_workqueue *wq); +void btrfs_workqueue_set_max(struct btrfs_workqueue *wq, int max); +struct btrfs_fs_info * __pure btrfs_work_owner(const struct btrfs_work *work); +struct btrfs_fs_info * __pure btrfs_workqueue_owner(const struct btrfs_workqueue *wq); +bool btrfs_workqueue_normal_congested(const struct btrfs_workqueue *wq); +void btrfs_flush_workqueue(struct btrfs_workqueue *wq); + +#endif From patchwork Wed Nov 23 22:37:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Josef Bacik X-Patchwork-Id: 13054425 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8422AC4167D for ; Wed, 23 Nov 2022 22:38:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229507AbiKWWim (ORCPT ); Wed, 23 Nov 2022 17:38:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229758AbiKWWiY (ORCPT ); Wed, 23 Nov 2022 17:38:24 -0500 Received: from mail-qk1-x72e.google.com (mail-qk1-x72e.google.com [IPv6:2607:f8b0:4864:20::72e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9979525E4 for ; Wed, 23 Nov 2022 14:38:19 -0800 (PST) Received: by mail-qk1-x72e.google.com with SMTP id d8so13462116qki.13 for ; Wed, 23 Nov 2022 14:38:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=toxicpanda-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=pCPnZhM4gbEmnEJ0NP9oO4mm946QAEdObVjiEmxxgX0=; b=mU8YVZbhdfCOqbPljXN3e0vSnQAFu9GScXcjT6z0lLxlx4bix4Ia676HWMJFPudDpP tAKO6TwbBPTqIe3KmRA/a97vhHa0bls/l2ZMEh5NlHFcBStqe0OAeenbJHOdymWsly6r Hu9h6FMlAAC6XLP7l7hv1iIfmIdkle0V7WShkuLPPdr4xrADSUqv/PJbdexPKJGp6s7F /A/JHFC67gV1J2EpjB3sV9x2sXWOw+yP2nvFcSoHI01C0e4yRxbSkfPJRbxT8tr0eNKP lQD1YadVk4tRUCBRpBTTLG/Vb7ZkJT+grFMoNvLF8ZCCmxuN0p+5SOvE39FJqsl3rYnY prZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pCPnZhM4gbEmnEJ0NP9oO4mm946QAEdObVjiEmxxgX0=; b=SwSiXiCnk71aMaamE1dg9XwIVIjPuD0Nu6tzJW4uwazWhAneza8RLVrjhWLO73pb68 c8z3uVaMvxi6yX6m3yqS4TswWSKnu+GukKqDEuU3AAGt4TEFgETui42y7aaQEjCwF5PA 
JqHqvx/rAHGPP6jKqX9Z65GbhbkkAQlNx5hjQQNNmmLrOAjq5UQQPn4GMboG6o9YDuf6 F2LoKvpYFFnSYsMLaaOfvw6+nVREqW9C9sBwaulv7Qhs1y7X9CsS1Zzq5sYzQl4vO6rK XU+c1MOOv8dOofhsR2cqXQFYiPwM4/usxAccKphMVKQJILc5cIoRg3tXi3SOXwApk+4Y yLUw== X-Gm-Message-State: ANoB5plNsQcw3mT2ovqD9UvTWh9vY5tORQJVdi1nPSGyxzjNJ+4VswaF urPWHvSk0OR8FqaWZ4o7Ad1yc9WmhCSydA== X-Google-Smtp-Source: AA0mqf5b+UAY3Dw618zAE/86iKpbzvM7fdo/5t1hf1PXb3TRAAR/NyZ4XdmOVKWJoEoDtNWLEzXb8A== X-Received: by 2002:a05:620a:164e:b0:6ec:7654:63d5 with SMTP id c14-20020a05620a164e00b006ec765463d5mr18018591qko.425.1669243097656; Wed, 23 Nov 2022 14:38:17 -0800 (PST) Received: from localhost (cpe-174-109-170-245.nc.res.rr.com. [174.109.170.245]) by smtp.gmail.com with ESMTPSA id d7-20020a05620a240700b006e702033b15sm13360829qkn.66.2022.11.23.14.38.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 23 Nov 2022 14:38:17 -0800 (PST) From: Josef Bacik To: linux-btrfs@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v3 29/29] btrfs-progs: sync extent-io-tree.[ch] and misc.h from the kernel Date: Wed, 23 Nov 2022 17:37:37 -0500 Message-Id: <015c1d4853ad3fbfd78b2fc14eeed467423b6f62.1669242804.git.josef@toxicpanda.com> X-Mailer: git-send-email 2.26.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-btrfs@vger.kernel.org This is a bit larger than the previous syncs, because we use extent_io_tree's everywhere. There's a lot of stuff added to kerncompat.h, and then I went through and cleaned up all the API changes, which were - extent_io_tree_init takes an fs_info and an owner now. - extent_io_tree_cleanup is now extent_io_tree_release. - set_extent_dirty takes a gfpmask. - clear_extent_dirty takes a cached_state. - find_first_extent_bit takes a cached_state. The diffstat looks insane for this, but keep in mind extent-io-tree.c and extent-io-tree.h are ~2000 loc just by themselves. 
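For illustration only (not part of the patch): a minimal caller-side sketch of the old vs. new calls, modeled on the conversions further down in check/repair.c and check/clear-cache.c. The wrapper function name and the byte range are invented for the example; only the extent-io-tree calls reflect the synced API.

/*
 * Hypothetical helper showing the converted extent-io-tree API.  The old
 * calls are kept as comments for comparison; error handling is omitted.
 */
static void mark_and_clear_example(struct btrfs_fs_info *fs_info)
{
	struct extent_io_tree used;
	u64 start = 0;
	u64 end;

	/* was: extent_io_tree_init(&used); */
	extent_io_tree_init(fs_info, &used, 0);

	/* was: set_extent_dirty(&used, 0, 65535); */
	set_extent_dirty(&used, 0, 65535, GFP_NOFS);

	/* was: find_first_extent_bit(&used, start, &start, &end, EXTENT_DIRTY); */
	while (!find_first_extent_bit(&used, start, &start, &end,
				      EXTENT_DIRTY, NULL)) {
		/* was: clear_extent_dirty(&used, start, end); */
		clear_extent_dirty(&used, start, end, NULL);
		start = end + 1;
	}

	/* was: extent_io_tree_cleanup(&used); */
	extent_io_tree_release(&used);
}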
Signed-off-by: Josef Bacik --- Makefile | 1 + check/clear-cache.c | 8 +- check/common.h | 2 +- check/main.c | 27 +- check/mode-common.c | 9 +- check/mode-lowmem.c | 4 +- check/repair.c | 16 +- cmds/rescue-chunk-recover.c | 2 +- image/main.c | 9 +- kerncompat.h | 160 ++- kernel-lib/trace.h | 26 + kernel-shared/ctree.h | 1 + kernel-shared/disk-io.c | 16 +- kernel-shared/extent-io-tree.c | 1733 ++++++++++++++++++++++++++++++++ kernel-shared/extent-io-tree.h | 239 +++++ kernel-shared/extent-tree.c | 45 +- kernel-shared/extent_io.c | 473 +-------- kernel-shared/extent_io.h | 39 +- kernel-shared/misc.h | 143 +++ kernel-shared/transaction.c | 5 +- 20 files changed, 2386 insertions(+), 572 deletions(-) create mode 100644 kernel-shared/extent-io-tree.c create mode 100644 kernel-shared/extent-io-tree.h create mode 100644 kernel-shared/misc.h diff --git a/Makefile b/Makefile index 6ae8c990..b7000ad1 100644 --- a/Makefile +++ b/Makefile @@ -160,6 +160,7 @@ objects = \ kernel-shared/delayed-ref.o \ kernel-shared/dir-item.o \ kernel-shared/disk-io.o \ + kernel-shared/extent-io-tree.o \ kernel-shared/extent-tree.o \ kernel-shared/extent_io.o \ kernel-shared/file-item.o \ diff --git a/check/clear-cache.c b/check/clear-cache.c index 1ea937dc..d9842a00 100644 --- a/check/clear-cache.c +++ b/check/clear-cache.c @@ -311,7 +311,7 @@ static int verify_space_cache(struct btrfs_root *root, while (start < bg_end) { ret = find_first_extent_bit(used, cache->start, &start, &end, - EXTENT_DIRTY); + EXTENT_DIRTY, NULL); if (ret || start >= bg_end) { ret = 0; break; @@ -323,7 +323,7 @@ static int verify_space_cache(struct btrfs_root *root, return ret; } end = min(end, bg_end - 1); - clear_extent_dirty(used, start, end); + clear_extent_dirty(used, start, end, NULL); start = end + 1; last_end = start; } @@ -350,7 +350,7 @@ static int check_space_cache(struct btrfs_root *root) int ret; int error = 0; - extent_io_tree_init(&used); + extent_io_tree_init(root->fs_info, &used, 0); ret = btrfs_mark_used_blocks(gfs_info, &used); if (ret) return ret; @@ -406,7 +406,7 @@ static int check_space_cache(struct btrfs_root *root) error++; } } - extent_io_tree_cleanup(&used); + extent_io_tree_release(&used); return error ? 
-EINVAL : 0; } diff --git a/check/common.h b/check/common.h index 645c4539..2d5db213 100644 --- a/check/common.h +++ b/check/common.h @@ -147,7 +147,7 @@ u64 calc_stripe_length(u64 type, u64 length, int num_stripes); static inline void block_group_tree_init(struct block_group_tree *tree) { cache_tree_init(&tree->tree); - extent_io_tree_init(&tree->pending_extents); + extent_io_tree_init(NULL, &tree->pending_extents, 0); INIT_LIST_HEAD(&tree->block_groups); } diff --git a/check/main.c b/check/main.c index 96317c9c..e9bf7c4a 100644 --- a/check/main.c +++ b/check/main.c @@ -5156,7 +5156,7 @@ static void free_block_group_record(struct cache_extent *cache) void free_block_group_tree(struct block_group_tree *tree) { - extent_io_tree_cleanup(&tree->pending_extents); + extent_io_tree_release(&tree->pending_extents); cache_tree_free_extents(&tree->tree, free_block_group_record); } @@ -5169,7 +5169,7 @@ static void update_block_group_used(struct block_group_tree *tree, bg_item = lookup_cache_extent(&tree->tree, bytenr, num_bytes); if (!bg_item) { set_extent_dirty(&tree->pending_extents, bytenr, - bytenr + num_bytes - 1); + bytenr + num_bytes - 1, GFP_NOFS); return; } bg_rec = container_of(bg_item, struct block_group_record, cache); @@ -5408,7 +5408,8 @@ static int process_block_group_item(struct block_group_tree *block_group_cache, } while (!find_first_extent_bit(&block_group_cache->pending_extents, - rec->objectid, &start, &end, EXTENT_DIRTY)) { + rec->objectid, &start, &end, EXTENT_DIRTY, + NULL)) { u64 len; if (start >= rec->objectid + rec->offset) @@ -5417,7 +5418,7 @@ static int process_block_group_item(struct block_group_tree *block_group_cache, len = min(end - start + 1, rec->objectid + rec->offset - start); rec->actual_used += len; clear_extent_dirty(&block_group_cache->pending_extents, start, - start + len - 1); + start + len - 1, NULL); } return ret; @@ -8013,7 +8014,8 @@ static int check_extent_refs(struct btrfs_root *root, rec = container_of(cache, struct extent_record, cache); set_extent_dirty(gfs_info->excluded_extents, rec->start, - rec->start + rec->max_size - 1); + rec->start + rec->max_size - 1, + GFP_NOFS); cache = next_cache_extent(cache); } @@ -8022,7 +8024,8 @@ static int check_extent_refs(struct btrfs_root *root, while (cache) { set_extent_dirty(gfs_info->excluded_extents, cache->start, - cache->start + cache->size - 1); + cache->start + cache->size - 1, + GFP_NOFS); cache = next_cache_extent(cache); } prune_corrupt_blocks(); @@ -8177,7 +8180,8 @@ next: if (!init_extent_tree && opt_check_repair && (!cur_err || fix)) clear_extent_dirty(gfs_info->excluded_extents, rec->start, - rec->start + rec->max_size - 1); + rec->start + rec->max_size - 1, + NULL); free(rec); } repair_abort: @@ -8883,7 +8887,7 @@ static int check_chunks_and_extents(void) cache_tree_init(&nodes); cache_tree_init(&reada); cache_tree_init(&corrupt_blocks); - extent_io_tree_init(&excluded_extents); + extent_io_tree_init(gfs_info, &excluded_extents, 0); INIT_LIST_HEAD(&dropping_trees); INIT_LIST_HEAD(&normal_trees); @@ -8971,7 +8975,7 @@ again: out: if (opt_check_repair) { free_corrupt_blocks_tree(gfs_info->corrupt_blocks); - extent_io_tree_cleanup(&excluded_extents); + extent_io_tree_release(&excluded_extents); gfs_info->fsck_extent_cache = NULL; gfs_info->free_extent_hook = NULL; gfs_info->corrupt_blocks = NULL; @@ -9002,7 +9006,7 @@ loop: free_extent_record_cache(&extent_cache); free_root_item_list(&normal_trees); free_root_item_list(&dropping_trees); - extent_io_tree_cleanup(&excluded_extents); + 
extent_io_tree_release(&excluded_extents); goto again; } @@ -9171,7 +9175,8 @@ static int reset_block_groups(void) btrfs_chunk_type(leaf, chunk), key.offset, btrfs_chunk_length(leaf, chunk)); set_extent_dirty(&gfs_info->free_space_cache, key.offset, - key.offset + btrfs_chunk_length(leaf, chunk)); + key.offset + btrfs_chunk_length(leaf, chunk), + GFP_NOFS); path.slots[0]++; } start = 0; diff --git a/check/mode-common.c b/check/mode-common.c index 96ee311a..b86408dc 100644 --- a/check/mode-common.c +++ b/check/mode-common.c @@ -597,10 +597,11 @@ void reset_cached_block_groups() while (1) { ret = find_first_extent_bit(&gfs_info->free_space_cache, 0, - &start, &end, EXTENT_DIRTY); + &start, &end, EXTENT_DIRTY, NULL); if (ret) break; - clear_extent_dirty(&gfs_info->free_space_cache, start, end); + clear_extent_dirty(&gfs_info->free_space_cache, start, end, + NULL); } start = 0; @@ -627,7 +628,7 @@ int exclude_metadata_blocks(void) excluded_extents = malloc(sizeof(*excluded_extents)); if (!excluded_extents) return -ENOMEM; - extent_io_tree_init(excluded_extents); + extent_io_tree_init(gfs_info, excluded_extents, 0); gfs_info->excluded_extents = excluded_extents; return btrfs_mark_used_tree_blocks(gfs_info, excluded_extents); @@ -636,7 +637,7 @@ int exclude_metadata_blocks(void) void cleanup_excluded_extents(void) { if (gfs_info->excluded_extents) { - extent_io_tree_cleanup(gfs_info->excluded_extents); + extent_io_tree_release(gfs_info->excluded_extents); free(gfs_info->excluded_extents); } gfs_info->excluded_extents = NULL; diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c index 78ef0385..7077f4fb 100644 --- a/check/mode-lowmem.c +++ b/check/mode-lowmem.c @@ -261,12 +261,12 @@ static int modify_block_group_cache(struct btrfs_block_group *block_group, int c if (cache && !block_group->cached) { block_group->cached = 1; - clear_extent_dirty(free_space_cache, start, end - 1); + clear_extent_dirty(free_space_cache, start, end - 1, NULL); } if (!cache && block_group->cached) { block_group->cached = 0; - clear_extent_dirty(free_space_cache, start, end - 1); + clear_extent_dirty(free_space_cache, start, end - 1, NULL); } return 0; } diff --git a/check/repair.c b/check/repair.c index f84df9cf..a457337b 100644 --- a/check/repair.c +++ b/check/repair.c @@ -79,13 +79,13 @@ static int traverse_tree_blocks(struct extent_io_tree *tree, * This can not only avoid forever loop with broken filesystem * but also give us some speedups. 
*/ - if (test_range_bit(tree, eb->start, end - 1, EXTENT_DIRTY, 0)) + if (test_range_bit(tree, eb->start, end - 1, EXTENT_DIRTY, 0, NULL)) return 0; if (pin) btrfs_pin_extent(fs_info, eb->start, eb->len); else - set_extent_dirty(tree, eb->start, end - 1); + set_extent_dirty(tree, eb->start, end - 1, GFP_NOFS); nritems = btrfs_header_nritems(eb); for (i = 0; i < nritems; i++) { @@ -129,7 +129,7 @@ static int traverse_tree_blocks(struct extent_io_tree *tree, btrfs_pin_extent(fs_info, bytenr, fs_info->nodesize); else - set_extent_dirty(tree, bytenr, end); + set_extent_dirty(tree, bytenr, end, GFP_NOFS); continue; } @@ -211,7 +211,7 @@ static int populate_used_from_extent_root(struct btrfs_root *root, ret = -EINVAL; break; } - set_extent_dirty(io_tree, start, end); + set_extent_dirty(io_tree, start, end, GFP_NOFS); } path.slots[0]++; @@ -260,7 +260,7 @@ int btrfs_fix_block_accounting(struct btrfs_trans_handle *trans) if (ret) return ret; - extent_io_tree_init(&used); + extent_io_tree_init(fs_info, &used, 0); ret = btrfs_mark_used_blocks(fs_info, &used); if (ret) @@ -282,7 +282,7 @@ int btrfs_fix_block_accounting(struct btrfs_trans_handle *trans) start = 0; while (1) { ret = find_first_extent_bit(&used, 0, &start, &end, - EXTENT_DIRTY); + EXTENT_DIRTY, NULL); if (ret) break; @@ -291,11 +291,11 @@ int btrfs_fix_block_accounting(struct btrfs_trans_handle *trans) 1, 0); if (ret) goto out; - clear_extent_dirty(&used, start, end); + clear_extent_dirty(&used, start, end, NULL); } btrfs_set_super_bytes_used(fs_info->super_copy, bytes_used); ret = 0; out: - extent_io_tree_cleanup(&used); + extent_io_tree_release(&used); return ret; } diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c index e6f2b80e..9f44d6e1 100644 --- a/cmds/rescue-chunk-recover.c +++ b/cmds/rescue-chunk-recover.c @@ -1100,7 +1100,7 @@ static int block_group_free_all_extent(struct btrfs_trans_handle *trans, if (list_empty(&cache->dirty_list)) list_add_tail(&cache->dirty_list, &trans->dirty_bgs); - set_extent_dirty(&info->free_space_cache, start, end); + set_extent_dirty(&info->free_space_cache, start, end, GFP_NOFS); cache->used = 0; diff --git a/image/main.c b/image/main.c index a329a087..4b0c0fde 100644 --- a/image/main.c +++ b/image/main.c @@ -473,7 +473,7 @@ static void metadump_destroy(struct metadump_struct *md, int num_threads) free(name->sub); free(name); } - extent_io_tree_cleanup(&md->seen); + extent_io_tree_release(&md->seen); } static int metadump_init(struct metadump_struct *md, struct btrfs_root *root, @@ -489,7 +489,7 @@ static int metadump_init(struct metadump_struct *md, struct btrfs_root *root, memset(md, 0, sizeof(*md)); INIT_LIST_HEAD(&md->list); INIT_LIST_HEAD(&md->ordered); - extent_io_tree_init(&md->seen); + extent_io_tree_init(NULL, &md->seen, 0); md->root = root; md->out = out; md->pending_start = (u64)-1; @@ -785,11 +785,12 @@ static int copy_tree_blocks(struct btrfs_root *root, struct extent_buffer *eb, bytenr = btrfs_header_bytenr(eb); if (test_range_bit(&metadump->seen, bytenr, - bytenr + fs_info->nodesize - 1, EXTENT_DIRTY, 1)) + bytenr + fs_info->nodesize - 1, EXTENT_DIRTY, 1, + NULL)) return 0; set_extent_dirty(&metadump->seen, bytenr, - bytenr + fs_info->nodesize - 1); + bytenr + fs_info->nodesize - 1, GFP_NOFS); ret = add_extent(btrfs_header_bytenr(eb), fs_info->nodesize, metadump, 0); diff --git a/kerncompat.h b/kerncompat.h index 1ce7d2cc..7c9e48be 100644 --- a/kerncompat.h +++ b/kerncompat.h @@ -75,10 +75,17 @@ #define BITS_PER_LONG (__SIZEOF_LONG__ * BITS_PER_BYTE) #define 
__GFP_BITS_SHIFT 20 #define __GFP_BITS_MASK ((int)((1 << __GFP_BITS_SHIFT) - 1)) +#define __GFP_DMA32 0 +#define __GFP_HIGHMEM 0 #define GFP_KERNEL 0 #define GFP_NOFS 0 +#define GFP_NOWAIT 0 +#define GFP_ATOMIC 0 #define __read_mostly #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#define _RET_IP_ 0 +#define TASK_UNINTERRUPTIBLE 0 +#define SLAB_MEM_SPREAD 0 #ifndef ULONG_MAX #define ULONG_MAX (~0UL) @@ -358,6 +365,38 @@ static inline int IS_ERR_OR_NULL(const void *ptr) #define kvfree(x) free(x) #define memalloc_nofs_save() (0) #define memalloc_nofs_restore(x) ((void)(x)) +#define __releases(x) +#define __acquires(x) + +struct kmem_cache { + size_t size; +}; + +static inline struct kmem_cache *kmem_cache_create(const char *name, + size_t size, unsigned long idk, + unsigned long flags, void *private) +{ + struct kmem_cache *ret = malloc(sizeof(*ret)); + if (!ret) + return ret; + ret->size = size; + return ret; +} + +static inline void kmem_cache_destroy(struct kmem_cache *cache) +{ + free(cache); +} + +static inline void *kmem_cache_alloc(struct kmem_cache *cache, gfp_t mask) +{ + return malloc(cache->size); +} + +static inline void kmem_cache_free(struct kmem_cache *cache, void *ptr) +{ + free(ptr); +} #define BUG_ON(c) bugon_trace(#c, __FILE__, __func__, __LINE__, (long)(c)) #define BUG() \ @@ -365,7 +404,12 @@ do { \ BUG_ON(1); \ __builtin_unreachable(); \ } while (0) -#define WARN_ON(c) warning_trace(#c, __FILE__, __func__, __LINE__, (long)(c)) + +#define WARN_ON(c) ({ \ + int __ret_warn_on = !!(c); \ + warning_trace(#c, __FILE__, __func__, __LINE__, (long)(c)); \ + __ret_warn_on; \ +}) #define container_of(ptr, type, member) ({ \ const typeof( ((type *)0)->member ) *__mptr = (ptr); \ @@ -564,11 +608,33 @@ do { \ #define smp_rmb() do {} while (0) #define smp_mb__before_atomic() do {} while (0) +#define smp_mb() do {} while (0) typedef struct refcount_struct { int refs; } refcount_t; +static inline void refcount_set(refcount_t *ref, int val) +{ + ref->refs = val; +} + +static inline void refcount_inc(refcount_t *ref) +{ + ref->refs++; +} + +static inline void refcount_dec(refcount_t *ref) +{ + ref->refs--; +} + +static inline bool refcount_dec_and_test(refcount_t *ref) +{ + ref->refs--; + return ref->refs == 0; +} + typedef u32 blk_status_t; typedef u32 blk_opf_t; typedef int atomic_t; @@ -585,9 +651,14 @@ struct work_struct { #define INIT_WORK(_w, _f) do { (_w)->func = (_f); } while (0) -typedef struct wait_queue_head_s { +typedef struct wait_queue_head { } wait_queue_head_t; +struct wait_queue_entry { +}; + +#define DEFINE_WAIT(name) struct wait_queue_entry name = {} + struct super_block { char *s_id; }; @@ -597,6 +668,9 @@ struct va_format { va_list *va; }; +struct lock_class_key { +}; + #define __init #define __cold #define __pure @@ -654,4 +728,86 @@ static inline void queue_work(struct workqueue_struct *wq, struct work_struct *w { } +static inline bool wq_has_sleeper(struct wait_queue_head *wq) +{ + return false; +} + +static inline bool waitqueue_active(struct wait_queue_head *wq) +{ + return false; +} + +static inline void wake_up(struct wait_queue_head *wq) +{ +} + +static inline void lockdep_set_class(spinlock_t *lock, struct lock_class_key *lclass) +{ +} + +static inline bool cond_resched_lock(spinlock_t *lock) +{ + return false; +} + +static inline void init_waitqueue_head(wait_queue_head_t *wqh) +{ +} + +static inline bool need_resched(void) +{ + return false; +} + +static inline bool gfpflags_allow_blocking(gfp_t mask) +{ + return true; +} + +static inline void 
prepare_to_wait(wait_queue_head_t *wqh, + struct wait_queue_entry *entry, + unsigned long flags) +{ +} + +static inline void finish_wait(wait_queue_head_t *wqh, + struct wait_queue_entry *entry) +{ +} + +static inline void schedule(void) +{ +} + +/* + * Temporary definitions while syncing. + */ +struct btrfs_inode; +struct extent_state; + +static inline void btrfs_merge_delalloc_extent(struct btrfs_inode *inode, + struct extent_state *state, + struct extent_state *other) +{ +} + +static inline void btrfs_set_delalloc_extent(struct btrfs_inode *inode, + struct extent_state *state, + u32 bits) +{ +} + +static inline void btrfs_split_delalloc_extent(struct btrfs_inode *inode, + struct extent_state *orig, + u64 split) +{ +} + +static inline void btrfs_clear_delalloc_extent(struct btrfs_inode *inode, + struct extent_state *state, + u32 bits) +{ +} + #endif diff --git a/kernel-lib/trace.h b/kernel-lib/trace.h index 086bcd10..99bee344 100644 --- a/kernel-lib/trace.h +++ b/kernel-lib/trace.h @@ -26,4 +26,30 @@ static inline void trace_btrfs_workqueue_destroy(void *wq) { } +static inline void trace_alloc_extent_state(struct extent_state *state, + gfp_t mask, unsigned long ip) +{ +} + +static inline void trace_free_extent_state(struct extent_state *state, + unsigned long ip) +{ +} + +static inline void trace_btrfs_clear_extent_bit(struct extent_io_tree *tree, + u64 start, u64 end, u32 bits) +{ +} + +static inline void trace_btrfs_set_extent_bit(struct extent_io_tree *tree, + u64 start, u64 end, u32 bits) +{ +} + +static inline void trace_btrfs_convert_extent_bit(struct extent_io_tree *tree, + u64 start, u64 end, u32 bits, + u32 clear_bits) +{ +} + #endif /* __PROGS_TRACE_H__ */ diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h index 39e1748e..cdf6e8e4 100644 --- a/kernel-shared/ctree.h +++ b/kernel-shared/ctree.h @@ -28,6 +28,7 @@ #include "kernel-shared/uapi/btrfs.h" #include "kernel-shared/uapi/btrfs_tree.h" #include "accessors.h" +#include "extent-io-tree.h" struct btrfs_root; struct btrfs_trans_handle; diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c index 4050566a..3c3bd99b 100644 --- a/kernel-shared/disk-io.c +++ b/kernel-shared/disk-io.c @@ -865,10 +865,10 @@ struct btrfs_fs_info *btrfs_new_fs_info(int writable, u64 sb_bytenr) goto free_all; extent_buffer_init_cache(fs_info); - extent_io_tree_init(&fs_info->dirty_buffers); - extent_io_tree_init(&fs_info->free_space_cache); - extent_io_tree_init(&fs_info->pinned_extents); - extent_io_tree_init(&fs_info->extent_ins); + extent_io_tree_init(fs_info, &fs_info->dirty_buffers, 0); + extent_io_tree_init(fs_info, &fs_info->free_space_cache, 0); + extent_io_tree_init(fs_info, &fs_info->pinned_extents, 0); + extent_io_tree_init(fs_info, &fs_info->extent_ins, 0); fs_info->block_group_cache_tree = RB_ROOT; fs_info->excluded_extents = NULL; @@ -1349,11 +1349,11 @@ void btrfs_cleanup_all_caches(struct btrfs_fs_info *fs_info) free_extent_buffer(eb); } free_mapping_cache_tree(&fs_info->mapping_tree.cache_tree); - extent_io_tree_cleanup(&fs_info->dirty_buffers); + extent_io_tree_release(&fs_info->dirty_buffers); extent_buffer_free_cache(fs_info); - extent_io_tree_cleanup(&fs_info->free_space_cache); - extent_io_tree_cleanup(&fs_info->pinned_extents); - extent_io_tree_cleanup(&fs_info->extent_ins); + extent_io_tree_release(&fs_info->free_space_cache); + extent_io_tree_release(&fs_info->pinned_extents); + extent_io_tree_release(&fs_info->extent_ins); } int btrfs_scan_fs_devices(int fd, const char *path, diff --git 
a/kernel-shared/extent-io-tree.c b/kernel-shared/extent-io-tree.c new file mode 100644 index 00000000..206d154f --- /dev/null +++ b/kernel-shared/extent-io-tree.c @@ -0,0 +1,1733 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "messages.h" +#include "ctree.h" +#include "async-thread.h" +#include "extent-io-tree.h" +#include "misc.h" +#include "ulist.h" +#include "kernel-lib/trace.h" +#include "common/internal.h" + +/* + * MODIFIED: + * - temporarily define this until we can sync everything. + */ +struct extent_changeset { + u64 bytes_changed; + struct ulist range_changed; +}; + +/* + * MODIFIED: + * - Need to set this to NULL so we init this when we init an extent_io_tree + * for the first time. + */ +static struct kmem_cache *extent_state_cache = NULL; + +static inline bool extent_state_in_tree(const struct extent_state *state) +{ + return !RB_EMPTY_NODE(&state->rb_node); +} + +#ifdef CONFIG_BTRFS_DEBUG +static LIST_HEAD(states); +static DEFINE_SPINLOCK(leak_lock); + +static inline void btrfs_leak_debug_add_state(struct extent_state *state) +{ + unsigned long flags; + + spin_lock_irqsave(&leak_lock, flags); + list_add(&state->leak_list, &states); + spin_unlock_irqrestore(&leak_lock, flags); +} + +static inline void btrfs_leak_debug_del_state(struct extent_state *state) +{ + unsigned long flags; + + spin_lock_irqsave(&leak_lock, flags); + list_del(&state->leak_list); + spin_unlock_irqrestore(&leak_lock, flags); +} + +static inline void btrfs_extent_state_leak_debug_check(void) +{ + struct extent_state *state; + + while (!list_empty(&states)) { + state = list_entry(states.next, struct extent_state, leak_list); + pr_err("BTRFS: state leak: start %llu end %llu state %u in tree %d refs %d\n", + state->start, state->end, state->state, + extent_state_in_tree(state), + refcount_read(&state->refs)); + list_del(&state->leak_list); + kmem_cache_free(extent_state_cache, state); + } +} + +#define btrfs_debug_check_extent_io_range(tree, start, end) \ + __btrfs_debug_check_extent_io_range(__func__, (tree), (start), (end)) +static inline void __btrfs_debug_check_extent_io_range(const char *caller, + struct extent_io_tree *tree, + u64 start, u64 end) +{ + struct btrfs_inode *inode = tree->inode; + u64 isize; + + if (!inode) + return; + + isize = i_size_read(&inode->vfs_inode); + if (end >= PAGE_SIZE && (end % 2) == 0 && end != isize - 1) { + btrfs_debug_rl(inode->root->fs_info, + "%s: ino %llu isize %llu odd range [%llu,%llu]", + caller, btrfs_ino(inode), isize, start, end); + } +} +#else +#define btrfs_leak_debug_add_state(state) do {} while (0) +#define btrfs_leak_debug_del_state(state) do {} while (0) +#define btrfs_extent_state_leak_debug_check() do {} while (0) +#define btrfs_debug_check_extent_io_range(c, s, e) do {} while (0) +#endif + +/* + * For the file_extent_tree, we want to hold the inode lock when we lookup and + * update the disk_i_size, but lockdep will complain because our io_tree we hold + * the tree lock and get the inode lock when setting delalloc. These two things + * are unrelated, so make a class for the file_extent_tree so we don't get the + * two locking patterns mixed up. + */ +static struct lock_class_key file_extent_tree_class; + +struct tree_entry { + u64 start; + u64 end; + struct rb_node rb_node; +}; + +/* + * MODIFIED: + * - We use this as an entry point for init'ing the kmem_cache. 
+ */ +void extent_io_tree_init(struct btrfs_fs_info *fs_info, + struct extent_io_tree *tree, unsigned int owner) +{ + extent_state_init_cachep(); + tree->fs_info = fs_info; + tree->state = RB_ROOT; + spin_lock_init(&tree->lock); + tree->inode = NULL; + tree->owner = owner; + if (owner == IO_TREE_INODE_FILE_EXTENT) + lockdep_set_class(&tree->lock, &file_extent_tree_class); +} + +void extent_io_tree_release(struct extent_io_tree *tree) +{ + spin_lock(&tree->lock); + /* + * Do a single barrier for the waitqueue_active check here, the state + * of the waitqueue should not change once extent_io_tree_release is + * called. + */ + smp_mb(); + while (!RB_EMPTY_ROOT(&tree->state)) { + struct rb_node *node; + struct extent_state *state; + + node = rb_first(&tree->state); + state = rb_entry(node, struct extent_state, rb_node); + rb_erase(&state->rb_node, &tree->state); + RB_CLEAR_NODE(&state->rb_node); + /* + * btree io trees aren't supposed to have tasks waiting for + * changes in the flags of extent states ever. + */ + ASSERT(!waitqueue_active(&state->wq)); + free_extent_state(state); + + cond_resched_lock(&tree->lock); + } + spin_unlock(&tree->lock); +} + +static struct extent_state *alloc_extent_state(gfp_t mask) +{ + struct extent_state *state; + + /* + * The given mask might be not appropriate for the slab allocator, + * drop the unsupported bits + */ + mask &= ~(__GFP_DMA32|__GFP_HIGHMEM); + state = kmem_cache_alloc(extent_state_cache, mask); + if (!state) + return state; + state->state = 0; + RB_CLEAR_NODE(&state->rb_node); + btrfs_leak_debug_add_state(state); + refcount_set(&state->refs, 1); + init_waitqueue_head(&state->wq); + trace_alloc_extent_state(state, mask, _RET_IP_); + return state; +} + +static struct extent_state *alloc_extent_state_atomic(struct extent_state *prealloc) +{ + if (!prealloc) + prealloc = alloc_extent_state(GFP_ATOMIC); + + return prealloc; +} + +void free_extent_state(struct extent_state *state) +{ + if (!state) + return; + if (refcount_dec_and_test(&state->refs)) { + WARN_ON(extent_state_in_tree(state)); + btrfs_leak_debug_del_state(state); + trace_free_extent_state(state, _RET_IP_); + kmem_cache_free(extent_state_cache, state); + } +} + +static int add_extent_changeset(struct extent_state *state, u32 bits, + struct extent_changeset *changeset, + int set) +{ + int ret; + + if (!changeset) + return 0; + if (set && (state->state & bits) == bits) + return 0; + if (!set && (state->state & bits) == 0) + return 0; + changeset->bytes_changed += state->end - state->start + 1; + ret = ulist_add(&changeset->range_changed, state->start, state->end, + GFP_ATOMIC); + return ret; +} + +static inline struct extent_state *next_state(struct extent_state *state) +{ + struct rb_node *next = rb_next(&state->rb_node); + + if (next) + return rb_entry(next, struct extent_state, rb_node); + else + return NULL; +} + +static inline struct extent_state *prev_state(struct extent_state *state) +{ + struct rb_node *next = rb_prev(&state->rb_node); + + if (next) + return rb_entry(next, struct extent_state, rb_node); + else + return NULL; +} + +/* + * Search @tree for an entry that contains @offset. Such entry would have + * entry->start <= offset && entry->end >= offset. 
+ * + * @tree: the tree to search + * @offset: offset that should fall within an entry in @tree + * @node_ret: pointer where new node should be anchored (used when inserting an + * entry in the tree) + * @parent_ret: points to entry which would have been the parent of the entry, + * containing @offset + * + * Return a pointer to the entry that contains @offset byte address and don't change + * @node_ret and @parent_ret. + * + * If no such entry exists, return pointer to entry that ends before @offset + * and fill parameters @node_ret and @parent_ret, ie. does not return NULL. + */ +static inline struct extent_state *tree_search_for_insert(struct extent_io_tree *tree, + u64 offset, + struct rb_node ***node_ret, + struct rb_node **parent_ret) +{ + struct rb_root *root = &tree->state; + struct rb_node **node = &root->rb_node; + struct rb_node *prev = NULL; + struct extent_state *entry = NULL; + + while (*node) { + prev = *node; + entry = rb_entry(prev, struct extent_state, rb_node); + + if (offset < entry->start) + node = &(*node)->rb_left; + else if (offset > entry->end) + node = &(*node)->rb_right; + else + return entry; + } + + if (node_ret) + *node_ret = node; + if (parent_ret) + *parent_ret = prev; + + /* Search neighbors until we find the first one past the end */ + while (entry && offset > entry->end) + entry = next_state(entry); + + return entry; +} + +/* + * Search offset in the tree or fill neighbor rbtree node pointers. + * + * @tree: the tree to search + * @offset: offset that should fall within an entry in @tree + * @next_ret: pointer to the first entry whose range ends after @offset + * @prev_ret: pointer to the first entry whose range begins before @offset + * + * Return a pointer to the entry that contains @offset byte address. If no + * such entry exists, then return NULL and fill @prev_ret and @next_ret. + * Otherwise return the found entry and other pointers are left untouched. + */ +static struct extent_state *tree_search_prev_next(struct extent_io_tree *tree, + u64 offset, + struct extent_state **prev_ret, + struct extent_state **next_ret) +{ + struct rb_root *root = &tree->state; + struct rb_node **node = &root->rb_node; + struct extent_state *orig_prev; + struct extent_state *entry = NULL; + + ASSERT(prev_ret); + ASSERT(next_ret); + + while (*node) { + entry = rb_entry(*node, struct extent_state, rb_node); + + if (offset < entry->start) + node = &(*node)->rb_left; + else if (offset > entry->end) + node = &(*node)->rb_right; + else + return entry; + } + + orig_prev = entry; + while (entry && offset > entry->end) + entry = next_state(entry); + *next_ret = entry; + entry = orig_prev; + + while (entry && offset < entry->start) + entry = prev_state(entry); + *prev_ret = entry; + + return NULL; +} + +/* + * Inexact rb-tree search, return the next entry if @offset is not found + */ +static inline struct extent_state *tree_search(struct extent_io_tree *tree, u64 offset) +{ + return tree_search_for_insert(tree, offset, NULL, NULL); +} + +static void extent_io_tree_panic(struct extent_io_tree *tree, int err) +{ + btrfs_panic(tree->fs_info, err, + "locking error: extent tree was modified by another thread while locked"); +} + +/* + * Utility function to look for merge candidates inside a given range. Any + * extents with matching state are merged together into a single extent in the + * tree. Extents with EXTENT_IO in their state field are not merged because + * the end_io handlers need to be able to do operations on them without + * sleeping (or doing allocations/splits). 
+ * + * This should be called with the tree lock held. + */ +static void merge_state(struct extent_io_tree *tree, struct extent_state *state) +{ + struct extent_state *other; + + if (state->state & (EXTENT_LOCKED | EXTENT_BOUNDARY)) + return; + + other = prev_state(state); + if (other && other->end == state->start - 1 && + other->state == state->state) { + if (tree->inode) + btrfs_merge_delalloc_extent(tree->inode, state, other); + state->start = other->start; + rb_erase(&other->rb_node, &tree->state); + RB_CLEAR_NODE(&other->rb_node); + free_extent_state(other); + } + other = next_state(state); + if (other && other->start == state->end + 1 && + other->state == state->state) { + if (tree->inode) + btrfs_merge_delalloc_extent(tree->inode, state, other); + state->end = other->end; + rb_erase(&other->rb_node, &tree->state); + RB_CLEAR_NODE(&other->rb_node); + free_extent_state(other); + } +} + +static void set_state_bits(struct extent_io_tree *tree, + struct extent_state *state, + u32 bits, struct extent_changeset *changeset) +{ + u32 bits_to_set = bits & ~EXTENT_CTLBITS; + int ret; + + if (tree->inode) + btrfs_set_delalloc_extent(tree->inode, state, bits); + + ret = add_extent_changeset(state, bits_to_set, changeset, 1); + BUG_ON(ret < 0); + state->state |= bits_to_set; +} + +/* + * Insert an extent_state struct into the tree. 'bits' are set on the + * struct before it is inserted. + * + * This may return -EEXIST if the extent is already there, in which case the + * state struct is freed. + * + * The tree lock is not taken internally. This is a utility function and + * probably isn't what you want to call (see set/clear_extent_bit). + */ +static int insert_state(struct extent_io_tree *tree, + struct extent_state *state, + u32 bits, struct extent_changeset *changeset) +{ + struct rb_node **node; + struct rb_node *parent = NULL; + const u64 end = state->end; + + set_state_bits(tree, state, bits, changeset); + + node = &tree->state.rb_node; + while (*node) { + struct extent_state *entry; + + parent = *node; + entry = rb_entry(parent, struct extent_state, rb_node); + + if (end < entry->start) { + node = &(*node)->rb_left; + } else if (end > entry->end) { + node = &(*node)->rb_right; + } else { + btrfs_err(tree->fs_info, + "found node %llu %llu on insert of %llu %llu", + entry->start, entry->end, state->start, end); + return -EEXIST; + } + } + + rb_link_node(&state->rb_node, parent, node); + rb_insert_color(&state->rb_node, &tree->state); + + merge_state(tree, state); + return 0; +} + +/* + * Insert state to @tree to the location given by @node and @parent. + */ +static void insert_state_fast(struct extent_io_tree *tree, + struct extent_state *state, struct rb_node **node, + struct rb_node *parent, unsigned bits, + struct extent_changeset *changeset) +{ + set_state_bits(tree, state, bits, changeset); + rb_link_node(&state->rb_node, parent, node); + rb_insert_color(&state->rb_node, &tree->state); + merge_state(tree, state); +} + +/* + * Split a given extent state struct in two, inserting the preallocated + * struct 'prealloc' as the newly created second half. 'split' indicates an + * offset inside 'orig' where it should be split. + * + * Before calling, + * the tree has 'orig' at [orig->start, orig->end]. After calling, there + * are two extent state structs in the tree: + * prealloc: [orig->start, split - 1] + * orig: [ split, orig->end ] + * + * The tree locks are not taken by this function. They need to be held + * by the caller. 
+ */ +static int split_state(struct extent_io_tree *tree, struct extent_state *orig, + struct extent_state *prealloc, u64 split) +{ + struct rb_node *parent = NULL; + struct rb_node **node; + + if (tree->inode) + btrfs_split_delalloc_extent(tree->inode, orig, split); + + prealloc->start = orig->start; + prealloc->end = split - 1; + prealloc->state = orig->state; + orig->start = split; + + parent = &orig->rb_node; + node = &parent; + while (*node) { + struct extent_state *entry; + + parent = *node; + entry = rb_entry(parent, struct extent_state, rb_node); + + if (prealloc->end < entry->start) { + node = &(*node)->rb_left; + } else if (prealloc->end > entry->end) { + node = &(*node)->rb_right; + } else { + free_extent_state(prealloc); + return -EEXIST; + } + } + + rb_link_node(&prealloc->rb_node, parent, node); + rb_insert_color(&prealloc->rb_node, &tree->state); + + return 0; +} + +/* + * Utility function to clear some bits in an extent state struct. It will + * optionally wake up anyone waiting on this state (wake == 1). + * + * If no bits are set on the state struct after clearing things, the + * struct is freed and removed from the tree + */ +static struct extent_state *clear_state_bit(struct extent_io_tree *tree, + struct extent_state *state, + u32 bits, int wake, + struct extent_changeset *changeset) +{ + struct extent_state *next; + u32 bits_to_clear = bits & ~EXTENT_CTLBITS; + int ret; + + if (tree->inode) + btrfs_clear_delalloc_extent(tree->inode, state, bits); + + ret = add_extent_changeset(state, bits_to_clear, changeset, 0); + BUG_ON(ret < 0); + state->state &= ~bits_to_clear; + if (wake) + wake_up(&state->wq); + if (state->state == 0) { + next = next_state(state); + if (extent_state_in_tree(state)) { + rb_erase(&state->rb_node, &tree->state); + RB_CLEAR_NODE(&state->rb_node); + free_extent_state(state); + } else { + WARN_ON(1); + } + } else { + merge_state(tree, state); + next = next_state(state); + } + return next; +} + +/* + * Clear some bits on a range in the tree. This may require splitting or + * inserting elements in the tree, so the gfp mask is used to indicate which + * allocations or sleeping are allowed. + * + * Pass 'wake' == 1 to kick any sleepers, and 'delete' == 1 to remove the given + * range from the tree regardless of state (ie for truncate). + * + * The range [start, end] is inclusive. + * + * This takes the tree lock, and returns 0 on success and < 0 on error. + */ +int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_state **cached_state, + gfp_t mask, struct extent_changeset *changeset) +{ + struct extent_state *state; + struct extent_state *cached; + struct extent_state *prealloc = NULL; + u64 last_end; + int err; + int clear = 0; + int wake; + int delete = (bits & EXTENT_CLEAR_ALL_BITS); + + btrfs_debug_check_extent_io_range(tree, start, end); + trace_btrfs_clear_extent_bit(tree, start, end - start + 1, bits); + + if (delete) + bits |= ~EXTENT_CTLBITS; + + if (bits & EXTENT_DELALLOC) + bits |= EXTENT_NORESERVE; + + wake = (bits & EXTENT_LOCKED) ? 1 : 0; + if (bits & (EXTENT_LOCKED | EXTENT_BOUNDARY)) + clear = 1; +again: + if (!prealloc) { + /* + * Don't care for allocation failure here because we might end + * up not needing the pre-allocated extent state at all, which + * is the case if we only have in the tree extent states that + * cover our input range and don't cover too any other range. + * If we end up needing a new extent state we allocate it later. 
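+ * + * In short (a summary of the code below, not an excerpt from it): 'prealloc' + * is allocated with the caller's gfp mask outside the lock, consumed only if + * a split or insert turns out to be necessary, topped up under the lock with + * alloc_extent_state_atomic() when needed, and freed at 'out' if it was never + * used.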
+ */ + prealloc = alloc_extent_state(mask); + } + + spin_lock(&tree->lock); + if (cached_state) { + cached = *cached_state; + + if (clear) { + *cached_state = NULL; + cached_state = NULL; + } + + if (cached && extent_state_in_tree(cached) && + cached->start <= start && cached->end > start) { + if (clear) + refcount_dec(&cached->refs); + state = cached; + goto hit_next; + } + if (clear) + free_extent_state(cached); + } + + /* This search will find the extents that end after our range starts. */ + state = tree_search(tree, start); + if (!state) + goto out; +hit_next: + if (state->start > end) + goto out; + WARN_ON(state->end < start); + last_end = state->end; + + /* The state doesn't have the wanted bits, go ahead. */ + if (!(state->state & bits)) { + state = next_state(state); + goto next; + } + + /* + * | ---- desired range ---- | + * | state | or + * | ------------- state -------------- | + * + * We need to split the extent we found, and may flip bits on second + * half. + * + * If the extent we found extends past our range, we just split and + * search again. It'll get split again the next time though. + * + * If the extent we found is inside our range, we clear the desired bit + * on it. + */ + + if (state->start < start) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + err = split_state(tree, state, prealloc, start); + if (err) + extent_io_tree_panic(tree, err); + + prealloc = NULL; + if (err) + goto out; + if (state->end <= end) { + state = clear_state_bit(tree, state, bits, wake, changeset); + goto next; + } + goto search_again; + } + /* + * | ---- desired range ---- | + * | state | + * We need to split the extent, and clear the bit on the first half. + */ + if (state->start <= end && state->end > end) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + err = split_state(tree, state, prealloc, end + 1); + if (err) + extent_io_tree_panic(tree, err); + + if (wake) + wake_up(&state->wq); + + clear_state_bit(tree, prealloc, bits, wake, changeset); + + prealloc = NULL; + goto out; + } + + state = clear_state_bit(tree, state, bits, wake, changeset); +next: + if (last_end == (u64)-1) + goto out; + start = last_end + 1; + if (start <= end && state && !need_resched()) + goto hit_next; + +search_again: + if (start > end) + goto out; + spin_unlock(&tree->lock); + if (gfpflags_allow_blocking(mask)) + cond_resched(); + goto again; + +out: + spin_unlock(&tree->lock); + if (prealloc) + free_extent_state(prealloc); + + return 0; + +} + +static void wait_on_state(struct extent_io_tree *tree, + struct extent_state *state) + __releases(tree->lock) + __acquires(tree->lock) +{ + DEFINE_WAIT(wait); + prepare_to_wait(&state->wq, &wait, TASK_UNINTERRUPTIBLE); + spin_unlock(&tree->lock); + schedule(); + spin_lock(&tree->lock); + finish_wait(&state->wq, &wait); +} + +/* + * Wait for one or more bits to clear on a range in the state tree. + * The range [start, end] is inclusive. + * The tree lock is taken by this function + */ +void wait_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 bits, + struct extent_state **cached_state) +{ + struct extent_state *state; + + btrfs_debug_check_extent_io_range(tree, start, end); + + spin_lock(&tree->lock); +again: + /* + * Maintain cached_state, as we may not remove it from the tree if there + * are more bits than the bits we're waiting on set on this state. 
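+ * + * For example, when waiting for EXTENT_LOCKED to clear on a state that also + * has EXTENT_DELALLOC set, clearing the lock bit leaves the state in the + * tree, so the cached pointer stays valid and we can resume from it instead + * of searching from the tree root again.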
+ */ + if (cached_state && *cached_state) { + state = *cached_state; + if (extent_state_in_tree(state) && + state->start <= start && start < state->end) + goto process_node; + } + while (1) { + /* + * This search will find all the extents that end after our + * range starts. + */ + state = tree_search(tree, start); +process_node: + if (!state) + break; + if (state->start > end) + goto out; + + if (state->state & bits) { + start = state->start; + refcount_inc(&state->refs); + wait_on_state(tree, state); + free_extent_state(state); + goto again; + } + start = state->end + 1; + + if (start > end) + break; + + if (!cond_resched_lock(&tree->lock)) { + state = next_state(state); + goto process_node; + } + } +out: + /* This state is no longer useful, clear it and free it up. */ + if (cached_state && *cached_state) { + state = *cached_state; + *cached_state = NULL; + free_extent_state(state); + } + spin_unlock(&tree->lock); +} + +static void cache_state_if_flags(struct extent_state *state, + struct extent_state **cached_ptr, + unsigned flags) +{ + if (cached_ptr && !(*cached_ptr)) { + if (!flags || (state->state & flags)) { + *cached_ptr = state; + refcount_inc(&state->refs); + } + } +} + +static void cache_state(struct extent_state *state, + struct extent_state **cached_ptr) +{ + return cache_state_if_flags(state, cached_ptr, + EXTENT_LOCKED | EXTENT_BOUNDARY); +} + +/* + * Find the first state struct with 'bits' set after 'start', and return it. + * tree->lock must be held. NULL will returned if nothing was found after + * 'start'. + */ +static struct extent_state *find_first_extent_bit_state(struct extent_io_tree *tree, + u64 start, u32 bits) +{ + struct extent_state *state; + + /* + * This search will find all the extents that end after our range + * starts. + */ + state = tree_search(tree, start); + while (state) { + if (state->end >= start && (state->state & bits)) + return state; + state = next_state(state); + } + return NULL; +} + +/* + * Find the first offset in the io tree with one or more @bits set. + * + * Note: If there are multiple bits set in @bits, any of them will match. + * + * Return 0 if we find something, and update @start_ret and @end_ret. + * Return 1 if we found nothing. + */ +int find_first_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits, + struct extent_state **cached_state) +{ + struct extent_state *state; + int ret = 1; + + spin_lock(&tree->lock); + if (cached_state && *cached_state) { + state = *cached_state; + if (state->end == start - 1 && extent_state_in_tree(state)) { + while ((state = next_state(state)) != NULL) { + if (state->state & bits) + goto got_it; + } + free_extent_state(*cached_state); + *cached_state = NULL; + goto out; + } + free_extent_state(*cached_state); + *cached_state = NULL; + } + + state = find_first_extent_bit_state(tree, start, bits); +got_it: + if (state) { + cache_state_if_flags(state, cached_state, 0); + *start_ret = state->start; + *end_ret = state->end; + ret = 0; + } +out: + spin_unlock(&tree->lock); + return ret; +} + +/* + * Find a contiguous area of bits + * + * @tree: io tree to check + * @start: offset to start the search from + * @start_ret: the first offset we found with the bits set + * @end_ret: the final contiguous range of the bits that were set + * @bits: bits to look for + * + * set_extent_bit and clear_extent_bit can temporarily split contiguous ranges + * to set bits appropriately, and then merge them again. 
During this time it + * will drop the tree->lock, so use this helper if you want to find the actual + * contiguous area for given bits. We will search to the first bit we find, and + * then walk down the tree until we find a non-contiguous area. The area + * returned will be the full contiguous area with the bits set. + */ +int find_contiguous_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits) +{ + struct extent_state *state; + int ret = 1; + + spin_lock(&tree->lock); + state = find_first_extent_bit_state(tree, start, bits); + if (state) { + *start_ret = state->start; + *end_ret = state->end; + while ((state = next_state(state)) != NULL) { + if (state->start > (*end_ret + 1)) + break; + *end_ret = state->end; + } + ret = 0; + } + spin_unlock(&tree->lock); + return ret; +} + +/* + * Find a contiguous range of bytes in the file marked as delalloc, not more + * than 'max_bytes'. start and end are used to return the range. + * + * True is returned if we find something, false if nothing was in the tree. + */ +bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start, + u64 *end, u64 max_bytes, + struct extent_state **cached_state) +{ + struct extent_state *state; + u64 cur_start = *start; + bool found = false; + u64 total_bytes = 0; + + spin_lock(&tree->lock); + + /* + * This search will find all the extents that end after our range + * starts. + */ + state = tree_search(tree, cur_start); + if (!state) { + *end = (u64)-1; + goto out; + } + + while (state) { + if (found && (state->start != cur_start || + (state->state & EXTENT_BOUNDARY))) { + goto out; + } + if (!(state->state & EXTENT_DELALLOC)) { + if (!found) + *end = state->end; + goto out; + } + if (!found) { + *start = state->start; + *cached_state = state; + refcount_inc(&state->refs); + } + found = true; + *end = state->end; + cur_start = state->end + 1; + total_bytes += state->end - state->start + 1; + if (total_bytes >= max_bytes) + break; + state = next_state(state); + } +out: + spin_unlock(&tree->lock); + return found; +} + +/* + * Set some bits on a range in the tree. This may require allocations or + * sleeping, so the gfp mask is used to indicate what is allowed. + * + * If any of the exclusive bits are set, this will fail with -EEXIST if some + * part of the range already has the desired bits set. The extent_state of the + * existing range is returned in failed_state in this case, and the start of the + * existing range is returned in failed_start. failed_state is used as an + * optimization for wait_extent_bit; failed_start must be used as the source of + * truth as failed_state may have changed since we returned. + * + * [start, end] is inclusive. This takes the tree lock.
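+ * + * As an illustration: asking for EXTENT_LOCKED on [0, 8191] while [4096, 8191] + * is already locked will set the bit on [0, 4095], then fail with -EEXIST, + * with *failed_start set to 4096 and failed_state caching the conflicting + * state. lock_extent() below then clears [0, 4095], waits for the conflicting + * lock to go away and retries the whole range.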
+ */ +static int __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, u64 *failed_start, + struct extent_state **failed_state, + struct extent_state **cached_state, + struct extent_changeset *changeset, gfp_t mask) +{ + struct extent_state *state; + struct extent_state *prealloc = NULL; + struct rb_node **p; + struct rb_node *parent; + int err = 0; + u64 last_start; + u64 last_end; + u32 exclusive_bits = (bits & EXTENT_LOCKED); + + btrfs_debug_check_extent_io_range(tree, start, end); + trace_btrfs_set_extent_bit(tree, start, end - start + 1, bits); + + if (exclusive_bits) + ASSERT(failed_start); + else + ASSERT(failed_start == NULL && failed_state == NULL); +again: + if (!prealloc) { + /* + * Don't care for allocation failure here because we might end + * up not needing the pre-allocated extent state at all, which + * is the case if we only have in the tree extent states that + * cover our input range and don't cover too any other range. + * If we end up needing a new extent state we allocate it later. + */ + prealloc = alloc_extent_state(mask); + } + + spin_lock(&tree->lock); + if (cached_state && *cached_state) { + state = *cached_state; + if (state->start <= start && state->end > start && + extent_state_in_tree(state)) + goto hit_next; + } + /* + * This search will find all the extents that end after our range + * starts. + */ + state = tree_search_for_insert(tree, start, &p, &parent); + if (!state) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + prealloc->start = start; + prealloc->end = end; + insert_state_fast(tree, prealloc, p, parent, bits, changeset); + cache_state(prealloc, cached_state); + prealloc = NULL; + goto out; + } +hit_next: + last_start = state->start; + last_end = state->end; + + /* + * | ---- desired range ---- | + * | state | + * + * Just lock what we found and keep going + */ + if (state->start == start && state->end <= end) { + if (state->state & exclusive_bits) { + *failed_start = state->start; + cache_state(state, failed_state); + err = -EEXIST; + goto out; + } + + set_state_bits(tree, state, bits, changeset); + cache_state(state, cached_state); + merge_state(tree, state); + if (last_end == (u64)-1) + goto out; + start = last_end + 1; + state = next_state(state); + if (start < end && state && state->start == start && + !need_resched()) + goto hit_next; + goto search_again; + } + + /* + * | ---- desired range ---- | + * | state | + * or + * | ------------- state -------------- | + * + * We need to split the extent we found, and may flip bits on second + * half. + * + * If the extent we found extends past our range, we just split and + * search again. It'll get split again the next time though. + * + * If the extent we found is inside our range, we set the desired bit + * on it. + */ + if (state->start < start) { + if (state->state & exclusive_bits) { + *failed_start = start; + cache_state(state, failed_state); + err = -EEXIST; + goto out; + } + + /* + * If this extent already has all the bits we want set, then + * skip it, not necessary to split it or do anything with it. 
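+ * + * For instance, when setting EXTENT_DIRTY | EXTENT_UPTODATE and the + * overlapping state already has both bits set, we only advance @start past + * that state and keep searching.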
+ */ + if ((state->state & bits) == bits) { + start = state->end + 1; + cache_state(state, cached_state); + goto search_again; + } + + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + err = split_state(tree, state, prealloc, start); + if (err) + extent_io_tree_panic(tree, err); + + prealloc = NULL; + if (err) + goto out; + if (state->end <= end) { + set_state_bits(tree, state, bits, changeset); + cache_state(state, cached_state); + merge_state(tree, state); + if (last_end == (u64)-1) + goto out; + start = last_end + 1; + state = next_state(state); + if (start < end && state && state->start == start && + !need_resched()) + goto hit_next; + } + goto search_again; + } + /* + * | ---- desired range ---- | + * | state | or | state | + * + * There's a hole, we need to insert something in it and ignore the + * extent we found. + */ + if (state->start > start) { + u64 this_end; + if (end < last_start) + this_end = end; + else + this_end = last_start - 1; + + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + + /* + * Avoid to free 'prealloc' if it can be merged with the later + * extent. + */ + prealloc->start = start; + prealloc->end = this_end; + err = insert_state(tree, prealloc, bits, changeset); + if (err) + extent_io_tree_panic(tree, err); + + cache_state(prealloc, cached_state); + prealloc = NULL; + start = this_end + 1; + goto search_again; + } + /* + * | ---- desired range ---- | + * | state | + * + * We need to split the extent, and set the bit on the first half + */ + if (state->start <= end && state->end > end) { + if (state->state & exclusive_bits) { + *failed_start = start; + cache_state(state, failed_state); + err = -EEXIST; + goto out; + } + + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) + goto search_again; + err = split_state(tree, state, prealloc, end + 1); + if (err) + extent_io_tree_panic(tree, err); + + set_state_bits(tree, prealloc, bits, changeset); + cache_state(prealloc, cached_state); + merge_state(tree, prealloc); + prealloc = NULL; + goto out; + } + +search_again: + if (start > end) + goto out; + spin_unlock(&tree->lock); + if (gfpflags_allow_blocking(mask)) + cond_resched(); + goto again; + +out: + spin_unlock(&tree->lock); + if (prealloc) + free_extent_state(prealloc); + + return err; + +} + +int set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_state **cached_state, gfp_t mask) +{ + return __set_extent_bit(tree, start, end, bits, NULL, NULL, + cached_state, NULL, mask); +} + +/* + * Convert all bits in a given range from one bit to another + * + * @tree: the io tree to search + * @start: the start offset in bytes + * @end: the end offset in bytes (inclusive) + * @bits: the bits to set in this range + * @clear_bits: the bits to clear in this range + * @cached_state: state that we're going to cache + * + * This will go through and set bits for the given range. If any states exist + * already in this range they are set with the given bit and cleared of the + * clear_bits. This is only meant to be used by things that are mergeable, ie. + * converting from say DELALLOC to DIRTY. This is not meant to be used with + * boundary bits like LOCK. + * + * All allocations are done with GFP_NOFS. 
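+ * + * A typical call is of the form (sketch only, any pair of mergeable bits + * works the same way): + * + * convert_extent_bit(tree, start, end, EXTENT_DIRTY, EXTENT_DELALLOC, &cached); + * + * which sets EXTENT_DIRTY and clears EXTENT_DELALLOC on every state in the + * range, splitting the boundary states as needed.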
+ */ +int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, u32 clear_bits, + struct extent_state **cached_state) +{ + struct extent_state *state; + struct extent_state *prealloc = NULL; + struct rb_node **p; + struct rb_node *parent; + int err = 0; + u64 last_start; + u64 last_end; + bool first_iteration = true; + + btrfs_debug_check_extent_io_range(tree, start, end); + trace_btrfs_convert_extent_bit(tree, start, end - start + 1, bits, + clear_bits); + +again: + if (!prealloc) { + /* + * Best effort, don't worry if extent state allocation fails + * here for the first iteration. We might have a cached state + * that matches exactly the target range, in which case no + * extent state allocations are needed. We'll only know this + * after locking the tree. + */ + prealloc = alloc_extent_state(GFP_NOFS); + if (!prealloc && !first_iteration) + return -ENOMEM; + } + + spin_lock(&tree->lock); + if (cached_state && *cached_state) { + state = *cached_state; + if (state->start <= start && state->end > start && + extent_state_in_tree(state)) + goto hit_next; + } + + /* + * This search will find all the extents that end after our range + * starts. + */ + state = tree_search_for_insert(tree, start, &p, &parent); + if (!state) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) { + err = -ENOMEM; + goto out; + } + prealloc->start = start; + prealloc->end = end; + insert_state_fast(tree, prealloc, p, parent, bits, NULL); + cache_state(prealloc, cached_state); + prealloc = NULL; + goto out; + } +hit_next: + last_start = state->start; + last_end = state->end; + + /* + * | ---- desired range ---- | + * | state | + * + * Just lock what we found and keep going. + */ + if (state->start == start && state->end <= end) { + set_state_bits(tree, state, bits, NULL); + cache_state(state, cached_state); + state = clear_state_bit(tree, state, clear_bits, 0, NULL); + if (last_end == (u64)-1) + goto out; + start = last_end + 1; + if (start < end && state && state->start == start && + !need_resched()) + goto hit_next; + goto search_again; + } + + /* + * | ---- desired range ---- | + * | state | + * or + * | ------------- state -------------- | + * + * We need to split the extent we found, and may flip bits on second + * half. + * + * If the extent we found extends past our range, we just split and + * search again. It'll get split again the next time though. + * + * If the extent we found is inside our range, we set the desired bit + * on it. + */ + if (state->start < start) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) { + err = -ENOMEM; + goto out; + } + err = split_state(tree, state, prealloc, start); + if (err) + extent_io_tree_panic(tree, err); + prealloc = NULL; + if (err) + goto out; + if (state->end <= end) { + set_state_bits(tree, state, bits, NULL); + cache_state(state, cached_state); + state = clear_state_bit(tree, state, clear_bits, 0, NULL); + if (last_end == (u64)-1) + goto out; + start = last_end + 1; + if (start < end && state && state->start == start && + !need_resched()) + goto hit_next; + } + goto search_again; + } + /* + * | ---- desired range ---- | + * | state | or | state | + * + * There's a hole, we need to insert something in it and ignore the + * extent we found. 
+ */ + if (state->start > start) { + u64 this_end; + if (end < last_start) + this_end = end; + else + this_end = last_start - 1; + + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) { + err = -ENOMEM; + goto out; + } + + /* + * Avoid to free 'prealloc' if it can be merged with the later + * extent. + */ + prealloc->start = start; + prealloc->end = this_end; + err = insert_state(tree, prealloc, bits, NULL); + if (err) + extent_io_tree_panic(tree, err); + cache_state(prealloc, cached_state); + prealloc = NULL; + start = this_end + 1; + goto search_again; + } + /* + * | ---- desired range ---- | + * | state | + * + * We need to split the extent, and set the bit on the first half. + */ + if (state->start <= end && state->end > end) { + prealloc = alloc_extent_state_atomic(prealloc); + if (!prealloc) { + err = -ENOMEM; + goto out; + } + + err = split_state(tree, state, prealloc, end + 1); + if (err) + extent_io_tree_panic(tree, err); + + set_state_bits(tree, prealloc, bits, NULL); + cache_state(prealloc, cached_state); + clear_state_bit(tree, prealloc, clear_bits, 0, NULL); + prealloc = NULL; + goto out; + } + +search_again: + if (start > end) + goto out; + spin_unlock(&tree->lock); + cond_resched(); + first_iteration = false; + goto again; + +out: + spin_unlock(&tree->lock); + if (prealloc) + free_extent_state(prealloc); + + return err; +} + +/* + * Find the first range that has @bits not set. This range could start before + * @start. + * + * @tree: the tree to search + * @start: offset at/after which the found extent should start + * @start_ret: records the beginning of the range + * @end_ret: records the end of the range (inclusive) + * @bits: the set of bits which must be unset + * + * Since unallocated range is also considered one which doesn't have the bits + * set it's possible that @end_ret contains -1, this happens in case the range + * spans (last_range_end, end of device]. In this case it's up to the caller to + * trim @end_ret to the appropriate size. + */ +void find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits) +{ + struct extent_state *state; + struct extent_state *prev = NULL, *next = NULL; + + spin_lock(&tree->lock); + + /* Find first extent with bits cleared */ + while (1) { + state = tree_search_prev_next(tree, start, &prev, &next); + if (!state && !next && !prev) { + /* + * Tree is completely empty, send full range and let + * caller deal with it + */ + *start_ret = 0; + *end_ret = -1; + goto out; + } else if (!state && !next) { + /* + * We are past the last allocated chunk, set start at + * the end of the last extent. 
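+ * + * For example, if the last entry in the tree ends at 1M - 1 and @start is 2M, + * the returned range is [1M, (u64)-1] and it is up to the caller to trim + * @end_ret.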
+ */ + *start_ret = prev->end + 1; + *end_ret = -1; + goto out; + } else if (!state) { + state = next; + } + + /* + * At this point 'state' either contains 'start' or start is + * before 'state' + */ + if (in_range(start, state->start, state->end - state->start + 1)) { + if (state->state & bits) { + /* + * |--range with bits sets--| + * | + * start + */ + start = state->end + 1; + } else { + /* + * 'start' falls within a range that doesn't + * have the bits set, so take its start as the + * beginning of the desired range + * + * |--range with bits cleared----| + * | + * start + */ + *start_ret = state->start; + break; + } + } else { + /* + * |---prev range---|---hole/unset---|---node range---| + * | + * start + * + * or + * + * |---hole/unset--||--first node--| + * 0 | + * start + */ + if (prev) + *start_ret = prev->end + 1; + else + *start_ret = 0; + break; + } + } + + /* + * Find the longest stretch from start until an entry which has the + * bits set + */ + while (state) { + if (state->end >= start && !(state->state & bits)) { + *end_ret = state->end; + } else { + *end_ret = state->start - 1; + break; + } + state = next_state(state); + } +out: + spin_unlock(&tree->lock); +} + +/* + * Count the number of bytes in the tree that have a given bit(s) set. This + * can be fairly slow, except for EXTENT_DIRTY which is cached. The total + * number found is returned. + */ +u64 count_range_bits(struct extent_io_tree *tree, + u64 *start, u64 search_end, u64 max_bytes, + u32 bits, int contig) +{ + struct extent_state *state; + u64 cur_start = *start; + u64 total_bytes = 0; + u64 last = 0; + int found = 0; + + if (WARN_ON(search_end <= cur_start)) + return 0; + + spin_lock(&tree->lock); + + /* + * This search will find all the extents that end after our range + * starts. + */ + state = tree_search(tree, cur_start); + while (state) { + if (state->start > search_end) + break; + if (contig && found && state->start > last + 1) + break; + if (state->end >= cur_start && (state->state & bits) == bits) { + total_bytes += min(search_end, state->end) + 1 - + max(cur_start, state->start); + if (total_bytes >= max_bytes) + break; + if (!found) { + *start = max(cur_start, state->start); + found = 1; + } + last = state->end; + } else if (contig && found) { + break; + } + state = next_state(state); + } + spin_unlock(&tree->lock); + return total_bytes; +} + +/* + * Searche a range in the state tree for a given mask. If 'filled' == 1, this + * returns 1 only if every extent in the tree has the bits set. Otherwise, 1 + * is returned if any bit in the range is found set. + */ +int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, int filled, struct extent_state *cached) +{ + struct extent_state *state = NULL; + int bitset = 0; + + spin_lock(&tree->lock); + if (cached && extent_state_in_tree(cached) && cached->start <= start && + cached->end > start) + state = cached; + else + state = tree_search(tree, start); + while (state && start <= end) { + if (filled && state->start > start) { + bitset = 0; + break; + } + + if (state->start > end) + break; + + if (state->state & bits) { + bitset = 1; + if (!filled) + break; + } else if (filled) { + bitset = 0; + break; + } + + if (state->end == (u64)-1) + break; + + start = state->end + 1; + if (start > end) + break; + state = next_state(state); + } + + /* We ran out of states and were still inside of our range. 
*/ + if (filled && !state) + bitset = 0; + spin_unlock(&tree->lock); + return bitset; +} + +/* Wrappers around set/clear extent bit */ +int set_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_changeset *changeset) +{ + /* + * We don't support EXTENT_LOCKED yet, as current changeset will + * record any bits changed, so for EXTENT_LOCKED case, it will + * either fail with -EEXIST or changeset will record the whole + * range. + */ + ASSERT(!(bits & EXTENT_LOCKED)); + + return __set_extent_bit(tree, start, end, bits, NULL, NULL, NULL, + changeset, GFP_NOFS); +} + +int clear_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_changeset *changeset) +{ + /* + * Don't support EXTENT_LOCKED case, same reason as + * set_record_extent_bits(). + */ + ASSERT(!(bits & EXTENT_LOCKED)); + + return __clear_extent_bit(tree, start, end, bits, NULL, GFP_NOFS, + changeset); +} + +int try_lock_extent(struct extent_io_tree *tree, u64 start, u64 end, + struct extent_state **cached) +{ + int err; + u64 failed_start; + + err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, &failed_start, + NULL, cached, NULL, GFP_NOFS); + if (err == -EEXIST) { + if (failed_start > start) + clear_extent_bit(tree, start, failed_start - 1, + EXTENT_LOCKED, cached); + return 0; + } + return 1; +} + +/* + * Either insert or lock state struct between start and end use mask to tell + * us if waiting is desired. + */ +int lock_extent(struct extent_io_tree *tree, u64 start, u64 end, + struct extent_state **cached_state) +{ + struct extent_state *failed_state = NULL; + int err; + u64 failed_start; + + err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, &failed_start, + &failed_state, cached_state, NULL, GFP_NOFS); + while (err == -EEXIST) { + if (failed_start != start) + clear_extent_bit(tree, start, failed_start - 1, + EXTENT_LOCKED, cached_state); + + wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED, + &failed_state); + err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, + &failed_start, &failed_state, + cached_state, NULL, GFP_NOFS); + } + return err; +} + +void __cold extent_state_free_cachep(void) +{ + btrfs_extent_state_leak_debug_check(); + kmem_cache_destroy(extent_state_cache); +} + +/* + * MODIFIED: + * - This gets called by extent_io_tree_init, so only init if the cache isn't + * NULL. 
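+ * - As an illustration of the above: the first extent_state_init_cachep() call + * creates the kmem cache, and any later call returns 0 immediately because + * extent_state_cache is already set.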
+ */ +int __init extent_state_init_cachep(void) +{ + if (extent_state_cache) + return 0; + + extent_state_cache = kmem_cache_create("btrfs_extent_state", + sizeof(struct extent_state), 0, + SLAB_MEM_SPREAD, NULL); + if (!extent_state_cache) + return -ENOMEM; + + return 0; +} diff --git a/kernel-shared/extent-io-tree.h b/kernel-shared/extent-io-tree.h new file mode 100644 index 00000000..cdee8c08 --- /dev/null +++ b/kernel-shared/extent-io-tree.h @@ -0,0 +1,239 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef BTRFS_EXTENT_IO_TREE_H +#define BTRFS_EXTENT_IO_TREE_H + +#include "misc.h" + +struct extent_changeset; +struct io_failure_record; + +/* Bits for the extent state */ +enum { + ENUM_BIT(EXTENT_DIRTY), + ENUM_BIT(EXTENT_UPTODATE), + ENUM_BIT(EXTENT_LOCKED), + ENUM_BIT(EXTENT_NEW), + ENUM_BIT(EXTENT_DELALLOC), + ENUM_BIT(EXTENT_DEFRAG), + ENUM_BIT(EXTENT_BOUNDARY), + ENUM_BIT(EXTENT_NODATASUM), + ENUM_BIT(EXTENT_CLEAR_META_RESV), + ENUM_BIT(EXTENT_NEED_WAIT), + ENUM_BIT(EXTENT_NORESERVE), + ENUM_BIT(EXTENT_QGROUP_RESERVED), + ENUM_BIT(EXTENT_CLEAR_DATA_RESV), + /* + * Must be cleared only during ordered extent completion or on error + * paths if we did not manage to submit bios and create the ordered + * extents for the range. Should not be cleared during page release + * and page invalidation (if there is an ordered extent in flight), + * that is left for the ordered extent completion. + */ + ENUM_BIT(EXTENT_DELALLOC_NEW), + /* + * When an ordered extent successfully completes for a region marked as + * a new delalloc range, use this flag when clearing a new delalloc + * range to indicate that the VFS' inode number of bytes should be + * incremented and the inode's new delalloc bytes decremented, in an + * atomic way to prevent races with stat(2). + */ + ENUM_BIT(EXTENT_ADD_INODE_BYTES), + /* + * Set during truncate when we're clearing an entire range and we just + * want the extent states to go away. + */ + ENUM_BIT(EXTENT_CLEAR_ALL_BITS), +}; + +#define EXTENT_DO_ACCOUNTING (EXTENT_CLEAR_META_RESV | \ + EXTENT_CLEAR_DATA_RESV) +#define EXTENT_CTLBITS (EXTENT_DO_ACCOUNTING | \ + EXTENT_ADD_INODE_BYTES | \ + EXTENT_CLEAR_ALL_BITS) + +/* + * Redefined bits above which are used only in the device allocation tree, + * shouldn't be using EXTENT_LOCKED / EXTENT_BOUNDARY / EXTENT_CLEAR_META_RESV + * / EXTENT_CLEAR_DATA_RESV because they have special meaning to the bit + * manipulation functions + */ +#define CHUNK_ALLOCATED EXTENT_DIRTY +#define CHUNK_TRIMMED EXTENT_DEFRAG +#define CHUNK_STATE_MASK (CHUNK_ALLOCATED | \ + CHUNK_TRIMMED) + +enum { + IO_TREE_FS_PINNED_EXTENTS, + IO_TREE_FS_EXCLUDED_EXTENTS, + IO_TREE_BTREE_INODE_IO, + IO_TREE_INODE_IO, + IO_TREE_RELOC_BLOCKS, + IO_TREE_TRANS_DIRTY_PAGES, + IO_TREE_ROOT_DIRTY_LOG_PAGES, + IO_TREE_INODE_FILE_EXTENT, + IO_TREE_LOG_CSUM_RANGE, + IO_TREE_SELFTEST, + IO_TREE_DEVICE_ALLOC_STATE, +}; + +struct extent_io_tree { + struct rb_root state; + struct btrfs_fs_info *fs_info; + /* Inode associated with this tree, or NULL. 
*/ + struct btrfs_inode *inode; + + /* Who owns this io tree, should be one of IO_TREE_* */ + u8 owner; + + spinlock_t lock; +}; + +struct extent_state { + u64 start; + u64 end; /* inclusive */ + struct rb_node rb_node; + + /* ADD NEW ELEMENTS AFTER THIS */ + wait_queue_head_t wq; + refcount_t refs; + u32 state; + +#ifdef CONFIG_BTRFS_DEBUG + struct list_head leak_list; +#endif +}; + +void extent_io_tree_init(struct btrfs_fs_info *fs_info, + struct extent_io_tree *tree, unsigned int owner); +void extent_io_tree_release(struct extent_io_tree *tree); + +int lock_extent(struct extent_io_tree *tree, u64 start, u64 end, + struct extent_state **cached); + +int try_lock_extent(struct extent_io_tree *tree, u64 start, u64 end, + struct extent_state **cached); + +int __init extent_state_init_cachep(void); +void __cold extent_state_free_cachep(void); + +u64 count_range_bits(struct extent_io_tree *tree, + u64 *start, u64 search_end, + u64 max_bytes, u32 bits, int contig); + +void free_extent_state(struct extent_state *state); +int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, int filled, struct extent_state *cached_state); +int clear_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_changeset *changeset); +int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_state **cached, gfp_t mask, + struct extent_changeset *changeset); + +static inline int clear_extent_bit(struct extent_io_tree *tree, u64 start, + u64 end, u32 bits, + struct extent_state **cached) +{ + return __clear_extent_bit(tree, start, end, bits, cached, + GFP_NOFS, NULL); +} + +static inline int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end, + struct extent_state **cached) +{ + return __clear_extent_bit(tree, start, end, EXTENT_LOCKED, cached, + GFP_NOFS, NULL); +} + +static inline int clear_extent_bits(struct extent_io_tree *tree, u64 start, + u64 end, u32 bits) +{ + return clear_extent_bit(tree, start, end, bits, NULL); +} + +int set_record_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_changeset *changeset); +int set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, struct extent_state **cached_state, gfp_t mask); + +static inline int set_extent_bits_nowait(struct extent_io_tree *tree, u64 start, + u64 end, u32 bits) +{ + return set_extent_bit(tree, start, end, bits, NULL, GFP_NOWAIT); +} + +static inline int set_extent_bits(struct extent_io_tree *tree, u64 start, + u64 end, u32 bits) +{ + return set_extent_bit(tree, start, end, bits, NULL, GFP_NOFS); +} + +static inline int clear_extent_uptodate(struct extent_io_tree *tree, u64 start, + u64 end, struct extent_state **cached_state) +{ + return __clear_extent_bit(tree, start, end, EXTENT_UPTODATE, + cached_state, GFP_NOFS, NULL); +} + +static inline int set_extent_dirty(struct extent_io_tree *tree, u64 start, + u64 end, gfp_t mask) +{ + return set_extent_bit(tree, start, end, EXTENT_DIRTY, NULL, mask); +} + +static inline int clear_extent_dirty(struct extent_io_tree *tree, u64 start, + u64 end, struct extent_state **cached) +{ + return clear_extent_bit(tree, start, end, + EXTENT_DIRTY | EXTENT_DELALLOC | + EXTENT_DO_ACCOUNTING, cached); +} + +int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, + u32 bits, u32 clear_bits, + struct extent_state **cached_state); + +static inline int set_extent_delalloc(struct extent_io_tree *tree, u64 start, + u64 end, u32 extra_bits, + struct 
extent_state **cached_state) +{ + return set_extent_bit(tree, start, end, + EXTENT_DELALLOC | extra_bits, + cached_state, GFP_NOFS); +} + +static inline int set_extent_defrag(struct extent_io_tree *tree, u64 start, + u64 end, struct extent_state **cached_state) +{ + return set_extent_bit(tree, start, end, + EXTENT_DELALLOC | EXTENT_DEFRAG, + cached_state, GFP_NOFS); +} + +static inline int set_extent_new(struct extent_io_tree *tree, u64 start, + u64 end) +{ + return set_extent_bit(tree, start, end, EXTENT_NEW, NULL, GFP_NOFS); +} + +static inline int set_extent_uptodate(struct extent_io_tree *tree, u64 start, + u64 end, struct extent_state **cached_state, gfp_t mask) +{ + return set_extent_bit(tree, start, end, EXTENT_UPTODATE, + cached_state, mask); +} + +int find_first_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits, + struct extent_state **cached_state); +void find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits); +int find_contiguous_extent_bit(struct extent_io_tree *tree, u64 start, + u64 *start_ret, u64 *end_ret, u32 bits); +bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start, + u64 *end, u64 max_bytes, + struct extent_state **cached_state); +void wait_extent_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 bits, + struct extent_state **cached_state); + +#endif /* BTRFS_EXTENT_IO_TREE_H */ diff --git a/kernel-shared/extent-tree.c b/kernel-shared/extent-tree.c index fda87ee1..8c1c3fe7 100644 --- a/kernel-shared/extent-tree.c +++ b/kernel-shared/extent-tree.c @@ -34,6 +34,7 @@ #include "kernel-shared/zoned.h" #include "common/utils.h" #include "file-item.h" +#include "extent-io-tree.h" #define PENDING_EXTENT_INSERT 0 #define PENDING_EXTENT_DELETE 1 @@ -74,7 +75,7 @@ static int remove_sb_from_cache(struct btrfs_root *root, BUG_ON(ret); while (nr--) { clear_extent_dirty(free_space_cache, logical[nr], - logical[nr] + stripe_len - 1); + logical[nr] + stripe_len - 1, NULL); } kfree(logical); } @@ -142,7 +143,7 @@ static int cache_block_group(struct btrfs_root *root, if (key.objectid > last) { hole_size = key.objectid - last; set_extent_dirty(free_space_cache, last, - last + hole_size - 1); + last + hole_size - 1, GFP_NOFS); } if (key.type == BTRFS_METADATA_ITEM_KEY) last = key.objectid + root->fs_info->nodesize; @@ -155,7 +156,8 @@ next: if (block_group->start + block_group->length > last) { hole_size = block_group->start + block_group->length - last; - set_extent_dirty(free_space_cache, last, last + hole_size - 1); + set_extent_dirty(free_space_cache, last, last + hole_size - 1, + GFP_NOFS); } remove_sb_from_cache(root, block_group); block_group->cached = 1; @@ -295,7 +297,8 @@ again: while(1) { ret = find_first_extent_bit(&root->fs_info->free_space_cache, - last, &start, &end, EXTENT_DIRTY); + last, &start, &end, EXTENT_DIRTY, + NULL); if (ret) { goto new_group; } @@ -1797,7 +1800,8 @@ static int update_block_group(struct btrfs_trans_handle *trans, u64 bytenr, cache->space_info->bytes_used -= num_bytes; if (mark_free) { set_extent_dirty(&info->free_space_cache, - bytenr, bytenr + num_bytes - 1); + bytenr, bytenr + num_bytes - 1, + GFP_NOFS); } } cache->used = old_val; @@ -1815,10 +1819,10 @@ static int update_pinned_extents(struct btrfs_fs_info *fs_info, if (pin) { set_extent_dirty(&fs_info->pinned_extents, - bytenr, bytenr + num - 1); + bytenr, bytenr + num - 1, GFP_NOFS); } else { clear_extent_dirty(&fs_info->pinned_extents, - bytenr, bytenr + num - 1); + 
bytenr, bytenr + num - 1, NULL); } while (num > 0) { cache = btrfs_lookup_block_group(fs_info, bytenr); @@ -1855,13 +1859,13 @@ void btrfs_finish_extent_commit(struct btrfs_trans_handle *trans) while(1) { ret = find_first_extent_bit(pinned_extents, 0, &start, &end, - EXTENT_DIRTY); + EXTENT_DIRTY, NULL); if (ret) break; update_pinned_extents(trans->fs_info, start, end + 1 - start, 0); - clear_extent_dirty(pinned_extents, start, end); - set_extent_dirty(free_space_cache, start, end); + clear_extent_dirty(pinned_extents, start, end, NULL); + set_extent_dirty(free_space_cache, start, end, GFP_NOFS); } } @@ -2248,20 +2252,23 @@ check_failed: } if (test_range_bit(&info->extent_ins, ins->objectid, - ins->objectid + num_bytes -1, EXTENT_LOCKED, 0)) { + ins->objectid + num_bytes -1, EXTENT_LOCKED, 0, + NULL)) { search_start = ins->objectid + num_bytes; goto new_group; } if (test_range_bit(&info->pinned_extents, ins->objectid, - ins->objectid + num_bytes -1, EXTENT_DIRTY, 0)) { + ins->objectid + num_bytes -1, EXTENT_DIRTY, 0, + NULL)) { search_start = ins->objectid + num_bytes; goto new_group; } if (info->excluded_extents && test_range_bit(info->excluded_extents, ins->objectid, - ins->objectid + num_bytes -1, EXTENT_DIRTY, 0)) { + ins->objectid + num_bytes -1, EXTENT_DIRTY, 0, + NULL)) { search_start = ins->objectid + num_bytes; goto new_group; } @@ -2371,7 +2378,8 @@ int btrfs_reserve_extent(struct btrfs_trans_handle *trans, if (ret < 0) return ret; clear_extent_dirty(&info->free_space_cache, - ins->objectid, ins->objectid + ins->offset - 1); + ins->objectid, ins->objectid + ins->offset - 1, + NULL); return ret; } @@ -2412,7 +2420,7 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans, if (ref->root == BTRFS_EXTENT_TREE_OBJECTID) { ret = find_first_extent_bit(&trans->fs_info->extent_ins, node->bytenr, &start, &end, - EXTENT_LOCKED); + EXTENT_LOCKED, NULL); ASSERT(!ret); ASSERT(start == node->bytenr); ASSERT(end == node->bytenr + node->num_bytes - 1); @@ -2594,10 +2602,10 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info) while(1) { ret = find_first_extent_bit(&info->free_space_cache, 0, - &start, &end, EXTENT_DIRTY); + &start, &end, EXTENT_DIRTY, NULL); if (ret) break; - clear_extent_dirty(&info->free_space_cache, start, end); + clear_extent_dirty(&info->free_space_cache, start, end, NULL); } while (!list_empty(&info->space_info)) { @@ -3769,7 +3777,8 @@ u64 add_new_free_space(struct btrfs_block_group *block_group, while (start < end) { ret = find_first_extent_bit(&info->pinned_extents, start, &extent_start, &extent_end, - EXTENT_DIRTY | EXTENT_UPTODATE); + EXTENT_DIRTY | EXTENT_UPTODATE, + NULL); if (ret) break; diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c index 6f97312b..210bae15 100644 --- a/kernel-shared/extent_io.c +++ b/kernel-shared/extent_io.c @@ -69,174 +69,6 @@ void extent_buffer_free_cache(struct btrfs_fs_info *fs_info) fs_info->cache_size = 0; } -void extent_io_tree_init(struct extent_io_tree *tree) -{ - cache_tree_init(&tree->state); - cache_tree_init(&tree->cache); - INIT_LIST_HEAD(&tree->lru); -} - -static struct extent_state *alloc_extent_state(void) -{ - struct extent_state *state; - - state = malloc(sizeof(*state)); - if (!state) - return NULL; - state->cache_node.objectid = 0; - state->refs = 1; - state->state = 0; - state->xprivate = 0; - return state; -} - -static void btrfs_free_extent_state(struct extent_state *state) -{ - state->refs--; - BUG_ON(state->refs < 0); - if (state->refs == 0) - free(state); -} - -static void 
free_extent_state_func(struct cache_extent *cache) -{ - struct extent_state *es; - - es = container_of(cache, struct extent_state, cache_node); - btrfs_free_extent_state(es); -} - -void extent_io_tree_cleanup(struct extent_io_tree *tree) -{ - struct extent_buffer *eb; - - while(!list_empty(&tree->lru)) { - eb = list_entry(tree->lru.next, struct extent_buffer, lru); - if (eb->refs) { - /* - * Reset extent buffer refs to 1, so the - * free_extent_buffer_nocache() can free it for sure. - */ - eb->refs = 1; - fprintf(stderr, - "extent buffer leak: start %llu len %u\n", - (unsigned long long)eb->start, eb->len); - free_extent_buffer_nocache(eb); - } else { - free_extent_buffer_final(eb); - } - } - - cache_tree_free_extents(&tree->state, free_extent_state_func); -} - -static inline void update_extent_state(struct extent_state *state) -{ - state->cache_node.start = state->start; - state->cache_node.size = state->end + 1 - state->start; -} - -/* - * Utility function to look for merge candidates inside a given range. - * Any extents with matching state are merged together into a single - * extent in the tree. Extents with EXTENT_IO in their state field are - * not merged - */ -static int merge_state(struct extent_io_tree *tree, - struct extent_state *state) -{ - struct extent_state *other; - struct cache_extent *other_node; - - if (state->state & EXTENT_IOBITS) - return 0; - - other_node = prev_cache_extent(&state->cache_node); - if (other_node) { - other = container_of(other_node, struct extent_state, - cache_node); - if (other->end == state->start - 1 && - other->state == state->state) { - state->start = other->start; - update_extent_state(state); - remove_cache_extent(&tree->state, &other->cache_node); - btrfs_free_extent_state(other); - } - } - other_node = next_cache_extent(&state->cache_node); - if (other_node) { - other = container_of(other_node, struct extent_state, - cache_node); - if (other->start == state->end + 1 && - other->state == state->state) { - other->start = state->start; - update_extent_state(other); - remove_cache_extent(&tree->state, &state->cache_node); - btrfs_free_extent_state(state); - } - } - return 0; -} - -/* - * insert an extent_state struct into the tree. 'bits' are set on the - * struct before it is inserted. - */ -static int insert_state(struct extent_io_tree *tree, - struct extent_state *state, u64 start, u64 end, - int bits) -{ - int ret; - - BUG_ON(end < start); - state->state |= bits; - state->start = start; - state->end = end; - update_extent_state(state); - ret = insert_cache_extent(&tree->state, &state->cache_node); - BUG_ON(ret); - merge_state(tree, state); - return 0; -} - -/* - * split a given extent state struct in two, inserting the preallocated - * struct 'prealloc' as the newly created second half. 'split' indicates an - * offset inside 'orig' where it should be split. - */ -static int split_state(struct extent_io_tree *tree, struct extent_state *orig, - struct extent_state *prealloc, u64 split) -{ - int ret; - prealloc->start = orig->start; - prealloc->end = split - 1; - prealloc->state = orig->state; - update_extent_state(prealloc); - orig->start = split; - update_extent_state(orig); - ret = insert_cache_extent(&tree->state, &prealloc->cache_node); - BUG_ON(ret); - return 0; -} - -/* - * clear some bits on a range in the tree. 
- */ -static int clear_state_bit(struct extent_io_tree *tree, - struct extent_state *state, int bits) -{ - int ret = state->state & bits; - - state->state &= ~bits; - if (state->state == 0) { - remove_cache_extent(&tree->state, &state->cache_node); - btrfs_free_extent_state(state); - } else { - merge_state(tree, state); - } - return ret; -} - /* * extent_buffer_bitmap_set - set an area of a bitmap * @eb: the extent buffer @@ -293,305 +125,6 @@ void extent_buffer_bitmap_clear(struct extent_buffer *eb, unsigned long start, } } -/* - * clear some bits on a range in the tree. - */ -int clear_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, int bits) -{ - struct extent_state *state; - struct extent_state *prealloc = NULL; - struct cache_extent *node; - u64 last_end; - int err; - int set = 0; - -again: - if (!prealloc) { - prealloc = alloc_extent_state(); - if (!prealloc) - return -ENOMEM; - } - - /* - * this search will find the extents that end after - * our range starts - */ - node = search_cache_extent(&tree->state, start); - if (!node) - goto out; - state = container_of(node, struct extent_state, cache_node); - if (state->start > end) - goto out; - last_end = state->end; - - /* - * | ---- desired range ---- | - * | state | or - * | ------------- state -------------- | - * - * We need to split the extent we found, and may flip - * bits on second half. - * - * If the extent we found extends past our range, we - * just split and search again. It'll get split again - * the next time though. - * - * If the extent we found is inside our range, we clear - * the desired bit on it. - */ - if (state->start < start) { - err = split_state(tree, state, prealloc, start); - BUG_ON(err == -EEXIST); - prealloc = NULL; - if (err) - goto out; - if (state->end <= end) { - set |= clear_state_bit(tree, state, bits); - if (last_end == (u64)-1) - goto out; - start = last_end + 1; - } else { - start = state->start; - } - goto search_again; - } - /* - * | ---- desired range ---- | - * | state | - * We need to split the extent, and clear the bit - * on the first half - */ - if (state->start <= end && state->end > end) { - err = split_state(tree, state, prealloc, end + 1); - BUG_ON(err == -EEXIST); - - set |= clear_state_bit(tree, prealloc, bits); - prealloc = NULL; - goto out; - } - - start = state->end + 1; - set |= clear_state_bit(tree, state, bits); - if (last_end == (u64)-1) - goto out; - start = last_end + 1; - goto search_again; -out: - if (prealloc) - btrfs_free_extent_state(prealloc); - return set; - -search_again: - if (start > end) - goto out; - goto again; -} - -/* - * set some bits on a range in the tree. 
- */ -int set_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, int bits) -{ - struct extent_state *state; - struct extent_state *prealloc = NULL; - struct cache_extent *node; - int err = 0; - u64 last_start; - u64 last_end; -again: - if (!prealloc) { - prealloc = alloc_extent_state(); - if (!prealloc) - return -ENOMEM; - } - - /* - * this search will find the extents that end after - * our range starts - */ - node = search_cache_extent(&tree->state, start); - if (!node) { - err = insert_state(tree, prealloc, start, end, bits); - BUG_ON(err == -EEXIST); - prealloc = NULL; - goto out; - } - - state = container_of(node, struct extent_state, cache_node); - last_start = state->start; - last_end = state->end; - - /* - * | ---- desired range ---- | - * | state | - * - * Just lock what we found and keep going - */ - if (state->start == start && state->end <= end) { - state->state |= bits; - merge_state(tree, state); - if (last_end == (u64)-1) - goto out; - start = last_end + 1; - goto search_again; - } - /* - * | ---- desired range ---- | - * | state | - * or - * | ------------- state -------------- | - * - * We need to split the extent we found, and may flip bits on - * second half. - * - * If the extent we found extends past our - * range, we just split and search again. It'll get split - * again the next time though. - * - * If the extent we found is inside our range, we set the - * desired bit on it. - */ - if (state->start < start) { - err = split_state(tree, state, prealloc, start); - BUG_ON(err == -EEXIST); - prealloc = NULL; - if (err) - goto out; - if (state->end <= end) { - state->state |= bits; - start = state->end + 1; - merge_state(tree, state); - if (last_end == (u64)-1) - goto out; - start = last_end + 1; - } else { - start = state->start; - } - goto search_again; - } - /* - * | ---- desired range ---- | - * | state | or | state | - * - * There's a hole, we need to insert something in it and - * ignore the extent we found. - */ - if (state->start > start) { - u64 this_end; - if (end < last_start) - this_end = end; - else - this_end = last_start -1; - err = insert_state(tree, prealloc, start, this_end, - bits); - BUG_ON(err == -EEXIST); - prealloc = NULL; - if (err) - goto out; - start = this_end + 1; - goto search_again; - } - /* - * | ---- desired range ---- | - * | ---------- state ---------- | - * We need to split the extent, and set the bit - * on the first half - */ - err = split_state(tree, state, prealloc, end + 1); - BUG_ON(err == -EEXIST); - - state->state |= bits; - merge_state(tree, prealloc); - prealloc = NULL; -out: - if (prealloc) - btrfs_free_extent_state(prealloc); - return err; -search_again: - if (start > end) - goto out; - goto again; -} - -int set_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end) -{ - return set_extent_bits(tree, start, end, EXTENT_DIRTY); -} - -int clear_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end) -{ - return clear_extent_bits(tree, start, end, EXTENT_DIRTY); -} - -int find_first_extent_bit(struct extent_io_tree *tree, u64 start, - u64 *start_ret, u64 *end_ret, int bits) -{ - struct cache_extent *node; - struct extent_state *state; - int ret = 1; - - /* - * this search will find all the extents that end after - * our range starts. 
- */ - node = search_cache_extent(&tree->state, start); - if (!node) - goto out; - - while(1) { - state = container_of(node, struct extent_state, cache_node); - if (state->end >= start && (state->state & bits)) { - *start_ret = state->start; - *end_ret = state->end; - ret = 0; - break; - } - node = next_cache_extent(node); - if (!node) - break; - } -out: - return ret; -} - -int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, - int bits, int filled) -{ - struct extent_state *state = NULL; - struct cache_extent *node; - int bitset = 0; - - node = search_cache_extent(&tree->state, start); - while (node && start <= end) { - state = container_of(node, struct extent_state, cache_node); - - if (filled && state->start > start) { - bitset = 0; - break; - } - if (state->start > end) - break; - if (state->state & bits) { - bitset = 1; - if (!filled) - break; - } else if (filled) { - bitset = 0; - break; - } - start = state->end + 1; - if (start > end) - break; - node = next_cache_extent(node); - if (!node) { - if (filled) - bitset = 0; - break; - } - } - return bitset; -} - static struct extent_buffer *__alloc_extent_buffer(struct btrfs_fs_info *info, u64 bytenr, u32 blocksize) { @@ -1030,7 +563,8 @@ int set_extent_buffer_dirty(struct extent_buffer *eb) struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; if (!(eb->flags & EXTENT_BUFFER_DIRTY)) { eb->flags |= EXTENT_BUFFER_DIRTY; - set_extent_dirty(tree, eb->start, eb->start + eb->len - 1); + set_extent_dirty(tree, eb->start, eb->start + eb->len - 1, + GFP_NOFS); extent_buffer_get(eb); } return 0; @@ -1041,7 +575,8 @@ int clear_extent_buffer_dirty(struct extent_buffer *eb) struct extent_io_tree *tree = &eb->fs_info->dirty_buffers; if (eb->flags & EXTENT_BUFFER_DIRTY) { eb->flags &= ~EXTENT_BUFFER_DIRTY; - clear_extent_dirty(tree, eb->start, eb->start + eb->len - 1); + clear_extent_dirty(tree, eb->start, eb->start + eb->len - 1, + NULL); free_extent_buffer(eb); } return 0; diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h index d824d467..8ba56eed 100644 --- a/kernel-shared/extent_io.h +++ b/kernel-shared/extent_io.h @@ -23,17 +23,7 @@ #include "common/extent-cache.h" #include "kernel-lib/list.h" -#define EXTENT_DIRTY (1U << 0) -#define EXTENT_WRITEBACK (1U << 1) -#define EXTENT_UPTODATE (1U << 2) -#define EXTENT_LOCKED (1U << 3) -#define EXTENT_NEW (1U << 4) -#define EXTENT_DELALLOC (1U << 5) -#define EXTENT_DEFRAG (1U << 6) -#define EXTENT_DEFRAG_DONE (1U << 7) -#define EXTENT_BUFFER_FILLED (1U << 8) -#define EXTENT_CSUM (1U << 9) -#define EXTENT_IOBITS (EXTENT_LOCKED | EXTENT_WRITEBACK) +struct extent_io_tree; #define EXTENT_BUFFER_UPTODATE (1U << 0) #define EXTENT_BUFFER_DIRTY (1U << 1) @@ -65,23 +55,6 @@ static inline int le_test_bit(int nr, const u8 *addr) struct btrfs_fs_info; -struct extent_io_tree { - struct cache_tree state; - struct cache_tree cache; - struct list_head lru; - u64 cache_size; - u64 max_cache_size; -}; - -struct extent_state { - struct cache_extent cache_node; - u64 start; - u64 end; - int refs; - unsigned long state; - u64 xprivate; -}; - struct extent_buffer { struct cache_extent cache_node; u64 start; @@ -99,16 +72,6 @@ static inline void extent_buffer_get(struct extent_buffer *eb) eb->refs++; } -void extent_io_tree_init(struct extent_io_tree *tree); -void extent_io_tree_cleanup(struct extent_io_tree *tree); -int set_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, int bits); -int clear_extent_bits(struct extent_io_tree *tree, u64 start, u64 end, int bits); -int 
-			  u64 *start_ret, u64 *end_ret, int bits);
-int test_range_bit(struct extent_io_tree *tree, u64 start, u64 end,
-		   int bits, int filled);
-int set_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end);
-int clear_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end);
 static inline int set_extent_buffer_uptodate(struct extent_buffer *eb)
 {
 	eb->flags |= EXTENT_BUFFER_UPTODATE;
diff --git a/kernel-shared/misc.h b/kernel-shared/misc.h
new file mode 100644
index 00000000..99c4951b
--- /dev/null
+++ b/kernel-shared/misc.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BTRFS_MISC_H
+#define BTRFS_MISC_H
+
+#include "kerncompat.h"
+
+#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
+
+/*
+ * Enumerate bits using enum autoincrement. Define the @name as the n-th bit.
+ */
+#define ENUM_BIT(name)				\
+	__ ## name ## _BIT,			\
+	name = (1U << __ ## name ## _BIT),	\
+	__ ## name ## _SEQ = __ ## name ## _BIT
+
+static inline void cond_wake_up(struct wait_queue_head *wq)
+{
+	/*
+	 * This implies a full smp_mb barrier, see comments for
+	 * waitqueue_active why.
+	 */
+	if (wq_has_sleeper(wq))
+		wake_up(wq);
+}
+
+static inline void cond_wake_up_nomb(struct wait_queue_head *wq)
+{
+	/*
+	 * Special case for conditional wakeup where the barrier required for
+	 * waitqueue_active is implied by some of the preceding code. Eg. one
+	 * of such atomic operations (atomic_dec_and_return, ...), or an
+	 * unlock/lock sequence, etc.
+	 */
+	if (waitqueue_active(wq))
+		wake_up(wq);
+}
+
+static inline u64 mult_perc(u64 num, u32 percent)
+{
+	return div_u64(num * percent, 100);
+}
+/* Copy of is_power_of_two that is 64bit safe */
+static inline bool is_power_of_two_u64(u64 n)
+{
+	return n != 0 && (n & (n - 1)) == 0;
+}
+
+static inline bool has_single_bit_set(u64 n)
+{
+	return is_power_of_two_u64(n);
+}
+
+/*
+ * Simple bytenr based rb_tree related structures
+ *
+ * Any structure that wants to use bytenr as a single search index should
+ * have its structure start with these members.
+ */
+struct rb_simple_node {
+	struct rb_node rb_node;
+	u64 bytenr;
+};
+
+static inline struct rb_node *rb_simple_search(struct rb_root *root, u64 bytenr)
+{
+	struct rb_node *node = root->rb_node;
+	struct rb_simple_node *entry;
+
+	while (node) {
+		entry = rb_entry(node, struct rb_simple_node, rb_node);
+
+		if (bytenr < entry->bytenr)
+			node = node->rb_left;
+		else if (bytenr > entry->bytenr)
+			node = node->rb_right;
+		else
+			return node;
+	}
+	return NULL;
+}
+
+/*
+ * Search @root for an entry that starts at or comes after @bytenr.
+ *
+ * @root:	the root to search.
+ * @bytenr:	bytenr to search from.
+ *
+ * Return the rb_node that starts at or after @bytenr. If there is no entry
+ * at or after @bytenr, return NULL.
+ */
+static inline struct rb_node *rb_simple_search_first(struct rb_root *root,
+						     u64 bytenr)
+{
+	struct rb_node *node = root->rb_node, *ret = NULL;
+	struct rb_simple_node *entry, *ret_entry = NULL;
+
+	while (node) {
+		entry = rb_entry(node, struct rb_simple_node, rb_node);
+
+		if (bytenr < entry->bytenr) {
+			if (!ret || entry->bytenr < ret_entry->bytenr) {
+				ret = node;
+				ret_entry = entry;
+			}
+
+			node = node->rb_left;
+		} else if (bytenr > entry->bytenr) {
+			node = node->rb_right;
+		} else {
+			return node;
+		}
+	}
+
+	return ret;
+}
+
+static inline struct rb_node *rb_simple_insert(struct rb_root *root, u64 bytenr,
+					       struct rb_node *node)
+{
+	struct rb_node **p = &root->rb_node;
+	struct rb_node *parent = NULL;
+	struct rb_simple_node *entry;
+
+	while (*p) {
+		parent = *p;
+		entry = rb_entry(parent, struct rb_simple_node, rb_node);
+
+		if (bytenr < entry->bytenr)
+			p = &(*p)->rb_left;
+		else if (bytenr > entry->bytenr)
+			p = &(*p)->rb_right;
+		else
+			return parent;
+	}
+
+	rb_link_node(node, parent, p);
+	rb_insert_color(node, root);
+	return NULL;
+}
+
+#endif
diff --git a/kernel-shared/transaction.c b/kernel-shared/transaction.c
index a3b67d8c..a1b46b6c 100644
--- a/kernel-shared/transaction.c
+++ b/kernel-shared/transaction.c
@@ -142,7 +142,7 @@ int __commit_transaction(struct btrfs_trans_handle *trans,
 	while(1) {
 again:
 		ret = find_first_extent_bit(tree, 0, &start, &end,
-					    EXTENT_DIRTY);
+					    EXTENT_DIRTY, NULL);
 		if (ret)
 			break;
 
@@ -174,7 +174,8 @@ cleanup:
 	while (1) {
 		int find_ret;
 
-		find_ret = find_first_extent_bit(tree, 0, &start, &end, EXTENT_DIRTY);
+		find_ret = find_first_extent_bit(tree, 0, &start, &end,
+						 EXTENT_DIRTY, NULL);
 		if (find_ret)
 			break;