From patchwork Thu May 2 22:33:36 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13652179
From: Kees Cook <keescook@chromium.org>
To: Christian Brauner
Subject: [PATCH 1/5] fs: Do not allow get_file() to resurrect 0 f_count
Date: Thu, 2 May 2024 15:33:36 -0700
Message-Id: <20240502223341.1835070-1-keescook@chromium.org>
In-Reply-To: <20240502222252.work.690-kees@kernel.org>
References: <20240502222252.work.690-kees@kernel.org>

If f_count has already reached 0, calling get_file() should be a
failure. Adjust it to use atomic_long_inc_not_zero() and return NULL on
failure. In the future, get_file() can be annotated with __must_check,
though that is not currently possible until existing callers are
updated to check the result.
Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Christian Brauner
Cc: Alexander Viro
Cc: Jan Kara
Cc: linux-fsdevel@vger.kernel.org
---
 include/linux/fs.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 00fc429b0af0..210bbbfe9b83 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1038,7 +1038,8 @@ struct file_handle {
 
 static inline struct file *get_file(struct file *f)
 {
-	atomic_long_inc(&f->f_count);
+	if (unlikely(!atomic_long_inc_not_zero(&f->f_count)))
+		return NULL;
 	return f;
 }
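
For illustration, a minimal caller-side sketch of the new contract (the
helper name try_get_file_ref is hypothetical, not part of this series):
a 0 f_count is now reported as failure instead of being silently
resurrected, so callers must check for NULL.

#include <linux/fs.h>

/* Hypothetical helper: take an extra reference on a file that may be
 * concurrently released. Assumes the new get_file() semantics above. */
static struct file *try_get_file_ref(struct file *f)
{
	struct file *ref = get_file(f);

	if (!ref)	/* f_count was already 0; f must not be used */
		return NULL;
	return ref;	/* caller now owns one additional reference */
}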
From patchwork Thu May 2 22:33:37 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13652177
From: Kees Cook <keescook@chromium.org>
To: Christian Brauner
Subject: [PATCH 2/5] drm/vmwgfx: Do not directly manipulate file->f_count
Date: Thu, 2 May 2024 15:33:37 -0700
Message-Id: <20240502223341.1835070-2-keescook@chromium.org>
In-Reply-To: <20240502222252.work.690-kees@kernel.org>
References: <20240502222252.work.690-kees@kernel.org>

The correct helper for taking an f_count reference is get_file(). Now
that it checks for 0 counts, use it and check its result.
Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Zack Rusin
Cc: Broadcom internal kernel review list
Cc: Maarten Lankhorst
Cc: Maxime Ripard
Cc: Thomas Zimmermann
Cc: David Airlie
Cc: Daniel Vetter
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/vmwgfx/ttm_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/vmwgfx/ttm_object.c b/drivers/gpu/drm/vmwgfx/ttm_object.c
index 6806c05e57f6..68d8ee3020b1 100644
--- a/drivers/gpu/drm/vmwgfx/ttm_object.c
+++ b/drivers/gpu/drm/vmwgfx/ttm_object.c
@@ -475,7 +475,7 @@ void ttm_object_device_release(struct ttm_object_device **p_tdev)
  */
 static bool __must_check get_dma_buf_unless_doomed(struct dma_buf *dmabuf)
 {
-	return atomic_long_inc_not_zero(&dmabuf->file->f_count) != 0L;
+	return get_file(dmabuf->file) != NULL;
 }
 
 /**
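
The helper keeps its boolean contract: it still returns false when the
dma-buf's backing file is already "doomed" (f_count has hit 0) and true
after successfully taking a reference. A sketch of a call site under
the new semantics (the error value is illustrative, not from this
series):

	if (!get_dma_buf_unless_doomed(dmabuf))
		return -EINVAL;	/* illustrative: backing file already released */
	/* ... safe to use dmabuf->file until the matching fput() ... */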
From patchwork Thu May 2 22:33:38 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13652178
From: Kees Cook <keescook@chromium.org>
To: Christian Brauner
Subject: [PATCH 3/5] drm/i915: Do not directly manipulate file->f_count
Date: Thu, 2 May 2024 15:33:38 -0700
Message-Id: <20240502223341.1835070-3-keescook@chromium.org>
In-Reply-To: <20240502222252.work.690-kees@kernel.org>
References: <20240502222252.work.690-kees@kernel.org>

The correct helper for taking an f_count reference is get_file(). Use
it and check its result.
Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Cc: David Airlie
Cc: Daniel Vetter
Cc: Andi Shyti
Cc: Lucas De Marchi
Cc: Matt Atwood
Cc: Matthew Auld
Cc: Nirmoy Das
Cc: Jonathan Cavitt
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index bccc3a1200bc..dc25e6dc884b 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -38,8 +38,9 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	void *ptr;
 
 	if (i915_gem_object_is_shmem(obj)) {
-		file = obj->base.filp;
-		atomic_long_inc(&file->f_count);
+		file = get_file(obj->base.filp);
+		if (!file)
+			return ERR_PTR(-ESRCH);
 		return file;
 	}
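
Since shmem_create_from_object() reports errors as ERR_PTR values, the
failed get_file() maps to ERR_PTR(-ESRCH) rather than NULL. A
hypothetical caller handles it like any other ERR_PTR return:

	#include <linux/err.h>

	struct file *file = shmem_create_from_object(obj);

	if (IS_ERR(file))
		return PTR_ERR(file);	/* -ESRCH if the backing file was already gone */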
From patchwork Thu May 2 22:33:39 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13652180
From: Kees Cook <keescook@chromium.org>
To: Christian Brauner
Subject: [PATCH 4/5] refcount: Introduce refcount_long_t and APIs
Date: Thu, 2 May 2024 15:33:39 -0700
Message-Id: <20240502223341.1835070-4-keescook@chromium.org>
In-Reply-To: <20240502222252.work.690-kees@kernel.org>
References: <20240502222252.work.690-kees@kernel.org>

Duplicate the refcount_t types and APIs to gain refcount_long_t. This
is needed for larger counters that, while currently very unlikely to
overflow, still want underflow detection and mitigation. Generate
refcount-long.h via direct string replacements, since doing this with
macros (in the style of compat_binfmt_elf) doesn't appear to work well.
Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
Cc: Kent Overstreet
Cc: Masahiro Yamada
Cc: Nathan Chancellor
Cc: Nicolas Schier
Cc: linux-kbuild@vger.kernel.org
---
 MAINTAINERS                    |   2 +-
 Makefile                       |  11 +-
 include/linux/refcount-impl.h  | 344 +++++++++++++++++++++++++++++++++
 include/linux/refcount.h       | 341 +-------------------------------
 include/linux/refcount_types.h |  12 ++
 lib/refcount.c                 |  17 +-
 6 files changed, 385 insertions(+), 342 deletions(-)
 create mode 100644 include/linux/refcount-impl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 7c121493f43d..2e6c8eaab194 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3360,7 +3360,7 @@ S:	Maintained
 F:	Documentation/atomic_*.txt
 F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
-F:	include/linux/refcount.h
+F:	include/linux/refcount*.h
 F:	scripts/atomic/
 
 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER

diff --git a/Makefile b/Makefile
index 4bef6323c47d..a4bdcd34f323 100644
--- a/Makefile
+++ b/Makefile
@@ -1190,7 +1190,9 @@ PHONY += prepare archprepare
 
 archprepare: outputmakefile archheaders archscripts scripts include/config/kernel.release \
 	asm-generic $(version_h) include/generated/utsrelease.h \
-	include/generated/compile.h include/generated/autoconf.h remove-stale-files
+	include/generated/compile.h include/generated/autoconf.h \
+	include/generated/refcount-long.h \
+	remove-stale-files
 
 prepare0: archprepare
 	$(Q)$(MAKE) $(build)=scripts/mod
@@ -1262,6 +1264,13 @@ filechk_compile.h = $(srctree)/scripts/mkcompile_h \
 include/generated/compile.h: FORCE
 	$(call filechk,compile.h)
 
+include/generated/refcount-long.h: $(srctree)/include/linux/refcount-impl.h
+	$(Q)$(PERL) -pe 's/\b(atomic|(__)?refcount)_/\1_long_/g; \
+		s/ATOMIC_/ATOMIC_LONG_/g; \
+		s/(REFCOUNT)_(IMPL|INIT|MAX|SAT)/\1_LONG_\2/g; \
+		s/\b(U?)INT_/\1LONG_/g; \
+		s/\bint\b/long/g;' $< >$@
+
 PHONY += headerdep
 headerdep:
 	$(Q)find $(srctree)/include/ -name '*.h' | xargs --max-args 1 \

diff --git a/include/linux/refcount-impl.h b/include/linux/refcount-impl.h
new file mode 100644
index 000000000000..f5c73a0f46a4
--- /dev/null
+++ b/include/linux/refcount-impl.h
@@ -0,0 +1,344 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Variant of atomic_t specialized for reference counts.
+ *
+ * The interface matches the atomic_t interface (to aid in porting) but only
+ * provides the few functions one should use for reference counting.
+ *
+ * Saturation semantics
+ * ====================
+ *
+ * refcount_t differs from atomic_t in that the counter saturates at
+ * REFCOUNT_SATURATED and will not move once there. This avoids wrapping the
+ * counter and causing 'spurious' use-after-free issues. In order to avoid the
+ * cost associated with introducing cmpxchg() loops into all of the saturating
+ * operations, we temporarily allow the counter to take on an unchecked value
+ * and then explicitly set it to REFCOUNT_SATURATED on detecting that underflow
+ * or overflow has occurred. Although this is racy when multiple threads
+ * access the refcount concurrently, by placing REFCOUNT_SATURATED roughly
+ * equidistant from 0 and INT_MAX we minimise the scope for error:
+ *
+ *                               INT_MAX     REFCOUNT_SATURATED   UINT_MAX
+ *   0                         (0x7fff_ffff)    (0xc000_0000)    (0xffff_ffff)
+ *   +--------------------------------+----------------+----------------+
+ *                                     <---------- bad value! ---------->
+ *
+ * (in a signed view of the world, the "bad value" range corresponds to
+ * a negative counter value).
+ *
+ * As an example, consider a refcount_inc() operation that causes the counter
+ * to overflow:
+ *
+ * 	int old = atomic_fetch_add_relaxed(r);
+ *	// old is INT_MAX, refcount now INT_MIN (0x8000_0000)
+ *	if (old < 0)
+ *		atomic_set(r, REFCOUNT_SATURATED);
+ *
+ * If another thread also performs a refcount_inc() operation between the two
+ * atomic operations, then the count will continue to edge closer to 0. If it
+ * reaches a value of 1 before /any/ of the threads reset it to the saturated
+ * value, then a concurrent refcount_dec_and_test() may erroneously free the
+ * underlying object.
+ * Linux limits the maximum number of tasks to PID_MAX_LIMIT, which is currently
+ * 0x400000 (and can't easily be raised in the future beyond FUTEX_TID_MASK).
+ * With the current PID limit, if no batched refcounting operations are used and
+ * the attacker can't repeatedly trigger kernel oopses in the middle of refcount
+ * operations, this makes it impossible for a saturated refcount to leave the
+ * saturation range, even if it is possible for multiple uses of the same
+ * refcount to nest in the context of a single task:
+ *
+ *     (UINT_MAX+1-REFCOUNT_SATURATED) / PID_MAX_LIMIT =
+ *     0x40000000 / 0x400000 = 0x100 = 256
+ *
+ * If hundreds of references are added/removed with a single refcounting
+ * operation, it may potentially be possible to leave the saturation range; but
+ * given the precise timing details involved with the round-robin scheduling of
+ * each thread manipulating the refcount and the need to hit the race multiple
+ * times in succession, there doesn't appear to be a practical avenue of attack
+ * even if using refcount_add() operations with larger increments.
+ *
+ * Memory ordering
+ * ===============
+ *
+ * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
+ * and provide only what is strictly required for refcounts.
+ *
+ * The increments are fully relaxed; these will not provide ordering. The
+ * rationale is that whatever is used to obtain the object we're increasing the
+ * reference count on will provide the ordering. For locked data structures,
+ * its the lock acquire, for RCU/lockless data structures its the dependent
+ * load.
+ *
+ * Do note that inc_not_zero() provides a control dependency which will order
+ * future stores against the inc, this ensures we'll never modify the object
+ * if we did not in fact acquire a reference.
+ *
+ * The decrements will provide release order, such that all the prior loads and
+ * stores will be issued before, it also provides a control dependency, which
+ * will order us against the subsequent free().
+ *
+ * The control dependency is against the load of the cmpxchg (ll/sc) that
+ * succeeded. This means the stores aren't fully ordered, but this is fine
+ * because the 1->0 transition indicates no concurrency.
+ *
+ * Note that the allocator is responsible for ordering things between free()
+ * and alloc().
+ *
+ * The decrements dec_and_test() and sub_and_test() also provide acquire
+ * ordering on success.
+ *
+ */
+#ifndef _LINUX_REFCOUNT_IMPL_H
+#define _LINUX_REFCOUNT_IMPL_H
+
+#define REFCOUNT_INIT(n)	{ .refs = ATOMIC_INIT(n), }
+#define REFCOUNT_MAX		INT_MAX
+#define REFCOUNT_SATURATED	(INT_MIN / 2)
+
+void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t);
+
+/**
+ * refcount_set - set a refcount's value
+ * @r: the refcount
+ * @n: value to which the refcount will be set
+ */
+static inline void refcount_set(refcount_t *r, int n)
+{
+	atomic_set(&r->refs, n);
+}
+
+/**
+ * refcount_read - get a refcount's value
+ * @r: the refcount
+ *
+ * Return: the refcount's value
+ */
+static inline unsigned int refcount_read(const refcount_t *r)
+{
+	return atomic_read(&r->refs);
+}
+
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+{
+	int old = refcount_read(r);
+
+	do {
+		if (!old)
+			break;
+	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
+
+	if (oldp)
+		*oldp = old;
+
+	if (unlikely(old < 0 || old + i < 0))
+		refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);
+
+	return old;
+}
+
+/**
+ * refcount_add_not_zero - add a value to a refcount unless it is 0
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ *
+ * Return: false if the passed refcount is 0, true otherwise
+ */
+static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
+{
+	return __refcount_add_not_zero(i, r, NULL);
+}
+
+static inline __signed_wrap
+void __refcount_add(int i, refcount_t *r, int *oldp)
+{
+	int old = atomic_fetch_add_relaxed(i, &r->refs);
+
+	if (oldp)
+		*oldp = old;
+
+	if (unlikely(!old))
+		refcount_warn_saturate(r, REFCOUNT_ADD_UAF);
+	else if (unlikely(old < 0 || old + i < 0))
+		refcount_warn_saturate(r, REFCOUNT_ADD_OVF);
+}
+
+/**
+ * refcount_add - add a value to a refcount
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_inc(), or one of its variants, should instead be used to
+ * increment a reference count.
+ */
+static inline void refcount_add(int i, refcount_t *r)
+{
+	__refcount_add(i, r, NULL);
+}
+
+static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
+{
+	return __refcount_add_not_zero(1, r, oldp);
+}
+
+/**
+ * refcount_inc_not_zero - increment a refcount unless it is 0
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED
+ * and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See the comment on top.
+ *
+ * Return: true if the increment was successful, false otherwise
+ */
+static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+	return __refcount_inc_not_zero(r, NULL);
+}
+
+static inline void __refcount_inc(refcount_t *r, int *oldp)
+{
+	__refcount_add(1, r, oldp);
+}
+
+/**
+ * refcount_inc - increment a refcount
+ * @r: the refcount to increment
+ *
+ * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * Provides no memory ordering, it is assumed the caller already has a
+ * reference on the object.
+ *
+ * Will WARN if the refcount is 0, as this represents a possible use-after-free
+ * condition.
+ */
+static inline void refcount_inc(refcount_t *r)
+{
+	__refcount_inc(r, NULL);
+}
+
+static inline __must_check __signed_wrap
+bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp)
+{
+	int old = atomic_fetch_sub_release(i, &r->refs);
+
+	if (oldp)
+		*oldp = old;
+
+	if (old == i) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+
+	if (unlikely(old < 0 || old - i < 0))
+		refcount_warn_saturate(r, REFCOUNT_SUB_UAF);
+
+	return false;
+}
+
+/**
+ * refcount_sub_and_test - subtract from a refcount and test if it is 0
+ * @i: amount to subtract from the refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_dec_and_test(), but it will WARN, return false and
+ * ultimately leak on underflow and will fail to decrement when saturated
+ * at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_dec(), or one of its variants, should instead be used to
+ * decrement a reference count.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
+ */
+static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	return __refcount_sub_and_test(i, r, NULL);
+}
+
+static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp)
+{
+	return __refcount_sub_and_test(1, r, oldp);
+}
+
+/**
+ * refcount_dec_and_test - decrement a refcount and test if it is 0
+ * @r: the refcount
+ *
+ * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
+ * decrement when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Return: true if the resulting refcount is 0, false otherwise
+ */
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+	return __refcount_dec_and_test(r, NULL);
+}
+
+static inline void __refcount_dec(refcount_t *r, int *oldp)
+{
+	int old = atomic_fetch_sub_release(1, &r->refs);
+
+	if (oldp)
+		*oldp = old;
+
+	if (unlikely(old <= 1))
+		refcount_warn_saturate(r, REFCOUNT_DEC_LEAK);
+}
+
+/**
+ * refcount_dec - decrement a refcount
+ * @r: the refcount
+ *
+ * Similar to atomic_dec(), it will WARN on underflow and fail to decrement
+ * when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before.
+ */
+static inline void refcount_dec(refcount_t *r)
+{
+	__refcount_dec(r, NULL);
+}
+
+extern __must_check bool refcount_dec_if_one(refcount_t *r);
+extern __must_check bool refcount_dec_not_one(refcount_t *r);
+extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
+						       spinlock_t *lock,
+						       unsigned long *flags) __cond_acquires(lock);
+
+#endif /* _LINUX_REFCOUNT_IMPL_H */
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 59b3b752394d..a744f814374f 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -1,94 +1,4 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Variant of atomic_t specialized for reference counts.
- *
- * The interface matches the atomic_t interface (to aid in porting) but only
- * provides the few functions one should use for reference counting.
- *
- * Saturation semantics
- * ====================
- *
- * refcount_t differs from atomic_t in that the counter saturates at
- * REFCOUNT_SATURATED and will not move once there. This avoids wrapping the
- * counter and causing 'spurious' use-after-free issues. In order to avoid the
- * cost associated with introducing cmpxchg() loops into all of the saturating
- * operations, we temporarily allow the counter to take on an unchecked value
- * and then explicitly set it to REFCOUNT_SATURATED on detecting that underflow
- * or overflow has occurred. Although this is racy when multiple threads
- * access the refcount concurrently, by placing REFCOUNT_SATURATED roughly
- * equidistant from 0 and INT_MAX we minimise the scope for error:
- *
- *                               INT_MAX     REFCOUNT_SATURATED   UINT_MAX
- *   0                         (0x7fff_ffff)    (0xc000_0000)    (0xffff_ffff)
- *   +--------------------------------+----------------+----------------+
- *                                     <---------- bad value! ---------->
- *
- * (in a signed view of the world, the "bad value" range corresponds to
- * a negative counter value).
- *
- * As an example, consider a refcount_inc() operation that causes the counter
- * to overflow:
- *
- * 	int old = atomic_fetch_add_relaxed(r);
- *	// old is INT_MAX, refcount now INT_MIN (0x8000_0000)
- *	if (old < 0)
- *		atomic_set(r, REFCOUNT_SATURATED);
- *
- * If another thread also performs a refcount_inc() operation between the two
- * atomic operations, then the count will continue to edge closer to 0. If it
- * reaches a value of 1 before /any/ of the threads reset it to the saturated
- * value, then a concurrent refcount_dec_and_test() may erroneously free the
- * underlying object.
- * Linux limits the maximum number of tasks to PID_MAX_LIMIT, which is currently
- * 0x400000 (and can't easily be raised in the future beyond FUTEX_TID_MASK).
- * With the current PID limit, if no batched refcounting operations are used and
- * the attacker can't repeatedly trigger kernel oopses in the middle of refcount
- * operations, this makes it impossible for a saturated refcount to leave the
- * saturation range, even if it is possible for multiple uses of the same
- * refcount to nest in the context of a single task:
- *
- *     (UINT_MAX+1-REFCOUNT_SATURATED) / PID_MAX_LIMIT =
- *     0x40000000 / 0x400000 = 0x100 = 256
- *
- * If hundreds of references are added/removed with a single refcounting
- * operation, it may potentially be possible to leave the saturation range; but
- * given the precise timing details involved with the round-robin scheduling of
- * each thread manipulating the refcount and the need to hit the race multiple
- * times in succession, there doesn't appear to be a practical avenue of attack
- * even if using refcount_add() operations with larger increments.
- *
- * Memory ordering
- * ===============
- *
- * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
- * and provide only what is strictly required for refcounts.
- *
- * The increments are fully relaxed; these will not provide ordering. The
- * rationale is that whatever is used to obtain the object we're increasing the
- * reference count on will provide the ordering. For locked data structures,
- * its the lock acquire, for RCU/lockless data structures its the dependent
- * load.
- *
- * Do note that inc_not_zero() provides a control dependency which will order
- * future stores against the inc, this ensures we'll never modify the object
- * if we did not in fact acquire a reference.
- *
- * The decrements will provide release order, such that all the prior loads and
- * stores will be issued before, it also provides a control dependency, which
- * will order us against the subsequent free().
- *
- * The control dependency is against the load of the cmpxchg (ll/sc) that
- * succeeded. This means the stores aren't fully ordered, but this is fine
- * because the 1->0 transition indicates no concurrency.
- *
- * Note that the allocator is responsible for ordering things between free()
- * and alloc().
- *
- * The decrements dec_and_test() and sub_and_test() also provide acquire
- * ordering on success.
- *
- */
-
 #ifndef _LINUX_REFCOUNT_H
 #define _LINUX_REFCOUNT_H
 
@@ -101,10 +11,6 @@
 
 struct mutex;
 
-#define REFCOUNT_INIT(n)	{ .refs = ATOMIC_INIT(n), }
-#define REFCOUNT_MAX		INT_MAX
-#define REFCOUNT_SATURATED	(INT_MIN / 2)
-
 enum refcount_saturation_type {
 	REFCOUNT_ADD_NOT_ZERO_OVF,
 	REFCOUNT_ADD_OVF,
@@ -113,249 +19,10 @@ enum refcount_saturation_type {
 	REFCOUNT_DEC_LEAK,
 };
 
-void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t);
-
-/**
- * refcount_set - set a refcount's value
- * @r: the refcount
- * @n: value to which the refcount will be set
- */
-static inline void refcount_set(refcount_t *r, int n)
-{
-	atomic_set(&r->refs, n);
-}
-
-/**
- * refcount_read - get a refcount's value
- * @r: the refcount
- *
- * Return: the refcount's value
- */
-static inline unsigned int refcount_read(const refcount_t *r)
-{
-	return atomic_read(&r->refs);
-}
-
-static inline __must_check __signed_wrap
-bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
-{
-	int old = refcount_read(r);
-
-	do {
-		if (!old)
-			break;
-	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
-
-	if (oldp)
-		*oldp = old;
-
-	if (unlikely(old < 0 || old + i < 0))
-		refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);
-
-	return old;
-}
-
-/**
- * refcount_add_not_zero - add a value to a refcount unless it is 0
- * @i: the value to add to the refcount
- * @r: the refcount
- *
- * Will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time. In these
- * cases, refcount_inc(), or one of its variants, should instead be used to
- * increment a reference count.
- *
- * Return: false if the passed refcount is 0, true otherwise
- */
-static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
-{
-	return __refcount_add_not_zero(i, r, NULL);
-}
-
-static inline __signed_wrap
-void __refcount_add(int i, refcount_t *r, int *oldp)
-{
-	int old = atomic_fetch_add_relaxed(i, &r->refs);
-
-	if (oldp)
-		*oldp = old;
-
-	if (unlikely(!old))
-		refcount_warn_saturate(r, REFCOUNT_ADD_UAF);
-	else if (unlikely(old < 0 || old + i < 0))
-		refcount_warn_saturate(r, REFCOUNT_ADD_OVF);
-}
-
-/**
- * refcount_add - add a value to a refcount
- * @i: the value to add to the refcount
- * @r: the refcount
- *
- * Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time. In these
- * cases, refcount_inc(), or one of its variants, should instead be used to
- * increment a reference count.
- */
-static inline void refcount_add(int i, refcount_t *r)
-{
-	__refcount_add(i, r, NULL);
-}
-
-static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
-{
-	return __refcount_add_not_zero(1, r, oldp);
-}
-
-/**
- * refcount_inc_not_zero - increment a refcount unless it is 0
- * @r: the refcount to increment
- *
- * Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED
- * and WARN.
- *
- * Provides no memory ordering, it is assumed the caller has guaranteed the
- * object memory to be stable (RCU, etc.). It does provide a control dependency
- * and thereby orders future stores. See the comment on top.
- *
- * Return: true if the increment was successful, false otherwise
- */
-static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
-{
-	return __refcount_inc_not_zero(r, NULL);
-}
-
-static inline void __refcount_inc(refcount_t *r, int *oldp)
-{
-	__refcount_add(1, r, oldp);
-}
-
-/**
- * refcount_inc - increment a refcount
- * @r: the refcount to increment
- *
- * Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
- *
- * Provides no memory ordering, it is assumed the caller already has a
- * reference on the object.
- *
- * Will WARN if the refcount is 0, as this represents a possible use-after-free
- * condition.
- */
-static inline void refcount_inc(refcount_t *r)
-{
-	__refcount_inc(r, NULL);
-}
-
-static inline __must_check __signed_wrap
-bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp)
-{
-	int old = atomic_fetch_sub_release(i, &r->refs);
-
-	if (oldp)
-		*oldp = old;
-
-	if (old == i) {
-		smp_acquire__after_ctrl_dep();
-		return true;
-	}
-
-	if (unlikely(old < 0 || old - i < 0))
-		refcount_warn_saturate(r, REFCOUNT_SUB_UAF);
-
-	return false;
-}
-
-/**
- * refcount_sub_and_test - subtract from a refcount and test if it is 0
- * @i: amount to subtract from the refcount
- * @r: the refcount
- *
- * Similar to atomic_dec_and_test(), but it will WARN, return false and
- * ultimately leak on underflow and will fail to decrement when saturated
- * at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides an acquire ordering on success such that free()
- * must come after.
- *
- * Use of this function is not recommended for the normal reference counting
- * use case in which references are taken and released one at a time. In these
- * cases, refcount_dec(), or one of its variants, should instead be used to
- * decrement a reference count.
- *
- * Return: true if the resulting refcount is 0, false otherwise
- */
-static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
-{
-	return __refcount_sub_and_test(i, r, NULL);
-}
-
-static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp)
-{
-	return __refcount_sub_and_test(1, r, oldp);
-}
-
-/**
- * refcount_dec_and_test - decrement a refcount and test if it is 0
- * @r: the refcount
- *
- * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
- * decrement when saturated at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides an acquire ordering on success such that free()
- * must come after.
- *
- * Return: true if the resulting refcount is 0, false otherwise
- */
-static inline __must_check bool refcount_dec_and_test(refcount_t *r)
-{
-	return __refcount_dec_and_test(r, NULL);
-}
-
-static inline void __refcount_dec(refcount_t *r, int *oldp)
-{
-	int old = atomic_fetch_sub_release(1, &r->refs);
-
-	if (oldp)
-		*oldp = old;
-
-	if (unlikely(old <= 1))
-		refcount_warn_saturate(r, REFCOUNT_DEC_LEAK);
-}
+/* Make the generation of refcount_long_t easier. */
+#define refcount_long_saturation_type refcount_saturation_type
 
-/**
- * refcount_dec - decrement a refcount
- * @r: the refcount
- *
- * Similar to atomic_dec(), it will WARN on underflow and fail to decrement
- * when saturated at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before.
- */
-static inline void refcount_dec(refcount_t *r)
-{
-	__refcount_dec(r, NULL);
-}
+#include <linux/refcount-impl.h>
+#include <generated/refcount-long.h>
 
-extern __must_check bool refcount_dec_if_one(refcount_t *r);
-extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
-extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
-						       spinlock_t *lock,
-						       unsigned long *flags) __cond_acquires(lock);
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/include/linux/refcount_types.h b/include/linux/refcount_types.h
index 162004f06edf..6ea02d6a9623 100644
--- a/include/linux/refcount_types.h
+++ b/include/linux/refcount_types.h
@@ -16,4 +16,16 @@ typedef struct refcount_struct {
 	atomic_t refs;
 } refcount_t;
 
+/**
+ * typedef refcount_long_t - variant of atomic64_t specialized for reference counts
+ * @refs: atomic_long_t counter field
+ *
+ * The counter saturates at REFCOUNT_LONG_SATURATED and will not move once
+ * there. This avoids wrapping the counter and causing 'spurious'
+ * use-after-free bugs.
+ */
+typedef struct refcount_long_struct {
+	atomic_long_t refs;
+} refcount_long_t;
+
 #endif /* _LINUX_REFCOUNT_TYPES_H */
diff --git a/lib/refcount.c b/lib/refcount.c
index a207a8f22b3c..201304b7d7a5 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -10,10 +10,8 @@
 
 #define REFCOUNT_WARN(str)	WARN_ONCE(1, "refcount_t: " str ".\n")
 
-void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t)
+static void refcount_report_saturation(enum refcount_saturation_type t)
 {
-	refcount_set(r, REFCOUNT_SATURATED);
-
 	switch (t) {
 	case REFCOUNT_ADD_NOT_ZERO_OVF:
 		REFCOUNT_WARN("saturated; leaking memory");
@@ -34,8 +32,21 @@ void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t)
 		REFCOUNT_WARN("unknown saturation event!?");
 	}
 }
+
+void refcount_warn_saturate(refcount_t *r, enum refcount_saturation_type t)
+{
+	refcount_set(r, REFCOUNT_SATURATED);
+	refcount_report_saturation(t);
+}
 EXPORT_SYMBOL(refcount_warn_saturate);
 
+void refcount_long_warn_saturate(refcount_long_t *r, enum refcount_saturation_type t)
+{
+	refcount_long_set(r, REFCOUNT_LONG_SATURATED);
+	refcount_report_saturation(t);
+}
+EXPORT_SYMBOL(refcount_long_warn_saturate);
+
 /**
  * refcount_dec_if_one - decrement a refcount if it is 1
  * @r: the refcount
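
To make the string-replacement scheme concrete, here is roughly what
one generated pair of functions in include/generated/refcount-long.h
would look like after the Makefile's perl substitutions are applied to
refcount-impl.h (a sketch of the expected output, not a file from this
series):

/* From __refcount_inc_not_zero()/refcount_inc_not_zero(), after
 * s/refcount_/refcount_long_/, s/atomic_/atomic_long_/ and
 * s/\bint\b/long/ have been applied: */
static inline __must_check bool __refcount_long_inc_not_zero(refcount_long_t *r, long *oldp)
{
	return __refcount_long_add_not_zero(1, r, oldp);
}

static inline __must_check bool refcount_long_inc_not_zero(refcount_long_t *r)
{
	return __refcount_long_inc_not_zero(r, NULL);
}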
From patchwork Thu May 2 22:33:40 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13652181
From: Kees Cook <keescook@chromium.org>
To: Christian Brauner
Subject: [PATCH 5/5] fs: Convert struct file::f_count to refcount_long_t
Date: Thu, 2 May 2024 15:33:40 -0700
Message-Id: <20240502223341.1835070-5-keescook@chromium.org>
In-Reply-To: <20240502222252.work.690-kees@kernel.org>
References: <20240502222252.work.690-kees@kernel.org>

Underflow of f_count needs to be more carefully detected than it
currently is. The result of get_file() should be checked by callers,
but the first step is detection.
Redefine f_count from atomic_long_t to refcount_long_t.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Christian Brauner
Cc: Alexander Viro
Cc: Jan Kara
Cc: linux-fsdevel@vger.kernel.org
---
 fs/file.c          | 4 ++--
 fs/file_table.c    | 6 +++---
 include/linux/fs.h | 6 +++---
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index 3b683b9101d8..570424dd634b 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -865,7 +865,7 @@ static struct file *__get_file_rcu(struct file __rcu **f)
 	if (!file)
 		return NULL;
 
-	if (unlikely(!atomic_long_inc_not_zero(&file->f_count)))
+	if (unlikely(!refcount_long_inc_not_zero(&file->f_count)))
 		return ERR_PTR(-EAGAIN);
 
 	file_reloaded = rcu_dereference_raw(*f);
@@ -987,7 +987,7 @@ static inline struct file *__fget_files_rcu(struct files_struct *files,
 		 * barrier. We only really need an 'acquire' one to
 		 * protect the loads below, but we don't have that.
 		 */
-		if (unlikely(!atomic_long_inc_not_zero(&file->f_count)))
+		if (unlikely(!refcount_long_inc_not_zero(&file->f_count)))
 			continue;
 
 		/*
diff --git a/fs/file_table.c b/fs/file_table.c
index 4f03beed4737..f29e7b94bca1 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -167,7 +167,7 @@ static int init_file(struct file *f, int flags, const struct cred *cred)
 	 * fget-rcu pattern users need to be able to handle spurious
 	 * refcount bumps we should reinitialize the reused file first.
 	 */
-	atomic_long_set(&f->f_count, 1);
+	refcount_long_set(&f->f_count, 1);
 	return 0;
 }
 
@@ -470,7 +470,7 @@ static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput);
 
 void fput(struct file *file)
 {
-	if (atomic_long_dec_and_test(&file->f_count)) {
+	if (refcount_long_dec_and_test(&file->f_count)) {
 		struct task_struct *task = current;
 
 		if (unlikely(!(file->f_mode & (FMODE_BACKING | FMODE_OPENED)))) {
@@ -503,7 +503,7 @@ void fput(struct file *file)
  */
 void __fput_sync(struct file *file)
 {
-	if (atomic_long_dec_and_test(&file->f_count))
+	if (refcount_long_dec_and_test(&file->f_count))
 		__fput(file);
 }
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 210bbbfe9b83..b8f6cce7c39d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1001,7 +1001,7 @@ struct file {
 	 */
 	spinlock_t		f_lock;
 	fmode_t			f_mode;
-	atomic_long_t		f_count;
+	refcount_long_t		f_count;
 	struct mutex		f_pos_lock;
 	loff_t			f_pos;
 	unsigned int		f_flags;
@@ -1038,7 +1038,7 @@ struct file_handle {
 
 static inline struct file *get_file(struct file *f)
 {
-	if (unlikely(!atomic_long_inc_not_zero(&f->f_count)))
+	if (unlikely(!refcount_long_inc_not_zero(&f->f_count)))
 		return NULL;
 	return f;
 }
@@ -1046,7 +1046,7 @@ static inline struct file *get_file(struct file *f)
 struct file *get_file_rcu(struct file __rcu **f);
 struct file *get_file_active(struct file **f);
 
-#define file_count(x)	atomic_long_read(&(x)->f_count)
+#define file_count(x)	refcount_long_read(&(x)->f_count)
 
 #define MAX_NON_LFS	((1UL<<31) - 1)
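
The practical effect of the conversion is that f_count underflow is now
caught by the refcount machinery described in patch 4. A sketch of what
that means at the fput() level (illustrative only; the path and the
double-fput below are a hypothetical bug that this series makes
detectable, not a supported pattern):

#include <linux/fs.h>

	struct file *f = filp_open("/tmp/example", O_RDONLY, 0);	/* hypothetical usage */

	if (!IS_ERR(f)) {
		fput(f);	/* drops the last reference: f_count 1 -> 0 */
		/* A buggy second fput(f) here would now trip the
		 * refcount_long_dec_and_test() underflow detection
		 * (WARN plus saturation) instead of silently wrapping
		 * f_count negative. */
	}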