From patchwork Wed Apr 24 19:17:34 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13642446
From: Kees Cook <keescook@chromium.org>
To: Mark Rutland
Cc: Kees Cook, Will Deacon, Peter Zijlstra, Boqun Feng, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Jakub Kicinski, Catalin Marinas, Arnd Bergmann,
 Andrew Morton, "David S. Miller", David Ahern, Eric Dumazet,
 Paolo Abeni, "Paul E. McKenney", Uros Bizjak,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-arch@vger.kernel.org, netdev@vger.kernel.org,
 linux-hardening@vger.kernel.org
Subject: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition
Date: Wed, 24 Apr 2024 12:17:34 -0700
Message-Id: <20240424191740.3088894-1-keescook@chromium.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240424191225.work.780-kees@kernel.org>
References: <20240424191225.work.780-kees@kernel.org>
MIME-Version: 1.0
Use wrapping_add() to annotate the addition in atomic_add_return() as
expecting to wrap around.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
---
 arch/x86/include/asm/atomic.h      | 3 ++-
 arch/x86/include/asm/atomic64_32.h | 2 +-
 arch/x86/include/asm/atomic64_64.h | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..a5862a258760 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_ATOMIC_H
 
 #include <linux/compiler.h>
+#include <linux/overflow.h>
 #include <linux/types.h>
 #include <asm/alternative.h>
 #include <asm/cmpxchg.h>
@@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 
 static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
 {
-	return i + xadd(&v->counter, i);
+	return wrapping_add(int, i, xadd(&v->counter, i));
 }
 #define arch_atomic_add_return arch_atomic_add_return
 
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 3486d91b8595..608b100e8ffe 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -254,7 +254,7 @@ static __always_inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	s64 old, c = 0;
 
-	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, wrapping_add(s64, c, i))) != c)
 		c = old;
 
 	return old;
 
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 3165c0feedf7..f1dc8aa54b52 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -76,7 +76,7 @@ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
 
 static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
-	return i + xadd(&v->counter, i);
+	return wrapping_add(s64, i, xadd(&v->counter, i));
 }
 #define arch_atomic64_add_return arch_atomic64_add_return