From patchwork Fri Apr 29 20:36:24 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832741
From: Sami Tolvanen
Date: Fri, 29 Apr 2022 13:36:24 -0700
Subject: [RFC PATCH 01/21] efi/libstub: Filter out CC_FLAGS_CFI
Message-Id: <20220429203644.2868448-2-samitolvanen@google.com>
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
List-ID: linux-hardening@vger.kernel.org

Explicitly filter out CC_FLAGS_CFI in preparation for the flags being
removed from CC_FLAGS_LTO.

Signed-off-by: Sami Tolvanen
---
 drivers/firmware/efi/libstub/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index d0537573501e..234fb2910622 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -39,6 +39,8 @@ KBUILD_CFLAGS := $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \
 # remove SCS flags from all objects in this directory
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+# disable CFI
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
 # disable LTO
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO), $(KBUILD_CFLAGS))
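For readers unfamiliar with the Kbuild idiom used above: GNU make's
$(filter-out list, text) removes every word of "list" from "text", which is how
a directory opts out of flags set globally. A minimal stand-alone sketch (the
flag values here are illustrative; the real CC_FLAGS_CFI is set by the
top-level kernel Makefile):

```shell
# Demonstrate $(filter-out) with made-up flag values.
cat > /tmp/filter-demo.mk <<'EOF'
CC_FLAGS_CFI := -fsanitize=kcfi
KBUILD_CFLAGS := -O2 -fsanitize=kcfi -Wall
# remove CFI flags, as the libstub Makefile does
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
$(info $(KBUILD_CFLAGS))
all: ;
EOF
make -s -f /tmp/filter-demo.mk    # prints: -O2 -Wall
```

Note that filter-out matches whole words, so multi-word flag variables are
removed word by word.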
From patchwork Fri Apr 29 20:36:25 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832742
From: Sami Tolvanen
Date: Fri, 29 Apr 2022 13:36:25 -0700
Subject: [RFC PATCH 02/21] arm64/vdso: Filter out CC_FLAGS_CFI
Message-Id: <20220429203644.2868448-3-samitolvanen@google.com>
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
List-ID: linux-hardening@vger.kernel.org

Explicitly filter out CC_FLAGS_CFI in preparation for the flags being
removed from
CC_FLAGS_LTO.

Signed-off-by: Sami Tolvanen
---
 arch/arm64/kernel/vdso/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index 172452f79e46..6c26e0a76a06 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -33,7 +33,8 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 # the CFLAGS of vgettimeofday.c to make possible to build the
 # kernel with CONFIG_WERROR enabled.
 CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) \
-				$(CC_FLAGS_LTO) -Wmissing-prototypes -Wmissing-declarations
+				$(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
+				-Wmissing-prototypes -Wmissing-declarations
 KASAN_SANITIZE := n
 KCSAN_SANITIZE := n
 UBSAN_SANITIZE := n
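For context on the CFLAGS_REMOVE_<object>.o mechanism used in this patch:
Kbuild (scripts/Makefile.lib) subtracts the per-object removal list from the
global flags when compiling that one file. A simplified stand-alone model of
that behavior (variable values and the _c_flags computation are illustrative
sketches, not the kernel's exact rules):

```shell
cat > /tmp/remove-demo.mk <<'EOF'
# Simplified model of Kbuild's per-object flag removal.
CC_FLAGS_LTO := -flto=thin
CC_FLAGS_CFI := -fsanitize=kcfi
KBUILD_CFLAGS := -O2 $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) -Wall
CFLAGS_REMOVE_vgettimeofday.o := $(CC_FLAGS_LTO) $(CC_FLAGS_CFI)
# final per-object flags: global flags minus this object's removal list
obj := vgettimeofday.o
final_cflags := $(filter-out $(CFLAGS_REMOVE_$(obj)), $(KBUILD_CFLAGS))
$(info $(obj): $(final_cflags))
all: ;
EOF
make -s -f /tmp/remove-demo.mk    # prints: vgettimeofday.o: -O2 -Wall
```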
From patchwork Fri Apr 29 20:36:26 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832743
From: Sami Tolvanen
Date: Fri, 29 Apr 2022 13:36:26 -0700
Subject: [RFC PATCH 03/21] kallsyms: Ignore __kcfi_typeid_
Message-Id: <20220429203644.2868448-4-samitolvanen@google.com>
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
List-ID: linux-hardening@vger.kernel.org

The compiler generates CFI type identifier symbols for annotating
assembly functions at link time. Ignore them in kallsyms.
Signed-off-by: Sami Tolvanen
---
 scripts/kallsyms.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 8caabddf817c..eebd02e4b832 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -118,6 +118,7 @@ static bool is_ignored_symbol(const char *name, char type)
 		"__ThumbV7PILongThunk_",
 		"__LA25Thunk_",		/* mips lld */
 		"__microLA25Thunk_",
+		"__kcfi_typeid_",	/* CFI type identifiers */
 		NULL
 	};

From patchwork Fri Apr 29 20:36:27 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832745
From: Sami Tolvanen
Date: Fri, 29 Apr 2022 13:36:27 -0700
Subject: [RFC PATCH 04/21] cfi: Remove CONFIG_CFI_CLANG_SHADOW
Message-Id: <20220429203644.2868448-5-samitolvanen@google.com>
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
List-ID: linux-hardening@vger.kernel.org

In preparation for switching to -fsanitize=kcfi, remove support for the
CFI module shadow, which will no longer be needed.

Signed-off-by: Sami Tolvanen
---
 arch/Kconfig        |  10 --
 include/linux/cfi.h |  12 ---
 kernel/cfi.c        | 237 +-------------------------------------------
 kernel/module.c     |  15 ---
 4 files changed, 1 insertion(+), 273 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 31c4fdc4a4ba..625db6376726 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -739,16 +739,6 @@ config CFI_CLANG
 	  https://clang.llvm.org/docs/ControlFlowIntegrity.html
 
-config CFI_CLANG_SHADOW
-	bool "Use CFI shadow to speed up cross-module checks"
-	default y
-	depends on CFI_CLANG && MODULES
-	help
-	  If you select this option, the kernel builds a fast look-up table of
-	  CFI check functions in loaded modules to reduce performance overhead.
-
-	  If unsure, say Y.
-
 config CFI_PERMISSIVE
 	bool "Use CFI in permissive mode"
 	depends on CFI_CLANG

diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index c6dfc1ed0626..4ab51c067007 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -20,18 +20,6 @@ extern void __cfi_check(uint64_t id, void *ptr, void *diag);
 #define __CFI_ADDRESSABLE(fn, __attr) \
 	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
 
-#ifdef CONFIG_CFI_CLANG_SHADOW
-
-extern void cfi_module_add(struct module *mod, unsigned long base_addr);
-extern void cfi_module_remove(struct module *mod, unsigned long base_addr);
-
-#else
-
-static inline void cfi_module_add(struct module *mod, unsigned long base_addr) {}
-static inline void cfi_module_remove(struct module *mod, unsigned long base_addr) {}
-
-#endif /* CONFIG_CFI_CLANG_SHADOW */
-
 #else /* !CONFIG_CFI_CLANG */
 
 #ifdef CONFIG_X86_KERNEL_IBT

diff --git a/kernel/cfi.c b/kernel/cfi.c
index 9594cfd1cf2c..2cc0d01ea980 100644
--- a/kernel/cfi.c
+++ b/kernel/cfi.c
@@ -32,237 +32,6 @@ static inline void handle_cfi_failure(void *ptr)
 }
 
 #ifdef CONFIG_MODULES
-#ifdef CONFIG_CFI_CLANG_SHADOW
-/*
- * Index type. A 16-bit index can address at most (2^16)-2 pages (taking
- * into account SHADOW_INVALID), i.e. ~256M with 4k pages.
- */
-typedef u16 shadow_t;
-#define SHADOW_INVALID		((shadow_t)~0UL)
-
-struct cfi_shadow {
-	/* Page index for the beginning of the shadow */
-	unsigned long base;
-	/* An array of __cfi_check locations (as indices to the shadow) */
-	shadow_t shadow[1];
-} __packed;
-
-/*
- * The shadow covers ~128M from the beginning of the module region. If
- * the region is larger, we fall back to __module_address for the rest.
- */
-#define __SHADOW_RANGE		(_UL(SZ_128M) >> PAGE_SHIFT)
-
-/* The in-memory size of struct cfi_shadow, always at least one page */
-#define __SHADOW_PAGES		((__SHADOW_RANGE * sizeof(shadow_t)) >> PAGE_SHIFT)
-#define SHADOW_PAGES		max(1UL, __SHADOW_PAGES)
-#define SHADOW_SIZE		(SHADOW_PAGES << PAGE_SHIFT)
-
-/* The actual size of the shadow array, minus metadata */
-#define SHADOW_ARR_SIZE	(SHADOW_SIZE - offsetof(struct cfi_shadow, shadow))
-#define SHADOW_ARR_SLOTS	(SHADOW_ARR_SIZE / sizeof(shadow_t))
-
-static DEFINE_MUTEX(shadow_update_lock);
-static struct cfi_shadow __rcu *cfi_shadow __read_mostly;
-
-/* Returns the index in the shadow for the given address */
-static inline int ptr_to_shadow(const struct cfi_shadow *s, unsigned long ptr)
-{
-	unsigned long index;
-	unsigned long page = ptr >> PAGE_SHIFT;
-
-	if (unlikely(page < s->base))
-		return -1;	/* Outside of module area */
-
-	index = page - s->base;
-
-	if (index >= SHADOW_ARR_SLOTS)
-		return -1;	/* Cannot be addressed with shadow */
-
-	return (int)index;
-}
-
-/* Returns the page address for an index in the shadow */
-static inline unsigned long shadow_to_ptr(const struct cfi_shadow *s,
-	int index)
-{
-	if (unlikely(index < 0 || index >= SHADOW_ARR_SLOTS))
-		return 0;
-
-	return (s->base + index) << PAGE_SHIFT;
-}
-
-/* Returns the __cfi_check function address for the given shadow location */
-static inline unsigned long shadow_to_check_fn(const struct cfi_shadow *s,
-	int index)
-{
-	if (unlikely(index < 0 || index >= SHADOW_ARR_SLOTS))
-		return 0;
-
-	if (unlikely(s->shadow[index] == SHADOW_INVALID))
-		return 0;
-
-	/* __cfi_check is always page aligned */
-	return (s->base + s->shadow[index]) << PAGE_SHIFT;
-}
-
-static void prepare_next_shadow(const struct cfi_shadow __rcu *prev,
-		struct cfi_shadow *next)
-{
-	int i, index, check;
-
-	/* Mark everything invalid */
-	memset(next->shadow, 0xFF, SHADOW_ARR_SIZE);
-
-	if (!prev)
-		return; /* No previous shadow */
-
-	/* If the base address didn't change, an update is not needed */
-	if (prev->base == next->base) {
-		memcpy(next->shadow, prev->shadow, SHADOW_ARR_SIZE);
-		return;
-	}
-
-	/* Convert the previous shadow to the new address range */
-	for (i = 0; i < SHADOW_ARR_SLOTS; ++i) {
-		if (prev->shadow[i] == SHADOW_INVALID)
-			continue;
-
-		index = ptr_to_shadow(next, shadow_to_ptr(prev, i));
-		if (index < 0)
-			continue;
-
-		check = ptr_to_shadow(next,
-				shadow_to_check_fn(prev, prev->shadow[i]));
-		if (check < 0)
-			continue;
-
-		next->shadow[index] = (shadow_t)check;
-	}
-}
-
-static void add_module_to_shadow(struct cfi_shadow *s, struct module *mod,
-				 unsigned long min_addr, unsigned long max_addr)
-{
-	int check_index;
-	unsigned long check = (unsigned long)mod->cfi_check;
-	unsigned long ptr;
-
-	if (unlikely(!PAGE_ALIGNED(check))) {
-		pr_warn("cfi: not using shadow for module %s\n", mod->name);
-		return;
-	}
-
-	check_index = ptr_to_shadow(s, check);
-	if (check_index < 0)
-		return; /* Module not addressable with shadow */
-
-	/* For each page, store the check function index in the shadow */
-	for (ptr = min_addr; ptr <= max_addr; ptr += PAGE_SIZE) {
-		int index = ptr_to_shadow(s, ptr);
-
-		if (index >= 0) {
-			/* Each page must only contain one module */
-			WARN_ON_ONCE(s->shadow[index] != SHADOW_INVALID);
-			s->shadow[index] = (shadow_t)check_index;
-		}
-	}
-}
-
-static void remove_module_from_shadow(struct cfi_shadow *s, struct module *mod,
-		unsigned long min_addr, unsigned long max_addr)
-{
-	unsigned long ptr;
-
-	for (ptr = min_addr; ptr <= max_addr; ptr += PAGE_SIZE) {
-		int index = ptr_to_shadow(s, ptr);
-
-		if (index >= 0)
-			s->shadow[index] = SHADOW_INVALID;
-	}
-}
-
-typedef void (*update_shadow_fn)(struct cfi_shadow *, struct module *,
-			unsigned long min_addr, unsigned long max_addr);
-
-static void update_shadow(struct module *mod, unsigned long base_addr,
-			  update_shadow_fn fn)
-{
-	struct cfi_shadow *prev;
-	struct cfi_shadow *next;
-	unsigned long min_addr, max_addr;
-
-	next = vmalloc(SHADOW_SIZE);
-
-	mutex_lock(&shadow_update_lock);
-	prev = rcu_dereference_protected(cfi_shadow,
-					 mutex_is_locked(&shadow_update_lock));
-
-	if (next) {
-		next->base = base_addr >> PAGE_SHIFT;
-		prepare_next_shadow(prev, next);
-
-		min_addr = (unsigned long)mod->core_layout.base;
-		max_addr = min_addr + mod->core_layout.text_size;
-		fn(next, mod, min_addr & PAGE_MASK, max_addr & PAGE_MASK);
-
-		set_memory_ro((unsigned long)next, SHADOW_PAGES);
-	}
-
-	rcu_assign_pointer(cfi_shadow, next);
-	mutex_unlock(&shadow_update_lock);
-	synchronize_rcu();
-
-	if (prev) {
-		set_memory_rw((unsigned long)prev, SHADOW_PAGES);
-		vfree(prev);
-	}
-}
-
-void cfi_module_add(struct module *mod, unsigned long base_addr)
-{
-	update_shadow(mod, base_addr, add_module_to_shadow);
-}
-
-void cfi_module_remove(struct module *mod, unsigned long base_addr)
-{
-	update_shadow(mod, base_addr, remove_module_from_shadow);
-}
-
-static inline cfi_check_fn ptr_to_check_fn(const struct cfi_shadow __rcu *s,
-					   unsigned long ptr)
-{
-	int index;
-
-	if (unlikely(!s))
-		return NULL; /* No shadow available */
-
-	index = ptr_to_shadow(s, ptr);
-	if (index < 0)
-		return NULL; /* Cannot be addressed with shadow */
-
-	return (cfi_check_fn)shadow_to_check_fn(s, index);
-}
-
-static inline cfi_check_fn find_shadow_check_fn(unsigned long ptr)
-{
-	cfi_check_fn fn;
-
-	rcu_read_lock_sched_notrace();
-	fn = ptr_to_check_fn(rcu_dereference_sched(cfi_shadow), ptr);
-	rcu_read_unlock_sched_notrace();
-
-	return fn;
-}
-
-#else /* !CONFIG_CFI_CLANG_SHADOW */
-
-static inline cfi_check_fn find_shadow_check_fn(unsigned long ptr)
-{
-	return NULL;
-}
-
-#endif /* CONFIG_CFI_CLANG_SHADOW */
 
 static inline cfi_check_fn find_module_check_fn(unsigned long ptr)
 {
@@ -291,11 +60,7 @@ static inline cfi_check_fn find_check_fn(unsigned long ptr)
 	 * up if necessary.
 	 */
 	RCU_NONIDLE({
-		if (IS_ENABLED(CONFIG_CFI_CLANG_SHADOW))
-			fn = find_shadow_check_fn(ptr);
-
-		if (!fn)
-			fn = find_module_check_fn(ptr);
+		fn = find_module_check_fn(ptr);
 	});
 
 	return fn;

diff --git a/kernel/module.c b/kernel/module.c
index 6cea788fd965..296fe02323e9 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2151,8 +2151,6 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }
 
-static void cfi_cleanup(struct module *mod);
-
 /* Free a module, remove from lists, etc. */
 static void free_module(struct module *mod)
 {
@@ -2194,9 +2192,6 @@ static void free_module(struct module *mod)
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
 
-	/* Clean up CFI for the module. */
-	cfi_cleanup(mod);
-
 	/* This may be empty, but that's OK */
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
@@ -4141,7 +4136,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	synchronize_rcu();
 	kfree(mod->args);
 free_arch_cleanup:
-	cfi_cleanup(mod);
 	module_arch_cleanup(mod);
 free_modinfo:
 	free_modinfo(mod);
@@ -4530,15 +4524,6 @@ static void cfi_init(struct module *mod)
 	if (exit)
 		mod->exit = *exit;
 #endif
-
-	cfi_module_add(mod, module_addr_min);
-#endif
-}
-
-static void cfi_cleanup(struct module *mod)
-{
-#ifdef CONFIG_CFI_CLANG
-	cfi_module_remove(mod, module_addr_min);
 #endif
 }
From patchwork Fri Apr 29 20:36:28 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832744
From: Sami Tolvanen
Date: Fri, 29 Apr 2022 13:36:28 -0700
Subject: [RFC PATCH 05/21] cfi: Drop __CFI_ADDRESSABLE
Message-Id: <20220429203644.2868448-6-samitolvanen@google.com>
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
List-ID: linux-hardening@vger.kernel.org

The __CFI_ADDRESSABLE macro is used for init_module and cleanup_module
to ensure we have the address of the CFI jump table,
and with CONFIG_X86_KERNEL_IBT to ensure LTO won't optimize away the
symbols. As __CFI_ADDRESSABLE is no longer necessary with
-fsanitize=kcfi, add a more flexible version of the __ADDRESSABLE
macro and always ensure these symbols won't be dropped.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/cfi.h      | 20 --------------------
 include/linux/compiler.h |  6 ++++--
 include/linux/module.h   |  4 ++--
 3 files changed, 6 insertions(+), 24 deletions(-)

diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index 4ab51c067007..2cdbc0fbd0ab 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -13,26 +13,6 @@ typedef void (*cfi_check_fn)(uint64_t id, void *ptr, void *diag);
 /* Compiler-generated function in each module, and the kernel */
 extern void __cfi_check(uint64_t id, void *ptr, void *diag);
 
-/*
- * Force the compiler to generate a CFI jump table entry for a function
- * and store the jump table address to __cfi_jt_<fn>.
- */
-#define __CFI_ADDRESSABLE(fn, __attr) \
-	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
-
-#else /* !CONFIG_CFI_CLANG */
-
-#ifdef CONFIG_X86_KERNEL_IBT
-
-#define __CFI_ADDRESSABLE(fn, __attr) \
-	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
-
-#endif /* CONFIG_X86_KERNEL_IBT */
-
 #endif /* CONFIG_CFI_CLANG */
 
-#ifndef __CFI_ADDRESSABLE
-#define __CFI_ADDRESSABLE(fn, __attr)
-#endif
-
 #endif /* _LINUX_CFI_H */
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 219aa5ddbc73..9303f5fe5d89 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -221,9 +221,11 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  * otherwise, or eliminated entirely due to lack of references that are
  * visible to the compiler.
  */
-#define __ADDRESSABLE(sym) \
-	static void * __section(".discard.addressable") __used \
+#define ___ADDRESSABLE(sym, __attrs) \
+	static void * __used __attrs \
 		__UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
+#define __ADDRESSABLE(sym) \
+	___ADDRESSABLE(sym, __section(".discard.addressable"))
 
 /**
  * offset_to_ptr - convert a relative memory offset to an absolute pointer
diff --git a/include/linux/module.h b/include/linux/module.h
index 1e135fd5c076..87857275c047 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -132,7 +132,7 @@ extern void cleanup_module(void);
 		{ return initfn; }				\
 	int init_module(void) __copy(initfn)			\
 		__attribute__((alias(#initfn)));		\
-	__CFI_ADDRESSABLE(init_module, __initdata);
+	___ADDRESSABLE(init_module, __initdata);
 
 /* This is only required if you want to be unloadable. */
 #define module_exit(exitfn)					\
@@ -140,7 +140,7 @@ extern void cleanup_module(void);
 		{ return exitfn; }				\
 	void cleanup_module(void) __copy(exitfn)		\
 		__attribute__((alias(#exitfn)));		\
-	__CFI_ADDRESSABLE(cleanup_module, __exitdata);
+	___ADDRESSABLE(cleanup_module, __exitdata);

From patchwork Fri Apr 29 20:36:29 2022
X-Patchwork-Submitter: Sami Tolvanen <samitolvanen@google.com>
X-Patchwork-Id: 12832746
Date: Fri, 29 Apr 2022 13:36:29 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-7-samitolvanen@google.com>
Subject: [RFC PATCH 06/21] cfi: Switch to -fsanitize=kcfi
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
X-Mailing-List: linux-hardening@vger.kernel.org

Switch from Clang's original forward-edge control-flow integrity
implementation to -fsanitize=kcfi, which is better suited for the
kernel, as it doesn't require LTO, doesn't use a jump table that
requires altering function references, and won't break cross-module
function address equality.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 Makefile                          |  13 +--
 arch/Kconfig                      |   8 +-
 include/asm-generic/vmlinux.lds.h |  38 ++++-----
 include/linux/cfi.h               |  24 +++++-
 include/linux/compiler-clang.h    |   8 +-
 include/linux/module.h            |   4 +-
 kernel/cfi.c                      | 129 ++++++++++++++++--------------
 kernel/module.c                   |  34 +-------
 scripts/module.lds.S              |  24 ++----
 9 files changed, 126 insertions(+), 156 deletions(-)

diff --git a/Makefile b/Makefile
index c3ec1ea42379..22a5d48f5fb4 100644
--- a/Makefile
+++ b/Makefile
@@ -915,18 +915,7 @@ export CC_FLAGS_LTO
 endif
 
 ifdef CONFIG_CFI_CLANG
-CC_FLAGS_CFI	:= -fsanitize=cfi \
-		   -fsanitize-cfi-cross-dso \
-		   -fno-sanitize-cfi-canonical-jump-tables \
-		   -fno-sanitize-trap=cfi \
-		   -fno-sanitize-blacklist
-
-ifdef CONFIG_CFI_PERMISSIVE
-CC_FLAGS_CFI	+= -fsanitize-recover=cfi
-endif
-
-# If LTO flags are filtered out, we must also filter out CFI.
-CC_FLAGS_LTO	+= $(CC_FLAGS_CFI)
+CC_FLAGS_CFI	:= -fsanitize=kcfi -fno-sanitize-blacklist
 KBUILD_CFLAGS	+= $(CC_FLAGS_CFI)
 export CC_FLAGS_CFI
 endif
diff --git a/arch/Kconfig b/arch/Kconfig
index 625db6376726..601379a6173d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -722,12 +722,8 @@ config ARCH_SUPPORTS_CFI_CLANG
 
 config CFI_CLANG
 	bool "Use Clang's Control Flow Integrity (CFI)"
-	depends on LTO_CLANG && ARCH_SUPPORTS_CFI_CLANG
-	# Clang >= 12:
-	# - https://bugs.llvm.org/show_bug.cgi?id=46258
-	# - https://bugs.llvm.org/show_bug.cgi?id=47479
-	depends on CLANG_VERSION >= 120000
-	select KALLSYMS
+	depends on ARCH_SUPPORTS_CFI_CLANG
+	depends on $(cc-option,-fsanitize=kcfi)
 	help
 	  This option enables Clang's forward-edge Control Flow Integrity
 	  (CFI) checking, where the compiler injects a runtime check to each
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 69138e9db787..20bfd2f01d6f 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -421,6 +421,22 @@
 	__end_ro_after_init = .;
 #endif
 
+/*
+ * .kcfi_traps contains a list of KCFI trap locations.
+ */
+#ifndef KCFI_TRAPS
+#ifdef CONFIG_CFI_CLANG
+#define KCFI_TRAPS							\
+	__kcfi_traps : AT(ADDR(__kcfi_traps) - LOAD_OFFSET) {		\
+		__start___kcfi_traps = .;				\
+		KEEP(*(.kcfi_traps))					\
+		__stop___kcfi_traps = .;				\
+	}
+#else
+#define KCFI_TRAPS
+#endif
+#endif
+
 /*
  * Read only Data
  */
@@ -529,6 +545,8 @@
 		__stop___modver = .;					\
 	}								\
 									\
+	KCFI_TRAPS							\
+									\
 	RO_EXCEPTION_TABLE						\
 	NOTES								\
 	BTF								\
@@ -537,21 +555,6 @@
 	__end_rodata = .;
 
-/*
- * .text..L.cfi.jumptable.* contain Control-Flow Integrity (CFI)
- * jump table entries.
- */
-#ifdef CONFIG_CFI_CLANG
-#define TEXT_CFI_JT							\
-		. = ALIGN(PMD_SIZE);					\
-		__cfi_jt_start = .;					\
-		*(.text..L.cfi.jumptable .text..L.cfi.jumptable.*)	\
-		. = ALIGN(PMD_SIZE);					\
-		__cfi_jt_end = .;
-#else
-#define TEXT_CFI_JT
-#endif
-
 /*
  * Non-instrumentable text section
  */
@@ -579,7 +582,6 @@
 		*(.text..refcount)					\
 		*(.ref.text)						\
 		*(.text.asan.* .text.tsan.*)				\
-	TEXT_CFI_JT							\
 	MEM_KEEP(init.text*)						\
 	MEM_KEEP(exit.text*)						\
 
@@ -1008,8 +1010,7 @@
  * keep any .init_array.* sections.
  * https://bugs.llvm.org/show_bug.cgi?id=46478
  */
-#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN) || \
-	defined(CONFIG_CFI_CLANG)
+#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN)
 # ifdef CONFIG_CONSTRUCTORS
 #  define SANITIZER_DISCARDS						\
 	*(.eh_frame)
@@ -1027,6 +1028,7 @@
 	*(.discard)							\
 	*(.discard.*)							\
 	*(.modinfo)							\
+	*(.kcfi_types)							\
 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
 	*(.gnu.version*)						\
 
diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index 2cdbc0fbd0ab..9cbadfca7e01 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -2,17 +2,33 @@
 /*
  * Clang Control Flow Integrity (CFI) support.
  *
- * Copyright (C) 2021 Google LLC
+ * Copyright (C) 2022 Google LLC
  */
 #ifndef _LINUX_CFI_H
 #define _LINUX_CFI_H
 
+#include
+#include
+
 #ifdef CONFIG_CFI_CLANG
-typedef void (*cfi_check_fn)(uint64_t id, void *ptr, void *diag);
 
-/* Compiler-generated function in each module, and the kernel */
-extern void __cfi_check(uint64_t id, void *ptr, void *diag);
+#ifdef CONFIG_MODULES
+void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
+			 struct module *mod);
+#endif
+
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs);
+enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs);
+
+#else /* CONFIG_CFI_CLANG */
+
+#ifdef CONFIG_MODULES
+static inline void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
+				       struct module *mod) {}
+#endif
+
+static inline enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs)
+{
+	return BUG_TRAP_TYPE_NONE;
+}
 
 #endif /* CONFIG_CFI_CLANG */
 
 #endif /* _LINUX_CFI_H */
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index babb1347148c..c4ff42859077 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -66,9 +66,6 @@
 # define __noscs	__attribute__((__no_sanitize__("shadow-call-stack")))
 #endif
 
-#define __nocfi		__attribute__((__no_sanitize__("cfi")))
-#define __cficanonical	__attribute__((__cfi_canonical_jump_table__))
-
 /*
  * Turn individual warnings and errors on and off locally, depending
  * on version.
@@ -93,3 +90,8 @@
 #define __diag_ignore_all(option, comment) \
 	__diag_clang(11, ignore, option)
+
+#if CONFIG_CFI_CLANG
+/* Disable CFI checking inside a function. */
+#define __nocfi		__attribute__((__no_sanitize__("kcfi")))
+#endif
diff --git a/include/linux/module.h b/include/linux/module.h
index 87857275c047..430ea19f14f6 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -27,7 +27,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
@@ -389,7 +388,8 @@ struct module {
 	unsigned int num_syms;
 
 #ifdef CONFIG_CFI_CLANG
-	cfi_check_fn cfi_check;
+	unsigned long *kcfi_traps;
+	unsigned long *kcfi_traps_end;
 #endif
 
 	/* Kernel parameters. */
diff --git a/kernel/cfi.c b/kernel/cfi.c
index 2cc0d01ea980..d9907df6576e 100644
--- a/kernel/cfi.c
+++ b/kernel/cfi.c
@@ -1,94 +1,101 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Clang Control Flow Integrity (CFI) error and slowpath handling.
+ * Clang Control Flow Integrity (CFI) error handling.
  *
- * Copyright (C) 2021 Google LLC
+ * Copyright (C) 2022 Google LLC
  */
 
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-/* Compiler-defined handler names */
-#ifdef CONFIG_CFI_PERMISSIVE
-#define cfi_failure_handler	__ubsan_handle_cfi_check_fail
-#else
-#define cfi_failure_handler	__ubsan_handle_cfi_check_fail_abort
-#endif
-
-static inline void handle_cfi_failure(void *ptr)
+#include
+
+/* Returns the target of the indirect call that follows the trap in `addr`. */
+void * __weak arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
 {
-	if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
-		WARN_RATELIMIT(1, "CFI failure (target: %pS):\n", ptr);
-	else
-		panic("CFI failure (target: %pS)\n", ptr);
+	return NULL;
 }
 
 #ifdef CONFIG_MODULES
+/* Populates `kcfi_trap(_end)?` fields in `struct module`.
+ */
+void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
+			 struct module *mod)
+{
+	char *secstrings;
+	unsigned int i;
+
+	mod->kcfi_traps = NULL;
+	mod->kcfi_traps_end = NULL;
+
+	secstrings = (char *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+	for (i = 1; i < hdr->e_shnum; i++) {
+		if (strcmp(secstrings+sechdrs[i].sh_name, "__kcfi_traps"))
+			continue;
-static inline cfi_check_fn find_module_check_fn(unsigned long ptr)
+		mod->kcfi_traps = (unsigned long *)sechdrs[i].sh_addr;
+		mod->kcfi_traps_end = (unsigned long *)(sechdrs[i].sh_addr + sechdrs[i].sh_size);
+		break;
+	}
+}
+
+static bool is_module_cfi_trap(unsigned long addr)
 {
-	cfi_check_fn fn = NULL;
+	bool found = false;
 	struct module *mod;
+	unsigned long *p;
 
 	rcu_read_lock_sched_notrace();
-	mod = __module_address(ptr);
+
+	mod = __module_address(addr);
 	if (mod)
-		fn = mod->cfi_check;
+		for (p = mod->kcfi_traps; !found && p < mod->kcfi_traps_end; ++p)
+			found = (*p == addr);
+
 	rcu_read_unlock_sched_notrace();
 
-	return fn;
+	return found;
 }
 
-static inline cfi_check_fn find_check_fn(unsigned long ptr)
-{
-	cfi_check_fn fn = NULL;
+#else /* CONFIG_MODULES */
 
-	if (is_kernel_text(ptr))
-		return __cfi_check;
+static inline bool is_module_cfi_trap(unsigned long addr)
+{
+	return false;
+}
 
-	/*
-	 * Indirect call checks can happen when RCU is not watching. Both
-	 * the shadow and __module_address use RCU, so we need to wake it
-	 * up if necessary.
-	 */
-	RCU_NONIDLE({
-		fn = find_module_check_fn(ptr);
-	});
+#endif /* CONFIG_MODULES */
 
-	return fn;
-}
+extern unsigned long __start___kcfi_traps[];
+extern unsigned long __stop___kcfi_traps[];
 
-void __cfi_slowpath_diag(uint64_t id, void *ptr, void *diag)
+static bool is_cfi_trap(unsigned long addr)
 {
-	cfi_check_fn fn = find_check_fn((unsigned long)ptr);
+	unsigned long *p;
 
-	if (likely(fn))
-		fn(id, ptr, diag);
-	else /* Don't allow unchecked modules */
-		handle_cfi_failure(ptr);
+	for (p = __start___kcfi_traps; p < __stop___kcfi_traps; ++p)
+		if (*p == addr)
+			return true;
+
+	return is_module_cfi_trap(addr);
 }
-EXPORT_SYMBOL(__cfi_slowpath_diag);
 
-#else /* !CONFIG_MODULES */
+#define __CFI_ERROR_FMT "CFI failure at %pS (target: %pS)\n"
 
-void __cfi_slowpath_diag(uint64_t id, void *ptr, void *diag)
+static enum bug_trap_type __report_cfi(void *addr, void *target, struct pt_regs *regs)
 {
-	handle_cfi_failure(ptr); /* No modules */
+	if (IS_ENABLED(CONFIG_CFI_PERMISSIVE)) {
+		pr_warn(__CFI_ERROR_FMT, addr, target);
+		__warn(NULL, 0, addr, 0, regs, NULL);
+
+		return BUG_TRAP_TYPE_WARN;
+	} else {
+		pr_crit(__CFI_ERROR_FMT, addr, target);
+		return BUG_TRAP_TYPE_BUG;
+	}
 }
-EXPORT_SYMBOL(__cfi_slowpath_diag);
-
-#endif /* CONFIG_MODULES */
 
-void cfi_failure_handler(void *data, void *ptr, void *vtable)
+enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs)
 {
-	handle_cfi_failure(ptr);
+	if (!is_cfi_trap(addr))
+		return BUG_TRAP_TYPE_NONE;
+
+	return __report_cfi((void *)addr, arch_get_cfi_target(addr, regs), regs);
 }
-EXPORT_SYMBOL(cfi_failure_handler);
diff --git a/kernel/module.c b/kernel/module.c
index 296fe02323e9..411ae8c358e6 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "module-internal.h"
 
@@ -3871,8 +3872,9 @@ static int complete_formation(struct module *mod, struct load_info *info)
 	if (err < 0)
 		goto out;
 
-	/* This relies on module_mutex for list integrity. */
+	/* These rely on module_mutex for list integrity. */
 	module_bug_finalize(info->hdr, info->sechdrs, mod);
+	module_cfi_finalize(info->hdr, info->sechdrs, mod);
 
 	module_enable_ro(mod, false);
 	module_enable_nx(mod);
@@ -3928,8 +3930,6 @@ static int unknown_module_param_cb(char *param, char *val, const char *modname,
 	return 0;
 }
 
-static void cfi_init(struct module *mod);
-
 /*
  * Allocate and load the module: note that size of section 0 is always
  * zero, and we rely on this for optional sections.
@@ -4059,9 +4059,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 
 	flush_module_icache(mod);
 
-	/* Setup CFI for the module. */
-	cfi_init(mod);
-
 	/* Now copy in args */
 	mod->args = strndup_user(uargs, ~0UL >> 1);
 	if (IS_ERR(mod->args)) {
@@ -4502,31 +4499,6 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
 #endif /* CONFIG_LIVEPATCH */
 #endif /* CONFIG_KALLSYMS */
 
-static void cfi_init(struct module *mod)
-{
-#ifdef CONFIG_CFI_CLANG
-	initcall_t *init;
-	exitcall_t *exit;
-
-	rcu_read_lock_sched();
-	mod->cfi_check = (cfi_check_fn)
-		find_kallsyms_symbol_value(mod, "__cfi_check");
-	init = (initcall_t *)
-		find_kallsyms_symbol_value(mod, "__cfi_jt_init_module");
-	exit = (exitcall_t *)
-		find_kallsyms_symbol_value(mod, "__cfi_jt_cleanup_module");
-	rcu_read_unlock_sched();
-
-	/* Fix init/exit functions to point to the CFI jump table */
-	if (init)
-		mod->init = *init;
-#ifdef CONFIG_MODULE_UNLOAD
-	if (exit)
-		mod->exit = *exit;
-#endif
-#endif
-}
-
 /* Maximum number of characters written by module_flags() */
 #define MODULE_FLAGS_BUF_SIZE (TAINT_FLAGS_COUNT + 4)
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index 1d0e1e4dc3d2..ccd75d283840 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -3,20 +3,11 @@
  * Archs are free to supply their own linker scripts.  ld will
  * combine them automatically.
  */
-#ifdef CONFIG_CFI_CLANG
-# include
-# define ALIGN_CFI		ALIGN(PAGE_SIZE)
-# define SANITIZER_DISCARDS	*(.eh_frame)
-#else
-# define ALIGN_CFI
-# define SANITIZER_DISCARDS
-#endif
-
 SECTIONS {
 	/DISCARD/ : {
 		*(.discard)
 		*(.discard.*)
-		SANITIZER_DISCARDS
+		*(.kcfi_types)
 	}
 
 	__ksymtab		0 : { *(SORT(___ksymtab+*)) }
@@ -31,6 +22,10 @@ SECTIONS {
 	__patchable_function_entries : { *(__patchable_function_entries) }
 
+#ifdef CONFIG_CFI_CLANG
+	__kcfi_traps : { KEEP(*(.kcfi_traps)) }
+#endif
+
 #ifdef CONFIG_LTO_CLANG
 	/*
	 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
@@ -51,15 +46,6 @@ SECTIONS {
 		*(.rodata .rodata.[0-9a-zA-Z_]*)
 		*(.rodata..L*)
 	}
-
-	/*
-	 * With CONFIG_CFI_CLANG, we assume __cfi_check is at the beginning
-	 * of the .text section, and is aligned to PAGE_SIZE.
-	 */
-	.text : ALIGN_CFI {
-		*(.text.__cfi_check)
-		*(.text .text.[0-9a-zA-Z_]* .text..L.cfi*)
-	}
 #endif
 }

From patchwork Fri Apr 29 20:36:30 2022
X-Patchwork-Submitter: Sami Tolvanen <samitolvanen@google.com>
X-Patchwork-Id: 12832747
Date: Fri, 29 Apr 2022 13:36:30 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-8-samitolvanen@google.com>
Subject: [RFC PATCH 07/21] cfi: Add type helper macros
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
X-Mailing-List: linux-hardening@vger.kernel.org

With CONFIG_CFI_CLANG, assembly functions called indirectly from C
code must be annotated with type identifiers to pass CFI checking.
The compiler emits a __kcfi_typeid_<function> symbol for each
address-taken function declaration in C, which contains the expected
type identifier. Add typed versions of SYM_FUNC_START and
SYM_FUNC_START_ALIAS, which emit the type identifier before the
function.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/cfi_types.h | 57 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 include/linux/cfi_types.h

diff --git a/include/linux/cfi_types.h b/include/linux/cfi_types.h
new file mode 100644
index 000000000000..dd16e755a197
--- /dev/null
+++ b/include/linux/cfi_types.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Clang Control Flow Integrity (CFI) type definitions.
+ */
+#ifndef _LINUX_CFI_TYPES_H
+#define _LINUX_CFI_TYPES_H
+
+#ifdef CONFIG_CFI_CLANG
+#include
+
+#ifdef __ASSEMBLY__
+/*
+ * Use the __kcfi_typeid_<function> type identifier symbol to
+ * annotate indirectly called assembly functions. The compiler emits
+ * these symbols for all address-taken function declarations in C
+ * code.
+ */
+#ifndef __CFI_TYPE
+#define __CFI_TYPE(name)				\
+	.4byte __kcfi_typeid_##name
+#endif
+
+#define SYM_TYPED_ENTRY(name, fname, linkage, align...)	\
+	linkage(name) ASM_NL				\
+	align ASM_NL					\
+	__CFI_TYPE(fname) ASM_NL			\
+	name:
+
+#define __SYM_TYPED_FUNC_START_ALIAS(name, fname)	\
+	SYM_TYPED_ENTRY(name, fname, SYM_L_GLOBAL, SYM_A_ALIGN)
+
+#define __SYM_TYPED_FUNC_START(name, fname)		\
+	SYM_TYPED_ENTRY(name, fname, SYM_L_GLOBAL, SYM_A_ALIGN)
+
+#endif /* __ASSEMBLY__ */
+
+#else /* CONFIG_CFI_CLANG */
+
+#ifdef __ASSEMBLY__
+#define __SYM_TYPED_FUNC_START_ALIAS(name, fname)	\
+	SYM_FUNC_START_ALIAS(name)
+
+#define __SYM_TYPED_FUNC_START(name, fname)		\
+	SYM_FUNC_START(name)
+#endif /* __ASSEMBLY__ */
+
+#endif /* CONFIG_CFI_CLANG */
+
+#ifdef __ASSEMBLY__
+#define SYM_TYPED_FUNC_START_ALIAS(name)		\
+	__SYM_TYPED_FUNC_START_ALIAS(name, name)
+
+#define SYM_TYPED_FUNC_START(name)			\
+	__SYM_TYPED_FUNC_START(name, name)
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_CFI_TYPES_H */

From patchwork Fri Apr 29 20:36:31 2022
X-Patchwork-Submitter: Sami Tolvanen <samitolvanen@google.com>
X-Patchwork-Id: 12832750
Date: Fri, 29 Apr 2022 13:36:31 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-9-samitolvanen@google.com>
Subject: [RFC PATCH 08/21] arm64/crypto: Add types to indirect called assembly functions
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
X-Mailing-List: linux-hardening@vger.kernel.org

With CONFIG_CFI_CLANG, assembly functions indirectly called from C
code must be annotated with type identifiers to pass CFI checking.
Use SYM_TYPED_FUNC_START for indirectly called functions in the
crypto code.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm64/crypto/ghash-ce-core.S | 5 +++--
 arch/arm64/crypto/sm3-ce-core.S   | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 7868330dd54e..ebe5558929b7 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -6,6 +6,7 @@
  */
 
 #include
+#include
 #include
 
 	SHASH		.req	v0
@@ -350,11 +351,11 @@ CPU_LE(	rev64		T1.16b, T1.16b	)
  * void pmull_ghash_update(int blocks, u64 dg[], const char *src,
  *			   struct ghash_key const *k, const char *head)
  */
-SYM_FUNC_START(pmull_ghash_update_p64)
+SYM_TYPED_FUNC_START(pmull_ghash_update_p64)
 	__pmull_ghash	p64
 SYM_FUNC_END(pmull_ghash_update_p64)
 
-SYM_FUNC_START(pmull_ghash_update_p8)
+SYM_TYPED_FUNC_START(pmull_ghash_update_p8)
 	__pmull_ghash	p8
 SYM_FUNC_END(pmull_ghash_update_p8)
diff --git a/arch/arm64/crypto/sm3-ce-core.S b/arch/arm64/crypto/sm3-ce-core.S
index ef97d3187cb7..ca70cfacd0d0 100644
--- a/arch/arm64/crypto/sm3-ce-core.S
+++ b/arch/arm64/crypto/sm3-ce-core.S
@@ -6,6 +6,7 @@
  */
 
 #include
+#include
 #include
 
 	.irp	b, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
@@ -73,7 +74,7 @@
  *			  int blocks)
  */
 	.text
-SYM_FUNC_START(sm3_ce_transform)
+SYM_TYPED_FUNC_START(sm3_ce_transform)
 	/* load state */
 	ld1	{v8.4s-v9.4s}, [x0]
 	rev64	v8.4s, v8.4s

From patchwork Fri Apr 29 20:36:32 2022
From patchwork Fri Apr 29 20:36:32 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832749
Date: Fri, 29 Apr 2022 13:36:32 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-10-samitolvanen@google.com>
Subject: [RFC PATCH 09/21] arm64: Add CFI error handling
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org, Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org, llvm@lists.linux.dev, Sami Tolvanen
List-ID: X-Mailing-List: linux-hardening@vger.kernel.org

With -fsanitize=kcfi, CFI always traps. Add arm64 support for handling
CFI failures and determining the target address.

Signed-off-by: Sami Tolvanen
---
 arch/arm64/include/asm/brk-imm.h |  2 ++
 arch/arm64/include/asm/insn.h    |  1 +
 arch/arm64/kernel/traps.c        | 57 ++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
index ec7720dbe2c8..3a50b70b4404 100644
--- a/arch/arm64/include/asm/brk-imm.h
+++ b/arch/arm64/include/asm/brk-imm.h
@@ -16,6 +16,7 @@
 * 0x400: for dynamic BRK instruction
 * 0x401: for compile time BRK instruction
 * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x801: Control-Flow Integrity traps
 * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)
 */
 #define KPROBES_BRK_IMM			0x004
@@ -25,6 +26,7 @@
 #define KGDB_DYN_DBG_BRK_IMM		0x400
 #define KGDB_COMPILED_DBG_BRK_IMM	0x401
 #define BUG_BRK_IMM			0x800
+#define CFI_BRK_IMM			0x801
 #define KASAN_BRK_IMM			0x900
 #define KASAN_BRK_MASK			0x0ff

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 1e5760d567ae..12225bdfa776 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -334,6 +334,7 @@
 __AARCH64_INSN_FUNCS(store_pre,	0x3FE00C00, 0x38000C00)
 __AARCH64_INSN_FUNCS(load_pre,	0x3FE00C00, 0x38400C00)
 __AARCH64_INSN_FUNCS(store_post,	0x3FE00C00, 0x38000400)
 __AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
+__AARCH64_INSN_FUNCS(ldur,	0x3FE00C00, 0x38400000)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
 __AARCH64_INSN_FUNCS(ldclr,	0x3F20FC00, 0x38201000)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 0529fd57567e..b524411ba663 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -990,6 +991,55 @@ static struct break_hook bug_break_hook = {
	.imm = BUG_BRK_IMM,
 };

+#ifdef CONFIG_CFI_CLANG
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
+{
+	/* The expected CFI check instruction sequence:
+	 *   ldur    wA, [xN, #-4]
+	 *   movk    wB, #nnnnn
+	 *   movk    wB, #nnnnn, lsl #16
+	 *   cmp     wA, wB
+	 *   b.eq    .Ltmp1
+	 *   brk     #0x801 ; <- addr
+	 * .Ltmp1:
+	 *
+	 * Therefore, the target address is in the xN register, which we can
+	 * decode from the ldur instruction.
+	 */
+	u32 insn, rn;
+	void *p = (void *)(addr - 5 * AARCH64_INSN_SIZE);
+
+	if (aarch64_insn_read(p, &insn) || !aarch64_insn_is_ldur(insn))
+		return NULL;
+
+	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, insn);
+	return (void *)regs->regs[rn];
+}
+
+static int cfi_handler(struct pt_regs *regs, unsigned int esr)
+{
+	switch (report_cfi(regs->pc, regs)) {
+	case BUG_TRAP_TYPE_BUG:
+		die("Oops - CFI", regs, 0);
+		break;
+
+	case BUG_TRAP_TYPE_WARN:
+		break;
+
+	default:
+		return DBG_HOOK_ERROR;
+	}
+
+	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+	return DBG_HOOK_HANDLED;
+}
+
+static struct break_hook cfi_break_hook = {
+	.fn = cfi_handler,
+	.imm = CFI_BRK_IMM,
+};
+#endif /* CONFIG_CFI_CLANG */
+
 static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
 {
	pr_err("%s generated an invalid instruction at %pS!\n",
@@ -1063,6 +1113,10 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
	if ((comment & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
		return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
+#endif
+#ifdef CONFIG_CFI_CLANG
+	if ((esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == CFI_BRK_IMM)
+		return cfi_handler(regs, esr) != DBG_HOOK_HANDLED;
 #endif
	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
 }
@@ -1070,6 +1124,9 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 void __init trap_init(void)
 {
	register_kernel_break_hook(&bug_break_hook);
+#ifdef CONFIG_CFI_CLANG
+	register_kernel_break_hook(&cfi_break_hook);
+#endif
	register_kernel_break_hook(&fault_break_hook);
 #ifdef CONFIG_KASAN_SW_TAGS
	register_kernel_break_hook(&kasan_break_hook);
From patchwork Fri Apr 29 20:36:33 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832748
Date: Fri, 29 Apr 2022 13:36:33 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-11-samitolvanen@google.com>
Subject: [RFC PATCH 10/21] treewide: Drop function_nocfi
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org, Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org, llvm@lists.linux.dev, Sami Tolvanen
List-ID: X-Mailing-List: linux-hardening@vger.kernel.org

With -fsanitize=kcfi, we no longer need function_nocfi() as the compiler
won't change function references to point to a jump table. Remove all
implementations and uses of the macro.
Signed-off-by: Sami Tolvanen
---
 arch/arm64/include/asm/compiler.h         | 16 ----------------
 arch/arm64/include/asm/ftrace.h           |  2 +-
 arch/arm64/include/asm/mmu_context.h      |  2 +-
 arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
 arch/arm64/kernel/cpufeature.c            |  2 +-
 arch/arm64/kernel/ftrace.c                |  2 +-
 arch/arm64/kernel/machine_kexec.c         |  2 +-
 arch/arm64/kernel/psci.c                  |  2 +-
 arch/arm64/kernel/smp_spin_table.c        |  2 +-
 drivers/firmware/psci/psci.c              |  4 ++--
 drivers/misc/lkdtm/usercopy.c             |  2 +-
 include/linux/compiler.h                  | 10 ----------
 12 files changed, 11 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index dc3ea4080e2e..6fb2e6bcc392 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -23,20 +23,4 @@
 #define __builtin_return_address(val)					\
	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))

-#ifdef CONFIG_CFI_CLANG
-/*
- * With CONFIG_CFI_CLANG, the compiler replaces function address
- * references with the address of the function's CFI jump table
- * entry. The function_nocfi macro always returns the address of the
- * actual function instead.
- */
-#define function_nocfi(x) ({						\
-	void *addr;							\
-	asm("adrp %0, " __stringify(x) "\n\t"				\
-	    "add  %0, %0, :lo12:" __stringify(x)			\
-	    : "=r" (addr));						\
-	addr;								\
-})
-#endif
-
 #endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index 1494cfa8639b..c96d47cb8f46 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -26,7 +26,7 @@
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #else
-#define MCOUNT_ADDR		((unsigned long)function_nocfi(_mcount))
+#define MCOUNT_ADDR		((unsigned long)_mcount)
 #endif

 /* The BL at the callsite's adjusted rec->ip */
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 6770667b34a3..c9df5ab2c448 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -164,7 +164,7 @@ static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
		ttbr1 |= TTBR_CNP_BIT;
	}

-	replace_phys = (void *)__pa_symbol(function_nocfi(idmap_cpu_replace_ttbr1));
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);

	cpu_install_idmap();
	replace_phys(ttbr1);
diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index bfeeb5319abf..b1990e38aed0 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -99,7 +99,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
	 * that read this address need to convert this address to the
	 * Boot-Loader's endianness before jumping.
	 */
-	writeq_relaxed(__pa_symbol(function_nocfi(secondary_entry)),
+	writeq_relaxed(__pa_symbol(secondary_entry),
		       &mailbox->entry_point);
	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d72c4b4d389c..dae07d99508b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1619,7 +1619,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
	if (arm64_use_ng_mappings)
		return;

-	remap_fn = (void *)__pa_symbol(function_nocfi(idmap_kpti_install_ng_mappings));
+	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);

	cpu_install_idmap();
	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 4506c4a90ac1..4128ca6ed485 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -56,7 +56,7 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
	unsigned long pc;
	u32 new;

-	pc = (unsigned long)function_nocfi(ftrace_call);
+	pc = (unsigned long)ftrace_call;
	new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func,
					  AARCH64_INSN_BRANCH_LINK);
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index e16b248699d5..4eb5388aa5a6 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -204,7 +204,7 @@ void machine_kexec(struct kimage *kimage)
		typeof(cpu_soft_restart) *restart;

		cpu_install_idmap();
-		restart = (void *)__pa_symbol(function_nocfi(cpu_soft_restart));
+		restart = (void *)__pa_symbol(cpu_soft_restart);
		restart(is_hyp_nvhe(), kimage->start, kimage->arch.dtb_mem,
			0, 0);
	} else {
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index ab7f4c476104..29a8e444db83 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -38,7 +38,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)

 static int cpu_psci_cpu_boot(unsigned int cpu)
 {
-	phys_addr_t pa_secondary_entry = __pa_symbol(function_nocfi(secondary_entry));
+	phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
	int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
	if (err)
		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 7e1624ecab3c..49029eace3ad 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -66,7 +66,7 @@ static int smp_spin_table_cpu_init(unsigned int cpu)
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
 {
	__le64 __iomem *release_addr;
-	phys_addr_t pa_holding_pen = __pa_symbol(function_nocfi(secondary_holding_pen));
+	phys_addr_t pa_holding_pen = __pa_symbol(secondary_holding_pen);

	if (!cpu_release_addr[cpu])
		return -ENODEV;
diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
index cfb448eabdaa..aa3133cafced 100644
--- a/drivers/firmware/psci/psci.c
+++ b/drivers/firmware/psci/psci.c
@@ -334,7 +334,7 @@ static int __init psci_features(u32 psci_func_id)
 static int psci_suspend_finisher(unsigned long state)
 {
	u32 power_state = state;
-	phys_addr_t pa_cpu_resume = __pa_symbol(function_nocfi(cpu_resume));
+	phys_addr_t pa_cpu_resume = __pa_symbol(cpu_resume);

	return psci_ops.cpu_suspend(power_state, pa_cpu_resume);
 }
@@ -359,7 +359,7 @@ int psci_cpu_suspend_enter(u32 state)

 static int psci_system_suspend(unsigned long unused)
 {
-	phys_addr_t pa_cpu_resume = __pa_symbol(function_nocfi(cpu_resume));
+	phys_addr_t pa_cpu_resume = __pa_symbol(cpu_resume);

	return invoke_psci_fn(PSCI_FN_NATIVE(1_0, SYSTEM_SUSPEND),
			      pa_cpu_resume, 0, 0);
diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
index 9161ce7ed47a..79a17b1c4885 100644
--- a/drivers/misc/lkdtm/usercopy.c
+++ b/drivers/misc/lkdtm/usercopy.c
@@ -318,7 +318,7 @@ void lkdtm_USERCOPY_KERNEL(void)
	pr_info("attempting bad copy_to_user from kernel text: %px\n",
		vm_mmap);
-	if (copy_to_user((void __user *)user_addr, function_nocfi(vm_mmap),
+	if (copy_to_user((void __user *)user_addr, vm_mmap,
			 unconst + PAGE_SIZE)) {
		pr_warn("copy_to_user failed, but lacked Oops\n");
		goto free_user;
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 9303f5fe5d89..80ed9644d129 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -203,16 +203,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
	__v;								\
 })

-/*
- * With CONFIG_CFI_CLANG, the compiler replaces function addresses in
- * instrumented C code with jump table addresses. Architectures that
- * support CFI can define this macro to return the actual function address
- * when needed.
- */
-#ifndef function_nocfi
-#define function_nocfi(x) (x)
-#endif
-
 #endif /* __KERNEL__ */

 /*
From patchwork Fri Apr 29 20:36:34 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832752
Date: Fri, 29 Apr 2022 13:36:34 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-12-samitolvanen@google.com>
Subject: [RFC PATCH 11/21] treewide: Drop WARN_ON_FUNCTION_MISMATCH
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org, Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org, llvm@lists.linux.dev, Sami Tolvanen
List-ID: X-Mailing-List: linux-hardening@vger.kernel.org

CONFIG_CFI_CLANG no longer breaks cross-module function address
equality, which makes WARN_ON_FUNCTION_MISMATCH unnecessary. Remove the
definition and switch back to WARN_ON_ONCE.
Signed-off-by: Sami Tolvanen
---
 include/asm-generic/bug.h | 16 ----------------
 kernel/kthread.c          |  3 +--
 kernel/workqueue.c        |  2 +-
 3 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
index edb0e2a602a8..a4c116dec698 100644
--- a/include/asm-generic/bug.h
+++ b/include/asm-generic/bug.h
@@ -219,22 +219,6 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
 # define WARN_ON_SMP(x)			({0;})
 #endif

-/*
- * WARN_ON_FUNCTION_MISMATCH() warns if a value doesn't match a
- * function address, and can be useful for catching issues with
- * callback functions, for example.
- *
- * With CONFIG_CFI_CLANG, the warning is disabled because the
- * compiler replaces function addresses taken in C code with
- * local jump table addresses, which breaks cross-module function
- * address equality.
- */
-#if defined(CONFIG_CFI_CLANG) && defined(CONFIG_MODULES)
-# define WARN_ON_FUNCTION_MISMATCH(x, fn) ({ 0; })
-#else
-# define WARN_ON_FUNCTION_MISMATCH(x, fn) WARN_ON_ONCE((x) != (fn))
-#endif
-
 #endif /* __ASSEMBLY__ */

 #endif
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 50265f69a135..dfeb87876b4a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1050,8 +1050,7 @@ static void __kthread_queue_delayed_work(struct kthread_worker *worker,
	struct timer_list *timer = &dwork->timer;
	struct kthread_work *work = &dwork->work;

-	WARN_ON_FUNCTION_MISMATCH(timer->function,
-				  kthread_delayed_work_timer_fn);
+	WARN_ON_ONCE(timer->function != kthread_delayed_work_timer_fn);

	/*
	 * If @delay is 0, queue @dwork->work immediately. This is for
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0d2514b4ff0d..18c1a1c09684 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1651,7 +1651,7 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
	struct work_struct *work = &dwork->work;

	WARN_ON_ONCE(!wq);
-	WARN_ON_FUNCTION_MISMATCH(timer->function, delayed_work_timer_fn);
+	WARN_ON_ONCE(timer->function != delayed_work_timer_fn);
	WARN_ON_ONCE(timer_pending(timer));
	WARN_ON_ONCE(!list_empty(&work->entry));
Date: Fri, 29 Apr 2022 13:36:35 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-13-samitolvanen@google.com>
Subject: [RFC PATCH 12/21] treewide: Drop __cficanonical
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
X-Mailing-List: linux-hardening@vger.kernel.org

CONFIG_CFI_CLANG no longer uses a jump table and therefore won't change
function references to point elsewhere. Remove the __cficanonical
attribute and all uses of it.

Signed-off-by: Sami Tolvanen
---
 include/linux/compiler_types.h | 4 ----
 include/linux/init.h           | 4 ++--
 include/linux/pci.h            | 4 ++--
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1c2c33ae1b37..bdd2526af46a 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -263,10 +263,6 @@ struct ftrace_likely_data {
 # define __nocfi
 #endif
 
-#ifndef __cficanonical
-# define __cficanonical
-#endif
-
 /*
  * Any place that could be marked with the "alloc_size" attribute is also
  * a place to be marked with the "malloc" attribute. Do this as part of the
diff --git a/include/linux/init.h b/include/linux/init.h
index baf0b29a7010..76058c9e0399 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -220,8 +220,8 @@ extern bool initcall_debug;
	__initcall_name(initstub, __iid, id)

 #define __define_initcall_stub(__stub, fn)	\
-	int __init __cficanonical __stub(void);	\
-	int __init __cficanonical __stub(void)	\
+	int __init __stub(void);		\
+	int __init __stub(void)			\
	{					\
		return fn();			\
	}					\
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 60adf42460ab..3cc50c4e3c64 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2021,8 +2021,8 @@ enum pci_fixup_pass {
 #ifdef CONFIG_LTO_CLANG
 #define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
				    class_shift, hook, stub)		\
-	void __cficanonical stub(struct pci_dev *dev);			\
-	void __cficanonical stub(struct pci_dev *dev)			\
+	void stub(struct pci_dev *dev);					\
+	void stub(struct pci_dev *dev)					\
	{								\
		hook(dev);						\
	}								\
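The patch above removes __cficanonical because address-taken functions no longer resolve to jump-table entries. For context, the failure mode the attribute used to guard against can be sketched in plain C (hypothetical names, user-space illustration only): code that compares function pointers needs every `&fn` in every translation unit to yield the same address, which the old Clang CFI jump tables did not guarantee for non-canonical symbols.

```c
/* Hypothetical callback check: correctness depends on &default_handler
 * evaluating to the same address everywhere it is taken. Under the old
 * jump-table CFI scheme this needed __cficanonical; with kCFI, a plain
 * address comparison is reliable. */
static void default_handler(void)
{
}

static int uses_default(void (*cb)(void))
{
	return cb == default_handler;
}
```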
From patchwork Fri Apr 29 20:36:36 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832753
Date: Fri, 29 Apr 2022 13:36:36 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-14-samitolvanen@google.com>
Subject: [RFC PATCH 13/21] cfi: Add the cfi_unchecked macro
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
X-Mailing-List: linux-hardening@vger.kernel.org

The cfi_unchecked macro allows CFI checking to be disabled for a
specific indirect call expression by passing the expression as an
argument to the macro. For example:

  static void call(void (*f)(void))
  {
          cfi_unchecked(f());
  }

Signed-off-by: Sami Tolvanen
---
 include/linux/compiler-clang.h | 2 ++
 include/linux/compiler_types.h | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index c4ff42859077..0d6a0e7e36dc 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -94,4 +94,6 @@
 #if CONFIG_CFI_CLANG
 /* Disable CFI checking inside a function. */
 #define __nocfi __attribute__((__no_sanitize__("kcfi")))
+/* Disable CFI checking for the indirect call expression. */
+#define cfi_unchecked(expr) __builtin_kcfi_call_unchecked(expr)
 #endif
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index bdd2526af46a..41f547fe9724 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -263,6 +263,10 @@ struct ftrace_likely_data {
 # define __nocfi
 #endif
 
+#ifndef cfi_unchecked
+# define cfi_unchecked(expr) expr
+#endif
+
 /*
  * Any place that could be marked with the "alloc_size" attribute is also
  * a place to be marked with the "malloc" attribute.
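Note the fallback path in the patch: when the compiler doesn't provide the kCFI builtin, cfi_unchecked(expr) simply evaluates the expression, so call sites can use it unconditionally. A minimal user-space sketch of that fallback behavior (the __builtin_kcfi_call_unchecked branch itself requires a Clang with kCFI support, so only the generic definition is exercised here):

```c
/* Fallback definition, mirroring the compiler_types.h hunk above: without
 * kCFI, the macro is a transparent wrapper around the call expression. */
#ifndef cfi_unchecked
# define cfi_unchecked(expr) expr
#endif

static int add_one(int x)
{
	return x + 1;
}

/* An indirect call whose CFI check would be suppressed under kCFI;
 * without kCFI it behaves exactly like a plain f(x). */
static int call_unchecked(int (*f)(int), int x)
{
	return cfi_unchecked(f(x));
}
```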
From patchwork Fri Apr 29 20:36:37 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832758
Date: Fri, 29 Apr 2022 13:36:37 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-15-samitolvanen@google.com>
Subject: [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
X-Mailing-List: linux-hardening@vger.kernel.org

Include the function arguments in the static call macro to make it
possible to add a wrapper for the call. This is needed with
CONFIG_CFI_CLANG to disable indirect call checking for static calls
that are patched into direct calls at runtime.

Users of static_call were updated using the following Coccinelle
script, and the result was manually adjusted to preserve coding style:

@@
expression name;
expression list args;
identifier static_call =~ "^static_call(_mod|_cond)?$";
@@

- static_call(name)(args)
+ static_call(name, args)

Signed-off-by: Sami Tolvanen
---
 arch/arm/include/asm/paravirt.h           |   2 +-
 arch/arm64/include/asm/paravirt.h         |   2 +-
 arch/x86/crypto/aesni-intel_glue.c        |   7 +-
 arch/x86/events/core.c                    |  40 +--
 arch/x86/include/asm/kvm_host.h           |   6 +-
 arch/x86/include/asm/paravirt.h           |   4 +-
 arch/x86/kvm/cpuid.c                      |   2 +-
 arch/x86/kvm/hyperv.c                     |   4 +-
 arch/x86/kvm/irq.c                        |   2 +-
 arch/x86/kvm/kvm_cache_regs.h             |  10 +-
 arch/x86/kvm/lapic.c                      |  32 +--
 arch/x86/kvm/mmu.h                        |   4 +-
 arch/x86/kvm/mmu/mmu.c                    |   8 +-
 arch/x86/kvm/mmu/spte.c                   |   4 +-
 arch/x86/kvm/pmu.c                        |   4 +-
 arch/x86/kvm/trace.h                      |   4 +-
 arch/x86/kvm/x86.c                        | 326 +++++++++++-----------
 arch/x86/kvm/x86.h                        |   4 +-
 arch/x86/kvm/xen.c                        |   4 +-
 drivers/cpufreq/amd-pstate.c              |   8 +-
 include/linux/entry-common.h              |   2 +-
 include/linux/kernel.h                    |   2 +-
 include/linux/perf_event.h                |   6 +-
 include/linux/sched.h                     |   2 +-
 include/linux/static_call.h               |  16 +-
 include/linux/static_call_types.h         |  10 +-
 include/linux/tracepoint.h                |   2 +-
 kernel/static_call_inline.c               |   2 +-
 kernel/trace/bpf_trace.c                  |   2 +-
 security/keys/trusted-keys/trusted_core.c |  14 +-
 30 files changed, 267 insertions(+), 268 deletions(-)

diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
index 95d5b0d625cd..43c419eadb9a 100644
--- a/arch/arm/include/asm/paravirt.h
+++ b/arch/arm/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 #endif
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..35a9d649c448 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 int __init pv_time_init(void);
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 41901ba9d3a2..06182c068145 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -507,10 +507,9 @@ static int ctr_crypt(struct skcipher_request *req)
	while ((nbytes = walk.nbytes) > 0) {
		kernel_fpu_begin();
		if (nbytes & AES_BLOCK_MASK)
-			static_call(aesni_ctr_enc_tfm)(ctx, walk.dst.virt.addr,
-						       walk.src.virt.addr,
-						       nbytes & AES_BLOCK_MASK,
-						       walk.iv);
+			static_call(aesni_ctr_enc_tfm, ctx,
+				    walk.dst.virt.addr, walk.src.virt.addr,
+				    nbytes & AES_BLOCK_MASK, walk.iv);
		nbytes &= ~AES_BLOCK_MASK;
 
		if (walk.nbytes == walk.total && nbytes > 0) {
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index eef816fc216d..74315c87220b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -695,7 +695,7 @@ void x86_pmu_disable_all(void)
 
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
-	return static_call(x86_pmu_guest_get_msrs)(nr);
+	return static_call(x86_pmu_guest_get_msrs, nr);
 }
 EXPORT_SYMBOL_GPL(perf_guest_get_msrs);
 
@@ -726,7 +726,7 @@ static void x86_pmu_disable(struct pmu *pmu)
	cpuc->enabled = 0;
	barrier();
 
-	static_call(x86_pmu_disable_all)();
+	static_call(x86_pmu_disable_all);
 }
 
 void x86_pmu_enable_all(int added)
@@ -991,7 +991,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
		n0 -= cpuc->n_txn;
 
-	static_call_cond(x86_pmu_start_scheduling)(cpuc);
+	static_call_cond(x86_pmu_start_scheduling, cpuc);
 
	for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) {
		c = cpuc->event_constraint[i];
@@ -1008,7 +1008,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
		 * change due to external factors (sibling state, allow_tfa).
		 */
		if (!c || (c->flags & PERF_X86_EVENT_DYNAMIC)) {
-			c = static_call(x86_pmu_get_event_constraints)(cpuc, i, cpuc->event_list[i]);
+			c = static_call(x86_pmu_get_event_constraints, cpuc, i, cpuc->event_list[i]);
			cpuc->event_constraint[i] = c;
		}
 
@@ -1090,7 +1090,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
	 */
	if (!unsched && assign) {
		for (i = 0; i < n; i++)
-			static_call_cond(x86_pmu_commit_scheduling)(cpuc, i, assign[i]);
+			static_call_cond(x86_pmu_commit_scheduling, cpuc, i, assign[i]);
	} else {
		for (i = n0; i < n; i++) {
			e = cpuc->event_list[i];
@@ -1098,13 +1098,13 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
			/*
			 * release events that failed scheduling
			 */
-			static_call_cond(x86_pmu_put_event_constraints)(cpuc, e);
+			static_call_cond(x86_pmu_put_event_constraints, cpuc, e);
 
			cpuc->event_constraint[i] = NULL;
		}
	}
 
-	static_call_cond(x86_pmu_stop_scheduling)(cpuc);
+	static_call_cond(x86_pmu_stop_scheduling, cpuc);
 
	return unsched ? -EINVAL : 0;
 }
@@ -1217,7 +1217,7 @@ static inline void x86_assign_hw_event(struct perf_event *event,
	hwc->last_cpu = smp_processor_id();
	hwc->last_tag = ++cpuc->tags[i];
 
-	static_call_cond(x86_pmu_assign)(event, idx);
+	static_call_cond(x86_pmu_assign, event, idx);
 
	switch (hwc->idx) {
	case INTEL_PMC_IDX_FIXED_BTS:
@@ -1347,7 +1347,7 @@ static void x86_pmu_enable(struct pmu *pmu)
	cpuc->enabled = 1;
	barrier();
 
-	static_call(x86_pmu_enable_all)(added);
+	static_call(x86_pmu_enable_all, added);
 }
 
 static DEFINE_PER_CPU(u64 [X86_PMC_IDX_MAX], pmc_prev_left);
@@ -1472,7 +1472,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
		goto done_collect;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
	if (ret)
		goto out;
	/*
@@ -1494,7 +1494,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
	 * This is before x86_pmu_enable() will call x86_pmu_start(),
	 * so we enable LBRs before an event needs them etc..
	 */
-	static_call_cond(x86_pmu_add)(event);
+	static_call_cond(x86_pmu_add, event);
 
	ret = 0;
 out:
@@ -1521,7 +1521,7 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 
	cpuc->events[idx] = event;
	__set_bit(idx, cpuc->active_mask);
-	static_call(x86_pmu_enable)(event);
+	static_call(x86_pmu_enable, event);
	perf_event_update_userpage(event);
 }
 
@@ -1594,7 +1594,7 @@ void x86_pmu_stop(struct perf_event *event, int flags)
	struct hw_perf_event *hwc = &event->hw;
 
	if (test_bit(hwc->idx, cpuc->active_mask)) {
-		static_call(x86_pmu_disable)(event);
+		static_call(x86_pmu_disable, event);
		__clear_bit(hwc->idx, cpuc->active_mask);
		cpuc->events[hwc->idx] = NULL;
		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
@@ -1647,7 +1647,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
	if (i >= cpuc->n_events - cpuc->n_added)
		--cpuc->n_added;
 
-	static_call_cond(x86_pmu_put_event_constraints)(cpuc, event);
+	static_call_cond(x86_pmu_put_event_constraints, cpuc, event);
 
	/* Delete the array entry. */
	while (++i < cpuc->n_events) {
@@ -1667,7 +1667,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
	 * This is after x86_pmu_stop(); so we disable LBRs after any
	 * event can need them etc..
	 */
-	static_call_cond(x86_pmu_del)(event);
+	static_call_cond(x86_pmu_del, event);
 }
 
 int x86_pmu_handle_irq(struct pt_regs *regs)
@@ -1745,7 +1745,7 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
		return NMI_DONE;
 
	start_clock = sched_clock();
-	ret = static_call(x86_pmu_handle_irq)(regs);
+	ret = static_call(x86_pmu_handle_irq, regs);
	finish_clock = sched_clock();
 
	perf_sample_event_took(finish_clock - start_clock);
@@ -2217,7 +2217,7 @@ early_initcall(init_hw_perf_events);
 
 static void x86_pmu_read(struct perf_event *event)
 {
-	static_call(x86_pmu_read)(event);
+	static_call(x86_pmu_read, event);
 }
 
 /*
@@ -2298,7 +2298,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
	if (!x86_pmu_initialized())
		return -EAGAIN;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
	if (ret)
		return ret;
 
@@ -2638,13 +2638,13 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
 
 static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 {
-	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);
+	static_call_cond(x86_pmu_sched_task, ctx, sched_in);
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
				  struct perf_event_context *next)
 {
-	static_call_cond(x86_pmu_swap_task_ctx)(prev, next);
+	static_call_cond(x86_pmu_swap_task_ctx, prev, next);
 }
 
 void perf_check_microcode(void)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ff36610af6a..0d3869f6efc2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1576,7 +1576,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
 static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 {
	if (kvm_x86_ops.tlb_remote_flush &&
-	    !static_call(kvm_x86_tlb_remote_flush)(kvm))
+	    !static_call(kvm_x86_tlb_remote_flush, kvm))
		return 0;
	else
		return -ENOTSUPP;
@@ -1953,12 +1953,12 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_blocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_blocking, vcpu);
 }
 
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_unblocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_unblocking, vcpu);
 }
 
 static inline int kvm_cpu_get_apicid(int mps_cpu)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 964442b99245..16aa752f1ccb 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -28,7 +28,7 @@ void paravirt_set_sched_clock(u64 (*func)(void));
 
 static inline u64 paravirt_sched_clock(void)
 {
-	return static_call(pv_sched_clock)();
+	return static_call(pv_sched_clock);
 }
 
 struct static_key;
@@ -42,7 +42,7 @@ bool pv_is_native_vcpu_is_preempted(void);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b24ca7f4ed7c..e40e9b8b2bd6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -311,7 +311,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
	kvm_hv_set_cpuid(vcpu);
 
	/* Invoke the vendor callback only after the above state is updated. */
-	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
+	static_call(kvm_x86_vcpu_after_set_cpuid, vcpu);
 
	/*
	 * Except for the MMU, which needs to do its thing any vendor specific
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 46f9dfb60469..b1b8006f9084 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1335,7 +1335,7 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
		}
 
		/* vmcall/vmmcall */
-		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + i);
+		static_call(kvm_x86_patch_hypercall, vcpu, instructions + i);
		i += 3;
 
		/* ret */
@@ -2201,7 +2201,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
	 * hypercall generates UD from non zero cpl and real mode
	 * per HYPER-V spec
	 */
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0 || !is_protmode(vcpu)) {
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0 || !is_protmode(vcpu)) {
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 1;
	}
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index 172b05343cfd..b86cf55afe4d 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -150,7 +150,7 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
	__kvm_migrate_apic_timer(vcpu);
	__kvm_migrate_pit_timer(vcpu);
-	static_call_cond(kvm_x86_migrate_timers)(vcpu);
+	static_call_cond(kvm_x86_migrate_timers, vcpu);
 }
 
 bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3febc342360c..643b4abb2797 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -86,7 +86,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
		return 0;
 
	if (!kvm_register_is_available(vcpu, reg))
-		static_call(kvm_x86_cache_reg)(vcpu, reg);
+		static_call(kvm_x86_cache_reg, vcpu, reg);
 
	return vcpu->arch.regs[reg];
 }
@@ -126,7 +126,7 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
	might_sleep();  /* on svm */
	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PDPTR);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_PDPTR);
 
	return vcpu->arch.walk_mmu->pdptrs[index];
 }
@@ -141,7 +141,7 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
	if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR0);
	return vcpu->arch.cr0 & mask;
 }
@@ -155,14 +155,14 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
	ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
	if ((tmask & vcpu->arch.cr4_guest_owned_bits) &&
	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR4))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR4);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR4);
	return vcpu->arch.cr4 & mask;
 }
 
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR3);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR3);
	return vcpu->arch.cr3;
 }
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 66b0eb0bda94..743b99eb43ef 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -525,7 +525,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
	if (unlikely(vcpu->arch.apicv_active)) {
		/* need to update RVI */
		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
	} else {
		apic->irr_pending = false;
		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -555,7 +555,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
	 * just set SVI.
	 */
	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, vec);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, vec);
	else {
		++apic->isr_count;
		BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -603,7 +603,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
	 * and must be left alone.
	 */
	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
	else {
		--apic->isr_count;
		BUG_ON(apic->isr_count < 0);
@@ -739,7 +739,7 @@ static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
	int highest_irr;
	if (kvm_x86_ops.sync_pir_to_irr)
-		highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
+		highest_irr = static_call(kvm_x86_sync_pir_to_irr, apic->vcpu);
	else
		highest_irr = apic_find_highest_irr(apic);
	if (highest_irr == -1 || (highest_irr & 0xF0) <= ppr)
@@ -1132,8 +1132,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
					       apic->regs + APIC_TMR);
		}
 
-		static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
-						       trig_mode, vector);
+		static_call(kvm_x86_deliver_interrupt, apic, delivery_mode,
+			    trig_mode, vector);
		break;
 
	case APIC_DM_REMRD:
@@ -1888,7 +1888,7 @@ static void cancel_hv_timer(struct kvm_lapic *apic)
 {
	WARN_ON(preemptible());
	WARN_ON(!apic->lapic_timer.hv_timer_in_use);
-	static_call(kvm_x86_cancel_hv_timer)(apic->vcpu);
+	static_call(kvm_x86_cancel_hv_timer, apic->vcpu);
	apic->lapic_timer.hv_timer_in_use = false;
 }
 
@@ -1905,7 +1905,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
	if (!ktimer->tscdeadline)
		return false;
 
-	if (static_call(kvm_x86_set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
+	if (static_call(kvm_x86_set_hv_timer, vcpu, ktimer->tscdeadline, &expired))
		return false;
 
	ktimer->hv_timer_in_use = true;
@@ -2329,7 +2329,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
		kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
 
	if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
-		static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu);
+		static_call_cond(kvm_x86_set_virtual_apic_mode, vcpu);
 
	apic->base_address = apic->vcpu->arch.apic_base &
			     MSR_IA32_APICBASE_BASE;
@@ -2419,9 +2419,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
	vcpu->arch.pv_eoi.msr_val = 0;
	apic_update_ppr(apic);
	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, -1);
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, -1);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, -1);
	}
 
	vcpu->arch.apic_arb_prio = 0;
@@ -2697,9 +2697,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
	kvm_apic_update_apicv(vcpu);
	apic->highest_isr_cache = -1;
	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
	}
	kvm_make_request(KVM_REQ_EVENT, vcpu);
	if (ioapic_in_kernel(vcpu->kvm))
@@ -3002,7 +3002,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
		/* evaluate pending_events before reading the vector */
		smp_rmb();
		sipi_vector = apic->sipi_vector;
-		static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
+		static_call(kvm_x86_vcpu_deliver_sipi_vector, vcpu, sipi_vector);
		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
	}
 }
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index e6cae6f22683..73880aa0b9e2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -113,7 +113,7 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
	if (!VALID_PAGE(root_hpa))
		return;
 
-	static_call(kvm_x86_load_mmu_pgd)(vcpu, root_hpa,
+	static_call(kvm_x86_load_mmu_pgd, vcpu, root_hpa,
			  vcpu->arch.mmu->shadow_root_level);
 }
 
@@ -218,7 +218,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 {
	/* strip nested paging fault error codes */
	unsigned int pfec = access;
	unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 
	/*
	 * For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f9080ee50ffa..0bdf76d94875 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -268,7 +268,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
	int ret = -ENOTSUPP;
 
	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
+		ret = static_call(kvm_x86_tlb_remote_flush_with_range, kvm, range);
 
	if (ret)
		kvm_flush_remote_tlbs(kvm);
@@ -5102,7 +5102,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
	 * stale entries.  Flushing on alloc also allows KVM to skip the TLB
	 * flush when freeing a root (see kvm_tdp_mmu_put_root()).
	 */
-	static_call(kvm_x86_flush_tlb_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current, vcpu);
 out:
	return r;
 }
@@ -5408,7 +5408,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
		if (is_noncanonical_address(gva, vcpu))
			return;
 
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
	}
 
	if (!mmu->invlpg)
@@ -5464,7 +5464,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
	}
 
	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
 
	++vcpu->stat.invlpg;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..6b7bae4778a4 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -131,8 +131,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
	if (level > PG_LEVEL_4K)
		spte |= PT_PAGE_SIZE_MASK;
	if (tdp_enabled)
-		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
+		spte |= static_call(kvm_x86_get_mt_mask, vcpu, gfn,
+				    kvm_is_mmio_pfn(pfn));
 
	if (host_writable)
		spte |= shadow_host_writable_mask;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index eca39f56c231..4361f0e247ee 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -371,7 +371,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
		return 1;
 
	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
-	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
+	    (static_call(kvm_x86_get_cpl, vcpu) != 0) &&
	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
		return 1;
 
@@ -523,7 +523,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
		select_user = config & 0x2;
	}
 
-	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
+	return (static_call(kvm_x86_get_cpl, pmc->vcpu) == 0) ? select_os : select_user;
 }
 
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index e3a24b8f04be..a4845e1b5574 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -308,7 +308,7 @@ TRACE_EVENT(name,							\
		__entry->guest_rip	= kvm_rip_read(vcpu);		\
		__entry->isa            = isa;				\
		__entry->vcpu_id        = vcpu->vcpu_id;		\
-		static_call(kvm_x86_get_exit_info)(vcpu,		\
+		static_call(kvm_x86_get_exit_info, vcpu,		\
					  &__entry->exit_reason,	\
					  &__entry->info1,		\
					  &__entry->info2,		\
@@ -792,7 +792,7 @@ TRACE_EVENT(kvm_emulate_insn,
		),
 
	TP_fast_assign(
-		__entry->csbase = static_call(kvm_x86_get_segment_base)(vcpu, VCPU_SREG_CS);
+		__entry->csbase = static_call(kvm_x86_get_segment_base, vcpu, VCPU_SREG_CS);
		__entry->len = vcpu->arch.emulate_ctxt->fetch.ptr
			       - vcpu->arch.emulate_ctxt->fetch.data;
		__entry->rip = vcpu->arch.emulate_ctxt->_eip - __entry->len;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6ab19afc638..ca400a219241 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -796,7 +796,7 @@ EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
 */
 bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
 {
-	if (static_call(kvm_x86_get_cpl)(vcpu) <= required_cpl)
+	if (static_call(kvm_x86_get_cpl, vcpu) <= required_cpl)
		return true;
	kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
	return false;
@@ -918,7 +918,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
		if (!is_pae(vcpu))
			return 1;
-		static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+		static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
		if (cs_l)
			return 1;
	}
@@ -932,7 +932,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
	    (is_64_bit_mode(vcpu) || kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)))
		return 1;
 
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	static_call(kvm_x86_set_cr0, vcpu, cr0);
 
	kvm_post_set_cr0(vcpu, old_cr0, cr0);
 
@@ -1054,7 +1054,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
	int
kvm_emulate_xsetbv(struct kvm_vcpu *vcpu) { - if (static_call(kvm_x86_get_cpl)(vcpu) != 0 || + if (static_call(kvm_x86_get_cpl, vcpu) != 0 || __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) { kvm_inject_gp(vcpu, 0); return 1; @@ -1072,7 +1072,7 @@ bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) if (cr4 & vcpu->arch.cr4_guest_rsvd_bits) return false; - return static_call(kvm_x86_is_valid_cr4)(vcpu, cr4); + return static_call(kvm_x86_is_valid_cr4, vcpu, cr4); } EXPORT_SYMBOL_GPL(kvm_is_valid_cr4); @@ -1144,7 +1144,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) return 1; } - static_call(kvm_x86_set_cr4)(vcpu, cr4); + static_call(kvm_x86_set_cr4, vcpu, cr4); kvm_post_set_cr4(vcpu, old_cr4, cr4); @@ -1285,7 +1285,7 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu) dr7 = vcpu->arch.guest_debug_dr7; else dr7 = vcpu->arch.dr7; - static_call(kvm_x86_set_dr7)(vcpu, dr7); + static_call(kvm_x86_set_dr7, vcpu, dr7); vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_BP_ENABLED; if (dr7 & DR7_BP_EN_MASK) vcpu->arch.switch_db_regs |= KVM_DEBUGREG_BP_ENABLED; @@ -1600,7 +1600,7 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr) rdmsrl_safe(msr->index, &msr->data); break; default: - return static_call(kvm_x86_get_msr_feature)(msr); + return static_call(kvm_x86_get_msr_feature, msr); } return 0; } @@ -1676,7 +1676,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info) efer &= ~EFER_LMA; efer |= vcpu->arch.efer & EFER_LMA; - r = static_call(kvm_x86_set_efer)(vcpu, efer); + r = static_call(kvm_x86_set_efer, vcpu, efer); if (r) { WARN_ON(r > 0); return r; @@ -1802,7 +1802,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data, msr.index = index; msr.host_initiated = host_initiated; - return static_call(kvm_x86_set_msr)(vcpu, &msr); + return static_call(kvm_x86_set_msr, vcpu, &msr); } static int kvm_set_msr_ignored_check(struct kvm_vcpu *vcpu, @@ -1844,7 +1844,7 @@ int __kvm_get_msr(struct kvm_vcpu 
*vcpu, u32 index, u64 *data, msr.index = index; msr.host_initiated = host_initiated; - ret = static_call(kvm_x86_get_msr)(vcpu, &msr); + ret = static_call(kvm_x86_get_msr, vcpu, &msr); if (!ret) *data = msr.data; return ret; @@ -1912,7 +1912,7 @@ static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu) static int complete_fast_msr_access(struct kvm_vcpu *vcpu) { - return static_call(kvm_x86_complete_emulated_msr)(vcpu, vcpu->run->msr.error); + return static_call(kvm_x86_complete_emulated_msr, vcpu, vcpu->run->msr.error); } static int complete_fast_rdmsr(struct kvm_vcpu *vcpu) @@ -1976,7 +1976,7 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu) trace_kvm_msr_read_ex(ecx); } - return static_call(kvm_x86_complete_emulated_msr)(vcpu, r); + return static_call(kvm_x86_complete_emulated_msr, vcpu, r); } EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr); @@ -2001,7 +2001,7 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu) trace_kvm_msr_write_ex(ecx, data); } - return static_call(kvm_x86_complete_emulated_msr)(vcpu, r); + return static_call(kvm_x86_complete_emulated_msr, vcpu, r); } EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr); @@ -2507,12 +2507,12 @@ static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset) if (is_guest_mode(vcpu)) vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset( l1_offset, - static_call(kvm_x86_get_l2_tsc_offset)(vcpu), - static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu)); + static_call(kvm_x86_get_l2_tsc_offset, vcpu), + static_call(kvm_x86_get_l2_tsc_multiplier, vcpu)); else vcpu->arch.tsc_offset = l1_offset; - static_call(kvm_x86_write_tsc_offset)(vcpu, vcpu->arch.tsc_offset); + static_call(kvm_x86_write_tsc_offset, vcpu, vcpu->arch.tsc_offset); } static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multiplier) @@ -2523,13 +2523,13 @@ static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multipli if (is_guest_mode(vcpu)) vcpu->arch.tsc_scaling_ratio = kvm_calc_nested_tsc_multiplier( l1_multiplier, - 
static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu)); + static_call(kvm_x86_get_l2_tsc_multiplier, vcpu)); else vcpu->arch.tsc_scaling_ratio = l1_multiplier; if (kvm_has_tsc_control) - static_call(kvm_x86_write_tsc_multiplier)( - vcpu, vcpu->arch.tsc_scaling_ratio); + static_call(kvm_x86_write_tsc_multiplier, vcpu, + vcpu->arch.tsc_scaling_ratio); } static inline bool kvm_check_tsc_unstable(void) @@ -3307,7 +3307,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu) static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu) { ++vcpu->stat.tlb_flush; - static_call(kvm_x86_flush_tlb_all)(vcpu); + static_call(kvm_x86_flush_tlb_all, vcpu); } static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) @@ -3325,14 +3325,14 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) kvm_mmu_sync_prev_roots(vcpu); } - static_call(kvm_x86_flush_tlb_guest)(vcpu); + static_call(kvm_x86_flush_tlb_guest, vcpu); } static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu) { ++vcpu->stat.tlb_flush; - static_call(kvm_x86_flush_tlb_current)(vcpu); + static_call(kvm_x86_flush_tlb_current, vcpu); } /* @@ -4310,7 +4310,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) * fringe case that is not enabled except via specific settings * of the module parameters. 
*/ - r = static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE); + r = static_call(kvm_x86_has_emulated_msr, kvm, MSR_IA32_SMBASE); break; case KVM_CAP_NR_VCPUS: r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS); @@ -4548,14 +4548,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { /* Address WBINVD may be executed by guest */ if (need_emulate_wbinvd(vcpu)) { - if (static_call(kvm_x86_has_wbinvd_exit)()) + if (static_call(kvm_x86_has_wbinvd_exit)) cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask); else if (vcpu->cpu != -1 && vcpu->cpu != cpu) smp_call_function_single(vcpu->cpu, wbinvd_ipi, NULL, 1); } - static_call(kvm_x86_vcpu_load)(vcpu, cpu); + static_call(kvm_x86_vcpu_load, vcpu, cpu); /* Save host pkru register if supported */ vcpu->arch.host_pkru = read_pkru(); @@ -4634,7 +4634,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) int idx; if (vcpu->preempted && !vcpu->arch.guest_state_protected) - vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu); + vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl, vcpu); /* * Take the srcu lock as memslots will be accessed to check the gfn @@ -4647,14 +4647,14 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) kvm_steal_time_set_preempted(vcpu); srcu_read_unlock(&vcpu->kvm->srcu, idx); - static_call(kvm_x86_vcpu_put)(vcpu); + static_call(kvm_x86_vcpu_put, vcpu); vcpu->arch.last_host_tsc = rdtsc(); } static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s) { - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call_cond(kvm_x86_sync_pir_to_irr, vcpu); return kvm_apic_get_state(vcpu, s); } @@ -4773,7 +4773,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu, for (bank = 0; bank < bank_num; bank++) vcpu->arch.mce_banks[bank*4] = ~(u64)0; - static_call(kvm_x86_setup_mce)(vcpu); + static_call(kvm_x86_setup_mce, vcpu); out: return r; } @@ -4880,11 +4880,11 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, 
vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft; events->interrupt.nr = vcpu->arch.interrupt.nr; events->interrupt.soft = 0; - events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu); + events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu); events->nmi.injected = vcpu->arch.nmi_injected; events->nmi.pending = vcpu->arch.nmi_pending != 0; - events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu); + events->nmi.masked = static_call(kvm_x86_get_nmi_mask, vcpu); events->nmi.pad = 0; events->sipi_vector = 0; /* never valid when reporting to user space */ @@ -4951,13 +4951,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, vcpu->arch.interrupt.nr = events->interrupt.nr; vcpu->arch.interrupt.soft = events->interrupt.soft; if (events->flags & KVM_VCPUEVENT_VALID_SHADOW) - static_call(kvm_x86_set_interrupt_shadow)(vcpu, - events->interrupt.shadow); + static_call(kvm_x86_set_interrupt_shadow, vcpu, + events->interrupt.shadow); vcpu->arch.nmi_injected = events->nmi.injected; if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) vcpu->arch.nmi_pending = events->nmi.pending; - static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked); + static_call(kvm_x86_set_nmi_mask, vcpu, events->nmi.masked); if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR && lapic_in_kernel(vcpu)) @@ -5254,7 +5254,7 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu, if (!kvm_x86_ops.enable_direct_tlbflush) return -ENOTTY; - return static_call(kvm_x86_enable_direct_tlbflush)(vcpu); + return static_call(kvm_x86_enable_direct_tlbflush, vcpu); case KVM_CAP_HYPERV_ENFORCE_CPUID: return kvm_hv_set_enforce_cpuid(vcpu, cap->args[0]); @@ -5723,14 +5723,14 @@ static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr) if (addr > (unsigned int)(-3 * PAGE_SIZE)) return -EINVAL; - ret = static_call(kvm_x86_set_tss_addr)(kvm, addr); + ret = static_call(kvm_x86_set_tss_addr, kvm, addr); return ret; } static int 
kvm_vm_ioctl_set_identity_map_addr(struct kvm *kvm, u64 ident_addr) { - return static_call(kvm_x86_set_identity_map_addr)(kvm, ident_addr); + return static_call(kvm_x86_set_identity_map_addr, kvm, ident_addr); } static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm, @@ -6027,14 +6027,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, if (!kvm_x86_ops.vm_copy_enc_context_from) break; - r = static_call(kvm_x86_vm_copy_enc_context_from)(kvm, cap->args[0]); + r = static_call(kvm_x86_vm_copy_enc_context_from, kvm, cap->args[0]); break; case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM: r = -EINVAL; if (!kvm_x86_ops.vm_move_enc_context_from) break; - r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, cap->args[0]); + r = static_call(kvm_x86_vm_move_enc_context_from, kvm, cap->args[0]); break; case KVM_CAP_EXIT_HYPERCALL: if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) { @@ -6525,7 +6525,7 @@ long kvm_arch_vm_ioctl(struct file *filp, if (!kvm_x86_ops.mem_enc_ioctl) goto out; - r = static_call(kvm_x86_mem_enc_ioctl)(kvm, argp); + r = static_call(kvm_x86_mem_enc_ioctl, kvm, argp); break; } case KVM_MEMORY_ENCRYPT_REG_REGION: { @@ -6539,7 +6539,7 @@ long kvm_arch_vm_ioctl(struct file *filp, if (!kvm_x86_ops.mem_enc_register_region) goto out; - r = static_call(kvm_x86_mem_enc_register_region)(kvm, ®ion); + r = static_call(kvm_x86_mem_enc_register_region, kvm, ®ion); break; } case KVM_MEMORY_ENCRYPT_UNREG_REGION: { @@ -6553,7 +6553,8 @@ long kvm_arch_vm_ioctl(struct file *filp, if (!kvm_x86_ops.mem_enc_unregister_region) goto out; - r = static_call(kvm_x86_mem_enc_unregister_region)(kvm, ®ion); + r = static_call(kvm_x86_mem_enc_unregister_region, kvm, + ®ion); break; } case KVM_HYPERV_EVENTFD: { @@ -6661,7 +6662,7 @@ static void kvm_init_msr_list(void) } for (i = 0; i < ARRAY_SIZE(emulated_msrs_all); i++) { - if (!static_call(kvm_x86_has_emulated_msr)(NULL, emulated_msrs_all[i])) + if (!static_call(kvm_x86_has_emulated_msr, NULL, emulated_msrs_all[i])) continue; 
emulated_msrs[num_emulated_msrs++] = emulated_msrs_all[i]; @@ -6724,13 +6725,13 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v) static void kvm_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg) { - static_call(kvm_x86_set_segment)(vcpu, var, seg); + static_call(kvm_x86_set_segment, vcpu, var, seg); } void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg) { - static_call(kvm_x86_get_segment)(vcpu, var, seg); + static_call(kvm_x86_get_segment, vcpu, var, seg); } gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access, @@ -6753,7 +6754,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva, { struct kvm_mmu *mmu = vcpu->arch.walk_mmu; - u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0; + u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0; return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception); } EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read); @@ -6763,7 +6764,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read); { struct kvm_mmu *mmu = vcpu->arch.walk_mmu; - u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0; + u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0; access |= PFERR_FETCH_MASK; return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception); } @@ -6773,7 +6774,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva, { struct kvm_mmu *mmu = vcpu->arch.walk_mmu; - u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0; + u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0; access |= PFERR_WRITE_MASK; return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception); } @@ -6826,7 +6827,7 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt, { struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); struct kvm_mmu *mmu = vcpu->arch.walk_mmu; - u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? 
PFERR_USER_MASK : 0; + u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0; unsigned offset; int ret; @@ -6851,7 +6852,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu, gva_t addr, void *val, unsigned int bytes, struct x86_exception *exception) { - u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0; + u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0; /* * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED @@ -6874,7 +6875,7 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt, if (system) access |= PFERR_IMPLICIT_ACCESS; - else if (static_call(kvm_x86_get_cpl)(vcpu) == 3) + else if (static_call(kvm_x86_get_cpl, vcpu) == 3) access |= PFERR_USER_MASK; return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception); @@ -6928,7 +6929,7 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v if (system) access |= PFERR_IMPLICIT_ACCESS; - else if (static_call(kvm_x86_get_cpl)(vcpu) == 3) + else if (static_call(kvm_x86_get_cpl, vcpu) == 3) access |= PFERR_USER_MASK; return kvm_write_guest_virt_helper(addr, val, bytes, vcpu, @@ -6949,8 +6950,8 @@ EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system); static int kvm_can_emulate_insn(struct kvm_vcpu *vcpu, int emul_type, void *insn, int insn_len) { - return static_call(kvm_x86_can_emulate_instruction)(vcpu, emul_type, - insn, insn_len); + return static_call(kvm_x86_can_emulate_instruction, vcpu, emul_type, + insn, insn_len); } int handle_ud(struct kvm_vcpu *vcpu) @@ -6995,7 +6996,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva, bool write) { struct kvm_mmu *mmu = vcpu->arch.walk_mmu; - u64 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0) + u64 access = ((static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0) | (write ? 
PFERR_WRITE_MASK : 0); /* @@ -7425,7 +7426,7 @@ static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt, static unsigned long get_segment_base(struct kvm_vcpu *vcpu, int seg) { - return static_call(kvm_x86_get_segment_base)(vcpu, seg); + return static_call(kvm_x86_get_segment_base, vcpu, seg); } static void emulator_invlpg(struct x86_emulate_ctxt *ctxt, ulong address) @@ -7438,7 +7439,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu) if (!need_emulate_wbinvd(vcpu)) return X86EMUL_CONTINUE; - if (static_call(kvm_x86_has_wbinvd_exit)()) { + if (static_call(kvm_x86_has_wbinvd_exit)) { int cpu = get_cpu(); cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask); @@ -7543,27 +7544,27 @@ static int emulator_set_cr(struct x86_emulate_ctxt *ctxt, int cr, ulong val) static int emulator_get_cpl(struct x86_emulate_ctxt *ctxt) { - return static_call(kvm_x86_get_cpl)(emul_to_vcpu(ctxt)); + return static_call(kvm_x86_get_cpl, emul_to_vcpu(ctxt)); } static void emulator_get_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt) { - static_call(kvm_x86_get_gdt)(emul_to_vcpu(ctxt), dt); + static_call(kvm_x86_get_gdt, emul_to_vcpu(ctxt), dt); } static void emulator_get_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt) { - static_call(kvm_x86_get_idt)(emul_to_vcpu(ctxt), dt); + static_call(kvm_x86_get_idt, emul_to_vcpu(ctxt), dt); } static void emulator_set_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt) { - static_call(kvm_x86_set_gdt)(emul_to_vcpu(ctxt), dt); + static_call(kvm_x86_set_gdt, emul_to_vcpu(ctxt), dt); } static void emulator_set_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt) { - static_call(kvm_x86_set_idt)(emul_to_vcpu(ctxt), dt); + static_call(kvm_x86_set_idt, emul_to_vcpu(ctxt), dt); } static unsigned long emulator_get_cached_segment_base( @@ -7721,8 +7722,8 @@ static int emulator_intercept(struct x86_emulate_ctxt *ctxt, struct x86_instruction_info *info, enum x86_intercept_stage stage) { - return 
static_call(kvm_x86_check_intercept)(emul_to_vcpu(ctxt), info, stage, - &ctxt->exception); + return static_call(kvm_x86_check_intercept, emul_to_vcpu(ctxt), info, + stage, &ctxt->exception); } static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt, @@ -7764,7 +7765,7 @@ static void emulator_write_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg, ulon static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked) { - static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked); + static_call(kvm_x86_set_nmi_mask, emul_to_vcpu(ctxt), masked); } static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt) @@ -7782,7 +7783,7 @@ static void emulator_exiting_smm(struct x86_emulate_ctxt *ctxt) static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt, const char *smstate) { - return static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smstate); + return static_call(kvm_x86_leave_smm, emul_to_vcpu(ctxt), smstate); } static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt) @@ -7847,7 +7848,7 @@ static const struct x86_emulate_ops emulate_ops = { static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask) { - u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu); + u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu); /* * an sti; sti; sequence only disable interrupts for the first * instruction. 
So, if the last instruction, be it emulated or @@ -7858,7 +7859,7 @@ static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask) if (int_shadow & mask) mask = 0; if (unlikely(int_shadow || mask)) { - static_call(kvm_x86_set_interrupt_shadow)(vcpu, mask); + static_call(kvm_x86_set_interrupt_shadow, vcpu, mask); if (!mask) kvm_make_request(KVM_REQ_EVENT, vcpu); } @@ -7900,7 +7901,7 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu) struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt; int cs_db, cs_l; - static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l); + static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l); ctxt->gpa_available = false; ctxt->eflags = kvm_get_rflags(vcpu); @@ -7960,9 +7961,8 @@ static void prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data, */ memset(&info, 0, sizeof(info)); - static_call(kvm_x86_get_exit_info)(vcpu, (u32 *)&info[0], &info[1], - &info[2], (u32 *)&info[3], - (u32 *)&info[4]); + static_call(kvm_x86_get_exit_info, vcpu, (u32 *)&info[0], &info[1], + &info[2], (u32 *)&info[3], (u32 *)&info[4]); run->exit_reason = KVM_EXIT_INTERNAL_ERROR; run->emulation_failure.suberror = KVM_INTERNAL_ERROR_EMULATION; @@ -8039,7 +8039,7 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type) kvm_queue_exception(vcpu, UD_VECTOR); - if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl)(vcpu) == 0) { + if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl, vcpu) == 0) { prepare_emulation_ctxt_failure_exit(vcpu); return 0; } @@ -8228,10 +8228,10 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu) int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu) { - unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu); + unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu); int r; - r = static_call(kvm_x86_skip_emulated_instruction)(vcpu); + r = static_call(kvm_x86_skip_emulated_instruction, vcpu); if (unlikely(!r)) return 0; @@ -8494,7 +8494,7 @@ int 
x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, writeback: if (writeback) { - unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu); + unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu); toggle_interruptibility(vcpu, ctxt->interruptibility); vcpu->arch.emulate_regs_need_sync_to_vcpu = false; if (!ctxt->have_exception || @@ -8505,7 +8505,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_rip_write(vcpu, ctxt->eip); if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP))) r = kvm_vcpu_do_singlestep(vcpu); - static_call_cond(kvm_x86_update_emulated_instruction)(vcpu); + static_call_cond(kvm_x86_update_emulated_instruction, vcpu); __kvm_set_rflags(vcpu, ctxt->eflags); } @@ -9187,7 +9187,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu) a3 &= 0xFFFFFFFF; } - if (static_call(kvm_x86_get_cpl)(vcpu) != 0) { + if (static_call(kvm_x86_get_cpl, vcpu) != 0) { ret = -KVM_EPERM; goto out; } @@ -9266,7 +9266,7 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt) char instruction[3]; unsigned long rip = kvm_rip_read(vcpu); - static_call(kvm_x86_patch_hypercall)(vcpu, instruction); + static_call(kvm_x86_patch_hypercall, vcpu, instruction); return emulator_write_emulated(ctxt, rip, instruction, 3, &ctxt->exception); @@ -9283,7 +9283,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu) { struct kvm_run *kvm_run = vcpu->run; - kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu); + kvm_run->if_flag = static_call(kvm_x86_get_if_flag, vcpu); kvm_run->cr8 = kvm_get_cr8(vcpu); kvm_run->apic_base = kvm_get_apic_base(vcpu); @@ -9318,7 +9318,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu) tpr = kvm_lapic_get_cr8(vcpu); - static_call(kvm_x86_update_cr8_intercept)(vcpu, tpr, max_irr); + static_call(kvm_x86_update_cr8_intercept, vcpu, tpr, max_irr); } @@ -9336,7 +9336,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) { if (vcpu->arch.exception.error_code && 
!is_protmode(vcpu)) vcpu->arch.exception.error_code = false; - static_call(kvm_x86_queue_exception)(vcpu); + static_call(kvm_x86_queue_exception, vcpu); } static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit) @@ -9366,10 +9366,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit) */ else if (!vcpu->arch.exception.pending) { if (vcpu->arch.nmi_injected) { - static_call(kvm_x86_inject_nmi)(vcpu); + static_call(kvm_x86_inject_nmi, vcpu); can_inject = false; } else if (vcpu->arch.interrupt.injected) { - static_call(kvm_x86_inject_irq)(vcpu); + static_call(kvm_x86_inject_irq, vcpu); can_inject = false; } } @@ -9430,7 +9430,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit) * The kvm_x86_ops hooks communicate this by returning -EBUSY. */ if (vcpu->arch.smi_pending) { - r = can_inject ? static_call(kvm_x86_smi_allowed)(vcpu, true) : -EBUSY; + r = can_inject ? static_call(kvm_x86_smi_allowed, vcpu, true) : -EBUSY; if (r < 0) goto out; if (r) { @@ -9439,35 +9439,35 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit) enter_smm(vcpu); can_inject = false; } else - static_call(kvm_x86_enable_smi_window)(vcpu); + static_call(kvm_x86_enable_smi_window, vcpu); } if (vcpu->arch.nmi_pending) { - r = can_inject ? static_call(kvm_x86_nmi_allowed)(vcpu, true) : -EBUSY; + r = can_inject ? static_call(kvm_x86_nmi_allowed, vcpu, true) : -EBUSY; if (r < 0) goto out; if (r) { --vcpu->arch.nmi_pending; vcpu->arch.nmi_injected = true; - static_call(kvm_x86_inject_nmi)(vcpu); + static_call(kvm_x86_inject_nmi, vcpu); can_inject = false; - WARN_ON(static_call(kvm_x86_nmi_allowed)(vcpu, true) < 0); + WARN_ON(static_call(kvm_x86_nmi_allowed, vcpu, true) < 0); } if (vcpu->arch.nmi_pending) - static_call(kvm_x86_enable_nmi_window)(vcpu); + static_call(kvm_x86_enable_nmi_window, vcpu); } if (kvm_cpu_has_injectable_intr(vcpu)) { - r = can_inject ? 
static_call(kvm_x86_interrupt_allowed)(vcpu, true) : -EBUSY; + r = can_inject ? static_call(kvm_x86_interrupt_allowed, vcpu, true) : -EBUSY; if (r < 0) goto out; if (r) { kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false); - static_call(kvm_x86_inject_irq)(vcpu); - WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0); + static_call(kvm_x86_inject_irq, vcpu); + WARN_ON(static_call(kvm_x86_interrupt_allowed, vcpu, true) < 0); } if (kvm_cpu_has_injectable_intr(vcpu)) - static_call(kvm_x86_enable_irq_window)(vcpu); + static_call(kvm_x86_enable_irq_window, vcpu); } if (is_guest_mode(vcpu) && @@ -9495,7 +9495,7 @@ static void process_nmi(struct kvm_vcpu *vcpu) * If an NMI is already in progress, limit further NMIs to just one. * Otherwise, allow two (and we'll inject the first one immediately). */ - if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected) + if (static_call(kvm_x86_get_nmi_mask, vcpu) || vcpu->arch.nmi_injected) limit = 1; vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0); @@ -9585,11 +9585,11 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf) put_smstate(u32, buf, 0x7f7c, seg.limit); put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg)); - static_call(kvm_x86_get_gdt)(vcpu, &dt); + static_call(kvm_x86_get_gdt, vcpu, &dt); put_smstate(u32, buf, 0x7f74, dt.address); put_smstate(u32, buf, 0x7f70, dt.size); - static_call(kvm_x86_get_idt)(vcpu, &dt); + static_call(kvm_x86_get_idt, vcpu, &dt); put_smstate(u32, buf, 0x7f58, dt.address); put_smstate(u32, buf, 0x7f54, dt.size); @@ -9639,7 +9639,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf) put_smstate(u32, buf, 0x7e94, seg.limit); put_smstate(u64, buf, 0x7e98, seg.base); - static_call(kvm_x86_get_idt)(vcpu, &dt); + static_call(kvm_x86_get_idt, vcpu, &dt); put_smstate(u32, buf, 0x7e84, dt.size); put_smstate(u64, buf, 0x7e88, dt.address); @@ -9649,7 +9649,7 @@ static void enter_smm_save_state_64(struct 
kvm_vcpu *vcpu, char *buf) put_smstate(u32, buf, 0x7e74, seg.limit); put_smstate(u64, buf, 0x7e78, seg.base); - static_call(kvm_x86_get_gdt)(vcpu, &dt); + static_call(kvm_x86_get_gdt, vcpu, &dt); put_smstate(u32, buf, 0x7e64, dt.size); put_smstate(u64, buf, 0x7e68, dt.address); @@ -9678,28 +9678,28 @@ static void enter_smm(struct kvm_vcpu *vcpu) * state (e.g. leave guest mode) after we've saved the state into the * SMM state-save area. */ - static_call(kvm_x86_enter_smm)(vcpu, buf); + static_call(kvm_x86_enter_smm, vcpu, buf); kvm_smm_changed(vcpu, true); kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf)); - if (static_call(kvm_x86_get_nmi_mask)(vcpu)) + if (static_call(kvm_x86_get_nmi_mask, vcpu)) vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK; else - static_call(kvm_x86_set_nmi_mask)(vcpu, true); + static_call(kvm_x86_set_nmi_mask, vcpu, true); kvm_set_rflags(vcpu, X86_EFLAGS_FIXED); kvm_rip_write(vcpu, 0x8000); cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG); - static_call(kvm_x86_set_cr0)(vcpu, cr0); + static_call(kvm_x86_set_cr0, vcpu, cr0); vcpu->arch.cr0 = cr0; - static_call(kvm_x86_set_cr4)(vcpu, 0); + static_call(kvm_x86_set_cr4, vcpu, 0); /* Undocumented: IDT limit is set to zero on entry to SMM. 
*/ dt.address = dt.size = 0; - static_call(kvm_x86_set_idt)(vcpu, &dt); + static_call(kvm_x86_set_idt, vcpu, &dt); kvm_set_dr(vcpu, 7, DR7_FIXED_1); @@ -9730,7 +9730,7 @@ static void enter_smm(struct kvm_vcpu *vcpu) #ifdef CONFIG_X86_64 if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) - static_call(kvm_x86_set_efer)(vcpu, 0); + static_call(kvm_x86_set_efer, vcpu, 0); #endif kvm_update_cpuid_runtime(vcpu); @@ -9769,7 +9769,7 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu) vcpu->arch.apicv_active = activate; kvm_apic_update_apicv(vcpu); - static_call(kvm_x86_refresh_apicv_exec_ctrl)(vcpu); + static_call(kvm_x86_refresh_apicv_exec_ctrl, vcpu); /* * When APICv gets disabled, we may still have injected interrupts @@ -9792,7 +9792,7 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm, lockdep_assert_held_write(&kvm->arch.apicv_update_lock); - if (!static_call(kvm_x86_check_apicv_inhibit_reasons)(reason)) + if (!static_call(kvm_x86_check_apicv_inhibit_reasons, reason)) return; old = new = kvm->arch.apicv_inhibit_reasons; @@ -9845,7 +9845,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu) if (irqchip_split(vcpu->kvm)) kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors); else { - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call_cond(kvm_x86_sync_pir_to_irr, vcpu); if (ioapic_in_kernel(vcpu->kvm)) kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); } @@ -9867,12 +9867,13 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu) bitmap_or((ulong *)eoi_exit_bitmap, vcpu->arch.ioapic_handled_vectors, to_hv_synic(vcpu)->vec_bitmap, 256); - static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap); + static_call_cond(kvm_x86_load_eoi_exitmap, vcpu, + eoi_exit_bitmap); return; } - static_call_cond(kvm_x86_load_eoi_exitmap)( - vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors); + static_call_cond(kvm_x86_load_eoi_exitmap, vcpu, + (u64 *)vcpu->arch.ioapic_handled_vectors); } void kvm_arch_mmu_notifier_invalidate_range(struct kvm 
*kvm, @@ -9891,7 +9892,7 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, void kvm_arch_guest_memory_reclaimed(struct kvm *kvm) { - static_call_cond(kvm_x86_guest_memory_reclaimed)(kvm); + static_call_cond(kvm_x86_guest_memory_reclaimed, kvm); } static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) @@ -9899,7 +9900,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) if (!lapic_in_kernel(vcpu)) return; - static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu); + static_call_cond(kvm_x86_set_apic_access_page_addr, vcpu); } void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu) @@ -10050,10 +10051,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (kvm_check_request(KVM_REQ_APF_READY, vcpu)) kvm_check_async_pf_completion(vcpu); if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu)) - static_call(kvm_x86_msr_filter_changed)(vcpu); + static_call(kvm_x86_msr_filter_changed, vcpu); if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu)) - static_call(kvm_x86_update_cpu_dirty_logging)(vcpu); + static_call(kvm_x86_update_cpu_dirty_logging, vcpu); } if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win || @@ -10075,7 +10076,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) goto out; } if (req_int_win) - static_call(kvm_x86_enable_irq_window)(vcpu); + static_call(kvm_x86_enable_irq_window, vcpu); if (kvm_lapic_enabled(vcpu)) { update_cr8_intercept(vcpu); @@ -10090,7 +10091,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) preempt_disable(); - static_call(kvm_x86_prepare_switch_to_guest)(vcpu); + static_call(kvm_x86_prepare_switch_to_guest, vcpu); /* * Disable IRQs before setting IN_GUEST_MODE. Posted interrupt @@ -10126,7 +10127,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) * i.e. they can post interrupts even if APICv is temporarily disabled. 
*/ if (kvm_lapic_enabled(vcpu)) - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call_cond(kvm_x86_sync_pir_to_irr, vcpu); if (kvm_vcpu_exit_request(vcpu)) { vcpu->mode = OUTSIDE_GUEST_MODE; @@ -10140,7 +10141,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (req_immediate_exit) { kvm_make_request(KVM_REQ_EVENT, vcpu); - static_call(kvm_x86_request_immediate_exit)(vcpu); + static_call(kvm_x86_request_immediate_exit, vcpu); } fpregs_assert_state_consistent(); @@ -10171,12 +10172,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) */ WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu)); - exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu); + exit_fastpath = static_call(kvm_x86_vcpu_run, vcpu); if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST)) break; if (kvm_lapic_enabled(vcpu)) - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call_cond(kvm_x86_sync_pir_to_irr, vcpu); if (unlikely(kvm_vcpu_exit_request(vcpu))) { exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED; @@ -10192,7 +10193,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) */ if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) { WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP); - static_call(kvm_x86_sync_dirty_debug_regs)(vcpu); + static_call(kvm_x86_sync_dirty_debug_regs, vcpu); kvm_update_dr0123(vcpu); kvm_update_dr7(vcpu); } @@ -10221,7 +10222,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (vcpu->arch.xfd_no_write_intercept) fpu_sync_guest_vmexit_xfd_state(); - static_call(kvm_x86_handle_exit_irqoff)(vcpu); + static_call(kvm_x86_handle_exit_irqoff, vcpu); if (vcpu->arch.guest_fpu.xfd_err) wrmsrl(MSR_IA32_XFD_ERR, 0); @@ -10275,13 +10276,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (vcpu->arch.apic_attention) kvm_lapic_sync_from_vapic(vcpu); - r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath); + r = static_call(kvm_x86_handle_exit, vcpu, exit_fastpath); return r; cancel_injection: if 
(req_immediate_exit) kvm_make_request(KVM_REQ_EVENT, vcpu); - static_call(kvm_x86_cancel_injection)(vcpu); + static_call(kvm_x86_cancel_injection, vcpu); if (unlikely(vcpu->arch.apic_attention)) kvm_lapic_sync_from_vapic(vcpu); out: @@ -10554,7 +10555,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) goto out; } - r = static_call(kvm_x86_vcpu_pre_run)(vcpu); + r = static_call(kvm_x86_vcpu_pre_run, vcpu); if (r <= 0) goto out; @@ -10673,10 +10674,10 @@ static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR); kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR); - static_call(kvm_x86_get_idt)(vcpu, &dt); + static_call(kvm_x86_get_idt, vcpu, &dt); sregs->idt.limit = dt.size; sregs->idt.base = dt.address; - static_call(kvm_x86_get_gdt)(vcpu, &dt); + static_call(kvm_x86_get_gdt, vcpu, &dt); sregs->gdt.limit = dt.size; sregs->gdt.base = dt.address; @@ -10857,28 +10858,28 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs, dt.size = sregs->idt.limit; dt.address = sregs->idt.base; - static_call(kvm_x86_set_idt)(vcpu, &dt); + static_call(kvm_x86_set_idt, vcpu, &dt); dt.size = sregs->gdt.limit; dt.address = sregs->gdt.base; - static_call(kvm_x86_set_gdt)(vcpu, &dt); + static_call(kvm_x86_set_gdt, vcpu, &dt); vcpu->arch.cr2 = sregs->cr2; *mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3; vcpu->arch.cr3 = sregs->cr3; kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3); - static_call_cond(kvm_x86_post_set_cr3)(vcpu, sregs->cr3); + static_call_cond(kvm_x86_post_set_cr3, vcpu, sregs->cr3); kvm_set_cr8(vcpu, sregs->cr8); *mmu_reset_needed |= vcpu->arch.efer != sregs->efer; - static_call(kvm_x86_set_efer)(vcpu, sregs->efer); + static_call(kvm_x86_set_efer, vcpu, sregs->efer); *mmu_reset_needed |= kvm_read_cr0(vcpu) != sregs->cr0; - static_call(kvm_x86_set_cr0)(vcpu, sregs->cr0); + static_call(kvm_x86_set_cr0, vcpu, sregs->cr0); vcpu->arch.cr0 = sregs->cr0; *mmu_reset_needed |= 
kvm_read_cr4(vcpu) != sregs->cr4; - static_call(kvm_x86_set_cr4)(vcpu, sregs->cr4); + static_call(kvm_x86_set_cr4, vcpu, sregs->cr4); if (update_pdptrs) { idx = srcu_read_lock(&vcpu->kvm->srcu); @@ -11048,7 +11049,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, */ kvm_set_rflags(vcpu, rflags); - static_call(kvm_x86_update_exception_bitmap)(vcpu); + static_call(kvm_x86_update_exception_bitmap, vcpu); kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu->kvm); @@ -11255,7 +11256,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.hv_root_tdp = INVALID_PAGE; #endif - r = static_call(kvm_x86_vcpu_create)(vcpu); + r = static_call(kvm_x86_vcpu_create, vcpu); if (r) goto free_guest_fpu; @@ -11312,7 +11313,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) kvmclock_reset(vcpu); - static_call(kvm_x86_vcpu_free)(vcpu); + static_call(kvm_x86_vcpu_free, vcpu); kmem_cache_free(x86_emulator_cache, vcpu->arch.emulate_ctxt); free_cpumask_var(vcpu->arch.wbinvd_dirty_mask); @@ -11419,7 +11420,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) cpuid_0x1 = kvm_find_cpuid_entry(vcpu, 1, 0); kvm_rdx_write(vcpu, cpuid_0x1 ? 
cpuid_0x1->eax : 0x600); - static_call(kvm_x86_vcpu_reset)(vcpu, init_event); + static_call(kvm_x86_vcpu_reset, vcpu, init_event); kvm_set_rflags(vcpu, X86_EFLAGS_FIXED); kvm_rip_write(vcpu, 0xfff0); @@ -11438,10 +11439,10 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) else new_cr0 |= X86_CR0_NW | X86_CR0_CD; - static_call(kvm_x86_set_cr0)(vcpu, new_cr0); - static_call(kvm_x86_set_cr4)(vcpu, 0); - static_call(kvm_x86_set_efer)(vcpu, 0); - static_call(kvm_x86_update_exception_bitmap)(vcpu); + static_call(kvm_x86_set_cr0, vcpu, new_cr0); + static_call(kvm_x86_set_cr4, vcpu, 0); + static_call(kvm_x86_set_efer, vcpu, 0); + static_call(kvm_x86_update_exception_bitmap, vcpu); /* * On the standard CR0/CR4/EFER modification paths, there are several @@ -11493,7 +11494,7 @@ int kvm_arch_hardware_enable(void) bool stable, backwards_tsc = false; kvm_user_return_msr_cpu_online(); - ret = static_call(kvm_x86_hardware_enable)(); + ret = static_call(kvm_x86_hardware_enable); if (ret != 0) return ret; @@ -11575,7 +11576,7 @@ int kvm_arch_hardware_enable(void) void kvm_arch_hardware_disable(void) { - static_call(kvm_x86_hardware_disable)(); + static_call(kvm_x86_hardware_disable); drop_user_return_notifiers(); } @@ -11625,7 +11626,7 @@ void kvm_arch_hardware_unsetup(void) { kvm_unregister_perf_callbacks(); - static_call(kvm_x86_hardware_unsetup)(); + static_call(kvm_x86_hardware_unsetup); } int kvm_arch_check_processor_compat(void *opaque) @@ -11665,7 +11666,7 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) pmu->need_cleanup = true; kvm_make_request(KVM_REQ_PMU, vcpu); } - static_call(kvm_x86_sched_in)(vcpu, cpu); + static_call(kvm_x86_sched_in, vcpu, cpu); } void kvm_arch_free_vm(struct kvm *kvm) @@ -11725,7 +11726,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_hv_init_vm(kvm); kvm_xen_init_vm(kvm); - return static_call(kvm_x86_vm_init)(kvm); + return static_call(kvm_x86_vm_init, kvm); out_page_track: kvm_page_track_cleanup(kvm); @@ 
-11864,7 +11865,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm) __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0); mutex_unlock(&kvm->slots_lock); } - static_call_cond(kvm_x86_vm_destroy)(kvm); + static_call_cond(kvm_x86_vm_destroy, kvm); kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1)); kvm_pic_destroy(kvm); kvm_ioapic_destroy(kvm); @@ -12147,7 +12148,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm, static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) { return (is_guest_mode(vcpu) && - static_call(kvm_x86_guest_apic_has_interrupt)(vcpu)); + static_call(kvm_x86_guest_apic_has_interrupt, vcpu)); } static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) @@ -12166,12 +12167,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) if (kvm_test_request(KVM_REQ_NMI, vcpu) || (vcpu->arch.nmi_pending && - static_call(kvm_x86_nmi_allowed)(vcpu, false))) + static_call(kvm_x86_nmi_allowed, vcpu, false))) return true; if (kvm_test_request(KVM_REQ_SMI, vcpu) || (vcpu->arch.smi_pending && - static_call(kvm_x86_smi_allowed)(vcpu, false))) + static_call(kvm_x86_smi_allowed, vcpu, false))) return true; if (kvm_arch_interrupt_allowed(vcpu) && @@ -12197,7 +12198,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu) { - if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu)) + if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt, vcpu)) return true; return false; @@ -12236,7 +12237,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu) { - return static_call(kvm_x86_interrupt_allowed)(vcpu, false); + return static_call(kvm_x86_interrupt_allowed, vcpu, false); } unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu) @@ -12262,7 +12263,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu) { unsigned long rflags; - rflags = 
static_call(kvm_x86_get_rflags)(vcpu); + rflags = static_call(kvm_x86_get_rflags, vcpu); if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) rflags &= ~X86_EFLAGS_TF; return rflags; @@ -12274,7 +12275,7 @@ static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP && kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip)) rflags |= X86_EFLAGS_TF; - static_call(kvm_x86_set_rflags)(vcpu, rflags); + static_call(kvm_x86_set_rflags, vcpu, rflags); } void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) @@ -12405,7 +12406,7 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu) return false; if (vcpu->arch.apf.send_user_only && - static_call(kvm_x86_get_cpl)(vcpu) == 0) + static_call(kvm_x86_get_cpl, vcpu) == 0) return false; if (is_guest_mode(vcpu)) { @@ -12516,7 +12517,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu) void kvm_arch_start_assignment(struct kvm *kvm) { if (atomic_inc_return(&kvm->arch.assigned_device_count) == 1) - static_call_cond(kvm_x86_pi_start_assignment)(kvm); + static_call_cond(kvm_x86_pi_start_assignment, kvm); } EXPORT_SYMBOL_GPL(kvm_arch_start_assignment); @@ -12564,8 +12565,7 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons, irqfd->producer = prod; kvm_arch_start_assignment(irqfd->kvm); - ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, - prod->irq, irqfd->gsi, 1); + ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 1); if (ret) kvm_arch_end_assignment(irqfd->kvm); @@ -12589,7 +12589,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons, * when the irq is masked/disabled or the consumer side (KVM * int this case doesn't want to receive the interrupts. 
*/ - ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0); + ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 0); if (ret) printk(KERN_INFO "irq bypass consumer (token %p) unregistration" " fails: %d\n", irqfd->consumer.token, ret); @@ -12600,7 +12600,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons, int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq, bool set) { - return static_call(kvm_x86_pi_update_irte)(kvm, host_irq, guest_irq, set); + return static_call(kvm_x86_pi_update_irte, kvm, host_irq, guest_irq, set); } bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 588792f00334..4b3b3d9b66b8 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -113,7 +113,7 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu) if (!is_long_mode(vcpu)) return false; - static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l); + static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l); return cs_l; } @@ -248,7 +248,7 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk) static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu) { - return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu); + return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked, vcpu); } void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip); diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c index bf6cc25eee76..9c5d966d18e4 100644 --- a/arch/x86/kvm/xen.c +++ b/arch/x86/kvm/xen.c @@ -732,7 +732,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data) instructions[0] = 0xb8; /* vmcall / vmmcall */ - static_call(kvm_x86_patch_hypercall)(vcpu, instructions + 5); + static_call(kvm_x86_patch_hypercall, vcpu, instructions + 5); /* ret */ instructions[8] = 0xc3; @@ -867,7 +867,7 @@ int kvm_xen_hypercall(struct 
kvm_vcpu *vcpu) vcpu->run->exit_reason = KVM_EXIT_XEN; vcpu->run->xen.type = KVM_EXIT_XEN_HCALL; vcpu->run->xen.u.hcall.longmode = longmode; - vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl)(vcpu); + vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl, vcpu); vcpu->run->xen.u.hcall.input = input; vcpu->run->xen.u.hcall.params[0] = params[0]; vcpu->run->xen.u.hcall.params[1] = params[1]; diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 7be38bc6a673..06c77ca2b3bb 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -146,7 +146,7 @@ DEFINE_STATIC_CALL(amd_pstate_enable, pstate_enable); static inline int amd_pstate_enable(bool enable) { - return static_call(amd_pstate_enable)(enable); + return static_call(amd_pstate_enable, enable); } static int pstate_init_perf(struct amd_cpudata *cpudata) @@ -194,7 +194,7 @@ DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf); static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata) { - return static_call(amd_pstate_init_perf)(cpudata); + return static_call(amd_pstate_init_perf, cpudata); } static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf, @@ -226,8 +226,8 @@ static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf, u32 des_perf, u32 max_perf, bool fast_switch) { - static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf, - max_perf, fast_switch); + static_call(amd_pstate_update_perf, cpudata, min_perf, des_perf, + max_perf, fast_switch); } static inline bool amd_pstate_sample(struct amd_cpudata *cpudata) diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index ab78bd4c2eb0..a7d800a5dbd8 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -421,7 +421,7 @@ void raw_irqentry_exit_cond_resched(void); #define irqentry_exit_cond_resched_dynamic_enabled raw_irqentry_exit_cond_resched #define irqentry_exit_cond_resched_dynamic_disabled NULL 
DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched); -#define irqentry_exit_cond_resched() static_call(irqentry_exit_cond_resched)() +#define irqentry_exit_cond_resched() static_call(irqentry_exit_cond_resched) #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched); void dynamic_irqentry_exit_cond_resched(void); diff --git a/include/linux/kernel.h b/include/linux/kernel.h index fe6efb24d151..7814129fe0c9 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -107,7 +107,7 @@ DECLARE_STATIC_CALL(might_resched, __cond_resched); static __always_inline void might_resched(void) { - static_call_mod(might_resched)(); + static_call_mod(might_resched); } #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index af97dd427501..2e12811b3730 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -1253,15 +1253,15 @@ DECLARE_STATIC_CALL(__perf_guest_handle_intel_pt_intr, *perf_guest_cbs->handle_i static inline unsigned int perf_guest_state(void) { - return static_call(__perf_guest_state)(); + return static_call(__perf_guest_state); } static inline unsigned long perf_guest_get_ip(void) { - return static_call(__perf_guest_get_ip)(); + return static_call(__perf_guest_get_ip); } static inline unsigned int perf_guest_handle_intel_pt_intr(void) { - return static_call(__perf_guest_handle_intel_pt_intr)(); + return static_call(__perf_guest_handle_intel_pt_intr); } extern void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs); extern void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs); diff --git a/include/linux/sched.h b/include/linux/sched.h index a8911b1f35aa..e8a98ee1442d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -2040,7 +2040,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond_resched); static 
__always_inline int _cond_resched(void) { - return static_call_mod(cond_resched)(); + return static_call_mod(cond_resched); } #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) diff --git a/include/linux/static_call.h b/include/linux/static_call.h index df53bed9d71f..7f1219fb98cf 100644 --- a/include/linux/static_call.h +++ b/include/linux/static_call.h @@ -21,8 +21,8 @@ * * __static_call_return0; * - * static_call(name)(args...); - * static_call_cond(name)(args...); + * static_call(name, args...); + * static_call_cond(name, args...); * static_call_update(name, func); * static_call_query(name); * @@ -38,13 +38,13 @@ * DEFINE_STATIC_CALL(my_name, func_a); * * # Call func_a() - * static_call(my_name)(arg1, arg2); + * static_call(my_name, arg1, arg2); * * # Update 'my_name' to point to func_b() * static_call_update(my_name, &func_b); * * # Call func_b() - * static_call(my_name)(arg1, arg2); + * static_call(my_name, arg1, arg2); * * * Implementation details: @@ -94,7 +94,7 @@ * * When calling a static_call that can be NULL, use: * - * static_call_cond(name)(arg1); + * static_call_cond(name, arg1); * * which will include the required value tests to avoid NULL-pointer * dereferences. @@ -204,7 +204,7 @@ extern long __static_call_return0(void); }; \ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) -#define static_call_cond(name) (void)__static_call(name) +#define static_call_cond(name, args...) (void)__static_call(name)(args) #define EXPORT_STATIC_CALL(name) \ EXPORT_SYMBOL(STATIC_CALL_KEY(name)); \ @@ -246,7 +246,7 @@ static inline int static_call_init(void) { return 0; } }; \ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) -#define static_call_cond(name) (void)__static_call(name) +#define static_call_cond(name, args...) 
(void)__static_call(name)(args) static inline void __static_call_update(struct static_call_key *key, void *tramp, void *func) @@ -323,7 +323,7 @@ static inline void __static_call_nop(void) { } (typeof(STATIC_CALL_TRAMP(name))*)func; \ }) -#define static_call_cond(name) (void)__static_call_cond(name) +#define static_call_cond(name, args...) (void)__static_call_cond(name)(args) static inline void __static_call_update(struct static_call_key *key, void *tramp, void *func) diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h index 5a00b8b2cf9f..7e1ce240a2cd 100644 --- a/include/linux/static_call_types.h +++ b/include/linux/static_call_types.h @@ -81,13 +81,13 @@ struct static_call_key { #ifdef MODULE #define __STATIC_CALL_MOD_ADDRESSABLE(name) -#define static_call_mod(name) __raw_static_call(name) +#define static_call_mod(name, args...) __raw_static_call(name)(args) #else #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name) -#define static_call_mod(name) __static_call(name) +#define static_call_mod(name, args...) __static_call(name)(args) #endif -#define static_call(name) __static_call(name) +#define static_call(name, args...) __static_call(name)(args) #else @@ -95,8 +95,8 @@ struct static_call_key { void *func; }; -#define static_call(name) \ - ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func)) +#define static_call(name, args...) 
\ + ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))(args) #endif /* CONFIG_HAVE_STATIC_CALL */ diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h index 28031b15f878..1c68fcad48a2 100644 --- a/include/linux/tracepoint.h +++ b/include/linux/tracepoint.h @@ -170,7 +170,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) rcu_dereference_raw((&__tracepoint_##name)->funcs); \ if (it_func_ptr) { \ __data = (it_func_ptr)->data; \ - static_call(tp_func_##name)(__data, args); \ + static_call(tp_func_##name, __data, args); \ } \ } while (0) #else diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c index dc5665b62814..9752489fcaab 100644 --- a/kernel/static_call_inline.c +++ b/kernel/static_call_inline.c @@ -533,7 +533,7 @@ static int __init test_static_call_init(void) if (scd->func) static_call_update(sc_selftest, scd->func); - WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect); + WARN_ON(static_call(sc_selftest, scd->val) != scd->expect); } return 0; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index d8553f46caa2..fa1a0deddda5 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1096,7 +1096,7 @@ BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags) static const u32 br_entry_size = sizeof(struct perf_branch_entry); u32 entry_cnt = size / br_entry_size; - entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt); + entry_cnt = static_call(perf_snapshot_branch_stack, buf, entry_cnt); if (unlikely(flags)) return -EINVAL; diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c index 9b9d3ef79cbe..3f48310a4ce3 100644 --- a/security/keys/trusted-keys/trusted_core.c +++ b/security/keys/trusted-keys/trusted_core.c @@ -170,15 +170,15 @@ static int trusted_instantiate(struct key *key, switch (key_cmd) { case Opt_load: - ret = static_call(trusted_key_unseal)(payload, datablob); + ret = 
static_call(trusted_key_unseal, payload, datablob); dump_payload(payload); if (ret < 0) pr_info("key_unseal failed (%d)\n", ret); break; case Opt_new: key_len = payload->key_len; - ret = static_call(trusted_key_get_random)(payload->key, - key_len); + ret = static_call(trusted_key_get_random, payload->key, + key_len); if (ret < 0) goto out; @@ -188,7 +188,7 @@ static int trusted_instantiate(struct key *key, goto out; } - ret = static_call(trusted_key_seal)(payload, datablob); + ret = static_call(trusted_key_seal, payload, datablob); if (ret < 0) pr_info("key_seal failed (%d)\n", ret); break; @@ -257,7 +257,7 @@ static int trusted_update(struct key *key, struct key_preparsed_payload *prep) dump_payload(p); dump_payload(new_p); - ret = static_call(trusted_key_seal)(new_p, datablob); + ret = static_call(trusted_key_seal, new_p, datablob); if (ret < 0) { pr_info("key_seal failed (%d)\n", ret); kfree_sensitive(new_p); @@ -334,7 +334,7 @@ static int __init init_trusted(void) trusted_key_sources[i].ops->exit); migratable = trusted_key_sources[i].ops->migratable; - ret = static_call(trusted_key_init)(); + ret = static_call(trusted_key_init); if (!ret) break; } @@ -351,7 +351,7 @@ static int __init init_trusted(void) static void __exit cleanup_trusted(void) { - static_call_cond(trusted_key_exit)(); + static_call_cond(trusted_key_exit); } late_initcall(init_trusted);

From patchwork Fri Apr 29 20:36:38 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832755
Date: Fri, 29 Apr 2022 13:36:38 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-16-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
Subject: [RFC PATCH 15/21] static_call: Use cfi_unchecked
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
 Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
 Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
 linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 llvm@lists.linux.dev, Sami Tolvanen
Precedence: bulk
List-ID: X-Mailing-List: linux-hardening@vger.kernel.org

With CONFIG_HAVE_STATIC_CALL, static calls are patched into direct calls.
Disable indirect call CFI checking for these call sites with the
cfi_unchecked macro.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/static_call.h             |  6 ++++--
 include/linux/static_call_types.h       |  9 ++++++---
 tools/include/linux/static_call_types.h | 13 ++++++++-----
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 7f1219fb98cf..f666c841b718 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -204,7 +204,8 @@ extern long __static_call_return0(void);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)

-#define static_call_cond(name, args...)	(void)__static_call(name)(args)
+#define static_call_cond(name, args...)					\
+	(void)cfi_unchecked(__static_call(name)(args))

 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
@@ -246,7 +247,8 @@ static inline int static_call_init(void) { return 0; }
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)

-#define static_call_cond(name, args...)	(void)__static_call(name)(args)
+#define static_call_cond(name, args...)					\
+	(void)cfi_unchecked(__static_call(name)(args))

 static inline void __static_call_update(struct static_call_key *key,
 					void *tramp, void *func)
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 7e1ce240a2cd..faebc1412c86 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -81,13 +81,16 @@ struct static_call_key {

 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name, args...)	__raw_static_call(name)(args)
+#define static_call_mod(name, args...)					\
+	cfi_unchecked(__raw_static_call(name)(args))
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name, args...)	__static_call(name)(args)
+#define static_call_mod(name, args...)					\
+	cfi_unchecked(__static_call(name)(args))
 #endif

-#define static_call(name, args...)	__static_call(name)(args)
+#define static_call(name, args...)					\
+	cfi_unchecked(__static_call(name)(args))

 #else
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 5a00b8b2cf9f..faebc1412c86 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -81,13 +81,16 @@ struct static_call_key {

 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define static_call_mod(name, args...)					\
+	cfi_unchecked(__raw_static_call(name)(args))
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define static_call_mod(name, args...)					\
+	cfi_unchecked(__static_call(name)(args))
 #endif

-#define static_call(name)	__static_call(name)
+#define static_call(name, args...)					\
+	cfi_unchecked(__static_call(name)(args))

 #else

@@ -95,8 +98,8 @@ struct static_call_key {
 	void *func;
 };

-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+#define static_call(name, args...)					\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))(args)

 #endif /* CONFIG_HAVE_STATIC_CALL */

From patchwork Fri Apr 29 20:36:39 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832754
Date: Fri, 29 Apr 2022 13:36:39 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-17-samitolvanen@google.com>
Subject: [RFC PATCH 16/21] objtool: Add support for CONFIG_CFI_CLANG
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
 Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
 Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
 linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 llvm@lists.linux.dev, Sami Tolvanen

With -fsanitize=kcfi, the compiler injects a type identifier before each
function. Teach objtool to recognize the identifier.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 scripts/Makefile.build                    |   3 +-
 scripts/link-vmlinux.sh                   |   3 +
 tools/objtool/arch/x86/include/arch/elf.h |   2 +
 tools/objtool/builtin-check.c             |   3 +-
 tools/objtool/check.c                     | 128 ++++++++++++++++++++--
 tools/objtool/elf.c                       |  13 +++
 tools/objtool/include/objtool/arch.h      |   1 +
 tools/objtool/include/objtool/builtin.h   |   2 +-
 tools/objtool/include/objtool/elf.h       |   2 +
 9 files changed, 145 insertions(+), 12 deletions(-)

diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 9717e6f6fb31..c850ac420b60 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -235,7 +235,8 @@ objtool_args =						\
 	$(if $(CONFIG_RETPOLINE), --retpoline)			\
 	$(if $(CONFIG_X86_SMAP), --uaccess)			\
 	$(if $(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL), --mcount)	\
-	$(if $(CONFIG_SLS), --sls)
+	$(if $(CONFIG_SLS), --sls)				\
+	$(if $(CONFIG_CFI_CLANG), --kcfi)

 cmd_objtool = $(if $(objtool-enabled), ; $(objtool) $(objtool_args) $@)
 cmd_gen_objtooldep = $(if $(objtool-enabled), { echo ; echo '$@: $$(wildcard $(objtool))' ; } >> $(dot-target).cmd)
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index 20f44504a644..d171f8507db2 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -152,6 +152,9 @@ objtool_link()
 		if is_enabled CONFIG_SLS; then
 			objtoolopt="${objtoolopt} --sls"
 		fi
+		if is_enabled CONFIG_CFI_CLANG; then
+			objtoolopt="${objtoolopt} --kcfi"
+		fi
 		info OBJTOOL ${1}
 		tools/objtool/objtool ${objtoolcmd} ${objtoolopt} ${1}
 	fi
diff --git a/tools/objtool/arch/x86/include/arch/elf.h b/tools/objtool/arch/x86/include/arch/elf.h
index 69cc4264b28a..8833d989eec7 100644
--- a/tools/objtool/arch/x86/include/arch/elf.h
+++ b/tools/objtool/arch/x86/include/arch/elf.h
@@ -3,4 +3,6 @@

 #define R_NONE R_X86_64_NONE

+#define KCFI_TYPEID_LEN 6
+
 #endif /* _OBJTOOL_ARCH_ELF */
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index fc6975ab8b06..8a662dcc21be 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -21,7 +21,7 @@

 bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
      lto, vmlinux, mcount, noinstr, backup, sls, dryrun,
-     ibt;
+     ibt, kcfi;

 static const char * const check_usage[] = {
 	"objtool check [<options>] file.o",
@@ -49,6 +49,7 @@ const struct option check_options[] = {
 	OPT_BOOLEAN('S', "sls", &sls, "validate straight-line-speculation"),
 	OPT_BOOLEAN(0, "dry-run", &dryrun, "don't write the modifications"),
 	OPT_BOOLEAN(0, "ibt", &ibt, "validate ENDBR placement"),
+	OPT_BOOLEAN('k', "kcfi", &kcfi, "detect control-flow integrity type identifiers"),
 	OPT_END(),
 };
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index bd0c2c828940..e6bee2f2996a 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -27,6 +27,12 @@ struct alternative {
 	bool skip_orig;
 };

+struct kcfi_type {
+	struct section *sec;
+	unsigned long offset;
+	struct hlist_node hash;
+};
+
 static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache;

 static struct cfi_init_state initial_func_cfi;
@@ -143,6 +149,99 @@ static bool is_sibling_call(struct instruction *insn)
 	return (is_static_jump(insn) && insn->call_dest);
 }

+static int kcfi_bits;
+static struct hlist_head *kcfi_hash;
+
+static void *kcfi_alloc_hash(unsigned long size)
+{
+	kcfi_bits = max(10, ilog2(size));
+	kcfi_hash = mmap(NULL, sizeof(struct hlist_head) << kcfi_bits,
+			 PROT_READ|PROT_WRITE,
+			 MAP_PRIVATE|MAP_ANON, -1, 0);
+	if (kcfi_hash == (void *)-1L) {
+		WARN("mmap fail kcfi_hash");
+		kcfi_hash = NULL;
+	} else if (stats) {
+		printf("kcfi_bits: %d\n", kcfi_bits);
+	}
+
+	return kcfi_hash;
+}
+
+static void add_kcfi_type(struct kcfi_type *type)
+{
+	hlist_add_head(&type->hash,
+		       &kcfi_hash[hash_min(
+				sec_offset_hash(type->sec, type->offset),
+				kcfi_bits)]);
+}
+
+static bool add_kcfi_types(struct section *sec)
+{
+	struct reloc *reloc;
+
+	list_for_each_entry(reloc, &sec->reloc_list, list) {
+		struct kcfi_type *type;
+
+		if (reloc->sym->type != STT_SECTION) {
+			WARN("unexpected relocation symbol type in %s", sec->name);
+			return false;
+		}
+
+		type = malloc(sizeof(*type));
+		if (!type) {
+			perror("malloc");
+			return false;
+		}
+
+		type->sec = reloc->sym->sec;
+		type->offset = reloc->addend;
+
+		add_kcfi_type(type);
+	}
+
+	return true;
+}
+
+static int read_kcfi_types(struct objtool_file *file)
+{
+	if (!kcfi)
+		return 0;
+
+	if (!kcfi_alloc_hash(file->elf->text_size / 16))
+		return -1;
+
+	if (!for_each_section_by_name(file->elf, ".rela.kcfi_types", add_kcfi_types))
+		return -1;
+
+	return 0;
+}
+
+static bool is_kcfi_typeid(struct elf *elf, struct instruction *insn)
+{
+	struct hlist_head *head;
+	struct kcfi_type *type;
+	struct reloc *reloc;
+
+	if (!kcfi)
+		return false;
+
+	/* Compiler-generated annotation in .kcfi_types. */
+	head = &kcfi_hash[hash_min(sec_offset_hash(insn->sec, insn->offset), kcfi_bits)];
+
+	hlist_for_each_entry(type, head, hash)
+		if (type->sec == insn->sec && type->offset == insn->offset)
+			return true;
+
+	/* Manual annotation (in assembly code). */
+	reloc = find_reloc_by_dest(elf, insn->sec, insn->offset);
+
+	if (reloc && !strncmp(reloc->sym->name, "__kcfi_typeid_", 14))
+		return true;
+
+	return false;
+}
+
 /*
  * This checks to see if the given function is a "noreturn" function.
  *
@@ -388,13 +487,18 @@ static int decode_instructions(struct objtool_file *file)
 			insn->sec = sec;
 			insn->offset = offset;

-			ret = arch_decode_instruction(file, sec, offset,
-						      sec->sh.sh_size - offset,
-						      &insn->len, &insn->type,
-						      &insn->immediate,
-						      &insn->stack_ops);
-			if (ret)
-				goto err;
+			if (is_kcfi_typeid(file->elf, insn)) {
+				insn->type = INSN_KCFI_TYPEID;
+				insn->len = KCFI_TYPEID_LEN;
+			} else {
+				ret = arch_decode_instruction(file, sec, offset,
+							      sec->sh.sh_size - offset,
+							      &insn->len, &insn->type,
+							      &insn->immediate,
+							      &insn->stack_ops);
+				if (ret)
+					goto err;
+			}

 			/*
 			 * By default, "ud2" is a dead end unless otherwise
@@ -420,7 +524,8 @@ static int decode_instructions(struct objtool_file *file)
 		}

 		sym_for_each_insn(file, func, insn) {
-			insn->func = func;
+			if (insn->type != INSN_KCFI_TYPEID)
+				insn->func = func;
 			if (insn->type == INSN_ENDBR && list_empty(&insn->call_node)) {
 				if (insn->offset == insn->func->offset) {
 					list_add_tail(&insn->call_node, &file->endbr_list);
@@ -2219,6 +2324,10 @@ static int decode_sections(struct objtool_file *file)
 	if (ret)
 		return ret;

+	ret = read_kcfi_types(file);
+	if (ret)
+		return ret;
+
 	ret = decode_instructions(file);
 	if (ret)
 		return ret;
@@ -3595,7 +3704,8 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instruction *insn)
 	int i;
 	struct instruction *prev_insn;

-	if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP)
+	if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP ||
+	    insn->type == INSN_KCFI_TYPEID)
 		return true;

 	/*
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index d7b99a737496..c4e277d41fd2 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -120,6 +120,19 @@ struct section *find_section_by_name(const struct elf *elf, const char *name)
 	return NULL;
 }

+bool for_each_section_by_name(const struct elf *elf, const char *name,
+			      bool (*callback)(struct section *))
+{
+	struct section *sec;
+
+	elf_hash_for_each_possible(section_name, sec, name_hash, str_hash(name)) {
+		if (!strcmp(sec->name, name) && !callback(sec))
+			return false;
+	}
+
+	return true;
+}
+
 static struct section *find_section_by_index(struct elf *elf, unsigned int idx)
 {
diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
index 9b19cc304195..3db5951e7aa9 100644
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -28,6 +28,7 @@ enum insn_type {
 	INSN_CLD,
 	INSN_TRAP,
 	INSN_ENDBR,
+	INSN_KCFI_TYPEID,
 	INSN_OTHER,
 };
diff --git a/tools/objtool/include/objtool/builtin.h b/tools/objtool/include/objtool/builtin.h
index c39dbfaef6dc..68409070bca5 100644
--- a/tools/objtool/include/objtool/builtin.h
+++ b/tools/objtool/include/objtool/builtin.h
@@ -10,7 +10,7 @@ extern const struct option check_options[];

 extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
 	    lto, vmlinux, mcount, noinstr, backup, sls, dryrun,
-	    ibt;
+	    ibt, kcfi;

 extern int cmd_parse_options(int argc, const char **argv, const char * const usage[]);
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 22ba7e2b816e..7fd3462ce32a 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -148,6 +148,8 @@ int elf_write(struct elf *elf);
 void elf_close(struct elf *elf);

 struct section *find_section_by_name(const struct elf *elf, const char *name);
+bool for_each_section_by_name(const struct elf *elf, const char *name,
+			      bool (*callback)(struct section *));
 struct symbol *find_func_by_offset(struct section *sec, unsigned long offset);
 struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset);
 struct symbol *find_symbol_by_name(const struct elf *elf, const char *name);
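The lookup objtool builds above is essentially a hash set keyed by (section, offset): remember every location that `.rela.kcfi_types` points at, then ask whether a decoded instruction sits on one of them. A stand-alone sketch of that shape follows; the names, the fixed bucket count, and the hash mix are invented for illustration — objtool sizes its mmap'd table from the text size and uses `sec_offset_hash()`:

```c
#include <stdlib.h>

/*
 * Minimal chained hash set over (section, offset) pairs, mirroring the
 * membership test is_kcfi_typeid() performs.  Illustrative only.
 */
#define NBUCKETS 1024

struct kcfi_loc {
	const void *sec;		/* stand-in for struct section * */
	unsigned long offset;
	struct kcfi_loc *next;
};

static struct kcfi_loc *buckets[NBUCKETS];

static unsigned long hash_loc(const void *sec, unsigned long offset)
{
	/* cheap pointer/offset mix; not objtool's sec_offset_hash() */
	return ((unsigned long)sec ^ (offset * 2654435761UL)) % NBUCKETS;
}

/* Record one location named by a .kcfi_types relocation. */
int add_kcfi_loc(const void *sec, unsigned long offset)
{
	struct kcfi_loc *l = malloc(sizeof(*l));

	if (!l)
		return -1;
	l->sec = sec;
	l->offset = offset;
	l->next = buckets[hash_loc(sec, offset)];
	buckets[hash_loc(sec, offset)] = l;
	return 0;
}

/* Does a decoded "instruction" at (sec, offset) sit on a type id? */
int is_kcfi_loc(const void *sec, unsigned long offset)
{
	const struct kcfi_loc *l;

	for (l = buckets[hash_loc(sec, offset)]; l; l = l->next)
		if (l->sec == sec && l->offset == offset)
			return 1;
	return 0;
}
```

In objtool itself the entries come from `.rela.kcfi_types` relocations and a second, name-based check catches manual `__kcfi_typeid_` annotations in assembly; this sketch only shows the membership-test core.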
From patchwork Fri Apr 29 20:36:40 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832756
Date: Fri, 29 Apr 2022 13:36:40 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-18-samitolvanen@google.com>
Subject: [RFC PATCH 17/21] x86/tools/relocs: Ignore __kcfi_typeid_ relocations
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org

Ignore __kcfi_typeid_ symbols. These are compiler-generated constants
that contain CFI type identifiers.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/tools/relocs.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index e2c5b296120d..2925074b9a58 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -56,6 +56,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
 	"^(xen_irq_disable_direct_reloc$|"
 	"xen_save_fl_direct_reloc$|"
 	"VDSO|"
+	"__kcfi_typeid_|"
 	"__crc_)",

 /*

From patchwork Fri Apr 29 20:36:41 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832757
Date: Fri, 29 Apr 2022 13:36:41 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-19-samitolvanen@google.com>
Subject: [RFC PATCH 18/21] x86: Add types to indirect called assembly functions
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org

With CONFIG_CFI_CLANG, assembly functions indirectly called from C code
must be annotated with type identifiers to pass CFI checking.
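The requirement the commit message describes can be modeled in plain C: conceptually, kCFI stores a hash of the function's prototype in front of the callee, and every indirect call site compares that hash before jumping. In the sketch below the "preamble" is a struct field and the compiler-generated check is an ordinary function; the hash constant and all names are invented for illustration, and in the kernel a mismatch traps rather than setting a flag:

```c
#include <stdint.h>

/*
 * Illustrative model of a kCFI-checked indirect call.  A function symbol
 * carries the type hash its entry point would be preceded by; the caller
 * verifies it against the hash expected for the pointer's prototype.
 */
struct sym {
	uint32_t type_hash;	/* what SYM_TYPED_FUNC_START would provide */
	int (*fn)(int);
};

#define HASH_INT_INT 0x7a3bc001u	/* made-up id for int (*)(int) */

int cfi_call(const struct sym *s, uint32_t expected, int arg, int *failed)
{
	if (s->type_hash != expected) {	/* the kernel would trap here */
		*failed = 1;
		return 0;
	}
	*failed = 0;
	return s->fn(arg);
}

int double_it(int x)
{
	return 2 * x;
}
```

An assembly function kept as plain SYM_FUNC_START corresponds to the untyped case: no hash precedes its entry point, so the first indirect call to it fails the check — which is why the functions touched below gain typed annotations.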
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/crypto/blowfish-x86_64-asm_64.S | 5 +++--
 arch/x86/lib/memcpy_64.S                 | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/crypto/blowfish-x86_64-asm_64.S b/arch/x86/crypto/blowfish-x86_64-asm_64.S
index 802d71582689..4a43e072d2d1 100644
--- a/arch/x86/crypto/blowfish-x86_64-asm_64.S
+++ b/arch/x86/crypto/blowfish-x86_64-asm_64.S
@@ -6,6 +6,7 @@
  */

 #include
+#include

 .file "blowfish-x86_64-asm.S"
 .text
@@ -141,7 +142,7 @@ SYM_FUNC_START(__blowfish_enc_blk)
 	RET;
 SYM_FUNC_END(__blowfish_enc_blk)

-SYM_FUNC_START(blowfish_dec_blk)
+SYM_TYPED_FUNC_START(blowfish_dec_blk)
 	/* input:
 	 *	%rdi: ctx
 	 *	%rsi: dst
@@ -332,7 +333,7 @@ SYM_FUNC_START(__blowfish_enc_blk_4way)
 	RET;
 SYM_FUNC_END(__blowfish_enc_blk_4way)

-SYM_FUNC_START(blowfish_dec_blk_4way)
+SYM_TYPED_FUNC_START(blowfish_dec_blk_4way)
 	/* input:
 	 *	%rdi: ctx
 	 *	%rsi: dst
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index d0d7b9bc6cad..e5d9b299577f 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -2,6 +2,7 @@
 /* Copyright 2002 Andi Kleen */

 #include
+#include
 #include
 #include
 #include
@@ -27,7 +28,7 @@
  * Output:
  * rax original destination
  */
-SYM_FUNC_START(__memcpy)
+__SYM_TYPED_FUNC_START(__memcpy, memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 		      "jmp memcpy_erms", X86_FEATURE_ERMS

From patchwork Fri Apr 29 20:36:42 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832759
Date: Fri, 29 Apr 2022 13:36:42 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-20-samitolvanen@google.com>
Subject: [RFC PATCH 19/21] x86/purgatory: Disable CFI
From: Sami Tolvanen <samitolvanen@google.com>
To: linux-kernel@vger.kernel.org

Disable CONFIG_CFI_CLANG for the stand-alone purgatory.ro.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers
Tested-by: Nick Desaulniers
Tested-by: Sedat Dilek
---
 arch/x86/purgatory/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
index ae53d54d7959..b3fa947fa38b 100644
--- a/arch/x86/purgatory/Makefile
+++ b/arch/x86/purgatory/Makefile
@@ -55,6 +55,10 @@ ifdef CONFIG_RETPOLINE
 PURGATORY_CFLAGS_REMOVE += $(RETPOLINE_CFLAGS)
 endif

+ifdef CONFIG_CFI_CLANG
+PURGATORY_CFLAGS_REMOVE += $(CC_FLAGS_CFI)
+endif
+
 CFLAGS_REMOVE_purgatory.o += $(PURGATORY_CFLAGS_REMOVE)
 CFLAGS_purgatory.o	+= $(PURGATORY_CFLAGS)

From patchwork Fri Apr 29 20:36:43 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832761
bh=c3P0Hz4r3G9OpLG/quSc8J3HkB5RqmI/gpWW3ubqRHw=; b=L3b4tsGo9U6EKhT1enH+lnwRGBAriMT04pxJOWGmkTh/LOTTSEf7+ZnnodamgedxF/ 0cwAbqdA5BTRqaVaWO9cIk3CrgrGdAh+jueAcHfV8Bg8TTAZ+khVx+B8eXyacA4mY4Xx K3TQUEqnaXw1lY5l03iwoFMiQRQsJmRAKWpBy5VlbUJzdl8A2l7wvJu19Rol8flqku9I WNZ/yUORvsGxlp+SHTywXjv/cQ3+dFZB9Bv+ISu7Xf8nMdBe2LwKPup9v4pkWEFDDPFS kaimPMTLgDV3p2dFB9fsB4bsR6Vkc62TMrM/JjgmNdYT0fQUUEBM5oeL8rAQdxi3N/ib ZNmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=c3P0Hz4r3G9OpLG/quSc8J3HkB5RqmI/gpWW3ubqRHw=; b=rIrVZikXuXexWP3jMRtRZHjFqhtICc3cD3F1ju0FoTMwj7DszbFs0/1KngW5Exv6xf 9SOJxDAF49nGyXx+LiD2soQ9wjpT2QldJ/zfajiUe75/TNkVaBnbCzc5RZpMPuRHLbsB pRFOPP6NDF3R8N1Nb86ge9M97hOx3Ln012BCdaZXO3pjhJYWr1h0+/BpZEbYFJ6WQ++k iUKh6gQimqqGY8vu46WB0ti9CppCEzlhkNPKfsUmjqOOaBQ9KHcQQIDsHIhGwnB+3sNW zCDhnJwitW7O0Ma/nL4GQq5rnWz0pnZgFL0WH9Rd21Y0DYE1N3nPDBEDtQCpRTpgvzNX 8Icg== X-Gm-Message-State: AOAM530MnGABhCUPCSYmmJmo0rHhTgGvqLq3lMfhgeLxQDO5Ezu1drr+ 0FE1/SCofVZL/FAYIg7n+TrGvWZLgea/mwsRqKM= X-Google-Smtp-Source: ABdhPJy48Ta9ddvMIF4e+65DRSdxotM1XKnrKu3boCyUAK2XttelwSr/WCmbVBiS9zrXdXlk0V9aV262+2v6W6Pe/+Y= X-Received: from samitolvanen1.mtv.corp.google.com ([2620:15c:201:2:351:bea9:f158:1021]) (user=samitolvanen job=sendgmr) by 2002:a81:8493:0:b0:2f7:d7c3:15f8 with SMTP id u141-20020a818493000000b002f7d7c315f8mr1216311ywf.196.1651264655480; Fri, 29 Apr 2022 13:37:35 -0700 (PDT) Date: Fri, 29 Apr 2022 13:36:43 -0700 In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com> Message-Id: <20220429203644.2868448-21-samitolvanen@google.com> Mime-Version: 1.0 References: <20220429203644.2868448-1-samitolvanen@google.com> X-Developer-Key: i=samitolvanen@google.com; a=openpgp; fpr=35CCFB63B283D6D3AEB783944CB5F6848BBC56EE X-Developer-Signature: v=1; a=openpgp-sha256; l=1427; h=from:subject; bh=qwzizA85BcxzgY6DCPbuVyLVCg+lMcvwRT1NDXtCdLs=; 
Subject: [RFC PATCH 20/21] x86/vdso: Disable CFI
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
X-Mailing-List: linux-hardening@vger.kernel.org

CC_FLAGS_LTO no longer includes CC_FLAGS_CFI, so filter these flags out as
well.

Signed-off-by: Sami Tolvanen
---
 arch/x86/entry/vdso/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 693f8b9031fb..abf41ef0f89e 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -91,7 +91,7 @@ ifneq ($(RETPOLINE_VDSO_CFLAGS),)
 endif
 endif
 
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+$(vobjs): KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
 
 #
 # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
@@ -151,6 +151,7 @@ KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
+KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_CFI),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
 KBUILD_CFLAGS_32 += -fno-stack-protector
 KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)

From patchwork Fri Apr 29 20:36:44 2022
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 12832760
Date: Fri, 29 Apr 2022 13:36:44 -0700
In-Reply-To: <20220429203644.2868448-1-samitolvanen@google.com>
Message-Id: <20220429203644.2868448-22-samitolvanen@google.com>
References: <20220429203644.2868448-1-samitolvanen@google.com>
Subject: [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG
From: Sami Tolvanen
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86@kernel.org,
    Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
    Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
    linux-hardening@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    llvm@lists.linux.dev, Sami Tolvanen
X-Mailing-List: linux-hardening@vger.kernel.org

Add CONFIG_CFI_CLANG error handling and allow the config to be selected on
x86_64.
Signed-off-by: Sami Tolvanen
---
 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/linkage.h |  7 ++++++
 arch/x86/kernel/traps.c        | 39 +++++++++++++++++++++++++++++++++-
 3 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b0142e01002e..01db5c5c4dde 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -108,6 +108,7 @@ config X86
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK		if X86_64
 	select ARCH_SUPPORTS_NUMA_BALANCING		if X86_64
 	select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP	if NR_CPUS <= 4096
+	select ARCH_SUPPORTS_CFI_CLANG			if X86_64
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_USE_BUILTIN_BSWAP
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 85865f1645bd..d20acf5ebae3 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -25,6 +25,13 @@
 #define RET	ret
 #endif
 
+#ifdef CONFIG_CFI_CLANG
+#define __CFI_TYPE(name)				\
+	.fill 10, 1, 0x90 ASM_NL			\
+	.4byte __kcfi_typeid_##name ASM_NL		\
+	.fill 2, 1, 0xcc
+#endif
+
 #else /* __ASSEMBLY__ */
 
 #ifdef CONFIG_SLS
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 1563fb995005..b9e46e6ed83b 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -295,6 +296,41 @@ static inline void handle_invalid_op(struct pt_regs *regs)
 		      ILL_ILLOPN, error_get_trap_addr(regs));
 }
 
+#ifdef CONFIG_CFI_CLANG
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
+{
+	char buffer[MAX_INSN_SIZE];
+	int offset;
+	struct insn insn;
+	unsigned long *target;
+
+	/*
+	 * The expected CFI check instruction sequence:
+	 *   cmpl    , -6(%reg)	; 7 bytes
+	 *   je      .Ltmp1	; 2 bytes
+	 *   ud2		; <- addr
+	 * .Ltmp1:
+	 *
+	 * Therefore, the target address is in a register that we can
+	 * decode from the cmpl instruction.
+	 */
+	if (copy_from_kernel_nofault(buffer, (void *)addr - 9, MAX_INSN_SIZE))
+		return NULL;
+	if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
+		return NULL;
+	if (insn.opcode.value != 0x81)
+		return NULL;
+
+	offset = insn_get_modrm_rm_off(&insn, regs);
+	if (offset < 0)
+		return NULL;
+
+	target = (void *)regs + offset;
+
+	return (void *)*target;
+}
+#endif
+
 static noinstr bool handle_bug(struct pt_regs *regs)
 {
 	bool handled = false;
@@ -312,7 +348,8 @@
 	 */
 	if (regs->flags & X86_EFLAGS_IF)
 		raw_local_irq_enable();
-	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
+	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
+	    report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
 		regs->ip += LEN_UD2;
 		handled = true;
 	}