From patchwork Tue May 7 23:46:43 2024
X-Patchwork-Submitter: Maxwell Bland
X-Patchwork-Id: 13657936
Date: Tue, 7 May 2024 18:46:43 -0500
From: Maxwell Bland
To: "open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)"
Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, Zi Shen Lim, Mark Rutland, Suzuki K Poulose, Mark Brown,
    linux-arm-kernel@lists.infradead.org, open list, Puranjay Mohan,
    Josh Poimboeuf
Subject: [PATCH bpf-next v3 2/3] arm64/cfi,bpf: Support kCFI + BPF on arm64

Currently, bpf_dispatcher_*_func() is marked with `__nocfi`, so calling
BPF programs through this interface does not cause CFI warnings. When
BPF programs are called directly from C, i.e. from BPF helpers or
struct_ops, CFI warnings are generated.

Implement proper kCFI prologues for the BPF programs and callbacks and
drop __nocfi for arm64. Fix the trampoline generation code to emit a
kCFI prologue when a struct_ops trampoline is being prepared.
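
For reviewers less familiar with kCFI, the caller-side check this patch
builds on can be sketched roughly as follows. This is an illustrative
sketch only, not the compiler-generated arm64 sequence: the helper name
kcfi_target_matches() is hypothetical, and it merely restates what the
patch relies on, namely a 32-bit type hash stored cfi_get_offset() == 4
bytes in front of every valid indirect-call target.

	#include <linux/types.h>	/* u32, bool */
	#include <linux/uaccess.h>	/* get_kernel_nofault() */
	#include <asm/cfi.h>		/* cfi_get_offset(), added by this patch */

	/*
	 * Illustrative sketch only (hypothetical helper).  A kCFI-instrumented
	 * caller compares the hash found in front of the branch target with the
	 * hash derived from the prototype it expects and traps on mismatch.
	 * This mirrors cfi_get_func_hash() added below.
	 */
	static bool kcfi_target_matches(void *target, u32 expected_hash)
	{
		u32 found;

		/* The hash word sits cfi_get_offset() == 4 bytes before entry. */
		if (get_kernel_nofault(found, target - cfi_get_offset()))
			return false;

		return found == expected_hash;
	}

This is also why the JIT emits the hash ahead of the prologue and why
prog->bpf_func ends up pointing cfi_get_offset() bytes past ctx.ro_image.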
Signed-off-by: Puranjay Mohan
---
 arch/arm64/include/asm/cfi.h    | 23 ++++++++++++++
 arch/arm64/kernel/alternative.c | 54 +++++++++++++++++++++++++++++++++
 arch/arm64/net/bpf_jit_comp.c   | 18 +++++++++--
 3 files changed, 93 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/cfi.h

diff --git a/arch/arm64/include/asm/cfi.h b/arch/arm64/include/asm/cfi.h
new file mode 100644
index 000000000000..670e191f8628
--- /dev/null
+++ b/arch/arm64/include/asm/cfi.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_CFI_H
+#define _ASM_ARM64_CFI_H
+
+#ifdef CONFIG_CFI_CLANG
+#define __bpfcall
+static inline int cfi_get_offset(void)
+{
+	return 4;
+}
+#define cfi_get_offset cfi_get_offset
+extern u32 cfi_bpf_hash;
+extern u32 cfi_bpf_subprog_hash;
+extern u32 cfi_get_func_hash(void *func);
+#else
+#define cfi_bpf_hash 0U
+#define cfi_bpf_subprog_hash 0U
+static inline u32 cfi_get_func_hash(void *func)
+{
+	return 0;
+}
+#endif /* CONFIG_CFI_CLANG */
+#endif /* _ASM_ARM64_CFI_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 8ff6610af496..1715da7df137 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -298,3 +299,56 @@ noinstr void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
 		updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
 }
 EXPORT_SYMBOL(alt_cb_patch_nops);
+
+#ifdef CONFIG_CFI_CLANG
+struct bpf_insn;
+
+/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
+extern unsigned int __bpf_prog_runX(const void *ctx,
+				    const struct bpf_insn *insn);
+
+/*
+ * Force a reference to the external symbol so the compiler generates
+ * __kcfi_typid.
+ */
+__ADDRESSABLE(__bpf_prog_runX);
+
+/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
+asm (
+"	.pushsection	.data..ro_after_init,\"aw\",@progbits	\n"
+"	.type	cfi_bpf_hash,@object				\n"
+"	.globl	cfi_bpf_hash					\n"
+"	.p2align	2, 0x0					\n"
+"cfi_bpf_hash:							\n"
+"	.word	__kcfi_typeid___bpf_prog_runX			\n"
+"	.size	cfi_bpf_hash, 4					\n"
+"	.popsection						\n"
+);
+
+/* Must match bpf_callback_t */
+extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
+
+__ADDRESSABLE(__bpf_callback_fn);
+
+/* u32 __ro_after_init cfi_bpf_subprog_hash = __kcfi_typeid___bpf_callback_fn; */
+asm (
+"	.pushsection	.data..ro_after_init,\"aw\",@progbits	\n"
+"	.type	cfi_bpf_subprog_hash,@object			\n"
+"	.globl	cfi_bpf_subprog_hash				\n"
+"	.p2align	2, 0x0					\n"
+"cfi_bpf_subprog_hash:						\n"
+"	.word	__kcfi_typeid___bpf_callback_fn			\n"
+"	.size	cfi_bpf_subprog_hash, 4				\n"
+"	.popsection						\n"
+);
+
+u32 cfi_get_func_hash(void *func)
+{
+	u32 hash;
+
+	if (get_kernel_nofault(hash, func - cfi_get_offset()))
+		return 0;
+
+	return hash;
+}
+#endif
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 76b91f36c729..703247457409 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -162,6 +163,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
 	emit(insn, ctx);
 }
 
+static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
+{
+	if (IS_ENABLED(CONFIG_CFI_CLANG))
+		emit(hash, ctx);
+}
+
 /*
  * Kernel addresses in the vmalloc space use at most 48 bits, and the
  * remaining bits are guaranteed to be 0x1. So we can compose the address
@@ -337,6 +344,7 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
 	 *
 	 */
 
+	emit_kcfi(is_main_prog ? cfi_bpf_hash : cfi_bpf_subprog_hash, ctx);
 	/* bpf function may be invoked by 3 instruction types:
 	 * 1. bl, attached via freplace to bpf prog via short jump
 	 * 2. br, attached via freplace to bpf prog via long jump
@@ -1806,9 +1814,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		jit_data->ro_header = ro_header;
 	}
 
-	prog->bpf_func = (void *)ctx.ro_image;
+	prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
 	prog->jited = 1;
-	prog->jited_len = prog_size;
+	prog->jited_len = prog_size - cfi_get_offset();
 
 	if (!prog->is_func || extra_pass) {
 		int i;
@@ -2072,6 +2080,12 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	/* return address locates above FP */
 	retaddr_off = stack_size + 8;
 
+	if (flags & BPF_TRAMP_F_INDIRECT) {
+		/*
+		 * Indirect call for bpf_struct_ops
+		 */
+		emit_kcfi(cfi_get_func_hash(func_addr), ctx);
+	}
 	/* bpf trampoline may be invoked by 3 instruction types:
 	 * 1. bl, attached to bpf prog or kernel function via short jump
 	 * 2. br, attached to bpf prog or kernel function via long jump