From patchwork Thu Jul 14 21:29:32 2022
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 12918493
From: Elliot Berman
To: Bjorn Andersson, Lorenzo Pieralisi, Sudeep Holla, Marc Zyngier
Cc: Elliot Berman, Trilok Soni, Murali Nalajala, Srivatsa Vaddagiri,
 Carl van Schaik, Andy Gross, Rob Herring, Krzysztof Kozlowski,
 Jonathan Corbet, Will Deacon, Catalin Marinas
Subject: [PATCH v2 03/11] arm64: gunyah: Add Gunyah hypercalls ABI
Date: Thu, 14 Jul 2022 14:29:32 -0700
Message-ID: <20220714212940.2988436-4-quic_eberman@quicinc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220714212940.2988436-1-quic_eberman@quicinc.com>
References: <20220223233729.1571114-1-quic_eberman@quicinc.com>
 <20220714212940.2988436-1-quic_eberman@quicinc.com>
Add initial support to perform Gunyah hypercalls. The arm64 ABI for
Gunyah hypercalls generally follows the SMC Calling Convention.

Signed-off-by: Elliot Berman
---
 MAINTAINERS                     |   1 +
 arch/arm64/include/asm/gunyah.h | 134 ++++++++++++++++++++++++++++++++
 2 files changed, 135 insertions(+)
 create mode 100644 arch/arm64/include/asm/gunyah.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b36bd47bcaaa..1d098bdba5c9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8742,6 +8742,7 @@ L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml
 F:	Documentation/virt/gunyah/
+F:	arch/arm64/include/asm/gunyah.h
 
 HABANALABS PCI DRIVER
 M:	Oded Gabbay
diff --git a/arch/arm64/include/asm/gunyah.h b/arch/arm64/include/asm/gunyah.h
new file mode 100644
index 000000000000..2dbef08d58d7
--- /dev/null
+++ b/arch/arm64/include/asm/gunyah.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef __ASM_GUNYAH_H
+#define __ASM_GUNYAH_H
+
+#include <linux/arm-smccc.h>
+#include <linux/types.h>
+
+#define GH_CALL_TYPE_PLATFORM_CALL	0
+#define GH_CALL_TYPE_HYPERCALL		2
+#define GH_CALL_TYPE_SERVICE		3
+#define GH_CALL_TYPE_SHIFT		14
+#define GH_CALL_FUNCTION_NUM_MASK	0x3fff
+
+#define GH_SERVICE(fn)	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32, \
+					   ARM_SMCCC_OWNER_VENDOR_HYP, \
+					   (GH_CALL_TYPE_SERVICE << GH_CALL_TYPE_SHIFT) \
+					    | ((fn) & GH_CALL_FUNCTION_NUM_MASK))
+
+#define GH_HYPERCALL(fn)	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \
+						   ARM_SMCCC_OWNER_VENDOR_HYP, \
+						   (GH_CALL_TYPE_HYPERCALL << GH_CALL_TYPE_SHIFT) \
+						    | ((fn) & GH_CALL_FUNCTION_NUM_MASK))
+
+#define ___gh_count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __gh_count_args(...) \
+	___gh_count_args(_, ## __VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __gh_skip_0(...)			__VA_ARGS__
+#define __gh_skip_1(a, ...)			__VA_ARGS__
+#define __gh_skip_2(a, b, ...)			__VA_ARGS__
+#define __gh_skip_3(a, b, c, ...)		__VA_ARGS__
+#define __gh_skip_4(a, b, c, d, ...)		__VA_ARGS__
+#define __gh_skip_5(a, b, c, d, e, ...)		__VA_ARGS__
+#define __gh_skip_6(a, b, c, d, e, f, ...)	__VA_ARGS__
+#define __gh_skip_7(a, b, c, d, e, f, g, ...)	__VA_ARGS__
+#define __gh_skip_8(a, b, c, d, e, f, g, h, ...) __VA_ARGS__
+#define __gh_to_res(nargs, ...)			__gh_skip_ ## nargs (__VA_ARGS__)
+
+#define __gh_declare_arg_0(...)
+
+#define __gh_declare_arg_1(arg1, ...)	\
+	.a1 = (arg1)
+
+#define __gh_declare_arg_2(arg1, arg2, ...)	\
+	__gh_declare_arg_1(arg1),		\
+	.a2 = (arg2)
+
+#define __gh_declare_arg_3(arg1, arg2, arg3, ...)	\
+	__gh_declare_arg_2(arg1, arg2),			\
+	.a3 = (arg3)
+
+#define __gh_declare_arg_4(arg1, arg2, arg3, arg4, ...)	\
+	__gh_declare_arg_3(arg1, arg2, arg3),			\
+	.a4 = (arg4)
+
+#define __gh_declare_arg_5(arg1, arg2, arg3, arg4, arg5, ...)	\
+	__gh_declare_arg_4(arg1, arg2, arg3, arg4),		\
+	.a5 = (arg5)
+
+#define __gh_declare_arg_6(arg1, arg2, arg3, arg4, arg5, arg6, ...)	\
+	__gh_declare_arg_5(arg1, arg2, arg3, arg4, arg5),		\
+	.a6 = (arg6)
+
+#define __gh_declare_arg_7(arg1, arg2, arg3, arg4, arg5, arg6, arg7, ...)	\
+	__gh_declare_arg_6(arg1, arg2, arg3, arg4, arg5, arg6),			\
+	.a7 = (arg7)
+
+#define __gh_declare_arg_8(arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, ...)	\
+	__gh_declare_arg_7(arg1, arg2, arg3, arg4, arg5, arg6, arg7),			\
+	.a8 = (arg8)
+
+#define ___gh_declare_args(nargs)	__gh_declare_arg_ ## nargs
+#define __gh_declare_args(nargs)	___gh_declare_args(nargs)
+#define _gh_declare_args(nargs, ...)	__gh_declare_args(nargs)(__VA_ARGS__)
+
+#define __gh_assign_res_0(...)
+
+#define __gh_assign_res_1(r1) \
+	r1 = res.a0
+
+#define __gh_assign_res_2(r1, r2)	\
+	__gh_assign_res_1(r1);		\
+	r2 = res.a1
+
+#define __gh_assign_res_3(r1, r2, r3)	\
+	__gh_assign_res_2(r1, r2);	\
+	r3 = res.a2
+
+#define __gh_assign_res_4(r1, r2, r3, r4)	\
+	__gh_assign_res_3(r1, r2, r3);		\
+	r4 = res.a3
+
+#define __gh_assign_res_5(r1, r2, r3, r4, r5)	\
+	__gh_assign_res_4(r1, r2, r3, r4);	\
+	r5 = res.a4
+
+#define __gh_assign_res_6(r1, r2, r3, r4, r5, r6)	\
+	__gh_assign_res_5(r1, r2, r3, r4, r5);		\
+	r6 = res.a5
+
+#define __gh_assign_res_7(r1, r2, r3, r4, r5, r6, r7)	\
+	__gh_assign_res_6(r1, r2, r3, r4, r5, r6);	\
+	r7 = res.a6
+
+#define __gh_assign_res_8(r1, r2, r3, r4, r5, r6, r7, r8)	\
+	__gh_assign_res_7(r1, r2, r3, r4, r5, r6, r7);		\
+	r8 = res.a7
+
+#define ___gh_assign_res(nargs)	__gh_assign_res_ ## nargs
+#define __gh_assign_res(nargs)	___gh_assign_res(nargs)
+#define _gh_assign_res(...)	__gh_assign_res(__gh_count_args(__VA_ARGS__))(__VA_ARGS__)
+
+/**
+ * arch_gh_hypercall() - Performs an AArch64-specific call into hypervisor using Gunyah ABI
+ * @hcall_num: Hypercall function ID to invoke
+ * @nargs: Number of input arguments
+ * @...: First nargs are the input arguments. Remaining arguments are output variables.
+ */
+#define arch_gh_hypercall(hcall_num, nargs, ...)			\
+	do {								\
+		struct arm_smccc_1_2_regs res;				\
+		struct arm_smccc_1_2_regs args = {			\
+			.a0 = hcall_num,				\
+			_gh_declare_args(nargs, __VA_ARGS__)		\
+		};							\
+		arm_smccc_1_2_hvc(&args, &res);				\
+		_gh_assign_res(__gh_to_res(nargs, __VA_ARGS__));	\
+	} while (0)
+
+#endif
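
For illustration, a minimal usage sketch of arch_gh_hypercall() follows. It is
not part of this patch: the GH_HYPERCALL_EXAMPLE name, the function number
0x0012, the gh_example_call() wrapper, and the error handling are all
hypothetical, chosen only to show how the first nargs variadic arguments become
inputs and the remaining ones receive the hypervisor's result registers.

/*
 * Hypothetical caller of arch_gh_hypercall(). With nargs = 2, capid and
 * flags populate args.a1/args.a2 (the call ID sits in args.a0), and the
 * two trailing arguments receive res.a0/res.a1 after the HVC returns.
 * The function number and the "nonzero a0 means failure" convention are
 * assumptions made for this sketch only.
 */
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/gunyah.h>

#define GH_HYPERCALL_EXAMPLE	GH_HYPERCALL(0x0012)	/* hypothetical fn number */

static int gh_example_call(u64 capid, u64 flags, u64 *resp)
{
	u64 ret, out;

	/* inputs: capid -> a1, flags -> a2; outputs: ret <- a0, out <- a1 */
	arch_gh_hypercall(GH_HYPERCALL_EXAMPLE, 2, capid, flags, ret, out);

	*resp = out;
	return ret ? -EINVAL : 0;
}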