From patchwork Fri Jul 10 08:25:46 2020
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11656019
From: Anup Patel
To: Palmer Dabbelt, Palmer Dabbelt, Paul Walmsley, Albert Ou, Paolo Bonzini
Cc: Alexander Graf, Atish Patra, Alistair Francis, Damien Le Moal, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v13 15/17] RISC-V: KVM: Add SBI v0.1 support
Date: Fri, 10 Jul 2020 13:55:46 +0530
Message-Id: <20200710082548.123180-16-anup.patel@wdc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200710082548.123180-1-anup.patel@wdc.com>
References: <20200710082548.123180-1-anup.patel@wdc.com>
MIME-Version: 1.0
From: Atish Patra

The KVM host kernel runs in HS-mode, so we need to handle the SBI calls
coming from the guest kernel running in VS-mode. This patch adds SBI v0.1
support to KVM RISC-V. Almost all SBI v0.1 calls are implemented in the
KVM kernel module, except the CONSOLE_GETCHAR and CONSOLE_PUTCHAR calls,
which are forwarded to user space because they cannot be implemented in
kernel space. In the future, when we implement SBI v0.2 for the Guest, we
will also forward SBI v0.2 experimental and vendor extension calls to
user space.
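As background (not part of this patch): a legacy SBI v0.1 call from the guest
is just an ecall with the extension number in a7 and the argument in a0. The
sketch below, with a hypothetical helper name, shows the guest-side view of
CONSOLE_PUTCHAR (extension 0x1 per the SBI v0.1 specification), which is what
ends up in the new kvm_riscv_vcpu_sbi_ecall() handler:

/* Illustrative only, not part of this patch: guest-side SBI v0.1
 * CONSOLE_PUTCHAR. Legacy SBI passes the extension number in a7 (0x1
 * per the SBI v0.1 spec) and the character in a0; the ecall traps to
 * HS-mode, where kvm_riscv_vcpu_sbi_ecall() dispatches on a7.
 */
static inline long sbi_v01_console_putchar(int ch)
{
	register unsigned long a0 asm("a0") = (unsigned long)ch;
	register unsigned long a7 asm("a7") = 0x1;

	asm volatile ("ecall" : "+r" (a0) : "r" (a7) : "memory");

	return a0;
}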
Signed-off-by: Atish Patra
Signed-off-by: Anup Patel
Acked-by: Paolo Bonzini
Reviewed-by: Paolo Bonzini
---
 arch/riscv/include/asm/kvm_host.h |  10 ++
 arch/riscv/kvm/Makefile           |   2 +-
 arch/riscv/kvm/vcpu.c             |   9 ++
 arch/riscv/kvm/vcpu_exit.c        |   4 +
 arch/riscv/kvm/vcpu_sbi.c         | 173 ++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |   8 ++
 6 files changed, 205 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kvm/vcpu_sbi.c

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 4571938e937f..ae43bd204284 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -79,6 +79,10 @@ struct kvm_mmio_decode {
 	int return_handled;
 };
 
+struct kvm_sbi_context {
+	int return_handled;
+};
+
 #define KVM_MMU_PAGE_CACHE_NR_OBJS	32
 
 struct kvm_mmu_page_cache {
@@ -189,6 +193,9 @@ struct kvm_vcpu_arch {
 	/* MMIO instruction details */
 	struct kvm_mmio_decode mmio_decode;
 
+	/* SBI context */
+	struct kvm_sbi_context sbi_context;
+
 	/* Cache pages needed to program page tables with spinlock held */
 	struct kvm_mmu_page_cache mmu_page_cache;
 
@@ -260,4 +267,7 @@ bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask);
 void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
 
+int kvm_riscv_vcpu_sbi_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
 #endif /* __RISCV_KVM_HOST_H__ */
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 3e0c7558320d..b56dc1650d2c 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -9,6 +9,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm
 kvm-objs := $(common-objs-y)
 
 kvm-objs += main.o vm.o vmid.o tlb.o mmu.o
-kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o vcpu_timer.o
+kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o vcpu_timer.o vcpu_sbi.o
 
 obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 96605f773933..adb0815951aa 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -882,6 +882,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	/* Process SBI value returned from user-space */
+	if (run->exit_reason == KVM_EXIT_RISCV_SBI) {
+		ret = kvm_riscv_vcpu_sbi_return(vcpu, vcpu->run);
+		if (ret) {
+			srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx);
+			return ret;
+		}
+	}
+
 	if (run->immediate_exit) {
 		srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx);
 		return -EINTR;
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 6ef833e29e71..defe92594cd2 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -647,6 +647,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
 			ret = stage2_page_fault(vcpu, run, trap);
 		break;
+	case EXC_SUPERVISOR_SYSCALL:
+		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
+			ret = kvm_riscv_vcpu_sbi_ecall(vcpu, run);
+		break;
 	default:
 		break;
 	};
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
new file mode 100644
index 000000000000..9d1d25cf217f
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -0,0 +1,173 @@
+// SPDX-License-Identifier: GPL-2.0
+/**
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define SBI_VERSION_MAJOR 0
+#define SBI_VERSION_MINOR 1
+
+static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu,
+				    struct kvm_run *run, u32 type)
+{
+	int i;
+	struct kvm_vcpu *tmp;
+
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm)
+		tmp->arch.power_off = true;
+	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
+
+	memset(&run->system_event, 0, sizeof(run->system_event));
+	run->system_event.type = type;
+	run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+}
+
+static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu,
+				       struct kvm_run *run)
+{
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+
+	vcpu->arch.sbi_context.return_handled = 0;
+	vcpu->stat.ecall_exit_stat++;
+	run->exit_reason = KVM_EXIT_RISCV_SBI;
+	run->riscv_sbi.extension_id = cp->a7;
+	run->riscv_sbi.function_id = cp->a6;
+	run->riscv_sbi.args[0] = cp->a0;
+	run->riscv_sbi.args[1] = cp->a1;
+	run->riscv_sbi.args[2] = cp->a2;
+	run->riscv_sbi.args[3] = cp->a3;
+	run->riscv_sbi.args[4] = cp->a4;
+	run->riscv_sbi.args[5] = cp->a5;
+	run->riscv_sbi.ret[0] = cp->a0;
+	run->riscv_sbi.ret[1] = cp->a1;
+}
+
+int kvm_riscv_vcpu_sbi_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+
+	/* Handle SBI return only once */
+	if (vcpu->arch.sbi_context.return_handled)
+		return 0;
+	vcpu->arch.sbi_context.return_handled = 1;
+
+	/* Update return values */
+	cp->a0 = run->riscv_sbi.ret[0];
+	cp->a1 = run->riscv_sbi.ret[1];
+
+	/* Move to next instruction */
+	vcpu->arch.guest_context.sepc += 4;
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	ulong hmask;
+	int i, ret = 1;
+	u64 next_cycle;
+	struct kvm_vcpu *rvcpu;
+	bool next_sepc = true;
+	struct cpumask cm, hm;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cpu_trap utrap = { 0 };
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+
+	if (!cp)
+		return -EINVAL;
+
+	switch (cp->a7) {
+	case SBI_EXT_0_1_CONSOLE_GETCHAR:
+	case SBI_EXT_0_1_CONSOLE_PUTCHAR:
+		/*
+		 * The CONSOLE_GETCHAR/CONSOLE_PUTCHAR SBI calls cannot be
+		 * handled in kernel so we forward these to user-space
+		 */
+		kvm_riscv_vcpu_sbi_forward(vcpu, run);
+		next_sepc = false;
+		ret = 0;
+		break;
+	case SBI_EXT_0_1_SET_TIMER:
+#if __riscv_xlen == 32
+		next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
+#else
+		next_cycle = (u64)cp->a0;
+#endif
+		kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
+		break;
+	case SBI_EXT_0_1_CLEAR_IPI:
+		kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
+		break;
+	case SBI_EXT_0_1_SEND_IPI:
+		if (cp->a0)
+			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
+							   &utrap);
+		else
+			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
+		if (utrap.scause) {
+			utrap.sepc = cp->sepc;
+			kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
+			next_sepc = false;
+			break;
+		}
+		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
+			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
+			kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
+		}
+		break;
+	case SBI_EXT_0_1_SHUTDOWN:
+		kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN);
+		next_sepc = false;
+		ret = 0;
+		break;
+	case SBI_EXT_0_1_REMOTE_FENCE_I:
+	case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
+	case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
+		if (cp->a0)
+			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
+							   &utrap);
+		else
+			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
+		if (utrap.scause) {
+			utrap.sepc = cp->sepc;
+			kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
+			next_sepc = false;
+			break;
+		}
+		cpumask_clear(&cm);
+		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
+			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
+			if (rvcpu->cpu < 0)
+				continue;
+			cpumask_set_cpu(rvcpu->cpu, &cm);
+		}
+		riscv_cpuid_to_hartid_mask(&cm, &hm);
+		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
+			sbi_remote_fence_i(cpumask_bits(&hm));
+		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
+			sbi_remote_hfence_vvma(cpumask_bits(&hm),
+					       cp->a1, cp->a2);
+		else
+			sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
+						    cp->a1, cp->a2, cp->a3);
+		break;
+	default:
+		/* Return error for unsupported SBI calls */
+		cp->a0 = SBI_ERR_NOT_SUPPORTED;
+		break;
+	};
+
+	if (next_sepc)
+		cp->sepc += 4;
+
+	return ret;
+}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4fdf30316582..10ea1cec7904 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -248,6 +248,7 @@ struct kvm_hyperv_exit {
 #define KVM_EXIT_IOAPIC_EOI       26
 #define KVM_EXIT_HYPERV           27
 #define KVM_EXIT_ARM_NISV         28
+#define KVM_EXIT_RISCV_SBI        29
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -412,6 +413,13 @@ struct kvm_run {
 			__u64 esr_iss;
 			__u64 fault_ipa;
 		} arm_nisv;
+		/* KVM_EXIT_RISCV_SBI */
+		struct {
+			unsigned long extension_id;
+			unsigned long function_id;
+			unsigned long args[6];
+			unsigned long ret[2];
+		} riscv_sbi;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
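For completeness, here is a rough user-space sketch (not part of this patch,
handler name hypothetical) of how a VMM could complete the forwarded console
calls once KVM_RUN returns with KVM_EXIT_RISCV_SBI. The extension IDs
(0x1 CONSOLE_PUTCHAR, 0x2 CONSOLE_GETCHAR) and the -2 SBI_ERR_NOT_SUPPORTED
value come from the SBI specification, and <linux/kvm.h> is assumed to carry
the riscv_sbi exit struct added above:

/*
 * Illustrative user-space sketch: complete the CONSOLE_PUTCHAR and
 * CONSOLE_GETCHAR calls forwarded via KVM_EXIT_RISCV_SBI.  Assumes
 * <linux/kvm.h> from a kernel with this patch applied.
 */
#include <stdio.h>
#include <linux/kvm.h>

static void handle_riscv_sbi_exit(struct kvm_run *run)
{
	switch (run->riscv_sbi.extension_id) {
	case 0x1:	/* SBI v0.1 CONSOLE_PUTCHAR, per the SBI spec */
		putchar((int)run->riscv_sbi.args[0]);
		run->riscv_sbi.ret[0] = 0;
		break;
	case 0x2:	/* SBI v0.1 CONSOLE_GETCHAR */
		run->riscv_sbi.ret[0] = (unsigned long)getchar();
		break;
	default:
		run->riscv_sbi.ret[0] = (unsigned long)-2; /* SBI_ERR_NOT_SUPPORTED */
		break;
	}
	/*
	 * On the next KVM_RUN, kvm_riscv_vcpu_sbi_return() copies ret[0]/ret[1]
	 * back into the guest's a0/a1 and advances sepc past the ecall.
	 */
}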