From patchwork Fri Nov 5 23:58:48 2021
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Heinrich Schuchardt, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Paul Walmsley, Vincent Chen, pbonzini@redhat.com, Sean Christopherson
Subject: [PATCH v4 1/5] RISC-V: KVM: Mark the existing SBI implementation as v01
Date: Fri, 5 Nov 2021 16:58:48 -0700
Message-Id: <20211105235852.3011900-2-atish.patra@wdc.com>
In-Reply-To: <20211105235852.3011900-1-atish.patra@wdc.com>
References: <20211105235852.3011900-1-atish.patra@wdc.com>

The existing SBI implementation follows the v0.1 specification. The latest specification, v0.2, allows more scalability and performance improvements. Rename the existing implementation as v01 and provide a way to allow future extensions.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |  29 +++++
 arch/riscv/kvm/vcpu_sbi.c             | 147 +++++++++++++++++++++-----
 2 files changed, 147 insertions(+), 29 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h new file mode 100644 index 000000000000..1a4cb0db2d0b --- /dev/null +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/** + * Copyright (c) 2021 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#ifndef __RISCV_KVM_VCPU_SBI_H__ +#define __RISCV_KVM_VCPU_SBI_H__ + +#define KVM_SBI_VERSION_MAJOR 0 +#define KVM_SBI_VERSION_MINOR 2 + +struct kvm_vcpu_sbi_extension { + unsigned long extid_start; + unsigned long extid_end; + /** + * SBI extension handler. It can be defined for a given extension or group of + * extensions. But it should always return Linux error codes rather than SBI + * specific error codes.
+ */ + int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, struct kvm_cpu_trap *utrap, + bool *exit); +}; + +const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid); +#endif /* __RISCV_KVM_VCPU_SBI_H__ */ diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index eb3c045edf11..05cab5f27eee 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -12,9 +12,25 @@ #include #include #include +#include -#define SBI_VERSION_MAJOR 0 -#define SBI_VERSION_MINOR 1 +static int kvm_linux_err_map_sbi(int err) +{ + switch (err) { + case 0: + return SBI_SUCCESS; + case -EPERM: + return SBI_ERR_DENIED; + case -EINVAL: + return SBI_ERR_INVALID_PARAM; + case -EFAULT: + return SBI_ERR_INVALID_ADDRESS; + case -EOPNOTSUPP: + return SBI_ERR_NOT_SUPPORTED; + default: + return SBI_ERR_FAILURE; + }; +} static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) @@ -72,16 +88,17 @@ static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu, run->exit_reason = KVM_EXIT_SYSTEM_EVENT; } -int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) +static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, + bool *exit) { ulong hmask; - int i, ret = 1; + int i, ret = 0; u64 next_cycle; struct kvm_vcpu *rvcpu; - bool next_sepc = true; struct cpumask cm, hm; struct kvm *kvm = vcpu->kvm; - struct kvm_cpu_trap utrap = { 0 }; struct kvm_cpu_context *cp = &vcpu->arch.guest_context; if (!cp) @@ -95,8 +112,7 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) * handled in kernel so we forward these to user-space */ kvm_riscv_vcpu_sbi_forward(vcpu, run); - next_sepc = false; - ret = 0; + *exit = true; break; case SBI_EXT_0_1_SET_TIMER: #if __riscv_xlen == 32 @@ -104,47 +120,42 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) #else next_cycle = (u64)cp->a0; #endif - kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); + ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); break; case SBI_EXT_0_1_CLEAR_IPI: - kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT); + ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT); break; case SBI_EXT_0_1_SEND_IPI: if (cp->a0) hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, - &utrap); + utrap); else hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; - if (utrap.scause) { - utrap.sepc = cp->sepc; - kvm_riscv_vcpu_trap_redirect(vcpu, &utrap); - next_sepc = false; + if (utrap->scause) break; - } + for_each_set_bit(i, &hmask, BITS_PER_LONG) { rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); - kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT); + ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT); + if (ret < 0) + break; } break; case SBI_EXT_0_1_SHUTDOWN: kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN); - next_sepc = false; - ret = 0; + *exit = true; break; case SBI_EXT_0_1_REMOTE_FENCE_I: case SBI_EXT_0_1_REMOTE_SFENCE_VMA: case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID: if (cp->a0) hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, - &utrap); + utrap); else hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; - if (utrap.scause) { - utrap.sepc = cp->sepc; - kvm_riscv_vcpu_trap_redirect(vcpu, &utrap); - next_sepc = false; + if (utrap->scause) break; - } + cpumask_clear(&cm); for_each_set_bit(i, &hmask, BITS_PER_LONG) { rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); @@ -154,22 +165,100 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu 
*vcpu, struct kvm_run *run) } riscv_cpuid_to_hartid_mask(&cm, &hm); if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I) - sbi_remote_fence_i(cpumask_bits(&hm)); + ret = sbi_remote_fence_i(cpumask_bits(&hm)); else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) - sbi_remote_hfence_vvma(cpumask_bits(&hm), + ret = sbi_remote_hfence_vvma(cpumask_bits(&hm), cp->a1, cp->a2); else - sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), + ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), cp->a1, cp->a2, cp->a3); break; default: + ret = -EINVAL; + break; + } + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { + .extid_start = SBI_EXT_0_1_SET_TIMER, + .extid_end = SBI_EXT_0_1_SHUTDOWN, + .handler = kvm_sbi_ext_v01_handler, +}; + +static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { + &vcpu_sbi_ext_v01, +}; + +const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid) +{ + int i = 0; + + for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) { + if (sbi_ext[i]->extid_start <= extid && + sbi_ext[i]->extid_end >= extid) + return sbi_ext[i]; + } + + return NULL; +} + +int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + int ret = 1; + bool next_sepc = true; + bool userspace_exit = false; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + const struct kvm_vcpu_sbi_extension *sbi_ext; + struct kvm_cpu_trap utrap = { 0 }; + unsigned long out_val = 0; + bool ext_is_v01 = false; + + if (!cp) + return -EINVAL; + + sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7); + if (sbi_ext && sbi_ext->handler) { + if (cp->a7 >= SBI_EXT_0_1_SET_TIMER && + cp->a7 <= SBI_EXT_0_1_SHUTDOWN) + ext_is_v01 = true; + ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit); + } else { /* Return error for unsupported SBI calls */ cp->a0 = SBI_ERR_NOT_SUPPORTED; - break; + goto ecall_done; } + /* Handle special error cases i.e trap, exit or userspace forward */ + if (utrap.scause) { + /* No need to increment sepc or exit ioctl loop */ + ret = 1; + utrap.sepc = cp->sepc; + kvm_riscv_vcpu_trap_redirect(vcpu, &utrap); + next_sepc = false; + goto ecall_done; + } + + /* Exit ioctl loop or Propagate the error code the guest */ + if (userspace_exit) { + next_sepc = false; + ret = 0; + } else { + /** + * SBI extension handler always returns an Linux error code. Convert + * it to the SBI specific error code that can be propagated the SBI + * caller. 
+ */ + ret = kvm_linux_err_map_sbi(ret); + cp->a0 = ret; + ret = 1; + } +ecall_done: if (next_sepc) cp->sepc += 4; + if (!ext_is_v01) + cp->a1 = out_val; return ret; }
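To illustrate the dispatch model this first patch introduces, the following is a minimal, self-contained sketch (not the kernel code itself) of the range-based lookup that kvm_vcpu_sbi_find_ext() performs over the sbi_ext[] table. The descriptor fields mirror struct kvm_vcpu_sbi_extension from the patch; the simplified handler signature, the demo handler and the main() driver are illustrative only.

/*
 * Userspace model of the range-based extension lookup. A descriptor
 * claims a contiguous range of SBI extension IDs; the first descriptor
 * whose [extid_start, extid_end] range brackets the requested ID wins.
 */
#include <stdio.h>
#include <stddef.h>

struct sbi_ext_desc {
	unsigned long extid_start;
	unsigned long extid_end;
	int (*handler)(unsigned long fid);	/* simplified for the demo */
};

static int demo_v01_handler(unsigned long fid)
{
	printf("v0.1 style call, function 0x%lx\n", fid);
	return 0;
}

/* SBI v0.1 calls use extension IDs 0x0 (SET_TIMER) through 0x8 (SHUTDOWN) */
static const struct sbi_ext_desc exts[] = {
	{ .extid_start = 0x0, .extid_end = 0x8, .handler = demo_v01_handler },
};

static const struct sbi_ext_desc *find_ext(unsigned long extid)
{
	for (size_t i = 0; i < sizeof(exts) / sizeof(exts[0]); i++) {
		if (exts[i].extid_start <= extid && extid <= exts[i].extid_end)
			return &exts[i];
	}
	return NULL;	/* unknown extension, maps to SBI_ERR_NOT_SUPPORTED */
}

int main(void)
{
	const struct sbi_ext_desc *e = find_ext(0x0);	/* SET_TIMER */

	return (e && e->handler) ? e->handler(0) : 1;
}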
From patchwork Fri Nov 5 23:58:49 2021
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Heinrich Schuchardt, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Paul Walmsley, Vincent Chen, pbonzini@redhat.com, Sean Christopherson
Subject: [PATCH v4 2/5] RISC-V: KVM: Reorganize SBI code by moving SBI v0.1 to its own file
Date: Fri, 5 Nov 2021 16:58:49 -0700
Message-Id: <20211105235852.3011900-3-atish.patra@wdc.com>
In-Reply-To: <20211105235852.3011900-1-atish.patra@wdc.com>
References: <20211105235852.3011900-1-atish.patra@wdc.com>

With SBI v0.2, more SBI extensions may be added in the future, so it makes sense to group related extensions in separate files. The guest kernel will choose the appropriate SBI version dynamically. Move the existing v0.1 implementation to a separate file so that it can be removed in the future without much conflict.
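The build-time detail worth noting before the diff below: the v0.1 descriptor only exists when CONFIG_RISCV_SBI_V01 is enabled, otherwise a stub descriptor keeps the sbi_ext[] table well-formed. A rough sketch of that pattern follows; the _demo names and standalone layout are illustrative, the only assumption being that -1UL can never bracket a valid extension ID.

/*
 * Sketch of the compile-time fallback used in vcpu_sbi.c: when v0.1
 * support is compiled out, its slot in the extension table becomes an
 * empty descriptor that can never match any extension ID.
 */
struct sbi_ext_desc_demo {
	unsigned long extid_start;
	unsigned long extid_end;
	int (*handler)(void);
};

#ifdef CONFIG_RISCV_SBI_V01
/* The real descriptor is linked in from vcpu_sbi_v01.o in this case */
extern const struct sbi_ext_desc_demo vcpu_sbi_ext_v01_demo;
#else
/* -1UL start/end never brackets a valid ID and the handler stays NULL */
static const struct sbi_ext_desc_demo vcpu_sbi_ext_v01_demo = {
	.extid_start = -1UL,
	.extid_end = -1UL,
	.handler = NULL,
};
#endif

static const struct sbi_ext_desc_demo *sbi_ext_demo[] = {
	&vcpu_sbi_ext_v01_demo,
};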
Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/include/asm/kvm_vcpu_sbi.h | 2 + arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/vcpu_sbi.c | 151 +++----------------------- arch/riscv/kvm/vcpu_sbi_v01.c | 129 ++++++++++++++++++++++ 4 files changed, 149 insertions(+), 134 deletions(-) create mode 100644 arch/riscv/kvm/vcpu_sbi_v01.c diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h index 1a4cb0db2d0b..704151969ceb 100644 --- a/arch/riscv/include/asm/kvm_vcpu_sbi.h +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -25,5 +25,7 @@ struct kvm_vcpu_sbi_extension { bool *exit); }; +void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run); const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid); + #endif /* __RISCV_KVM_VCPU_SBI_H__ */ diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 30cdd1df0098..d3d5ff3a6019 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -23,4 +23,5 @@ kvm-y += vcpu_exit.o kvm-y += vcpu_fp.o kvm-y += vcpu_switch.o kvm-y += vcpu_sbi.o +kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o kvm-y += vcpu_timer.o diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 05cab5f27eee..06b42f6977e1 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -9,9 +9,7 @@ #include #include #include -#include #include -#include #include static int kvm_linux_err_map_sbi(int err) @@ -32,8 +30,21 @@ static int kvm_linux_err_map_sbi(int err) }; } -static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, - struct kvm_run *run) +#ifdef CONFIG_RISCV_SBI_V01 +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01; +#else +static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { + .extid_start = -1UL, + .extid_end = -1UL, + .handler = NULL, +}; +#endif + +static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { + &vcpu_sbi_ext_v01, +}; + +void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) { struct kvm_cpu_context *cp = &vcpu->arch.guest_context; @@ -71,126 +82,6 @@ int kvm_riscv_vcpu_sbi_return(struct kvm_vcpu *vcpu, struct kvm_run *run) return 0; } -#ifdef CONFIG_RISCV_SBI_V01 - -static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu, - struct kvm_run *run, u32 type) -{ - int i; - struct kvm_vcpu *tmp; - - kvm_for_each_vcpu(i, tmp, vcpu->kvm) - tmp->arch.power_off = true; - kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP); - - memset(&run->system_event, 0, sizeof(run->system_event)); - run->system_event.type = type; - run->exit_reason = KVM_EXIT_SYSTEM_EVENT; -} - -static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, - unsigned long *out_val, - struct kvm_cpu_trap *utrap, - bool *exit) -{ - ulong hmask; - int i, ret = 0; - u64 next_cycle; - struct kvm_vcpu *rvcpu; - struct cpumask cm, hm; - struct kvm *kvm = vcpu->kvm; - struct kvm_cpu_context *cp = &vcpu->arch.guest_context; - - if (!cp) - return -EINVAL; - - switch (cp->a7) { - case SBI_EXT_0_1_CONSOLE_GETCHAR: - case SBI_EXT_0_1_CONSOLE_PUTCHAR: - /* - * The CONSOLE_GETCHAR/CONSOLE_PUTCHAR SBI calls cannot be - * handled in kernel so we forward these to user-space - */ - kvm_riscv_vcpu_sbi_forward(vcpu, run); - *exit = true; - break; - case SBI_EXT_0_1_SET_TIMER: -#if __riscv_xlen == 32 - next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; -#else - next_cycle = (u64)cp->a0; -#endif - ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); - break; - case SBI_EXT_0_1_CLEAR_IPI: - ret = 
kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT); - break; - case SBI_EXT_0_1_SEND_IPI: - if (cp->a0) - hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, - utrap); - else - hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; - if (utrap->scause) - break; - - for_each_set_bit(i, &hmask, BITS_PER_LONG) { - rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); - ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT); - if (ret < 0) - break; - } - break; - case SBI_EXT_0_1_SHUTDOWN: - kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN); - *exit = true; - break; - case SBI_EXT_0_1_REMOTE_FENCE_I: - case SBI_EXT_0_1_REMOTE_SFENCE_VMA: - case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID: - if (cp->a0) - hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, - utrap); - else - hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; - if (utrap->scause) - break; - - cpumask_clear(&cm); - for_each_set_bit(i, &hmask, BITS_PER_LONG) { - rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); - if (rvcpu->cpu < 0) - continue; - cpumask_set_cpu(rvcpu->cpu, &cm); - } - riscv_cpuid_to_hartid_mask(&cm, &hm); - if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I) - ret = sbi_remote_fence_i(cpumask_bits(&hm)); - else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) - ret = sbi_remote_hfence_vvma(cpumask_bits(&hm), - cp->a1, cp->a2); - else - ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), - cp->a1, cp->a2, cp->a3); - break; - default: - ret = -EINVAL; - break; - } - - return ret; -} - -const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { - .extid_start = SBI_EXT_0_1_SET_TIMER, - .extid_end = SBI_EXT_0_1_SHUTDOWN, - .handler = kvm_sbi_ext_v01_handler, -}; - -static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { - &vcpu_sbi_ext_v01, -}; - const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid) { int i = 0; @@ -220,9 +111,11 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7); if (sbi_ext && sbi_ext->handler) { +#ifdef CONFIG_RISCV_SBI_V01 if (cp->a7 >= SBI_EXT_0_1_SET_TIMER && cp->a7 <= SBI_EXT_0_1_SHUTDOWN) ext_is_v01 = true; +#endif ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit); } else { /* Return error for unsupported SBI calls */ @@ -262,13 +155,3 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) return ret; } - -#else - -int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) -{ - kvm_riscv_vcpu_sbi_forward(vcpu, run); - return 0; -} - -#endif diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c new file mode 100644 index 000000000000..5a8e2afc8a57 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_v01.c @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include +#include + +static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu, + struct kvm_run *run, u32 type) +{ + int i; + struct kvm_vcpu *tmp; + + kvm_for_each_vcpu(i, tmp, vcpu->kvm) + tmp->arch.power_off = true; + kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP); + + memset(&run->system_event, 0, sizeof(run->system_event)); + run->system_event.type = type; + run->exit_reason = KVM_EXIT_SYSTEM_EVENT; +} + +static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, + bool *exit) +{ + ulong hmask; + int i, ret = 0; + u64 next_cycle; + struct kvm_vcpu *rvcpu; + struct cpumask cm, hm; + struct kvm *kvm = vcpu->kvm; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + + if (!cp) + return -EINVAL; + + switch (cp->a7) { + case SBI_EXT_0_1_CONSOLE_GETCHAR: + case SBI_EXT_0_1_CONSOLE_PUTCHAR: + /* + * The CONSOLE_GETCHAR/CONSOLE_PUTCHAR SBI calls cannot be + * handled in kernel so we forward these to user-space + */ + kvm_riscv_vcpu_sbi_forward(vcpu, run); + *exit = true; + break; + case SBI_EXT_0_1_SET_TIMER: +#if __riscv_xlen == 32 + next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; +#else + next_cycle = (u64)cp->a0; +#endif + ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); + break; + case SBI_EXT_0_1_CLEAR_IPI: + ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT); + break; + case SBI_EXT_0_1_SEND_IPI: + if (cp->a0) + hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, + utrap); + else + hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; + if (utrap->scause) + break; + + for_each_set_bit(i, &hmask, BITS_PER_LONG) { + rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); + ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT); + if (ret < 0) + break; + } + break; + case SBI_EXT_0_1_SHUTDOWN: + kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN); + *exit = true; + break; + case SBI_EXT_0_1_REMOTE_FENCE_I: + case SBI_EXT_0_1_REMOTE_SFENCE_VMA: + case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID: + if (cp->a0) + hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0, + utrap); + else + hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1; + if (utrap->scause) + break; + + cpumask_clear(&cm); + for_each_set_bit(i, &hmask, BITS_PER_LONG) { + rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i); + if (rvcpu->cpu < 0) + continue; + cpumask_set_cpu(rvcpu->cpu, &cm); + } + riscv_cpuid_to_hartid_mask(&cm, &hm); + if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I) + ret = sbi_remote_fence_i(cpumask_bits(&hm)); + else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) + ret = sbi_remote_hfence_vvma(cpumask_bits(&hm), + cp->a1, cp->a2); + else + ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), + cp->a1, cp->a2, cp->a3); + break; + default: + ret = -EINVAL; + break; + }; + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { + .extid_start = SBI_EXT_0_1_SET_TIMER, + .extid_end = SBI_EXT_0_1_SHUTDOWN, + .handler = kvm_sbi_ext_v01_handler, +}; From patchwork Fri Nov 5 23:58:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Patra X-Patchwork-Id: 12606165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9AFABC433EF for ; Fri, 5 Nov 2021 23:59:22 +0000 (UTC) Received: 
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Heinrich Schuchardt, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Paul Walmsley, Vincent Chen, pbonzini@redhat.com, Sean Christopherson
Subject: [PATCH v4 3/5] RISC-V: KVM: Add SBI v0.2 base extension
Date: Fri, 5 Nov 2021 16:58:50 -0700
Message-Id: <20211105235852.3011900-4-atish.patra@wdc.com>
In-Reply-To: <20211105235852.3011900-1-atish.patra@wdc.com>
References: <20211105235852.3011900-1-atish.patra@wdc.com>

The SBI v0.2 base extension is defined to allow backward compatibility and probing of future extensions. It is also the only mandatory SBI extension that must be implemented by SBI implementors.

Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |  2 +
 arch/riscv/include/asm/sbi.h          |  8 +++
 arch/riscv/kvm/Makefile               |  1 +
 arch/riscv/kvm/vcpu_sbi.c             |  3 +-
 arch/riscv/kvm/vcpu_sbi_base.c        | 73 +++++++++++++++++++++
 5 files changed, 86 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_base.c

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h index 704151969ceb..76e4e17a3e00 100644 --- a/arch/riscv/include/asm/kvm_vcpu_sbi.h +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -9,6 +9,8 @@ #ifndef __RISCV_KVM_VCPU_SBI_H__ #define __RISCV_KVM_VCPU_SBI_H__ +#define KVM_SBI_IMPID 3 + #define KVM_SBI_VERSION_MAJOR 0 #define KVM_SBI_VERSION_MINOR 2 diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h index 0d42693cb65e..4f9370b6032e 100644 --- a/arch/riscv/include/asm/sbi.h +++ b/arch/riscv/include/asm/sbi.h @@ -27,6 +27,14 @@ enum sbi_ext_id { SBI_EXT_IPI = 0x735049, SBI_EXT_RFENCE = 0x52464E43, SBI_EXT_HSM = 0x48534D, + + /* Experimental extensions must lie within this range */ + SBI_EXT_EXPERIMENTAL_START = 0x0800000, + SBI_EXT_EXPERIMENTAL_END = 0x08FFFFFF, + + /* Vendor extensions must lie within this range */ + SBI_EXT_VENDOR_START = 0x09000000, + SBI_EXT_VENDOR_END = 0x09FFFFFF, }; enum sbi_ext_base_fid { diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index d3d5ff3a6019..84c02922a329 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -24,4 +24,5 @@ kvm-y += vcpu_fp.o kvm-y += vcpu_switch.o kvm-y += vcpu_sbi.o kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o +kvm-y += vcpu_sbi_base.o kvm-y += vcpu_timer.o diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 06b42f6977e1..92b682f4f29e 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -39,9 +39,10 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { .handler = NULL, }; #endif - +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base; static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_v01, + &vcpu_sbi_ext_base, }; void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c new file mode 100644 index
000000000000..1aeda3e10e7c --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_base.c @@ -0,0 +1,73 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include +#include + +static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *trap, bool *exit) +{ + int ret = 0; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct sbiret ecall_ret; + + if (!cp) + return -EINVAL; + + switch (cp->a6) { + case SBI_EXT_BASE_GET_SPEC_VERSION: + *out_val = (KVM_SBI_VERSION_MAJOR << + SBI_SPEC_VERSION_MAJOR_SHIFT) | + KVM_SBI_VERSION_MINOR; + break; + case SBI_EXT_BASE_GET_IMP_ID: + *out_val = KVM_SBI_IMPID; + break; + case SBI_EXT_BASE_GET_IMP_VERSION: + *out_val = 0; + break; + case SBI_EXT_BASE_PROBE_EXT: + *out_val = kvm_vcpu_sbi_find_ext(cp->a0) ? 1 : 0; + if ((!*out_val) && + ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START && + cp->a0 <= SBI_EXT_EXPERIMENTAL_END) || + ((cp->a0 >= SBI_EXT_VENDOR_START && + cp->a0 <= SBI_EXT_VENDOR_END)))) { + /* For experimental/vendor extensions, forward to userspace */ + kvm_riscv_vcpu_sbi_forward(vcpu, run); + *exit = true; + } + break; + case SBI_EXT_BASE_GET_MVENDORID: + case SBI_EXT_BASE_GET_MARCHID: + case SBI_EXT_BASE_GET_MIMPID: + ecall_ret = sbi_ecall(SBI_EXT_BASE, cp->a6, 0, 0, 0, 0, 0, 0); + if (!ecall_ret.error) + *out_val = ecall_ret.value; + /* TODO: We are unnecessarily converting the error twice */ + ret = sbi_err_map_linux_errno(ecall_ret.error); + break; + default: + ret = -EOPNOTSUPP; + break; + } + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = { + .extid_start = SBI_EXT_BASE, + .extid_end = SBI_EXT_BASE, + .handler = kvm_sbi_ext_base_handler, +};
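The PROBE_EXT case above is what lets a guest discover which SBI extensions its implementation (here KVM, implementation ID 3) supports before using them. Below is a guest-side sketch of that probe using the standard SBI calling convention, where a7 carries the extension ID, a6 the function ID, and the error/value pair comes back in a0/a1. The wrapper and struct names are demo code; the base extension ID (0x10), the PROBE_EXT function ID (3) and the HSM extension ID (0x48534D) follow the SBI specification.

/*
 * Guest-side probe sketch, assuming RISC-V and the standard SBI calling
 * convention (a7 = extension ID, a6 = function ID, a0-a5 = arguments,
 * error returned in a0 and value in a1).
 */
#define SBI_EXT_BASE_DEMO		0x10
#define SBI_EXT_BASE_PROBE_EXT_DEMO	3

struct sbiret_demo {
	long error;
	long value;
};

static struct sbiret_demo sbi_probe_demo(unsigned long extid)
{
	register unsigned long a0 asm("a0") = extid;
	register unsigned long a1 asm("a1");
	register unsigned long a6 asm("a6") = SBI_EXT_BASE_PROBE_EXT_DEMO;
	register unsigned long a7 asm("a7") = SBI_EXT_BASE_DEMO;

	asm volatile("ecall"
		     : "+r" (a0), "=r" (a1)
		     : "r" (a6), "r" (a7)
		     : "memory");

	return (struct sbiret_demo){ .error = (long)a0, .value = (long)a1 };
}

/* Usage: check for the HSM extension (0x48534D) before relying on it */
static inline int sbi_hsm_available_demo(void)
{
	struct sbiret_demo ret = sbi_probe_demo(0x48534D);

	return !ret.error && ret.value != 0;
}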
From patchwork Fri Nov 5 23:58:51 2021
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Heinrich Schuchardt, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Paul Walmsley, Vincent Chen, pbonzini@redhat.com, Sean Christopherson
Subject: [PATCH v4 4/5] RISC-V: KVM: Add v0.1 replacement SBI extensions defined in v02
Date: Fri, 5 Nov 2021 16:58:51 -0700
Message-Id: <20211105235852.3011900-5-atish.patra@wdc.com>
In-Reply-To: <20211105235852.3011900-1-atish.patra@wdc.com>
References: <20211105235852.3011900-1-atish.patra@wdc.com>

SBI v0.2 contains improved versions of the required v0.1 extensions, such as remote fence, timer and IPI.
This patch implements those extensions. Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/vcpu_sbi.c | 7 ++ arch/riscv/kvm/vcpu_sbi_replace.c | 136 ++++++++++++++++++++++++++++++ 3 files changed, 144 insertions(+) create mode 100644 arch/riscv/kvm/vcpu_sbi_replace.c diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 84c02922a329..4757ae158bf3 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,4 +25,5 @@ kvm-y += vcpu_switch.o kvm-y += vcpu_sbi.o kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o kvm-y += vcpu_sbi_base.o +kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_timer.o diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 92b682f4f29e..3502c6166eba 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -40,9 +40,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { }; #endif extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base; +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time; +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi; +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence; + static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_v01, &vcpu_sbi_ext_base, + &vcpu_sbi_ext_time, + &vcpu_sbi_ext_ipi, + &vcpu_sbi_ext_rfence, }; void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c new file mode 100644 index 000000000000..a80fa7691b14 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_replace.c @@ -0,0 +1,136 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include +#include + +static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, bool *exit) +{ + int ret = 0; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + u64 next_cycle; + + if (!cp || (cp->a6 != SBI_EXT_TIME_SET_TIMER)) + return -EINVAL; + +#if __riscv_xlen == 32 + next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; +#else + next_cycle = (u64)cp->a0; +#endif + kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = { + .extid_start = SBI_EXT_TIME, + .extid_end = SBI_EXT_TIME, + .handler = kvm_sbi_ext_time_handler, +}; + +static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, bool *exit) +{ + int i, ret = 0; + struct kvm_vcpu *tmp; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + unsigned long hmask = cp->a0; + unsigned long hbase = cp->a1; + + if (!cp || (cp->a6 != SBI_EXT_IPI_SEND_IPI)) + return -EINVAL; + + kvm_for_each_vcpu(i, tmp, vcpu->kvm) { + if (hbase != -1UL) { + if (tmp->vcpu_id < hbase) + continue; + if (!(hmask & (1UL << (tmp->vcpu_id - hbase)))) + continue; + } + ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT); + if (ret < 0) + break; + } + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi = { + .extid_start = SBI_EXT_IPI, + .extid_end = SBI_EXT_IPI, + .handler = kvm_sbi_ext_ipi_handler, +}; + +static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, bool *exit) +{ + int i, ret = 0; + struct cpumask cm, hm; + struct 
kvm_vcpu *tmp; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + unsigned long hmask = cp->a0; + unsigned long hbase = cp->a1; + unsigned long funcid = cp->a6; + + if (!cp) + return -EINVAL; + + cpumask_clear(&cm); + cpumask_clear(&hm); + kvm_for_each_vcpu(i, tmp, vcpu->kvm) { + if (hbase != -1UL) { + if (tmp->vcpu_id < hbase) + continue; + if (!(hmask & (1UL << (tmp->vcpu_id - hbase)))) + continue; + } + if (tmp->cpu < 0) + continue; + cpumask_set_cpu(tmp->cpu, &cm); + } + + riscv_cpuid_to_hartid_mask(&cm, &hm); + + switch (funcid) { + case SBI_EXT_RFENCE_REMOTE_FENCE_I: + ret = sbi_remote_fence_i(cpumask_bits(&hm)); + break; + case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA: + ret = sbi_remote_hfence_vvma(cpumask_bits(&hm), cp->a2, cp->a3); + break; + case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID: + ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), cp->a2, + cp->a3, cp->a4); + break; + case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA: + case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID: + case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA: + case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID: + /* TODO: implement for nested hypervisor case */ + default: + ret = -EOPNOTSUPP; + } + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = { + .extid_start = SBI_EXT_RFENCE, + .extid_end = SBI_EXT_RFENCE, + .handler = kvm_sbi_ext_rfence_handler, +};
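Both the IPI and RFENCE handlers in this patch decode the SBI v0.2 hart-mask convention: cp->a1 (hbase) is the first hart the mask covers, cp->a0 (hmask) is a bitmap relative to that base, and hbase == -1UL means "all harts". A small self-contained sketch of that membership test, mirroring the hmask & (1UL << (vcpu_id - hbase)) check above; the helper name and demo values are illustrative.

/*
 * Model of the v0.2 hart-mask test used by kvm_sbi_ext_ipi_handler()
 * and kvm_sbi_ext_rfence_handler(): hbase anchors the mask window and
 * hmask selects harts relative to it; hbase == -1UL selects all harts.
 */
#include <stdbool.h>
#include <stdio.h>

static bool hart_in_mask(unsigned long hart_id,
			 unsigned long hbase, unsigned long hmask)
{
	if (hbase == -1UL)
		return true;			/* "all harts" shorthand */
	if (hart_id < hbase)
		return false;			/* below the mask window */
	if (hart_id - hbase >= 8 * sizeof(unsigned long))
		return false;			/* beyond one mask word */
	return hmask & (1UL << (hart_id - hbase));
}

int main(void)
{
	/* Target harts 3 and 5: base 3, bits 0 and 2 set */
	unsigned long hbase = 3, hmask = 0x5;

	for (unsigned long hart = 0; hart < 8; hart++)
		printf("hart %lu: %s\n", hart,
		       hart_in_mask(hart, hbase, hmask) ? "target" : "skip");
	return 0;
}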
From patchwork Fri Nov 5 23:58:52 2021
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Heinrich Schuchardt, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt, Paul Walmsley, Vincent Chen, pbonzini@redhat.com, Sean Christopherson
Subject: [PATCH v4 5/5] RISC-V: KVM: Add SBI HSM extension in KVM
Date: Fri, 5 Nov 2021 16:58:52 -0700
Message-Id: <20211105235852.3011900-6-atish.patra@wdc.com>
In-Reply-To: <20211105235852.3011900-1-atish.patra@wdc.com>
References: <20211105235852.3011900-1-atish.patra@wdc.com>

The SBI HSM extension allows the OS to start and stop harts at any time, and enables ordered booting of harts instead of random booting. Implement the SBI HSM extension and designate vcpu 0 as the boot vcpu. All other non-zero, non-booting vcpus must be brought up by the OS using the HSM extension. If the guest OS doesn't implement the HSM extension, only a single vcpu will be available to the OS.
Signed-off-by: Atish Patra Reviewed-by: Anup Patel --- arch/riscv/include/asm/sbi.h | 1 + arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/vcpu.c | 23 ++++++++ arch/riscv/kvm/vcpu_sbi.c | 4 ++ arch/riscv/kvm/vcpu_sbi_hsm.c | 107 ++++++++++++++++++++++++++++++++++ 5 files changed, 136 insertions(+) create mode 100644 arch/riscv/kvm/vcpu_sbi_hsm.c diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h index 4f9370b6032e..79af25c45c8d 100644 --- a/arch/riscv/include/asm/sbi.h +++ b/arch/riscv/include/asm/sbi.h @@ -90,6 +90,7 @@ enum sbi_hsm_hart_status { #define SBI_ERR_INVALID_PARAM -3 #define SBI_ERR_DENIED -4 #define SBI_ERR_INVALID_ADDRESS -5 +#define SBI_ERR_ALREADY_AVAILABLE -6 extern unsigned long sbi_spec_version; struct sbiret { diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 4757ae158bf3..aaf181a3d74b 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -26,4 +26,5 @@ kvm-y += vcpu_sbi.o kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o +kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index e3d3aed46184..50158867406d 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -53,6 +53,17 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr; struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; struct kvm_cpu_context *reset_cntx = &vcpu->arch.guest_reset_context; + bool loaded; + + /** + * The preemption should be disabled here because it races with + * kvm_sched_out/kvm_sched_in(called from preempt notifiers) which + * also calls vcpu_load/put. + */ + get_cpu(); + loaded = (vcpu->cpu != -1); + if (loaded) + kvm_arch_vcpu_put(vcpu); memcpy(csr, reset_csr, sizeof(*csr)); @@ -64,6 +75,11 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) WRITE_ONCE(vcpu->arch.irqs_pending, 0); WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + + /* Reset the guest CSRs for hotplug usecase */ + if (loaded) + kvm_arch_vcpu_load(vcpu, smp_processor_id()); + put_cpu(); } int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) @@ -100,6 +116,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) { + /** + * vcpu with id 0 is the designated boot cpu. + * Keep all vcpus with non-zero cpu id in power-off state so that they + * can brought to online using SBI HSM extension. 
+ */ + if (vcpu->vcpu_idx != 0) + kvm_riscv_vcpu_power_off(vcpu); } void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 3502c6166eba..bff753b6e4b0 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -25,6 +25,8 @@ static int kvm_linux_err_map_sbi(int err) return SBI_ERR_INVALID_ADDRESS; case -EOPNOTSUPP: return SBI_ERR_NOT_SUPPORTED; + case -EALREADY: + return SBI_ERR_ALREADY_AVAILABLE; default: return SBI_ERR_FAILURE; }; @@ -43,6 +45,7 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence; +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm; static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_v01, @@ -50,6 +53,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_time, &vcpu_sbi_ext_ipi, &vcpu_sbi_ext_rfence, + &vcpu_sbi_ext_hsm, }; void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c new file mode 100644 index 000000000000..aefee16a1560 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_hsm.c @@ -0,0 +1,107 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2021 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include + +static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *reset_cntx; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct kvm_vcpu *target_vcpu; + unsigned long target_vcpuid = cp->a0; + + target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); + if (!target_vcpu) + return -EINVAL; + if (!target_vcpu->arch.power_off) + return -EALREADY; + + reset_cntx = &target_vcpu->arch.guest_reset_context; + /* start address */ + reset_cntx->sepc = cp->a1; + /* target vcpu id to start */ + reset_cntx->a0 = target_vcpuid; + /* private data passed from kernel */ + reset_cntx->a1 = cp->a2; + kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu); + + kvm_riscv_vcpu_power_on(target_vcpu); + + return 0; +} + +static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu) +{ + if (vcpu->arch.power_off) + return -EINVAL; + + kvm_riscv_vcpu_power_off(vcpu); + + return 0; +} + +static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + unsigned long target_vcpuid = cp->a0; + struct kvm_vcpu *target_vcpu; + + target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid); + if (!target_vcpu) + return -EINVAL; + if (!target_vcpu->arch.power_off) + return SBI_HSM_HART_STATUS_STARTED; + else + return SBI_HSM_HART_STATUS_STOPPED; +} + +static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long *out_val, + struct kvm_cpu_trap *utrap, + bool *exit) +{ + int ret = 0; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct kvm *kvm = vcpu->kvm; + unsigned long funcid = cp->a6; + + if (!cp) + return -EINVAL; + switch (funcid) { + case SBI_EXT_HSM_HART_START: + mutex_lock(&kvm->lock); + ret = kvm_sbi_hsm_vcpu_start(vcpu); + mutex_unlock(&kvm->lock); + break; + case SBI_EXT_HSM_HART_STOP: + ret = kvm_sbi_hsm_vcpu_stop(vcpu); + break; + case SBI_EXT_HSM_HART_STATUS: + ret = kvm_sbi_hsm_vcpu_get_status(vcpu); + if (ret >= 0) { + *out_val = 
ret; + ret = 0; + } + break; + default: + ret = -EOPNOTSUPP; + } + + return ret; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm = { + .extid_start = SBI_EXT_HSM, + .extid_end = SBI_EXT_HSM, + .handler = kvm_sbi_ext_hsm_handler, +};
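Taken together with the earlier patches, the HSM handler above services the hart_start call that a guest's boot hart issues to bring up each secondary vcpu. A guest-side sketch of that call follows; the wrapper is demo code, while the extension ID (0x48534D), the HART_START function ID (0) and the hart_start(hartid, start_addr, opaque) semantics follow the SBI specification. On success the target hart begins executing at start_addr with a0 = hartid and a1 = opaque, which is exactly the reset context kvm_sbi_hsm_vcpu_start() prepares.

/*
 * Guest-side sketch of SBI HSM hart_start, assuming the standard SBI
 * calling convention (a7 = extension ID, a6 = function ID, arguments in
 * a0-a2, error returned in a0, value in a1).
 */
#define SBI_EXT_HSM_DEMO		0x48534D
#define SBI_EXT_HSM_HART_START_DEMO	0

static long sbi_hart_start_demo(unsigned long hartid,
				unsigned long start_addr,
				unsigned long opaque)
{
	register unsigned long a0 asm("a0") = hartid;
	register unsigned long a1 asm("a1") = start_addr;
	register unsigned long a2 asm("a2") = opaque;
	register unsigned long a6 asm("a6") = SBI_EXT_HSM_HART_START_DEMO;
	register unsigned long a7 asm("a7") = SBI_EXT_HSM_DEMO;

	asm volatile("ecall"
		     : "+r" (a0), "+r" (a1)
		     : "r" (a2), "r" (a6), "r" (a7)
		     : "memory");

	/* 0 on success; -6 (SBI_ERR_ALREADY_AVAILABLE) if already running */
	return (long)a0;
}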