From patchwork Fri Apr 7 01:59:53 2023
X-Patchwork-Submitter: zhaotianrui
X-Patchwork-Id: 13204403
From: Tianrui Zhao <zhaotianrui@loongson.cn>
To: Paolo Bonzini
Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Jens Axboe, Mark Brown,
    Alex Deucher, Oliver Upton, maobibo@loongson.cn, Xi Ruoyao,
    zhaotianrui@loongson.cn
Subject: [PATCH v5 20/30] LoongArch: KVM: Implement handle csr exception
Date: Fri, 7 Apr 2023 09:59:53 +0800
Message-Id: <20230407020003.3651096-21-zhaotianrui@loongson.cn>
In-Reply-To: <20230407020003.3651096-1-zhaotianrui@loongson.cn>
References: <20230407020003.3651096-1-zhaotianrui@loongson.cn>
X-Mailing-List: kvm@vger.kernel.org

Implement KVM handling of the LoongArch vCPU exits caused by guest reads
and writes of the CSRs. The loongarch_csrs structure is used to emulate
the guest CSR registers in software.
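To make the software-emulation model concrete: each guest CSR handled here
is backed by a slot in the vCPU's loongarch_csrs array; csrrd returns the
slot, csrwr overwrites it, and csrxchg replaces only the bits selected by
the mask taken from rj. The following standalone user-space sketch
(illustration only, not part of the patch; all names are made up) shows the
same masked read-modify-write:

#include <stdio.h>

static unsigned long sw_csr;                    /* stands in for csr->csrs[csrid] */

static unsigned long emu_csrrd(void)
{
        return sw_csr;
}

static void emu_csrwr(unsigned long val)
{
        sw_csr = val;
}

static void emu_csrxchg(unsigned long mask, unsigned long val)
{
        /* Only the bits set in 'mask' are taken from 'val'. */
        sw_csr = (sw_csr & ~mask) | (val & mask);
}

int main(void)
{
        emu_csrwr(0xff00);
        emu_csrxchg(0x00f0, 0xffff);            /* only bits 4..7 change */
        printf("0x%lx\n", emu_csrrd());         /* prints 0xfff0 */
        return 0;
}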
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/kvm_csr.h |  55 +++++++++++++++
 arch/loongarch/kvm/exit.c            | 100 +++++++++++++++++++++++++++
 2 files changed, 155 insertions(+)
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/kvm/exit.c

diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
new file mode 100644
index 000000000..18052d0ad
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_csr.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_CSR_H__
+#define __ASM_LOONGARCH_KVM_CSR_H__
+#include
+#include
+#include
+#include
+#include
+
+#define kvm_read_hw_gcsr(id)            gcsr_read(id)
+#define kvm_write_hw_gcsr(csr, id, val) gcsr_write(val, id)
+
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+static inline void kvm_save_hw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+        csr->csrs[gid] = gcsr_read(gid);
+}
+
+static inline void kvm_restore_hw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+        gcsr_write(csr->csrs[gid], gid);
+}
+
+static inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+        return csr->csrs[gid];
+}
+
+static inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
+{
+        csr->csrs[gid] = val;
+}
+
+static inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
+{
+        csr->csrs[gid] |= val;
+}
+
+static inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long mask,
+                                      unsigned long val)
+{
+        unsigned long _mask = mask;
+
+        csr->csrs[gid] &= ~_mask;
+        csr->csrs[gid] |= val & _mask;
+}
+#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
new file mode 100644
index 000000000..dcb8eb815
--- /dev/null
+++ b/arch/loongarch/kvm/exit.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid)
+{
+        struct loongarch_csrs *csr = vcpu->arch.csr;
+        unsigned long val = 0;
+
+        if (csrid < 4096 && (get_gcsr_flag(csrid) & SW_GCSR))
+                val = kvm_read_sw_gcsr(csr, csrid);
+        else
+                pr_warn_once("Unsupport csrread 0x%x with pc %lx\n",
+                        csrid, vcpu->arch.pc);
+        return val;
+}
+
+static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid,
+        unsigned long val)
+{
+        struct loongarch_csrs *csr = vcpu->arch.csr;
+
+        if (csrid < 4096 && (get_gcsr_flag(csrid) & SW_GCSR))
+                kvm_write_sw_gcsr(csr, csrid, val);
+        else
+                pr_warn_once("Unsupport csrwrite 0x%x with pc %lx\n",
+                        csrid, vcpu->arch.pc);
+}
+
+static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid,
+        unsigned long csr_mask, unsigned long val)
+{
+        struct loongarch_csrs *csr = vcpu->arch.csr;
+
+        if (csrid < 4096 && (get_gcsr_flag(csrid) & SW_GCSR)) {
+                unsigned long orig;
+
+                orig = kvm_read_sw_gcsr(csr, csrid);
+                orig &= ~csr_mask;
+                orig |= val & csr_mask;
+                kvm_write_sw_gcsr(csr, csrid, orig);
+        } else
+                pr_warn_once("Unsupport csrxchg 0x%x with pc %lx\n",
+                        csrid, vcpu->arch.pc);
+}
+
+static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+        unsigned int rd, rj, csrid;
+        unsigned long csr_mask;
+        unsigned long val = 0;
+
+        /*
+         * CSR value mask imm
+         * rj = 0 means csrrd
+         * rj = 1 means csrwr
+         * rj != 0,1 means csrxchg
+         */
+        rd = inst.reg2csr_format.rd;
+        rj = inst.reg2csr_format.rj;
+        csrid = inst.reg2csr_format.csr;
+
+        /* Process CSR ops */
+        if (rj == 0) {
+                /* process csrrd */
+                val = _kvm_emu_read_csr(vcpu, csrid);
+                vcpu->arch.gprs[rd] = val;
+        } else if (rj == 1) {
+                /* process csrwr */
+                val = vcpu->arch.gprs[rd];
+                _kvm_emu_write_csr(vcpu, csrid, val);
+        } else {
+                /* process csrxchg */
+                val = vcpu->arch.gprs[rd];
+                csr_mask = vcpu->arch.gprs[rj];
+                _kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val);
+        }
+
+        return EMULATE_DONE;
+}
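Note that _kvm_handle_csr() only performs the emulation itself; decoding the
trapping instruction and stepping the guest PC past it is expected to come
from the exit dispatch code added elsewhere in the series. A rough,
hypothetical sketch of how such a caller could use it (the function name, the
inst_word parameter, and the hard-coded 4-byte step are illustrative
assumptions, not code from this series):

static int handle_csr_trap_sketch(struct kvm_vcpu *vcpu, unsigned int inst_word)
{
        larch_inst inst;
        int er;

        /* Trapping instruction word, however the exit path provides it. */
        inst.word = inst_word;

        er = _kvm_handle_csr(vcpu, inst);
        if (er == EMULATE_DONE)
                vcpu->arch.pc += 4;     /* step past the emulated 4-byte instruction */

        return er;
}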