From patchwork Thu Dec 3 12:18:37 2020
From: Yifei Jiang
Subject: [PATCH RFC 1/3] RISC-V: KVM: Change the method of calculating cycles to nanoseconds
Date: Thu, 3 Dec 2020 20:18:37 +0800
Message-ID: <20201203121839.308-2-jiangyifei@huawei.com>
In-Reply-To: <20201203121839.308-1-jiangyifei@huawei.com>
References: <20201203121839.308-1-jiangyifei@huawei.com>
Because a dynamic guest time frequency will be introduced later, the fixed
mult and shift values can no longer be used to convert cycles to
nanoseconds. Divide by the timebase with mul_u64_u64_div_u64() instead.

Signed-off-by: Yifei Jiang
Signed-off-by: Yipeng Yin
---
 arch/riscv/include/asm/kvm_vcpu_timer.h | 3 ---
 arch/riscv/kvm/vcpu_timer.c             | 3 +--
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h
index 375281eb49e0..87e00d878999 100644
--- a/arch/riscv/include/asm/kvm_vcpu_timer.h
+++ b/arch/riscv/include/asm/kvm_vcpu_timer.h
@@ -12,9 +12,6 @@
 #include
 
 struct kvm_guest_timer {
-	/* Mult & Shift values to get nanoseconds from cycles */
-	u32 nsec_mult;
-	u32 nsec_shift;
 	/* Time delta value */
 	u64 time_delta;
 };
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index ddd0ce727b83..f6b35180199a 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -33,7 +33,7 @@ static u64 kvm_riscv_delta_cycles2ns(u64 cycles,
 		cycles_delta = cycles - cycles_now;
 	else
 		cycles_delta = 0;
-	delta_ns = (cycles_delta * gt->nsec_mult) >> gt->nsec_shift;
+	delta_ns = mul_u64_u64_div_u64(cycles_delta, NSEC_PER_SEC, riscv_timebase);
 	local_irq_restore(flags);
 
 	return delta_ns;
@@ -218,7 +218,6 @@ int kvm_riscv_guest_timer_init(struct kvm *kvm)
 {
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
 
-	riscv_cs_get_mult_shift(&gt->nsec_mult, &gt->nsec_shift);
 	gt->time_delta = -get_cycles64();
 
 	return 0;
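
For illustration only, a minimal standalone C sketch of the conversion the
hunk above switches to; the 10 MHz timebase and the cycle delta are assumed
example values, and unsigned __int128 stands in for the kernel's
mul_u64_u64_div_u64() helper:

	#include <stdint.h>
	#include <stdio.h>

	#define NSEC_PER_SEC 1000000000ULL

	/* 128-bit intermediate, like the kernel helper: (a * b) / c */
	static uint64_t mul_u64_u64_div_u64(uint64_t a, uint64_t b, uint64_t c)
	{
		return (uint64_t)(((unsigned __int128)a * b) / c);
	}

	int main(void)
	{
		uint64_t timebase = 10000000;      /* assumed 10 MHz timebase */
		uint64_t cycles_delta = 12345678;  /* arbitrary cycle delta */

		/*
		 * Dividing by the frequency on every conversion (instead of
		 * using a precomputed mult/shift pair) keeps working when
		 * the frequency is changed at runtime.
		 */
		uint64_t delta_ns = mul_u64_u64_div_u64(cycles_delta,
							NSEC_PER_SEC, timebase);

		printf("%llu cycles -> %llu ns\n",
		       (unsigned long long)cycles_delta,
		       (unsigned long long)delta_ns);
		return 0;
	}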
From patchwork Thu Dec 3 12:18:38 2020
From: Yifei Jiang
Subject: [PATCH RFC 2/3] RISC-V: KVM: Support dynamic time frequency from userspace
Date: Thu, 3 Dec 2020 20:18:38 +0800
Message-ID: <20201203121839.308-3-jiangyifei@huawei.com>
In-Reply-To: <20201203121839.308-1-jiangyifei@huawei.com>
References: <20201203121839.308-1-jiangyifei@huawei.com>

This patch implements KVM_SET_ONE_REG/KVM_GET_ONE_REG for the timer
frequency register, so that userspace can set a dynamic guest time
frequency. When the frequency specified by userspace differs from the
host's 'riscv_timebase', scale_mult and scale_shift are used to scale the
host cycle counter into guest time.
Signed-off-by: Yifei Jiang
Signed-off-by: Yipeng Yin
---
 arch/riscv/include/asm/kvm_vcpu_timer.h |  9 ++++++
 arch/riscv/kvm/vcpu_timer.c             | 40 +++++++++++++++++++++----
 2 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h
index 87e00d878999..41b5503de9e4 100644
--- a/arch/riscv/include/asm/kvm_vcpu_timer.h
+++ b/arch/riscv/include/asm/kvm_vcpu_timer.h
@@ -12,6 +12,10 @@
 #include
 
 struct kvm_guest_timer {
+	u64 frequency;
+	bool need_scale;
+	u64 scale_mult;
+	u64 scale_shift;
 	/* Time delta value */
 	u64 time_delta;
 };
@@ -38,4 +42,9 @@ int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
 int kvm_riscv_guest_timer_init(struct kvm *kvm);
 
+static inline bool kvm_riscv_need_scale(struct kvm_guest_timer *gt)
+{
+	return gt->need_scale;
+}
+
 #endif
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index f6b35180199a..2d203660a7e9 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -15,9 +15,38 @@
 #include
 #include
 
+#define SCALE_SHIFT_VALUE	48
+#define SCALE_TOLERANCE_HZ	1000
+
+static void kvm_riscv_set_time_freq(struct kvm_guest_timer *gt, u64 freq)
+{
+	/*
+	 * Guest time frequency and Host time frequency are identical
+	 * if the error between them is limited within SCALE_TOLERANCE_HZ.
+	 */
+	u64 diff = riscv_timebase > freq ?
+		   riscv_timebase - freq : freq - riscv_timebase;
+	gt->need_scale = (diff >= SCALE_TOLERANCE_HZ);
+	if (gt->need_scale) {
+		gt->scale_shift = SCALE_SHIFT_VALUE;
+		gt->scale_mult = mul_u64_u32_div(1ULL << gt->scale_shift,
+						 freq, riscv_timebase);
+	}
+	gt->frequency = freq;
+}
+
+static u64 kvm_riscv_scale_time(struct kvm_guest_timer *gt, u64 time)
+{
+	if (kvm_riscv_need_scale(gt))
+		return mul_u64_u64_shr(time, gt->scale_mult, gt->scale_shift);
+
+	return time;
+}
+
 static u64 kvm_riscv_current_cycles(struct kvm_guest_timer *gt)
 {
-	return get_cycles64() + gt->time_delta;
+	u64 host_time = get_cycles64();
+	return kvm_riscv_scale_time(gt, host_time) + gt->time_delta;
 }
 
 static u64 kvm_riscv_delta_cycles2ns(u64 cycles,
@@ -33,7 +62,7 @@ static u64 kvm_riscv_delta_cycles2ns(u64 cycles,
 		cycles_delta = cycles - cycles_now;
 	else
 		cycles_delta = 0;
-	delta_ns = mul_u64_u64_div_u64(cycles_delta, NSEC_PER_SEC, riscv_timebase);
+	delta_ns = mul_u64_u64_div_u64(cycles_delta, NSEC_PER_SEC, gt->frequency);
 	local_irq_restore(flags);
 
 	return delta_ns;
@@ -106,7 +135,7 @@ int kvm_riscv_vcpu_get_reg_timer(struct kvm_vcpu *vcpu,
 
 	switch (reg_num) {
 	case KVM_REG_RISCV_TIMER_REG(frequency):
-		reg_val = riscv_timebase;
+		reg_val = gt->frequency;
 		break;
 	case KVM_REG_RISCV_TIMER_REG(time):
 		reg_val = kvm_riscv_current_cycles(gt);
@@ -150,10 +179,10 @@ int kvm_riscv_vcpu_set_reg_timer(struct kvm_vcpu *vcpu,
 
 	switch (reg_num) {
 	case KVM_REG_RISCV_TIMER_REG(frequency):
-		ret = -EOPNOTSUPP;
+		kvm_riscv_set_time_freq(gt, reg_val);
 		break;
 	case KVM_REG_RISCV_TIMER_REG(time):
-		gt->time_delta = reg_val - get_cycles64();
+		gt->time_delta = reg_val - kvm_riscv_scale_time(gt, get_cycles64());
 		break;
 	case KVM_REG_RISCV_TIMER_REG(compare):
 		t->next_cycles = reg_val;
@@ -219,6 +248,7 @@ int kvm_riscv_guest_timer_init(struct kvm *kvm)
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
 
 	gt->time_delta = -get_cycles64();
+	gt->frequency = riscv_timebase;
 
 	return 0;
 }
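
As a side note, a standalone sketch of the fixed-point arithmetic used by
kvm_riscv_set_time_freq() and kvm_riscv_scale_time() above; the host and
guest frequencies are assumed example values, and unsigned __int128 stands
in for the kernel's mul_u64_u32_div()/mul_u64_u64_shr() helpers:

	#include <stdint.h>
	#include <stdio.h>

	#define SCALE_SHIFT_VALUE 48

	int main(void)
	{
		uint64_t host_freq  = 10000000;  /* assumed host riscv_timebase: 10 MHz */
		uint64_t guest_freq = 25000000;  /* assumed guest frequency: 25 MHz */

		/* scale_mult = (guest_freq << 48) / host_freq */
		uint64_t scale_mult =
			(uint64_t)(((unsigned __int128)guest_freq << SCALE_SHIFT_VALUE) /
				   host_freq);

		/* guest_cycles = (host_cycles * scale_mult) >> 48 */
		uint64_t host_cycles  = 123456789;
		uint64_t guest_cycles =
			(uint64_t)(((unsigned __int128)host_cycles * scale_mult) >>
				   SCALE_SHIFT_VALUE);

		/* Expected: about host_cycles * 25/10 = 308641972 guest cycles */
		printf("scale_mult=0x%llx, %llu host cycles -> %llu guest cycles\n",
		       (unsigned long long)scale_mult,
		       (unsigned long long)host_cycles,
		       (unsigned long long)guest_cycles);
		return 0;
	}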
From patchwork Thu Dec 3 12:18:39 2020
From: Yifei Jiang
Subject: [PATCH RFC 3/3] RISC-V: KVM: Implement guest time scaling
Date: Thu, 3 Dec 2020 20:18:39 +0800
Message-ID: <20201203121839.308-4-jiangyifei@huawei.com>
In-Reply-To: <20201203121839.308-1-jiangyifei@huawei.com>
References: <20201203121839.308-1-jiangyifei@huawei.com>
When the time frequency needs scaling, the RDTIME/RDTIMEH instructions
executed in the guest no longer return the correct time, because they still
tick at the host's time frequency. To read the correct time, RDTIME/RDTIMEH
executed by the guest must trap to HS-mode. The TM bit of the HCOUNTEREN CSR
controls whether these instructions trap to HS-mode. Therefore, guest time
scaling is implemented by clearing the TM bit in
kvm_riscv_vcpu_timer_restore() and emulating RDTIME/RDTIMEH in
system_opcode_insn().

Signed-off-by: Yifei Jiang
Signed-off-by: Yipeng Yin
---
 arch/riscv/include/asm/csr.h            |  3 +++
 arch/riscv/include/asm/kvm_vcpu_timer.h |  1 +
 arch/riscv/kvm/vcpu_exit.c              | 35 +++++++++++++++++++++++++
 arch/riscv/kvm/vcpu_timer.c             | 10 +++++++
 4 files changed, 49 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index bc825693e0e3..a4d8ca76cf1d 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -241,6 +241,9 @@
 #define IE_TIE		(_AC(0x1, UL) << RV_IRQ_TIMER)
 #define IE_EIE		(_AC(0x1, UL) << RV_IRQ_EXT)
 
+/* The counteren flag */
+#define CE_TM		1
+
 #ifndef __ASSEMBLY__
 
 #define csr_swap(csr, val)					\
diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h
index 41b5503de9e4..61384eb57334 100644
--- a/arch/riscv/include/asm/kvm_vcpu_timer.h
+++ b/arch/riscv/include/asm/kvm_vcpu_timer.h
@@ -41,6 +41,7 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
 int kvm_riscv_guest_timer_init(struct kvm *kvm);
+u64 kvm_riscv_read_guest_time(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_riscv_need_scale(struct kvm_guest_timer *gt)
 {
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index f054406792a6..4beb9d25049a 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -18,6 +18,10 @@
 
 #define INSN_MASK_WFI		0xffffff00
 #define INSN_MATCH_WFI		0x10500000
+#define INSN_MASK_RDTIME	0xfff03000
+#define INSN_MATCH_RDTIME	0xc0102000
+#define INSN_MASK_RDTIMEH	0xfff03000
+#define INSN_MATCH_RDTIMEH	0xc8102000
 
 #define INSN_MATCH_LB		0x3
 #define INSN_MASK_LB		0x707f
@@ -138,6 +142,34 @@ static int truly_illegal_insn(struct kvm_vcpu *vcpu,
 	return 1;
 }
 
+static int system_opcode_insn_rdtime(struct kvm_vcpu *vcpu,
+				     struct kvm_run *run,
+				     ulong insn)
+{
+#ifdef CONFIG_64BIT
+	if ((insn & INSN_MASK_RDTIME) == INSN_MATCH_RDTIME) {
+		u64 guest_time = kvm_riscv_read_guest_time(vcpu);
+		SET_RD(insn, &vcpu->arch.guest_context, guest_time);
+		vcpu->arch.guest_context.sepc += INSN_LEN(insn);
+		return 1;
+	}
+#else
+	if ((insn & INSN_MASK_RDTIME) == INSN_MATCH_RDTIME) {
+		u64 guest_time = kvm_riscv_read_guest_time(vcpu);
+		SET_RD(insn, &vcpu->arch.guest_context, (u32)guest_time);
+		vcpu->arch.guest_context.sepc += INSN_LEN(insn);
+		return 1;
+	}
+	if ((insn & INSN_MASK_RDTIMEH) == INSN_MATCH_RDTIMEH) {
+		u64 guest_time = kvm_riscv_read_guest_time(vcpu);
+		SET_RD(insn, &vcpu->arch.guest_context, (u32)(guest_time >> 32));
+		vcpu->arch.guest_context.sepc += INSN_LEN(insn);
+		return 1;
+	}
+#endif
+	return 0;
+}
+
 static int system_opcode_insn(struct kvm_vcpu *vcpu,
 			      struct kvm_run *run,
 			      ulong insn)
@@ -154,6 +186,9 @@ static int system_opcode_insn(struct kvm_vcpu *vcpu,
 		return 1;
 	}
 
+	if (system_opcode_insn_rdtime(vcpu, run, insn))
+		return 1;
+
 	return truly_illegal_insn(vcpu, run, insn);
 }
 
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index 2d203660a7e9..2040dbe57ee6 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -49,6 +49,11 @@ static u64 kvm_riscv_current_cycles(struct kvm_guest_timer *gt)
 	return kvm_riscv_scale_time(gt, host_time) + gt->time_delta;
 }
 
+u64 kvm_riscv_read_guest_time(struct kvm_vcpu *vcpu)
+{
+	return kvm_riscv_current_cycles(&vcpu->kvm->arch.timer);
+}
+
 static u64 kvm_riscv_delta_cycles2ns(u64 cycles,
 				     struct kvm_guest_timer *gt,
 				     struct kvm_vcpu_timer *t)
@@ -241,6 +246,11 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
 	csr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta));
 	csr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32));
 #endif
+
+	if (kvm_riscv_need_scale(gt))
+		csr_clear(CSR_HCOUNTEREN, 1UL << CE_TM);
+	else
+		csr_set(CSR_HCOUNTEREN, 1UL << CE_TM);
 }
 
 int kvm_riscv_guest_timer_init(struct kvm *kvm)
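
For reference, a standalone sketch of the decode step behind
system_opcode_insn_rdtime(); the sample encoding ("rdtime a0", i.e.
csrrs a0, time, zero) is an assumed example, and the rd extraction follows
the standard RISC-V I-type layout rather than the kernel's SET_RD() macro:

	#include <stdint.h>
	#include <stdio.h>

	#define INSN_MASK_RDTIME	0xfff03000
	#define INSN_MATCH_RDTIME	0xc0102000
	#define INSN_MASK_RDTIMEH	0xfff03000
	#define INSN_MATCH_RDTIMEH	0xc8102000

	/* rd sits in bits [11:7] of a RISC-V instruction. */
	static unsigned int insn_rd(uint32_t insn)
	{
		return (insn >> 7) & 0x1f;
	}

	int main(void)
	{
		uint32_t insn = 0xc0102573;                  /* assumed: rdtime a0 */
		uint64_t guest_time = 0x123456789abcdefULL;  /* pretend scaled guest time */

		if ((insn & INSN_MASK_RDTIME) == INSN_MATCH_RDTIME)
			printf("rdtime: write 0x%llx to x%u, then advance sepc\n",
			       (unsigned long long)guest_time, insn_rd(insn));
		else if ((insn & INSN_MASK_RDTIMEH) == INSN_MATCH_RDTIMEH)
			printf("rdtimeh: write 0x%llx to x%u (rv32 upper half)\n",
			       (unsigned long long)(guest_time >> 32), insn_rd(insn));
		return 0;
	}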