From patchwork Wed Dec 20 16:00:22 2023
X-Patchwork-Submitter: Andrew Jones
X-Patchwork-Id: 13500327
From: Andrew Jones <ajones@ventanamicro.com>
To: kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	virtualization@lists.linux-foundation.org
Cc: anup@brainfault.org, atishp@atishpatra.org, pbonzini@redhat.com,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	jgross@suse.com, srivatsa@csail.mit.edu, guoren@kernel.org,
	conor.dooley@microchip.com, Atish Patra
Subject: [PATCH v4 09/13] RISC-V: KVM: Implement SBI STA extension
Date: Wed, 20 Dec 2023 17:00:22 +0100
Message-ID: <20231220160012.40184-24-ajones@ventanamicro.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231220160012.40184-15-ajones@ventanamicro.com>
References: <20231220160012.40184-15-ajones@ventanamicro.com>

Add a select SCHED_INFO to the KVM config in order to get run_delay
info. Then implement SBI STA's set-steal-time-shmem function and
kvm_riscv_vcpu_record_steal_time() to provide the steal-time info to
guests.

Reviewed-by: Anup Patel
Reviewed-by: Atish Patra
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
---
 arch/riscv/kvm/Kconfig        |  1 +
 arch/riscv/kvm/vcpu_sbi_sta.c | 96 ++++++++++++++++++++++++++++++++++-
 2 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/Kconfig b/arch/riscv/kvm/Kconfig
index dfc237d7875b..148e52b516cf 100644
--- a/arch/riscv/kvm/Kconfig
+++ b/arch/riscv/kvm/Kconfig
@@ -32,6 +32,7 @@ config KVM
 	select KVM_XFER_TO_GUEST_WORK
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
+	select SCHED_INFO
 	help
 	  Support hosting virtualized guest machines.

diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
index 87bf1a5f05ce..01f09fe8c3b0 100644
--- a/arch/riscv/kvm/vcpu_sbi_sta.c
+++ b/arch/riscv/kvm/vcpu_sbi_sta.c
@@ -6,9 +6,15 @@
 #include <linux/kconfig.h>
 #include <linux/kernel.h>
 #include <linux/kvm_host.h>
+#include <linux/mm.h>
+#include <linux/sizes.h>
 
+#include <asm/bug.h>
+#include <asm/current.h>
 #include <asm/kvm_vcpu_sbi.h>
+#include <asm/page.h>
 #include <asm/sbi.h>
+#include <asm/uaccess.h>
 
 void kvm_riscv_vcpu_sbi_sta_reset(struct kvm_vcpu *vcpu)
 {
@@ -19,14 +25,100 @@
 void kvm_riscv_vcpu_record_steal_time(struct kvm_vcpu *vcpu)
 {
 	gpa_t shmem = vcpu->arch.sta.shmem;
+	u64 last_steal = vcpu->arch.sta.last_steal;
+	u32 *sequence_ptr, sequence;
+	u64 *steal_ptr, steal;
+	unsigned long hva;
+	gfn_t gfn;
 
 	if (shmem == INVALID_GPA)
 		return;
+
+	/*
+	 * shmem is 64-byte aligned (see the enforcement in
+	 * kvm_sbi_sta_steal_time_set_shmem()) and the size of sbi_sta_struct
+	 * is 64 bytes, so we know all its offsets are in the same page.
+	 */
+	gfn = shmem >> PAGE_SHIFT;
+	hva = kvm_vcpu_gfn_to_hva(vcpu, gfn);
+
+	if (WARN_ON(kvm_is_error_hva(hva))) {
+		vcpu->arch.sta.shmem = INVALID_GPA;
+		return;
+	}
+
+	sequence_ptr = (u32 *)(hva + offset_in_page(shmem) +
+			       offsetof(struct sbi_sta_struct, sequence));
+	steal_ptr = (u64 *)(hva + offset_in_page(shmem) +
+			    offsetof(struct sbi_sta_struct, steal));
+
+	if (WARN_ON(get_user(sequence, sequence_ptr)))
+		return;
+
+	sequence = le32_to_cpu(sequence);
+	sequence += 1;
+
+	if (WARN_ON(put_user(cpu_to_le32(sequence), sequence_ptr)))
+		return;
+
+	if (!WARN_ON(get_user(steal, steal_ptr))) {
+		steal = le64_to_cpu(steal);
+		vcpu->arch.sta.last_steal = READ_ONCE(current->sched_info.run_delay);
+		steal += vcpu->arch.sta.last_steal - last_steal;
+		WARN_ON(put_user(cpu_to_le64(steal), steal_ptr));
+	}
+
+	sequence += 1;
+	WARN_ON(put_user(cpu_to_le32(sequence), sequence_ptr));
+
+	kvm_vcpu_mark_page_dirty(vcpu, gfn);
 }
 
 static int kvm_sbi_sta_steal_time_set_shmem(struct kvm_vcpu *vcpu)
 {
-	return SBI_ERR_FAILURE;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	unsigned long shmem_phys_lo = cp->a0;
+	unsigned long shmem_phys_hi = cp->a1;
+	u32 flags = cp->a2;
+	struct sbi_sta_struct zero_sta = {0};
+	unsigned long hva;
+	bool writable;
+	gpa_t shmem;
+	int ret;
+
+	if (flags != 0)
+		return SBI_ERR_INVALID_PARAM;
+
+	if (shmem_phys_lo == SBI_STA_SHMEM_DISABLE &&
+	    shmem_phys_hi == SBI_STA_SHMEM_DISABLE) {
+		vcpu->arch.sta.shmem = INVALID_GPA;
+		return 0;
+	}
+
+	if (shmem_phys_lo & (SZ_64 - 1))
+		return SBI_ERR_INVALID_PARAM;
+
+	shmem = shmem_phys_lo;
+
+	if (shmem_phys_hi != 0) {
+		if (IS_ENABLED(CONFIG_32BIT))
+			shmem |= ((gpa_t)shmem_phys_hi << 32);
+		else
+			return SBI_ERR_INVALID_ADDRESS;
+	}
+
+	hva = kvm_vcpu_gfn_to_hva_prot(vcpu, shmem >> PAGE_SHIFT, &writable);
+	if (kvm_is_error_hva(hva) || !writable)
+		return SBI_ERR_INVALID_ADDRESS;
+
+	ret = kvm_vcpu_write_guest(vcpu, shmem, &zero_sta, sizeof(zero_sta));
+	if (ret)
+		return SBI_ERR_FAILURE;
+
+	vcpu->arch.sta.shmem = shmem;
+	vcpu->arch.sta.last_steal = current->sched_info.run_delay;
+
+	return 0;
 }
 
 static int kvm_sbi_ext_sta_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
@@ -52,7 +144,7 @@ static int kvm_sbi_ext_sta_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 
 static unsigned long kvm_sbi_ext_sta_probe(struct kvm_vcpu *vcpu)
 {
-	return 0;
+	return !!sched_info_on();
 }
 
 const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta = {
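kvm_riscv_vcpu_record_steal_time() above is the writer side of a
sequence-count protocol: "sequence" is incremented once before and once
after "steal" is updated, so a reader sees an odd value while an update
is in flight and a changed value if it raced with one. To illustrate
the consumer side, here is a minimal guest-side reader sketch. It
assumes the 64-byte sbi_sta_struct layout defined by the SBI STA
extension; the function name sta_read_steal() and the already-mapped
pointer it takes are hypothetical, not part of this patch.

#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/byteorder.h>

/* Layout per the SBI STA extension: 64 bytes total. */
struct sbi_sta_struct {
	__le32 sequence;	/* even: stable; odd: update in progress */
	__le32 flags;
	__le64 steal;		/* accumulated steal time, in nanoseconds */
	u8 preempted;
	u8 pad[47];
} __packed;

/* Hypothetical reader; 'sta' is the guest's mapping of the shmem area. */
static u64 sta_read_steal(struct sbi_sta_struct *sta)
{
	u32 seq;
	u64 steal;

	do {
		seq = le32_to_cpu(READ_ONCE(sta->sequence));
		smp_rmb();	/* read 'steal' after the first 'sequence' */
		steal = le64_to_cpu(READ_ONCE(sta->steal));
		smp_rmb();	/* re-read 'sequence' after 'steal' */
		/* Retry on an odd (in-flight) or changed sequence. */
	} while ((seq & 1) ||
		 seq != le32_to_cpu(READ_ONCE(sta->sequence)));

	return steal;
}

A guest would obtain that mapping by registering a 64-byte-aligned
shared memory area through the set-steal-time-shmem call handled above
(a0/a1 carry the low/high physical address bits and a2 the flags, which
must currently be zero); passing SBI_STA_SHMEM_DISABLE in both a0 and
a1 disables the area again, as the INVALID_GPA branch shows.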