From patchwork Wed Jan 10 23:13:57 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516675
From: Atish Patra <atishp@rivosinc.com>
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Alexandre Ghiti, kvm@vger.kernel.org, Anup Patel,
 Paul Walmsley, Atish Patra, Conor Dooley, Guo Ren,
 kvm-riscv@lists.infradead.org, Atish Patra, Palmer Dabbelt,
 linux-riscv@lists.infradead.org, Vladimir Isaev, Will Deacon,
 Andrew Jones
Subject: [v3 08/10] RISC-V: KVM: Implement SBI PMU Snapshot feature
Date: Wed, 10 Jan 2024 15:13:57 -0800
Message-Id: <20240110231359.1239367-9-atishp@rivosinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

The SBI PMU snapshot feature minimizes the number of traps taken when the
guest configures or accesses the hpmcounters. When the snapshot is enabled,
the hypervisor updates the shared memory with the counter data and the
state of overflown counters, and the guest can simply read the shared
memory instead of going through trap-and-emulate in the hypervisor.

This patch does not implement counter overflow yet.
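To illustrate the intended guest-side flow (this sketch is editorial and not
part of the patch), a 64-bit guest could register its snapshot area roughly
as follows. It assumes the SBI_EXT_PMU_SNAPSHOT_SET_SHMEM function ID and
the struct riscv_pmu_snapshot_data layout introduced earlier in this series;
the helper name pmu_snapshot_setup() is hypothetical and error handling is
trimmed.

/*
 * Editorial sketch, not part of this patch: guest-side registration of a
 * per-hart PMU snapshot area. Assumes SBI_EXT_PMU_SNAPSHOT_SET_SHMEM and
 * struct riscv_pmu_snapshot_data from earlier patches in this series.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <asm/page.h>
#include <asm/sbi.h>

static struct riscv_pmu_snapshot_data *snapshot_area;

static int pmu_snapshot_setup(void)
{
	struct sbiret ret;

	/* One zeroed page is naturally aligned and large enough for the area. */
	snapshot_area = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
	if (!snapshot_area)
		return -ENOMEM;

	/* a0 = low bits of the GPA, a1 = high bits (0 on RV64), a2 = flags */
	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
			__pa(snapshot_area), 0, 0, 0, 0, 0);
	if (ret.error) {
		free_page((unsigned long)snapshot_area);
		snapshot_area = NULL;
		return sbi_err_map_linux_errno(ret.error);
	}

	/*
	 * After a COUNTER_STOP call with SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT, the
	 * hypervisor publishes the stopped counters' values in
	 * snapshot_area->ctr_values[], so the guest reads them from memory
	 * instead of trapping on each hpmcounter access.
	 */
	return 0;
}

On the host side, this registration is the path handled by
kvm_riscv_vcpu_pmu_setup_snapshot() below.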
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_pmu.h |   7 ++
 arch/riscv/kvm/vcpu_pmu.c             | 119 +++++++++++++++++++++++++-
 arch/riscv/kvm/vcpu_sbi_pmu.c         |   3 +
 3 files changed, 127 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 395518a1664e..586bab84be35 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -50,6 +50,10 @@ struct kvm_pmu {
 	bool init_done;
 	/* Bit map of all the virtual counter used */
 	DECLARE_BITMAP(pmc_in_use, RISCV_KVM_MAX_COUNTERS);
+	/* The address of the counter snapshot area (guest physical address) */
+	gpa_t snapshot_addr;
+	/* The actual data of the snapshot */
+	struct riscv_pmu_snapshot_data *sdata;
 };

 #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context)
@@ -85,6 +89,9 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata);
 void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_pmu_setup_snapshot(struct kvm_vcpu *vcpu, unsigned long saddr_low,
+				      unsigned long saddr_high, unsigned long flags,
+				      struct kvm_vcpu_sbi_return *retdata);
 void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);

diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 29bf4ca798cb..b3112c23ad33 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -311,6 +311,81 @@ int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	return ret;
 }

+static void kvm_pmu_clear_snapshot_area(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
+
+	if (kvpmu->sdata) {
+		memset(kvpmu->sdata, 0, snapshot_area_size);
+		if (kvpmu->snapshot_addr != INVALID_GPA)
+			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr,
+					     kvpmu->sdata, snapshot_area_size);
+		kfree(kvpmu->sdata);
+		kvpmu->sdata = NULL;
+	}
+	kvpmu->snapshot_addr = INVALID_GPA;
+}
+
+int kvm_riscv_vcpu_pmu_setup_snapshot(struct kvm_vcpu *vcpu, unsigned long saddr_low,
+				      unsigned long saddr_high, unsigned long flags,
+				      struct kvm_vcpu_sbi_return *retdata)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
+	int sbiret = 0;
+	gpa_t saddr;
+	unsigned long hva;
+	bool writable;
+
+	if (!kvpmu) {
+		sbiret = SBI_ERR_INVALID_PARAM;
+		goto out;
+	}
+
+	if (saddr_low == -1 && saddr_high == -1) {
+		kvm_pmu_clear_snapshot_area(vcpu);
+		return 0;
+	}
+
+	saddr = saddr_low;
+
+	if (saddr_high != 0) {
+		if (IS_ENABLED(CONFIG_32BIT))
+			saddr |= ((gpa_t)saddr << 32);
+		else
+			sbiret = SBI_ERR_INVALID_ADDRESS;
+		goto out;
+	}
+
+	if (kvm_is_error_gpa(vcpu->kvm, saddr)) {
+		sbiret = SBI_ERR_INVALID_PARAM;
+		goto out;
+	}
+
+	hva = kvm_vcpu_gfn_to_hva_prot(vcpu, saddr >> PAGE_SHIFT, &writable);
+	if (kvm_is_error_hva(hva) || !writable) {
+		sbiret = SBI_ERR_INVALID_ADDRESS;
+		goto out;
+	}
+
+	kvpmu->snapshot_addr = saddr;
+	kvpmu->sdata = kzalloc(snapshot_area_size, GFP_ATOMIC);
+	if (!kvpmu->sdata)
+		return -ENOMEM;
+
+	if (kvm_vcpu_write_guest(vcpu, saddr, kvpmu->sdata, snapshot_area_size)) {
+		kfree(kvpmu->sdata);
+		kvpmu->snapshot_addr = INVALID_GPA;
+		sbiret = SBI_ERR_FAILURE;
+	}
+
+out:
+	retdata->err_val = sbiret;
+
+	return 0;
+}
+
 int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu,
 				struct kvm_vcpu_sbi_return *retdata)
 {
@@ -344,20 +419,32 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 	int i, pmc_index, sbiret = 0;
 	struct kvm_pmc *pmc;
 	int fevent_code;
+	bool snap_flag_set = flags & SBI_PMU_START_FLAG_INIT_FROM_SNAPSHOT;

 	if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) {
 		sbiret = SBI_ERR_INVALID_PARAM;
 		goto out;
 	}

+	if (snap_flag_set && kvpmu->snapshot_addr == INVALID_GPA) {
+		sbiret = SBI_ERR_NO_SHMEM;
+		goto out;
+	}
+
 	/* Start the counters that have been configured and requested by the guest */
 	for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
 		pmc_index = i + ctr_base;
 		if (!test_bit(pmc_index, kvpmu->pmc_in_use))
 			continue;
 		pmc = &kvpmu->pmc[pmc_index];
-		if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE)
+		if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE) {
 			pmc->counter_val = ival;
+		} else if (snap_flag_set) {
+			kvm_vcpu_read_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
+					    sizeof(struct riscv_pmu_snapshot_data));
+			pmc->counter_val = kvpmu->sdata->ctr_values[pmc_index];
+		}
+
 		if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) {
 			fevent_code = get_event_code(pmc->event_idx);
 			if (fevent_code >= SBI_PMU_FW_MAX) {
@@ -398,14 +485,21 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 {
 	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
 	int i, pmc_index, sbiret = 0;
+	u64 enabled, running;
 	struct kvm_pmc *pmc;
 	int fevent_code;
+	bool snap_flag_set = flags & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT;

-	if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) {
+	if ((kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0)) {
 		sbiret = SBI_ERR_INVALID_PARAM;
 		goto out;
 	}

+	if (snap_flag_set && kvpmu->snapshot_addr == INVALID_GPA) {
+		sbiret = SBI_ERR_NO_SHMEM;
+		goto out;
+	}
+
 	/* Stop the counters that have been configured and requested by the guest */
 	for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
 		pmc_index = i + ctr_base;
@@ -438,9 +532,28 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 		} else {
 			sbiret = SBI_ERR_INVALID_PARAM;
 		}
+
+		if (snap_flag_set && !sbiret) {
+			if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW)
+				pmc->counter_val = kvpmu->fw_event[fevent_code].value;
+			else if (pmc->perf_event)
+				pmc->counter_val += perf_event_read_value(pmc->perf_event,
+									  &enabled, &running);
+			/* TODO: Add counter overflow support when sscofpmf support is added */
+			kvpmu->sdata->ctr_values[i] = pmc->counter_val;
+			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
+					     sizeof(struct riscv_pmu_snapshot_data));
+		}
+
 		if (flags & SBI_PMU_STOP_FLAG_RESET) {
 			pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 			clear_bit(pmc_index, kvpmu->pmc_in_use);
+			if (snap_flag_set) {
+				/* Clear the snapshot area for the upcoming deletion event */
+				kvpmu->sdata->ctr_values[i] = 0;
+				kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
+						     sizeof(struct riscv_pmu_snapshot_data));
+			}
 		}
 	}

@@ -566,6 +679,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 	kvpmu->num_hw_ctrs = num_hw_ctrs + 1;
 	kvpmu->num_fw_ctrs = SBI_PMU_FW_MAX;
 	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
+	kvpmu->snapshot_addr = INVALID_GPA;

 	if (kvpmu->num_hw_ctrs > RISCV_KVM_MAX_HW_CTRS) {
 		pr_warn_once("Limiting the hardware counters to 32 as specified by the ISA");
@@ -625,6 +739,7 @@ void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
 	}
 	bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS);
 	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
+	kvm_pmu_clear_snapshot_area(vcpu);
 }

 void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
index b70179e9e875..9f61136e4bb1 100644
--- a/arch/riscv/kvm/vcpu_sbi_pmu.c
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -64,6 +64,9 @@ static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case SBI_EXT_PMU_COUNTER_FW_READ:
 		ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
 		break;
+	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
+		ret = kvm_riscv_vcpu_pmu_setup_snapshot(vcpu, cp->a0, cp->a1, cp->a2, retdata);
+		break;
 	default:
 		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
 	}