From patchwork Mon Mar 17 10:08:32 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vaibhav Jain <vaibhav@linux.ibm.com>
X-Patchwork-Id: 14019055
From: Vaibhav Jain <vaibhav@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Cc: Vaibhav Jain, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
    Vaidyanathan Srinivasan, sbhat@linux.ibm.com, gautam@linux.ibm.com,
    kconsul@linux.ibm.com, amachhiw@linux.ibm.com, Athira Rajeev
Subject: [PATCH v5 5/6] powerpc/kvm-hv-pmu: Implement GSB message-ops for hostwide counters
Date: Mon, 17 Mar 2025 15:38:32 +0530
Message-ID: <20250317100834.451452-6-vaibhav@linux.ibm.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250317100834.451452-1-vaibhav@linux.ibm.com>
References: <20250317100834.451452-1-vaibhav@linux.ibm.com>
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Implement and set up the necessary structures to send a prepopulated
Guest-State-Buffer (GSB) requesting hostwide counters to L0-PowerVM,
and have the returned GSB holding the values of these counters parsed.
This is done via the existing GSB implementation together with the
newly added support for Hostwide elements in the GSB.

The request to L0-PowerVM for the Hostwide counters is made using a
pre-allocated GSB named 'gsb_l0_stats'. To populate this GSB with the
needed Guest-State-Elements (GSIDs), an instance of 'struct
kvmppc_gs_msg' named 'gsm_l0_stats' is introduced. 'gsm_l0_stats' is
tied to an instance of 'struct kvmppc_gs_msg_ops' named
'gsb_ops_l0_stats', which holds the callbacks to compute the size
(hostwide_get_size()), populate the GSB (hostwide_fill_info()) and
refresh (hostwide_refresh_info()) the contents of 'l0_stats', which
holds the Hostwide counters returned from L0-PowerVM. To protect these
structures from simultaneous access, a spinlock 'lock_l0_stats' has
been introduced.
The allocation and initialization of the above structures is done in
the newly introduced kvmppc_init_hostwide(), and similarly the cleanup
is performed in the newly introduced kvmppc_cleanup_hostwide().

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
---
Changelog

v4->v5:
* Update kvmppc_register_pmu() to refactor the module init path.

v3->v4:
* Minor tweaks to code and patch description since this code is now
  being built as a kernel module.
* Introduced kvmppc_events_sysfs_show() as power_events_sysfs_show()
  is not exported to modules.

v2->v3:
* Removed a redundant branch in kvmppc_init_hostwide() [Gautam]

v1->v2:
* Added error handling to hostwide_fill_info() [Gautam]
---
 arch/powerpc/perf/kvm-hv-pmu.c | 207 +++++++++++++++++++++++++++++++++
 1 file changed, 207 insertions(+)

diff --git a/arch/powerpc/perf/kvm-hv-pmu.c b/arch/powerpc/perf/kvm-hv-pmu.c
index 12f40a7b3ced..705be24ccb43 100644
--- a/arch/powerpc/perf/kvm-hv-pmu.c
+++ b/arch/powerpc/perf/kvm-hv-pmu.c
@@ -27,10 +27,41 @@
 #include
 #include

+#include "asm/guest-state-buffer.h"
+
 enum kvmppc_pmu_eventid {
 	KVMPPC_EVENT_MAX,
 };

+#define KVMPPC_PMU_EVENT_ATTR(_name, _id) \
+	PMU_EVENT_ATTR_ID(_name, kvmppc_events_sysfs_show, _id)
+
+static ssize_t kvmppc_events_sysfs_show(struct device *dev,
+					struct device_attribute *attr,
+					char *page)
+{
+	struct perf_pmu_events_attr *pmu_attr;
+
+	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+	return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+/* Holds the hostwide stats */
+static struct kvmppc_hostwide_stats {
+	u64 guest_heap;
+	u64 guest_heap_max;
+	u64 guest_pgtable_size;
+	u64 guest_pgtable_size_max;
+	u64 guest_pgtable_reclaim;
+} l0_stats;
+
+/* Protect access to l0_stats */
+static DEFINE_SPINLOCK(lock_l0_stats);
+
+/* GSB related structs needed to talk to L0 */
+static struct kvmppc_gs_msg *gsm_l0_stats;
+static struct kvmppc_gs_buff *gsb_l0_stats;
+
 static struct attribute *kvmppc_pmu_events_attr[] = {
 	NULL,
 };
@@ -90,6 +121,176 @@ static void kvmppc_pmu_read(struct perf_event *event)
 {
 }

+/* Return the size of the needed guest state buffer */
+static size_t hostwide_get_size(struct kvmppc_gs_msg *gsm)
+{
+	size_t size = 0;
+	const u16 ids[] = {
+		KVMPPC_GSID_L0_GUEST_HEAP,
+		KVMPPC_GSID_L0_GUEST_HEAP_MAX,
+		KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE,
+		KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX,
+		KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM
+	};
+
+	for (int i = 0; i < ARRAY_SIZE(ids); i++)
+		size += kvmppc_gse_total_size(kvmppc_gsid_size(ids[i]));
+	return size;
+}
+
+/* Populate the request guest state buffer */
+static int hostwide_fill_info(struct kvmppc_gs_buff *gsb,
+			      struct kvmppc_gs_msg *gsm)
+{
+	int rc = 0;
+	struct kvmppc_hostwide_stats *stats = gsm->data;
+
+	/*
+	 * It doesn't matter what values are put into the request buffer
+	 * as they are going to be overwritten anyway. But for the sake
+	 * of test code and symmetry, the contents of the existing stats
+	 * are populated into the request guest state buffer.
+	 */
+	if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_HEAP))
+		rc = kvmppc_gse_put_u64(gsb,
+					KVMPPC_GSID_L0_GUEST_HEAP,
+					stats->guest_heap);
+
+	if (!rc && kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_HEAP_MAX))
+		rc = kvmppc_gse_put_u64(gsb,
+					KVMPPC_GSID_L0_GUEST_HEAP_MAX,
+					stats->guest_heap_max);
+
+	if (!rc && kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE))
+		rc = kvmppc_gse_put_u64(gsb,
+					KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE,
+					stats->guest_pgtable_size);
+	if (!rc &&
+	    kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX))
+		rc = kvmppc_gse_put_u64(gsb,
+					KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX,
+					stats->guest_pgtable_size_max);
+	if (!rc &&
+	    kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM))
+		rc = kvmppc_gse_put_u64(gsb,
+					KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM,
+					stats->guest_pgtable_reclaim);
+
+	return rc;
+}
+
+/* Parse and update the hostwide stats from the returned gsb */
+static int hostwide_refresh_info(struct kvmppc_gs_msg *gsm,
+				 struct kvmppc_gs_buff *gsb)
+{
+	struct kvmppc_gs_parser gsp = { 0 };
+	struct kvmppc_hostwide_stats *stats = gsm->data;
+	struct kvmppc_gs_elem *gse;
+	int rc;
+
+	rc = kvmppc_gse_parse(&gsp, gsb);
+	if (rc < 0)
+		return rc;
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP);
+	if (gse)
+		stats->guest_heap = kvmppc_gse_get_u64(gse);
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP_MAX);
+	if (gse)
+		stats->guest_heap_max = kvmppc_gse_get_u64(gse);
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE);
+	if (gse)
+		stats->guest_pgtable_size = kvmppc_gse_get_u64(gse);
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX);
+	if (gse)
+		stats->guest_pgtable_size_max = kvmppc_gse_get_u64(gse);
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM);
+	if (gse)
+		stats->guest_pgtable_reclaim = kvmppc_gse_get_u64(gse);
+
+	return 0;
+}
+
+/* gsb-message ops for setting up/parsing */
+static struct kvmppc_gs_msg_ops gsb_ops_l0_stats = {
+	.get_size = hostwide_get_size,
+	.fill_info = hostwide_fill_info,
+	.refresh_info = hostwide_refresh_info,
+};
+
+static int kvmppc_init_hostwide(void)
+{
+	int rc = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&lock_l0_stats, flags);
+
+	/* already registered ? */
+	if (gsm_l0_stats) {
+		rc = 0;
+		goto out;
+	}
+
+	/* setup the Guest state message/buffer to talk to L0 */
+	gsm_l0_stats = kvmppc_gsm_new(&gsb_ops_l0_stats, &l0_stats,
+				      GSM_SEND, GFP_KERNEL);
+	if (!gsm_l0_stats) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* Populate the Idents */
+	kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_HEAP);
+	kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_HEAP_MAX);
+	kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE);
+	kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX);
+	kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM);
+
+	/* allocate GSB. Guest/Vcpu Id is ignored */
+	gsb_l0_stats = kvmppc_gsb_new(kvmppc_gsm_size(gsm_l0_stats), 0, 0,
+				      GFP_KERNEL);
+	if (!gsb_l0_stats) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	/* ask the ops to fill in the info */
+	rc = kvmppc_gsm_fill_info(gsm_l0_stats, gsb_l0_stats);
+
+out:
+	if (rc) {
+		if (gsm_l0_stats)
+			kvmppc_gsm_free(gsm_l0_stats);
+		if (gsb_l0_stats)
+			kvmppc_gsb_free(gsb_l0_stats);
+		gsm_l0_stats = NULL;
+		gsb_l0_stats = NULL;
+	}
+	spin_unlock_irqrestore(&lock_l0_stats, flags);
+	return rc;
+}
+
+static void kvmppc_cleanup_hostwide(void)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&lock_l0_stats, flags);
+
+	if (gsm_l0_stats)
+		kvmppc_gsm_free(gsm_l0_stats);
+	if (gsb_l0_stats)
+		kvmppc_gsb_free(gsb_l0_stats);
+	gsm_l0_stats = NULL;
+	gsb_l0_stats = NULL;
+
+	spin_unlock_irqrestore(&lock_l0_stats, flags);
+}
+
 /* L1 wide counters PMU */
 static struct pmu kvmppc_pmu = {
 	.module = THIS_MODULE,
@@ -109,6 +310,10 @@ static int __init kvmppc_register_pmu(void)

 	/* only support events for nestedv2 right now */
 	if (kvmhv_is_nestedv2()) {
+		rc = kvmppc_init_hostwide();
+		if (rc)
+			goto out;
+
 		/* Register the pmu */
 		rc = perf_pmu_register(&kvmppc_pmu, kvmppc_pmu.name, -1);
 		if (rc)
@@ -124,6 +329,8 @@ static void __exit kvmppc_unregister_pmu(void)
 {
 	if (kvmhv_is_nestedv2()) {
+		kvmppc_cleanup_hostwide();
+
 		if (kvmppc_pmu.type != -1)
 			perf_pmu_unregister(&kvmppc_pmu);
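
For context, below is a sketch of how a later consumer of these
structures might drive the request/refresh exchange described in the
commit message (this is not part of this patch; the caller name
kvmppc_update_l0_stats() and the KVMPPC_GS_FLAGS_HOST_WIDE flag are
assumptions based on the rest of this series, while kvmppc_gsb_recv()
and kvmppc_gsm_refresh_info() come from the existing
guest-state-buffer API):

	/*
	 * Hypothetical caller: round-trip the pre-filled GSB to
	 * L0-PowerVM, which overwrites the element values with the
	 * current hostwide counters, then parse the reply back into
	 * l0_stats via gsb_ops_l0_stats.refresh_info().
	 */
	static int kvmppc_update_l0_stats(void)
	{
		unsigned long flags;
		int rc;

		spin_lock_irqsave(&lock_l0_stats, flags);

		/* Send the request GSB and receive L0's reply in place */
		rc = kvmppc_gsb_recv(gsb_l0_stats, KVMPPC_GS_FLAGS_HOST_WIDE);
		if (!rc)
			/* Invokes hostwide_refresh_info() on the reply */
			rc = kvmppc_gsm_refresh_info(gsm_l0_stats,
						     gsb_l0_stats);

		spin_unlock_irqrestore(&lock_l0_stats, flags);
		return rc;
	}

Holding 'lock_l0_stats' across the exchange keeps the buffer, the
message and 'l0_stats' consistent with the locking scheme introduced
by this patch.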