From patchwork Thu Aug 1 04:58:10 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749524
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:10 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-2-mizhang@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [RFC PATCH v3 01/58] sched/core: Move preempt_model_*() helpers from
 sched.h to preempt.h
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang,
 Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang,
 gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu,
 Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org,
 linux-perf-users@vger.kernel.org

From: Sean Christopherson

Move the declarations and inlined implementations of the preempt_model_*()
helpers to preempt.h so that they can be referenced in spinlock.h without
creating a potential circular dependency between spinlock.h and sched.h.

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Ankur Arora
Signed-off-by: Mingwei Zhang
---
 include/linux/preempt.h | 41 +++++++++++++++++++++++++++++++++++++++++
 include/linux/sched.h   | 41 -----------------------------------------
 2 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 7233e9cf1bab..ce76f1a45722 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -481,4 +481,45 @@ DEFINE_LOCK_GUARD_0(preempt, preempt_disable(), preempt_enable())
 DEFINE_LOCK_GUARD_0(preempt_notrace, preempt_disable_notrace(), preempt_enable_notrace())
 DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+extern bool preempt_model_none(void);
+extern bool preempt_model_voluntary(void);
+extern bool preempt_model_full(void);
+
+#else
+
+static inline bool preempt_model_none(void)
+{
+	return IS_ENABLED(CONFIG_PREEMPT_NONE);
+}
+static inline bool preempt_model_voluntary(void)
+{
+	return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
+}
+static inline bool preempt_model_full(void)
+{
+	return IS_ENABLED(CONFIG_PREEMPT);
+}
+
+#endif
+
+static inline bool preempt_model_rt(void)
+{
+	return IS_ENABLED(CONFIG_PREEMPT_RT);
+}
+
+/*
+ * Does the preemption model allow non-cooperative preemption?
+ *
+ * For !CONFIG_PREEMPT_DYNAMIC kernels this is an exact match with
+ * CONFIG_PREEMPTION; for CONFIG_PREEMPT_DYNAMIC this doesn't work as the
+ * kernel is *built* with CONFIG_PREEMPTION=y but may run with e.g. the
+ * PREEMPT_NONE model.
+ */
+static inline bool preempt_model_preemptible(void)
+{
+	return preempt_model_full() || preempt_model_rt();
+}
+
 #endif /* __LINUX_PREEMPT_H */

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 61591ac6eab6..90691d99027e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2064,47 +2064,6 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
 	__cond_resched_rwlock_write(lock);				\
 })
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
-
-extern bool preempt_model_none(void);
-extern bool preempt_model_voluntary(void);
-extern bool preempt_model_full(void);
-
-#else
-
-static inline bool preempt_model_none(void)
-{
-	return IS_ENABLED(CONFIG_PREEMPT_NONE);
-}
-static inline bool preempt_model_voluntary(void)
-{
-	return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
-}
-static inline bool preempt_model_full(void)
-{
-	return IS_ENABLED(CONFIG_PREEMPT);
-}
-
-#endif
-
-static inline bool preempt_model_rt(void)
-{
-	return IS_ENABLED(CONFIG_PREEMPT_RT);
-}
-
-/*
- * Does the preemption model allow non-cooperative preemption?
- *
- * For !CONFIG_PREEMPT_DYNAMIC kernels this is an exact match with
- * CONFIG_PREEMPTION; for CONFIG_PREEMPT_DYNAMIC this doesn't work as the
- * kernel is *built* with CONFIG_PREEMPTION=y but may run with e.g. the
- * PREEMPT_NONE model.
- */
-static inline bool preempt_model_preemptible(void)
-{
-	return preempt_model_full() || preempt_model_rt();
-}
-
 static __always_inline bool need_resched(void)
 {
 	return unlikely(tif_need_resched());

From patchwork Thu Aug 1 04:58:11 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749525
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:11 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-3-mizhang@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [RFC PATCH v3 02/58] sched/core: Drop spinlocks on contention iff
 kernel is preemptible
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang,
 Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang,
 gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu,
 Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org,
 linux-perf-users@vger.kernel.org

From: Sean Christopherson

Use preempt_model_preemptible() to detect a preemptible kernel when
deciding whether or not to reschedule in order to drop a contended
spinlock or rwlock. Because PREEMPT_DYNAMIC selects PREEMPTION, kernels
built with PREEMPT_DYNAMIC=y will yield contended locks even if the live
preemption model is "none" or "voluntary". In short, make kernels with
dynamically selected models behave the same as kernels with statically
selected models.

Somewhat counter-intuitively, NOT yielding a lock can provide better
latency for the relevant tasks/processes. E.g. KVM x86's mmu_lock, a
rwlock, is often contended between an invalidation event (takes mmu_lock
for write) and a vCPU servicing a guest page fault (takes mmu_lock for
read). For _some_ setups, letting the invalidation task complete even if
there is mmu_lock contention provides lower latency for *all* tasks,
i.e. the invalidation completes sooner *and* the vCPU services the guest
page fault sooner.

But even KVM's mmu_lock behavior isn't uniform, e.g. the "best" behavior
can vary depending on the host VMM, the guest workload, the number of
vCPUs, the number of pCPUs in the host, why there is lock contention, etc.

In other words, simply deleting the CONFIG_PREEMPTION guard (or doing the
opposite and removing contention yielding entirely) needs to come with a
big pile of data proving that changing the status quo is a net positive.

Opportunistically document this side effect of preempt=full, as yielding
contended spinlocks can have significant, user-visible impact.

Fixes: c597bfddc9e9 ("sched: Provide Kconfig support for default dynamic preempt mode")
Link: https://lore.kernel.org/kvm/ef81ff36-64bb-4cfe-ae9b-e3acf47bff24@proxmox.com
Cc: Valentin Schneider
Cc: Peter Zijlstra (Intel)
Cc: Marco Elver
Cc: Frederic Weisbecker
Cc: David Matlack
Cc: Friedrich Weber
Cc: Ankur Arora
Cc: Thomas Gleixner
Signed-off-by: Sean Christopherson
Reviewed-by: Ankur Arora
Reviewed-by: Chen Yu
Signed-off-by: Mingwei Zhang
---
 Documentation/admin-guide/kernel-parameters.txt |  4 +++-
 include/linux/spinlock.h                        | 14 ++++++--------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b600df82669d..ebb971a57d04 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4774,7 +4774,9 @@
 			none - Limited to cond_resched() calls
 			voluntary - Limited to cond_resched() and might_sleep() calls
 			full - Any section that isn't explicitly preempt disabled
-			       can be preempted anytime.
+			       can be preempted anytime.  Tasks will also yield
+			       contended spinlocks (if the critical section isn't
+			       explicitly preempt disabled beyond the lock itself).
 
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
 
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3fcd20de6ca8..63dd8cf3c3c2 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -462,11 +462,10 @@ static __always_inline int spin_is_contended(spinlock_t *lock)
  */
 static inline int spin_needbreak(spinlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
+	if (!preempt_model_preemptible())
+		return 0;
+
 	return spin_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 /*
@@ -479,11 +478,10 @@ static inline int spin_needbreak(spinlock_t *lock)
  */
 static inline int rwlock_needbreak(rwlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
+	if (!preempt_model_preemptible())
+		return 0;
+
 	return rwlock_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 /*

From patchwork Thu Aug 1 04:58:12 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749526
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:12 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-4-mizhang@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [RFC PATCH v3 03/58] perf/x86: Do not set bit width for unavailable
 counters
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang,
 Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang,
 gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu,
 Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org,
 linux-perf-users@vger.kernel.org

From: Sandipan Das

Not all x86 processors have fixed counters. It may also be the case that
a processor has only fixed counters and no general-purpose counters.
Set the bit widths corresponding to each counter type only if such
counters are available.

Fixes: b3d9468a8bd2 ("perf, x86: Expose perf capability to other modules")
Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/core.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5b0dd07b1ef1..5bf78cd619bf 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2985,8 +2985,13 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	cap->version		= x86_pmu.version;
 	cap->num_counters_gp	= x86_pmu.num_counters;
 	cap->num_counters_fixed	= x86_pmu.num_counters_fixed;
-	cap->bit_width_gp	= x86_pmu.cntval_bits;
-	cap->bit_width_fixed	= x86_pmu.cntval_bits;
+
+	if (cap->num_counters_gp)
+		cap->bit_width_gp = x86_pmu.cntval_bits;
+
+	if (cap->num_counters_fixed)
+		cap->bit_width_fixed = x86_pmu.cntval_bits;
+
 	cap->events_mask	= (unsigned int)x86_pmu.events_maskl;
 	cap->events_mask_len	= x86_pmu.events_mask_len;
 	cap->pebs_ept		= x86_pmu.pebs_ept;

From patchwork Thu Aug 1 04:58:13 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749527
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:13 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-5-mizhang@google.com>
X-Mailing-List: kvm@vger.kernel.org
Subject: [RFC PATCH v3 04/58] x86/msr: Define PerfCntrGlobalStatusSet register
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang,
 Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang,
 gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu,
 Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org,
 linux-perf-users@vger.kernel.org
From: Sandipan Das Define PerfCntrGlobalStatusSet (MSR 0xc0000303) as it is required by passthrough PMU to set the overflow bits of PerfCntrGlobalStatus (MSR 0xc0000300). When using passthrough PMU, it is necessary to restore the guest state of the overflow bits. Since PerfCntrGlobalStatus is read-only, this is done by writing to PerfCntrGlobalStatusSet instead. The register is available on AMD processors where the PerfMonV2 feature bit of CPUID leaf 0x80000022 EAX is set. Signed-off-by: Sandipan Das Signed-off-by: Mingwei Zhang --- arch/x86/include/asm/msr-index.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index e022e6eb766c..b9f8744b47e5 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -681,6 +681,7 @@ #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 +#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET 0xc0000303 /* AMD Last Branch Record MSRs */ #define MSR_AMD64_LBR_SELECT 0xc000010e From patchwork Thu Aug 1 04:58:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749528 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9BCBC143885 for ; Thu, 1 Aug 2024 04:59:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722488363; cv=none; b=mZZuOC3gpQPsRhNO2bafWN7miTwUFz94O/29chSLn6CpMhx/Z4LAaVrwe5mGCmxZtPE37pywT9XVxzM3AIjI2Gzn2Z7TjDMjVeV7ojFLMjExo3wJhM/ovnaJR5H5bh3YDNWISLglnbAHsEfZpdi/CGRT15T8uPvndySQTHKdgH0= ARC-Message-Signature: i=1; 
a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722488363; c=relaxed/simple; bh=XpWZVpP+1fU8zyiBCBEZSEjntYhRTb04Ki1oElLfW4s=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=MF8sXIS35E4d+mlTZR+5ZWt16zM2fv+Q/q+HQofLJMoIEk8P6Ux6n/362Qwgsh4YGR1gWo8JqaIg7cMUzH63EFTF2/n0e1yVlwA4kpHfgq2yTWLTkBPLLRMqi5YdUqLM6vDUlYzV2dnDWkiPOEFjdtepHjCIgGInOsDVT+kiGNQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=fdJ8GY+b; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="fdJ8GY+b" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-70d1df50db2so1762990b3a.0 for ; Wed, 31 Jul 2024 21:59:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722488361; x=1723093161; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=6duVsFRfrk0IpAUCyLOr5AOz6KYb78N++19JHZQFaLc=; b=fdJ8GY+bhcQyRIG8y92u20GHycTeC+kKZsEZj9XZ8VjPMkcCxfjrDhTF+RLMqx4THP FowQGy0dl7bDp8eH1TxiK4m1wr+p+qJ86MDliMYpXnIe+lsotMe3bUB3E8GyTjGh3rUw cDq7Vn/IATRqfZ2DH+GEPUvfZesjPH4miqdRvz5g+df2B5mCQFWcuCtQg4R7kusJGBGG /WBO5iftWgoIiMUR/A7lSCzpUI5pEOnI5sqdpTTYUi2LFwSeYvDb8DVJ0TylLrZHBnw8 A6s8hUn5Mql72TFM9Qwuh0vAT0aHd+No6+lgknUiRt0ppQ+4Iq+QZQDpSoVPEN/KKkk1 aYZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722488361; x=1723093161; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=6duVsFRfrk0IpAUCyLOr5AOz6KYb78N++19JHZQFaLc=; b=Fp7XJAZboBMpW8F8M6c7n9Llq0zSu2y1saEO2/zhhLvfJr1v9NKODCxE0/l7v3dU+l 1p+XGjW75kkfdxPnTWXKUprr+To974gmWshVqOX40dDAmYQPiadZSrPdi0NO0e1kprO9 CIcMFEl03GDlFSDkU+cyGmDJeCQmd8iBSZiEnHbFFrX2FYvQ4Igsq1fY2f+xycker1vn KJDNORdAQd4MWti7KvPy73BPiovlLUCheMFvCHWBQIuz3tZVATo+x9LdQ8bseNTGBDp2 9/dvl30J25UndTmedPmFW9qeTBj29pNlx5hP+rNUKfxq+KktrieTKC3IouYAcexTGT9i OE5Q== X-Forwarded-Encrypted: i=1; AJvYcCWS/LLOvW+CvVytNYTQmY1WnbROt+oar+JHrFAv+1eFHxalXvI5let1fuq6spf46kgKloajXVh10d/Fg7Gt+D/y2o9R X-Gm-Message-State: AOJu0Ywgv2Kq3pnptJ9DzgKG5MNQJwvUSzM3ja2eAF1Zly0l3qml0ijw oCsNpbh/moTWVXVws2QyD39uRQwjAhAqixledHZTf/tk9saxE2zcioqEMFVn0d2l+w86/xMC257 PRnh4wQ== X-Google-Smtp-Source: AGHT+IE0xF0ZWDReclAZ16EiEfEORg6bp7W7nUuzCTTl84dqX0CiNk3QO/0bqT1nO6h5B7MExJodqJcFutLy X-Received: from mizhang-super.c.googlers.com ([34.105.13.176]) (user=mizhang job=sendgmr) by 2002:a05:6a00:9465:b0:706:6a2f:36b0 with SMTP id d2e1a72fcca58-71065e4ef40mr758b3a.2.1722488360754; Wed, 31 Jul 2024 21:59:20 -0700 (PDT) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:14 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-6-mizhang@google.com> Subject: [RFC PATCH v3 05/58] x86/msr: Introduce MSR_CORE_PERF_GLOBAL_STATUS_SET From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , 
Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Dapeng Mi Add an additional PMU MSR, MSR_CORE_PERF_GLOBAL_STATUS_SET, which allows setting bits in the otherwise read-only MSR IA32_PERF_GLOBAL_STATUS, as required for passthrough PMU operation. Signed-off-by: Dapeng Mi Signed-off-by: Mingwei Zhang --- arch/x86/include/asm/msr-index.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index b9f8744b47e5..1d7104713926 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -1113,6 +1113,7 @@ #define MSR_CORE_PERF_GLOBAL_STATUS 0x0000038e #define MSR_CORE_PERF_GLOBAL_CTRL 0x0000038f #define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x00000390 +#define MSR_CORE_PERF_GLOBAL_STATUS_SET 0x00000391 #define MSR_PERF_METRICS 0x00000329 From patchwork Thu Aug 1 04:58:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749529
Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:15 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> References: <20240801045907.4010984-1-mizhang@google.com> Message-ID: <20240801045907.4010984-7-mizhang@google.com> Subject: [RFC PATCH v3 06/58] perf: Support get/put passthrough PMU interfaces From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Kan Liang Currently, the guest and host share the PMU resources when a guest is running.
KVM has to create an extra virtual event to simulate the guest's event, which brings several issues, e.g., high overhead and inaccuracy. A new passthrough PMU method is proposed to address the issue. It requires that the PMU resources can be fully occupied by the guest while it's running. Two new interfaces are implemented to fulfill the requirement. The hypervisor should invoke the interfaces when creating a guest that wants the passthrough PMU capability. The PMU resources should only be occupied as a whole temporarily, while a guest is running. When the guest exits, the PMU resources are again shared among different users. The exclude_guest event modifier is used to guarantee the exclusive occupation of the PMU resources. When creating a guest, the hypervisor should check whether there are !exclude_guest events in the system. If so, the creation should fail, because some PMU resources are already occupied by other users. If not, the PMU resources can be safely accessed by the guest directly. Perf guarantees that no new !exclude_guest events are created while a guest is running. Only the passthrough PMU is affected; other PMUs, e.g., uncore and software PMUs, keep their existing behavior. The guest enter/exit interfaces should only impact the supported PMUs. Add a new PERF_PMU_CAP_PASSTHROUGH_VPMU flag to indicate the PMUs that support the feature. Add nr_include_guest_events to track the !exclude_guest events of PMUs with PERF_PMU_CAP_PASSTHROUGH_VPMU.
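The two checks described above (VM creation fails while !exclude_guest events exist; !exclude_guest event creation fails while a mediated VM exists) form a simple mutual-exclusion scheme. The sketch below models it in plain userspace C so the interaction can be exercised; the names mirror the kernel's but the locking is simplified (a plain pthread mutex instead of guard(mutex), and no atomic_inc_not_zero fast path), so this is an illustration, not the kernel implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/* Userspace model of the scheme; zero-initialized like the kernel's atomics. */
static atomic_int nr_mediated_pmu_vms;
static atomic_int nr_include_guest_events;
static pthread_mutex_t perf_mediated_pmu_mutex = PTHREAD_MUTEX_INITIALIZER;

/* VM creation: succeeds only if no !exclude_guest events exist. */
static int perf_get_mediated_pmu(void)
{
    int ret = 0;

    pthread_mutex_lock(&perf_mediated_pmu_mutex);
    if (atomic_load(&nr_include_guest_events))
        ret = -1;                       /* -EBUSY in the kernel */
    else
        atomic_fetch_add(&nr_mediated_pmu_vms, 1);
    pthread_mutex_unlock(&perf_mediated_pmu_mutex);
    return ret;
}

static void perf_put_mediated_pmu(void)
{
    atomic_fetch_sub(&nr_mediated_pmu_vms, 1);
}

/* Event creation: a !exclude_guest event is refused while any mediated VM runs. */
static int account_include_guest_event(void)
{
    int ret = 0;

    pthread_mutex_lock(&perf_mediated_pmu_mutex);
    if (atomic_load(&nr_mediated_pmu_vms))
        ret = -1;                       /* -EACCES in the kernel */
    else
        atomic_fetch_add(&nr_include_guest_events, 1);
    pthread_mutex_unlock(&perf_mediated_pmu_mutex);
    return ret;
}
```

Because both paths take the same mutex before testing the other side's counter, the two counters can never both become non-zero, which is exactly the exclusivity guarantee the commit message describes.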
Suggested-by: Sean Christopherson Signed-off-by: Kan Liang Tested-by: Yongwei Ma Signed-off-by: Mingwei Zhang --- include/linux/perf_event.h | 10 ++++++ kernel/events/core.c | 66 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index a5304ae8c654..45d1ea82aa21 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -291,6 +291,7 @@ struct perf_event_pmu_context; #define PERF_PMU_CAP_NO_EXCLUDE 0x0040 #define PERF_PMU_CAP_AUX_OUTPUT 0x0080 #define PERF_PMU_CAP_EXTENDED_HW_TYPE 0x0100 +#define PERF_PMU_CAP_PASSTHROUGH_VPMU 0x0200 struct perf_output_handle; @@ -1728,6 +1729,8 @@ extern void perf_event_task_tick(void); extern int perf_event_account_interrupt(struct perf_event *event); extern int perf_event_period(struct perf_event *event, u64 value); extern u64 perf_event_pause(struct perf_event *event, bool reset); +int perf_get_mediated_pmu(void); +void perf_put_mediated_pmu(void); #else /* !CONFIG_PERF_EVENTS: */ static inline void * perf_aux_output_begin(struct perf_output_handle *handle, @@ -1814,6 +1817,13 @@ static inline u64 perf_event_pause(struct perf_event *event, bool reset) { return 0; } + +static inline int perf_get_mediated_pmu(void) +{ + return 0; +} + +static inline void perf_put_mediated_pmu(void) { } #endif #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL) diff --git a/kernel/events/core.c b/kernel/events/core.c index 8f908f077935..45868d276cde 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -402,6 +402,20 @@ static atomic_t nr_bpf_events __read_mostly; static atomic_t nr_cgroup_events __read_mostly; static atomic_t nr_text_poke_events __read_mostly; static atomic_t nr_build_id_events __read_mostly; +static atomic_t nr_include_guest_events __read_mostly; + +static atomic_t nr_mediated_pmu_vms; +static DEFINE_MUTEX(perf_mediated_pmu_mutex); + +/* !exclude_guest event of PMU with PERF_PMU_CAP_PASSTHROUGH_VPMU 
*/ +static inline bool is_include_guest_event(struct perf_event *event) +{ + if ((event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU) && + !event->attr.exclude_guest) + return true; + + return false; +} static LIST_HEAD(pmus); static DEFINE_MUTEX(pmus_lock); @@ -5212,6 +5226,9 @@ static void _free_event(struct perf_event *event) unaccount_event(event); + if (is_include_guest_event(event)) + atomic_dec(&nr_include_guest_events); + security_perf_event_free(event); if (event->rb) { @@ -5769,6 +5786,36 @@ u64 perf_event_pause(struct perf_event *event, bool reset) } EXPORT_SYMBOL_GPL(perf_event_pause); +/* + * Currently invoked at VM creation to + * - Check whether there are existing !exclude_guest events of PMU with + * PERF_PMU_CAP_PASSTHROUGH_VPMU + * - Set nr_mediated_pmu_vms to prevent !exclude_guest event creation on + * PMUs with PERF_PMU_CAP_PASSTHROUGH_VPMU + * + * No impact for the PMU without PERF_PMU_CAP_PASSTHROUGH_VPMU. The perf + * still owns all the PMU resources. + */ +int perf_get_mediated_pmu(void) +{ + guard(mutex)(&perf_mediated_pmu_mutex); + if (atomic_inc_not_zero(&nr_mediated_pmu_vms)) + return 0; + + if (atomic_read(&nr_include_guest_events)) + return -EBUSY; + + atomic_inc(&nr_mediated_pmu_vms); + return 0; +} +EXPORT_SYMBOL_GPL(perf_get_mediated_pmu); + +void perf_put_mediated_pmu(void) +{ + atomic_dec(&nr_mediated_pmu_vms); +} +EXPORT_SYMBOL_GPL(perf_put_mediated_pmu); + /* * Holding the top-level event's child_mutex means that any * descendant process that has inherited this event will block @@ -11907,6 +11954,17 @@ static void account_event(struct perf_event *event) account_pmu_sb_event(event); } +static int perf_account_include_guest_event(void) +{ + guard(mutex)(&perf_mediated_pmu_mutex); + + if (atomic_read(&nr_mediated_pmu_vms)) + return -EACCES; + + atomic_inc(&nr_include_guest_events); + return 0; +} + /* * Allocate and initialize an event structure */ @@ -12114,11 +12172,19 @@ perf_event_alloc(struct perf_event_attr *attr, int 
cpu, if (err) goto err_callchain_buffer; + if (is_include_guest_event(event)) { + err = perf_account_include_guest_event(); + if (err) + goto err_security_alloc; + } + /* symmetric to unaccount_event() in _free_event() */ account_event(event); return event; +err_security_alloc: + security_perf_event_free(event); err_callchain_buffer: if (!event->parent) { if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) From patchwork Thu Aug 1 04:58:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749530
Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:16 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> References: <20240801045907.4010984-1-mizhang@google.com> Message-ID: <20240801045907.4010984-8-mizhang@google.com> Subject: [RFC PATCH v3 07/58] perf: Skip pmu_ctx based on event_type From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Kan Liang To optimize the cgroup context switch, the perf_event_pmu_context iteration skips the PMUs without cgroup events. A bool cgroup was introduced to indicate the case. It works, but this way is hard to extend to other cases, e.g., skipping non-passthrough PMUs; it doesn't make sense to keep adding bool variables. Pass the event_type instead of a specific bool variable, and check both the event_type and the related pmu_ctx variables to decide whether to skip a PMU. Event flags, e.g., EVENT_CGROUP, should be cleared from ctx->is_active. Add EVENT_FLAGS to indicate such event flags. No functional change.
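The skip predicate introduced by this patch can be sketched in isolation. The flag values below match those visible in the diff (EVENT_CGROUP = 0x10, etc.), but the struct is a stripped-down stand-in for perf_event_pmu_context, so treat this as an illustration of the pattern rather than kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Flag values as in the patch; EVENT_FLAGS collects pure filter flags that
 * must never be stored into ctx->is_active. */
enum event_type_t {
    EVENT_FLEXIBLE = 0x1,
    EVENT_PINNED   = 0x2,
    EVENT_TIME     = 0x4,
    EVENT_CPU      = 0x8,
    EVENT_CGROUP   = 0x10,
    EVENT_FLAGS    = EVENT_CGROUP,
};

/* Stand-in for struct perf_event_pmu_context: only the field the
 * predicate inspects. */
struct pmu_ctx {
    int nr_cgroups;
};

/* Skip a PMU context when the caller only cares about cgroup events and this
 * context has none. New filters (e.g. "non-passthrough") become additional
 * clauses here instead of yet another bool parameter at every call site. */
static bool perf_skip_pmu_ctx(const struct pmu_ctx *pmu_ctx,
                              unsigned int event_type)
{
    if ((event_type & EVENT_CGROUP) && !pmu_ctx->nr_cgroups)
        return true;

    return false;
}
```

Callers then pass 0 where they previously passed `false`, and `EVENT_CGROUP` where they passed `true`, which is exactly the mechanical change visible in the diff below.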
Signed-off-by: Kan Liang Tested-by: Yongwei Ma Signed-off-by: Mingwei Zhang --- kernel/events/core.c | 70 +++++++++++++++++++++++--------------------- 1 file changed, 37 insertions(+), 33 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 45868d276cde..7cb51dbf897a 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -376,6 +376,7 @@ enum event_type_t { /* see ctx_resched() for details */ EVENT_CPU = 0x8, EVENT_CGROUP = 0x10, + EVENT_FLAGS = EVENT_CGROUP, EVENT_ALL = EVENT_FLEXIBLE | EVENT_PINNED, }; @@ -699,23 +700,32 @@ do { \ ___p; \ }) -static void perf_ctx_disable(struct perf_event_context *ctx, bool cgroup) +static bool perf_skip_pmu_ctx(struct perf_event_pmu_context *pmu_ctx, + enum event_type_t event_type) +{ + if ((event_type & EVENT_CGROUP) && !pmu_ctx->nr_cgroups) + return true; + + return false; +} + +static void perf_ctx_disable(struct perf_event_context *ctx, enum event_type_t event_type) { struct perf_event_pmu_context *pmu_ctx; list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { - if (cgroup && !pmu_ctx->nr_cgroups) + if (perf_skip_pmu_ctx(pmu_ctx, event_type)) continue; perf_pmu_disable(pmu_ctx->pmu); } } -static void perf_ctx_enable(struct perf_event_context *ctx, bool cgroup) +static void perf_ctx_enable(struct perf_event_context *ctx, enum event_type_t event_type) { struct perf_event_pmu_context *pmu_ctx; list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { - if (cgroup && !pmu_ctx->nr_cgroups) + if (perf_skip_pmu_ctx(pmu_ctx, event_type)) continue; perf_pmu_enable(pmu_ctx->pmu); } @@ -877,7 +887,7 @@ static void perf_cgroup_switch(struct task_struct *task) return; perf_ctx_lock(cpuctx, cpuctx->task_ctx); - perf_ctx_disable(&cpuctx->ctx, true); + perf_ctx_disable(&cpuctx->ctx, EVENT_CGROUP); ctx_sched_out(&cpuctx->ctx, EVENT_ALL|EVENT_CGROUP); /* @@ -893,7 +903,7 @@ static void perf_cgroup_switch(struct task_struct *task) */ ctx_sched_in(&cpuctx->ctx, EVENT_ALL|EVENT_CGROUP); - 
perf_ctx_enable(&cpuctx->ctx, true); + perf_ctx_enable(&cpuctx->ctx, EVENT_CGROUP); perf_ctx_unlock(cpuctx, cpuctx->task_ctx); } @@ -2732,9 +2742,9 @@ static void ctx_resched(struct perf_cpu_context *cpuctx, event_type &= EVENT_ALL; - perf_ctx_disable(&cpuctx->ctx, false); + perf_ctx_disable(&cpuctx->ctx, 0); if (task_ctx) { - perf_ctx_disable(task_ctx, false); + perf_ctx_disable(task_ctx, 0); task_ctx_sched_out(task_ctx, event_type); } @@ -2752,9 +2762,9 @@ static void ctx_resched(struct perf_cpu_context *cpuctx, perf_event_sched_in(cpuctx, task_ctx); - perf_ctx_enable(&cpuctx->ctx, false); + perf_ctx_enable(&cpuctx->ctx, 0); if (task_ctx) - perf_ctx_enable(task_ctx, false); + perf_ctx_enable(task_ctx, 0); } void perf_pmu_resched(struct pmu *pmu) @@ -3299,9 +3309,6 @@ ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type) struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); struct perf_event_pmu_context *pmu_ctx; int is_active = ctx->is_active; - bool cgroup = event_type & EVENT_CGROUP; - - event_type &= ~EVENT_CGROUP; lockdep_assert_held(&ctx->lock); @@ -3336,7 +3343,7 @@ ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type) barrier(); } - ctx->is_active &= ~event_type; + ctx->is_active &= ~(event_type & ~EVENT_FLAGS); if (!(ctx->is_active & EVENT_ALL)) ctx->is_active = 0; @@ -3349,7 +3356,7 @@ ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type) is_active ^= ctx->is_active; /* changed bits */ list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { - if (cgroup && !pmu_ctx->nr_cgroups) + if (perf_skip_pmu_ctx(pmu_ctx, event_type)) continue; __pmu_ctx_sched_out(pmu_ctx, is_active); } @@ -3543,7 +3550,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next) raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING); if (context_equiv(ctx, next_ctx)) { - perf_ctx_disable(ctx, false); + perf_ctx_disable(ctx, 0); /* PMIs are disabled; ctx->nr_pending 
is stable. */ if (local_read(&ctx->nr_pending) || @@ -3563,7 +3570,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next) perf_ctx_sched_task_cb(ctx, false); perf_event_swap_task_ctx_data(ctx, next_ctx); - perf_ctx_enable(ctx, false); + perf_ctx_enable(ctx, 0); /* * RCU_INIT_POINTER here is safe because we've not @@ -3587,13 +3594,13 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next) if (do_switch) { raw_spin_lock(&ctx->lock); - perf_ctx_disable(ctx, false); + perf_ctx_disable(ctx, 0); inside_switch: perf_ctx_sched_task_cb(ctx, false); task_ctx_sched_out(ctx, EVENT_ALL); - perf_ctx_enable(ctx, false); + perf_ctx_enable(ctx, 0); raw_spin_unlock(&ctx->lock); } } @@ -3890,12 +3897,12 @@ static void pmu_groups_sched_in(struct perf_event_context *ctx, static void ctx_groups_sched_in(struct perf_event_context *ctx, struct perf_event_groups *groups, - bool cgroup) + enum event_type_t event_type) { struct perf_event_pmu_context *pmu_ctx; list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { - if (cgroup && !pmu_ctx->nr_cgroups) + if (perf_skip_pmu_ctx(pmu_ctx, event_type)) continue; pmu_groups_sched_in(ctx, groups, pmu_ctx->pmu); } @@ -3912,9 +3919,6 @@ ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type) { struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); int is_active = ctx->is_active; - bool cgroup = event_type & EVENT_CGROUP; - - event_type &= ~EVENT_CGROUP; lockdep_assert_held(&ctx->lock); @@ -3932,7 +3936,7 @@ ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type) barrier(); } - ctx->is_active |= (event_type | EVENT_TIME); + ctx->is_active |= ((event_type & ~EVENT_FLAGS) | EVENT_TIME); if (ctx->task) { if (!is_active) cpuctx->task_ctx = ctx; @@ -3947,11 +3951,11 @@ ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type) * in order to give them the best chance of going on. 
*/ if (is_active & EVENT_PINNED) - ctx_groups_sched_in(ctx, &ctx->pinned_groups, cgroup); + ctx_groups_sched_in(ctx, &ctx->pinned_groups, event_type); /* Then walk through the lower prio flexible groups */ if (is_active & EVENT_FLEXIBLE) - ctx_groups_sched_in(ctx, &ctx->flexible_groups, cgroup); + ctx_groups_sched_in(ctx, &ctx->flexible_groups, event_type); } static void perf_event_context_sched_in(struct task_struct *task) @@ -3966,11 +3970,11 @@ static void perf_event_context_sched_in(struct task_struct *task) if (cpuctx->task_ctx == ctx) { perf_ctx_lock(cpuctx, ctx); - perf_ctx_disable(ctx, false); + perf_ctx_disable(ctx, 0); perf_ctx_sched_task_cb(ctx, true); - perf_ctx_enable(ctx, false); + perf_ctx_enable(ctx, 0); perf_ctx_unlock(cpuctx, ctx); goto rcu_unlock; } @@ -3983,7 +3987,7 @@ static void perf_event_context_sched_in(struct task_struct *task) if (!ctx->nr_events) goto unlock; - perf_ctx_disable(ctx, false); + perf_ctx_disable(ctx, 0); /* * We want to keep the following priority order: * cpu pinned (that don't need to move), task pinned, @@ -3993,7 +3997,7 @@ static void perf_event_context_sched_in(struct task_struct *task) * events, no need to flip the cpuctx's events around. 
*/ if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree)) { - perf_ctx_disable(&cpuctx->ctx, false); + perf_ctx_disable(&cpuctx->ctx, 0); ctx_sched_out(&cpuctx->ctx, EVENT_FLEXIBLE); } @@ -4002,9 +4006,9 @@ static void perf_event_context_sched_in(struct task_struct *task) perf_ctx_sched_task_cb(cpuctx->task_ctx, true); if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree)) - perf_ctx_enable(&cpuctx->ctx, false); + perf_ctx_enable(&cpuctx->ctx, 0); - perf_ctx_enable(ctx, false); + perf_ctx_enable(ctx, 0); unlock: perf_ctx_unlock(cpuctx, ctx); From patchwork Thu Aug 1 04:58:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749531
Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:17 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> References: <20240801045907.4010984-1-mizhang@google.com> Message-ID: <20240801045907.4010984-9-mizhang@google.com> Subject: [RFC PATCH v3 08/58] perf: Clean up perf ctx time From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Kan Liang Perf currently tracks two timestamps, one for the normal ctx and one for the cgroup, using the same type of variables and nearly identical code. In a following patch, a third timestamp, which tracks the guest time, will be introduced. To avoid code duplication, add a new struct perf_time_ctx and factor out a generic function update_perf_time_ctx(). No functional change.
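The factored-out helper can be modeled in plain C. This is a userspace sketch of the patch's struct perf_time_ctx and update_perf_time_ctx(), minus ctx->lock, lockdep, and the WRITE_ONCE/READ_ONCE annotations that make the offset safe for lock-free readers in the kernel:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One timestamp tracker, shared by ctx, cgroup, and (later) guest time. */
struct perf_time_ctx {
    uint64_t time;    /* accumulated context time */
    uint64_t stamp;   /* timestamp of the last update */
    uint64_t offset;  /* time - stamp, published for lock-free readers */
};

static void update_perf_time_ctx(struct perf_time_ctx *t, uint64_t now, bool adv)
{
    if (adv)
        t->time += now - t->stamp;
    t->stamp = now;

    /* time' = time + (now - stamp) re-arranges to time' = now + (time - stamp),
     * so a reader that only has its own `now` can reconstruct the context time
     * as now + offset, without taking the lock (unsigned wraparound makes the
     * subtraction well-defined even when stamp > time). */
    t->offset = t->time - t->stamp;
}
```

For example, starting the clock at now=100 and advancing it at now=150 accumulates 50 units of context time, and a reader at now=200 recovers 200 + offset = 100, matching time + (200 - stamp).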
Suggested-by: Peter Zijlstra (Intel) Signed-off-by: Kan Liang Signed-off-by: Mingwei Zhang --- include/linux/perf_event.h | 13 +++++---- kernel/events/core.c | 59 +++++++++++++++++++------------------- 2 files changed, 37 insertions(+), 35 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 45d1ea82aa21..e22cdb6486e6 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -906,6 +906,11 @@ struct perf_event_groups { u64 index; }; +struct perf_time_ctx { + u64 time; + u64 stamp; + u64 offset; +}; /** * struct perf_event_context - event context structure @@ -945,9 +950,7 @@ struct perf_event_context { /* * Context clock, runs when context enabled. */ - u64 time; - u64 timestamp; - u64 timeoffset; + struct perf_time_ctx time; /* * These fields let us detect when two contexts have both @@ -1040,9 +1043,7 @@ struct bpf_perf_event_data_kern { * This is a per-cpu dynamically allocated data structure. */ struct perf_cgroup_info { - u64 time; - u64 timestamp; - u64 timeoffset; + struct perf_time_ctx time; int active; }; diff --git a/kernel/events/core.c b/kernel/events/core.c index 7cb51dbf897a..c25e2bf27001 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -775,7 +775,7 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event) struct perf_cgroup_info *t; t = per_cpu_ptr(event->cgrp->info, event->cpu); - return t->time; + return t->time.time; } static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now) @@ -784,20 +784,16 @@ static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now) t = per_cpu_ptr(event->cgrp->info, event->cpu); if (!__load_acquire(&t->active)) - return t->time; - now += READ_ONCE(t->timeoffset); + return t->time.time; + now += READ_ONCE(t->time.offset); return now; } +static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 now, bool adv); + static inline void __update_cgrp_time(struct perf_cgroup_info *info, u64 now, 
bool adv) { - if (adv) - info->time += now - info->timestamp; - info->timestamp = now; - /* - * see update_context_time() - */ - WRITE_ONCE(info->timeoffset, info->time - info->timestamp); + update_perf_time_ctx(&info->time, now, adv); } static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final) @@ -860,7 +856,7 @@ perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx) for (css = &cgrp->css; css; css = css->parent) { cgrp = container_of(css, struct perf_cgroup, css); info = this_cpu_ptr(cgrp->info); - __update_cgrp_time(info, ctx->timestamp, false); + __update_cgrp_time(info, ctx->time.stamp, false); __store_release(&info->active, 1); } } @@ -1469,18 +1465,11 @@ static void perf_unpin_context(struct perf_event_context *ctx) raw_spin_unlock_irqrestore(&ctx->lock, flags); } -/* - * Update the record of the current time in a context. - */ -static void __update_context_time(struct perf_event_context *ctx, bool adv) +static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 now, bool adv) { - u64 now = perf_clock(); - - lockdep_assert_held(&ctx->lock); - if (adv) - ctx->time += now - ctx->timestamp; - ctx->timestamp = now; + time->time += now - time->stamp; + time->stamp = now; /* * The above: time' = time + (now - timestamp), can be re-arranged @@ -1491,7 +1480,19 @@ static void __update_context_time(struct perf_event_context *ctx, bool adv) * it's (obviously) not possible to acquire ctx->lock in order to read * both the above values in a consistent manner. */ - WRITE_ONCE(ctx->timeoffset, ctx->time - ctx->timestamp); + WRITE_ONCE(time->offset, time->time - time->stamp); +} + +/* + * Update the record of the current time in a context. 
+ */ +static void __update_context_time(struct perf_event_context *ctx, bool adv) +{ + u64 now = perf_clock(); + + lockdep_assert_held(&ctx->lock); + + update_perf_time_ctx(&ctx->time, now, adv); } static void update_context_time(struct perf_event_context *ctx) @@ -1509,7 +1510,7 @@ static u64 perf_event_time(struct perf_event *event) if (is_cgroup_event(event)) return perf_cgroup_event_time(event); - return ctx->time; + return ctx->time.time; } static u64 perf_event_time_now(struct perf_event *event, u64 now) @@ -1523,9 +1524,9 @@ static u64 perf_event_time_now(struct perf_event *event, u64 now) return perf_cgroup_event_time_now(event, now); if (!(__load_acquire(&ctx->is_active) & EVENT_TIME)) - return ctx->time; + return ctx->time.time; - now += READ_ONCE(ctx->timeoffset); + now += READ_ONCE(ctx->time.offset); return now; } @@ -11302,14 +11303,14 @@ static void task_clock_event_update(struct perf_event *event, u64 now) static void task_clock_event_start(struct perf_event *event, int flags) { - local64_set(&event->hw.prev_count, event->ctx->time); + local64_set(&event->hw.prev_count, event->ctx->time.time); perf_swevent_start_hrtimer(event); } static void task_clock_event_stop(struct perf_event *event, int flags) { perf_swevent_cancel_hrtimer(event); - task_clock_event_update(event, event->ctx->time); + task_clock_event_update(event, event->ctx->time.time); } static int task_clock_event_add(struct perf_event *event, int flags) @@ -11329,8 +11330,8 @@ static void task_clock_event_del(struct perf_event *event, int flags) static void task_clock_event_read(struct perf_event *event) { u64 now = perf_clock(); - u64 delta = now - event->ctx->timestamp; - u64 time = event->ctx->time + delta; + u64 delta = now - event->ctx->time.stamp; + u64 time = event->ctx->time.time + delta; task_clock_event_update(event, time); } From patchwork Thu Aug 1 04:58:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Mingwei Zhang X-Patchwork-Id: 13749532 Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:18 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List:
kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-10-mizhang@google.com> Subject: [RFC PATCH v3 09/58] perf: Add a EVENT_GUEST flag From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Kan Liang Currently, perf does not explicitly schedule out exclude_guest events while a guest is running. That is fine with the existing emulated vPMU, because perf owns all the PMU counters: it can mask a counter that is assigned to an exclude_guest event while a guest runs (the Intel way), or set the corresponding HOSTONLY bit in the eventsel (the AMD way), so the counter does not count during guest execution. Neither approach works with the introduced passthrough vPMU, however. A guest owns all the PMU counters while it is running, so the host must not mask any counter: the counter may be in use by the guest, and the eventsel may be overwritten. Perf must therefore explicitly schedule out all exclude_guest events to release the PMU resources when entering a guest, and resume counting when exiting it. An exclude_guest event may also be created while a guest is running; such a new event must not be scheduled in either. The ctx time is shared among different PMUs and cannot be stopped while a guest runs, since it is needed to calculate the time of events on other PMUs, e.g., uncore events. Add timeguest to track the guest run time. For an exclude_guest event, the elapsed time equals the ctx time minus the guest time. Cgroups have their own dedicated times.
Use the same method to deduct the guest time from the cgroup time as well. Co-developed-by: Peter Zijlstra (Intel) Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Kan Liang Signed-off-by: Mingwei Zhang --- include/linux/perf_event.h | 6 ++ kernel/events/core.c | 178 +++++++++++++++++++++++++++++++------ 2 files changed, 155 insertions(+), 29 deletions(-) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index e22cdb6486e6..81a5f8399cb8 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -952,6 +952,11 @@ struct perf_event_context { */ struct perf_time_ctx time; + /* + * Context clock, runs when in the guest mode. + */ + struct perf_time_ctx timeguest; + /* * These fields let us detect when two contexts have both * been cloned (inherited) from a common ancestor. @@ -1044,6 +1049,7 @@ struct bpf_perf_event_data_kern { */ struct perf_cgroup_info { struct perf_time_ctx time; + struct perf_time_ctx timeguest; int active; }; diff --git a/kernel/events/core.c b/kernel/events/core.c index c25e2bf27001..57648736e43e 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -376,7 +376,8 @@ enum event_type_t { /* see ctx_resched() for details */ EVENT_CPU = 0x8, EVENT_CGROUP = 0x10, - EVENT_FLAGS = EVENT_CGROUP, + EVENT_GUEST = 0x20, + EVENT_FLAGS = EVENT_CGROUP | EVENT_GUEST, EVENT_ALL = EVENT_FLEXIBLE | EVENT_PINNED, }; @@ -407,6 +408,7 @@ static atomic_t nr_include_guest_events __read_mostly; static atomic_t nr_mediated_pmu_vms; static DEFINE_MUTEX(perf_mediated_pmu_mutex); +static DEFINE_PER_CPU(bool, perf_in_guest); /* !exclude_guest event of PMU with PERF_PMU_CAP_PASSTHROUGH_VPMU */ static inline bool is_include_guest_event(struct perf_event *event) @@ -706,6 +708,10 @@ static bool perf_skip_pmu_ctx(struct perf_event_pmu_context *pmu_ctx, if ((event_type & EVENT_CGROUP) && !pmu_ctx->nr_cgroups) return true; + if ((event_type & EVENT_GUEST) && + !(pmu_ctx->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU)) + return 
true; + return false; } @@ -770,12 +776,21 @@ static inline int is_cgroup_event(struct perf_event *event) return event->cgrp != NULL; } +static inline u64 __perf_event_time_ctx(struct perf_event *event, + struct perf_time_ctx *time, + struct perf_time_ctx *timeguest); + +static inline u64 __perf_event_time_ctx_now(struct perf_event *event, + struct perf_time_ctx *time, + struct perf_time_ctx *timeguest, + u64 now); + static inline u64 perf_cgroup_event_time(struct perf_event *event) { struct perf_cgroup_info *t; t = per_cpu_ptr(event->cgrp->info, event->cpu); - return t->time.time; + return __perf_event_time_ctx(event, &t->time, &t->timeguest); } static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now) @@ -784,9 +799,9 @@ static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now) t = per_cpu_ptr(event->cgrp->info, event->cpu); if (!__load_acquire(&t->active)) - return t->time.time; - now += READ_ONCE(t->time.offset); - return now; + return __perf_event_time_ctx(event, &t->time, &t->timeguest); + + return __perf_event_time_ctx_now(event, &t->time, &t->timeguest, now); } static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 now, bool adv); @@ -796,6 +811,18 @@ static inline void __update_cgrp_time(struct perf_cgroup_info *info, u64 now, bo update_perf_time_ctx(&info->time, now, adv); } +static inline void __update_cgrp_guest_time(struct perf_cgroup_info *info, u64 now, bool adv) +{ + update_perf_time_ctx(&info->timeguest, now, adv); +} + +static inline void update_cgrp_time(struct perf_cgroup_info *info, u64 now) +{ + __update_cgrp_time(info, now, true); + if (__this_cpu_read(perf_in_guest)) + __update_cgrp_guest_time(info, now, true); +} + static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final) { struct perf_cgroup *cgrp = cpuctx->cgrp; @@ -809,7 +836,7 @@ static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, cgrp = container_of(css, struct 
perf_cgroup, css); info = this_cpu_ptr(cgrp->info); - __update_cgrp_time(info, now, true); + update_cgrp_time(info, now); if (final) __store_release(&info->active, 0); } @@ -832,11 +859,11 @@ static inline void update_cgrp_time_from_event(struct perf_event *event) * Do not update time when cgroup is not active */ if (info->active) - __update_cgrp_time(info, perf_clock(), true); + update_cgrp_time(info, perf_clock()); } static inline void -perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx) +perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx, bool guest) { struct perf_event_context *ctx = &cpuctx->ctx; struct perf_cgroup *cgrp = cpuctx->cgrp; @@ -856,8 +883,12 @@ perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx) for (css = &cgrp->css; css; css = css->parent) { cgrp = container_of(css, struct perf_cgroup, css); info = this_cpu_ptr(cgrp->info); - __update_cgrp_time(info, ctx->time.stamp, false); - __store_release(&info->active, 1); + if (guest) { + __update_cgrp_guest_time(info, ctx->time.stamp, false); + } else { + __update_cgrp_time(info, ctx->time.stamp, false); + __store_release(&info->active, 1); + } } } @@ -1061,7 +1092,7 @@ static inline int perf_cgroup_connect(pid_t pid, struct perf_event *event, } static inline void -perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx) +perf_cgroup_set_timestamp(struct perf_cpu_context *cpuctx, bool guest) { } @@ -1488,16 +1519,34 @@ static inline void update_perf_time_ctx(struct perf_time_ctx *time, u64 now, boo */ static void __update_context_time(struct perf_event_context *ctx, bool adv) { - u64 now = perf_clock(); + lockdep_assert_held(&ctx->lock); + + update_perf_time_ctx(&ctx->time, perf_clock(), adv); +} +static void __update_context_guest_time(struct perf_event_context *ctx, bool adv) +{ lockdep_assert_held(&ctx->lock); - update_perf_time_ctx(&ctx->time, now, adv); + /* must be called after __update_context_time(); */ + update_perf_time_ctx(&ctx->timeguest, ctx->time.stamp, adv); } static 
void update_context_time(struct perf_event_context *ctx) { __update_context_time(ctx, true); + if (__this_cpu_read(perf_in_guest)) + __update_context_guest_time(ctx, true); +} + +static inline u64 __perf_event_time_ctx(struct perf_event *event, + struct perf_time_ctx *time, + struct perf_time_ctx *timeguest) +{ + if (event->attr.exclude_guest) + return time->time - timeguest->time; + else + return time->time; } static u64 perf_event_time(struct perf_event *event) @@ -1510,7 +1559,26 @@ static u64 perf_event_time(struct perf_event *event) if (is_cgroup_event(event)) return perf_cgroup_event_time(event); - return ctx->time.time; + return __perf_event_time_ctx(event, &ctx->time, &ctx->timeguest); +} + +static inline u64 __perf_event_time_ctx_now(struct perf_event *event, + struct perf_time_ctx *time, + struct perf_time_ctx *timeguest, + u64 now) +{ + /* + * The exclude_guest event time should be calculated from + * the ctx time - the guest time. + * The ctx time is now + READ_ONCE(time->offset). + * The guest time is now + READ_ONCE(timeguest->offset). + * So the exclude_guest time is + * READ_ONCE(time->offset) - READ_ONCE(timeguest->offset). 
+ */ + if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest)) + return READ_ONCE(time->offset) - READ_ONCE(timeguest->offset); + else + return now + READ_ONCE(time->offset); } static u64 perf_event_time_now(struct perf_event *event, u64 now) @@ -1524,10 +1592,9 @@ static u64 perf_event_time_now(struct perf_event *event, u64 now) return perf_cgroup_event_time_now(event, now); if (!(__load_acquire(&ctx->is_active) & EVENT_TIME)) - return ctx->time.time; + return __perf_event_time_ctx(event, &ctx->time, &ctx->timeguest); - now += READ_ONCE(ctx->time.offset); - return now; + return __perf_event_time_ctx_now(event, &ctx->time, &ctx->timeguest, now); } static enum event_type_t get_event_type(struct perf_event *event) @@ -3334,9 +3401,15 @@ ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type) * would only update time for the pinned events. */ if (is_active & EVENT_TIME) { + bool stop; + + /* vPMU should not stop time */ + stop = !(event_type & EVENT_GUEST) && + ctx == &cpuctx->ctx; + /* update (and stop) ctx time */ update_context_time(ctx); - update_cgrp_time_from_cpuctx(cpuctx, ctx == &cpuctx->ctx); + update_cgrp_time_from_cpuctx(cpuctx, stop); /* * CPU-release for the below ->is_active store, * see __load_acquire() in perf_event_time_now() @@ -3354,7 +3427,18 @@ ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type) cpuctx->task_ctx = NULL; } - is_active ^= ctx->is_active; /* changed bits */ + if (event_type & EVENT_GUEST) { + /* + * Schedule out all !exclude_guest events of PMU + * with PERF_PMU_CAP_PASSTHROUGH_VPMU. 
+ */ + is_active = EVENT_ALL; + __update_context_guest_time(ctx, false); + perf_cgroup_set_timestamp(cpuctx, true); + barrier(); + } else { + is_active ^= ctx->is_active; /* changed bits */ + } list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { if (perf_skip_pmu_ctx(pmu_ctx, event_type)) @@ -3853,10 +3937,15 @@ static inline void group_update_userpage(struct perf_event *group_event) event_update_userpage(event); } +struct merge_sched_data { + int can_add_hw; + enum event_type_t event_type; +}; + static int merge_sched_in(struct perf_event *event, void *data) { struct perf_event_context *ctx = event->ctx; - int *can_add_hw = data; + struct merge_sched_data *msd = data; if (event->state <= PERF_EVENT_STATE_OFF) return 0; @@ -3864,13 +3953,22 @@ static int merge_sched_in(struct perf_event *event, void *data) if (!event_filter_match(event)) return 0; - if (group_can_go_on(event, *can_add_hw)) { + /* + * Don't schedule in any exclude_guest events of PMU with + * PERF_PMU_CAP_PASSTHROUGH_VPMU, while a guest is running. 
+ */ + if (__this_cpu_read(perf_in_guest) && event->attr.exclude_guest && + event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU && + !(msd->event_type & EVENT_GUEST)) + return 0; + + if (group_can_go_on(event, msd->can_add_hw)) { if (!group_sched_in(event, ctx)) list_add_tail(&event->active_list, get_event_list(event)); } if (event->state == PERF_EVENT_STATE_INACTIVE) { - *can_add_hw = 0; + msd->can_add_hw = 0; if (event->attr.pinned) { perf_cgroup_event_disable(event, ctx); perf_event_set_state(event, PERF_EVENT_STATE_ERROR); @@ -3889,11 +3987,15 @@ static int merge_sched_in(struct perf_event *event, void *data) static void pmu_groups_sched_in(struct perf_event_context *ctx, struct perf_event_groups *groups, - struct pmu *pmu) + struct pmu *pmu, + enum event_type_t event_type) { - int can_add_hw = 1; + struct merge_sched_data msd = { + .can_add_hw = 1, + .event_type = event_type, + }; visit_groups_merge(ctx, groups, smp_processor_id(), pmu, - merge_sched_in, &can_add_hw); + merge_sched_in, &msd); } static void ctx_groups_sched_in(struct perf_event_context *ctx, @@ -3905,14 +4007,14 @@ static void ctx_groups_sched_in(struct perf_event_context *ctx, list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) { if (perf_skip_pmu_ctx(pmu_ctx, event_type)) continue; - pmu_groups_sched_in(ctx, groups, pmu_ctx->pmu); + pmu_groups_sched_in(ctx, groups, pmu_ctx->pmu, event_type); } } static void __pmu_ctx_sched_in(struct perf_event_context *ctx, struct pmu *pmu) { - pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu); + pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu, 0); } static void @@ -3927,9 +4029,11 @@ ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type) return; if (!(is_active & EVENT_TIME)) { + /* EVENT_TIME should be active while the guest runs */ + WARN_ON_ONCE(event_type & EVENT_GUEST); /* start ctx time */ __update_context_time(ctx, false); - perf_cgroup_set_timestamp(cpuctx); + perf_cgroup_set_timestamp(cpuctx, false); /* * 
CPU-release for the below ->is_active store, * see __load_acquire() in perf_event_time_now() @@ -3945,7 +4049,23 @@ ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type) WARN_ON_ONCE(cpuctx->task_ctx != ctx); } - is_active ^= ctx->is_active; /* changed bits */ + if (event_type & EVENT_GUEST) { + /* + * Schedule in all !exclude_guest events of PMU + * with PERF_PMU_CAP_PASSTHROUGH_VPMU. + */ + is_active = EVENT_ALL; + + /* + * Update ctx time to set the new start time for + * the exclude_guest events. + */ + update_context_time(ctx); + update_cgrp_time_from_cpuctx(cpuctx, false); + barrier(); + } else { + is_active ^= ctx->is_active; /* changed bits */ + } /* * First go through the list and put on any pinned groups From patchwork Thu Aug 1 04:58:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749533 Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:19 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-11-mizhang@google.com> Subject: [RFC PATCH v3 10/58] perf: Add generic exclude_guest support From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Kan Liang Only KVM knows the exact time when a guest is entering/exiting. Expose two interfaces to KVM to switch the ownership of the PMU resources.
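Taken together with the previous patch, the effect of these two interfaces on time accounting can be sketched as a toy single-CPU model: the ctx clock always runs, the guest clock runs only between perf_guest_enter() and perf_guest_exit(), and an exclude_guest event observes ctx time minus guest time. This is not the kernel code; locking, per-CPU state, and the real ctx/cgroup plumbing are omitted, and the explicit `now` parameter stands in for perf_clock():

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t ctx_time, ctx_stamp;	/* always-running context clock */
static uint64_t guest_time, guest_stamp;	/* runs only inside the guest */
static bool perf_in_guest;

/* Advance the ctx clock; while in guest mode, the guest clock too. */
static void update_ctx_time(uint64_t now)
{
	ctx_time += now - ctx_stamp;
	ctx_stamp = now;
	if (perf_in_guest) {
		guest_time += now - guest_stamp;
		guest_stamp = now;
	}
}

static void perf_guest_enter(uint64_t now)
{
	update_ctx_time(now);
	guest_stamp = now;	/* start the guest clock */
	perf_in_guest = true;
}

static void perf_guest_exit(uint64_t now)
{
	update_ctx_time(now);	/* folds the final guest window in */
	perf_in_guest = false;
}

/* What an exclude_guest event observes: host-only time. */
static uint64_t exclude_guest_time(void)
{
	return ctx_time - guest_time;
}
```

In the real series, KVM calls perf_guest_enter() with interrupts disabled just before VM entry and perf_guest_exit() right after VM exit; e.g., 40 host ticks, then a 60-tick guest run, then 30 more host ticks leaves an exclude_guest event with 70 ticks of elapsed time.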
Signed-off-by: Kan Liang Tested-by: Yongwei Ma Signed-off-by: Mingwei Zhang --- include/linux/perf_event.h | 4 +++ kernel/events/core.c | 54 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 58 insertions(+) diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 81a5f8399cb8..75773f9890cc 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -1738,6 +1738,8 @@ extern int perf_event_period(struct perf_event *event, u64 value); extern u64 perf_event_pause(struct perf_event *event, bool reset); int perf_get_mediated_pmu(void); void perf_put_mediated_pmu(void); +void perf_guest_enter(void); +void perf_guest_exit(void); #else /* !CONFIG_PERF_EVENTS: */ static inline void * perf_aux_output_begin(struct perf_output_handle *handle, @@ -1831,6 +1833,8 @@ static inline int perf_get_mediated_pmu(void) } static inline void perf_put_mediated_pmu(void) { } +static inline void perf_guest_enter(void) { } +static inline void perf_guest_exit(void) { } #endif #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL) diff --git a/kernel/events/core.c b/kernel/events/core.c index 57648736e43e..57ff737b922b 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -5941,6 +5941,60 @@ void perf_put_mediated_pmu(void) } EXPORT_SYMBOL_GPL(perf_put_mediated_pmu); +/* When entering a guest, schedule out all exclude_guest events. 
*/ +void perf_guest_enter(void) +{ + struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); + + lockdep_assert_irqs_disabled(); + + perf_ctx_lock(cpuctx, cpuctx->task_ctx); + + if (WARN_ON_ONCE(__this_cpu_read(perf_in_guest))) + goto unlock; + + perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST); + ctx_sched_out(&cpuctx->ctx, EVENT_GUEST); + perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST); + if (cpuctx->task_ctx) { + perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST); + task_ctx_sched_out(cpuctx->task_ctx, EVENT_GUEST); + perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST); + } + + __this_cpu_write(perf_in_guest, true); + +unlock: + perf_ctx_unlock(cpuctx, cpuctx->task_ctx); +} +EXPORT_SYMBOL_GPL(perf_guest_enter); + +void perf_guest_exit(void) +{ + struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); + + lockdep_assert_irqs_disabled(); + + perf_ctx_lock(cpuctx, cpuctx->task_ctx); + + if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest))) + goto unlock; + + perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST); + ctx_sched_in(&cpuctx->ctx, EVENT_GUEST); + perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST); + if (cpuctx->task_ctx) { + perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST); + ctx_sched_in(cpuctx->task_ctx, EVENT_GUEST); + perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST); + } + + __this_cpu_write(perf_in_guest, false); +unlock: + perf_ctx_unlock(cpuctx, cpuctx->task_ctx); +} +EXPORT_SYMBOL_GPL(perf_guest_exit); + /* * Holding the top-level event's child_mutex means that any * descendant process that has inherited this event will block From patchwork Thu Aug 1 04:58:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749534 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E8F68141987 for ; 
Thu, 1 Aug 2024 04:59:31 +0000 (UTC) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:20 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-12-mizhang@google.com> Subject: [RFC
PATCH v3 11/58] x86/irq: Factor out common code for installing kvm irq handler
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

From: Xiong Zhang

KVM will register irq handlers for both POSTED_INTR_WAKEUP_VECTOR and KVM_GUEST_PMI_VECTOR. Rename the existing kvm_set_posted_intr_wakeup_handler() to x86_set_kvm_irq_handler(), and use the new vector input parameter to distinguish POSTED_INTR_WAKEUP_VECTOR from KVM_GUEST_PMI_VECTOR.

A caller should invoke x86_set_kvm_irq_handler() once to register a non-dummy handler for each vector. If a non-dummy handler is already registered for a vector and the caller tries to register another non-dummy handler (the same one or a different one), the second call emits a warning and is ignored.
Suggested-by: Sean Christopherson
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/irq.h |  2 +-
 arch/x86/kernel/irq.c      | 18 ++++++++++++------
 arch/x86/kvm/vmx/vmx.c     |  4 ++--
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index 194dfff84cb1..050a247b69b4 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -30,7 +30,7 @@ struct irq_desc;
 extern void fixup_irqs(void);

 #if IS_ENABLED(CONFIG_KVM)
-extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
+void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void));
 #endif

 extern void (*x86_platform_ipi_callback)(void);

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 385e3a5fc304..18cd418fe106 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -312,16 +312,22 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
 static void dummy_handler(void) {}
 static void (*kvm_posted_intr_wakeup_handler)(void) = dummy_handler;

-void kvm_set_posted_intr_wakeup_handler(void (*handler)(void))
+void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 {
-	if (handler)
+	if (!handler)
+		handler = dummy_handler;
+
+	if (vector == POSTED_INTR_WAKEUP_VECTOR &&
+	    (handler == dummy_handler ||
+	     kvm_posted_intr_wakeup_handler == dummy_handler))
 		kvm_posted_intr_wakeup_handler = handler;
-	else {
-		kvm_posted_intr_wakeup_handler = dummy_handler;
+	else
+		WARN_ON_ONCE(1);
+
+	if (handler == dummy_handler)
 		synchronize_rcu();
-	}
 }
-EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wakeup_handler);
+EXPORT_SYMBOL_GPL(x86_set_kvm_irq_handler);

 /*
  * Handler for POSTED_INTERRUPT_VECTOR.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b3c83c06f826..ad465881b043 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8292,7 +8292,7 @@ void vmx_migrate_timers(struct kvm_vcpu *vcpu)
 void vmx_hardware_unsetup(void)
 {
-	kvm_set_posted_intr_wakeup_handler(NULL);
+	x86_set_kvm_irq_handler(POSTED_INTR_WAKEUP_VECTOR, NULL);

 	if (nested)
 		nested_vmx_hardware_unsetup();
@@ -8602,7 +8602,7 @@ __init int vmx_hardware_setup(void)
 	if (r && nested)
 		nested_vmx_hardware_unsetup();

-	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
+	x86_set_kvm_irq_handler(POSTED_INTR_WAKEUP_VECTOR, pi_wakeup_handler);

 	return r;
 }

From patchwork Thu Aug 1 04:58:21 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749535
Date: Thu, 1 Aug 2024 04:58:21 +0000
Message-ID: <20240801045907.4010984-13-mizhang@google.com>
Subject: [RFC PATCH v3 12/58] perf: core/x86: Register a new vector for KVM GUEST PMI
From: Mingwei Zhang

From: Xiong Zhang

Create a new vector in the host IDT for KVM guest PMI handling within the mediated passthrough vPMU. In addition, add guest PMI handler registration to x86_set_kvm_irq_handler(). This is preparation work so that the mediated passthrough vPMU can handle KVM guest PMIs without interference from the host PMU's PMI handler.
Signed-off-by: Dapeng Mi
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/hardirq.h                     |  1 +
 arch/x86/include/asm/idtentry.h                    |  1 +
 arch/x86/include/asm/irq_vectors.h                 |  5 ++++-
 arch/x86/kernel/idt.c                              |  1 +
 arch/x86/kernel/irq.c                              | 21 +++++++++++++++++++
 .../beauty/arch/x86/include/asm/irq_vectors.h      |  5 ++++-
 6 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index c67fa6ad098a..42a396763c8d 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -19,6 +19,7 @@ typedef struct {
 	unsigned int kvm_posted_intr_ipis;
 	unsigned int kvm_posted_intr_wakeup_ipis;
 	unsigned int kvm_posted_intr_nested_ipis;
+	unsigned int kvm_guest_pmis;
 #endif
 	unsigned int x86_platform_ipis;	/* arch dependent */
 	unsigned int apic_perf_irqs;

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index d4f24499b256..7b1e3e542b1d 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -745,6 +745,7 @@ DECLARE_IDTENTRY_SYSVEC(IRQ_WORK_VECTOR, sysvec_irq_work);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_VECTOR, sysvec_kvm_posted_intr_ipi);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_WAKEUP_VECTOR, sysvec_kvm_posted_intr_wakeup_ipi);
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_NESTED_VECTOR, sysvec_kvm_posted_intr_nested_ipi);
+DECLARE_IDTENTRY_SYSVEC(KVM_GUEST_PMI_VECTOR, sysvec_kvm_guest_pmi_handler);
 #else
 # define fred_sysvec_kvm_posted_intr_ipi	NULL
 # define fred_sysvec_kvm_posted_intr_wakeup_ipi	NULL

diff --git a/arch/x86/include/asm/irq_vectors.h b/arch/x86/include/asm/irq_vectors.h
index 13aea8fc3d45..ada270e6f5cb 100644
--- a/arch/x86/include/asm/irq_vectors.h
+++ b/arch/x86/include/asm/irq_vectors.h
@@ -77,7 +77,10 @@
  */
 #define IRQ_WORK_VECTOR			0xf6

-/* 0xf5 - unused, was UV_BAU_MESSAGE */
+#if IS_ENABLED(CONFIG_KVM)
+#define KVM_GUEST_PMI_VECTOR		0xf5
+#endif
+
 #define DEFERRED_ERROR_VECTOR		0xf4

 /* Vector on which hypervisor callbacks will be delivered */

diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index f445bec516a0..0bec4c7e2308 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -157,6 +157,7 @@ static const __initconst struct idt_data apic_idts[] = {
 	INTG(POSTED_INTR_VECTOR,	asm_sysvec_kvm_posted_intr_ipi),
 	INTG(POSTED_INTR_WAKEUP_VECTOR,	asm_sysvec_kvm_posted_intr_wakeup_ipi),
 	INTG(POSTED_INTR_NESTED_VECTOR,	asm_sysvec_kvm_posted_intr_nested_ipi),
+	INTG(KVM_GUEST_PMI_VECTOR,	asm_sysvec_kvm_guest_pmi_handler),
 # endif
 # ifdef CONFIG_IRQ_WORK
 	INTG(IRQ_WORK_VECTOR,		asm_sysvec_irq_work),

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 18cd418fe106..b29714e23fc4 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -183,6 +183,12 @@ int arch_show_interrupts(struct seq_file *p, int prec)
 		seq_printf(p, "%10u ",
 			   irq_stats(j)->kvm_posted_intr_wakeup_ipis);
 	seq_puts(p, "  Posted-interrupt wakeup event\n");
+
+	seq_printf(p, "%*s: ", prec, "VPMU");
+	for_each_online_cpu(j)
+		seq_printf(p, "%10u ",
+			   irq_stats(j)->kvm_guest_pmis);
+	seq_puts(p, "  KVM GUEST PMI\n");
 #endif
 #ifdef CONFIG_X86_POSTED_MSI
 	seq_printf(p, "%*s: ", prec, "PMN");
@@ -311,6 +317,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
 #if IS_ENABLED(CONFIG_KVM)
 static void dummy_handler(void) {}
 static void (*kvm_posted_intr_wakeup_handler)(void) = dummy_handler;
+static void (*kvm_guest_pmi_handler)(void) = dummy_handler;

 void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 {
@@ -321,6 +328,10 @@ void x86_set_kvm_irq_handler(u8 vector, void (*handler)(void))
 	    (handler == dummy_handler ||
 	     kvm_posted_intr_wakeup_handler == dummy_handler))
 		kvm_posted_intr_wakeup_handler = handler;
+	else if (vector == KVM_GUEST_PMI_VECTOR &&
+		 (handler == dummy_handler ||
+		  kvm_guest_pmi_handler == dummy_handler))
+		kvm_guest_pmi_handler = handler;
 	else
 		WARN_ON_ONCE(1);

@@ -356,6 +367,16 @@ DEFINE_IDTENTRY_SYSVEC_SIMPLE(sysvec_kvm_posted_intr_nested_ipi)
 	apic_eoi();
 	inc_irq_stat(kvm_posted_intr_nested_ipis);
 }
+
+/*
+ * Handler for KVM_GUEST_PMI_VECTOR.
+ */
+DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_guest_pmi_handler)
+{
+	apic_eoi();
+	inc_irq_stat(kvm_guest_pmis);
+	kvm_guest_pmi_handler();
+}
 #endif

 #ifdef CONFIG_X86_POSTED_MSI

diff --git a/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h b/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
index 13aea8fc3d45..670dcee46631 100644
--- a/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
+++ b/tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
@@ -77,7 +77,10 @@
  */
 #define IRQ_WORK_VECTOR			0xf6

-/* 0xf5 - unused, was UV_BAU_MESSAGE */
+#if IS_ENABLED(CONFIG_KVM)
+#define KVM_GUEST_PMI_VECTOR		0xf5
+#endif
+
 #define DEFERRED_ERROR_VECTOR		0xf4

 /* Vector on which hypervisor callbacks will be delivered */

From patchwork Thu Aug 1 04:58:22 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749536
Date: Thu, 1 Aug 2024 04:58:22 +0000
Message-ID: <20240801045907.4010984-14-mizhang@google.com>
Subject: [RFC PATCH v3 13/58] KVM: x86/pmu: Register KVM_GUEST_PMI_VECTOR handler
From: Mingwei Zhang

From: Xiong Zhang

Add functions to register and unregister the guest KVM PMI handler at KVM module initialization and teardown.
This allows a host PMU with the passthrough capability enabled to switch the PMI handler at PMU context switch.

Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/x86.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8c9e4281d978..f1d589c07068 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13946,6 +13946,16 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);

+static void kvm_handle_guest_pmi(void)
+{
+	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+	if (WARN_ON_ONCE(!vcpu))
+		return;
+
+	kvm_make_request(KVM_REQ_PMI, vcpu);
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
@@ -13980,12 +13990,14 @@ static int __init kvm_x86_init(void)
 {
 	kvm_mmu_x86_module_init();
 	mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
+	x86_set_kvm_irq_handler(KVM_GUEST_PMI_VECTOR, kvm_handle_guest_pmi);
 	return 0;
 }
 module_init(kvm_x86_init);

 static void __exit kvm_x86_exit(void)
 {
+	x86_set_kvm_irq_handler(KVM_GUEST_PMI_VECTOR, NULL);
 	WARN_ON_ONCE(static_branch_unlikely(&kvm_has_noapic_vcpu));
 }
 module_exit(kvm_x86_exit);

From patchwork Thu Aug 1 04:58:23 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749537
Date: Thu, 1 Aug 2024 04:58:23 +0000
Message-ID: <20240801045907.4010984-15-mizhang@google.com>
Subject: [RFC PATCH v3 14/58] perf: Add switch_interrupt() interface
From: Mingwei Zhang

From: Kan Liang

There will be a dedicated interrupt vector for guests on some platforms, e.g., Intel. Add an interface to switch the interrupt vector while entering/exiting a guest.

When the PMI switches to the new guest vector, the guest_lvtpc value needs to be reflected onto the hardware. For example, when the guest clears its PMI mask bit, the hardware PMI mask bit should be cleared as well, so that PMIs can continue to be generated for the guest. Therefore, a guest_lvtpc parameter is added to perf_guest_enter() and switch_interrupt().

In switch_interrupt(), the target pmu with the PASSTHROUGH cap must be found. Since only one passthrough pmu is supported, keep the implementation simple by tracking that pmu in a global variable.

Signed-off-by: Kan Liang
[Simplify the commit with removal of srcu lock/unlock since only one pmu is supported.]
Signed-off-by: Mingwei Zhang
---
 include/linux/perf_event.h |  9 +++++++--
 kernel/events/core.c       | 36 ++++++++++++++++++++++++++++++++++--
 2 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 75773f9890cc..aeb08f78f539 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -541,6 +541,11 @@ struct pmu {
 	 * Check period value for PERF_EVENT_IOC_PERIOD ioctl.
 	 */
 	int (*check_period)		(struct perf_event *event, u64 value); /* optional */
+
+	/*
+	 * Switch the interrupt vectors, e.g., guest enter/exit.
+	 */
+	void (*switch_interrupt)	(bool enter, u32 guest_lvtpc); /* optional */
 };

 enum perf_addr_filter_action_t {
@@ -1738,7 +1743,7 @@ extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
 int perf_get_mediated_pmu(void);
 void perf_put_mediated_pmu(void);
-void perf_guest_enter(void);
+void perf_guest_enter(u32 guest_lvtpc);
 void perf_guest_exit(void);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
@@ -1833,7 +1838,7 @@ static inline int perf_get_mediated_pmu(void)
 }
 static inline void perf_put_mediated_pmu(void)			{ }
-static inline void perf_guest_enter(void)			{ }
+static inline void perf_guest_enter(u32 guest_lvtpc)		{ }
 static inline void perf_guest_exit(void)			{ }
 #endif

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 57ff737b922b..047ca5748ee2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -422,6 +422,7 @@ static inline bool is_include_guest_event(struct perf_event *event)

 static LIST_HEAD(pmus);
 static DEFINE_MUTEX(pmus_lock);
+static struct pmu *passthru_pmu;
 static struct srcu_struct pmus_srcu;
 static cpumask_var_t perf_online_mask;
 static struct kmem_cache *perf_event_cache;
@@ -5941,8 +5942,21 @@ void perf_put_mediated_pmu(void)
 }
 EXPORT_SYMBOL_GPL(perf_put_mediated_pmu);

+static void perf_switch_interrupt(bool enter, u32 guest_lvtpc)
+{
+	/* Mediated passthrough PMU should have PASSTHROUGH_VPMU cap. */
+	if (!passthru_pmu)
+		return;
+
+	if (passthru_pmu->switch_interrupt &&
+	    try_module_get(passthru_pmu->module)) {
+		passthru_pmu->switch_interrupt(enter, guest_lvtpc);
+		module_put(passthru_pmu->module);
+	}
+}
+
 /* When entering a guest, schedule out all exclude_guest events. */
-void perf_guest_enter(void)
+void perf_guest_enter(u32 guest_lvtpc)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);

@@ -5962,6 +5976,8 @@ void perf_guest_enter(void)
 		perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST);
 	}

+	perf_switch_interrupt(true, guest_lvtpc);
+
 	__this_cpu_write(perf_in_guest, true);

 unlock:
@@ -5980,6 +5996,8 @@ void perf_guest_exit(void)
 	if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest)))
 		goto unlock;

+	perf_switch_interrupt(false, 0);
+
 	perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST);
 	ctx_sched_in(&cpuctx->ctx, EVENT_GUEST);
 	perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST);
@@ -11842,7 +11860,21 @@ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
 	if (!pmu->event_idx)
 		pmu->event_idx = perf_event_idx_default;

-	list_add_rcu(&pmu->entry, &pmus);
+	/*
+	 * Initialize passthru_pmu with the core pmu that has
+	 * PERF_PMU_CAP_PASSTHROUGH_VPMU capability.
+	 */
+	if (pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU) {
+		if (!passthru_pmu)
+			passthru_pmu = pmu;
+
+		if (WARN_ONCE(passthru_pmu != pmu, "Only one passthrough PMU is supported\n")) {
+			ret = -EINVAL;
+			goto free_dev;
+		}
+	}
+
+	list_add_tail_rcu(&pmu->entry, &pmus);
 	atomic_set(&pmu->exclusive_cnt, 0);
 	ret = 0;
 unlock:

From patchwork Thu Aug 1 04:58:24 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749538
NCLOkC3GlHM5s3t4OiJ8vCQAANG5QQH81vVgic9HtMlB12aCLqbcHP5gypSWMUdBbnjv St3g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722488379; x=1723093179; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=w7uRuuJLWNcIJX8XquVv1gmL4i5q3RaqambFEsShWAg=; b=gpuV3JGVks/eY3UMT3DI0bHOdlq9AuVnvPYmij1v+7VOeN3S43Cszw0uWaymDrGYar JvC1pAZHm/GwvhsOlHKeRMDZvsWYr3xi6FfRq9eFtvPl5XpIrVZgsRlcyKs9qJkLRoiK XtBhY/OY7wyJBNu62YEUTczTc9QO24xp40MROnKbjDYZgGy14Lq8K0BI0U5ngoeXWvBP Trcr8j3pmw5myQ1/bLgQXL4VONMU1KB7BfMJs4o6/ipY/v3vFWOhz68JO1C+m1V9VW4g GE8XponUwbWPimx0cdXfEANwUopZ9EdYE6veYS8S8cKhUblay46FG7nfQADY687x8tnd ex3A== X-Forwarded-Encrypted: i=1; AJvYcCXdYMP3N1eWJcnfdhFQGeCso/CE5HuZBkSQzsXUO1QgsL4RnV5aslsb138mH9OZaFfNbNJMR0T8gSL9f4KN8HhFTYOg X-Gm-Message-State: AOJu0YxsVvYIiKaQ1bHogNgbW+9jx0uObrGAwsaBpGbqU/9f+Hf9dsLN xgUMtvS6PPAMgKxGeLg/MUNj4Dn0+RuBumxWqVDpJnFYKhCf6ucl51h+5weywMxB0Ddw/+qbmBZ iStykPA== X-Google-Smtp-Source: AGHT+IHOUzg+IUPvsg1EhMzy5wRuwEGHHgmUm7eoY+LS5vO9f74Mpw915QluvDl83oVdZltRnH9hWIucwB8U X-Received: from mizhang-super.c.googlers.com ([35.247.89.60]) (user=mizhang job=sendgmr) by 2002:a05:690c:ed2:b0:64b:5dc3:e4fe with SMTP id 00721157ae682-6874abdc89cmr263197b3.1.1722488378564; Wed, 31 Jul 2024 21:59:38 -0700 (PDT) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:24 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-16-mizhang@google.com> Subject: [RFC PATCH v3 15/58] perf/x86: Support switch_interrupt interface From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: 
Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
From: Kan Liang

Implement the switch_interrupt interface for the x86 PMU: switch the PMI to the dedicated KVM_GUEST_PMI_VECTOR at perf guest enter, and switch the PMI back to NMI at perf guest exit.

Signed-off-by: Xiong Zhang
Signed-off-by: Kan Liang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/core.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5bf78cd619bf..b17ef8b6c1a6 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2673,6 +2673,15 @@ static bool x86_pmu_filter(struct pmu *pmu, int cpu)
 	return ret;
 }
 
+static void x86_pmu_switch_interrupt(bool enter, u32 guest_lvtpc)
+{
+	if (enter)
+		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_GUEST_PMI_VECTOR |
+			   (guest_lvtpc & APIC_LVT_MASKED));
+	else
+		apic_write(APIC_LVTPC, APIC_DM_NMI);
+}
+
 static struct pmu pmu = {
 	.pmu_enable	= x86_pmu_enable,
 	.pmu_disable	= x86_pmu_disable,
@@ -2702,6 +2711,8 @@ static struct pmu pmu = {
 	.aux_output_match	= x86_pmu_aux_output_match,
 	.filter			= x86_pmu_filter,
+
+	.switch_interrupt	= x86_pmu_switch_interrupt,
 };
 
 void arch_perf_update_userpage(struct perf_event *event,

From patchwork Thu Aug 1 04:58:25 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749539
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:25 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-17-mizhang@google.com>
Subject: [RFC PATCH v3 16/58] perf/x86: Forbid PMI handler when guest own PMU
From: Mingwei Zhang
To: Sean Christopherson , Paolo Bonzini , Xiong Zhang ,
Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das
Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

If a guest PMI is delivered after VM-exit, the KVM maskable interrupt will be held pending until EFLAGS.IF is set. In the meantime, if the logical processor receives an NMI for any reason at all, perf_event_nmi_handler() will be invoked. If there is any active perf event anywhere on the system, x86_pmu_handle_irq() will be invoked, and it will clear IA32_PERF_GLOBAL_STATUS. By the time KVM's PMI handler is invoked, it will be a mystery which counter(s) overflowed.

While the LVTPC is using the KVM PMI vector, the PMU is owned by the guest. A host NMI would let x86_pmu_handle_irq() run; x86_pmu_handle_irq() would restore the PMU vector to NMI and clear IA32_PERF_GLOBAL_STATUS, which breaks the guest vPMU passthrough environment. So modify perf_event_nmi_handler() to check the perf_in_guest per-CPU variable and, if it is set, simply return without calling x86_pmu_handle_irq().
Suggested-by: Jim Mattson
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/core.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index b17ef8b6c1a6..cb5d8f5fd9ce 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -52,6 +52,8 @@ DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = {
 	.pmu = &pmu,
 };
 
+DEFINE_PER_CPU(bool, pmi_vector_is_nmi) = true;
+
 DEFINE_STATIC_KEY_FALSE(rdpmc_never_available_key);
 DEFINE_STATIC_KEY_FALSE(rdpmc_always_available_key);
 DEFINE_STATIC_KEY_FALSE(perf_is_hybrid);
@@ -1733,6 +1735,24 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 	u64 finish_clock;
 	int ret;
 
+	/*
+	 * When the guest PMU context is loaded, this handler must not run,
+	 * for the following reasons:
+	 * 1. After perf_guest_enter() is called, and before the CPU enters
+	 * non-root mode, an NMI could arrive, and x86_pmu_handle_irq() would
+	 * restore the PMU to the NMI vector, destroying the KVM PMI vector setting.
+	 * 2. While the VM is running, a host NMI other than a PMI causes a VM exit,
+	 * and KVM calls the host NMI handler (vmx_vcpu_enter_exit()) before it
+	 * saves the guest PMU context (kvm_pmu_save_pmu_context()); since
+	 * x86_pmu_handle_irq() clears the global_status MSR, which now holds
+	 * guest state, this destroys the guest PMU state.
+	 * 3. After VM exit, but before KVM saves the guest PMU context, a host
+	 * NMI other than a PMI could arrive; x86_pmu_handle_irq() would clear
+	 * the global_status MSR, which now holds guest state, again destroying
+	 * the guest PMU state.
+	 */
+	if (!this_cpu_read(pmi_vector_is_nmi))
+		return 0;
+
 	/*
 	 * All PMUs/events that share this PMI handler should make sure to
 	 * increment active_events for their events.
@@ -2675,11 +2695,14 @@ static bool x86_pmu_filter(struct pmu *pmu, int cpu)
 
 static void x86_pmu_switch_interrupt(bool enter, u32 guest_lvtpc)
 {
-	if (enter)
+	if (enter) {
 		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_GUEST_PMI_VECTOR |
 			   (guest_lvtpc & APIC_LVT_MASKED));
-	else
+		this_cpu_write(pmi_vector_is_nmi, false);
+	} else {
 		apic_write(APIC_LVTPC, APIC_DM_NMI);
+		this_cpu_write(pmi_vector_is_nmi, true);
+	}
 }
 
 static struct pmu pmu = {

From patchwork Thu Aug 1 04:58:26 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749540
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:26 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-18-mizhang@google.com>
Subject: [RFC PATCH v3 17/58] perf: core/x86: Plumb passthrough PMU capability from x86_pmu to x86_pmu_cap
From: Mingwei Zhang
To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das
Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Plumb the passthrough PMU capability through to x86_pmu_cap so that any kernel entity, such as KVM, can learn that the host PMU supports passthrough PMU mode and has the implementation.
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/events/core.c            | 1 +
 arch/x86/include/asm/perf_event.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index cb5d8f5fd9ce..c16ceebf2d70 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -3029,6 +3029,7 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
 	cap->events_mask	= (unsigned int)x86_pmu.events_maskl;
 	cap->events_mask_len	= x86_pmu.events_mask_len;
 	cap->pebs_ept		= x86_pmu.pebs_ept;
+	cap->passthrough	= !!(pmu.capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU);
 }
 EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability);
 
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 7f1e17250546..5cf37fe1f30a 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -258,6 +258,7 @@ struct x86_pmu_capability {
 	unsigned int	events_mask;
 	int		events_mask_len;
 	unsigned int	pebs_ept	:1;
+	unsigned int	passthrough	:1;
 };
 
 /*

From patchwork Thu Aug 1 04:58:27 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749541
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:27 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-19-mizhang@google.com>
Subject: [RFC PATCH v3 18/58] KVM: x86/pmu: Introduce enable_passthrough_pmu module parameter
From: Mingwei Zhang
To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das
Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org,
linux-perf-users@vger.kernel.org

Introduce enable_passthrough_pmu as a read-only KVM kernel module parameter. This variable is true only when all of the following conditions are satisfied:
- it is set to true when the module is loaded.
- enable_pmu is true.
- KVM is running on an Intel CPU.
- the CPU supports PerfMon v4.
- the host PMU supports passthrough mode.

The value is always read-only because the passthrough PMU currently does not support features like LBR and PEBS, while the emulated PMU does. Allowing it to change at run time would end up with two different values for kvm_cap.supported_perf_cap, which is initialized at module load time; maintaining two different perf capabilities would add complexity. Further, there is not enough motivation to support running two types of PMU implementations at the same time, although it is possible/feasible in reality.

Finally, always propagate enable_passthrough_pmu and perf_capabilities into kvm->arch for each KVM instance.

Co-developed-by: Xiong Zhang
Signed-off-by: Xiong Zhang
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.h              | 14 ++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  7 +++++--
 arch/x86/kvm/x86.c              |  8 ++++++++
 arch/x86/kvm/x86.h              |  1 +
 5 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8ca74e7678f..a15c783f20b9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1406,6 +1406,7 @@ struct kvm_arch {
 	bool bus_lock_detection_enabled;
 	bool enable_pmu;
+	bool enable_passthrough_pmu;
 
 	u32 notify_window;
 	u32 notify_vmexit_flags;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 4d52b0b539ba..cf93be5e7359 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -208,6 +208,20 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 			enable_pmu = false;
 	}
 
+	/* Pass-through vPMU is only supported in Intel CPUs. */
+	if (!is_intel)
+		enable_passthrough_pmu = false;
+
+	/*
+	 * Pass-through vPMU requires at least PerfMon version 4 because the
+	 * implementation requires the usage of MSR_CORE_PERF_GLOBAL_STATUS_SET
+	 * for counter emulation as well as PMU context switch. In addition, it
+	 * requires host PMU support on passthrough mode. Disable pass-through
+	 * vPMU if any condition fails.
+	 */
+	if (!enable_pmu || kvm_pmu_cap.version < 4 || !kvm_pmu_cap.passthrough)
+		enable_passthrough_pmu = false;
+
 	if (!enable_pmu) {
 		memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
 		return;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ad465881b043..2ad122995f11 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -146,6 +146,8 @@ module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 extern bool __read_mostly allow_smaller_maxphyaddr;
 module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
 
+module_param(enable_passthrough_pmu, bool, 0444);
+
 #define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
 #define KVM_VM_CR0_ALWAYS_ON \
@@ -7924,7 +7926,8 @@ static __init u64 vmx_get_perf_capabilities(void)
 	if (boot_cpu_has(X86_FEATURE_PDCM))
 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
 
-	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) {
+	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR) &&
+	    !enable_passthrough_pmu) {
 		x86_perf_get_lbr(&vmx_lbr_caps);
 
 		/*
@@ -7938,7 +7941,7 @@ static __init u64 vmx_get_perf_capabilities(void)
 			perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
 	}
 
-	if (vmx_pebs_supported()) {
+	if (vmx_pebs_supported() && !enable_passthrough_pmu) {
 		perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
 
 		/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f1d589c07068..0c40f551130e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -187,6 +187,10 @@ bool __read_mostly enable_pmu = true;
 EXPORT_SYMBOL_GPL(enable_pmu);
 module_param(enable_pmu, bool, 0444);
 
+/* Enable/disable mediated passthrough PMU virtualization */
+bool __read_mostly enable_passthrough_pmu;
+EXPORT_SYMBOL_GPL(enable_passthrough_pmu);
+
 bool __read_mostly eager_page_split = true;
 module_param(eager_page_split, bool, 0644);
 
@@ -6682,6 +6686,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		mutex_lock(&kvm->lock);
 		if (!kvm->created_vcpus) {
 			kvm->arch.enable_pmu = !(cap->args[0] & KVM_PMU_CAP_DISABLE);
+			/* Disable passthrough PMU if enable_pmu is false. */
+			if (!kvm->arch.enable_pmu)
+				kvm->arch.enable_passthrough_pmu = false;
 			r = 0;
 		}
 		mutex_unlock(&kvm->lock);
@@ -12623,6 +12630,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.default_tsc_khz = max_tsc_khz ? : tsc_khz;
 	kvm->arch.guest_can_read_msr_platform_info = true;
 	kvm->arch.enable_pmu = enable_pmu;
+	kvm->arch.enable_passthrough_pmu = enable_passthrough_pmu;
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	spin_lock_init(&kvm->arch.hv_root_tdp_lock);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index d80a4c6b5a38..dc45ba42bec2 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -332,6 +332,7 @@ extern u64 host_arch_capabilities;
 extern struct kvm_caps kvm_caps;
 
 extern bool enable_pmu;
+extern bool enable_passthrough_pmu;
 
 /*
  * Get a filtered version of KVM's supported XCR0 that strips out dynamic

From patchwork Thu Aug 1 04:58:28 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749542
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:28 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-20-mizhang@google.com>
Subject: [RFC PATCH v3 19/58] KVM: x86/pmu: Plumb through pass-through PMU to vcpu for Intel CPUs
From: Mingwei Zhang
To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali
Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org Plumb through pass-through PMU setting from kvm->arch into kvm_pmu on each vcpu created. Note that enabling PMU is decided by VMM when it sets the CPUID bits exposed to guest VM. So plumb through the enabling for each pmu in intel_pmu_refresh(). Co-developed-by: Xiong Zhang Signed-off-by: Xiong Zhang Signed-off-by: Mingwei Zhang Signed-off-by: Dapeng Mi Tested-by: Yongwei Ma --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/pmu.c | 1 + arch/x86/kvm/vmx/pmu_intel.c | 12 +++++++++--- 3 files changed, 12 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index a15c783f20b9..4b3ce6194bdb 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -595,6 +595,8 @@ struct kvm_pmu { * redundant check before cleanup if guest don't use vPMU at all. 
*/ u8 event_count; + + bool passthrough; }; struct kvm_pmu_ops; diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index a593b03c9aed..5768ea2935e9 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -797,6 +797,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu) memset(pmu, 0, sizeof(*pmu)); static_call(kvm_x86_pmu_init)(vcpu); + pmu->passthrough = false; kvm_pmu_refresh(vcpu); } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index be40474de6e4..e417fd91e5fe 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -470,15 +470,21 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu) return; entry = kvm_find_cpuid_entry(vcpu, 0xa); - if (!entry) + if (!entry || !vcpu->kvm->arch.enable_pmu) { + pmu->passthrough = false; return; - + } eax.full = entry->eax; edx.full = entry->edx; pmu->version = eax.split.version_id; - if (!pmu->version) + if (!pmu->version) { + pmu->passthrough = false; return; + } + + pmu->passthrough = vcpu->kvm->arch.enable_passthrough_pmu && + lapic_in_kernel(vcpu); pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters, kvm_pmu_cap.num_counters_gp); From patchwork Thu Aug 1 04:58:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mingwei Zhang X-Patchwork-Id: 13749543 Received: from mail-pg1-f201.google.com (mail-pg1-f201.google.com [209.85.215.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 345FA14A602 for ; Thu, 1 Aug 2024 04:59:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1722488389; cv=none; b=sITN6z3zZgmPxIOtWCLPz6TEtMeD1MWwnk1OS0lPbv4pxtDuASkgzVWyFKw1Gm+jNRoXDdZViVf8YFoMAoZukA7KIONdHxSVXAunHjKnzezLbCD9HO2TwU0SYvX6tPpm5AIulTKKFKvLsAYrBaXaeywz/dceJb9Z/mdE6/o+OOA= 
Subject: [RFC PATCH v3 20/58] KVM: x86/pmu: Always set global enable bits in passthrough mode
From: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:29 +0000
Message-ID: <20240801045907.4010984-21-mizhang@google.com>

From: Sandipan Das

Currently, the global control bits for a vCPU are restored to the reset
state only if the guest PMU version is less than 2. This works for the
emulated PMU, as the MSRs are intercepted and backing events are created
for and managed by the host PMU [1]. If such a guest is run with the
passthrough PMU, the counters no longer work because the global enable
bits are cleared. Hence, set the global enable bits to their reset state
if the passthrough PMU is used.

A passthrough-capable host may not necessarily support PMU version 2, and
it can choose to restore or save the global control state from struct
kvm_pmu in the PMU context save and restore helpers depending on the
availability of the global control register.

[1] 7b46b733bdb4 ("KVM: x86/pmu: Set enable bits for GP counters in PERF_GLOBAL_CTRL at "RESET"")

Reported-by: Mingwei Zhang
Signed-off-by: Sandipan Das
[removed the fixes tag]
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/pmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 5768ea2935e9..e656f72fdace 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -787,7 +787,7 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 	 * in the global controls). Emulate that behavior when refreshing the
 	 * PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL.
 	 */
-	if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters)
+	if ((pmu->passthrough || kvm_pmu_has_perf_global_ctrl(pmu)) && pmu->nr_arch_gp_counters)
 		pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0);
 }
Subject: [RFC PATCH v3 21/58] KVM: x86/pmu: Add a helper to check if passthrough PMU is enabled
From: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:30 +0000
Message-ID: <20240801045907.4010984-22-mizhang@google.com>

Add a helper to check whether the passthrough PMU is enabled; it is
vendor-neutral and convenient for callers.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cf93be5e7359..56ba0772568c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -48,6 +48,11 @@ struct kvm_pmu_ops {

 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);

+static inline bool is_passthrough_pmu_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu_to_pmu(vcpu)->passthrough;
+}
+
 static inline bool kvm_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
 {
 	/*
Subject: [RFC PATCH v3 22/58] KVM: x86/pmu: Add host_perf_cap and initialize it in kvm_x86_vendor_init()
From: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:31 +0000
Message-ID: <20240801045907.4010984-23-mizhang@google.com>

Initialize host_perf_cap early in kvm_x86_vendor_init() so that KVM knows
the hardware PMU capabilities when configuring its guest VMs. This
awareness directly decides the feasibility of passing through RDPMC and
indirectly affects PMU context-switch performance. Caching the host PMU
feature set in host_perf_cap saves an rdmsrl() of the
IA32_PERF_CAPABILITIES MSR on each PMU context switch.

In addition, opportunistically remove the host_perf_cap initialization in
vmx_get_perf_capabilities() so the value no longer depends on the module
parameter "enable_pmu".

Signed-off-by: Mingwei Zhang
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.h     | 1 +
 arch/x86/kvm/vmx/vmx.c | 4 ----
 arch/x86/kvm/x86.c     | 6 ++++++
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 56ba0772568c..e041c8a23e2f 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -295,4 +295,5 @@ bool is_vmware_backdoor_pmc(u32 pmc_idx);
 extern struct kvm_pmu_ops intel_pmu_ops;
 extern struct kvm_pmu_ops amd_pmu_ops;

+extern u64 __read_mostly host_perf_cap;
 #endif /* __KVM_X86_PMU_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2ad122995f11..4d60a8cf2dd1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7918,14 +7918,10 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 static __init u64 vmx_get_perf_capabilities(void)
 {
 	u64 perf_cap = PMU_CAP_FW_WRITES;
-	u64 host_perf_cap = 0;

 	if (!enable_pmu)
 		return 0;

-	if (boot_cpu_has(X86_FEATURE_PDCM))
-		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
-
 	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR) && !enable_passthrough_pmu) {
 		x86_perf_get_lbr(&vmx_lbr_caps);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0c40f551130e..6db4dc496d2b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -239,6 +239,9 @@ EXPORT_SYMBOL_GPL(host_xss);
 u64 __read_mostly host_arch_capabilities;
 EXPORT_SYMBOL_GPL(host_arch_capabilities);

+u64 __read_mostly host_perf_cap;
+EXPORT_SYMBOL_GPL(host_perf_cap);
+
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS(),
 	STATS_DESC_COUNTER(VM, mmu_shadow_zapped),
@@ -9793,6 +9796,9 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, host_arch_capabilities);

+	if (boot_cpu_has(X86_FEATURE_PDCM))
+		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
+
 	r = ops->hardware_setup();
 	if (r != 0)
 		goto out_mmu_exit;
Subject: [RFC PATCH v3 23/58] KVM: x86/pmu: Allow RDPMC pass through when all counters exposed to guest
From: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:32 +0000
Message-ID: <20240801045907.4010984-24-mizhang@google.com>

Clear RDPMC_EXITING in the VMCS when all host-side counters are exposed
to the guest VM. This improves performance with the passthrough PMU.
When the guest is not given all counters, keep intercepting RDPMC to
prevent access to unexposed counters. Make the decision in
vmx_vcpu_after_set_cpuid() when the guest enables the PMU and the
passthrough PMU is enabled.

Co-developed-by: Xiong Zhang
Signed-off-by: Xiong Zhang
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.c     | 16 ++++++++++++++++
 arch/x86/kvm/pmu.h     |  1 +
 arch/x86/kvm/vmx/vmx.c |  5 +++++
 3 files changed, 22 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e656f72fdace..19104e16a986 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -96,6 +96,22 @@ void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
 #undef __KVM_X86_PMU_OP
 }

+bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (is_passthrough_pmu_enabled(vcpu) &&
+	    !enable_vmware_backdoor &&
+	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp &&
+	    pmu->nr_arch_fixed_counters == kvm_pmu_cap.num_counters_fixed &&
+	    pmu->counter_bitmask[KVM_PMC_GP] == (((u64)1 << kvm_pmu_cap.bit_width_gp) - 1) &&
+	    pmu->counter_bitmask[KVM_PMC_FIXED] == (((u64)1 << kvm_pmu_cap.bit_width_fixed) - 1))
+		return true;
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(kvm_pmu_check_rdpmc_passthrough);
+
 static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index e041c8a23e2f..91941a0f6e47 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -290,6 +290,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
+bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu);

 bool is_vmware_backdoor_pmc(u32 pmc_idx);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4d60a8cf2dd1..339742350b7a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7911,6 +7911,11 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vmx->msr_ia32_feature_control_valid_bits &=
 			~FEAT_CTL_SGX_LC_ENABLED;

+	if (kvm_pmu_check_rdpmc_passthrough(&vmx->vcpu))
+		exec_controls_clearbit(vmx, CPU_BASED_RDPMC_EXITING);
+	else
+		exec_controls_setbit(vmx, CPU_BASED_RDPMC_EXITING);
+
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	vmx_update_exception_bitmap(vcpu);
 }
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="TOSQzj5P" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-1fc52d8bf24so19731115ad.1 for ; Wed, 31 Jul 2024 21:59:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722488395; x=1723093195; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=TxaTUli0MW2WfeRTkV/gChz1AThZ25Knkg7O5SunQdo=; b=TOSQzj5Px6HT2XlpojUStH79OVI5ZiiqE/CSSaIa8Eanw/EbCcOkUodf+U8Dk475Wu NtR4nsQ8b4oFXpeJuX5OWEG1XuhbfoEGyLkd+9UG8ctnd8RAfB+DhK0GoQQNauuR8b16 LU7G0moN3IgLSfzm3TwpiZ1+DXw+GR/Z25ZoT194DBBMZtaRrq5mcEjMFjd5YHx1+ugi j78SgkhOkX+zSAmceVwPi3GX4BuYeKUt0HooKuvptfdK+Oh1TgDbPDUBjLQ51S4Zsbrc 5H2aJgxBtCQuWnnhSHNckMGl7N5wIhV2VAKJNRimKdFjtUl79fdUIgUhfhsUyM2la8u7 clfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722488395; x=1723093195; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=TxaTUli0MW2WfeRTkV/gChz1AThZ25Knkg7O5SunQdo=; b=DQsJQM4n+Fzx1PL9rzRQLPprmfz/P6RZAjimpBcWrdN+DKBixeX7fE0qV525ieFuVP IVUIPy6YD3yWDyPz+FeYZM3TN8d6492RAx65gWxU4Ce+/hho2Oc9XLSB1fBqIJ42vyHU yGw5bhm8KMfU9c0akb6iAt3jKa2woGZJeNowFlmO0eS0paP0ZDcfXsEaE9qdlW8xibf9 yjo+Eg6FM0O+QAa+ddReC8/hQadpwVJyJmCy4iqz8I1MzNGlNWuducTfGa89zNuW8BI5 jaCqP8lGVBrb521tUD/dgXHDKAAEQdDcIg+s8VHEXnbrso6h9+XDJL0nypUHwoQBMOIM womw== X-Forwarded-Encrypted: i=1; AJvYcCVzISfGpukRrsTRt22Ls30NkDtCHT9/bKKeiApl+ziRLCOuIMKEhpUwwiqB65hLigPQTVDg1YLBSFRnQVhbxvbOXffx X-Gm-Message-State: AOJu0YwWo5xVcY1S1WJ7hqtjHjJ1+7qBP1/lRtOfM1E/FmQJEVqYRRkc czvJgCrcD4s7Y9mpQ/3fx3NHPjVIIAM36dHWlpRYyNM44QMY1gOoUK4DPyI+COxpyKRTX5t+nJF aWJhysA== X-Google-Smtp-Source: 
AGHT+IGHkM4r42/a/rH4G9daniUo06gyI0yMhqNdsk8wRgb2+jJwFdjmzK0/Eyk61kWilTzxiClGhhQvR1Ng X-Received: from mizhang-super.c.googlers.com ([35.247.89.60]) (user=mizhang job=sendgmr) by 2002:a17:902:fac8:b0:1fb:bd8:f83e with SMTP id d9443c01a7336-1ff524a1bcdmr1645ad.4.1722488395148; Wed, 31 Jul 2024 21:59:55 -0700 (PDT) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:33 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-25-mizhang@google.com> Subject: [RFC PATCH v3 24/58] KVM: x86/pmu: Introduce macro PMU_CAP_PERF_METRICS From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org From: Dapeng Mi Define macro PMU_CAP_PERF_METRICS to represent bit[15] of MSR_IA32_PERF_CAPABILITIES MSR. This bit is used to represent whether perf metrics feature is enabled. 
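As a minimal userspace sketch (not kernel code), the capability test this macro enables is a plain bit check against the MSR value; the macro names below mirror the ones in capabilities.h, while `perf_metrics_supported()` is an illustrative helper, not a KVM function:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors arch/x86/kvm/vmx/capabilities.h; BIT_ULL(15) expands to 1ULL << 15. */
#define PMU_CAP_FW_WRITES     (1ULL << 13)
#define PMU_CAP_PERF_METRICS  (1ULL << 15)

/* Illustrative helper: does this IA32_PERF_CAPABILITIES value advertise
 * the perf metrics feature? */
static int perf_metrics_supported(uint64_t perf_capabilities)
{
	return (perf_capabilities & PMU_CAP_PERF_METRICS) != 0;
}
```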
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/vmx/capabilities.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 41a4533f9989..d8317552b634 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -22,6 +22,7 @@ extern int __read_mostly pt_mode;
 #define PT_MODE_HOST_GUEST	1
 
 #define PMU_CAP_FW_WRITES	(1ULL << 13)
+#define PMU_CAP_PERF_METRICS	BIT_ULL(15)
 #define PMU_CAP_LBR_FMT		0x3f
 
 struct nested_vmx_msrs {

From patchwork Thu Aug 1 04:58:34 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749548
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:34 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-26-mizhang@google.com>
Subject: [RFC PATCH v3 25/58] KVM: x86/pmu: Introduce PMU operator to check if rdpmc passthrough allowed
From: Mingwei Zhang

Introduce a vendor-specific API to check whether RDPMC passthrough is allowed. RDPMC passthrough requires the guest VM to have full ownership of all counters: the general-purpose counters, the fixed counters, and some vendor-specific MSRs such as PERF_METRICS. Since the PERF_METRICS MSR is Intel-specific, put the check into vendor-specific code.
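The full-ownership requirement above can be modeled as an AND of conditions, one of which is the new vendor hook. This is a simplified userspace model with illustrative field names, not the kernel's `kvm_pmu` layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: RDPMC may bypass interception only when the guest's counter
 * layout exactly matches the host's AND the vendor-specific hook agrees.
 * Field names are illustrative, not KVM's. */
struct pmu_model {
	int nr_gp_counters;
	int nr_fixed_counters;
	bool vendor_allows_rdpmc; /* stands in for is_rdpmc_passthru_allowed() */
};

static bool rdpmc_passthrough_allowed(const struct pmu_model *guest,
				      const struct pmu_model *host)
{
	return guest->vendor_allows_rdpmc &&
	       guest->nr_gp_counters == host->nr_gp_counters &&
	       guest->nr_fixed_counters == host->nr_fixed_counters;
}
```

Any mismatch, including a vendor veto, forces RDPMC interception.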
Signed-off-by: Mingwei Zhang
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 +
 arch/x86/kvm/pmu.c                     |  1 +
 arch/x86/kvm/pmu.h                     |  1 +
 arch/x86/kvm/svm/pmu.c                 |  6 ++++++
 arch/x86/kvm/vmx/pmu_intel.c           | 16 ++++++++++++++++
 5 files changed, 25 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index f852b13aeefe..fd986d5146e4 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -20,6 +20,7 @@ KVM_X86_PMU_OP(get_msr)
 KVM_X86_PMU_OP(set_msr)
 KVM_X86_PMU_OP(refresh)
 KVM_X86_PMU_OP(init)
+KVM_X86_PMU_OP(is_rdpmc_passthru_allowed)
 KVM_X86_PMU_OP_OPTIONAL(reset)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 19104e16a986..3afefe4cf6e2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -102,6 +102,7 @@ bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu)
 
 	if (is_passthrough_pmu_enabled(vcpu) &&
 	    !enable_vmware_backdoor &&
+	    static_call(kvm_x86_pmu_is_rdpmc_passthru_allowed)(vcpu) &&
 	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp &&
 	    pmu->nr_arch_fixed_counters == kvm_pmu_cap.num_counters_fixed &&
 	    pmu->counter_bitmask[KVM_PMC_GP] == (((u64)1 << kvm_pmu_cap.bit_width_gp) - 1) &&

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 91941a0f6e47..e1af6d07b191 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -40,6 +40,7 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+	bool (*is_rdpmc_passthru_allowed)(struct kvm_vcpu *vcpu);
 
 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index dfcc38bd97d3..6b471b1ec9b8 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -228,6 +228,11 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	}
 }
 
+static bool amd_is_rdpmc_passthru_allowed(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -237,6 +242,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.set_msr = amd_pmu_set_msr,
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
+	.is_rdpmc_passthru_allowed = amd_is_rdpmc_passthru_allowed,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index e417fd91e5fe..02c9019c6f85 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -725,6 +725,21 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 	}
 }
 
+static bool intel_is_rdpmc_passthru_allowed(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Per Intel SDM vol. 2 for RDPMC, MSR_PERF_METRICS is accessible by
+	 * RDPMC with type 0x2000 in ECX[31:16], while the index value in
+	 * ECX[15:0] is implementation specific. Therefore, if the host has
+	 * this MSR, but does not expose it to the guest, RDPMC has to be
+	 * intercepted.
+	 */
+	if ((host_perf_cap & PMU_CAP_PERF_METRICS) &&
+	    !(vcpu_get_perf_capabilities(vcpu) & PMU_CAP_PERF_METRICS))
+		return false;
+
+	return true;
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
@@ -736,6 +751,7 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.is_rdpmc_passthru_allowed = intel_is_rdpmc_passthru_allowed,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_INTEL_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = 1,

From patchwork Thu Aug 1 04:58:35 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749549
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:35 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-27-mizhang@google.com>
Subject: [RFC PATCH v3 26/58] KVM: x86/pmu: Manage MSR interception for IA32_PERF_GLOBAL_CTRL
From: Mingwei Zhang

From: Xiong Zhang

In PMU passthrough mode, there are three requirements for managing IA32_PERF_GLOBAL_CTRL:
- the guest IA32_PERF_GLOBAL_CTRL MSR must be saved at VM exit;
- the IA32_PERF_GLOBAL_CTRL MSR must be cleared at VM exit, so that no counter keeps running within the KVM run loop;
- the guest IA32_PERF_GLOBAL_CTRL MSR must be restored at VM entry.
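The three switch points above can be modeled as a tiny state machine; this is a userspace toy (illustrative names, no VMCS), where in the real patch the save/clear/restore is done by VMCS entry/exit MSR load and store controls rather than explicit code at each transition:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the PERF_GLOBAL_CTRL lifecycle across a VM exit/entry pair. */
struct pgc_state {
	uint64_t msr;        /* live IA32_PERF_GLOBAL_CTRL value */
	uint64_t guest_save; /* saved guest copy */
	bool in_guest;
};

static void vm_exit(struct pgc_state *s)
{
	s->guest_save = s->msr; /* 1) save guest value at VM exit */
	s->msr = 0;             /* 2) clear so no counter runs inside KVM */
	s->in_guest = false;
}

static void vm_entry(struct pgc_state *s)
{
	s->msr = s->guest_save; /* 3) restore guest value at VM entry */
	s->in_guest = true;
}
```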
Introduce a vmx_set_perf_global_ctrl() function to auto-switch IA32_PERF_GLOBAL_CTRL, and invoke it after the VMM finishes setting up the CPUID bits.

Signed-off-by: Dapeng Mi
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/vmx.h |   1 +
 arch/x86/kvm/vmx/vmx.c     | 117 +++++++++++++++++++++++++++++++------
 arch/x86/kvm/vmx/vmx.h     |   3 +-
 3 files changed, 103 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index d77a31039f24..5ed89a099533 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -106,6 +106,7 @@
 #define VM_EXIT_CLEAR_BNDCFGS			0x00800000
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
+#define VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL	0x40000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 339742350b7a..34a420fa98c5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4394,6 +4394,97 @@ static u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx)
 	return pin_based_exec_ctrl;
 }
 
+static void vmx_set_perf_global_ctrl(struct vcpu_vmx *vmx)
+{
+	u32 vmentry_ctrl = vm_entry_controls_get(vmx);
+	u32 vmexit_ctrl = vm_exit_controls_get(vmx);
+	struct vmx_msrs *m;
+	int i;
+
+	if (cpu_has_perf_global_ctrl_bug() ||
+	    !is_passthrough_pmu_enabled(&vmx->vcpu)) {
+		vmentry_ctrl &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+		vmexit_ctrl &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+		vmexit_ctrl &= ~VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL;
+	}
+
+	if (is_passthrough_pmu_enabled(&vmx->vcpu)) {
+		/*
+		 * Setup auto restore guest PERF_GLOBAL_CTRL MSR at vm entry.
+		 */
+		if (vmentry_ctrl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) {
+			vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, 0);
+		} else {
+			m = &vmx->msr_autoload.guest;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i < 0) {
+				i = m->nr++;
+				vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+			}
+			m->val[i].index = MSR_CORE_PERF_GLOBAL_CTRL;
+			m->val[i].value = 0;
+		}
+		/*
+		 * Setup auto clear host PERF_GLOBAL_CTRL msr at vm exit.
+		 */
+		if (vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL) {
+			vmcs_write64(HOST_IA32_PERF_GLOBAL_CTRL, 0);
+		} else {
+			m = &vmx->msr_autoload.host;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i < 0) {
+				i = m->nr++;
+				vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+			}
+			m->val[i].index = MSR_CORE_PERF_GLOBAL_CTRL;
+			m->val[i].value = 0;
+		}
+		/*
+		 * Setup auto save guest PERF_GLOBAL_CTRL msr at vm exit
+		 */
+		if (!(vmexit_ctrl & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL)) {
+			m = &vmx->msr_autostore.guest;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i < 0) {
+				i = m->nr++;
+				vmcs_write32(VM_EXIT_MSR_STORE_COUNT, m->nr);
+			}
+			m->val[i].index = MSR_CORE_PERF_GLOBAL_CTRL;
+		}
+	} else {
+		if (!(vmentry_ctrl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)) {
+			m = &vmx->msr_autoload.guest;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i >= 0) {
+				m->nr--;
+				m->val[i] = m->val[m->nr];
+				vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+			}
+		}
+		if (!(vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)) {
+			m = &vmx->msr_autoload.host;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i >= 0) {
+				m->nr--;
+				m->val[i] = m->val[m->nr];
+				vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+			}
+		}
+		if (!(vmexit_ctrl & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL)) {
+			m = &vmx->msr_autostore.guest;
+			i = vmx_find_loadstore_msr_slot(m, MSR_CORE_PERF_GLOBAL_CTRL);
+			if (i >= 0) {
+				m->nr--;
+				m->val[i] = m->val[m->nr];
+				vmcs_write32(VM_EXIT_MSR_STORE_COUNT, m->nr);
+			}
+		}
+	}
+
+	vm_entry_controls_set(vmx, vmentry_ctrl);
+	vm_exit_controls_set(vmx, vmexit_ctrl);
+}
+
 static u32 vmx_vmentry_ctrl(void)
 {
 	u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
@@ -4401,17 +4492,10 @@ static u32 vmx_vmentry_ctrl(void)
 	if (vmx_pt_mode_is_system())
 		vmentry_ctrl &= ~(VM_ENTRY_PT_CONCEAL_PIP |
 				  VM_ENTRY_LOAD_IA32_RTIT_CTL);
-	/*
-	 * IA32e mode, and loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically.
-	 */
-	vmentry_ctrl &= ~(VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL |
-			  VM_ENTRY_LOAD_IA32_EFER |
-			  VM_ENTRY_IA32E_MODE);
-
-	if (cpu_has_perf_global_ctrl_bug())
-		vmentry_ctrl &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
-
-	return vmentry_ctrl;
+	/*
+	 * IA32e mode, and loading of EFER is toggled dynamically.
+	 */
+	return vmentry_ctrl &= ~(VM_ENTRY_LOAD_IA32_EFER | VM_ENTRY_IA32E_MODE);
 }
 
 static u32 vmx_vmexit_ctrl(void)
@@ -4429,12 +4513,8 @@ static u32 vmx_vmexit_ctrl(void)
 		vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
 				 VM_EXIT_CLEAR_IA32_RTIT_CTL);
 
-	if (cpu_has_perf_global_ctrl_bug())
-		vmexit_ctrl &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
-
-	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
-	return vmexit_ctrl &
-		~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
+	/* Loading of EFER is toggled dynamically */
+	return vmexit_ctrl & ~VM_EXIT_LOAD_IA32_EFER;
 }
 
 void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
@@ -4777,6 +4857,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 		vmcs_write64(VM_FUNCTION_CONTROL, 0);
 
 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+	vmcs_write64(VM_EXIT_MSR_STORE_ADDR, __pa(vmx->msr_autostore.guest.val));
 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0);
 	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0);
@@ -7916,6 +7997,8 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	else
 		exec_controls_setbit(vmx, CPU_BASED_RDPMC_EXITING);
 
+	vmx_set_perf_global_ctrl(vmx);
+
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	vmx_update_exception_bitmap(vcpu);
 }

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 7b64e271a931..32e3974c1a2c 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -510,7 +510,8 @@ static inline u8 vmx_get_rvi(void)
 	 VM_EXIT_LOAD_IA32_EFER |					\
 	 VM_EXIT_CLEAR_BNDCFGS |					\
 	 VM_EXIT_PT_CONCEAL_PIP |					\
-	 VM_EXIT_CLEAR_IA32_RTIT_CTL)
+	 VM_EXIT_CLEAR_IA32_RTIT_CTL |					\
+	 VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL)
 
 #define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL		\
 	(PIN_BASED_EXT_INTR_MASK |				\

From patchwork Thu Aug 1 04:58:36 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749550
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:36 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-28-mizhang@google.com>
Subject: [RFC PATCH v3 27/58] KVM: x86/pmu: Create a function prototype to disable MSR interception
From: Mingwei Zhang

Add an extra PMU function prototype to kvm_pmu_ops to disable PMU MSR interception.
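Because the new op is declared KVM_X86_PMU_OP_OPTIONAL, a vendor module may leave it unset. A minimal userspace model of that NULL-safe dispatch (the kernel uses static_call_cond() instead of an explicit NULL check; names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of an OPTIONAL pmu op: the wrapper invokes the callback only
 * when the vendor installed one. */
struct pmu_ops {
	void (*passthrough_pmu_msrs)(int *vcpu_state);
};

static void pmu_passthrough_pmu_msrs(const struct pmu_ops *ops, int *vcpu_state)
{
	if (ops->passthrough_pmu_msrs)  /* optional op: may be NULL */
		ops->passthrough_pmu_msrs(vcpu_state);
}

/* Stand-in for a vendor implementation such as intel_passthrough_pmu_msrs(). */
static void fake_vendor_hook(int *vcpu_state)
{
	*vcpu_state = 1;
}
```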
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h | 1 +
 arch/x86/kvm/cpuid.c                   | 4 ++++
 arch/x86/kvm/pmu.c                     | 5 +++++
 arch/x86/kvm/pmu.h                     | 2 ++
 4 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index fd986d5146e4..1b7876dcb3c3 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -24,6 +24,7 @@ KVM_X86_PMU_OP(is_rdpmc_passthru_allowed)
 KVM_X86_PMU_OP_OPTIONAL(reset)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)
+KVM_X86_PMU_OP_OPTIONAL(passthrough_pmu_msrs)
 
 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f2f2be5d1141..3deb79b39847 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -381,6 +381,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu->arch.reserved_gpa_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu);
 
 	kvm_pmu_refresh(vcpu);
+
+	if (is_passthrough_pmu_enabled(vcpu))
+		kvm_pmu_passthrough_pmu_msrs(vcpu);
+
 	vcpu->arch.cr4_guest_rsvd_bits =
 	    __cr4_reserved_bits(guest_cpuid_has, vcpu);

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3afefe4cf6e2..bd94f2d67f5c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1059,3 +1059,8 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	kfree(filter);
 	return r;
 }
+
+void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
+{
+	static_call_cond(kvm_x86_pmu_passthrough_pmu_msrs)(vcpu);
+}

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index e1af6d07b191..63f876557716 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -41,6 +41,7 @@ struct kvm_pmu_ops {
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
 	bool (*is_rdpmc_passthru_allowed)(struct kvm_vcpu *vcpu);
+	void (*passthrough_pmu_msrs)(struct kvm_vcpu *vcpu);
 
 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
@@ -292,6 +293,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
 bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu);
+void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);

From patchwork Thu Aug 1 04:58:37 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749551
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:37 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-29-mizhang@google.com>
Subject: [RFC PATCH v3 28/58] KVM: x86/pmu: Add intel_passthrough_pmu_msrs() to pass-through PMU MSRs
From: Mingwei Zhang

From: Dapeng Mi

Event selectors for GP counters and the fixed counter control MSR remain intercepted for security, i.e., to prevent the guest from using disallowed events to steal information or take advantage of any CPU errata. Other than the event selectors, disable interception of the PMU counter MSRs specified in the guest CPUID; counter MSR indices outside the exposed range will still be intercepted. Global registers like global_ctrl pass through only if the PMU version is greater than 1.
Signed-off-by: Dapeng Mi
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/cpuid.c         |  3 +--
 arch/x86/kvm/vmx/pmu_intel.c | 47 ++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3deb79b39847..f01e2f1ccce1 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -382,8 +382,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);

-	if (is_passthrough_pmu_enabled(vcpu))
-		kvm_pmu_passthrough_pmu_msrs(vcpu);
+	kvm_pmu_passthrough_pmu_msrs(vcpu);

 	vcpu->arch.cr4_guest_rsvd_bits = __cr4_reserved_bits(guest_cpuid_has, vcpu);

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 02c9019c6f85..737de5bf1eee 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -740,6 +740,52 @@ static bool intel_is_rdpmc_passthru_allowed(struct kvm_vcpu *vcpu)
 	return true;
 }

+/*
+ * Set up PMU MSR interception for both the mediated passthrough vPMU and the
+ * legacy emulated vPMU. Note that this function is called each time userspace
+ * sets CPUID.
+ */
+static void intel_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
+{
+	bool msr_intercept = !is_passthrough_pmu_enabled(vcpu);
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	int i;
+
+	/*
+	 * Unexposed PMU MSRs are intercepted by default. However,
+	 * KVM_SET_CPUID{,2} may be invoked multiple times. To ensure MSR
+	 * interception is correct after each call of setting CPUID, explicitly
+	 * touch the MSR bitmap for each PMU MSR.
+	 */
+	for (i = 0; i < kvm_pmu_cap.num_counters_gp; i++) {
+		if (i >= pmu->nr_arch_gp_counters)
+			msr_intercept = true;
+		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PERFCTR0 + i, MSR_TYPE_RW, msr_intercept);
+		if (fw_writes_is_enabled(vcpu))
+			vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i, MSR_TYPE_RW, msr_intercept);
+		else
+			vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i, MSR_TYPE_RW, true);
+	}
+
+	msr_intercept = !is_passthrough_pmu_enabled(vcpu);
+	for (i = 0; i < kvm_pmu_cap.num_counters_fixed; i++) {
+		if (i >= pmu->nr_arch_fixed_counters)
+			msr_intercept = true;
+		vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_TYPE_RW, msr_intercept);
+	}
+
+	if (pmu->version > 1 && is_passthrough_pmu_enabled(vcpu) &&
+	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp &&
+	    pmu->nr_arch_fixed_counters == kvm_pmu_cap.num_counters_fixed)
+		msr_intercept = false;
+	else
+		msr_intercept = true;
+
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_STATUS, MSR_TYPE_RW, msr_intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL, MSR_TYPE_RW, msr_intercept);
+	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_OVF_CTRL, MSR_TYPE_RW, msr_intercept);
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
@@ -752,6 +798,7 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
 	.is_rdpmc_passthru_allowed = intel_is_rdpmc_passthru_allowed,
+	.passthrough_pmu_msrs = intel_passthrough_pmu_msrs,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_INTEL_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = 1,
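[Editorial note, not part of the patch] The interception policy above can be modeled as a standalone toy in plain C. The function names (intercept_perfctr, intercept_fw_pmc) and the flat parameters are invented for illustration; they are not KVM APIs. The model captures the rule that a counter MSR is passed through only when the passthrough vPMU is enabled and the counter index is exposed to the guest via CPUID, and that full-width MSR_IA32_PMC* aliases additionally require full-width-writes support:

```c
#include <stdbool.h>
#include <assert.h>

/*
 * Toy model of intel_passthrough_pmu_msrs(): MSR_IA32_PERFCTR0+idx is
 * intercepted unless the passthrough vPMU is on AND the index is within
 * the guest-visible counter range. Unexposed counters stay intercepted.
 */
static bool intercept_perfctr(bool passthrough_enabled, int idx,
			      int nr_guest_counters)
{
	bool intercept = !passthrough_enabled;

	if (idx >= nr_guest_counters)
		intercept = true;	/* counter not exposed via CPUID */
	return intercept;
}

/*
 * MSR_IA32_PMC* (full-width alias) is passed through only when the guest
 * also has full-width counter writes enabled; otherwise always intercept.
 */
static bool intercept_fw_pmc(bool passthrough_enabled, int idx,
			     int nr_guest_counters, bool fw_writes)
{
	if (!fw_writes)
		return true;
	return intercept_perfctr(passthrough_enabled, idx, nr_guest_counters);
}
```

A counter beyond nr_guest_counters is intercepted even with passthrough enabled, which is exactly why the loop in the patch walks all of kvm_pmu_cap.num_counters_gp rather than only the guest's counters.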
Date: Thu, 1 Aug 2024 04:58:38 +0000
Message-ID: <20240801045907.4010984-30-mizhang@google.com>
Subject: [RFC PATCH v3 29/58] KVM: x86/pmu: Avoid legacy vPMU code when accessing global_ctrl in passthrough vPMU
From: Mingwei Zhang

Avoid calling into legacy/emulated vPMU logic such as reprogram_counters() when the passthrough vPMU is enabled. Note that even with the passthrough vPMU enabled, global_ctrl may still be intercepted if the guest VM sees only a subset of the counters.

Suggested-by: Xiong Zhang
Signed-off-by: Mingwei Zhang
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index bd94f2d67f5c..e9047051489e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -713,7 +713,8 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->global_ctrl != data) {
 			diff = pmu->global_ctrl ^ data;
 			pmu->global_ctrl = data;
-			reprogram_counters(pmu, diff);
+			if (!is_passthrough_pmu_enabled(vcpu))
+				reprogram_counters(pmu, diff);
 		}
 		break;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
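[Editorial note, not part of the patch] The guarded write path above can be sketched as a toy in plain C. set_global_ctrl is an invented name; the returned bitmask stands in for the diff passed to reprogram_counters(). The point is that the emulated vPMU reprograms only counters whose enable bits changed, while the passthrough vPMU updates the shadowed value but skips reprogramming entirely:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/*
 * Toy model of the MSR_CORE_PERF_GLOBAL_CTRL write path: returns the
 * bitmask of counters that would be handed to reprogram_counters(),
 * which is empty when nothing changed or when passthrough is enabled.
 */
static uint64_t set_global_ctrl(uint64_t *global_ctrl, uint64_t data,
				bool passthrough)
{
	uint64_t diff = 0;

	if (*global_ctrl != data) {
		diff = *global_ctrl ^ data;
		*global_ctrl = data;	/* shadow is updated either way */
		if (passthrough)
			diff = 0;	/* no legacy reprogramming */
	}
	return diff;
}
```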
Date: Thu, 1 Aug 2024 04:58:39 +0000
Message-ID: <20240801045907.4010984-31-mizhang@google.com>
Subject: [RFC PATCH v3 30/58] KVM: x86/pmu: Exclude PMU MSRs in vmx_get_passthrough_msr_slot()
From: Mingwei Zhang

Explicitly reject PMU MSRs in vmx_get_passthrough_msr_slot(), since interception of PMU MSRs is handled specially in intel_passthrough_pmu_msrs().

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/kvm/vmx/vmx.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 34a420fa98c5..41102658ed21 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -166,7 +166,7 @@ module_param(enable_passthrough_pmu, bool, 0444);

 /*
  * List of MSRs that can be directly passed to the guest.
- * In addition to these x2apic, PT and LBR MSRs are handled specially.
+ * In addition to these x2apic, PMU, PT and LBR MSRs are handled specially.
  */
 static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
 	MSR_IA32_SPEC_CTRL,
@@ -695,6 +695,13 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
+	case MSR_IA32_PMC0 ... MSR_IA32_PMC0 + 7:
+	case MSR_IA32_PERFCTR0 ... MSR_IA32_PERFCTR0 + 7:
+	case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR0 + 2:
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		/* PMU MSRs. These are handled in intel_passthrough_pmu_msrs() */
 		return -ENOENT;
 	}
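[Editorial note, not part of the patch] The excluded ranges above can be checked in isolation with a small toy function. pmu_msr_slot is an invented name; the MSR addresses are the architectural values and -2 stands in for the kernel's -ENOENT. It mirrors only the PMU cases of the switch; any other MSR falls through (returned here as 0):

```c
#include <assert.h>

/* Architectural MSR addresses (Intel SDM Vol. 4). */
#define MSR_IA32_PERFCTR0             0x0c1
#define MSR_CORE_PERF_FIXED_CTR0      0x309
#define MSR_CORE_PERF_GLOBAL_STATUS   0x38e
#define MSR_CORE_PERF_GLOBAL_CTRL     0x38f
#define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x390
#define MSR_IA32_PMC0                 0x4c1

/*
 * Toy version of the PMU arm of vmx_get_passthrough_msr_slot(): PMU MSRs
 * never get a possible-passthrough slot because their interception is
 * managed separately per vCPU.
 */
static int pmu_msr_slot(unsigned int msr)
{
	if ((msr >= MSR_IA32_PMC0 && msr <= MSR_IA32_PMC0 + 7) ||
	    (msr >= MSR_IA32_PERFCTR0 && msr <= MSR_IA32_PERFCTR0 + 7) ||
	    (msr >= MSR_CORE_PERF_FIXED_CTR0 && msr <= MSR_CORE_PERF_FIXED_CTR0 + 2) ||
	    msr == MSR_CORE_PERF_GLOBAL_STATUS ||
	    msr == MSR_CORE_PERF_GLOBAL_CTRL ||
	    msr == MSR_CORE_PERF_GLOBAL_OVF_CTRL)
		return -2;	/* -ENOENT: handled in intel_passthrough_pmu_msrs() */
	return 0;		/* would fall through to the slot lookup */
}
```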
Date: Thu, 1 Aug 2024 04:58:40 +0000
Message-ID: <20240801045907.4010984-32-mizhang@google.com>
Subject: [RFC PATCH v3 31/58] KVM: x86/pmu: Add counter MSR and selector MSR index into struct kvm_pmc
From: Mingwei Zhang

Add the MSR indices for both the selector and the counter to each kvm_pmc, making it convenient for the mediated passthrough vPMU to look up the MSRs for a given pmc. Note that the legacy vPMU does not need this because it never directly accesses PMU MSRs; instead, each kvm_pmc is bound to a perf_event.

On actual Zen 4 and later hardware, it will never be the case that the PerfMonV2 CPUID bit is set but the PerfCtrCore bit is not. However, a guest can be booted with PerfMonV2 enabled and PerfCtrCore disabled, and KVM does not clear the PerfMonV2 bit from guest CPUID as long as the host has the PerfCtrCore capability. In this case, passthrough mode would use the K7 legacy MSRs to program events, but with the incorrect assumption that there are 6 such counters instead of the 4 advertised by CPUID leaf 0x80000022 EBX. The host kernel would also report unchecked MSR accesses for the absent counters while saving or restoring guest PMU contexts. Ensure that the K7 legacy MSRs are not used as long as the guest CPUID has either PerfCtrCore or PerfMonV2 set.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/pmu.c          | 13 +++++++++++++
 arch/x86/kvm/vmx/pmu_intel.c    | 13 +++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4b3ce6194bdb..603727312f9c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -522,6 +522,8 @@ struct kvm_pmc {
 	 */
 	u64 emulated_counter;
 	u64 eventsel;
+	u64 msr_counter;
+	u64 msr_eventsel;
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
 	/*

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 6b471b1ec9b8..64060cbd8210 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -177,6 +177,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	union cpuid_0x80000022_ebx ebx;
+	int i;

 	pmu->version = 1;
 	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFMON_V2)) {
@@ -210,6 +211,18 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->nr_arch_fixed_counters = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
+
+	if (pmu->version > 1 || guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+			pmu->gp_counters[i].msr_eventsel = MSR_F15H_PERF_CTL0 + 2 * i;
+			pmu->gp_counters[i].msr_counter = MSR_F15H_PERF_CTR0 + 2 * i;
+		}
+	} else {
+		for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+			pmu->gp_counters[i].msr_eventsel = MSR_K7_EVNTSEL0 + i;
+			pmu->gp_counters[i].msr_counter = MSR_K7_PERFCTR0 + i;
+		}
+	}
 }

 static void amd_pmu_init(struct kvm_vcpu *vcpu)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 737de5bf1eee..0de918dc14ea 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -562,6 +562,19 @@
static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 				~((1ull << pmu->nr_arch_gp_counters) - 1);
 		}
 	}
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		pmu->gp_counters[i].msr_eventsel = MSR_P6_EVNTSEL0 + i;
+		if (fw_writes_is_enabled(vcpu))
+			pmu->gp_counters[i].msr_counter = MSR_IA32_PMC0 + i;
+		else
+			pmu->gp_counters[i].msr_counter = MSR_IA32_PERFCTR0 + i;
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmu->fixed_counters[i].msr_eventsel = MSR_CORE_PERF_FIXED_CTR_CTRL;
+		pmu->fixed_counters[i].msr_counter = MSR_CORE_PERF_FIXED_CTR0 + i;
+	}
 }

 static void intel_pmu_init(struct kvm_vcpu *vcpu)

Date: Thu, 1 Aug 2024 04:58:41 +0000
Message-ID: <20240801045907.4010984-33-mizhang@google.com>
Subject: [RFC PATCH v3 32/58] KVM: x86/pmu: Introduce PMU operation prototypes for save/restore PMU context
From: Mingwei Zhang

Extend kvm_pmu_ops with these two extra functions to allow PMU context switching.
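[Editorial note, not part of the patch] The optional-op plumbing this patch introduces can be modeled with plain function pointers. The kernel uses static_call_cond() so that an unimplemented vendor op compiles down to a no-op; the sketch below imitates that behavior with a NULL check. struct vcpu, pmu_save_context, and the test callback are invented names for illustration only:

```c
#include <stddef.h>
#include <assert.h>

struct vcpu;	/* opaque stand-in for struct kvm_vcpu */

/* Vendor op table; both callbacks are optional (may be NULL). */
struct pmu_ops {
	void (*save_pmu_context)(struct vcpu *vcpu);
	void (*restore_pmu_context)(struct vcpu *vcpu);
};

/*
 * Generic wrapper: invoke the vendor callback only when one is
 * registered, mirroring KVM_X86_PMU_OP_OPTIONAL / static_call_cond().
 */
static void pmu_save_context(const struct pmu_ops *ops, struct vcpu *vcpu)
{
	if (ops->save_pmu_context)
		ops->save_pmu_context(vcpu);
}

/* Instrumented callback used to observe that the wrapper dispatched. */
static int n_saves;
static void count_save(struct vcpu *vcpu)
{
	(void)vcpu;
	n_saves++;
}
```

The real wrappers additionally assert that interrupts are disabled (lockdep_assert_irqs_disabled()), since the context switch must not race with a PMI.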
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  2 ++
 arch/x86/kvm/pmu.c                     | 14 ++++++++++++++
 arch/x86/kvm/pmu.h                     |  4 ++++
 3 files changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 1b7876dcb3c3..1a848ba6a7a7 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -25,6 +25,8 @@ KVM_X86_PMU_OP_OPTIONAL(reset)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)
 KVM_X86_PMU_OP_OPTIONAL(passthrough_pmu_msrs)
+KVM_X86_PMU_OP_OPTIONAL(save_pmu_context)
+KVM_X86_PMU_OP_OPTIONAL(restore_pmu_context)

 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e9047051489e..782b564bdf96 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1065,3 +1065,17 @@ void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
 {
 	static_call_cond(kvm_x86_pmu_passthrough_pmu_msrs)(vcpu);
 }
+
+void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu)
+{
+	lockdep_assert_irqs_disabled();
+
+	static_call_cond(kvm_x86_pmu_save_pmu_context)(vcpu);
+}
+
+void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu)
+{
+	lockdep_assert_irqs_disabled();
+
+	static_call_cond(kvm_x86_pmu_restore_pmu_context)(vcpu);
+}

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 63f876557716..8bd4b79e363f 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -42,6 +42,8 @@ struct kvm_pmu_ops {
 	void (*cleanup)(struct kvm_vcpu *vcpu);
 	bool (*is_rdpmc_passthru_allowed)(struct kvm_vcpu *vcpu);
 	void (*passthrough_pmu_msrs)(struct kvm_vcpu *vcpu);
+	void (*save_pmu_context)(struct kvm_vcpu *vcpu);
+	void (*restore_pmu_context)(struct kvm_vcpu *vcpu);

 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
@@ -294,6 +296,8 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
 bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu);
 void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu);
+void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu);
+void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu);

 bool is_vmware_backdoor_pmc(u32 pmc_idx);

From patchwork Thu Aug 1 04:58:42 2024
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--mizhang.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="X3h/OsZI" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-1fc47634e3dso51896945ad.0 for ; Wed, 31 Jul 2024 22:00:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1722488412; x=1723093212; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=JRgfZdTWC6ROEv8+dJ/jds7GWG7YC6PLcmHROAgx7aE=; b=X3h/OsZIW3U6Fr3X2MCMt5f1w1DygOp2xpMaRauk1aZItcHi94qEJWLYKYaBv7d4gp Rl6KZ8MobqcTCLcCSfiAGJUAePp8aysBlyXlIYhHUEuPijkpUtQpBpVr1MuB0zXRpIVq 1LV8JAeorlHAWpuZy5AXdARTZjMIUtZTXplZ49uOGz+jk6p/bjyrf1OSgDLPENKQ7Cy9 rJzzXxU8VZgLcTzIUjj52CDhT46svJYSbjfjIpWjAhUhcJ1n8yCPBo94JA+ZorATjK1i FU7lg8mxYrQxJ5yz/suzlXskvtrxp3F2a5D7GeFx6DOEXs2vlwZr2qar7uLv9+30nKMn ABgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1722488412; x=1723093212; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=JRgfZdTWC6ROEv8+dJ/jds7GWG7YC6PLcmHROAgx7aE=; b=pNFJQl31CcdYsNv5XovwXuT46z+OqDY5EWWGqC75O3t5COEyGf77LjrlG10JI8/wU8 WoCC1XJCPbchqCIOX/l+WEA45U20U+ypR9sIebijzZFx5QgZsLw89BSgVGLKBuq9p9Dk PmVrAkUHmgh0GeGJbXX/9BBAp50ucox+M0Q7m4cSX85ZWsuOz8t/LQXqnPx65Qv6/0VG 02s1I7FQ0U28sUFZZqKo90SB4vWJTbKFfADrGuRabH4y9NFv2yS1wBj6WnjOS6xJDaRW IJL29pnipz9LXPytpPrsnICoK5i7zDMA37HlsFkyOOx5hC2DEGkeNwzxLdT3Wmj6/oEL 10ow== X-Forwarded-Encrypted: i=1; AJvYcCXKX89lFlH5KSQ5QYz8s2rKukiGAsZAVopE+f3rQDGJCX5z5VwuTY3v3/fElrgAvi+8CSv7feVcYyeFqAvu1S4ZvCOJ X-Gm-Message-State: AOJu0Yxp2j4tiSA+cFahPQ+M/aJQGa9JEkCofdrJRsrHDSA31FLuWpYE hmvkGw518FQJWt7OkkC6mwUr6Ugmdl9Hd7p7yzDO1hUT/C19eOqJm1i/emzeyWWtXlKf2/DSjku OwDKnkQ== X-Google-Smtp-Source: 
AGHT+IFVXsLsgX4z9IgU0GJZDbgbP5o4RXhYLKjV6/E4VfUKjCf3tJaeScC15MZLxWjRr53zsR7h+SzEyiFz X-Received: from mizhang-super.c.googlers.com ([34.105.13.176]) (user=mizhang job=sendgmr) by 2002:a17:902:e748:b0:1fb:54d9:ebb3 with SMTP id d9443c01a7336-1ff4ce9bc66mr989875ad.6.1722488411752; Wed, 31 Jul 2024 22:00:11 -0700 (PDT) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:42 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-34-mizhang@google.com> Subject: [RFC PATCH v3 33/58] KVM: x86/pmu: Implement the save/restore of PMU state for Intel CPU From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, linux-perf-users@vger.kernel.org Implement the save/restore of PMU state for pasthrough PMU in Intel. In passthrough mode, KVM owns exclusively the PMU HW when control flow goes to the scope of passthrough PMU. Thus, KVM needs to save the host PMU state and gains the full HW PMU ownership. On the contrary, host regains the ownership of PMU HW from KVM when control flow leaves the scope of passthrough PMU. Implement PMU context switches for Intel CPUs and opptunistically use rdpmcl() instead of rdmsrl() when reading counters since the former has lower latency in Intel CPUs. 
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
Signed-off-by: Mingwei Zhang
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.c           | 46 ++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/pmu_intel.c | 41 +++++++++++++++++++++++++++++++-
 2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 782b564bdf96..9bb733384069 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1068,14 +1068,60 @@ void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
 
 void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	u32 i;
+
 	lockdep_assert_irqs_disabled();
 
 	static_call_cond(kvm_x86_pmu_save_pmu_context)(vcpu);
+
+	/*
+	 * Clear the hardware selector MSRs and counters to avoid leakage and
+	 * to keep these guest GP counters from being accidentally enabled
+	 * while the host is running and enables the global ctrl.
+	 */
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		pmc = &pmu->gp_counters[i];
+		rdpmcl(i, pmc->counter);
+		rdmsrl(pmc->msr_eventsel, pmc->eventsel);
+		if (pmc->counter)
+			wrmsrl(pmc->msr_counter, 0);
+		if (pmc->eventsel)
+			wrmsrl(pmc->msr_eventsel, 0);
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		rdpmcl(INTEL_PMC_FIXED_RDPMC_BASE | i, pmc->counter);
+		if (pmc->counter)
+			wrmsrl(pmc->msr_counter, 0);
+	}
 }
 
 void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int i;
+
 	lockdep_assert_irqs_disabled();
 
 	static_call_cond(kvm_x86_pmu_restore_pmu_context)(vcpu);
+
+	/*
+	 * No need to zero out unexposed GP/fixed counters/selectors, since
+	 * RDPMC is intercepted in that case and accesses to these counters
+	 * and selectors cause #GP in the guest.
+	 */
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		pmc = &pmu->gp_counters[i];
+		wrmsrl(pmc->msr_counter, pmc->counter);
+		wrmsrl(pmc->msr_eventsel, pmu->gp_counters[i].eventsel);
+	}
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		wrmsrl(pmc->msr_counter, pmc->counter);
+	}
 }

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 0de918dc14ea..89c8f73a48c8 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -572,7 +572,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	}
 
 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
-		pmu->fixed_counters[i].msr_eventsel = MSR_CORE_PERF_FIXED_CTR_CTRL;
+		pmu->fixed_counters[i].msr_eventsel = 0;
 		pmu->fixed_counters[i].msr_counter = MSR_CORE_PERF_FIXED_CTR0 + i;
 	}
 }
@@ -799,6 +799,43 @@ static void intel_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
 	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_OVF_CTRL, MSR_TYPE_RW, msr_intercept);
 }
 
+static void intel_save_guest_pmu_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	/* Global ctrl register is already saved at VM-exit. */
+	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, pmu->global_status);
+	/* Clear hardware MSR_CORE_PERF_GLOBAL_STATUS MSR, if non-zero. */
+	if (pmu->global_status)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, pmu->global_status);
+
+	rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
+	/*
+	 * Clear the hardware FIXED_CTR_CTRL MSR to avoid information leakage
+	 * and to keep these guest fixed counters from being accidentally
+	 * enabled while the host is running and enables the global ctrl.
+	 */
+	if (pmu->fixed_ctr_ctrl)
+		wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
+}
+
+static void intel_restore_guest_pmu_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u64 global_status, toggle;
+
+	/* Clear host global_ctrl MSR if non-zero. */
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, global_status);
+	toggle = pmu->global_status ^ global_status;
+	if (global_status & toggle)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, global_status & toggle);
+	if (pmu->global_status & toggle)
+		wrmsrl(MSR_CORE_PERF_GLOBAL_STATUS_SET, pmu->global_status & toggle);
+
+	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
@@ -812,6 +849,8 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.cleanup = intel_pmu_cleanup,
 	.is_rdpmc_passthru_allowed = intel_is_rdpmc_passthru_allowed,
 	.passthrough_pmu_msrs = intel_passthrough_pmu_msrs,
+	.save_pmu_context = intel_save_guest_pmu_context,
+	.restore_pmu_context = intel_restore_guest_pmu_context,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_INTEL_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = 1,

From patchwork Thu Aug 1 04:58:43 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749557
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:43 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-35-mizhang@google.com>
Subject: [RFC PATCH v3 34/58] KVM: x86/pmu: Make check_pmu_event_filter() an exported function
From: Mingwei Zhang

Export check_pmu_event_filter() so that it is usable by vendor modules
such as kvm_intel. This is needed because the passthrough PMU intercepts
guest writes to the event selectors and performs the event filter check
directly in the vendor-specific set_msr() handler, instead of deferring
it to the KVM_REQ_PMU handler.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/kvm/pmu.c | 3 ++-
 arch/x86/kvm/pmu.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 9bb733384069..9aa08472b7df 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -443,7 +443,7 @@ static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter,
 	return true;
 }
 
-static bool check_pmu_event_filter(struct kvm_pmc *pmc)
+bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
 	struct kvm_x86_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
@@ -457,6 +457,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 
 	return is_fixed_event_allowed(filter, pmc->idx);
 }
+EXPORT_SYMBOL_GPL(check_pmu_event_filter);
 
 static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
 {

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 8bd4b79e363f..9cde62f3988e 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -298,6 +298,7 @@ bool kvm_pmu_check_rdpmc_passthrough(struct kvm_vcpu *vcpu);
 void kvm_pmu_passthrough_pmu_msrs(struct kvm_vcpu *vcpu);
 void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu);
 void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu);
+bool check_pmu_event_filter(struct kvm_pmc *pmc);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);

From patchwork Thu Aug 1 04:58:44 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749558
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:44 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-36-mizhang@google.com>
Subject: [RFC PATCH v3 35/58] KVM: x86/pmu: Allow writing to event selector for GP counters if event is allowed
From: Mingwei Zhang

Only allow a write to an event selector to take effect in hardware if the
event is allowed by the event filter. Since the passthrough PMU performs
the PMU context switch at the VM-Enter/Exit boundary, even a value that
passes the check cannot be written directly to hardware, because the PMU
hardware is owned by the host at that moment. Therefore, introduce
eventsel_hw to cache the value, which is written to hardware just before
VM entry.

Note that regardless of whether an event value is allowed, the value is
cached in pmc->eventsel and the guest can always read the cached value
back. This behavior is consistent with the hardware CPU design.
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              |  5 ++---
 arch/x86/kvm/vmx/pmu_intel.c    | 13 ++++++++++++-
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 603727312f9c..e5c288d4264f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -522,6 +522,7 @@ struct kvm_pmc {
 	 */
 	u64 emulated_counter;
 	u64 eventsel;
+	u64 eventsel_hw;
 	u64 msr_counter;
 	u64 msr_eventsel;
 	struct perf_event *perf_event;

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 9aa08472b7df..545930f743b9 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1085,10 +1085,9 @@ void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu)
 	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
 		pmc = &pmu->gp_counters[i];
 		rdpmcl(i, pmc->counter);
-		rdmsrl(pmc->msr_eventsel, pmc->eventsel);
 		if (pmc->counter)
 			wrmsrl(pmc->msr_counter, 0);
-		if (pmc->eventsel)
+		if (pmc->eventsel_hw)
 			wrmsrl(pmc->msr_eventsel, 0);
 	}
 
@@ -1118,7 +1117,7 @@ void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu)
 	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
 		pmc = &pmu->gp_counters[i];
 		wrmsrl(pmc->msr_counter, pmc->counter);
-		wrmsrl(pmc->msr_eventsel, pmu->gp_counters[i].eventsel);
+		wrmsrl(pmc->msr_eventsel, pmu->gp_counters[i].eventsel_hw);
 	}
 
 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 89c8f73a48c8..0cd38c5632ee 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -399,7 +399,18 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data & reserved_bits)
 			return 1;
 
-		if (data != pmc->eventsel) {
+		if (is_passthrough_pmu_enabled(vcpu)) {
+			pmc->eventsel = data;
+			if (!check_pmu_event_filter(pmc)) {
+				if (pmc->eventsel_hw &
+				    ARCH_PERFMON_EVENTSEL_ENABLE) {
+					pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
+					pmc->counter = 0;
+				}
+				return 0;
+			}
+			pmc->eventsel_hw = data;
+		} else if (data != pmc->eventsel) {
 			pmc->eventsel = data;
 			kvm_pmu_request_counter_reprogram(pmc);
 		}

From patchwork Thu Aug 1 04:58:45 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749559
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:45 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-37-mizhang@google.com>
Subject: [RFC PATCH v3 36/58] KVM: x86/pmu: Allow writing to fixed counter selector if counter is exposed
From: Mingwei Zhang

Allow writing to the fixed counter selector if the counter is exposed. If
a fixed counter is filtered out, that counter will not be enabled in
hardware.

The passthrough PMU performs the context switch at the VM-Enter/Exit
boundary, so the guest value cannot be written directly to hardware,
which is owned by the host at that point. Introduce a new field,
fixed_ctr_ctrl_hw, in kvm_pmu to cache the guest value, which is written
to hardware at PMU context restore.

Since the passthrough PMU intercepts writes to the fixed counter
selector, there is no need to read the value at PMU context save, but the
fixed counter control MSR and counters are still cleared when switching
out to the host PMU.
Signed-off-by: Mingwei Zhang Signed-off-by: Dapeng Mi Tested-by: Yongwei Ma --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/vmx/pmu_intel.c | 28 ++++++++++++++++++++++++---- 2 files changed, 25 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e5c288d4264f..93c17da8271d 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -549,6 +549,7 @@ struct kvm_pmu { unsigned nr_arch_fixed_counters; unsigned available_event_types; u64 fixed_ctr_ctrl; + u64 fixed_ctr_ctrl_hw; u64 fixed_ctr_ctrl_mask; u64 global_ctrl; u64 global_status; diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 0cd38c5632ee..c61936266cbd 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -34,6 +34,25 @@ #define MSR_PMC_FULL_WIDTH_BIT (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0) +static void reprogram_fixed_counters_in_passthrough_pmu(struct kvm_pmu *pmu, u64 data) +{ + struct kvm_pmc *pmc; + u64 new_data = 0; + int i; + + for (i = 0; i < pmu->nr_arch_fixed_counters; i++) { + pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); + if (check_pmu_event_filter(pmc)) { + pmc->current_config = fixed_ctrl_field(data, i); + new_data |= (pmc->current_config << (i * 4)); + } else { + pmc->counter = 0; + } + } + pmu->fixed_ctr_ctrl_hw = new_data; + pmu->fixed_ctr_ctrl = data; +} + static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) { struct kvm_pmc *pmc; @@ -351,7 +370,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) if (data & pmu->fixed_ctr_ctrl_mask) return 1; - if (pmu->fixed_ctr_ctrl != data) + if (is_passthrough_pmu_enabled(vcpu)) + reprogram_fixed_counters_in_passthrough_pmu(pmu, data); + else if (pmu->fixed_ctr_ctrl != data) reprogram_fixed_counters(pmu, data); break; case MSR_IA32_PEBS_ENABLE: @@ -820,13 +841,12 @@ static void intel_save_guest_pmu_context(struct kvm_vcpu *vcpu) if (pmu->global_status) 
 		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, pmu->global_status);
-	rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
 	/*
 	 * Clear hardware FIXED_CTR_CTRL MSR to avoid information leakage and
 	 * also avoid these guest fixed counters get accidentially enabled
 	 * during host running when host enable global ctrl.
 	 */
-	if (pmu->fixed_ctr_ctrl)
+	if (pmu->fixed_ctr_ctrl_hw)
 		wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
 }
@@ -844,7 +864,7 @@ static void intel_restore_guest_pmu_context(struct kvm_vcpu *vcpu)
 	if (pmu->global_status & toggle)
 		wrmsrl(MSR_CORE_PERF_GLOBAL_STATUS_SET, pmu->global_status & toggle);
-	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
+	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl_hw);
 }

 struct kvm_pmu_ops intel_pmu_ops __initdata = {

From patchwork Thu Aug 1 04:58:46 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749560
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:46 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-38-mizhang@google.com>
Subject: [RFC PATCH v3 37/58] KVM: x86/pmu: Switch IA32_PERF_GLOBAL_CTRL at VM boundary
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

From: Xiong Zhang

In PMU passthrough mode, use the global_ctrl field in struct kvm_pmu as the cached value. This is convenient for KVM to set and get the value from the host side.
In addition, load and save the value across the VM enter/exit boundary in the following way:

- At VM exit, if the processor supports VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL, read the guest IA32_PERF_GLOBAL_CTRL from the GUEST_IA32_PERF_GLOBAL_CTRL VMCS field, else read it from the VM-exit MSR-store array in the VMCS. The value is then assigned to global_ctrl.

- At VM entry, if the processor supports VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL, write the cached global_ctrl to the GUEST_IA32_PERF_GLOBAL_CTRL VMCS field, else write it to the VM-entry MSR-load array in the VMCS.

Implement the above logic in two helper functions and invoke them around the VM enter/exit boundary.

Co-developed-by: Mingwei Zhang
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/vmx.c          | 49 ++++++++++++++++++++++++++++++++-
 2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 93c17da8271d..7bf901a53543 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -601,6 +601,8 @@ struct kvm_pmu {
 	u8 event_count;
 	bool passthrough;
+	int global_ctrl_slot_in_autoload;
+	int global_ctrl_slot_in_autostore;
 };

 struct kvm_pmu_ops;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 41102658ed21..b126de6569c8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4430,6 +4430,7 @@ static void vmx_set_perf_global_ctrl(struct vcpu_vmx *vmx)
 		}
 		m->val[i].index = MSR_CORE_PERF_GLOBAL_CTRL;
 		m->val[i].value = 0;
+		vcpu_to_pmu(&vmx->vcpu)->global_ctrl_slot_in_autoload = i;
 	}
 	/*
 	 * Setup auto clear host PERF_GLOBAL_CTRL msr at vm exit.
@@ -4457,6 +4458,7 @@ static void vmx_set_perf_global_ctrl(struct vcpu_vmx *vmx)
 			vmcs_write32(VM_EXIT_MSR_STORE_COUNT, m->nr);
 		}
 		m->val[i].index = MSR_CORE_PERF_GLOBAL_CTRL;
+		vcpu_to_pmu(&vmx->vcpu)->global_ctrl_slot_in_autostore = i;
 		}
 	} else {
 		if (!(vmentry_ctrl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)) {
@@ -4467,6 +4469,7 @@ static void vmx_set_perf_global_ctrl(struct vcpu_vmx *vmx)
 			m->val[i] = m->val[m->nr];
 			vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
 		}
+		vcpu_to_pmu(&vmx->vcpu)->global_ctrl_slot_in_autoload = -ENOENT;
 	}
 	if (!(vmexit_ctrl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)) {
 		m = &vmx->msr_autoload.host;
@@ -4485,6 +4488,7 @@ static void vmx_set_perf_global_ctrl(struct vcpu_vmx *vmx)
 			m->val[i] = m->val[m->nr];
 			vmcs_write32(VM_EXIT_MSR_STORE_COUNT, m->nr);
 		}
+		vcpu_to_pmu(&vmx->vcpu)->global_ctrl_slot_in_autostore = -ENOENT;
 	}
 }
@@ -7272,7 +7276,7 @@ void vmx_cancel_injection(struct kvm_vcpu *vcpu)
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
 }

-static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+static void __atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 {
 	int i, nr_msrs;
 	struct perf_guest_switch_msr *msrs;
@@ -7295,6 +7299,46 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 					msrs[i].host, false);
 }

+static void save_perf_global_ctrl_in_passthrough_pmu(struct vcpu_vmx *vmx)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
+	int i;
+
+	if (vm_exit_controls_get(vmx) & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL) {
+		pmu->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
+	} else {
+		i = pmu->global_ctrl_slot_in_autostore;
+		pmu->global_ctrl = vmx->msr_autostore.guest.val[i].value;
+	}
+}
+
+static void load_perf_global_ctrl_in_passthrough_pmu(struct vcpu_vmx *vmx)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
+	u64 global_ctrl = pmu->global_ctrl;
+	int i;
+
+	if (vm_entry_controls_get(vmx) & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) {
+		vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, global_ctrl);
+	} else {
+		i = pmu->global_ctrl_slot_in_autoload;
+		vmx->msr_autoload.guest.val[i].value = global_ctrl;
+	}
+}
+
+static void __atomic_switch_perf_msrs_in_passthrough_pmu(struct vcpu_vmx *vmx)
+{
+	load_perf_global_ctrl_in_passthrough_pmu(vmx);
+}
+
+static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+{
+	if (is_passthrough_pmu_enabled(&vmx->vcpu))
+		__atomic_switch_perf_msrs_in_passthrough_pmu(vmx);
+	else
+		__atomic_switch_perf_msrs(vmx);
+}
+
 static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7405,6 +7449,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	vcpu->arch.cr2 = native_read_cr2();
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;

+	if (is_passthrough_pmu_enabled(vcpu))
+		save_perf_global_ctrl_in_passthrough_pmu(vmx);
+
 	vmx->idt_vectoring_info = 0;

 	vmx_enable_fb_clear(vmx);

From patchwork Thu Aug 1 04:58:47 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749561
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:47 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-39-mizhang@google.com>
Subject: [RFC PATCH v3 38/58] KVM: x86/pmu: Exclude existing vLBR logic from the passthrough PMU
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Exclude the existing vLBR logic from the passthrough PMU because it does not support LBR-related MSRs.
To avoid any side effects, do not call vLBR-related code in either vcpu_enter_guest() or the PMI injection function.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
Tested-by: Yongwei Ma
---
 arch/x86/kvm/vmx/pmu_intel.c | 13 ++++++++-----
 arch/x86/kvm/vmx/vmx.c       |  2 +-
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c61936266cbd..40c503cd263b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -660,13 +660,16 @@ static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)

 static void intel_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 {
-	u8 version = vcpu_to_pmu(vcpu)->version;
+	u8 version;

-	if (!intel_pmu_lbr_is_enabled(vcpu))
-		return;
+	if (!is_passthrough_pmu_enabled(vcpu)) {
+		if (!intel_pmu_lbr_is_enabled(vcpu))
+			return;

-	if (version > 1 && version < 4)
-		intel_pmu_legacy_freezing_lbrs_on_pmi(vcpu);
+		version = vcpu_to_pmu(vcpu)->version;
+		if (version > 1 && version < 4)
+			intel_pmu_legacy_freezing_lbrs_on_pmi(vcpu);
+	}
 }

 static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b126de6569c8..a4b2b0b69a68 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7561,7 +7561,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	pt_guest_enter(vmx);

 	atomic_switch_perf_msrs(vmx);
-	if (intel_pmu_lbr_is_enabled(vcpu))
+	if (!is_passthrough_pmu_enabled(&vmx->vcpu) && intel_pmu_lbr_is_enabled(vcpu))
 		vmx_passthrough_lbr_msrs(vcpu);

 	if (enable_preemption_timer)

From patchwork Thu Aug 1 04:58:48 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749562
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:48 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-40-mizhang@google.com>
Subject: [RFC PATCH v3 39/58] KVM: x86/pmu: Notify perf core at KVM context switch boundary
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

From: Xiong Zhang

Before restoring the guest PMU context, call perf_guest_enter() to let the perf core sched out all exclude_guest perf events and switch the PMI to the dedicated KVM_GUEST_PMI_VECTOR. After saving the guest PMU context, call perf_guest_exit() to let the perf core switch the PMI back to NMI and sched the exclude_guest perf events back in.

Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/pmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 545930f743b9..5cc539bdcc7e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1097,6 +1097,8 @@ void kvm_pmu_save_pmu_context(struct kvm_vcpu *vcpu)
 		if (pmc->counter)
 			wrmsrl(pmc->msr_counter, 0);
 	}
+
+	perf_guest_exit();
 }

 void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu)
@@ -1107,6 +1109,8 @@ void kvm_pmu_restore_pmu_context(struct kvm_vcpu *vcpu)

 	lockdep_assert_irqs_disabled();

+	perf_guest_enter(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));
+
 	static_call_cond(kvm_x86_pmu_restore_pmu_context)(vcpu);

 	/*

From patchwork Thu Aug 1 04:58:49 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749563
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:49 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-41-mizhang@google.com>
Subject: [RFC PATCH v3 40/58] KVM: x86/pmu: Grab x86 core PMU for passthrough PMU VM
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

From: Xiong Zhang

When the passthrough PMU is enabled by KVM and perf, KVM calls perf_get_mediated_pmu() to take exclusive ownership of the x86 core PMU at VM creation, and calls perf_put_mediated_pmu() to return the x86 core PMU to host perf at VM destruction.

When perf_get_mediated_pmu() fails, the host has system-wide perf events without exclude_guest = 1, which must be disabled before a VM with the passthrough PMU can be enabled. Once a VM with the passthrough PMU starts, perf will refuse to create system-wide perf events without exclude_guest = 1 until the VM is closed.

Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/x86.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6db4dc496d2b..dd6d2c334d90 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6690,8 +6690,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (!kvm->created_vcpus) {
 			kvm->arch.enable_pmu = !(cap->args[0] & KVM_PMU_CAP_DISABLE);
 			/* Disable passthrough PMU if enable_pmu is false. */
-			if (!kvm->arch.enable_pmu)
+			if (!kvm->arch.enable_pmu) {
+				if (kvm->arch.enable_passthrough_pmu)
+					perf_put_mediated_pmu();
 				kvm->arch.enable_passthrough_pmu = false;
+			}
 			r = 0;
 		}
 		mutex_unlock(&kvm->lock);
@@ -12637,6 +12640,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.guest_can_read_msr_platform_info = true;
 	kvm->arch.enable_pmu = enable_pmu;
 	kvm->arch.enable_passthrough_pmu = enable_passthrough_pmu;
+	if (kvm->arch.enable_passthrough_pmu) {
+		ret = perf_get_mediated_pmu();
+		if (ret < 0) {
+			kvm_err("failed to enable mediated passthrough pmu, please disable system wide perf events\n");
+			goto out_uninit_mmu;
+		}
+	}
+
 #if IS_ENABLED(CONFIG_HYPERV)
 	spin_lock_init(&kvm->arch.hv_root_tdp_lock);
@@ -12785,6 +12796,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
+	if (kvm->arch.enable_passthrough_pmu)
+		perf_put_mediated_pmu();
 	kvm_unload_vcpu_mmus(kvm);
 	static_call_cond(kvm_x86_vm_destroy)(kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));

From patchwork Thu Aug 1 04:58:50 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749564
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:50 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-42-mizhang@google.com>
Subject: [RFC PATCH v3 41/58] KVM: x86/pmu: Add support for PMU context switch at VM-exit/enter
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

From: Xiong Zhang

Add the PMU context switch at the VM-entry/exit boundary: restore the
guest PMU context right before entering the guest and save it right
after the guest stops running.

Signed-off-by: Dapeng Mi
Signed-off-by: Xiong Zhang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/x86.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dd6d2c334d90..70274c0da017 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11050,6 +11050,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		set_debugreg(0, 7);
 	}

+	if (is_passthrough_pmu_enabled(vcpu))
+		kvm_pmu_restore_pmu_context(vcpu);
+
 	guest_timing_enter_irqoff();

 	for (;;) {
@@ -11078,6 +11081,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		++vcpu->stat.exits;
 	}

+	if (is_passthrough_pmu_enabled(vcpu))
+		kvm_pmu_save_pmu_context(vcpu);
+
 	/*
 	 * Do this here before restoring debug registers on the host. And
 	 * since we do this before handling the vmexit, a DR access vmexit

From patchwork Thu Aug 1 04:58:51 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749565
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:51 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-43-mizhang@google.com>
Subject: [RFC PATCH v3 42/58] KVM: x86/pmu: Introduce PMU operator to increment counter
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Introduce a PMU operator to increment a counter, because the passthrough
PMU has no common backend implementation such as the host perf API.
Having a PMU operator for counter increment and overflow checking helps
hide architectural differences, and makes it convenient for the
passthrough PMU to synthesize a PMI.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 +
 arch/x86/kvm/pmu.h                     |  1 +
 arch/x86/kvm/vmx/pmu_intel.c           | 12 ++++++++++++
 3 files changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 1a848ba6a7a7..72ca78df8d2b 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -27,6 +27,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
 KVM_X86_PMU_OP_OPTIONAL(passthrough_pmu_msrs)
 KVM_X86_PMU_OP_OPTIONAL(save_pmu_context)
 KVM_X86_PMU_OP_OPTIONAL(restore_pmu_context)
+KVM_X86_PMU_OP_OPTIONAL(incr_counter)

 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 9cde62f3988e..325f17673a00 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -44,6 +44,7 @@ struct kvm_pmu_ops {
 	void (*passthrough_pmu_msrs)(struct kvm_vcpu *vcpu);
 	void (*save_pmu_context)(struct kvm_vcpu *vcpu);
 	void (*restore_pmu_context)(struct kvm_vcpu *vcpu);
+	bool (*incr_counter)(struct kvm_pmc *pmc);

 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 40c503cd263b..42af2404bdb9 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -74,6 +74,17 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 	}
 }

+static bool intel_incr_counter(struct kvm_pmc *pmc)
+{
+	pmc->counter += 1;
+	pmc->counter &= pmc_bitmask(pmc);
+
+	if (!pmc->counter)
+		return true;
+
+	return false;
+}
+
 static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 					      unsigned int idx, u64 *mask)
 {
@@ -885,6 +896,7 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.passthrough_pmu_msrs = intel_passthrough_pmu_msrs,
 	.save_pmu_context = intel_save_guest_pmu_context,
 	.restore_pmu_context = intel_restore_guest_pmu_context,
+	.incr_counter = intel_incr_counter,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_INTEL_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = 1,

From patchwork Thu Aug 1 04:58:52 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749566
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:52 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-44-mizhang@google.com>
Subject: [RFC PATCH v3 43/58] KVM: x86/pmu: Introduce PMU operator for setting counter overflow
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Introduce a PMU operator for setting counter overflow. When emulating a
counter increment, multiple counters could overflow at the same time,
i.e., during the execution of the same instruction. In the passthrough
PMU, having a PMU operator makes it convenient to update the PMU global
status in one shot, with the details hidden behind the vendor-specific
implementation.
Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h | 1 +
 arch/x86/kvm/pmu.h                     | 1 +
 arch/x86/kvm/vmx/pmu_intel.c           | 5 +++++
 3 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 72ca78df8d2b..bd5b118a5ce5 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -28,6 +28,7 @@ KVM_X86_PMU_OP_OPTIONAL(passthrough_pmu_msrs)
 KVM_X86_PMU_OP_OPTIONAL(save_pmu_context)
 KVM_X86_PMU_OP_OPTIONAL(restore_pmu_context)
 KVM_X86_PMU_OP_OPTIONAL(incr_counter)
+KVM_X86_PMU_OP_OPTIONAL(set_overflow)

 #undef KVM_X86_PMU_OP
 #undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 325f17673a00..78a7f0c5f3ba 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -45,6 +45,7 @@ struct kvm_pmu_ops {
 	void (*save_pmu_context)(struct kvm_vcpu *vcpu);
 	void (*restore_pmu_context)(struct kvm_vcpu *vcpu);
 	bool (*incr_counter)(struct kvm_pmc *pmc);
+	void (*set_overflow)(struct kvm_vcpu *vcpu);

 	const u64 EVENTSEL_EVENT;
 	const int MAX_NR_GP_COUNTERS;
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 42af2404bdb9..2d46c911f0b7 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -881,6 +881,10 @@ static void intel_restore_guest_pmu_context(struct kvm_vcpu *vcpu)
 	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl_hw);
 }

+static void intel_set_overflow(struct kvm_vcpu *vcpu)
+{
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
@@ -897,6 +901,7 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.save_pmu_context = intel_save_guest_pmu_context,
 	.restore_pmu_context = intel_restore_guest_pmu_context,
 	.incr_counter = intel_incr_counter,
+	.set_overflow = intel_set_overflow,
 	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_INTEL_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = 1,

From patchwork Thu Aug 1 04:58:53 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749567
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:53 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-45-mizhang@google.com>
Subject: [RFC PATCH v3 44/58] KVM: x86/pmu: Implement emulated counter increment for passthrough PMU
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Implement emulated counter increment for the passthrough PMU under
KVM_REQ_PMU. Defer the counter increment to the KVM_REQ_PMU handler
because counter increment requests come from kvm_pmu_trigger_event(),
which can be triggered either within or outside the KVM_RUN inner loop;
that means the counter increment could happen before or after the PMU
context switch. Processing counter increments in one place keeps the
implementation simple.
Signed-off-by: Mingwei Zhang
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 41 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 5cc539bdcc7e..41057d0122bd 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -510,6 +510,18 @@ static int reprogram_counter(struct kvm_pmc *pmc)
 			      eventsel & ARCH_PERFMON_EVENTSEL_INT);
 }

+static void kvm_pmu_handle_event_in_passthrough_pmu(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	static_call_cond(kvm_x86_pmu_set_overflow)(vcpu);
+
+	if (atomic64_read(&pmu->__reprogram_pmi)) {
+		kvm_make_request(KVM_REQ_PMI, vcpu);
+		atomic64_set(&pmu->__reprogram_pmi, 0ull);
+	}
+}
+
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
 	DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
@@ -517,6 +529,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	struct kvm_pmc *pmc;
 	int bit;

+	if (is_passthrough_pmu_enabled(vcpu))
+		return kvm_pmu_handle_event_in_passthrough_pmu(vcpu);
+
 	bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);

 	/*
@@ -848,6 +863,17 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 	kvm_pmu_reset(vcpu);
 }

+static void kvm_passthrough_pmu_incr_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
+{
+	if (static_call(kvm_x86_pmu_incr_counter)(pmc)) {
+		__set_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->global_status);
+		kvm_make_request(KVM_REQ_PMU, vcpu);
+
+		if (pmc->eventsel & ARCH_PERFMON_EVENTSEL_INT)
+			set_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
+	}
+}
+
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
 	pmc->emulated_counter++;
@@ -880,7 +906,8 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
 }

-void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
+static void __kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel,
+				    bool is_passthrough)
 {
 	DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -914,9 +941,19 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		    !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
 			continue;

-		kvm_pmu_incr_counter(pmc);
+		if (is_passthrough)
+			kvm_passthrough_pmu_incr_counter(vcpu, pmc);
+		else
+			kvm_pmu_incr_counter(pmc);
 	}
 }
+
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
+{
+	bool is_passthrough = is_passthrough_pmu_enabled(vcpu);
+
+	__kvm_pmu_trigger_event(vcpu, eventsel, is_passthrough);
+}
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);

 static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)

From patchwork Thu Aug 1 04:58:54 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749568
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:55 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
References: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-46-mizhang@google.com>
Subject: [RFC PATCH v3 45/58] KVM: x86/pmu: Update pmc_{read,write}_counter() to disconnect perf API
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang, Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang, gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu, Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org, linux-perf-users@vger.kernel.org

Update pmc_{read,write}_counter() to disconnect them from the perf API,
because the passthrough PMU does not use the host PMU as its backend.
Because of that, pmc->counter directly holds the actual guest counter
value when set by the host (VMM) side.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 5 +++++
 arch/x86/kvm/pmu.h | 4 ++++
 2 files changed, 9 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 41057d0122bd..3604cf467b34 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -322,6 +322,11 @@ static void pmc_update_sample_period(struct kvm_pmc *pmc)

 void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
 {
+	if (pmc_to_pmu(pmc)->passthrough) {
+		pmc->counter = val;
+		return;
+	}
+
 	/*
 	 * Drop any unconsumed accumulated counts, the WRMSR is a write, not a
 	 * read-modify-write. Adjust the counter value so that its value is
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 78a7f0c5f3ba..7e006cb61296 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -116,6 +116,10 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;

+	counter = pmc->counter;
+	if (pmc_to_pmu(pmc)->passthrough)
+		return counter & pmc_bitmask(pmc);
+
 	counter = pmc->counter + pmc->emulated_counter;

 	if (pmc->perf_event && !pmc->is_paused)

From patchwork Thu Aug 1 04:58:55 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749569
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:55 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-47-mizhang@google.com>
Subject: [RFC PATCH v3 46/58] KVM: x86/pmu: Disconnect counter reprogram logic from passthrough PMU
From: Mingwei Zhang

Disconnect the counter reprogram logic, because the passthrough PMU
neither uses the host PMU nor calls into the perf API. Instead, when
the passthrough PMU is enabled, reaching the counter reprogram path at
all should be treated as an error.

Signed-off-by: Mingwei Zhang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/pmu.c | 3 +++
 arch/x86/kvm/pmu.h | 8 ++++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3604cf467b34..fcd188cc389a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -478,6 +478,9 @@ static int reprogram_counter(struct kvm_pmc *pmc)
 	bool emulate_overflow;
 	u8 fixed_ctr_ctrl;
 
+	if (WARN_ONCE(pmu->passthrough, "Passthrough PMU never reprograms counters\n"))
+		return 0;
+
 	emulate_overflow = pmc_pause_counter(pmc);
 
 	if (!pmc_event_is_allowed(pmc))
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7e006cb61296..10553bc1ae1d 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -256,6 +256,10 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 
 static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
 {
+	/* Passthrough PMU never reprograms counters via KVM_REQ_PMU. */
+	if (pmc_to_pmu(pmc)->passthrough)
+		return;
+
 	set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
@@ -264,6 +268,10 @@ static inline void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
 {
 	int bit;
 
+	/* Passthrough PMU never reprograms counters via KVM_REQ_PMU. */
+	if (pmu->passthrough)
+		return;
+
 	if (!diff)
 		return;

From patchwork Thu Aug 1 04:58:56 2024
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:56 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-48-mizhang@google.com>
Subject: [RFC PATCH v3 47/58] KVM: nVMX: Add nested virtualization support for passthrough PMU
From: Mingwei Zhang

Add nested virtualization support for the passthrough PMU by combining
the MSR interception bitmaps of vmcs01 and vmcs12.

Readers may argue that nested virtualization works for the passthrough
PMU even without this patch, because L1 will see PerfMon v2 and, if it
is Linux, will fall back to the legacy vPMU implementation. However,
any assumption made about L1 may be invalid; e.g., L1 may not even be
Linux.

If both L0 and L1 pass through the PMU MSRs, the correct behavior is to
let L2's MSR accesses directly touch the hardware MSRs, since both L0
and L1 pass the accesses through. However, in the current
implementation, without adding anything for nested, KVM always sets the
MSR interception bits in vmcs02. As a result, L0 emulates all MSR
reads/writes for L2, which leads to errors, since the current
passthrough vPMU does not implement set_msr() and get_msr() for any
counter access except those from the VMM side. Fix the issue by setting
up the correct MSR interception for the PMU MSRs.
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/vmx/nested.c | 52 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 643935a0f70a..ef385f9e7513 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -612,6 +612,55 @@ static inline void nested_vmx_set_intercept_for_msr(struct vcpu_vmx *vmx,
 					 msr_bitmap_l0, msr);
 }
 
+/* Pass PMU MSRs to the nested VM if both L0 and L1 are set to passthrough. */
+static void nested_vmx_set_passthru_pmu_intercept_for_msr(struct kvm_vcpu *vcpu,
+							  unsigned long *msr_bitmap_l1,
+							  unsigned long *msr_bitmap_l0)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	int i;
+
+	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
+		nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+						 MSR_ARCH_PERFMON_EVENTSEL0 + i,
+						 MSR_TYPE_RW);
+		nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+						 MSR_IA32_PERFCTR0 + i,
+						 MSR_TYPE_RW);
+		nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+						 MSR_IA32_PMC0 + i,
+						 MSR_TYPE_RW);
+	}
+
+	for (i = 0; i < vcpu_to_pmu(vcpu)->nr_arch_fixed_counters; i++) {
+		nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+						 MSR_CORE_PERF_FIXED_CTR0 + i,
+						 MSR_TYPE_RW);
+	}
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_CORE_PERF_FIXED_CTR_CTRL,
+					 MSR_TYPE_RW);
+
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_CORE_PERF_GLOBAL_STATUS,
+					 MSR_TYPE_RW);
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_CORE_PERF_GLOBAL_CTRL,
+					 MSR_TYPE_RW);
+	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
+					 MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+					 MSR_TYPE_RW);
+}
+
 /*
  * Merge L0's and L1's MSR bitmap, return false to indicate that
  * we do not use the hardware.
  */
@@ -713,6 +762,9 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
 					 MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
 
+	if (is_passthrough_pmu_enabled(vcpu))
+		nested_vmx_set_passthru_pmu_intercept_for_msr(vcpu, msr_bitmap_l1, msr_bitmap_l0);
+
 	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
 
 	vmx->nested.force_msr_bitmap_recalc = false;

From patchwork Thu Aug 1 04:58:57 2024
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:57 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-49-mizhang@google.com>
Subject: [RFC PATCH v3 48/58] perf/x86/intel: Support PERF_PMU_CAP_PASSTHROUGH_VPMU
From: Kan Liang

Apply PERF_PMU_CAP_PASSTHROUGH_VPMU to the Intel core PMU. It only
indicates that the perf side of the core PMU is ready to support the
passthrough vPMU. Beyond this capability, the hypervisor still needs to
check the PMU version and other capabilities to decide whether to
enable the passthrough vPMU.
Signed-off-by: Kan Liang
Tested-by: Yongwei Ma
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/intel/core.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 38c1b1f1deaa..d5bb7d4ed062 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4743,6 +4743,8 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
 	else
 		pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT;
 
+	pmu->pmu.capabilities |= PERF_PMU_CAP_PASSTHROUGH_VPMU;
+
 	intel_pmu_check_event_constraints(pmu->event_constraints,
 					  pmu->num_counters,
 					  pmu->num_counters_fixed,
@@ -6235,6 +6237,9 @@ __init int intel_pmu_init(void)
 		pr_cont(" AnyThread deprecated, ");
 	}
 
+	/* The perf side of the core PMU is ready to support the passthrough vPMU. */
+	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_PASSTHROUGH_VPMU;
+
 	/*
 	 * Install the hw-cache-events table:
 	 */

From patchwork Thu Aug 1 04:58:58 2024
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:58:58 +0000
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Message-ID: <20240801045907.4010984-50-mizhang@google.com>
Subject: [RFC PATCH v3 49/58] KVM: x86/pmu/svm: Set passthrough capability for vcpus
From: Sandipan Das

Pass the passthrough PMU setting from kvm->arch into kvm_pmu for each
vCPU. As long as the host supports PerfMonV2, the guest PMU version
does not matter.

Note that guest vCPUs without a local APIC do not allocate an instance
of struct kvm_lapic, because of which reading the guest LVTPC before
switching over to the PMI vector results in a NULL pointer dereference.
Such vCPUs also cannot receive PMIs. Hence, disable passthrough mode in
such cases.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 64060cbd8210..0a16f0eb2511 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -211,6 +211,8 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->nr_arch_fixed_counters = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
+	pmu->passthrough = vcpu->kvm->arch.enable_passthrough_pmu &&
+			   lapic_in_kernel(vcpu);
 
 	if (pmu->version > 1 || guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
 		for (i = 0; i < pmu->nr_arch_gp_counters; i++) {

From patchwork Thu Aug 1 04:58:59 2024
:date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=bm7ZCS61nv2akVrVL6X4rg4hbgFHoPFsqWK4Hp1ajno=; b=hiCUPB9RwaoKobrSoLCveMteG9u6RM9L1D9Avu3lbU79EkfrMuzYP1X8tnoH+7QLN8 mLlhE9a7n+KxC1YPnJKQ69W8TltpwMtWe5Znh44aOOryCT6GrA9mlqkwBgYVJNcuA5Hu 0/rxzT8ZsIVGDyTU3/5CLi5OHljcCBqP6gY16eBzxGq0RoERu10TUz1wWk/ElHP9RGmn j6sGDt7iu6varBoLJNeYgNh63McBE+Sk6SQ3ohSgjXJH/l3hDepWZBpqFmcJDwI2V+JJ 7HS1Jq2DeEdaZs9JH+H7bz1uhYVuSmwbWAkdWTadkqwDzBQhyDL6yoYzfaW+KuwTPUp8 GRWg== X-Forwarded-Encrypted: i=1; AJvYcCXbTh2a9SOXPii3nSUhXpNmX2o4N7H9/2NK0uSQm97nTQJMEth65SLONOZJU4U8aEFK2UNecG7cg/LqZ5Vk5nTDbFBu X-Gm-Message-State: AOJu0YzGjzWwJ2piw17vPcC493KKQWghoko8q5rS8vFgwPWxFil+Fq7D ae11gHBiUS/HKv7IcYPUHyY9lpSO4QPtUTrKSQrsWTjzCLVkt8ABNCYhLdFPxcAEYbHJ1JGovCh utdxbUA== X-Google-Smtp-Source: AGHT+IF4VPoCa5Bi3PU0camhKoeHZtOuU9gIT2voR88T+GMpmTvgcbCdr/cc6zxAFccrkaQh5tCHXOh06UAT X-Received: from mizhang-super.c.googlers.com ([35.247.89.60]) (user=mizhang job=sendgmr) by 2002:a05:6902:1007:b0:dfb:22ca:1efd with SMTP id 3f1490d57ef6-e0bcd36bf56mr2383276.9.1722488444219; Wed, 31 Jul 2024 22:00:44 -0700 (PDT) Reply-To: Mingwei Zhang Date: Thu, 1 Aug 2024 04:58:59 +0000 In-Reply-To: <20240801045907.4010984-1-mizhang@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240801045907.4010984-1-mizhang@google.com> X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog Message-ID: <20240801045907.4010984-51-mizhang@google.com> Subject: [RFC PATCH v3 50/58] KVM: x86/pmu/svm: Set enable_passthrough_pmu module parameter From: Mingwei Zhang To: Sean Christopherson , Paolo Bonzini , Xiong Zhang , Dapeng Mi , Kan Liang , Zhenyu Wang , Manali Shukla , Sandipan Das Cc: Jim Mattson , Stephane Eranian , Ian Rogers , Namhyung Kim , Mingwei Zhang , gce-passthrou-pmu-dev@google.com, Samantha Alt , Zhiyuan Lv , Yanfei Xu , Like Xu , Peter Zijlstra , Raghavendra Rao Ananta , kvm@vger.kernel.org, 
linux-perf-users@vger.kernel.org From: Sandipan Das Since passthrough PMU can be also used on some AMD platforms, set the "enable_passthrough_pmu" KVM kernel module parameter to true when the following conditions are met. - parameter is set to true when module loaded - enable_pmu is true - is running on and AMD CPU - CPU supports PerfMonV2 - host PMU supports passthrough mode Signed-off-by: Sandipan Das Signed-off-by: Mingwei Zhang --- arch/x86/kvm/pmu.h | 22 ++++++++++++++-------- arch/x86/kvm/svm/svm.c | 2 ++ 2 files changed, 16 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 10553bc1ae1d..9fb3ddfd3a10 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -196,6 +196,7 @@ extern struct kvm_pmu_emulated_event_selectors kvm_pmu_eventsel; static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops) { bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL; + bool is_amd = boot_cpu_data.x86_vendor == X86_VENDOR_AMD; int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS; /* @@ -223,18 +224,23 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops) enable_pmu = false; } - /* Pass-through vPMU is only supported in Intel CPUs. */ - if (!is_intel) + /* Pass-through vPMU is only supported in Intel and AMD CPUs. */ + if (!is_intel && !is_amd) enable_passthrough_pmu = false; /* - * Pass-through vPMU requires at least PerfMon version 4 because the - * implementation requires the usage of MSR_CORE_PERF_GLOBAL_STATUS_SET - * for counter emulation as well as PMU context switch. In addition, it - * requires host PMU support on passthrough mode. Disable pass-through - * vPMU if any condition fails. + * On Intel platforms, pass-through vPMU requires at least PerfMon + * version 4 because the implementation requires the usage of + * MSR_CORE_PERF_GLOBAL_STATUS_SET for counter emulation as well as + * PMU context switch. In addition, it requires host PMU support on + * passthrough mode. 
+	 * Disable pass-through vPMU if any condition fails.
+	 *
+	 * On AMD platforms, pass-through vPMU requires at least PerfMonV2
+	 * because MSR_PERF_CNTR_GLOBAL_STATUS_SET is required.
 	 */
-	if (!enable_pmu || kvm_pmu_cap.version < 4 || !kvm_pmu_cap.passthrough)
+	if (!enable_pmu || !kvm_pmu_cap.passthrough ||
+	    (is_intel && kvm_pmu_cap.version < 4) ||
+	    (is_amd && kvm_pmu_cap.version < 2))
 		enable_passthrough_pmu = false;
 
 	if (!enable_pmu) {

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 296c524988f9..12868b7e6f51 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -239,6 +239,8 @@ module_param(intercept_smi, bool, 0444);
 bool vnmi = true;
 module_param(vnmi, bool, 0444);
 
+module_param(enable_passthrough_pmu, bool, 0444);
+
 static bool svm_gp_erratum_intercept = true;
 
 static u8 rsm_ins_bytes[] = "\x0f\xaa";
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:59:00 +0000
Message-ID: <20240801045907.4010984-52-mizhang@google.com>
Subject: [RFC PATCH v3 51/58] KVM: x86/pmu/svm: Allow RDPMC pass through when all counters exposed to guest
From: Sandipan Das

If the passthrough PMU is enabled and all counters are exposed to the
guest, clear the RDPMC intercept bit in the VMCB Control Area (byte
offset 0xc, bit 15)
to let RDPMC instructions proceed without VM-Exits. This improves guest
PMU performance in passthrough mode. If either condition is not
satisfied, intercept RDPMC to prevent the guest from accessing
unexposed counters.

Note that on AMD platforms, passing through RDPMC only allows guests to
read the general-purpose counters.

Details about the RDPMC intercept bit can be found in Appendix B,
"Layout of VMCB", of the AMD64 Architecture Programmer's Manual
Volume 2.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/svm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 12868b7e6f51..fc78f34832ca 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1229,6 +1229,11 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
 		/* No need to intercept these MSRs */
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 1, 1);
+
+		if (kvm_pmu_check_rdpmc_passthrough(vcpu))
+			svm_clr_intercept(svm, INTERCEPT_RDPMC);
+		else
+			svm_set_intercept(svm, INTERCEPT_RDPMC);
 	}
 }
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:59:01 +0000
Message-ID: <20240801045907.4010984-53-mizhang@google.com>
Subject: [RFC PATCH v3 52/58] KVM: x86/pmu/svm: Implement callback to disable MSR interception
From: Sandipan Das

Implement the AMD-specific callback for the passthrough PMU that
disables interception of PMU-related MSRs when the guest PMU counters
qualify for passthrough. The PMU registers include the following:

- PerfCntrGlobalStatus (MSR 0xc0000300)
- PerfCntrGlobalCtl (MSR 0xc0000301)
- PerfCntrGlobalStatusClr (MSR 0xc0000302)
- PerfCntrGlobalStatusSet (MSR 0xc0000303)
- PERF_CTLx and PERF_CTRx pairs (MSRs 0xc0010200..0xc001020b)

Note that the passthrough/interception setup is invoked after each
CPUID update. Since CPUID can be set multiple times, explicitly set or
clear the interception bitmap for each counter as well as for the
global registers.

Note that even if the host is PerfCtrCore or PerfMonV2 capable, a guest
should still be able to use the four K7 legacy counters. Disable
interception of these MSRs in passthrough mode.
Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 55 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 0a16f0eb2511..cc03c3e9941f 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -248,6 +248,60 @@ static bool amd_is_rdpmc_passthru_allowed(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static void amd_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int msr_clear = !!(is_passthrough_pmu_enabled(vcpu));
+	int i;
+
+	for (i = 0; i < min(pmu->nr_arch_gp_counters, AMD64_NUM_COUNTERS); i++) {
+		/*
+		 * Legacy counters are always available irrespective of any
+		 * CPUID feature bits and when X86_FEATURE_PERFCTR_CORE is set,
+		 * PERF_LEGACY_CTLx and PERF_LEGACY_CTRx registers are mirrored
+		 * with PERF_CTLx and PERF_CTRx respectively.
+		 */
+		set_msr_interception(vcpu, svm->msrpm, MSR_K7_EVNTSEL0 + i, 0, 0);
+		set_msr_interception(vcpu, svm->msrpm, MSR_K7_PERFCTR0 + i, msr_clear, msr_clear);
+	}
+
+	for (i = 0; i < kvm_pmu_cap.num_counters_gp; i++) {
+		/*
+		 * PERF_CTLx registers require interception in order to clear
+		 * HostOnly bit and set GuestOnly bit. This is to prevent the
+		 * PERF_CTRx registers from counting before VM entry and after
+		 * VM exit.
+		 */
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTL + 2 * i, 0, 0);
+
+		/*
+		 * Pass through counters exposed to the guest and intercept
+		 * counters that are unexposed. Do this explicitly since this
+		 * function may be set multiple times before vcpu runs.
+		 */
+		if (i >= pmu->nr_arch_gp_counters)
+			msr_clear = 0;
+		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i, msr_clear, msr_clear);
+	}
+
+	/*
+	 * In mediated passthrough vPMU, intercept global PMU MSRs when guest
+	 * PMU only owns a subset of counters provided in HW or its version is
+	 * less than 2.
+	 */
+	if (is_passthrough_pmu_enabled(vcpu) && pmu->version > 1 &&
+	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp)
+		msr_clear = 1;
+	else
+		msr_clear = 0;
+
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_CTL, msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, msr_clear, msr_clear);
+	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, msr_clear, msr_clear);
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -258,6 +312,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.is_rdpmc_passthru_allowed = amd_is_rdpmc_passthru_allowed,
+	.passthrough_pmu_msrs = amd_passthrough_pmu_msrs,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:59:02 +0000
Message-ID: <20240801045907.4010984-54-mizhang@google.com>
Subject: [RFC PATCH v3 53/58] KVM: x86/pmu/svm: Set GuestOnly bit and clear HostOnly bit when guest writes to event selectors
From: Sandipan Das

On AMD platforms, there is no way to restore PerfCntrGlobalCtl at
VM-Entry or clear it at VM-Exit. Since the register states will be
restored before entering and saved after exiting guest context, the
counters can keep ticking and even overflow, leading to chaos while
still in host context. To avoid this, the PERF_CTLx MSRs (event
selectors) are always intercepted. KVM will always set the GuestOnly
bit and clear the HostOnly bit so that the counters run only in guest
context even if their enable bits are set.

Intercepting these MSRs is also necessary for guest event filtering.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index cc03c3e9941f..2b7cc7616162 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -165,7 +165,12 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		data &= ~pmu->reserved_bits;
 		if (data != pmc->eventsel) {
 			pmc->eventsel = data;
-			kvm_pmu_request_counter_reprogram(pmc);
+			if (is_passthrough_pmu_enabled(vcpu)) {
+				data &= ~AMD64_EVENTSEL_HOSTONLY;
+				pmc->eventsel_hw = data | AMD64_EVENTSEL_GUESTONLY;
+			} else {
+				kvm_pmu_request_counter_reprogram(pmc);
+			}
 		}
 		return 0;
 	}
Reply-To: Mingwei Zhang
Date: Thu, 1 Aug 2024 04:59:03 +0000
Message-ID: <20240801045907.4010984-55-mizhang@google.com>
Subject: [RFC PATCH v3 54/58] KVM: x86/pmu/svm: Add registers to direct access list
From: Sandipan Das

Add all PMU-related MSRs (including legacy K7 MSRs) to the list of
possible direct-access MSRs. Most of them will not be intercepted when
using the passthrough PMU.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/svm.c | 24 ++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.h |  2 +-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fc78f34832ca..ff07f6ee867a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -141,6 +141,30 @@ static const struct svm_direct_access_msrs {
 	{ .index = X2APIC_MSR(APIC_TMICT), .always = false },
 	{ .index = X2APIC_MSR(APIC_TMCCT), .always = false },
 	{ .index = X2APIC_MSR(APIC_TDCR), .always = false },
+	{ .index = MSR_K7_EVNTSEL0, .always = false },
+	{ .index = MSR_K7_PERFCTR0, .always = false },
+	{ .index = MSR_K7_EVNTSEL1, .always = false },
+	{ .index = MSR_K7_PERFCTR1, .always = false },
+	{ .index = MSR_K7_EVNTSEL2, .always = false },
+	{ .index = MSR_K7_PERFCTR2, .always = false },
+	{ .index = MSR_K7_EVNTSEL3, .always = false },
+	{ .index = MSR_K7_PERFCTR3, .always = false },
+	{ .index = MSR_F15H_PERF_CTL0, .always = false },
+	{ .index = MSR_F15H_PERF_CTR0, .always = false },
+	{ .index = MSR_F15H_PERF_CTL1, .always = false },
+	{ .index = MSR_F15H_PERF_CTR1, .always = false },
+	{ .index = MSR_F15H_PERF_CTL2, .always = false },
+	{ .index = MSR_F15H_PERF_CTR2, .always = false },
+	{ .index = MSR_F15H_PERF_CTL3, .always = false },
+	{ .index = MSR_F15H_PERF_CTR3, .always = false },
+	{ .index = MSR_F15H_PERF_CTL4, .always = false },
+	{ .index = MSR_F15H_PERF_CTR4, .always = false },
+	{ .index = MSR_F15H_PERF_CTL5, .always = false },
+	{ .index = MSR_F15H_PERF_CTR5, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_CTL, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, .always = false },
+	{ .index = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, .always = false },
 	{ .index = MSR_INVALID, .always = false },
 };

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0f1472690b59..d096b405c9f3 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -30,7 +30,7 @@
 #define IOPM_SIZE PAGE_SIZE * 3
 #define MSRPM_SIZE PAGE_SIZE * 2
 
-#define MAX_DIRECT_ACCESS_MSRS 48
+#define MAX_DIRECT_ACCESS_MSRS 72
 #define MSRPM_OFFSETS 32
 extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
 extern bool npt_enabled;
Date: Thu, 1 Aug 2024 04:59:04 +0000
Message-ID: <20240801045907.4010984-56-mizhang@google.com>
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Subject: [RFC PATCH v3 55/58] KVM: x86/pmu/svm: Implement handlers to save
 and restore context
From: Mingwei Zhang
To: Sean Christopherson, Paolo Bonzini, Xiong Zhang, Dapeng Mi, Kan Liang,
 Zhenyu Wang, Manali Shukla, Sandipan Das
Cc: Jim Mattson, Stephane Eranian, Ian Rogers, Namhyung Kim, Mingwei Zhang,
 gce-passthrou-pmu-dev@google.com, Samantha Alt, Zhiyuan Lv, Yanfei Xu,
 Like Xu, Peter Zijlstra, Raghavendra Rao Ananta, kvm@vger.kernel.org,
 linux-perf-users@vger.kernel.org

From: Sandipan Das

Implement the AMD-specific handlers that save and restore the state of the
PMU-related MSRs when the passthrough PMU is in use.
Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 2b7cc7616162..86818da66bbe 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -307,6 +307,36 @@ static void amd_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
 	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET,
 			     msr_clear, msr_clear);
 }
 
+static void amd_save_pmu_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, pmu->global_ctrl);
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, pmu->global_status);
+
+	/* Clear global status bits if non-zero */
+	if (pmu->global_status)
+		wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
+}
+
+static void amd_restore_pmu_context(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u64 global_status;
+
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
+	rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, global_status);
+
+	/* Clear host global_status MSR if non-zero. */
+	if (global_status)
+		wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, global_status);
+
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, pmu->global_status);
+
+	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, pmu->global_ctrl);
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -318,6 +348,8 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.init = amd_pmu_init,
 	.is_rdpmc_passthru_allowed = amd_is_rdpmc_passthru_allowed,
 	.passthrough_pmu_msrs = amd_passthrough_pmu_msrs,
+	.save_pmu_context = amd_save_pmu_context,
+	.restore_pmu_context = amd_restore_pmu_context,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
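The ordering in the two handlers matters: the global enable MSR is zeroed before the status is snapshotted, so counters are quiesced while state is moved, and on restore the guest's status bits are replayed via the SET MSR before counting is re-enabled. The sketch below is a hypothetical user-space model of that ordering, with plain variables standing in for the MSRs (the real handlers use `rdmsrl()`/`wrmsrl()` on hardware registers); it is illustrative only, not kernel code.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the PerfMonV2 global MSRs (hypothetical model). */
static uint64_t msr_global_ctl, msr_global_status;

struct vcpu_pmu { uint64_t global_ctrl, global_status; };

/* Mirrors amd_save_pmu_context(): snapshot, then quiesce. */
static void save_pmu_context(struct vcpu_pmu *pmu)
{
	pmu->global_ctrl = msr_global_ctl;
	msr_global_ctl = 0;                       /* stop all counters first */
	pmu->global_status = msr_global_status;
	if (pmu->global_status)                   /* STATUS_CLR write */
		msr_global_status &= ~pmu->global_status;
}

/* Mirrors amd_restore_pmu_context(): clear host bits, replay guest bits,
 * re-enable counting last. */
static void restore_pmu_context(struct vcpu_pmu *pmu)
{
	msr_global_ctl = 0;
	if (msr_global_status)                    /* STATUS_CLR of host bits */
		msr_global_status = 0;
	msr_global_status |= pmu->global_status;  /* STATUS_SET write */
	msr_global_ctl = pmu->global_ctrl;
}
```

Running save then restore round-trips both the enable mask and any pending overflow bits, leaving no host status bits visible to the guest.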
From patchwork Thu Aug 1 04:59:05 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749579
Date: Thu, 1 Aug 2024 04:59:05 +0000
Message-ID: <20240801045907.4010984-57-mizhang@google.com>
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Subject: [RFC PATCH v3 56/58] KVM: x86/pmu/svm: Wire up PMU filtering
 functionality for passthrough PMU
From: Mingwei Zhang

From: Manali Shukla

With the passthrough PMU enabled, the PERF_CTLx MSRs (event selectors) are
always intercepted, so the event filter check can be done directly inside
amd_pmu_set_msr(). Add a check that allows a write to a GP counter's event
selector if and only if the event is permitted by the filter.

Signed-off-by: Manali Shukla
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 86818da66bbe..9f3e910ee453 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -166,6 +166,15 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	if (data != pmc->eventsel) {
 		pmc->eventsel = data;
 		if (is_passthrough_pmu_enabled(vcpu)) {
+			if (!check_pmu_event_filter(pmc)) {
+				/*
+				 * When the guest requests an invalid event,
+				 * stop the counter by clearing the
+				 * event selector MSR.
+				 */
+				pmc->eventsel_hw = 0;
+				return 0;
+			}
 			data &= ~AMD64_EVENTSEL_HOSTONLY;
 			pmc->eventsel_hw = data | AMD64_EVENTSEL_GUESTONLY;
 		} else {
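The control flow of that hunk can be sketched as a small user-space model: the guest-visible `eventsel` always records what the guest wrote, while the value programmed into hardware (`eventsel_hw`) is either zeroed (filtered event, counter stopped) or rewritten with the host-only bit stripped and the guest-only bit forced on. The filter itself is stubbed out here; the real `check_pmu_event_filter()` consults the filter set via KVM_SET_PMU_EVENT_FILTER, and the helper names below are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions match AMD64_EVENTSEL_GUESTONLY/HOSTONLY in perf_event.h. */
#define EVENTSEL_GUESTONLY (1ULL << 40)
#define EVENTSEL_HOSTONLY  (1ULL << 41)

struct pmc { uint64_t eventsel, eventsel_hw; };

/* Stub for check_pmu_event_filter(); toggled by the caller for the demo. */
static bool event_allowed;
static bool check_filter(struct pmc *pmc) { (void)pmc; return event_allowed; }

/* Models the passthrough branch of amd_pmu_set_msr(). */
static void write_eventsel(struct pmc *pmc, uint64_t data)
{
	pmc->eventsel = data;
	if (!check_filter(pmc)) {
		pmc->eventsel_hw = 0;     /* filtered: stop the counter */
		return;
	}
	data &= ~EVENTSEL_HOSTONLY;       /* never count in host mode */
	pmc->eventsel_hw = data | EVENTSEL_GUESTONLY;
}
```

Note the asymmetry: a denied event silently stops counting rather than faulting, which matches how the filter behaves for the emulated PMU as well.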
From patchwork Thu Aug 1 04:59:06 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749580
Date: Thu, 1 Aug 2024 04:59:06 +0000
Message-ID: <20240801045907.4010984-58-mizhang@google.com>
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Subject: [RFC PATCH v3 57/58] KVM: x86/pmu/svm: Implement callback to
 increment counters
From: Mingwei Zhang

From: Sandipan Das

Implement the AMD-specific callback for the passthrough PMU that increments
counters for cases such as instruction emulation. A PMI will also be
injected if the increment results in an overflow.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/svm/pmu.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 9f3e910ee453..70465903ef1e 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -346,6 +346,17 @@ static void amd_restore_pmu_context(struct kvm_vcpu *vcpu)
 	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, pmu->global_ctrl);
 }
 
+static bool amd_incr_counter(struct kvm_pmc *pmc)
+{
+	pmc->counter += 1;
+	pmc->counter &= pmc_bitmask(pmc);
+
+	if (!pmc->counter)
+		return true;
+
+	return false;
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -359,6 +370,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.passthrough_pmu_msrs = amd_passthrough_pmu_msrs,
 	.save_pmu_context = amd_save_pmu_context,
 	.restore_pmu_context = amd_restore_pmu_context,
+	.incr_counter = amd_incr_counter,
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
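The overflow detection in `amd_incr_counter()` relies on masking the counter to its hardware width and checking for wrap to zero. The snippet below models that with a fixed 48-bit mask, which is what `pmc_bitmask()` evaluates to for full-width AMD GP counters; the struct and width here are a simplified stand-in for the real `struct kvm_pmc`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model: a counter with a fixed 48-bit width. */
struct pmc { uint64_t counter; };

static uint64_t pmc_bitmask(struct pmc *pmc)
{
	(void)pmc;
	return (1ULL << 48) - 1;  /* full-width AMD GP counter mask */
}

/* Mirrors amd_incr_counter(): returns true when the increment wraps the
 * counter to zero, i.e. an overflow that should inject a PMI. */
static bool incr_counter(struct pmc *pmc)
{
	pmc->counter += 1;
	pmc->counter &= pmc_bitmask(pmc);
	return pmc->counter == 0;
}
```

A counter programmed with an initial value of `mask` (all ones) thus overflows on the very next emulated event, which is how a guest arms a "sample after 1 event" PMI.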
From patchwork Thu Aug 1 04:59:07 2024
X-Patchwork-Submitter: Mingwei Zhang
X-Patchwork-Id: 13749581
Date: Thu, 1 Aug 2024 04:59:07 +0000
Message-ID: <20240801045907.4010984-59-mizhang@google.com>
In-Reply-To: <20240801045907.4010984-1-mizhang@google.com>
Subject: [RFC PATCH v3 58/58] perf/x86/amd: Support
 PERF_PMU_CAP_PASSTHROUGH_VPMU for AMD host
From: Mingwei Zhang

From: Sandipan Das

Apply the PERF_PMU_CAP_PASSTHROUGH_VPMU flag to version 2 and later
implementations of the core PMU. Aside from having Global Control and
Status registers, virtualizing the PMU using the passthrough model requires
an interface to set or clear the overflow bits in the Global Status MSRs
while restoring or saving the PMU context of a vCPU. PerfMonV2-capable
hardware has additional MSRs for this purpose, namely
PerfCntrGlobalStatusSet and PerfCntrGlobalStatusClr, making it suitable for
use with the passthrough PMU.

Signed-off-by: Sandipan Das
Signed-off-by: Mingwei Zhang
---
 arch/x86/events/amd/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 1fc4ce44e743..09f61821029f 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1426,6 +1426,8 @@ static int __init amd_core_pmu_init(void)
 	amd_pmu_global_cntr_mask = (1ULL << x86_pmu.num_counters) - 1;
 
+	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_PASSTHROUGH_VPMU;
+
 	/* Update PMC handling functions */
 	x86_pmu.enable_all = amd_pmu_v2_enable_all;
 	x86_pmu.disable_all = amd_pmu_v2_disable_all;
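Because the capability is set inside `amd_core_pmu_init()`, which only runs for PerfMonV2 and later, the flag is implicitly gated on the PMU version. The gating can be modeled as below; the flag's bit value and the `pmu_desc` struct are hypothetical stand-ins (the real flag is defined by this RFC series, and its value is not shown here).

```c
#include <assert.h>
#include <stdint.h>

#define PERF_PMU_CAP_PASSTHROUGH_VPMU (1u << 15)  /* hypothetical value */

struct pmu_desc { int version; uint32_t capabilities; };

/* Models the gating: only v2+ hardware, which has the Global Control/Status
 * MSRs plus PerfCntrGlobalStatusSet/Clr, advertises passthrough support. */
static void amd_init_caps(struct pmu_desc *pmu)
{
	if (pmu->version >= 2)
		pmu->capabilities |= PERF_PMU_CAP_PASSTHROUGH_VPMU;
}
```

KVM can then refuse to enable the passthrough PMU for a vCPU when the host PMU lacks this capability bit.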