Message ID | 20220202105158.7072-1-ravi.bangoria@amd.com
---|---
State | New, archived
Series | [v2] perf/amd: Implement erratum #1292 workaround for F19h M00-0Fh
On Wed, Feb 02, 2022 at 04:21:58PM +0530, Ravi Bangoria wrote:
> +/* Overcounting of Retire Based Events Erratum */
> +static struct event_constraint retire_event_constraints[] __read_mostly = {
> +	EVENT_CONSTRAINT(0xC0, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC1, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC2, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC3, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC4, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC5, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC8, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xC9, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xCA, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xCC, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0xD1, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0x1000000C7, 0x4, AMD64_EVENTSEL_EVENT),
> +	EVENT_CONSTRAINT(0x1000000D0, 0x4, AMD64_EVENTSEL_EVENT),

Can't this be encoded nicer? Something like:

	EVENT_CONSTRAINT(0xC0, 0x4, AMD64_EVENTSEL_EVENT & ~0xF).

To match all of 0xCn ?

> +	EVENT_CONSTRAINT_END
> +};
Hi Peter,

On 02-Feb-22 8:06 PM, Peter Zijlstra wrote:
> On Wed, Feb 02, 2022 at 04:21:58PM +0530, Ravi Bangoria wrote:
>> +/* Overcounting of Retire Based Events Erratum */
>> +static struct event_constraint retire_event_constraints[] __read_mostly = {
>> +	EVENT_CONSTRAINT(0xC0, 0x4, AMD64_EVENTSEL_EVENT),
>> [...]
>> +	EVENT_CONSTRAINT(0x1000000D0, 0x4, AMD64_EVENTSEL_EVENT),
>
> Can't this be encoded nicer? Something like:
>
> 	EVENT_CONSTRAINT(0xC0, 0x4, AMD64_EVENTSEL_EVENT & ~0xF).
>
> To match all of 0xCn ?

I don't think so, as not all 0xCn events are constrained. But I can
probably use EVENT_CONSTRAINT_RANGE() for contiguous event codes:

	EVENT_CONSTRAINT_RANGE(0xC0, 0xC5, 0x4, AMD64_EVENTSEL_EVENT),
	EVENT_CONSTRAINT_RANGE(0xC8, 0xCA, 0x4, AMD64_EVENTSEL_EVENT),

Thanks,
Ravi
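[For readers following the constraint discussion: a condensed table along
the lines Ravi suggests might look like the sketch below. This is
illustrative only and not part of the posted patch; it assumes
EVENT_CONSTRAINT_RANGE() matches the same event-select mask as the
individual entries, and the counter mask 0x4 still means "PMC2 only".]

	/* Sketch: condensed retire_event_constraints[] using ranges for
	 * the contiguous event codes, per the reply above. Not from the
	 * posted patch. */
	static struct event_constraint retire_event_constraints[] __read_mostly = {
		EVENT_CONSTRAINT_RANGE(0xC0, 0xC5, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT_RANGE(0xC8, 0xCA, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT(0xCC, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT(0xD1, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT(0x1000000C7, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT(0x1000000D0, 0x4, AMD64_EVENTSEL_EVENT),
		EVENT_CONSTRAINT_END
	};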
On Wed, Feb 2, 2022 at 2:52 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> Perf counter may overcount for a list of Retire Based Events. Implement
> workaround for Zen3 Family 19 Model 00-0F processors as suggested in
> Revision Guide[1]:
>
>   To count the non-FP affected PMC events correctly:
>     o Use Core::X86::Msr::PERF_CTL2 to count the events, and
>     o Program Core::X86::Msr::PERF_CTL2[43] to 1b, and
>     o Program Core::X86::Msr::PERF_CTL2[20] to 0b.
>
> Note that the specified workaround applies only to counting events and
> not to sampling events. Thus sampling event will continue functioning
> as is.
>
> Although the issue exists on all previous Zen revisions, the workaround
> is different and thus not included in this patch.
>
> This patch needs Like's patch[2] to make it work on kvm guest.

IIUC, this patch along with Like's patch actually breaks PMU
virtualization for a kvm guest.

Suppose I have some code which counts event 0xC2 [Retired Branch
Instructions] on PMC0 and event 0xC4 [Retired Taken Branch
Instructions] on PMC1. I then divide PMC1 by PMC0 to see what
percentage of my branch instructions are taken. On hardware that
suffers from erratum 1292, both counters may overcount, but if the
inaccuracy is small, then my final result may still be fairly close to
reality.

With these patches, if I run that same code in a kvm guest, it looks
like one of those events will be counted on PMC2 and the other won't
be counted at all. So, when I calculate the percentage of branch
instructions taken, I either get 0 or infinity.

> [1] https://bugzilla.kernel.org/attachment.cgi?id=298241
> [2] https://lore.kernel.org/lkml/20220117055703.52020-1-likexu@tencent.com
>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
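[For concreteness, the kind of counting-mode user Jim describes could look
roughly like the sketch below. This is not code from the thread: the
perf_event_open() usage, the raw encodings 0xC2/0xC4 with umask 0, and the
helper name are assumptions for illustration, and error handling is omitted.]

	#include <linux/perf_event.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Open one counting-mode raw event. sample_period stays 0, so perf
	 * treats this as a counting event rather than a sampling event. */
	static int open_raw_counter(uint64_t config, int group_fd)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.type = PERF_TYPE_RAW;
		attr.size = sizeof(attr);
		attr.config = config;
		attr.disabled = (group_fd == -1);	/* only the group leader starts disabled */
		attr.exclude_kernel = 1;

		return syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
	}

	int main(void)
	{
		/* Assumed raw encodings: 0xC2 = retired branch instructions,
		 * 0xC4 = retired taken branch instructions (umask 0). */
		int fd_br    = open_raw_counter(0xC2, -1);
		int fd_taken = open_raw_counter(0xC4, fd_br);
		uint64_t br = 0, taken = 0;

		ioctl(fd_br, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
		/* ... workload being measured ... */
		ioctl(fd_br, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

		read(fd_br, &br, sizeof(br));
		read(fd_taken, &taken, sizeof(taken));
		printf("taken branches: %.2f%%\n", br ? 100.0 * taken / br : 0.0);
		return 0;
	}

[Opened as a group like this, the two events must be co-scheduled; with the
workaround both are constrained to PMC2, so such a group can never be
scheduled, while ungrouped events would instead be time-multiplexed on
PMC2 -- essentially the failure modes discussed later in the thread.]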
Hi Jim,

On 03-Feb-22 9:39 AM, Jim Mattson wrote:
> On Wed, Feb 2, 2022 at 2:52 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>> [...]
>> This patch needs Like's patch[2] to make it work on kvm guest.
>
> IIUC, this patch along with Like's patch actually breaks PMU
> virtualization for a kvm guest.
>
> Suppose I have some code which counts event 0xC2 [Retired Branch
> Instructions] on PMC0 and event 0xC4 [Retired Taken Branch
> Instructions] on PMC1. I then divide PMC1 by PMC0 to see what
> percentage of my branch instructions are taken. On hardware that
> suffers from erratum 1292, both counters may overcount, but if the
> inaccuracy is small, then my final result may still be fairly close to
> reality.
>
> With these patches, if I run that same code in a kvm guest, it looks
> like one of those events will be counted on PMC2 and the other won't
> be counted at all. So, when I calculate the percentage of branch
> instructions taken, I either get 0 or infinity.

Events get multiplexed internally. See below quick test I ran inside
guest. My host is running with my+Like's patch and guest is running
with only my patch.

$ ./perf stat -e branch-instructions,branch-misses -- ./branch-misses

 Performance counter stats for './branch-misses':

    19,847,153,209      branch-instructions:u                    (50.03%)
       950,410,251      branch-misses:u       #  4.79% of all branches  (49.97%)

$ cat branch-misses.c
#include <stdlib.h>

int main()
{
	long i = 1000000000;
	long j = 0;

	while (i--) {
		switch (rand() % 20) {
		case 0:  j += 0;  break;
		case 1:  j += 1;  break;
		case 2:  j += 2;  break;
		case 3:  j += 3;  break;
		case 4:  j += 4;  break;
		case 5:  j += 5;  break;
		case 6:  j += 6;  break;
		case 7:  j += 7;  break;
		case 8:  j += 8;  break;
		case 9:  j += 9;  break;
		case 10: j += 10; break;
		case 11: j += 11; break;
		case 12: j += 12; break;
		case 13: j += 13; break;
		case 14: j += 14; break;
		case 15: j += 15; break;
		case 16: j += 16; break;
		case 17: j += 17; break;
		case 18: j += 18; break;
		case 19: j += 19; break;
		default: j += 20; break;
		}
	}
	return 0;
}

Thanks,
Ravi
On Wed, Feb 2, 2022 at 9:18 PM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> Hi Jim,
>
> On 03-Feb-22 9:39 AM, Jim Mattson wrote:
> > [...]
> > With these patches, if I run that same code in a kvm guest, it looks
> > like one of those events will be counted on PMC2 and the other won't
> > be counted at all. So, when I calculate the percentage of branch
> > instructions taken, I either get 0 or infinity.
>
> Events get multiplexed internally. See below quick test I ran inside
> guest. My host is running with my+Like's patch and guest is running
> with only my patch.

Your guest may be multiplexing the counters. The guest I posited does not.

I hope that you are not saying that kvm's *thread-pinned* perf events
are not being multiplexed at the host level, because that completely
breaks PMU virtualization.
On 03-Feb-22 11:25 PM, Jim Mattson wrote:
> On Wed, Feb 2, 2022 at 9:18 PM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>> [...]
>> Events get multiplexed internally. See below quick test I ran inside
>> guest. My host is running with my+Like's patch and guest is running
>> with only my patch.
>
> Your guest may be multiplexing the counters. The guest I posited does not.

It would be helpful if you can provide an example.

> I hope that you are not saying that kvm's *thread-pinned* perf events
> are not being multiplexed at the host level, because that completely
> breaks PMU virtualization.

IIUC, multiplexing happens inside the guest.

Thanks,
Ravi
On Fri, Feb 4, 2022 at 1:33 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> On 03-Feb-22 11:25 PM, Jim Mattson wrote:
> > [...]
> > Your guest may be multiplexing the counters. The guest I posited does not.
>
> It would be helpful if you can provide an example.

Perf on any current Linux distro (i.e. without your fix).

> > I hope that you are not saying that kvm's *thread-pinned* perf events
> > are not being multiplexed at the host level, because that completely
> > breaks PMU virtualization.
>
> IIUC, multiplexing happens inside the guest.

I'm not sure that multiplexing is the answer. Extrapolation may
introduce greater imprecision than the erratum.

If you count something like "instructions retired" three ways:
1) Unfixed counter
2) PMC2 with the fix
3) Multiplexed on PMC2 with the fix

Is (3) always more accurate than (1)?

> Thanks,
> Ravi
On 4/2/2022 9:01 pm, Jim Mattson wrote:
> On Fri, Feb 4, 2022 at 1:33 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>> [...]
>> It would be helpful if you can provide an example.
>
> Perf on any current Linux distro (i.e. without your fix).

The patch for errata #1292 (like most hw issues or vulnerabilities) should be
applied to both the host and guest.

For non-patched guests on a patched host, the KVM-created perf_events
will be true for is_sampling_event() due to get_sample_period().

I think we (KVM) have a congenital defect in distinguishing whether guest
counters are used in counting mode or sampling mode, which is just
a different use of pure software.

>> IIUC, multiplexing happens inside the guest.
>
> I'm not sure that multiplexing is the answer. Extrapolation may
> introduce greater imprecision than the erratum.

If you run the same test on the patched host, the PMC2 will be
used in a multiplexing way. This is no different.

> If you count something like "instructions retired" three ways:
> 1) Unfixed counter
> 2) PMC2 with the fix
> 3) Multiplexed on PMC2 with the fix
>
> Is (3) always more accurate than (1)?

The loss of accuracy is due to a reduction in the number of trustworthy
counters, not to these two workaround patches. Any multiplexing (whether on
the host or the guest) will result in a loss of accuracy. Right ?

I'm not sure if we should provide a sysfs knob for (1), is there a precedent
for this ?

>> Thanks,
>> Ravi
On Wed, Feb 9, 2022 at 2:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
>
> On 4/2/2022 9:01 pm, Jim Mattson wrote:
> > [...]
> > Perf on any current Linux distro (i.e. without your fix).
>
> The patch for errata #1292 (like most hw issues or vulnerabilities) should be
> applied to both the host and guest.

As I'm sure you are aware, guests are often not patched. For example,
we have a lot of Debian-9 guests running on Milan, despite the fact
that it has to be booted with "nopcid" due to a bug introduced on
4.9-stable. We submitted the fix and notified Debian about a year ago,
but they have not seen fit to cut a new kernel. Do you think they will
cut a new kernel for this patch?

> For non-patched guests on a patched host, the KVM-created perf_events
> will be true for is_sampling_event() due to get_sample_period().
>
> I think we (KVM) have a congenital defect in distinguishing whether guest
> counters are used in counting mode or sampling mode, which is just
> a different use of pure software.

I have no idea what you are saying. However, when kvm sees a guest
counter used in sampling mode, it will just request a PERF_TYPE_RAW
perf event with the INT bit set in 'config.' If it sees a guest
counter used in counting mode, it will either request a PERF_TYPE_RAW
perf event or a PERF_TYPE_HARDWARE perf event, depending on whether or
not it finds the requested event in amd_event_mapping[].

> > I'm not sure that multiplexing is the answer. Extrapolation may
> > introduce greater imprecision than the erratum.
>
> If you run the same test on the patched host, the PMC2 will be
> used in a multiplexing way. This is no different.
>
> > If you count something like "instructions retired" three ways:
> > 1) Unfixed counter
> > 2) PMC2 with the fix
> > 3) Multiplexed on PMC2 with the fix
> >
> > Is (3) always more accurate than (1)?

Since Ravi has gone dark, I will answer my own question.

For better reproducibility, I simplified his program to:

int main() { return 0;}

On an unpatched Milan host, I get instructions retired between 21911
and 21915. I get branch instructions retired between 5565 and 5566. It
does not matter if I count them separately or at the same time.

After applying v3 of Ravi's patch, if I try to count these events at
the same time, I get 36869 instructions retired and 4962 branch
instructions on the first run. On subsequent runs, perf refuses to
count both at the same time. I get branch instructions retired between
5565 and 5567, but no instructions retired. Instead, perf tells me:

Some events weren't counted. Try disabling the NMI watchdog:
	echo 0 > /proc/sys/kernel/nmi_watchdog
	perf stat ...
	echo 1 > /proc/sys/kernel/nmi_watchdog

If I just count one thing at a time (on the patched kernel), I get
between 21911 and 21916 instructions retired, and I get between 5565
and 5566 branch instructions retired.

I don't know under what circumstances the unfixed counters overcount
or by how much. However, for this simple test case, the fixed PMC2
yields the same results as any unfixed counter. Ravi's patch, however,
makes counting two of these events simultaneously either (a)
impossible, or (b) highly inaccurate (from 10% under to 68% over).

> The loss of accuracy is due to a reduction in the number of trustworthy
> counters, not to these two workaround patches. Any multiplexing (whether on
> the host or the guest) will result in a loss of accuracy. Right ?

Yes, that's my point. Fixing one inaccuracy by using a mechanism that
introduces another inaccuracy only makes sense if the inaccuracy you
are fixing is worse than the inaccuracy you are introducing. That does
not appear to be the case here, but I am not privy to all of the
details of this erratum.
Hi Jim,

On 10-Feb-22 3:10 AM, Jim Mattson wrote:
> On Wed, Feb 9, 2022 at 2:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
>> [...]
>>> If you count something like "instructions retired" three ways:
>>> 1) Unfixed counter
>>> 2) PMC2 with the fix
>>> 3) Multiplexed on PMC2 with the fix
>>>
>>> Is (3) always more accurate than (1)?
>
> Since Ravi has gone dark, I will answer my own question.

Sorry about the delay. I was discussing this internally with hw folks.

> For better reproducibility, I simplified his program to:
>
> int main() { return 0;}
>
> [...]
>
> I don't know under what circumstances the unfixed counters overcount
> or by how much. However, for this simple test case, the fixed PMC2
> yields the same results as any unfixed counter. Ravi's patch, however,
> makes counting two of these events simultaneously either (a)
> impossible, or (b) highly inaccurate (from 10% under to 68% over).

In further discussions with our hardware team, I am given to understand
that the conditions under which the overcounting can happen are quite
rare. In my tests, I've found that the patched vs. unpatched cases are
not different enough to warrant the restriction introduced by
this fix. I have requested Peter to hold off pushing this fix.

>> The loss of accuracy is due to a reduction in the number of trustworthy
>> counters, not to these two workaround patches. Any multiplexing (whether on
>> the host or the guest) will result in a loss of accuracy. Right ?
>
> Yes, that's my point. Fixing one inaccuracy by using a mechanism that
> introduces another inaccuracy only makes sense if the inaccuracy you
> are fixing is worse than the inaccuracy you are introducing. That does
> not appear to be the case here, but I am not privy to all of the
> details of this erratum.

Thanks,
Ravi
On 10/2/2022 12:06 pm, Ravi Bangoria wrote:
> Hi Jim,
>
> On 10-Feb-22 3:10 AM, Jim Mattson wrote:
>> On Wed, Feb 9, 2022 at 2:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
>>> [...]
>>> The patch for errata #1292 (like most hw issues or vulnerabilities) should be
>>> applied to both the host and guest.
>>
>> As I'm sure you are aware, guests are often not patched. For example,

It's true. What a real world.

>> we have a lot of Debian-9 guests running on Milan, despite the fact
>> that it has to be booted with "nopcid" due to a bug introduced on
>> 4.9-stable. We submitted the fix and notified Debian about a year ago,
>> but they have not seen fit to cut a new kernel. Do you think they will
>> cut a new kernel for this patch?

Indeed, thanks for your user stories.

>>> For non-patched guests on a patched host, the KVM-created perf_events
>>> will be true for is_sampling_event() due to get_sample_period().
>>>
>>> I think we (KVM) have a congenital defect in distinguishing whether guest
>>> counters are used in counting mode or sampling mode, which is just
>>> a different use of pure software.
>>
>> I have no idea what you are saying. However, when kvm sees a guest
>> counter used in sampling mode, it will just request a PERF_TYPE_RAW
>> perf event with the INT bit set in 'config.' If it sees a guest

The counters work very simply: they increment until they overflow. The
use of the INT bit is not related to counting or sampling mode. A pmu
driver can set the INT bit, but set a very small ctr value and not
expect it to overflow, and it can be used for counting mode as well,
right?

We don't know under what circumstances the overcount will occur, maybe
it's related to the INT bit and maybe not, but absolutely it's nothing
to do with the software check is_sampling_event().

>> counter used in counting mode, it will either request a PERF_TYPE_RAW
>> perf event or a PERF_TYPE_HARDWARE perf event, depending on whether or
>> not it finds the requested event in amd_event_mapping[].
>>
>> [...]
>>
>> Since Ravi has gone dark, I will answer my own question.
>
> Sorry about the delay. I was discussing this internally with hw folks.
>
>> [...]
>>
>> I don't know under what circumstances the unfixed counters overcount
>> or by how much. However, for this simple test case, the fixed PMC2
>> yields the same results as any unfixed counter. Ravi's patch, however,
>> makes counting two of these events simultaneously either (a)
>> impossible, or (b) highly inaccurate (from 10% under to 68% over).
>
> In further discussions with our hardware team, I am given to understand
> that the conditions under which the overcounting can happen are quite
> rare. In my tests, I've found that the patched vs. unpatched cases are
> not different enough to warrant the restriction introduced by

That's cute and thank you both. I hope we can come to this conclusion
before the code is committed. But the kvm patch may have made those PMU
driver developers who read erratum #1292 a little happier, wouldn't it ?

> this fix. I have requested Peter to hold off pushing this fix.
>
>>> The loss of accuracy is due to a reduction in the number of trustworthy
>>> counters, not to these two workaround patches. Any multiplexing (whether on
>>> the host or the guest) will result in a loss of accuracy. Right ?
>>
>> Yes, that's my point. Fixing one inaccuracy by using a mechanism that
>> introduces another inaccuracy only makes sense if the inaccuracy you
>> are fixing is worse than the inaccuracy you are introducing. That does

Couldn't agree more, and in response to similar issues, we may adopt a
quantitative-first strategy in the future.

>> not appear to be the case here, but I am not privy to all of the
>> details of this erratum.
>
> Thanks,
> Ravi
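[As background for the counting-versus-sampling point above: the host-side
check that the workaround in the patch below relies on is essentially the
following; this is a paraphrase of the helper in include/linux/perf_event.h
and the exact form may differ across kernel versions.]

	/* A perf event is a "sampling" event iff a sample period was
	 * requested; a pure counting event leaves attr.sample_period at 0.
	 * The workaround below is skipped for sampling events, so everything
	 * hinges on how the event was created. */
	static inline bool is_sampling_event(struct perf_event *event)
	{
		return event->attr.sample_period != 0;
	}

[Like's concern is that a KVM-created host event can carry a non-zero sample
period even when the guest only wants plain counting, so this check does not
necessarily reflect the guest's intent.]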
diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 9687a8aef01c..d4dc5ff35366 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -874,6 +874,24 @@ amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, int idx,
 	}
 }
 
+/* Overcounting of Retire Based Events Erratum */
+static struct event_constraint retire_event_constraints[] __read_mostly = {
+	EVENT_CONSTRAINT(0xC0, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC1, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC2, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC3, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC4, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC5, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC8, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xC9, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xCA, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xCC, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0xD1, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0x1000000C7, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT(0x1000000D0, 0x4, AMD64_EVENTSEL_EVENT),
+	EVENT_CONSTRAINT_END
+};
+
 static struct event_constraint pair_constraint;
 
 static struct event_constraint *
@@ -881,10 +899,30 @@ amd_get_event_constraints_f17h(struct cpu_hw_events *cpuc, int idx,
 			       struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	struct event_constraint *c;
 
 	if (amd_is_pair_event_code(hwc))
 		return &pair_constraint;
 
+	/*
+	 * Although 'Overcounting of Retire Based Events' erratum exists
+	 * for older generation cpus, workaround to set bit 43 works only
+	 * for Family 19h Model 00-0Fh as per the Revision Guide.
+	 */
+	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model <= 0xf) {
+		if (is_sampling_event(event))
+			goto out;
+
+		for_each_event_constraint(c, retire_event_constraints) {
+			if (constraint_match(c, event->hw.config)) {
+				event->hw.config |= (1ULL << 43);
+				event->hw.config &= ~(1ULL << 20);
+				return c;
+			}
+		}
+	}
+
+out:
 	return &unconstrained;
 }
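[A short aside on the magic numbers in the second hunk above. The names
below are invented for readability and are not in the patch; bit 20 is
assumed to be the usual APIC-interrupt-enable (INT) bit of PERF_CTL, per
the INT-bit discussion earlier in the thread, and bit 43 is the
revision-guide workaround bit. The counter mask 0x4 in the constraint table
is the counter-index mask with only bit 2 set, i.e. "PMC2 only".]

	/* Hypothetical names for the raw bits the patch sets/clears -- illustration only. */
	#define ERRATUM_1292_FIX_BIT	(1ULL << 43)	/* Core::X86::Msr::PERF_CTL2[43] = 1b */
	#define PERF_CTL_INT_BIT	(1ULL << 20)	/* Core::X86::Msr::PERF_CTL2[20] = 0b */

		event->hw.config |= ERRATUM_1292_FIX_BIT;	/* enable the workaround behaviour */
		event->hw.config &= ~PERF_CTL_INT_BIT;		/* no overflow interrupt: counting only */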
Perf counter may overcount for a list of Retire Based Events. Implement
workaround for Zen3 Family 19 Model 00-0F processors as suggested in
Revision Guide[1]:

  To count the non-FP affected PMC events correctly:
    o Use Core::X86::Msr::PERF_CTL2 to count the events, and
    o Program Core::X86::Msr::PERF_CTL2[43] to 1b, and
    o Program Core::X86::Msr::PERF_CTL2[20] to 0b.

Note that the specified workaround applies only to counting events and
not to sampling events. Thus sampling events will continue functioning
as is.

Although the issue exists on all previous Zen revisions, the workaround
is different and thus not included in this patch.

This patch needs Like's patch[2] to make it work on a kvm guest.

[1] https://bugzilla.kernel.org/attachment.cgi?id=298241
[2] https://lore.kernel.org/lkml/20220117055703.52020-1-likexu@tencent.com

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
v1: https://lore.kernel.org/r/20220202042838.6532-1-ravi.bangoria@amd.com
v1->v2:
 - Don't put any constraint on sampling events
 - s/errata/erratum/

 arch/x86/events/amd/core.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)