Message ID: 20250411091631.954228-1-kevin.brodsky@arm.com
Series:     pkeys-based page table hardening
* Kevin Brodsky <kevin.brodsky@arm.com> wrote:

> Performance
> ===========
>
> Caveat: these numbers should be seen as a lower bound for the overhead
> of a real POE-based protection. The hardware checks added by POE are
> however not expected to incur significant extra overhead.
>
> +-------------------+----------------------------------+------------------+---------------+
> | Benchmark         | Result Class                     | Without batching | With batching |
> +===================+==================================+==================+===============+
> | mmtests/kernbench | elsp-64                          | 0.20%            | 0.20%         |
> |                   | syst-64                          | 1.62%            | 0.63%         |
> |                   | user-64                          | -0.04%           | 0.05%         |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/fork      | fork: p:1                        | (R) 225.56%      | -0.07%        |
> |                   | fork: p:512                      | (R) 254.32%      | 0.73%         |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/munmap    | munmap: p:1                      | (R) 24.49%       | 4.29%         |
> |                   | munmap: p:512                    | (R) 161.47%      | (R) 6.06%     |
> +-------------------+----------------------------------+------------------+---------------+
> | micromm/vmalloc   | fix_size_alloc_test: p:1, h:0    | (R) 14.80%       | (R) 11.85%    |
> |                   | fix_size_alloc_test: p:4, h:0    | (R) 38.42%       | (R) 10.47%    |
> |                   | fix_size_alloc_test: p:16, h:0   | (R) 64.74%       | (R) 6.41%     |
> |                   | fix_size_alloc_test: p:64, h:0   | (R) 79.98%       | (R) 3.24%     |
> |                   | fix_size_alloc_test: p:256, h:0  | (R) 85.46%       | (R) 2.77%     |
> |                   | fix_size_alloc_test: p:16, h:1   | (R) 47.89%       | 3.10%         |
> |                   | fix_size_alloc_test: p:64, h:1   | (R) 62.43%       | 3.36%         |
> |                   | fix_size_alloc_test: p:256, h:1  | (R) 64.30%       | (R) 2.68%     |
> |                   | random_size_alloc_test: p:1, h:0 | (R) 74.94%       | (R) 3.13%     |
> |                   | vm_map_ram_test: p:1, h:0        | (R) 30.53%       | (R) 26.20%    |
> +-------------------+----------------------------------+------------------+---------------+

So I had to look 3 times to figure out what the numbers mean: they are
the extra overhead from this hardening feature, measured in system time
percentage, right?

So "4.29%" means there's a 4.29% slowdown on that particular workload
when the feature is enabled. Maybe add an explanation to the next
iteration? :-)

Thanks,

	Ingo
On 11/04/2025 11:21, Ingo Molnar wrote:
> * Kevin Brodsky <kevin.brodsky@arm.com> wrote:
>
>> Performance
>> ===========
>>
>> Caveat: these numbers should be seen as a lower bound for the overhead
>> of a real POE-based protection. The hardware checks added by POE are
>> however not expected to incur significant extra overhead.
>>
>> [table snipped, see above]
>
> So I had to look 3 times to figure out what the numbers mean: they are
> the extra overhead from this hardening feature, measured in system time
> percentage, right?

These are relative increases compared to the baseline for this series
(described earlier on: 6.15-rc1 + 2 additional series). Real time is
measured, except for kernbench where all 3 measurements are provided.

> So "4.29%" means there's a 4.29% slowdown on that particular workload
> when the feature is enabled. Maybe add an explanation to the next
> iteration? :-)

Yes that's right. I thought it was clear from the description above but
evidently I was wrong :) I'll add a "plain text" reading like this one
in the next version.

I should also have mentioned which config was used, namely: defconfig +
CONFIG_KPKEYS_HARDENED_PGTABLES=y

- Kevin
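To make the table above concrete: each percentage is the relative
increase in (real) time over the baseline, i.e. (patched - baseline) /
baseline, so 4.29% means the patched run took 1.0429x as long as the
baseline. Below is a minimal, self-contained sketch of the batching
idea behind the "With batching" column. Note that
pgtables_writable_begin()/pgtables_writable_end() and the stubbed pte_t
are hypothetical stand-ins for illustration, not the series' actual
API.

/*
 * Sketch: hardened page tables are normally read-only via a pkey;
 * writing to them requires temporarily raising write permission.
 * Batching pays that cost once per range of updates instead of once
 * per update. All names are hypothetical; pte_t is stubbed so this
 * compiles on its own.
 */
#include <stdio.h>

typedef unsigned long pte_t;	/* stand-in for the kernel's pte_t */

/* Hypothetical helpers: toggle write access to page-table pages,
 * e.g. by updating the pkey permission register (POR_EL1 on arm64). */
static void pgtables_writable_begin(void) { /* grant write access */ }
static void pgtables_writable_end(void)   { /* revoke write access */ }

/* Unbatched: two pkey switches for every single PTE written. */
static void set_ptes_unbatched(pte_t *ptep, pte_t first, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++) {
		pgtables_writable_begin();
		ptep[i] = first + i;
		pgtables_writable_end();
	}
}

/* Batched: two pkey switches for the whole range of PTEs. */
static void set_ptes_batched(pte_t *ptep, pte_t first, unsigned int nr)
{
	pgtables_writable_begin();
	for (unsigned int i = 0; i < nr; i++)
		ptep[i] = first + i;
	pgtables_writable_end();
}

int main(void)
{
	static pte_t table[512];

	set_ptes_unbatched(table, 0x1000, 512);	/* 1024 pkey switches */
	set_ptes_batched(table, 0x1000, 512);	/* 2 pkey switches */

	/* Reading the numbers: munmap p:1 "with batching" at 4.29%. */
	double baseline = 1.0, patched = 1.0429;
	printf("overhead: %.2f%%\n",
	       (patched - baseline) / baseline * 100.0);
	return 0;
}

This matches the pattern in the table: workloads that rewrite many PTEs
at once (fork, munmap with p:512) see the largest wins from batching,
since the per-write switch cost is amortized across the whole range.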