Message ID | 20250119110410.GAZ4zcKkx5sCjD5XvH@fat_crate.local
---|---
State | New
Series | [GIT,PULL] sched/urgent for v6.13
The pull request you sent on Sun, 19 Jan 2025 12:04:10 +0100:
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip tags/sched_urgent_for_v6.13
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/8ff6d472ab35d5cb9a3941a1fcd5b7cbc9338c7f
Thank you!
Hi Prateek,

Thank you for the analysis details!

> Thank you for the reproducer. I haven't tried it yet (in part due
> to the slightly scary "Assumptions" section)

It wasn't meant to be scary, my apologies. It is meant to say that the
reproducer will only perform testing-related tasks (which you'd normally
do manually), without touching the infrastructure (firewall, networking,
instance management, etc). As long as you set all that up the same way
you do when you test manually, you will be fine. I'll clarify the README.

Should you run into any questions, please do not hesitate to contact me
directly, and I'll help clear the path.

> v6.14-rc1                      baseline
> v6.5.0 (pre-EEVDF)             -0.95%
> v6.14-rc1 + NO_PL + NO_RTP     +6.06%

This is interesting. While you do reproduce the benefits of NO_PL + NO_RTP,
your result shows no regression compared to the baseline CFS. I'm only
speculating, but running both the SUT and the loadgen on the same machine
is a large variation of the test setup, and can lead to result differences
like this one.

> Digging through the scripts, I found that SCHED_BATCH setting is done
> via systemd in [3] via the "CPUSchedulingPolicy" parameter.
>
> [3] https://github.com/aws/repro-collection/blob/main/workloads/mysql/files/mysqld.service.tmpl

That is correct, the reproducer uses systemd to set the scheduler policy
for mysqld.

> interestingly, if I do (version 1): [...]
> I more or less get the same results as baseline v6.14-rc1 (Weird!)
>
> But then if I do (version 2): [...]
> I see the performance reach to the same level as that with NO_PL +
> NO_RTP.

That's a good find. I will compare on my setup whether performance changes
when manually setting all mysqld tasks to SCHED_BATCH. And I haven't yet
run perf sched stats on the reproducer, but it may hold useful insight.
I'll follow up with more details as I gather them.

Your find also helps to point out that even when it works, SCHED_BATCH is
a more complex and error prone mitigation than just disabling PL and RTP.
The same reproducer setup that uses systemd to set SCHED_BATCH does show
improvement in 6.12, but not in 6.13+. There may not even be a single
approach that works well on both.

Conversely, setting NO_PLACE_LAG + NO_RUN_TO_PARITY is simply done at boot
time, and does not require further user effort. It's even simpler if those
two features are exposed via sysctl, making it trivial to persist and
query them with standard Linux commands as needed.

Peter, I've renewed my initial patch so it applies to the current
sched/core, and removed the dependency on changing the default values
first. I'd appreciate you considering it for merging [1].

[1] https://lore.kernel.org/20250212053644.14787-1-cpru@amazon.com

-Cristian
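The two mitigations compared above can be made concrete. First, "manually
setting all mysqld tasks to SCHED_BATCH": a minimal sketch of the external
approach, assuming the mysqld pid is passed on the command line. It is not
taken from the reproducer; it only illustrates that sched_setscheduler()
acts on a single thread, so every tid under /proc/<pid>/task has to be
visited (systemd's CPUSchedulingPolicy side-steps this by applying the
policy to the service process before it starts, so its threads inherit it).

```c
/*
 * Sketch only, not from the reproducer: externally switch every thread of
 * an already-running process to SCHED_BATCH, roughly what running chrt in
 * batch mode on each tid under /proc/<pid>/task would do.
 */
#define _GNU_SOURCE             /* for SCHED_BATCH */
#include <dirent.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	struct sched_param sp = { .sched_priority = 0 }; /* required for BATCH */
	struct dirent *de;
	char path[64];
	DIR *dir;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/task", argv[1]);
	dir = opendir(path);
	if (!dir) {
		perror("opendir");
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		pid_t tid = atoi(de->d_name);    /* "." and ".." parse to 0 */

		if (tid <= 0)
			continue;
		/* sched_setscheduler() acts on one thread, hence the loop. */
		if (sched_setscheduler(tid, SCHED_BATCH, &sp) == -1)
			perror("sched_setscheduler");
	}
	closedir(dir);
	return 0;
}
```

Threads spawned while the loop runs can be missed, which is part of why
setting the policy externally is more fiddly than having systemd (or the
program itself) request it before any worker threads exist.

Second, "just disabling PL and RTP" normally means clearing the PLACE_LAG
and RUN_TO_PARITY scheduler features through debugfs. A minimal sketch,
assuming debugfs is mounted at /sys/kernel/debug and the features file is
available there; the same effect is usually achieved from a boot script
with "echo NO_PLACE_LAG > /sys/kernel/debug/sched/features".

```c
/*
 * Sketch only: clear PLACE_LAG and RUN_TO_PARITY at runtime. A "NO_"
 * prefix clears a feature; the features file takes one name per write.
 */
#include <stdio.h>

static int clear_sched_feature(const char *no_feature)
{
	FILE *f = fopen("/sys/kernel/debug/sched/features", "w");

	if (!f) {
		perror("fopen");
		return -1;
	}
	fprintf(f, "%s\n", no_feature);
	return fclose(f);
}

int main(void)
{
	int ret = 0;

	ret |= clear_sched_feature("NO_PLACE_LAG");
	ret |= clear_sched_feature("NO_RUN_TO_PARITY");
	return ret ? 1 : 0;
}
```

Run once from an init script or a oneshot unit, this is the "set at boot
time" approach described above; exposing the two knobs via sysctl, as
suggested in the mail, would additionally let them be persisted and
queried with the standard sysctl tooling.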
On Tue, Feb 11, 2025 at 11:41:13PM -0600, Cristian Prundeanu wrote:
> Your find also helps to point out that even when it works, SCHED_BATCH is
> a more complex and error prone mitigation than just disabling PL and RTP.
> The same reproducer setup that uses systemd to set SCHED_BATCH does show
> improvement in 6.12, but not in 6.13+. There may not even be a single
> approach that works well on both.
>
> Conversely, setting NO_PLACE_LAG + NO_RUN_TO_PARITY is simply done at boot
> time, and does not require further user effort.

For your workload. It will wreck other workloads.

Yes, SCHED_BATCH might be more fiddly, but it allows for composition. You
can run multiple workloads together and they all behave.

Maybe the right thing here is to get mysql patched, so that it will
request BATCH itself for the threads that need it.
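A rough illustration of that suggestion follows; set_batch_policy() is a
hypothetical helper, not actual MySQL code. Since sched_setscheduler()
with a pid of 0 affects only the calling thread, each thread that wants
batch behaviour can opt itself in while the rest of the system is left
untouched.

```c
/*
 * Hypothetical helper, not actual MySQL code: a thread opts itself into
 * SCHED_BATCH. With pid == 0, sched_setscheduler() applies to the calling
 * thread only, so each worker that needs it calls this once at startup.
 */
#define _GNU_SOURCE             /* for SCHED_BATCH */
#include <sched.h>
#include <stdio.h>

static void set_batch_policy(void)
{
	struct sched_param sp = { .sched_priority = 0 }; /* must be 0 for BATCH */

	if (sched_setscheduler(0, SCHED_BATCH, &sp) == -1)
		perror("sched_setscheduler(SCHED_BATCH)");
}
```

Threads created afterwards with glibc's default pthread attributes inherit
the policy, so calling this early in the worker-spawning path keeps the
change local to the code that knows which threads are batch-like, which is
the composition argument made above.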