
[v4,0/6] LoongArch: Add pv ipi support on LoongArch VM

Message ID 20240201031950.3225626-1-maobibo@loongson.cn (mailing list archive)

Message

maobibo Feb. 1, 2024, 3:19 a.m. UTC
This patchset adds pv ipi support for VMs. On a physical machine, the IPI
hardware uses IOCSR registers; however, every vcpu access to an IOCSR
register traps into the hypervisor when the system runs in VM mode. SWI is
an interrupt mechanism similar to SGI on ARM: software can send an
interrupt to a CPU, except that on LoongArch SWI can currently only be
sent to the local CPU. So SWI cannot be used for IPI on a real hardware
system, but it can be used in a VM when combined with the hypercall
method. This patchset uses the SWI interrupt as the IPI mechanism, and
injects SWI via hypercall. Sending an IPI then takes one trap, while
receiving an IPI takes no trap at all; with the IOCSR hardware ipi method,
receiving an IPI takes two traps into the hypervisor.

Also this patchset adds IPI multicast support for VMs; the idea comes from
x86 pv ipi. An IPI can be sent to up to 128 vcpus at one time.
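As a rough illustration of the multicast encoding (a sketch only, with
hypothetical names -- not the actual kernel code in this series), the sender
can take the lowest destination cpu id as a base and record every
destination as a bit offset from that base, so that a single hypercall
payload covers up to 128 vcpus:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of a 128-vcpu pv ipi multicast payload:
 * bit i of the bitmap set means cpu (min + i) is a target.
 */
struct pv_ipi_msg {
	uint64_t bitmap[2];	/* up to 128 vcpus per hypercall */
	int min;		/* lowest destination cpu id (bit 0) */
};

static int pv_pack_ipi(const int *cpus, int n, struct pv_ipi_msg *msg)
{
	int i, off;

	msg->bitmap[0] = msg->bitmap[1] = 0;

	/* Find the lowest destination cpu id; it becomes the base. */
	msg->min = cpus[0];
	for (i = 1; i < n; i++)
		if (cpus[i] < msg->min)
			msg->min = cpus[i];

	/* Record each destination as a bit offset from the base. */
	for (i = 0; i < n; i++) {
		off = cpus[i] - msg->min;
		if (off >= 128)
			return -1;	/* too wide for one hypercall */
		msg->bitmap[off / 64] |= 1ULL << (off % 64);
	}
	return 0;
}
```

The sender would then pass such a payload to the hypervisor in one
hypercall, and the hypervisor injects SWI into each targeted vcpu.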

Here is microbenchmark data for the perf bench futex wake case on a 3C5000
single-way machine; the machine has 16 cpus and the VM also has 16 vcpus.
The benchmark reports the time in ms needed to wake up 16 threads;
smaller numbers mean better performance.

perf bench futex wake, Wokeup 16 of 16 threads in ms
--physical machine--   --VM original--   --VM with pv ipi patch--
  0.0176 ms               0.1140 ms            0.0481 ms

---
Change in V4:
  1. Modify the pv ipi hook function names call_func_ipi() and
call_func_single_ipi() to send_ipi_mask()/send_ipi_single(), since pv
ipi is used for both remote function calls and reschedule notification.
  2. Refresh changelog.

Change in V3:
  1. Add 128 vcpu ipi multicast support like x86
  2. Change cpucfg base address from 0x10000000 to 0x40000000, in order
to avoid conflicts with future hw usage
  3. Adjust patch order in this patchset, move patch
Refine-ipi-ops-on-LoongArch-platform to the first one.

Change in V2:
  1. Add hw cpuid map support since ipi routing uses hw cpuid
  2. Refine changelog description
  3. Add hypercall statistic support for vcpu
  4. Set percpu pv ipi message buffer aligned with cacheline
  5. Refine pv ipi send logic: do not send a new ipi message if there is
already a pending ipi message.
---

Bibo Mao (6):
  LoongArch/smp: Refine ipi ops on LoongArch platform
  LoongArch: KVM: Add hypercall instruction emulation support
  LoongArch: KVM: Add cpucfg area for kvm hypervisor
  LoongArch: Add paravirt interface for guest kernel
  LoongArch: KVM: Add vcpu search support from physical cpuid
  LoongArch: Add pv ipi support on LoongArch system

 arch/loongarch/Kconfig                        |   9 +
 arch/loongarch/include/asm/Kbuild             |   1 -
 arch/loongarch/include/asm/hardirq.h          |   5 +
 arch/loongarch/include/asm/inst.h             |   1 +
 arch/loongarch/include/asm/irq.h              |  10 +-
 arch/loongarch/include/asm/kvm_host.h         |  27 +++
 arch/loongarch/include/asm/kvm_para.h         | 157 ++++++++++++++++++
 arch/loongarch/include/asm/kvm_vcpu.h         |   1 +
 arch/loongarch/include/asm/loongarch.h        |  11 ++
 arch/loongarch/include/asm/paravirt.h         |  27 +++
 .../include/asm/paravirt_api_clock.h          |   1 +
 arch/loongarch/include/asm/smp.h              |  31 ++--
 arch/loongarch/include/uapi/asm/Kbuild        |   2 -
 arch/loongarch/kernel/Makefile                |   1 +
 arch/loongarch/kernel/irq.c                   |  24 +--
 arch/loongarch/kernel/paravirt.c              | 154 +++++++++++++++++
 arch/loongarch/kernel/perf_event.c            |  14 +-
 arch/loongarch/kernel/setup.c                 |   2 +
 arch/loongarch/kernel/smp.c                   |  60 ++++---
 arch/loongarch/kernel/time.c                  |  12 +-
 arch/loongarch/kvm/exit.c                     | 125 ++++++++++++--
 arch/loongarch/kvm/vcpu.c                     |  94 ++++++++++-
 arch/loongarch/kvm/vm.c                       |  11 ++
 23 files changed, 678 insertions(+), 102 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_para.h
 create mode 100644 arch/loongarch/include/asm/paravirt.h
 create mode 100644 arch/loongarch/include/asm/paravirt_api_clock.h
 delete mode 100644 arch/loongarch/include/uapi/asm/Kbuild
 create mode 100644 arch/loongarch/kernel/paravirt.c


base-commit: 1bbb19b6eb1b8685ab1c268a401ea64380b8bbcb

Comments

WANG Xuerui Feb. 15, 2024, 10:11 a.m. UTC | #1
Hi,

On 2/1/24 11:19, Bibo Mao wrote:
> [snip]
>
> Here is the microbenchmarck data with perf bench futex wake case on 3C5000
> single-way machine, there are 16 cpus on 3C5000 single-way machine, VM
> has 16 vcpus also. The benchmark data is ms time unit to wakeup 16 threads,
> the performance is higher if data is smaller.
> 
> perf bench futex wake, Wokeup 16 of 16 threads in ms
> --physical machine--   --VM original--   --VM with pv ipi patch--
>    0.0176 ms               0.1140 ms            0.0481 ms
> 
> ---
> Change in V4:
>    1. Modfiy pv ipi hook function name call_func_ipi() and
> call_func_single_ipi() with send_ipi_mask()/send_ipi_single(), since pv
> ipi is used for both remote function call and reschedule notification.
>    2. Refresh changelog.
> 
> Change in V3:
>    1. Add 128 vcpu ipi multicast support like x86
>    2. Change cpucfg base address from 0x10000000 to 0x40000000, in order
> to avoid confliction with future hw usage
>    3. Adjust patch order in this patchset, move patch
> Refine-ipi-ops-on-LoongArch-platform to the first one.

Sorry for the late reply (and Happy Chinese New Year), and thanks for 
providing microbenchmark numbers! But it seems the more comprehensive 
CoreMark results were omitted (that's also absent in v3)? While the 
changes between v4 and v2 shouldn't be performance-sensitive IMO (I 
haven't checked carefully though), it could be better to showcase the 
improvements / non-harmfulness of the changes and make us confident in 
accepting the changes.
WANG Xuerui Feb. 15, 2024, 10:25 a.m. UTC | #2
On 2/15/24 18:11, WANG Xuerui wrote:
> Sorry for the late reply (and Happy Chinese New Year), and thanks for 
> providing microbenchmark numbers! But it seems the more comprehensive 
> CoreMark results were omitted (that's also absent in v3)? While the 

Of course the benchmark suite should be UnixBench instead of CoreMark. 
Lesson: don't multi-task code reviews, especially not after consuming 
beer -- a cup of coffee won't fully cancel the influence. ;-)
maobibo Feb. 17, 2024, 3:15 a.m. UTC | #3
On 2024/2/15 at 6:25 PM, WANG Xuerui wrote:
> On 2/15/24 18:11, WANG Xuerui wrote:
>> Sorry for the late reply (and Happy Chinese New Year), and thanks for 
>> providing microbenchmark numbers! But it seems the more comprehensive 
>> CoreMark results were omitted (that's also absent in v3)? While the 
> 
> Of course the benchmark suite should be UnixBench instead of CoreMark. 
> Lesson: don't multi-task code reviews, especially not after consuming 
> beer -- a cup of coffee won't fully cancel the influence. ;-)
> 
Where is the rule about benchmark choices like UnixBench/Coremark for ipi 
improvement?

Regards
Bibo Mao
WANG Xuerui Feb. 22, 2024, 9:34 a.m. UTC | #4
On 2/17/24 11:15, maobibo wrote:
> On 2024/2/15 at 6:25 PM, WANG Xuerui wrote:
>> On 2/15/24 18:11, WANG Xuerui wrote:
>>> Sorry for the late reply (and Happy Chinese New Year), and thanks for 
>>> providing microbenchmark numbers! But it seems the more comprehensive 
>>> CoreMark results were omitted (that's also absent in v3)? While the 
>>
>> Of course the benchmark suite should be UnixBench instead of CoreMark. 
>> Lesson: don't multi-task code reviews, especially not after consuming 
>> beer -- a cup of coffee won't fully cancel the influence. ;-)
>>
> Where is rule about benchmark choices like UnixBench/Coremark for ipi 
> improvement?

Sorry for the late reply. The rules are mostly unwritten, but in general 
you can think of the preference of benchmark suites as a matter of 
"effectiveness" -- the closer it is to some real workload in the wild, 
the better. Micro-benchmarks are okay for illustrating the point, but 
without demonstrating the impact on realistic workloads, a change could 
be "useless" in practice, or even decrease various performance metrics 
(be that throughput or latency or anything that matters in the particular 
case), yet get accepted without notice.
maobibo Feb. 22, 2024, 10:06 a.m. UTC | #5
On 2024/2/22 at 5:34 PM, WANG Xuerui wrote:
> On 2/17/24 11:15, maobibo wrote:
>> On 2024/2/15 at 6:25 PM, WANG Xuerui wrote:
>>> On 2/15/24 18:11, WANG Xuerui wrote:
>>>> Sorry for the late reply (and Happy Chinese New Year), and thanks 
>>>> for providing microbenchmark numbers! But it seems the more 
>>>> comprehensive CoreMark results were omitted (that's also absent in 
>>>> v3)? While the 
>>>
>>> Of course the benchmark suite should be UnixBench instead of 
>>> CoreMark. Lesson: don't multi-task code reviews, especially not after 
>>> consuming beer -- a cup of coffee won't fully cancel the influence. ;-)
>>>
>> Where is rule about benchmark choices like UnixBench/Coremark for ipi 
>> improvement?
> 
> Sorry for the late reply. The rules are mostly unwritten, but in general 
> you can think of the preference of benchmark suites as a matter of 
> "effectiveness" -- the closer it's to some real workload in the wild, 
> the better. Micro-benchmarks is okay for illustrating the points, but 
> without demonstrating the impact on realistic workloads, a change could 
> be "useless" in practice or even decrease various performance metrics 
> (be that throughput or latency or anything that matters in the certain 
> case), but get accepted without notice.
Yes, a micro-benchmark cannot represent the real world, but that does 
not mean UnixBench/Coremark must be run. You need to point out what the 
negative effect of the code is, or what realistic scenario might 
benefit, and then suggest a benchmark that is actually sensitive to IPIs 
rather than blindly naming UnixBench/Coremark.

Regards
Bibo Mao

>
WANG Xuerui Feb. 22, 2024, 10:13 a.m. UTC | #6
On 2/22/24 18:06, maobibo wrote:
> 
> 
>> On 2024/2/22 at 5:34 PM, WANG Xuerui wrote:
>> On 2/17/24 11:15, maobibo wrote:
>>> On 2024/2/15 at 6:25 PM, WANG Xuerui wrote:
>>>> On 2/15/24 18:11, WANG Xuerui wrote:
>>>>> Sorry for the late reply (and Happy Chinese New Year), and thanks 
>>>>> for providing microbenchmark numbers! But it seems the more 
>>>>> comprehensive CoreMark results were omitted (that's also absent in 
>>>>> v3)? While the 
>>>>
>>>> Of course the benchmark suite should be UnixBench instead of 
>>>> CoreMark. Lesson: don't multi-task code reviews, especially not 
>>>> after consuming beer -- a cup of coffee won't fully cancel the 
>>>> influence. ;-)
>>>>
>>> Where is rule about benchmark choices like UnixBench/Coremark for ipi 
>>> improvement?
>>
>> Sorry for the late reply. The rules are mostly unwritten, but in 
>> general you can think of the preference of benchmark suites as a 
>> matter of "effectiveness" -- the closer it's to some real workload in 
>> the wild, the better. Micro-benchmarks is okay for illustrating the 
>> points, but without demonstrating the impact on realistic workloads, a 
>> change could be "useless" in practice or even decrease various 
>> performance metrics (be that throughput or latency or anything that 
>> matters in the certain case), but get accepted without notice.
> yes, micro-benchmark cannot represent the real world, however it does 
> not mean that UnixBench/Coremark should be run. You need to point out 
> what is the negative effective from code, or what is the possible real 
> scenario which may benefit. And points out the reasonable benchmark 
> sensitive for IPIs rather than blindly saying UnixBench/Coremark.

I was not meaning to argue with you, nor was I implying that your 
changes "must be regressing things even though I didn't check myself" -- 
my point is, *any* comparison with realistic workload that shows the 
performance mostly unaffected inside/outside KVM, would give reviewers 
(and yourself too) much more confidence in accepting the change.

For me personally, a microbenchmark could be enough, because the only 
externally-visible change is the IPI mechanism overhead, but please 
consider other reviewers who may not be familiar enough with LoongArch 
to notice the "triviality". Also, given the 6-patch size of the series, 
it could hardly be considered "trivial".