Message ID | 1472797976-24210-2-git-send-email-nikunj@linux.vnet.ibm.com (mailing list archive)
---|---
State | New, archived
On Fri, 2 Sep 2016 12:02:53 +0530
Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:

> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> ---
>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index e5eca67..daea7a0 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>                               target_ulong *args)
>  {
>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
> +    target_ulong ret;
>
>      if ((opcode <= MAX_HCALL_OPCODE)
>          && ((opcode & 0x3) == 0)) {
>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>                 (opcode <= KVMPPC_HCALL_MAX)) {
>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      }
>

This will serialize all hypercalls, even when it is not needed... Isn't that
too much coarse-grained locking?

Cheers.

--
Greg
Greg Kurz <groug@kaod.org> writes:

> On Fri, 2 Sep 2016 12:02:53 +0530
> Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:
>
>> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
>> ---
>>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>>  1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index e5eca67..daea7a0 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>>                               target_ulong *args)
>>  {
>>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>> +    target_ulong ret;
>>
>>      if ((opcode <= MAX_HCALL_OPCODE)
>>          && ((opcode & 0x3) == 0)) {
>>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>>
>>          if (fn) {
>> -            return fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_lock_iothread();
>> +            ret = fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_unlock_iothread();
>> +            return ret;
>>          }
>>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>>                 (opcode <= KVMPPC_HCALL_MAX)) {
>>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>>
>>          if (fn) {
>> -            return fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_lock_iothread();
>> +            ret = fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_unlock_iothread();
>> +            return ret;
>>          }
>>      }
>>
>
> This will serialize all hypercalls, even when it is not needed... Isn't that
> too much coarse-grained locking?

You are right, I was thinking of doing this only for the emulation case, as
this is not needed for hardware acceleration.

Regards
Nikunj
On Fri, 02 Sep 2016 14:58:12 +0530
Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:

> Greg Kurz <groug@kaod.org> writes:
>
> > On Fri, 2 Sep 2016 12:02:53 +0530
> > Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:
> >
> >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> >> ---
> >>  hw/ppc/spapr_hcall.c | 11 +++++++++--
> >>  1 file changed, 9 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> >> index e5eca67..daea7a0 100644
> >> --- a/hw/ppc/spapr_hcall.c
> >> +++ b/hw/ppc/spapr_hcall.c
> >> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
> >>                               target_ulong *args)
> >>  {
> >>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
> >> +    target_ulong ret;
> >>
> >>      if ((opcode <= MAX_HCALL_OPCODE)
> >>          && ((opcode & 0x3) == 0)) {
> >>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
> >>
> >>          if (fn) {
> >> -            return fn(cpu, spapr, opcode, args);
> >> +            qemu_mutex_lock_iothread();
> >> +            ret = fn(cpu, spapr, opcode, args);
> >> +            qemu_mutex_unlock_iothread();
> >> +            return ret;
> >>          }
> >>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
> >>                 (opcode <= KVMPPC_HCALL_MAX)) {
> >>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
> >>
> >>          if (fn) {
> >> -            return fn(cpu, spapr, opcode, args);
> >> +            qemu_mutex_lock_iothread();
> >> +            ret = fn(cpu, spapr, opcode, args);
> >> +            qemu_mutex_unlock_iothread();
> >> +            return ret;
> >>          }
> >>      }
> >>
> >
> > This will serialize all hypercalls, even when it is not needed... Isn't that
> > too much coarse-grained locking?
>
> You are right, I was thinking of doing this only for the emulation case, as
> this is not needed for hardware acceleration.
>

Yes, at the very least. And even in the MTTCG case, shouldn't we serialize only
when we know I/O will actually happen?

> Regards
> Nikunj
>

Cheers.

--
Greg
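For illustration, finer-grained locking along the lines Greg suggests could mean
taking the lock inside the few handlers that actually touch device or machine
state, instead of around the whole dispatcher. A minimal sketch follows;
h_example_io and do_device_access are hypothetical names used only for this
example, not part of the posted patch or of QEMU:

    /* Illustrative sketch only: take the BQL inside a handler that really
     * performs I/O, so that pure-CPU hypercalls stay lock-free.
     * h_example_io and do_device_access are made-up names. */
    static target_ulong h_example_io(PowerPCCPU *cpu, sPAPRMachineState *spapr,
                                     target_ulong opcode, target_ulong *args)
    {
        target_ulong ret;

        qemu_mutex_lock_iothread();           /* serialize only the I/O part */
        ret = do_device_access(spapr, args);  /* hypothetical device access  */
        qemu_mutex_unlock_iothread();

        return ret;
    }

Handlers that only manipulate per-CPU register state would then keep running
without the lock.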
On 02.09.2016 08:32, Nikunj A Dadhania wrote:
> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> ---
>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index e5eca67..daea7a0 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>                               target_ulong *args)
>  {
>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
> +    target_ulong ret;
>
>      if ((opcode <= MAX_HCALL_OPCODE)
>          && ((opcode & 0x3) == 0)) {
>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>                 (opcode <= KVMPPC_HCALL_MAX)) {
>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      }

I think this will cause a deadlock when running on KVM since the lock is
already taken in kvm_arch_handle_exit() - which calls spapr_hypercall()!

 Thomas
Greg Kurz <groug@kaod.org> writes:

> On Fri, 02 Sep 2016 14:58:12 +0530
> Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:
>
>> Greg Kurz <groug@kaod.org> writes:
>>
>> > On Fri, 2 Sep 2016 12:02:53 +0530
>> > Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote:
>> >
>> >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
>> >> ---
>> >>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>> >>  1 file changed, 9 insertions(+), 2 deletions(-)
>> >>
>> >> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> >> index e5eca67..daea7a0 100644
>> >> --- a/hw/ppc/spapr_hcall.c
>> >> +++ b/hw/ppc/spapr_hcall.c
>> >> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>> >>                               target_ulong *args)
>> >>  {
>> >>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>> >> +    target_ulong ret;
>> >>
>> >>      if ((opcode <= MAX_HCALL_OPCODE)
>> >>          && ((opcode & 0x3) == 0)) {
>> >>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>> >>
>> >>          if (fn) {
>> >> -            return fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_lock_iothread();
>> >> +            ret = fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_unlock_iothread();
>> >> +            return ret;
>> >>          }
>> >>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>> >>                 (opcode <= KVMPPC_HCALL_MAX)) {
>> >>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>> >>
>> >>          if (fn) {
>> >> -            return fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_lock_iothread();
>> >> +            ret = fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_unlock_iothread();
>> >> +            return ret;
>> >>          }
>> >>      }
>> >>
>> >
>> > This will serialize all hypercalls, even when it is not needed... Isn't that
>> > too much coarse-grained locking?
>>
>> You are right, I was thinking of doing this only for the emulation case, as
>> this is not needed for hardware acceleration.
>>
>
> Yes, at the very least. And even in the MTTCG case, shouldn't we serialize only
> when we know I/O will actually happen?

Yes, I haven't yet figured out everything that would need protection apart from
I/O. I have started with coarse-grained locking and will fine-tune it once the
other issues are sorted out.

Regards,
Nikunj
Thomas Huth <thuth@redhat.com> writes:

> On 02.09.2016 08:32, Nikunj A Dadhania wrote:
>> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
>> ---
>>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>>  1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index e5eca67..daea7a0 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>>                               target_ulong *args)
>>  {
>>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>> +    target_ulong ret;
>>
>>      if ((opcode <= MAX_HCALL_OPCODE)
>>          && ((opcode & 0x3) == 0)) {
>>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>>
>>          if (fn) {
>> -            return fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_lock_iothread();
>> +            ret = fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_unlock_iothread();
>> +            return ret;
>>          }
>>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>>                 (opcode <= KVMPPC_HCALL_MAX)) {
>>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>>
>>          if (fn) {
>> -            return fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_lock_iothread();
>> +            ret = fn(cpu, spapr, opcode, args);
>> +            qemu_mutex_unlock_iothread();
>> +            return ret;
>>          }
>>      }
>
> I think this will cause a deadlock when running on KVM since the lock is
> already taken in kvm_arch_handle_exit() - which calls spapr_hypercall()!

Ouch, haven't tried this branch on KVM yet :(

Will change it to emulation-only, as suggested in my previous mails.

Regards,
Nikunj
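A minimal sketch of the emulation-only variant discussed above, assuming the
dispatcher can simply test tcg_enabled() before taking the lock. This is
illustrative, not the revised patch:

    /* Sketch only: take the BQL around the handler just in the TCG case.
     * Under KVM the lock is already held when kvm_arch_handle_exit() calls
     * spapr_hypercall(), so taking it again here would deadlock. */
    if (fn) {
        target_ulong ret;

        if (tcg_enabled()) {
            qemu_mutex_lock_iothread();
            ret = fn(cpu, spapr, opcode, args);
            qemu_mutex_unlock_iothread();
        } else {
            ret = fn(cpu, spapr, opcode, args);
        }
        return ret;
    }

An alternative with the same effect would be to guard on
qemu_mutex_iothread_locked() so the lock is only taken when the caller does not
already hold it, which likewise avoids the KVM deadlock Thomas points out.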
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index e5eca67..daea7a0 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
                              target_ulong *args)
 {
     sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
+    target_ulong ret;
 
     if ((opcode <= MAX_HCALL_OPCODE)
         && ((opcode & 0x3) == 0)) {
         spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
 
         if (fn) {
-            return fn(cpu, spapr, opcode, args);
+            qemu_mutex_lock_iothread();
+            ret = fn(cpu, spapr, opcode, args);
+            qemu_mutex_unlock_iothread();
+            return ret;
         }
     } else if ((opcode >= KVMPPC_HCALL_BASE) &&
                (opcode <= KVMPPC_HCALL_MAX)) {
         spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
 
         if (fn) {
-            return fn(cpu, spapr, opcode, args);
+            qemu_mutex_lock_iothread();
+            ret = fn(cpu, spapr, opcode, args);
+            qemu_mutex_unlock_iothread();
+            return ret;
         }
     }
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
---
 hw/ppc/spapr_hcall.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)