| Message ID | 20231030141728.1406118-1-nik.borisov@suse.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: x86: User mutex guards to eliminate __kvm_x86_vendor_init() |
On Mon, Oct 30, 2023, Nikolay Borisov wrote:
> Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as

superfluous

But this intro is actively misleading. The double-underscore variant most definitely
isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
after switching to guard(mutex).

IMO, this is one of the instances where the "problem, then solution" appoach is
counter-productive. If there are no objections, I'll massage the change log to
the below when applying (for 6.8, in a few weeks).

  Use the recently introduced guard(mutex) infrastructure acquire and
  automatically release vendor_module_lock when the guard goes out of scope.
  Drop the inner __kvm_x86_vendor_init(), its sole purpose was to simplify
  releasing vendor_module_lock in error paths.

  No functional change intended.

> the the underscore version doesn't have any other callers.
>
> Instead, use the newly added cleanup infrastructure to ensure that
> kvm_x86_vendor_init() holds the vendor_module_lock throughout its
> exectuion and that in case of error in the middle it's released. No
> functional changes.
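For readers who haven't used the cleanup infrastructure referred to above: the kernel's guard(mutex) is built on the compiler's cleanup attribute, which runs a callback when a variable leaves scope, so every return path releases the lock without an explicit unlock or a goto. The following user-space sketch is illustrative only; the pthread mutex and the GUARD_MUTEX macro are stand-ins for this explanation, not the kernel's <linux/cleanup.h> API, and it relies on the GCC/Clang cleanup attribute.

/*
 * Illustrative sketch of scope-based locking, assuming GCC or Clang.
 * The cleanup attribute runs unlock_on_scope_exit() when _guard goes
 * out of scope, so both the early return and the normal return below
 * drop the lock automatically.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vendor_module_lock = PTHREAD_MUTEX_INITIALIZER;
static int vendor_module_loaded;

static void unlock_on_scope_exit(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

static pthread_mutex_t *lock_and_track(pthread_mutex_t *m)
{
	pthread_mutex_lock(m);
	return m;
}

/* Rough analogue of guard(mutex)(&lock): lock now, unlock at scope exit. */
#define GUARD_MUTEX(lock) \
	pthread_mutex_t *_guard __attribute__((cleanup(unlock_on_scope_exit))) = \
		lock_and_track(lock)

static int vendor_init(void)
{
	GUARD_MUTEX(&vendor_module_lock);

	if (vendor_module_loaded)
		return -1;	/* early return: the guard still unlocks */

	vendor_module_loaded = 1;
	return 0;		/* normal return: unlocked here as well */
}

int main(void)
{
	printf("first init:  %d\n", vendor_init());	/* 0 */
	printf("second init: %d\n", vendor_init());	/* -1, lock not leaked */
	return 0;
}

With the manual pattern, each error path either needs a goto to a common unlock label or an outer wrapper that pairs lock/unlock around a worker function; the guard collapses that into a single declaration at the top of the function, which is why the inner __kvm_x86_vendor_init() becomes unnecessary.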
On 30.10.23 г. 18:07 ч., Sean Christopherson wrote:
> On Mon, Oct 30, 2023, Nikolay Borisov wrote:
>> Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
>
> superfluous
>
> But this intro is actively misleading. The double-underscore variant most definitely
> isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> after switching to guard(mutex).
>
> IMO, this is one of the instances where the "problem, then solution" appoach is
> counter-productive. If there are no objections, I'll massage the change log to
> the below when applying (for 6.8, in a few weeks).
>
> Use the recently introduced guard(mutex) infrastructure acquire and
> automatically release vendor_module_lock when the guard goes out of scope.
> Drop the inner __kvm_x86_vendor_init(), its sole purpose was to simplify
> releasing vendor_module_lock in error paths.
>
> No functional change intended.

Thanks, I'm fine with this changelog.

>
>> the the underscore version doesn't have any other callers.
>>
>> Instead, use the newly added cleanup infrastructure to ensure that
>> kvm_x86_vendor_init() holds the vendor_module_lock throughout its
>> exectuion and that in case of error in the middle it's released. No
>> functional changes.
>
On 10/30/23 17:07, Sean Christopherson wrote:
> On Mon, Oct 30, 2023, Nikolay Borisov wrote:
>> Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
>
> superfluous
>
> But this intro is actively misleading. The double-underscore variant most definitely
> isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> after switching to guard(mutex).
>
> IMO, this is one of the instances where the "problem, then solution" appoach is
> counter-productive. If there are no objections, I'll massage the change log to
> the below when applying (for 6.8, in a few weeks).

I think this is a "Speak Now or Forever Rest in Peace" situation. I'm going
to wait a couple days more for reviews to come in, post a v14 myself, and
apply the series to kvm/next as soon as Linus merges the 6.7 changes. The
series will be based on the 6.7 tags/for-linus, and when 6.7-rc1 comes up,
I'll do this to straighten the history:

  git checkout kvm/next
  git tag -s -f kvm-gmem HEAD
  git reset --hard v6.7-rc1
  git merge tags/kvm-gmem
  # fix conflict with Christian Brauner's VFS series
  git commit
  git push kvm

6.8 is not going to be out for four months, and I'm pretty sure that
anything discovered within "a few weeks" can be applied on top, and the
heaviness of a 35-patch series will outweigh any imperfections by a long
margin).

(Full disclosure: this is _also_ because I want to apply this series to the
RHEL kernel, and Red Hat has a high level of disdain for non-upstream
patches. But it's mostly because I want all dependencies to be able to move
on and be developed on top of stock kvm/next).

Paolo
On Mon, Oct 30, 2023, Paolo Bonzini wrote:
> On 10/30/23 17:07, Sean Christopherson wrote:
> > On Mon, Oct 30, 2023, Nikolay Borisov wrote:
> > > Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
> >
> > superfluous
> >
> > But this intro is actively misleading. The double-underscore variant most definitely
> > isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> > of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> > after switching to guard(mutex).
> >
> > IMO, this is one of the instances where the "problem, then solution" appoach is
> > counter-productive. If there are no objections, I'll massage the change log to
> > the below when applying (for 6.8, in a few weeks).
>
> I think this is a "Speak Now or Forever Rest in Peace" situation. I'm going
> to wait a couple days more for reviews to come in, post a v14 myself, and
> apply the series to kvm/next as soon as Linus merges the 6.7 changes. The
> series will be based on the 6.7 tags/for-linus, and when 6.7-rc1 comes up,
> I'll do this to straighten the history:

Heh, I'm pretty sure you meant to respond to the guest_memfd series.

> git checkout kvm/next
> git tag -s -f kvm-gmem HEAD
> git reset --hard v6.7-rc1
> git merge tags/kvm-gmem
> # fix conflict with Christian Brauner's VFS series
> git commit
> git push kvm
>
> 6.8 is not going to be out for four months, and I'm pretty sure that
> anything discovered within "a few weeks" can be applied on top, and the
> heaviness of a 35-patch series will outweigh any imperfections by a long
> margin).
>
> (Full disclosure: this is _also_ because I want to apply this series to the
> RHEL kernel, and Red Hat has a high level of disdain for non-upstream
> patches. But it's mostly because I want all dependencies to be able to move
> on and be developed on top of stock kvm/next).
On 10/30/23 18:36, Sean Christopherson wrote:
>>> If there are no objections, I'll massage the change log to
>>> the below when applying (for 6.8, in a few weeks).
>>
>> I think this is a "Speak Now or Forever Rest in Peace" situation. I'm going
>> to wait a couple days more for reviews to come in, post a v14 myself, and
>> apply the series to kvm/next as soon as Linus merges the 6.7 changes. The
>> series will be based on the 6.7 tags/for-linus, and when 6.7-rc1 comes up,
>> I'll do this to straighten the history:
>
> Heh, I'm pretty sure you meant to respond to the guest_memfd series.

Well, it was the "in a few weeks" that almost caused me a panic attack. :)
But yeah, I soon got to the conclusion that this required a wider diffusion
and reposted there.

Paolo
On Mon, 2023-10-30 at 18:17 +0200, Nikolay Borisov wrote:
>
> On 30.10.23 г. 18:07 ч., Sean Christopherson wrote:
> > On Mon, Oct 30, 2023, Nikolay Borisov wrote:
> > > Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
> >
> > superfluous
> >
> > But this intro is actively misleading. The double-underscore variant most definitely
> > isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> > of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> > after switching to guard(mutex).
> >
> > IMO, this is one of the instances where the "problem, then solution" appoach is
> > counter-productive. If there are no objections, I'll massage the change log to
> > the below when applying (for 6.8, in a few weeks).
> >
> > Use the recently introduced guard(mutex) infrastructure acquire and
> > automatically release vendor_module_lock when the guard goes out of scope.
> > Drop the inner __kvm_x86_vendor_init(), its sole purpose was to simplify
> > releasing vendor_module_lock in error paths.
> >
> > No functional change intended.
>
> Thanks, I'm fine with this changelog.
>

Reviewed-by: Kai Huang <kai.huang@intel.com>
On 30.10.23 г. 18:07 ч., Sean Christopherson wrote:
> On Mon, Oct 30, 2023, Nikolay Borisov wrote:
>> Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
>
> superfluous
>
> But this intro is actively misleading. The double-underscore variant most definitely
> isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> after switching to guard(mutex).
>
> IMO, this is one of the instances where the "problem, then solution" appoach is
> counter-productive. If there are no objections, I'll massage the change log to
> the below when applying (for 6.8, in a few weeks).
>
> Use the recently introduced guard(mutex) infrastructure acquire and
> automatically release vendor_module_lock when the guard goes out of scope.
> Drop the inner __kvm_x86_vendor_init(), its sole purpose was to simplify
> releasing vendor_module_lock in error paths.
>
> No functional change intended.
>
>> the the underscore version doesn't have any other callers.
>>

Has this fallen through the cracks as I don't see it in 6.7?
On Sat, Dec 09, 2023, Nikolay Borisov wrote:
>
> On 30.10.23 г. 18:07 ч., Sean Christopherson wrote:
> > On Mon, Oct 30, 2023, Nikolay Borisov wrote:
> > > Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
> >
> > superfluous
> >
> > But this intro is actively misleading. The double-underscore variant most definitely
> > isn't superfluous, e.g. it eliminates the need for gotos reduces the probability
> > of incorrect error codes, bugs in the error handling, etc. It _becomes_ superflous
> > after switching to guard(mutex).
> >
> > IMO, this is one of the instances where the "problem, then solution" appoach is
> > counter-productive. If there are no objections, I'll massage the change log to
> > the below when applying (for 6.8, in a few weeks).
> >
> > Use the recently introduced guard(mutex) infrastructure acquire and
> > automatically release vendor_module_lock when the guard goes out of scope.
> > Drop the inner __kvm_x86_vendor_init(), its sole purpose was to simplify
> > releasing vendor_module_lock in error paths.
> >
> > No functional change intended.
> >
> > > the the underscore version doesn't have any other callers.
>
> Has this fallen through the cracks as I don't see it in 6.7?

As above, I have this tagged for inclusion in 6.8, not 6.7. Though admittedly,
this one did actually fall through the cracks as I moved it to the wrong mailbox
when Paolo usurped the thread for unrelated guest_memfd stuff.

Anyways, I do plan on grabbing this for 6.8, I'm just buried in non-upstream
stuff right now.
On Mon, 30 Oct 2023 16:17:28 +0200, Nikolay Borisov wrote:
> Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
> the the underscore version doesn't have any other callers.
>
> Instead, use the newly added cleanup infrastructure to ensure that
> kvm_x86_vendor_init() holds the vendor_module_lock throughout its
> exectuion and that in case of error in the middle it's released. No
> functional changes.
>
> [...]

Applied to kvm-x86 misc, thanks!

[1/1] KVM: x86: Use mutex guards to eliminate __kvm_x86_vendor_init()
      https://github.com/kvm-x86/linux/commit/955997e88017

--
https://github.com/kvm-x86/linux/tree/next
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 41cce5031126..cd7c2d0f88cb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9446,11 +9446,13 @@ static void kvm_x86_check_cpu_compat(void *ret)
 	*(int *)ret = kvm_x86_check_processor_compatibility();
 }
 
-static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
+int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 {
 	u64 host_pat;
 	int r, cpu;
 
+	guard(mutex)(&vendor_module_lock);
+
 	if (kvm_x86_ops.hardware_enable) {
 		pr_err("already loaded vendor module '%s'\n", kvm_x86_ops.name);
 		return -EEXIST;
@@ -9580,17 +9582,6 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	kmem_cache_destroy(x86_emulator_cache);
 	return r;
 }
-
-int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
-{
-	int r;
-
-	mutex_lock(&vendor_module_lock);
-	r = __kvm_x86_vendor_init(ops);
-	mutex_unlock(&vendor_module_lock);
-
-	return r;
-}
 EXPORT_SYMBOL_GPL(kvm_x86_vendor_init);
 
 void kvm_x86_vendor_exit(void)
Current separation between (__){0,1}kvm_x86_vendor_init() is superfluos as
the the underscore version doesn't have any other callers.

Instead, use the newly added cleanup infrastructure to ensure that
kvm_x86_vendor_init() holds the vendor_module_lock throughout its
exectuion and that in case of error in the middle it's released. No
functional changes.

Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/kvm/x86.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)