KVM: SVM: Hide SEV migration lockdep goo behind CONFIG_DEBUG_LOCK_ALLOC

Message ID 20220613214237.2538266-1-seanjc@google.com (mailing list archive)
State New, archived

Commit Message

Sean Christopherson June 13, 2022, 9:42 p.m. UTC
Wrap the manipulation of @role and the manual mutex_{release,acquire}()
invocations in CONFIG_PROVE_LOCKING=y to squash a clang-15 warning.  When
building with -Wunused-but-set-parameter and CONFIG_DEBUG_LOCK_ALLOC=n,
clang-15 sees there's no usage of @role in mutex_lock_killable_nested()
and yells.  PROVE_LOCKING selects DEBUG_LOCK_ALLOC, and the only reason
KVM manipulates @role is to make PROVE_LOCKING happy.
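
[For context, here is roughly how the _nested() API collapses when lockdep
is disabled, paraphrased from include/linux/mutex.h (simplified, not
verbatim kernel source).  With CONFIG_DEBUG_LOCK_ALLOC=n the macro discards
its subclass argument at preprocessing time, so every assignment to @role
is dead code and -Wunused-but-set-parameter fires:

/* Paraphrased from include/linux/mutex.h; simplified, not verbatim. */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/* Lockdep builds: the subclass (@role here) feeds lock-class tracking. */
extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
						   unsigned int subclass);
#else
/*
 * Non-lockdep builds: "subclass" evaporates in the preprocessor, so a
 * parameter that is only ever assigned and then passed here is, from
 * clang's point of view, set but never used.
 */
#define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
#endif
]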

To avoid true ugliness, use "i" and "j" to detect the first pass in the
loops; the "idx" field that's used by kvm_for_each_vcpu() is guaranteed
to be '0' on the first pass as it's simply the first entry in the vCPUs
XArray, which is fully KVM controlled.  kvm_for_each_vcpu() passes '0'
for xa_for_each_range()'s "start", and xa_for_each_range() will not enter
the loop if there's no entry at '0'.
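
[The zero-index guarantee follows from the iterator's own definition,
paraphrased here from include/linux/kvm_host.h (field names may differ
slightly across kernel versions):

/* Paraphrased from include/linux/kvm_host.h; treat as a sketch. */
#define kvm_for_each_vcpu(idx, vcpup, kvm)			\
	xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0,	\
			  atomic_read(&kvm->online_vcpus) - 1)

Because "start" is hard-coded to 0 and KVM populates the XArray densely
from index 0, the first iteration always sees idx == 0, which is what makes
"!i" / "j" safe stand-ins for the old "first" bool.]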

Fixes: 0c2c7c069285 ("KVM: SEV: Mark nested locking of vcpu->lock")
Reported-by: kernel test robot <lkp@intel.com>
Cc: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---

Compile tested only, still haven't figured out why SEV is busted on our
test systems with upstream kernels.  I also haven't verified this squashes
the clang-15 warning, so a thumbs up on that end would be helpful.

 arch/x86/kvm/svm/sev.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)


base-commit: 8baacf67c76c560fed954ac972b63e6e59a6fba0

Comments

Nathan Chancellor June 13, 2022, 9:51 p.m. UTC | #1
On Mon, Jun 13, 2022 at 09:42:37PM +0000, Sean Christopherson wrote:
> [...]
> 
> Compile tested only, still haven't figured out why SEV is busted on our
> test systems with upstream kernels.  I also haven't verified this squashes
> the clang-15 warning, so a thumbs up on that end would be helpful.

I can confirm that with the config that the kernel test robot provided,
the warning disappears with this patch. If it is useful:

Tested-by: Nathan Chancellor <nathan@kernel.org> # build

> [...]
Paolo Bonzini June 14, 2022, 4:47 p.m. UTC | #2
Queued, thanks.

For what it's worth, GCC also produces the warning with "make W=1".

Paolo

Patch

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 51fd985cf21d..309bcdb2f929 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1606,38 +1606,35 @@ static int sev_lock_vcpus_for_migration(struct kvm *kvm,
 {
 	struct kvm_vcpu *vcpu;
 	unsigned long i, j;
-	bool first = true;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		if (mutex_lock_killable_nested(&vcpu->mutex, role))
 			goto out_unlock;
 
-		if (first) {
+#ifdef CONFIG_PROVE_LOCKING
+		if (!i)
 			/*
 			 * Reset the role to one that avoids colliding with
 			 * the role used for the first vcpu mutex.
 			 */
 			role = SEV_NR_MIGRATION_ROLES;
-			first = false;
-		} else {
+		else
 			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
-		}
+#endif
 	}
 
 	return 0;
 
 out_unlock:
 
-	first = true;
 	kvm_for_each_vcpu(j, vcpu, kvm) {
 		if (i == j)
 			break;
 
-		if (first)
-			first = false;
-		else
+#ifdef CONFIG_PROVE_LOCKING
+		if (j)
 			mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
-
+#endif
 
 		mutex_unlock(&vcpu->mutex);
 	}
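
[For reference, this is approximately how sev_lock_vcpus_for_migration()
reads with the patch applied.  The second half of the signature and the
final return statement sit outside the hunk's context lines, so they are
reconstructed from the pre-patch code; treat this as a sketch, not
verbatim source.

static int sev_lock_vcpus_for_migration(struct kvm *kvm,
					enum sev_migration_role role)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mutex_lock_killable_nested(&vcpu->mutex, role))
			goto out_unlock;

#ifdef CONFIG_PROVE_LOCKING
		if (!i)
			/*
			 * Reset the role to one that avoids colliding with
			 * the role used for the first vcpu mutex.
			 */
			role = SEV_NR_MIGRATION_ROLES;
		else
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
#endif
	}

	return 0;

out_unlock:
	/* Unwind: drop every mutex taken before the failed acquisition. */
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (i == j)
			break;

#ifdef CONFIG_PROVE_LOCKING
		if (j)
			mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
#endif

		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;	/* outside the hunk; taken from the pre-patch code */
}
]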