
[PATCH 4/5] Fix hotremove of CPUs for KVM.

Message ID 1253839640-12695-5-git-send-email-zamsden@redhat.com (mailing list archive)
State New, archived

Commit Message

Zachary Amsden Sept. 25, 2009, 12:47 a.m. UTC
In the process of bringing down CPUs, the SVM / VMX structures associated
with those CPUs are not freed.  This may cause leaks when unloading and
reloading the KVM module, as only the structures associated with online
CPUs are cleaned up.  So, clean up all possible CPUs, not just online ones.

Signed-off-by: Zachary Amsden <zamsden@redhat.com>
---
 arch/x86/kvm/svm.c |    2 +-
 arch/x86/kvm/vmx.c |    7 +++++--
 2 files changed, 6 insertions(+), 3 deletions(-)
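
For reference, the leak described above plays out as in the sketch below (an illustration, not part of the patch; svm_cpu_init() is assumed to be the allocation-side counterpart of svm_cpu_uninit() in svm.c):

/*
 * 1. CPU 3 comes online   -> svm_cpu_init(3) allocates its per-cpu data.
 * 2. CPU 3 is hot-removed -> nothing frees that allocation.
 * 3. Module unload        -> for_each_online_cpu() never visits CPU 3,
 *                            so svm_cpu_uninit(3) is never called and
 *                            the allocation is leaked.
 *
 * Walking every possible CPU on unload closes that gap:
 */
static void svm_uninit_all_cpus(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		svm_cpu_uninit(cpu);	/* assumed to tolerate CPUs that
					   were never initialized */
}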

Comments

Avi Kivity Sept. 27, 2009, 8:54 a.m. UTC | #1
On 09/25/2009 03:47 AM, Zachary Amsden wrote:
> In the process of bringing down CPUs, the SVM / VMX structures associated
> with those CPUs are not freed.  This may cause leaks when unloading and
> reloading the KVM module, as only the structures associated with online
> CPUs are cleaned up.  So, clean up all possible CPUs, not just online ones.
>
> Signed-off-by: Zachary Amsden <zamsden@redhat.com>
> ---
>   arch/x86/kvm/svm.c |    2 +-
>   arch/x86/kvm/vmx.c |    7 +++++--
>   2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 8f99d0c..13ca268 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -525,7 +525,7 @@ static __exit void svm_hardware_unsetup(void)
>   {
>   	int cpu;
>
> -	for_each_online_cpu(cpu)
> +	for_each_possible_cpu(cpu)
>   		svm_cpu_uninit(cpu);
>
>   	__free_pages(pfn_to_page(iopm_base >> PAGE_SHIFT), IOPM_ALLOC_ORDER);
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index b8a8428..603bde3 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -1350,8 +1350,11 @@ static void free_kvm_area(void)
>   {
>   	int cpu;
>
> -	for_each_online_cpu(cpu)
> -		free_vmcs(per_cpu(vmxarea, cpu));
> +	for_each_possible_cpu(cpu)
> +		if (per_cpu(vmxarea, cpu)) {
> +			free_vmcs(per_cpu(vmxarea, cpu));
> +			per_cpu(vmxarea, cpu) = NULL;
> +		}
>   }
>
>   static __init int alloc_kvm_area(void)
>    

First, I'm not sure per_cpu works for possible but not actual cpus.  
Second, we now eagerly allocate but lazily free, leading to lots of ifs 
and buts.  I think the code can be cleaner by eagerly allocating and 
eagerly freeing.
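
(For illustration, the eager-allocate / eager-free shape being suggested could look roughly like the sketch below, using the cpu-hotplug notifier API of this kernel generation. alloc_vmcs_cpu() is assumed to be the existing per-CPU VMCS allocator in vmx.c, vmxarea is the per-cpu pointer from the patch, and error handling is kept minimal.)

static int __cpuinit vmx_cpu_callback(struct notifier_block *nb,
				      unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action) {
	case CPU_UP_PREPARE:
	case CPU_UP_PREPARE_FROZEN:
		/* Allocate before the CPU starts running guests. */
		per_cpu(vmxarea, cpu) = alloc_vmcs_cpu(cpu);
		if (!per_cpu(vmxarea, cpu))
			return NOTIFY_BAD;
		break;
	case CPU_UP_CANCELED:
	case CPU_UP_CANCELED_FROZEN:
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
		/* Free as soon as the CPU goes away, so module unload
		 * has nothing left to clean up lazily. */
		if (per_cpu(vmxarea, cpu)) {
			free_vmcs(per_cpu(vmxarea, cpu));
			per_cpu(vmxarea, cpu) = NULL;
		}
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block vmx_cpu_notifier __cpuinitdata = {
	.notifier_call = vmx_cpu_callback,
};

/* hardware_setup() would register_hotcpu_notifier(&vmx_cpu_notifier);
 * hardware_unsetup() would unregister it and free whatever remains. */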
Zachary Amsden Sept. 28, 2009, 1:42 a.m. UTC | #2
On 09/26/2009 10:54 PM, Avi Kivity wrote:
>
> First, I'm not sure per_cpu works for possible but not actual cpus.  
> Second, we now eagerly allocate but lazily free, leading to lots of 
> ifs and buts.  I think the code can be cleaner by eagerly allocating 
> and eagerly freeing.

Eager freeing requires a hotplug remove notification to the arch layer.  
I had done that originally, but I'm not sure.

How does per_cpu() work when defined in a module anyway?  The linker 
magic going on here evades a simple one-minute analysis.

Zach
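
(On the per_cpu() question, a rough sketch of the pattern in question follows; my understanding, not verified against the module loader, is that a module-defined per-cpu variable gets storage reserved in every possible CPU's per-cpu area at load time, which is also what makes the NULL check in the lazy-free version above safe. Names mirror vmx.c.)

#include <linux/percpu.h>

/* The module loader reserves a slot for this pointer for every possible
 * CPU when the module is loaded, initialized from the static initializer
 * (NULL here), so per_cpu(vmxarea, cpu) refers to valid storage even for
 * CPUs that never come online. */
static DEFINE_PER_CPU(struct vmcs *, vmxarea);

static void free_all_vmx_areas(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* Still NULL for CPUs whose slot was never populated. */
		if (per_cpu(vmxarea, cpu)) {
			free_vmcs(per_cpu(vmxarea, cpu));
			per_cpu(vmxarea, cpu) = NULL;
		}
	}
}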

Patch

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8f99d0c..13ca268 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -525,7 +525,7 @@  static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
+	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
 	__free_pages(pfn_to_page(iopm_base >> PAGE_SHIFT), IOPM_ALLOC_ORDER);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b8a8428..603bde3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1350,8 +1350,11 @@  static void free_kvm_area(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
-		free_vmcs(per_cpu(vmxarea, cpu));
+	for_each_possible_cpu(cpu)
+		if (per_cpu(vmxarea, cpu)) {
+			free_vmcs(per_cpu(vmxarea, cpu));
+			per_cpu(vmxarea, cpu) = NULL;
+		}
 }
 
 static __init int alloc_kvm_area(void)