
[PULL,5/8] KVM: async_pf: Provide additional direct page notification

Message ID 1391086429-43935-6-git-send-email-borntraeger@de.ibm.com (mailing list archive)
State New, archived

Commit Message

Christian Borntraeger Jan. 30, 2014, 12:53 p.m. UTC
From: Dominik Dingel <dingel@linux.vnet.ibm.com>

By setting a Kconfig option, the architecture can control how
guest notifications are presented by the async_pf backend.
The default is the existing batch mechanism, where the vcpu
thread pulls in this information.
In addition, there is now a direct mechanism that pushes the
information to the guest.
This way s390 can use an already existing architecture interface.

The vcpu thread should still call check_completion to clean up leftovers.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/x86/kvm/mmu.c       |  2 +-
 include/linux/kvm_host.h |  2 +-
 virt/kvm/Kconfig         |  4 ++++
 virt/kvm/async_pf.c      | 20 ++++++++++++++++++--
 4 files changed, 24 insertions(+), 4 deletions(-)

Comments

Paolo Bonzini Jan. 31, 2014, 11:38 a.m. UTC | #1
On 30/01/2014 13:53, Christian Borntraeger wrote:
> +static inline void kvm_async_page_present_async(struct kvm_vcpu *vcpu,
> +						struct kvm_async_pf *work)
> +{
> +#ifndef CONFIG_KVM_ASYNC_PF_SYNC
> +	kvm_arch_async_page_present(vcpu, work);
> +#endif
> +}
> +

This is not used; should it be used in kvm_check_async_pf_completion?

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Christian Borntraeger Jan. 31, 2014, 12:24 p.m. UTC | #2
On 31/01/14 12:38, Paolo Bonzini wrote:
> On 30/01/2014 13:53, Christian Borntraeger wrote:
>> +static inline void kvm_async_page_present_async(struct kvm_vcpu *vcpu,
>> +                        struct kvm_async_pf *work)
>> +{
>> +#ifndef CONFIG_KVM_ASYNC_PF_SYNC
>> +    kvm_arch_async_page_present(vcpu, work);
>> +#endif
>> +}
>> +
> 
> This is not used, should it be used in kvm_check_async_pf_completion?

Yes. Looks like this got mixed up when updating the branch from before
commit f2e106692d5189303997ad7b96de8d8123aa5613 (KVM: Drop FOLL_GET in
GUP when doing async page fault) on top of this commit. Do you want a
respin of the series or a patch on top?

Christian

Paolo Bonzini Jan. 31, 2014, 1:13 p.m. UTC | #3
On 31/01/2014 13:24, Christian Borntraeger wrote:
> On 31/01/14 12:38, Paolo Bonzini wrote:
>> On 30/01/2014 13:53, Christian Borntraeger wrote:
>>> +static inline void kvm_async_page_present_async(struct kvm_vcpu *vcpu,
>>> +                        struct kvm_async_pf *work)
>>> +{
>>> +#ifndef CONFIG_KVM_ASYNC_PF_SYNC
>>> +    kvm_arch_async_page_present(vcpu, work);
>>> +#endif
>>> +}
>>> +
>>
>> This is not used, should it be used in kvm_check_async_pf_completion?
>
> Yes. Looks like this got mixed up when updating the branch from before
> commit f2e106692d5189303997ad7b96de8d8123aa5613 (KVM: Drop FOLL_GET in
> GUP when doing async page fault) on top of this commit. Do you want a
> respin of the series or an on top patch?

It should only break s390, so it's your call. I think a respin would be
preferable for you. :)

Paolo

Christian Borntraeger Jan. 31, 2014, 1:32 p.m. UTC | #4
This fixup fixes patch 5 and makes it equivalent to the code that
went through testing. It looks like the missing call in patch 5 does
not cause real problems: x86 will simply inject the completion
via kvm_arch_async_page_present in kvm_check_async_pf_completion.
s390 will inject twice (synchronously in async_pf_execute, but also
asynchronously), but the guest OS can handle that (that's why I did
not catch this in my regression test after rebasing). So the only
visible effect is in the counters and in performance. We can handle
this as an add-on patch.

Dominik Dingel (1):
  KVM: async_pf: Add missing call for async page present

 virt/kvm/async_pf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
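The body of that one-liner is not reproduced in this archive, but given
the diffstat above (one insertion, one deletion in virt/kvm/async_pf.c)
and Paolo's question, it presumably swaps the direct call in
kvm_check_async_pf_completion for the config-guarded helper. A hedged
sketch, not the actual patch:

```diff
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
-		kvm_arch_async_page_present(vcpu, work);
+		kvm_async_page_present_async(vcpu, work);
```

With this change the completion path only notifies the guest when
CONFIG_KVM_ASYNC_PF_SYNC is not set, avoiding the double injection on
s390 that Christian describes.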
Paolo Bonzini Jan. 31, 2014, 2:58 p.m. UTC | #5
On 31/01/2014 14:32, Christian Borntraeger wrote:
> This fixup fixes patch 5 and makes it equivalent to the code that
> went through testing. Looks like the change from patch 5 does
> not cause real problems. x86 will simply inject the completion
> via kvm_arch_async_page_present in kvm_check_async_pf_completion.
> s390 will inject twice (sync in execute but also async), but the
> guest OS can handle that. (thats why I did not catch this in
> my regression test after rebasing). So the only visible effect is
> in the counters and in performance. We can handle this as an addon
> patch

Good!

I will pull and apply this patch as soon as -rc1 is out.

Paolo


Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e50425d..aaa60f3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3328,7 +3328,7 @@  static int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn)
 	arch.direct_map = vcpu->arch.mmu.direct_map;
 	arch.cr3 = vcpu->arch.mmu.get_cr3(vcpu);
 
-	return kvm_setup_async_pf(vcpu, gva, gfn, &arch);
+	return kvm_setup_async_pf(vcpu, gva, gfn_to_hva(vcpu->kvm, gfn), &arch);
 }
 
 static bool can_do_async_pf(struct kvm_vcpu *vcpu)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c0102ef..f5937b8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -192,7 +192,7 @@  struct kvm_async_pf {
 
 void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu);
 void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu);
-int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
+int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, unsigned long hva,
 		       struct kvm_arch_async_pf *arch);
 int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index fbe1a48..13f2d19 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -22,6 +22,10 @@  config KVM_MMIO
 config KVM_ASYNC_PF
        bool
 
+# Toggle to switch between direct notification and batch job
+config KVM_ASYNC_PF_SYNC
+       bool
+
 config HAVE_KVM_MSI
        bool
 
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 8631d9c..00980ab 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -28,6 +28,21 @@ 
 #include "async_pf.h"
 #include <trace/events/kvm.h>
 
+static inline void kvm_async_page_present_sync(struct kvm_vcpu *vcpu,
+					       struct kvm_async_pf *work)
+{
+#ifdef CONFIG_KVM_ASYNC_PF_SYNC
+	kvm_arch_async_page_present(vcpu, work);
+#endif
+}
+static inline void kvm_async_page_present_async(struct kvm_vcpu *vcpu,
+						struct kvm_async_pf *work)
+{
+#ifndef CONFIG_KVM_ASYNC_PF_SYNC
+	kvm_arch_async_page_present(vcpu, work);
+#endif
+}
+
 static struct kmem_cache *async_pf_cache;
 
 int kvm_async_pf_init(void)
@@ -69,6 +84,7 @@  static void async_pf_execute(struct work_struct *work)
 	down_read(&mm->mmap_sem);
 	get_user_pages(current, mm, addr, 1, 1, 0, NULL, NULL);
 	up_read(&mm->mmap_sem);
+	kvm_async_page_present_sync(vcpu, apf);
 	unuse_mm(mm);
 
 	spin_lock(&vcpu->async_pf.lock);
@@ -138,7 +154,7 @@  void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 	}
 }
 
-int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
+int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, unsigned long hva,
 		       struct kvm_arch_async_pf *arch)
 {
 	struct kvm_async_pf *work;
@@ -159,7 +175,7 @@  int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
 	work->wakeup_all = false;
 	work->vcpu = vcpu;
 	work->gva = gva;
-	work->addr = gfn_to_hva(vcpu->kvm, gfn);
+	work->addr = hva;
 	work->arch = *arch;
 	work->mm = current->mm;
 	atomic_inc(&work->mm->mm_count);