
[1/4] kvm tools: Add ioeventfd support

Message ID 1306492621-10208-1-git-send-email-levinsasha928@gmail.com (mailing list archive)
State New, archived

Commit Message

Sasha Levin May 27, 2011, 10:36 a.m. UTC
ioeventfd is a mechanism provided by KVM for receiving notifications
about reads and writes to PIO and MMIO areas within the guest.

Such notifications are useful if all we need to know is that
a specific area of memory has been changed, and we don't need
a heavyweight exit to happen.

The implementation uses epoll to scale to a large number of ioeventfds.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 tools/kvm/Makefile                |    1 +
 tools/kvm/include/kvm/ioeventfd.h |   27 ++++++++
 tools/kvm/ioeventfd.c             |  127 +++++++++++++++++++++++++++++++++++++
 tools/kvm/kvm-run.c               |    4 +
 4 files changed, 159 insertions(+), 0 deletions(-)
 create mode 100644 tools/kvm/include/kvm/ioeventfd.h
 create mode 100644 tools/kvm/ioeventfd.c
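
For reference, a minimal sketch of how a device might consume this API
once merged. The device name, port and callback below are hypothetical;
only struct ioevent, ioeventfd__add_event() and die() come from the tree.

	#include <sys/eventfd.h>

	#include "kvm/ioeventfd.h"
	#include "kvm/util.h"

	#define IOPORT_DUMMY		0x1000	/* hypothetical PIO register */

	static void dummy_notify(struct kvm *kvm, void *ptr)
	{
		/* Called from the ioeventfd thread when the guest kicks us. */
	}

	static void dummy_device__init(struct kvm *kvm)
	{
		struct ioevent ioevent;
		int fd;

		fd = eventfd(0, 0);
		if (fd < 0)
			die("Failed creating eventfd");

		ioevent = (struct ioevent) {
			.start			= IOPORT_DUMMY,	/* PIO address KVM watches */
			.len			= sizeof(u16),	/* width of the watched access */
			.datamatch		= 0,		/* value the guest must write */
			.event_fd		= fd,
			.ioevent_callback_fn	= dummy_notify,
			.kvm			= kvm,
			.ptr			= NULL,		/* private data for the callback */
		};

		ioeventfd__add_event(kvm, &ioevent);
	}

ioeventfd__add_event() copies the structure, so a stack-allocated
struct ioevent like the one above is enough.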

Comments

Stefan Hajnoczi May 27, 2011, 10:47 a.m. UTC | #1
On Fri, May 27, 2011 at 11:36 AM, Sasha Levin <levinsasha928@gmail.com> wrote:
> ioeventfd is a mechanism provided by KVM for receiving notifications about
> reads and writes to PIO and MMIO areas within the guest.
>
> Such notifications are useful if all we need to know is that
> a specific area of memory has been changed, and we don't need
> a heavyweight exit to happen.
>
> The implementation uses epoll to scale to a large number of ioeventfds.
>
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
>  tools/kvm/Makefile                |    1 +
>  tools/kvm/include/kvm/ioeventfd.h |   27 ++++++++
>  tools/kvm/ioeventfd.c             |  127 +++++++++++++++++++++++++++++++++++++
>  tools/kvm/kvm-run.c               |    4 +
>  4 files changed, 159 insertions(+), 0 deletions(-)
>  create mode 100644 tools/kvm/include/kvm/ioeventfd.h
>  create mode 100644 tools/kvm/ioeventfd.c

Did you run any benchmarks?

Stefan
Pekka Enberg May 27, 2011, 10:52 a.m. UTC | #2
On Fri, May 27, 2011 at 1:36 PM, Sasha Levin <levinsasha928@gmail.com> wrote:
> +void ioeventfd__start(void)
> +{
> +       pthread_t thread;
> +
> +       pthread_create(&thread, NULL, ioeventfd__thread, NULL);

Please be more careful with error handling. If an API call can fail,
there's almost never any reason to silently ignore it.

                        Pekka
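
For illustration, an error-checked variant along the lines Pekka is
asking for, reusing the die() helper the file already uses elsewhere -
a sketch, not the posted code:

	void ioeventfd__start(void)
	{
		pthread_t thread;

		/* pthread_create() returns an error number; don't drop it silently. */
		if (pthread_create(&thread, NULL, ioeventfd__thread, NULL) != 0)
			die("Failed creating ioeventfd thread");
	}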
Ingo Molnar May 27, 2011, 10:54 a.m. UTC | #3
* Sasha Levin <levinsasha928@gmail.com> wrote:

> ioeventfd is a mechanism provided by KVM for receiving notifications about
> reads and writes to PIO and MMIO areas within the guest.
>
> Such notifications are useful if all we need to know is that
> a specific area of memory has been changed, and we don't need
> a heavyweight exit to happen.
>
> The implementation uses epoll to scale to a large number of ioeventfds.

Nice! :-)

> +struct ioevent {
> +	u64			start;
> +	u8			len;

If that's an mmio address then it might be worth naming it 
ioevent->mmio_addr, ioevent->mmio_end.

> +	void			(*ioevent_callback_fn)(struct kvm *kvm, void *ptr);

Please only 'fn', we already know this is an ioevent.

> +	struct kvm		*kvm;
> +	void			*ptr;

what is the purpose of the pointer?

AFAICS it's the private data of the callback function. In such cases 
please name them in a harmonizing fashion, such as:

	void			(*fn)(struct kvm *kvm, void *data);
	struct kvm		*fn_kvm;
	void			*fn_data;

Also, will tools/kvm/ ever run with multiple 'struct kvm' instances 
present?

A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way 
too aspecific, it tells us nothing. What is a 'kvm'?

A much better name would be 'struct machine *machine', hm? Even if 
everyone agrees this would be a separate patch, obviously.

Also, can ioevent->kvm *ever* be different from the kvm that the 
mmio-event receiving vcpu thread is associated with? If not then the 
fn_kvm field is really superfluous - we get the machine from the mmio 
handler and can pass it down to the callback function.

> +	int			event_fd;

'fd'

> +	u64			datamatch;

what's a datamatch? 'cookie'? 'key'?

> +
> +	struct list_head	list_used;

just 'list' is enough i think - it's obvious that ioevent->list is a 
list of ioevents, right?

> +	kvm_ioevent = (struct kvm_ioeventfd) {
> +		.addr			= ioevent->start,
> +		.len			= ioevent->len,

Do you see how confusing the start/len naming is? Here you are 
assigning a 'start' field to an 'addr' and the reviewer is kept 
wondering whether that's right. If it was ->mmio_addr then it would 
be a lot more obvious what is going on.

> +static void *ioeventfd__thread(void *param)
> +{
> +	for (;;) {
> +		int nfds, i;
> +
> +		nfds = epoll_wait(epoll_fd, events, IOEVENTFD_MAX_EVENTS, -1);
> +		for (i = 0; i < nfds; i++) {
> +			u64 tmp;
> +			struct ioevent *ioevent;
> +
> +			ioevent = events[i].data.ptr;
> +
> +			if (read(ioevent->event_fd, &tmp, sizeof(tmp)) < 0)
> +				die("Failed reading event");
> +
> +			ioevent->ioevent_callback_fn(ioevent->kvm, ioevent->ptr);
> +		}
> +	}
> +
> +	return NULL;
> +}
> +
> +void ioeventfd__start(void)
> +{
> +	pthread_t thread;
> +
> +	pthread_create(&thread, NULL, ioeventfd__thread, NULL);
> +}

Shouldn't this use the thread pool, so that we know about each and 
every worker thread we have started, in one central place?

(This might have relevance, see the big-reader-lock mail i sent 
earlier today.)

Thanks,

	Ingo
Sasha Levin May 27, 2011, 10:57 a.m. UTC | #4
On Fri, 2011-05-27 at 11:47 +0100, Stefan Hajnoczi wrote:
> On Fri, May 27, 2011 at 11:36 AM, Sasha Levin <levinsasha928@gmail.com> wrote:
> > ioeventfd is a mechanism provided by KVM for receiving notifications about
> > reads and writes to PIO and MMIO areas within the guest.
> >
> > Such notifications are useful if all we need to know is that
> > a specific area of memory has been changed, and we don't need
> > a heavyweight exit to happen.
> >
> > The implementation uses epoll to scale to a large number of ioeventfds.
> >
> > Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> > ---
> >  tools/kvm/Makefile                |    1 +
> >  tools/kvm/include/kvm/ioeventfd.h |   27 ++++++++
> >  tools/kvm/ioeventfd.c             |  127 +++++++++++++++++++++++++++++++++++++
> >  tools/kvm/kvm-run.c               |    4 +
> >  4 files changed, 159 insertions(+), 0 deletions(-)
> >  create mode 100644 tools/kvm/include/kvm/ioeventfd.h
> >  create mode 100644 tools/kvm/ioeventfd.c
> 
> Did you run any benchmarks?
> 
> Stefan

Yes, they showed a nice improvement - I'll post the numbers with a V2
of the patch.
Sasha Levin May 27, 2011, 11:02 a.m. UTC | #5
On Fri, 2011-05-27 at 12:54 +0200, Ingo Molnar wrote:
> * Sasha Levin <levinsasha928@gmail.com> wrote:
> 
> > ioeventfd is a mechanism provided by KVM for receiving notifications about
> > reads and writes to PIO and MMIO areas within the guest.
> >
> > Such notifications are useful if all we need to know is that
> > a specific area of memory has been changed, and we don't need
> > a heavyweight exit to happen.
> >
> > The implementation uses epoll to scale to a large number of ioeventfds.
> 
> Nice! :-)
> 
> > +struct ioevent {
> > +	u64			start;
> > +	u8			len;
> 
> If that's an mmio address then it might be worth naming it 
> ioevent->mmio_addr, ioevent->mmio_end.
> 
> > +	void			(*ioevent_callback_fn)(struct kvm *kvm, void *ptr);
> 
> Please only 'fn', we already know this is an ioevent.
> 
> > +	struct kvm		*kvm;
> > +	void			*ptr;
> 
> what is the purpose of the pointer?
> 
> AFAICS it's the private data of the callback function. In such cases 
> please name them in a harmonizing fashion, such as:
> 
> 	void			(*fn)(struct kvm *kvm, void *data);
> 	struct kvm		*fn_kvm;
> 	void			*fn_data;
> 
> Also, will tools/kvm/ ever run with multiple 'struct kvm' instances 
> present?

I'm not sure. We pass it around to all our functions instead of using a
global, so I assumed we might have several guests under one process.

> A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way 
> too aspecific, it tells us nothing. What is a 'kvm'?
> 
> A much better name would be 'struct machine *machine', hm? Even if 
> everyone agrees this would be a separate patch, obviously.
> 
> Also, can ioevent->kvm *ever* be different from the kvm that the 
> mmio-event receiving vcpu thread is associated with? If not then the 
> fn_kvm field is really superfluous - we get the machine from the mmio 
> handler and can pass it down to the callback function.
> 
> > +	int			event_fd;
> 
> 'fd'
> 
> > +	u64			datamatch;
> 
> what's a datamatch? 'cookie'? 'key'?

The kernel-side ioeventfd matches the value written to the PIO port and
signals the event only if both values match.
It's named this way in the kernel code so I wanted to be consistent.
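
To make that concrete: datamatch maps straight onto the datamatch field
of struct kvm_ioeventfd, and with KVM_IOEVENTFD_FLAG_DATAMATCH set the
eventfd only fires when the guest writes that exact value. A sketch of
the typical per-queue setup (the function, port and callback names are
hypothetical; struct ioevent and ioeventfd__add_event() are from the
patch):

	/*
	 * Sketch: several virtqueues sharing one PIO notify register, told
	 * apart by datamatch (the queue index the guest writes).
	 */
	#include <sys/eventfd.h>

	#include "kvm/ioeventfd.h"
	#include "kvm/util.h"

	static void setup_vq_ioevents(struct kvm *kvm, u16 notify_port, int num_vqs,
				      void (*fn)(struct kvm *kvm, void *ptr), void **vq_data)
	{
		int vq;

		for (vq = 0; vq < num_vqs; vq++) {
			struct ioevent ioevent;
			int fd = eventfd(0, 0);

			if (fd < 0)
				die("Failed creating eventfd");

			ioevent = (struct ioevent) {
				.start			= notify_port,	/* shared notify register */
				.len			= sizeof(u16),
				.datamatch		= vq,		/* fire only for this value */
				.event_fd		= fd,
				.ioevent_callback_fn	= fn,
				.kvm			= kvm,
				.ptr			= vq_data[vq],
			};

			ioeventfd__add_event(kvm, &ioevent);
		}
	}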

> 
> > +
> > +	struct list_head	list_used;
> 
> just 'list' is enough i think - it's obvious that ioevent->list is a 
> list of ioevents, right?
> 

We might have a list of free ioevents if we ever decide to scale it
beyond the max 20 event limit we currently have, so I would rather be
specific with the list names.

> > +	kvm_ioevent = (struct kvm_ioeventfd) {
> > +		.addr			= ioevent->start,
> > +		.len			= ioevent->len,
> 
> Do you see how confusing the start/len naming is? Here you are 
> assigning a 'start' field to an 'addr' and the reviewer is kept 
> wondering whether that's right. If it was ->mmio_addr then it would 
> be a lot more obvious what is going on.

Yes, I'll rename them to addr/len to match with KVM naming.
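
Putting the renames discussed in this thread together (addr/len per
Sasha's reply here, fn/fn_kvm/fn_data and fd per Ingo, list_used kept
specific per Sasha above), the struct might end up looking like this -
a sketch, not a posted v2:

	struct ioevent {
		u64			addr;		/* PIO/MMIO address KVM watches */
		u8			len;		/* width of the watched access  */
		u64			datamatch;	/* value the guest has to write */
		int			fd;		/* eventfd signalled by KVM     */

		void			(*fn)(struct kvm *kvm, void *data);
		struct kvm		*fn_kvm;
		void			*fn_data;

		struct list_head	list_used;
	};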

> > +static void *ioeventfd__thread(void *param)
> > +{
> > +	for (;;) {
> > +		int nfds, i;
> > +
> > +		nfds = epoll_wait(epoll_fd, events, IOEVENTFD_MAX_EVENTS, -1);
> > +		for (i = 0; i < nfds; i++) {
> > +			u64 tmp;
> > +			struct ioevent *ioevent;
> > +
> > +			ioevent = events[i].data.ptr;
> > +
> > +			if (read(ioevent->event_fd, &tmp, sizeof(tmp)) < 0)
> > +				die("Failed reading event");
> > +
> > +			ioevent->ioevent_callback_fn(ioevent->kvm, ioevent->ptr);
> > +		}
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +void ioeventfd__start(void)
> > +{
> > +	pthread_t thread;
> > +
> > +	pthread_create(&thread, NULL, ioeventfd__thread, NULL);
> > +}
> 
> Shouldn't this use the thread pool, so that we know about each and 
> every worker thread we have started, in one central place?
> 
Our thread pool currently responds to events - it runs a callback if it
has received a notification to do so. It doesn't manage threads which
have to run all the time like in this case.

Though once we return from epoll_wait() here we do minimal work before
sending the IO event into the thread pool.


> (This might have relevance, see the big-reader-lock mail i sent 
> earlier today.)
> 
> Thanks,
> 
> 	Ingo
Ingo Molnar May 27, 2011, 11:29 a.m. UTC | #6
* Sasha Levin <levinsasha928@gmail.com> wrote:

> > > +	pthread_create(&thread, NULL, ioeventfd__thread, NULL);
> > > +}
> > 
> > Shouldn't this use the thread pool, so that we know about each and 
> > every worker thread we have started, in one central place?
> 
> Our thread pool currently responds to events - it runs a callback 
> if it has received a notification to do so. It doesn't manage 
> threads which have to run all the time like in this case.

then it should perhaps be extended to handle that, because it's 
always good to have explicit enumeration of all worker threads, and 
because:

> > (This might have relevance, see the big-reader-lock mail i sent 
> > earlier today.)

Thanks,

	Ingo
Pekka Enberg May 27, 2011, 11:30 a.m. UTC | #7
Hi Ingo,

On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar <mingo@elte.hu> wrote:
> A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way
> too aspecific, it tells us nothing. What is a 'kvm'?

Why, an instance of a kernel virtual machine, of course! It was the
very first thing I wrote for this project!

On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar <mingo@elte.hu> wrote:
> A much better name would be 'struct machine *machine', hm? Even if
> everyone agrees this would be a separate patch, obviously.

Well, I don't really think 'struct machine' is that much better. The
obvious benefit of 'struct kvm' is that it's the same name that's
used by Qemu and libkvm and maps nicely to '/dev/kvm'.

If you really, really wanna change it, I could use some more
convincing or bribes of some sort.

                        Pekka
Ingo Molnar May 27, 2011, 11:38 a.m. UTC | #8
* Pekka Enberg <penberg@kernel.org> wrote:

> Hi Ingo,
> 
> On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar <mingo@elte.hu> wrote:
> > A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way
> > too aspecific, it tells us nothing. What is a 'kvm'?
> 
> Why, an instance of a kernel virtual machine, of course! It was the
> very first thing I wrote for this project!

Oh, that is way too funny: i never thought of 'KVM' as a 'kernel 
virtual machine' :-/ It's the name of a cool kernel subsystem - not 
the name of one instance of a virtual machine.

> On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar <mingo@elte.hu> wrote:
> > A much better name would be 'struct machine *machine', hm? Even if
> > everyone agrees this would be a separate patch, obviously.
> 
> Well, I don't really think 'struct machine' is that much better. The
> obvious benefits of 'struct kvm' is that it's the same name that's
> used by Qemu and libkvm and maps nicely to '/dev/kvm'.

To me /dev/kvm is what interfaces to 'KVM' - where 'KVM' is the 
magic, axiomatic name for the aforementioned cool kernel subsystem! :-)

> If you really, really wanna change it, I could use some more
> convincing or bribes of some sort.

No, i guess the naming is just fine, i just need to rewire 5 years' 
worth of neural pathways ;-)

I'll save the bribes for a worthier goal! :-)

Thanks,

	Ingo

Patch

diff --git a/tools/kvm/Makefile b/tools/kvm/Makefile
index 2ebc86c..e7ceb5c 100644
--- a/tools/kvm/Makefile
+++ b/tools/kvm/Makefile
@@ -48,6 +48,7 @@  OBJS    += irq.o
 OBJS    += rbtree.o
 OBJS    += util/rbtree-interval.o
 OBJS    += virtio/9p.o
+OBJS    += ioeventfd.o
 
 
 FLAGS_BFD=$(CFLAGS) -lbfd
diff --git a/tools/kvm/include/kvm/ioeventfd.h b/tools/kvm/include/kvm/ioeventfd.h
new file mode 100644
index 0000000..fa57b41
--- /dev/null
+++ b/tools/kvm/include/kvm/ioeventfd.h
@@ -0,0 +1,27 @@ 
+#ifndef KVM__IOEVENTFD_H
+#define KVM__IOEVENTFD_H
+
+#include <linux/types.h>
+#include <linux/list.h>
+#include <sys/eventfd.h>
+
+struct kvm;
+
+struct ioevent {
+	u64			start;
+	u8			len;
+	void			(*ioevent_callback_fn)(struct kvm *kvm, void *ptr);
+	struct kvm		*kvm;
+	void			*ptr;
+	int			event_fd;
+	u64			datamatch;
+
+	struct list_head	list_used;
+};
+
+void ioeventfd__init(void);
+void ioeventfd__start(void);
+void ioeventfd__add_event(struct kvm *kvm, struct ioevent *ioevent);
+void ioeventfd__del_event(struct kvm *kvm, u64 start, u64 datamatch);
+
+#endif
diff --git a/tools/kvm/ioeventfd.c b/tools/kvm/ioeventfd.c
new file mode 100644
index 0000000..5444432
--- /dev/null
+++ b/tools/kvm/ioeventfd.c
@@ -0,0 +1,127 @@ 
+#include <sys/epoll.h>
+#include <sys/ioctl.h>
+#include <pthread.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <signal.h>
+
+#include <linux/kernel.h>
+#include <linux/kvm.h>
+#include <linux/types.h>
+
+#include "kvm/ioeventfd.h"
+#include "kvm/kvm.h"
+#include "kvm/util.h"
+
+#define IOEVENTFD_MAX_EVENTS	20
+
+static struct	epoll_event events[IOEVENTFD_MAX_EVENTS];
+static int	epoll_fd;
+static LIST_HEAD(used_ioevents);
+
+void ioeventfd__init(void)
+{
+	epoll_fd = epoll_create(IOEVENTFD_MAX_EVENTS);
+	if (epoll_fd < 0)
+		die("Failed creating epoll fd");
+}
+
+void ioeventfd__add_event(struct kvm *kvm, struct ioevent *ioevent)
+{
+	struct kvm_ioeventfd kvm_ioevent;
+	struct epoll_event epoll_event;
+	struct ioevent *new_ioevent;
+	int event;
+
+	new_ioevent = malloc(sizeof(*new_ioevent));
+	if (new_ioevent == NULL)
+		die("Failed allocating memory for new ioevent");
+
+	*new_ioevent = *ioevent;
+	event = new_ioevent->event_fd;
+
+	kvm_ioevent = (struct kvm_ioeventfd) {
+		.addr			= ioevent->start,
+		.len			= ioevent->len,
+		.datamatch		= ioevent->datamatch,
+		.fd			= event,
+		.flags			= KVM_IOEVENTFD_FLAG_PIO | KVM_IOEVENTFD_FLAG_DATAMATCH,
+	};
+
+	if (ioctl(kvm->vm_fd, KVM_IOEVENTFD, &kvm_ioevent) != 0)
+		die("Failed creating new ioeventfd");
+
+	epoll_event = (struct epoll_event) {
+		.events			= EPOLLIN,
+		.data.ptr		= new_ioevent,
+	};
+
+	if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, event, &epoll_event) != 0)
+		die("Failed assigning new event to the epoll fd");
+
+	list_add_tail(&new_ioevent->list_used, &used_ioevents);
+}
+
+void ioeventfd__del_event(struct kvm *kvm, u64 start, u64 datamatch)
+{
+	struct kvm_ioeventfd kvm_ioevent;
+	struct ioevent *ioevent;
+	u8 found = 0;
+
+	list_for_each_entry(ioevent, &used_ioevents, list_used) {
+		if (ioevent->start == start) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found == 0 || ioevent == NULL)
+		return;
+
+	kvm_ioevent = (struct kvm_ioeventfd) {
+		.addr			= ioevent->start,
+		.len			= ioevent->len,
+		.datamatch		= ioevent->datamatch,
+		.flags			= KVM_IOEVENTFD_FLAG_PIO
+					| KVM_IOEVENTFD_FLAG_DEASSIGN
+					| KVM_IOEVENTFD_FLAG_DATAMATCH,
+	};
+
+	ioctl(kvm->vm_fd, KVM_IOEVENTFD, &kvm_ioevent);
+
+	epoll_ctl(epoll_fd, EPOLL_CTL_DEL, ioevent->event_fd, NULL);
+
+	list_del(&ioevent->list_used);
+
+	close(ioevent->event_fd);
+	free(ioevent);
+}
+
+static void *ioeventfd__thread(void *param)
+{
+	for (;;) {
+		int nfds, i;
+
+		nfds = epoll_wait(epoll_fd, events, IOEVENTFD_MAX_EVENTS, -1);
+		for (i = 0; i < nfds; i++) {
+			u64 tmp;
+			struct ioevent *ioevent;
+
+			ioevent = events[i].data.ptr;
+
+			if (read(ioevent->event_fd, &tmp, sizeof(tmp)) < 0)
+				die("Failed reading event");
+
+			ioevent->ioevent_callback_fn(ioevent->kvm, ioevent->ptr);
+		}
+	}
+
+	return NULL;
+}
+
+void ioeventfd__start(void)
+{
+	pthread_t thread;
+
+	pthread_create(&thread, NULL, ioeventfd__thread, NULL);
+}
diff --git a/tools/kvm/kvm-run.c b/tools/kvm/kvm-run.c
index f384ddd..48b8e70 100644
--- a/tools/kvm/kvm-run.c
+++ b/tools/kvm/kvm-run.c
@@ -29,6 +29,7 @@ 
 #include <kvm/symbol.h>
 #include <kvm/virtio-9p.h>
 #include <kvm/vesa.h>
+#include <kvm/ioeventfd.h>
 
 /* header files for gitish interface  */
 #include <kvm/kvm-run.h>
@@ -505,6 +506,8 @@  int kvm_cmd_run(int argc, const char **argv, const char *prefix)
 
 	kvm = kvm__init(kvm_dev, ram_size);
 
+	ioeventfd__init();
+
 	max_cpus = kvm__max_cpus(kvm);
 
 	if (nrcpus > max_cpus) {
@@ -612,6 +615,7 @@  int kvm_cmd_run(int argc, const char **argv, const char *prefix)
 		vesa__init(kvm);
 
 	thread_pool__init(nr_online_cpus);
+	ioeventfd__start();
 
 	for (i = 0; i < nrcpus; i++) {
 		if (pthread_create(&kvm_cpus[i]->thread, NULL, kvm_cpu_thread, kvm_cpus[i]) != 0)