
[kvm-unit-tests,5/5] add hyperv_connections test

Message ID 20170606191959.16987-6-rkagan@virtuozzo.com (mailing list archive)
State New, archived

Commit Message

Roman Kagan June 6, 2017, 7:19 p.m. UTC
Add a test for Hyper-V message and event connections.

It requires QEMU with the extended test device supporting message and
event connection test modes (recently posted on qemu-devel).  On older
QEMU versions it fails.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
---
 x86/Makefile.common      |   3 +
 x86/hyperv_connections.c | 328 +++++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg        |   5 +
 3 files changed, 336 insertions(+)
 create mode 100644 x86/hyperv_connections.c

Comments

Radim Krčmář June 13, 2017, 7:28 p.m. UTC | #1
2017-06-06 22:19+0300, Roman Kagan:
> Add a test for Hyper-V message and event connections.
> 
> It requires QEMU with the extended test device supporting message and
> event connection test modes (recently posted on qemu-devel).  On older
> QEMU versions it fails.

Doesn't QEMU provide a way to detect this feature from the outside (some
command line magic) that we could use to skip the test?

Thanks.
Paolo Bonzini June 14, 2017, 11:28 a.m. UTC | #2
On 13/06/2017 21:28, Radim Krčmář wrote:
>> It requires QEMU with the extended test device supporting message and
>> event connection test modes (recently posted on qemu-devel).  On older
>> QEMU versions it fails.
> Doesn't QEMU provide a way to detect this feature from the outside (some
> command line magic) that we could use to skip the test?

Should we check whether SIGNAL_EVENT returns
HV_STATUS_INVALID_HYPERCALL_CODE or HV_STATUS_INVALID_CONNECTION_ID, and
if the former skip the test completely?

Paolo
Roman Kagan June 14, 2017, 12:01 p.m. UTC | #3
On Tue, Jun 13, 2017 at 09:28:59PM +0200, Radim Krčmář wrote:
> 2017-06-06 22:19+0300, Roman Kagan:
> > Add a test for Hyper-V message and event connections.
> > 
> > It requires QEMU with the extended test device supporting message and
> > event connection test modes (recently posted on qemu-devel).  On older
> > QEMU versions it fails.
> 
> Doesn't QEMU provide a way to detect this feature from the outside (some
> command line magic) that we could use to skip the test?

I didn't know there was such a trick.  However, I failed to figure out
how to get it to work here: the -device options appeared to be
interpreted after -kernel, so it didn't cause a SKIP; adding a dedicated
top-level command-line option would probably be overkill.

Roman.
Roman Kagan June 14, 2017, 12:02 p.m. UTC | #4
On Wed, Jun 14, 2017 at 01:28:23PM +0200, Paolo Bonzini wrote:
> 
> 
> On 13/06/2017 21:28, Radim Krčmář wrote:
>> It requires QEMU with the extended test device supporting message and
>> event connection test modes (recently posted on qemu-devel).  On older
>> QEMU versions it fails.
> > Doesn't QEMU provide a way to detect this feature from the outside (some
> > command line magic) that we could use to skip the test?
> 
> Should we check whether SIGNAL_EVENT returns
> HV_STATUS_INVALID_HYPERCALL_CODE or HV_STATUS_INVALID_CONNECTION_ID, and
> if the former skip the test completely?

Yeah, this looks like a workable solution, thanks!

Roman.
Radim Krčmář June 14, 2017, 12:59 p.m. UTC | #5
2017-06-14 15:01+0300, Roman Kagan:
> On Tue, Jun 13, 2017 at 09:28:59PM +0200, Radim Krčmář wrote:
> > 2017-06-06 22:19+0300, Roman Kagan:
> > > Add a test for Hyper-V message and event connections.
> > > 
> > > It requires QEMU with the extended test device supporting message and
> > > event connection test modes (recently posted on qemu-devel).  On older
> > > QEMU versions it fails.
> > 
> > Doesn't QEMU provide a way to detect this feature from the outside (some
> > command line magic) that we could use to skip the test?
> 
> I didn't know there was such a trick.  However, I failed to figure out
> how to get it to work here: the -device options appeared to be
> interpreted after -kernel, so it didn't cause a SKIP;

Hm, the test should be skipped if QEMU fails to start.

>                                                      adding a dedicated
> top-level command-line option would probably be overkill.

Right, I assumed that QEMU wants to prevent migration to QEMUs that
don't support this feature (which usually implies a property), but it
seems that there are no real users to care about ...

The solution proposed by Paolo looks good.

Thanks.
Roman Kagan June 14, 2017, 1:21 p.m. UTC | #6
On Wed, Jun 14, 2017 at 02:59:51PM +0200, Radim Krčmář wrote:
> 2017-06-14 15:01+0300, Roman Kagan:
> > On Tue, Jun 13, 2017 at 09:28:59PM +0200, Radim Krčmář wrote:
> > > 2017-06-06 22:19+0300, Roman Kagan:
> > > > Add a test for Hyper-V message and event connections.
> > > > 
> > > > It requires QEMU with the extended test device supporting message and
> > > > event connection test modes (recently posted on qemu-devel).  On older
> > > > QEMU versions it fails.
> > > 
> > > Doesn't QEMU provide a way to detect this feature from the outside (some
> > > command line magic) that we could use to skip the test?
> > 
> > I didn't know there was such a trick.  However, I failed to figure out
> > how to get it to work here: the -device options appeared to be
> > interpreted after -kernel, so it didn't cause a SKIP;
> 
> Hm, the test should be skipped if QEMU fails to start.

IIUC the SKIP is taken when QEMU is run with the command line as if in
the real test, but with a -kernel option pointing at a nonexistent
file, and the error message contains that filename.

The assumption is apparently that QEMU was ok with all other options.

The problem is that many options are interpreted after -kernel, so the
skip-checking logic decides that QEMU is ok with the options but the
actual test start shows that it's not, resulting in a FAIL.

Roman.
Radim Krčmář June 14, 2017, 4:34 p.m. UTC | #7
2017-06-14 16:21+0300, Roman Kagan:
> On Wed, Jun 14, 2017 at 02:59:51PM +0200, Radim Krčmář wrote:
> > 2017-06-14 15:01+0300, Roman Kagan:
> > > On Tue, Jun 13, 2017 at 09:28:59PM +0200, Radim Krčmář wrote:
> > > > 2017-06-06 22:19+0300, Roman Kagan:
> > > > > Add a test for Hyper-V message and event connections.
> > > > > 
> > > > > It requires QEMU with the extended test device supporting message and
> > > > > event connection test modes (recently posted on qemu-devel).  On older
> > > > > QEMU versions it fails.
> > > > 
> > > > Doesn't QEMU provide a way to detect this feature from the outside (some
> > > > command line magic) that we could use to skip the test?
> > > 
> > > I didn't know there was such a trick.  However, I failed to figure out
> > > how to get it to work here: the -device options appeared to be
> > > interpreted after -kernel, so it didn't cause a SKIP;
> > 
> > Hm, the test should be skipped if QEMU fails to start.
> 
> IIUC the SKIP is taken when QEMU is run with the command line as if in
> the real test, but with a -kernel option pointing at a nonexistent
> file, and the error message contains that filename.

Yes, it's an ugly hack.

> The assumption is apparently that QEMU was ok with all other options.
> 
> The problem is that many options are interpreted after -kernel, so the
> skip-checking logic decides that QEMU is ok with the options but the
> actual test start shows that it's not, resulting in a FAIL.

I see, thanks, that is a bug.

Dropping the -kernel option in the check seems nicer than providing a
minimal working kernel ... I'll see how much refactoring is needed.

Patch

diff --git a/x86/Makefile.common b/x86/Makefile.common
index 7bb6b50..ca97a8e 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -49,6 +49,7 @@  tests-common = $(TEST_DIR)/vmexit.flat $(TEST_DIR)/tsc.flat \
                $(TEST_DIR)/tsc_adjust.flat $(TEST_DIR)/asyncpf.flat \
                $(TEST_DIR)/init.flat $(TEST_DIR)/smap.flat \
                $(TEST_DIR)/hyperv_synic.flat $(TEST_DIR)/hyperv_stimer.flat \
+               $(TEST_DIR)/hyperv_connections.flat \
 
 ifdef API
 tests-api = api/api-sample api/dirty-log api/dirty-log-perf
@@ -71,6 +72,8 @@  $(TEST_DIR)/hyperv_synic.elf: $(TEST_DIR)/hyperv.o
 
 $(TEST_DIR)/hyperv_stimer.elf: $(TEST_DIR)/hyperv.o
 
+$(TEST_DIR)/hyperv_connections.elf: $(TEST_DIR)/hyperv.o
+
 arch_clean:
 	$(RM) $(TEST_DIR)/*.o $(TEST_DIR)/*.flat $(TEST_DIR)/*.elf \
 	$(TEST_DIR)/.*.d lib/x86/.*.d \
diff --git a/x86/hyperv_connections.c b/x86/hyperv_connections.c
new file mode 100644
index 0000000..5ce1fb3
--- /dev/null
+++ b/x86/hyperv_connections.c
@@ -0,0 +1,328 @@ 
+#include "libcflat.h"
+#include "vm.h"
+#include "smp.h"
+#include "isr.h"
+#include "atomic.h"
+#include "hyperv.h"
+#include "bitops.h"
+
+#define MAX_CPUS 64
+
+#define MSG_VEC 0xb0
+#define EVT_VEC 0xb1
+#define MSG_SINT 0x8
+#define EVT_SINT 0x9
+#define MSG_CONN_BASE 0x10
+#define EVT_CONN_BASE 0x20
+#define MSG_TYPE 0x12345678
+
+#define WAIT_CYCLES 10000000
+
+static atomic_t ncpus_done;
+
+struct hv_vcpu {
+    struct hv_message_page *msg_page;
+    struct hv_event_flags_page *evt_page;
+    struct hv_input_post_message *post_msg;
+    u8 msg_conn;
+    u8 evt_conn;
+    u64 hvcall_status;
+    atomic_t sint_received;
+};
+
+static struct hv_vcpu hv_vcpus[MAX_CPUS];
+
+static void sint_isr(isr_regs_t *regs)
+{
+    atomic_inc(&hv_vcpus[smp_id()].sint_received);
+}
+
+static void *hypercall_page;
+
+static void setup_hypercall()
+{
+    u64 guestid = (0x8f00ull << 48);
+
+    hypercall_page = alloc_page();
+    memset(hypercall_page, 0, PAGE_SIZE);
+
+    wrmsr(HV_X64_MSR_GUEST_OS_ID, guestid);
+
+    wrmsr(HV_X64_MSR_HYPERCALL,
+          (u64)virt_to_phys(hypercall_page) | HV_X64_MSR_HYPERCALL_ENABLE);
+}
+
+static void teardown_hypercall()
+{
+    wrmsr(HV_X64_MSR_HYPERCALL, 0);
+    wrmsr(HV_X64_MSR_GUEST_OS_ID, 0);
+    free_page(hypercall_page);
+}
+
+static u64 do_hypercall(u16 code, u64 arg, bool fast)
+{
+    u64 ret;
+    u64 ctl = code;
+    if (fast)
+        ctl |= HV_HYPERCALL_FAST;
+
+    asm volatile (
+#ifdef __x86_64__
+                  /* r8 is a hypercall input (output GPA); zero it
+                   * before the call, not after */
+                  "mov $0, %%r8\n\t"
+#endif
+                  "call *%[hcall_page]"
+#ifdef __x86_64__
+                  : "=a"(ret)
+                  : "c"(ctl), "d"(arg),
+#else
+                  : "=A"(ret)
+                  : "A"(ctl),
+                    "b" ((u32)(arg >> 32)), "c" ((u32)arg),
+                    "D"(0), "S"(0),
+#endif
+                    [hcall_page] "m" (hypercall_page)
+#ifdef __x86_64__
+                  : "r8"
+#endif
+                  );
+
+    return ret;
+}
+
+static void setup_cpu(void *ctx)
+{
+    int vcpu = smp_id();
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+
+    write_cr3((ulong)ctx);
+    irq_enable();
+
+    hv->msg_page = alloc_page();
+    hv->evt_page = alloc_page();
+    hv->post_msg = alloc_page();
+    memset(hv->msg_page, 0, sizeof(*hv->msg_page));
+    memset(hv->evt_page, 0, sizeof(*hv->evt_page));
+    memset(hv->post_msg, 0, sizeof(*hv->post_msg));
+    hv->msg_conn = MSG_CONN_BASE + vcpu;
+    hv->evt_conn = EVT_CONN_BASE + vcpu;
+
+    wrmsr(HV_X64_MSR_SIMP,
+          (u64)virt_to_phys(hv->msg_page) | HV_SYNIC_SIMP_ENABLE);
+    wrmsr(HV_X64_MSR_SIEFP,
+          (u64)virt_to_phys(hv->evt_page) | HV_SYNIC_SIEFP_ENABLE);
+    wrmsr(HV_X64_MSR_SCONTROL, HV_SYNIC_CONTROL_ENABLE);
+
+    msg_conn_create(MSG_SINT, MSG_VEC, hv->msg_conn);
+    evt_conn_create(EVT_SINT, EVT_VEC, hv->evt_conn);
+
+    hv->post_msg->connectionid = hv->msg_conn;
+    hv->post_msg->message_type = MSG_TYPE;
+    hv->post_msg->payload_size = 8;
+    hv->post_msg->payload[0] = (u64)vcpu << 16;
+}
+
+static void teardown_cpu(void *ctx)
+{
+    int vcpu = smp_id();
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+
+    evt_conn_destroy(EVT_SINT, hv->evt_conn);
+    msg_conn_destroy(MSG_SINT, hv->msg_conn);
+
+    wrmsr(HV_X64_MSR_SCONTROL, 0);
+    wrmsr(HV_X64_MSR_SIEFP, 0);
+    wrmsr(HV_X64_MSR_SIMP, 0);
+
+    free_page(hv->post_msg);
+    free_page(hv->evt_page);
+    free_page(hv->msg_page);
+}
+
+static void do_msg(void *ctx)
+{
+    int vcpu = (ulong)ctx;
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    struct hv_input_post_message *msg = hv->post_msg;
+
+    msg->payload[0]++;
+    atomic_set(&hv->sint_received, 0);
+    hv->hvcall_status = do_hypercall(HVCALL_POST_MESSAGE,
+                                     virt_to_phys(msg), 0);
+    atomic_inc(&ncpus_done);
+}
+
+static void clear_msg(void *ctx)
+{
+    /* should only be done on the current vcpu */
+    int vcpu = smp_id();
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    struct hv_message *msg = &hv->msg_page->sint_message[MSG_SINT];
+
+    atomic_set(&hv->sint_received, 0);
+    msg->header.message_type = 0;
+    barrier();
+    wrmsr(HV_X64_MSR_EOM, 0);
+    atomic_inc(&ncpus_done);
+}
+
+static bool msg_ok(int vcpu)
+{
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    struct hv_input_post_message *post_msg = hv->post_msg;
+    struct hv_message *msg = &hv->msg_page->sint_message[MSG_SINT];
+
+    return msg->header.message_type == post_msg->message_type &&
+        msg->header.payload_size == post_msg->payload_size &&
+        msg->header.message_flags.msg_pending == 0 &&
+        msg->u.payload[0] == post_msg->payload[0] &&
+        hv->hvcall_status == 0 &&
+        atomic_read(&hv->sint_received) == 1;
+}
+
+static bool msg_busy(int vcpu)
+{
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    struct hv_input_post_message *post_msg = hv->post_msg;
+    struct hv_message *msg = &hv->msg_page->sint_message[MSG_SINT];
+
+    return msg->header.message_type == post_msg->message_type &&
+        msg->header.payload_size == post_msg->payload_size &&
+        msg->header.message_flags.msg_pending == 1 &&
+        msg->u.payload[0] == post_msg->payload[0] - 1 &&
+        hv->hvcall_status == 0 &&
+        atomic_read(&hv->sint_received) == 0;
+}
+
+static void do_evt(void *ctx)
+{
+    int vcpu = (ulong)ctx;
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+
+    atomic_set(&hv->sint_received, 0);
+    hv->hvcall_status = do_hypercall(HVCALL_SIGNAL_EVENT,
+                                     hv->evt_conn, 1);
+    atomic_inc(&ncpus_done);
+}
+
+static void clear_evt(void *ctx)
+{
+    /* should only be done on the current vcpu */
+    int vcpu = smp_id();
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    ulong *flags = hv->evt_page->slot[EVT_SINT].flags;
+
+    atomic_set(&hv->sint_received, 0);
+    flags[BIT_WORD(hv->evt_conn)] &= ~BIT_MASK(hv->evt_conn);
+    barrier();
+    atomic_inc(&ncpus_done);
+}
+
+static bool evt_ok(int vcpu)
+{
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    ulong *flags = hv->evt_page->slot[EVT_SINT].flags;
+
+    return flags[BIT_WORD(hv->evt_conn)] == BIT_MASK(hv->evt_conn) &&
+        hv->hvcall_status == 0 &&
+        atomic_read(&hv->sint_received) == 1;
+}
+
+static bool evt_busy(int vcpu)
+{
+    struct hv_vcpu *hv = &hv_vcpus[vcpu];
+    ulong *flags = hv->evt_page->slot[EVT_SINT].flags;
+
+    return flags[BIT_WORD(hv->evt_conn)] == BIT_MASK(hv->evt_conn) &&
+        hv->hvcall_status == 0 &&
+        atomic_read(&hv->sint_received) == 0;
+}
+
+static int run_test(int ncpus, int dst_add, ulong wait_cycles,
+                    void (*func)(void *), bool (*is_ok)(int))
+{
+    int i, ret = 0;
+
+    atomic_set(&ncpus_done, 0);
+    for (i = 0; i < ncpus; i++) {
+        ulong dst = (i + dst_add) % ncpus;
+        on_cpu_async(i, func, (void *)dst);
+    }
+    while (atomic_read(&ncpus_done) != ncpus) {
+        pause();
+    }
+
+    while (wait_cycles--) {
+        pause();
+    }
+
+    if (is_ok) {
+        for (i = 0; i < ncpus; i++) {
+            ret += is_ok(i);
+        }
+    }
+    return ret;
+}
+
+int main(int ac, char **av)
+{
+    int ncpus, i, ncpus_ok;
+
+    if (!synic_supported()) {
+        report_skip("Hyper-V SynIC is not supported");
+        goto summary;
+    }
+
+    setup_vm();
+    smp_init();
+    ncpus = cpu_count();
+    if (ncpus > MAX_CPUS) {
+        ncpus = MAX_CPUS;
+    }
+
+    handle_irq(MSG_VEC, sint_isr);
+    handle_irq(EVT_VEC, sint_isr);
+
+    setup_hypercall();
+
+    for (i = 0; i < ncpus; i++) {
+        on_cpu(i, setup_cpu, (void *)read_cr3());
+    }
+
+    ncpus_ok = run_test(ncpus, 0, WAIT_CYCLES, do_msg, msg_ok);
+    report("send message to self: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    run_test(ncpus, 0, 0, clear_msg, NULL);
+
+    ncpus_ok = run_test(ncpus, 1, WAIT_CYCLES, do_msg, msg_ok);
+    report("send message to another cpu: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    ncpus_ok = run_test(ncpus, 1, WAIT_CYCLES, do_msg, msg_busy);
+    report("send message to busy slot: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    ncpus_ok = run_test(ncpus, 0, WAIT_CYCLES, clear_msg, msg_ok);
+    report("receive pending message: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    ncpus_ok = run_test(ncpus, 0, WAIT_CYCLES, do_evt, evt_ok);
+    report("signal event on self: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    run_test(ncpus, 0, 0, clear_evt, NULL);
+
+    ncpus_ok = run_test(ncpus, 1, WAIT_CYCLES, do_evt, evt_ok);
+    report("signal event on another cpu: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    ncpus_ok = run_test(ncpus, 1, WAIT_CYCLES, do_evt, evt_busy);
+    report("signal event already set: %d/%d",
+           ncpus_ok == ncpus, ncpus_ok, ncpus);
+
+    for (i = 0; i < ncpus; i++) {
+        on_cpu(i, teardown_cpu, NULL);
+    }
+
+    teardown_hypercall();
+
+summary:
+    return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 5ab4667..f53151f 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -503,6 +503,11 @@  file = hyperv_synic.flat
 smp = 2
 extra_params = -cpu kvm64,hv_synic -device hyperv-testdev
 
+[hyperv_connections]
+file = hyperv_connections.flat
+smp = 2
+extra_params = -cpu kvm64,hv_synic -device hyperv-testdev
+
 [hyperv_stimer]
 file = hyperv_stimer.flat
 smp = 2