
[v2] libxc: add xc_domain_add_to_physmap_batch to wrap XENMEM_add_to_physmap_batch

Message ID 20170620172926.8970-1-blackskygg@gmail.com (mailing list archive)
State New, archived

Commit Message

Sky Liu June 20, 2017, 5:29 p.m. UTC
This is a preparation for the proposal "allow setting up shared memory areas
between VMs from xl config file". See:
V2: https://lists.xen.org/archives/html/xen-devel/2017-06/msg02256.html
V1: https://lists.xen.org/archives/html/xen-devel/2017-05/msg01288.html

The plan is to use XENMEM_add_to_physmap_batch in xl to map foreign pages from
one DomU to another so that the pages can be shared. But currently there is no
wrapper for XENMEM_add_to_physmap_batch in libxc, so add one.

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
---
Changes since v1:
  * explain why this wrapper is being added now
  * change the parameters' types

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
 tools/libxc/include/xenctrl.h |  9 +++++++++
 tools/libxc/xc_domain.c       | 44 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

Comments

Wei Liu June 21, 2017, 3:44 p.m. UTC | #1
On Wed, Jun 21, 2017 at 01:29:26AM +0800, Zhongze Liu wrote:
> This is a preparation for the proposal "allow setting up shared memory areas
> between VMs from xl config file". See:
> V2: https://lists.xen.org/archives/html/xen-devel/2017-06/msg02256.html
> V1: https://lists.xen.org/archives/html/xen-devel/2017-05/msg01288.html
> 
> The plan is to use XENMEM_add_to_physmap_batch in xl to map foreign pages from
> one DomU to another so that the pages can be shared. But currently there is no
> wrapper for XENMEM_add_to_physmap_batch in libxc, so add one.
> 
> Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
> ---
> +int xc_domain_add_to_physmap_batch(xc_interface *xch,
> +                                   domid_t domid,
> +                                   domid_t foreign_domid,
> +                                   unsigned int space,
> +                                   unsigned int size,
> +                                   xen_ulong_t *idxs,
> +                                   xen_pfn_t *gpfns,
> +                                   int *errs)
> +{
> +    int rc;
> +    DECLARE_HYPERCALL_BOUNCE(idxs, size * sizeof(*idxs), XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +    DECLARE_HYPERCALL_BOUNCE(gpfns, size * sizeof(*gpfns), XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +    DECLARE_HYPERCALL_BOUNCE(errs, size * sizeof(*errs), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
> +
> +    struct xen_add_to_physmap_batch xatp_batch = {
> +        .domid = domid,
> +        .space = space,
> +        .size = size,
> +        .u = {.foreign_domid = foreign_domid}

Coding style issue.

Just a note, the struct is different for pre-4.7 and post-4.7 Xen. You
don't need to implement a version of this function for pre-4.7 Xen.

> +    };
> +
> +    if ( xc_hypercall_bounce_pre(xch, idxs)  ||
> +         xc_hypercall_bounce_pre(xch, gpfns) ||
> +         xc_hypercall_bounce_pre(xch, errs)  )
> +    {
> +        PERROR("Could not bounce memory for XENMEM_add_to_physmap_batch");
> +        goto out;

rc will be uninitialised in this exit path.

> +    }
> +
> +    set_xen_guest_handle(xatp_batch.idxs, idxs);
> +    set_xen_guest_handle(xatp_batch.gpfns, gpfns);
> +    set_xen_guest_handle(xatp_batch.errs, errs);
> +
> +    rc = do_memory_op(xch, XENMEM_add_to_physmap_batch,
> +                      &xatp_batch, sizeof(xatp_batch));
> +
> +out:
> +    xc_hypercall_bounce_post(xch, idxs);
> +    xc_hypercall_bounce_post(xch, gpfns);
> +    xc_hypercall_bounce_post(xch, errs);
> +
> +    return rc;
> +}
> +
>  int xc_domain_claim_pages(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long nr_pages)
> -- 
> 2.13.1
>
Sky Liu June 22, 2017, 4:08 p.m. UTC | #2
Hi Wei,

2017-06-21 23:44 GMT+08:00 Wei Liu <wei.liu2@citrix.com>:
> On Wed, Jun 21, 2017 at 01:29:26AM +0800, Zhongze Liu wrote:
>> This is a preparation for the proposal "allow setting up shared memory areas
>> between VMs from xl config file". See:
>> V2: https://lists.xen.org/archives/html/xen-devel/2017-06/msg02256.html
>> V1: https://lists.xen.org/archives/html/xen-devel/2017-05/msg01288.html
>>
>> The plan is to use XENMEM_add_to_physmap_batch in xl to map foreign pages from
>> one DomU to another so that the pages can be shared. But currently there is no
>> wrapper for XENMEM_add_to_physmap_batch in libxc, so add one.
>>
>> Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
>> ---
>> +int xc_domain_add_to_physmap_batch(xc_interface *xch,
>> +                                   domid_t domid,
>> +                                   domid_t foreign_domid,
>> +                                   unsigned int space,
>> +                                   unsigned int size,
>> +                                   xen_ulong_t *idxs,
>> +                                   xen_pfn_t *gpfns,
>> +                                   int *errs)
>> +{
>> +    int rc;
>> +    DECLARE_HYPERCALL_BOUNCE(idxs, size * sizeof(*idxs), XC_HYPERCALL_BUFFER_BOUNCE_IN);
>> +    DECLARE_HYPERCALL_BOUNCE(gpfns, size * sizeof(*gpfns), XC_HYPERCALL_BUFFER_BOUNCE_IN);
>> +    DECLARE_HYPERCALL_BOUNCE(errs, size * sizeof(*errs), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
>> +
>> +    struct xen_add_to_physmap_batch xatp_batch = {
>> +        .domid = domid,
>> +        .space = space,
>> +        .size = size,
>> +        .u = {.foreign_domid = foreign_domid}
>
> Coding style issue.

Do you mean that I should add a space between '{' and '.' near ".u =
{.foreign" in this line?

>
> Just a note, the struct is different for pre-4.7 and post-4.7 Xen. You
> don't need to implement a version of this function for pre-4.7 Xen.
>
>> +    };
>> +
>> +    if ( xc_hypercall_bounce_pre(xch, idxs)  ||
>> +         xc_hypercall_bounce_pre(xch, gpfns) ||
>> +         xc_hypercall_bounce_pre(xch, errs)  )
>> +    {
>> +        PERROR("Could not bounce memory for XENMEM_add_to_physmap_batch");
>> +        goto out;
>
> rc will be uninitialised in this exit path.
>
>> +    }
>> +
>> +    set_xen_guest_handle(xatp_batch.idxs, idxs);
>> +    set_xen_guest_handle(xatp_batch.gpfns, gpfns);
>> +    set_xen_guest_handle(xatp_batch.errs, errs);
>> +
>> +    rc = do_memory_op(xch, XENMEM_add_to_physmap_batch,
>> +                      &xatp_batch, sizeof(xatp_batch));
>> +
>> +out:
>> +    xc_hypercall_bounce_post(xch, idxs);
>> +    xc_hypercall_bounce_post(xch, gpfns);
>> +    xc_hypercall_bounce_post(xch, errs);
>> +
>> +    return rc;
>> +}
>> +
>>  int xc_domain_claim_pages(xc_interface *xch,
>>                                 uint32_t domid,
>>                                 unsigned long nr_pages)
>> --
>> 2.13.1
>>


Cheers,

Zhongze Liu

Patch

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 1629f412dd..9501818558 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1372,6 +1372,15 @@  int xc_domain_add_to_physmap(xc_interface *xch,
                              unsigned long idx,
                              xen_pfn_t gpfn);
 
+int xc_domain_add_to_physmap_batch(xc_interface *xch,
+                                   domid_t domid,
+                                   domid_t foreign_domid,
+                                   unsigned int space,
+                                   unsigned int size,
+                                   xen_ulong_t *idxs,
+                                   xen_pfn_t *gpfns,
+                                   int *errs);
+
 int xc_domain_populate_physmap(xc_interface *xch,
                                uint32_t domid,
                                unsigned long nr_extents,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 5d192ea0e4..7c942f73c8 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1032,6 +1032,50 @@  int xc_domain_add_to_physmap(xc_interface *xch,
     return do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp));
 }
 
+int xc_domain_add_to_physmap_batch(xc_interface *xch,
+                                   domid_t domid,
+                                   domid_t foreign_domid,
+                                   unsigned int space,
+                                   unsigned int size,
+                                   xen_ulong_t *idxs,
+                                   xen_pfn_t *gpfns,
+                                   int *errs)
+{
+    int rc;
+    DECLARE_HYPERCALL_BOUNCE(idxs, size * sizeof(*idxs), XC_HYPERCALL_BUFFER_BOUNCE_IN);
+    DECLARE_HYPERCALL_BOUNCE(gpfns, size * sizeof(*gpfns), XC_HYPERCALL_BUFFER_BOUNCE_IN);
+    DECLARE_HYPERCALL_BOUNCE(errs, size * sizeof(*errs), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    struct xen_add_to_physmap_batch xatp_batch = {
+        .domid = domid,
+        .space = space,
+        .size = size,
+        .u = {.foreign_domid = foreign_domid}
+    };
+
+    if ( xc_hypercall_bounce_pre(xch, idxs)  ||
+         xc_hypercall_bounce_pre(xch, gpfns) ||
+         xc_hypercall_bounce_pre(xch, errs)  )
+    {
+        PERROR("Could not bounce memory for XENMEM_add_to_physmap_batch");
+        goto out;
+    }
+
+    set_xen_guest_handle(xatp_batch.idxs, idxs);
+    set_xen_guest_handle(xatp_batch.gpfns, gpfns);
+    set_xen_guest_handle(xatp_batch.errs, errs);
+
+    rc = do_memory_op(xch, XENMEM_add_to_physmap_batch,
+                      &xatp_batch, sizeof(xatp_batch));
+
+out:
+    xc_hypercall_bounce_post(xch, idxs);
+    xc_hypercall_bounce_post(xch, gpfns);
+    xc_hypercall_bounce_post(xch, errs);
+
+    return rc;
+}
+
 int xc_domain_claim_pages(xc_interface *xch,
                                uint32_t domid,
                                unsigned long nr_pages)