Message ID: 20171012162603.3016-7-paul.durrant@citrix.com (mailing list archive)
State: New, archived
>>> On 12.10.17 at 18:25, <paul.durrant@citrix.com> wrote:
> ... XENMEM_resource_ioreq_server
>
> This patch adds support for a new resource type that can be mapped using
> the XENMEM_acquire_resource memory op.
>
> If an emulator makes use of this resource type then, instead of mapping
> gfns, the IOREQ server will allocate pages from the heap. These pages
> will never be present in the P2M of the guest at any point and so are
> not vulnerable to any direct attack by the guest. They are only ever
> accessible by Xen and any domain that has mapping privilege over the
> guest (which may or may not be limited to the domain running the
> emulator).
>
> NOTE: Use of the new resource type is not compatible with use of
>       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag
>       is set.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: George Dunlap <George.Dunlap@eu.citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>

Can you have validly retained this?

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
>
> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( iorp->page )
> +    {
> +        /*
> +         * If a guest frame has already been mapped (which may happen
> +         * on demand if hvm_get_ioreq_server_info() is called), then
> +         * allocating a page is not permitted.
> +         */
> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +            return -EPERM;
> +
> +        return 0;
> +    }
> +
> +    /*
> +     * Allocated IOREQ server pages are assigned to the emulating
> +     * domain, not the target domain. This is because the emulator is
> +     * likely to be destroyed after the target domain has been torn
> +     * down, and we must use MEMF_no_refcount otherwise page allocation
> +     * could fail if the emulating domain has already reached its
> +     * maximum allocation.
> +     */
> +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> +    if ( !iorp->page )
> +        return -ENOMEM;
> +
> +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> +    {

ASSERT_UNREACHABLE() ?

> @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>      return rc;
>  }
>
> +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> +                               unsigned long idx, mfn_t *mfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +
> +    if ( id == DEFAULT_IOSERVID )
> +        return -EOPNOTSUPP;
> +
> +    s = get_ioreq_server(d, id);
> +
> +    ASSERT(!IS_DEFAULT(s));
> +
> +    rc = hvm_ioreq_server_alloc_pages(s);
> +    if ( rc )
> +        goto out;
> +
> +    switch ( idx )
> +    {
> +    case XENMEM_resource_ioreq_server_frame_bufioreq:
> +        rc = -ENOENT;
> +        if ( !HANDLE_BUFIOREQ(s) )
> +            goto out;
> +
> +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> +        rc = 0;
> +        break;

How about

        if ( HANDLE_BUFIOREQ(s) )
            *mfn = _mfn(page_to_mfn(s->bufioreq.page));
        else
            rc = -ENOENT;
        break;

?

> +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> +                                unsigned long frame,
> +                                unsigned long nr_frames,
> +                                unsigned long mfn_list[])
> +{
> +    unsigned int i;

This now doesn't match up with the upper bound's type.

> @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
>       * is optional if nr_frames is 0.
>       */
>      uint64_aligned_t frame;
> +
> +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))

I don't see what you need the trailing underscore for. This is
normally only needed on local variables defined in (gcc extended)
macros, which we generally can't use in a public header anyway.

Jan
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 16 October 2017 15:07
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel@lists.xenproject.org; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
>
> >>> On 12.10.17 at 18:25, <paul.durrant@citrix.com> wrote:
> > ... XENMEM_resource_ioreq_server
> >
> > This patch adds support for a new resource type that can be mapped using
> > the XENMEM_acquire_resource memory op.
> >
> > If an emulator makes use of this resource type then, instead of mapping
> > gfns, the IOREQ server will allocate pages from the heap. These pages
> > will never be present in the P2M of the guest at any point and so are
> > not vulnerable to any direct attack by the guest. They are only ever
> > accessible by Xen and any domain that has mapping privilege over the
> > guest (which may or may not be limited to the domain running the
> > emulator).
> >
> > NOTE: Use of the new resource type is not compatible with use of
> >       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag
> >       is set.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Acked-by: George Dunlap <George.Dunlap@eu.citrix.com>
> > Reviewed-by: Wei Liu <wei.liu2@citrix.com>
>
> Can you have validly retained this?

I didn't think the structure of this particular patch had changed that
fundamentally.

> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> >      return rc;
> >  }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct domain *currd = current->domain;
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( iorp->page )
> > +    {
> > +        /*
> > +         * If a guest frame has already been mapped (which may happen
> > +         * on demand if hvm_get_ioreq_server_info() is called), then
> > +         * allocating a page is not permitted.
> > +         */
> > +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > +            return -EPERM;
> > +
> > +        return 0;
> > +    }
> > +
> > +    /*
> > +     * Allocated IOREQ server pages are assigned to the emulating
> > +     * domain, not the target domain. This is because the emulator is
> > +     * likely to be destroyed after the target domain has been torn
> > +     * down, and we must use MEMF_no_refcount otherwise page allocation
> > +     * could fail if the emulating domain has already reached its
> > +     * maximum allocation.
> > +     */
> > +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> > +    if ( !iorp->page )
> > +        return -ENOMEM;
> > +
> > +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> > +    {
>
> ASSERT_UNREACHABLE() ?

Ok.

> > @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
> >      return rc;
> >  }
> >
> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> > +                               unsigned long idx, mfn_t *mfn)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > +
> > +    if ( id == DEFAULT_IOSERVID )
> > +        return -EOPNOTSUPP;
> > +
> > +    s = get_ioreq_server(d, id);
> > +
> > +    ASSERT(!IS_DEFAULT(s));
> > +
> > +    rc = hvm_ioreq_server_alloc_pages(s);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    switch ( idx )
> > +    {
> > +    case XENMEM_resource_ioreq_server_frame_bufioreq:
> > +        rc = -ENOENT;
> > +        if ( !HANDLE_BUFIOREQ(s) )
> > +            goto out;
> > +
> > +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> > +        rc = 0;
> > +        break;
>
> How about
>
>         if ( HANDLE_BUFIOREQ(s) )
>             *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>         else
>             rc = -ENOENT;
>         break;
>
> ?

Looking at the overall structure I prefer it as it is. If I could have
got rid of the out label by doing this then it might have been worth
the change.

> > +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> > +                                unsigned long frame,
> > +                                unsigned long nr_frames,
> > +                                unsigned long mfn_list[])
> > +{
> > +    unsigned int i;
>
> This now doesn't match up with the upper bound's type.

Ok.

> > @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
> >       * is optional if nr_frames is 0.
> >       */
> >      uint64_aligned_t frame;
> > +
> > +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> > +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))
>
> I don't see what you need the trailing underscore for. This is
> normally only needed on local variables defined in (gcc extended)
> macros, which we generally can't use in a public header anyway.

I thought it was generally desirable to attempt to distinguish macro
arguments from variables to avoid name clashes. What do you prefer I
should do in a public header?

  Paul

> Jan
>>> On 16.10.17 at 16:17, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 16 October 2017 15:07
>> >>> On 12.10.17 at 18:25, <paul.durrant@citrix.com> wrote:
>> > ... XENMEM_resource_ioreq_server
>> >
>> > This patch adds support for a new resource type that can be mapped using
>> > the XENMEM_acquire_resource memory op.
>> >
>> > If an emulator makes use of this resource type then, instead of mapping
>> > gfns, the IOREQ server will allocate pages from the heap. These pages
>> > will never be present in the P2M of the guest at any point and so are
>> > not vulnerable to any direct attack by the guest. They are only ever
>> > accessible by Xen and any domain that has mapping privilege over the
>> > guest (which may or may not be limited to the domain running the
>> > emulator).
>> >
>> > NOTE: Use of the new resource type is not compatible with use of
>> >       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag
>> >       is set.
>> >
>> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>> > Acked-by: George Dunlap <George.Dunlap@eu.citrix.com>
>> > Reviewed-by: Wei Liu <wei.liu2@citrix.com>
>>
>> Can you have validly retained this?
>
> I didn't think the structure of this particular patch had changed that
> fundamentally.

The structure didn't change that much, yes, but the page type ref
acquiring which you now do alters behavior meaningfully.

>> > @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>> >      return rc;
>> >  }
>> >
>> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
>> > +                               unsigned long idx, mfn_t *mfn)
>> > +{
>> > +    struct hvm_ioreq_server *s;
>> > +    int rc;
>> > +
>> > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>> > +
>> > +    if ( id == DEFAULT_IOSERVID )
>> > +        return -EOPNOTSUPP;
>> > +
>> > +    s = get_ioreq_server(d, id);
>> > +
>> > +    ASSERT(!IS_DEFAULT(s));
>> > +
>> > +    rc = hvm_ioreq_server_alloc_pages(s);
>> > +    if ( rc )
>> > +        goto out;
>> > +
>> > +    switch ( idx )
>> > +    {
>> > +    case XENMEM_resource_ioreq_server_frame_bufioreq:
>> > +        rc = -ENOENT;
>> > +        if ( !HANDLE_BUFIOREQ(s) )
>> > +            goto out;
>> > +
>> > +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>> > +        rc = 0;
>> > +        break;
>>
>> How about
>>
>>         if ( HANDLE_BUFIOREQ(s) )
>>             *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>>         else
>>             rc = -ENOENT;
>>         break;
>>
>> ?
>
> Looking at the overall structure I prefer it as it is. If I could have
> got rid of the out label by doing this then it might have been worth
> the change.

Okay, you're the maintainer. Just to clarify - what I find particularly
odd is the setting of rc to zero above, yet the other case block
relying on it already being zero when entering the switch().

>> > @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
>> >       * is optional if nr_frames is 0.
>> >       */
>> >      uint64_aligned_t frame;
>> > +
>> > +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
>> > +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))
>>
>> I don't see what you need the trailing underscore for. This is
>> normally only needed on local variables defined in (gcc extended)
>> macros, which we generally can't use in a public header anyway.
>
> I thought it was generally desirable to attempt to distinguish macro
> arguments from variables to avoid name clashes. What do you prefer I
> should do in a public header?

There are various cases to be considered here, but in the one at hand
there is no risk of a name clash at all: regardless of the name of the
parameter, any instance of it will be expanded exactly once. Even if
the expansion matches exactly the parameter name, no issue will arise.
There are certainly forms of macros where some care is needed in how
to name the parameters. Trailing underscores to disambiguate names,
however, should - as said - rarely if ever be needed for other than
local variables inside the macro body (because _then_ there indeed
can be name conflicts with outer scope variables).

Jan
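To illustrate the distinction Jan draws, here is a minimal stand-alone
C sketch (both macro names are made up for illustration and appear
nowhere in the patch):

    #include <stdio.h>

    /*
     * Plain expansion, like XENMEM_resource_ioreq_server_frame_ioreq():
     * the parameter is substituted textually exactly once, so even an
     * argument spelled identically to the parameter cannot clash.
     */
    #define frame_ioreq(n) (1 + (n))

    /*
     * GCC statement-expression macro declaring a local: here the
     * trailing underscore earns its keep, since a local named plainly
     * "x" would shadow, and self-initialize from, a caller's variable
     * of the same name.
     */
    #define double_of(x) ({ __typeof__(x) x_ = (x); x_ + x_; })

    int main(void)
    {
        int n = 2;

        printf("%d\n", frame_ioreq(n)); /* expands to (1 + (n)) -> 3 */
        printf("%d\n", double_of(n));   /* -> 4; safe despite the local */
        return 0;
    }

The second macro is exactly the kind of construct that cannot appear in
a public header, hence the underscore buys nothing there.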
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f654e7796c..ff41312455 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *currd = current->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    /*
+     * Allocated IOREQ server pages are assigned to the emulating
+     * domain, not the target domain. This is because the emulator is
+     * likely to be destroyed after the target domain has been torn
+     * down, and we must use MEMF_no_refcount otherwise page allocation
+     * could fail if the emulating domain has already reached its
+     * maximum allocation.
+     */
+    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
+    if ( !iorp->page )
+        return -ENOMEM;
+
+    if ( !get_page_type(iorp->page, PGT_writable_page) )
+    {
+        put_page(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    iorp->va = __map_domain_page_global(iorp->page);
+    if ( !iorp->va )
+    {
+        put_page_and_type(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    clear_page(iorp->va);
+    return 0;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( !iorp->page )
+        return;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_and_type(iorp->page);
+    iorp->page = NULL;
+}
+
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
@@ -484,6 +560,27 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }
 
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc;
+
+    rc = hvm_alloc_ioreq_mfn(s, false);
+
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
+        rc = hvm_alloc_ioreq_mfn(s, true);
+
+    if ( rc )
+        hvm_free_ioreq_mfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+{
+    hvm_free_ioreq_mfn(s, true);
+    hvm_free_ioreq_mfn(s, false);
+}
+
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
 {
     unsigned int i;
@@ -612,7 +709,18 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
+
+    /*
+     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
 
     return rc;
 }
@@ -622,6 +730,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
     hvm_ioreq_server_free_rangesets(s);
 }
 
@@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    if ( id == DEFAULT_IOSERVID )
+        return -EOPNOTSUPP;
+
+    s = get_ioreq_server(d, id);
+
+    ASSERT(!IS_DEFAULT(s));
+
+    rc = hvm_ioreq_server_alloc_pages(s);
+    if ( rc )
+        goto out;
+
+    switch ( idx )
+    {
+    case XENMEM_resource_ioreq_server_frame_bufioreq:
+        rc = -ENOENT;
+        if ( !HANDLE_BUFIOREQ(s) )
+            goto out;
+
+        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
+        rc = 0;
+        break;
+
+    case XENMEM_resource_ioreq_server_frame_ioreq(0):
+        *mfn = _mfn(page_to_mfn(s->ioreq.page));
+        break;
+
+    default:
+        rc = -EINVAL;
+        break;
+    }
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    return rc;
+}
+
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d9df5ca69f..c9bc4a4e92 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -122,6 +122,7 @@
 #include <asm/fixmap.h>
 #include <asm/io_apic.h>
 #include <asm/pci.h>
+#include <asm/hvm/ioreq.h>
 #include <asm/hvm/grant_table.h>
 #include <asm/pv/grant_table.h>
 
@@ -3866,6 +3867,27 @@ int xenmem_add_to_physmap_one(
     return rc;
 }
 
+int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
+                                unsigned long frame,
+                                unsigned long nr_frames,
+                                unsigned long mfn_list[])
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+        int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+}
+
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a88fc83565..1a9872b75c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1000,6 +1000,11 @@ static int acquire_resource(XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( xmar.type )
     {
+    case XENMEM_resource_ioreq_server:
+        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
+                                         xmar.nr_frames, mfn_list);
+        break;
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 1829fcf43e..9e37c97a37 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -31,6 +31,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port);
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index f2e0f498c4..637b1eee1c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -615,4 +615,9 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
+int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
+                                unsigned long frame,
+                                unsigned long nr_frames,
+                                unsigned long mfn_list[]);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 9677bd74e7..59b6006910 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -90,6 +90,10 @@ struct xen_dm_op_create_ioreq_server {
  * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
  * respectively. (If the IOREQ Server is not handling buffered emulation
  * only <ioreq_gfn> will be valid).
+ *
+ * NOTE: To access the synchronous ioreq structures and buffered ioreq
+ *       ring, it is preferable to use the XENMEM_acquire_resource memory
+ *       op specifying resource type XENMEM_resource_ioreq_server.
  */
 #define XEN_DMOP_get_ioreq_server_info 2
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b7cf753d75..53380287d4 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -609,9 +609,14 @@ struct xen_mem_acquire_resource {
     domid_t domid;
     /* IN - the type of resource */
     uint16_t type;
+
+#define XENMEM_resource_ioreq_server 0
+
     /*
      * IN - a type-specific resource identifier, which must be zero
      *      unless stated otherwise.
+     *
+     * type == XENMEM_resource_ioreq_server -> id == ioreq server id
      */
     uint32_t id;
     /* IN/OUT - As an IN parameter number of (4K) frames of the resource
@@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
      * is optional if nr_frames is 0.
      */
     uint64_aligned_t frame;
+
+#define XENMEM_resource_ioreq_server_frame_bufioreq 0
+#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))
+
     /* IN/OUT - If the tools domain is PV then, upon return, frame_list
      *          will be populated with the MFNs of the resource.
      *          If the tools domain is HVM then it is expected that, on
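For context, the consumer side looks roughly as follows: rather than
calling XEN_DMOP_get_ioreq_server_info and mapping gfns, an emulator
acquires the pages directly via XENMEM_acquire_resource. Below is a
sketch using the libxenforeignmemory wrapper added elsewhere in this
series; the wrapper's exact signature and the header names are
assumptions, not part of this patch:

    #include <sys/mman.h>
    #include <xenctrl.h>
    #include <xenforeignmemory.h>
    #include <xen/memory.h>

    /*
     * Map the bufioreq page (frame 0) and the first ioreq page (frame 1)
     * of the given IOREQ server in a single call. *fres must remain live
     * for as long as the mapping is in use; tear down with
     * xenforeignmemory_unmap_resource().
     */
    static void *map_ioreq_pages(xenforeignmemory_handle *fmem,
                                 domid_t domid, ioservid_t id,
                                 xenforeignmemory_resource_handle **fres)
    {
        void *addr = NULL;

        *fres = xenforeignmemory_map_resource(
            fmem, domid, XENMEM_resource_ioreq_server, id,
            XENMEM_resource_ioreq_server_frame_bufioreq, /* start frame */
            2,                        /* nr_frames: bufioreq + ioreq[0] */
            &addr, PROT_READ | PROT_WRITE, 0);

        return *fres ? addr : NULL;
    }

Note that asking for the bufioreq frame only succeeds when the server
handles buffered requests (hvm_get_ioreq_server_frame() returns -ENOENT
otherwise); a server created without buffered ioreq support would start
at XENMEM_resource_ioreq_server_frame_ioreq(0) with an nr_frames of 1.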