Message ID | 1602780274-29141-17-git-send-email-olekstysh@gmail.com (mailing list archive) |
---|---|
State | New |
Series | IOREQ feature (+ virtio-mmio) on Arm |
On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
>      return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>  
> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
> +                          unsigned long gfn, mfn_t mfn)
> +{
> +    struct page_info *page = mfn_to_page(mfn);
> +    int rc;
> +
> +    if ( !get_page(page, fd) )
> +        return -EINVAL;
> +
> +    /*
> +     * It is valid to always use p2m_map_foreign_rw here as if this gets
> +     * called that d != fd. A case when d == fd would be rejected by
> +     * rcu_lock_remote_domain_by_id() earlier.
> +     */

Are you describing things here on the assumption that no new callers may
surface later on? To catch such, I'd recommend adding at least a
respective ASSERT().

> +    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
> +    if ( rc )
> +        put_page(page);
> +
> +    return 0;

I can't imagine it's correct to not signal the error to the caller(s).

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1099,7 +1099,8 @@ static int acquire_resource(
>       * reference counted, it is unsafe to allow mapping of
>       * resource pages unless the caller is the hardware domain.
>       */
> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
> +         !arch_refcounts_p2m() )
>          return -EACCES;

I guess the hook may want naming differently, as both prior parts of the
condition should be needed only on the x86 side, and there (for PV)
there's no p2m involved in the refcounting. Maybe
arch_acquire_resource_check()? And then the comment wants moving into the
x86 hook as well. You may want to consider leaving a more generic one
here...

Jan
On 12.11.20 14:18, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
>>      return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>>  }
>>  
>> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
>> +                          unsigned long gfn, mfn_t mfn)
>> +{
>> +    struct page_info *page = mfn_to_page(mfn);
>> +    int rc;
>> +
>> +    if ( !get_page(page, fd) )
>> +        return -EINVAL;
>> +
>> +    /*
>> +     * It is valid to always use p2m_map_foreign_rw here as if this gets
>> +     * called that d != fd. A case when d == fd would be rejected by
>> +     * rcu_lock_remote_domain_by_id() earlier.
>> +     */
> Are you describing things here on the assumption that no new callers may
> surface later on? To catch such, I'd recommend adding at least a
> respective ASSERT().

Agree, will add.

>> +    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
>> +    if ( rc )
>> +        put_page(page);
>> +
>> +    return 0;
> I can't imagine it's correct to not signal the error to the caller(s).

It is not correct, I just missed placing "return rc" here. Thank you for
the catch.

>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -1099,7 +1099,8 @@ static int acquire_resource(
>>       * reference counted, it is unsafe to allow mapping of
>>       * resource pages unless the caller is the hardware domain.
>>       */
>> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
>> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
>> +         !arch_refcounts_p2m() )
>>          return -EACCES;
> I guess the hook may want naming differently, as both prior parts of the
> condition should be needed only on the x86 side, and there (for PV)
> there's no p2m involved in the refcounting. Maybe
> arch_acquire_resource_check()? And then the comment wants moving into the
> x86 hook as well. You may want to consider leaving a more generic one
> here...

OK, again, I am fine with the name :) Will follow everything suggested
above.
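For reference, below is an editor's sketch (not the actual follow-up patch) of how the Arm set_foreign_p2m_entry() might look with both of Jan's comments addressed, i.e. with an ASSERT() guarding the d != fd assumption and with the result of guest_physmap_add_entry() propagated to the caller:

```c
int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
                          unsigned long gfn, mfn_t mfn)
{
    struct page_info *page = mfn_to_page(mfn);
    int rc;

    /* Catch any future caller that violates the d != fd assumption. */
    ASSERT(d != fd);

    if ( !get_page(page, fd) )
        return -EINVAL;

    /*
     * p2m_map_foreign_rw can always be used here because d != fd; a
     * d == fd request would already have been rejected by
     * rcu_lock_remote_domain_by_id() earlier.
     */
    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
    if ( rc )
        put_page(page);

    /* Propagate the result instead of unconditionally returning 0. */
    return rc;
}
```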
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4eeb867..370173c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
     return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here as if this gets
+     * called that d != fd. A case when d == fd would be rejected by
+     * rcu_lock_remote_domain_by_id() earlier.
+     */
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return 0;
+}
+
 static struct page_info *p2m_allocate_root(void)
 {
     struct page_info *page;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6102771..8d03ab4 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1320,7 +1320,8 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }
 
 /* Set foreign mfn in the given guest's p2m table. */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
 {
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
@@ -2620,7 +2621,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
+    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index cf53ca3..fb9ea96 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1099,7 +1099,8 @@ static int acquire_resource(
      * reference counted, it is unsafe to allow mapping of
      * resource pages unless the caller is the hardware domain.
      */
-    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
+    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
+         !arch_refcounts_p2m() )
         return -EACCES;
 
     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1168,7 +1169,7 @@ static int acquire_resource(
 
     for ( i = 0; !rc && i < xmar.nr_frames; i++ )
     {
-        rc = set_foreign_p2m_entry(currd, gfn_list[i],
+        rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                    _mfn(mfn_list[i]));
         /* rc should be -EIO for any iteration other than the first */
         if ( rc && i )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 28ca9a8..d11be80 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -161,6 +161,15 @@ typedef enum {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_refcounts_p2m(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is supported on Arm.
+     */
+    return true;
+}
+
 static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
@@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
-static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                        mfn_t mfn)
-{
-    /*
-     * NOTE: If this is implemented then proper reference counting of
-     * foreign entries will need to be implemented.
-     */
-    return -EOPNOTSUPP;
-}
-
 /*
  * A vCPU has cache enabled only when the MMU is enabled and data cache
  * is enabled.
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5f7ba31..6c42022 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -369,6 +369,15 @@ struct p2m_domain {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_refcounts_p2m(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is not supported on x86.
+     */
+    return false;
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -634,9 +643,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include <xen/mm.h>
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                            mfn_t mfn,
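To illustrate Jan's renaming suggestion, here is a rough editor's sketch of what the per-arch hook could look like as arch_acquire_resource_check(), with the x86-specific reasoning moved next to the x86 implementation and the common code reduced to a single check. Passing the domain explicitly is an assumption made for illustration and may differ from what was eventually committed:

```c
/* xen/include/asm-arm/p2m.h */
static inline bool arch_acquire_resource_check(struct domain *d)
{
    /* Arm reference counts foreign entries in set_foreign_p2m_entry(). */
    return true;
}

/* xen/include/asm-x86/p2m.h */
static inline bool arch_acquire_resource_check(struct domain *d)
{
    /*
     * Foreign mappings of resource pages are not reference counted for
     * translated domains on x86, so it is unsafe to allow them unless
     * the caller is the hardware domain.
     */
    return !paging_mode_translate(d) || is_hardware_domain(d);
}

/* xen/common/memory.c: acquire_resource() then only asks the hook. */
    if ( !arch_acquire_resource_check(currd) )
        return -EACCES;
```

The point of the rename is that the x86-only justification lives in the x86 header, while the common code merely asks whether the current domain may acquire resource pages at all.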