Message ID: 20230602004824.20731-15-vikram.garhwal@amd.com (mailing list archive)
State: New, archived
Series: dynamic node programming using overlay dtbo
Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v7 14/19] common/device_tree: Add rwlock for dt_host
>
> Dynamic programming ops will modify the dt_host and there might be other
> function which are browsing the dt_host at the same time. To avoid the race
> conditions, adding rwlock for browsing the dt_host during runtime.
>
> Reason behind adding rwlock instead of spinlock:
> For now, dynamic programming is the sole modifier of dt_host in Xen during
> run time. All other access functions like iommu_release_dt_device() are
> just reading the dt_host during run-time. So, there is a need to protect
> others from browsing the dt_host while dynamic programming is modifying
> it. rwlock is better suitable for this task as spinlock won't be able to
> differentiate between read and write access.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
Hi Vikram,

On 02/06/2023 02:48, Vikram Garhwal wrote:
> Dynamic programming ops will modify the dt_host and there might be other
> function which are browsing the dt_host at the same time. To avoid the race
> conditions, adding rwlock for browsing the dt_host during runtime.
>
> Reason behind adding rwlock instead of spinlock:
> For now, dynamic programming is the sole modifier of dt_host in Xen during
> run time. All other access functions like iommu_release_dt_device() are
> just reading the dt_host during run-time. So, there is a need to protect
> others from browsing the dt_host while dynamic programming is modifying
> it. rwlock is better suitable for this task as spinlock won't be able to
> differentiate between read and write access.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>
> ---
> Changes from v6:
>     Remove redundant "read_unlock(&dt_host->lock);" in the following case:
>         XEN_DOMCTL_deassign_device
> ---
>  xen/common/device_tree.c              |  4 ++++
>  xen/drivers/passthrough/device_tree.c | 15 +++++++++++++++
>  xen/include/xen/device_tree.h         |  6 ++++++
>  3 files changed, 25 insertions(+)
>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index c5250a1644..c8fcdf8fa1 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -2146,7 +2146,11 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
>
>      dt_dprintk(" <- unflatten_device_tree()\n");
>
> +    /* Init r/w lock for host device tree. */
> +    rwlock_init(&dt_host->lock);

unflatten_device_tree() is called for dt_host and will also be used when
adding a new dt overlay meaning that rwlock_init for dt_host will be called
multiple times (even if mynodes != &dt_host) which is rather strange.

[...]

> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index e239f7de26..dee40d2ea3 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -18,6 +18,7 @@
>  #include <xen/string.h>
>  #include <xen/types.h>
>  #include <xen/list.h>
> +#include <xen/rwlock.h>

Sort alphabetically (despite list.h not being placed correctly)

>
>  #define DEVICE_TREE_MAX_DEPTH 16
>
> @@ -106,6 +107,11 @@ struct dt_device_node {
>      struct list_head domain_list;
>
>      struct device dev;
> +
> +    /*
> +     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
> +     */

No need for multiline comment style

> +    rwlock_t lock;
>  };
>
>  #define dt_to_dev(dt_node) (&(dt_node)->dev)

~Michal
Hi,

On 02/06/2023 01:48, Vikram Garhwal wrote:
> Dynamic programming ops will modify the dt_host and there might be other
> function which are browsing the dt_host at the same time. To avoid the race
> conditions, adding rwlock for browsing the dt_host during runtime.

Please explain that writer will be added in a follow-up patch.

> Reason behind adding rwlock instead of spinlock:
> For now, dynamic programming is the sole modifier of dt_host in Xen during
> run time. All other access functions like iommu_release_dt_device() are
> just reading the dt_host during run-time. So, there is a need to protect
> others from browsing the dt_host while dynamic programming is modifying
> it. rwlock is better suitable for this task as spinlock won't be able to
> differentiate between read and write access.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

[...]

> @@ -106,6 +107,11 @@ struct dt_device_node {
>      struct list_head domain_list;
>
>      struct device dev;
> +
> +    /*
> +     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
> +     */

From the description, it sounds like the rwlock will only be used to
protect the entire device-tree rather than a single node. So it doesn't
seem to be sensible to increase each node structure (there are a lot) by
12 bytes.

Can you outline your plan?

> +    rwlock_t lock;
>  };
>
>  #define dt_to_dev(dt_node) (&(dt_node)->dev)

Cheers,

--
Julien Grall
On Mon, Jun 05, 2023 at 08:52:17PM +0100, Julien Grall wrote:
> Hi,
>
> On 02/06/2023 01:48, Vikram Garhwal wrote:
> > Dynamic programming ops will modify the dt_host and there might be other
> > function which are browsing the dt_host at the same time. To avoid the race
> > conditions, adding rwlock for browsing the dt_host during runtime.
>
> Please explain that writer will be added in a follow-up patch.
>
[...]
>
> > @@ -106,6 +107,11 @@ struct dt_device_node {
> >      struct list_head domain_list;
> >
> >      struct device dev;
> > +
> > +    /*
> > +     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
> > +     */
>
> From the description, it sounds like the rwlock will only be used to protect
> the entire device-tree rather than a single node. So it doesn't seem to be
> sensible to increase each node structure (there are a lot) by 12 bytes.
>
> Can you outline your plan?

Yeah, so intent is to protect the dt_host as whole instead of each node.
I moved it out of struct and kept a single lock for dt_host.

> > +    rwlock_t lock;
> >  };
> >
> >  #define dt_to_dev(dt_node) (&(dt_node)->dev)
>
> Cheers,
>
> --
> Julien Grall
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c5250a1644..c8fcdf8fa1 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2146,7 +2146,11 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 
     dt_dprintk(" <- unflatten_device_tree()\n");
 
+    /* Init r/w lock for host device tree. */
+    rwlock_init(&dt_host->lock);
+
     return 0;
+
 }
 
 static void dt_alias_add(struct dt_alias_prop *ap,
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 301a5bcd97..f4d9deb624 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -112,6 +112,8 @@ int iommu_release_dt_devices(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    read_lock(&dt_host->lock);
+
     list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
     {
         rc = iommu_deassign_dt_device(d, dev);
@@ -119,10 +121,14 @@ int iommu_release_dt_devices(struct domain *d)
         {
             dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
                     dt_node_full_name(dev), d->domain_id);
+
+            read_unlock(&dt_host->lock);
             return rc;
         }
     }
 
+    read_unlock(&dt_host->lock);
+
     return 0;
 }
 
@@ -246,6 +252,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
     int ret;
     struct dt_device_node *dev;
 
+    read_lock(&dt_host->lock);
+
     switch ( domctl->cmd )
     {
     case XEN_DOMCTL_assign_device:
@@ -295,7 +303,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         spin_unlock(&dtdevs_lock);
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_add_dt_device(dev);
         if ( ret < 0 )
@@ -333,7 +344,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
             break;
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_deassign_dt_device(d, dev);
 
@@ -348,5 +362,6 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
             break;
     }
 
+    read_unlock(&dt_host->lock);
     return ret;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index e239f7de26..dee40d2ea3 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -18,6 +18,7 @@
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/list.h>
+#include <xen/rwlock.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -106,6 +107,11 @@ struct dt_device_node {
     struct list_head domain_list;
 
     struct device dev;
+
+    /*
+     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
+     */
+    rwlock_t lock;
 };
 
 #define dt_to_dev(dt_node) (&(dt_node)->dev)
Dynamic programming ops will modify the dt_host and there might be other
function which are browsing the dt_host at the same time. To avoid the race
conditions, adding rwlock for browsing the dt_host during runtime.

Reason behind adding rwlock instead of spinlock:
For now, dynamic programming is the sole modifier of dt_host in Xen during
run time. All other access functions like iommu_release_dt_device() are
just reading the dt_host during run-time. So, there is a need to protect
others from browsing the dt_host while dynamic programming is modifying
it. rwlock is better suitable for this task as spinlock won't be able to
differentiate between read and write access.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

---
Changes from v6:
    Remove redundant "read_unlock(&dt_host->lock);" in the following case:
        XEN_DOMCTL_deassign_device
---
 xen/common/device_tree.c              |  4 ++++
 xen/drivers/passthrough/device_tree.c | 15 +++++++++++++++
 xen/include/xen/device_tree.h         |  6 ++++++
 3 files changed, 25 insertions(+)