Message ID | 20211013094707.163054-13-yishaih@nvidia.com (mailing list archive)
---|---
State | Superseded
Delegated to: | Bjorn Helgaas
Series | Add mlx5 live migration driver
On Wed, 13 Oct 2021 12:47:06 +0300
Yishai Hadas <yishaih@nvidia.com> wrote:

> Add infrastructure to let vfio_pci_core drivers trap device RESET.
>
> The motivation for this is to let the underlay driver be aware that
> reset was done and set its internal state accordingly.

I think the intention of the uAPI here is that the migration error
state is exited specifically via the reset ioctl. Maybe that should be
made more clear, but variant drivers can already wrap the core ioctl
for the purpose of determining that mechanism of reset has occurred.
Thanks,

Alex

> Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/vfio/pci/vfio_pci_config.c | 8 ++++++--
>  drivers/vfio/pci/vfio_pci_core.c   | 2 ++
>  include/linux/vfio_pci_core.h      | 10 ++++++++++
>  3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
> index 6e58b4bf7a60..002198376f43 100644
> --- a/drivers/vfio/pci/vfio_pci_config.c
> +++ b/drivers/vfio/pci/vfio_pci_config.c
> @@ -859,7 +859,9 @@ static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
>
>  	if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
>  		vfio_pci_zap_and_down_write_memory_lock(vdev);
> -		pci_try_reset_function(vdev->pdev);
> +		ret = pci_try_reset_function(vdev->pdev);
> +		if (!ret && vdev->ops && vdev->ops->reset_done)
> +			vdev->ops->reset_done(vdev);
>  		up_write(&vdev->memory_lock);
>  	}
>  }
> @@ -941,7 +943,9 @@ static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
>
>  	if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
>  		vfio_pci_zap_and_down_write_memory_lock(vdev);
> -		pci_try_reset_function(vdev->pdev);
> +		ret = pci_try_reset_function(vdev->pdev);
> +		if (!ret && vdev->ops && vdev->ops->reset_done)
> +			vdev->ops->reset_done(vdev);
>  		up_write(&vdev->memory_lock);
>  	}
>  }
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index e581a327f90d..d2497a8ed7f1 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -923,6 +923,8 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
>
>  		vfio_pci_zap_and_down_write_memory_lock(vdev);
>  		ret = pci_try_reset_function(vdev->pdev);
> +		if (!ret && vdev->ops && vdev->ops->reset_done)
> +			vdev->ops->reset_done(vdev);
>  		up_write(&vdev->memory_lock);
>
>  		return ret;
> diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
> index ef9a44b6cf5d..6ccf5824f098 100644
> --- a/include/linux/vfio_pci_core.h
> +++ b/include/linux/vfio_pci_core.h
> @@ -95,6 +95,15 @@ struct vfio_pci_mmap_vma {
>  	struct list_head vma_next;
>  };
>
> +/**
> + * struct vfio_pci_core_device_ops - VFIO PCI driver device callbacks
> + *
> + * @reset_done: Called when the device was reset
> + */
> +struct vfio_pci_core_device_ops {
> +	void (*reset_done)(struct vfio_pci_core_device *vdev);
> +};
> +
>  struct vfio_pci_core_device {
>  	struct vfio_device vdev;
>  	struct pci_dev *pdev;
> @@ -137,6 +146,7 @@ struct vfio_pci_core_device {
>  	struct mutex vma_lock;
>  	struct list_head vma_list;
>  	struct rw_semaphore memory_lock;
> +	const struct vfio_pci_core_device_ops *ops;
>  };
>
>  #define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)
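The wrapping Alex describes could look roughly like the sketch below: a variant driver supplies its own .ioctl callback in its vfio_device_ops, forwards everything to vfio_pci_core_ioctl(), and treats a successful VFIO_DEVICE_RESET as its cue to resync internal state. The mlx5vf_* wrapper struct and the resync helper are illustrative placeholders, not code from this series.

#include <linux/vfio.h>
#include <linux/vfio_pci_core.h>

/* Illustrative variant-driver wrapper around the core device. */
struct mlx5vf_pci_core_device {
	struct vfio_pci_core_device core_device;
	/* driver-private migration state would live here */
};

/* Hypothetical helper: resync SW state with the post-reset device. */
static void mlx5vf_reset_done(struct mlx5vf_pci_core_device *mvdev)
{
}

static long mlx5vf_pci_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
			     unsigned long arg)
{
	struct mlx5vf_pci_core_device *mvdev = container_of(core_vdev,
			struct mlx5vf_pci_core_device, core_device.vdev);
	long ret;

	ret = vfio_pci_core_ioctl(core_vdev, cmd, arg);

	/* The uAPI reset was issued and succeeded. */
	if (cmd == VFIO_DEVICE_RESET && !ret)
		mlx5vf_reset_done(mvdev);

	return ret;
}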
On Fri, Oct 15, 2021 at 01:52:37PM -0600, Alex Williamson wrote:
> On Wed, 13 Oct 2021 12:47:06 +0300
> Yishai Hadas <yishaih@nvidia.com> wrote:
>
> > Add infrastructure to let vfio_pci_core drivers trap device RESET.
> >
> > The motivation for this is to let the underlay driver be aware that
> > reset was done and set its internal state accordingly.
>
> I think the intention of the uAPI here is that the migration error
> state is exited specifically via the reset ioctl. Maybe that should be
> made more clear, but variant drivers can already wrap the core ioctl
> for the purpose of determining that mechanism of reset has occurred.

It is not just recovering the error state.

Any transition to reset changes the firmware state. Eg if userspace
uses one of the other emulation paths to trigger the reset after
putting the device off running then the driver state and FW state
become desynchronized.

So all the reset paths need to be synchronized somehow, either
blocked while in non-running states or aligning the SW state with the
new post-reset FW state.

Jason
On Fri, 15 Oct 2021 17:03:28 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> On Fri, Oct 15, 2021 at 01:52:37PM -0600, Alex Williamson wrote:
> > On Wed, 13 Oct 2021 12:47:06 +0300
> > Yishai Hadas <yishaih@nvidia.com> wrote:
> >
> > > Add infrastructure to let vfio_pci_core drivers trap device RESET.
> > >
> > > The motivation for this is to let the underlay driver be aware that
> > > reset was done and set its internal state accordingly.
> >
> > I think the intention of the uAPI here is that the migration error
> > state is exited specifically via the reset ioctl. Maybe that should be
> > made more clear, but variant drivers can already wrap the core ioctl
> > for the purpose of determining that mechanism of reset has occurred.
>
> It is not just recovering the error state.
>
> Any transition to reset changes the firmware state. Eg if userspace
> uses one of the other emulation paths to trigger the reset after
> putting the device off running then the driver state and FW state
> become desynchronized.
>
> So all the reset paths need to be synchronized somehow, either
> blocked while in non-running states or aligning the SW state with the
> new post-reset FW state.

This only catches the two flavors of FLR and the RESET ioctl itself, so
we've got gaps relative to "all the reset paths" anyway. I'm also
concerned about adding arbitrary callbacks for every case that it gets
too cumbersome to write a wrapper for the existing callbacks.

However, why is this a vfio thing when we have the
pci_error_handlers.reset_done callback? At best this ought to be
redundant to that. Thanks,

Alex
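For context, the hook Alex points to already exists on the PCI driver side; struct pci_error_handlers (abridged here from include/linux/pci.h, with several members elided as noted) pairs the AER callbacks with reset notifications:

struct pci_error_handlers {
	/* PCI bus error detected on this device */
	pci_ers_result_t (*error_detected)(struct pci_dev *dev,
					   pci_channel_state_t error);
	/* PCI function reset prepared or completed */
	void (*reset_prepare)(struct pci_dev *dev);
	void (*reset_done)(struct pci_dev *dev);
	/* mmio_enabled, slot_reset and resume callbacks elided */
};

A driver publishes these through the err_handler field of its struct pci_driver, so function resets issued through pci_reset_function()/pci_try_reset_function() end with reset_done() being called on the bound driver.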
On 10/16/2021 12:12 AM, Alex Williamson wrote:
> On Fri, 15 Oct 2021 17:03:28 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
>
>> On Fri, Oct 15, 2021 at 01:52:37PM -0600, Alex Williamson wrote:
>>> On Wed, 13 Oct 2021 12:47:06 +0300
>>> Yishai Hadas <yishaih@nvidia.com> wrote:
>>>
>>>> Add infrastructure to let vfio_pci_core drivers trap device RESET.
>>>>
>>>> The motivation for this is to let the underlay driver be aware that
>>>> reset was done and set its internal state accordingly.
>>> I think the intention of the uAPI here is that the migration error
>>> state is exited specifically via the reset ioctl. Maybe that should be
>>> made more clear, but variant drivers can already wrap the core ioctl
>>> for the purpose of determining that mechanism of reset has occurred.
>> It is not just recovering the error state.
>>
>> Any transition to reset changes the firmware state. Eg if userspace
>> uses one of the other emulation paths to trigger the reset after
>> putting the device off running then the driver state and FW state
>> become desynchronized.
>>
>> So all the reset paths need to be synchronized somehow, either
>> blocked while in non-running states or aligning the SW state with the
>> new post-reset FW state.
> This only catches the two flavors of FLR and the RESET ioctl itself, so
> we've got gaps relative to "all the reset paths" anyway. I'm also
> concerned about adding arbitrary callbacks for every case that it gets
> too cumbersome to write a wrapper for the existing callbacks.
>
> However, why is this a vfio thing when we have the
> pci_error_handlers.reset_done callback? At best this ought to be
> redundant to that. Thanks,
>
> Alex
>

Alex,

How about the below patch instead?

This will centralize the 'reset_done' notifications for drivers to one place
(i.e. pci_error_handlers.reset_done) and may close the gap that you pointed
out.

I just followed the logic in vfio_pci_aer_err_detected() from usage and
locking point of view.

Do we really need to take the &vdev->igate mutex as was done there?

The next patch from the series in mlx5 will stay as in V1; it may just
set its ops and be called upon PCI 'reset_done'.
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index e581a327f90d..20bf37c00fb6 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1925,6 +1925,27 @@ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
 	return PCI_ERS_RESULT_CAN_RECOVER;
 }
 
+static void vfio_pci_aer_err_reset_done(struct pci_dev *pdev)
+{
+	struct vfio_pci_core_device *vdev;
+	struct vfio_device *device;
+
+	device = vfio_device_get_from_dev(&pdev->dev);
+	if (device == NULL)
+		return;
+
+	vdev = container_of(device, struct vfio_pci_core_device, vdev);
+
+	mutex_lock(&vdev->igate);
+	if (vdev->ops && vdev->ops->reset_done)
+		vdev->ops->reset_done(vdev);
+	mutex_unlock(&vdev->igate);
+
+	vfio_device_put(device);
+
+	return;
+}
+
 int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn)
 {
 	struct vfio_device *device;
@@ -1947,6 +1968,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_sriov_configure);
 
 const struct pci_error_handlers vfio_pci_core_err_handlers = {
 	.error_detected = vfio_pci_aer_err_detected,
+	.reset_done = vfio_pci_aer_err_reset_done,
 };
 EXPORT_SYMBOL_GPL(vfio_pci_core_err_handlers);
 
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index ef9a44b6cf5d..6ccf5824f098 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -95,6 +95,15 @@ struct vfio_pci_mmap_vma {
 	struct list_head vma_next;
 };
 
+/**
+ * struct vfio_pci_core_device_ops - VFIO PCI driver device callbacks
+ *
+ * @reset_done: Called when the device was reset
+ */
+struct vfio_pci_core_device_ops {
+	void (*reset_done)(struct vfio_pci_core_device *vdev);
+};
+
 struct vfio_pci_core_device {
 	struct vfio_device vdev;
 	struct pci_dev *pdev;
@@ -137,6 +146,7 @@ struct vfio_pci_core_device {
 	struct mutex vma_lock;
 	struct list_head vma_list;
 	struct rw_semaphore memory_lock;
+	const struct vfio_pci_core_device_ops *ops;
 };
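If the core grew this hook, the consumer side in a variant driver would stay small. A rough sketch, reusing the illustrative mlx5vf_pci_core_device wrapper from the earlier sketch (the names are placeholders, not code from the series):

static void mlx5vf_pci_reset_done(struct vfio_pci_core_device *core_vdev)
{
	struct mlx5vf_pci_core_device *mvdev = container_of(core_vdev,
			struct mlx5vf_pci_core_device, core_device);

	/* hypothetical: flag that SW state must be resynced with FW */
	mlx5vf_reset_done(mvdev);
}

static const struct vfio_pci_core_device_ops mlx5vf_core_device_ops = {
	.reset_done = mlx5vf_pci_reset_done,
};

The driver's probe would then set mvdev->core_device.ops = &mlx5vf_core_device_ops; before calling vfio_pci_core_register_device().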
On Sun, Oct 17, 2021 at 05:29:39PM +0300, Yishai Hadas wrote:
> On 10/16/2021 12:12 AM, Alex Williamson wrote:
> > On Fri, 15 Oct 2021 17:03:28 -0300
> > Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > > On Fri, Oct 15, 2021 at 01:52:37PM -0600, Alex Williamson wrote:
> > > > On Wed, 13 Oct 2021 12:47:06 +0300
> > > > Yishai Hadas <yishaih@nvidia.com> wrote:
> > > > > Add infrastructure to let vfio_pci_core drivers trap device RESET.
> > > > >
> > > > > The motivation for this is to let the underlay driver be aware that
> > > > > reset was done and set its internal state accordingly.
> > > > I think the intention of the uAPI here is that the migration error
> > > > state is exited specifically via the reset ioctl. Maybe that should be
> > > > made more clear, but variant drivers can already wrap the core ioctl
> > > > for the purpose of determining that mechanism of reset has occurred.
> > > It is not just recovering the error state.
> > >
> > > Any transition to reset changes the firmware state. Eg if userspace
> > > uses one of the other emulation paths to trigger the reset after
> > > putting the device off running then the driver state and FW state
> > > become desynchronized.
> > >
> > > So all the reset paths need to be synchronized somehow, either
> > > blocked while in non-running states or aligning the SW state with the
> > > new post-reset FW state.
> > This only catches the two flavors of FLR and the RESET ioctl itself, so
> > we've got gaps relative to "all the reset paths" anyway. I'm also
> > concerned about adding arbitrary callbacks for every case that it gets
> > too cumbersome to write a wrapper for the existing callbacks.
> >
> > However, why is this a vfio thing when we have the
> > pci_error_handlers.reset_done callback? At best this ought to be
> > redundant to that. Thanks,
> >
> > Alex
> >
> Alex,
>
> How about the below patch instead?
>
> This will centralize the 'reset_done' notifications for drivers to one place
> (i.e. pci_error_handlers.reset_done) and may close the gap that you pointed
> out.
>
> I just followed the logic in vfio_pci_aer_err_detected() from usage and
> locking point of view.
>
> Do we really need to take the &vdev->igate mutex as was done there?
>
> The next patch from the series in mlx5 will stay as in V1; it may just
> set its ops and be called upon PCI 'reset_done'.
>
>
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index e581a327f90d..20bf37c00fb6 100644
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -1925,6 +1925,27 @@ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
> 	return PCI_ERS_RESULT_CAN_RECOVER;
> }
>
> +static void vfio_pci_aer_err_reset_done(struct pci_dev *pdev)
> +{
> +	struct vfio_pci_core_device *vdev;
> +	struct vfio_device *device;
> +
> +	device = vfio_device_get_from_dev(&pdev->dev);
> +	if (device == NULL)
> +		return;

Do not add new vfio_device_get_from_dev() calls, this should extract
it from the pci_get_drvdata.
> +
> +	vdev = container_of(device, struct vfio_pci_core_device, vdev);
> +
> +	mutex_lock(&vdev->igate);
> +	if (vdev->ops && vdev->ops->reset_done)
> +		vdev->ops->reset_done(vdev);
> +	mutex_unlock(&vdev->igate);
> +
> +	vfio_device_put(device);
> +
> +	return;
> +}
> +
> int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn)
> {
> 	struct vfio_device *device;
> @@ -1947,6 +1968,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_sriov_configure);
>
> const struct pci_error_handlers vfio_pci_core_err_handlers = {
> 	.error_detected = vfio_pci_aer_err_detected,
> +	.reset_done = vfio_pci_aer_err_reset_done,
> };
> EXPORT_SYMBOL_GPL(vfio_pci_core_err_handlers);

Most likely mlx5vf should just implement a pci_error_handlers struct
and install vfio_pci_aer_err_detected in it.

Jason
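A minimal sketch of the drvdata-based lookup Jason is asking for, assuming the variant driver's probe has already done dev_set_drvdata(&pdev->dev, vdev) (or pci_set_drvdata()) before registering the device, so no extra reference counting is needed here:

static void vfio_pci_aer_err_reset_done(struct pci_dev *pdev)
{
	struct vfio_pci_core_device *vdev = dev_get_drvdata(&pdev->dev);

	if (!vdev)
		return;

	mutex_lock(&vdev->igate);
	if (vdev->ops && vdev->ops->reset_done)
		vdev->ops->reset_done(vdev);
	mutex_unlock(&vdev->igate);
}

Whether igate is the right lock to hold here is exactly the open question Yishai raised above; the sketch just mirrors the proposed patch.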
On 10/18/2021 3:02 PM, Jason Gunthorpe wrote:
> On Sun, Oct 17, 2021 at 05:29:39PM +0300, Yishai Hadas wrote:
>> On 10/16/2021 12:12 AM, Alex Williamson wrote:
>>> On Fri, 15 Oct 2021 17:03:28 -0300
>>> Jason Gunthorpe <jgg@nvidia.com> wrote:
>>>
>>>> On Fri, Oct 15, 2021 at 01:52:37PM -0600, Alex Williamson wrote:
>>>>> On Wed, 13 Oct 2021 12:47:06 +0300
>>>>> Yishai Hadas <yishaih@nvidia.com> wrote:
>>>>>> Add infrastructure to let vfio_pci_core drivers trap device RESET.
>>>>>>
>>>>>> The motivation for this is to let the underlay driver be aware that
>>>>>> reset was done and set its internal state accordingly.
>>>>> I think the intention of the uAPI here is that the migration error
>>>>> state is exited specifically via the reset ioctl. Maybe that should be
>>>>> made more clear, but variant drivers can already wrap the core ioctl
>>>>> for the purpose of determining that mechanism of reset has occurred.
>>>> It is not just recovering the error state.
>>>>
>>>> Any transition to reset changes the firmware state. Eg if userspace
>>>> uses one of the other emulation paths to trigger the reset after
>>>> putting the device off running then the driver state and FW state
>>>> become desynchronized.
>>>>
>>>> So all the reset paths need to be synchronized somehow, either
>>>> blocked while in non-running states or aligning the SW state with the
>>>> new post-reset FW state.
>>> This only catches the two flavors of FLR and the RESET ioctl itself, so
>>> we've got gaps relative to "all the reset paths" anyway. I'm also
>>> concerned about adding arbitrary callbacks for every case that it gets
>>> too cumbersome to write a wrapper for the existing callbacks.
>>>
>>> However, why is this a vfio thing when we have the
>>> pci_error_handlers.reset_done callback? At best this ought to be
>>> redundant to that. Thanks,
>>>
>>> Alex
>>>
>> Alex,
>>
>> How about the below patch instead?
>>
>> This will centralize the 'reset_done' notifications for drivers to one place
>> (i.e. pci_error_handlers.reset_done) and may close the gap that you pointed
>> out.
>>
>> I just followed the logic in vfio_pci_aer_err_detected() from usage and
>> locking point of view.
>>
>> Do we really need to take the &vdev->igate mutex as was done there?
>>
>> The next patch from the series in mlx5 will stay as in V1; it may just
>> set its ops and be called upon PCI 'reset_done'.
>>
>>
>> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
>> index e581a327f90d..20bf37c00fb6 100644
>> +++ b/drivers/vfio/pci/vfio_pci_core.c
>> @@ -1925,6 +1925,27 @@ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
>> 	return PCI_ERS_RESULT_CAN_RECOVER;
>> }
>>
>> +static void vfio_pci_aer_err_reset_done(struct pci_dev *pdev)
>> +{
>> +	struct vfio_pci_core_device *vdev;
>> +	struct vfio_device *device;
>> +
>> +	device = vfio_device_get_from_dev(&pdev->dev);
>> +	if (device == NULL)
>> +		return;
> Do not add new vfio_device_get_from_dev() calls, this should extract
> it from the pci_get_drvdata.
>
>> +
>> +	vdev = container_of(device, struct vfio_pci_core_device, vdev);
>> +
>> +	mutex_lock(&vdev->igate);
>> +	if (vdev->ops && vdev->ops->reset_done)
>> +		vdev->ops->reset_done(vdev);
>> +	mutex_unlock(&vdev->igate);
>> +
>> +	vfio_device_put(device);
>> +
>> +	return;
>> +}
>> +
>> int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn)
>> {
>> 	struct vfio_device *device;
>> @@ -1947,6 +1968,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_sriov_configure);
>>
>> const struct pci_error_handlers vfio_pci_core_err_handlers = {
>> 	.error_detected = vfio_pci_aer_err_detected,
>> +	.reset_done = vfio_pci_aer_err_reset_done,
>> };
>> EXPORT_SYMBOL_GPL(vfio_pci_core_err_handlers);
> Most likely mlx5vf should just implement a pci_error_handlers struct
> and install vfio_pci_aer_err_detected in it.
>
> Jason

This can work as well. It may clean up the need to set an extra ops on
vfio_pci_core_device; the reset will go directly to the mlx5 driver.

I plan to follow that in the coming V2.

Yishai
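The direction agreed here, sketched: the mlx5 variant driver installs its own pci_error_handlers, reuses the core's AER error_detected handler (which would need to be exported from vfio_pci_core), and receives reset_done directly from the PCI core. It assumes probe stores the wrapper via dev_set_drvdata(); the mlx5vf_* names are illustrative and the actual V2 may differ.

static void mlx5vf_pci_aer_reset_done(struct pci_dev *pdev)
{
	struct mlx5vf_pci_core_device *mvdev = dev_get_drvdata(&pdev->dev);

	/* hypothetical: resync migration/FW state after the reset */
	mlx5vf_reset_done(mvdev);
}

static const struct pci_error_handlers mlx5vf_err_handlers = {
	.reset_done = mlx5vf_pci_aer_reset_done,
	/* assumes vfio_pci_core exports its error_detected handler */
	.error_detected = vfio_pci_aer_err_detected,
};

static struct pci_driver mlx5vf_pci_driver = {
	.name = "mlx5-vfio-pci",
	/* id_table, probe, remove, etc. omitted */
	.err_handler = &mlx5vf_err_handlers,
};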
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 6e58b4bf7a60..002198376f43 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -859,7 +859,9 @@ static int vfio_exp_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 	if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-		pci_try_reset_function(vdev->pdev);
+		ret = pci_try_reset_function(vdev->pdev);
+		if (!ret && vdev->ops && vdev->ops->reset_done)
+			vdev->ops->reset_done(vdev);
 		up_write(&vdev->memory_lock);
 	}
 }
@@ -941,7 +943,9 @@ static int vfio_af_config_write(struct vfio_pci_core_device *vdev, int pos,
 
 	if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
-		pci_try_reset_function(vdev->pdev);
+		ret = pci_try_reset_function(vdev->pdev);
+		if (!ret && vdev->ops && vdev->ops->reset_done)
+			vdev->ops->reset_done(vdev);
 		up_write(&vdev->memory_lock);
 	}
 }
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index e581a327f90d..d2497a8ed7f1 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -923,6 +923,8 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
 
 		vfio_pci_zap_and_down_write_memory_lock(vdev);
 		ret = pci_try_reset_function(vdev->pdev);
+		if (!ret && vdev->ops && vdev->ops->reset_done)
+			vdev->ops->reset_done(vdev);
 		up_write(&vdev->memory_lock);
 
 		return ret;
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index ef9a44b6cf5d..6ccf5824f098 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -95,6 +95,15 @@ struct vfio_pci_mmap_vma {
 	struct list_head vma_next;
 };
 
+/**
+ * struct vfio_pci_core_device_ops - VFIO PCI driver device callbacks
+ *
+ * @reset_done: Called when the device was reset
+ */
+struct vfio_pci_core_device_ops {
+	void (*reset_done)(struct vfio_pci_core_device *vdev);
+};
+
 struct vfio_pci_core_device {
 	struct vfio_device vdev;
 	struct pci_dev *pdev;
@@ -137,6 +146,7 @@ struct vfio_pci_core_device {
 	struct mutex vma_lock;
 	struct list_head vma_list;
 	struct rw_semaphore memory_lock;
+	const struct vfio_pci_core_device_ops *ops;
 };
 
 #define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)