Message ID | 20200113153605.52350-2-brian@brkho.com (mailing list archive) |
---|---|
State | Superseded |
Series | drm/msm: Add the MSM_WAIT_IOVA ioctl |
On Mon, Jan 13, 2020 at 10:36:04AM -0500, Brian Ho wrote:
> This wait queue is signaled on all IRQs for a given GPU and will be
> used as part of the new MSM_WAIT_IOVA ioctl so userspace can sleep
> until the value at a given iova reaches a certain condition.
>
> Signed-off-by: Brian Ho <brian@brkho.com>
> ---
>  drivers/gpu/drm/msm/msm_gpu.c | 4 ++++
>  drivers/gpu/drm/msm/msm_gpu.h | 3 +++
>  2 files changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> index a052364a5d74..d7310c1336e5 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.c
> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> @@ -779,6 +779,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
>  static irqreturn_t irq_handler(int irq, void *data)
>  {
>  	struct msm_gpu *gpu = data;
> +	wake_up_all(&gpu->event);
> +

I suppose it is intentional to have this happen on *all* interrupts because you
might be using the CP interrupts for fun and profit and you don't want to plumb
in callbacks? I suppose it is okay to do this for all interrupts (including
errors) but if we're spending a lot of time here we might want to only trigger
on certain IRQs.

>  	return gpu->funcs->irq(gpu);
>  }
>
> @@ -871,6 +873,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
>
>  	spin_lock_init(&gpu->perf_lock);
>
> +	init_waitqueue_head(&gpu->event);
> +
>
>  	/* Map registers: */
>  	gpu->mmio = msm_ioremap(pdev, config->ioname, name);
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index ab8f0f9c9dc8..60562f065dbc 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -104,6 +104,9 @@ struct msm_gpu {
>
>  	struct msm_gem_address_space *aspace;
>
> +	/* GPU-wide wait queue that is signaled on all IRQs */
> +	wait_queue_head_t event;
> +
>  	/* Power Control: */
>  	struct regulator *gpu_reg, *gpu_cx;
>  	struct clk_bulk_data *grp_clks;
> --
> 2.25.0.rc1.283.g88dfdc4193-goog
>
> _______________________________________________
> Freedreno mailing list
> Freedreno@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/freedreno
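To illustrate the alternative raised above (only triggering on certain IRQs), here is a hypothetical variant of irq_handler() that wakes waiters only when the GPU-specific handler actually claims the interrupt. The IRQ_HANDLED gating is an assumption made for illustration; the posted patch deliberately wakes on every interrupt.

```c
#include <linux/interrupt.h>
#include <linux/wait.h>

#include "msm_gpu.h"

/*
 * Hypothetical sketch, not part of the patch: call the per-GPU handler
 * first and only wake gpu->event for interrupts it recognized, rather
 * than waking unconditionally on every IRQ.
 */
static irqreturn_t irq_handler(int irq, void *data)
{
	struct msm_gpu *gpu = data;
	irqreturn_t ret = gpu->funcs->irq(gpu);

	if (ret == IRQ_HANDLED)
		wake_up_all(&gpu->event);

	return ret;
}
```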
On Mon, Jan 13, 2020 at 9:55 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>
> On Mon, Jan 13, 2020 at 10:36:04AM -0500, Brian Ho wrote:
> > This wait queue is signaled on all IRQs for a given GPU and will be
> > used as part of the new MSM_WAIT_IOVA ioctl so userspace can sleep
> > until the value at a given iova reaches a certain condition.
> >
> > Signed-off-by: Brian Ho <brian@brkho.com>
> > ---
> >  drivers/gpu/drm/msm/msm_gpu.c | 4 ++++
> >  drivers/gpu/drm/msm/msm_gpu.h | 3 +++
> >  2 files changed, 7 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> > index a052364a5d74..d7310c1336e5 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu.c
> > @@ -779,6 +779,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
> >  static irqreturn_t irq_handler(int irq, void *data)
> >  {
> >  	struct msm_gpu *gpu = data;
> > +	wake_up_all(&gpu->event);
> > +
>
> I suppose it is intentional to have this happen on *all* interrupts because you
> might be using the CP interrupts for fun and profit and you don't want to plumb
> in callbacks? I suppose it is okay to do this for all interrupts (including
> errors) but if we're spending a lot of time here we might want to only trigger
> on certain IRQs.

Was just talking to Kristian about GPU hangs.. and I suspect we might
want the ioctl to return an error if there is a gpu reset (so that
userspace can use the robustness uapi to test if the gpu reset was
something it cares about, etc)

Which is as good as a reason as I can think of for the wake_up_all() on
all irqs..

BR,
-R

> >  	return gpu->funcs->irq(gpu);
> >  }
> >
> > @@ -871,6 +873,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
> >
> >  	spin_lock_init(&gpu->perf_lock);
> >
> > +	init_waitqueue_head(&gpu->event);
> > +
> >
> >  	/* Map registers: */
> >  	gpu->mmio = msm_ioremap(pdev, config->ioname, name);
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> > index ab8f0f9c9dc8..60562f065dbc 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.h
> > +++ b/drivers/gpu/drm/msm/msm_gpu.h
> > @@ -104,6 +104,9 @@ struct msm_gpu {
> >
> >  	struct msm_gem_address_space *aspace;
> >
> > +	/* GPU-wide wait queue that is signaled on all IRQs */
> > +	wait_queue_head_t event;
> > +
> >  	/* Power Control: */
> >  	struct regulator *gpu_reg, *gpu_cx;
> >  	struct clk_bulk_data *grp_clks;
> > --
> > 2.25.0.rc1.283.g88dfdc4193-goog
> >
> > _______________________________________________
> > Freedreno mailing list
> > Freedreno@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/freedreno
>
> --
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
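A rough sketch of the wait loop Rob describes, under stated assumptions: `nr_resets` is an assumed per-GPU reset counter bumped by the recovery path (it does not exist in msm_gpu here), and `vaddr` is an assumed kernel mapping of the buffer backing the iova. If the GPU is reset while a caller sleeps on gpu->event, the wait returns an error so userspace can fall back to the robustness uapi instead of blocking forever.

```c
#include <linux/errno.h>
#include <linux/wait.h>

#include "msm_gpu.h"

/*
 * Hypothetical helper for the eventual MSM_WAIT_IOVA path.  Returns 0
 * once the value behind vaddr reaches ref, -EIO if the assumed reset
 * counter changed while we slept, -ETIMEDOUT on timeout, or
 * -ERESTARTSYS if interrupted by a signal.
 */
static int wait_iova_sketch(struct msm_gpu *gpu, u32 *vaddr, u32 ref,
			    unsigned long timeout_jiffies)
{
	u32 resets = READ_ONCE(gpu->nr_resets);	/* assumed field */
	long remaining;

	remaining = wait_event_interruptible_timeout(gpu->event,
			READ_ONCE(*vaddr) >= ref ||
			READ_ONCE(gpu->nr_resets) != resets,
			timeout_jiffies);

	if (READ_ONCE(gpu->nr_resets) != resets)
		return -EIO;		/* GPU was reset while we waited */
	if (remaining == 0)
		return -ETIMEDOUT;
	if (remaining < 0)
		return remaining;	/* -ERESTARTSYS */

	return 0;
}
```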
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index a052364a5d74..d7310c1336e5 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -779,6 +779,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
 static irqreturn_t irq_handler(int irq, void *data)
 {
 	struct msm_gpu *gpu = data;
+	wake_up_all(&gpu->event);
+
 	return gpu->funcs->irq(gpu);
 }
 
@@ -871,6 +873,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 
 	spin_lock_init(&gpu->perf_lock);
 
+	init_waitqueue_head(&gpu->event);
+
 
 	/* Map registers: */
 	gpu->mmio = msm_ioremap(pdev, config->ioname, name);
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index ab8f0f9c9dc8..60562f065dbc 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -104,6 +104,9 @@ struct msm_gpu {
 
 	struct msm_gem_address_space *aspace;
 
+	/* GPU-wide wait queue that is signaled on all IRQs */
+	wait_queue_head_t event;
+
 	/* Power Control: */
 	struct regulator *gpu_reg, *gpu_cx;
 	struct clk_bulk_data *grp_clks;
This wait queue is signaled on all IRQs for a given GPU and will be
used as part of the new MSM_WAIT_IOVA ioctl so userspace can sleep
until the value at a given iova reaches a certain condition.

Signed-off-by: Brian Ho <brian@brkho.com>
---
 drivers/gpu/drm/msm/msm_gpu.c | 4 ++++
 drivers/gpu/drm/msm/msm_gpu.h | 3 +++
 2 files changed, 7 insertions(+)
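As context for how this queue gets consumed, a minimal sketch of a waiter follows; the helper name and the `vaddr` mapping are assumed for illustration, since the actual ioctl arrives later in the series. Each wake_up_all() from irq_handler() simply re-evaluates the waiter's condition, so signaling on every IRQ is safe and at worst costs a few spurious condition checks.

```c
#include <linux/wait.h>

#include "msm_gpu.h"

/*
 * Minimal assumed consumer of gpu->event: sleep until the 32-bit value
 * behind vaddr reaches ref.  Spurious wake-ups are harmless because the
 * condition is re-checked on every wake.
 */
static int wait_for_value(struct msm_gpu *gpu, u32 *vaddr, u32 ref)
{
	return wait_event_interruptible(gpu->event, READ_ONCE(*vaddr) >= ref);
}
```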