drm/i915/reset_stats: Only allow root to read reset_stats

Message ID 20180116160935.20021-1-antonio.argenziano@intel.com (mailing list archive)
State New, archived

Commit Message

Antonio Argenziano Jan. 16, 2018, 4:09 p.m. UTC
Instead of returning a zero reset count to non-root users, return an
EPERM error.

Signed-off-by: Antonio Argenziano <antonio.argenziano@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michel Thierry <michel.thierry@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c         | 2 +-
 drivers/gpu/drm/i915/i915_gem_context.c | 9 ++++-----
 2 files changed, 5 insertions(+), 6 deletions(-)
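
For illustration (not part of the patch), a minimal userspace sketch of
how a client would observe the new behaviour, assuming it is built
against libdrm (drmIoctl()) and the drm_i915_reset_stats uapi struct
from include/uapi/drm/i915_drm.h:

#include <errno.h>
#include <stdio.h>
#include <string.h>

#include <xf86drm.h>
#include <i915_drm.h>

static int query_reset_stats(int fd, __u32 ctx_id)
{
	struct drm_i915_reset_stats stats;

	memset(&stats, 0, sizeof(stats));
	stats.ctx_id = ctx_id;

	if (drmIoctl(fd, DRM_IOCTL_I915_GET_RESET_STATS, &stats)) {
		/* With this patch, non-root callers land here with errno == EPERM. */
		fprintf(stderr, "GET_RESET_STATS: %s\n", strerror(errno));
		return -errno;
	}

	printf("reset_count=%u batch_active=%u batch_pending=%u\n",
	       stats.reset_count, stats.batch_active, stats.batch_pending);
	return 0;
}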

Comments

Chris Wilson Jan. 16, 2018, 5:27 p.m. UTC | #1
Quoting Antonio Argenziano (2018-01-16 16:09:35)
> Instead of returning a zero reset count to non-root users, return an
> EPERM error.

What? reset-stats are per-context, private to the client, with the
exception of the global value which is solely protected by capable().
There is a genuine debate over how much information about a guilty
reset in one context should leak to an innocent one, and we have tried
to limit that to only those directly affected and with a requirement to
know (i.e. they were executing at the time of the reset and their
calculations will have been restarted with perturbed state).
-Chris
Antonio Argenziano Jan. 16, 2018, 6:35 p.m. UTC | #2
On 16/01/18 09:27, Chris Wilson wrote:
> Quoting Antonio Argenziano (2018-01-16 16:09:35)
>> Instead of returning a zero reset count to non-root users, return an
>> EPERM error.
> 
> What? reset-stats are per-context, private to the client, with the
> exception of the global value which is solely protected by capable().

Completely missed that, thanks. Will ping the change to gem_reset_stats.

-Antonio

> There is a genuine debate over how much information about a guilty
> reset in one context should leak to an innocent one, and we have tried
> to limit that to only those directly affected and with a requirement to
> know (i.e. they were executing at the time of the reset and their
> calculations will have been restarted with perturbed state).
> -Chris
>
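
As a side note on the per-context vs. global split described above, a
hypothetical sketch of the userspace view (assuming libdrm and two
independent DRM file descriptors fd_a and fd_b, with ctx_on_a created
on fd_a); only the global reset_count is gated by capable(CAP_SYS_ADMIN),
while the per-context counts are simply invisible to other clients:

#include <errno.h>
#include <string.h>

#include <xf86drm.h>
#include <i915_drm.h>

static int stats_for(int fd, __u32 ctx_id, struct drm_i915_reset_stats *st)
{
	memset(st, 0, sizeof(*st));
	st->ctx_id = ctx_id;
	return drmIoctl(fd, DRM_IOCTL_I915_GET_RESET_STATS, st) ? -errno : 0;
}

/*
 * stats_for(fd_a, ctx_on_a, &st) -> 0, st holds that context's own
 *                                   guilty/pending counts
 * stats_for(fd_b, ctx_on_a, &st) -> -ENOENT, contexts are looked up per
 *                                   file and stay private to the client
 */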

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 6c8da9d20c33..23fb4c61163c 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2817,7 +2817,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_gem_context_reset_stats_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_gem_context_reset_stats_ioctl, DRM_RENDER_ALLOW|DRM_ROOT_ONLY),
 	DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_GETPARAM, i915_gem_context_getparam_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_context_setparam_ioctl, DRM_RENDER_ALLOW),
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 648e7536ff51..e6872bd8b754 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -842,6 +842,9 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 	if (args->flags || args->pad)
 		return -EINVAL;
 
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
 	ret = -ENOENT;
 	rcu_read_lock();
 	ctx = __i915_gem_context_lookup_rcu(file->driver_priv, args->ctx_id);
@@ -855,11 +858,7 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
 	 * we should wrap the hangstats with a seqlock.
 	 */
 
-	if (capable(CAP_SYS_ADMIN))
-		args->reset_count = i915_reset_count(&dev_priv->gpu_error);
-	else
-		args->reset_count = 0;
-
+	args->reset_count = i915_reset_count(&dev_priv->gpu_error);
 	args->batch_active = atomic_read(&ctx->guilty_count);
 	args->batch_pending = atomic_read(&ctx->active_count);