From patchwork Wed Sep 9 12:21:33 2015
X-Patchwork-Submitter: David Herrmann
X-Patchwork-Id: 7146131
From: David Herrmann
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 5/5] drm: allocate kernel mode-object IDs in cyclic fashion
Date: Wed, 9 Sep 2015 14:21:33 +0200
Message-Id: <1441801293-1440-6-git-send-email-dh.herrmann@gmail.com>
In-Reply-To: <1441801293-1440-1-git-send-email-dh.herrmann@gmail.com>
References: <1441801293-1440-1-git-send-email-dh.herrmann@gmail.com>
List-Id: Direct Rendering Infrastructure - Development

Now that we support connector hotplugging, user-space might see mode-object
IDs coming and going asynchronously. Therefore, we must make sure not to
re-use object IDs, so as not to confuse user-space or introduce races. All
kernel-allocated objects will therefore no longer recycle IDs. Instead, we
use the cyclic idr allocator (which still performs fine for reasonable
allocation schemes).

However, for user-space allocated objects like framebuffers, we don't want
to risk allowing malicious users to screw with the ID space. Furthermore,
those objects happen to not be subject to ID hotplug races, as they're
allocated and freed explicitly.
Hence, we still recycle IDs for these objects (which are just framebuffers
so far).

For atomic mode-setting, objects are looked up by the kernel without
specifying an object type. Hence, there is a theoretical race where a
framebuffer recycles a previous connector ID. However, user-allocated
objects are never returned by drm_mode_object_find() (as they need separate
ref-count handling), so this race cannot happen with the currently
available objects. Even if we add ref-counting to other objects, all we
need to ensure is that user-controlled objects are never looked up through
the same type-less path as kernel-controlled objects. That is highly
unlikely to ever happen, so we should be safe.

Signed-off-by: David Herrmann
---
 drivers/gpu/drm/drm_crtc.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 33d877c..fd8a2e2 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -269,18 +269,42 @@ const char *drm_get_format_name(uint32_t format)
 EXPORT_SYMBOL(drm_get_format_name);
 
 /*
+ * Flags for drm_mode_object_get_reg():
+ * DRM_MODE_OBJECT_ID_UNLINKED: Allocate the object ID, but do not store the
+ *                              object pointer. Hence, the object is not
+ *                              registered but needs to be inserted manually.
+ *                              This must be used for hotplugged objects.
+ * DRM_MODE_OBJECT_ID_RECYCLE: Allow recycling previously allocated IDs. If
+ *                             not set, the ID space is allocated in a cyclic
+ *                             fashion. This should be the default for all
+ *                             kernel allocated objects, to not confuse
+ *                             user-space on hotplug. This must not be used
+ *                             for user-allocated objects, though.
+ */
+enum {
+	DRM_MODE_OBJECT_ID_UNLINKED = (1U << 0),
+	DRM_MODE_OBJECT_ID_RECYCLE = (1U << 1),
+};
+
+/*
  * Internal function to assign a slot in the object idr and optionally
  * register the object into the idr.
  */
 static int drm_mode_object_get_reg(struct drm_device *dev,
 				   struct drm_mode_object *obj,
 				   uint32_t obj_type,
-				   bool register_obj)
+				   unsigned int flags)
 {
+	void *ptr = (flags & DRM_MODE_OBJECT_ID_UNLINKED) ? NULL : obj;
 	int ret;
 
 	mutex_lock(&dev->mode_config.idr_mutex);
-	ret = idr_alloc(&dev->mode_config.crtc_idr, register_obj ? obj : NULL, 1, 0, GFP_KERNEL);
+	if (flags & DRM_MODE_OBJECT_ID_RECYCLE)
+		ret = idr_alloc(&dev->mode_config.crtc_idr, ptr, 1, 0,
+				GFP_KERNEL);
+	else
+		ret = idr_alloc_cyclic(&dev->mode_config.crtc_idr, ptr, 1, 0,
+				       GFP_KERNEL);
 	if (ret >= 0) {
 		/*
 		 * Set up the object linking under the protection of the idr
@@ -312,7 +336,7 @@ static int drm_mode_object_get_reg(struct drm_device *dev,
 int drm_mode_object_get(struct drm_device *dev,
 			struct drm_mode_object *obj, uint32_t obj_type)
 {
-	return drm_mode_object_get_reg(dev, obj, obj_type, true);
+	return drm_mode_object_get_reg(dev, obj, obj_type, 0);
 }
 
 static void drm_mode_object_register(struct drm_device *dev,
@@ -414,7 +438,8 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
 	fb->dev = dev;
 	fb->funcs = funcs;
 
-	ret = drm_mode_object_get(dev, &fb->base, DRM_MODE_OBJECT_FB);
+	ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
+				      DRM_MODE_OBJECT_ID_RECYCLE);
 	if (ret)
 		goto out;
 
@@ -872,7 +897,9 @@ int drm_connector_init(struct drm_device *dev,
 
 	drm_modeset_lock_all(dev);
 
-	ret = drm_mode_object_get_reg(dev, &connector->base, DRM_MODE_OBJECT_CONNECTOR, false);
+	ret = drm_mode_object_get_reg(dev, &connector->base,
+				      DRM_MODE_OBJECT_CONNECTOR,
+				      DRM_MODE_OBJECT_ID_UNLINKED);
 	if (ret)
 		goto out_unlock;