From patchwork Mon Jan 25 21:16:43 2016
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 8115561
From: Daniel Vetter
To: DRI Development, Intel Graphics Development
Cc: Alex Deucher, Daniel Vetter, Laurent Pinchart
Subject: [PATCH 02/15] drm: Clean up pending events in the core
Date: Mon, 25 Jan 2016 22:16:43 +0100
Message-Id: <1453756616-28942-2-git-send-email-daniel.vetter@ffwll.ch>
In-Reply-To: <1453756616-28942-1-git-send-email-daniel.vetter@ffwll.ch>
References: <1453756616-28942-1-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.7.0.rc3

There's really no reason not to do this in the core, instead of
replicating it for every use-case and every driver. We can't just nuke
the events outright, though, since then all drm_event users would need
to know when that has happened, because calling e.g. drm_send_event
isn't allowed any more at that point. Instead just unlink them from the
file on close, and detect this case and handle it appropriately in all
functions.

v2: Adjust existing kerneldoc too.

v3: Improve wording of the kerneldoc and split out vblank cleanup (Laurent).

Cc: Alex Deucher
Cc: Laurent Pinchart
Acked-by: Daniel Stone
Reviewed-by: Alex Deucher (v1)
Link: http://patchwork.freedesktop.org/patch/msgid/1452548477-15905-10-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Laurent Pinchart
Signed-off-by: Daniel Vetter
---
 drivers/gpu/drm/drm_fops.c | 30 +++++++++++++++++++++++++++++-
 include/drm/drmP.h         |  2 ++
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index eb6a02f78697..afe8c53e5aad 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -264,6 +264,7 @@ static int drm_open_helper(struct file *filp, struct drm_minor *minor)
 	INIT_LIST_HEAD(&priv->fbs);
 	mutex_init(&priv->fbs_lock);
 	INIT_LIST_HEAD(&priv->blobs);
+	INIT_LIST_HEAD(&priv->pending_event_list);
 	INIT_LIST_HEAD(&priv->event_list);
 	init_waitqueue_head(&priv->event_wait);
 	priv->event_space = 4096; /* set aside 4k for event buffer */
@@ -366,6 +367,13 @@ static void drm_events_release(struct drm_file *file_priv)
 			v->base.destroy(&v->base);
 		}
 
+	/* Unlink pending events */
+	list_for_each_entry_safe(e, et, &file_priv->pending_event_list,
+				 pending_link) {
+		list_del(&e->pending_link);
+		e->file_priv = NULL;
+	}
+
 	/* Remove unconsumed events */
 	list_for_each_entry_safe(e, et, &file_priv->event_list, link) {
 		list_del(&e->link);
@@ -712,6 +720,7 @@ int drm_event_reserve_init_locked(struct drm_device *dev,
 	file_priv->event_space -= e->length;
 
 	p->event = e;
+	list_add(&p->pending_link, &file_priv->pending_event_list);
 	p->file_priv = file_priv;
 
 	/* we *could* pass this in as arg, but everyone uses kfree: */
@@ -774,7 +783,10 @@ void drm_event_cancel_free(struct drm_device *dev,
 {
 	unsigned long flags;
 	spin_lock_irqsave(&dev->event_lock, flags);
-	p->file_priv->event_space += p->event->length;
+	if (p->file_priv) {
+		p->file_priv->event_space += p->event->length;
+		list_del(&p->pending_link);
+	}
 	spin_unlock_irqrestore(&dev->event_lock, flags);
 	p->destroy(p);
 }
@@ -788,11 +800,22 @@ EXPORT_SYMBOL(drm_event_cancel_free);
  * This function sends the event @e, initialized with drm_event_reserve_init(),
  * to its associated userspace DRM file. Callers must already hold
  * dev->event_lock, see drm_send_event() for the unlocked version.
+ *
+ * Note that the core will take care of unlinking and disarming events when the
+ * corresponding DRM file is closed. Drivers need not worry about whether the
+ * DRM file for this event still exists and can call this function upon
+ * completion of the asynchronous work unconditionally.
  */
 void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
 {
 	assert_spin_locked(&dev->event_lock);
 
+	if (!e->file_priv) {
+		e->destroy(e);
+		return;
+	}
+
+	list_del(&e->pending_link);
 	list_add_tail(&e->link,
 		      &e->file_priv->event_list);
 	wake_up_interruptible(&e->file_priv->event_wait);
@@ -807,6 +830,11 @@ EXPORT_SYMBOL(drm_send_event_locked);
  * This function sends the event @e, initialized with drm_event_reserve_init(),
  * to its associated userspace DRM file. This function acquires dev->event_lock,
  * see drm_send_event_locked() for callers which already hold this lock.
+ *
+ * Note that the core will take care of unlinking and disarming events when the
+ * corresponding DRM file is closed. Drivers need not worry about whether the
+ * DRM file for this event still exists and can call this function upon
+ * completion of the asynchronous work unconditionally.
  */
 void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
 {
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 1b71852d0a55..3c8422c69572 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -283,6 +283,7 @@ struct drm_ioctl_desc {
 struct drm_pending_event {
 	struct drm_event *event;
 	struct list_head link;
+	struct list_head pending_link;
 	struct drm_file *file_priv;
 	pid_t pid; /* pid of requester, no guarantee it's valid by the time
 		      we deliver the event, for tracing only */
@@ -346,6 +347,7 @@ struct drm_file {
 	struct list_head blobs;
 
 	wait_queue_head_t event_wait;
+	struct list_head pending_event_list;
 	struct list_head event_list;
 	int event_space;
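
For illustration, here is a minimal driver-side sketch of how the
reserve/send helpers are meant to be used once the core tracks pending
events. This is not part of the patch: the foo_* names and the event
type value are made up, while drm_event_reserve_init(), drm_send_event()
and struct drm_pending_event are the interfaces touched by this series.
The driver reserves the event against the DRM file, hands it to some
asynchronous work, and sends it on completion without having to check
whether the file is still open:

/* Hypothetical driver code, for illustration only. */
#include <linux/slab.h>
#include <drm/drmP.h>

struct foo_async_event {
	struct drm_pending_event base;	/* first member, so the core's
					 * default kfree() destroy frees
					 * the whole struct */
	struct drm_event event;		/* payload copied to userspace */
};

static int foo_queue_async(struct drm_device *dev, struct drm_file *file_priv)
{
	struct foo_async_event *e;
	int ret;

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e)
		return -ENOMEM;

	e->event.type = 0x80000001;	/* made-up driver-private type */
	e->event.length = sizeof(e->event);

	/* Accounts the event against file_priv->event_space and links it
	 * onto file_priv->pending_event_list. */
	ret = drm_event_reserve_init(dev, file_priv, &e->base, &e->event);
	if (ret) {
		kfree(e);
		return ret;
	}

	/* ... hand &e->base over to the asynchronous work here ... */

	return 0;
}

static void foo_async_complete(struct drm_device *dev,
			       struct foo_async_event *e)
{
	/*
	 * Safe even if the DRM file was closed in the meantime: the core
	 * cleared e->base.file_priv on close, and drm_send_event() then
	 * simply destroys the event instead of queueing it.
	 */
	drm_send_event(dev, &e->base);
}

If the asynchronous work is aborted before delivery, drm_event_cancel_free()
is the matching cleanup path, and with this patch it likewise copes with an
already-closed file.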