From patchwork Wed Nov 14 09:37:46 2018
X-Patchwork-Submitter: Sakari Ailus
X-Patchwork-Id: 10682285
From: Sakari Ailus
To: Greg Kroah-Hartman
Cc: Dave Stevenson, Hans Verkuil, mchehab@kernel.org, linux-media@vger.kernel.org, Sasha Levin
Subject: [PATCH v2 for v4.4 1/1] v4l: event: Add subscription to list before calling "add" operation
Date: Wed, 14 Nov 2018 11:37:46 +0200
Message-Id: <20181114093746.29035-1-sakari.ailus@linux.intel.com>
X-Mailer: git-send-email 2.11.0
X-Mailing-List: linux-media@vger.kernel.org

[ upstream commit 92539d3eda2c090b382699bbb896d4b54e9bdece ]

Patch ad608fbcf166 changed how events were subscribed to address an issue
elsewhere. As a side effect of that change, the "add" callback was called
before the event subscription was added to the list of subscribed events,
causing the first event queued by the add callback (and possibly other
events arriving soon afterwards) to be lost.

Fix this by adding the subscription to the list before calling the "add"
callback, and clean up afterwards if that fails.
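To make the ordering issue concrete, below is a minimal standalone C model of
the mechanism described above (not kernel code; names such as model_fh,
queue_event and add_callback are invented for illustration). Delivery only
reaches subscriptions that are already on the subscribed list, so an event
queued from the "add" callback is silently dropped if the subscription has not
been linked into the list yet.

/*
 * Standalone userspace model of the ordering bug -- not kernel code.
 * All identifiers here are made up for illustration only.
 */
#include <stdio.h>
#include <stdbool.h>

struct model_sev {
	struct model_sev *next;		/* singly linked "subscribed" list */
	int events_received;
};

struct model_fh {
	struct model_sev *subscribed;	/* head of the subscribed list */
};

/* Deliver an event: find the subscription on the list, otherwise drop it. */
static void queue_event(struct model_fh *fh, struct model_sev *target)
{
	for (struct model_sev *sev = fh->subscribed; sev; sev = sev->next) {
		if (sev == target) {
			sev->events_received++;
			return;
		}
	}
	printf("event dropped: subscription not on the list yet\n");
}

/* The "add" callback queues an initial event, as some drivers do. */
static void add_callback(struct model_fh *fh, struct model_sev *sev)
{
	queue_event(fh, sev);
}

static void subscribe(struct model_fh *fh, struct model_sev *sev, bool list_first)
{
	if (list_first) {			/* ordering after this patch */
		sev->next = fh->subscribed;
		fh->subscribed = sev;
		add_callback(fh, sev);
	} else {				/* ordering before this patch */
		add_callback(fh, sev);		/* event queued here is lost */
		sev->next = fh->subscribed;
		fh->subscribed = sev;
	}
}

int main(void)
{
	struct model_fh fh = { 0 };
	struct model_sev before = { 0 }, after = { 0 };

	subscribe(&fh, &before, false);
	subscribe(&fh, &after, true);

	printf("add-before-list: %d event(s), list-before-add: %d event(s)\n",
	       before.events_received, after.events_received);
	return 0;
}

Built with any C99 compiler, the first subscribe() call loses its initial
event while the second receives it, which mirrors the behaviour this patch
fixes.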
Fixes: ad608fbcf166 ("media: v4l: event: Prevent freeing event subscriptions while accessed")
Reported-by: Dave Stevenson
Signed-off-by: Sakari Ailus
Tested-by: Dave Stevenson
Reviewed-by: Hans Verkuil
Tested-by: Hans Verkuil
Signed-off-by: Mauro Carvalho Chehab
[Sakari Ailus: Backported to v4.4 stable]
Signed-off-by: Sakari Ailus
---
since v1 (as requested by Sasha):

- Add my final SoB
- Indicate specifically this is a backport
- Remove the extra cc stable

 drivers/media/v4l2-core/v4l2-event.c | 43 ++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/drivers/media/v4l2-core/v4l2-event.c b/drivers/media/v4l2-core/v4l2-event.c
index b47ac4e053d0e..f5c8a952f0aa3 100644
--- a/drivers/media/v4l2-core/v4l2-event.c
+++ b/drivers/media/v4l2-core/v4l2-event.c
@@ -197,6 +197,22 @@ int v4l2_event_pending(struct v4l2_fh *fh)
 }
 EXPORT_SYMBOL_GPL(v4l2_event_pending);
 
+static void __v4l2_event_unsubscribe(struct v4l2_subscribed_event *sev)
+{
+	struct v4l2_fh *fh = sev->fh;
+	unsigned int i;
+
+	lockdep_assert_held(&fh->subscribe_lock);
+	assert_spin_locked(&fh->vdev->fh_lock);
+
+	/* Remove any pending events for this subscription */
+	for (i = 0; i < sev->in_use; i++) {
+		list_del(&sev->events[sev_pos(sev, i)].list);
+		fh->navailable--;
+	}
+	list_del(&sev->list);
+}
+
 int v4l2_event_subscribe(struct v4l2_fh *fh,
 			 const struct v4l2_event_subscription *sub, unsigned elems,
 			 const struct v4l2_subscribed_event_ops *ops)
@@ -228,27 +244,23 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
 
 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
 	found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+	if (!found_ev)
+		list_add(&sev->list, &fh->subscribed);
 	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
 
 	if (found_ev) {
 		/* Already listening */
 		kfree(sev);
-		goto out_unlock;
-	}
-
-	if (sev->ops && sev->ops->add) {
+	} else if (sev->ops && sev->ops->add) {
 		ret = sev->ops->add(sev, elems);
 		if (ret) {
+			spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+			__v4l2_event_unsubscribe(sev);
+			spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
 			kfree(sev);
-			goto out_unlock;
 		}
 	}
 
-	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
-	list_add(&sev->list, &fh->subscribed);
-	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
-
-out_unlock:
 	mutex_unlock(&fh->subscribe_lock);
 
 	return ret;
@@ -283,7 +295,6 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
 {
 	struct v4l2_subscribed_event *sev;
 	unsigned long flags;
-	int i;
 
 	if (sub->type == V4L2_EVENT_ALL) {
 		v4l2_event_unsubscribe_all(fh);
@@ -295,14 +306,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
 
 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
 
 	sev = v4l2_event_subscribed(fh, sub->type, sub->id);
-	if (sev != NULL) {
-		/* Remove any pending events for this subscription */
-		for (i = 0; i < sev->in_use; i++) {
-			list_del(&sev->events[sev_pos(sev, i)].list);
-			fh->navailable--;
-		}
-		list_del(&sev->list);
-	}
+	if (sev != NULL)
+		__v4l2_event_unsubscribe(sev);
 
 	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);