From patchwork Fri Sep 29 07:11:13 2017
X-Patchwork-Submitter: Gerd Hoffmann
X-Patchwork-Id: 9977153
Message-ID: <1506669073.6902.11.camel@redhat.com>
From: Gerd Hoffmann
To: "Zhang, Tina", "zhenyuw@linux.intel.com", "Wang, Zhi A", "Tian, Kevin"
Cc: Daniel Vetter, "intel-gfx@lists.freedesktop.org",
 "intel-gvt-dev@lists.freedesktop.org", "Lv, Zhiyuan"
Date: Fri, 29 Sep 2017 09:11:13 +0200
In-Reply-To: <237F54289DF84E4997F34151298ABEBC7C5E7169@SHSMSX101.ccr.corp.intel.com>
References: <1503051696-5158-1-git-send-email-tina.zhang@intel.com>
 <1503051696-5158-6-git-send-email-tina.zhang@intel.com>
 <1506409923.32072.9.camel@redhat.com>
 <237F54289DF84E4997F34151298ABEBC7C5E5ADC@SHSMSX101.ccr.corp.intel.com>
 <1506507064.25036.5.camel@redhat.com>
 <237F54289DF84E4997F34151298ABEBC7C5E7169@SHSMSX101.ccr.corp.intel.com>
Subject: Re: [Intel-gfx] [PATCH v14 5/7] vfio: ABI for mdev display dma-buf
 operation

  Hi,

> The reason why I want to propose the close IOCTL is because the
> current lock (fb_obj_list_lock)
> cannot sync the intel_vgpu_fb_info releasing and reusing.
> You see, the intel_vgpu_fb_info reusing and releasing are in
> different threads. There is a case where intel_vgpu_find_dmabuf can
> return an intel_vgpu_fb_obj while that intel_vgpu_fb_obj is on the
> way to be released. That's the problem.

Oh, right.  But that race is fixable.  We need to move the locks one
level up, so we don't only cover the list operations (add/lookup/delete)
but also the kref_{get,put} operations for the list elements.

Patch against my tree, only build-tested so far.

cheers,
  Gerd

From 3e8c30a857d98d36357e8d9bb04b7ccb72264543 Mon Sep 17 00:00:00 2001
From: Gerd Hoffmann
Date: Fri, 29 Sep 2017 08:59:34 +0200
Subject: [PATCH] fix locking

---
 drivers/gpu/drm/i915/gvt/dmabuf.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
index 2fb3247eff..06ff7bb04e 100644
--- a/drivers/gpu/drm/i915/gvt/dmabuf.c
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -84,24 +84,25 @@ static void intel_vgpu_fb_obj_release(struct kref *kref)
 {
 	struct intel_vgpu_fb_obj *fb_obj =
 		container_of(kref, struct intel_vgpu_fb_obj, kref);
-	struct intel_vgpu *vgpu;
 
-	vgpu = fb_obj->vgpu;
-	mutex_lock(&vgpu->fb_obj_list_lock);
 	list_del(&fb_obj->list);
-	mutex_unlock(&vgpu->fb_obj_list_lock);
 	kfree(fb_obj);
 }
 
 static void intel_vgpu_gem_release(struct drm_i915_gem_object *obj)
 {
+	struct intel_vgpu *vgpu;
+
 	if (WARN_ON(!obj->gvt || !obj->gvt->vgpu)) {
 		gvt_err("gvt info is invalid\n");
 		return;
 	}
 
-	intel_gvt_hypervisor_put_vfio_device(obj->gvt->vgpu);
+	vgpu = obj->gvt->vgpu;
+	intel_gvt_hypervisor_put_vfio_device(vgpu);
+	mutex_lock(&vgpu->fb_obj_list_lock);
 	kref_put(&obj->gvt->kref, intel_vgpu_fb_obj_release);
+	mutex_unlock(&vgpu->fb_obj_list_lock);
 	obj->gvt = NULL;
 }
 
@@ -239,7 +240,6 @@ intel_vgpu_pick_exposed_dmabuf(struct intel_vgpu *vgpu,
 
 	struct list_head *pos;
 	struct intel_vgpu_fb_obj *fb_obj;
 
-	mutex_lock(&vgpu->fb_obj_list_lock);
 	list_for_each(pos, &vgpu->fb_obj_list_head) {
 		fb_obj = container_of(pos, struct intel_vgpu_fb_obj, list);
@@ -251,11 +251,9 @@ intel_vgpu_pick_exposed_dmabuf(struct intel_vgpu *vgpu,
 		    (fb_obj->fb.width == latest_info->width) &&
 		    (fb_obj->fb.height == latest_info->height) &&
 		    (fb_obj->fb.stride == latest_info->stride)) {
-			mutex_unlock(&vgpu->fb_obj_list_lock);
 			return fb_obj;
 		}
 	}
-	mutex_unlock(&vgpu->fb_obj_list_lock);
 	return NULL;
 }
 
@@ -265,16 +263,13 @@ intel_vgpu_find_dmabuf(struct intel_vgpu *vgpu, u32 dmabuf_id)
 
 	struct list_head *pos;
 	struct intel_vgpu_fb_obj *fb_obj;
 
-	mutex_lock(&vgpu->fb_obj_list_lock);
 	list_for_each(pos, &vgpu->fb_obj_list_head) {
 		fb_obj = container_of(pos, struct intel_vgpu_fb_obj, list);
 		if (fb_obj->dmabuf_id == dmabuf_id) {
-			mutex_unlock(&vgpu->fb_obj_list_lock);
 			return fb_obj;
 		}
 	}
-	mutex_unlock(&vgpu->fb_obj_list_lock);
 	return NULL;
 }
 
@@ -327,8 +322,10 @@ int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args)
 		return ret;
 
 	/* If exists, pick up the exposed dmabuf fd */
+	mutex_lock(&vgpu->fb_obj_list_lock);
 	fb_obj = intel_vgpu_pick_exposed_dmabuf(vgpu, &fb_info);
 	if (fb_obj != NULL) {
+		mutex_unlock(&vgpu->fb_obj_list_lock);
 		update_fb_info(gvt_dmabuf, fb_obj);
 		return 0;
 	}
@@ -345,7 +342,6 @@ int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args)
 	fb_obj->fb = fb_info;
 	fb_obj->dmabuf_id = id++;
 
-	mutex_lock(&vgpu->fb_obj_list_lock);
 	list_add_tail(&fb_obj->list, &vgpu->fb_obj_list_head);
 	mutex_unlock(&vgpu->fb_obj_list_lock);
 	update_fb_info(gvt_dmabuf, fb_obj);
@@ -362,11 +358,15 @@ int intel_vgpu_get_dmabuf(struct intel_vgpu *vgpu, void *args)
 	struct dma_buf *dmabuf;
 	int ret;
 
+	mutex_lock(&vgpu->fb_obj_list_lock);
 	fb_obj = intel_vgpu_find_dmabuf(vgpu, gvt_dmabuf->dmabuf_id);
-	if (NULL == fb_obj)
+	if (NULL == fb_obj) {
+		mutex_unlock(&vgpu->fb_obj_list_lock);
 		return -EINVAL;
+	}
 
 	obj = intel_vgpu_create_gem(dev, fb_obj);
+	mutex_unlock(&vgpu->fb_obj_list_lock);
 	if (obj == NULL) {
 		gvt_vgpu_err("create gvt gem obj failed:%d\n", vgpu->id);
 		return -ENOMEM;
-- 
2.9.3