From patchwork Wed Aug 21 21:50:28 2019
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11108201
From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: DRI Development <dri-devel@lists.freedesktop.org>
Cc: Thomas Hellstrom, Tomeu Vizoso, Intel Graphics Development,
 VMware Graphics, Gerd Hoffmann, Thomas Zimmermann, Alex Deucher,
 Dave Airlie, Christian König, Ben Skeggs
Subject: [PATCH 1/3] dma_resv: prime lockdep annotations
Date: Wed, 21 Aug 2019 23:50:28 +0200
Message-Id: <20190821215030.31660-1-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.23.0.rc1

Full audit of everyone:

- i915, radeon, amdgpu should be clean per their maintainers.
- vram helpers should be fine, they don't do command submission, so
  really no business holding struct_mutex while doing copy_*_user. But
  I haven't checked them all.

- panfrost seems to dma_resv_lock only in panfrost_job_push, which
  looks clean.

- v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl();
  copying from/to userspace happens entirely in v3d_lookup_bos, which
  is outside of the critical section.

- vmwgfx has a bunch of ioctls that do their own copy_*_user:

  - vmw_execbuf_process: First this does some copies in
    vmw_execbuf_cmdbuf() and also in vmw_execbuf_process() itself.
    Then comes the usual ttm reserve/validate sequence, then actual
    submission/fencing, then unreserving, and finally some more
    copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons
    of details, but it all looks safe.

  - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to
    be seen, seems to only create a fence and copy it out.

  - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to be
    found there.

  Summary: vmwgfx seems to be fine too.

- virtio: There's virtio_gpu_execbuffer_ioctl, which does all the
  copying from userspace before even looking up objects through their
  handles, so safe. Plus the getparam/getcaps ioctls, also both safe.

- qxl only has qxl_execbuffer_ioctl, which calls into
  qxl_process_single_command. There's a lovely comment before the
  __copy_from_user_inatomic that the slowpath should be copied from
  i915, but I guess that never happened. Try not to be unlucky and
  get your CS data evicted between when it's written and when the
  kernel tries to read it. The only other copy_from_user is for
  relocs, but those are done before qxl_release_reserve_list(), which
  seems to be the only thing reserving buffers (in the ttm/dma_resv
  sense) in that code. So that looks safe too.

- A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl
  in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks
  this everywhere and needs to be fixed up.
v2: Thomas pointed out that vmwgfx calls dma_resv_init while it
already holds a dma_resv lock of a different object. Christian
mentioned that ttm core does this too for ghost objects. intel-gfx-ci
highlighted that i915 has similar issues.

Unfortunately we can't do this in the usual module init functions,
because kernel threads don't have an ->mm - we have to wait around for
some user thread to do this.

Solution is to spawn a worker (but only once). It's horrible, but it
works.

Cc: Alex Deucher
Cc: Christian König
Cc: Chris Wilson
Cc: Thomas Zimmermann
Cc: Rob Herring
Cc: Tomeu Vizoso
Cc: Eric Anholt
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: Ben Skeggs
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
Signed-off-by: Daniel Vetter
---
 drivers/dma-buf/dma-resv.c | 42 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 42a8f3f11681..29988b1564c1 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,7 @@
 #include <linux/dma-resv.h>
 #include <linux/export.h>
+#include <linux/sched/mm.h>
 
 /**
  * DOC: Reservation Object Overview
 *
@@ -95,6 +96,28 @@ static void dma_resv_list_free(struct dma_resv_list *list)
 	kfree_rcu(list, rcu);
 }
 
+#if IS_ENABLED(CONFIG_LOCKDEP)
+struct lockdep_work {
+	struct work_struct work;
+	struct dma_resv obj;
+	struct mm_struct *mm;
+} lockdep_work;
+
+void lockdep_work_fn(struct work_struct *work)
+{
+	dma_resv_init(&lockdep_work.obj);
+
+	down_read(&lockdep_work.mm->mmap_sem);
+	ww_mutex_lock(&lockdep_work.obj.lock, NULL);
+	fs_reclaim_acquire(GFP_KERNEL);
+	fs_reclaim_release(GFP_KERNEL);
+	ww_mutex_unlock(&lockdep_work.obj.lock);
+	up_read(&lockdep_work.mm->mmap_sem);
+
+	mmput(lockdep_work.mm);
+}
+#endif
+
 /**
  * dma_resv_init - initialize a reservation object
  * @obj: the reservation object
@@ -107,6 +130,25 @@ void dma_resv_init(struct dma_resv *obj)
 			&reservation_seqcount_class);
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
+
+#if IS_ENABLED(CONFIG_LOCKDEP)
+	if (current->mm) {
+		static atomic_t lockdep_primed;
+
+		/*
+		 * This gets called from all kinds of places, launch a worker.
+		 * Usual init sections don't work here because kernel threads
+		 * lack an ->mm.
+		 */
+		if (atomic_cmpxchg(&lockdep_primed, 0, 1) == 0) {
+			INIT_WORK(&lockdep_work.work, lockdep_work_fn);
+			lockdep_work.mm = current->mm;
+			mmget(lockdep_work.mm);
+
+			schedule_work(&lockdep_work.work);
+		}
+	}
+#endif
 }
 EXPORT_SYMBOL(dma_resv_init);