From patchwork Tue Nov 12 20:22:24 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11240063
From: Jason Gunthorpe
To: linux-mm@kvack.org, Jerome Glisse, Ralph Campbell, John Hubbard,
	Felix.Kuehling@amd.com
Cc: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org, Alex Deucher, Ben Skeggs,
	Boris Ostrovsky, Christian König, David Zhou, Dennis Dalessandro,
	Juergen Gross, Mike Marciniszyn, Oleksandr Andrushchenko, Petr Cvek,
	Stefano Stabellini, nouveau@lists.freedesktop.org,
	xen-devel@lists.xenproject.org, Christoph Hellwig, Jason Gunthorpe
Subject: [PATCH v3 07/14] drm/radeon: use mmu_interval_notifier_insert
Date: Tue, 12 Nov 2019 16:22:24 -0400
Message-Id: <20191112202231.3856-8-jgg@ziepe.ca>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca>
References: <20191112202231.3856-1-jgg@ziepe.ca>

From: Jason Gunthorpe

The new API is an exact match for the needs of radeon.

For some reason radeon tries to remove overlapping ranges from the
interval tree, but interval trees (and mmu_interval_notifier_insert())
support overlapping ranges directly. Simply delete all this code.

Since this driver is missing an invalidate_range_end callback, but
still calls get_user_pages(), it cannot be correct against all races.
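To make the overlapping-ranges point concrete, here is a minimal sketch (this is not radeon code; my_bo, my_invalidate, my_mn_ops and my_bo_register are hypothetical names). Each object embeds its own mmu_interval_notifier and registers its range directly; two objects may cover overlapping address ranges without any driver-side merging, because the interval tree inside the core handles the overlap:

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>

/* Hypothetical per-BO object; radeon embeds the notifier in struct radeon_bo. */
struct my_bo {
	struct mmu_interval_notifier notifier;
	/* ... buffer object state ... */
};

static bool my_invalidate(struct mmu_interval_notifier *mn,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	/* Recover the enclosing object so it can be idled and unmapped. */
	struct my_bo *bo = container_of(mn, struct my_bo, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	/*
	 * A driver using the read_begin()/read_retry() scheme would also call
	 * mmu_interval_set_seq(mn, cur_seq) here under its driver lock.
	 */
	/* ... wait for 'bo' to be idle and tear down its mapping ... */
	(void)bo;
	return true;
}

static const struct mmu_interval_notifier_ops my_mn_ops = {
	.invalidate = my_invalidate,
};

/*
 * Each BO registers its own [addr, addr + size) range.  Two BOs may cover
 * overlapping ranges; no driver-side merging or removal of overlaps is needed.
 */
static int my_bo_register(struct my_bo *bo, unsigned long addr,
			  unsigned long size)
{
	return mmu_interval_notifier_insert(&bo->notifier, current->mm,
					    addr, size, &my_mn_ops);
}

radeon_mn_register() in the patch below follows the same shape, with the notifier embedded in struct radeon_bo.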
Reviewed-by: Christian König
Signed-off-by: Jason Gunthorpe
---
 drivers/gpu/drm/radeon/radeon.h    |   9 +-
 drivers/gpu/drm/radeon/radeon_mn.c | 218 ++++++-----------------------
 2 files changed, 51 insertions(+), 176 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index d59b004f669583..30e32adc1fc666 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -68,6 +68,10 @@
 #include
 #include
 
+#ifdef CONFIG_MMU_NOTIFIER
+#include <linux/mmu_notifier.h>
+#endif
+
 #include
 #include
 #include
@@ -509,8 +513,9 @@ struct radeon_bo {
 	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
-	struct radeon_mn		*mn;
-	struct list_head		mn_list;
+#ifdef CONFIG_MMU_NOTIFIER
+	struct mmu_interval_notifier	notifier;
+#endif
 };
 
 #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, tbo.base)
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index dbab9a3a969b9e..f93829f08a4dc1 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -36,131 +36,51 @@
 
 #include "radeon.h"
 
-struct radeon_mn {
-	struct mmu_notifier	mn;
-
-	/* objects protected by lock */
-	struct mutex		lock;
-	struct rb_root_cached	objects;
-};
-
-struct radeon_mn_node {
-	struct interval_tree_node	it;
-	struct list_head		bos;
-};
-
 /**
- * radeon_mn_invalidate_range_start - callback to notify about mm change
+ * radeon_mn_invalidate - callback to notify about mm change
  *
  * @mn: our notifier
- * @mn: the mm this callback is about
- * @start: start of updated range
- * @end: end of updated range
+ * @range: the VMA under invalidation
  *
  * We block for all BOs between start and end to be idle and
  * unmap them by move them into system domain again.
  */
-static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn,
-		const struct mmu_notifier_range *range)
+static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
+				 const struct mmu_notifier_range *range,
+				 unsigned long cur_seq)
 {
-	struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn);
+	struct radeon_bo *bo = container_of(mn, struct radeon_bo, notifier);
 	struct ttm_operation_ctx ctx = { false, false };
-	struct interval_tree_node *it;
-	unsigned long end;
-	int ret = 0;
-
-	/* notification is exclusive, but interval is inclusive */
-	end = range->end - 1;
-
-	/* TODO we should be able to split locking for interval tree and
-	 * the tear down.
-	 */
-	if (mmu_notifier_range_blockable(range))
-		mutex_lock(&rmn->lock);
-	else if (!mutex_trylock(&rmn->lock))
-		return -EAGAIN;
-
-	it = interval_tree_iter_first(&rmn->objects, range->start, end);
-	while (it) {
-		struct radeon_mn_node *node;
-		struct radeon_bo *bo;
-		long r;
-
-		if (!mmu_notifier_range_blockable(range)) {
-			ret = -EAGAIN;
-			goto out_unlock;
-		}
-
-		node = container_of(it, struct radeon_mn_node, it);
-		it = interval_tree_iter_next(it, range->start, end);
+	long r;
 
-		list_for_each_entry(bo, &node->bos, mn_list) {
+	if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound)
+		return true;
 
-			if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound)
-				continue;
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-			r = radeon_bo_reserve(bo, true);
-			if (r) {
-				DRM_ERROR("(%ld) failed to reserve user bo\n", r);
-				continue;
-			}
-
-			r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-				true, false, MAX_SCHEDULE_TIMEOUT);
-			if (r <= 0)
-				DRM_ERROR("(%ld) failed to wait for user bo\n", r);
-
-			radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
-			r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
-			if (r)
-				DRM_ERROR("(%ld) failed to validate user bo\n", r);
-
-			radeon_bo_unreserve(bo);
-		}
+	r = radeon_bo_reserve(bo, true);
+	if (r) {
+		DRM_ERROR("(%ld) failed to reserve user bo\n", r);
+		return true;
 	}
-
-out_unlock:
-	mutex_unlock(&rmn->lock);
-
-	return ret;
-}
-
-static void radeon_mn_release(struct mmu_notifier *mn, struct mm_struct *mm)
-{
-	struct mmu_notifier_range range = {
-		.mm = mm,
-		.start = 0,
-		.end = ULONG_MAX,
-		.flags = 0,
-		.event = MMU_NOTIFY_UNMAP,
-	};
-
-	radeon_mn_invalidate_range_start(mn, &range);
-}
-
-static struct mmu_notifier *radeon_mn_alloc_notifier(struct mm_struct *mm)
-{
-	struct radeon_mn *rmn;
 
-	rmn = kzalloc(sizeof(*rmn), GFP_KERNEL);
-	if (!rmn)
-		return ERR_PTR(-ENOMEM);
+	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
+				      MAX_SCHEDULE_TIMEOUT);
+	if (r <= 0)
+		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
 
-	mutex_init(&rmn->lock);
-	rmn->objects = RB_ROOT_CACHED;
-	return &rmn->mn;
-}
+	radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
+	if (r)
+		DRM_ERROR("(%ld) failed to validate user bo\n", r);
 
-static void radeon_mn_free_notifier(struct mmu_notifier *mn)
-{
-	kfree(container_of(mn, struct radeon_mn, mn));
+	radeon_bo_unreserve(bo);
+	return true;
 }
 
-static const struct mmu_notifier_ops radeon_mn_ops = {
-	.release = radeon_mn_release,
-	.invalidate_range_start = radeon_mn_invalidate_range_start,
-	.alloc_notifier = radeon_mn_alloc_notifier,
-	.free_notifier = radeon_mn_free_notifier,
+static const struct mmu_interval_notifier_ops radeon_mn_ops = {
+	.invalidate = radeon_mn_invalidate,
 };
 
 /**
@@ -174,51 +94,20 @@ static const struct mmu_notifier_ops radeon_mn_ops = {
  */
 int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
 {
-	unsigned long end = addr + radeon_bo_size(bo) - 1;
-	struct mmu_notifier *mn;
-	struct radeon_mn *rmn;
-	struct radeon_mn_node *node = NULL;
-	struct list_head bos;
-	struct interval_tree_node *it;
-
-	mn = mmu_notifier_get(&radeon_mn_ops, current->mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-	rmn = container_of(mn, struct radeon_mn, mn);
-
-	INIT_LIST_HEAD(&bos);
-
-	mutex_lock(&rmn->lock);
-
-	while ((it = interval_tree_iter_first(&rmn->objects, addr, end))) {
-		kfree(node);
-		node = container_of(it, struct radeon_mn_node, it);
-		interval_tree_remove(&node->it, &rmn->objects);
-		addr = min(it->start, addr);
-		end = max(it->last, end);
-		list_splice(&node->bos, &bos);
-	}
-
-	if (!node) {
-		node = kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL);
-		if (!node) {
-			mutex_unlock(&rmn->lock);
-			return -ENOMEM;
-		}
-	}
-
-	bo->mn = rmn;
-
-	node->it.start = addr;
-	node->it.last = end;
-	INIT_LIST_HEAD(&node->bos);
-	list_splice(&bos, &node->bos);
-	list_add(&bo->mn_list, &node->bos);
-
-	interval_tree_insert(&node->it, &rmn->objects);
-
-	mutex_unlock(&rmn->lock);
-
+	int ret;
+
+	ret = mmu_interval_notifier_insert(&bo->notifier, current->mm, addr,
+					   radeon_bo_size(bo), &radeon_mn_ops);
+	if (ret)
+		return ret;
+
+	/*
+	 * FIXME: radeon appears to allow get_user_pages to run during
+	 * invalidate_range_start/end, which is not a safe way to read the
+	 * PTEs. It should use the mmu_interval_read_begin() scheme around the
+	 * get_user_pages to ensure that the PTEs are read properly
+	 */
+	mmu_interval_read_begin(&bo->notifier);
 	return 0;
 }
 
@@ -231,27 +120,8 @@ int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
  */
 void radeon_mn_unregister(struct radeon_bo *bo)
 {
-	struct radeon_mn *rmn = bo->mn;
-	struct list_head *head;
-
-	if (!rmn)
+	if (!bo->notifier.mm)
 		return;
-
-	mutex_lock(&rmn->lock);
-	/* save the next list entry for later */
-	head = bo->mn_list.next;
-
-	list_del(&bo->mn_list);
-
-	if (list_empty(head)) {
-		struct radeon_mn_node *node;
-
-		node = container_of(head, struct radeon_mn_node, bos);
-		interval_tree_remove(&node->it, &rmn->objects);
-		kfree(node);
-	}
-
-	mutex_unlock(&rmn->lock);
-
-	mmu_notifier_put(&rmn->mn);
-	bo->mn = NULL;
+	mmu_interval_notifier_remove(&bo->notifier);
+	bo->notifier.mm = NULL;
 }
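
As a closing note, the FIXME added in radeon_mn_register() refers to the sequence-count pattern the mmu_interval_notifier API expects around get_user_pages(). The sketch below is a rough, hypothetical illustration of that pattern and is not part of this patch; my_fill_user_pages and driver_lock are made-up names, and the locking shown assumes the mmap_sem and get_user_pages() forms of this kernel generation:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/sched.h>

/*
 * Hypothetical helper showing the mmu_interval_read_begin() /
 * mmu_interval_read_retry() loop around get_user_pages().  'driver_lock'
 * must be the same lock the ->invalidate callback holds when it calls
 * mmu_interval_set_seq().
 */
static int my_fill_user_pages(struct mmu_interval_notifier *notifier,
			      struct mutex *driver_lock,
			      unsigned long start, unsigned long npages,
			      struct page **pages)
{
	unsigned long seq;
	long pinned;

again:
	seq = mmu_interval_read_begin(notifier);

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(start, npages, FOLL_WRITE, pages, NULL);
	up_read(&current->mm->mmap_sem);
	if (pinned != npages)
		return pinned < 0 ? pinned : -EFAULT;

	mutex_lock(driver_lock);
	if (mmu_interval_read_retry(notifier, seq)) {
		/* An invalidation ran concurrently; drop the pages and retry. */
		mutex_unlock(driver_lock);
		while (pinned--)
			put_page(pages[pinned]);
		goto again;
	}

	/* ... program the hardware with 'pages' while holding driver_lock ... */
	mutex_unlock(driver_lock);
	return 0;
}

The essential design point is that mmu_interval_read_retry() is checked under the same driver lock the ->invalidate callback takes before calling mmu_interval_set_seq(), so a racing invalidation either finishes before the pages are programmed into the hardware or forces the loop to start over.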