From patchwork Tue Nov 12 20:22:25 2019
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11240087
From: Jason Gunthorpe
To: linux-mm@kvack.org, Jerome Glisse, Ralph Campbell, John Hubbard,
 Felix.Kuehling@amd.com
Cc: Juergen Gross, David Zhou, Mike Marciniszyn, Stefano Stabellini,
 Oleksandr Andrushchenko, linux-rdma@vger.kernel.org,
 nouveau@lists.freedesktop.org, Dennis Dalessandro,
 amd-gfx@lists.freedesktop.org, Christoph Hellwig, Jason Gunthorpe,
 dri-devel@lists.freedesktop.org, Alex Deucher,
 xen-devel@lists.xenproject.org, Boris Ostrovsky, Petr Cvek,
 Christian König, Ben Skeggs
Date: Tue, 12 Nov 2019 16:22:25 -0400
Message-Id: <20191112202231.3856-9-jgg@ziepe.ca>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca>
References: <20191112202231.3856-1-jgg@ziepe.ca>
Subject: [Xen-devel] [PATCH v3 08/14] nouveau: use mmu_notifier directly for
 invalidate_range_start

From: Jason Gunthorpe

There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror, just register a normal notifier directly.

Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 95 ++++++++++++++++++---------
 1 file changed, 63 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 668d4bd0c118f1..577f8811925a59 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -88,6 +88,7 @@ nouveau_ivmm_find(struct nouveau_svm *svm, u64 inst)
 }
 
 struct nouveau_svmm {
+	struct mmu_notifier notifier;
 	struct nouveau_vmm *vmm;
 	struct {
 		unsigned long start;
@@ -96,7 +97,6 @@ struct nouveau_svmm {
 
 	struct mutex mutex;
 
-	struct mm_struct *mm;
 	struct hmm_mirror mirror;
 };
 
@@ -251,10 +251,11 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
 }
 
 static int
-nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
-			const struct mmu_notifier_range *update)
+nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
+			const struct mmu_notifier_range *update)
 {
-	struct nouveau_svmm *svmm = container_of(mirror, typeof(*svmm), mirror);
+	struct nouveau_svmm *svmm =
+		container_of(mn, struct nouveau_svmm, notifier);
 	unsigned long start = update->start;
 	unsigned long limit = update->end;
 
@@ -264,6 +265,9 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	SVMM_DBG(svmm, "invalidate %016lx-%016lx", start, limit);
 
 	mutex_lock(&svmm->mutex);
+	if (unlikely(!svmm->vmm))
+		goto out;
+
 	if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) {
 		if (start < svmm->unmanaged.start) {
 			nouveau_svmm_invalidate(svmm, start,
@@ -273,19 +277,31 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	}
 
 	nouveau_svmm_invalidate(svmm, start, limit);
+
+out:
 	mutex_unlock(&svmm->mutex);
 	return 0;
 }
 
-static void
-nouveau_svmm_release(struct hmm_mirror *mirror)
+static void nouveau_svmm_free_notifier(struct mmu_notifier *mn)
+{
+	kfree(container_of(mn, struct nouveau_svmm, notifier));
+}
+
+static const struct mmu_notifier_ops nouveau_mn_ops = {
+	.invalidate_range_start = nouveau_svmm_invalidate_range_start,
+	.free_notifier = nouveau_svmm_free_notifier,
+};
+
+static int
+nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
+			const struct mmu_notifier_range *update)
 {
+	return 0;
 }
 
-static const struct hmm_mirror_ops
-nouveau_svmm = {
+static const struct hmm_mirror_ops nouveau_svmm = {
 	.sync_cpu_device_pagetables = nouveau_svmm_sync_cpu_device_pagetables,
-	.release = nouveau_svmm_release,
 };
 
 void
@@ -294,7 +310,10 @@ nouveau_svmm_fini(struct nouveau_svmm **psvmm)
 	struct nouveau_svmm *svmm = *psvmm;
 	if (svmm) {
 		hmm_mirror_unregister(&svmm->mirror);
-		kfree(*psvmm);
+		mutex_lock(&svmm->mutex);
+		svmm->vmm = NULL;
+		mutex_unlock(&svmm->mutex);
+		mmu_notifier_put(&svmm->notifier);
 		*psvmm = NULL;
 	}
 }
@@ -320,7 +339,7 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 	mutex_lock(&cli->mutex);
 	if (cli->svm.cli) {
 		ret = -EBUSY;
-		goto done;
+		goto out_free;
 	}
 
 	/* Allocate a new GPU VMM that can support SVM (managed by the
@@ -335,24 +354,33 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 				.fault_replay = true,
 			    }, sizeof(struct gp100_vmm_v0),
 			    &cli->svm.vmm);
 	if (ret)
-		goto done;
+		goto out_free;
 
-	/* Enable HMM mirroring of CPU address-space to VMM. */
-	svmm->mm = get_task_mm(current);
-	down_write(&svmm->mm->mmap_sem);
+	down_write(&current->mm->mmap_sem);
 	svmm->mirror.ops = &nouveau_svmm;
-	ret = hmm_mirror_register(&svmm->mirror, svmm->mm);
-	if (ret == 0) {
-		cli->svm.svmm = svmm;
-		cli->svm.cli = cli;
-	}
-	up_write(&svmm->mm->mmap_sem);
-	mmput(svmm->mm);
+	ret = hmm_mirror_register(&svmm->mirror, current->mm);
+	if (ret)
+		goto out_mm_unlock;
 
-done:
+	svmm->notifier.ops = &nouveau_mn_ops;
+	ret = __mmu_notifier_register(&svmm->notifier, current->mm);
 	if (ret)
-		nouveau_svmm_fini(&svmm);
+		goto out_hmm_unregister;
+	/* Note, ownership of svmm transfers to mmu_notifier */
+
+	cli->svm.svmm = svmm;
+	cli->svm.cli = cli;
+	up_write(&current->mm->mmap_sem);
 	mutex_unlock(&cli->mutex);
+	return 0;
+
+out_hmm_unregister:
+	hmm_mirror_unregister(&svmm->mirror);
+out_mm_unlock:
+	up_write(&current->mm->mmap_sem);
+out_free:
+	mutex_unlock(&cli->mutex);
+	kfree(svmm);
 	return ret;
 }
@@ -494,12 +522,12 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 
 	ret = hmm_range_register(range, &svmm->mirror);
 	if (ret) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return (int)ret;
 	}
 
 	if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return -EBUSY;
 	}
@@ -507,7 +535,7 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 	if (ret <= 0) {
 		if (ret == 0)
 			ret = -EBUSY;
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		hmm_range_unregister(range);
 		return ret;
 	}
@@ -587,12 +615,15 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	args.i.p.version = 0;
 
 	for (fi = 0; fn = fi + 1, fi < buffer->fault_nr; fi = fn) {
+		struct mm_struct *mm;
+
 		/* Cancel any faults from non-SVM channels. */
 		if (!(svmm = buffer->fault[fi]->svmm)) {
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
 		SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr);
+		mm = svmm->notifier.mm;
 
 		/* We try and group handling of faults within a small
 		 * window into a single update.
@@ -609,11 +640,11 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		/* Intersect fault window with the CPU VMA, cancelling
 		 * the fault if the address is invalid.
 		 */
-		down_read(&svmm->mm->mmap_sem);
-		vma = find_vma_intersection(svmm->mm, start, limit);
+		down_read(&mm->mmap_sem);
+		vma = find_vma_intersection(mm, start, limit);
 		if (!vma) {
 			SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -623,7 +654,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		if (buffer->fault[fi]->addr != start) {
 			SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
 
@@ -704,7 +735,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 						NULL);
 		svmm->vmm->vmm.object.client->super = false;
 		mutex_unlock(&svmm->mutex);
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 	}
 
 	/* Cancel any faults in the window whose pages didn't manage
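For readers less familiar with the mmu_notifier interface the patch switches
to, below is a minimal standalone sketch (not part of the patch) of the same
registration pattern nouveau_svmm_init()/nouveau_svmm_fini() use after this
change: embed a struct mmu_notifier in the driver object, register it with
__mmu_notifier_register() while holding mmap_sem for write, and tear it down
with mmu_notifier_put(), which defers freeing to ops->free_notifier(). The
my_ctx/my_mn_ops names are hypothetical; the mmu_notifier calls are the same
in-kernel API the patch itself uses.

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct my_ctx {
	struct mmu_notifier notifier;	/* embedded, like svmm->notifier */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_ctx *ctx = container_of(mn, struct my_ctx, notifier);

	/* A real driver would shoot down its device mappings here. */
	pr_debug("ctx %p invalidate %lx-%lx\n", ctx, range->start, range->end);
	return 0;
}

static void my_free_notifier(struct mmu_notifier *mn)
{
	/* Called after the last mmu_notifier_put(), once it is safe to free. */
	kfree(container_of(mn, struct my_ctx, notifier));
}

static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range_start = my_invalidate_range_start,
	.free_notifier = my_free_notifier,
};

static struct my_ctx *my_ctx_create(void)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	int ret;

	if (!ctx)
		return ERR_PTR(-ENOMEM);

	ctx->notifier.ops = &my_mn_ops;
	/* __mmu_notifier_register() requires mmap_sem held for write. */
	down_write(&current->mm->mmap_sem);
	ret = __mmu_notifier_register(&ctx->notifier, current->mm);
	up_write(&current->mm->mmap_sem);
	if (ret) {
		kfree(ctx);
		return ERR_PTR(ret);
	}
	/* Ownership now rests with the notifier; drop with mmu_notifier_put(). */
	return ctx;
}

This is also what the "/* Note, ownership of svmm transfers to mmu_notifier */"
comment in the patch refers to: once registered, the svmm is freed from
nouveau_svmm_free_notifier() via mmu_notifier_put() rather than kfree()'d
directly in nouveau_svmm_fini(), and svmm->notifier.mm replaces the old
get_task_mm()/mmput() handling of the mm pointer.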