From patchwork Tue Jul 21 21:31:14 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11676787
From: Ralph Campbell <rcampbell@nvidia.com>
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe, Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v3 0/5] mm/migrate: avoid device private invalidations
Date: Tue, 21 Jul 2020 14:31:14 -0700
Message-ID: <20200721213119.32344-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1

The goal of this series is to avoid device private memory TLB invalidations when migrating a range of addresses from system memory to device private memory and some of those pages have already been migrated. The approach is to introduce a new mmu notifier invalidation event type and use it in the device driver to skip the invalidation callbacks triggered by migrate_vma_setup(). The device driver is then expected to handle device MMU invalidations itself as part of the migrate_vma_setup(), migrate_vma_pages(), migrate_vma_finalize() sequence (see the sketch after the change log below). Note that this is opt-in: a device driver can simply invalidate its MMU in the mmu notifier callback and not handle MMU invalidations in the migration sequence.

This series is based on Jason Gunthorpe's HMM tree (linux-5.8.0-rc4).

Also, this replaces the need for the following two patches I sent:
("mm: fix migrate_vma_setup() src_owner and normal pages")
  https://lore.kernel.org/linux-mm/20200622222008.9971-1-rcampbell@nvidia.com
("nouveau: fix mixed normal and device private page migration")
  https://lore.kernel.org/lkml/20200622233854.10889-3-rcampbell@nvidia.com

Bharata Rao, let me know if I can add your reviewed-by back since I made a fair number of changes to this version of the series.

Changes in v3:
Changed the direction field "dir" to a "flags" field and renamed src_owner to pgmap_owner.
Fixed a locking issue in nouveau for the migration invalidation.
Added an HMM selftest case to exercise the HMM test driver invalidation changes.
Removed the reviewed-by from Bharata B Rao since this version is moderately changed.

Changes in v2:
Rebased to Jason Gunthorpe's HMM tree.
Added reviewed-by from Bharata B Rao.
Renamed the mmu_notifier_range::data field to migrate_pgmap_owner as suggested by Jason Gunthorpe.
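To make the opt-in concrete, here is a rough, untested sketch of what a driver using this series might look like. It is not taken from the patches themselves: the field and enum names (flags, pgmap_owner, MIGRATE_VMA_SELECT_SYSTEM, MMU_NOTIFY_MIGRATE, migrate_pgmap_owner) follow the descriptions in this cover letter and the patch titles, and everything prefixed my_* is a hypothetical placeholder rather than a real driver symbol.

#include <linux/migrate.h>
#include <linux/mmu_notifier.h>

/* Hypothetical token identifying this driver's device private pages. */
static int my_pgmap_owner;

/* Driver-specific device TLB flush; placeholder prototype only. */
static void my_invalidate_device_tlb(unsigned long start, unsigned long end);

/* Caller side: tag the migration so the new notifier event can be filtered. */
static int my_migrate_to_device(struct vm_area_struct *vma,
				unsigned long start, unsigned long end)
{
	/* Assumes a small range; real drivers size these to the range. */
	unsigned long src_pfns[64] = { 0 };
	unsigned long dst_pfns[64] = { 0 };
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src_pfns,
		.dst		= dst_pfns,
		.pgmap_owner	= &my_pgmap_owner,	/* was src_owner in v2 */
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,	/* was "dir" in v2 */
	};
	int ret;

	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;
	/* Allocate device pages, fill args.dst, copy the data, then: */
	migrate_vma_pages(&args);
	/* Update the device MMU here instead of in the notifier callback. */
	migrate_vma_finalize(&args);
	return 0;
}

/*
 * Notifier side: skip the device TLB invalidation for migrations this
 * driver started itself; every other event is handled as before.
 */
static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == &my_pgmap_owner)
		return 0;	/* opt-in: handled in the migrate sequence above */

	my_invalidate_device_tlb(range->start, range->end);
	return 0;
}

The key point is the early return in the notifier callback: without the new MMU_NOTIFY_MIGRATE event and the migrate_pgmap_owner field, a driver could not tell its own migration-triggered invalidations apart from unrelated ones and would have to invalidate its MMU for both.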
Ralph Campbell (5):
  nouveau: fix storing invalid ptes
  mm/migrate: add a flags parameter to migrate_vma
  mm/notifier: add migration invalidation type
  nouveau/svm: use the new migration invalidation
  mm/hmm/test: use the new migration invalidation

 arch/powerpc/kvm/book3s_hv_uvmem.c             |  4 ++-
 drivers/gpu/drm/nouveau/nouveau_dmem.c         | 19 ++++++++---
 drivers/gpu/drm/nouveau/nouveau_svm.c          | 21 +++++-------
 drivers/gpu/drm/nouveau/nouveau_svm.h          | 13 ++++++-
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c     | 13 ++++---
 include/linux/migrate.h                        | 16 ++++++---
 include/linux/mmu_notifier.h                   |  7 ++++
 lib/test_hmm.c                                 | 34 +++++++++++--------
 mm/migrate.c                                   | 14 ++++++--
 tools/testing/selftests/vm/hmm-tests.c         | 18 +++++++---
 10 files changed, 112 insertions(+), 47 deletions(-)